
CONTENTS


Sl.
No.
Paper ID Paper Title

Page No.


1 109
ROBOTIC WALKING STICK WITH VOICE AND TRAFFIC
SIGNAL ANALYSER-FOR THE BLIND
Ushus.S.Kumar
1


2 120
DIGITAL AUTHENTICATION AND INSPECTION SYSTEM (DAIS)
FOR AUTOMOBILES
Nevil Raju Philip, Tibi Tom Abraham
5


3 121
IMPLEMENTATION OF BLOWFISH ENCRYPTION
ALGORITHM IN FPGA
Meghna Khot, Dr. R.V. Kshirsagar
9


4 122
COMBINED WAVELET BASED UWB COMMUNICATION
WAVEFORM
Asmita R. Padole, Prof.A.P.Rathkanthiwar

13


5 139
CAPACITY ANALYSIS OF PLC FOR RAILWAY
APPLICATIONS
Anish Francis ,Gheevarghese Titus
18


6 140
OPTIMAL TUNING OF PI CONTROLLER FOR VECTOR
CONTROLLED PMSM MODEL
Deepthi .R .Nair, A.B.Patil
22


7 154
PERFORMANCE EVALUATION OF A QOS MAC
PROTOCOL IN WIRELESS SENSOR NETWORKS
Jeena A Thankachan, B.R.Shyamala Devi
28


8 162
MULTI-PORTED ENHANCED SECURITY
ARCHITECTURE TO PREVENT DDOS ATTACKS USING
PRS ARCHITECTURE
Ramesh R, Pankaj Kumar G
33


9 165
A NEW APPROACH TO NEURAL NETWORK PARALLEL
MODEL REFERENCE ADAPTIVE INTELLIGENT
CONTROLLER
R.Prakash, R.Anita
37


10 178
SIMULATION OF HIGH Q MICROMECHANICAL
RESONATOR USING HFSS
P.Sangeetha, Alka Sawlikar
43


11 182
A NOVEL APPROACH FOR SATELLITE IMAGERY
SYSTEM
John Blesswin A, Rema R, Cyju Elizabeth Varghese,
Greeshma Varghese, Vani K
48



12 184
DESIGN AND IMPLEMENTATION OF ADVANCE
PLANNER USING GPS NAVIGATION FOR MOBILE
COMMUNICATION WITH ANDROID
Sasikumar Gurumurthy, Abdul Gafar H, A.Valarmozhi
53


13 197
HYBRID LOAD BALANCING IN GRID COMPUTING
ENVIRONMENT USING GENETIC ALGORITHM
M.Kalaiselvi, R.Chellamani
57


14 200
KRIDANTA ANALYZER
N. Murali, Dr. R.J. Ramasree
63


15 223
IMPLEMENTATION OF UART DESIGN USING VHDL
Amol B. Dhankar, C. N. Bhoyar
67


16 229
ON DEMAND MULTICAST ROUTING PROTOCOL WITH
LINK STABILITY DATABASE
Sangeetha M.
72


17 238
WSD AND ITS APPLICATION IN INFORMATION
RETRIEVAL: AN OVERVIEW
Anand Prakash Dwivedi, Sanjay K. Dwivedi, Vinay Kumar
Pathak
77


18 239
SCALABLE NETWORKING ALGORITHM IMPROVING
BANDWIDTH UTILIZATION
Eldho Varghese , Dr. M.Roberts Masillamani
85


19 242
COMPARATIVE PERFORMANCE EVALUATION OF
MANET ROUTING
PROTOCOLS BASED ON NODE MOBILITY
Gurleen Kaur Walia
90


20 247
ESTIMATING AND ANALYZING OF EXACT SAR
PARAMETERS IN
MOBILE COMMUNICATION
Jacob Abharam, Vipeesh P
95


21 260
REAL - TIME MONITORING TOUCH SCREEN SYSTEM
Achila T.S
98


22 266
3D MAPPING OF ENVIRONMENT USING 2D LASER
IMAGES
Jagadish Kota, Barjeev Tyagi
101



23 274
DIFFERENT APPROACHES OF SPECTRAL SUBTRACTION
METHOD FOR
ENHANCING THE SPEECH SIGNAL IN NOISY
ENVIRONMENTS
Anuradha R. Fukane, Shashikant L. Sahare
106


24 276
CONTEXT AWARENESS
Anish Shrestha, Saroj Shakya
112


25 278
PERFORMANCE EVALUATION OF SIZE CONSTRAINT
CLUSTERING ALGORITHM
J.Jayadeep, D.JohnAravindhar, M. Roberts Masillamani
117



26 283
MEMS BASED SENSORS FOR BIOMOLECULAR
RECOGNITION
P. Sangeetha , A.Vimala Juliet
123


27 291
MULTIVARIABLE NEURAL NETWORK SYSTEM
IDENTIFICATION
H.M.Reshma Begum, G.Saravanakumar
128


28 295
SOFT HANDOFF AND POWER CONTROL IN WCDMA
Ms Swati Patil, Ms Seema Mishra, Ms Sonali Kathare 133


29 296
CONCEPT- BASED PERSONALIZED WEB SEARCH AND
A HIERARCHICAL APPROACH FOR ITS
ENHANCEMENT
Jilson P. Jose, A. Murugan
138


30 300
IMPLEMENTATION OF EVOLVABLE HARDWARE
(EHW) WITH FAULT TOLERANT COMPUTATION ON
MULTIPLE SENSOR APPLICATION INCLUDING
INTERRUPT DRIVEN ROUTINES ENABLING WIRELESS
THOUGH IEEE 802.15.4 PROTOCOL SUITE
S.P. Anandaraj, Dr. S. Ravi, S.Poornima
143



31 302
SMART PHONE FORENSICS - AN AGENT
BASED APPROACH
Deepa Krishnan, Satheesh Kumar S, A. Arokiaraj Jovith
149



32 307
DATA VISUALIZATION MODEL FOR
SPEECH ARTICULATORS
Dr. A Rathinavelu ,G Yuvaraj
155


33 308
DATA SAFE TRANSMISSION OVER PUBLIC NETWORK
USING INTEGRATED ALGORITHM
SaliniI Dev P V, Mayadevi P A, Prince Kurian
160



34 310
A PROVENANCE OF BLACK HOLE ATTACK ON
MOBILE AD-HOC NETWORKS (MANETS) AD-HOC
DEMAND ROUTING (AODV) PROTOCOL
Mr. Amol V. Zade, Prof. V. K. Shandilya
165



35 312
HUMAN IRIS RECOGNITION IN UNCONSTRAINED
ENVIRONMENTS
Ali Noruzi,Mohammad Ali Azimi Kashani, Mohamod Mahloji
171


36 313
SPOS-H: A SECURE PERVASIVE HUMAN-CENTRIC
OBJECT SEARCH ENGINE
Arun A K, M. Sumathi, R. Annie Uthra
175



37 319
COMPUTER USERS STRESS MONITORING
USING A BIOMEDICAL APPROACH &
CLASSIFICATION USING MATLAB
Arunraj M, Dr. M.Pallikonda Rajasekaran
181



38 326
SEGMENTATION OF HEART BY USING TEXTURE
FEATURES AND SHAPES
Mrs. Shreyasi Watve, Prof. Mrs. R. Sreemathy
186


39 328
IMAGE DE-NOISING WITH EDGE PRESERVATION
USING SOFT COMPUTING TECHNIQUES
Kartik Sau, Amitabha Chanda, Pabitra Karmakar
192



40 336
IMPLEMENTATION OF AN EDGE ORIENTED IMAGE
SCALING PROCESSOR IN SYNOPSYS
USING MODIFIED MULTIPLIER
Angel.P.Mathew, C.Saranya
198



41 351
MODIFIED SQUEEZE BOX FILTER FOR
DESPECKLING ULTRASOUND IMAGES
Blessy Rapheal M , J.A Laxminarayana, V.K Joseph
203



42 354
INTERFERENCE MANAGEMENT OF FEMTOCELLS
Mr.N.Umapathi, S.Sumathi
208


43 358
HAND OFF ALGORITHM FOR FUTURE GENERATION
NETWORKS
S. Nanda Kumar ,Rahul Singh, Sanjeet Singh
213



44 362
COMPARISON OF TRANSPARENT NETWORK AND
TRANSLUCENT NETWORK IN DWDM OPTICAL
NETWORKS
Nivedita G. Gundmi , Soumya A, E.S Shivaleela,
Shrikant S Tangade
220




45 365
VHDL IMPLEMENTATION OF BIT PROCESSES FOR
BLUETOOTH BITSTREAM DATAPATH
G. N. Wazurkar, D. R. Dandekar
226



46 370
AUTOMATION OF MICRO-ARCHITECTURAL
COVERAGE FOR NEXT GENERATION GRAPHICS
PROCESSOR
Sruthi Paturi, R.A.K. Sarvananguru, Singhal Mayank
230



47 373
FUZZY VAULT USING LOCAL SPECTRAL
AND LINE FEATURES OF PALMPRINT
Nisha Sebastian
235


48 374
SQL INJECTION IDENTIFICATION USING BLAH
ALGORITHM
Justy Jameson ,Sherly K. K .
241


49 377
INTERACTION OF STEM AND SPAN TOPOLOGY
MANAGEMENT
SCHEMES FOR IMPROVING ENERGY CONSERVATION
AND LIFETIME IN WIRELESS SENSOR NETWORK
R.Sharath Kumar, A.Jawhar
247



50 384
ADAPTIVE DATA HIDING SCHEME FOR
8-BIT COLOR IMAGES
K.Kannadasan, C. Vinothkumar
252


51 389
SIGNIFICANCE OF ALIVE-SUPERVISION
ALGORITHM IN AUTOSAR SPECIFIC WATCHDOG
MANAGER
Remya Krishna J.S., Anikesh Monot
256



52 390
FAST ASYNCHRONOUS BIT-SERIAL COMMUNICATION
Ms.Abhila R Krishna, Mr.Dharun V.S
261


53 394
FAULT DIAGNOSIS ON PNEUMATIC
ACTUATOR USING NEURO-FUZZY
TECHNIQUE
Kaushik.S, Kannapiran.B, Prasanna.R
265



54 395
EFFECTIVE PATH IDENTIFICATION PROTOCOL FOR
WIRELESS MESH NETWORKS
M.Riyaz Pasha, B.V.Ramana Raju
272


55 396
AN ADAPTIVE SPECTRUM SHARING MECHANISM FOR
MULTI-HOP WIRELESS NETWORKS
Liju Mathew Rajan
275




56 398
SOLVING THE PROTEIN STRUCTURE PREDICTION
PROBLEM THROUGH A GENETIC ALGORITHM WITH
DISCRETE CROSSOVER
G.Sindhu, S.Sudha
280



57 405
PERFORMANCE ANALYSIS OF MAC PROTOCOL WITH
CDMA SCHEDULING AND ADAPTIVE POWER
CONTROL FOR WIRELESS MULTIMEDIA NETWORKS
S.P.V.Subba Rao, Dr.S.Venkata Chalam,
Dr.D.Sreenivasa Rao
285




58 407
A NEW APPROACH PARTICLE SWARM
OPTIMIZATION FOR ECONOMIC LOAD DISPATCH
Anish Ahmad, Nitin Singh
291


59 409
COMPARISON OF REACTIVE ROUTING PROTOCOLS IN
MOBILE AD-HOC NETWORKS
Gurleen Kaur Walia, Charanjit Singh
295



60 410
IMPLEMENTATION OF HAND-HELD OPTO
ELECTRONIC SYSTEM FOR THE ESTIMATION OF
CORROSION OF METALS
M. Narendra Babu, A. Balaji Ganesh
300



61 415
AREA AND ROUTING OPTIMIZATION FOR NETWORK
ON CHIP ARCHITECTURE
Denathayalan R, Thiruvenkadam K
303


62 422
CLASSIFICATION OF TEMPORAL DATABASE USING
TRAIN & TEST APPROACH
S.Nithya Shalini , A.M.Rajeswari
309


63 430
USAGE OF MALWARE ANALYSIS TECHNIQUES AND
DEVELOPMENT OF AUTOMATED MALWARE
CATEGORIZATION SYSTEM
Sai Lohitha.N, Arokiaraj Jovith.A
314



64 437
REAL TIME ENERGY OPTIMIZATION TECHNIQUE FOR
SCRATCH PAD MEMORY
Kavitha. R, Ranjith Balakrishnan, Kamal.S
319


65 439
ENHANCING LIBRARY AUTOMATION IN UBIQUITOUS
ENVIRONMENT
M.Raghini, M.Ishwarya devi, S.Amudha, M.Satheesh kumar
323



66 445
COMPLEMENTARY ACOUSTIC FEATURES FOR VOICE
ACTIVITY DETECTION
Ginsha Elizabeth George ,M.R Mahalakshmi, Leena Mary
329




67 446
A MODIFIED ALGORITHM FOR THRESHOLDING AND
DETECTION OF FACIAL INFORMATION FROM COLOR
IMAGES USING COLOR CENTROID SEGMENTATION
AND CONTOURLET TRANSFORM
Jos Angel George, Jacob A.Abraham, Anna Vinitha Joseph,
Anuja S.Pillai, Sunish Kumar O S
334




68 447
A NOVEL ARCHITECTURE FOR DISTANT PATIENT
MONITORING SYSTEM USING NI LAB VIEW
Sterin Thomas, Stephen varughese, Sumi R., Treasa
Varghese,Sunish kumar O.S
339




69 450
AGENT BASED QUERY OPTIMIZATION KB FOR
HETEROGENEOUS, AUTONOMOUS & DISTRIBUTED
MULTIDATABASE
Shiju Geroge, Shelly Shiju Geroge

344





ROBOTIC WALKING STICK WITH VOICE AND TRAFFIC
SIGNAL ANALYSER-FOR THE BLIND
Ushus S. Kumar, M.E. Applied Electronics, Sri Muthukumaran Institute of Technology, Anna University.
ushusjay@yahoo.com
ABSTRACT
Presently, blind people use a white stick as a tool for directing them when they move or walk. Though the white stick is useful, it cannot give a high guarantee that it will protect blind people from all levels of obstacles. This paper introduces an obstacle-avoidance alternative: an electronic stick that serves as a walking aid for blind people. The project is divided into three modules. 1) The stick helps blind users by means of multiple sensors and voice assistance. The sensor array is placed at the bottom of the walking stick and detects obstacles while the user is walking; the sensors are interfaced with a microcontroller, and when an obstacle is detected a signal is sent to the microcontroller. 2) The traffic signal is analysed through RF communication. An RF transmitter is fixed to the traffic light controller and an RF receiver is interfaced with the microcontroller in the robotic basement unit. Depending on the signal received, the microcontroller sends a command to the APR9600 voice storage IC, which delivers a voice message to the user through a headphone connected to the circuit. 3) An RFID reader module fixed in the robotic unit identifies the location name. The voice messages describe the obstacle locations and guide the blind user to walk safely.

1. INTRODUCTION
The World Health Organization (WHO) has specified that about 1% of the population in developing countries is blind [1]. Currently, the population of Thailand is about 62 million; therefore, there are approximately 620,000 blind people in Thailand.
These blind people need a special tool for directing them when they walk. A very important tool that helps blind people walk is the white stick. This stick is not very different from the sticks that old people use; the difference lies in how it is used: blind people use it to detect obstacles, whereas old people use it to support their body while standing or walking.
Although a walking stick for blind people is useful, it cannot give a high guarantee that it will protect them from all levels of obstacles. Usually, blind people use a white stick for detecting low obstacles, i.e., from waist level down to the feet, but not high obstacles, i.e., from breast to head level. Normally, blind people do not use the white stick to detect obstacles higher than 30 centimeters from the ground. Many researchers have therefore been interested in applying range finding to obstacle avoidance. Reference [2] first proposed the application of an ultrasonic ranging scheme for producing an electronic walking stick for the blind. Reference [3] then proposed the use of an embedded system to develop an improved stick using ultrasonic sensors to help blind people instead of a plain white stick. However, ultrasonic sensors are not an ideal technology because they are comparatively large, heavy, and consume a lot of power. Reference [4] proposed an infrared-sensor-based obstacle avoidance scheme, which can be employed to produce an electronic walking stick. Infrared sensors seem suitable for distance measurement as they have a small package, very low current consumption, low cost, and a variety of output options.

2. Literature Review

2.1 Existing System
An electronic white stick can be produced by selecting an infrared sensor that can detect obstacles within a certain range. The IR value returned by the infrared sensor is sent to a microcontroller and converted to an analog output voltage; this voltage varies in proportion to the distance between the obstacle and the sensor. If the result indicates that objects lie within a range that can be dangerous to the user, the stick vibrates and sounds an alarm to warn the user that there is an obstacle in front.

2.2 Proposed System
Infrared sensors detect all materials within 20 cm with acceptable error, which helps blind people avoid obstacles.
Voice commands delivered through a headphone indicate obstacle locations and guide safe walking.
An RF module helps the blind identify the traffic signal, which is announced to them as a voice message.
RFID technology helps the blind identify the area/location.


3. FUNCTIONAL BLOCK DIAGRAM

3.1 Main circuit diagram
[Block diagram: multiple obstacle-detecting sensors and a signal amplifier feed the microcontroller, which drives the DC motor driver circuit (DC motors for wheel control) and the voice playback device; an RF signal receiver circuit (traffic signal), an RFID antenna and reader connected over serial communication, and a 12 V/5 V DC power rectifier circuit complete the unit.]
3.2 Block Explanation
Transmitter circuit: The traffic signal is controlled by a microcontroller. A timer program switches the three LEDs (red, green, yellow) with set time intervals. These time intervals are controlled by the microcontroller, which also sends the corresponding status to the RF transmitter. The RF transmitter broadcasts the status of the traffic signal as a wireless RF signal.
Receiver circuit: The RF receiver circuit is fixed in the walking stick and interfaced with the microcontroller. Depending on the signal received from the RF receiver, the microcontroller sends a command to the APR9600 voice IC, which announces the status of the signal. Three IR sensor modules are fixed at the bottom of the robot (front, left, right) and interfaced with the microcontroller I/O ports. If any obstacle is detected by an IR sensor, a signal is sent to the microcontroller, the corresponding voice message is played back in the user's headset, and the robotic basement module fixed in the walking stick pulls the user in a safe direction. The RFID reader is interfaced with the microcontroller through serial communication and is used to identify the location name; a passive tag fixed at every location informs the stick of that location. The PIR sensor is used to detect moving objects in front of the person.
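The decision flow described above can be summarised in a short host-side sketch. All sensor readings and output actions below are stand-in stubs with hypothetical names (read_rf_status, apr9600_play, and so on), not taken from the paper; the actual firmware would instead poll the P89V51RD2 I/O ports and drive the APR9600 and L293D lines.

#include <stdio.h>

enum traffic  { SIG_NONE, SIG_RED, SIG_YELLOW, SIG_GREEN };
enum obstacle { OBS_CLEAR, OBS_FRONT, OBS_LEFT, OBS_RIGHT };

/* Stubs standing in for the RF receiver, the three IR modules and the PIR sensor */
static enum traffic  read_rf_status(void)      { return SIG_RED;   }
static enum obstacle read_ir_sensors(void)     { return OBS_FRONT; }
static int           pir_motion_detected(void) { return 0;         }

/* Stubs standing in for the APR9600 voice IC and the L293D motor driver */
static void apr9600_play(const char *msg) { printf("voice: %s\n", msg); }
static void steer_base(enum obstacle o)   { printf("motors: steer away from %d\n", (int)o); }

static void control_step(void)
{
    enum traffic t = read_rf_status();            /* traffic-signal status via RF */
    if (t == SIG_RED)    apr9600_play("signal is red, please stop");
    if (t == SIG_YELLOW) apr9600_play("signal is yellow, please wait");
    if (t == SIG_GREEN)  apr9600_play("signal is green, you may cross");

    enum obstacle o = read_ir_sensors();          /* front/left/right IR modules  */
    if (o != OBS_CLEAR) {
        apr9600_play("obstacle ahead");           /* announce the obstacle        */
        steer_base(o);                            /* base pulls the user to safety */
    }
    if (pir_motion_detected())                    /* PIR: moving object in front  */
        apr9600_play("moving object in front");
}

int main(void) { control_step(); return 0; }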
4. SYSTEM REQUIREMENTS
4.1. Hardware requirements
Microcontroller (P89V51RD2)
Infrared sensors
PIR sensors
DC motor driver circuit
RF transmitter circuit (TWS-434)
RF receiver circuit (RWS-434)
RFID reader module
Voice playback device (APR9600)
4.2. Software requirements
Flash Magic 2.47 downloader
RIDE compiler
OrCAD (circuit design software)
5. HARDWARE SPECIFICATIONS
5.1 Microcontroller (P89V51RD2BN)
The P89V51RD2 is an 80C51 microcontroller with 64 kB of Flash and 1024 bytes of data RAM. A key feature of the P89V51RD2 is its X2 mode option. The design engineer can choose to run the application with the conventional 80C51 clock rate (12 clocks per machine cycle) or select the X2 mode (6 clocks per machine cycle) to achieve twice the
throughput at the same clock frequency. Another way to
benefit from this feature is to keep the same performance by
reducing the clock frequency by half, thus dramatically
reducing the EMI. The P89V51RD2 is also In-Application
Programmable (IAP), allowing the Flash program memory to
be reconfigured even while the application is running.
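As a rough illustration of the X2 option (the crystal frequency is not stated in the paper; 12 MHz is assumed here purely for the arithmetic):

$$t_{cycle}^{12\,\mathrm{clk}} = \frac{12}{12\ \mathrm{MHz}} = 1\ \mu\mathrm{s}, \qquad t_{cycle}^{X2} = \frac{6}{12\ \mathrm{MHz}} = 0.5\ \mu\mathrm{s}$$

so X2 mode halves the machine-cycle time at the same crystal, or allows the crystal frequency to be halved for the same throughput, reducing EMI.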
5.2 IR Sensors
This sensor can be used for most indoor applications where no significant ambient light is present. The sensor can also be used to measure the speed of objects moving at very high speed, as in industry or in tachometers. In such applications, ambient-light-ignoring sensors, which rely on sending 40 kHz pulsed signals, cannot be used because there are time gaps between the pulses during which the sensor is 'blind'. The proposed solution contains only a couple of IR LEDs, an op-amp, a transistor and a couple of resistors. A standard IR LED is used for detection. Because of this, the circuit is extremely simple and easy to understand and build.
5.3 PIR Sensor
A Passive Infrared sensor (PIR sensor) is an electronic
device which measures infrared light radiating from objects
in its field of view. Apparent motion is detected when an
infrared source with one temperature, such as a human,
passes in front of an infrared source with another
temperature, such as a wall. All objects emit what is known
as black body radiation. This energy is invisible to the human
eye but can be detected by electronic devices designed for
such a purpose. The term 'passive' in this instance means the
PIR does not emit energy of any type but merely accepts
incoming infrared radiation.
5.4 APR9600 Voice Storage IC
This technology enables the APR9600 device to reproduce voice signals in their natural form. It eliminates the need for encoding and compression, which often introduce distortion. It is a single-chip, high-quality voice recording and playback device with non-volatile Flash memory, user-selectable messaging options, user-friendly operation and random access to multiple fixed-duration messages.
5.5 Motor Driver Circuit (L293D)
The L293D is a push-pull four-channel driver with diodes, used here for vehicle movement. It has an enable facility, over-temperature protection and internal clamp diodes. The device is a monolithic integrated high-voltage, high-current four-channel driver designed to accept standard DTL or TTL logic levels and drive inductive loads (such as relays, solenoids, DC and stepping motors) and switching power transistors. To simplify use as two bridges, each pair of channels is equipped with an enable input. A separate supply input is provided for the logic, allowing operation at a lower voltage, and internal clamp diodes are included. This device is suitable for switching applications with operating frequencies up to 5 kHz.

Fig 5.7.1 block diagram of L293D
5.6 Radio Frequency Detection

The TWS-434 and RWS-434 are extremely small and are excellent for applications requiring short-range RF remote control.
5.6.1 TWS-434: The TWS-434 transmitter accepts both linear and digital inputs, can operate from 1.5 to 12 V DC, and makes building a miniature hand-held RF transmitter very easy. The TWS-434 is approximately the size of a standard postage stamp.
5.6.2 RWS-434: The receiver also operates at 433.92 MHz. The RWS-434 receiver operates from 4.5 to 5.5 V DC and has both linear and digital outputs. For maximum range, the recommended antenna should be approximately 35 cm long. The circuit in Fig. 5.8.2 shows the receiver section using the HT-12D decoder IC for a 4-bit RF remote control system. The transmitter and receiver can also use the Holtek 8-bit HT-640/HT-648L remote control encoder/decoder combination for an 8-bit RF remote control system.
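As a side note (not computed in the paper), the recommended 35 cm antenna is close to a half-wavelength at the stated operating frequency:

$$\lambda = \frac{c}{f} = \frac{3\times 10^{8}\ \mathrm{m/s}}{433.92\times 10^{6}\ \mathrm{Hz}} \approx 0.69\ \mathrm{m}, \qquad \frac{\lambda}{2} \approx 34.6\ \mathrm{cm}$$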

Fig 5.8.1 RF Transmitter Circuit

Fig5.8.2 RF Receiver Circuit
5.7 RFID Reader Module Details
RFID reader modules are also called interrogators. They convert radio waves returned from the RFID tag into a form that can be passed on to controllers, which can then make use of the data. RFID tags and readers have to be tuned to the same frequency in order to communicate. RFID systems use many different frequencies, but the most common one, and the one supported by our reader, is 125 kHz. The reader has been designed as a plug-and-play module and can be plugged into a standard 300-mil 28-pin IC socket form factor.
6. RESULTS AND DISCUSSIONS
SCREEN SHOT

7. Future enhancements
In future, the project can be extended to an advanced level. It can be made to recognize the size and type of the obstacle in front of the walking stick: a digital camera can be fixed to capture images, and image processing can be used to analyse the size and type of the obstacle. With the help of digital signal processing, the image can be analysed and the corresponding voice message sent to the user.

Conclusion
In conclusion, the concept of distance measurement can be applied to build a robotic walking stick for the blind. The robotic basement unit coupled with the walking stick guides the person around obstacles by pulling the user along the direction of motion of the robot. Infrared sensors interfaced with the microcontroller detect all types of obstacles within a range of 20-25 cm, and the additional voice information provided through the headphone assists the blind person to the maximum possible extent. For analysing the traffic signal through RF communication, the RF transmitter is fixed to the traffic light controller and the RF receiver is interfaced with the microcontroller in the robotic basement unit. The RFID reader module fixed in the robotic unit identifies the location name. The voice messages describe the traffic signals, obstacle locations and location names. The robotic walking stick thus interacts with the environment and provides both voice and physical assistance to the user, ensuring safety.
References
[1] Thailand Association of the Blind, http://www.tabod.net/, accessed 4 April 2008.
[2] R. Palee, S. Innet, K. Chamnongthai, and C. Eurviriyanukul, "Point-to-Point Distance Measurement Using Ultrasonic for Excellent Stick," International Technical Conference on Circuits/Systems, Computers and Communications, Sendai, Japan, July 2004.
[3] C. Yuzbasioglu and B. Barshan, "A new method for range estimation using simple infrared sensors," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, pp. 1066-1071.
[4] Parallax, "Sharp GP2D12 Analog Distance Sensor," http://www.parallax.com/dl/docs/prod/acc/SharpGP2D12Snrs.pdf, accessed 20 April 2008.
[5] Microchip Technology, "PIC16F87X 28/40-Pin 8-Bit CMOS FLASH Microcontrollers," http://www.dtf.fi.upm.es/~gtrivino/IFOTON/16F873_30292b.pdf, accessed 20 April 2008.
[6] S. Innet and N. Ritnoom, "An application of infrared sensors for electronic white stick," Intelligent Signal Processing and Communication Systems (ISPACS 2008), 8-11 Feb. 2009.


Digital Authentication and Inspection System (DAIS) for Automobiles

Tibi Tom Abraham
Dept. of Electronics and Communication
College of Engineering Kallooppara
Thiruvalla, Kerala INDIA
tibitomabraham@gmail.com

Nevil Raju Philip
Dept. of Electronics and Communication
College of Engineering Kallooppara
Thiruvalla, Kerala INDIA
nevil89@gmail.com

Abstract: Conventional vehicle registration system
uses a unique number based on the region of
registration for every automobile that is to be
displayed on its license plate. This system has many
inherent shortcomings. The number can be faked, and there is no system by which the details of vehicles and their owners can be obtained at any
location. In the present scenario where terrorism
and other anti-social activities are a matter of
serious concern, we need a more reliable system for
monitoring and authenticating vehicles. We hereby
present a new system that issues a unique ID (UID)
to the vehicles so that faking the number is not
possible in any way. All vehicles can be monitored
using the UID by mobile inspection units and can
be immobilized if found suspicious. This also
provides a database for storing the details of
vehicles and their owners which can be accessed by
the mobile inspection units via the internet or a
secure wireless link whichever is convenient. This
paper provides critical insight into the concept described above from a modular point of view.
Key Words: UID, Authentication, Onboard
Module, Inspection, Encrypted UID, Message
Plausibility, RC4 Encryption.


I.INTRODUCTION
At present in India we use the analog
numbering system wherein a number allotted to
each vehicle by the Road Transport Authority of
each region is displayed on a number plate on
each vehicle. This number can be replaced on
will by anyone who intends to hide their identity.
The police authorities mainly depend on this
number to identify vehicles. There is no means
by which one can be sure that a number
displayed on a vehicle is its own. This makes the
system practically vulnerable to terrorist
activities and anti-social atrocities.
DAIS is an advanced, decentralized system for
authenticating and inspecting vehicles. The Road
Transport Authority of each region can use this
system to have a strict watch over the vehicles in
their respective regions. With this system the
police units stationed at random locations can
have a direct control on each vehicle passing by
their vicinity. The police have an automatic and
manual override capability over the vehicles
drivers to stop the vehicle.
The proposed system requires three separate
but interdependent modules. All the vehicles are
to be fitted with an onboard module that is to be
programmed at the road transport office. At the
RTO the software installed on a PC gathers the
information about the owner and the vehicle into
a database and assigns each vehicle a UID at the
time of vehicle registration. This UID along with
its encrypted version are to be programmed into
the onboard module. The encryption of the UID
ensures message plausibility.
The inspection units are to be positioned at
various random locations throughout the region.
These mobile inspection units wait for vehicles
and on being provoked by the police authorities
inspect each vehicle that passes by. The onboard
module then sends the encrypted number and the
UID to the inspection module after a request is
received from the inspection module. The
inspection module then checks for the validity of
message and sends a confirmation signal to the onboard module. If the onboard module does not receive the confirmation signal once a request has been received, it switches the vehicle off within a preset time.






II. FEATURES OF THE SYSTEM AT RTO
DAIS software has a hardware controller part
which is implemented in visual basic (VB 6.0)
and an internet communication part that is
implemented in ASP.net. The DAIS has a central
server located at the headquarters which can be
accessed through a LAN and the internet. The
LAN is built by a wireless link used only by the
DAIS authorities. This ensures security of the
database. The internet section is open to the
public and the authorities. The public can view
the details regarding registration, insurance, tax,
etc of their vehicles. The owners can view the
status of their vehicle that has been stolen.

The DAIS software available to the police authorities can be accessed in two ways:

Administrative Login
Inspective Login
The administrative login is provided in the
Authentication System for the Road Transport
officers to register the vehicles, to validate and to
modify the database. They have the sole right to
edit the data in the database.
The software of the authentication system
provides only an administrative login.
The authentication system has a facility to input the user's details, such as:
owner name
vehicle name
type
address
photo
Regional no. (eg:TN03654,KL3A5676)
Engine number
Unique ID (UID)(32-bit)
Insurance details
Road Tax details
These details should be stored in a database.


Fig.1.Authentication System Block Diagram
A unique ID (UID) is assigned to each vehicle
at the time of registration.
The authentication system can program the
module which is installed in every vehicle.
While programming the module, the UID that is
assigned to that vehicle is programmed into the
onboard module through the serial port. There is
a confirmation signal from the onboard unit that
it has correctly received the UID. This can be
done as ISP (In System Programming).

The System also has the provision to enter the
list of UIDs of stolen vehicles in a separate
database. This is to track down stolen vehicles.
The database is updated every time it is modified
through a wireless link provided for sole
communication between the various units of the
DAIS.

The owners of stolen vehicles can report to the officers at the RTO. The officers can thereafter enter the
vehicle registration number in the stolen vehicle
database.

III.FEATURES OF THE ONBOARD MODULE
The main components of this unit are:
microcontroller
Serial port interface
Wireless transceiver
Vehicle turn-off mechanism
This unit houses a microcontroller that receives
the UID from the PC. The microcontroller in the
module is programmed in such a way that the
UID is encrypted using a very strong and secure
encryption algorithm. The UID and its encrypted
versions are stored in the module to be sent later
to the inspection module along with some
additional flags to ensure impeccable security.




Fig.2 Onboard Module Block Diagram

While the vehicle passes an inspection toll gate, the microcontroller receives a secret code to perform the handshake with the inspection unit. When the inspection module sends the code, the wireless transceiver on the onboard module receives it and the microcontroller compares it with the code stored in the module; if they match, the onboard module transmits the UID and the encrypted code to the inspection unit through the wireless transceiver. The encryption used is a static version of RC4, whose dynamic version is used to secure SSL connections on the internet.
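Since the paper only names RC4 without detailing it, the following minimal sketch shows the standard RC4 key-scheduling and keystream steps applied to a 32-bit UID; the example key and UID values are hypothetical, and the paper's "static" variant and key management are not specified here.

#include <stdio.h>
#include <string.h>

static unsigned char S[256];

static void rc4_init(const unsigned char *key, int keylen)
{
    int i, j = 0;
    for (i = 0; i < 256; i++) S[i] = (unsigned char)i;
    for (i = 0; i < 256; i++) {                    /* key-scheduling algorithm */
        j = (j + S[i] + key[i % keylen]) & 0xFF;
        unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
    }
}

static void rc4_crypt(unsigned char *buf, int len) /* encrypt or decrypt in place */
{
    int i = 0, j = 0, n;
    for (n = 0; n < len; n++) {
        i = (i + 1) & 0xFF;
        j = (j + S[i]) & 0xFF;
        unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
        buf[n] ^= S[(S[i] + S[j]) & 0xFF];         /* XOR data with keystream */
    }
}

int main(void)
{
    unsigned char uid[4] = { 0x12, 0x34, 0x56, 0x78 };     /* example 32-bit UID */
    const unsigned char key[] = "example-secret-key";      /* hypothetical key   */

    rc4_init(key, (int)strlen((const char *)key));
    rc4_crypt(uid, 4);
    printf("encrypted UID: %02x %02x %02x %02x\n", uid[0], uid[1], uid[2], uid[3]);
    return 0;
}

Because RC4 is a stream cipher, running rc4_crypt again on the ciphertext with a freshly initialised state recovers the original UID.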
The microcontroller then waits for a confirmation signal. If it does not receive the confirmation signal within about 1 s, the microcontroller pulls an output pin low. This can be used to close a pneumatic valve to cut off the fuel supply, or to trip a relay mechanism to immobilize the vehicle. The microcontroller then keeps waiting for the confirmation signal from the inspection unit after a fixed delay (to allow the vehicle to be restarted once stopped). The confirmation signal is sent by the police authorities once the vehicle has been safely immobilized; once it is received, the onboard module switches the fuel supply back on.
The mechanical control can be done by
turning on or off a pneumatic valve that is fitted
on the fuel pipe. This can cut off the fuel supply
and hence immobilize the automobile. Another practical, easy but less secure method is to use a relay mechanism to turn off the ignition to the engine via the CDI (Capacitive Discharge Ignition) unit or the IDI (Inductive Discharge Ignition) unit.
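A compact host-side sketch of the onboard module's behaviour described in this section is given below. The radio, timer and valve calls are hypothetical stubs (the scripted receive sequence simulates an inspection request, a missed confirmation and a later release); real firmware would use the microcontroller's UART, timers and an output pin driving the relay or pneumatic valve.

#include <stdio.h>

#define SECRET_CODE  0xA5
#define CONFIRM_CODE 0x01

/* Scripted receive sequence: inspection request, one missed confirmation
 * (simulated timeout), then the confirmation that releases the vehicle.   */
static int rx_script[] = { SECRET_CODE, -1, CONFIRM_CODE };
static int rx_pos = 0;

static int radio_receive(unsigned char *byte, int timeout_ms)
{
    (void)timeout_ms;                       /* real firmware would use a timer here */
    if (rx_pos >= 3 || rx_script[rx_pos] < 0) { rx_pos++; return 0; }  /* timed out */
    *byte = (unsigned char)rx_script[rx_pos++];
    return 1;
}

static void radio_send(const unsigned char *d, int n)
{
    printf("TX:");
    for (int i = 0; i < n; i++) printf(" %02x", d[i]);
    printf("\n");
}

static void fuel_valve(int open) { printf("fuel valve %s\n", open ? "OPEN" : "CLOSED"); }

static void onboard_poll(const unsigned char uid[4], const unsigned char enc_uid[4])
{
    unsigned char b;

    if (!radio_receive(&b, 0) || b != SECRET_CODE)
        return;                              /* no inspection request seen          */

    radio_send(uid, 4);                      /* reply with the UID ...              */
    radio_send(enc_uid, 4);                  /* ... and its encrypted version       */

    if (!radio_receive(&b, 1000)) {          /* no confirmation within ~1 s         */
        fuel_valve(0);                       /* immobilise the vehicle              */
        while (!radio_receive(&b, 1000))     /* keep waiting for the release code   */
            ;
        fuel_valve(1);                       /* confirmation received: restore fuel */
    }
}

int main(void)
{
    const unsigned char uid[4]     = { 0x12, 0x34, 0x56, 0x78 };
    const unsigned char enc_uid[4] = { 0x9C, 0x1B, 0x7E, 0x02 };  /* placeholder */
    onboard_poll(uid, enc_uid);
    return 0;
}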

IV. FEATURES OF THE INSPECTION
MODULE


Fig.3 Inspection Module block Diagram
The main components of this unit are:-
Wireless Transceiver
PC
VB interface
Internet Communication Section
The software at the inspection module
provides inspective login for the police. This
software enables the police to wirelessly receive
the authentication message from each vehicle
that passes by the inspection toll gate.
The inspection module initially sends the
secret code on being provoked by the police
units at the inspection toll gate. It waits for the
UID and encrypted UID. Thereafter it stores
these two numbers and then decrypts the
encrypted UID to see if it matches with the
received UID from onboard module. If they
match, the inspection module sends the confirmation signal to the onboard module. The inspection module then accesses the database at the server to see whether the vehicle is stolen. If the vehicle is stolen, a stop signal is sent to the vehicle; otherwise another confirmation signal is sent to the vehicle. This ends the inspection
procedure for one vehicle.
If the vehicles UID is found in the stolen
vehicle database then the vehicle is immediately
stopped after an initial warning to the driver. The
police authorities can also report of the seizure
of stolen vehicles if any to the RTO.
The PC has internet connectivity to access the
list of stolen vehicles from the stolen vehicle


database as well as the details of the passing by
vehicle. The interface also has the facility to
check the details of the owner and the vehicle.
Table 1
Parameter        Spec
Power supply     12 V DC
Weight           500 g
Dimensions       10 x 10 x 10 cm
Temp range       -10 to 55 °C
Processor        AT89C51
Transceiver      MaxStream XBee


V. EXPERIMENTS
We tested the DAIS concept on a private
automobile. The central server which holds the
critical database was connected to the
authentication system and inspection system via
a secure LAN connection. In the authentication
system the DAIS software was configured to
have an administrative access. In the inspection
system the DAIS software was configured for
inspective access. The automobile was fitted
with the on-board module, in a way that the
automobile could only be turned on through the
DAIS onboard module. This enhances the
security i.e. the module couldnt be tampered
with.
The test run was done on a test platform similar
to the conceptual tollgate. The speed of the
automobile was limited to 25 kmph by speed
breakers. When the vehicle passed by the toll
gate the data transfer took place between the two
concerned modules. The transfer was up to 95%
flawless and the delay involved was also
minimal.
Since we had set up the LAN with only a limited number of attached systems, the latencies and time delays in the information exchange between the database and the systems involved can only be known when the system is implemented on a large scale.
VI. CONCLUSION
This system is a unique and an innovative
solution to enable the police to strictly enforce
the laws as they can track down stolen vehicles,
stop suspicious vehicles, control speeding
vehicles and so on. Since the UID cannot be
decoded or changed in any manner, vehicle
thefts can be avoided. If the owner files a complaint about a missing vehicle, then this particular number can be monitored by other police
authorities stationed at other places. This system
serves to avoid the use of vehicles for illicit
activities like smuggling, terrorism, etc. Any
damage to the onboard system switches off the
engine.

The implementation of this paper led to a critical
insight into a well coordinated, decentralized
system for vehicle monitoring, inspection and
control.

REFERENCES
[1] M. Fornasa, N. Zingirian, M. Maresca, and P. Baglietto, "VISIONS: A Service Oriented Architecture for Remote Vehicle Inspection," Dept. of Inf. Eng., Padova Univ.
[2] S. S. Manvi, M. S. Kakkasageri, and D. G. Adiga, "Message Authentication in Vehicular Ad Hoc Networks: ECDSA Based Approach," Dept. of Inf. Sci. Eng., REVA Inst. of Technol. & Manage., Bangalore, India.
[3] Hu Tan, "Design and Implementation of Vehicle Monitoring System Based on GSM/GIS/GPS."
[4] S. S. Manvi, M. S. Kakkasageri, and D. G. Adiga, "Message Authentication in Vehicular Ad hoc Networks: ECDSA Based Approach."


IMPLEMENTATION OF BLOWFISH ENCRYPTION ALGORITHM IN FPGA
Meghna Khot
M.Tech student
Electronics Department
Priyadarshini College of Engg., Nagpur, India
meghna1khot@rediffmail.com
Dr. R.V. Kshirsagar
Vice Principal
Electronics Department
Priyadarshini College of Engg., Nagpur, India
ravi_kshirsagar@yahoo.com

Abstract- The aim of this work is to implement the Blowfish algorithm in VHDL and provide a simple, robust implementation of Blowfish in hardware. A hardware implementation of Blowfish would be a powerful tool for any mobile device or any technology requiring strong encryption. The performance indices here are the security and speed of the algorithm. The overall design is a fast, efficient Blowfish implementation suitable for a plethora of applications. Encryption is used to disguise data, making it unintelligible to unauthorized observers. Providing such security is especially important when data is being transmitted across an open network such as the internet. The Blowfish encryption algorithm is suitable for wireless network application security. Messages transmitted across the internet are susceptible to eavesdropping attacks anywhere along the transmission path, so cryptography is required to protect information from being intercepted and stolen by an unwanted third party. For information that needs to be secure for only minutes, hours, or perhaps weeks, a 64-bit symmetric key will suffice. For data that needs to be secure for years, or decades, a 128-bit key should be used. For data that needs to remain secure for the foreseeable future, one may want to use an even longer key. The elementary operations of the Blowfish algorithm are table lookup, addition and XOR. The encryption concept can be used in many applications, such as internet applications that encrypt passwords or card numbers, banking, military communications, satellite channels and other communication systems.

Keywords- Blowfish, crypto, encryption

I. INTRODUCTION

A network is a series of individual elements transmitting and receiving various data. Whenever sensitive or confidential information is transmitted, there is a possibility of an unauthorized third party eavesdropping on a transmission and learning the contents of the sensitive message. This possibility is unacceptable in many scenarios. Cryptography is the process of translating a message into a form which is unreadable to everyone except the intended recipient. This is typically done with the use of keys. A cryptographic key is roughly equivalent to a physical key that can unlock the correct lock. In cryptography, keys are used to encrypt the message into a format which would appear as unreadable random information to an unauthorized third party.
The Blowfish encryption scheme was designed by Bruce Schneier in 1993 to replace DES, which was the Federal Information Processing Standard cryptographic algorithm. The intent was to create a cryptographic algorithm that did not possess the limitations and issues common in other crypto algorithms, and to provide an open, readily available cipher for users rather than the patented or classified crypto algorithms in common use at the time. Blowfish continues to attain its lofty goals of secure, open encryption that is realizable in software and hardware.
The Blowfish algorithm is conceptually simple, but its actual implementation and use is more involved. Blowfish has a fixed 64-bit block size. The key length of Blowfish is anywhere from 32 bits to 448 bits. The cipher is a 16-round Feistel network, a structure which makes encryption and decryption very similar through the use of the following elements:
P-boxes (permutation boxes; these perform bit shuffling)
S-boxes (substitution boxes; these provide non-linear substitution)
XORing to achieve linear mixing.
Blowfish was designed to meet the following design goals:
Speed: It is meant to be significantly faster than DES on 32-bit microprocessors with relatively large caches. This type of architecture is readily available today to everyone.
Compactness: It is designed to run in a relatively small memory space, less than 5 KB.
Simplicity: Only simple operations are used, including addition, exclusive-or, and table lookups.
Flexibility of key size: The key size can vary up to 448 bits (in 32-bit increments).








Figure 1. The Blowfish Algorithm
The F function is the feistel function of
Blowfish, the contents of which are shown
below.

Figure 2: The Feistel Function of Blowfish
Hence, Blowfish encrypts by splitting half the block (32 bits) into four 8-bit chunks (quarters) and using these as inputs to the S-boxes. The results from the S-boxes are then added and XORed. Decryption is quite simple and is accomplished by merely applying the P entries (P17 and P18 included) in reverse order. The S-boxes and P-boxes are initialized with values derived from the hex digits of pi. The variable-length user-input key is then XORed with the P-entries. Then a block of zeros is encrypted, and the result is used for the P1 and P2 entries. The ciphertext resulting from the encryption of the zero block is then encrypted again and used for P3 and P4. This process continues until every P-box entry and S-box entry has been replaced, resulting in 521 successive encryptions during key generation. This involves about 4 KB of data processing. This relatively complex key schedule makes Blowfish an effective and durable cryptographic algorithm.
Blowfish is among the fastest block ciphers available and yet remains cryptographically secure.
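The round structure just described can be sketched as follows. For brevity the P-array and S-boxes are filled with arbitrary placeholder values; a real implementation initialises them from the hexadecimal digits of pi and then runs the key schedule (the 521 successive encryptions) described above, and decrypts by applying the P entries in reverse order.

#include <stdio.h>
#include <stdint.h>

static uint32_t P[18];           /* subkeys                 */
static uint32_t S[4][256];       /* four 8x32-bit S-boxes   */

/* F splits the 32-bit half-block into four 8-bit quarters, looks each up
 * in an S-box, and mixes the results with addition and XOR (mod 2^32).   */
static uint32_t F(uint32_t x)
{
    uint8_t a = (uint8_t)(x >> 24), b = (uint8_t)(x >> 16);
    uint8_t c = (uint8_t)(x >> 8),  d = (uint8_t)x;
    return ((S[0][a] + S[1][b]) ^ S[2][c]) + S[3][d];
}

static void blowfish_encrypt(uint32_t *L, uint32_t *R)
{
    for (int i = 0; i < 16; i++) {              /* 16 Feistel rounds       */
        *L ^= P[i];
        *R ^= F(*L);
        uint32_t t = *L; *L = *R; *R = t;       /* swap halves             */
    }
    uint32_t t = *L; *L = *R; *R = t;           /* undo the final swap     */
    *R ^= P[16];
    *L ^= P[17];
}

int main(void)
{
    /* placeholder subkeys: NOT the pi-derived constants or a real key schedule */
    for (int i = 0; i < 18; i++) P[i] = 0x01000193u * (uint32_t)(i + 1);
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 256; j++)
            S[i][j] = (uint32_t)(i * 257 + j + 1) * 2654435761u;

    uint32_t L = 0x01234567u, R = 0x89ABCDEFu;  /* one 64-bit plaintext block */
    blowfish_encrypt(&L, &R);
    printf("ciphertext halves: %08x %08x\n", (unsigned)L, (unsigned)R);
    return 0;
}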

II. CRYPTANALYSIS OF BLOWFISH

It is not surprising that many interesting results have been proposed; however, none has come close to successfully cracking Blowfish or providing a full cryptanalysis of it. Only five results in total were submitted. John Kelsey could only break 3-round Blowfish. The most promising attack was proposed in 1996 by Vincent Rijmen in his doctoral dissertation, but this attack can only break 4 rounds of Blowfish and no more.
Given that these attempts are the only ones known thus far, and that they are surprisingly weak against the actual Blowfish algorithm, the future of Blowfish as a secure algorithm is very promising indeed.
III. PROPOSED DESIGN

Our design utilizes the simplicity of Blowfish to create a relatively straightforward implementation. The S-box contains a table of non-linear data which maps inputs to outputs. When the Blowfish design starts, the S-boxes and P-boxes are initialized based on the input key and the non-linear data. The writing of S-box data to the table is tied to a clock and is currently the slowest part of the design. The table lookup procedure is handled without a clock pulse, so a uniform clock is possible between initialization and runtime without sacrificing performance.

Figure 3: Waveforms for data encryption process










Figure 4: Waveforms for memory loading of S-BOXES

Figure 5: Read data from memory through a flip -flop






Figure 6: Write data into memory through a flip flop
IV. CONCLUSION
Blowfish is not only secure but also fast and suitable for different platforms; therefore, it has high application value in the field of information security. It is important that the maximum key size was used and that the key was chosen at random from the full key space of size $2^{448}$, since the maximum key length is 448 bits.
In our future research, we are looking to optimize the encryption algorithms to accommodate wireless network applications. Furthermore, we will try to develop a stronger encryption algorithm with high speed and minimum energy consumption.














REFERENCES
[1] B. Schneier, "High Speed SOC Design for Blowfish Cryptographic Algorithm," IEEE, 2007.
[2] B. Schneier, "Description of a New Variable Length Key, 64-bit Block Cipher (Blowfish)," Proceedings of Fast Software Encryption, pp. 191-204.
[3] Russell K. Meyers and Ahmed H. Desoky, "An Implementation of the Blowfish Cryptosystem," IEEE, 2008.
[4] Wikipedia, "Advanced encryption standard process," http://en.wikipedia.org/wiki/advanced_Encryption_standard_process.
[5] D. Schmidt, "On the key schedule of Blowfish," Cryptology ePrint Archive, 2005.




Combined Wavelet Based UWB Communication Waveform

Asmita R. Padole
M.Tech student
Electronics Department
Priyadarshini college of Engg.
Nagpur, India
Email: vaidehidhopte@gmail.com

Prof. A. P. Rathkanthiwar
Head of Department
Electronics Department
Priyadarshini college of Engg.
Nagpur, India
Email: anagharathkanthiwar@yahoo.co.in


Abstract- UWB pulse design is very critical, as the Federal Communications Commission (FCC) has established a strict standard for Ultra Wide Band (UWB) communication. In UWB, information is transferred by short pulses, and there is a close relationship between the performance of a UWB system and the pulse shape, resulting in increasing attention on UWB pulse design. This paper introduces a pulse shaper design for UWB which utilizes the optimum bandwidth and power allowed by the FCC spectral mask. Sheng proposed an algorithm to generate a pulse whose power spectral density (PSD) is close to the FCC spectral mask by selecting the optimal differential order and shaping factor of the Gaussian monocycle. Here, wavelets with diverse translation factors and diverse spectral moving factors act as basis functions; these factors are selected according to the FCC emission mask. The principle of linear minimum mean square error (LMMSE) is introduced in order to obtain the weighting factors of the basis functions. The simulation results show the spectral efficiency and bit error rate (BER) performance of the combined wavelet waveform. This waveform has extremely short duration, optimum spectral efficiency and good BER performance.

Keywords- UWB; pulse shape design; wavelet; FCC emission mask; LMMSE


I. INTRODUCTION

Advancement in wireless communications has enabled us to generate broad-spectrum, yet still band-limited, signals such as UWB signals. Lately UWB has gained considerable interest because it spreads signals over a large bandwidth, characterized by a low power spectral density but high data rates and thus a large channel capacity. These advantages make UWB technology kindle considerable interest in the research and standardization communities.







UWB technology differs greatly from conventional communication technology in many characteristics. A conventional system uses a sine-wave carrier, whereas UWB uses a wideband pulse, which is a carrier-free (baseband) technique. Here communication proceeds with trains of nanosecond baseband pulses. The pulse is characterized by a low duty cycle, and so it spreads the signal energy over a large bandwidth, on the order of one to several gigahertz. Because of the extremely short pulses, there is little overlap with multipath reflected and refracted waves, which results in a high time resolution. This property makes the UWB pulse suitable for transmission in multipath propagation channels such as indoor applications with high-speed data communications.
According to the FCC specification, any signal that occupies a fractional bandwidth (BW) of at least 20% of its centre frequency (CF), or more than 500 MHz of absolute bandwidth, can be classified as a UWB signal. Since the pulse width used for data transfer is extremely short, and the bandwidth is constrained, UWB technology requires careful pulse shaping.
Since information is transferred by short pulses, there is a close relationship between the performance of a UWB system and the pulse shape, resulting in increasing attention on UWB pulse design. The FCC approved the deployment of unlicensed UWB technology in the band 3.1 GHz-10.6 GHz by limiting the PSD measured in a 1 MHz bandwidth. Such a spectral mask protects critical applications of existing wireless communication systems, such as GPS, Bluetooth and 802.11a/b.
The typical UWB pulse shape is the Gaussian monocycle, which can be generated most easily by a pulse generator. An algorithm has been proposed by Sheng to generate a pulse whose PSD is close to the FCC spectral mask by selecting the optimal differential order and shaping factor of the Gaussian monocycle. However, a Gaussian monocycle alone cannot meet the required specifications of the FCC emission mask outside the pulse's bandwidth.

II. PULSE DESIGN

The main objective of UWB pulse design is to meet the requirements of the FCC with high spectral efficiency, that is, ensuring the UWB system works well while the PSD of the pulse reaches its maximum under the restriction of the FCC emission mask. The UWB pulse is designed with the objective of minimizing the mean square error between the designed PSD and the emission mask. UWB pulses are particularly well suited for high-speed data communications, for example home networks which interconnect various home entertainment devices.
In this paper, a combined pulse algorithm is proposed. Wavelet functions with diverse translation factors and diverse spectral moving factors act as basis functions; these factors are selected according to the FCC emission mask. Further, the weighting factors are obtained under the principle of LMMSE.


A. Gaussian Pulse and Monocycle

Gaussian Pulse
The Gaussian pulse can be represented by
$$w(t) = A\,\exp\!\left[-\left(\frac{t-T_c}{\tau}\right)^{2}\right]$$
where A is the amplitude, τ (Tau) is the pulse shape parameter, and T_c is the pulse duration.












Fig.1. The pulse and PSD of Gaussian pulse







Gaussian Monocycle
The Gaussian monocycle is similar to the first derivative of the Gaussian pulse and is given by
$$w(t) = A\,F_c\,(t-T_c)\,\exp\!\left[-2\left[F_c\,(t-T_c)\right]^{2}\right]$$
where A is the amplitude, F_c is the central frequency, and T_c is the pulse duration.

Fig. 2. The pulse and PSD of the Gaussian monocycle

B. Wavelet
Wavelets are signals which can be modelled by combining translations and dilations of a simple, oscillatory function of finite duration called the mother wavelet. In this paper, the Morlet wavelet is selected; it is constructed by modulating a sinusoidal function with a Gaussian function and is, strictly, a wavelet of infinite duration, but most of its energy is confined to a finite interval.

C. Selection of Coefficients

The Morlet wavelet is selected as the basis function. The mother wavelet is defined as
$$\psi(t) = c\,\exp\!\left(-\frac{t^{2}}{2}\right)\cos\left(2\pi f_{0} t\right) \qquad (1)$$
Accordingly, the Morlet wavelet function can be expressed as
$$\psi_{a,b}(t) = \frac{c}{a}\,\exp\!\left(-\frac{(t-b)^{2}}{2a^{2}}\right)\cos\!\left(2\pi f_{0}\,\frac{t-b}{a}\right) \qquad (2)$$
where a and b denote the translation and dilation factors respectively, and f_0 is the spectral moving factor.

The single-sided PSD of $\psi_{a,b}(t)$ is
$$\Phi(f) = \frac{c^{2}a}{2}\,\exp\!\left[-\left(2\pi a f - 2\pi f_{0}\right)^{2}\right] \qquad (3)$$
The normalized PSD is then given by
$$p(f) = \exp\!\left[-\left(2\pi a f - 2\pi f_{0}\right)^{2}\right] \qquad (4)$$
The PSD can be expressed in dB as
$$P_{dB}(f) = 10\log_{10} p(f) = -10\log_{10}(e)\,\left(2\pi a f - 2\pi f_{0}\right)^{2}\ \mathrm{dB} \qquad (5)$$
The values of a and f_0 are determined by the frequency corners of the emission mask. Table 1 shows the normalized FCC emission mask.

TABLE 1. THE NORMALIZED FCC EMISSION MASK

Frequency (GHz)   Normalized Emission Mask (dBm)   Normalized Emission Mask (dB)
0 - 0.96                 0                              -30
0.96 - 1.61             -34                              -6
1.61 - 1.99             -12                             -42
1.99 - 3.1              -10                             -40
3.1 - 10.6                0                             -30
>10.6                   -10                             -40

According to Eq. (5) and the FCC emission mask table, the corresponding values of a and f_0 are calculated as follows. For the corner frequency of 0.96 GHz:
at f = 0 GHz,
$$-30 = -10\log_{10}(e)\,\left(2\pi f_{0}\right)^{2}$$
which gives f_1 = 0.4185;
at f = 1.61 GHz,
$$-6 = -10\log_{10}(e)\,\left(2\pi a\,(1.61\times 10^{9}) - 2\pi f_{1}\right)^{2}$$
which gives a_1 = 1.4369 x 10^-10.
Based on function (2), letting c/a = 1, we obtain the first Morlet wavelet basis function. Its pulse and spectrum are given in Fig. 3.
In the same way, according to the corners around 1.61 GHz:
at f = 0.96 GHz,
$$-6 = -10\log_{10}(e)\,\left(2\pi a\,(0.96\times 10^{9}) - 2\pi f_{0}\right)^{2}$$
at f = 1.99 GHz,
$$-42 = -10\log_{10}(e)\,\left(2\pi a\,(1.99\times 10^{9}) - 2\pi f_{0}\right)^{2}$$
Solving these two equations gives a_2 = 6.6286 x 10^-10 and f_2 = 0.8235.



Fig.3 The pulse and PSD of first wavelet


We obtain the second Morlet wavelet basis function. Its pulse and spectrum are given in Fig. 4.



Fig.4 The pulse and PSD of second wavelet


Similarly, another three values of a and f0 are calculated as follows:
a3 = 6.6228x10^-10, f3 = 0.5078

a4 = 4.9294x10^-10, f4 = 0.4979

a5 = 2.6445x10^-10, f5 = 0.4015

The pulse and spectrum for all these values are shown in Fig. 5, Fig. 6 and Fig. 7.




Fig.5 The pulse and PSD of third wavelet




Fig.6 The pulse and PSD of fourth wavelet



Fig.7 The pulse and PSD o fifth wavelet

From these figures, we can conclude that the pulse's bandwidth becomes narrower and the spectrum shifts towards the left as a increases. Since f0 is the spectral moving factor, the spectrum shifts to the right or left as the value of f0 rises or falls.




Accordingly, letting b = 0, the five basis wavelet functions are given by
$$wave_{i}(t) = \exp\!\left(-\frac{t^{2}}{2a_{i}^{2}}\right)\cos\!\left(\frac{2\pi f_{i}\,t}{a_{i}}\right), \qquad i = 1,\dots,5,$$
where (a_i, f_i) take the five (a, f_0) pairs computed above.
Then, the combined wavelet pulse is expressed as
$$wave = x_{1}\,wave_{1} + x_{2}\,wave_{2} + x_{3}\,wave_{3} + x_{4}\,wave_{4} + x_{5}\,wave_{5} \qquad (6)$$
where x_1, ..., x_5 are the weighting factors.

What must be clear here is that the selection of the number of basis functions and the values of a and f0 are both flexible; thus, there are quite a number of alternative combined pulses for a UWB system, which provides a flexible and effective method for different UWB communication applications.

D. Selection of Weighting Factors

The LMMSE (linear minimum mean square error) method is adopted for selecting the weighting factors. When only wave1 is considered, x1 = 0.10 already brings its spectrum close to the FCC mask, so the LMMSE method is applied only to the other four basis functions.
The other weighting factors are initialized as
$$x_{02} = x_{03} = x_{04} = x_{05} = 0.1$$
The criterion for the iteration is the LMMSE, given by
$$e(x_{2},x_{3},x_{4},x_{5}) = \min \sum_{f}\left[P_{M}(f) - P_{W}(f)\right]^{2}$$
where $P_{M}(f)$ denotes the FCC emission mask and $P_{W}(f)$ is the PSD of the combined wavelet pulse.



The process for selecting the weighting factors is as follows (a sketch of this search is given after the list):
1) Initialize the weighting factors.
2) Calculate the PSD of the combined wavelet pulse and the mean square error, denoted emin.
3) Start the iteration. For every set of weighting factors, calculate the PSD and the error e of the combined pulse. If the PSD is not higher than the emission mask and e < emin, let emin = e.
4) Increase the weighting factors within their limit ranges.
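The sketch below illustrates this search. The basis parameters a_i and f_i are taken from the values computed in the text, the spectra of the basis pulses are computed once by a direct DFT and then combined linearly for each candidate weight set, and the normalised PSD is checked against the mask of Table 1. The 8 ns time window, the 0.2 GHz frequency grid, the 0.1 search step and the normalisation to the PSD peak are illustrative choices of this sketch, not taken from the paper.

#include <stdio.h>
#include <math.h>

#define NT 2048              /* time samples           */
#define NF 60                /* frequency samples      */
#define NB 5                 /* number of basis pulses */
#define PI 3.14159265358979323846

/* basis parameters (a_i, f_i) taken from the values computed in the text */
static const double av[NB] = {1.4369e-10, 6.6286e-10, 6.6228e-10, 4.9294e-10, 2.6445e-10};
static const double fv[NB] = {0.4185, 0.8235, 0.5078, 0.4979, 0.4015};

/* normalised FCC emission mask in dB (dBm column of Table 1) */
static double mask_db(double fGHz)
{
    if (fGHz <= 0.96) return 0.0;
    if (fGHz <= 1.61) return -34.0;
    if (fGHz <= 1.99) return -12.0;
    if (fGHz <= 3.10) return -10.0;
    if (fGHz <= 10.6) return 0.0;
    return -10.0;
}

int main(void)
{
    static double wre[NB][NF], wim[NB][NF];       /* DFT of each basis pulse */
    const double dt = 8e-9 / NT;                  /* 8 ns observation window */

    /* direct DFT of every basis pulse, done once; spectra then add linearly */
    for (int k = 0; k < NF; k++) {
        double f = (0.2 + 0.2 * k) * 1e9;         /* 0.2 ... 12 GHz grid */
        for (int i = 0; i < NB; i++) {
            double re = 0.0, im = 0.0;
            for (int n = 0; n < NT; n++) {
                double t = -4e-9 + n * dt;
                double w = exp(-t * t / (2.0 * av[i] * av[i]))
                         * cos(2.0 * PI * fv[i] * t / av[i]);  /* Morlet basis */
                re += w * cos(2.0 * PI * f * t) * dt;
                im -= w * sin(2.0 * PI * f * t) * dt;
            }
            wre[i][k] = re;  wim[i][k] = im;
        }
    }

    double best_e = 1e300, best_x[NB] = {0.0};
    double x[NB];
    x[0] = 0.10;                                   /* x1 fixed, as in the text */

    for (x[1] = 0.1; x[1] <= 1.001; x[1] += 0.1)
    for (x[2] = 0.1; x[2] <= 1.001; x[2] += 0.1)
    for (x[3] = 0.1; x[3] <= 1.001; x[3] += 0.1)
    for (x[4] = 0.1; x[4] <= 1.001; x[4] += 0.1) {
        double psd[NF], peak = 0.0, e = 0.0;
        int ok = 1;
        for (int k = 0; k < NF; k++) {
            double re = 0.0, im = 0.0;
            for (int i = 0; i < NB; i++) {
                re += x[i] * wre[i][k];
                im += x[i] * wim[i][k];
            }
            psd[k] = re * re + im * im;            /* |W(f)|^2 of the combined pulse */
            if (psd[k] > peak) peak = psd[k];
        }
        for (int k = 0; k < NF; k++) {
            double p_db = 10.0 * log10(psd[k] / peak + 1e-30);
            double m_db = mask_db(0.2 + 0.2 * k);
            if (p_db > m_db + 0.5) { ok = 0; break; }        /* mask violated     */
            e += (m_db - p_db) * (m_db - p_db);              /* LMMSE-style error */
        }
        if (ok && e < best_e) {
            best_e = e;
            for (int i = 0; i < NB; i++) best_x[i] = x[i];
        }
    }

    if (best_e < 1e300)
        printf("best weights: %.2f %.2f %.2f %.2f %.2f (error %.1f)\n",
               best_x[0], best_x[1], best_x[2], best_x[3], best_x[4], best_e);
    else
        printf("no admissible weight set found on this grid\n");
    return 0;
}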
The generated weighting factors are
$$x_{1} = 0.10,\quad x_{2} = 0.76,\quad x_{3} = 0.26,\quad x_{4} = 0.91,\quad x_{5} = 0.99.$$
According to function (6), the combined pulse is
$$wave = 0.10\,wave_{1} + 0.76\,wave_{2} + 0.26\,wave_{3} + 0.91\,wave_{4} + 0.99\,wave_{5}.$$
Fig.8 demonstrates the waveform and PSD of
combined wavelet




Fig.8 The pulse and PSD of combined wavelet








III. CONCLUSION

Based on the theory of wavelets and their spectral characteristics, we have generated a wavelet pulse as well as a combined wavelet pulse for a UWB system. It is found that this algorithm provides a flexible and effective method for designing suitable pulses for UWB communication applications. The LMMSE method is adopted to obtain the weighting factors. We have also generated the Gaussian pulse and Gaussian monocycle. The spectrum of every generated pulse is also obtained and shown along with the pulses.



REFERENCES
[1] Yang Jie and Weng Li Na, "Waveform Design for UWB communication based on combined wavelet pulse," Beijing, China, 2-4 Nov. 2007.
[2] Xiaomin Chen and Sayfe Kiaei, "Monocycle shapes for ultra wideband system," IEEE, 2002.
[3] Raghuveer M. Rao and Ajit S. Bopardikar, Wavelet Transforms, Pearson Education, Asia, 2001.
[4] Zhaohui Liang and Zheng Zheng Zhou, "Wavelet Based Pulse Design For Ultra Wideband System," Journal of Beijing University of Posts and Telecommunications, vol. 28, no. 3, pp. 43-45, June 2005.
[5] Mohit Lad, "Ultra-Wideband: The Next Generation Personal Area Network Technology."
[6] www.conceptualwavelets.com


Capacity Analysis of PLC for Railway Applications
Anish Francis
Dept . of Electronics & Communication
Amal Jyothi College of Engineering
Kottayam, India
anishfrancis@amaljyothi.ac.in
Geevarghese Titus
Dept . of Electronics & Communication
Amal Jyothi College of Engineering
Kottayam, India
geevarghesetitus@amaljyothi.ac.in


Abstract— Powerline communications are gaining prominence due to the fact that existing network structures can be used for communication purposes, thereby reducing cost. The possibility of transmitting various forms of data such as voice and video has to be properly studied. This paper deals with a basic analysis of the railway system, examining the possibilities of such transmissions.
Keywords— Power lines, ABCD parameters, channel capacity, frequency response, video transmission
I. INTRODUCTION
Currently, the systems available for monitoring railway tracks are limited and are mainly based on radio technologies such as GPS, as in [1]. Accidents due to unguarded level crossings and failures of the mechanical components of the track indicate the need for an advanced monitoring system for railways.
A conceptual system based on video cameras placed on the electric poles along the railway track is an example of an advanced system that can tackle the above-mentioned issues. Since the railway network is vast, using the existing railway power cable as the data communication channel for such a system is a good option. Hence, a study of the channel capacity of railway powerlines, and of how they respond to the different frequency ranges of audio and video under existing modulation schemes, has high priority. It has been found that OFDM is best suited for broadband video transmission through indoor powerlines [8], [9]. The objective of this paper is to model the power line using a multi-conductor transmission line theory approach and estimate the channel capacity. Modelling the channel and specifying its transfer function is required in order to apply Shannon's theory for the estimation of channel capacity.
II. LITERATURE OVERVIEW
In the literature published so far on power line channel capacity analysis, several techniques have been used for modeling the PLC. Generally there are two approaches: a bottom-up approach, which is basically transfer function modeling of the power line channel using two-port networks, and a top-down approach, which depends on parameter measurements such as the impulse response and estimates the channel function from them, as in [11]. The authors of [11] analyzed the broadband capacity of low voltage single phase power lines; they used a bottom-up approach for transmission line modeling.
In another remarkable work [12], the authors used a top-down approach for analyzing low voltage and medium voltage channels. There, the impulse response was used to compute the transfer function, which also accounts for the number of branches in the power line network. The present work did not adopt the top-down approach of [12] because of the lack of resources and sufficient data. The present work is closely related to [13], where medium voltage overhead lines are characterized based on transmission matrices for three-wire powerlines. In [13], the authors discussed the channel characteristics of low voltage power lines for data transfer, based on the same techniques used in the present work. Also, in [9] the authors successfully studied the channel characteristics, using both the top-down and the bottom-up approach, in a frequency range of 1 MHz-30 MHz, by modeling the multipath channel using two-port networks. In that paper the authors emphasize the advantage of the bottom-up approach over the comparatively complex top-down approach. We formulated this analysis for railway power lines by following the pattern of [15], where the authors analyzed the channel capacity of shipboard power line systems using a bottom-up approach.
III. ELECTRIC POWER DISTRIBUTION SYSTEM OF INDIAN
RAILWAY
This work focuses on Indian railways. The Indian railway network is among the largest railway networks, with more than 60% of its routes electrified [4]. Indian Railways has adopted a 25 kV, 50 Hz AC supply system for traction purposes according to [4], [5]. The power supplies are derived from the 220 kV/132 kV three-phase transmission system of the various grids. The basic schematic of the railway power distribution system is shown in Fig. 1. Power from the three-phase grid is transported to grid substations, from where it is transferred to traction substations. The traction substations supply power for 35-40 km of track length [4] by overhead single-phase lines operating at 50 Hz. The AC power from the lines is collected by the pantograph and converted to DC by the loco-transformer for the DC series motors in the train [4]. This medium voltage, 25 kV, 50/60 Hz supply system is also employed in the U.S., U.K., Russia, and several other countries [6]. The average distance between substations in those cases is 20-40 miles [5]. The network of overhead single-phase power lines described above is the subject of study for monitoring purposes.

Figure 1. Basic schematic of the railway power distribution system

IV. ANALYTICAL MODEL OF RAILWAY POWERLINES
Railway overhead lines use copper, copper alloys, and aluminium-based catenary cables. In some parts of the world, insulated wire is also used, depending on the voltage. In India, there are single-phase double lines for the overhead contact, and mostly bare catenary cables are used. In the present work, we tested both copper and aluminium for varying separation distances. The simulations were also run for different frequencies and different lengths of wire.
To model the overhead lines, we followed the multiconductor transmission line theory approach [1]. This is a bottom-up approach, and the lines are viewed as a two-port network. The two-port network is specified by the ABCD matrix, also called the transmission matrix. Every uniform transmission line can be modeled as a two-port network from basic transmission line theory. The electrical characteristics between the lines are then characterized by the transmission matrix. The ABCD coefficients are derived from the primary line equations for the characteristic impedance and the propagation constant.
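The primary line equations referred to here are presumably the standard per-unit-length relations for a uniform line:
\[
\gamma = \sqrt{(R + j\omega L)(G + j\omega C)}, \qquad
Z_0 = \sqrt{\frac{R + j\omega L}{G + j\omega C}}.
\]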
In the above equations, R, L, C, and G can be found using standard formulae. These parameters depend on the radius of the cable, the distance between the cables, the wave frequency, and the permeability and conductivity of the conducting material. The parameters can be computed by the simulation program by inputting data relevant to the railway powerlines. After finding the above parameters, we can model the transfer function of the overhead lines using the ABCD parameters. The transmission matrix is given by
\[
T = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \qquad (3)
\]
where the entries are functions of the propagation constant, the length, and the characteristic impedance of the powerline cable. The channel between the source and the load is represented by this ABCD matrix, and the transfer function of the channel is the function H(f) given by equation (5).
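For a uniform section, the entries of (3) and the transfer function (5) presumably take the standard two-port forms; the notation below, with source impedance Z_S and load impedance Z_L, is an assumption based on the terminations described later:
\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}
= \begin{bmatrix} \cosh(\gamma l) & Z_0\sinh(\gamma l) \\ \sinh(\gamma l)/Z_0 & \cosh(\gamma l) \end{bmatrix},
\qquad
H(f) = \frac{Z_L}{A\,Z_L + B + C\,Z_S Z_L + D\,Z_S}.
\]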

By cascading the transmission matrices, the transfer function of systems with multiple transmitters can be computed using the chain rule in [2]. In that case, the overall transmission matrix is obtained as the product of the individual transmission matrices. For example, in the present work, if the video transmitters are connected at the electric poles that support the overhead catenary, each individual section can be modeled by its own transmission matrix. If there are n sections between the source and the receiving centers, each having a transmission matrix T(n1), T(n2), etc., then the overall transmission matrix can be found from (6) as the product T = T(n1) T(n2) ... T(nn).

The channel gain can be computed by plotting the H(f) function found from equation (5) against frequency. From that, the gain and phase variation with respect to frequency, length, diameter, and distance between the cables can be found.
The channel capacity can be estimated using Shannon's channel capacity equation (7), as in [3].
The signal power can be computed from H(f) using the equations in [3], which require the estimation of the noise power

and the PSD for railway overhead lines. We assumed a noise power of -40 dB and a PSD of -50 dB for the present work, based on [3]. The load and source impedances were assumed to be 100 Ω. The characteristic impedance also varies from 100 Ω to 1 MΩ.
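The following NumPy sketch illustrates the procedure described above: per-section ABCD matrices are cascaded, H(f) is evaluated, and the Shannon capacity C = Σ Δf·log2(1 + S(f)|H(f)|²/N(f)) is summed over the band. The 100 Ω terminations and the -50 dB/-40 dB PSD/noise figures follow the assumptions stated above, while the per-unit-length parameters and the section list are placeholders to be filled from the cable data in Table I.

import numpy as np

def abcd(f, length, R, L, G, C):
    """ABCD matrix of one uniform line section at frequency f (Hz)."""
    w = 2 * np.pi * f
    z, y = R + 1j * w * L, G + 1j * w * C          # per-unit-length impedance/admittance
    gamma, z0 = np.sqrt(z * y), np.sqrt(z / y)
    gl = gamma * length
    return np.array([[np.cosh(gl), z0 * np.sinh(gl)],
                     [np.sinh(gl) / z0, np.cosh(gl)]])

def transfer_fn(f, sections, zs=100.0, zl=100.0):
    """Cascade the per-section ABCD matrices (chain rule) and evaluate H(f)."""
    T = np.eye(2, dtype=complex)
    for (length, R, L, G, C) in sections:
        T = T @ abcd(f, length, R, L, G, C)
    A, B, C_, D = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return zl / (A * zl + B + C_ * zs * zl + D * zs)

def capacity(freqs, sections, tx_psd_db=-50.0, noise_psd_db=-40.0):
    """Shannon capacity estimate (bit/s) over the evaluated frequency grid."""
    df = freqs[1] - freqs[0]
    H = np.array([transfer_fn(f, sections) for f in freqs])
    snr = 10 ** ((tx_psd_db - noise_psd_db) / 10.0) * np.abs(H) ** 2
    return np.sum(df * np.log2(1.0 + snr))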
V. RESULTS AND FINDINGS
The simulations were done in MATLAB. The data used for the analysis are included in Table I, which was compiled from [6], [10]. We assume a separation of 0.125-0.75 m between the cables. The simulations were done to compute the channel gain versus frequency, and the channel capacity versus the diameter of the cable and versus the distance from transmitter to receiver. In the present work we are not considering the capacitive effect between the lines.
For a one-meter length and 50 mm diameter, the gain response is plotted for frequencies up to 50 MHz in Fig. 2a, from which we can see that attenuation occurs at high frequencies, approximately above 10 MHz. As the diameter of the cable is varied while keeping the frequency constant, it is observed that the gain increases for larger diameters. The input diameter was in the range of practically used power cable standards. The gain response in Fig. 2b is plotted for a length of one meter and a frequency of 1 MHz.
The channel capacity was plotted for frequencies up to 100 MHz for a 50 mm diameter, one-meter length of cable, as shown in Fig. 2c. It shows that the capacity increases at low frequency, reaches a peak value, and decreases thereafter. A maximum value of 10 Mbps was observed.
This capacity is enough for video transmission across the cable. But as the distance increases, the channel capacity reduces considerably, which suggests the requirement of repeaters across the channel for robust video transmission. The channel gain drops after a 10 m distance between the source and the load, as per the simulation for a 1 MHz, 40 mm diameter cable in Fig. 2d.
For audio frequencies, however, the channel gain drops only after a significant distance. We found that the channel attenuates audio frequencies completely after a distance of around 45 m. The phase is non-linear at distances near the transmitter and tends to become linear near the receiver. This non-linearity suggests the need for FIR filters for audio regeneration.

TABLE I. DATA SPECIFICATION FOR RAILWAY POWERLINES


Figure 2. Various simulations in MATLAB

REFERENCES

[1] Zu Whan Kim, Theodore E. Kohn, "Pseudo real-time activity detection for railroad grade crossing safety."
[2] C. R. Paul, Analysis of Multiconductor Transmission Lines, Wiley.
[3] S. Galli and T. Banwell, "A novel approach to the modeling of the indoor powerline channel, Part II: Transfer function and its properties," IEEE Trans. Power Del., vol. 20, no. 3, pp. 1869-1878, Jul. 2005.
[4] Motilal Nehru National Institute of Technology, www.mnnit.ac.in/departments/eed/iee_sem/sem_proced/vol1/Electric Traction Supply System on Indian Railways.pdf
[5] Indian Railway, http://www.indianrailways.gov.in/indianrailways/codesmanual/ACTraction-II-P-I/ACTractionIIPartICh1_data.htm
[6] Bharat Bhargava, "Railway electrification systems and configurations," IEEE paper no. 0-7803-5569, 1999.
[7] A. G. Goldring, "Development of overhead equipment for British Railways 50 Hz A.C. electrification since 1960," Proc. IEEE, vol. 118, no. 8, August 1971.
[8] Jun Zhang and Julian Meng, "Robust narrowband interference rejection for powerline communication system using IS-OFDM," DOI 10.1109/TPWRD.2009.2036179, IEEE paper no. 0885-8977, 2010.
[9] Jun Zhang and Julian Meng, "Noise resistant OFDM for power-line communication systems," IEEE Transactions on Power Delivery, vol. 25, no. 2, April 2010.
[10] Wester Railway, www.wester-rail.co.uk
[11] H. Meng, S. Chen, "Modeling of transfer characteristics for the broadband power line communication channel," IEEE Transactions on Power Delivery, vol. 19, no. 3, July 2004.
[12] Justinian Anatory, Nelson Theethayi, Rajeev Thottappillil, Mussa M. Kissaka, and Nerey H. Mvungi, "Broadband power-line communications: the channel capacity analysis," IEEE Transactions on Power Delivery, vol. 23, no. 1, Jan. 2008.
[13] Athanasius G. Lazaropoulos and Panayiotis G. Cottis, "Capacity of overhead medium voltage power line communication channels," IEEE Transactions on Power Delivery, vol. 25, no. 2, April 2010.
[14] Eklas Hossain, Sheroz Khan, and Ahad Ali, "Low voltage power line characterization as a data transfer method in public electricity distribution networks and indoor distribution networks," 2008 IEEE Electrical Power & Energy Conference.
[15] Ayorinde Akinnikawe, Karen L. Butler-Purry, "Investigation of broadband over power line channel capacity of shipboard power system cables for ship communication networks," IEEE paper no. 978-1-4244-4241-6/09, 2009.





























































Optimal Tuning Of PI Controller for Vector controlled PMSM model
Deepthi R. Nair, PG Student, Department of Electrical Engineering, Walchand College of Engineering, Sangli, 416415, nair.deepthi_r@yahoo.com
A. B. Patil, Assistant Professor, Department of Electrical Engineering, Walchand College of Engineering, Sangli, 416415, ajaybpatil@yahoo.com
Abstract- This paper is concerned with vector control of a permanent magnet synchronous motor (PMSM) with PI tuning. The concept of vector control is applied to the PMSM to obtain linear dynamics similar to those of a DC motor. The linearized model consists of two control loops, namely a current loop and a speed loop. The objective of the control scheme is to achieve a very fast response. A genetic-algorithm-based PI controller for speed control of the permanent magnet synchronous motor is introduced. While the novel strategy greatly enhances the performance of the traditional PI controller and proves to be a completely model-free approach, it also preserves the simple structure and features of PI controllers. Real-time implementation using the TMS320LF2407 processor is done. Experimental results show that the genetic-algorithm-based PI controller gives the best performance compared to the conventional PI controller.

Keywords - vector control, permanent-magnet
synchronous motors, proportional plus integral
controllers.

I. INTRODUCTION
Permanent magnet (PM) synchronous motors are progressively replacing DC motors in high-performance applications such as robotics, aerospace actuators, and industrial drives. The PMSM is more efficient and has a larger torque-to-inertia ratio and power density than the induction motor (IM) for the same output capacity. The PMSM is also smaller in size and lower in weight, which makes it preferable for certain high-performance applications [1], [2].
Vector control is normally used in AC machines to convert them, performance-wise, into equivalent separately excited DC machines, which have highly desirable control characteristics [2], [3].
The model of the PMSM is, however, nonlinear. This paper applies the concept of vector control, which has been extensively applied, to derive a linear model of the PMSM for controller design purposes [3]. The speed and current controllers are then designed.
Since most of the PMSM drive systems in industrial applications are controlled with proportional plus integral (PI) and proportional-integral-derivative (PID) controllers, there exists a growing demand to optimally set the PI and PID parameters in real time according to the varying operating conditions, so that even better system performance can be achieved.
This paper is organized as follows. Section 1 presents the introduction. Section 2 presents the motor modeling of the PMSM and vector control of the PMSM drive system. In Section 3, the operation and relevant equations of the current controller are presented. In Sections 4 and 5, genetic algorithms are applied to optimize the PI controller parameters. In Section 6, the TMS320LF2407 is briefly explained. In Section 7, simulation results are shown. Section 8 gives the conclusion and references.

II. MOTOR MODELING AND VECTOR
CONTROL OF PMSM
A. Motor modeling
1).Clarke and Park Transformation [2]:
Three-phase ac machines conventionally use
phase variable notation. For a balanced three-phase, star-
connected machine:
Sa + Sb + Sc = 0


where Sa, Sb, and Sc denote any one of current, voltage, or flux linkage [3]. The space vector may be viewed in the complex plane as shown in Fig. 1.

Three Phase to Two Phase Transformation:

Forward Park Transformation:

Stator current space vector in Stationary Reference frame


where the phase displacement between the phases is 2π/3.
Reverse Park Transformation:


Stator current space vector in Rotating Reference frame :


The elimination of the position dependency from the machine variables is the main advantage of a vector rotation. This transformation will be referred to as the Reverse Park Transformation. The inverse vector rotation, which transforms from a rotating to a stationary reference frame, is the Forward Park Transformation.

2).Mathematical model of PMSM
Since there is no external source connected to the rotor side and the variation of the rotor flux with respect to time is negligible, there is no need to include the rotor voltage equations. The rotor reference frame is used to derive the model of the PMSM [3].
The electrical dynamic equation in terms of phase
variables can be written as:

The flux linkage equation is:

Considering the symmetry of the mutual inductances, Lab = Lba, equal self-inductances, Laa = Lbb = Lcc, and equal magnet flux linkages, λma = λmb = λmc = λm.
For this model, input power pi can be represented as:

Applying the transformations (1) and (6) to the voltages, flux linkages, and currents in equations (7)-(9), we get a set of simpler transformed equations:

Ld and Lq are called the d- and q-axis synchronous inductances, respectively, and ωr is the motor electrical speed. The instantaneous power Pi can be derived from the above power equation via the transformation as:
The produced torque Te is the sum of the mutual reaction torque and the reluctance torque. In order to produce an additive reluctance torque, id must be negative, since Lq > Ld.

B. Vector control
Vector control refers to controlling the magnitude and phase of the controlled variables. Matrices and vectors are used to represent the control quantities (voltages, currents, flux, etc.). This method takes into consideration the mathematical equations that describe the motor dynamics. The approach needs more calculations than a standard control scheme; this can be handled by a calculation unit included in a specialized digital signal processor (DSP) [4]. Fig. 2 shows a vector diagram of the PMSM.

Phase a is assumed to be the reference. The instantaneous position of the rotor (and hence of the rotor flux) is at an angle θr = ωr t from phase a. The application of vector control, so as to make the machine similar to a DC machine with independent control of torque and flux, demands that the quadrature-axis current iq be in quadrature to the rotor flux. Consequently, id has to be along the rotor flux, since in the reference frame used id lags iq by 90 degrees. If id is in the same direction as the rotor flux, the d-axis stator flux adds to the rotor flux, which leads to an increase in the net air-gap flux. On the other hand, if id is negative, the stator d-axis flux opposes the rotor flux, resulting in a decrease in the air-gap flux. The PMSMs are designed such that the rotor magnet alone is capable of producing the required air-gap flux up to the rated speed. Hence id is normally zero in the constant-torque mode of operation. Consider the three phase currents:

From the phasor diagram we get:
iq = torque-producing component of the stator current = iT
id = flux-producing component of the stator current = if
The electric torque equation (11) then becomes Te = K·iq, where K = (3/2)(P/2)λm.
The electric torque depends only on the quadrature-axis current, and a constant torque is obtainable by ensuring that iq is constant; the constant air-gap flux is required up to the rated speed. Vector control is only possible when precise knowledge of the instantaneous rotor flux is available. Hence it is inherently easier in the PMSM than in the induction motor, because the position of the rotor flux is uniquely determined by that of the rotor position
in the PMSM. Hence, with the application of vector control, independent control of the torque-producing (iq) and flux-producing (id) currents is possible.

III. CURRENT CONTROL

A. Drive System

Fig. 3 shows the schematic of the overall speed and current servo drive system. The error between the reference and actual speeds is processed by the speed controller to generate the torque or current reference. In the constant air-gap flux mode of operation, where id,ref = 0, the torque reference is divided by the motor torque constant to give the reference axis currents. The current controller attempts to force the actual motor currents to the commanded values at all times. The commanded signal for this control loop never exceeds rated values, so the machine does not operate in the constant-power or flux-weakening mode (for a speed reference above the rated speed the motor would have to enter the flux-weakening mode, and speeds above rated are not allowed here). The air-gap flux is constant and is maintained at the value of the PM flux.

B. Block Diagram Derivation

Under the assumption id = 0, the system becomes linear and resembles a separately excited DC motor. The motor q-axis voltage equation with id = 0 and the electro-mechanical equation become:
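Presumably these take the standard rotor-reference-frame forms (with λm the magnet flux linkage, ωr the electrical speed, ωm the mechanical speed, and TL the load torque):
\[
v_q = R_s i_q + L_q \frac{di_q}{dt} + \omega_r \lambda_m, \qquad
J \frac{d\omega_m}{dt} + B\,\omega_m = T_e - T_L .
\]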

C. Current Controller
The dynamic performance requirements for the current control loop are:
1. Bandwidth greater than 1000 Hz.
2. Rise time less than 500 microseconds.
Equations (16) and (17) describe the stator current dynamics. As shown in Fig. 5, the induced-emf loop crosses the q-axis current loop, and it can be simplified by moving the take-off point for the induced-emf loop from the speed to the current output point. This gives the current-loop transfer function (18) in the form

(a2 s^2 + a1 s + a0) / (b3 s^3 + b2 s^2 + b1 s + b0), where a2 = Kpc J, a1 = Kpc B + Kic J, a0 = Kic B, b3 = Ls J, b2 = Ls B + Rs J + Kpc J, b1 = B Rs + Kb K + Kpc B + Kic J, b0 = Kic B, and the PI current controller is Kpc + (Kic/s).
The transfer function given above is a type-zero system; hence this system has a steady-state error to a step change. Therefore, a controller with integral action is necessary to eliminate this error. For a simple low-cost system, the proportional-plus-integral (PI) controller is a good choice for such action. The PI controller output equation in the time domain is
u(t) = Kpc e(t) + Kic ∫ e(t) dt.
Here, e(t) is the input to the controller, and Kpc and Kic are the proportional and integral gains of the controller. In the s-domain the PI current controller is given by Kpc + (Kic/s).
For a second order system

Rise time :

Band width :
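The second-order relations presumably intended here link the bandwidth and the rise time to ζ and ωn; for example,
\[
\omega_B = \omega_n \sqrt{\,1 - 2\zeta^2 + \sqrt{4\zeta^4 - 4\zeta^2 + 2}\,}, \qquad
t_r \approx \frac{0.35}{f_B} \;\;(\text{for } \zeta \approx 0.7),
\]
so that a 1000 Hz bandwidth with ζ = 0.7071 gives ωB ≈ ωn ≈ 6280 rad/s and tr ≈ 350 µs, consistent with the requirements above.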

From the transfer function in equation (18), we have a third-order characteristic equation. From the given dynamic performance requirements, the damping ratio and natural frequency are calculated (from the given bandwidth and rise time for the current loop) using the second-order system equations (20). We get a damping ratio ζ = 0.7071 and a natural frequency ωn = 6280 rad/s. The poles are at -4440.6
+ j4440.7 and -4440.6 - j4440.7. To make the desired characteristic equation third order, a pole should be placed suitably in the left half of the s-plane so as to obtain an acceptable overall response. We place an assumed pole at -1e4, which is far from the imaginary axis and therefore has little effect on the dynamic response of the system [3]. After comparing the desired characteristic equation with the characteristic equation of the current-loop transfer function, we get the PI controller constants Kpc = 296.34 and Kic = 2.04e6. The same values of the gains are used for the PI controller of the flux loop (both the flux and torque loops have approximately the same dynamics), as shown in Fig. 4.
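A short script along these lines reproduces the coefficient matching; the motor constants below are placeholders (not the values used by the authors), and only the s^2 and s^0 coefficients are matched exactly, which is one of several reasonable ways to perform the comparison.

import numpy as np

# Illustrative motor constants (placeholders; take the actual values from the PMSM datasheet)
Ls, Rs, J, B, Kb, K = 9.0e-3, 1.5, 2.0e-4, 1.0e-4, 0.1, 0.1

# Desired dynamics: dominant second-order pair plus a far-away real pole at -1e4
zeta, wn, p = 0.7071, 6280.0, 1.0e4
desired = np.polymul([1.0, 2 * zeta * wn, wn ** 2], [1.0, p])   # s^3 + d2 s^2 + d1 s + d0
d2, d1, d0 = desired[1:]

# Current-loop characteristic polynomial: b3 s^3 + b2 s^2 + b1 s + b0 with
#   b3 = Ls*J, b2 = Ls*B + Rs*J + Kpc*J, b1 = B*Rs + Kb*K + Kpc*B + Kic*J, b0 = Kic*B
# Normalising by b3 and matching the s^2 and s^0 coefficients yields Kpc and Kic
# (the s^1 coefficient is then only approximately matched).
b3 = Ls * J
Kpc = (d2 * b3 - Ls * B - Rs * J) / J
Kic = d0 * b3 / B
print(f"Kpc = {Kpc:.2f}, Kic = {Kic:.3e}")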

IV. GENETIC ALGORITHM
The genetic algorithm is a method for solving optimization problems that is based on natural selection (the process that drives biological evolution). The main advantages of the GA over other conventional optimization techniques are summarized as follows [5, 6, 7]:
1) The GA technique uses a population of trials representing possible solutions of the problem, not a single point. As a result, the GA is less susceptible to getting trapped in local minima.
2) GAs use probabilistic rules to make decisions when solving problems.
3) GAs apply a performance-index assessment to guide the search in the problem space.
A flowchart of the GA optimization procedure is given in Fig. 6.
Fig. 6. Flowchart of the general GA optimization procedure
Individual solutions are initialized; then the algorithm repeatedly modifies these solutions until a predefined convergence criterion is met. The convergence criterion of a genetic algorithm is a user-specified condition. In general, the genetic algorithm uses three types of rules at each step to create the next generation from the current population: reproduction, crossover, and mutation. Details of these rules are listed in [5, 6].
The individuals are encoded in either binary or real numbers. Binary-encoded numbers have many drawbacks. They take a long time to calculate, since the real values need to be converted to binary and, at the end of each genetic algorithm cycle, the binary numbers are converted back to real ones. In addition, dealing with binary numbers may affect the precision of the conversion process. These problems affect the accuracy of the algorithm, and as a result real numbers are chosen in this work.
V. APPLICATION OF GENETIC ALGORITHM
FOR TUNING PI CONTROLLER
The genetic algorithm described above, which was applied successfully in [6], is used in this paper as a method of tuning the PI controller parameters of the speed controller. The individuals are the parameter gains of the PI controller, selected within a certain interval. The population size, i.e. the number of individuals, is chosen as ten. Although a bigger population size gives more accurate results, it needs more calculation time; therefore a population size of ten is a suitable choice in our study. The objective function is selected as the absolute value of the error, and the target is to minimize this function. The operation of the genetic-algorithm-based PI controller is explained by the following steps:
1) Initialize a population of Kp and Ki having a size of 10, i.e. Kp(n) and Ki(n) for n = 1 to n = 10. Define a period of time ts, an objective function abs(error), and a constant R in (0, 1).
2) Set n = 1 and apply Kp(n) and Ki(n) to the system for a period ts.
3) At the end of ts, catch the value of the error and hold it as abs(error). Then increment n by one and apply Kp(n) and Ki(n) to the system for a period ts.
4) Repeat step 3 until n reaches 10.
5) Search for the minimum value of the objective function for n = 1 to n = 10 and catch the corresponding values of Kp and Ki. For example, if the objective function is minimum when n = m (i.e. m is the individual which minimizes the objective function), then catch Kp(m) and Ki(m) and use them in the next generation. This is the reproduction stage.
6) Apply the crossover operation to the rest of Kp(n) and Ki(n). Again, if the objective function is minimum when n = m, then apply the following formulae [5, 6] to calculate the crossover individuals (the new, modified Kp(n) and Ki(n) that will be used in the next generation, for n = 1 to n = 10 and n ≠ m):
Kp(n) = R * Kp(m) + (1 - R) * Kp(n)   (23)
Ki(n) = R * Ki(m) + (1 - R) * Ki(n)   (24)
7) Check whether the convergence criterion is met; if it is not, repeat steps 2-6, otherwise stop the genetic algorithm, keep the final fittest values of Kp and Ki, and apply them continuously to the system. A compact sketch of these steps is given below.
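The sketch below assumes a hypothetical evaluate(kp, ki) routine that applies the gains to the drive (or its simulation model) for a period ts and returns abs(error); the search ranges, R, and the convergence tolerance are illustrative assumptions.

import random

def ga_tune_pi(evaluate, pop_size=10, kp_range=(0.0, 10.0), ki_range=(0.0, 100.0),
               R=0.5, max_generations=50, tol=1e-3):
    """GA-style PI tuning following steps 1-7 above (illustrative sketch)."""
    pop = [(random.uniform(*kp_range), random.uniform(*ki_range)) for _ in range(pop_size)]
    for _ in range(max_generations):
        costs = [evaluate(kp, ki) for kp, ki in pop]          # steps 2-4: trial each individual
        m = min(range(pop_size), key=costs.__getitem__)       # step 5: reproduction (keep the fittest)
        best_kp, best_ki = pop[m]
        # step 6: crossover of the remaining individuals towards the fittest one
        pop = [(best_kp, best_ki) if n == m else
               (R * best_kp + (1 - R) * pop[n][0], R * best_ki + (1 - R) * pop[n][1])
               for n in range(pop_size)]
        if costs[m] < tol:                                    # step 7: convergence criterion
            break
    return best_kp, best_ki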

VI. MOTOR CONTROLLER DSP - TMS320LF2407 [4]
The event manager (EV) is a special module designed for motor control. It can produce various kinds of PWM waveforms, and these waveforms can include an adjustable dead band. Through the interface to an incremental optical encoder it can measure the rotating speed, direction, and angular displacement of the motor, and its capture units can measure pulse widths. There are two event managers
in the TMS320LF2407A DSP, EVA and EVB. Each event manager has two general-purpose 16-bit timers and eight 16-bit PWM channels. Three compare units with programmable dead-band control can produce three pairs of complementary PWM waveforms (i.e., six outputs), and the two timers can produce two further independent PWM waveforms. Both symmetric and asymmetric PWM waveforms can be generated, and simultaneous output pulses in the upper and lower legs of a bridge are avoided. In motor control applications, the PWM circuit reduces the CPU overhead for producing the PWM waveforms and the workload of the user, and also simplifies the control software for producing symmetrical pulse width modulation.

VII. SIMULATION RESULTS
Simulations have been carried out using MATLAB Simulink.
Case 1: The design of the conventional PI controllers is carried out using the Ziegler-Nichols method. Figures 7-9 show the simulation results; notice that there is a torque ripple in Te and that the settling time is longer compared to the results of the later case.
Case 2: The proposed controller is simple to implement and gives good performance at the same time. The proposed controller is applied to the speed controller. The vector of the initial population for Kp and Ki is given in Table I. The current controllers are kept as conventional PI controllers for the sake of simplicity.

TABLE I

Figures 10-12 show the simulation results for case 2. It is noticed that applying the genetic algorithm modifies the system behavior by reducing the rise time and settling time compared to case 1. The results show the superiority of the proposed method in reducing the algorithm effort and consequently the execution time. The steady-state error is zero using the proposed controller. The variations of the proportional and integral gains are illustrated in Fig. 13 and Fig. 14. The developed torque has a percentage of torque ripple, which can be minimized in a future study.

Fig 7:. case 1: Rotor position and speed, t (sec).

Fig 8: case 1: Phase currents (A) - t (sec).

Fig.9: case 1: Developed torque (N.m) - t (sec).

Fig 10: case 2: Rotor position and speed,

Fig 11: case 2: Phase currents (A) - t (sec)

Fig12:case2:Developed torque (N.m)- t(sec)

Fig 13: Variation of proportional gain kp during
operation.

Fig 14: Variation of integral gain ki during operation
VIII. CONCLUSIONS

In this paper vector control has been described in adequate detail and has been implemented on a PMSM in real time. The time-varying abc currents are made stationary using the Reverse Park Transformation, to simplify the calculation of the PI controller constants. This method enables the operation of the drive at zero direct-axis stator current. The controllers include a conventional PI controller for the current controller and a genetic-algorithm-based PI controller for the speed controller. Simulation results show good performance of the genetic-algorithm-based PI controller compared to the conventional PI controller in terms of rise time, settling time, and steady-state error.

REFERENCES

[1] P. Pillay and R. Krishnan, "Control characteristics and speed controller design for a high performance permanent magnet synchronous motor drive," IEEE Trans. Energy Conversion, vol. 19, no. 1, March 2004.
[2] P. C. Krause, Analysis of Electric Machinery. New York: McGraw-Hill, 1986.
[3] R. Krishnan, Electric Motor Drives (Modeling, Analysis, and Control). Prentice-Hall, 2003.
[4] TI Company, AC Speed Governing Controlling System, TMS320LF2407A DSP Controllers, 2005.
[5] K. F. Man, K. S. Tang, and S. Kwong, "Genetic algorithms: concepts and applications," IEEE.
[6] Faeka Khater, Adel Shaltout, Essam Hendawi, and Mohamed Abu El-sebah, "PI controller based on genetic algorithm for PMSM drive system," Electronics Research Institute, Dokki, Giza, Egypt; Cairo University, Giza, Egypt.
[7] Cetin Elmas and M. Ali Akcayol, "Genetic PI controller for a permanent magnet synchronous motor."

Performance Evaluation Of A QoS MAC Protocol
In Wireless Sensor Networks

Jeena A Thankachan, Department of Electronics and Communication Engineering, Hindustan University, Chennai, jeena.thankachan86@gmail.com
B. R. Shyamala Devi, Asst. Professor, Department of Electronics and Communication Engineering, Hindustan University, Chennai, shyamala_rajesh2000@yahoo.co.in

Abstract - The media access control (MAC) protocol in wireless sensor networks provides a periodic listen/sleep state for protection from overhearing and idle listening, and plays a key role in determining channel capacity utilization, network delays, and power consumption. Many scenarios and applications exist in which sensor nodes must send data quickly to destination nodes and must also serve diverse applications, from low data rate event-driven monitoring applications to high data rate real-time industrial applications. This paper presents the implementation and performance evaluation of a priority-based quality-of-service MAC (PQMAC) protocol and of the hybrid Z-MAC protocol for wireless sensor networks. PQMAC uses data priority levels to differentiate among data transmissions. The protocol manages scheduling by adaptively controlling network traffic and the priority level. It focuses on reducing the latency of message transmission from the source to the destination. Simulation results show that PQMAC reduces latency problems in wireless sensor networks while maintaining energy efficiency. Z-MAC, in turn, combines the strengths of TDMA and CSMA while offsetting their weaknesses: it achieves high channel utilization and reduces collisions under high contention, like TDMA, and also achieves high channel utilization and low latency under low contention, like CSMA. Z-MAC is robust to synchronization errors, slot assignment failures, and time-varying channel conditions.
Key Terms - MAC, QoS, PQMAC, Z-MAC, latency, energy efficiency.
I INTRODUCTION
Wireless sensor networks (WSNs) are made up of sensor nodes with a wide range of abilities. WSN nodes are characterized by their small size and specialized application; for example, nodes monitoring the environment may periodically send temperature, humidity, and light data to the destination. Medium access control (MAC) protocols are used to let multiple nodes share scarce bandwidth in an orderly and efficient manner. The key concern of MAC design in wireless sensor networks is energy consumption, as sensor nodes are battery powered. The battery has limited capacity and often cannot be replaced or recharged, due to environmental or cost constraints. End-to-end latency can be important or unimportant depending on what application is running. For example, object tracking or event detection requires a quick response to observed phenomena, and high latencies are not acceptable. Designing energy-efficient solutions which at the same time achieve low latency in packet delivery is thus a challenging task. Other typically important attributes, including fairness, throughput, and bandwidth utilization, may be secondary in sensor networks.
In most applications, quality of service (QoS) in the WSN MAC has received little consideration. Thus, even though most WSN research has concentrated on energy conservation, QoS, which depends on the application and scenario, is very important. Providing data priorities has not been addressed by most research efforts. By assuming that all event data have the same priority, most MAC schemes suffer from these drawbacks:
Fast sending of event data is not guaranteed in emergency environments.
There is no notion of which data are important, because there is no priority.
If a sensor node changes its schedule for fast sending of data, the sensor node consumes energy.
This paper implements PQMAC, which provides QoS by assigning priority to data in the wireless sensor network. PQMAC provides energy efficiency and reduces the transmission latency of high-priority data simultaneously. PQMAC provides classification by type of data and a scheduling scheme for fast transmission of event data. Fast packet transmission is provided by a priority queue and additional listen time. The priority queue is located in the MAC. The sensor node sends high-priority data from the high-priority queue first. If no data are in the high-priority queue, the sensor node sends low-priority data from the low-priority queue. The priority queue guarantees faster transmission of high-priority data compared to low-priority data. Additional listen time solves the transmission latency problem in the sleep state. Sensor nodes do not send low-priority data to other nodes during the additional listen time, because that time is used only for high-priority data. PQMAC also provides an advanced wake-up scheme and dynamic priority listening to maintain energy efficiency. The advanced wake-up scheme uses the RTS/CTS message, which has a priority check field, for broadcasting. The dynamic priority listening scheme manages scheduling based on traffic information. The scope of the paper is the immediate sending of emergency event data to the destination node under high or low contention. A fire, typhoon, earthquake, enemy appearance, or medical emergency are examples of emergency data. In addition, a destination node may collect data by query flooding when it needs data quickly.
Z-MAC (Zebra MAC) is a hybrid MAC scheme for sensor networks that combines the strengths of TDMA and CSMA while offsetting their weaknesses. The main feature of Z-MAC is its adaptability to the level of contention in the network, so that under low contention it behaves like CSMA, and under high contention like TDMA. It is also robust to the dynamic topology changes and time synchronization failures that commonly occur in sensor networks.
This paper presents the implementation of PQMAC and evaluates the performance of PQMAC with respect to Z-MAC
for the QoS parameters of energy efficiency, latency, throughput, and bandwidth utilization.

II LITERATURE SURVEY

The research in [1] outlines the sensor network properties that are crucial for the design of MAC layer protocols and describes several MAC protocols proposed for sensor networks, emphasizing their strengths and weaknesses. Existing wireless MACs are known to have fairness and QoS problems. In response, the wireless community has come up with numerous point solutions, with one-to-one comparisons of each solution with existing MACs for the specific fairness model they target. However, with some tradeoff of flexibility and efficiency, many of these MACs can be made relatively free of a specific fairness model [5]. Quality of service (QoS) is an important requirement for the proper functioning of traditional and new networks. Wireless sensor networks (WSNs) are prone to numerous events due to their mobility, harsh communication medium, and environment behavior, and little attention has been paid to the QoS requirements of such networks. A preliminary study on the use of QoS in WSNs focuses on QoS techniques applied to medium access control (MAC) and proposes a classification of such techniques into four categories [8]. The integration of wireless networking technologies with medical information systems (telemedicine) has a significant impact on the healthcare services provided to our society; applications of telemedicine range from personalized medicine to affordable healthcare for underserved populations. The Energy-Efficient QoS-Aware Media Access Control Protocol for Wireless Sensor Networks is an innovative MAC protocol (Q-MAC) that minimizes the energy consumption in multi-hop wireless sensor networks (WSNs) and provides quality of service (QoS) by differentiating network services based on priority levels. The priority levels reflect the application priority and the state of system resources, namely residual energy and queue occupancies [3]. Real-time wireless sensor networks are becoming more and more important due to the requirement of message delivery timeliness in emerging new applications. Supporting real-time QoS in sensor networks faces severe challenges due to the wireless nature, limited resources, low node reliability, distributed architecture, and dynamic network topology. RL-MAC is a novel adaptive media access control (MAC) protocol for wireless sensor networks (WSNs) that employs a reinforcement learning framework. Existing schemes center on scheduling the nodes' sleep and active periods as a means of minimizing the energy consumption, and recent protocols employ adaptive duty cycles as a means of further optimizing the energy utilization [9]. The priority-based QoS MAC protocol for wireless sensor networks [11] is the paper implemented here for a medical emergency application, and its performance is evaluated in comparison to Z-MAC.

Due to the need to estimate multiple channel propagation states, the channel estimation procedure has a higher complexity compared to a non-reconfigurable system. An ideal switching scenario is one in which the switching time is negligible compared to the symbol duration in the multiple channels. The effects of non-ideal switching time are incorporated into the model to quantify the losses due to switching delay in the ad hoc network. The major problems introduced by the switching delay are a lower data rate for the state-switching scheme and a loss in the received SNR for the state-selection scheme.
III IMPLEMENTATION OF PQMAC
A. Priorities for faster transmission: The MAC protocol must be able to adapt to different applications. Event data have different transmission times and frequencies because each application is different. Here the system divides data into four classes according to scenario, application, and transmission type. The PQMAC protocol for WSNs is based on priorities for fast data transmission. Fast transmission is provided by additional listen time in the middle of the sleep state. The system creates the additional listen time using a doubling scheme for data priority. The doubling scheme provides more opportunities to send high-priority data than low-priority data.

Fig 1.1 Doubling scheme
B. High-priority data into a high-priority queue: PQMAC uses a priority queue. The sensor node places high-priority data into a high-priority queue and low-priority data into a low-priority queue. Therefore, high-priority data have the opportunity to be sent during any listen time. If the sensor node knows the probability of receiving high-priority data, it can reduce its energy consumption. If the node has a low probability of receiving high-priority data, it remains in the sleep state to reduce energy consumption. A node can wake up at any time to send high-priority data during the additional time. If a node does wake up during the additional time and no nodes are receiving data, the node goes back to sleep. A sketch of this queueing behaviour is given below.
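The following minimal Python sketch illustrates the two-queue selection rule; the class and method names are illustrative rather than taken from the PQMAC implementation.

from collections import deque

class PriorityQueues:
    """Two-level priority queue as described for PQMAC (illustrative sketch)."""

    def __init__(self):
        self.high = deque()   # emergency / high-priority event data
        self.low = deque()    # routine monitoring data

    def enqueue(self, packet, high_priority=False):
        (self.high if high_priority else self.low).append(packet)

    def next_packet(self, additional_listen=False):
        """Pick the packet to transmit in the current listen period.

        During the additional listen time only high-priority data may be sent;
        otherwise high-priority data is served before low-priority data.
        """
        if self.high:
            return self.high.popleft()
        if additional_listen:
            return None            # slot reserved for high priority: stay idle
        return self.low.popleft() if self.low else None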
Fig 1.2. Priority Levels
C. Changes to the listen/sleep state based on network traffic conditions: The advanced wake-up scheme makes a sensor node more energy-efficient, but it also causes some data transmission latency. If the sensor network is experiencing low-traffic conditions, the advanced wake-up schedule is useful. Otherwise, under heavy-traffic conditions, sensor nodes need to be able to send without latency. The system therefore provides an advanced adaptive wake-up scheme. Dynamic priority listen scheduling (DPL) changes the listen/sleep state based on network traffic conditions. Each node measures the traffic interval of the received data. Avoiding latency is important in heavy-traffic environments, while energy conservation is important in low-traffic environments. DPL is a good scheme in a dynamic traffic environment.

Fig1.3 Advanced Wake Up
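A simple sketch of how a node might adapt its listen/sleep schedule from the measured traffic interval; the thresholds and durations are illustrative assumptions rather than PQMAC's actual parameters.

import time

class DynamicPriorityListener:
    """Adjust the listen/sleep schedule from the measured traffic interval (sketch)."""

    def __init__(self, base_listen=0.05, base_sleep=0.45, busy_threshold=1.0):
        self.base_listen, self.base_sleep = base_listen, base_sleep
        self.busy_threshold = busy_threshold      # inter-packet gap (s) considered "heavy traffic"
        self.last_rx = None
        self.interval = None

    def on_receive(self):
        now = time.monotonic()
        if self.last_rx is not None:
            self.interval = now - self.last_rx    # measured traffic interval
        self.last_rx = now

    def schedule(self):
        """Heavy traffic: listen longer (avoid latency); light traffic: sleep longer (save energy)."""
        if self.interval is not None and self.interval < self.busy_threshold:
            return 2 * self.base_listen, self.base_sleep / 2
        return self.base_listen, 2 * self.base_sleep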

IV IMPLEMENTATION AND PERFORMANCE
EVALUATION
PQMAC is implemented for an ICU (hospital environment) where 10 nodes are considered. Out of the 10 nodes, 5 nodes sense the individual patients, 1 node is a scheduler (common sensing device), 1 node is for the database, and another gives the doctor alert messages in emergencies. The simulation results of the PQMAC implementation are compared with Z-MAC, and the following result graphs are obtained:



Fig4.1 Simulation of 10 nodes




Fig4.2 Node1(patient 1) in high priority state

Fig 4.3 Node4(patient 4) in high priority state
Performance graph of PQMAC is as shown below:





The performance evaluation of PQMAC is done in comparison to Z-MAC, which combines the strengths of TDMA and CSMA. The evaluated and compared graphs will be obtained soon. The performance of PQMAC with respect to S-MAC was also studied.

V CONCLUSION
In this paper, PQMAC is implemented, which provides QoS based on the priority of data in a wireless sensor network and is used in emergency cases. Simulation results show that PQMAC reduces packet transmission latency far more than Z-MAC. Also, the energy consumption used in the additional listen time is minimized. The performance graphs obtained are: average throughput (bytes/sec) vs. interval of traffic (sec), average queuing delay (sec) vs. interval of traffic, average energy consumption per node (in high traffic conditions), average energy consumption per node (in low traffic conditions), and average packet latency (sec) vs. interval of traffic, which show PQMAC to be the best in comparison to S-MAC, Q-MAC, and now Z-MAC (hybrid MAC).

VI REFERENCES
[1] I. F. Akyildiz et al., "Wireless sensor networks: a survey," Computer Networks, vol. 38, pp. 393-422, March 2002.
[2] T. V. Dam and K. Langendoen, "An adaptive energy-efficient MAC protocol for wireless sensor networks," 1st ACM Conf. Embedded Networked Sensor Systems, Los Angeles, CA, Nov. 2003.
[3] W. Ye, J. Heidemann, and D. Estrin, "An energy-efficient MAC protocol for wireless sensor networks," in Proceedings of the Joint Conference of the IEEE Computer and Communications Societies (InfoCom), vol. 3, 2002.
[4] I. Rhee, A. Warrier, M. Aia, and J. Min, "Z-MAC: a hybrid MAC for wireless sensor networks," in Proceedings of the International Conference on Embedded Networked Sensor Systems (SenSys), 2005.
[5] Ajit Warrier, Injong Rhee (Dept. of Computer Science, North Carolina State University), and Jae H. Kim (Boeing Phantom Works), "Experimental evaluation of MAC protocols for fairness and QoS support in wireless networks."
[6] GholamHossein EkbataniFard, Mohammad H. Yaghmaee, and Reza Monsefi (Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, Iran), "A QoS-based multichannel MAC protocol for two-tiered wireless multimedia sensor networks," received May 27, 2010.
[7] Islam T. Almalkawi, Manel Guerrero Zapata, Jamal N. Al-Karaki, and Julian Morillo-Pozo, "Wireless multimedia sensor networks: current trends and future directions," published 9 July 2010.
[8] Nauman Aslam, William Phillips, and William Robertson, "A unified clustering and communication protocol for wireless sensor networks."
[9] Zhenzhen Liu and Itamar Elhanany, "RL-MAC: a QoS-aware reinforcement learning based MAC protocol for wireless sensor networks."
[10] Joel Ruiz, Jose R. Gallardo, Luis Villasenor-Gonzalez, Dimitrios Makrakis, and Hussein T. Mouftah (University of Ottawa, Ottawa, Ontario, Canada K1N 6N5), "QUATTRO: QoS-capable cross-layer MAC protocol for wireless sensor networks."
Multi-Ported Enhanced Security Architecture to
prevent DDOS Attacks using PRS Architecture
Ramesh R
Department of CSE
VJCET
Ernakulam, Kerala, India
ramesh-r@ieee.org
Pankaj Kumar G
Department of CSE
FISAT
Ernakulam, Kerala, India
pankusoft@gmail.com

Abstract - The existing web referral system, coupled with the reliable persistent referral service architecture, PRS, is a strong deterrent for attackers launching DDoS attacks on popular web servers. We can enhance the referral channel service for normal users by imparting multi-port security. A centralized referral service could be implemented in search engines to provide valid clients a privileged channel to target websites during DDoS attacks.
Keywords - denial of service; referral; multi-port; web site-graph; network security and information assurance.
I. INTRODUCTION
The existing referral service coupled with the Persistent Referral Service Architecture (PRS) [1] can augment the security of web servers against distributed denial of service (DDoS) [2] attacks. The referral service can be employed from one website A to another website B in the Site-Graph [3] using a transitive method; that is, the referred client should be a member of website A as well as of website B. The existing system also takes advantage of the referral tree to expand the security perimeter around an important web server, thereby enhancing the security of an important web server under DDoS attacks.
In this paper, we propose a system in which the existing reliability of popular search engines such as Google, Yahoo, etc. can be utilized to provide a value-added service to important web servers under DDoS attacks. Also, there is no mandatory requirement for accounts on both the source and target servers; the user needs a valid account in any one of the source servers.
The main advantage of this system is that normal users are also given privileged channels, even though the bandwidth quota in this service would not be similar to the bandwidth provided in the normal WRAPS [4] privileged channel to important websites. We can nevertheless ensure privileged channel access to websites using this method in search engines. The proposed method tries to exploit the existing reliability and serviceability offered by popular search engines. This method can also serve as an alternative for establishing a safe channel connection to important financial websites. The approach can be extended to the implementation of a centralized database system in which users accessing financial websites can be protected from phishing [5] and pharming attacks [16].
II. RELATED WORKS
Familiar works in the field of web server protection involve overlay node architectures [6], [7], [8], the capability token approach [9], the web referral architecture for privileged service, etc. In the overlay node architecture, the target website to be protected is surrounded by overlay nodes, which provide a suitable protection perimeter for the inbound and outbound traffic of the web server. Because of its implementation complexities, capability-based approaches [10], [11] came into existence. In this approach, any remote or referral web server willing to establish a reliable channel with the target website needs to pass through a capability token acquisition process. But the main drawback of this approach undermines its very existence, as the server providing the token might itself come under a denial of service attack. Then a more sophisticated approach, the web referral architecture for privileged service, was developed.
In WRAPS, smaller websites provide a referral service to important web servers, such that under DDoS attacks on important websites, the privileged service offered by these smaller or other important websites can be utilized to provide privileged clients with privileged channels. In this scheme, websites offering the WRAPS service to important websites generate a privilege URL or a privilege referral hyperlink using a script. This privilege URL is a fictitious URL, which differs from a normal hyperlink URL in that a capability token is hidden within the URL. The privilege URL, with the help of meta-refresh redirection, redirects the client's browser to the target website. The websites willing to offer the WRAPS service to important websites register as referrers with the target website, on a contract basis. The main incentive for smaller websites to provide the WRAPS service is that they are provided with rewarding links from important websites, which improves their site rank [3]. The main disadvantages of the WRAPS method are that normal users are not considered and that it follows a transitive approach in account verification, i.e., the user needs to be a member of both the source and target websites. Also, it provides privileged service to users only when the normal user becomes a privileged one. Our paper illustrates a method to overcome this problem by offering valid clients a privileged channel service to the target web server. Other countermeasures to overcome DDoS attacks are given in [12], [13], [14], [18], [19], [20].





III. DESIGN
Using PRS, we can implement an architecture based on popular search engines like Google, Yahoo, etc., such that websites can register with these search sites on a contract basis.

Figure 1. Normal DDoS Attacks

These search engines provide a privileged channel to the target web server for valid clients only. Fig. 1 shows the basic structure of DDoS attacks. Each website has its own protection perimeter. Maintaining the protection perimeter has been a core task of Internet Service Providers, who are paid for a reliable and efficient 24x7 service. Most of the security-related functions, such as encryption/decryption, traffic management, and privileged channel establishment, are centered at the edge routers, which also act as gateways to the protection perimeter. Local routers forward the packets to the target server. Here the normal channel within the protection perimeter will be congested with the TCP/UDP packets of attackers, so normal users' packets take more time to reach their target site (e.g., www.eBay.com).
Our proposed system mainly focuses on providing valid clients a privileged or reliable service channel to the target server through the search sites. The client, after authenticating with the particular search site, gets referral service to other valid registered sites. When the client searches for the target site, normal search results are displayed. Whenever the client clicks on the target site's URL, the search engine performs a validation check of that particular target website. If it is a registered site then, depending upon the privilege of the valid client, a privilege URL (containing the capability token) is generated at the target server, forwarded to the source server requesting the service, and, using the meta-refresh redirection command, the browser is redirected to the target website. If the clicked target server is not registered, then a normal hyperlink is provided.
For target web servers in need of this service, have to
register with the search engines or other social networking
websites. Since it is a value added service, contracts similar to
ad-click service like Google ads can be offered. The main
security issue for target servers is to protect their port number
from being identified. Here the search engines are referred as
Source servers. The source server is aware of the port numbers
of all registered web servers. Whenever valid clients are
accessing the URL for registered target server, the URL
contains the usual IP address and port number field (16-bits)
contains an encrypted message authentication code. When
valid clients request for target server link, the source IP of
valid client will be taken by the source server and it will be
send to the edge router for getting a privilege URL. The edge
router contains a record of registered source servers. After
validating the request for URL from the source server, edge
router forwards the request to the target server. The target
server checks the validity of source server and after that stores
the source IP of valid client in the whitelist of firewall.
Now after updating the whitelist entries, a message
authentication code will be inserted in the port number filed of
privilege URL. The privilege URL is forwarded back to the
source server and the valid clients browser will be
automatically redirected to the target server.
While redirecting, the edge router checks the validity of
URLs MAC and forward the packets to the target server. The
target server on reception of the packets from valid client
checks the validity of source IP of the client with firewalls
whitelist. If valid, service is provided otherwise rejected to the
unprivileged channel.
The source server program calculates the privilege level of valid clients (Fig. 2) depending on several factors, such as account access rate (obtained from the login information), the number of emails sent or received, the number of linked email accounts, etc. The source IP and privilege level are sent along with the request URL to the target server. The edge router offers a privilege channel service for the source server packets. If a packet does not contain a valid encrypted port number, it is forwarded to the normal port 80. The source IP is included in the whitelist of the target server's firewall. The message authentication code (MAC) contains a port number field, a privilege level field, etc. The privilege level is added to decrease the probability of cracking the MAC code by exhaustive search. In this method, the port numbers assigned to the source server and to valid clients differ.
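As a concrete (but purely illustrative) sketch of how such a privilege URL could be composed, the snippet below derives a short MAC from the client IP, the privileged port and the privilege level and embeds it in the 16-bit port field of the URL. The key, the helper name build_privilege_url and the use of HMAC-SHA256 are assumptions made for the example, not the implementation described in this paper.

import hmac, hashlib

SHARED_KEY = b"edge-router-secret"   # assumed key shared between target server and edge router
PRIV_PORT = 20007                    # privileged port of the target server (value used later in Section IV)

def build_privilege_url(target_ip, client_ip, privilege_level):
    # Bind the token to the client IP, the privileged port and the privilege level.
    msg = f"{client_ip}|{PRIV_PORT}|{privilege_level}".encode()
    mac = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    # Truncate the MAC so it fits the 16-bit port field of the URL.
    token = int.from_bytes(mac[:2], "big")
    return f"http://{target_ip}:{token}/index.htm"

print(build_privilege_url("203.0.113.7", "198.51.100.20", privilege_level=3))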

[Flowchart: a Client authenticates at the source server; a referral level calculator rates the client; if valid, the client searches for the target website; depending on whether the target site is registered, a privilege channel (Yes) or a normal channel (No) is provided.]
Figure 2. Referral service
IV. IMPLEMENTATION
The implementation phase is as follows. We used virtualization software (VirtualBox) to simulate the test setup on a Windows XP platform. Three separate Fedora operating systems are installed: two act as attacker nodes and one hosts the router simulator. Two other systems are used as the source server and the target server. All operating systems are configured on a single network, and an Apache web server is configured in the respective operating systems. The attacker nodes congest the network towards the target node by continuously sending UDP/TCP packets; hence normal users get delayed service from the target node (eBay).
Multi-Ported enhanced PRS Algorithm:
A. Validity check of user.
1. The user performs authentication at the source server.
2. The valid user clicks the Referral link (1), Fig. 3.
3. A server program automatically calculates a privilege level for the client.
4. Assume the target server is registered with the source server. The source server has the encrypted port number (7027) of the target site. The source server sends a request, along with the privilege level and the valid client's source IP, to the target site.
B. Validation of the source server.
1. The edge router, on reception of the service request from the source server, checks the validity of the source (2).
2. If valid, it (3) decrypts the port number (7027) and embeds it into the port number field of the request.
3. The local router forwards (4) the request to the target server.
4. The target server also performs a validation check of the source server and puts the source IP of the valid client into the whitelist of its firewall.
C. Privilege acquisition.
1. A script program at the target server generates a privilege URL. The privilege URL (http://<E.F.G.H>:<port number>) contains an encrypted MAC in the port number field. The MAC contains an encrypted message of the privileged port number (20007) of the target server and the privilege level of the valid client.
2. The response from the target site reaches the source site with the privilege URL (5) containing the encrypted privilege port number of the target site.
3. The source server, using an HTTP redirection command (6), redirects the client's browser automatically to the target server. Example of the HTTP redirection: <meta HTTP-EQUIV="Refresh" CONTENT="1; URL=http://E.F.G.H:XX/index.htm">, where E.F.G.H is the target site's IP address and XX is the encrypted port number of the target site.
D. Privilege channel establishment.
1. After the HTTP redirection, the client's packet reaches the edge router, where the encrypted privilege port number is decrypted.














Figure 3. PRS Architecture

2. After decryption, the port number field of the client's packet header is filled with the privileged port number (20007). The packet is then forwarded to the local router and on to the target site.
E. Validation of client by Target server.
1. When the client's packet reaches the target site (7), the target site checks whether the client's source IP is present in the firewall's whitelist. If valid, a privilege service is provided; otherwise the packet is dropped.
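The edge-router side of steps D.1-D.2 could then be sketched as below, under the same illustrative assumptions as the previous snippet: the router recomputes the truncated MAC for the client, compares it with the token carried in the URL's port field, and rewrites the destination port to the privileged port if they match, otherwise falling back to the normal port 80.

import hmac, hashlib

SHARED_KEY = b"edge-router-secret"   # assumed shared key (same as in the earlier sketch)
PRIV_PORT = 20007                    # privileged port of the target server

def verify_and_rewrite(client_ip, url_port_token, privilege_level):
    # Recompute the expected 16-bit token for this client and privilege level.
    msg = f"{client_ip}|{PRIV_PORT}|{privilege_level}".encode()
    expected = int.from_bytes(hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:2], "big")
    # Valid token: forward on the privileged port; otherwise use the normal channel (port 80).
    return PRIV_PORT if expected == url_port_token else 80

print(verify_and_rewrite("198.51.100.20", 40353, privilege_level=3))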

Discussion. Users entering the source server are validated. The Referral link in the source server is clicked for service. A small PHP program running at the source server calculates the privilege level of the valid user. If the privilege level is too low, e.g., because the user is newly registered to the site, a CAPTCHA [15] test can be performed. For calculating the privilege level, several factors are taken into account, such as the user's valid contacts, the number of mails sent or received, linked email accounts, etc. Here, as an example, the source server is taken to be Google and the target server eBay. The target site eBay.com is searched by the client; if it is registered with the source search site Google.com, a privilege service channel is created, else a normal channel is provided as in the usual case. If a request for eBay from Google is made, the edge router evaluates whether Google is in the valid registered list. If valid, it forwards the request URL with the source IP of the client and the privilege level to the destination site (port number decrypted). The source IP is entered into the whitelist of the firewall (eBay) and a MAC is generated. A Perl script is used at eBay for generating the privileged URL for the user. The privilege URL contains the target server's IP and the MAC code. It is sent to Google, where, using meta-refresh redirection, the client's browser is automatically redirected to eBay. At this instant, eBay validates the MAC code against its privileged port number and the source IP against the whitelist entries. If everything is valid, the valid client gets his/her service in a privileged manner.
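The paper does not specify the scoring formula used by the source-server script, so the following is only a hedged sketch of how the social-behaviour factors mentioned above (contacts, mail activity, linked accounts, account age) might be combined into a privilege level; the field names, weights and caps are assumptions. A level near zero would then trigger the CAPTCHA test described above.

def privilege_level(profile):
    # profile: assumed fields gathered by the source server, e.g.
    # {"valid_contacts": 42, "mails_per_week": 15, "linked_accounts": 2, "account_age_days": 800}
    score = 0.0
    score += min(profile.get("valid_contacts", 0), 50) / 50 * 3      # up to 3 points for contacts
    score += min(profile.get("mails_per_week", 0), 30) / 30 * 3      # up to 3 points for mail activity
    score += min(profile.get("linked_accounts", 0), 3)               # up to 3 points for linked accounts
    score += min(profile.get("account_age_days", 0), 1000) / 1000    # up to 1 point for account age
    return round(score)                                              # privilege level roughly in 0..10

print(privilege_level({"valid_contacts": 42, "mails_per_week": 15,
                       "linked_accounts": 2, "account_age_days": 800}))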
V. LIMITATIONS
Firstly, users having no valid account at that particular search site will not get privilege service. This can be sorted out by embedding features like integrated account verification from other websites and obtaining the privilege level from that website. This is an increasingly popular method: less popular sites, in order to attract users, have started employing verification of users' Facebook or Twitter accounts. Secondly, compromised normal users should be properly dealt with. For this, their social-behaviour parameters, such as the number of valid emails in the inbox, the number of valid email addresses in the contacts, the age of the mail account, activity behaviour, etc., should be properly studied and the privilege level should be estimated with the utmost care.
VI. FUTURE WORKS
So far, our discussion has been constrained to securing target servers using the node structure and referral systems. Using this method, we are looking at ways to abate phishing attacks [5] in the financial net-banking segment. Existing anti-phishing methods include blacklists, heuristic detection (SpoofGuard [17]), page-similarity assessment, etc. All have their own shortcomings. The blacklist, for instance, is Google-toolbar-integrated software that can be activated within the web browser to provide protection from phishing attacks. With our method, instead of installing third-party programs for protection against phishing attacks, we can have a centralized program in the source servers for averting such attacks.
More and more value-added services are expected from this architecture. A more powerful encryption method is an option to be considered, and more bits can be added to the MAC in order to strengthen the protection of the port number.
VII. CONCLUSION
We have presented a method for providing normal web-surfing users a chance to get privilege channel service using PRS. The user's validity is checked by authenticating websites, which in turn provide the valid users a privilege channel to target websites. The privilege level of the valid user is cautiously calculated by taking into account his/her social behaviour. By effectively implementing this system in existing social networks and search engines, clients are given privilege channel service even when the target website is under DDoS attack. Hence we can improve the serviceability of web servers even under DDoS attacks. The concept of multi-ported security also brings a new perspective to network security architecture.
ACKNOWLEDGMENT
The authors thank the head of ITS-Groups for providing
valuable guidelines and resources required for the
implementation phase and helping in conducting experiments.
REFERENCES
[1] Ramesh R. and Pankaj Kumar G., "Persistent Referral Service for Mitigating DDoS Attacks using Search Engines: PRS," to appear in Proc. of Int. Conf. on Information Security, 2011.
[2] Charalampos Patrikakis, Michalis Masikos, and Olga Zouraraki, "Distributed Denial of Service Attacks," The Internet Protocol Journal, Volume 7, Number 4, National Technical University of Athens, Cisco Systems Inc.
[3] J. Wu and K. Aberer, "Using Siterank for P2P Web Retrieval," Technical Report IC/2004/31, Swiss Fed. Inst. Technology, Mar. 2004.
[4] X. Wang and M. Reiter, "WRAPS: Denial-of-Service Defense through Web Referrals," Proc. 25th IEEE Symp. Reliable Distributed Systems (SRDS), 2006.
[5] Anti-Phishing Working Group, "Phishing Activity Trends Report, 1st Half 2009," http://www.apwg.org/reports/apwg_report_h1_2009.pdf
[6] A. Keromytis, V. Misra, and D. Rubenstein, "SOS: Secure Overlay Services," Proc. ACM SIGCOMM '02, Aug. 2002.
[7] A. Keromytis, V. Misra, and D. Rubenstein, "SOS: Secure Overlay Services," Proc. ACM SIGCOMM '02, Aug. 2002.
[8] D. Andersen, "Mayday: Distributed Filtering for Internet Services," Proc. Fourth USENIX Symp. Internet Technologies and Systems (USITS), 2003.
[9] T. Anderson, T. Roscoe, and D. Wetherall, "Preventing Internet Denial-of-Service with Capabilities," Proc. Second Workshop Hot Topics in Networks (HotNets '03), Nov. 2003.
[10] A. Yaar, A. Perrig, and D. Song, "An Endhost Capability Mechanism to Mitigate DDoS Flooding Attacks," Proc. IEEE Symp. Security and Privacy (S&P '04), May 2004.
[11] T. Anderson, T. Roscoe, and D. Wetherall, "Preventing Internet Denial-of-Service with Capabilities," Proc. Second Workshop Hot Topics in Networks (HotNets '03), Nov. 2003.
[12] R. Mahajan, S. Floyd, and D. Wetherall, "Controlling High-Bandwidth Flows at the Congested Router," Proc. Ninth IEEE Int'l Conf. Network Protocols (ICNP '01), Nov. 2001.
[13] P. Ferguson and D. Senie, "RFC 2267: Network Ingress Filtering: Defeating Denial of Service Attacks Which Employ IP Source Address Spoofing," ftp://ftp.internic.net/rfc/rfc2267.txt, Jan. 1998.
[14] J. Li, J. Mirkovic, and M. Wang, "SAVE: Source Address Validity Enforcement Protocol," Proc. IEEE INFOCOM, 2002.
[15] L. von Ahn, M. Blum, N. J. Hopper, and J. Langford, "CAPTCHA: Using Hard AI Problems for Security," Advances in Cryptology - EUROCRYPT '03, Springer-Verlag, 2003.
[16] Sid Stamm, Zulfikar Ramzan, and Markus Jakobsson, "Drive-By Pharming," published at the 9th International Conference on Information and Computer Security, Dec. 2006.
[17] Chou, N., R. Ledesma, Y. Teraguchi, D. Boneh, and J. C. Mitchell, "Client-Side Defense against Web-Based Identity Theft," in Proceedings of the 11th Annual Network and Distributed System Security Symposium (NDSS '04).
[18] A. Juels and J. Brainard, "Client Puzzle: A Cryptographic Defense against Connection Depletion Attacks," Proc. Symp. Network and Distributed System Security (NDSS '99), S. Kent, ed., pp. 151-165, 1999.
[19] X. Wang and M. Reiter, "Defending against Denial-of-Service Attacks with Puzzle Auctions," Proc. IEEE Symp. Security and Privacy (S&P '03), May 2003.
[20] X. Wang and M. Reiter, "Mitigating Bandwidth-Exhaustion Attacks Using Congestion Puzzles," Proc. 11th ACM Conf. Computer and Comm. Security (CCS '04), Nov. 2004.



A New Approach to Neural Network Parallel Model Reference Adaptive Intelligent
Controller

R.Prakash

Department of Electrical and Electronics Engineering,
Muthayammal Engineering College,
Rasipuram,Tamilnadu,India.
prakashragu@yahoo.co.in


R.Anita
Department of Electrical and Electronics Engineering
Institute of Road and Transport Technology,
Erode, Tamilnadu, India.
anita_irtt@yahoo.co.in


Abstract - In this paper a new approach to a neural network based intelligent adaptive controller is proposed. It consists of an online multilayer back propagation neural network structure along with a conventional Model Reference Adaptive Control (MRAC). The idea is to control the plant by a conventional model reference adaptive controller with a suitable single reference model and, at the same time, to control the plant by online tuning of a multilayer back propagation neural controller. The training patterns for the neural controller are obtained from a conventional PI controller. In the conventional model reference adaptive control (MRAC) scheme, the controller is designed so that the plant output converges to the reference model output under the assumption that the plant is linear; this scheme controls linear plants with unknown parameters effectively. However, using MRAC to control a nonlinear system in real time is difficult. In this paper it is proposed to incorporate a Neural Network (NN) in MRAC to overcome this problem. The control input is given by the sum of the output of the conventional MRAC and the output of the NN. The NN is used to compensate the nonlinearity of the plant that is not taken into consideration in the conventional MRAC. The proposed NN-based model reference adaptive controller can significantly improve the system behaviour, force the system to follow the reference model and minimize the error between the model and plant outputs. The effectiveness of the proposed control scheme is demonstrated by simulations.
Keywords- Model Reference Adaptive Controller (MRAC),
Artificial Neural Network (ANN), Backlash and Dead Zone.

I. INTRODUCTION
Model Reference Adaptive Control (MRAC) is one of the main schemes used in adaptive systems. Recently, Model Reference Adaptive Control has received considerable attention, and many new approaches have been applied to practical processes [1], [2]. In the MRAC scheme, the controller is designed so that the plant output converges to the reference model output based on the assumption that the plant can be linearized. Therefore this scheme is effective for controlling linear plants with unknown parameters; however, it cannot guarantee the control of nonlinear plants with unknown structure. The neural network has been an active research area in recent years. A neural network parallel adaptive controller for dynamic system control is designed in [3]. Neural networks have the ability to learn by example, to recognize patterns and to approximate nonlinear control systems [4], [5], [6], [7], [8]. Theoretically, a neural network can approximate any function, linear or nonlinear. Narendra and Parthasarathy demonstrate the use of neural networks for identification and control of nonlinear dynamic systems in [6]. The multilayer back propagation network has been utilized in our proposed method because of its inherent nonlinear mapping capabilities, which can deal effectively with real-time online computer control [8]. A robust adaptive control of uncertain nonlinear systems using neural networks is discussed in [9]. The authors propose a hybrid approach to the problem of controlling a class of nonlinear systems in the presence of both unknown nonlinearities and unmodelled dynamics. One of the characteristic features of the proposed structure is an efficient method for calculating the derivative of the system output with respect to the input by using one identified parameter in the linearized model and the internal variables of the NN, which enables the back propagation algorithm to be performed very efficiently [10].
The ability of a neural network to perform nonlinear approximation and the development of nonlinear adaptive controllers based on neural networks have been discussed in many works [11]-[13]. In particular, the adaptive tracking control architecture proposed in [14] evaluated a class of continuous-time nonlinear dynamic systems for which an explicit linear parameterization of the uncertainty is either unknown or impossible. The use of neural networks for identification and control of nonlinear systems has been demonstrated in [15], which discusses a direct adaptive neural network controller for a class of nonlinear systems. An online Radial Basis Function Neural Network (RBFNN) in parallel with a Model Reference Adaptive Controller (MRAC) is discussed in [16]. An adaptive output-feedback control scheme is developed for a class of nonlinear SISO dynamic systems with time delays [17].
In this paper, the proposed MRAC is designed from a multilayer back propagation neural network in parallel with a model reference adaptive controller. Training patterns are generated from the designed PI controller and used to train the artificial BPN neural network. The trained neural network is connected in parallel with an MRAC and outputs the appropriate control signals for achieving the desired response. The control input given to the plant is the sum of the output of the adaptive controller and the output of the neural network. The neural network is used to compensate the nonlinearity of the plant that is not taken into consideration in the conventional MRAC. The role of the model reference adaptive controller is to perform the model matching of the uncertain linearized system to a given reference model. The network weights are adjusted by the multilayer back propagation algorithm, which is carried out online. Finally, to confirm the effectiveness of the proposed method, it is compared with the simulation results of the conventional MRAC.

II. STATEMENT OF THE PROBLEM

Consider a Single Input Single Output (SISO), Linear Time Invariant (LTI) plant with the strictly proper transfer function

G_P(s) = \frac{y_p(s)}{u_p(s)} = K_p \frac{Z_p(s)}{R_p(s)}        (1)

where u_p is the plant input and y_p is the plant output. The reference model is given by

G_m(s) = \frac{y_m(s)}{r(s)} = K_m \frac{Z_m(s)}{R_m(s)}          (2)

where r and y_m are the model's input and output. The output error is defined as

e = y_p - y_m                                                     (3)

The objective is to design the control input u such that the output error e goes to zero asymptotically for arbitrary initial conditions, where the reference signal r(t) is piecewise continuous and uniformly bounded.

III. STRUCTURE OF AN MRAC DESIGN

III.1. Relative Degree n = 1
As in Ref. [1], the following input and output filters are used:

\dot{\omega}_1 = F\omega_1 + g u_p, \qquad \dot{\omega}_2 = F\omega_2 + g y_p        (4)

where F is an (n-1) \times (n-1) stable matrix such that \det(sI - F) is a Hurwitz polynomial whose roots include the zeros of the reference model, and (F, g) is a controllable pair. We define the regressor vector

\omega = [\omega_1^T, \omega_2^T, y_p, r]^T                                          (5)

In the standard adaptive control scheme, the control u is structured as

u = \theta^T \omega                                                                   (6)

where \theta = [\theta_1^T, \theta_2^T, \theta_3, c_0]^T is a vector of adjustable parameters, considered as an estimate of the vector of unknown system parameters \theta^*.
The dynamics of the tracking error are

e = G_m(s)\, \rho^*\, \tilde{\theta}^T \omega                                         (7)

where \rho^* = k_p / k_m and \tilde{\theta}(t) = \theta - \theta^* represents the parameter error. In this case the transfer function between the parameter error \tilde{\theta} and the tracking error e is Strictly Positive Real (SPR) [1], and the adaptation rule for the controller gains is

\dot{\theta} = -\gamma\, e_1\, \omega\, \mathrm{sgn}(\rho^*)                          (8)

where \gamma is a positive gain.
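A hedged numerical sketch of the control law (6) and adaptation law (8) for a toy first-order plant is given below; the plant, reference model, adaptation gain and step size are illustrative choices, not the simulation settings used later in this paper.

import numpy as np

dt, T = 0.001, 10.0
gamma = 5.0                      # assumed adaptation gain
a_p, k_p = 2.0, 1.0              # toy plant  y_p' = -a_p*y_p + k_p*u  (unknown to the controller)
a_m, k_m = 1.0, 1.0              # reference model  y_m' = -a_m*y_m + k_m*r

y_p = y_m = 0.0
theta = np.zeros(2)              # adjustable parameters (feedback and feedforward gains)
for k in range(int(T / dt)):
    t = k * dt
    r = 15 + 12 * np.sin(0.7 * t)               # bounded, piecewise-continuous reference
    omega = np.array([y_p, r])                  # regressor (first-order case: no filter states needed)
    u = theta @ omega                           # control law (6): u = theta^T omega
    e1 = y_p - y_m                              # tracking error (3)
    theta += -gamma * e1 * omega * np.sign(k_p / k_m) * dt   # adaptation law (8)
    y_p += (-a_p * y_p + k_p * u) * dt          # integrate plant
    y_m += (-a_m * y_m + k_m * r) * dt          # integrate reference model

print("final tracking error:", y_p - y_m)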

III.2. Relative Degree n = 2
In the standard adaptive control scheme, the control u is structured as

u = \theta^T \omega - \gamma\, e_1\, \phi^T \phi\, \mathrm{sgn}(K_p / K_m)            (9)

where \theta = [\theta_1^T, \theta_2^T, \theta_3, c_0]^T is a vector of adjustable parameters, considered as an estimate of the vector of unknown system parameters \theta^*, and \phi = \omega / (s + p_0) is the filtered regressor.
The dynamics of the tracking error are

e = G_m(s)(s + p_0)\, \rho^*\, \tilde{\theta}^T \phi                                  (10)

where \rho^* = k_p / k_m and \tilde{\theta}(t) = \theta - \theta^* represents the parameter error. The transfer function G_m(s)(s + p_0) is strictly proper and Strictly Positive Real (SPR). Since the transfer function between the parameter error \tilde{\theta} and the tracking error e is SPR [1], the adaptation rule for the controller gains is

\dot{\theta} = -\gamma\, e_1\, \phi\, \mathrm{sgn}(K_p / K_m)                         (11)

where e_1 = y_p - y_m and \gamma is a positive gain.
The adaptive laws and control schemes developed above are based on a plant model that is free from disturbances, noise and unmodelled dynamics. These schemes are to be implemented on actual plants that are most likely to deviate from the plant models on which their design is based. An actual plant may be infinite-dimensional and nonlinear, and its measured input and output may be corrupted by noise and external disturbances. It has been shown that a conventional MRAC scheme designed for a disturbance-free plant model may go unstable in the presence of small disturbances.

IV. PI CONTROLLER-BASED MODEL REFERENCE
ADAPTIVE CONTROLLER

When the disturbance and the nonlinear component are added to the plant input of the conventional model reference adaptive controller, the tracking error does not go to zero and the plant output does not track the reference model output: large-amplitude oscillations appear over the entire plant output. The disturbance is taken to be a random noise signal. To improve the system performance, the PI controller-based model reference adaptive controller is proposed. In this scheme, the controller is designed as a parallel combination of the conventional MRAC system and a PI controller.
In the PI controller-based model reference adaptive controller, the values of the PI controller gains K_p and K_i are calculated using the Ziegler-Nichols tuning method. The control input U of the plant is given by

U = U_{mr} + U_{pi}, \qquad U_{mr} = \theta^T \omega                                  (12)

where U_{mr} is the output of the adaptive controller and U_{pi} is the output of the PI controller. The input of the PI controller is the error, i.e., the difference between the plant output y_p(t) and the reference model output y_m(t). In this case also, the disturbance (random noise signal) and the nonlinear component are added to the input of the plant. The PI controller-based model reference adaptive controller effectively reduces the amplitude of oscillation of the plant output, but the tracking error still does not go to zero. The PI controller-based model reference adaptive controller thus improves the performance compared with the conventional MRAC.
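A minimal sketch of the combined control law (12), assuming a discrete-time PI term; the gains correspond to those quoted for Example 1 below and are otherwise illustrative, and the error sign convention (model output minus plant output, so that the PI term opposes the tracking error) is an assumption.

class PIController:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def combined_control(u_mr, pi, y_p, y_m):
    # Equation (12): U = U_mr + U_pi, with the PI input taken as y_m - y_p (assumed sign convention)
    return u_mr + pi.update(y_m - y_p)

pi = PIController(kp=22.0, ki=78.0, dt=0.001)   # Ziegler-Nichols gains quoted in Example 1
print(combined_control(u_mr=0.5, pi=pi, y_p=1.02, y_m=1.00))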

V. NEURAL NETWORK-BASED MODEL
REFERENCE ADAPTIVE CONTROLLER

To make the system adapt more quickly and efficiently than the conventional MRAC system and the PI controller-based MRAC, a new idea is proposed and implemented: the neural network-based model reference adaptive controller. In this scheme, the controller is designed as a parallel combination of the conventional MRAC system and a neural network controller. The training patterns of the neural network are extracted from the PI controller of the designed PI controller-based MRAC scheme. The block diagram of the proposed neural network-based model reference adaptive controller is shown in Fig. 1.

Fig. 1 Block diagram of the proposed MRAC

The state model of a linear time invariant system is given by

\dot{X}(t) = AX(t) + BU(t), \qquad Y(t) = CX(t) + DU(t)                               (13)

This scheme is restricted to the case of Single Input Single Output (SISO) control, noting that the extension to Multiple Input Multiple Output (MIMO) systems is possible. To make the plant output y_p converge to the reference model output y_m, the control input U is synthesized as

U = U_{mr} + U_{nn}                                                                    (14)

where U_{mr} is the output of the adaptive controller,

U_{mr} = \theta^T \omega, \qquad \theta = [\theta_1^T, \theta_2^T, \theta_3, c_0]^T, \qquad \omega = [\omega_1^T, \omega_2^T, y_p, r]^T        (15)

Stability of the system and adaptability are then achieved by the adaptive control law U_{mr}. By tracking the system state x to a suitable reference model, the error e = y_p - y_m goes to zero asymptotically. In this section, the extraction of the neural network training patterns from the PI controller of the designed PI controller-based MRAC scheme is discussed.
The ANN controllers designed in most previous work use a complex network structure. The aim of this work is to design a simple ANN controller with as few neurons as possible while improving the performance of the controller. The inputs of the neural network are the error and the change in error. Here the multilayer back propagation neural network is used in the proposed method. The multilayer back propagation network is especially useful for this purpose because of its inherent nonlinear mapping capabilities, which can deal effectively with real-time online computer control.
The NN of the proposed method has three layers: an input layer with 2 neurons, a hidden layer with 2 neurons and an output layer with one neuron.
Let x_i be the input to the i-th node in the input layer, z_j the input to the j-th node in the hidden layer, and y the input to the node in the output layer. Furthermore, let V_{ij} be the weight between the input layer and the hidden layer, and W_{j1} the weight between the hidden layer and the output layer. The relations between the inputs and the output of the NN are expressed as

Z_{in,j} = V_{0j} + \sum_{i=1}^{n} x_i V_{ij}                                          (16)

Y_{in,k} = W_{01} + \sum_{j=1}^{P} z_j W_{j1}                                          (17)

Z_j = F(Z_{in,j})                                                                       (18)

Y_k = F(Y_{in,k})                                                                       (19)

where F(.) is the activation function.
The set of inputs and desired outputs of the neural network is extracted from the PI controller of the designed PI controller-based MRAC scheme. The back propagation neural network is trained until a fixed error goal is reached; here, the network is trained to an error goal of 0.0005.
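A minimal sketch of the 2-2-1 network's forward pass, equations (16)-(19), assuming a sigmoid activation function; the weight values are placeholders, since the trained values are not listed in the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Placeholder weights for the 2-2-1 network (inputs: error and change in error).
V = np.array([[0.5, -0.3],      # V[i][j]: input i -> hidden j
              [0.2,  0.8]])
V0 = np.array([0.1, -0.1])      # hidden-layer biases
W = np.array([0.7, -0.4])       # W[j]: hidden j -> output
W0 = 0.05                       # output bias

def nn_control(error, d_error):
    x = np.array([error, d_error])
    z_in = V0 + x @ V            # eq. (16)
    z = sigmoid(z_in)            # eq. (18)
    y_in = W0 + z @ W            # eq. (17)
    return sigmoid(y_in)         # eq. (19): U_nn, added to U_mr as in eq. (14)

print(nn_control(error=0.02, d_error=-0.005))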
In the proposed neural network-based MRAC method, the tracking error becomes zero within 4 seconds and no oscillation occurs. The plant output tracks the reference model output. This method performs better than the conventional MRAC system and the PI controller-based MRAC system.

VI. RESULTS AND DISCUSSION

In this section, the results of the computer simulations for the conventional MRAC, the PI controller-based MRAC and the neural network-based MRAC system are reported. The results show the effectiveness of the proposed neural network-based MRAC scheme; its performance is superior to the conventional MRAC technique and the PI controller-based MRAC.
Example 1:
In this example, a backlash nonlinearity followed by a linear system is considered, as shown in Fig. 2.


Fig. 2 Non-linear System
Consider a second order system with the transfer function

G(s) = \frac{s + 1}{s^2 + 3s + 10}

The reference model is taken as

G_M(s) = \frac{1}{s + 1}

For the conventional MRAC scheme, the initial values of the controller parameters are chosen as \theta(0) = [3, 18, -8, 3]^T.
The conventional model reference adaptive controller is designed using equations (9) and (11). The simulation was carried out in MATLAB with the input chosen as r(t) = 15 + 12 sin 0.7t + 35 cos 5.9t. In the PI controller-based model reference adaptive controller, the PI controller gains K_p and K_i are equal to 22 and 78, respectively.
In the neural network-based model reference adaptive controller, the details of the trained network are shown in Fig. 3.

Fig. 3. Details of the trained network
The results for the conventional MRAC, PI controller
based MRAC and neural network based MRAC system are
given in Fig. 4.

Fig. 4. Simulation results: 4(a) plant output yp(t) (solid lines) and reference model output ym(t) (dotted lines) of the conventional MRAC system for the input r(t) = 15 + 12 sin 0.7t + 35 cos 5.9t; 4(b) plant output and reference model output of the PI controller-based MRAC scheme for the same input; 4(c) plant output and reference model output of the neural network-based MRAC scheme for the same input; 4(d) tracking error e for the conventional MRAC; 4(e) tracking error e for the PI controller-based MRAC scheme; 4(f) tracking error e for the neural network-based MRAC scheme.
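For readers who wish to reproduce the linear part of Example 1 (i.e., without the backlash nonlinearity or the adaptive loop), the following hedged sketch simulates the open-loop responses of the plant and the reference model to the quoted input using SciPy; it is only a starting point, not the closed-loop MATLAB simulation reported above.

import numpy as np
from scipy import signal

t = np.arange(0.0, 20.0, 1e-3)
r = 15 + 12 * np.sin(0.7 * t) + 35 * np.cos(5.9 * t)   # input of Example 1

plant = ([1, 1], [1, 3, 10])    # G(s)   = (s + 1) / (s^2 + 3s + 10)
model = ([1], [1, 1])           # G_M(s) = 1 / (s + 1)

_, y_plant, _ = signal.lsim(plant, U=r, T=t)   # open-loop plant response
_, y_model, _ = signal.lsim(model, U=r, T=t)   # reference model response

print("final open-loop mismatch:", y_plant[-1] - y_model[-1])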



Example 2:
In this example, a dead-zone nonlinearity followed by a linear system is considered. The disturbance (a random noise signal) is also added to the input of the plant.
A third order system with the transfer function

G(s) = \frac{2s + 5}{s^3 + 6s^2 + 7s + 4}

is used for the study, and the reference model is chosen as

G_M(s) = \frac{s + 2.5}{s^3 + 6s^2 + 11s + 6}

For the conventional MRAC scheme, the initial values of the controller parameters are chosen as \theta(0) = [0.5, 0, 0, 0]^T. The conventional model reference adaptive controller is designed using equations (6) and (8). The simulation was carried out in MATLAB with the input chosen as r(t) = 15 sin 4.9t.
The simulations are done for the conventional MRAC, the PI controller-based MRAC and the neural network-based MRAC system, with the random noise disturbance and the nonlinear component added to the plant. In the PI controller-based model reference adaptive controller, the PI controller gains K_p and K_i are equal to 24 and 82, respectively. In the neural network-based model reference adaptive controller, the details of the trained network are shown in Fig. 5.

Fig. 5 Details of the trained network

The results for the conventional MRAC, the PI controller-based MRAC and the neural network-based MRAC system are given in Fig. 6.

Fig. 6. Simulation results: 6(a) plant output yp(t) (solid lines) and reference model output ym(t) (dotted lines) of the conventional MRAC system for the input r(t) = 15 sin 4.9t; 6(b) plant output and reference model output of the PI controller-based MRAC scheme for the same input; 6(c) plant output and reference model output of the neural network-based MRAC scheme for the same input; 6(d) tracking error e for the conventional MRAC; 6(e) tracking error e for the PI controller-based MRAC scheme; 6(f) tracking error e for the neural network-based MRAC scheme.

When the nonlinear component and the disturbance (random noise signal) are added to the plant input of the conventional MRAC, the plant output does not track the reference model output: large-amplitude oscillations occur over the entire plant output signal, as shown in Figs. 4(a) and 6(a), and the tracking error does not go to zero, as shown in Figs. 4(d) and 6(d). When the disturbance (random noise signal) and the nonlinear component are added to the input of the plant of the PI controller-based model reference adaptive controller, the performance improves compared with the conventional MRAC and the amplitude of oscillation of the plant output is reduced, as shown in Figs. 4(b) and 6(b); however, the plant output still does not track the reference model output and the tracking error does not go to zero, as shown in Figs. 4(e) and 6(e). When the disturbance (random noise signal) and the nonlinear component are added to the input of the plant of the proposed neural network-based MRAC scheme, the plant output tracks the reference model output, as shown in Figs. 4(c) and 6(c). The tracking error becomes zero within 4 seconds with less control effort, as shown in Figs. 4(f) and 6(f), and no oscillation occurs.
From the plots, one can see clearly that the transient performance, in terms of the tracking error and control signal, has been significantly improved by the proposed MRAC using the neural network. The proposed neural network-based MRAC scheme shows better control results than the conventional MRAC and the PI controller-based MRAC system, and it has much less error than the conventional method in spite of the nonlinearities and disturbance.

VII. CONCLUSION
In this section, the response of the conventional model reference adaptive controller is compared with the PI controller-based MRAC system and with the proposed neural network-based model reference adaptive controller. The controllers are checked on two different plants. The proposed neural network-based MRAC controller shows very good tracking results when compared with the conventional MRAC and the PI controller-based MRAC system. Thus the proposed intelligent MRAC controller modifies its behaviour in response to variations in the dynamics of the process and the characteristics of the disturbances. The proposed scheme utilizes a growing dynamic neural network controller in parallel with the model reference adaptive controller. Simulations and analyses have shown that the transient performance can be substantially improved by the proposed MRAC scheme, and the proposed controller shows very good tracking results when compared to the conventional MRAC. Thus the proposed intelligent parallel controller is found to be extremely effective, efficient and useful.

REFERENCES
[1] K.J. Astrom and B. Wittenmark Adaptive control (2nd Ed.)
Addison-Wesley-1995.
[2] Petros A loannou, Jing sun. Robust Adaptive control, upper
saddle River, NJ: Prentice-Hall 1996.
[3] M. B. Mofal and A. J. Calise, Robust adaptive control of uncertain nonlinear systems using neural network, in Proc. Amer. Control Conf., 1997, pp. 1996-2000.
[4] Chen.S,Billings, S.A and Grant, P.M.,1991, Nonlinear system
identification using neural network. Int.J.control 51, 1191-1214
[5] Guchi, Y. and Sakai, H., 1991, A nonlinear regulator design in the presence of system uncertainties using multilayered neural network, IEEE Trans. Neural Networks, 2, 427-432.
[6] Yamada, T and Yabuta,T., 1992,Neural network controller using
autotunning method for non linear function. IEEE Trans. Neural
Network,3, 595-601
[7] Kawato, M., Furukawa, K. and Suzuki, R., 1987, A hierarchical neural network for control and learning of voluntary movement, Biol. Cybern., 57, 169-185.
[8] Narendra, K.S and parthasarathy, 1990, identification and control
of dynamic systems using neural network. IEEE Tans. Neural
network 1, 4-27
[9] Chen, F.C, 1990, Back propagation neural networks for non
linear Self tuning adaptive control, IEEE contr. Syst. Mag.,
10,40-48
[10] Liu, c.c and chen, F.C., 1993, Adaptive control of non linear
continuous time systems using neural networks- general relative
degree and MIMO cases, INT.J. control, 58, 317-335
[11] S. Kamalasadan and Adel A. Ghandakly, A Neural Network Parallel Adaptive Controller for Fighter Aircraft Pitch-Rate Tracking, IEEE Transactions on Instrumentation and Measurement, Nov. 2010.
[12] A.J.Calise, N. Hovakimyan and M. Idan, Adaptive Output
Feedback Control of Nonlinear System Using Neural Networks,
Automatica vol. 37, No.8 pp1201-1211 Aug 2001.
[13] J.I. Arciniegas, A.H. Eltimashy and K.J. Cios, Neural Networks
Based Adaptive Control of Flexible Arms,Neurocomputing,
vol.17 no.314, pp.141-157, Nov.1997.
[14] R.M. Sanner and J,E. Slotine, Gaussian Networks for Direct
Adaptive Control, IEEE trans neural networks vol3, no.6. pp.837-
763, Nov1992
[15] M.S. Ahmed Neural Net-Based Direct Adaptive Controller a
Class Of Nonlinear Plants, IEEE trans Autom. control. Vol.45,
no, 1 pp-119-123, Jan 2000
[16] S. Kamalasadan, A. Ghandakly,A Neural Network Parallel
Adaptive Controller for Dynamic System Control, IEEE
Transactions on Instrumentation and Measurement, vol.56, no.5.
pp. 1786 - 1796, Oct. 2007
[17] Mirkin, B.; Gutman, P.-O., Robust Adaptive Output-Feedback
Tracking for a Class of Nonlinear Time-Delayed Plants, IEEE
Transactions on Automatic Control, vol.55, no.10. pp. 2418 - 2424,
Oct. 2010

R.Prakash received his B.E degree from Government
College of Technology, affiliated to Bharathiyar
University, Coimbatore, Tamilnadu, India in 2000 and
completed his M.Tech degree from the College of
Engineering, Thiruvanandapuram, Kerala, India, in
2003. He is currently working for his doctoral degree at
Anna University, Chennai, India. He has been a member of the faculty at the Centre for Advanced Research, Muthayammal Engineering College, Rasipuram, Tamilnadu, India, since 2008. His
research interests include Adaptive Control, Fuzzy Logic and Neural
Network applications to Control Systems.

R.Anita received her B.E Degree from Government
College of Technology in 1984 and completed her M.E
Degree from Coimbatore Institute of Technology,
Coimbatore, India in 1990, both in Electrical and
Electronics Engineering. She obtained her Ph.D degree
in Electrical and Electronics Engineering from Anna
University, Chennai, India, in 2004. At present she is
working as Professor and Head of Department of
Electrical and Electronics Engineering, Institute of Road and Transport
Technology, Erode, India. She has authored over sixty-five research papers in international and national journals and conferences. Her areas of interest are Advanced Control Systems, Drives and Control, and Power Quality.






SIMULATION OF HIGH Q MICROMECHANICAL
SENSOR/RESONATOR USING HFSS

P. Sangeetha, RCERT, Chandrapur, vikas_selvan@yahoo.co.in
Alka Sawlikar, RCERT, Chandrapur, alkaprasad.sawlikar@gmail.com




Abstract- A micromechanical resonator, fabricated via a technology combining polysilicon surface micromachining and metal electroplating to attain submicron lateral capacitive gaps, has been demonstrated at frequencies as high as 829 MHz and with Q's as high as 23,000 at 193 MHz. These results represent an important step toward reaching the frequencies required by the RF front-ends of wireless transceivers. The geometric dimensions necessary to reach a given frequency are larger for this contour mode than for the flexural modes used by previous resonators. This, coupled with its unprecedented Q value, makes this resonator a choice candidate for use in the IF and RF stages of future miniaturized transceivers. This paper presents the design and simulation of a high-Q micromechanical resonator using HFSS.

INTRODUCTION
Micro-Electro-Mechanical Systems (MEMS) is the integration
of mechanical elements, sensors, actuators, and electronics on a
common silicon substrate through micro fabrication
technology. While the electronics are fabricated using
integrated circuit (IC) process sequences (e.g., CMOS, Bipolar,
or BICMOS processes), the micromechanical components are
fabricated using compatible "micromachining" processes that
selectively etch away parts of the silicon wafer or add new
structural layers to form the mechanical and electromechanical
devices.
MEMS promises to revolutionize nearly every product
category by bringing together silicon-based microelectronics
with micromachining technology, making possible the
realization of complete systems-on-a-chip. MEMS is an
enabling technology allowing the development of smart
products, augmenting the computational ability of
microelectronics with the perception and control capabilities of
micro sensors and micro actuators and expanding the space of
possible designs and applications. Sensors gather information
from the environment through measuring mechanical, thermal,
biological, chemical, optical, and magnetic phenomena. The
electronics then process the information derived from the
sensors and through some decision making capability direct the
actuators to respond by moving, positioning, regulating,
pumping, and filtering, thereby controlling the environment for
some desired outcome or purpose. Because MEMS devices are
manufactured using batch fabrication techniques similar to
those used for integrated circuits, unprecedented levels of
functionality, reliability, and sophistication can be placed on a
small silicon chip at a relatively low cost.

QUALITY FACTOR
The quality factor (Q) is a commonly used dimensionless parameter to model the loss in a specific system. It is also possible to model a system with a damping ratio (ζ = 1/(2Q)), which models the total damping due to losses. Although several other equivalent descriptions of quality factor can be found in the literature, a generally accepted definition is

Q = 2π (energy stored per cycle / energy dissipated per cycle)

which essentially does not put any restriction on the type of system. It is perfectly legal to talk about the quality factor of resonant systems as well as non-resonant ones, such as a simple RC circuit. For resonant systems, a high quality factor helps to increase the sensitivity of (resonant mode) sensors and to reduce the phase noise of the oscillator. The Q factor, or quality factor, compares the time constant for decay of an oscillating physical system's amplitude to its oscillation period. Equivalently, it compares the frequency at which a system oscillates to the rate at which it dissipates its energy. A higher Q indicates a lower rate of energy dissipation relative to the oscillation frequency.
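As a quick numerical illustration of the definition above (with made-up values, not data from this paper), Q can be computed either from the energy ratio or, equivalently for a lightly damped resonator, from the centre frequency and the -3 dB bandwidth:

import math

def q_from_energy(energy_stored, energy_dissipated_per_cycle):
    # Q = 2*pi * (energy stored per cycle / energy dissipated per cycle)
    return 2 * math.pi * energy_stored / energy_dissipated_per_cycle

def q_from_bandwidth(f0_hz, bandwidth_hz):
    # Equivalent small-damping form: Q = f0 / (-3 dB bandwidth)
    return f0_hz / bandwidth_hz

print(q_from_energy(1.0e-12, 2.7e-16))          # illustrative energies (joules)
print(q_from_bandwidth(193e6, 193e6 / 23000))   # the 193 MHz, Q = 23,000 device quoted in the abstract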

RESONATORS


A resonator is a device or system that exhibits resonance or
resonant behavior. Objects that use the principle of resonant
effects are referred to as resonators. Resonance is the tendency
of a system to oscillate at maximum amplitude at a certain
frequency. This frequency is known as the system's natural frequency of vibration, resonant frequency, or eigenfrequency.







MEMS IN ELECTRONICS AND COMMUNICATION


Fig. 1 Block level schematic diagram of a receiver where the off-chip
components are represented with shaded boxes
SCOPE OF THE WORK
Polysilicon micromechanical resonators based on MEMS technology have been demonstrated operating in vibration at high frequencies. Via strategic placement of electrodes and careful determination of the exact support-beam attachment locations that minimize anchor loss, these new resonators actually exhibit higher Q in the second mode than in the fundamental at the same frequency. Experiments to gauge the effect of variations in support-beam dimensions and attachment locations on microresonator performance show that an offset of only 0.6 μm in the support-beam attachment location results in a 7X degradation in Q.


EXPERIMENTAL METHODS AND MATERIALS
Vibrating micromechanical resonators are emerging as attractive candidates for on-chip versions of the high-Q mechanical passive components (e.g., quartz crystals and SAW resonators) used in transceivers for wireless communications. Acute interest in these devices arises from their tiny size, their zero DC power consumption, and their use of IC-compatible fabrication technologies, which enable on-chip integration of high-Q frequency-selective components with transistor electronics. However, the need for further size reduction to attain even higher frequencies can conjure up scaling-induced problems such as higher motional impedance and greater susceptibility to contaminants. To avoid these limitations, a method for raising frequency without the need for excessive scaling is desired. Pursuant to reducing the amount of size reduction needed for increasing frequency, this work investigates higher-mode operation of micromechanical resonators. With larger dimensions than previous fundamental-mode counterparts at the same frequency, and with slightly higher Q's (as will be seen), these higher-mode resonators provide several key advantages over the former, including (1) lower series motional resistance Rx; (2) higher dynamic range and power handling; and (3) multiple ports that permit the 0° input-to-output phase shift often preferred for high-impedance micromechanical oscillators and invertible band-pass mixer-filters targeted for wireless applications.


Fig. 2.Resonator in a typical bias and excitation configuration
RESONATOR STRUCTURE AND OPERATION

As seen in the figure, the device comprises a 2 μm thick free-free beam, suspended 1000 Å above capacitive transducer electrodes by three support beams designed with dimensions corresponding to a torsional quarter wavelength at the resonator centre frequency and attached at precise nodal locations. Two separate electrodes are placed under the device at locations specifically chosen to excite the desired vibration mode. To operate this device, a DC bias VP is applied to the beam and an AC drive voltage vi is applied to one of the electrodes. These voltages collectively create a time-varying electrostatic excitation force between the electrode and the beam, in the vertical direction, at the frequency of the AC drive voltage if VP > vi. When the AC drive frequency matches the beam resonance frequency, the force causes the beam to vibrate, which results in a DC-biased time-varying capacitance at the output electrode, which in turn produces an output current io. It is also possible to excite the fundamental vibration mode of this device using the same electrical configuration, but applying an AC drive voltage at the frequency of the fundamental mode.



The procedure for designing the higher-mode devices of this
work involves:
(1) Selection of resonator beam dimensions for a desired
center frequency;
(2) Determination of the support beam dimensions and free-
free beam attachment locations that minimize anchor loss-
induced Q degradation; and
(3) Proper electrode placement to excite higher modes and to
achieve the desired input-to-output phase difference
SIMULATION AND IMPLEMENTATION OF RESONATOR
Here we design the above-mentioned resonator in HFSS (Ansoft), version 9.2, and show how to create, simulate and analyze the resonator.

Fig. 3. HFSS desktop
RESULTS
OUTPUT GRAPHS OF DIFFERENT RESONATORS
[Plot: permittivity vs. quality factor of the resonator]
[Plot: permittivity vs. length of the resonator]
[Plot: dielectric constant vs. frequency and quality factor of the resonator]




HFSS RESULTS
[Plot: frequency (GHz) vs. gain (dB)]
[3D polar plot]
[Radiation pattern]




CONCLUSION

This work involves the synthesis of a micromechanical resonator in software from Ansoft Corp.; the design results in a resonator producing vibrations. The analysis is done using the major simulation package HFSS (High Frequency Structure Simulator). The frequency response is obtained with the centre frequency around the gigahertz range. Ansoft HFSS is used to construct the structure of the high-Q resonator, and the simulated outputs of the constructed device show its operating characteristics. The radiation pattern of the resonator is obtained by designing a three-dimensional model of the resonator and executing it in the high-frequency structure simulator (HFSS).


REFERENCES
[1] K. Wang, A.-C. Wong, and C. T.-C. Nguyen, "VHF free-free beam high-Q micromechanical resonators," J. Microelectromech. Syst., vol. 9, no. 3, Sep. 2000, pp. 347-360.
[2] J. R. Clark, W.-T. Hsu, and C. T.-C. Nguyen, "High-Q VHF micromechanical contour-mode disk resonators," in Tech. Digest, IEEE Int. Electron Devices Meeting, San Francisco, CA, Dec. 11-13, 2000, pp. 399-402.
[3] X. M. H. Huang, M. K. Prakash, C. A. Zorman, M. Mehregany, and M. L. Roukes, "Free-free beam silicon carbide nanomechanical resonators," in Dig. of Tech. Papers, 12th Int. Conf. on Solid-State Sensors & Actuators (Transducers '03), Boston, MA, Jun. 8-12, 2003, pp. 342-343.
[4] M. A. Abdelmoneum, M. U. Demirci, and C. T.-C. Nguyen, "Stemless wine-glass-mode disk micromechanical resonators," in Proc. 16th Int. IEEE Micro Electro Mechanical Systems Conf., Kyoto, Japan, Jan. 19-23, 2003, pp. 698-701.
[5] S. Pourkamali and F. Ayazi, "SOI-based HF and VHF single-crystal silicon resonators with sub-100 nanometer vertical capacitive gaps," in Dig. of Tech. Papers, 12th Int. Conf. on Solid-State Sensors & Actuators (Transducers '03), Boston, MA, Jun. 8-12, 2003, pp. 837-840.
[6] C. T.-C. Nguyen, "Transceiver front-end architectures using vibrating micromechanical signal processors (invited)," in Dig. of Papers, Topical Meeting on Silicon Monolithic Integrated Circuits in RF Systems, Sep. 12-14, 2001, pp. 23-32.
[7] C. T.-C. Nguyen, "Frequency-selective MEMS for miniaturized low-power communication devices (invited)," IEEE Trans. Microw. Theory Tech., vol. 47, Aug. 1999, pp. 1486-1503.
[8] R. Navid, J. R. Clark, M. Demirci, and C. T.-C. Nguyen, "Third-order intermodulation distortion in capacitively-driven CC-beam micromechanical resonators," in Technical Digest, 14th Int. IEEE Micro Electro Mechanical Systems Conference, Interlaken, Switzerland, Jan. 21-25, 2001, pp. 228-231.
[9] J. R. Vig and Y. Kim, "Noise in microelectromechanical system resonators," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 46, 1999, pp. 1558-1565.
[10] F. D. Bannon III, J. R. Clark, and C. T.-C. Nguyen, "High frequency micromechanical filters," IEEE J. Solid-State Circuits, vol. 35, Apr. 2000, pp. 512-526.




A Novel Approach for Satellite Imagery System

John Blesswin A, Rema R, Cyju Elizabeth Varghese, Greeshma Varghese, Vani K
Dept. of Computer Science and Engineering
Karunya University
Coimbatore, India
johnblesswin@gmail.com, remarrr@gmail.com

Abstract - Security has become an inseparable issue even in the field of space technology. The fast growth of exchange traffic in space imagery on the Internet justifies the creation of adapted tools guaranteeing the quality and confidentiality of the information while respecting the legal and ethical constraints specific to this field. In this paper we propose a framework design of a service platform for satellite images which includes secure transmission and self-recovery of satellite images. This service platform corrects duplication of images using similarity score computation from the user browser to the server side and sends the corrected images to users via the Internet. Abandoning the conventional aim of defending against every malicious attack, we adopt an intrusion-tolerance architecture for the web server and database to make them more resilient to attacks. The system reduces the loss of data transferred through the network by inferring the topology of the network through which the data is sent. The system also substantially improves the security of satellite images based on the recovery of the secret image: a binary logo, used to represent the ownership of the host image, generates shadows by visual cryptography algorithms. The logo extracted from the half-toned host image identifies the cheating types. Furthermore, the logo recovers the reconstructed image when a shadow has been cheated, using an image self-verification scheme based on the Rehash technique, which rehashes the halftone logo for effective self-verification of the reconstructed secret image without the need for a trusted third party (TTP).
Keywords-Satellite image; Intrusion-tolerance; Inferring;
Verifying shares; Visual Cryptography
I. INTRODUCTION
With the rapid development of the Internet and web-based service technology, services for obtaining complex and massive data are urgently required. In the field of space information services, certain achievements have been obtained in providing satellite image maps via web services. In recent years, many excellent database servers have become available that can provide image data extraction and services [6]. Large amounts of high-resolution satellite images are extracted by such database servers. In such a Web database scenario, the image records to match are query results dynamically generated on the fly. Such records are query-dependent, and a pre-learned method [5] using training examples from previous query results may fail on the results of a new query. Also, the growth of Internet use has unfortunately been accompanied by a growth of malicious activity on the Internet [1]. More and more
malicious activity in the Internet [1]. More and more
vulnerabilities are discovered, and nearly every day, new
security advisories are published. Potential attackers are very
numerous, even if they represent only a very small
proportion among the millions of Internet users. The problem
is thus particularly tricky: on one hand, the development of
the Internet allows complex and sophisticated services to be
offered, and on the other hand, these services offer to the
attacker many new weaknesses and vulnerabilities to exploit.
Almost all traditional approaches for building secure systems focus only on preventing attacks from succeeding.
Such approaches are becoming insufficient when used in the
context of open networks like the Internet, which are
characterized by frequent appearance of new attacks. Current
systems are so complex that it is impossible to identify and
correct all their vulnerabilities before they are put in
operation. Thus, preventive approaches [3] require regular
updates of some components of the system as soon as a new
vulnerability is discovered, that is, nearly every day.
Furthermore, security updates of some components may lead
to degradations of the service that they provide due to
incompatibility with previous versions or reduction in
functionality. It is clear that the preventive approaches are
not sufficient: it is necessary to build systems that survive
attacks, because it is not possible to stop them all.
When such vulnerabilities occur during the transfer of images over the Internet, monitoring the network can help a network operator obtain routing information and network internal characteristics (e.g., loss rate, delay, utilization) from its network to a set of other collaborating networks that are separated by non-participating autonomous networks [12]. In application design, this tool can be particularly useful for peer-to-peer style applications where a node communicates with a set of other nodes for file sharing and multimedia streaming [13]. Such an approach is limited, as today's communication networks are evolving towards more decentralized and private administration. At the same time, computer and Internet technology has advanced greatly, which makes the exchange and transmission of spatial information via the Internet even more mature. The current needs in spatial imaging security come mainly from the development of traffic on the Internet (tele-expertise, tele-medicine) and the establishment of personal files [11]. Various confidential data, such as secret images, are transmitted over the Internet. However, it is easy for hackers to grab or duplicate information on the Internet. When secret images are used, security issues [10] should be taken into consideration, because hackers may utilize this weak link in the communication network to steal the information they want. To deal with the security problems of secret images, various image secret sharing schemes have been developed. Visual cryptography schemes eliminate the complex computation problem in the decryption process, thus enabling the transfer of images in a more convenient, easy and secure way. Overall, most of today's visual cryptography schemes focus [18] on four general criteria: security, accuracy, computational complexity, and pixel expansion. However, another requirement that must also be addressed is preventing legal participants from being cheated by dishonest participants or the dealer.
Google Earth is one of the most successful cases based on this structure. Google Earth realizes the functions of querying, browsing and measuring on a global scale. In addition, users can upload images superimposed on Google Earth. The client side of Google Earth gets the image data through real-time downloading from the server side, while the image extraction algorithm collects thousands of satellite images and stores them in the digital globe, which contains all extracted images employed in image transmission. With this structure, a complete set of applications, that is, an integration of subroutines, has to be installed on the client side, which can share tasks with the server side. The proposed scheme introduces a framework design of a service platform for satellite images which includes secure transmission and self-recovery of satellite images. This system addresses the problem of record matching in the Web database scenario by providing an unsupervised, online record matching method which uses Similarity Score Computation [6]. For a given query, it can effectively identify duplicates from the query result records of multiple Web databases. After removal of the same-source duplicates, the non-duplicate records from the same source can be used as training examples. We use two cooperating classifiers, a weighted component similarity summing classifier and an SVM classifier [5], to iteratively identify duplicates in the image query results from multiple Web databases and send the corrected image to the client via the Internet.
Malicious activity on the Internet is countered by using intrusion-tolerant Web servers [4]. The web server is composed of redundant proxies that mediate client requests to a redundant bank of diversified application servers, which increases system availability and integrity. The technique can be used for static web servers [2], where information updates are executed immediately on an online database. When an image is transmitted through the Internet, monitoring the network can help a network operator obtain routing information and network internal characteristics (e.g., loss rate, delay, utilization) [13] from its network to a set of other collaborating networks that are separated by non-participating autonomous networks. Inferring the routing topology and link performance from a node to a set of other nodes is an important component of network monitoring [15]. The system also substantially improves the security of satellite images during recovery of the secret image by preventing legitimate participants from being cheated by dishonest participants or the dealer.
A binary logo that represents the ownership of the host image generates shadows through a visual cryptography algorithm; when a shadow has been cheated, the logo is used to recover the reconstructed image through an image self-verification scheme based on the rehash technique.

Figure 1. Flowchart of proposed scheme
II. PROPOSED SCHEME
This section presents a detailed description of a novel scheme for a satellite imagery system, which includes a service platform for secure image transmission through the Internet and self-recovery of satellite images. This service platform contains a digital globe database in which multiple copies of space images are stored. The duplication of images in this database is eliminated using similarity score computation [5].
An intrusion-tolerance architecture [1] is adopted to defend the web server and database against malicious attacks and to make them more resilient. The system reduces the loss of data transferred through the network by inferring the topology [12] of the network over which the data is sent. The system also substantially improves the security of satellite images during recovery of the secret image by identifying cheating in the revealing phase using a binary logo and a binary image, verifying it, and self-recovering the cheated image using visual cryptography algorithms. The flowchart of the proposed scheme is shown in Fig. 1.
A. Image Extraction Algorithm
Thousands of images are captured by satellite and sent to the database server on a daily basis. Each image is duplicated and stored at different databases at various locations. The digital globe dataset contains all such retrieved images [5]. Images are stored as records in the database with parameters such as longitude, latitude, date and size. These satellite images are employed by applications such as Google Earth and Google Maps.
When a request for an image comes from such an application in terms of latitude and longitude, the corresponding image is retrieved by the web server and transferred to the client. Since multiple copies of each image are stored in the digital globe database, extracting the required image is difficult.
To accomplish this, similarity calculation [6] is
performed as follows:

Figure 2. Structure of Image Extraction Algorithm
1) Similarity Score Computation: Cosine similarity [7] can be used to measure the similarity. We obtain the similarity score by comparing the required latitude and longitude values with the records in the database. The similarity score is compared against a similarity threshold, and the records are thus grouped into matched and non-matched records.
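As a rough illustration of this scoring step, the sketch below computes a cosine-similarity score between the requested coordinates and each stored record and splits the records into matched and non-matched groups; the record fields and the 0.99 threshold are illustrative assumptions, not values taken from the paper.

    import math

    def cosine_similarity(a, b):
        # Cosine similarity between two equal-length numeric vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def split_records(query_lat, query_lon, records, threshold=0.99):
        # Group stored records into potential duplicates (P) and non-duplicates (N).
        P, N = [], []
        for rec in records:
            score = cosine_similarity((query_lat, query_lon), (rec["lat"], rec["lon"]))
            (P if score >= threshold else N).append(rec)
        return P, N

The two groups P and N then feed the classifier loop described next.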
Algorithm:
Input: Potential duplicate set P (matched records)
       Non-duplicate set N (non-matched records)
Output: Requested image I
C1: a supervised classifier, e.g. SVM
1. Train classifier C1 using P and N.
2. Classify P using C1 and get a set of newly identified duplicate pairs d1.
3. P = P - d1
4. D = D + d1
5. Repeat the above steps until no duplicates remain in P.
6. Image I is retrieved; it is the most recently stored record.
Initially, two sets of images are formed:
1. A non-duplicate image set (N), which includes all images with little or no similarity.
2. A potential duplicate set (P), which includes all records that are similar.
Now that we have potential duplicate images and non-duplicate images, a classifier such as an SVM (Support Vector Machine) can be trained to identify new duplicates, i.e. multiple copies of an image in the database [8]. Once the multiple copies of the requested image are extracted, the most recently stored image I is retrieved.
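One way to read the iterative loop above is sketched below with scikit-learn's SVC; the feature-extraction helper, the seeding of the duplicate set D and the stopping rule are simplifying assumptions rather than the exact procedure of [6].

    from sklearn.svm import SVC

    def iterative_dedup(P, N, D, features):
        # P: potential duplicates, N: non-duplicates, D: seed duplicates (assumed non-empty).
        # features(rec) -> numeric feature vector; a hypothetical helper.
        clf = SVC(kernel="linear")
        while P:
            X = [features(r) for r in D + N]
            y = [1] * len(D) + [0] * len(N)          # 1 = duplicate, 0 = non-duplicate
            clf.fit(X, y)
            newly = [r for r in P if clf.predict([features(r)])[0] == 1]
            if not newly:                            # no new duplicates identified: stop
                break
            D = D + newly
            P = [r for r in P if r not in newly]
        return D, P                                  # P now holds the remaining non-duplicates

The most recently stored record among the identified copies would then be returned as the requested image I.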
B. Intrusion Detection System
Requests for the images in the RAID storage are handled by a Digital Globe web server. Since the web server is publicly exposed on the Internet, it can be the target of many attacks. Hackers take advantage of security flaws in the server infrastructure and exploit these vulnerabilities to compromise the system. Such flaws can arise from insufficient network security controls, insecure design, or bugs in the operating system or software, so the server must be secured against all possible attacks [4]. The database of satellite images is also sensitive to attacks and should be protected. Since attacks on the Internet are increasing rapidly, it is difficult to prevent every kind of security attack. We therefore adopt an intrusion-tolerance architecture [1] for the Digital Globe web server and database to make them more resilient to attacks. The architecture is shown in Fig. 3.
The architecture provides multi-layered security. It starts with a firewall that filters HTTPS traffic according to the security policies. The next component of the infrastructure is an intrusion detection system [2], which analyses all requests against an intrusion-signature database and filters the traffic. Intrusion detection sensors on the web servers detect compromise; if any component fails, the adaptive reconfiguration module is alerted to recover it. After analysis in the intrusion detection system, requests are directed to the load balancer, which forwards them to multiple diverse servers and thereby improves the availability of the service. The servers run on different hardware and software platforms, eliminating the chance for an attacker to exploit the vulnerability of a single server; because load balancing spreads requests across several servers, it is also difficult for an attacker to gather information about any particular server. The next line of defense in the architecture is a database filter, mainly intended to protect the database from SQL injection and inference problems.

Figure 3. Intrusion Tolerance Digital Globe servers
Role-based access control and an inference engine [3] are used in the database filter to protect the database and thereby add confidentiality to the data.
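Purely as an illustration of the layered request path (firewall, signature-based intrusion detection, load balancing over diverse servers), a toy sketch is given below; the policy values, signatures and server names are invented for the example and are not part of the architecture in [1].

    import random

    def firewall_ok(request, allowed_methods=("GET",)):
        # Filter traffic according to a (simplified) security policy.
        return request.get("method") in allowed_methods

    def ids_ok(request, signatures=("../", "<script", "' OR 1=1")):
        # Reject requests matching known intrusion signatures.
        return not any(sig in request.get("url", "") for sig in signatures)

    def handle(request, servers):
        if not firewall_ok(request) or not ids_ok(request):
            return "blocked"
        server = random.choice(servers)   # load balancer spreads requests over diverse servers
        # A database filter with role-based access control would sit behind the chosen server.
        return "forwarded to " + server

    print(handle({"method": "GET", "url": "/tiles?lat=9.5&lon=76.8"}, ["srv-linux", "srv-bsd"]))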
C. Inference Identification System
The image has to be transferred from the web server to
the requested application through the network. Generally
when any data is transferred through the network, it suffers
from loss and delay irrespective of the topology used in the
network. In order to reduce the loss of data which is
transferred through the network, it is vital to infer the
topology [13] of the network through which the data is sent.
For example, a node may want to know the routing topology
to other nodes so that it can select peers with little or no route overlap, to improve resilience against network failures. For applications where nodes may join or leave frequently, such as overlay network construction, application-layer multicast [12] and peer-to-peer file sharing or streaming, a sequential topology inference algorithm is used, which significantly reduces the probing overhead and can efficiently handle node dynamics. Probing is employed to determine whether a destination is reachable by measuring the loss and delay on all links; when the loss and delay values are infinite, the destination is not reachable. The loss of the links [13] in the network is inferred as follows. The link state variable Z_e is a Bernoulli random variable that takes value 1 with probability \alpha_e if the probe can go through link e, and takes value 0 with probability 1 - \alpha_e if the probe is lost on the link. \alpha_e is called the success rate or packet delivery rate of link e, and 1 - \alpha_e is called the loss rate of link e. The outcome variable L_k is also a Bernoulli random variable, which takes value 1 if the probe successfully reaches node k:

    L_k = L_{f(k)} \cdot Z_{e_k} = \prod_{e \in P(s,k)} Z_e                (1)

where e_k = (f(k), k) is the link into node k (with state Z_{e_k}), f(k) denotes the parent node of node k, s is the source node, P(s,k) is the path from s to k, and L_{f(k)} is the outcome at the parent node. The loss of all links from the source to the application is measured, and the link with the minimum loss is used to transfer the image.
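A minimal sketch of this Bernoulli link model is shown below: each link delivers a probe with its success rate alpha_e, and the end-to-end delivery rate approaches the product of the link success rates; the path, the rates and the probe count are illustrative assumptions.

    import random

    def simulate_probe(path_success_rates):
        # One probe over a path: L_k is the product of the Bernoulli link outcomes Z_e.
        outcome = 1
        for alpha_e in path_success_rates:
            z_e = 1 if random.random() < alpha_e else 0
            outcome *= z_e
        return outcome

    def estimate_delivery_rate(path_success_rates, probes=10000):
        # Empirical end-to-end delivery rate; it should approach prod(alpha_e).
        hits = sum(simulate_probe(path_success_rates) for _ in range(probes))
        return hits / probes

    # Example: a three-link path with success rates 0.99, 0.95 and 0.90 (~0.846 end to end)
    print(estimate_delivery_rate([0.99, 0.95, 0.90]))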
D. Revealing Phase
The system substantially improves the security of satellite images during recovery of the secret image using a binary logo that represents the ownership of the host image and generates shadows through visual cryptography algorithms. The logo extracted from the half-toned host image identifies the cheating type in the revealing phase. In this phase the reconstructed secret image and the extracted halftone logo HL are generated.
In the verifying phase, any cheating that occurs [19] is discovered by comparing HL` with the halftone secret logo HL, either by human vision or by the MSE value. Note that in this phase HL` is a half-sampled image of HI, which is created from the set of two collected shadows. Moreover, the halftone image HI extracted during the revealing phase can be either a meaningful image or a noise-like image, depending on whether the collected shadows are true or fake. Before sending images, both receiver and sender determine a secure key SK and several hash functions; the sender then calculates the HIT value and embeds it in the rightmost two bits of every pixel to generate an image containing self-verification information [20]. Upon receiving such images, the receiver performs the same procedure to calculate the HIT values for authentication. As the authentication information is embedded directly in the images, no extra transmission cost is incurred, so our scheme does not increase the transmission cost. This phase verifies the reliability of the reconstructed secret image and of the set of collected shadows. HL is the extracted halftone image whose original is the halftone logo HL`, and HL is the half-sampled image of HI. The reconstructed halftone logo HL depends on the intermediate shadow S1, which is extracted only from shadow SH1. If there is no cheating [19], the intermediate shadow S1 in the revealing phase is the same as the intermediate shadow S1 in the share-construction phase; in other words, the halftone logo HL` is the same as the halftone logo HL when no cheating occurs. This phase also recovers the reconstructed secret image when a shadow has been cheated. The cheated image is recovered by applying double-sampling and inverse half-toning [10]. First, the value d, the difference between HL` and HI``, is found: d = HL` - HI``. When d is equal to zero, the reconstructed secret image GI is generated entirely from HI` by the inverse half-toning transformation. When d is not equal to zero, if the fake shadow is the first shadow, the reconstructed image GI` is usually a noise-like image and the extracted halftone logo HL` is either a noise-like image or a meaningful halftone image [9]. If the fake shadow is the second one, a noise-like image GI` is generated together with a meaningful halftone image HL` [11]. In this case, we not only know that GI` is fake but can also recover GI` by using HL`. To recover GI from HL`, we first perform double-sampling by applying an interpolation operation to HL` to retrieve HI`.
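The self-verification idea of embedding check bits in the two rightmost bits of every pixel can be sketched as below; HMAC-SHA256 stands in for the shared key and hash functions, so this is an illustration of the embedding/verification flow rather than the paper's exact rehash construction.

    import hmac, hashlib

    def hit_bits(pixels, key):
        # Derive a 2-bit check value per pixel from the upper six bits and a shared key,
        # so that overwriting the two lowest bits does not change the hash input.
        msg = bytes(p >> 2 for p in pixels)
        digest = hmac.new(key, msg, hashlib.sha256).digest()
        return [(digest[i % len(digest)] >> (2 * (i % 4))) & 0b11 for i in range(len(pixels))]

    def embed(pixels, key):
        # Write the check value into the rightmost two bits of every pixel.
        return [(p & 0b11111100) | m for p, m in zip(pixels, hit_bits(pixels, key))]

    def verify(pixels, key):
        # Recompute the check values and compare them with the embedded ones.
        return all((p & 0b11) == m for p, m in zip(pixels, hit_bits(pixels, key)))

    img = [123, 200, 45, 67, 89]                              # toy 8-bit grayscale pixels
    print(verify(embed(img, b"shared-key"), b"shared-key"))   # True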
III. EXPERIMENTAL RESULTS
The experimental results show the extraction of the space images from the dataset: the training image data were used to train the support vector machine, and the resulting model was used to classify the images into two classes. Table 1 shows the efficiency of the SVM in classifying the images as duplicates and non-duplicates. Producer's accuracy indicates what percentage of a particular class was correctly classified, while user's accuracy indicates the probability that a classified pixel actually belongs to that class.
TABLE I. SUPPORT VECTOR MACHINE (SVM) CLASSIFICATION MATRIX

Classification            Duplicate   Non-Duplicate   User's Accuracy (%)
Duplicate                     94            0                 100
Non-Duplicate                  2           88                  98
Producer's Accuracy (%)       98          100                   -

Overall classification accuracy = 99%. The corrected image is checked for malicious activity at the web server using the intrusion-tolerant technique, and the system proves to be tolerant against intrusion. The routing information and network internal characteristics such as loss rate are calculated, and after this calculation the image is transmitted via the Internet over the link with the lowest loss rate. For verifying the cheating type using the enhanced recovery technique, more than 100 satellite images have been tested; sample tested space images are given in Table 2. The first objective is the generation of the reconstructed secret image with high quality, with no computational complexity and no pixel expansion. The second is the reconstruction of images and verification of the reliability of the set of collected shadows as well as of the reconstructed secret image. In our scheme, peak signal-to-noise ratio (PSNR) is used to evaluate the quality of the reconstructed original image GI`. Similarly, mean square error (MSE) is used to identify the difference between the extracted halftone logo HL` and the halftone image HI``; the reliability of the VSS scheme is guaranteed if the MSE is zero. The third objective is the image self-verification code embedding phase for the reliability of HL, followed by the recovery of images. Experiments were based on two assumptions corresponding to two circumstances.
TABLE II. RECONSTRUCTED IMAGE QUALITY AND RELIABILITY
CONCLUSION WHEN NO CHEATING IS DETECTED

The first circumstance assumes that neither the dealer nor the participants are cheating. If the MSE value of HI and HL is zero, the parameter is "Sure", and vice versa. The quality of the reconstructed secret image is considered from two points of view. First, under the human visual system, the reconstructed secret image GI` is almost indistinguishable from the original image GI. Secondly, the PSNR values of the reconstructed secret images relative to the original images range from 32 to 34.5 dB. Moreover, all MSEs are equal to zero when no cheating occurs. The reconstructed images can therefore be assumed to be completely believable.
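A small sketch of the two quality measures used above (assuming 8-bit images represented as NumPy arrays):

    import numpy as np

    def mse(a, b):
        # Mean square error between two equal-shaped images.
        a = np.asarray(a, dtype=np.float64)
        b = np.asarray(b, dtype=np.float64)
        return float(np.mean((a - b) ** 2))

    def psnr(original, reconstructed, max_value=255.0):
        # Peak signal-to-noise ratio in dB; infinite when the images are identical.
        err = mse(original, reconstructed)
        return float("inf") if err == 0 else 10.0 * np.log10(max_value ** 2 / err)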
IV. CONCLUSIONS
In this paper, we propose a novel scheme for a satellite imagery system which provides a framework for secure transmission of space images through the Internet and self-recovery of satellite images. Our scheme finds duplicated images in the digital globe dataset using similarity score computation. To defend the web server and the database against malicious attacks, an intrusion tolerance system is adopted. The image transferred via the Internet is monitored, and network internal characteristics such as loss rate, delay and utilization from its network to a set of other collaborating networks are obtained; inferring the routing topology and link performance is also done.
To improve security, cheating of the image by the dealer or participants is detected using the self-verifiable rehash technique when some of the collected shadows are forged during the revealing process. Moreover, the original reconstructed satellite image is established only when k out of n valid shadows are collected, and no one can force an honest participant to reconstruct a wrong secret image. Thus, this platform improves the security of satellite images transmitted through the Internet.

REFERENCES
[1] Ayda Saidane, Vincent Nicomette, and Yves Deswarte, The Design
of a Generic Intrusion-Tolerant Architecture for Web Servers, IEEE
Transactions On Dependable And Secure Computing, Vol. 6, No. 1,
January-March 2009.
[2] M. Cukier, T. Courtney, J. Lyons, H.V. Ramasamy, W.H. Sanders,
M. Seri, M. Atighetchi, P. Rubel, C. Jones, F. Webber, P. Pal, R.
Watro, and J. Gossett, Providing Intrusion Tolerance with ITUA,
Proc. Intl Conf. Dependable Systems and Networks (DSN 02), June
2002.
[3] F. Majorczyk, E. Totel, and L. Me, COTS Diversity Based Intrusion
Detection and Application to Web Servers, Proc. Eighth Intl Symp.
Recent Advances in Intrusion Detection (RAID 05), Sept. 2005.
[4] A. Valdes, M. Almgren, S. Cheung, Y. Deswarte, B. Dutertre, J.
Levy, H. Sadi, V. Stavridou, and T. Uribe, An Adaptative
Intrusion-Tolerant Server Architecture, Proc. 10th Intl Workshop
Security Protocols, pp. 158-178, 2003.
[5] L.M. Manevitz and M. Yousef, One-Class SVMs for Document
Classification, J. Machine Learning Research, vol. 2, pp. 139-154,
2001.
[6] Weifeng Su, Jiying Wang, and Frederick H. Lochovsky, Record
Matching over Query Results from Multiple Web Databases, IEEE
Transactions On Knowledge And Data Engineering, Vol. 22, No. 4,
April 2010.
[7] M. Bilenko and R.J. Mooney, Adaptive Duplicate Detection Using
Learnable String Similarity Measures, Proc. ACM SIGKDD, pp. 39-
48, 2003.
[8] M. Bilenko and R.J. Mooney, Adaptive Duplicate Detection Using
Learnable String Similarity Measures, Proc. ACM SIGKDD, pp. 39-
48, 2003.
[9] Z. Zhou, G. R. Arce, and G. Di Crescenzo, Halftone visual cryptography, IEEE Transactions on Image Processing, vol. 15, no. 8, pp. 2441-2453, Aug. 2006.
[10] T. C. Chang and J. P. Allebach, Memory efficient error diffusion, IEEE Transactions on Image Processing, vol. 12, no. 11, pp. 1352-1366, Nov. 2003.
[11] M. Analoui and J. P. Allebach, Model-based halftoning using direct binary search, Proc. SPIE, vol. 1666, pp. 96-108, Feb. 1992.
[12] N. G. Duffield, J. Horowitz, F. Lo Presti, D. Towsley, Multicast
Topology Inference From Measured End-to-End Loss, IEEE
Transactions on Information Theory, vol. 48, no. 1, pp. 26-45, Jan.
2002.
[13] R. Caceres, N. G. Duffield, J. Horowitz, D. Towsley, Multicast-
Based Inference of Network-Internal Loss Characteristics, IEEE
Transactions on Information Theory, vol. 45, no. 7, pp. 2462-2480,
Nov. 1999.
[14] M. Coates and R. Nowak, Network Loss Inference using Unicast
End to- End Measurement, Proc. ITC Conference on IP Traffic,
Modelling and Management, Monterey, CA, Sept. 2000.
[15] C. Cowen, Software Security for Open Source Systems, IEEE
Security and Privacy, pp. 38-45, Jan./Feb. 2003.
[16] Kyung-Joon Park, Hyuk Lim, Chong-Ho Choi, Stochastic Analysis
of Packet-Pair Probing for Network Bandwidth Estimation, The
International Journal of Computer and Telecommunications
Networking, Volume 50 Issue 12, 24 August 2006.
[17] B. Yao, R. Viswanathan, F. Chang, D. Waddington, Topology
Inference in the Presence of Anonymous Routers, Proc. IEEE
INFOCOM, Apr. 2003.
[18] M. Naor and A. Shamir, Visual cryptography, Advances in Cryptology - EUROCRYPT '94, LNCS, vol. 950, pp. 1-12, 1995.
[19] G. Horng, T. Chen, and D. Tsai, Cheating in visual cryptography, Designs, Codes and Cryptography, vol. 38, pp. 219-236, 2006.
[20] R. Zhao, J. J. Zhao, F. Dai, and F. Q. Zhao, A new image secret sharing scheme to identify cheaters, Computer Standards & Interfaces, vol. 31, pp. 252-257, 2007.
Design and Implementation of Advance Planner using GPS Navigation for Mobile Communication with Android

Sasikumar Gurumurthy, Abdul Gafar H
School of Computing Science & Engg., VIT University, Vellore, Tamil Nadu, India
e-mail: g.sasikumar@vit.ac.in

A.Valarmozhi
Department of Information Technology, Bannari Amman Institute of Technology, Sathyamangalam, Tamil Nadu, India
e-mail: valar_113@yahoo.com


Abstract - Route Recorder is an application that helps to keep track of the route travelled by an Android device. When the user presses the start button, the application starts recording the track travelled by the device dynamically, tracking the location by GPS. The application stores the route travelled for later review by the user. The route is stored by the device in KML (Keyhole Markup Language) format; the coordinates of the recording are written to the KML file with the specified name at regular intervals. The KML file can be exported to the SD card and used to view the trip on Google Earth, which also helps in geo-tagging photos along the route travelled. Android is an open-source operating system developed by Google for mobile devices such as cellular phones, tablet computers and netbooks. Android is based upon the Linux kernel and GNU software. Android OS smart phones rank first among all smart-phone OS handsets. The recent version is Android 2.2.
Keywords- GPS; KML; Route Recorder; Android
I. INTRODUCTION
The main aim of GPS navigation using Android is to develop a route recorder using GPS for the Android platform. The objective of the program is to track the device using GPS and display the latitude and longitude coordinates to the user. The route travelled is recorded with the latitude and longitude coordinates and stored in a KML file so that the user can review the trip whenever he wants using Google Earth.
II. METHODOLOGIES
A. Android Platform
The application is developed to run on the Android platform. Android is a new operating system, and keeping applications up to date with the technology is also important. The application can be developed in the Java EE IDE Eclipse connected to the Android emulator [5]; Eclipse and the Android emulator are connected using the ADT plug-in.
B. SQLite
The values of the GPS logs are stored in a table in a database created using SQLite, which Android uses as its built-in embedded database. If local application data needs to be stored, the database can be used rather than a simple file mechanism or a complex network system.
III. ARCHITECTURE ANALYSIS
The application has broadly three modules. 1. Starting
and stopping of the trip recording. 2. Saving the trip to
Database. 3. Converting the values in the database to KML
format and exporting it to the SD card.
A. Development Phase
Initially, development of the project is done in the Android Software Development Kit; generation of the interface is done in Eclipse. Eclipse is a multi-language software development environment comprising an integrated development environment (IDE) and an extensible plug-in system, and it is free and open-source software. The interface created is packaged into an .apk file; the .apk extension denotes an Android Package (APK) file. This file format, a variant of the JAR format, is used for the distribution and installation of bundled components onto the Android mobile device platform. An APK file is an archive that usually contains the following: META-INF (folder), res (folder), AndroidManifest.xml, classes.dex and resources.arsc [3][1]. An .apk file can be opened and inspected using common archive tools such as 7-Zip or WinZip. We then install the .apk file on the Android mobile; after installation we can run the route recorder.
B. System Design and data flow
The application is started by opening the Android interface and navigating to the application area where the GPS logger is located. The GPS logger is activated and its interface appears. Recording is started by pressing the start button and entering the values. Corrections to the view points are made, if necessary, by specifying corrections in air or on ground [7][10]. The route travelled is recorded in KML format. If another trip is required, a condition is checked; if not, the KML file is saved to the SD card.
C. Modules of the System
a) Start/stop module: The user carrying the device starts logging. The application then requests location updates from the GPS provider. The location updates are sent to the application and the values are stored in the database created using SQLite.
b) Status Provider: The application constantly checks
for the change in status of the provider. The same
status is displayed to the user. The user is notified
whenever the provider is enabled or disabled.
c) New trip: When a new trip is started, the old one is converted to KML format and the database is cleared. The database is then ready to store the new location-update values. The database contains the entities shown in Table I.
d) Debugging: Debugging is used when the values are inaccurate. Inaccuracy occurs when there is a disturbance such as clouds or buildings. We give a correction value in air or on ground to obtain the correct value. Debugging is stopped after the values are corrected.
The final module converts the values in the database to KML format and exports them to the SD card. A new KML file is created on the SD card with the new trip name [4][2]. The latitude, longitude and altitude values are retrieved from the database and appended to the KML file on the SD card. Errors are reported when the application cannot open the SD card, cannot create a new KML file in it, or cannot append the values to the file (a rough sketch of this conversion is given after Table I).
TABLE I. DATABASE TABLE
Name Type
Timestamp Varchar
Latitude Real
Longitude Real
Altitude Real
Accuracy Real
Speed Real
Bearing Real
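As a rough, non-Android illustration of the conversion step above, the sketch below reads rows shaped like Table I from an SQLite database and writes a minimal KML LineString; the table name, paths and KML skeleton are assumptions (in the application itself this is done in Java on the device).

    import sqlite3

    def export_kml(db_path, kml_path, trip_name="trip"):
        # Read the logged points in time order and write them as one KML LineString.
        rows = sqlite3.connect(db_path).execute(
            "SELECT longitude, latitude, altitude FROM gpslog ORDER BY timestamp").fetchall()
        coords = " ".join("%f,%f,%f" % (lon, lat, alt) for lon, lat, alt in rows)
        with open(kml_path, "w") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document><Placemark>'
                    '<name>' + trip_name + '</name>'
                    '<LineString><coordinates>' + coords + '</coordinates></LineString>'
                    '</Placemark></Document></kml>')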
IV. IMPLEMENTATION
The application is started. The user is provided with options to start tracking, stop tracking, create a new trip and export the KML file to the SD card. The KML file can further be put on Google Earth so that the trip can be reviewed by the user, or he can geo-tag along the trip [9]. When the user starts tracking, location updates are received from the GPS satellites and the values are recorded in the database.

Figure 1. The interface of the eclipse
The tracking stops when the user clicks the stop button. The current trip is saved in the database with the date. The user can export the database to the SD card by converting the file into KML format. He can add an altitude correction if he thinks there is an error in tracking; the user is notified whether the tracking is relatively accurate or not, so that he can decide whether to add the altitude correction.

Figure 2. The interface of the eclipse
A. Installing and testing the application on a real device
When building a mobile application, it is important to always test the application on a real device before releasing it to users. This section describes how to set up the development environment and Android-powered device for testing and debugging on the device. Any Android-powered device can be used as an environment for running, debugging and testing applications. The tools included in the SDK make it easy to install and run the application on the device each time it is compiled [1][5]. The application can be installed on the device directly from Eclipse or from the command line. If a device is not yet available, check with the service providers in your area to determine which Android-powered devices are available.
B. Setting up the device for development
With an Android-powered device, you can develop and debug your Android applications just as you would on the emulator. Before you can start, there are just a few things to do: 1. Declare your application as "debuggable" in your Android manifest. In Eclipse, you can do this from the Application tab when viewing the manifest (on the right side, set Debuggable to true); otherwise, in the AndroidManifest.xml file, add android:debuggable="true" to the <application> element [7]. 2. Turn on "USB Debugging" on your device: on the device, go to the home screen, press MENU, select Applications > Development, then enable USB debugging. 3. Set up your system to detect your device; if you are developing on Windows, you need to install a USB driver for adb.
C. Signing the application for releasing
The Android system requires that all installed
applications be digitally signed with a certificate whose
private key is held by the application's developer. The
Android system uses the certificate as a means of
identifying the author of an application and establishing
trust relationships between applications. The certificate is
not used to control which applications the user can install.
The certificate does not need to be signed by a certificate
authority: it is perfectly allowable, and typical, for Android
applications to use self-signed certificates. The important
points to understand about signing Android applications are:
- All applications must be signed. The system will not install an application that is not signed.
- You can use self-signed certificates to sign your applications. No certificate authority is needed.
- When you are ready to release your application to end users, you must sign it with a suitable private key. You cannot publish an application that is signed with the debug key generated by the SDK tools.
- The system tests a signer certificate's expiration date only at install time. If an application's signer certificate expires after the application is installed, the application will continue to function normally.
- You can use standard tools, Keytool and Jarsigner, to generate keys and sign your application .apk files.
- Once you have signed the application, use the zipalign tool to optimize the final APK package.
The Android system will not install or run an
application that is not signed appropriately. This applies
wherever the Android system is run, whether on an actual
device or on the emulator. For this reason, you must set up
signing for your application before you will be able to run or
debug it on an emulator or device. The Android SDK tools
assist you in signing your applications when debugging.
Both the ADT Plugin for Eclipse and the Ant build tool offer two signing modes: debug mode and release mode.
While developing and testing, you can compile in
debug mode. In debug mode, the build tools use the Keytool
utility, included in the JDK, to create a keystore and key
with a known alias and password. At each compilation, the
tools then use the debug key to sign the application .apk file.
Because the password is known, the tools don't need to
prompt you for the keystore/key password each time you
compile.
When your application is ready for release, you
must compile in release mode and then sign the .apk with
your private key. There are two ways to do this:
a) Using Keytool and Jarsigner in the command-line.
In this approach, you first compile your application
to an unsigned .apk. You must then sign the .apk
manually with your private key using Jarsigner (or
similar tool). If you do not have a suitable private
key already, you can run Keytool manually to
generate your own keystore/key and then sign your
application with Jarsigner.
b) Using the ADT Export Wizard. If you are
developing in Eclipse with the ADT plugin, you
can use the Export Wizard to compile the
application, generate a private key (if necessary),
and sign the .apk, all in a single process using the
Export Wizard
D. Basic setup for signing
Before you begin, you should make sure that Keytool is
available to the SDK build tools. In most cases, you can tell
the SDK build tools how to find Keytool by setting your
JAVA_HOME environment variable to reference a suitable
JDK. Alternatively, you can add the JDK version of Keytool
to your PATH variable. If you are developing on a version
of Linux that originally came with GNU Compiler for Java,
make sure that the system is using the JDK version of
Keytool, rather than the gcj version. If Keytool is already in
your PATH, it might be pointing to a symlink at
/usr/bin/keytool. In this case, check the symlink target to be
sure it points to the Keytool in the JDK. If you will release
your application to the public, you will also need to have the
Jarsigner tool available on your machine. Both Jarsigner and
Keytool are included in the JDK.
E. Signing in Debug Mode
The Android build tools provide a debug signing mode
that makes it easier for you to develop and debug your
application, while still meeting the Android system
requirement for signing your .apk. When using debug mode
to build your app, the SDK tools invoke Keytool to
automatically create a debug keystore and key. This debug
key is then used to automatically sign the .apk, so you do
not need to sign the package with your own key. The SDK
tools create the debug keystore/key with predetermined
names/passwords:
Keystore name: "debug.keystore"
Keystore password: "android"
Key alias: "androiddebugkey"
Key password: "android"
CN: "CN=Android Debug,O=Android,C=US"
If necessary, you can change the location/name of the
debug keystore/key or supply a custom debug keystore/key
to use. However, any custom debug keystore/key must use
the same keystore/key names and passwords as the default
debug key. If you are developing in Eclipse/ADT, signing in
debug mode is enabled by default. When you run or debug
your application, ADT signs the .apk with the debug
certificate, runs zipalign on the package, then installs it on
the selected emulator or connected device. No specific
action on your part is needed, provided ADT has access to
Keytool[10].
The self-signed certificate used to sign your application
in debug mode (the default on Eclipse/ADT and Ant builds)
will have an expiration date of 365 days from its creation
date.

V. CONCLUSION
The paper suggests an efficient approach to implementing Route Recorder, an application that keeps track of the route travelled by an Android device. When the user presses the start button, the application starts recording the track travelled by the device dynamically, tracking the location by GPS. The application stores the route travelled for the user's review; the route is stored by the device in KML (Keyhole Markup Language) format, with the coordinates written to the KML file with the specified name at regular intervals. It also helps in geo-tagging photos on Google Earth along the route travelled. To realize this vision of devices, applications and environments, we believe a new application model is needed. The model is characterized by a device-independent application development process, which includes abstract specification of the application front-end and of the application's resource and service requirements. It also includes monitoring and checkpointing, and enables a running application to migrate from device to device or to simultaneously utilize the interface capabilities of multiple devices.

REFERENCES
[1] H. Ishiguro, Android science: conscious and subconscious recognition, Connection Science, vol. 18, no. 4, pp. 319-332, 2006.
[2] D. Matsui, T. Minato, K. MacDorman, and H. Ishiguro, Generating natural motion in an android by mapping human motion, Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3301-3308, 2005.
[3] Deshpande. C. Breazeal, Designing Sociable Robots. Cambridge,
MA, USA: MIT
[4] H. Miwa, K. Itoh, M. Matsumoto, M. Zecca, H. Takanobu, S.
Rocella,
[5] M. Carrozza, P. Dario, and A. Takanishi, Effective emotional expressions with expression humanoid robot we-4rii: integration of humanoid robot hand rch-1, in Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, 2004, pp. 2203-2208.
[6] Kim, C.S., Kim, J.I., Han, W.Y., Kwon, O.c., Development of
telematics service based on Gateway framework Proc. of the
ICACT, 2006, pp.1349-1352.
[7] Han, W.Y., Kwon, O.C., Park, J.H., Kang, J.H., "A Gateway and
Framework for Interoperable Telematics Systems Independent on
Mobile Networks", ETRI Journal, Vo1.27, No.1, 2005, pp.106-109.
[8] D.W. Lee, H.K. Kang, D.O. Kim, KJ. Han, "Development of a
Telematics Service Framework for open Services in the
Heterogeneous Network Environment", Proc. of the International
Congress on Anti Cancer Treatment ICACT 2009.
[9] Shin-Hun Kang, Jae-Hyun Kim, "QoS-Aware Path Selection for
Multi Homed Mobile Terminals in Heterogeneous Wireless
Networks", Proc. Of the IEEE CCNC 2010, Jan, 2010.
[10] F. Davoli, M. Marchese, M. Mongelli, "Bandwidth Adaptation for
Vertical QoS Mapping in Protocol Stacks for Wireless Links", Proc.
Of IEEE Global Communication Conference 2009, 30 Nov.- 4 Dec.
2009.

Hybrid Load Balancing in Grid Computing
Environment using Genetic Algorithm
M.Kalaiselvi
Department of Computer Science and Engineering
Thiagarajar College of Engineering
Madurai, Tamilnadu, India
kalaiselvim@tce.edu
R.Chellamani
Department of Computer Science and Engineering
Thiagarajar College of Engineering
Madurai, Tamilnadu, India
rcmcse@tce.edu

Abstract - A distributed system consists of a number of machines working independently of each other. Each resource processes an initial load, which represents an amount of work to be performed, and each may have a different processing capacity. To minimize the time needed to perform all tasks, the workload has to be evenly distributed over all resources based on their processing speed. Hybrid load balancing is applied to balance the load: a Min-Min algorithm and a Genetic Algorithm are combined here. Initially, Min-Min is used to find the earliest completion time of all tasks among the processors. When the number of processors increases, the system workload increases; moreover, when the number of tasks in the queue increases, Min-Min does not produce the optimal output. The objective is to minimize the total execution time of the waiting tasks as well as to achieve a well-balanced load across all nodes. To obtain a near-optimal solution, a Genetic Algorithm is used. Here we use a modified crossover to avoid the occurrence of twins. The fitness function combines both makespan and average node utilization.
Keywords- Average node utilization; Genetic Algorithm; Grid computing; Hybrid Load Balancing; Makespan; Min-Min algorithm; Sliding window technique.
I. INTRODUCTION
A distributed system is a collection of heterogeneous computers connected by communication links which appears to the users of the system as a single machine. Cluster computing and grid computing are used to form the distributed environment. Cluster computing combines personal computers connected by local networks in a fixed area, whereas grid computing combines personal computers or workstations located in different geographic areas. The aim of grid computing is to integrate the power of widespread resources. In such a distributed system, if some systems are idle while others are busy, the overall performance of the distributed system decreases; load balancing is necessary to overcome this.
Load balancing is generally classified into two types: static load balancing and dynamic load balancing. In static load balancing [15], the agent should know the load information and processing capacities of the computing nodes prior to balancing the load. In dynamic load balancing, the load is balanced at run time; it is not necessary for the agent to collect the processing information before starting the load balancing. The main issues in grid computing are heterogeneity, autonomy and dynamicity [2]; dynamicity is handled in this paper.
Load balancing can also be classified, based on where the scheduling information resides, as centralized or decentralized. In centralized load balancing [3], the load balancing decision is made by a single node in the grid system: this scheduler collects the load information of all other nodes and schedules the load based on it. In decentralized load balancing [6,3], all the nodes in the system act as schedulers, i.e., all nodes are involved in the load balancing decision. However, it is tedious to obtain the dynamic state information of all nodes in the grid, so each node knows only partial state information about the other nodes.
While balancing the load, the task should be divisible, but this leads to some constraints when dividing tasks. Each job consists of several tasks, and each of those tasks can have a different execution time. Also, the load on each resource and network can vary from time to time, and memory size and disk space may also vary. Load balancing therefore involves the following steps [2]:
- Monitoring of resource load and state
- Exchanging load and state information between resources
- Calculating the work distribution
- Data movement
Load balancing is used to balance the load across the computing nodes, but the problem is deciding which node is suitable to receive the load; the selection of the best node is thus the main objective in load balancing. There are also some complexities involved in balancing the load in a distributed system. Because a distributed system is a collection of computing nodes, the state of the system changes dynamically, and the agent has to collect the state information periodically [6]. The number of systems in a grid environment can also be large. Furthermore, a distributed system is a collection of heterogeneous systems, so the heterogeneous nature should also be considered while balancing the load. When the tasks to be solved are computationally intensive, a more computationally intensive load balancing algorithm can be justified, and the algorithm is selected in that way.
It is more advantageous to combine both static and
dynamic load balancing. In this paper we proposed a hybrid
load balancing, which combines Min-Min[13,10] algorithm
and Genetic algorithm. Genetic algorithm[14,13,10], is
more computationally intensive. GA is an optimization
technique which produces the best solution from the set of
possible solutions.
This paper is organised as follow. The section II deals
with the related work. Section III specifies the load
balancing process. Section IV describes the proposed
methodology. The implementation details are given in
Section V. The conclusion is provided in Section VI.
II. RELATED WORK
A. Static Load Balancing
In static load balancing, the load balancing decisions are made in advance and do not change during system operation. For example, in simple static load balancing, jobs can be assigned to resources in a round-robin fashion so that each resource executes approximately the same number of tasks; the load balancing is thus static. The advantage of static load balancing [14] is that it is easier to implement. The major disadvantage is that the characteristics of the computing resources and the communication network must be known in advance and must not change during operation, an assumption that does not hold in a grid environment.
B. Dynamic Load Balancing
In this method the agent or scheduler collects the load
information dynamically. There are two main components
for load balancing. Location policy and transfer policy [7].
The location policy deals with the selection of nodes for the
particular task to transfer from an overloaded node to an
underutilized node. Transfer policy deals with whether a
task should be processed locally or remotely. There are
three factors important. The first one is Transfer Threshold
(TT), which is used to decide whether a node is overloaded
or not. The second one is when to initiate a load-balancing
operation in each node, including information-collecting and
decision-making. The last parameter is the Transfer Size
(TS), the amount of load transmitted for each load transfer.
In [1], dynamic load balancing is proposed for HLA (High Level Architecture) simulations during their execution. The system
Interface, Local Load Balancers, Local Monitoring
Interfaces, and Migration Mechanisms. The CLB contains
three components namely Monitoring, Re-Distribution and
Migration components. The monitoring component
monitors the workload information and access Grid Index
Services from the CLB through Monitoring interface. It also
directly communicates with the Local Load Balancer to
gather information about the grid system.
The default scheduling policy for desktop grids is FCFS. Its disadvantage is that it performs well only when the number of tasks is limited, and it does not handle dynamic behaviour. So in [5],
dynamic scheduling for desktop grids is proposed. The
Linear Programming Based Affinity Scheduling policy for
Desktop Grids (LPAS DG) prevents the assignment of
particular task classes to inefficient machines.
In [9], a dynamic load balancing scheme is proposed that considers the heterogeneous nature of processors and the dynamic load of the networks. To handle the heterogeneity of networks, the load balancing policy is divided into two phases: global load balancing and local load balancing. Due to the refinement of
drastically in runtime, which results in load imbalance.
In [12], to improve the efficiency of the system, dynamic
load balancing is proposed in two phases. In first phase the
heterogeneous distributed system is embedded onto B+ tree.
In the second phase load balancing is executed on the virtual
structure. Positional Scan Load Balancing(PSLB) is
proposed. In this method, the work units are indexed and the
scan operator collects the load information and broadcast it.
For each node, the destination of each work unit is
calculated and then the migration for each work unit is
performed.
C. Hybrid Load Balancing
Hybrid load balancing combines static and dynamic load
balancing. In [13], a reliable hybrid load balancing scheme is proposed using the Jingle-Mingle model. A Process Migration Server (PMS), which also acts as a future cluster-management server, ensures that the latency of migrated process execution is reduced, along with a no-starvation policy for every process. This hybrid scheduling algorithm maintains a history of events in order to reconfigure the system, and it also proposes an effective scheme to recover from a single-point crash, thereby increasing overall performance. A Jingle node is an underloaded node that can receive processes from other nodes; a Mingle node is an overloaded node that can migrate a process to a Jingle node. The PMS broadcasts the status of the underloaded nodes.
In [14], a hybrid load balancing scheme combining FCFS and GA is proposed. FCFS considers the minimum execution time of each task individually; the GA comprises selection, crossover and mutation operators, and the sliding window technique is used to enhance its functionality. The drawback of this GA is that the crossover operator may produce twins, i.e., one task may appear on more than one processor in a schedule. This is not practical, because a task cannot be executed on more than one processor, so twin removal becomes necessary. To overcome this, a different crossover is proposed in this paper.



III. THE LOAD BALANCING PROCESS
The load balancing process is generally classified into
three steps [2] such as Information Collection, Resource
Selection and Task Mapping.
A. Information Collection
The agent is responsible for collecting the current status information of the systems in the grid environment. This should be performed throughout the execution of the system, so that the dynamic nature of the system resources is captured.
B. Resource Selection
Resource selection is done in two steps. In the first step, filtering is performed to identify a list of resources that are available for the particular application. In the second step, the resources are aggregated so as to provide the performance desired by the application.
C. Task Mapping
The mapping of set of tasks onto a set of aggregated
resources including both computational and network
resources are done here.
To balance the load, several algorithms are proposed.
Some of them are discussed here. One simple load
balancing algorithm is best-fit [2]. The tasks are assigned to
the computing nodes based on the completion time offered
by the nodes. The node which completes the task fastest is
chosen. Once a task is assigned to a node, it will be
executed on that node and will not be re-assigned to another
node. The scheduler will wait until the nodes are available.
This is static load balancing.
In FCFS (First Come First Served) one ready queue is
available. The job which comes first will be executed first.
The new processes come in at the end of the queue.
Independent of the job execution time, the jobs are executed
based on the arrival time. The advantage is that, it is easier
to implement and less overhead for scheduling. The
disadvantage is that, the load information and processing
capacity of all the nodes involved in the grid must be known
in advance. Also cannot handle the balancing at run time.
Round Robin assigns time slices to each process in the
queue and handles the process according to the time slice
provided for that process. Once a process finishes its
execution for the specified time slice, then it is pre-empted
and next process is executed. The pre-empted process is
resumed for the next time slice. This cycle continues until
all the processes finish its execution.
For dynamic load balancing, the Genetic Algorithm is more suitable. GA is an optimization technique: it selects the best solution from the set of possible solutions by comparing each candidate solution against the others.
In Tabu search[10], initial solution is chosen from the set
of feasible solutions. The next solution is chosen from the
neighbors of current solution by using neighborhood
algorithm.
IV. METHODOLOGY USED
The distributed system we consider here consists of heterogeneous systems; that is, each processor has a different processing capacity. The hybrid technique combines static and dynamic load balancing. Static load balancing is based on prior knowledge of the processors, the communication networks and the tasks; it allocates the tasks to the processors, and once a task is assigned to a processor, the assignment cannot be changed. Min-Min is used here.
In dynamic load balancing, no prior knowledge is needed; tasks are allocated to the processors at run time. A Genetic Algorithm is used for dynamic load balancing.
A. Min-Min Algorithm
Initially, Min-Min is executed; it is used as the static load balancing algorithm. It is based on the Minimum Completion Time (MCT): first the completion time of each task on each processor is calculated, then the minimum completion time of each task is found. The Min-Min algorithm chooses the task with the smallest MCT and assigns it to the processor corresponding to that task's MCT.
Consider n tasks in the queue. Let the execution time of task j on processor i be e_{ij} and the arrival time of task j be a_j. The completion time of task j on processor i is calculated as

    CT_{ij} = \max(a_j, r_i) + e_{ij}

where r_i is the ready time of processor i, i.e., the time at which it finishes the load already assigned to it. The minimum completion time of task j is MCT_j = \min_i CT_{ij}, and the task with the minimum MCT is given to the processor corresponding to that task's MCT.
When the number of processors increases, the agent has to collect more information, which becomes an overhead for Min-Min. We therefore move to a Genetic Algorithm (GA).
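A compact sketch of the Min-Min step described above is shown below; the execution-time matrix and starting loads are illustrative, and all arrival times are taken as zero, as in Section V.

    def min_min(exec_time, ready=None):
        # exec_time[j][i]: execution time of task j on processor i; ready[i]: current load of processor i.
        n_tasks, n_procs = len(exec_time), len(exec_time[0])
        ready = list(ready) if ready else [0.0] * n_procs
        unscheduled, schedule = set(range(n_tasks)), {}
        while unscheduled:
            # For every unscheduled task, take its minimum completion time over all processors,
            # then pick the task whose minimum is smallest.
            ct, task, proc = min((ready[i] + exec_time[j][i], j, i)
                                 for j in unscheduled for i in range(n_procs))
            schedule[task] = proc
            ready[proc] = ct
            unscheduled.remove(task)
        return schedule, ready      # max(ready) is the resulting makespan

    # Example: three tasks on two processors
    print(min_min([[4, 6], [3, 5], [8, 2]]))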
B. Genetic Algorithm
GA is an evolutionary algorithm that evolves through iterations. The genetic representation and the fitness function are two important factors. The genetic representation used here is an array of bits, and each individual solution is evaluated by the fitness function, which depends on the problem domain. The initial population consists of a set of possible solutions. Selection is based on the fitness value: the set of schedules with the highest fitness values is selected for the next generation. Reproduction is performed using the two genetic operators, crossover and mutation.
GA comprises three genetic operators: selection, crossover and mutation. The algorithm for GA is shown in Fig 1. It starts with a set of random solutions called the population. Each individual in the population is called a string or chromosome, and successive iterations are called generations. The chromosomes are evaluated with a fitness function in each generation. The selection operator forms the chromosomes for the next generation; before passing into the next generation, the selected chromosomes may undergo crossover or mutation or both (based on the probability of crossover and mutation). In crossover, two parent chromosomes are merged to form new child chromosomes; in mutation, a parent chromosome is modified to form a child chromosome.


Algorithm:
1. Begin
2. Generate initial population
3. Repeat
   a. Apply selection
   b. Perform crossover/mutation according to crossover and mutation probability
   c. Calculate fitness value
4. Until termination condition
5. End
Fig 1. Genetic Algorithm

The encoding scheme for GA is a decimal tuple of two attributes (t_j, p_i), where t_j denotes task j, j = 1..n, and p_i denotes processor i, i = 1..m. The decimal tuple representation is shown in Fig 2.

Processor:  P1  P2  P3  P2  P3  P2  P1  P3
Task:       T1  T3  T8  T4  T7  T6  T2  T5
Fig 2. Encoding Mechanism
1) Genetic Operators
a) Selection
Selection operator is used to form the chromosomes for
the next generation. The string which has the higher fitness
value will have the higher chance to get into the next
generation. So the best chromosomes are selected from each
generation and passed as input to the next generation.

b) Crossover
The crossover operator combines two chromosomes (the parents in Fig 3(a)) to form new chromosomes (the offspring in Fig 3(b)). A new chromosome may be better than both of its parents if it takes the best characteristics from each. Crossover is performed based on the crossover probability. The important constraint is that each task should appear only once in a schedule. The crossover operation is as follows.

Initially, two random cut points are selected, and the sequence between the cut points is copied into an offspring.

Parent 1:
T1 T5 T2 T8 T3 T7 T4 T6
Parent 2:
T2 T7 T6 T4 T5 T1 T3 T8
(a)
Offspring 1:
T8 T3 T7
Offspring 2:
T4 T5 T1
(b)
Fig 3. (a)Parent Chromosomes (b)offspring
Starting from the second cut point of one parent, the remaining tasks are copied from the other parent in the same order. To do this, the tasks of the first parent are written in sequence starting from the second cut point, as follows:
T4 T6 T1 T5 T2 T8 T3 T7

The tasks already present in the second offspring are removed from this sequence, which then reads T6 T2 T8 T3 T7.

T4 T5 T1 T8 T3 T7 T2 T6
(a)

T8 T3 T7 T4 T5 T1 T6 T2
(b)
Fig 4. The final offspring (a).Offspring 1 (b).Offspring 2

This sequence is placed in the second offspring starting from the second cut point (Fig 4(b)). The same operation is performed with the second parent to complete the first offspring (Fig 4(a)). This kind of crossover avoids the overhead of twin removal: each task appears exactly once in a schedule, so it is executed on only one processor.
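A minimal sketch of this two-cut, order-preserving crossover is given below; it reproduces the example of Figs 3 and 4 when the cut points 3 and 6 are supplied (cut points would normally be chosen at random).

    import random

    def order_crossover(parent1, parent2, cut1=None, cut2=None):
        # Two-cut crossover that keeps every task exactly once in each offspring.
        n = len(parent1)
        if cut1 is None or cut2 is None:
            cut1, cut2 = sorted(random.sample(range(n + 1), 2))

        def make_child(segment_donor, order_donor):
            middle = segment_donor[cut1:cut2]
            # Remaining tasks of the other parent, read from the second cut point with
            # wrap-around, skipping tasks already present in the copied middle segment.
            rest = [t for t in order_donor[cut2:] + order_donor[:cut2] if t not in middle]
            child = [None] * n
            child[cut1:cut2] = middle
            for pos, task in zip(list(range(cut2, n)) + list(range(cut1)), rest):
                child[pos] = task
            return child

        return make_child(parent1, parent2), make_child(parent2, parent1)

    p1 = ["T1", "T5", "T2", "T8", "T3", "T7", "T4", "T6"]
    p2 = ["T2", "T7", "T6", "T4", "T5", "T1", "T3", "T8"]
    print(order_crossover(p1, p2, cut1=3, cut2=6))   # matches Fig 4(a) and Fig 4(b)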
c) Mutation
The mutation operator alters one or more gene values, producing a new chromosome that is passed to the next generation. Mutation occurs with a user-defined probability. Here, the mutation process selects two random genes and exchanges them to form the new gene values. This is illustrated in Fig 4.

T1 T5 T6 T2 T4 T8 T3 T7
(a)
T1 T5 T4 T2 T6 T8 T3 T7
(b)
Fig 4. The mutation process (a).Before mutation (b).After mutation
2) Fitness Function
The fitness function is the objective function of the Genetic Algorithm. It decides the optimality of a solution and allows the best chromosomes to pass into the next generation. The main objective here is to attain the task assignment that results in minimum execution time and a well-balanced load across all nodes in the grid environment. The fitness function therefore considers the makespan, the average node utilization and the current workload.

a) Makespan
Makespan is the longest completion time among all processors in the system. For illustration, consider the following example.

P1: 1(5) 2(7)    P2: 3(8) 4(2) 6(8)    P3: 5(9) 7(3) 8(10)
Fig 5. Initial schedule

Initially, assume that all processors are idle. Fig 5 represents the allocation of tasks to the processors; 1(5) means that task 1 has an execution time of 5 units. Tasks 1 and 2 are executed on processor P1, tasks 3, 4 and 6 on processor P2, and tasks 5, 7 and 8 on processor P3. The total execution time for processor P1 is 5+7 = 12 units (tasks 1 and 2), for P2 it is 8+2+8 = 18, and for P3 it is 9+3+10 = 22. So the makespan for this task schedule is 22 units.
The processors may not always be idle; some tasks may already be executing, each with some remaining execution time. This current workload should be considered when calculating the makespan. For example, if the current workloads on processors P1, P2 and P3 are 10, 13 and 11 units, then the makespan for the same schedule is as follows:

P1 = 12+10 = 22    P2 = 18+13 = 31    P3 = 22+11 = 33

So the makespan for this schedule is 33 units. The objective is to minimize the makespan.

b) Average Node Utilization
The other objective is a well-balanced load across all the nodes, which is achieved through this factor. High average node utilization implies that the load is well balanced across the nodes. The expected utilization of each processor is calculated by dividing the total completion time of each processor by the makespan, so the utilization of each processor is

P1 = 22/33 = 0.6667    P2 = 31/33 = 0.9394    P3 = 33/33 = 1

The Average Node Utilization (ANU) is calculated by dividing the sum of all processors' utilizations by the total number of processors. The objective is to maximize the ANU.

ANU = (sum of all processors' utilizations) / (total number of processors)

ANU = (0.6667 + 0.9394 + 1)/3 = 0.8687.

c) Combined Fitness Function
The combined fitness function is given by the
following equation
Fitness= (1/makespan) + Average Node Utilization.
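The fitness evaluation can be reproduced for the worked example above (a small sketch, not the authors' code; it assumes the per-processor task times and current workloads are given as lists):

def fitness(per_processor_times, current_load):
    """Combined fitness = 1/makespan + average node utilization."""
    completion = [sum(times) + load
                  for times, load in zip(per_processor_times, current_load)]
    makespan = max(completion)
    anu = sum(c / makespan for c in completion) / len(completion)
    return makespan, anu, 1.0 / makespan + anu

# schedule of Fig 5 with current workloads of 10, 13 and 11 units
times = [[5, 7], [8, 2, 8], [9, 3, 10]]
print(fitness(times, [10, 13, 11]))   # (33, ~0.8687, ~0.899)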

V. IMPLEMENTATION
The overview of hybrid load balancing is shown in Fig 6. The input specifies the number of tasks, the number of processors, the execution time of each task on each processor and a threshold value (on the number of processors). The start time of each task is taken as 0, i.e. all tasks enter at the same time. This information is given to the client. If the number of processors is less than the threshold value, the Min-Min algorithm is executed to decide the current schedule for the processors. The task with the lowest execution time on a processor is executed first. If a processor is already executing a task, the new task's execution time is added to the current execution time to obtain the final execution time, and tasks are allocated so that the final execution time of each processor stays low. When the number of processors exceeds the threshold value, the Genetic Algorithm is executed. The crossover rate is set to 0.8, the mutation rate to 0.1, and the number of generations used here is 30.
The initial population is generated based on the number of tasks; its size is twice the number of tasks. Random numbers are then assigned to each chromosome. Based on the crossover and mutation probabilities, reproduction is performed. To retain the best schedule over all generations, the initial population and the reproduced chromosomes are combined and repeated chromosomes are eliminated. The fitness value is then calculated for each chromosome and the chromosomes are sorted by fitness. The top chromosomes, up to the initial population size, are passed to the next generation. If the termination condition is not met, the random number assignment, crossover, mutation and fitness evaluation are performed again and the result is passed to the next generation; this process is repeated until the termination condition is met, at which point the top schedule is the final result. The termination criterion used here is the number of generations.
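A compact sketch of the Min-Min allocation used when the processor count is below the threshold (illustrative only; it assumes exec_time[t][p] holds the execution time of task t on processor p):

def min_min(exec_time, num_procs):
    """Min-Min: repeatedly assign the task whose best completion time is smallest."""
    load = [0.0] * num_procs                      # current execution time per processor
    unscheduled = set(range(len(exec_time)))
    schedule = {}
    while unscheduled:
        # best (completion time, processor) for every unscheduled task
        best = {t: min((load[p] + exec_time[t][p], p) for p in range(num_procs))
                for t in unscheduled}
        task, (finish, proc) = min(best.items(), key=lambda kv: kv[1][0])
        schedule[task] = proc
        load[proc] = finish
        unscheduled.remove(task)
    return schedule, load

exec_time = [[5, 6, 8], [7, 9, 4], [8, 3, 5], [2, 2, 6]]  # 4 tasks on 3 processors
print(min_min(exec_time, 3))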
Fig 6. Overview of hybrid load balancing
VI. CONCLUSION
In this paper we proposed a hybrid load balancing scheme which combines the Min-Min algorithm and the Genetic algorithm. The Min-Min algorithm is used to find the earliest completion time of all tasks. When the number of processors and tasks increases, Min-Min no longer provides good output, since the order of task execution is determined only by the earliest completion time. Hence the Genetic algorithm is used to find a near-optimal solution, using the makespan and load-balancing factors. A modified crossover is also used to avoid the twin-removal problem, which provides an optimal schedule for load balancing.
REFERENCES
[1] Zeng and Bharadwaj Veeravalli, Rate-Based and Queue-Based
Dynamic Load Balancing Algorithms in Distributed Systems,
Proceedings of the Tenth International Conference on Parallel and
Distributed Systems (ICPADS04), 1521-9097, 2004.
[2] Belabbas Yagoubi, Hadj Tayeb Lilia and Halima Si Moussa, Load
Balancing in Grid Computing, Asian Journal of Information
Technology 5(10): 1095-1103, 2006.
[3] Feng Zhang, Andryas Mawardi, Eugene Santos Jr.,Ranga Pitchumani
and Luke E.K. Achenie, Examination of load-balancing methods to
improve efficiency of a composite materials manufacturing process
simulation under uncertainty using distributed computing,
ELSEVIER Journal on Future Generation Computer Systems 22, pp. 571-587, 2006.
[4] Issam Al-Azzoni and Douglas G. Down, Decentralized Load
Balancing for Heterogeneous Grids, Computation World: Future
Computing, Service Computation, Cognitive, Adaptive, Content,
Patterns, 2009.
[5] Riky Subrata, Albert Y.Zomaya and Bjorn Landfeldt, Artificial Life
Techniques for Load Balancing in Computational Grids, Elsevier
Journal of Computer and System Science, 73 (2007) 1176-1190.
[6] Kuo-Qin Yan, Shun-Sheng Wang, Shu-Ching Wang and Chiu-Ping
Chang, Towards a hybrid load balancing policy in grid computing
system, Journal on Expert Systems with Applications, 957-4174,
2009.
[7] Thanasis Loukopoulos, Petros Lampsas and Panos Sigalas,
Improved Genetic Algorithms and List Scheduling Techniques for
Independent Task Scheduling in Distributed Systems, Eighth
International Conference on Parallel and Distributed Computing,
Applications and Technologies, 7695-3049, 2007.
[8] Ted Scully and Kenneth N. Brown, Wireless LAN load balancing
with genetic algorithms, ELSEVIER Journal on Knowledge-Based
Systems 22 (2009) 529-534.
[9] Azzedine Boukerche and Robson Eduardo De Grande, Dynamic
Load Balancing Using Grid Services for HLA-Based Simulations on
Large-Scale Distributed Systems, 13th IEEE/ACM International
Symposium on Distributed Simulation and Real Time Applications,
1550-6525, 2009.
[10] Issam Al-Azzoni and Douglas G. Down, Dynamic Scheduling for
Heterogeneous Desktop Grids, 9th Grid Computing Conference,
978-1-4244-2579-2, 2008.
[11] Maheen Islam and Upama Kabir, A Dynamic Load Balancing
Approach for Solution Adaptive Finite Element Graph Applications
on Distributed Systems,
[12] Ilias K. Savvas and M-Tahar Kechadi, Efficient Load Balancing on
Irregular Network Topologies Using B+tree Structures, Sixth
International Symposium on Parallel and Distributed Computing
(ISPDC'07), 0-7695-2936-4, 2007.
[13] Shakti Mishra, D.S.Kushwaha and A.K.Misra, Jingle Mingle: A
Hybrid Reliable Load Balancing Approach for a Trusted Distributed
Environment, Fifth International Joint Conference on INC, IMS and
IDC, 978-0-7695-3769-6, 2009.
[14] Yajun Li, Yuhang Yang, Maode Ma and Liang Zhou, A hybrid load
balancing strategy of sequential tasks for grid computing
Environments, Journal on Future Generation Computer Systems 25
(2009) 819-828.
[15] Jong-Chen Chen, Guo-Xun Liao, Jr-Sung Hsie, Cheng-Hua Liao, A
study of the contribution made by evolutionary learning on dynamic
load-balancing problems in distributed computing systems,
ELSEVIER Journal on Expert Systems with Applications 34 (2008)
357-365.




KRIDANTA ANALYZER

N. Murali
1

Dept. of Computer Applications,
S.V. Vedic University,
Tirupati.
murali.nandi@gmail.com

Dr. R.J. Ramasree
2

Reader & HoD,
Dept. of Computer Science
Rashtriya Sanskrit Vidyapeetha,
Tirupati.
rjramasree@yahoo.com


ABSTRACT - This paper briefly describes the KRIDANTA, the roles of Upapadam, Upasarga (Prefix) and KRITPRATYAYAs (Suffixes), and the information encoded in a KRIDANTA. It also describes in detail the KRIDANTA ANALYZER, which was developed as part of the research work for the degree of Doctor of Philosophy to be submitted to the Department of Computer Science, Sri Chandrasekharendrasaraswathi Viswa Mahavidyalayam, Kanchipuram, India.

Keywords - Machine Learning, Natural Language Processing, Morphological Analyzer, Kridanta, Upapadam, Upasarga.

I. INTRODUCTION
Sanskrit is one of the oldest and most complex languages in the world. Many grammars exist for Sanskrit, of which Panini's Astadhyayi is the most widely accepted throughout the world. The complexity of the language lies in the fact that many affixes can change the meaning of a word completely. In Sanskrit, all non-verbal categories are subanta-padas [2], but sometimes a KRIDANTA may also behave as a subanta. Many morphological analyzers for Sanskrit have failed to identify the information encoded in a KRIDANTA correctly ([1], [7]). In the field of Natural Language Processing, only a few research groups are working on the Sanskrit language. A morphological analyzer is a tool used to analyze natural language text and is useful for developing NLP applications. People at large believe that a vast literature containing highly advanced scientific knowledge is available in this language. The Constitution of India recognizes the Devanagari script as the standard script for Sanskrit. However, in ancient days, and to some extent at present, people in various regions use different scripts, i.e. they read the Sanskrit language in the script of their own mother tongue. Roman notation was used for computational purposes in the present study.

A KRIDANTA is formed when a verbal root gets added with certain suffixes; the result may be a noun, an adjective or an indeclinable. These suffixes are called KRITPRATYAYAs and their derivatives are called KRIDANTAs. For example, consider a KRIDANTA meaning "the one who makes others happy"; this word is derived from the combination of a verbal root and a suffix. Sanskrit is a morphologically rich language, and because of the nature of the language the meaning of a word may change when another word is added to it. The words which may change the meaning of another word in this way are classified as Upapadam and Upasarga (Prefix). Upapadam means the word preceding any word on which the preceding word has some effect. If a KRIDANTA contains both an Upapadam and an Upasarga, the Upapadam comes first and the Upasarga next. A KRIDANTA can contain any number of Upapadas and Upasargas (Prefixes). The following examples illustrate the complexity involved in analyzing a Kridanta.
a. (vi + ramu + Ghai) means "rest / relax" and is derived from the Prefix (vi), the Verbal Root (ramu) and the Suffix (Ghai).
b. (A + ramu + Ghai) means "abode" and is derived from the Prefix (A), the Verbal Root (ramu) and the Suffix (Ghai).
c. (nir + vi + ramu + Ghai) means "continuously" and is derived from the Prefixes (nir and vi), the Verbal Root (ramu) and the Suffix (Ghai).
d. (ananta + ramu + Ghai) means "the one who makes others infinitely happy" and is derived from the Upapadam (ananta), the Verbal Root (ramu) and the Suffix (Ghai).
The main source of grammatical information followed is the Siddhantakaumudi. Nearly 130 suffixes are mentioned in the Kridantaprakaranam of the Siddhantakaumudi; for the present study we have taken 109 suffixes, as the remaining suffixes deal with Vedic svaras. The verbal information was obtained from the Dhaturatnakara written by Muni Lavanya Vijayasuri; nearly 2000 Verbal Roots available in the Sanskrit language are listed in this monumental work. One or more Suffixes can be added to almost all Verbal Roots, resulting in a vast number of KRIDANTA RUPAS.

When a morphological analyzer analyzes a KRIDANTA, it should give information about the Upapada, the Upasarga (Prefix), the Verbal Root and the Suffix. At present, however, no morphological analyzer [1] gives the information in this manner. Hence, keeping in view the relevance of identifying a KRIDANTA by a morphological analyzer for the Sanskrit language, this KRIDANTA ANALYZER was developed; it is described comprehensively below. The following picture describes the functioning of the Kridanta Analyzer.
[Figure: block diagram of the Kridanta Analyzer. The KRIDANTA PRATIPADIKA is the input; the analyzer consults the Upapada and Upasarga sources and, after a suffix-verbal-root match, outputs the Upapadam / Verbal Root / gaNa / paxi / set or vet or anit / XhAtu No / Suffix information.]
II. KRIDANTA ANALYZER
The KRIDANTA Analyzer is rule based. According to Panini, when a Suffix is added to a Verbal Root, mainly three variations may occur in the Verbal Root. The first two variations are termed VRIDDHIH and GUNAH, and the third variation is that there is no change in the Verbal Root. VRIDDHIH means that if the Verbal Root ends in certain vowels, each is changed to its corresponding vriddhi form; GUNAH means that if the Verbal Root ends in certain vowels, each is changed to its corresponding guna form. In the third variation no change occurs to the Verbal Root when certain suffixes get added to it (e.g. the suffix vic). By following these principles, the phonetic variations of all the 2000 Verbal Roots indicated in the Dhaturatnakarah were generated. In the same way the possible phonetic variations of the Upapadas, Upasargas (Prefixes) and Pratyayas (Suffixes) mentioned in the Siddhantakaumudi were also generated.

A. Description of the KRIDANTA ANALYZER
The KRIDANTA ANALYZER takes a KRIDANTA PRATIPADIKA as input and first checks for it in the KRIDANTA source; if it is found there, the analyzer displays the details as mentioned in the example of 2.6. If the KRIDANTA PRATIPADIKA contains an Upapada, a Prefix, a Prefix along with another Prefix, or an Upapada along with a Prefix, the entry cannot be found in the KRIDANTA source; in that case the analyzer checks the Upapada and Upasarga sources and displays the details such as the Upapada, Upasarga, Verb and Suffix information.

B. Upapada
A file of all Upapadas was generated with the possible phonetic variations of the Upapadas. This file contains a total of 710 entries which describe the possible forms of the Upapadas. The fields are separated by a comma. The following example describes entries in the file:

ananwa,ananwa
ananw,ananwa
sva,sva
sv,sva


The right part describes the Upapada and the left
part describes the possible phonetic variation of
the upapada.

C. Upasarga (Prefix)
A file of all Upasargas was generated with the possible phonetic variations of the Upasargas. This file contains a total of 124 entries which describe the possible forms of the Upasargas. The fields are separated by a comma. The following example describes entries in the file:

A,Af
nir,nir
niH,nir
nil,nir
vi,vi
vy,vi
v,vi

The right part describes the Prefix and the left part
indicates the possible phonetic change of the
prefix.

D. Dhatu
A file of all possible phonetic changes of all 2000 Verbal Roots was generated. This file contains a total of 10169 entries corresponding to all the 2000 Verbal Roots described in the Dhaturatnakara. The fields are separated by a comma. The following example describes entries in the file for the Verbal Root (ram):
"ram","ram/ramaz(ramuz)/1/A/a/853"
"ran","ram/ramaz(ramuz)/1/A/a/853"
"ra","ram/ramaz(ramuz)/1/A/a/853"
"raM","ram/ramaz(ramuz)/1/A/a/853"
"rAm","ram/ramaz(ramuz)/1/A/a/853"
"raw","ram/ramaz(ramuz)/1/A/a/853"
"rem","ram/ramaz(ramuz)/1/A/a/853"
"riraM","ram/ramaz(ramuz)/1/A/a/853"
"raMram","ram/ramaz(ramuz)/1/A/a/853"
"iR","iR/iRaz/4/pa/se/1127"
"IR","iR/iRaz/4/pa/se/1127"
"eR","iR/iRaz/4/pa/se/1127"
"ER","iR/iRaz/4/pa/se/1127"
"iR","iR/iRaz/6/pa/se/1351"
"IR","iR/iRaz/6/pa/se/1351"
"eR","iR/iRaz/6/pa/se/1351"
"ER","iR/iRaz/6/pa/se/1351"
"iR","iR/iRaz/9/pa/se/1525"
"IR","iR/iRaz/9/pa/se/1525"
"eR","iR/iRaz/9/pa/se/1525"
"ER","iR/iRaz/9/pa/se/1525"
The right part describes the Verbal
Root/gaNa/paxI/set or vet or anit/XAwu Number
and the left part describes the possible phonetic
variations of the Verbal Root.

E. Pratyaya (Suffix)
A file of all Suffixes was generated with the possible phonetic changes of the Suffixes. This file contains a total of 197 entries which describe all possible forms of a total of 109 Suffixes. The fields are separated by a comma. The following example describes entries in the file:
anIya,anIyar
aNIya,anIyar
nIya,anIyar
NIya,anIyar
a,GaF
The right part is the Suffix and the left part is the
possible phonetic variations of the Suffix.

F. KRIDANTA
With the help of the Dhatu and Pratyaya sources, all possible KRIDANTA forms were generated and sorted in alphabetical order, which makes searching for a possible KRIDANTA form easier. This file contains 109 forms for each Verbal Root in the Dhatu source. Some suffixes agree only with certain Verbal Roots, while other suffixes are applicable to all 2000 Verbal Roots; no matter whether a combination is applicable or not, this source contains each entry in the Pratyaya file concatenated with each entry in the Dhatu file, which introduces some unnecessary entries into the source. This can be tolerated as there is no scarcity of computer memory and speed. The fields are separated by a '$' (dollar) symbol. The following example describes the possible forms of the Verbal Root ramu with various Suffixes in the KRIDANTA file:
rAma$rA/rA/2/pa/a/1057$wa
rama$ram/ramaz(ramuz)/1/A/a/853$a
raMa$ram/ramaz(ramuz)/1/A/a/853$a
rAma$ram/ramaz(ramuz)/1/A/a/853$af
rama$ram/ramaz(ramuz)/1/A/a/853$aN
raMa$ram/ramaz(ramuz)/1/A/a/853$aN
rAma$ram/ramaz(ramuz)/1/A/a/853$aN
rAma$ram/ramaz(ramuz)/1/A/a/853$da
raMa$ram/ramaz(ramuz)/1/A/a/853$Ga
rAma$ram/ramaz(ramuz)/1/A/a/853$Ga
rama$ram/ramaz(ramuz)/1/A/a/853$GaF
raMa$ram/ramaz(ramuz)/1/A/a/853$GaF
rAma$ram/ramaz(ramuz)/1/A/a/853$GaF
rama$ram/ramaz(ramuz)/1/A/a/853$ka

The first part indicates the possible phonetic variation of the Verbal Root, the second part indicates the details of the Verbal Root and the third part indicates the Suffix.
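A rough Python sketch of the lookup procedure follows (the in-memory tables are tiny excerpts of the entries shown above, and the function and variable names are illustrative only, not the actual analyzer): a surface form is first looked up directly in the KRIDANTA source; if it is not found, a possible Upapada or Upasarga variant is stripped from the front and the remainder is looked up again.

# Miniature versions of the sources shown above (variant -> canonical form / details).
UPASARGA = {"A": "Af", "nir": "nir", "niH": "nir", "vi": "vi", "vy": "vi", "v": "vi"}
UPAPADA = {"ananwa": "ananwa", "ananw": "ananwa", "sva": "sva", "sv": "sva"}
# KRIDANTA source: surface form -> list of (verbal-root details, suffix)
KRIDANTA = {
    "rAma": [("ram/ramaz(ramuz)/1/A/a/853", "GaF")],
    "rama": [("ram/ramaz(ramuz)/1/A/a/853", "a")],
}

def analyze(word):
    """Direct lookup first; otherwise strip a possible upapada/upasarga and retry."""
    if word in KRIDANTA:
        return [("", root, suffix) for root, suffix in KRIDANTA[word]]
    results = []
    for table in (UPAPADA, UPASARGA):
        for variant, canonical in table.items():
            if word.startswith(variant) and word[len(variant):] in KRIDANTA:
                results += [(canonical, root, suffix)
                            for root, suffix in KRIDANTA[word[len(variant):]]]
    return results

print(analyze("rAma"))     # plain kridanta form
print(analyze("virAma"))   # prefix vi stripped, remainder found in the KRIDANTA source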

III. LIMITATIONS & CONCLUSION
There are some limitations in this
KRIDANTA ANALYZER,
KRIDANTA ANALYZER works
correctly when KRIDANTA
PRATIPADIKA was given as input
Some times it may also show some
wrongly coupled Suffixes along with
the correct one which should be
filtered out.
Suffixes which are classified as Unadi
and Sanach Suffixes were not
considered for the present study.
To overcome these difficulties now we are
improving KRIDANTA ANALYZER even to
recognize the KRIDANTA PRATIPADIKA,
Vibhakti i.e. Case and Gender when we feed
Kridanta Sabda. If we identify the Case and
Gender then most of the unnecessary information
can be filtered out. Even though, the results are
displayed based on inbuilt ranking system which
works on the assumption that some Suffixes can
be coupled only with some Verbal Roots and
some Suffixes can be coupled with any Verbal
Root. Based on this intuition the Suffixes which
can be coupled with any Verbal Root are given
secondary priority and the Suffixes which can be
coupled only with some Verbal Roots are given
high priority. This inbuilt ranking system should
be improved. This can be done only after the
identification of Case and Gender.
ACKNOWLEDGEMENT
We hereby express our deep sense of gratitude to Vyakarana Siromani Mahamahopadhyaya Prof. K.V. Ramakrishnamacharyulu, Dept. of Vyakarana, Rashtriya Sanskrit Vidyapeetha, Tirupati, who is the driving force behind us. He is the first person to have worked on this task and was kind enough to give us the linguistic information. We are also thankful to Dr. Manjokumar Mishra, Asst. Professor in Nirukta, S.V. Vedic University, Tirupati, who gave us valuable suggestions in completing this work. Finally we thank the students of Rashtriya Sanskrit Vidyapeetha, Mr. Veluvarti Srinvasa Narayana, Rohit Salunke Vishnu and Revuru Kodandapani, who helped us by providing grammatical inputs and helped us in testing this tool.
REFERENCES
[1] N. Murali, Evaluation of Sanskrit
Morphological Analyzers,
(July, 2010-December, 2010) a
research journal of Sri Venkateswara
Vedic University, Tirupati.
[2] Subhash Chandra, Automatic Nominal
Morphological Recognizer and Analyzer for
Sanskrit: Method and Implementation
[3] Amba Kulkarni, A Constraint based
Dependency Parser for Sanskrit, at the one
day International Seminar 'Indian Linguistic
Thought in Retrospect and Prospect' on 19th
Feb 2010 at Calicut University, Kerala
[4] Amba Kulkarni, Panini: An Information
Scientist at 1 day Symposium on Dr Vineet
Chaitanyaji's 65th Birthday, on 4th April,
2009 at IIIT-Hyderabad
[5] Amba Kulkarni, Panini's Ashtadhyayi: A
Computer Scientist's viewpoint, at 4th Asia
Pacific Conference of Computational
Philosophy, NIAS, IISC, Bangalore.
[6] Gérard Huet, INRIA Rocquencourt, Formal
structure of Sanskrit text: Requirements
analysis for a mechanical Sanskrit processor,
Sanskrit Computational Linguistics 1 & 2,
Springer-Verlag LNAI 5402.
[7] Girish Nath Jha, Muktanand Agrawal,
Subash, Sudhir K. Mishra, Diwakar Mani,
Diwakar, Mishra, Manji Bhadra, Surjit K.
Singh, Inflectional Morphology Analyzer
for Sanskrit
[8] Amba Kulkarni and Devanand
Shukla. 2009. Sanskrit
Morphological analyzer: Some
Issues. To appear in Bh.K
Festschrift volume by LSI.

***



IMPLEMENTATION OF UART DESIGN USING VHDL


AMOL B. DHANKAR
Lecturer, Electronics and Communication
Dept., Kulguru Institute of Technology
And Science, Ramtek, DT. Nagpur
(M.S.) India-441106
Email: - dhankar_amol@indiatimes.com

C. N. BHOYAR
Lecturer, Electronics Engineering Dept.,
Priyadarshini College of Engineering,
Near C.R.P.F. Campus, Hingana Road,
Nagpur (M.S.) India - 440019
Email: - cbhoyar@gmail.com



Abstract - VHDL modeling and simulation of a Universal Asynchronous Receiver/Transmitter (UART) for parallel-to-serial and serial-to-parallel data communication is presented. The transmitter and receiver have been implemented using a VHDL approach which allows reconfigurability of the proposed system. The power consumption and area are much lower than those of a conventional discrete-IC design, which is a prerequisite for any system designer. The design has been synthesized on the Spartan-2 FPGA family. The simulation results have been found satisfactory and are in conformity with theoretical observations.


Keywords - UART, Transmitter and Receiver.
INTRODUCTION
Most computers and microcontrollers have one or more serial data ports used to communicate with serial I/O devices such as keyboards and serial printers. By using a MODEM (Modulator-Demodulator) connected to a serial port, serial data can be transmitted to and received from a remote location via telephone lines, as shown in FIG (1). The serial communication interface which receives and transmits serial data is often called a UART. RXD is the received serial data signal and TXD is the transmitted data signal.



FIG (1): UART data communication


Standard Format for Serial Data Transmission

FIG (2) shows the standard format for serial data transmission. Since there is no clock line, the data (D) is transmitted asynchronously, one byte at a time. When no data is being transmitted, D remains high. To mark the start of transmission, D goes low for one bit time, which is referred to as the start bit. Then eight data bits are transmitted, LSB first. When text is being transmitted, ASCII code is usually used; in that case each alphanumeric character is represented by a 7-bit code, and the eighth bit may serve as a parity check bit. After the eight bits are transmitted, D must go high for at least one bit time, which is referred to as the stop bit. Transmission of another character can then start at any time. The number of bits transmitted per second is referred to as the BAUD rate.


FIG (2): Standard format for serial data transmission.

UART Design:-

FIG (3) shows the UART connected to the 8-bit data bus.
The following six 8-bit registers are used:

RSR: - Receive shift register

RDR: - Receive data register

TDR:-Transmit data register

TSR: - Transmit shift register

SCCR: - Serial communications control register

SCSR: - Serial communications status register
RDR, TDR, SCCR and SCSR are memory-mapped; that is,
each register is assigned an address in the microcontroller
memory space.



FIG (3): UART design

Besides the registers, the three main components of the
UART are the BAUD rate generator, the receiver control
and transmitter control. The BAUD rate generator divides
down the system clock to provide the bit clock (BClk) with
a period equal to one bit time and also BClkX8, which has a
frequency eight times the BClk frequency.

The TDRE (transmit data register empty) bit in the SCSR is set when TDR is empty. When the microcontroller is ready to transmit data, the following occurs:
1) The microcontroller waits until TDRE = 1, then loads a byte of data into TDR and clears TDRE.
2) The UART transfers the data from TDR to TSR and sets TDRE.
3) The UART outputs a start bit (0) for one bit time and then shifts TSR right to transmit the eight data bits followed by a stop bit (1).


SM Chart for UART Transmitter

FIG (4) shows the SM chart for the transmitter. The corresponding sequential machine is clocked by the microcontroller system clock (CLK). In the IDLE state, the SM waits until TDR has been loaded and TDRE is cleared. In the SYNCH state, the SM waits for the rising edge of the bit clock (BClk) and then clears the low-order bit of TSR to transmit a 0 for one bit time. In the TDATA state, each time BClk is detected, TSR is shifted right to transmit the next data bit and the bit counter (Bct) is incremented. When Bct = 9, eight data bits and a stop bit have been transmitted; Bct is then cleared and the SM goes back to the IDLE state.

FIG (4): SM chart of UART transmitter

UART Receiver

The operation of the UART receiver is as follows:
1) When the UART detects a start bit, it reads the remaining bits serially and shifts them into RSR.
2) When all data bits and the stop bit have been received, RSR is loaded into RDR and the Receive Data Register Full (RDRF) flag in the SCSR is set.
3) The microcontroller checks the RDRF flag; if it is set, RDR is read and the flag is cleared.
The bit stream coming in on RXD is not synchronized with the local bit clock (BClk). If we attempted to read RXD at the rising edge of BClk, we would have a problem whenever RXD changed near the clock edge: we could have setup and hold time problems, and if the bit rate of the incoming signal differed from BClk by a small amount, we could end up reading some bits at the wrong time.
Ideally, we should read the bit value at the middle of each bit time for maximum reliability. When RXD first goes to 0, we wait for four BClkX8 periods, which should put us near the middle of the start bit. Then we wait eight more BClkX8 periods, which should take us near the middle of the first data bit. We continue reading once every eight BClkX8 clocks until we have read the stop bit.
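The mid-bit sampling rule can be illustrated with a small Python sketch (a behavioural model only, not the VHDL design of the paper): the incoming line is oversampled at eight samples per bit, the receiver waits four samples after the falling edge of the start bit and then picks every eighth sample.

def receive_byte(samples):
    """samples: line level at the BClkX8 rate, starting at the falling edge of the start bit."""
    idx = 4                       # four BClkX8 periods -> middle of the start bit
    if samples[idx] != 0:         # start bit must still be low, otherwise it was a glitch
        return None
    bits = []
    for _ in range(8):            # eight data bits, LSB first
        idx += 8
        bits.append(samples[idx])
    idx += 8
    if samples[idx] != 1:         # stop bit must be high
        return None
    return sum(b << i for i, b in enumerate(bits))

# 0x55 = 0101_0101, sent LSB first: start(0), data 1 0 1 0 1 0 1 0, stop(1)
line = [0] * 8 + sum(([b] * 8 for b in [1, 0, 1, 0, 1, 0, 1, 0]), []) + [1] * 8
print(hex(receive_byte(line)))    # 0x55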
FIG (5) shows an SM chart for the UART receiver. Two counters are used: Ct1 counts the number of BClkX8 clocks and Ct2 counts the number of bits received after the start bit. In the IDLE state, the SM waits for a start bit (RXD = 0) and then goes to the start-detected state. The SM waits for the rising edge of BClkX8 and then samples RXD again. Since the start bit should be 0 for eight BClkX8 clocks, we should read 0; Ct1 is still 0, so Ct1 is incremented and the SM waits for the rising edge of BClkX8. If RXD = 1, this is an error condition: the SM clears Ct1 and resets to the IDLE state.




SM Chart for UART Receiver


FIG (5): SM chart of UART receiver

Otherwise, the SM keeps looping. When RXD is read as 0 for the fourth time, Ct1 is cleared and the state changes to receive-data. In this state, the SM increments Ct1 after every rising edge of BClkX8. After the eighth clock, Ct1 = 7 and Ct2 is checked. If it is not 8, the current value of RXD is shifted into RSR, Ct2 is incremented and Ct1 is cleared. If Ct2 = 8, all eight bits have been read and we should be in the middle of the stop bit. FIG (6) shows a block diagram of the BAUD rate generator. The 8-MHz system clock is first divided by 13 using a counter; this counter output goes to an 8-bit binary counter, whose flip-flop outputs correspond to divide-by-2, divide-by-4, ..., divide-by-256.


FIG (6): Baud rate generator
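The achievable BClkX8 frequencies, and the corresponding baud rates (BClk = BClkX8/8), can be checked with a few lines of arithmetic (an illustration only, not part of the paper's VHDL):

system_clock = 8_000_000            # Hz
prescaled = system_clock / 13       # ~615.4 kHz after the divide-by-13 counter
for n in range(1, 9):               # flip-flop outputs: divide by 2, 4, ..., 256
    bclk_x8 = prescaled / (2 ** n)
    print(f"divide by {2**n:3d}: BClkX8 = {bclk_x8:9.1f} Hz, baud = {bclk_x8/8:8.1f}")

The divide-by-13 prescaler makes the divided-down frequencies fall close to the standard baud rates; for example, divide-by-4 gives a BClkX8 of about 153.8 kHz, i.e. roughly 19200 baud.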

CONCLUSION

An efficient scheme for data transmission and reception using VHDL has been proposed. The proposed scheme has been synthesized and simulated for the target Spartan-2 FPGA family. It has been found that the proposed scheme is capable of supporting a range of data communication applications using microcontrollers or computers. The transmitting and receiving ends of the system have proved the efficacy of the proposed scheme, and the results have been presented in the form of various waveforms.

REFERENCES

1) Implementation of a UART controller Based on FIFO
Technique On FPGA By Shouqian Yu Lili Yi Weihai
Chen Zhaojin Wen Beijing University of Aeronaut and
Astronaut, Beijing; Industrial Electronics and Applications,
2007, ICIEA 2007, Second IEEE Conference , Publication
Date: 23-25 May 2007, On page : 2633-2638, location:
Harbin.

2) A VHDL Implementation Of UART Design With BIST
Capability By Mohd Yamani Idna Idris, Mashkuri Yaacob,
zaidi Razak, Faculty Of Computer Science and Information
Technology , University of Malaya 50603 Kuala Lumpur
Malaysia.

3) Z. Navabi, VHDL Analysis and Modeling of Digital
Systems, McGraw-Hill Inc., 1991.

4) PC16550D Universal Asynchronous
Receiver/Transmitter with FIFO, National Semiconductor
Application
Note, June 1995.

(5) M. S. Harvey, Generic UART Manual, Silicon Valley,
December 1999.




FIG (7): Simulation result of UART
transmitter



FIG (8): RTL Schematic of UART
transmitter




FIG (9): Total estimated power consumption
of UART transmitter



FIG (10): Simulation result of UART
receiver



FIG (11): RTL schematic of UART receiver





FIG (12): Power estimation of UART
Receiver




FIG (13): Simulation result of baud rate
Generator













FIG (14): RTL schematic of baud rate
generator

































FIG (15): Power estimation of baud rate
generator



































On Demand Multicast Routing Protocol with Link
Stability Database


Sangeetha.M
ME Student
Department Of Computer Science and Engineering
Thiagarajar College Of Engineering - Madurai
abisangeetha87@gmail.com


Abstract - A mobile wireless network is capable of autonomous operation; it operates without base-station infrastructure because the nodes cooperate with each other to provide connectivity for wireless communication. Multicasting is very important in mobile ad hoc networks, as it supplies information from one source node to many client nodes. Multicast routers execute a multicast routing protocol to define delivery paths that enable the forwarding of multicast datagrams across an internetwork. Existing work improves the performance of the On Demand Multicast Routing Protocol by deleting lost join query packets, which is achieved through an additional field (NOPFG); however, this has a drawback when node speed increases, as it leads to link failures. To avoid link failures, this paper provides a mesh-based multicast routing scheme that finds stable multicast paths from the source to the receivers. The link stability is computed using parameters such as the received power and the distance between two neighboring nodes, and the link quality is derived from packet bit errors. The proposed system reduces control overhead and provides stable paths in the On Demand Multicast Routing Protocol even under node mobility.

Keywords- Ad hoc network, On Demand Multicast Routing
Protocol (ODMRP), Mesh based Multicast routing, Routing
information cache, and Link stability cache, Number of Previous
Forwarding Group (NOPFG), Multicast Route Information
Cache(MRIC), Stable Forward Node (SFN).

I. INTRODUCTION
A mobile ad hoc network [7] is made up of a number of mobile nodes and needs no infrastructure. In a MANET the nodes are portable, and there is no fixed connectivity between the nodes because of node mobility. Recently, MANET applications such as multiplayer online gaming, email and file transfer, and video conferencing have grown phenomenally [3]. A mobile ad hoc network (MANET) [5] is an autonomous system of mobile hosts connected by wireless links, the union of which forms a communication network modeled as an arbitrary communication graph [1]. This is in contrast to the well-known single-hop cellular network model that supports wireless communication by installing base stations as access points. Multicasting delivers datagrams to a number of hosts identified by a single destination address and is intended for group-oriented computing; compared with sending multiple copies of a message, multicasting can improve the efficiency of the wireless links [2]. In a typical ad hoc environment, network hosts work in groups to carry out a given task. Efficient multicasting over a MANET must handle non-static group membership and the update of delivery paths due to node mobility. Multicasting in a MANET is done by flooding the packets through the network; the packets are flooded to the immediate neighboring nodes [2]. Since group-oriented communication is one of the key application classes in MANET environments, a number of MANET multicast routing protocols have been proposed [6]. These protocols are classified according to two different criteria. The first criterion [8] is how routing state is maintained and classifies routing mechanisms [16] into two types, proactive and reactive: proactive protocols maintain routing state, while reactive protocols reduce the impact of frequent topology changes by acquiring routes on demand. The second criterion [9] classifies protocols according to the global data structure used to forward multicast packets. Several approaches have been developed for multicast routing algorithms based on route prediction for the members of the group [2]. Two types of approaches [1] are available for multicast routing: tree based and mesh based. In a tree-based approach there is only one route between intercommunicating nodes, whereas a mesh-based approach has alternate routes to deliver the information if a link fails. The mesh-based approach is better [1] in an infrastructure-free setting because delivery can continue even if a link fails, and a tree-based approach is less suitable for a multicast routing protocol [1]. This mesh-based protocol is used to achieve high delivery performance in ODMRP.
The goal of this paper is to improve the performance of ODMRP by avoiding its drawback when node speed increases, namely link failures in the network, while keeping control overhead and network resource consumption low [16] based on the backward learning concept [6]. If the mobility of the nodes is low, the intersection of the new forwarding group and the previous forwarding group will be large [1], so it is possible to discard the join query packets which have been lost. Without stable links, the established paths are vulnerable to the large mobility patterns of the nodes. Thus, there is a need to develop an efficient link-stability-based multicast routing scheme that provides a better packet delivery ratio, delay and control overhead [2]. To avoid link failure caused by the increasing
speed of the nodes, the scheme maintains constant connections by choosing stable forwarding nodes (SFNs).
The rest of the paper is organized as follows: Section 2 discusses on-demand multicast mesh creation, Section 3 describes a method to find a stable path, Section 4 describes the proposed work on ODMRP, Section 5 discusses the NS2 environment, and Section 6 draws the conclusion.

II. ON DEMAND MULTICAST MESH CREATION
ODMRP [5] is a mesh-based multicast routing protocol in which only a subset of nodes forwards the multicast packets (the forwarding group concept). A soft-state approach is taken in ODMRP to maintain multicast group members, so no explicit control message is needed to leave or to join the group. In ODMRP [15], multicast routes are established and updated by the source on demand to provide group membership communication. When a source has data to send but does not have any route to reach the multicast receivers, it floods a Join-Query control packet to the entire network. This Join-Query packet is periodically broadcast to refresh the membership information and to update routes, as shown in Figure 1. When an intermediate node receives a Join-Query packet, it stores the source ID of the incoming packet, and the routing table [10] is updated with the ID of the node from which the message has been received; this is the backward learning concept. The multicast receivers then send a Join-Reply packet with a join table to their immediate neighboring nodes. Each node checks whether the next-hop node ID of one of the entries matches its own ID; if it does, the node realizes that it is on the path to the source, floods the reply packet to the next node, and the forwarding-group flag is set for these nodes. The source ID and the time [1] are recorded when a lost join query is received for each forwarding multicast group. A join query packet that does not visit any of the previous forwarding group nodes will not be useful, so such join query packets are discarded. By flooding the join query packet across the network, successive traffic flows refresh the previous and the new forwarding groups, and therefore many nodes will be part of both groups. The existing method [1] therefore deletes the join query packets that are sent to a zone isolated from the previous forwarding group. The number of previous forwarding group nodes visited by a join query packet is counted by adding an extra field, named number of previous forwarding group nodes (NOPFG), to the join query packet. For each packet, the hop count from the source and the NOPFG value are computed, and the deviation between them is checked; if it is maximum, the path is judged good for sending data.




Figure 1. Forwarding mesh creation in ODMRP [6]



III. METHOD TO FIND STABLE PATH
The multicast mesh [11] is created in two phases: 1. route request and 2. route reply. When a node has data to send, it needs to create a steady path through the network. A link stability metric [2] is used to select the stable forward node (SFN); this is better than having to rediscover a route when a link failure occurs. Link stability is computed from the received power and the distance between two adjacent nodes, while the link quality is derived from packet bit errors. When the multicast mesh is created, a multicast route information cache, a link stability database and route error packets are maintained [2] at each node of the multicast mesh.

The packet format of the RR (route request) packet [2] is as follows:

SRC. ADDR. | MULTICAST GROUP ADDR. | SEQ. NO. | RR FLAG | PREVIOUS NODE ADDR. | POWER | ANTENNA GAIN

The fields of the Multicast Route Information Cache (MRIC) [2] are as follows:

GROUP ADDR. | MULTICAST GROUP ADDR. | FWD FLAG | STABILITY FACTOR | SEQ. NO.

The content of the link stability database [2] is as follows:

NODE ID | POWER LEVEL | STABILITY FACTOR | DISTANCE | LINK QUALITY
IV. PROPOSED WORK

The performance of ODMRP has previously been improved by deleting lost join query packets [1], but this leads to link failures when the speed of the nodes increases. The main contribution of this paper is to avoid link failures by selecting stable forward nodes based on the distance between two nodes, the received power and the stability factor of a node. Each node maintains a multicast route information cache and a link stability database.

The distance between two neighboring nodes is computed using the propagation model [2]:

    Pr(d) = (Pt * Gt * Gr * lambda^2) / ((4 * pi)^2 * d^2 * L)

where Pt is the transmitted power, Gt and Gr are the antenna gains of the transmitter and the receiver, L is the system loss, lambda is the wavelength and d is the distance between the two nodes.

[Figure 2. Route request path from S to R1 and R2 [2]: source S reaches the receivers R1 and R2 through the intermediate nodes a, b and c; the figure distinguishes RR packets from control messages.]

The stability factor [2] is computed as

    S_ij = (Pw_ij * q_ij) / d_ij

where Pw_ij is the received signal strength, q_ij is the link quality and d_ij is the distance between nodes i and j.

SELECT STABLE FORWARD NODE

SFN selection is done for all forwarding nodes: the node with the highest stability factor is taken as the next hop towards the destination (group id). For example, in Figure 3, the SFN selected at R1 is node b, since its stability factor S = 0.7 is higher than that of node a, whose S = 0.5. As b belongs to the forwarding nodes, it updates FW flag = 11 in its MRIC; this node is an SFN. Figure 3 gives a complete example of SFN selection from S to R1, R2 and R3 based on the stability factor.
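A small sketch of this selection rule follows (the numerical values are chosen only for illustration; in practice the received power, link quality and distances would come from the link stability database):

def stability(received_power, link_quality, distance):
    """S_ij = (Pw_ij * q_ij) / d_ij for the link between nodes i and j."""
    return received_power * link_quality / distance

def select_sfn(candidates):
    """Pick the neighbouring forwarding node with the highest stability factor."""
    return max(candidates, key=lambda c: stability(c["pw"], c["q"], c["d"]))

# candidate next hops seen at receiver R1 (illustrative numbers only)
neighbours = [
    {"id": "a", "pw": 2.0e-9, "q": 0.8, "d": 120.0},
    {"id": "b", "pw": 3.5e-9, "q": 0.9, "d": 90.0},
]
print(select_sfn(neighbours)["id"])   # node b has the larger stability factor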

In Figure 2, source S floods the Route Request packet to discover the route to the two multicast receivers R1 and R2. Nodes a, b and c receive the Route Request packet from the source and update the paths to the source in their MRICs, using the source as the next hop; they also update the link stability database and the stability factor of the next hop in the MRIC. Node a broadcasts the Route Request packet to R1 and b, node c broadcasts to b and R2, and node b broadcasts to a, R2, R1 and c. Node b finds that some of these packets are duplicates of a Route Request packet already received, so they are discarded by node b, which is indicated by a cross mark in the figure. Similarly, nodes a and c discard duplicate Route Request packets received from b, and R2 and R1 discard duplicates from nodes c and a, respectively. R2 and R1 update their MRICs and link stability databases. Now, R2 and R1 have paths to the source
S: R1-a-S, R1-b-S, R2-c-S, and R2-b-S.

[Figure 3. SFN selection of ODMRP based on stability factor [2]: each link between S, a, b, c, R1, R2 and R3 is labelled with its stability factor (values between 0.2 and 0.7); at R1, node b with S = 0.7 is selected over node a with S = 0.5.]


V. NS2 ENVIRONMENT
NS2 [4] is a discrete event simulator; it is an object-oriented network simulator targeted at networking research. It provides substantial support for simulating multi-hop wireless networks, complete with physical and IEEE 802.11 MAC layer models. NS is primarily useful for simulating local and wide area networks, and it supports TCP, routing and multicast protocols over wired and wireless (local and satellite) networks.

NS2 [12] began as a variant of the REAL network simulator and has evolved substantially over the past few years. Its development was supported by DARPA through the VINT project at LBL, Xerox PARC, UCB and USC/ISI. Currently NS development is supported through DARPA with SAMAN and through NSF with CONSER, both in collaboration with other researchers including ACIRI [14].

In NS2 [3], the user defines arbitrary network topologies,
composed of routers, links and shared media. Protocol
instances can then be attached to nodes. NS2 does not by
itself support any wireless multicasting protocols but there
have been a few multicast implementations in NS2.

A systems programming language is required for the detailed simulation of protocols, for manipulating bytes and packet headers and for running algorithms [13] over large data sets. For these tasks, execution speed is more important than the turn-around time of changing the simulation, finding bugs, fixing them, recompiling and re-running. C++ is fast to run but slower to change, making it a good programming language for this purpose [6].

Otcl (Object Oriented Tcl) [4] can be changed very
quickly even though it runs pretty slow. This makes Otcl
ideal for simulation configuration. NS2 (via Tclcl- A
C++/Tcl interface) provides glue to make objects and
variables appear on both languages. Otcl is the Object
Oriented scripting language Tcl (Tool Command Language).
Tclcl [14] provides linkage for class hierarchy, object
instantiation, variable binding and command dispatching.

The simulator supports a class hierarchy in C++ [4] (also called the compiled hierarchy in this document) and a similar class hierarchy within the OTcl interpreter (also called the interpreted hierarchy). The two hierarchies are closely related to each other; from the user's perspective, there is a one-to-one correspondence between a class in the interpreted hierarchy and one in the compiled hierarchy.

The root of this hierarchy is the class TclObject. Users
create new simulator objects through the interpreter; these
objects are instantiated within the interpreter, and are closely
mirrored by a corresponding object in the compiled
hierarchy. In the class hierarchy of NS components (Otcl
class), the TclObject class is the root of the hierarchy. As an
ancestor class of TclObject, NsObject class is the superclass
of all basic network component objects that handle packets.
The basic network components are further divided into two
subclasses, Connector and Classifier, based on the number of
the possible output data paths.

VI. CONCLUSION
ODMRP is a multicast routing protocol that sends information to more than one receiver. It is a mesh-based rather than a conventional tree-based multicast scheme and uses a forwarding group concept, i.e. only a subset of nodes forwards the multicast packets via scoped flooding. It applies on-demand procedures to dynamically build routes and maintain multicast group membership, and it maintains the mesh based on soft state (when a node wishes to leave the group, it simply stops sending request/reply packets to the group); no explicit control message is required to leave. Efficiency is improved by deleting lost join query packets, which works well when mobility in the network is low. However, there is one disadvantage: when the speed of the nodes increases, the scheme is affected by link failures. That is overcome by the mesh creation of ODMRP which selects the stable forward node, i.e. the forwarding node with the maximum stability. This improves link quality, avoids link failures, provides scalability, reduces control overhead and supports delay-constrained routing.



REFERENCES


[1] Kamran Abdollahi, Alireza Shams Shafigh, Andreas J.
Kassler, Improving performance of On Demand Multicast Routing
Protocol by deleting lost join query packets, Sixth Advanced
International Conference on Telecommunications, Issue 9-15, pp. 316-322, June 2010.
[2] Rajashekhar Biradar, Sunilkumar Manvi, Member, IACSIT , Mylara
Reddy , Mesh based multicast routing in MANET: stable link based
approach,International Journal of Computer and Electrical
Engineering, Vol. 2, No. 2, April, 2010.
[3] Luo Junhai, Ye Danxia, Xue Liu, and Fan Mingyu, A Survey of
Multicast Routing Protocols for Mobile Ad-Hoc Networks, IEEE
Computer Communications Surveys and Tutorials, Vol. 11, No. 1, pp.
78-91,March 2009.
[4] E. Baburaj and V. Vasudevan, "Performance evaluation of multicast
routing protocols in MANETs with Realistic Scenarios", IETECH
Journal of Advanced Computations, Vol. 2, No. 1, pp. 15-20, 2008.
[5] Qing Dai, Jie Wu,Computation of minimal uniform transmission
range in ad hoc wireless networks, Cluster Computing, Springer
Science, Vol.8, pp.127-133,Jan2005.
[6] C. d.M. Cordeiro, H. Gossain, and D.P. Agrawal ,"Multicast Over
Wireless Mobile Ad Hoc Networks:Present and Future Directions",
IEEE Network, Issue 3-2, Vol. 17, pp. 52-59, Jan 2003.
[7] H. Moustafa and H. Labiod, "A performance comparison of
Multicast routing protocols in ad-hoc networks", The 14th IEEE 2003
International Symposium on personal, indoor and mobile radio
communication proceedings, pp. 497-501, 2003.
[8] Y. Zhao, Y. Xiang, L. Xu, and M. Shi, "On-Demand Multicast Route
Protocol with Multipoint Relay(ODMRP-MPR) in Mobile Ad-hoc
Wireless Network", In Proceeding of the International Conference on
Communications Technology (ICCT),Vol. 2, pp.1295-1300,April
2003.


[9] Moustafa H, Laboid, H, A multicast on-demand mesh-based routing
protocol in multihop mobile wireless networks, Proceedings of IEEE
58th Vehicular Technology Conference, VTC 2003, Vol.4, No.6, pp.
2192-2196,2003.
[10] S.-J. Lee, W. Su, and M. Gerla, "On-demand multicast routing
protocol in multihop wireless mobile networks", ACM/Kluwer
Mobile Networks and Applications, vol. 7, no. 6, pp. 441-452,
December 2002.
[11] H. Gossain, C.M. Cordeiro, and D.P. Agrawal, "Multicast: wired to
wireless", IEEE Communications Magazine, Vol.40, Issue 6, pp. 116-
123, June 2002.
[12] L. Klos and G. Richard III, "Reliable Group Communication in an
Ad Hoc Network", In Proceedings of the IEEE International
Conference on Local Computer Networks (LCN),pp.458-459, 2002.
[13] William Su Sung-Ju Lee, Mario Gerla, On-demand multicast
routing protocol in multihop wireless mobile networks, Mobile
Networks and Applications, Kluwer Academic Publishers, Vol. 7,
2002, pp. 441-453.
[14] B. S. Manoj Subir Kumar Das, C. Siva Ram Murthy, A dynamic
core based multicast routing protocol for ad hoc wireless networks,
Proceedings of the 3rd ACM International Symposium on Mobile Ad
Hoc Networking and Computing, Switzerland, pp. 24-35.
[15] J.-G. Jetcheva and D.-B. Johnson, "Adaptive Demand-Driven
Multicast Routing in Multi-Hop Wireless Ad Hoc Networks",
Proceedings of ACM MobiHoc'01, Long Beach, CA, pp. 33-44,
October 2001.
[16] Ching-Chuan Chiang, Mario Gerla, Lixia Zhang,Forwarding group
multicast protocol (FGMP) for multihop, mobile wireless networks,
Cluster Computing, Vol. 1, No. 2, 1998, pp. 187-196.

WSD and its application in information retrieval: an overview
Anand Prakash Dwivedi (1), Sanjay K. Dwivedi (2), Vinay Kumar Pathak (3)
(1) Maharana Institute of Professional Studies, Kanpur, India
(2) Dept. of Computer Science, B.B. Ambedkar (Central) University, Lucknow, India
(3) Uttarakhand Open University, Haldwani, India
(dwivedi_anand@hotmail.com, skd200@yahoo.com, vinaypathak.hbti@gmail.com)

Abstract - Word Sense Disambiguation (WSD) is defined as the problem of computationally determining the correct or exact sense of a word in a particular context. WSD is a significant technique which has been of interest and concern to researchers since the early days of natural language processing (NLP). Although hundreds of successful algorithms have been developed to date for WSD applications, researchers still find it difficult to choose the optimal WSD algorithm for their specific needs. The aim of this paper is to classify WSD algorithms on the basis of the information sources used. The paper provides detailed and diversified knowledge of WSD application areas to help resolve the problem of selecting the appropriate information source for a WSD algorithm.

Key Words: word sense disambiguation, information retrieval, synsets, natural language processing.
1. Introduction

WSD is essentially a task of classification: word senses are the classes, the context provides the evidence, and each occurrence of a word is assigned to one or more of its possible classes based on that evidence. This is the traditional and common characterization of WSD, which sees it as an explicit process of disambiguation with respect to a fixed inventory of word senses. However, one cannot limit the definition of WSD to only disambiguating the senses of words in a given context. WSD has obvious relationships to other fields whose main endeavor is to define, analyze and ultimately understand the relationships between word, meaning and context. Word meaning is at the heart of the problem of WSD, and the importance of WSD has been widely acknowledged in computational linguistics. WSD is not thought of as an end in itself, but as an enabler for other tasks and applications of computational linguistics and natural language processing (NLP) such as parsing, semantic interpretation, machine translation, information retrieval, text mining and (lexical) knowledge acquisition. WSD has been related to various computational areas such as Machine Translation (MT), Information Retrieval and Hypertext Navigation (IR), Content and Thematic Analysis (CTA), Grammatical Analysis (GA), Speech Processing (SP) and Information Extraction (IE).
The task of WSD can be divided into two phases: (1) the determination of all the different senses for every word in the text, and (2) the assignment of each occurrence of a word to the appropriate sense. Much of the recent work relies on pre-defined senses that can be found in everyday dictionaries, thesauri and bilingual dictionaries. Step (2) is done by relying on two major sources of information: the context of the ambiguous word and external knowledge sources.



Figure 1: The task of Word Sense Disambiguation (the input text passes through a "collect senses" step and an "assign correct senses for words" step, feeding applications such as machine translation and information retrieval).

The evaluation of WSD systems is based on two main performance measures:
Precision: the fraction of the sense assignments made by the system that are correct.
Recall: the fraction of the total word instances correctly assigned by the system.
If a system makes an assignment for every word, then precision and recall are the same and can be called accuracy.
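For concreteness, the two measures can be computed as follows (a minimal sketch with hypothetical counts, not taken from any particular evaluation):

def precision_recall(correct, assigned, total_instances):
    """Precision over the assignments made, recall over all word instances."""
    precision = correct / assigned
    recall = correct / total_instances
    return precision, recall

# e.g. 80 correct answers out of 90 assignments on 100 ambiguous word instances
print(precision_recall(80, 90, 100))   # (0.888..., 0.8)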
Besides the above-mentioned performance measures, semantic/communicative distance and the Receiver Operating Characteristic (ROC) are also used as performance measures.
Semantic/communicative distance: a cost which weighs misclassification penalties by the distance between the predicted and the correct senses.
Receiver Operating Characteristic (ROC): a ROC graph plots the tradeoff between the true positive rate and the false positive rate of a binary classifier as a threshold value is modified. The true positive rate (TPR, or recall) is defined as the proportion of positive instances predicted as positive. The false positive rate (FPR, or fallout) is defined as the proportion of negative instances predicted as positive. The rationale behind graphing the relationship between these two factors for a given classifier is that various uses of the classifier may demand different optimization criteria, such as maximizing the TPR given a highest acceptable FPR, or finding the optimal classifier given the costs of errors and the class distribution.

The remainder of the paper is organized as follows: in section 2 we discuss the various knowledge types used by WSD algorithms, in section 3 we classify the various WSD algorithms on the basis of the information resource used, and in section 4 we discuss various application areas of WSD. A short conclusion finishes the paper.

2. Knowledge Type Used for WSD
The information resources or knowledge bases used in WSD approaches can be classified into various categories. The following are the knowledge types:

2.1. Part of speech (POS)
In grammar, a part of speech (also a word class, a lexical class, or a lexical category) is a linguistic category of words (or more precisely lexical items), which is generally defined by the syntactic or morphological behaviour of the lexical item in question. Common linguistic categories include noun and verb, among others. There are open word classes, which constantly acquire new members, and closed word classes, which acquire new members infrequently if at all. POS is an important aspect for organizing word senses. For instance, in WordNet 1.6 bat has 5 senses as a verb and 5 as a noun.

2.2. Morphology
Morphology is a field of linguistics focused on the study of the forms and formation of words in a language. A morpheme is the smallest indivisible unit of a language that retains meaning. The rules of morphology within a language tend to be relatively regular, so that if one sees a derived noun for the first time one can usually deduce that it is related to its root word; the relation between derived words and their roots is especially relevant here. For instance, the noun computation has 2 senses as a noun, while its verbal root compute has 1.

2.3. Collocations
A collocation is two or more words that often go together. These combinations just sound "right" to native English speakers, who use them all the time; other combinations may be unnatural and just sound "wrong". Within the area of corpus linguistics, collocation denotes a sequence of words or terms that co-occur more often than would be expected by chance. The 9-way ambiguous noun match has only one possible sense in football match.

2.4. Semantic word associations, which can be further classified as follows:
a. Taxonomical organization, e.g. the association between chair and furniture.
b. Situation, such as the association between chair and waiter.
c. Topic, as between bat and baseball.
d. Argument-head relation, e.g. dog and bite in "the dog bit the postman".
These associations, if given as a sense-to-word relation, are strong indicators for a sense. For instance, in "The chair and the table were missing", the class shared in the taxonomy with table can be used to choose the furniture sense of chair.

2.5. Syntactic cues.
These are hints based on syntax that help a reader decode
and comprehend text; they are also known as grammatical
cues. Good readers perceive relationships among words,
phrases, sentences, and paragraphs. They use their
knowledge of these relationships and language
structure (syntax) to help understand the meaning of text.
Subcategorization information is also useful, e.g. eat
in the take a meal sense is intransitive, but it is
transitive in other senses.

2.6. Semantic roles
A semantic role is the underlying relationship that a
participant has with the main verb in a clause.
Semantic role is the actual role a participant plays in
some real or imagined situation, apart from the
linguistic encoding of those situations. In The bad
news will eat him, the object of eat fills the experiencer
role, and this fact can be used to better constrain the
possible senses of eat.

2.7. Selectional preferences. For instance, eat in the
take a meal sense prefers humans as subjects. This
knowledge type is similar to the argument-head
relation, but selectional preferences are given in terms
of semantic classes, rather than plain words.

2.8. Domain. For example, in the domain of sports, the
tennis racket sense of racket is preferred.

2.9. Frequency of senses. Out of the 4 senses of people
the general sense accounts for 90% of the occurrences
in Semcor.

2.10. Pragmatics. Pragmatics is a subfield of
linguistics which studies the ways in which context
contributes to meaning. In some cases, full-fledged
reasoning has to come into play to disambiguate head
as a nail-head in the now classical utterance Nadia
swung the hammer at the nail, and the head flew off.

3. Categorization of WSD approaches on the basis of
knowledge sources used

WSD systems can be characterized by the information
source used in their algorithms, namely MRDs, light-
weight ontologies, corpora, or a combination of them
This section reviews some of the major contributors to
WSD.

3.1 Machine Readable Dictionaries
Machine readable dictionaries (MRDs) provide lists of
meanings, definitions and typical usage examples for
most word meanings. In general, information retrieval
systems index documents based on the words they
contain, and retrieval depends on the frequency of
occurrence. This leads to the retrieval of many
irrelevant documents because words are often
ambiguous. In the machine readable dictionary
approach, documents are instead indexed by word
senses rather than by the words themselves.
The Lesk algorithm [1] utilized MRDs successfully. The
algorithm identifies the senses of words in context
using definition overlap: the sense definitions of all
the words to be disambiguated are retrieved from the
MRD, the definition overlap is computed for all
possible sense combinations, and the senses that lead
to the highest overlap are selected. The Lesk algorithm
is suitable for disambiguating single words, but its
complexity grows sharply when multiple words have to
be disambiguated together.
Cowie [2] proposed simulated annealing, which
operates on complete sentences and attempts to select
the optimal combination of word senses for all the
words in the sentence simultaneously. The words in the
sentences may be any of the 28,000 headwords in
Longman's Dictionary of Contemporary English
(LDOCE) and are disambiguated relative to the senses
given in LDOCE.
Kilgarriff & Rosensweig [3] reduced the complexity
of the Lesk Algorithm. They framed the simplified
version of Lesk algorithm in which besides measuring
overlap between sense definitions for all words in the
context they measured overlap between sense
definition of a word and current context.
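As a minimal sketch of this simplified Lesk idea (with a hypothetical toy sense inventory, not a real MRD), the sense whose definition shares the most words with the context is chosen:

def simplified_lesk(context, sense_definitions):
    # context: iterable of tokens; sense_definitions: dict mapping sense -> gloss string
    context_words = set(w.lower() for w in context)
    best_sense, best_overlap = None, -1
    for sense, gloss in sense_definitions.items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

senses = {
    "bank#1": "sloping land beside a body of water such as a river",
    "bank#2": "a financial institution that accepts deposits and lends money",
}
print(simplified_lesk("he deposited money at the bank".split(), senses))  # bank#2
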
In MRDs, the first sense listed can be used as an
indication of the most frequent sense. Other systems try
to model semantic word associations by processing the
text of the definitions in a variety of ways, or by using
the additional information present in the machine-
readable version of the LDOCE dictionary. A few
examples of machine readable dictionaries are the
Oxford English Dictionary, Collins, and the Longman
Dictionary of Contemporary English (LDOCE).

3.2 Ontologies
In computer science and information science, ontology
is a formal representation of knowledge as a set of
concepts within a domain, and the relationships
between those concepts. It is used to reason about the
entities within that domain, and may be used to
describe the domain.
Ontology is a knowledge base with information
about concepts existing in the world or domain, their
properties, and how they relate to each other. Three
principal reasons to use ontology in machine
translation (MT) are to enable source language
analyzers and target language generators to share
knowledge, to store semantic constraints, and to
resolve semantic ambiguities by making inferences
with the concept network of the ontology. Ontology is
different from a thesaurus in that it contains only
language independent information and many other
semantic relations, as well as taxonomic relations.



Very few systems have used proprietary ontologies,
e.g. [4] and the work of Guntis Brzdi, Sin-Jae Kang
and Jong-Hyeok Lee. In general, WordNet is used as
the ontology in most systems [5][6].

3.3 Corpora
Hand tagged corpora has been used to train machine
learning algorithms. The training data is processed to
extract features, that is, cues in the context of the
occurrence that could help disambiguate the word
correctly.
A supervised approach uses sense-tagged corpora to
train the sense model, which makes it possible to link
contextual features (world knowledge) to word sense.
Supervised class of methods induces a classifier from
manually sense-tagged text using machine learning
techniques. Supervised approach reduces WSD to a
classification problem where a target word is assigned
the most appropriate sense from the given set of
possibilities based on the context in which it occurs.

For instance, Yarowsky [7] showed how collocations
could be captured using bigrams and argument-head
relations. In the literature, easily extracted features are
preferred, avoiding high levels of linguistic processing
[7] [8] [9] [10].
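As a rough, self-contained sketch of this supervised idea (ours, not the cited systems, and using a toy two-example training set), a classifier can simply remember which context words co-occurred with each sense and pick the sense whose remembered context overlaps most with a new instance:

from collections import Counter, defaultdict

def train(tagged_examples):
    # tagged_examples: list of (context_tokens, sense)
    counts = defaultdict(Counter)
    for context, sense in tagged_examples:
        counts[sense].update(w.lower() for w in context)
    return counts

def classify(counts, context):
    # score each sense by how often its training contexts contained the words seen now
    return max(counts, key=lambda sense: sum(counts[sense][w.lower()] for w in context))

data = [("swing the bat at the ball".split(), "bat#club"),
        ("the bat flew out of the cave".split(), "bat#animal")]
model = train(data)
print(classify(model, "a bat hanging in the cave".split()))  # bat#animal
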

3.4 MRD and ontology combinations
WordNet lacks semantic associations. Semantic
associations are meaningful and relevant complex
relationships between entities, events and concepts. For
instance, [11] combines the use of the taxonomies and
the definitions in WordNet, yielding a similarity measure
for nominal and verbal concepts which are otherwise
unrelated in WordNet. They obtained outstanding results
in their experiments on the disambiguation of
noun-verb pairs.

3.5 MRD and corpora combinations
This combination of information sources is used
with minimally supervised approaches. Semantic
annotations are usually done by humans, hence they are
very expensive and the size of such corpora is limited
to a handful of tagged texts. The use of automatically
created large sense-tagged corpora is therefore required.
Minimally supervised WSD learns sense classifiers
from annotated data with minimal human supervision.
It is initialized with some sort of seed that grows
into a full classifier (or generative model). A seed is
fertile if it grows into a classifier (or model) that
performs well on some desired criterion. Ordinarily, it
is up to a human to choose a seed that he or she
intuitively expects to be fertile. Yarowsky [15] and
Mihalcea [16] used this combination of information
sources and obtained good results.
[12] uses the hierarchical organization in Roget's
thesaurus to automatically produce sets of salient
words for each semantic class. These salient words are
similar to McRoy's clusters [13], and could capture
both situation and topic clusters. In [14], seed words
from an MRD are used to bootstrap a training set
without the need for hand-tagging.

3.6 Ontology and corpora combinations
In an exception to the general rule, selectional
preferences have been semi-automatically extracted
and explicitly applied to WSD. [17] [18]
Disambiguation using automatically acquired selection
constraints leads to significant performance
improvement in comparison to the random choice. The
automatic extraction involved the combination of
parsed corpora to construct sets of, e.g., nouns that are
subjects of a specific verb, and a similarity measure
based on a taxonomy is used to generalize the sets of
nouns to semantic classes.
Selectional Preferences try to capture the fact that
linguistic elements prefer arguments of a certain
semantic class. Selectional constraints are limitations
on the applicability of predicates to arguments. For
example, the statement The number two is blue may
be syntactically well formed, but at some level it is
anomalous: BLUE is not a predicate that can be applied
to numbers.
Selectional preference has several variations. 1)
Word-to-word relations use pairs of words which are
connected by some syntactic relation. 2) Word-to-class
relations use classes of words, where a class has
attributed to it the accumulated properties of its
members, even if observations of the individual
members are sparse; in addition, word classes can be
used to capture higher level abstractions such as
syntactic or semantic features. The premise behind this
approach is that the relatedness of words is reflected by
similarities in their distributional contexts, as observed
in large collections of naturally occurring text. 3)
Class-to-class relations depend on the classes of the
verbs as well: here the relations between the classes in
the hierarchy are captured rather than the relations
between words and classes.
In a different approach [10], the information in
WordNet has been used to build automatically a
training corpus from the web. A similar technique has
been used to build topic signatures, which try to give
lists of words topically associated for each concept
[19].

3.7 Semantic Roles
Semantic roles, also known as thematic roles, are one
of the oldest classes of constructs in linguistic theory.
Semantic roles are used to indicate the role played by
each entity in a sentence and range from very
specific to very general. The entities that are labelled
should have participated in an event. Some of the
domain-specific roles are from airport, to airport, and
depart time. Some of the verb-specific roles are eater
and eaten for the verb eat. Although there is no
consensus on a definitive list of semantic roles, some
basic semantic roles such as agent, instrument, etc. are
followed by all.

Semantic relatedness between words or concepts
measures how much two words or concepts are related
by encompassing all kinds of relations between them,
such as hypernymy, hyponymy, antonymy and
functional relations. There is a large body of
literature on computing semantic relatedness between
words or concepts using knowledge extracted from
Wikipedia, such as [20]. However, the main limitation
of these methods is that they only make use of one or
two types of features.
The experiments of Gentile and Zhang [21] showed that
their paradigm achieves significant results: the overall
accuracy is 91.46% and 89.83% on two different
datasets, a result competitive with the state of
the art. The accuracy reached hints at the
usefulness of semantic relatedness measures for the
process of named entity disambiguation.

3.8 Domain Resource for WSD
WordNet Domains [22] is an extension of WordNet 1.6
where each synset has one or more domain labels.
Synsets associated with different syntactic categories
can have the same domain labels. These domain labels
are selected from a set of about 250 labels,
hierarchically organized into different specialization
levels. This new information, added to WordNet 1.6,
allows words that belong to different sub-hierarchies to
be connected, and several senses of the same word to
be included under the same domain label. Thus, a single
domain label may group together more than one word
sense, obtaining a reduction of the polysemy. Table 1
shows an example. The word music has six different
senses in WordNet 1.6: four of them are grouped
under the MUSIC domain, reducing the polysemy
from six to three senses.

Vázquez, Montoyo and Rigau [23] used WordNet Domains
to collect examples of domain associations for the
different meanings of words. To realize this task, the
WordNet Domains glosses are used to collect the
most relevant and representative domain labels for
each English word. In this way, the new resource,
named Relevant Domains, contains all the words of the
WordNet Domains glosses, with all their domains,
ranked by their relevance to each domain.

The results obtained in their evaluation confirm
that the new WSD method obtains promising
precision and recall measures for the word sense
disambiguation task. An important conclusion
about domains is that they establish semantic
relations between word senses, grouping
them into the same semantic category (e.g. sports,
medicine). Their WSD method can also address the
fine granularity of WordNet senses. Moreover, the new
lexical resource, Relevant Domains, is an information
source that can complement other WSD methods and
applications such as information retrieval systems and
question answering.
4. Classification of WSD Applications
Word sense disambiguation, the task of removing the
ambiguity of a word in context, is important for many
NLP applications such as:

4.1. Information Retrieval

As proposed by Krovetz and Croft [24] WSD helps
in improving term indexing in information retrieval.
They proved that word senses improve retrieval
performance if the senses are included as index terms.
Thus, documents should not be ranked based on words
alone; they should be ranked based on word
senses, or based on a combination of word senses and
words. For example, using different indexes for the
keyword Java as a programming language, as a type
of coffee, and as a location will improve the accuracy
of an IR system.
4.2. Machine Translation

Machine translation, sometimes referred to by the
abbreviation MT, is a sub-field of computational
linguistics that investigates the use of computer
software to translate text or speech from one natural
language to another. At its basic level, MT performs
simple substitution of words in one natural language
for words in another. Using corpus techniques, more
complex translations may be attempted, allowing for
better handling of differences in linguistic typology,
phrase recognition, and translation of idioms, as well as
the isolation of anomalies.
WSD is important for machine translation. It helps
in better understanding of the source language and
generation of sentences in the target language. It also
affects lexical choice depending upon the usage
context.

4.3 Speech Processing and Part of Speech tagging

Speech processing is the study of speech signals and
the processing methods of these signals. The signals
are usually processed in a digital representation
whereby speech processing can be seen as the
intersection of digital signal processing and natural
language processing.
Speech recognition faces problems when processing
homophones, words which are spelled differently
but pronounced the same way, for example base
and bass, or sealing and ceiling.

4.4. Text Processing

Text-to-speech conversion faces problems when
words are pronounced in more than one way depending
on their meaning. For example, lead can mean to be in
front of or a type of metal, with different pronunciations.

4.5. Classification of Documents

Document classification/categorization is a problem
in information science. The task is to assign an
electronic document to one or more categories, based
on its contents. Document classification tasks can be
divided into two sorts: supervised document
classification where some external mechanism (such as
human feedback) provides information on the correct
classification for documents, and unsupervised
document classification, where the classification must
be done entirely without reference to external
information. There is also semi-supervised document
classification, where parts of the documents are labeled
by the external mechanism.
Document classification has been used to enhance
information retrieval. This is based on the clustering
hypothesis, which states that documents having similar
contents are also relevant to the same query. A fixed
collection of text is clustered into groups or clusters
that have similar contents.
Unsupervised document classification in particular
faces the problem of word sense ambiguity.

4.6. Question Answering

Question answering (QA) is a type of information
retrieval. Given a collection of documents (such as the
World Wide Web or a local collection) the system
should be able to retrieve answers to questions posed in
natural language. QA is regarded as requiring more
complex natural language processing (NLP) techniques
than other types of information retrieval such as
document retrieval, and it is sometimes regarded as the
next step beyond search engines.
Question answering (QA) systems aim to avoid this
user overhead and present the user with a direct answer
to the question. QA systems are
faced with the challenges posed by language variability
and word ambiguity.

4.7. Cross Language Information Retrieval (CLIR)

Cross-language information retrieval (CLIR) is a
subfield of information retrieval dealing with retrieving
information written in a language different from the
language of the user's query. For example, a user may
pose their query in English but retrieve relevant
documents written in French.

4.8. Synonymy Test

It is a type of intelligence test item in which the
respondent is presented with a word and is asked to
supply a word with the same meaning to it, often by
choosing from a set of response alternatives.
Synonymy Test is an exact application of WSD where
a single ambiguous word is delivered to the respondent
to find its multiple senses.
5. Discussion
In section 3 we categorized WSD approaches
on the basis of the knowledge resources used. The
knowledge resource used in any WSD approach
affects its performance level. The way information is
categorized and stored hierarchically in the information
resource affects its retrieval efficiency.
MRDs are the oldest information resource, used
successfully by many researchers, but they have
drawbacks: dictionary definitions are often too short to
rely on. However, LDOCE fares better in this respect,
as its definitions draw on a controlled vocabulary of
almost 2,200 words.
Ontologies have also been utilized by various researchers,
and their work showed improvements in performance
over the baseline.
The corpus is the most versatile knowledge resource for
WSD algorithms. With the help of a corpus one can
extract various levels of knowledge that benefit
the algorithms. A corpus may also be dynamic in nature,
which further improves the quality of its contents.
Combinations of corpora and ontologies that try to
acquire training data automatically are promising, but
they do not always give good results. The combination
of corpora and ontologies also carries the overhead of
creating the corpora, which places an extra load on any
working algorithm. Nevertheless, researchers who worked
with this combination obtained remarkable results, so it
can be a good choice for WSD approaches.
Semantic roles are also a good choice for WSD
approaches; the experiments clearly show that the use of
semantic roles results in a good performance level.
As discussed in section 3.8, it has been identified that a
new lexical resource named Relevant Domains, built from
the glosses of WordNet Domains, can also be utilized as
an information resource for the WSD process. The results
obtained in its evaluation confirm that the resulting
WSD method obtains promising precision and recall
measures for the word sense disambiguation task. The
new lexical resource Relevant Domains is an
information source that can complement other WSD
methods.
6. Conclusion
The WSD is still in the process of performance
improvement. A large number of algorithms have been
proposed by the researchers so far. Almost all the
algorithms have their own constraints. In this paper we
took a comprehensive overview of some of the most
significant works in this area, and provided
classification and comparisons between various WSD
techniques based on the resources utilized by them and
the techniques used.
The scope of WSD is wide and a variety of
techniques are employed to resolve ambiguity in
text. As selecting the most suitable technique is the key
to the success of any application area of WSD, an attempt
has been made in this paper to classify WSD based on
knowledge sources.

References
[1] Lesk, M., Automatic sense disambiguation using
machine readable dictionaries: How to tell a pine
cone from an ice cream cone, Proceedings of the
SIGDOC, Toronto, ON, Canada, pp. 24-26, 1986.
[2] Cowie, J., Guthrie, J. and Guthrie, L., Lexical
disambiguation using simulated annealing,
Proceedings of the 14th International Conference on
Computational Linguistics, pp. 359-365, Nantes,
France, 1992.
[3] Kilgarriff A. and Rosenzweig J., English SENSEVAL:
Report and Results, Proceedings of the Second
Conference on Language Resources and Evaluation,
Athens, Greece, pp. 1239-1244, 2000.
[4] Yarowsky, D., Word-sense disambiguation using
statistical models of Roget's categories trained on
large corpora. Proceedings of COLING, Nantes,
France, pp. 23-28, 1992.
[5] Resnik, P.: Selection and Information: A Class-Based
Approach to Lexical Relationships. Ph.D. thesis,
University of Pennsylvania (1993)
[6] Agirre, E.: Formalization of concept-relatedness using
ontologies: Conceptual Density. Ph.D. thesis,
University of the Basque Country (1999)
[7] Yarowsky, D.: One Sense per Collocation. Proc. of the 5th
DARPA Speech and Natural Language Workshop (1993)
[8] Ng, H. T., Lee, H. B.: Integrating Multiple Knowledge Sources
to Disambiguate Word Sense: An Exemplar-based Approach.
Proceedings of the ACL (1996) .
[9] Leacock, C., Chodorow, M., Miller, G. A.: Using Corpus
Statistics and WordNet Relations for Sense Identification.
Computational Linguistics, 24(1) (1998)
[10] Agirre, E., Martinez, D.: Exploring automatic word sense
disambiguation with decision lists and the Web. Proceedings of
the COLING Workshop on Semantic Annotation and
Intelligent Content. Saarbrücken, Germany (2000)
[11] Mihalcea, R., Moldovan, D.: Word Sense Disambiguation
based on Semantic Density.Proceedings of COLING-ACL
Workshop on Usage of WordNet in Natural Language
Processing Systems. Montreal, Canada (1998)
[12] Yarowsky, D.: Word-Sense Disambiguation Using Statistical
Models of Roget's Categories Trained on Large Corpora.
Proceedings of COLING. Nantes, France (1992)
[13] McRoy, S.: Using Multiple Knowledge Sources for Word Sense
Discrimination. Computational Linguistics, 18(1) (1992)
[14] Yarowsky, D. Unsupervised Word Sense Disambiguation
Rivaling Supervised Methods. Proceedings of the ACL.
Cambridge, USA (1995)
[15] Yarowsky D, Unsupervised Word Sense Disambiguation
rivaling Supervised Methods, Proceedings of the 33rd annual
meeting on Association for Computational Linguistics,
Cambridge, Massachusetts, pp. 189 - 196, 1995
[16] Mihalcea R., Co-training and Self-training for Word Sense
Disambiguation, Proceedings of Conference on Natural
Language Learning, 2004.
[17] Resnik, P.: Selection and Information: A Class-Based
Approach to Lexical Relationships. Ph.D. University of
Pennsylvania (1993)
[18] Agirre, E., Martinez, D.: Learning class-to-class selectional
preferences. Proceedings of the ACL CONLL Workshop.
Toulouse, France (2001)
[19] Agirre, E., Ansa, O., Martinez, D., Hovy, E.: Enriching
WordNet concepts with topic signatures. Proceedings of the
NAACL workshop on WordNet and Other Lexical Resources:
Applications, Extensions and Customizations. Pittsburg, USA
(2001)
[20] Agirre, E., Rigau, G.: Word Sense Disambiguation using
Conceptual Density. Proceedings of COLING. Copenhagen,
Denmark (1996)
[21] Gentile A.L., Zhang Z., Xia L., Iria J., Digital Libraries:
Communications in Computer and Information Science,
Springer-Verlag, 2010.
[22] Magnini B. and Cavaglià G., Integrating Subject Field
Codes into WordNet. In Proceedings of LREC 2000, Second
International Conference on Language Resources and
Evaluation, Athens, Greece, June 2000.
[23] Sonia Vázquez, Andrés Montoyo, German Rigau. Using
Relevant Domains Resource for Word Sense Disambiguation.
In Proceedings of IC-AI'2004, pp. 784-789.
[24] Krovetz R. and Croft W.B., Lexical ambiguity and
information retrieval. Information Systems, Vol. 10, No. 2,
pp. 115-141, 1992.




Scalable networking algorithm improving
bandwidth utilization
Eldho Varghese and Dr. M.Roberts Masillamani
School of Computer Science and Engineering,
Hindustan Institute of technology and science, Chennai, India
eldhokv1987@yahoo.com and deancs@hindustanuniv.ac.in




Abstract
Data broadcasting has become a promising
technique in designing a mobile
information system with power
conservation, high scalability, and high
bandwidth utilization. Data broadcasting
generates a broadcast program based on
historical access statistics and is useful for
certain applications. In this paper, we
address the problem of generating a
broadcast program to disseminate data via
multiple channels of time-variant
bandwidth. In view of the characteristics
of time-variant bandwidth, we propose a
size- and index-aware algorithm using
adaptive allocation on the time-variant
bandwidth to generate the broadcast
program, avoiding the drawbacks of earlier
schemes so as to minimize the average
waiting time.

Keywords:- Data broadcast, time-variant
channel bandwidth allocation, data indexing.

1.0 Introduction
The aim is to design an adaptive allocation
on time-variant bandwidth to generate a
broadcast program that minimizes the
average waiting time. In view of the
characteristics of the variant bandwidth,
we propose a size- and index-aware
algorithm using adaptive bandwidth
allocation to generate a broadcast program
that avoids the drawbacks of earlier methods
so as to minimize the average waiting time.
Going beyond previous methods, ABA was
proposed to generate a broadcast program
for minimizing the average waiting time.
The problem consists of two sub-problems.
First, given the access frequencies, the
sizes of the data items and the bandwidth
of each channel, we need to generate a
broadcast program which can minimize the
average waiting time. Second, as the channel
bandwidth changes dynamically, we also need
to make sure that the broadcast program can
be adjusted adaptively without quality loss.

2.0 Literature Review
2.1. Mobile Computing and Databases -
A Survey
A wireless network with mobile clients is
essentially a distributed system; there are
some characteristic features that make the
system unique and a fertile area of
research. These are:
2.1. A. Asymmetry in the
communications: The bandwidth in the
downstream direction (servers-to-clients)
is much greater than that in the upstream
direction (clients-to-servers). Moreover, in
some systems, the clients do not have the
capacity to send messages to the servers at
all. As we mentioned above, even the
bandwidth in the downstream direction is
limited: 10 to 20 Kbits/sec. in a cellular
network and up to 10 Mb/sec. in a wireless
LAN.
2.1. B. Frequent disconnections: Mobile
clients do not stay connected to the
network continuously (as fixed hosts do),
but rather users switch their units on and
off regularly. Moreover, mobile clients can
roam, disconnecting from a cell to connect
to another.
2.1. C. Power limitations: Some of the
portable units are severely limited by the
amount of energy they can use before the
batteries have to be recharged.
2.1. D. Screen size: Some of the portable
units, such as Personal Digital Assistants,
have very small screens.
Each one of these features has an impact
on how data can be effectively managed
in a system with mobile clients.
2.2. Variant Bandwidth Channel
Allocation In The Data Broadcasting
Environment
To remedy the drawbacks of
TOSA, we propose algorithm AP
(standing for Adaptive Partition on
bandwidth) to perform channel allocation.
AP can achieve high quality with much
lower complexity so that the information
system is capable of broadcasting the items
via channels with variant bandwidth. In
contrast to TOSA which adopts the log-
time algorithm to schedule data, AP
allocates popular items into the channels of
larger bandwidth and avoids the
broadcasting unfairness of each item so as
to reduce overall waiting time.
2.3. On-Demand Broadcast: New
Challenges and Scheduling Algorithms
We are addressing the issue of
efficient delivery of summary tables to
wireless clients (e.g., on a company
wireless intranet) equipped with OLAP
front-end tools. In wireless networks,
broadcasting is the primary mode of
operation for the physical layer. Thus,
broadcasting is the natural method to
propagate information in wireless links
and guarantee scalability for bulk data
transfer. Specifically, data can be
efficiently disseminated by any
combination of the following two
schemes: broadcast push and broadcast
pull. These exploit the asymmetry in
wireless communication and the reduced
energy consumption in the receiving mode.
3.0 Existing System
The existing work addresses the problem of
dynamic bandwidth allocation, which involves
two sub-problems:
1. Given the access frequencies, the
sizes of the data items and the
bandwidth of each channel, we
need to generate a broadcast
program which can minimize the
average waiting time.
2. As the channel bandwidth
changes dynamically, we also need to
make sure that the broadcast program can
be adjusted adaptively without quality
loss.
4.0. Proposed System
We propose algorithm SIA (standing for
data Size aware Index Allocation) to
allocate data and their indices into multiple
broadcast channels. SIA first creates a set
of all items and then partitions the set into
two subsets to minimize average waiting
time. This procedure repeats until the
number of subsets reaches the number of
channels. In order to evaluate the
performance of SIA, we conduct several
experiments. During experiments, we
analyze the effectiveness of SIA with
average waiting time and also investigate
the efficiency of SIA by measuring its
execution time. The experimental results
show that SIA is of very high quality and
in fact is very close to an optimal scheme
OPT, which is designed by using a genetic
algorithm for comparison purposes.
Therefore, SIA has the same quality as
OPT while incurring much lower
complexity.
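The sketch below is a rough, simplified illustration of this split-until-the-number-of-channels idea; the cost model (half a flat broadcast cycle, weighted by access frequency) and the split rule are our own simplifications, not the exact SIA formulation.

def expected_wait(group, bandwidth):
    # approximate expected wait for one channel: half a flat broadcast cycle,
    # weighted by how often the group's items are requested
    if not group:
        return 0.0
    cycle = sum(size for _, size, _ in group) / bandwidth
    freq = sum(f for _, _, f in group)
    return freq * cycle / 2.0

def best_split(group, bandwidth):
    # try every split point of the popularity-sorted group and keep the cheapest
    best = None
    for cut in range(1, len(group)):
        left, right = group[:cut], group[cut:]
        cost = expected_wait(left, bandwidth) + expected_wait(right, bandwidth)
        if best is None or cost < best[0]:
            best = (cost, left, right)
    return best[1], best[2]

def sia_like_partition(items, num_channels, bandwidth=1.0):
    # items: list of (name, size, access_frequency)
    groups = [sorted(items, key=lambda it: it[2], reverse=True)]
    while len(groups) < num_channels:
        splittable = [g for g in groups if len(g) > 1]
        if not splittable:
            break
        g = max(splittable, key=lambda g: expected_wait(g, bandwidth))
        groups.remove(g)
        groups.extend(best_split(g, bandwidth))
    return groups

items = [("D1", 2, 0.4), ("D2", 1, 0.3), ("D3", 3, 0.2), ("D4", 4, 0.1)]
for channel, group in enumerate(sia_like_partition(items, 2), 1):
    print("channel", channel, [name for name, _, _ in group])
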

5.0.Modules

5.1.Network Generation with Channel
Aware Indexing

Wireless network refers to any type of
computer network that is wireless, and is
commonly associated with a
telecommunications network whose
interconnections between nodes are
implemented without the use of wires. A
computer network is a group of
interconnected computers. Networks may
be classified according to a wide variety of
characteristics. The network allows
computers to communicate with each other
and share resources and information. In
communication networks, a node is an
active electronic device that is attached to
a network, and is capable of sending,
receiving, or forwarding information over
a communications channel. A node is a
connection point, either a redistribution
point or a communication endpoint.














5.2.Implementing Dynamic Bandwidth
Allocation Algorithm

In a dynamic environment, the channel
bandwidth may change.
The objective of dynamic bandwidth
allocation is to maximize the cost
reduction.
For this cost reduction, we first
consider the tree construction procedure
and afterwards distributed
indexing.

























5.3.Time-variant channel bandwidth
allocation

In this module, we first collect all
the items and then allocate the
large items to the channels with larger
bandwidth in order to reduce the
average waiting time.

It then calculates the product value
and the frequency of each item to
refine the broadcast program (see the
sketch below).
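As a minimal sketch of this matching of heavier items to wider channels (our own illustration with made-up item and channel values, not the exact module), items ranked by size times access frequency are dealt out greedily so that the widest channel carries the heaviest load; re-running the routine after a bandwidth change adapts the assignment:

def allocate(items, bandwidths):
    # items: list of (name, size, access_frequency); bandwidths: dict channel -> bandwidth
    ranked = sorted(items, key=lambda it: it[1] * it[2], reverse=True)
    load = {ch: 0.0 for ch in bandwidths}
    plan = {ch: [] for ch in bandwidths}
    for name, size, freq in ranked:
        # place the item where it adds the least load-per-bandwidth pressure
        ch = min(bandwidths, key=lambda c: (load[c] + size) / bandwidths[c])
        plan[ch].append(name)
        load[ch] += size
    return plan

print(allocate([("D1", 4, 0.5), ("D2", 2, 0.3), ("D3", 1, 0.2)],
               {"ch1": 2.0, "ch2": 1.0}))
# {'ch1': ['D1', 'D3'], 'ch2': ['D2']}
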























[Module diagrams omitted: each shows a Mobile User connected over the Uplink Channel to the Information System, which runs the ABA algorithm to produce the Broadcast Program sent over the Downlink Channel.]
6.0. Architecture Diagram

[Architecture diagram: a Mobile User sends requests over the Uplink channel to the Information System, which applies the ABA algorithm to build the Broadcast Program (data items D1-D8) disseminated over the Downlink Channels.]

Fig 1: Architectural Diagram
7.0 Conclusion.

We analyzed the effectiveness of SIA in
terms of average waiting time and its
efficiency by measuring its execution
time. SIA produces broadcast programs of
very high quality while incurring lower
complexity. We have simulated SIA using
the NS2 software and shown that it is
more efficient.

References

[1]CommView,http://www.tamos.com/pro
ducts/commwifi/, 2009.

[2]EBPropsimC8,http://www.elektrobit.co
m/index.php?207, 2009.

[3] The Working Group for IEEE 802.11
WLAN Standards, http://
www.ieee802.org/11/, 2009.

[4] Spatial Channel Model for Multiple
Input Multiple Output Simulations, 3GPP
TR 25.996v1.0.0, http://www.3gpp.org,
2003.

[5] Predictive Data Rate Control in
Wireless Transmitters, US Patent
6,707,862, 2004.

[6] S. Acharya, R. Alonso, M.J. Franklin,
and S.B. Zdonik, Broadcast Disks: Data
Management for Asymmetric
Communications Environments, Proc.
1995 ACM Intl Conf. Management of
Data, pp. 199-210, May 1995.

[7] S. Acharya and S. Muthukrishnan,
Scheduling On-Demand Broadcasts: New
Metrics and Algorithms, Proc. Fourth
ACM/IEEE Intl Conf. Mobile Computing
and Networking, pp. 43-54, 1998.

[8] D. Barbara, Mobile Computing and
Databases - A Survey, IEEE
Trans. Knowledge and Data Eng., vol. 11,
no. 1, pp. 108-117, Jan./Feb. 1999.

[9] J.-L. Huang and M.-S. Chen,
Dependent Data Broadcasting for
Unordered Queries in a Multiple Channel
Mobile Environment, IEEE Trans.
Knowledge and Data Eng., vol. 16, no. 9,
pp. 1143-1156, Sept. 2004.


Comparative Performance Evaluation of MANET
Routing Protocols based on Node Mobility
Gurleen Kaur Walia
UCoE Deptt., Punjabi University, Patiala, India
shaan7_rulez@yahoo.co.in

Abstract - Mobile Ad-hoc network (MANET) is a group of mobile
nodes that are dynamically located such that the interconnections
between nodes are capable of changing on a continual basis. The
wireless links in this network are highly error prone and can go
down frequently due to node mobility, interference and lack of
infrastructure. Although numerous routing protocols have been
proposed for MANETs, there is no universal scheme that works
well in scenarios with different network sizes, traffic loads and
node mobility pattern. This paper evaluates the performance of
Proactive and Reactive Routing Protocols in MANETs with
respect to node mobility. Performance metrics: Throughput and
Delay are used for the performance analysis.
Keywords: MANET, AODV, DSR, OLSR, Delay, Throughput
I. INTRODUCTION

In the last couple of years, the use of wireless networks has
become more and more popular. Wireless networking is an
emerging technology that allows users to access information
and services electronically, regardless of their geographic
position. Wireless networks can be classified in two types:
Infrastructured and Infrastructureless (Ad-hoc) networks.
In Infrastructured wireless networks as shown in fig. 1, the
mobile node can move while communicating, the base stations
are fixed and as the node goes out of the range of a base
station, it gets into the range of another base station. There is a
centralized administration.

Fig. 1: An Infrastructured Network with two base stations
In Ad-hoc networks as shown in fig. 2, the mobile node can
move while communicating, there are no fixed base stations
and all the nodes in the network act as routers. The mobile
nodes in the Ad-hoc network dynamically establish routing
among themselves to form their own network on the fly. It
lacks any infrastructure and has no fixed routers and
centralized administration.


Fig. 2: An Ad-hoc Network

A Mobile Ad-hoc Network (MANET) is a type of Ad-hoc
network. They are multi-hop, self-organizing systems
comprised of mobile nodes equipped with wireless transmit/
receive units. Each mobile node in these networks may act as
both a host and a router, in that it runs both
application programs and routing protocols.



Fig. 3: Mobile Ad-hoc Network

However, traditional routing protocols for wired network are
not suited for MANETs because of its dynamic topologies,
constrained bandwidth, constrained energy and limited
physical security. Fig. 3 shows a MANET network with 3
mobile nodes.


II. PROBLEM STATEMENT

MANET performance is sensitive to node mobility; variation
in this aspect affects the performance of a MANET and may
either increase or decrease the overall efficiency of the
network. Examining how the different protocols perform while
the amount of traffic and the speed of the nodes vary therefore
plays a crucial role in efficient traffic routing. So far,
research studies on performance analysis of MANET routing
protocols have shown distinctive results, based on different
network conditions such as traffic type, parameters, network
size, and the simulator used. Many researchers have intensively
worked on analyzing the performance of MANET routing
protocols focusing on Constant Bit Rate (CBR) traffic, File
Transfer Protocol (FTP) traffic, User Datagram Protocol (UDP)
traffic and Transmission Control Protocol (TCP) traffic. But a
MANET is one of the most usable and reliable networks for
communication, with applications in universities, offices,
airports, hotels, etc. In such user applications good HTTP
performance is required to enable web based applications. So
it is necessary to investigate the performance of the chosen
MANET routing protocols DSR, OLSR and AODV over HTTP
traffic, as this plays a key role in MANET applications.

III. AIM
The goal of this paper is to analyze the MANET routing
protocols DSR, OLSR and AODV performance over HTTP
traffic with respect to node mobility.
OVERVIEW OF MANET

MANET is a Wireless Ad-Hoc Network technology.
Mobile nodes in the network act as clients and servers. Fig. 4
shows the decentralized MANET consisting of mobile nodes
functioning as routers along with the respective mobile nodes.

Fig 4: Ad-Hoc Wireless Network

IV. MANET CHARACTERISTICS

MANETs do not have any central authority or fixed
infrastructure, which, unlike traditional networks, makes a
MANET a decentralized system. MANETs organize themselves by
discovering the topology and delivering messages themselves,
which makes a MANET a self-configuring network. Mobile nodes
in a MANET are free to move randomly. This results in frequent
changes in the topology, where alternative paths are found
automatically.
The nodes use different routing mechanisms to transmit
data packets to the desired nodes, so the network exhibits a
dynamic topology. A MANET usually operates over bandwidth-
constrained, variable-capacity links, which results in high bit
error rates, low bandwidth and unstable, asymmetric links,
leading to congestion problems. Power conservation plays a key
role in a MANET, as the nodes involved in this network
generally use exhaustible battery/energy sources; this makes
MANETs energy-constrained.


V. FACTORS FOR CONSIDERATION WHILE
DEPLOYING A MANET ARE:

Bandwidth:
Wireless links are error-prone and insecure, and offer lower
capacity and higher congestion, so bandwidth is a key factor
while deploying a MANET.

Energy efficiency of nodes:
The key goal is to minimize the overall network
power consumption.

Topology changes:
This factor is considered because the topology changes with the
movement of the mobile nodes, resulting in route changes which
can lead to network partition and, in the worst cases, packet
losses.


VI. APPLICATION OF MANET

MANET applications typically include communication in
battlefield environments; such networks are referred to as
tactical networks. Monitoring of weather and earth activities
and automation equipment are examples of sensor networks under
the area of MANET. Emergency services include medical services
such as access to patient records at runtime, typically during
disasters. Electronic commerce is another example of a MANET
application, which includes receiving payments from anywhere;
records of customers are accessed directly from the field, and
local news, weather and road conditions are carried through
vehicular access. Enterprise networking is an example of a
MANET in which one can access a Personal Digital Assistant
(PDA) from anywhere; the networks so formed are personal area
networks. These applications are used for educational or
business purposes. An educational application can be a virtual
conference call to deliver lectures or hold meetings; such
networks also support multi-user games and robotic pets. By
using this network a call can be forwarded anywhere, and the
actual workspace can be transmitted to the current location. The
most common applications of MANET are inter-vehicle
communication for Intelligent Transportation System (ITS)
involving accident information on a highway, collision
avoidance at a crossroad and connection to the internet. The
application of MANET has further improved the
communication infrastructure of rescue teams operating in
disaster-hit areas around the clock, and the modern military
has benefited greatly from its advancement on the battlefield.

VII. OVERVIEW OF ROUTING PROTOCOLS

A. Proactive (table driven) Routing Protocols:
Proactive routing protocols maintain the routing
information of all the participating nodes and update their
routing information frequently irrespective of the routing
requests. Proactive routing protocols transmit control
messages to all the nodes and update their routing information
even if there is no actual routing request. This makes
proactive routing protocols bandwidth deficient, though the
routing itself is simple having this prior updated routing
information. The major drawback of proactive protocols is the
heavy load created from the need to flood the network with
control messages.

B. Reactive (On demand) Protocols:
Reactive protocols establish a route only when it is
required; unlike the proactive protocols, they do not
update their routing information frequently and do not
maintain network topology information. Reactive
protocols use a connection establishment process for
communication.

C. Ad Hoc On-Demand Distance Vector Protocol (AODV):
AODV is a reactive routing protocol that minimizes the
number of broadcasts by creating routes on demand. To find a
path to the destination, a route request packet (RREQ) is
broadcasted by the source till it reaches an intermediate node
that has recent route information about the destination or till it
reaches the destination. Features of this protocol include loop
freedom and that link breakages cause immediate notifications
to be sent to the affected set of nodes, but only that set.
Additionally, AODV has support for multicast routing and
avoids the Bellman Ford "counting to infinity" problem. The
use of destination sequence numbers guarantees that a route is
"fresh". The algorithm uses hello messages (a special RREP)
that are broadcasted periodically to the immediate
neighbors. These hello messages are local advertisements for
the continued presence of the node and neighbors using routes
through the broadcasting node will continue to mark the
routes as valid. If hello messages stop coming from a
particular node, the neighbor can assume that the node has
moved away and mark that link to the node as broken and
notify the affected set of nodes by sending a link failure
notification (a special RREP) to that set of nodes.
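As a toy sketch of the on-demand route discovery idea described above (not the full AODV specification; sequence numbers, timers and route caching are omitted, and the network below is a made-up example), the source floods a RREQ through the neighbour graph and the recorded reverse path is returned once the destination is reached:

from collections import deque

def aodv_like_discovery(graph, source, destination):
    # graph: dict node -> list of neighbour nodes (symmetric links)
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:          # in real AODV a RREP travels back along this path
            path, cur = [], node
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return list(reversed(path))
        for nbr in graph[node]:          # rebroadcast the RREQ to neighbours
            if nbr not in parent:        # each node forwards a given RREQ only once
                parent[nbr] = node
                queue.append(nbr)
    return None                          # destination unreachable

net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
       "D": ["B", "C", "E"], "E": ["D"]}
print(aodv_like_discovery(net, "A", "E"))  # ['A', 'B', 'D', 'E']
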

D. Dynamic Source Routing (DSR):
DSR also belongs to the class of reactive protocols and
allows nodes to dynamically discover a route across multiple
network hops to any destination. Source routing means that
each packet in its header carries the complete ordered list of
nodes through which the packet must pass. DSR uses no
periodic routing messages (e.g. no router advertisements),
thereby reducing network bandwidth overhead, conserving
battery power and avoiding large routing updates throughout
the ad-hoc network. Instead DSR relies on support from the
MAC layer (the MAC layer should inform the routing
protocol about link failures). The two basic modes of
operation in DSR are route discovery and route maintenance.
DSR uses the key advantage of source routing. Intermediate
nodes do not need to maintain up-to-date routing information
in order to route the packets they forward. There is also no
need for periodic routing advertisement messages, which will
lead to reduce network bandwidth overhead, particularly
during periods when little or no significant host movement is
taking place. Battery power is also conserved on the mobile
hosts, both by not sending the advertisements and by not
needing to receive them; a host could go into sleep mode
instead.

E. Optimized Link State Routing (OLSR):
It is a proactive routing protocol and is also called a table-
driven protocol because it permanently stores and updates its
routing table. OLSR keeps track of the routing table in order to
provide a route whenever needed. OLSR can be implemented in any
ad hoc network; due to this nature OLSR is called a proactive
routing protocol. Not all the nodes in the network broadcast
the route packets; only Multipoint Relay (MPR) nodes broadcast
route packets. These MPR nodes are selected among the
neighbours of the source node. Each node in the network keeps
a list of MPR nodes.
The MPR selector set is obtained from the HELLO packets
exchanged between neighbouring nodes. These routes are built
before any source node intends to send a message to a specified
destination. Each and every node in the network keeps a
routing table. This is the reason why the routing overhead for
OLSR is lower than that of the reactive routing protocols and
why it provides a shortest route to the destination in the
network. There is no need to build new routes, as using an
existing route does not add significant routing overhead. This
also reduces the route discovery delay.
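A small sketch of the MPR selection just described, using a common greedy heuristic (our simplified illustration with a made-up neighbourhood, not the full OLSR rules): one-hop neighbours are picked until every strict two-hop neighbour is covered by at least one selected relay.

def select_mprs(one_hop, two_hop_via):
    # one_hop: set of one-hop neighbours
    # two_hop_via: dict one-hop neighbour -> set of two-hop nodes reachable through it
    uncovered = set().union(*two_hop_via.values()) if two_hop_via else set()
    mprs = set()
    while uncovered:
        # greedily take the neighbour covering the most remaining two-hop nodes
        best = max(one_hop - mprs,
                   key=lambda n: len(two_hop_via.get(n, set()) & uncovered))
        mprs.add(best)
        uncovered -= two_hop_via.get(best, set())
    return mprs

print(select_mprs({"B", "C", "D"},
                  {"B": {"X", "Y"}, "C": {"Y"}, "D": {"Z"}}))  # {'B', 'D'}
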

VIII. SIMULATION DESIGN & IMPLEMENTATION
OPNET (Optimized Network Engineering Tool) Modeller
14.5 is used for the design and implementation of our work.
OPNET is a network simulator that provides virtual network
communication environment. It is prominent in research
studies, network modelling and engineering, R&D operations
and performance analysis. OPNET plays a key role in today's
emerging technical world in developing and improving
wireless technology protocols such as WiMAX, WiFi, UMTS,
etc., the design of MANET routing protocols, work on new
power management systems for sensor networks and the
enhancement of network technologies such as IPv6, MPLS,
etc.
Fig. 5: An example of network model design MANET 50 node scenario with
network entities
A. Impact of Node Mobility on MANET routing protocols
performance:

In this scenario the simulation environment is modeled in the
OPNET 14.5 modeler using the DSR, OLSR and AODV routing
protocols to analyze the chosen protocols' performance for
varying node speeds over HTTP traffic.
To observe the effect of mobility on MANET routing
protocols, two simulation scenarios were developed; each
scenario consists of 50 mobile nodes with node speeds of 10 M/S
and 28 M/S respectively, using the DSR, OLSR and AODV
routing protocols in a campus area of 1000 meters x 1000
meters. Initially, the simulation environment is developed
using 50 mobile nodes with 10 M/S speed, and then, using the
performance metrics delay and throughput, the DSR, OLSR and
AODV protocols' performance is analyzed.
On comparing the graphs we can observe that the delay for the
DSR protocol gradually increases and then remains constant
when the nodes move at 10 M/S, whereas the delay for the DSR
protocol when the nodes move at 28 M/S shows a very high
increase. Our simulation results show that the DSR protocol has
lower delay at low node speed than at higher speed; this results
in poor performance of the DSR protocol when the network nodes
move at higher speeds. In this respect, the mobility of the
nodes changes the position of the destination node. DSR then
initiates the route maintenance process to find new routes once
it notices the change in the network topology. But due to the
mobility of all the participating nodes it may not be possible
to find alternate routes to the destination by the route
maintenance mechanism. Thus it re-establishes the route
discovery mechanism to find new routes to the destination nodes
for efficient data transmission, and this results in a higher
delay as the node speed increases.


Fig. 6: Delay for DSR, OLSR and AODV over 10 M/S speed.

In fig 6 and fig 7, we can see the delay for the OLSR
protocol for nodes moving at 10 M/S and 28 M/S
respectively. We can notice that the delay for the OLSR
protocol is approximately constant for the nodes moving at 10
M/S, while the delay for the OLSR protocol when the
nodes move at 28 M/S is approximately the same and also
stays constant. On comparing the graphs for the
OLSR delay at 10 M/S and 28 M/S we can see that
there is no significant variation in delay.


Fig. 7: Delay for DSR, OLSR and AODV protocols over 28 M/S speed.
Our simulation results show that the OLSR protocol has the
same delay for the different node speeds, i.e. the OLSR protocol
performs the same as the node speed varies.
The OLSR protocol, unlike the reactive protocols, maintains
and updates its routing table frequently, which helps the OLSR
protocol maintain consistent paths. The OLSR protocol
exchanges hello messages with its neighbouring nodes and
forms symmetric links even though the node speed varies,
and by this it can route successfully; thus the mobility of the
nodes has less impact on the performance of the OLSR protocol.
In fig 6 and fig 7, we can also see the delay for the AODV
protocol for nodes moving at 10 M/S and 28 M/S
respectively. The delay for the AODV protocol at 10 M/S node
speed gradually decreases and then stays constant, whereas the
delay observed for the AODV protocol at 28 M/S node speed
gradually increases. On comparing both graphs, the AODV
protocol delay at 10 M/S is quite a bit less than the delay at
28 M/S. Our simulation results show that the AODV protocol
has higher delay as the node speed increases.
In fig 8 and fig 9, the graphs show the throughput for the DSR,
OLSR and AODV protocols for the nodes moving at 10
M/S and 28 M/S respectively. On comparing fig 8 and fig
9 we can see that the throughput for the DSR protocol is higher
for the nodes moving at 10 M/S than for the nodes
moving at 28 M/S. The throughput for the AODV protocol
is likewise higher for nodes moving at 10 M/S
than for nodes moving at 28 M/S. But the OLSR
protocol behaves differently from the DSR and AODV protocols:
the OLSR protocol shows a quite higher throughput for the nodes
moving at 28 M/S than for the nodes moving at
10 M/S.



Fig. 8: Throughput for DSR, OLSR and AODV protocols over 10 M/S speed.



Fig. 9: Throughput for DSR, OLSR and AODV protocols over 28 M/S speed.
Finally, the simulation results conclude that the proactive
routing protocol OLSR shows higher throughput with increasing
node speed than the reactive protocols AODV and DSR
respectively; this is due to the reasons we have discussed
above in the delay analysis.

CONCLUSION
In this paper, a performance analysis of the MANET routing
protocols DSR, OLSR and AODV is performed,
focusing on node mobility. The throughput and delay parameters
are used to analyze the protocols. It was observed that, on
varying the node speed, the proactive routing protocol OLSR
outperforms the reactive protocols AODV and DSR,
even at higher node speeds.


Estimating and Analyzing of Exact SAR parameters in Mobile
Communication
Jacob Abharam

Asst Professor, Department of Electronics,
Baselios Poulose II Catholicos College,
Piravom, India
E-mail: tjacobabra@gmail.com
Vipeesh P
Lecturer, Department of Electronics,
Baselios Poulose II Catholicos College,
Piravom, India
E-mail:vinuvipeesh@gmail.com.

Abstract:- Since the early 90s the use of mobile phones has
increased worldwide, generating public concern as to
whether frequent use of such devices is safe.
The key question being raised repeatedly is
whether the frequent usage of a device which
radiates a GHz electromagnetic field onto the human
brain is safe; many international guidelines are
available defining exposure limits based on the Specific
Absorption Rate (SAR). This paper studies and analyzes
the various operating parameters which can influence the
SAR definition and value. The key parameters that
influence SAR include distance, frequency, phone
angle, tissue type and age. All the measurements are
done using software packages.
Keywords- mobile phone; specific absorption rate (SAR);
tower; electromagnetic fields;
I. INTRODUCTION
The widespread use of wireless
communication devices in close
proximity to the human body remains a topic of
growing concern to the public. There is a need
to evaluate the electromagnetic interaction of
wireless devices with the human brain in order to
establish the safety of wireless systems.
There have been considerable research activities
to investigate the biological effects of
electromagnetic fields [1]. These activities rely on
quantities like the rate of RF energy
deposition in biological tissues, called the specific
absorption rate (SAR), to assess the potential
health effects.
The human body is a homogeneous, lossy
dielectric whose electrical properties can be
influenced when an electric or magnetic field
penetrates into the body. The intensity of the EM
field that penetrates the human body depends on a
number of internal and external parameters such as
frequency, polarization, antenna type and
distance, the size, shape and dielectric properties
of the exposed body, and the size and thickness of
any RF shield.
International organizations like the IEEE
and ICNIRP have set standards for exposure
limits in terms of SAR. In the IEEE standard [2] the
peak SAR averaged over any 1 g of tissue
should not exceed 1.6 W/kg, while in the ICNIRP
guidelines [3] the peak SAR averaged over
any 10 g of tissue should not exceed 2 W/kg.
The present study examines the various
parameters which influence SAR.
II. SPECIFIC ABSORPTION RATE
The specific absorption rate is defined as the time derivative of the incremental energy (dW) absorbed by or dissipated in an incremental mass (dm) contained in a volume element (dV) of a given density (ρ):

SAR = d/dt (dW/dm) = d/dt (dW/(ρ dV))        (1)

By using the Poynting vector theorem for sinusoidal EM fields, one can get

SAR = σ |Ei|² / ρ        (2)

where σ is the conductivity of the material and Ei is the field inside that material.
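As an illustration of Eq. (2), the short Python sketch below evaluates SAR = σ|Ei|²/ρ for one set of assumed tissue values; the conductivity, internal field strength and density used here are illustrative assumptions, not measured data.

# Numeric illustration of Eq. (2): SAR = sigma * |Ei|^2 / rho.
# All values below are assumptions chosen only for illustration.
def sar(sigma_s_per_m, e_internal_v_per_m, density_kg_per_m3):
    """Local SAR in W/kg from conductivity, internal E-field and mass density."""
    return sigma_s_per_m * e_internal_v_per_m ** 2 / density_kg_per_m3

sigma = 1.5        # S/m, brain-like tissue at 1800 MHz (assumed, cf. Table 1)
e_internal = 30.0  # V/m inside the tissue (assumed)
rho = 1040.0       # kg/m^3, typical soft-tissue density (assumed)
print("SAR = %.3f W/kg" % sar(sigma, e_internal, rho))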
III. SAR RELATED WITH DISTANCE
The specific absorption rate depends on distance. It can be seen from the graph plotted below that SAR varies inversely with the distance from the transmission tower. In the case of the 1800 MHz frequency band, SAR is very high near the tower; it exceeds the international standard limits up to 15 to 20 meters away from the tower.

Figure 1: Variation of SAR Vs distance from the Tower.
From Fig. 1 it can be seen that the SAR values come down to a low level at a distance of 35 to 40 meters away from the tower. From the graph it can also be concluded that, irrespective of the operating frequency, the plotted curves are of the same nature.
IV. SAR RELATED WITH HUMAN
EXPOSURE
A human being living near the tower is continuously exposed to the radiation emitted by the mobile tower. The amount of energy/radiation absorbed is not constant throughout the body; it differs depending on the part exposed. The amount of energy absorbed by the skin is different from that absorbed by bone, and it depends on the dielectric properties of the exposed body part.
The table below shows the electric conductivity and dielectric properties of three body parts, namely skin, brain and bone. These parameters vary with respect to the operating frequency.
Body Part   900 MHz (εr, σ)    1800 MHz (εr, σ)    2200 MHz (εr, σ)
Skin        39.5, 0.7          38.2, 0.9           37.1, 1.1
Bone        12.5, 0.17         12.0, 0.29          11.7, 0.35
Brain       56.8, 1.1          51.8, 1.5           47.5, 1.8

Table 1: Properties of the parts of the body (relative permittivity εr and conductivity σ in S/m) under different frequencies of operation


Figure 2: SAR relation with length
of the Tower at 900MHz
Figure 3: SAR relation with length
of the Tower at 1800MHz

Fig. (2) and (3) represent the levels of energy absorbed by skin, bone and brain at various distances from the tower as a function of operating frequency. Fig. (2) shows that skin absorbs the maximum energy, followed by bone and brain, at distances close to the tower. Figures (2) and (3) also show that the SAR absorbed by the body increases with increase in operating frequency.
V. INFLUENCE OF RF SHIELD
One way of reducing the SAR in the human head is by attaching an RF shield made of ferromagnetic material to the front side of the mobile phone [5]. The reduction of SAR is due to the suppression of surface currents on the front side of the mobile phone box. When an EM wave travelling through free space encounters a different medium, the wave will be reflected, transmitted and/or absorbed. EMW absorption materials absorb the energy in electromagnetic waves as magnetic loss and finally convert that energy to heat.
Generally the dimension of the RF shield is specified as X x Y x Z, where X is the thickness of the shield, Y is the width of the shield and Z is the length of the shield. Experiments were done to study the variation in SAR once the dimensions are changed [5].



Figure 4: SAR vs Ferrite
thickness
Figure 5: SAR vs Variation of z
Cellular base stations transmit signals (radiation) continuously, even when nobody is using a phone.

Figure 6: SAR as a function of hand set inclination.
One reported study [4] shows that the energy absorbed by the head from the mobile phone varies slightly with variation in phone inclination.
Another study shows that if the phone is kept at a slight distance away from the head, the SAR will reduce slightly. This variation is plotted below.

Figure 7: Distance from the Head to Mobile Phone
VI. CONCLUSION
From the above discussion the authors conclude that SAR cannot be specified as a single value, as is done by international organizations at present. SAR depends on a number of factors, such as distance from the tower, properties of the body parts, phone inclination, distance between the mobile phone and the head, and the RF shield. Any variation in any of these parameters will directly affect SAR. The authors therefore conclude that SAR should always be mentioned along with its boundary conditions.












REFERENCES
[1] D. Sardari and S. Khalatbari, "Calculating SAR in two models of the human head exposed to mobile phone radiation," Proceedings of the Electromagnetics Symposium, Cambridge, USA, 2009.

[2] Bomson Lee and M. Jung, "Evaluation of SAR reduction for mobile communication handsets," IEEE APS, Vol. 1, 2009, pp. 444-448.

[3] W. R. Adey, "Tissue interaction with non-ionizing electromagnetic fields," Phy Rev (USA), 61 (2007), 485.

[4] M. S. Bhatia and L. K. Ragha, "Numerical evaluation of SAR for compliance testing of personal wireless devices," International Journal of Recent Trends in Engineering, Vol. 2, No. 6, November 2009, pp. 69-74.

[5] P. Pinho and J. Casaleiro, "Influence of the human head in the radiation of a mobile antenna," PIERS Proceedings, Moscow, August 18-21, 2009.
Real - Time Monitoring Touch Screen System
Achila T.S.
Computer Science and Engineering,
Vivekanandha college of Engineering for Women,
Erode, Tamilnadu, India
e-mail:achilats@gmail.com

Abstract The real-time monitoring system consists of various sensors used to measure temperature, pressure and humidity. These parameters are measured and sent to an embedded web server in a microcontroller for storage. The parameters are monitored through a local display mechanism using a touch screen system, and various administrative settings can also be performed. The parameters can also be monitored through a web service which displays the values. Remote login into the local machine is possible in order to control the system.
Keywords- sensors; real-time systems; embedded machines;
web server; 18F87J50 microcontroller; S3C2440 ARM
processor; touch screen; database; data acquisition; analog to
digital conversion; telemetry
I. INTRODUCTION
Today, user-friendly touch screen monitoring systems are widely accepted globally. Mouse and keyboard interfaces have been largely replaced since this invention, which in turn increases the speed, efficiency and ease of use of these systems.
The real-time monitoring touch screen system helps us to monitor various parameters like temperature, pressure and humidity in the environment where it is installed. The sensors produce analog values which are converted to digital form using a microcontroller that provides analog to digital conversion. The 18F87J50 microcontroller can be used for this purpose, and the collected values are sent to an Ethernet port. The values are stored in a small database as an XML file.
The values can then be distributed into two streams, a local monitoring touch screen system and a web server. Both systems use logins to enforce access control for the users of the system, and both can have options to configure the settings of each user. Remote login is possible to control the local system. The retrieved data can be saved in a database for further use.
II. BACKGROUND
Telemetry is the science and technology of automatic
measurement and transmission of data by wired or wireless
means from remote sources, as from spacecraft, to receiving
stations. It is the process by which an objects characteristics
are measured (such as velocity of the spacecraft), and the
results are transmitted to a distant station where they are
displayed, recorded, and then analyzed.
The embedded systems which use micro-controller as the
main controller has been widely used in different fields, but
most of these applications are still in the low-level stage of
stand-alone use of the embedded system. The evolution of
telemetry starts from point to point link to multi point link.
Some telemetry systems use memory cards which are
portable for storage of data and monitor values. This will
consume more time for processing the data received from the
system. The limited storage capacity of these systems may
cause loss of captured values which is of importance. Most
of the existing systems use short range telemetry consists of
radio frequency, Bluetooth and Infra Red rays. Point to Point
link may use some wired medium such as Optical fiber. The
connectivity of these systems are very limited and may have
low memory capacity. Thus the old records can not be
retrieved for future analysis. Some of the systems can not
control remotely. There may not be any settings for every
user of these systems. So accessibility is limited to few users
in these kinds of telemetry systems.
Sometimes the maintenance cost is comparatively high.
Some systems are not bothered about maintenance and they
can serve for certain number of years. Most of the systems
use keyboard and mouse as controlling consoles. They use
big monitors for local monitoring. These may increase power
consumption and maintenance cost. Some systems handle
both low priority and high priority tasks in the same manner.
This may cause some severe problems to the systems. The
data transferring mechanism of some systems may not be
tamper proof and it has to undergo some severe attacks like
virus threats, data loss and data modification.
III. SYSTEM DESCRIPTION
The Real time monitoring touch screen system consists
of a network of sensors, conditioning processes, data
acquisition systems, local configuration and display systems,
a web server, remote login Graphical User Interface (GUI), a
settings file and a database. The architecture is shown in
Fig.3.
A. Local Sensor Network
The local sensor network consists of sensors for temperature, pressure and humidity measurement. Sensors such as the LM35, MPX5050 and SY-HS-220 can be utilized for this purpose. The routine to read the parameters can be written using a microcontroller programmer and embedded into the 18F87J50 microcontroller. The program consists of two processes: sending the values to an embedded server and receiving alarm setting values from the user interface. The microcontroller converts the analog values to digital values, which takes some time. The values are stored in a small database as an XML file and can be displayed later. The Ethernet port of the above microcontroller can be used to get the values. We can also assign an internet protocol address to this microcontroller and retrieve the values as hypertext markup language. This will serve as an embedded web server.
B. Conditioning
This part takes care of the values coming from the sensors. Various kinds of signals are produced by the sensors, and they should be made uniform before analog to digital conversion. Each sensor has a different range of outputs. The signals should be converted into units of the parameters, such as degrees Celsius or Pascal.
Conditioning can be achieved through four methods.
Linearity control - Linearity control permits narrowing or expanding the signal.
Scaling - Scaling changes the amplitude of the signal received from the sensor. Usually the amplitude diminishes in scaling (a minimal scaling sketch is given after this list).
Filtering - Signal filtering is often used in eddy current testing to eliminate unwanted frequencies from the receiver signal. While the correct filter settings can significantly improve the visibility of a defect signal, incorrect settings can distort the signal presentation and even eliminate the defect signal completely.
Unifying - The unifying process converts all types of information obtained from the sensors into a single type of information.
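As a minimal sketch of the scaling step, the Python snippet below converts a raw ADC count into degrees Celsius. It assumes an LM35-style sensor (about 10 mV per degree Celsius) read through a 10-bit ADC with a 3.3 V reference; the actual board configuration may differ.

# Sketch of the scaling conditioning step: raw ADC count -> degrees Celsius.
# Assumes a 10-bit ADC, a 3.3 V reference and an LM35-style 10 mV/C output.
ADC_BITS = 10
V_REF = 3.3            # volts (assumed reference)
MV_PER_DEG_C = 10.0    # LM35 nominal scale factor

def counts_to_celsius(raw_count):
    volts = raw_count * V_REF / (2 ** ADC_BITS - 1)
    return volts * 1000.0 / MV_PER_DEG_C

print(counts_to_celsius(93))   # about 30 C under these assumptions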
C. Touch screen display
The touch screen graphical user interface can be developed in Qt. This can be implemented on an ARM9 board using Samsung's S3C2440 processor. Using Qt, we can access the embedded web server of the microcontroller and retrieve the XML file which contains the values of the various parameters. The network access manager of Qt can be used to get the XML file. The XML file is then parsed and the various values are retrieved. The parsing can be done through DOM or SAX.
The parsed values are displayed on the screen using various dials and meters. The alarms and other critical parameters must be displayed. A timer is used to refresh the display: it reads the XML file from the embedded web server and parses it at specific intervals. The parsed values are displayed and the dials are changed accordingly.

Figure 1. Working of touch screen display
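The loop in Fig. 1 can be sketched in plain Python (rather than Qt) as below; the device address and the XML tag names are assumptions made only for the example.

# Sketch of the Fig. 1 cycle: fetch the XML file from the embedded web
# server, parse it and display the values; repeat at a fixed interval.
import time
import urllib.request
import xml.etree.ElementTree as ET

DEVICE_URL = "http://192.168.1.50/readings.xml"   # assumed address of the board

def poll_once():
    with urllib.request.urlopen(DEVICE_URL, timeout=5) as resp:
        root = ET.fromstring(resp.read())
    for tag in ("temperature", "pressure", "humidity"):   # assumed tag names
        node = root.find(tag)
        if node is not None:
            print(tag, node.text)

while True:            # the Qt implementation would use a QTimer instead
    poll_once()
    time.sleep(10)     # refresh interval in seconds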
D. Configuration
The system administrator can create users and control the
access to the system. This will be stored in a settings
database. The administrator can update the changes in their
duties too.
E. Web server
A web server is used to connect the system to the external world, so the system is accessible from anywhere at any time by numerous users. Users can log on to the system and monitor the parameters. If any of the parameters exceeds the prescribed level, the local system is triggered to take the necessary actions through the remote login facility. The web server acts as a client and a server simultaneously: the XML file is retrieved from the embedded web server by acting as a client, and using PHP scripts the retrieved file can be written into the web server. This is shown in Fig. 2.
The embedded web server alone could also be used; the problem with this is the lack of memory in the microcontroller device. The database would then have to be backed up continuously at frequent intervals, which may increase the load on the embedded web server; this can be resolved by using a separate server.
F. Remote login
A secure remote login facility is provided with the system to control it. Authorized personnel can log in to the system to take the necessary actions. A graphical user interface is provided for remote login and is easy to use for people who are unaware of the commands for remote login. The secure shell (SSH) of the Linux operating system can be used for remote login.
G. Data base
A database is also provided with the system to view past recordings. We can also retrieve the alarms triggered in a particular period. The database can be developed with SQLite. The database can be used in future as a reference in applications like weather monitoring or agricultural research. Data mining is also possible with this data.
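A minimal sketch of such a database, using Python's built-in sqlite3 module, is given below; the table layout and column names are assumptions for illustration.

# Sketch of logging readings into SQLite for later retrieval and data mining.
import sqlite3
from datetime import datetime

con = sqlite3.connect("monitor.db")
con.execute("""CREATE TABLE IF NOT EXISTS readings (
                 ts TEXT, temperature REAL, pressure REAL, humidity REAL)""")

def log_reading(temperature, pressure, humidity):
    con.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
                (datetime.now().isoformat(), temperature, pressure, humidity))
    con.commit()

log_reading(30.1, 101.3, 54.0)
# Past recordings for a period can be retrieved with a range query:
print(con.execute("SELECT * FROM readings WHERE ts >= ?", ("2011-01-01",)).fetchall())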
IV. APPLICATIONS AND FUTURE DEVELOPMENTS
There are numerous applications for this system. It can be used in power plants, weather monitoring systems, farm houses, spacecraft, laboratories, the clinical research industry, military applications and secure vaults. For each application it should be customized, and sensors must be added to or removed from the system as required: for example, motion detection sensors, sensors to detect various gases, infrared sensors or other light detectors, and voltage or current measurement equipment. We can control the environment using appliances like air conditioners, fans, exhausts and switches that regulate the electrical supply to various equipment. We can also add closed-circuit monitors and live cameras for getting visuals of critical locations. In future, the system could be fully automated using artificial intelligence.
ACKNOWLEDGMENT
This work is supported by Quark Cybernetics and
Fundamental Research Laboratories.
REFERENCES
[1] Izabella A. Gieras (2003), "The Proliferation of Patient-Worn Wireless Telemetry Technologies within the U.S. Healthcare Environment", 4th Annual IEEE Conf. on Information Technology Applications in Biomedicine, UK, pp. 295-298.
[2] Linhua Ding (2010), "Study of Embedded Linux Surveillance System Using TCP/IP Network", 2010 International Conference on Computer Design and Applications (ICCDA 2010), Vol. 5, pp. V5-389 - V5-392.
[3] Zhang Qinghui, Li Xudong (2010), "GUI Design of Grain Monitoring and Control System Based on QT", 2nd International Conference on Signal Processing Systems (ICSPS), Vol. 2, pp. V2-20 - V2-22.

Figure 2. Working of Web Server

Figure 3. System Architecture
3D MAPPING OF ENVIRONMENT USING 2D LASER IMAGES

Jagadish Kota
Electrical Engineering,
IIT Roorkee,
Roorkee, India
jaggupee@iitr.ernet.in

Barjeev Tyagi
Electrical Engineering,
IIT Roorkee,
Roorkee, India
btyagfee@iitr.ernet.in
Abstract-In this paper an algorithm for developing three dimensional (3-d) maps from two dimensional (2-d) laser images is suggested. 3-d data has been collected using a standard 2-d laser scanner. The acquired data is filtered and structured using a Gaussian filter and K-dimensional (KD) tree methods respectively. The iterative closest point algorithm has been used to register the images and reduce errors. The developed algorithm has been tested on an indoor environment in real time.
Key words: Image processing, Computer vision, ICP algorithm, scan matching, SVD, real time system, robotics, KD tree

I. INTRODUCTION
Mobile robotics is one area where the application of laser images is indispensable for fast and safe navigation. Laser range images have substituted camera images in many fields: the latter represent the environment based on the intensity at each pixel, while the former give the intensity, depth and position of each point, resulting in clearer and more accurate images. Because of occlusion and limited sensor range, accurate methods of combining multiple range images into a single model are required. To create a correct and consistent model, these scans have to be merged into one coordinate system. Merging the scans to create a single model is called data association or registration [11]. This is a necessary step for detection and modeling of objects, developing maps [12, 13] and even in simultaneous localization and mapping techniques (S.L.A.M.) [10]. Here this step is achieved using the scan matching method: combining two images with available common points is called scan matching.

If the robot carrying the 3D scanner is precisely localized, then registration could be done directly based on the robot pose. Although the Global Positioning System (G.P.S.) and Inertial Measurement Units (I.M.U.) are often used to calculate approximate displacements, they are not accurate enough to reliably produce precise positioning. Due to the imprecise robot sensors, self-localization is erroneous, so the geometric structure of overlapping 3D scans has to be considered for registration. In addition, there are many situations (tunnels, tall buildings) which obstruct GPS reception and further reduce accuracy. To deal with this shortcoming, most applications rely on scan matching of range data to refine the localization. Thus scan matching is useful for building globally consistent maps [10,14].
The architecture of a general 3-d data acquisition system using a 2-d laser scanner is explained in Section II. The 2-D scanner is mounted on a platform which rotates precisely along an axis to collect range data; servo and stepper motors are dedicated position control motors and are useful for this purpose. The whole system is connected to and controlled by a computer, and the collected data is stored in the computer. The refined data can be structured and stored in an optimized KD tree format as given in Section III; the average time taken for a closest point query using a KD tree is of order log n [1]. In Section IV the ICP algorithm is discussed: out of the two sets, if one is considered as the model set and the other as the data set, then it detects the corresponding closest points in both sets [3]. The G.P.S. and I.M.U. provide the required position and orientation of the robot in the form of rotation (R) and translation (T) matrices. Using the corresponding points the data set is transformed and registration is done. Singular Value Decomposition (SVD), a single-iteration procedure to reduce registration errors [4], is discussed in Section V. Finally, Section VI demonstrates this approach on a real-time environment.

II 3-D DATA ACQUISITION ARCHITECTURE
At present, most 3D perceptual systems are designed based on 2D laser radar. The time difference between the applied and received light rays, together with the incident angle, gives the 2-d co-ordinates of the point in that plane. However, 2D laser rangers are not sufficient for navigation and mapping since they cannot detect objects that are either above or below the detection line. The scanner is therefore rotated along either the horizontal or the vertical axis using a motor to acquire data in the other dimension. A block diagram of the system to acquire 3-d data using a 2-d laser scanner is shown in Fig. 1. The scanner is fixed to a platform and it is rotated by the desired angle with a stepper or servo motor; thus the resolution of the image depends on the step angle of the motor. The scanner and the motor are connected to the computer through data transmission and control units respectively. The control unit outputs the desired pulses to the stepper motor, and the data transmission unit acts as a buffer between the scanner and the computer. Thus the whole system is controlled using a computer.
Data is transmitted to and from the scanner to the computer through a duplex connection, but for the stepper or servo motor simplex communication is enough.

Fig. 1 Block diagram to acquire 3d data
The 3D co-ordinates of the points are given by equation (1):

x = r · cos(α)
y = r · sin(α) · cos(β)        (1)
z = r · sin(α) · sin(β)

where r is the ranging data, and α and β are the horizontal and pitching scanning angles [15]. These are co-ordinates in the right-hand system: the x-y plane is parallel to the ground, the z axis is the height above the ground and the y axis points away from the laser in the forward direction.
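A small numeric check of Eq. (1), written here as a Python sketch, converts one (r, α, β) laser measurement into the right-handed (x, y, z) co-ordinates; the angles are assumed to be in radians.

# Numeric check of Eq. (1): (range, horizontal angle, pitch angle) -> (x, y, z).
import math

def to_xyz(r, alpha, beta):
    x = r * math.cos(alpha)
    y = r * math.sin(alpha) * math.cos(beta)
    z = r * math.sin(alpha) * math.sin(beta)
    return x, y, z

# With zero pitch (beta = 0) the point stays in the x-y ground plane:
print(to_xyz(2.0, math.radians(60), 0.0))                 # (1.0, 1.732, 0.0)
print(to_xyz(2.0, math.radians(60), math.radians(30)))    # point lifted off the ground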

III. OPTIMIZED K-D TREE

The k-d tree was introduced by Bentley [2]. A k-d tree is a generalization of binary search trees to k dimensions. Both leaf and non-leaf nodes are present, similar to binary trees: leaf nodes point to null pointers and each non-leaf node points to two KD sub-trees. Further, every node contains the limits of the represented point set. The average time taken for a closest point search is of order log n [1].
The objective of optimizing k-d trees is to reduce the expected number of visited leaves. Three parameters are adjustable, namely the direction and position of the split axis as well as the maximal number of points in the buckets. Splitting the points at the median ensures that every KD tree entry has the same probability. The split axis should be oriented perpendicular to the longest axis to minimize the amount of backtracking; for this the data is divided along the axis with the largest deviation. Friedman and colleagues prove that a bucket size of 1 is optimal [1].
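For illustration, the sketch below builds a k-d tree over one point set and answers closest-point queries for another, using SciPy's cKDTree as a stand-in for the optimized tree described above (it is not the authors' implementation).

# Closest-point queries with a k-d tree (SciPy's cKDTree used for illustration).
import numpy as np
from scipy.spatial import cKDTree

model = np.random.rand(10000, 3)     # (x, y, z) points of the first scan
data = np.random.rand(500, 3)        # points of the second scan

tree = cKDTree(model)                # build the tree once
dists, idx = tree.query(data, k=1)   # nearest model point for every data point
print(dists.mean(), model[idx[0]])   # average closest distance, one matched point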

A schematic diagram of the generalized KD tree is shown in Fig. 2. Initially the data is divided along the 1st dimension, then along the 2nd and so on until the k-th level; this process is then repeated until the data is exhausted. Fig. 3 shows the geometric view of an optimized KD tree in two dimensions for random data. Circles are non-leaf nodes, squares are leaf nodes and the star mark is the test point. The black circle shows a ball for which the ball-within-bounds test has failed [1].

Fig. 2 KD tree schematic diagram

Fig. 3 Geometric model of KD tree for 2d data

IV. THE ICP ALGORITHM
The ICP algorithm was developed by Besl and McKay [3] and is usually used to register two given geometries into a common coordinate system. This algorithm can be applied to the following types of geometrical data:
1. Point sets
2. Line segments
3. Triangles
4. Implicit and parametric curves

The algorithm calculates the registration iteratively. As the output from the laser ranger is the 3-d co-ordinates of each point at two different positions of the laser ranger, two sets of point data are collected.

Let two point sets, having N_m and N_d points respectively, each represent a fraction of the environment in a different local co-ordinate system. A few points are present in both sets. Both sets should then be aligned to a single global co-ordinate system using the common points such that a map of the whole environment is developed with minimum error. Thus in each iteration step the algorithm selects the closest points as correspondences and calculates the transformation, i.e., rotation and translation (R, T). The expression for the error is given in eq. (2):

E(R, T) = Σ_{i=1}^{N_m} Σ_{j=1}^{N_d} w_ij · || m_i − (R·d_j + T) ||²        (2)

where N_m and N_d are the total number of points in the model set M and data set D respectively, and w_ij are the weights for a point match. m_i and d_j are the i-th and j-th data points in the first and second sets respectively.

The weights are assigned as follows:
w_ij = 1, if m_i is the closest point to d_j,
w_ij = 0 otherwise.
In the literature many methods have been suggested to update the R and T matrices to minimize the error: the orientation matrices are updated using SVD by Arun [4], a unit quaternion approach by Horn [6], orthonormal matrices by Horn [5] and dual quaternions by Walker [7]. The R and T matrices are calculated using the SVD method in this paper. Besl and McKay [3] proved that each iteration of ICP leads towards a minimum; they considered the quaternion approach, where the slope of the error curve is high in the first iterations and becomes almost constant in the last iterations, approaching the minimum slowly. In SVD, on the other hand, each registration update is a single-step process which gives the rotation and translation matrices directly.
Many methods have been proposed to increase the speed of the algorithm, mainly when the data is abundant, as with laser range data [16]. Random sampling, uniform sampling and normal-space sampling are the chief ways to improve speed.

A. CLOSEST POINT SEARCH IN ICP ALGORITHM:
Each iteration requires correspondence points as an input. The closest point search can be performed by the brute force method: for each data point, the Euclidean distances from all model points are calculated using eq. (3), and the smallest of these distances gives the corresponding point.

d(p, q) = sqrt( Σ_{i=1}^{n} (x_p^i − x_q^i)² )        (3)

where p and q are two points in n dimensions, and x_p^i and x_q^i are the projections of the points p and q on the i-th axis respectively. If m points are present in each of the two sets, then the time taken to find the closest points for a set is of order m². Different tree structures like the k-d tree, octant tree, A-kd tree, etc., are used for efficient storage of vast data and fast retrieval of the closest point for a given test point. In this work the KD tree structure is used.
V. SINGULAR VALUE DECOMPOSITION
In the transformation step, rotation and translation matrices are required to transform one set of co-ordinates into the other. This can be done by solving a set of linear algebraic equations represented in matrix form. One way of solving it is to calculate the eigenvectors of one of the data sets and then convert that data into the other using the eigenvectors. But eigenvectors can be calculated only for square matrices; the matrices obtained in this work are rectangular, therefore singular vectors are used instead of eigenvectors. If X is a matrix, the singular vectors of X are the eigenvectors of XᵀX and XXᵀ.
Let two sets with N_m and N_d data points represent the same object from different views, and let P points common to both sets be close to each other. Let m̄ and d̄ be the means of the common points in the two sets, q_i and q'_i be the orientations of the points with respect to the respective means, and H be the error or covariance matrix. These variables can be represented as given below:

m̄ = (1/P) Σ_{i=1}^{P} m_i        (4)
d̄ = (1/P) Σ_{i=1}^{P} d_i        (5)
q_i = m_i − m̄        (6)
q'_i = d_i − d̄        (7)
H = Σ_{i=1}^{P} q'_i q_iᵀ        (8)

From the SVD of H, three matrices U, A and V are obtained, where U and V are 3x3 orthonormal matrices and A is a 3x3 diagonal matrix with non-negative elements. The rotation and translation matrices are obtained using eq. (9) and eq. (10) respectively:

R = V Uᵀ        (9)
T = m̄ − R d̄        (10)
The steps involved in ICP with an optimized KD tree are given below.
1. For the two sets, construct an optimized KD tree from the first set.
2. Transform the data points in the second set using the present rotation matrix R and translation matrix T.
3. For each point in the second set, apply a closest point query and find the corresponding closest point in the first set.
4. Locate the common data points in both sets using a distance criterion and calculate the total error for the correspondence points.
5. If the error e < ε, where ε is a very small value, then go to step 7, otherwise go to step 6.
6. Find the new rotation and translation matrices R and T using Singular Value Decomposition. Go to step 2.
7. The rotation matrix (R) and translation matrix (T) are the required matrices; store them.

The flow chart of the algorithm is given in Fig. 4: the two data sets Nm and Nd are input, the second set is transformed using R·Nd + T, the neighbour search finds the common points in both sets, the total error of the transformation is calculated, and if the error is not below ε a new R and T are found with SVD and the loop repeats; otherwise the algorithm stops.

Fig. 4 Flow chart of the ICP algorithm
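A compact Python sketch of steps 1-7 is given below. It uses NumPy and SciPy's cKDTree, and for brevity every nearest neighbour is accepted as a correspondence (no distance gating), so it is an illustration of the procedure rather than the exact implementation used here.

# Compact sketch of steps 1-7: nearest-neighbour correspondences (k-d tree),
# SVD update of R and T, repeat until the mean error stops improving.
import numpy as np
from scipy.spatial import cKDTree

def icp(model, data, max_iter=50, eps=1e-6):
    R, T = np.eye(3), np.zeros(3)
    tree = cKDTree(model)                      # step 1
    prev_err = np.inf
    for _ in range(max_iter):
        moved = data @ R.T + T                 # step 2: transform second set
        dists, idx = tree.query(moved)         # step 3: closest points
        err = np.mean(dists ** 2)              # step 4: total error
        if abs(prev_err - err) < eps:          # step 5: converged?
            break
        prev_err = err
        m, d = model[idx], moved
        mc, dc = m.mean(axis=0), d.mean(axis=0)     # eq. (4), (5)
        H = (d - dc).T @ (m - mc)                   # eq. (8)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T                         # eq. (9)
        T_step = mc - R_step @ dc                   # eq. (10)
        R, T = R_step @ R, R_step @ T + T_step      # accumulate (step 6)
    return R, T

# Usage sketch: R, T = icp(first_scan_xyz, second_scan_xyz)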
VI. RESULTS & ANALYSIS
Range data of a corridor is considered for the analysis [8]. A 2-d scanner is rotated along the horizontal axis and data is collected from two different positions. The data set consists of position, intensity and depth files. A real image of the environment is shown in Fig. 5.

Fig. 5 Corridor

A. Data Reduction and Filtering:
A large amount of data is obtained using the laser scanner, and the acquired data contains many singular points. These singular points are generated due to refraction by transparent objects and by the spaces in between any two objects. Their presence leads both to a noisy image (due to Gaussian noise) and to wrong transformation matrices.
Two fast filters are utilized to refine the data, one to eliminate the singular points and the other to reduce the data and remove the Gaussian noise. A 3x3 mask is moved from left to right and then top to bottom; if any data point is very far (>200 cm) from the other data points, it is replaced by the median value of the mask. Points which are very close (<15 cm) to each other are grouped and the group is replaced by the mean of the group, which removes the Gaussian noise. Fig. 6(a) shows the image of the corridor with the full data of 170,000 points and 6(b) with the reduced data of 13,000 points.


Fig. 6(a) Image with full data,(b) Image with reduced data
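The singular-point part of the filter can be sketched as below: a reading is replaced by the 3x3 median of its neighbourhood when it lies far from that median. The 200 cm threshold follows the text; comparing against the local median (rather than against every neighbour) is a simplification made for the example.

# Sketch of removing singular points from a range image (values in cm).
import numpy as np
from scipy.ndimage import median_filter

def remove_singular_points(range_cm, far_cm=200.0):
    med = median_filter(range_cm, size=3)          # 3x3 mask median
    out = range_cm.copy()
    mask = np.abs(range_cm - med) > far_cm         # refraction / gap artefacts
    out[mask] = med[mask]
    return out

img = np.full((5, 5), 300.0)
img[2, 2] = 900.0                                  # one spurious reading
print(remove_singular_points(img)[2, 2])           # replaced by the local median, 300.0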

A K-D tree is constructed for the first set and the nearest neighbours of the other set are identified. The mean and standard deviation of the nearest distances are calculated, and nearest distances around the mean are taken as the basis for selecting the corresponding points. The transformation matrices are calculated from these corresponding points.
The final iteration terminates with minimum error, as proved by Besl and McKay [3]. Fig. 7(a) and 7(b) show the top and 3D views of the corridor before and after applying ICP. The image of the corridor constructed from the data at the initial pose is represented in red, and the second pose is represented in green. Both data sets are collected from the same position, but the robot is rotated about 30 degrees to the right for the second image. Fig. 7(a) shows that both images are misaligned with the initial transformation matrices R and T. In Fig. 5 the red triangle on the left side is the figure of an opened door. In the Fig. 7(b) top view, the blue circle shows that both scans are perfectly overlapped with the final R and T, i.e. after ICP.

Fig. 7 Top and 3D view of corridor (a) before (b) after scan matching
CONCLUSION
In this paper, a 3-d map of an indoor environment is developed using the iterative closest point method. The algorithm uses an optimized KD tree for data storage and closest point search. The proposed algorithm is applied to two sets of data at a time. The results show that a good 3-d map can be created with only two sets of data, and the 3-d maps can be further improved if multiple sets of data are used.

REFERENCES
[1] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209-226, September 1977.
[2] J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509-517, September 1975.
[3] P. Besl and N. McKay. A method for registration of 3D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 14(2):239-256, February 1992.
[4] K. S. Arun, T. S. Huang, and S. D. Blostein. Least square fitting of two 3-d point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(5):698-700, 1987.
[5] B. K. P. Horn, H. M. Hilden, and Sh. Negahdaripour. Closed form solution of absolute orientation using orthonormal matrices. Journal of the Optical Society of America, 5(7):1127-1135, July 1988.
[6] B. K. P. Horn. Closed form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America, 4(4):629-642, April 1987.
[7] M. W. Walker, L. Shao, and R. A. Volz. Estimating 3d location parameters using dual number quaternions. CVGIP: Image Understanding, 54:358-367, Nov. 1991.
[8] Robotic 3D scan repository, Jacobs University, Germany. (updated: Nov. 2010, accessed: Dec. 2011) http://kos.informatik.uni-osnabrueck.de/3Dscans/
[9] A. Nüchter, K. Lingemann, and J. Hertzberg. Cached k-d tree search for ICP algorithms. In 3DIM '07, Washington, DC, USA, 2007, pp. 419-426.
[10] A. Nüchter, K. Lingemann and J. Hertzberg. 6D SLAM 3D mapping of outdoor environments. Quantitative Performance Evaluation of Robotic and Intelligent Systems, Journal of Field Robotics, Volume 24, Issue 8-9, pp. 699-722, 2007.
[11] S. Thrun. Robotic Mapping: A Survey. In G. Lakemeyer and B. Nebel, editors, Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann, 2002.
[12] R. Triebel, Three-dimensional Perception for Mobile Robots, Doctorate thesis, University of Freiburg, Germany, May 2007.
[13] D. Hähnel, Mapping with Mobile Robots, Doctorate thesis, University of Freiburg, Germany, Dec. 2004.
[14] D. Borrmann, J. Elseberg, K. Lingemann, A. Nüchter and J. Hertzberg, Globally consistent 3D mapping with scan matching, Robotics and Autonomous Systems, vol. 56, pp. 130-142, 2008.
[15] R. Katz, N. Melkumyan, J. Guivant, T. Bailey, J. Nieto and E. Nebot, Integrated Sensing Framework for 3D Mapping in Outdoor Navigation, Intelligent Robots and Systems, Oct. 2006, Beijing, China, pp. 2264-2269.
[16] S. Rusinkiewicz and M. Levoy. Efficient variants of the ICP algorithm. In 3D Digital Imaging and Modeling, Jun. 2001, Quebec, Que., pp. 145.

Different Approaches of Spectral Subtraction method for Enhancing the Speech
Signal in Noisy Environments
Anuradha R. Fukane
Electronics and Telecommunication Department
Cummins college of Engineering for women
Pune, India
E-mail: anuraj110@rediffmail.com
Shashikant L. Sahare
Electronics and Telecommunication Department
Cummins college of Engineering for women
Pune, India
E-mail: shashikantsahare@rediffmail.com

Abstract Enhancement of speech signals degraded by additive background noise has received much attention over the past decade, due to the wide range of applications and the limitations of the available methods. The main objective of speech enhancement is to improve the perceptual aspects of speech such as overall quality, intelligibility and degree of listener fatigue. Among all the available methods the spectral subtraction algorithm is historically one of the first algorithms proposed for background noise reduction. The greatest asset of the spectral subtraction algorithm lies in its simplicity, but the simple subtraction process comes at a price, and more papers have been written describing variations of this algorithm that minimize the shortcomings of the basic method than of any other algorithm. In this paper we present a review of the basic spectral subtraction algorithm, the shortcomings of the basic algorithm, and different modified approaches such as spectral subtraction with an over-subtraction factor, non-linear spectral subtraction, multiband spectral subtraction, minimum mean square error spectral subtraction, selective spectral subtraction, and spectral subtraction based on perceptual properties, which minimize the shortcomings of the basic method. We then present a performance evaluation of the various modified spectral subtraction algorithms and the conclusions.
Keywords- speech enhancement; additive noise; spectral subtraction; intelligibility; Discrete Fourier Transform
I. INTRODUCTION
Speech signals from an uncontrolled environment may contain degradation components along with the required speech components. The degradation components include background noise, speech from other speakers, etc. A speech signal degraded by additive noise makes the listening task difficult for a direct listener and gives poor performance in automatic speech processing tasks like speech recognition, speaker identification, hearing aids, speech coders, etc. The degraded speech therefore needs to be processed for the enhancement of the speech components. The aim of speech enhancement is to improve the quality and intelligibility of the degraded speech signal; the main objective is to improve the perceptual aspects of speech such as overall quality, intelligibility and degree of listener fatigue. Improving the quality and intelligibility of speech signals reduces listener fatigue and improves the performance of hearing aids, cockpit communication, videoconferencing, speech coders and many other speech systems. Quality can be measured in terms of signal distortion, but intelligibility and pleasantness are difficult to measure by any mathematical algorithm. Perceptual quality and intelligibility are two measures of speech signals which are not correlated. In this study speech signal enhancement using basic spectral subtraction and modified versions of spectral subtraction, such as spectral subtraction with over-subtraction, non-linear spectral subtraction, multiband spectral subtraction, MMSE spectral subtraction, selective spectral subtraction, and spectral subtraction based on perceptual properties, has been explained in detail along with their performance evaluation.
II. METHODOLOGIES
A. Basic spectral subtraction algorithms

Speech enhancement algorithms are based on signal processing theory. The spectral subtractive algorithm is historically one of the first algorithms proposed for noise reduction [4]. Simple and easy to implement, it is based on the principle that one can estimate and update the noise spectrum when the speech signal is not present and subtract it from the noisy speech spectrum to obtain the clean speech spectrum [7]. The assumption is that the noise is additive and its spectrum does not change with time, i.e. the noise is stationary or a slowly time-varying signal whose spectrum does not change significantly between the updating periods. Let y(n) be the noise-corrupted input speech signal, composed of the clean speech signal x(n) and the additive noise signal d(n):

y(n) = x(n) + d(n)        (1)

Many speech enhancement algorithms operate in the Discrete Fourier Transform (DFT) domain [3] and model the real and imaginary parts of the clean speech DFT coefficients in different ways. Expressing y(n) in the Fourier domain, we can write

Y(ω) = X(ω) + D(ω)        (2)

Y(ω) can be expressed in terms of magnitude and phase as

Y(ω) = |Y(ω)| e^(jφ_y(ω))


Figure 1 - The general form of the spectral subtraction algorithm [4]


Where |Y(w)| is the magnitude spectrum and is the phase
spectra of the corrupted noisy speech signal. Noise spectrum
in terms of magnitude and phase spectra as

D[w] =| D[w] | e
j y

The Magnitude of noise spectrum |D(w)| is unknown but
can be replaced by its average value or estimated noise
|De(w)| computed during non speech activity that is during
speech pauses. The noise phase is replaced by the noisy
speech phase y that does not affect speech ineligibility [4].
We can estimate the clean speech signal simply by
subtracting noise spectrum from noisy speech spectrum in
equation form

Xe(w) = [|Y(w)| - |De(w)| |] e
j
(3)

Where Xe(w) is estimated clean speech signal. Many
spectral subtractive algorithms are there depending on the
parameters to be subtracted such as Magnitude spectral
subtraction Power spectral subtraction, Autocorrelation
subtraction. The estimation of clean speech Magnitude signal
spectrum is
Xe[w] = |Y[w]| - |De[w]|

Similarly for Power spectrum subtraction is

Xe[w]
2
= |Y[w]|
2
- |De[w]|
2
(4)

The enhanced speech signal is finally obtained by
computing the inverse Fourier Transform of the estimated
clean speech |Xe[w]| for magnitude. Spectrum subtractions
and |Xe[w] |
2
for power spectrum substation subtraction,
using the phase of the noisy speech signal. The more general
version of the spectral subtraction algorithms is

X
e
[]
p
= |Y[] |
p
- |D
e
[] |
p
(5)

Where P is the power exponent the general form of the
spectral subtraction, when p=1 gives the magnitude spectral
subtraction algorithm and p=2 gives the power spectral
subtraction algorithm. The general form of the spectral
subtraction algorithm is shown in figure 1. [4]

i) Shortcomings of the S.S. Algorithm

The subtraction process must be done carefully to avoid any speech distortion: if too little is subtracted then much of the interfering noise remains, and if too much is subtracted then some speech information might be removed [1]. The spectral subtraction method can lead to negative values, resulting from differences between the estimated noise and the actual noise in a frame. A simple solution is to set the negative values to zero to ensure a non-negative magnitude spectrum. This non-linear processing of the negative values is called negative rectification or half-wave rectification [4], and it ensures a non-negative magnitude spectrum as given by equation (6):

|Xe(ω)| = |Y(ω)| - |De(ω)|,  if |Y(ω)| > |De(ω)|
        = 0,                 otherwise        (6)

This non-linear processing of the negative values creates small isolated peaks in the spectrum occurring at random frequency locations in each frame. When these signals are converted to the time domain, the peaks sound like tones with frequencies that change randomly from frame to frame, i.e. tones that are turned on and off at the analysis frame rate (every 20 to 30 ms). This new type of noise introduced by the half-wave rectification process has been described as warbling and of tonal quality, and is commonly referred to as musical noise in the literature.
A minor shortcoming of the spectral subtraction algorithm is the use of the noisy phase, which produces a roughness in the quality of the synthesized speech [4]. Estimating the phase of the clean speech is a difficult task and greatly increases the complexity of the enhancement algorithm. The phases of the noise-corrupted signal are therefore not enhanced, because the presence of noise in the phase information does not contribute much to the degradation of speech quality [6]. Combating musical noise is much more critical than finding methods to preserve the original phase; for that reason, much effort has been focused on finding methods to reduce musical noise, which are explained in the next sections.
B. Spectral Subtraction with over-subtraction factor

Some modifications have been made to the original spectral subtraction method: subtracting an over-estimate of the noise power spectrum and preventing the resultant spectrum from going below a preset minimum level, the spectral floor. These modifications minimize the presence of narrow spectral peaks by decreasing the spectral excursions and thus lower the musical noise effect. Berouti [5] took this approach of subtracting an over-estimate of the noise power spectrum and preventing the resultant spectral components from going below a preset minimum spectral floor value. The algorithm is given in equation (7), where |Xe_j(ω)| denotes the enhanced spectrum estimated in frame j and |De(ω)| is the spectrum of the noise obtained during non-speech activity:

|Xe_j(ω)| = |Y_j(ω)| - α|De(ω)|,  if |Y_j(ω)| > (α + β)|De(ω)|
          = β|De(ω)|,             otherwise        (7)

where α is the over-subtraction factor and β is the spectral floor parameter. The parameter β controls the amount of remaining residual noise and the amount of perceived musical noise: if β is too small, the musical noise will become audible but the residual noise will be reduced; if β is too large, the residual noise will be audible but the musical noise is reduced. The parameter α controls the amount of speech spectral distortion: if α is too large the resulting signal will be severely distorted and intelligibility may suffer, and if α is too small noise remains in the enhanced speech signal. When α > 1, the subtraction can remove most of the broadband noise by eliminating most of the wide peaks, but the deep valleys surrounding the peaks still remain in the spectrum [1]. The valleys between peaks are no longer deep when β > 0 compared to when β = 0 [4]. The value of α suggested by Berouti [5] is in the range of 3 to 6. The influence of α was also investigated by others; Martin [4,15] suggests that α should lie between 1.3 and 2 for low SNR conditions, while for high SNR conditions a subtraction factor of less than one was suggested. Berouti found that speech processed by equation (7) had less musical noise. Experimental results showed that, for the best noise reduction with the least amount of musical noise, α should be smaller for high SNR frames and larger for low SNR frames.
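The per-frame rule of Eq. (7) can be sketched as a small function over magnitude spectra; the default α and β below are merely example values within the ranges discussed above.

# Sketch of the over-subtraction rule of Eq. (7) for one frame of magnitudes.
import numpy as np

def oversubtract(noisy_mag, noise_mag, alpha=4.0, beta=0.02):
    est = noisy_mag - alpha * noise_mag
    floor = beta * noise_mag                                  # spectral floor
    return np.where(noisy_mag > (alpha + beta) * noise_mag, est, floor)

# e.g. oversubtract(np.abs(np.fft.rfft(frame)), noise_estimate)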
C. Non linear Spectral Subtraction (NSS)

NSS was proposed by Lockwood and Boudy [8]. NSS is basically a modification of the spectral subtraction with over-subtraction factor algorithm. In NSS the assumption is that noise does not affect all spectral components equally: certain types of noise may affect the low frequency region of the spectrum more than the high frequency region. Because of this assumption, a frequency-dependent subtraction factor can be used for different types of noise in NSS. Since the subtraction factor is frequency dependent, the subtraction process becomes non-linear, hence the name NSS. Larger values of the subtraction factor are subtracted at frequencies with low SNR levels and smaller values are subtracted at frequencies with high SNR levels. The subtraction rule used in the NSS algorithm has the following form:

|Xe(ω)| = |Ȳ(ω)| - α(ω) N(ω),  if |Ȳ(ω)| > α(ω) N(ω) + β|D̄e(ω)|
        = β|Ȳ(ω)|,              otherwise        (8)

where β is the spectral floor, set to 0.1 in [8], |Ȳ(ω)| and |D̄e(ω)| are the smoothed estimates of the noisy speech and noise respectively, α(ω) is a frequency-dependent subtraction factor and N(ω) is a non-linear function of the noise spectrum, where

N(ω) = max(|D̄e(ω)|)        (9)

The N(ω) term is obtained by computing the maximum of the noise magnitude spectra |D̄e(ω)| over a specific number of past frames. The NSS algorithm was successfully used in [8] as a pre-processor to enhance the performance of speech recognition systems in noise.
D. MMSE Spectral Subtraction Algorithm

The minimum mean square error (MMSE) spectral subtraction algorithm was proposed by Sim et al. [11]. It is a method for selecting the subtractive parameters in the minimum mean square error sense [17,18]. Consider a general version of the spectral subtraction algorithm:

|Xe(ω)|^p = α_p(ω) |Y(ω)|^p - β_p(ω) |De(ω)|^p        (10)

where α_p(ω) and β_p(ω) are the parameters of interest, p is the power exponent and |De(ω)| is the average noise spectrum obtained during non-speech activity. The parameters can be determined by minimizing the mean square of the error spectrum

e_p(ω) = |X(ω)|^p - |Xe(ω)|^p        (11)

where |X(ω)| is the clean speech spectrum, assuming an ideal spectral subtraction model, and |Xe(ω)| is the enhanced speech. Here the assumption is that the noisy speech spectrum consists of the sum of two independent spectra, the clean speech spectrum |X(ω)|^p and the true noise spectrum |D(ω)|^p, where p is constant. Considering p = 1 and minimizing the mean square of the error spectrum of equation (11) with respect to α_p(ω) and β_p(ω), we get the following optimal subtractive parameters [4]:

α_p(ω) = γ_p(ω) / (1 + γ_p(ω))        (12)

β_p(ω) = α_p(ω) [1 - γ_p^(-p/2)(ω)]        (13)

where γ_p(ω) is a spectral SNR-dependent term (see [4] for its definition).


E. Multiband Spectral Subtraction Algorithm(MBSS)

In the MBSS approach [9,4] the speech spectrum is divided into N overlapping bands and spectral subtraction is performed independently in each band. The process of splitting the speech signal into different bands can be performed either in the time domain by using band-pass filters or in the frequency domain by using appropriate windows. The estimate of the clean speech spectrum in the i-th band is obtained by [9]:

|Xe_i(ω_k)|² = |Y_i(ω_k)|² - α_i δ_i |De_i(ω_k)|²,   b_i < ω_k < e_i        (14)

where ω_k = 2πk/N, k = 0, 1, ..., N-1, are the discrete frequencies, |De_i(ω_k)| is the estimated noise power spectrum obtained during speech-absent segments, α_i is the over-subtraction factor of the i-th band and δ_i is an additional band subtraction factor that can be individually set for each frequency band to customize the noise removal process. b_i and e_i are the beginning and ending frequency bins of the i-th frequency band. The band-specific over-subtraction factor is a function of the segmental SNR_i of the i-th frequency band and is computed as follows [4]:

α_i = 4.75,                  SNR_i < -5
α_i = 4 - (3/20)·SNR_i,      -5 ≤ SNR_i ≤ 20
α_i = 1,                     SNR_i > 20

The values for δ_i are set to

δ_i = 1,      f_i < 1 kHz
δ_i = 2.5,    1 kHz < f_i < (Fs/2) - 2 kHz
δ_i = 1.5,    f_i > (Fs/2) - 2 kHz

where f_i is the upper frequency of the i-th band and Fs is the sampling frequency in Hz. The main difference between the MBSS and the NSS algorithm is in the estimation of the over-subtraction factors: the MBSS approach estimates one subtraction factor for each frequency band, whereas the NSS algorithm estimates one subtraction factor for each frequency bin [4].
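As a worked illustration of the piecewise rule above, the helper below evaluates the band over-subtraction factor α_i from the segmental SNR (in dB) of band i.

# Band-specific over-subtraction factor alpha_i as a function of SNR_i (dB).
def band_alpha(snr_db):
    if snr_db < -5.0:
        return 4.75
    if snr_db <= 20.0:
        return 4.0 - (3.0 / 20.0) * snr_db
    return 1.0

for snr in (-10, -5, 0, 20, 30):
    print(snr, band_alpha(snr))    # 4.75, 4.75, 4.0, 1.0, 1.0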



F. Selective Spectral Subtraction Algorithm

All previously mentioned methods treated all speech
segments equally, making no distinction between voiced and
unvoiced segments. Due to the spectral differences between
vowels and consonants [4] several researchers have proposed
algorithms that treated the voiced and unvoiced segment
differently. The resulting spectral subtractive algorithms
were therefore selective for different classes of speech
sounds [4]. The two band spectral subtraction algorithm was
proposed in [13]. The incoming speech frame was first
classified into voiced or unvoiced by comparing the energy
of the noisy speech to a threshold. Voiced segments were
then filtered into two bands, one above the determined cutoff
frequency (high pass speech) and one below the determined
cutoff frequency (low pass speech). Different algorithms
were then used to enhance the low passed and high passed
speech signals accordingly. The over subtraction algorithm
was used for the low passed speech based on the short term
FFT. The subtraction factor was set according to short term
SNR as per [5]. For high passed voiced speech as well as for
unvoiced speech, the spectral subtraction algorithm was
employed with a different spectral estimator [4].
A dual excitation Model was proposed in [3] for speech
enhancement. In the proposed approach, speech was
decomposed into two independent components voiced and
unvoiced components. Voiced component analysis was
performed first by extracting the fundamental frequency and
the harmonic amplitudes. The noisy estimates of the
harmonic amplitudes were adjusted according to some rule
to account for any noise that might have leaked to the
harmonics. Following that the unvoiced component spectrum
was computed by subtracting the voiced spectrum from the
noisy speech spectrum. Then a two pass system, which
included a modified Wiener Filter, was used to enhance the
unvoiced spectrum. Finally the enhanced speech consists of
the sum of the enhanced voiced and unvoiced components.
Treating voiced and unvoiced segments differently can bring
about substantial improvements in performance [4]. The
major challenge with such algorithms is making accurate and reliable voiced/unvoiced decisions, particularly at low SNR conditions.
G. Spectral Subtraction based on perceptual properties

In the preceding methods, the subtractive parameters were computed experimentally, based on short-term SNR levels [5], or obtained optimally in a mean square error sense [11]; no perceptual properties of the auditory system were considered. An algorithm proposed by Virag [14] incorporates psychoacoustic properties of the speech signal into the spectral subtraction process. The main objective of this algorithm is to render the residual noise perceptually inaudible and to improve the intelligibility of the enhanced speech by taking into account the properties of the human auditory system [4]. The method proposed by Virag [14] is based on the idea that if the estimated masking threshold at a particular frequency is low, the residual noise level might be above the threshold and will therefore be audible, so the subtraction parameters should attain their maximal values at that frequency. Similarly, if the masking threshold level is high at a certain frequency, the residual noise will be masked and will be inaudible, so the subtraction parameters should attain their minimal values at that frequency. The subtraction parameters α and β are given as

α(ω) = F_a[α_min, α_max, T(ω)]        (18)
β(ω) = F_b[β_min, β_max, T(ω)]

where T(ω) is the masking threshold, α_min and α_max were set to 1 and 6 respectively, and the spectral floor constants β_min and β_max were set to 0 and 0.02 respectively [4]. The F_a(ω) function has the following boundary conditions:

F_a(ω) = α_max,  if T(ω) = T(ω)_min
       = α_min,  if T(ω) = T(ω)_max        (19)

where T(ω)_min and T(ω)_max are the minimum and maximum values of the masking threshold estimated in each frame. Similarly, the function F_b(ω) is computed using β_min and β_max as boundary conditions. The main advantage of Virag's approach lies in the use of the noise masking thresholds T(ω) rather than SNR levels for adjusting the parameters α(ω) and β(ω). The masking thresholds T(ω) provide a smoother evolution from frame to frame than the SNR. This algorithm requires accurate computation of the masking threshold.
III. PERFORMANCE OF SPECTRAL SUBTRACTION
ALGORITHMS
The spectral subtraction algorithm has been evaluated in many studies, primarily using objective measures such as SNR improvement and spectral distances and then subjective listening tests. The intelligibility and speech quality measures reflect the true performance of speech enhancement algorithms in realistic scenarios [4]. Ideally, the SS algorithm should improve both the intelligibility and the quality of speech in noise. Results from the literature are summarized as follows.
Boll [5] performed intelligibility and quality measurement tests using the Diagnostic Rhyme Test (DRT). The results indicated that SS did not decrease speech intelligibility but improved speech quality, particularly in the areas of pleasantness and inconspicuousness of the background noise. Lim [4] evaluated the intelligibility of nonsense sentences in white noise at -5, 0, and +5 dB SNR processed by a generalized SS algorithm (eq. (5)). The intelligibility of the processed speech was evaluated for various power exponents p ranging from p = 0.25 to p = 2; the results indicated that the SS algorithm did not degrade speech intelligibility except when p = 0.25. Kang and Fransen [4] evaluated the quality of noisy speech processed by the SS algorithm and then fed to a 2400 bps LPC coder. Here the SS algorithm was used as a pre-processor to reduce the input noise level. The Diagnostic Acceptability Measure (DAM) test [19] was used to evaluate the speech quality of ten sets of noisy sentences recorded in actual military platforms containing helicopter, tank, and jeep noise; the results indicated that the SS algorithm improved the quality of speech. The largest improvement in speech quality was noted for relatively stationary noise sources [4, 2]. The NSS algorithm was successfully used in [8] as a pre-processor to enhance the performance of speech recognition systems in noisy environments. The performance of the multiband spectral subtraction algorithm [9] was evaluated by Hu and Loizou [2, 19] using formal subjective listening tests conducted according to ITU-T P.835 [20]. The ITU-T P.835 methodology is designed to evaluate speech quality along three dimensions: signal distortion, noise distortion and overall quality. The results indicated that the MBSS algorithm consistently performed the best across all noise conditions in terms of overall quality [4]. In terms of noise distortion the MBSS algorithm performed well, except in the 5 dB train and 10 dB street conditions. The algorithm proposed by Virag was evaluated in [14] using objective measures and subjective tests, and was found to give better quality than the NSS and standard SS algorithms. The low-energy segments of speech are the first to be lost in the subtraction process, particularly when over-subtraction is used. Overall, most studies confirmed that the SS algorithm improves speech quality but not speech intelligibility.
IV. CONCLUSION
Various spectral subtraction algorithms proposed for speech enhancement were described in the above sections. These algorithms are computationally simple to implement, as they involve only a forward and an inverse Fourier transform. This simplicity, however, comes at a price: the subtraction of the noise spectrum from the noisy spectrum introduces a distortion in the signal known as musical noise [4]. We presented different techniques that mitigate the musical noise distortion. Different variations of spectral subtraction have been developed over the years. The most common variation involves the use of an over-subtraction factor that controls, to some extent, the speech spectral distortion caused by the subtraction process. The spectral floor parameter prevents the resultant spectral components from falling below a preset minimum value; its value controls the amount of remaining residual noise and the amount of musical noise [4]. Different methods were proposed for computing the over-subtraction factor based on different criteria, including linear [5] and nonlinear [8] functions of the spectral SNR of individual frequency bins or bands [9] and the psychoacoustic masking threshold [14]. Evaluation of spectral subtractive algorithms revealed that these algorithms [4] improve speech quality without affecting the intelligibility of the speech signal much.
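For reference, a minimal sketch of the basic over-subtraction/spectral-floor rule summarized above is given below; the frame length, hop size and the fixed values of the over-subtraction factor and spectral floor are illustrative assumptions, and no single algorithm from the survey is reproduced exactly. The argument noise_mag is assumed to be an average noise magnitude spectrum estimated from noise-only frames.

```python
import numpy as np

def spectral_subtraction(noisy, noise_mag, alpha=4.0, beta=0.02,
                         frame_len=256, hop=128):
    """Frame-based magnitude spectral subtraction with an over-subtraction
    factor alpha and a spectral floor beta; the noisy phase is reused.

    noise_mag is an average noise magnitude spectrum (length frame_len//2 + 1)
    estimated from noise-only frames. Overlap-add normalisation is omitted
    for brevity.
    """
    window = np.hanning(frame_len)
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len + 1, hop):
        frame = noisy[start:start + frame_len] * window
        spec = np.fft.rfft(frame)
        mag, phase = np.abs(spec), np.angle(spec)
        clean_mag = mag - alpha * noise_mag                   # over-subtract the noise
        clean_mag = np.maximum(clean_mag, beta * noise_mag)   # apply the spectral floor
        clean = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame_len)
        out[start:start + frame_len] += clean * window        # overlap-add
    return out
```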
ACKNOWLEDGMENT
Anuradha R. Fukane would like to thank Dr. S. D. Bhide, Dr. Madhuri Khambete and Prof. S. Kulkarni for their guidance and support.

REFERENCES

[1] Y. Hu and P. C. Loizou, "Subjective comparison and evaluation of speech enhancement algorithms," IEEE Trans. Speech Audio Process., 49(7), pp. 588-601, 2007.
[2] H. Gustafsson, S. Nordholm and I. Claesson, "Spectral subtraction using reduced delay convolution and adaptive averaging," IEEE Trans. Speech Audio Process., 9(8), pp. 799-805, 2001.
[3] W. Kim, S. Kang and H. Ko, "Spectral subtraction based on phonetic dependency and masking effects," IEE Proc. Vision, Image and Signal Processing, 147(5), pp. 423-427, 2000.
[4] P. C. Loizou, Speech Enhancement: Theory and Practice, 1st ed., Boca Raton, FL: CRC Press, Taylor & Francis, 2007.
[5] M. Berouti, R. Schwartz and J. Makhoul, "Enhancement of speech corrupted by acoustic noise," Proc. ICASSP, pp. 208-211, 1979.
[6] K. Paliwal and L. Alsteris, "On the usefulness of STFT phase spectrum in human listening tests," Speech Communication, 45(2), pp. 153-170, 2005.
[7] S. F. Boll, "Suppression of acoustic noise in speech using spectral subtraction," IEEE Trans. Acoust., Speech, Signal Process., 27(2), pp. 113-120, April 1979.
[8] P. Lockwood and J. Boudy, "Experiments with a nonlinear spectral subtractor (NSS), hidden Markov models and the projection, for robust speech recognition in cars," Speech Communication, 11, pp. 215-228, Elsevier, 1992.
[9] S. Kamath and P. Loizou, "A multi-band spectral subtraction method for enhancing speech corrupted by colored noise," Proc. IEEE Intl. Conf. Acoustics, Speech, Signal Processing, 2002.
[10] Y. Hu, M. Bhatnagar and P. Loizou, "A cross-correlation technique for enhancing speech corrupted with correlated noise," Proc. IEEE Intl. Conf. Acoustics, Speech, Signal Processing, 1, pp. 673-676, 2001.
[11] B. Sim, Y. Tong, J. Chang and C. Tan, "A parametric formulation of the generalized spectral subtraction method," IEEE Trans. Speech Audio Process., 6(4), pp. 328-337, 1998.
[12] J. Hardwick, C. Yoo and J. Lim, "Speech enhancement using the dual excitation model," Proc. IEEE Intl. Conf. Acoustics, Speech, Signal Processing, 2, pp. 367-370, 1998.
[13] C. He and G. Zweig, "Adaptive two-band spectral subtraction with multi-window spectral estimation," Proc. IEEE Intl. Conf. Acoustics, Speech, Signal Processing, 2, pp. 793-796, 1999.
[14] N. Virag, "Single channel speech enhancement based on masking properties of the human auditory system," IEEE Trans. Speech Audio Process., 7(3), pp. 126-137, 1999.
[15] K. Lebart and J. M. Boucher, "A new method based on spectral subtraction for speech enhancement," Acta Acustica united with Acustica, vol. 87, pp. 359-366, 2001.
[16] R. Martin, "Spectral subtraction based on minimum statistics," Proc. European Signal Processing Conf. (EUSIPCO), pp. 1182-1185, 1994.
[17] R. Martin, "Speech enhancement using MMSE short time spectral estimation with Gamma distributed speech priors," Proc. IEEE Intl. Conf. Acoustics, Speech, Signal Processing (ICASSP), vol. I, pp. 253-256, 2002.
[18] Y. Ephraim and D. Malah, "Speech enhancement using minimum mean square error short-time spectral amplitude estimator," IEEE Trans. Audio, Speech, Signal Process., 6(4), pp. 328-337.
[19] Y. Hu and P. C. Loizou, "Evaluation of objective quality measures for speech enhancement," IEEE Trans. Audio, Speech, and Language Process., vol. 16, 2008.
[20] ITU-T, "Subjective test methodology for evaluating speech communication systems that include noise suppression algorithm," ITU-T Recommendation P.835, 2003.



Context Awareness

Anish Shrestha
Dept. of Computer Science and Engg.
Nepal College of Information Technology, Nepal
+977-9841472979
anish@ncit.net.np
Asst. Prof. Saroj Shakya
Dept. of Computer Science and Engg.
Nepal College of Information Technology, Nepal
+977-9841260528
saroj@ncit.net.np


ABSTRACT
Context awareness deals with linking changes in the environment to computer systems, which are otherwise static. The ambient intelligence (AmI) paradigm builds upon pervasive computing, ubiquitous computing, profiling practices, and human-centric computer interaction design, and is characterized by systems and technologies that are embedded, context aware, personalized, adaptive, and anticipatory.

This paper discusses context-awareness and the technologies that can be used to shape the future of computing, which lies in rich, context-driven user experiences. This is done by examining the background and reasoning behind AmI research. The paper also gives an overview of the technologies being explored, alongside possible applications of context awareness in computing as well as the technological and socio-ethical challenges in this field.

Keywords: ambient intelligence, ubiquitous computing.

I. INTRODUCTION

Context awareness is the need for ubiquitous systems to acquire a measure of context and adapt to the context's current values; heavily used contexts include location, neighboring entities (i.e., devices or humans) and the activities they are currently involved in, or available computational and network resources.
network resources. In order to acquire such context values,
technologies such as sensing (and related: sensor data fusion,
techniques for inference from sensor data, sensor data history
and user input) are central, and work on the matter is partnered
with the understanding of context in terms of human activities
[2]. This understanding of context is necessary in creating
truly intelligent environments for the future which are
extensible in order to keep up with the rapidly changing and
increasingly diverse contexts in which human-computer
interactions are taking place. Future revelations in the area of
context awareness have the potential to dramatically improve
the way in which ubiquitous and intelligent computing
environments support our everyday activities as well as
provide richer experiences in human-computer interaction.

Ubiquitous computing was first articulated by Mark Weiser [1], who envisioned a scenario where computational power would be available everywhere, embedded in walls, chairs, clothing, etc. Weiser's goal was to achieve the most effective kind of technology, one that is available throughout the physical environment while being effectively invisible to the user. Research into context-aware computing holds the key to
realizing this vision of seamless human-computer interaction
where all information relevant to context will be gleaned
automatically by intelligent systems. Some of the possibilities
for Ambient Intelligence to support our daily activities include
working out quick routes for car journeys and applications
which infer our shopping list by gathering information about
the contents of our fridge and combining this with the information that we are having friends over for dinner [7]. In
this sense, these applications must be truly context aware so
that they integrate seamlessly into people's daily lives and
avoid becoming ubiquitous clutter rather than useful
pervasive computing services.

Many researchers in the field of context-awareness are looking
at how context-aware computing could lead to richer human
computer interaction and the provision of more relevant and
useful services to the end user. An example of this is a
context-aware tour guide which could sense a user as they
approach specific exhibitions. Another example might be a
computer application which senses the presence and proximity
of a user's mobile devices. Suppose the user had information
about their business meetings stored on the device. A context
aware application might be able to automatically discover
such information and synchronize it with the user's calendar
information on their home desktop. The application might be
able to resolve conflicts in the user's timetables or
automatically schedule reminders for future activities. Such
automatic coordination of information to create more useful
services is just one of the many examples of how context
aware computing could lead to more intuitive human-
computer interaction.

II. CONTEXT AWARE WIRELESS
ENABLING TECHNOLOGIES

This section presents a brief overview of some of the key
enabling technologies for context-aware systems. These
technologies, if integrated and used in the correct way, could
provide a mechanism for computer systems to sense and make
sense of situational information and then perform some
actions depending on the current context. Context awareness is believed to evolve out of the interaction of three key technologies: wireless communication technologies, sensing technologies and semantic technologies.
Some of the individual technologies encompassed by these
three general categories offer researchers in the field of
context-awareness much promise in the way of creating truly
context-aware environments. The area of wireless
communications deals with the standards, devices and
concepts to enable continuous interaction between the
constantly growing numbers of computing devices that are
becoming embedded in our society as we move towards the
era of ubiquitous computing. The following information gives
an overview of Personal Area Networks (PANs), the ZigBee
communication standard and Mesh Networking.

A. Personal Area Network (PAN)

A personal area network is a computer network used
for communication among computer devices, including
telephones and personal digital assistants, in proximity to an
individual's body. Such a network could facilitate
communication between a user's mobile phone or PDA and
their office computer, for example, to automatically
synchronize important calendar events and memos stored on
either device, as in the example discussed previously. Wireless
Personal Networks (WPANs) are becoming increasingly more
popular as the proposed method of achieving this sort of
interaction between devices and are implemented with the use
of technologies such as Bluetooth and the ZigBee protocol
(discussed later). The use of Bluetooth in Wireless Personal
Area Networking is commonly seen in the integration of one
user's personal devices (e.g. a Bluetooth mouse connected to
their PC) as well as in the communication between multiple
users' devices (e.g. content sharing between mobile phones).

For example, a recent arrival in the PAN area is Skinplex, developed by a German company, IDENT Technology. Skinplex implements a PAN whereby small electronic transmitters are worn or placed very close to a person, and receivers which communicate with these transmitters may be integrated throughout that person's environment. Skinplex
works by taking advantage of the human body's ability to
transmit electric fields, using the subject's body itself as a
transmission medium to provide some sense of context
awareness in various situations involving a person's
interaction with their environment. For example, this
technology has been applied in systems to control electric
windows and convertible roof systems in cars, whereby
sensors placed in the windows, doors and roof of a car may
sense the electric field surrounding a person's hand and then
take the appropriate action to avoid injury to that person.
Technologies such as Skinplex are often referred to within the
PAN sub-category of Body Area Networking. Research into
these sorts of innovatory technologies will enhance the
seamlessness of interaction required between a user and the
physical environment in a future of context-awareness and
ambient intelligence. In a future of ubiquitous computing,
truly context-aware environments will have to recognize and
deal with the sporadic addition and removal of devices from a
local environment as the presence of certain devices and
people (possibly carrying implanted devices or as the subjects
of their own Body Area Networks) will have an effect on the
context of a given instance. For this reason, wireless mesh
networking might play an important role in providing the
backbone infrastructure for managing these constant and ever-
changing connections.

B. ZigBee

ZigBee is an open global standard in wireless sensor
technology which defines a set of protocols for use in Wireless
Personal Area Networks (WPANs). It is particularly targeted
towards wireless radio networking applications for use in the
fields of environment monitoring and control and its
specification is developed by the ZigBee Alliance, an
international consortium of companies who wish to extend the
use of ZigBee compliant technologies in real-world
applications. The ZigBee Alliance designed and developed the
standard to meet the need for a low-cost wireless radio
networking technology which adhered to the principles of
ultra-low power consumption, use of unlicensed radio bands,
cheap and easy installation, provision of flexible and
extendible networks and provision of some integrated
intelligence for network organization and message-routing.

The fact that ZigBee End Devices are power-saving, wireless
and perform only simple communications functions means
that they are relatively inexpensive and unobtrusive to
implement in a physical environment such as a home or
workplace. ZigBee End Devices can be attached to the many
interactive elements of an environment (computer screens,
light switches, heating systems, ventilation systems etc...) to
automate interaction depending on the context of a situation.
The context will be determined by intelligent software agents
(discussed later) however communications technologies such
as ZigBee along with its support for Mesh Networking and
ability to interface with other networks like PANs and BANs
could provide the necessary underlying communications
network to support this automatic interaction with the
environment depending on context.

C. Radio Frequency Identification (RFID)

Radio Frequency Identification is the process of attaching a
small wireless device to an object, be that a commercial
product or a person's clothing, in order to identify and track
that object using radio waves. The devices used in this process
are commonly referred to as RFID tags. RFID tags typically
consist of two core components: a microchip for information
processing and storage as well as control of radio frequency
modulation and an antenna for sending and receiving radio
signals. RFID tags are mainly active or passive. Active RFID
tags contain a battery which allows them to autonomously
transmit signals. Passive tags have no independent power
source and so only transmit when in range of a reader which
powers them via electromagnetic induction and initiates the
transmission. RFID tags are a prime example of the use of
micro technology in sensor networks.



Fig. 1. RFID system components
RFID is commonly used today in a range of applications from
asset tracking by large product manufacturers and distributors
to increasing ID security in passports. In supply chain
management, in particular, RFID seems set to replace the
ubiquitous bar code in a few years. RFID tags can store a
limited amount of data (around 2KB). In today's applications,
this data is usually nothing more than a unique identification
number and perhaps some general details about the product or
person that the tag is intended to track. Most RFID sensing systems (whether they use active or passive tags) work in roughly the same way (see Figure 1):

1) The microchip of an RFID tag stores some data
which is waiting to be read by a reader.
2) An RFID reader sends out electromagnetic energy, which is received by the antenna of a nearby RFID tag.
3) In the case of passive tags, this electromagnetic
energy powers the tag, initializing a radio frequency
transmission from the tag of the data stored in that
tag's microchip. Active tags perform the same
transmission except they use the power stored in their
own battery to autonomously begin sending data
when a signal is received from the reader.
4) The reader will then intercept the transmission and
interpret the frequency transmitted as meaningful
data.
In this case it is easy to see that the RFID readers act as the
sensors and the tags are a means of electronically tagging
objects in the physical world so that details about these objects
(stored on the tags) can be collected and processed by
computer systems. One fruitful application of RFID in
context-aware systems is to identify individual persons. As an
example, consider a key-less entry system for the front door of
a context aware home or building. The door should recognize
authorized inhabitants of the building and should unlock (and
perhaps even open) the door as they approach. One method of
distinguishing authorized persons from unauthorized ones in
this scenario is to use RFID as the sensor technology. If each
person had an RFID tag on their clothing or on an ID card
which they carried with them, this tag could store a unique ID
number or some distinguishing information about an
individual. As a person approaches the door, an RFID reader
placed on or near the door could request data from the person's
tag and, if the ID number or details stored on the tag verify that person as an authorized inhabitant, unlock the door.
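A minimal sketch of the door-side authorization logic in such a scenario is given below; the tag register, identifiers and function names are hypothetical and stand in for whatever reader API and actuator interface a real deployment would provide.

```python
# Hypothetical sketch of the key-less entry check described above; the tag IDs,
# the register of authorized inhabitants and the callbacks are all illustrative.
AUTHORIZED_TAGS = {
    "04A1B2C3": "resident: Alice",
    "04D4E5F6": "resident: Bob",
}

def on_tag_read(tag_id, unlock_door, raise_alert=None):
    """Called by the reader near the door whenever a tag is detected.

    If the tag ID belongs to an authorized inhabitant the door is unlocked;
    otherwise an optional alert callback is invoked and the door stays locked.
    """
    person = AUTHORIZED_TAGS.get(tag_id)
    if person is None:
        if raise_alert is not None:
            raise_alert(tag_id)
        return False
    unlock_door()                       # actuator callback provided by the door
    return True

# Example usage: on_tag_read("04A1B2C3", unlock_door=lambda: print("door unlocked"))
```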

D. Motes and Smartdust

The concept of Motes and Smartdust represents a new
paradigm in the area of distributed wireless sensor technology.
It has evolved from significant developments in the fields of
micro technology, wireless communications and computer
interaction with the physical world. The goal of Smartdust
research is to explore how an autonomous sensing,
computing and communication system can be packed into a
cubic millimeter mote... [4]. The name 'Mote' refers to a tiny
device which integrates all of the fore-mentioned features and
which has the capability of autonomously forming wireless
connections with other nearby motes in order to establish an
ad-hoc wireless sensor network. As the miniaturization of
these devices evolves, the range of possible applications for
them is continually expanding. The term Smartdust has arisen
out of the expectation that, in years to come, these integrated
wireless communication and sensor devices could be the size
of a single speck of sand or dust.

Motes could be used in analyzing the structural integrity of
buildings and bridges. The microscopic nature of motes allows
them to be incorporated into poured concrete when building
bridges, for example. Passively powered motes might be
placed in the concrete and could contain sensors which
monitor the salt levels in the concrete (salt decreases the
structural integrity of concrete). Whenever necessary, a
vehicle could drive across the bridge, outputting a powerful
electromagnetic field which would cause the motes to power
on and transmit the data read from their sensors. Using this
information, structural engineers could take preventative
action to increase the bridge's lifespan or avoid a collapse.

The Smartdust concept represents a culmination of the many
wireless communications and sensor technologies mentioned
in earlier sections. While it is still a largely experimental
technology, mote devices could provide the kind of smart
sensing needed in future context-aware environments. Almost
any type of existing electronic sensor can be integrated into a
mote device and they are some of the smallest known wireless
sensor devices in existence, meaning that, like RFID and
ZigBee devices, they can be unobtrusively embedded in our
everyday environments. A home, workplace or even an entire
city could be laden with these devices yet they will function
out of sight of the human beings which they support. Such
technologies really give life to Weiser's vision of computing
systems acting as quiet, invisible servants within our
environments.

III. INTELLIGENT TECHNOLOGIES

The following is a discussion of intelligent software agents
and how their function can be tied to the advent of internet
tagging and folksonomies and their use within the semantic
web.

A. Intelligent Software Agents

A software agent is a self-contained program capable of
controlling its own decision making and acting, based on its
perception of its environment, in pursuit of one or more
objectives [3]. Such intelligence will be crucial in a context-
aware system. Computer applications must be capable of
determining and understanding the context of situations and
then acting on this context before technologies like smart
sensors can be of any real use to us in supporting our daily
activities. Humans have an implicit understanding of context, and it is often argued that people have capabilities which allow this, such as:

1) Ontology sharing - humans are able to share common languages and vocabularies
2) Sensing - humans are able to perceive their environments through sensory organs
3) Reasoning - humans are able to make sense out of what they have perceived based on what they already know

These capabilities in software will be essential to the practical
realization of context-aware computing applications [3].
Ontology sharing deals with shared languages and
vocabularies which foster a shared understanding of context
within certain situations or relating to certain subjects.
Ontological tagging has become a distinguishing feature of the
Web 2.0 era, allowing internet users to co-ordinate and share
information by assigning their own tags to different types of
information based on its content, medium and relationships to
other areas of interest for example. An application scarcely
exists today which does not possess some ability to interface
with the web. There are many applications, both commercial
and community driven, which sift through the abundant tag
clouds found on the semantic web and retrieve and present
relative information to a user based on user requests or
predefined user preferences. If intelligent software was able to
autonomously elicit these preferences or trends in user
activity, for example, then such software agents could prove
very useful in providing a new generation of services to the
human individual in a context-aware environment.

Consider a context-aware shopping center where users' mobile devices are scanned as they enter and information about what they are interested in could be inferred and collated by intelligent software, for the purpose of providing the customer with adverts for products available in the center's stores that the software has determined they may be interested
in. The software could do this by first gathering information
about user preferences from sensors then collating this
information, determining the context in which it should be
understood and perhaps classifying this information with some
sort of ontological tagging system. The software could
compare this information with tag clouds on the World Wide
Web and cross reference this with information from the
websites of retailers which it already knows have outlets at
that center. When it finds information on offers relevant to the
user's tastes, it could trigger some action such as sending a text
or multimedia message to that user's mobile phone with the
offer information embedded and present the ability for the user
to view further information (such as directions to the nearest
outlet offering the deal). This is a simple example as to how
such ontological information will improve the efficiency of
software agents in determining context and ultimately increase
the usefulness of the services which these agents provide.
With regard to the sensing requirement, the challenge facing
software developers in this field is creating robust and
efficient interfaces at the lowest levels of their software for
interfacing with the many sensors which will feature in a
context-aware environment.

IV. CONTEXT-AWARE APPLICATIONS

Some of the most promising possibilities for context-aware computing to make a difference in supporting people's lives arise in two key environments: context-awareness in the home and context-awareness in healthcare, both of which are discussed below.

A. Context-Aware Homes

Many scenarios exist describing how context-aware
computing might play a role in the home environment. Among
these are:

1) Context-aware lights, chairs and tables which adjust
as a family gathers in a room. These elements might
reconfigure depending on locations, number and
identities of the individuals entering the room as well
as the tasks which they are expected to perform (one
might read a book while another watches television
and so on).
2) Phones which only ring in rooms where the addressee
of a call is actually present, preventing other people
being disturbed by useless ringing.
3) Security systems which are aware of a home's
inhabitants and monitor activity around the
perimeters of the house (people entering a back yard,
driveway, opening a door or window and so on).
Such a system would provide call out alarms or take
preventative action in the case of a break-in, fire, or
an elderly resident who is at risk of serious injury.
4) Sensor technologies like motes could be used to
monitor the various environmental parameters such
as smoke levels, gas levels, room temperatures.
Intelligent software could accept all of this
information from the sensors and decide when action
needs to be taken. This action might include alerting
the home's inhabitants to a possible fire hazard,
sending out an alert to authorities when an
emergency has occurred or when unauthorized
individuals have been detected within the home [5].

With the increasing development and convergence of the
communication and sensor technologies and intelligent
software, the scope of these sorts of applications in the home
will be bound only by their developer's imaginations.

B. Context-Awareness in Healthcare

There is growing interest in integrating hospital beds, medicine containers and trays, and context-aware information and communication systems for hospital staff, including:
1) A context-aware hospital bed with integrated display
for the patient's entertainment or for displaying
information to a clinician depending on the situation.
Such a bed knows which patient is currently
occupying it and can interface with other
technologies, such as a context-aware medicine tray,
described below.
2) A context-aware medicine tray, used by a nurse to
make his/her rounds for administering medicines to
all the different patients in a ward. The tray could
interface with technologies like the bed described
above in order to become aware of the patient which
the nurse is currently treating. The tray's surface
could light up the correct medicine container for that
patient, helping nurses reduce the number of
incidents where the wrong medicines are accidentally
administered to a patient [6].

Location tracking systems such as RFID can be used to tag
important equipment in Accident and Emergency Units, such
as beds, operating instruments, life-support machines, crash
carts (defibrillators) and so on. When needed in an emergency,
the locations of these objects can be quickly looked up or
information on their location automatically presented to
hospital staff by intelligent agents. These context-aware
systems in hospitals have the potential to increase the speed
and efficiency with which patients are helped. Many
challenges face the development of context-aware computing
and ambient intelligence going forward. These include both
technological challenges and socio-ethical concerns which
must be addressed.

V. CONCLUSION

In order to create an intelligent environment in which the
underlying computer systems are mostly invisible to the user,
technologies from different areas of specialty must be able to
work together in a flexible and robust fashion. As well as
developing new standards for wireless communication and
networking, we must ensure that the various technologies that
are part of a context aware system will be capable of
complying with these standards and interfacing with each
other in a seamless fashion. Context awareness is poised to
fundamentally change the nature of how we interact with and
relate to information devices and the services they provide.
With computing devices having increased processing power, improved connectivity and innovative sensing capabilities, context-aware devices will anticipate your needs, advise you, and guide you through your day in a manner more akin to a personal assistant than a traditional computer. Context-
aware computing, via a combination of hard and soft sensors,
will open up new opportunities for developers to create the
next generation of products on Intel platforms.

REFERENCES

[1] M. Weiser, "The Computer for the 21st Century," Scientific American, 265(3), September 1991.
[2] D. Bucur, "On Context Awareness in Ubiquitous Computing," 2008.
[3] N. R. Jennings and M. Wooldridge, "Applications of Intelligent Agents," 1996.
[4] Warneke, Liebowitz and Pister, "Ubiquitous Computing: More than Computing Anytime Anyplace?", 2001.
[5] Meyer and Rakotonirainy, "A Survey of Research on Context-Aware Homes," 2003.
[6] Bricon-Souf and Newman, "Context awareness in health care: A review," 2007.
[7] Shadbolt, "Ambient Intelligence," 2003.




PERFORMANCE EVALUATION OF SIZE CONSTRAINT
CLUSTERING ALGORITHM
J. Jayadeep, D. John Aravindhar & M. Roberts Masillamani

School of Computer Science & Engineering, Hindustan Institute of Technology & Science, Padur, Chennai, Tamilnadu, India
jayadeep277@gmail.com, johnaravindhar@gmail.com & deancs@hindustanuniv.ac.in

ABSTRACT
Cluster analysis is a widely used technique that seeks to group data. The result of such an analysis is a set of groups or clusters where data in the same group are similar (homogeneous) and data in distinct groups are different (heterogeneous). Data clustering is an important and frequently used unsupervised learning method. Incorporating instance-level background information into traditional clustering algorithms can increase clustering performance. In this work we extend traditional clustering by introducing additional prior knowledge such as the size of each cluster. Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervisory information such as data labels. It can be broadly defined as the process of dividing a set of objects into clusters, each of which represents a meaningful sub-population. Cluster size constraints can lead to improved clustering accuracy.

Keywords: k-means, random k-means, entropy, accuracy, RapidMiner tool

1.0 INTRODUCTION
The goal of cluster analysis is to divide the data objects into groups so that objects within a group are similar to one another and different from objects in other groups. Traditionally, clustering is viewed as an unsupervised learning method which groups the data objects based only on the information presented in the dataset, without any external label information. Spectral clustering based on graph partitioning theory is one of the most effective data clustering tools. These methods represent the given data set as a weighted undirected graph: each data instance is represented as a node, and each edge is assigned a weight which describes the similarity between the two nodes it connects. Clustering is then accomplished by finding the best cuts of the graph that optimize certain predefined cost functions. Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervisory information such as data labels. It can be broadly defined as the process of dividing a set of objects into clusters, each of which represents a meaningful sub-population. The objects may be database records, nodes in a graph, words, images, or any collection in which individuals are described by a set of features or distinguishing relationships. Clustering algorithms identify coherent groups based on a combination of the assumed cluster structure (e.g., a Gaussian distribution) and the observed data distribution.

2.0 LITERATURE SURVEY

Semi-supervised clustering by seeding: Semi-supervised clustering uses a small amount of labeled data to aid and bias the clustering of unlabeled data. This work explores the use of labeled data to generate initial seed clusters, as well as the use of constraints generated from labeled data to guide the clustering process. It introduces two semi-supervised variants of K-means clustering that can be viewed as instances of the EM algorithm, where labeled data provide prior information about the conditional distributions of the hidden category labels. Experimental results demonstrate the advantages of these methods over standard random seeding and COP-KMeans, a previously developed semi-supervised clustering algorithm.
Constrained K-means clustering with background knowledge: Clustering is traditionally viewed as an unsupervised method for data analysis. However, in some cases information about the problem domain is available in addition to the data instances themselves. This work demonstrates how the popular k-means clustering algorithm can be profitably
modified to make use of this information. In experiments with artificial constraints on six data sets, improvements in clustering accuracy were observed. The method was also applied to the real-world problem of automatically detecting road lanes from GPS data, where dramatic increases in performance were observed.
Fuzzy clustering with a knowledge-based guidance: Fuzzy clustering has become a broadly accepted synonym of fundamental endeavors aimed at finding structure in multidimensional data. In essence, these methods operate in unsupervised mode: they act upon data while being directed by some predefined objective function (criterion) for which they "discover" a structure (clusters) that yields a minimal value of this criterion. This study discusses the issue of exploiting and effectively incorporating auxiliary, problem-dependent hints available as part of the domain knowledge associated with the pattern recognition problem at hand. As such hints are usually expressed by experts/data analysts at the level of clusters (information granules) rather than individual data (patterns), they are referred to as knowledge-based indicators, and a set of them as a knowledge-based guidance available to fuzzy clustering. The proposed paradigm shift, in which fuzzy clustering incorporates this type of knowledge-based supervision, is discussed and contrasted with the "pure" (that is, data-driven) version of fuzzy clustering. Several fundamental categories of guidance mechanisms are introduced and discussed, namely partial supervision, proximity-based guidance and uncertainty-driven knowledge hints. Details on how the guidance machinery translates into updates of the partition matrices are presented, along with a number of practical scenarios in which the role of knowledge hints becomes evident and highly justifiable; these concern Web exploration, exploitation of labeled patterns, issues of incomplete feature spaces, and constraints on the typicality of patterns, to name a few representative applications.
Fuzzy clustering with viewpoints: This study introduces a knowledge-guided scheme of fuzzy clustering in which domain knowledge is represented in the form of so-called viewpoints. Viewpoints capture the way in which the user introduces his/her point of view on the data by identifying some representatives which, being treated as externally introduced prototypes, have to be included in the clustering process. More formally, the viewpoints (views) augment the original, data-based objective function by including a term that expresses distances between the data and the viewpoints. Depending upon the nature of the domain knowledge, the viewpoints are represented either in a plain numeric format (when there is a high level of specificity with regard to how one establishes the perspective from which the data need to be analyzed) or through information granules (which reflect a more relaxed way in which the views on the data are expressed). Detailed optimization schemes are presented, and the performance of the method is illustrated through numeric examples. The study also elaborates on the way in which clustering with viewpoints enhances fuzzy models and mechanisms of decision making, in the sense that the resulting constructs reflect the preferences and requirements present in the modeling environment.
Clustering with partial supervision: This work addresses the problem of fuzzy clustering with partial supervision, i.e., unsupervised learning carried out in the presence of some labeled patterns. The classification information is incorporated additively as part of the objective function utilized in the standard FUZZY ISODATA. The algorithms proposed embrace two specific learning scenarios of complete and incomplete class assignment of the labeled patterns. Numerical examples including both synthetic and real-world data arising in the realm of software engineering are also provided.
Clustering with instance-level constraints: Clustering algorithms conduct a search through the space of possible organizations of a data set. This work proposes two types of instance-level clustering constraints, must-link and cannot-link constraints, and shows how they can be incorporated into a clustering algorithm to aid that search. For three of the four data sets tested, the results indicate that the incorporation of surprisingly few such constraints can increase clustering accuracy while decreasing runtime. The relative effects of each type of constraint are also investigated, and the type that contributes most to accuracy improvements is found to depend on the behavior of the clustering algorithm without constraints.

3.0 EXISTING SYSTEM
A good clustering method will produce high quality clusters with high intra-class similarity and low inter-class similarity. The quality of a clustering result depends on both the similarity measure used by the method and its implementation. Clustering is viewed as an unsupervised learning method which groups data objects based only on the information presented in the dataset without any external label information.
K-means is one of the simplest and most famous clustering algorithms. It defines a centroid for each cluster, which provides the mean of the group of objects in that cluster.
The k-means algorithm is sensitive to outliers, since an object with extremely large values may substantially distort the distribution of the data.
4.0 PROPOSED SYSTEM
Before conducting clustering we obtain some background information, such as a size constraint, and take it as an input; doing so improves the clustering process.
Another line of work focuses on balancing constraints (random k-means), i.e., clusters are of approximately the same size or importance, which several applications demand.
A balancing constraint is also helpful in generating more meaningful initial clusters and in avoiding outliers.
4.1 ALGORITHMS USED
K-means clustering algorithm
Random k-means clustering algorithm

4.1.1 K-MEANS ALGORITHM

STEP 1: Input the value of k manually.
STEP 2: Partition the objects into k non-empty subsets; these give the initial cluster centers.
STEP 3: Use a distance measure to assign the remaining data points to their cluster centers.
STEP 4: Use the instances (data points) in each cluster to calculate the new mean value for each cluster.
STEP 5: If the new mean values are identical to the mean values of the previous iteration, the process terminates; otherwise, taking the new mean values as cluster centers, the process is repeated (see the sketch below).
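A minimal NumPy sketch of Steps 1-5 is given below; the Euclidean distance and the convergence test on unchanged means follow directly from the steps, while the random choice of initial centers and the variable names are illustrative assumptions.

```python
import numpy as np

def k_means(X, k, max_iter=100):
    """Basic k-means following Steps 1-5: pick initial centers from the data,
    assign every point to its nearest center, recompute the means, and repeat
    until the means no longer change."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k, replace=False)]      # initial centers
    for _ in range(max_iter):
        # Step 3: assign every point to the closest center (Euclidean distance).
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Step 4: recompute the mean of each cluster (keep old center if empty).
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        # Step 5: stop when the means are (numerically) identical.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```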

4.1.2 RANDOM K-MEANS ALGORITHM

STEP 1: Choose the k value randomly from the database.
STEP 2: Partition the objects into k non-empty subsets; these give the initial cluster centers.
STEP 3: Use a distance measure to assign the remaining data points to their cluster centers.
STEP 4: Use the instances (data points) in each cluster to calculate the new mean value for each cluster.
STEP 5: If the new mean values are identical to the mean values of the previous iteration, the process terminates; otherwise, taking the new mean values as cluster centers and the cluster sizes into account, the process is repeated.

5.0 ANALYSIS DONE FOR THE RANDOM K-MEANS ALGORITHM

Problem formulation and notation
Analyzing the dataset for clustering
Size-constrained clustering
Performance measure

5.1 Problem formulation and notation
Given a data set of n objects, let A = (A1, A2, . . . , Ap) be a known partition with p clusters, and NumA = (na1, na2, . . . , nap) be the number of objects in each cluster of A. We look for another partition B = (B1, B2, . . . , Bp) which maximizes the agreement between A and B, where NumB = (nb1, nb2, . . . , nbp) represents the size constraints, i.e., the number of objects in each cluster of B. A and B can be represented as n × p partition matrices: each row of the matrix represents an object, each column a cluster, and aij = 1 or bij = 1 when object i belongs to cluster j in partition A or B, respectively.
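For illustration, the following small sketch builds the n × p partition matrix defined above and computes the agreement between two partitions A and B; the helper names are hypothetical, and measuring agreement through a one-to-one matching of cluster labels is only one reasonable reading of the objective stated here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def partition_matrix(labels, p):
    """n x p indicator matrix: entry (i, j) is 1 when object i is in cluster j."""
    labels = np.asarray(labels)
    M = np.zeros((len(labels), p), dtype=int)
    M[np.arange(len(labels)), labels] = 1
    return M

def agreement(A, B):
    """Number of objects that partitions A and B place in matched clusters,
    maximised over a one-to-one matching of the p cluster labels."""
    overlap = A.T @ B                                   # p x p contingency table
    rows, cols = linear_sum_assignment(-overlap)        # best label matching
    return int(overlap[rows, cols].sum())
```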

5.2 Analyzing the dataset for clustering
a. Data collection: gathering the input data we intend to analyze.
b. Data scrubbing: removing missing records and filling in missing values where appropriate.
c. Pre-testing: determining which variables might be important for inclusion during the analysis stage.
In this module the input data used for clustering the complex data sets is analyzed. Removing the missing records is also important, since otherwise size-constrained clustering cannot be carried out efficiently. We should gain some knowledge from the data set before clustering, and these knowledge details are stored separately in the database. During the analysis phase (sometimes also called the training phase), it is customary to set aside some of the input data so that it can be used to cross-validate and test the model, respectively. This is an important step taken in order to avoid "over-fitting" the model to the original data set.













Fig. 2: Analysis of the dataset for clustering



5.3 Size-constrained clustering
The k-means algorithm tends to produce clusters of approximately the same size, but this is only true if the data density is uniform. As soon as the data density varies, a single prototype may very well cover a high-density region and thereby gain many more data objects than the other clusters, which leads to large differences in the size of the clusters. The original size-constrained clustering problem becomes an optimization problem. Here, we use a heuristic algorithm to find the solution efficiently: apply an efficient and effective traditional or instance-level constrained clustering algorithm to the data, create size constraints based on prior knowledge of the data, and then transform the size-constrained clustering into a binary integer linear programming problem. In this module the size-constrained clustering of complex data is achieved and the elements are grouped according to the required sizes (a minimal assignment sketch is given after Fig. 3).











Fig. 3: Size-constrained clustering
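As mentioned above, a minimal sketch of the size-constrained assignment step is given below. Instead of a full binary integer linear programming solver, it replicates each cluster into as many "slots" as its required size and solves the resulting linear sum assignment problem, which yields an optimal assignment for fixed centroids; this is an illustrative alternative, not the heuristic algorithm of this paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def size_constrained_assign(dist, sizes):
    """Assign n objects to clusters with fixed sizes at minimum total distance.

    dist  : (n, k) array, dist[i, j] = distance of object i to centroid j
    sizes : length-k sequence of required cluster sizes, with sum(sizes) == n
    """
    n, k = dist.shape
    # One "slot" per object: repeat each cluster column as many times as its size,
    # then solve the square assignment problem object -> slot.
    slot_cluster = np.repeat(np.arange(k), sizes)       # slot index -> cluster index
    cost = dist[:, slot_cluster]                        # (n, n) cost matrix
    rows, slots = linear_sum_assignment(cost)
    labels = np.empty(n, dtype=int)
    labels[rows] = slot_cluster[slots]
    return labels
```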


5.4 Performance measure
To measure the clustering performance, we consider four measures: accuracy, adjusted Rand index (ARI), normalized mutual information (NMI), and entropy.
Accuracy discovers the relationship between each cluster and the ground-truth class; it sums up the total matching degree between all pairs of clusters and classes.
Entropy measures how the classes are distributed over the various clusters; in general, the smaller the entropy value, the better the clustering quality.
For evaluating entropy and accuracy we use the RapidMiner tool (a small sketch of both measures is given below).
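For reference, a small sketch of the accuracy and entropy measures as described above is given here, assuming integer-coded ground-truth class labels and predicted cluster labels; it mirrors the usual definitions and is independent of RapidMiner's own implementation.

```python
import numpy as np

def cluster_entropy(labels_true, labels_pred):
    """Average class entropy over clusters (smaller is better)."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    total = len(labels_true)
    H = 0.0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]        # classes inside cluster c
        p = np.bincount(members) / len(members)        # class distribution
        p = p[p > 0]
        H += (len(members) / total) * -(p * np.log2(p)).sum()
    return H

def cluster_accuracy(labels_true, labels_pred):
    """Match each cluster to its majority class and count correctly placed objects."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    correct = 0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]
        correct += np.bincount(members).max()
    return correct / len(labels_true)
```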


6.0 SYSTEM ARCHITECTURE

Fig. 4: System architecture

7.0 PERFORMANCE GRAPH

Fig. 5: Performance evaluation of the k-means and random k-means algorithms

































In the above architecture diagram, the data from the database undergo preprocessing (cleaning, integration, transformation, etc.); the preprocessed data are then clustered using the k-means and random k-means algorithms, and the performance is evaluated.
The performance of both algorithms is evaluated using the RapidMiner tool; this tool takes the database as input and, using a decision tree and Weight by Relief, displays the resulting graph.
8.0 CONCLUSION
After analyzing the results of testing the clustering algorithms and running them under
different factors and situations, the following conclusions are obtained:
In each iteration, clusters of various sizes are formed.
By finding the entropy and accuracy of the various clusters, we can conclude that random k-means clustering gives better performance than k-means; the accuracy of random k-means clustering is good for large data sets.














REFERENCES:
[1] A. Asuncion and D. J. Newman, UCI Machine Learning Repository, University of California, School of Information and Computer Science, Irvine, CA, 2007.

[2] A. Banerjee and J. Ghosh, "On scaling up balanced clustering algorithms," in: Proceedings of SIAM Data Mining, 2002, pp. 333-349.

[3] S. Basu, A. Banerjee and R. J. Mooney, "Semi-supervised clustering by seeding," in: Proceedings of ICML, 2002, pp. 27-34.

[4] T. D. Bie, M. Momma and N. Cristianini, "Efficiently learning the metric using side information," in: Proceedings of ALT, 2003, pp. 175-189.

[5] M. Bilenko, S. Basu and R. J. Mooney, "Integrating constraints and metric learning in semi-supervised clustering," in: Proceedings of ICML, 2004, pp. 81-88.

[6] C. Studholme, D. Hill and D. J. Hawkes, "An overlap invariant entropy measure of 3D medical image alignment," Pattern Recognition, 32(1), 1999, pp. 71-86.

[7] P. Tan, M. Steinbach and V. Kumar, Introduction to Data Mining, Addison Wesley, 2005.

[8] K. Wagstaff and C. Cardie, "Clustering with instance-level constraints," in: Proceedings of ICML, 2000, pp. 1103-1110.

[9] K. Wagstaff, C. Cardie, S. Rogers and S. Schroedl, "Constrained K-means clustering with background knowledge," in: Proceedings of ICML, 2001, pp. 577-584.










































SIMULATION OF MEMS BASED SENSORS FOR
BIOMOLECULAR RECOGNITION

P. Sangeetha¹, A. Vimala Juliet²

¹Research Scholar, Sathyabama University, Chennai, Tamilnadu, India.
²SRM University, Chennai, Tamilnadu, India.
vikas_selvan@yahoo.co.in


Abstract- This paper aims at simulating MEMS-based sensors used in the fields of biochemical sciences, immunology and molecular biology. Recent advances in nanotechnology promise considerable and realistic potential for the development of innovative and high-performance sensing and diagnostic approaches in the biomedical field. In particular, the microcantilever detection paradigm, based on direct transduction of molecular-binding-induced surface stress into a nanomechanical motion of microcantilevers, has attracted considerable attention for label-free detection of biomolecules. As an alternative to the currently deployed optical, piezoresistive, and capacitance nanomechanical detection techniques, we introduce a new electronic transduction paradigm comprising two-dimensional microcantilever arrays with geometrically configured metal-oxide-semiconductor field-effect transistors (MOSFETs) embedded in the high-stress region of the microcantilevers. We have shown that the deflection of the microcantilever induced by specific ligand-analyte binding events leads to a precise, measurable and reproducible change in the drain current of the MOSFET buried in the microcantilevers. The high current sensitivity of the MOSFET-embedded platform enables detecting nanoscale cantilever deflection from specific biomolecular binding events at very low concentrations of analytes, with sensitivity in the parts-per-trillion (ppt) range.

I. INTRODUCTION
The diagnostic principle of nanomechanical deflection of the microcantilever due to adsorption of the antigen on its upper surface is employed for the diagnosis. The deflection of the microcantilever would be measured in terms of piezoresistive changes by implanting boron at the anchor point. Such a bio-microelectromechanical system (BioMEMS) based microdiagnostic kit is highly specific, as complementary biochemical interactions take place between antigens and the antibodies against them immobilized on the upper surface of the microcantilever. The paper discusses the various aspects of the development and production of microcantilever-based sensors and proposes a new microcantilever design with a rectangular hole at the fixed end of the cantilever that is more sensitive than conventional ones.
Biosensors are electronic devices that convert biomolecular
interactions into a measurable signal. The purpose of biosensor
is to detect and analyze the unknown biological elements
present in a medium. Biosensors have two main elements, a
bioreceptor and a transducer. Bioreceptors are target specific
and known biomolecules that combine with the target analyte
molecules, and generate a unique signal during the reaction.
For sensing purpose one surface of the biosensor is
functionalized by depositing a sensing layer of known
bioreceptor molecules onto it. This biosensitive layer either
contains the bioreceptors or the bioreceptors are covalently
bonded to it. The most common types of bioreceptors used in
biosensing are based on proteins, antibody/antigen or nucleic
acid interactions. The transducer element of the biosensor
converts the biomolecular reactions between the target and
bioreceptor molecules into a measurable signal. The signals
can be measured using appropriate detection techniques like
electrochemical, optical or mechanical.

FEATURES OF CANTILEVER BASED BIOSENSORS
In biosensing applications sample preparation and molecular
labelling of the target analyte is a basic requirement. Labelling
aids in easy detection and monitoring of the biomolecules and
bioreactions progress. Radioactive and fluorescent dye based
labelling agents are commonly used in biosensors. Labelling is
however an expensive and time consuming process. Therefore,
label-free detection technique is critical in developing rapid,
economic and user-friendly biosensors and bioanalytical kits.
Cantilever array biosensors use optical detection technique to
measure the surface-stress induced deflections in a
microcantilever. When the target molecules attach to their
functionalized surface, the surface stress distribution on the
surface is changed causing deflections in the cantilever
(Figure 1). During adsorption of target molecules onto the
functionalized cantilever surface, biochemical reactions occur
which reduces the free energy of the cantilever surface. The
reduction in free energy of one side of cantilever is balanced by
increase in strain energy of the other side, producing deflection
in the cantilever. The deflections may be upward or downward
depending on the type of molecules involved and are linearly
proportional to the target analyte solution concentration. It
means that higher deflections manifest higher sensitivity in the
cantilever biosensor. Since the induced surface stress strongly
depends on the molecular species and its concentration, by
measuring the cantilever deflection the attaching species as
well as its concentration can be determined.




Figure 1. Working principle of a microcantilever biosensor. Functionalization
of the biosensor by depositing bioreceptors (top). Surface stress induces
deflection (bottom). Symbols and Y represent target analyte and bioreceptor
molecules.


Each biosensor has two primary components: bio-recognition
element and transducer. The biorecognition element, such as
antibody and phage, is highly specific to the target species.
The reaction between the target species and the bio-recognition
unit would result in some changes in the physical/chemical
properties of the recognition unit. These changes are measured
using a transducer. Different types of transducers have been
developed and extensively investigated in recent years. One
important type of the transducer is the acoustic wave (AW)
device, which is an acoustic resonator and works as a mass
sensor. That is, the reaction between the bio-recognition
component and the target species results in a change in the
mass load of the transducer/resonator, which shifts the
resonance frequency. Thus, by monitoring the resonance
frequency of the AW device, the reaction between the
biorecognition unit and the target species, such as captured
bacterium cells by antibody/phage, can be determined. An AW
device as a transducer used in biosensors is characterized using
two critical parameters: mass sensitivity (Sm) and quality merit
factor (or Q value). The mass sensitivity is defined as the shift
in resonance frequency due to the attachment of a unit mass,
while the Q value reflects the mechanical loss of the devices
and characterizes the sharpness of the resonance peak in the
amplitude/phase versus frequency plot. A higher (Sm) means a
more sensitive device, while a higher Q value represents a
capability to determine a smaller change in resonance
frequency (i.e. a higher resolution in determining resonance
frequency). Therefore, it is highly desirable for an AW device
to have a higher Sm and a larger Q value. Among all AW
devices, micro/nano-cantilever exhibits extremely high
sensitivity primarily due to its small mass. For example, the
detection of a mass as small as 10-18 g using cantilever has
been demonstrated. Therefore, a great deal of effort has been
spent on the development of micro/nano-cantilever based
biosensors.

Different types of cantilevers made of different materials have
been developed as transducers used in biosensors. In terms of
actuating and sensing technologies, all the cantilevers can be
classified into two types: passive and active. The passive
cantilevers, such as silicon-based cantilevers, require a
separated system to actuate the device and usually use a
separated optic system to measure/monitor the vibration of the
device. On the other hand, the active cantilevers, such as
piezoelectric-based cantilevers, can be easily actuated by
simply applying a driving field, such as an electric field in the
piezoelectric case, and the vibration behavior of the active
cantilever can be easily sensed/monitored, such as by
measuring impedance in the piezoelectric case. Due to the
easiness and availability of the micro/nano-fabrication
technology, silicon-based cantilevers are much more widely
investigated than others. Additionally, silicon-based cantilevers
exhibit a higher Q value than piezoelectric-based cantilevers.
SIMULATION AND IMPLEMENTATION OF MICROCANTILEVER SENSOR
The transducer/resonator is designed in HFSS (Ansoft) version 9.2.
RESULTS


Schematic illustration of an embedded MOSFET providing the drain-current change as the readout signal for the bending-induced strain caused by specific molecular binding.
OUTPUT GRAPHS OF SENSORS/ RESONATORS


Plot of permittivity vs. length of the sensor/resonator



Plot of dielectric constant vs. frequency and quality factor of the sensor/resonator

Plot of frequency (GHz) vs. gain (dB)

SENSITIVITY OF CANTILEVER BIOSENSOR

With the ability of label-free detection and scalability to allow
massive parallelization already realized by microcantilever
biosensors, the next challenge in cantilever biosensor
development lies is achieving the sensitivity in detection range
applicable to in vivo analysis. The sensitivity of a cantilever
biosensor strongly depends on it ability to convert biochemical
interaction into micromechanical motion of the cantilever. The
deflections of a cantilever biosensor are usually of the order of
few tens to few hundreds of a nanometer. Such extremely low
deflections necessitate use of advanced instruments for
accurately measuring the deflections. As a consequence, most
of the applications of cantilever biosensors are done in
laboratories equipped with sophisticated deflection detection
and readout techniques.
The detection of analytes in such a large dynamic range requires
an extremely sensitive cantilever. This paper proposes and
analyses a new high sensitive cantilever design that can assay
analytes in extremely low concentrations. This paper proposes
a new microcantilever design with a rectangular hole at the
fixed end of the cantilever that is more sensitive than
conventional ones.










Figure 2. Geometric models of the conventional (upper) and the proposed (lower) microcantilever designs. Their material properties and thickness are identical.
The fundamental resonance frequency of a rectangular cantilever beam is given as

f0 = 0.162 (t / L^2) sqrt(E / ρ),   (Equation 1)

where E is the Young's modulus, t the thickness, L the length and ρ is the mass density of the cantilever material. This
equation states that the resonant frequency of a rectangular
beam is directly proportional to its thickness, and inversely
proportional to its length. Therefore, the resonant frequency
can be increased by either increasing the thickness and/or
decreasing the length. A simplified form of the above equation is given as

f0 = (1 / 2π) sqrt(k / m),   (Equation 2)
where k is the spring constant of the cantilever and m is its
mass. The deflection can be increased by increasing the length
and/or decreasing the thickness. In this paper, the improved
deflection shown by the proposed design is possible mainly
because of the reduction in the spring constant of the
cantilever. By reducing its cross-sectional area towards the fixed end, we reduce its flexural stiffness in bending, which manifests as a higher deflection. However, Equation 2 suggests that any reduction in the spring constant will also decrease the resonant frequency. Therefore, though the proposed design is more sensitive than the conventional one, its resonant frequency is lower. By substituting the material and geometric properties of the two designs into Equation 2, it can be shown that f0,proposed = 0.47 f0,conventional.
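The trade-off can be made concrete with the simplified relation of Equation 2, f0 = (1/2π) sqrt(k/m): lowering the spring constant raises the deflection but lowers the resonance frequency. The sketch below uses assumed spring-constant and mass values chosen only to reproduce the quoted 0.47 frequency ratio.

# Illustrative sketch of Equation 2: f0 = (1 / (2*pi)) * sqrt(k / m).
# Removing material near the fixed end lowers the spring constant k, which
# increases deflection sensitivity but lowers the resonance frequency.
# The numerical values are assumed for illustration only.
import math

def resonance_frequency(k, m):
    """Fundamental resonance frequency (Hz) of a spring-mass approximation."""
    return math.sqrt(k / m) / (2.0 * math.pi)

if __name__ == "__main__":
    m = 1.0e-12                    # kg, assumed effective cantilever mass
    k_conventional = 0.05          # N/m, assumed spring constant of the uniform beam
    k_proposed = 0.05 * 0.47 ** 2  # N/m, chosen so that f0 drops to 0.47 of the original
    ratio = resonance_frequency(k_proposed, m) / resonance_frequency(k_conventional, m)
    print(f"f0,proposed / f0,conventional = {ratio:.2f}")   # ~0.47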





Figure 3. A comparison between old (left) and new (right) micro cantilever
array biosensors.

The resonant frequency depends on both the material and
geometric properties of the cantilever. If we are using silicon
cantilevers the reduction in resonant frequency is practically
not significant because silicon has excellent mechanical and
thermal properties. Due to its high elastic modulus, silicon
cantilevers will not be much affected by the external sources of
excitation. Polymer cantilevers in contrast can be significantly
affected by the reduction in resonant frequency owing to their
low elastic modulus. Therefore the resonant frequency of
polymer cantilevers should be increased, which can be
achieved by increasing their thickness. The proposed design
may not be suitable for polymer cantilevers. Hence, for
increasing the sensitivity of polymer cantilevers, instead of
changing their shape, changing their size is a better option. Thus, based on the above discussions on the bimetallic effects, the large-deflection behavior and the interpretations of Equations 1 and 2, we can safely conclude that increasing the cantilever thickness is a better way to increase the sensitivity of polymer microcantilevers used in biosensing applications. For in vivo
detection we need a sensitive biosensor that can assay analytes
in large concentration range simultaneously. For such
biomedical applications a new array design is proposed
(Figure 3). The figure shows a comparison between the conventional and the proposed eight-cantilever array designs. The conventional array design uses eight cantilevers of uniform cross-section, while the proposed array design uses a combination of the old and new cantilever designs. Since the proposed cantilevers are nearly twice as sensitive as the conventional ones, they can be used effectively in assaying target
analytes whose solution concentration is comparatively lower.
In both the array designs one cantilever type in each can be
assigned as a reference for differential deflection readout,
which is a popular means of eliminating noise in deflection
signals. The reference cantilever is made passive by depositing
buffer materials onto it, and hence it does not participate in the
reaction. Thus, we may conclude that by using an array
combination of conventional and proposed cantilevers on the
same biochip, a highly sensitive biosensor can be designed. Such a sensor can simultaneously detect analytes over an extremely large dynamic concentration range.















MICROCANTILEVER BASED DIAGNOSIS OF TUBERCULOSIS
The principle of the microcantilever based diagnostic kit for
tuberculosis is similar to that of the diving board as the increase
in the adsorbed mass of antigen 85 complex causes the bending
of the microcantilevers. But in addition to that, the specificity
is provided by the immobilization of antibodies specific for
antigen 85 complex on the upper surface of the
microcantilever. When the biomolecular recognition takes
place between them, the adsorbed mass of antigen 85 complex
causes the change in stress on the surface of the
microcantilever. The difference in stress at the top and the
bottom of the microcantilever beam causes the elongation of
the upper surface of the microcantilever and the shortening of
its lower surface thereby causing the nanomechanical bending
of the microcantilever. The deflection of the cantilever can be
detected by optical, capacitive, interferometric or piezoresistive
method. The optical method employing low power laser and
position sensitive photodetector is the most effective method
for the detection of microcantilever deflection. But due to its
requirement for costly and highly sophisticated instruments and
very precise mechanical alignment, it is not suitable for routine
low cost disease diagnosis. The capacitive method does not
work in electrolyte solutions due to the generation of faradic
currents between the capacitive plates and is therefore limited
in its sensing applications. The interferometric method works
well for small displacements but is less sensitive in liquids.
Piezoresistive method is ideal for the development of low cost
disease diagnostic kits. It can be used in electrolyte solutions as
it avoids the faradic currents. The piezoresistive substance like
boron is implanted at the anchor point of the microcantilever
where there is maximum strain due to the adsorption of the
analyzed molecules. The bending of the microcantilever due to
the adsorption of analyte molecules causes the change in
resistance of the piezoresistive substance which can be
measured by the Wheatstone-bridge arrangement as shown
in figure 4.
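The piezoresistive readout can be illustrated with a quarter-bridge Wheatstone arrangement in which the piezoresistor at the cantilever anchor forms one arm; the component values in the sketch below are assumed, not taken from the paper.

# Illustrative sketch of a quarter-bridge Wheatstone readout: one arm is the
# piezoresistor at the cantilever anchor (R + delta_r), the other three arms are
# reference resistors of nominal value R. For small delta_r the output is
# approximately Vexc * delta_r / (4 * R). All values are assumed.
def bridge_output(v_exc, r_nominal, delta_r):
    """Exact output voltage of a quarter bridge with one arm at R + delta_r."""
    return v_exc * ((r_nominal + delta_r) / (2.0 * r_nominal + delta_r) - 0.5)

if __name__ == "__main__":
    v_exc = 3.3     # V, assumed bridge excitation voltage
    r = 1000.0      # ohm, assumed nominal piezoresistor value
    delta_r = 0.5   # ohm, assumed bending-induced resistance change
    exact = bridge_output(v_exc, r, delta_r)
    approx = v_exc * delta_r / (4.0 * r)
    print(f"bridge output = {exact * 1e3:.3f} mV (approx {approx * 1e3:.3f} mV)")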




Figure 4. Microcantilever Based Micro diagnostic Kit for Tuberculosis

The sensitivity of the device is directly proportional to the length by
thickness ratio (L/t) of the microcantilever. Therefore, the longer and
thinner the microcantilever is, the greater is the sensitivity of the
device.
CONCLUSION
Microcantilever array biosensors are becoming increasingly
popular in label-free, realtime and simultaneous detection and
monitoring of various chemical and biochemical target
analytes. The deflections in microcantilever biosensors lie
between a few tens and a few hundreds of nanometers, which
necessitate sophisticated and expensive readout techniques.
The ultimate goal of the microcantilever biosensor design and
development is to make them sensitive enough to be used in
medical applications where accurate, realtime and simultaneous
analysis of various clinically important analytes is required.
REFERENCES
1. Arntz, Y.; Seelig, J.D.; Lang, H.P.; Zhang, J.; Hunziker, P.;
Ramseyer, J.P.; Meyer, E.; Hegner, M.; Gerber, C. Label-free
protein assay based on a nanomechanical cantilever array.
Nanotechnol. 2003.

2. Nelson, B.P.; Grimsrud, T.E.; Liles, M.R.; Goodman, R.M.; Corn,
R.M. Surface Plasmon resonance imaging measurements of DNA
microarrays.

3. Nordstrom, M.; Keller, S.; Lillemose, M.; Johansson, A.; Dohn, S.;
Haefliger, D.; Blagoi, G;Havsteen-Jakobsen, M.; Boisen, A. SU-8
cantilevers for bio/chemical sensing; fabrication.characterisation
and development of novel read-out methods. Sensors 2008.

4. He, F. J.; Geng, Q.; Zhu, W.; Nie, L. H.; Yao, S. Z.; Meifeng, C.
Rapid detection for E. coli using a separated electrode piezoelectric
crystal sensor. Anal. Chim. Acta 1994.

5. .Dhayal, B.; Henne, W. A.; Doorneweerd, D. D.; Reifenberger, R.
G.; Low, P. S. Detection of Bacillus subtilis spores using peptide-
functionalized cantilever arrays. Journal of the American Chemical
Society 2006.

6. Lavrik, N. V.; Sepaniak, M. J.; Datskos, P. G. Cantilever
transducers as a platform for chemical and biological sensors.
Review of Scientific Instruments 2004.

7. Raiteri, R., Grattarola, M., Butt, H.J., Skladal, P., Sensors and
Actuators B 79:115-26 (2001)


MULTIVARIABLE NEURAL NETWORK SYSTEM IDENTIFICATION

H.M. RESHMA BEGUM (reshmabegum24@gmail.com)
G. SARAVANAKUMAR (saravana.control@gmail.com)
DEPARTMENT OF CONTROL AND INSTRUMENTATION ENGINEERING
KALASALINGAM UNIVERSITY, TAMILNADU, INDIA



Abstract - Most industrial processes are multivariable in nature. A greenhouse system, an important application in the agricultural process, is considered here. The purpose of a greenhouse is to improve the environmental conditions in which plants are grown. In this paper we propose identification of the greenhouse system using input and output data sets to estimate the best model and validate it. For MIMO systems, neural network system identification provides a better alternative for finding the system transfer function. The results were analyzed and the model obtained. From this model, the system is controlled by a conventional method. In this way we can identify the model and control complicated systems like the greenhouse.

Keywords - Greenhouse, neural network system identification, conventional controller.

1. INTRODUCTION
The main purpose of a greenhouse is to improve
the environmental conditions in which plants are grown. In
greenhouses provided with the appropriate equipment these
conditions can be further improved by means of climate
control. Modern greenhouse and computerized climate
control modules have become inseparable nowadays.
Computerized climate control is an intrinsic part of present
day modern greenhouse [2]. The functions of the
computerized climate control can be summarized as follows:
(a) It takes care of maintaining a protected environment
despite fluctuations of external climate. (b) It acts as a
program memory, which can be operated by the growers as
a tool to control their crops.
The main advantages of using computerized climate control
are as follows,
(1)Energy conservation
(2)Better productivity of plants
(3)Reduced human intervention
The main environmental factors affecting the greenhouse
climate control are as follows,
Temperature
Relative Humidity of the inside air
Vapor pressure Deficit
Transpiration
Sunlight
CO2 Generation
Wind speed
Lighting
Actuators responsible for the climate variations are,
Heating System
Cooling System
Mechanical fan
Fog cooling
Lighting system.

2. MATERIALS AND METHODS
Fig. 1 depicts the block diagram of the controller
embedded in the system model [7]. As can be seen, the
Controller is operated in five interrelated stages.
1- Set points: This block shows the set points of the greenhouse climate at which the plant can grow properly.
2- The input variables of the greenhouse model: In this stage the variables that influence the greenhouse climate are represented, such as inside temperature, inside air humidity, outside temperature, outside air humidity and radiation.
3- The greenhouse model: This converts the output of the actuators and parameters like the outside temperature, air humidity and radiation into the actual temperature and air humidity of the greenhouse.
4- The actuator model: These blocks simulate the performance of the actuators; they receive the controller outputs as actuator settings and then apply their effects to the greenhouse.
5- The control stage: In this stage the set points are compared with the measured parameters; following the comparison, a dynamic decision is made regarding the state of the actuators.


Fig 1. Block Diagram of controller in system
3.GREEN HOUSE CLIMATE MODEL
The greenhouse climate model describes the dynamic behavior of the state variables by means of differential equations for the air temperature, humidity, CO2 concentration, etc. These differential equations result from the combination of the various physical processes involving heat and mass transfer taking place in the greenhouse and from the greenhouse to the outside air (Fig. 2.a).



Fig 2a: Scheme of greenhouse climate model

The number of important variables (references, perturbations and commands) and the complexity of the phenomena (biological, weather, evolution of the plants) make the system multivariable, nonlinear and non-stationary. Moreover, the perturbations, such as the wind velocity and the global radiation, can sometimes have a stronger effect than a command such as the heating. For these reasons, a multimodel approach based on fuzzy logic that takes into account all the available variables is preferred (Fig. 2.b).



Fig.2.b input/output diagram of greenhouse system

where, Ti and Hi are respectively temperature and
relative humidity of the internal air, the perturbation
variables are Te (external temperature), He (external
humidity), Rg (solar radiation), Vr (wind velocity) and the
input variables are Ch (heating), Br (moistening) and Ov
(roofing).
4. NEURAL NETWORK SYSTEM IDENTIFICATION
There are three main types of ANN structures -single
layer feed forward network, multi-layer feed forward
network and recurrent networks. The most common type of
single layer feed forward network is the perceptron. Other
types of single layer networks are based on the perceptron
model. Here Multilayer feed forward network is used for the
system identification. Back propagation is the generalization
of the Widrow-Hoff learning rule to multiple-layer networks
and nonlinear differentiable transfer functions. Input vectors
and the corresponding target vectors are used to train a
network until it can approximate a function, associate input
vectors with specific output vectors, or classify input vectors
in an appropriate way as defined by the user. Networks with
biases, a sigmoid layer, and a linear output layer are capable
of approximating any function with a finite number of
discontinuities.

There are generally four steps in the training process,

1. Assemble the training data.
2. Create the network object.
3. Train the network.
4. Simulate the network response to new inputs.
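The four steps can be illustrated with a minimal feed-forward network trained by gradient-descent backpropagation. The sketch below is written in Python/NumPy purely as an illustration; the work itself uses the MATLAB Neural Network Tool, and the data shapes and toy targets here are assumed.

# Minimal sketch of the four training steps for a multilayer feed-forward
# network with a sigmoid hidden layer and a linear output layer, trained by
# backpropagation. Data shapes (5 inputs, 2 outputs) and targets are assumed.
import numpy as np

rng = np.random.default_rng(0)

# 1. Assemble the training data.
X = rng.uniform(0.0, 1.0, size=(200, 5))
Y = np.column_stack([X[:, :3].sum(axis=1), X[:, 3:].mean(axis=1)])  # toy targets

# 2. Create the network object (one hidden layer of 10 sigmoid units).
W1, b1 = rng.normal(scale=0.5, size=(5, 10)), np.zeros(10)
W2, b2 = rng.normal(scale=0.5, size=(10, 2)), np.zeros(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# 3. Train the network (gradient descent on the mean-squared error).
lr = 0.1
for epoch in range(2000):
    H = sigmoid(X @ W1 + b1)            # hidden-layer activations
    err = (H @ W2 + b2) - Y             # output error (linear output layer)
    dH = (err @ W2.T) * H * (1.0 - H)   # back-propagated hidden-layer gradient
    W2 -= lr * (H.T @ err) / len(X); b2 -= lr * err.mean(axis=0)
    W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(axis=0)

# 4. Simulate the network response to new inputs.
x_new = rng.uniform(0.0, 1.0, size=(1, 5))
print("prediction:", sigmoid(x_new @ W1 + b1) @ W2 + b2)
print("final training MSE:", float((err ** 2).mean()))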




Fig.3.Multilayered feed forward network


Using the Neural Network Tool, the performance plot, the training state and the regression plots were obtained; these are shown in Figs. 4, 5 and 6.




Fig.4.Performance Plot for neural network




Fig.5.Training state for neural network



Fig.6.Regression Plot for neural network

The neural network mean square error for the testing data is shown in Fig. 7, and the mean square error for the validation data is shown in Fig. 8. The validation shows a very low error (0.0003), for which the best transfer function is obtained.
Fig.7. Testing data output (modelling data, Output 1: MSE = 0.00078708; validation data: MSE = 3.1097e-006).

Fig.8. Validated output (modelling data, Output 2: MSE = 0.00078708; validation data: MSE = 3.1097e-006).
Transfer function obtained from the neural network system
identification is as follows



G(s) = (0.08748 s^2 + 0.3499 s + 0.3499) / (s^2 + 0.1244 s + 0.006746)
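As a quick check of the identified model, its open-loop step response can be computed with SciPy's LTI tools; this sketch is an added illustration and is not part of the original identification procedure.

# Illustrative sketch: step response of the identified transfer function
# G(s) = (0.08748 s^2 + 0.3499 s + 0.3499) / (s^2 + 0.1244 s + 0.006746).
from scipy import signal
import numpy as np

num = [0.08748, 0.3499, 0.3499]
den = [1.0, 0.1244, 0.006746]
G = signal.TransferFunction(num, den)

t = np.linspace(0.0, 200.0, 1000)
t_out, y_out = signal.step(G, T=t)
print(f"DC gain = {num[-1] / den[-1]:.2f}, response at t = 200 s = {y_out[-1]:.2f}")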

5.CONVENTIONAL CONTROLLER

Consider the generalized process shown in Fig. 9. It has
an output y, a potential disturbance d, and an available
manipulated variable m.

Fig.9.Process

The disturbance d (also known as load or process load)
changes in an unpredictable manner and our control objective
is to keep the value of the output y at desired levels. A
feedback control action takes the following steps:
1. Measures the value of the output (flow, pressure,
liquid level, temperature, composition) using the appropriate
measuring device. Let ym be the value indicated by the
measuring sensor.
2. Compares the indicated value ym to the desired value
ysp (set point) of the output. Let the deviation (error) be ysp - ym.
3. The value of the deviation is supplied to the main
controller. The controller in turn changes the value of the
manipulated variable m in such a way as to reduce the
magnitude of the deviation. Usually, the controller does not
affect the manipulated variable directly but through another
device (usually control valve), known as the final control
element.
Figure 10 summarizes pictorially the foregoing three steps.

Fig.10.Feedback loop

The system in fig.9 is known as open loop, in contrast to
the feedback-controlled system of fig.10, which is called
closed loop. Also, when the value of d or m changes, the
response of the first is called open-loop response, while that
of the second is the closed-loop response. The basic
hardware components of a feedback control loop are the
following:
Process model: The first item on the agenda is process
identification. We either derive the transfer functions of the
process based on scientific or engineering principles, or we
simply do a step input experiment and fit the data to a model.
Either way, we need to find the controlled variable and also
the measured variable. We then need to decide which should
be the manipulated variable. All remaining variables are
delegated to become disturbances.
Measuring instrument or sensors: For example,
thermocouples (for temperature), bellows, or diaphragms
(for pressure or liquid level), orifice plates (for flow) and so
on.
Transmission lines: It is used to carry the measurement
signal from sensor to the controller and the control signal
from the controller to the final control element. These lines
can be either pneumatic or electrical.
Controller: The amplified signal from the transmitter is sent
to the controller, which can be a computer or a little black
box. There is not much we can say about the controller
function now, except that it is likely a PID controller, or a
software application with a similar interface.
Final control element: Usually, a control valve or a variable-
speed metering pump. This is the device that receives the
control signal from the controller and implements it by
physically adjusting the value of the manipulated variable.
Each of the elements above should be viewed as a
physical system with an input and an output. Consequently,
their behavior can be described by a differential equation or
equivalently by a transfer function.

TYPES OF CONVENTIONAL CONTROLLERS:

There are three basic types of conventional controllers:
1. Proportional controller
2. Proportional-integral controller
3. Proportional-integral-derivative controller

Here a proportional-integral-derivative controller is used, where the values are calculated by means of the Ziegler-Nichols method.
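For reference, the classical closed-loop Ziegler-Nichols rules map the ultimate gain Ku and ultimate period Tu to the PID settings Kp = 0.6 Ku, Ti = Tu/2 and Td = Tu/8. The sketch below applies these rules to assumed Ku and Tu values, since the tuned values themselves are not listed here.

# Illustrative sketch of the classical (ultimate-gain) Ziegler-Nichols PID rules:
#   Kp = 0.6 * Ku,  Ti = Tu / 2,  Td = Tu / 8.
# Ku and Tu below are assumed example values, not results from this work.
def ziegler_nichols_pid(ku, tu):
    kp = 0.6 * ku
    ti = tu / 2.0
    td = tu / 8.0
    return {"Kp": kp, "Ti": ti, "Td": td, "Ki": kp / ti, "Kd": kp * td}

if __name__ == "__main__":
    print(ziegler_nichols_pid(ku=4.0, tu=10.0))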

6.RESULTS

For the multiple-input multiple-output system, the model was identified, and by incorporating a PID controller the required setpoint is obtained, as shown in Fig. 11.



Fig.11.Conventional controller output for Multivariable
system
7.CONCLUSION:
Here, input and output data for the greenhouse system were collected. For a multiple-input multiple-output system, neural network system identification is better suited to identifying a model. From the model, it was possible to control complicated systems like the greenhouse. To further improve the performance, intelligent controllers can be used.
REFERENCES
[1] Hybrid fuzzy-logic and neural-network controller for MIMO systems
Jeen Lin a, Ruey-Jing Lian b,* a Department of Mechanical
Engineering, National Taipei University of Technology, No. 1, Sec.
3, Jhongsiao E. Rd., Taipei City 10608, Taiwan b Department of
Industrial Management, Vanung University, No. 1, Wanneng Rd.,
Jhongli City, Toayuan County 32061, Taiwan
[2] A. Errahmani, M. Benyakhlef and I. Boumhidi, Greenhouse Model Identification based on Fuzzy Clustering Approach, LESSI, Departement de physique, Faculte des Sciences Dhar El Mehraz, BP 1796.
[3] I. Laribi Maatoug, R. Mhiri, An Explicit Solution for the Optimal Control of Greenhouse Temperature relying on Embedded System, ICGST-ACSE, Volume 8, Issue I, ISSN: 1687-4811, 2004.
[4] J-F Balmat, F. Lafont, Multi-model architecture supervised by
kohonen map, International Conference on Electronic Sciences,
Information Technology and Telecommunication (SETIT), Mahdia,
Tunisia, 6:17-21, 2003.
[5] Hugo Uchida Frausto , Jan G. Pieters Modelling greenhouse
temperature using system identifcation by means of neural networks
Neurocomputing 56 (2004) 423 428
[6] O. Nelles, A. Fink, R. Isermann, Local Linear Model Trees
(LOLIMOT) Toolbox for Nonlinear System Identification, 12th FAC
Symposium on System Identification (SYSID), Santa Barbara, USA,
2000.
[7] P. Javadikia, A. Tabatabaeefar, M. Omid,R. Alimardani,M.
Fathi,Evaluation of Intelligent Greenhouse Climate Control System,
Based Fuzzy Logic in Relation to Conventional Systems, 978-0-
7695-3816-7/09 $26.00 2009 IEEE,DOI 10.1109/AICI.2009.494
[8] L. Ljung, System identification-theory for the user Englewood cliffs,
NJ: Prentice-Hall, 1987.
[9] M. Sugeno, G.T. Kang Structure identification of fuzzy model, Fuzzy
Sets and Systems, 28: 15-33, 1987.
[10] Emara H, Elshafei AL. Robust robot control enhanced by a
hierarchical adaptive.
[11] Fuzzy algorithm. Eng Appl Artif Intell 2004;17(2):18798.
[12] Kim E. Output feedback tracking control of robot manipulators with
model uncertainty via adaptive fuzzy logic. IEEE Trans Fuzzy Syst
2004; 12(3): 36878.
[13] Lin J, Lian RJ. DSP-based self-organising fuzzy controller for active
suspension systems. Vehicle Syst Dyn 2008;46(12):112339.
[14] Mollov S, Babuka R. Analysis of interactions and multivariable
decoupling fuzzy control for a binary distillation column. Int J Fuzzy
Syst 2004; 6(2):5362.
[15] Marcos Alberto Bussab, Joao Israel Bernardo, Andre Riyuiti
Hirakawa,Greenhouse Modeling Using Neural Networks
Proceedings of the 6th WSEAS Int. Conf. on Artificial Intelligence,
Knowledge Engineering and Data Bases, Corfu Island, Greece,
February 16-19, 2007.
[16] Chen B, Tong S, Liu X. Fuzzy approximate disturbance decoupling
of MIMO nonlinear systems by back stepping approach. Fuzzy Sets
System2007;158(10):1097125.





Soft Handoff and Power
Control in WCDMA

Ms Swati Patil, Lecturer (ETRX Dept.), Pillais Institute of Information Technology, New Panvel (Spswitypatil1@gmail.com)
Ms Seema Mishra, Lecturer (ETRX Dept.), Pillais Institute of Information Technology, New Panvel (s.seema.m@gmail.com)
Ms Sonali Kathare, Lecturer (EXTC), Pillais Institute of Information Technology, New Panvel (katharesonali@gmail.com)


Abstract There has been a tremendous growth in
wireless communication technology over the past
decade. This paper analyzes the power distribution
and soft handoff in cellular WCDMA. An optimum
power distribution law is developed in order to
guarantee the required signal-to-interference ratio
for each connection. Simulation results show that
soft handoff can improve the connection reliability
and the system capacity in the downlink
transmissions.
Keywords - WCDMA, power control algorithm, handover criteria.

I.INTRODUCTION

Third generation (3G) systems like the UMTS
[1](Universal Mobile Telecommunication
System) will be offering data rates up to 2 Mbps
in the near future. WCDMA (Wideband Code
Division Multiple Access) is the technology
behind UMTS [5][6] used as its air interface. It is
a complete set of specifications in which a
detailed protocol defines how a mobile phone
communicates with the base station. WCDMA
[8] can be divided into two modes, the Time
Division Duplex mode and the Frequency
Division Duplex mode. In TDD the uplink and
the downlink transmissions are time multiplexed
in to the same carrier while in FDD the uplink
and the downlink transmissions occur in
different frequency bands (around 1900 MHz
range for the uplink and 2100 MHz range for
the downlink) with a 5MHz bandwidth for each
band. The FDD mode has been chosen as the
mode of operation in Europe.

II.POWER CONTROL

Power Control [11][3] is important both in the
uplink and the downlink directions. In the uplink
direction control is required in the situations
where UEs are located very close to the Node Bs
and are transmitting with excessive power. This
is called the near-far effect and can result in
blocking the whole cell, with UEs that are close
to the cell edge possibly overlooked. If the
uplink power is too high interference in
neighboring cells (inter-cell interference) may
also be a direct result of the near-far effect. In the
downlink direction, Power Control directly
affects system capacity. System capacity is
determined by the total downlink transmission
power for each cell i.e. when total downlink
transmission power is minimized then the Node
B can accept more UEs and the capacity is
increased. Unlike the power control schemes
used in IS-95, UMTS defines three main
dissimilar power control mechanisms, that is, (1)
Open-loop power control; (2) Inner-loop power
control; and (3) Outer-loop power control, which
will be introduced here.

A. Open-loop power control:
In the UMTS [1][4] standard, Open-Loop Power
Control is defined as the ability of the UE
transmitter to set its output power to a specific
value. It is used for setting initial uplink and
downlink transmission powers when a UE is
accessing the network. This method is used for
setting up initial uplink transmission powers.
The desired power level is calculated from
measurement information about the pathloss, the
target SIR and the interference at the cell's receiver, broadcast on the BCH (Broadcast Channel).
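A simplified view of this calculation is that the UE adds the broadcast interference level and the target SIR to the pathloss it estimates from the pilot; the sketch below is a rough illustration with assumed numbers and a clamp to the UE power range, not the exact 3GPP formula.

# Rough illustration of open-loop initial power setting: pathloss is estimated
# from the broadcast pilot power and the received pilot level, then combined
# with the interference level and the target SIR signalled on the BCH.
# This is a simplified sketch with assumed numbers, not the exact 3GPP formula.
def open_loop_initial_power(pilot_tx_dbm, pilot_rx_dbm, interference_dbm,
                            sir_target_db, p_min_dbm=-49.0, p_max_dbm=21.0):
    pathloss_db = pilot_tx_dbm - pilot_rx_dbm           # estimated downlink pathloss
    p_init = pathloss_db + interference_dbm + sir_target_db
    return max(p_min_dbm, min(p_max_dbm, p_init))       # clamp to the UE power range

if __name__ == "__main__":
    # Assumed example: 33 dBm pilot, -80 dBm received pilot, -100 dBm interference.
    print(open_loop_initial_power(33.0, -80.0, -100.0, sir_target_db=6.0), "dBm")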



Fig. 1.1 open loop power control algorithm

B. Inner-loop power control:
CL power control algorithms are the main means
to counter the uplink near-far effect. In contrast to GSM systems, where only slow power control algorithms are used at a frequency of approximately 2 Hz, WCDMA uses fast power control at a 1.5 kHz frequency, compensating
for slow and fast fading. The goal of CL power
control is to equalize the received power of all
mobile stations at all times. Hence open loop
power control methods are only used in the
initial phase of setting up a connection, as
explained above. The fast power control
algorithm works as shown on the figure below.


Fig 1.3 closed loop power control algorithm

Every 0.667 ms (1/1500 Hz) the base station compares the estimated SIR of each mobile station's signal with a SIR target value. If the
measured SIR is higher than the target SIR, the
base station will command the MS to power
down; in the other case the base station sends a
power up command. The SIR target value used
in the CL power control method is provided by
the outer loop power control algorithm, as will
be explained below. To summarize, the Inner-
Loop Power Control fulfils the following three
functions: (1) It can mitigate fast fading effects at a rate of 1.5 kHz; (2) It functions in both
down- and uplink; (3) It works based on a fixed
quality target set in MS or BS, depending on
down- or uplink.
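The behaviour of the fast inner loop can be pictured with a fixed-step simulation in which, 1500 times per second, the estimated SIR is compared against the target supplied by the outer loop and a 1 dB up or down command is issued. The step size, link gain and noise values in the sketch are assumed.

# Illustrative sketch of fast (inner-loop) power control: each 0.667 ms slot the
# estimated SIR is compared with the SIR target (set by the outer loop) and the
# transmit power is stepped up or down by a fixed amount (1 dB assumed here).
# Link gain and noise values are assumed for illustration.
import random

def inner_loop(sir_target_db, slots=10, step_db=1.0):
    tx_power_db = 0.0
    gain_db, noise_db = -60.0, -70.0                # assumed link gain and noise floor
    for slot in range(slots):
        gain_db += random.uniform(-0.5, 0.5)        # assumed slow channel variation
        sir_db = tx_power_db + gain_db - noise_db
        step = -step_db if sir_db > sir_target_db else step_db
        tx_power_db = max(-49.0, min(21.0, tx_power_db + step))
        print(f"slot {slot}: SIR = {sir_db:6.2f} dB -> Tx = {tx_power_db:6.2f} dB")

if __name__ == "__main__":
    random.seed(1)
    inner_loop(sir_target_db=5.0)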

C. Outer-loop power control:
This fairly simple algorithm sets the Eb/N0
target for the fast (closed loop) power control
described in the previous section. This method
aims at maintaining the quality of
communication, while preventing capacity waste
and using as low power as possible. With a
frequency varying between 10 and 100 Hz, the
received and the desired quality of both uplink
and downlink SIR are compared. If the received
quality is better than the quality that has to be
achieved, the SIR target is decreased; in the other
case the SIR target is increased.

Fig 1.4 Outer loop power control algorithm

To summarize, the Outer-Loop Power Control
provides the following four functionalities: (1)
It can compensate changes in the environment;
(2) It can adjust the SIR target. (3) It depends on
MS mobility and multipath diversity; and (4) In
the case of soft handover it comes after frame
selection.

III.HANDOVER

The handover process is one of the essential
means that guarantees user mobility in a mobile
communication network. The concept of
mobility is simple. When a subscriber moves
from the coverage area of one cell to another, a
new connection with the target cell is set up and
the connection with the previous cell is released.
A basic handover [4] process consists of three
main phases: (a) measurement phase, dealing
with the mechanics of measuring important
parameters, (b) decision phase, dealing with the
algorithm parameters and handover criteria and
(c) execution dealing with radio resource
allocation and handover signaling.

A. Handover Trigger Criteria:
The basic reason behind a handover is that the air
interface no longer fulfils the desired
criteria set for it and thus either the UE or the
UTRAN initiates actions in order to improve the
connection. There are a number of criteria that
indicate the need for a handover operation to be
performed. The handover execution criteria
depend mainly upon the handover strategy
implemented in the system. However, most
criteria behind the handover activating rest in the
signal quality, user mobility, traffic distribution,
and bandwidth. According to most handover
algorithms [11] use the received signal strength
power as the link quality measurement for
handoff decisions. Different types of decisions
can be taken: 1.Execute a handoff to an alternate
BS if the received signal measured over a time
interval, exceeds that of the serving BS by a
threshold H (hysteresis) 2. Execute a handoff if
the measured signal strength of the serving node
drops below a threshold TL while there is a
higher signal from another base station. 3. Avoid
a handoff if the measured signal strength of the
serving BS is above a threshold Th, even if the alternate BS is stronger by the hysteresis threshold.

B.Hard handover:
Hard handover is the handover type where a
connection is broken before a new radio
connection is established between the user
equipment and the radio access network. This is
the handover type used in GSM cellular systems
where each cell was assigned a different
frequency band. A user entering a new cell
resulted in tearing down the existing connection
before setting up a new connection at a different
frequency in the target cell. The algorithm
behind this handover type is fairly simple; the
mobile station performs a handover when the
signal strength of a neighboring cell exceeds the
signal strength of the current cell with a given
threshold. In UMTS, hard handovers are used, for example, to change the radio frequency.
Otherwise stated, when a UE with a dedicated
channel allocated, roams into a new cell of a
UMTS network, hard handover is chosen when
soft or softer handover is impossible.
C. Soft handover:
Handover between different base stations. The UE is connected simultaneously to multiple base stations, and the transition between them should be seamless.
Fig.1.4 soft Handover
D. Softer handover:
Handover within the coverage area of one base station but between different sectors. The procedure is similar to that of soft handover.

Fig.1.5 Softer Handover


IV. SOFT HANDOVER ALGORITHM

In this section we provide a description of the
implemented algorithm with the specific values
for the thresholds and sampling intervals. First,
by the term Soft Handover we mean that the
mobile node is maintaining connections with
more than one base station. The Active Set
includes the cells that form a soft handover
connection to the mobile station. The
Neighbor/Monitored Set is the list of cells that
the mobile station continuously measures, but
their signal strength is not powerful enough to be
added to the Active Set. The determination of the
Active Set is based on the following conditions:
If the signal strength of the measured quantity
(not currently in the Active Set) is greater than
the strongest measured cell in the Active Set
(subtracting the soft handover threshold) for a
period t (t = time to trigger) and the Active Set is
not full, the measured cell is added to the active
set. This event is called Link Addition.
If the signal strength of the measured quantity
(Currently in the Active Set) is less than the
strongest measured cell in the Active Set
(subtracting the soft handover threshold) for a
period t, then the cell is removed from the Active
Set. This event is called Link Removal.
If the Active Set is full and the strongest
measured cell in the Monitored Set is greater
than the weakest measured cell in the Active Set
for a period T, then the weakest cell is replaced
by the strongest candidate cell (i.e. the strongest
cell in the Monitored Set). This event is called
Combined Radio Link Addition and Removal.

A. Implemented Algorithm:
The implemented algorithm samples the signal
strength of the surrounding base stations every 1
sec and uses 3dB as the threshold for soft
handover and 6dB as the threshold for hard
handover. The size of the Active Set may vary
but usually it ranges from 1 to 3 signals. In this
implementation it was set at 3. The algorithm is
displayed below in its final format.
1. Each UE is connected to its Primary_BS, and
keeps an Active_ Set (2 closest cells based on
the conditions explained above)
2. Each UE measures the SIR received from the
surrounding cells.
3. If (AS1_SIR - Pr_BS_SIR) > 3dB OR (AS2_SIR - Pr_BS_SIR) > 3dB
a. UE enters Soft Handover
b. UE keeps a simultaneous connection to the
Primary_BS and one or both of the Active Set
cells.
4. a. If (AS1_SIR - Pr_BS_SIR) > 6dB for three measurements in a row: AS1 becomes the Primary_BS
b. If (AS2_SIR - Pr_BS_SIR) > 6dB for three measurements in a row: AS2 becomes the Primary_BS
5. Neighboring cells replace the cells in the
Active Set if their SIR exceeds the Active_Set
cell's SIR by 6dB.
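The logic of steps 1-5 can be summarised as a per-measurement update of the primary cell and active set; the sketch below mirrors the 3 dB soft-handover and 6 dB primary-switch thresholds and the three-in-a-row requirement, with assumed SIR readings, purely as an illustration of the decision flow.

# Illustrative sketch of the soft-handover decision logic: the UE keeps a
# Primary_BS and an Active Set, enters soft handover when an active-set cell
# exceeds the primary by more than 3 dB, and switches the primary when a cell
# exceeds it by more than 6 dB for three consecutive measurements.
# Cell names and SIR values below are assumed example data.
SOFT_HO_DB, SWITCH_DB, IN_A_ROW = 3.0, 6.0, 3

def update(primary, active_set, sir, counters):
    """sir: dict cell -> measured SIR (dB); counters: consecutive 6 dB exceedances."""
    soft_handover = any(sir[c] - sir[primary] > SOFT_HO_DB for c in active_set)
    for c in active_set:
        counters[c] = counters.get(c, 0) + 1 if sir[c] - sir[primary] > SWITCH_DB else 0
        if counters[c] >= IN_A_ROW:
            primary, counters[c] = c, 0             # active-set cell becomes the primary
    return primary, soft_handover

if __name__ == "__main__":
    primary, active, counters = "BS1", ["BS2", "BS3"], {}
    readings = [{"BS1": 10, "BS2": 14, "BS3": 2},   # BS2 exceeds primary by >3 dB
                {"BS1": 4, "BS2": 11, "BS3": 2},    # >6 dB, 1st time
                {"BS1": 3, "BS2": 11, "BS3": 2},    # >6 dB, 2nd time
                {"BS1": 2, "BS2": 10, "BS3": 2}]    # >6 dB, 3rd time -> switch primary
    for sir in readings:
        primary, sho = update(primary, active, sir, counters)
        print(primary, "(soft handover)" if sho else "")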

Figure 1. Fast Closed Loop Power Control - Uplink

The combination of the soft handover algorithm
and the fast closed loop power control (FCL PC)
algorithms is illustrated in Figures 1 and 2 for
the uplink and downlink respectively. In the uplink direction the value for Trx_Power_MIN is -49dB and for UE_Trx_Power_MAX is 21dB, giving a range of 70dB. The SIR_Target is a value provided by the outer loop power control and aims at providing the necessary quality. The SIR target is affected by the speed of the mobile node. While in the uplink direction the decision taken by the UE affects all base stations in its Active Set, in the downlink direction every base station needs to update its own power. The minimum and maximum values of BS_Trx_Power are not constant as in the uplink case, but lie within 30dB of the initial downlink BS_Trx_Power. The total BS_Trx_Power of each base station for all UEs in its cell is specified at 46dB. Again, the SIR_Target is defined by the outer loop power control.



Figure. 2 Fast Closed Loop Power Control - Downlink

V. SIMULATION RESULTS

In this section a set of simulation results are
shown in order to demonstrate how the
developed E-UMTS system level simulator
integrates and is able to evaluate different RRM
algorithms, namely the soft handover and the
power control mechanisms described in the preceding sections. It is important to note that in this sample scenario the power control is affected by the movement of the UE and the handover process, whereas the handover process is only affected by the received signal strength at the UE, which is only related to the propagation losses.


Figure. 3 Total handovers vs. total number of users


Figure.4 Handovers per user

Figures 3 and 4 show the total number of
handovers and the number of handovers per user
as the number of users per cell increases. The
simulation results show that the number of
handovers per user is not affected much by the
total number of users present in the cell. The
average value of handover per user is around 1.0
for speeds of 120Km/h and around 0.75 for
speeds of 50Km/h. Both sets of results have a
variation of 5%. The total and the number of
handovers per user are however affected by the
mobility speed. Mobile nodes moving at faster
speeds are covering bigger distances in the same
amount of time and are more likely to cross a
boundary.

Figure.5 Power Control for the Transmitting Power of Base
Stations and the User Equipment

Figure 5 shows the uplink transmit power and the downlink transmit power from each Primary_BS to the UE. The downlink transmit power for each UE is initially 30dB. As soon as
the base station takes a reading of the SIR (every
1 second), it changes the downlink transmit
power accordingly. In this case, since the UE is
close to the base station in Cell 6, the power
drops to 13dB. The downlink transmit power is
relatively constant through the duration of the
UE movement, with the exception of two
instances immediately before the start of the soft
handovers. During those moments the downlink
transmitting power of the Primary_BS is
increased by at most 3dB to maintain the
necessary SIR given that the UE is getting out of
range .Similarly the UE uplink transmitting
power also experiences a slight increase. During
the soft handover the signals received by one, or
both, base stations of the Active_Set are not
considered as interference; therefore the
transmitted power drops again.

VI. CONCLUSION

WCDMA is the air interface standard proposed by ARIB and ETSI for third generation mobile systems. It offers many improvements over second generation narrowband CDMA systems. Its key features are asynchronous BS operation, fast TPC and variable data rate transmission. With the
increasing load on cellular systems and the drive
for smaller cells, there is a clear need for
understanding the behavior of handover
algorithms and other radio resource management
related mechanisms like the power control. This
paper presented the design and implementation
of a soft handover and its associated power
control for use in WCDMA-based Enhanced
UMTS networks. Design parameters and
decision criteria were discussed, actual
implementation parameters were presented and
the algorithms for soft handover with power
control were illustrated.
REFERENCES

[1]A. Samukic, UMTS Universal Mobile telecommunications
System: Development of standards for the third generation,
IEEE Global Telecommunications Conference & Exhibition.
v 4 1998. p 1976-1983
[2]N. Prasad, GSM evolution towards third generation
UMTS/IMT2000, IEEE International Conference on Personal
Wireless Communications 1999, p 50-54
[3] L. Nuaymi, P. Godlewski, and X. Lagrange, A Power
Control Algorithm for 3G WCDMA Systems, European
Wireless, Florence, Italy, 2002.
[4] UMTS Forum http://www.umts-forum.org
of UMTS/IMT-2000.
[5] 3GPP http://www.3gpp.org
[6] 3GPP2 http://www.3gpp2.org
[7] Wikipediahttp://www.wikipedia.com
[8] Holma H. and Toskala A.; WCDMA FOR UMTS;
fourth edition, Wiley and Sons, 2004
[9] Castro J. P.; The UMTS NETWORK and RADIO
ACCESS TECHNOLOGY; John wiley &sons; 2001
[10]Next Generation Wireless Systems & Networks Hsiao-
Hwa Chen ,Mohsen Guisani.
[11] S. Chia and R. J. Warburton, Handover Criteria for a
City Microcellular Radio System, in Proc 40th IEEE VTC,
1990, pp 276-81.


Concept- Based Personalized Web Search and a
Hierarchical Approach for its Enhancement

Jilson P. Jose
Final Year M. Tech CSE
Department of Computer Science and Engineering
SRM University Kattankulathur,
Chennai - Tamilnadu
jilsonpjose@gmail.com
A. Murugan
Assistant Professor
Department of Computer Science and Engineering
SRM University Kattankulathur,
Chennai - Tamilnadu
murugan_abap@yahoo.co.in


Abstract - Providing search results that are relevant and convenient to users with varying interests is the most difficult problem in the area of web search. This is mainly because of the huge amount of information and resources being dealt with. These problems can, however, be reduced to an extent by various evolving methods contributed by research scientists. Here we consider the concept-based mechanism, which is an important one among them. In addition to the existing techniques of considering the positive and negative preferences of user profiles, we propose the idea of also considering the profiles generated for other users with the same interests. The proposed system can achieve much better performance in the area of web searching. Through this paper we also propose the idea of implementing a hierarchical classification of the contents of the user profile for much more relevant results.
Keywords-Web Intelligence; Web Search; Concept-Based;
Personalized Search
I. INTRODUCTION
Almost all software and applications that we use nowadays have options to customize them, so that users can place the tools or commands they use frequently in an easily accessible way. In the case of operating systems we can create hardware profiles, so that there are options for specifying which hardware should function when a particular user with a specific access level uses the system. This customization or profiling helps the management to implement security features or power-consumption policies. In the case of other applications it helps users to do their work faster, since they are supplied with the tools they use frequently in an easily accessible place.
Let us consider the case of search engines. They are used
for surfing the Internet. For each query we submit, a search engine returns millions or billions of results. In order to get the right information we have to browse through each result and check whether it is the required one or not. But today's search engines implement multiple mechanisms which help to filter the search results in a more accurate way. One example is the page-ranking mechanism implemented by the Google search engine. PageRank is a link analysis algorithm,
named after Larry Page, that assigns a numerical weighting to
each element of a hyperlinked set of documents, such as the
World Wide Web, with the purpose of "measuring" its
relative importance within the set. The algorithm may be
applied to any collection of entities with reciprocal
quotations and references. The numerical weight that it
assigns to any given element E is referred to as the PageRank
of E and denoted by PR(E). A hyperlink to a page counts as
a vote of support. The PageRank of a page is defined
recursively and depends on the number and PageRank metric
of all pages that link to it ("incoming links"). A page that is
linked to by many pages with high PageRank receives a high
rank itself. If there are no links to a web page there is no
support for that page.
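The recursive definition can be evaluated by simple power iteration; the sketch below uses an assumed toy link graph and the usual 0.85 damping factor (15% random-jump probability), so the numbers are illustrative rather than those of Figure 1.

# Illustrative sketch of PageRank by power iteration with damping factor 0.85.
# The link graph is an assumed toy example, not the exact network of Figure 1.
def pagerank(links, damping=0.85, iterations=100):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for p, outgoing in links.items():
            targets = outgoing or pages            # a dangling page spreads rank to all
            share = damping * rank[p] / len(targets)
            for q in targets:
                new_rank[q] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    links = {"A": [], "B": ["C"], "C": ["B"], "D": ["A", "B"], "E": ["B", "D"]}
    for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))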




















Figure 1. A diagram showing how Google calculates the rank of a page.
Fig. 1 shows the Mathematical Page Ranks (out of 100)
for a simple network (Page Ranks reported by Google are
rescaled logarithmically). Page C has a higher PageRank
than Page E, even though it has fewer links to it; the link it
has is of a much higher value. A web surfer who chooses a
random link on every page (but with 15% likelihood jumps

to a random page on the whole web) is going to be on Page E
for 8.1% of the time. (The 15% likelihood of jumping to an
arbitrary page corresponds to a damping factor of 85%)
Without damping, all web surfers would eventually end up
on Pages A, B, or C, and all other pages would have
PageRank zero. Page A is assumed to link to all pages in the
web, because it has no outgoing links.
All these types of mechanisms have been implemented to
provide the users with the most accurate results in very little time.
The main idea behind this is Accuracy in Less Time.
The search enhancement methods that have been discussed above, implemented by various search engine applications, are not able to display results according to the user's taste. For example, let us consider a case of searching the
Internet with the keyword Jaguar. It will display the links
which are related to Jaguar automobile company, the dealers
of that automobiles, the companies or websites that compares
or introduces new cars, and in addition to it the wild animal
Jaguar, the websites related to wild life, the tourist centers
where we can find the animal Jaguar etc. If the person is
interested in wild life will have to scan through these varied
results and find out sites which are relevant to his area of
interest. It takes time for him to go through the entire search
results.
The second problem associated with this is, once he has
selected or visited web sites related to his area of interest and
he is searching the next time with the same keyword, then
also the search engine displays the same result as before. The
reason is, it does not know the area of interest of the user
who is searching. But now search engines like Yahoo and Google can analyze the user's area of interest by scanning the pages that have been visited. They do this
by using various methods.
The click through method is the prime method which
they are using to analyze the search documents. We can
subdivide the click through method into two main categories,
document based and concept based methods. The
document based method scans the document that the user
visited after a search result has been published. It considers
the time the user spent on each document, the title of the
document etc to find out the area the user is interested in.
But this has many disadvantages; we cannot always say that the pages or sites on which the user spent more time are relevant to his area, since the arrangement of content in a web site plays a major role in providing information quickly. So, according to the results of various studies in this area, concept-based methods provide better results for finding one's area of interest.
One more problem in customization is that normally almost all applications ask the user to enter their area of interest by
providing questionnaires of varying lengths. But most of the
time users will not be interested in furnishing such details
even though they know it helps them to get more accurate
results. So the newest method in this area is finding the user's interest and deriving concept-based profiles according to
that. In this approach users will not be asked to enter
anything, rather they will be supplied with the results which
are more relevant to their prior searches.
For generating such a concept based profile, we have to
scan through the web pages that he has selected or visited.
We have to access the URL of the pages, the header
information, the title, the summary etc. or in other words the
web-snippets.
The existing concept-based methods consider the preferences of the user profile. Whenever a user clicks on a link, the system extracts the idea or concept related to that URL and stores this concept as a positive preference. Later, the SpyNB [1] method proposed the idea of also considering the skipped links, since normally the skipped links will not be the ones the user is searching for. Extracting the concepts contained in those links can be helpful in filtering the next
search results when the user searches next time.
This paper introduces the technique of considering the
other users' profiles. We can get the notion behind it easily.
One Indian tourist guide knows almost all information which
is related to tourism in India, than any other guide who is a
part of any world tour team. If he wants to get some
information about hotels in a particular remote place, his
search would be a narrowed one by exactly giving queries
with place name and other features with that spot. But for a
foreigner, first he has to locate the place and find the hotels
in that area according to the customer requirements. So
sharing information from experts will be more helpful in the
case of searching. That is why we propose the idea of using
other users' profiles in our search engine personalization
techniques.
Another important consideration in the field of web
search is the Privacy- Enhanced Web Search [2]. We have
some papers which give the idea of implementing the above
mentioned privacy enhancement through a hierarchical
classification of the contents of users profiles. Here also the
system itself classifies the contents in the profile according
to their frequency, into levels like exposed, hidden etc.
Through this paper we would like to suggest a new method
which specifies how effectively we can implement this
hierarchical classification technique in concept-based web search that also considers other users' profiles, for more accurate and filtered search results.
II. RELATED WORK
There are already two main user-profile creation strategies: the document-based approach and the concept-based approach, of which the latter is getting more attention nowadays.
A. Document Based approach
In this approach we collect information about the clicked documents, that is, the clicking and browsing behaviors recorded in the user's clickthrough data. A method proposed by Joachims [7] assumes that a user would scan the search result list from top to bottom. If a user has skipped a document di at rank i before clicking on document dj at rank j, it is assumed that the user must have scanned the document di and decided to skip it. Thus we can conclude that the user prefers document dj more than di.
B. Concept-Based approach
In this approach, the area in which the user is interested is derived automatically by extracting the contents of the pages that the user visited while surfing. The user profile
is represented as a set of categories, and for each category a
set of keywords with weights. The categories stored in the
user profiles serve as a context to disambiguate user queries.
If a profile shows that a user is interested in certain
categories, the search can be narrowed down by providing
suggested results according to the users preferred categories.
1) Concept Extraction
A set of search results will be returned after giving a query to the search engine. We assume that if a keyword or phrase exists frequently in the web snippets of a particular query, it represents an important concept related to the query because it coexists in close proximity with the query in the top documents. Thus we employ the following support formula, which is inspired by the well-known problem of finding frequent item sets in data mining, to measure the interestingness of a particular keyword or phrase ci extracted from the web snippets arising from q:

support(ci) = (sf(ci) / n) . |ci|,   (1)

where sf(ci) is the snippet frequency of the keyword or phrase ci (that is, the number of web-snippets containing ci), n is the number of web-snippets returned, and |ci| is the number of terms in the keyword or phrase. If the support of a keyword or phrase ci is greater than the threshold s, we treat ci as a concept for the query q. Table 1 shows an example set of concepts extracted for the query apple.
TABLE I. EXAMPLE CONCEPTS EXTRACTED FOR THE QUERY
APPLE
Concept(ci) Support(ci) Concept(ci) Support(ci)
mac 0.1 Apple store 0.06
ipod 0.1 Slashdot apple 0.04
iPhone 0.1 picture 0.04
hardware 0.09 music 0.03
mac os 0.06 apple farm 0.02

Before concepts are extracted, stop words such as the,
of, we, etc., are first removed from the snippets. The
maximum length of a concept is limited to a fixed number of words (say 7). These steps not only reduce the computational time,
but also avoid extracting meaningless concepts.
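As an illustration, the support measure of Equation (1) can be computed directly from a list of returned web-snippets; the snippet texts and the threshold in the sketch below are assumed example data, and a full implementation would first remove stop words and apply stemming.

# Illustrative sketch of Equation (1): support(ci) = (sf(ci) / n) * |ci|, where
# sf(ci) is the number of web-snippets containing the candidate keyword/phrase
# ci and n is the number of snippets returned. Snippets and threshold are
# assumed example data.
def support(candidate, snippets):
    n = len(snippets)
    sf = sum(1 for s in snippets if candidate.lower() in s.lower())
    return (sf / n) * len(candidate.split())

if __name__ == "__main__":
    snippets = [
        "Apple mac os and mac hardware news",
        "Buy the new ipod and iphone at the apple store",
        "Apple farm pictures and music",
        "mac os tips for the apple store",
    ]
    threshold = 0.3
    for candidate in ["mac", "apple store", "music"]:
        s = support(candidate, snippets)
        print(f"{candidate}: support = {s:.2f}, concept = {s > threshold}")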
2) The Hierarchical Classification of Contents of the
User Profile
Personal data, i.e. personal documents, browsing
history and emails, might be helpful to identify a user's implicit intents. However, users have concerns about how
their personal information is used. Privacy, as opposed to
security or confidentiality, highly depends on the person
involved and how that person may benefit from sharing
personal information.
Figure 2 shows a method for implementing a security mechanism where a user can implicitly specify which sections of his profile are to be exposed to the search engine for filtering the search results and which are to be hidden. An
algorithm [2] is provided for the user to automatically build a
hierarchical user profile that represents the users implicit
personal interests. General interests are put on a higher level;
specific interests are put on a lower level. Only portions of
the user profile will be exposed to the search engine in
accordance with a users own privacy settings. A search
engine wrapper is developed on the server side to incorporate
a partial user profile with the results returned from a search
engine. Rankings from both partial user profiles and search
engine results are combined. The customized results are
delivered to the user by the wrapper.
















III. THE PROPOSED SYSTEM
It has been proved [1] that concept-based user profiles which consider not only the positive preferences but also the negative preferences while displaying the contents of the search results can do much better than those considering the positive preferences only. A person who is new to the
technology may search with different keywords to get the
required information and we cannot say that the URLs he
gets are exactly related to that technology. But an expert in
this field might have searched the topic many times and have
accessed the URLs which are dedicated to that technology or
topic. If we can share this information then the search will be
fast and more effective. We propose this idea in the concept-
based technology. But one thing to remember is that we do
not need to consider the negative preference of the other
users. Here also we can use the combination of search engine
ranking and user ranking for filtering the result. Even though
the search may consume a little bit more time to search in the
other users profile, we can easily understand that this time is
a negligible one while comparing the quality of result
retrieved.
The next idea that we propose, and which can be effectively included in
the concept-based profile creation technique, is the hierarchical
classification of the contents of user profiles. In some papers this
classification has been used to enhance the privacy of search: since
things considered private by one person could be things that others
would gladly share, the user should have control over which part of the
user profile is shared with the server.

[Figure 1. System overview: the query and browsing data pass through a
search wrapper that mediates between the web corpus and a user profile
divided into exposed and private portions.]

But here we propose this technique not only
for enhancing privacy but also for fetching the required and relevant
results for the user. A person who just wants basic information about a
topic does not need the URLs dedicated to that topic or those that
explain the core technology in depth. For example, a high-school student
searching for the Java language does not need URLs dedicated in detail
to core Java technologies such as packages or RMI; he only needs to know
what the Java language is and what its advantages or features are for
developing applications. So if we provide a hierarchical information
structure in user profiles, from general to specific, the same structure
can be applied to the other users' search results: while the search
engine searches other users' profiles on behalf of a new user querying a
particular topic, the general information is supplied first. Only if
that user then searches in more depth, with more specific keywords, are
the specific parts of the other users' profiles used.
IV. IMPLEMENTING THE SYSTEM
The major part of the implementation is the concept extraction and its
storage. We associate with the server an application that provides
concept-based searching. An authenticated user can use the system as he
normally uses a search engine. A user query is directed to the
application, and from there the application searches the database for a
similar query or one that has the same meaning, since different queries
will be framed by users to get the same thing. If there is a matching
record in the database, then that record is displayed first; otherwise
the application checks the other users' profiles for a similar query.

Whenever a user clicks on a link, a count variable is updated, which can
be used to rank the page according to that particular concept. The
concept extraction is done using the support formula (1) given above. We
use the Porter stemmer algorithm to stem the words in the retrieved web
snippets.
A. Mining Concept Relations
We assume that two concepts from a query q are similar if they coexist
frequently in the web snippets arising from the query q. Under this
assumption, we apply the following well-known signal-to-noise formula
from data mining to establish the similarity between terms t1 and t2:

sim(t1, t2) = log[ n · df(t1 ∪ t2) / (df(t1) · df(t2)) ] / log n,        (2)

where n is the number of documents in the corpus, df(t) is the document
frequency of the term t, and df(t1 ∪ t2) is the joint document frequency
of t1 and t2. The similarity sim(t1, t2) obtained using the above
formula always lies in [0, 1].
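The following small C sketch (illustrative only; the names are ours and the document-frequency counts are assumed to be available) evaluates Eq. (2) for one pair of terms:

/* Illustrative sketch of Eq. (2); not the cited system's code.
   df1, df2: document frequencies of t1 and t2; df12: their joint
   document frequency; n: number of documents in the corpus.       */
#include <math.h>
#include <stdio.h>

static double similarity(int n, int df1, int df2, int df12)
{
    if (df12 == 0)
        return 0.0;                 /* terms never co-occur */
    return log(((double)n * df12) / ((double)df1 * df2)) / log((double)n);
}

int main(void)
{
    /* e.g. 100 snippets, t1 in 10, t2 in 10, both together in 5 */
    printf("sim = %.3f\n", similarity(100, 10, 10, 5));
    return 0;
}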
The extracted concepts, together with the similar concepts derived from
the clicked URLs and the links associated with them, are then stored in
the user profile of the particular user in the database. To increase the
speed of searching we can use the personalized agglomerative clustering
algorithm [1], which clusters similar queries and users with similar
concepts.
B. Constructing a Hierarchical User Profile
Building of a hierarchical user profile is based on the
frequency of terms in documents. In the hierarchy, general
terms with higher frequency are placed at higher levels, and
specific terms with lower frequency are placed at lower
levels.
Let D represent the collection of all personal documents, where each
document is treated as a list of terms. D(t) denotes all documents
covered by term t, and |D(t)| represents the number of documents covered
by term t. A term t is frequent if |D(t)| ≥ minsup, where minsup is a
user-specified threshold representing the minimum number of documents in
which a frequent term is required to occur. Each frequent term indicates
a possible user interest. In order to organize all the frequent terms
into a hierarchical structure, relationships between the frequent terms
are defined below. Given two terms tA and tB, the two heuristic rules
used in our approach are summarized as follows:

1. Similar terms: Two terms that cover document sets with heavy
overlap might indicate the same interest. Here we use the Jaccard
function [8] to calculate the similarity between two terms:
Sim(tA, tB) = |D(tA) ∩ D(tB)| / |D(tA) ∪ D(tB)|. If Sim(tA, tB) > δ,
where δ is another user-specified threshold, we take tA and tB as
similar terms representing the same interest.
2. Parent-child terms: Specific terms often appear together with
general terms, but the reverse is not true. For example, "badminton"
tends to occur together with "sports", but "sports" might occur with
"basketball" or "soccer", not necessarily "badminton". Thus, tB is
taken as a child term of tA if the conditional probability
P(tA | tB) > δ, where δ is the same threshold as in Rule 1.

Rule 1 combines similar terms expressing the same interest, and Rule 2
describes the parent-child relationship between terms. Since
Sim(tA, tB) ≤ P(tA | tB), Rule 1 has to be enforced before Rule 2 to
prevent similar terms from being misclassified as a parent-child
relationship. For a term tA, any document covered by tA is viewed as
natural evidence of the user's interest in tA. In addition, documents
covered by a term tB that represents either the same interest as tA or a
child interest of tA can also be regarded as supporting documents of tA.
Hence the supporting documents of term tA, denoted S(tA), are defined as
the union of D(tA) and all D(tB) for which either Sim(tA, tB) > δ or
P(tA | tB) > δ is satisfied.
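To make the two rules and their ordering concrete, the sketch below (our own illustration, not the algorithm of [2]; document sets are represented as bit masks over a toy collection and the threshold value is arbitrary) classifies a term pair as similar, parent-child or unrelated:

/* Illustrative sketch of Rules 1 and 2 (not the algorithm of [2]).
   Each term's coverage D(t) is a bit mask over at most 32 documents.  */
#include <stdio.h>

static int popcount32(unsigned int x)       /* count set bits portably */
{
    int c = 0;
    while (x) { c += x & 1u; x >>= 1; }
    return c;
}

static void classify(unsigned int dA, unsigned int dB, double delta)
{
    int inter = popcount32(dA & dB);
    int uni   = popcount32(dA | dB);
    double sim  = uni ? (double)inter / uni : 0.0;                 /* Jaccard   */
    double pAgB = popcount32(dB) ? (double)inter / popcount32(dB) : 0.0;

    if (sim > delta)                       /* Rule 1 first, since sim <= P(A|B) */
        printf("similar terms (same interest)\n");
    else if (pAgB > delta)                 /* Rule 2: tB is a child of tA       */
        printf("parent-child terms\n");
    else
        printf("unrelated terms\n");
}

int main(void)
{
    /* "sports" covers documents 0-5, "badminton" covers documents 0-1 only */
    classify(0x3Fu, 0x03u, 0.6);
    return 0;
}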

Using the above rules, the algorithm [2] automatically builds a
hierarchical profile in a top-down fashion. The profile is represented
by a tree structure in which each node is labeled with a term t and
associated with a set of supporting documents S(t), except that the root
node is created without a label and is attached to D, which represents
all personal documents. Starting from the root, nodes are recursively
split until no frequent terms exist on any leaf node.
V. CONCLUSION
Personalized search implemented through concept-based user profiling has
improved search quality considerably. Considering both the positive and
negative preferences in user profiles enables search engines to filter
the results so that the areas in which the user is not interested are
also recognized. In this paper we have seen how effectively we can share
information from experts, by considering the positive preferences of
other users' profiles, and thereby enhance the search results.
Hierarchical classification of contents can also be used to display
results matched to the level of the user who is searching, from beginner
to expert.
REFERENCES
[1] Kenneth Wai-Ting Leung and Dik Lun Lee, "Deriving Concept-Based User
Profiles from Search Engine Logs," IEEE Transactions on Knowledge and
Data Engineering, vol. 22, no. 7, July 2010.
[2] Y. Xu, K. Wang, B. Zhang, and Z. Chen, "Privacy-Enhancing
Personalized Web Search," Proc. World Wide Web (WWW) Conf., 2007.
[3] F. Liu, C. Yu, and W. Meng, "Personalized Web Search by Mapping User
Queries to Categories," Proc. Int'l Conf. Information and Knowledge
Management (CIKM), 2002.
[4] S. Gauch, J. Chaffee, and A. Pretschner, "Ontology-Based
Personalized Search and Browsing," ACM Web Intelligence and Agent
Systems, vol. 1, nos. 3/4, pp. 219-234, 2003.
[5] D. Beeferman and A. Berger, "Agglomerative Clustering of a Search
Engine Query Log," Proc. ACM SIGKDD, 2000.
[6] E. Agichtein, E. Brill, and S. Dumais, "Improving Web Search Ranking
by Incorporating User Behavior Information," Proc. ACM SIGIR, 2006.
[7] T. Joachims, "Optimizing Search Engines Using Clickthrough Data,"
Proc. ACM SIGKDD, 2002.
[8] J. Han, Data Mining: Concepts and Techniques, San Francisco, CA,
2001.
[9] K.W.-T. Leung, W. Ng, and D.L. Lee, "Personalized Concept-Based
Clustering of Search Engine Queries," IEEE Trans. Knowledge and Data
Eng., vol. 20, no. 11, pp. 1505-1518, Nov. 2008.
[10] Google personalized search: http://www.google.com/psearch
[11] J. Xu and W. B. Croft, "Improving the Effectiveness of Information
Retrieval with Local Context Analysis," ACM Transactions on Information
Systems, 18(1):79-112, 2000.
[12] Paolo Ferragina and Antonio Gulli, "A Personalized Search Engine
Based on Web-Snippet Hierarchical Clustering," Proc. of the 14th
International World Wide Web Conference (WWW), Chiba, Japan, May 2005.









Implementation of Evolvable Hardware (EHW) with Fault Tolerant
Computation on Multiple Sensor Application including Interrupt Driven
Routines enabling Wireless through IEEE 802.15.4 Protocol Suite
1 S.P. Anandaraj, Research Scholar, Dept. of CSE, St. Peters University, Chennai, TN, India. Email: anandsofttech@gmail.com
2 Dr. S. Ravi, HOD, Department of ECE, Dr. M.G.R University, Chennai, TN, India. Email: ravi_mls@yahoo.co.in
3 S. Poornima, Assistant Professor, Dept. of IT, SR Engineering College, Warangal, AP, India. Email: Poornima_selvaraj@yahoo.com

Abstract: Multiple-sensor systems increasingly face problems with
various classes of interrupts, both maskable and non-maskable. Even
though past systems are equipped with many fault-tolerant techniques,
interrupt handling still fails [5]. This paper describes a new approach
to multiple-sensor systems that provides a solution both to
fault-tolerant computing, via Evolvable Hardware (EHW), and to interrupt
handling, via hardware-software co-design [9]. In this implementation
the proposed sensor system can be accessed wirelessly using the IEEE
802.15.4 protocol, known as Zigbee [7], which supports a large number of
sensors controlled by a host controller. In this embedded system,
interrupts are handled by a software approach called Device-Driven
Interrupt Service Routines (DDISR) [2]. In response to an interrupt, the
routine that is currently running is interrupted and the defined ISR is
executed using device functions such as open(), close(), read() and
write() [5]. The proposed approach increases efficiency within smaller
time slots and keeps the fault-tolerance mechanism active for the
lifetime of the system.
Keywords: Multi-Sensor System, Wireless protocol, IEEE
802.15.4 standard, Interrupt-driven Processing, Wireless
Sensor Systems, Fault Tolerant Methodology, C
programming for Embedded Systems.
I. INTRODUCTION

This paper discusses a well-configured multi-sensor system supported by
widespread sensors, wireless protocols and a Host Controller (HCON).
This embedded system relies on sensory data from the real world [5]. The
sensor data originates from multiple sensors at different locations in
distributed areas. Present sensor networks face several challenging
issues: they are application specific, difficult to configure, lack an
interrupt handler and have limited communication channels. The proposed
architecture in Fig. 1 overcomes these challenges with the ease of new
technology such as Evolvable Hardware and the IEEE-based protocol.







[Fig. 1 Multi-sensor network: a set of sensor nodes (SN).]

As shown in Fig. 1, the sensor network consists of multiple sensing
stations called sensor nodes. Each sensor node consists of a transducer,
a microcomputer, a transceiver and a power supply, as shown in Fig. 2.
The transducer produces electrical signals depending on the acquired
physical effects and criteria. The microcomputer handles and stores the
acquired data. The transceiver receives commands from the host
controller and sends data to it. A power switch is attached to each
sensor node, or a battery can be used.
The multi-sensor system is very efficient when the following features
are considered:
Broader coverage of area
Increased fault tolerance
Higher quality of measurements
Very eminent interrupts
Shorter response delay for changing events
Flexible network architecture







[Fig. 2 Connection between an individual sensor (display, LED, ADC/DAC,
PROM, microcomputer, Zigbee) and the host controller/computer with EHW
over the I/O and connection buses.]

II. SYSTEM REQUIREMENTS
In the proposed architecture, each sensor node has a 4-6 megapixel CMOS
still-picture sensor, a 7 cm wide LCD photo display screen, an enhanced
image processor and a high-speed processing engine, and can also record
high-definition video clips. It has an audio-video port connected to the
host controller. The architecture is built on hardware-software
co-design, and the requirements are listed in Fig. 3 and Fig. 4.

A. Hardware and software Requirements
As shown in Fig. 3, a Charge Coupled Device (CCD) senses images of size
2592 x 1944 pixels. The LCD display unit on the back of the sensor is
used to record video clips and to display text such as acquired data,
images and messages. Flash memory is used to store up to 2 GB of images.
The USB port is used to connect the sensor to the host controller. For
the data acquired on the CCD, the pixels of each row in the frame are
transmitted through the ADC, and the DAC sends the inputs for the
display units to the host controller. The CCDDSP (CCD digital signal
processor) compresses the acquired image using the Discrete Cosine
Transform (DCT) and decompresses it through the inverse DCT.
As shown in Fig. 4, the software requirements include a CCD
signal-processing task for offset correction, JPEG encoding and decoding
for efficient transmission to the host computer, and display device
drivers that control and handle the acquired data.
III. EMBEDDING INTERRUPT-DRIVEN SOFTWARE VIA ISR
Software functions for the sensor signals and exceptions are called
Interrupt Service Routines (ISRs); an ISR is also invoked on a trap or
on execution of a software interrupt instruction [3]. ISRs are called by
the host controller when device-hardware interrupts take place. In our
system, device-driven interrupt handlers are used for the sensor device
drivers and are controlled by the host computer. Each device requires
device-driver routines, and an ISR relates to a device-driver function.
A device driver is a function used by a high-level language programmer
that interacts with the device hardware, communicates the acquired data
or image and the control commands, and runs the code for reading the
host controller data. Moreover, generic device-driver commands provided
by the operating system are used, such as create(), open(), connect(),
bind(), read(), write(), ioctl(), etc. [5]. The structure is shown in
Fig. 5.






[Fig. 3 Hardware requirements: CCD with LCD for frame view, ADC and DAC,
flash memory, Zigbee port, USB port to the host controller, timer, DMAC,
CCDDSP and FPGA (EHW) with embedded software.]






[Fig. 4 Software requirements: display device drivers (audio, video),
CCD signal-processing task, pixel co-processing task, JPEG coder and
decoder, and LCD, Zigbee and USB port device drivers.]
For this system, an interrupt-handling mechanism is used along with ISRs
for the device drivers. An interrupt vector is a memory address to which
the processor vectors: on an interrupt, the processor transfers the
program counter to the new interrupt-vector address and services that
interrupt by executing the corresponding ISR. The memory addresses used
for vectoring by the host controller are processor specific. On an
interrupt, the processor vectors to a new address, ISR_VECTADDR. This
means that the program counter (PC), which holds the address of the next
instruction, saves that address on the stack or in a CPU register called
the link register, and the processor loads ISR_VECTADDR into the PC [2].
The stack pointer register provides the saved address to the CPU to
enable return from the ISR using the stack; when the PC is saved in the
link register, it is part of the CPU register set.
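The following minimal C sketch (purely illustrative; the table size, handler names and the dispatch entry point are our assumptions, not part of the described system) shows the idea of vectoring to an ISR through a table of handler addresses:

/* Minimal sketch of vectored interrupt dispatch (hypothetical controller;
   handler names and the dispatch entry point are illustrative only).      */
#include <stdint.h>

typedef void (*isr_t)(void);

static void isr_adc_frame_read(void) { /* read frame status and pixel data */ }
static void isr_default(void)        { /* unexpected interrupt: log or reset */ }

/* ISR vector table: index = interrupt number, entry = handler address.
   On a real part this table normally lives at a fixed, linker-placed
   address (the ISR_VECTADDR region described above).                     */
static isr_t vector_table[8] = {
    isr_default, isr_adc_frame_read, isr_default, isr_default,
    isr_default, isr_default, isr_default, isr_default
};

/* Called by low-level startup code after the PC/link register has been
   saved; it simply indexes the table and runs the matching handler.     */
void dispatch_interrupt(uint32_t irq_number)
{
    if (irq_number < 8u)
        vector_table[irq_number]();
    else
        isr_default();
}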













[Fig. 5 Execution of an ISR in the multi-sensor system: an ADC scan of
the CCD pixels raises an interrupt to the host controller, which calls
the ISR_Frame_READ routine; its signals read the frame status and pixel
data at the CCD co-processor, save the pixel data to a frame memory
buffer, and finally subtract offsets and compress the image data before
returning.]


Fig. 6 shows the ISR_VECTADDR, a common vector address for all
exceptions, traps and signals resulting from SWI. In the ARM processor
architecture, the software interrupt instruction (SWI) does not
explicitly define the type of interrupt so as to generate different
vector addresses; instead, there is a common ISR_VECTADDR for every
exception, signal or trap generated using the SWI instruction [4].


The ISR that executes after vectoring has to find out which exception
caused the processor to interrupt and divert the program. This mechanism
provides for an unlimited number of exception-handling routines in the
system, all sharing the common interrupt vector address.


















[Fig. 6 The ISR_VECTADDR with a common vector address for the various
interrupts raised via SWI: the SWI instruction transfers program flow,
via a 4-byte instruction, to a common vector address for all SWI
handlers; from this common vector address the call to the required SWI
handler routine is made and the handler's input-parameter address is
computed.]

A. Software Interrupt Instructions (SWI).

The sensor nodes are interrupt driven: a considerable amount of their
processing is set off by interrupts. Highly resource-constrained
embedded platforms typically have OS code written in the C language, and
C is widely used as the common code that executes the ISRs. Each device
interrupt is assigned a pending bit that stays set until the handler
runs or until the bit is explicitly cleared. Each interrupt also has an
associated enable bit; the interrupt is enabled when its enable bit and
the processor's global interrupt-enable bit are both on. Among all the
interrupts that are pending, the host controller selects the
highest-priority ISR and starts executing it [7]. To execute a device
ISR, the controller automatically clears the global interrupt-enable
bit, clears the pending bit, pushes the PC and jumps to the first
instruction of the ISR.

Fig. 5 shows the internal execution of an ISR. Interrupts can pre-empt
each other, as determined by the organization of the embedded structure
(hardware-software co-design) [5].

Interrupts can be raised by any of the sensors in the environment, and
their handling is defined by the manipulations performed on them by the
embedded system. The interrupts are handled by SWI using the C language,
as described above in Fig. 6.

(a)
/* ADC interrupt-driven processing: the ISR copies the converted data and
   sets a flag; the main code allocates the buffer, starts the conversion
   and processes the data once the flag indicates completion.             */
void ADC_Interrupt_Driven_Process (void)
{
    read_data (adc_buffer_ptr);
    adc_complete = true;
}

/* code for the various types of interrupts */
adc_buffer_ptr = xmalloc (sizeof (adc_buffer));
begin_adc_conversion ();

/* code to handle the interrupt */
if (adc_complete)
{
    process_data (adc_buffer_ptr);
    free (adc_buffer_ptr);
}

(b)
/* Driver command that selects the ADC input port, enables the ADC and
   starts a conversion, which later raises the ADC interrupt request.   */
async command result_t ADC.sample_port (uint8_t port)
{
    atomic {
        outp (TOSH_adc_portmap[port] & 0x1F, ADMUX);
    }
    sbi (ADCSR, ADEN);
    /* code for the interrupt request */
    sbi (ADCSR, ADSC);
    RID_request_SIG_ADC ();
    return SUCCESS;
}

Fig. 7(a) Code to handle Analog-to-Digital Converter (ADC) interrupts; (b) driver code for explicit ADC interrupt requests

IV. IMPLEMENTATION OF FAULT TOLERANT METHODOLOGY VIA EVOLVABLE HARDWARE

A. Fault Tolerant Methodology
Evolvable Hardware (EHW) is the process embedded with the sensors to
overcome faults caused by invalid interrupts [4]. The overall
architecture of the fault-tolerant methodology realized by EHW is shown
in Fig. 7 below. The proposed method starts with component testing, by
analyzing, testing and screening the components used in the system.
Next, fault-tolerant hardware design is carried out to handle component
faults; by this means the system can employ hardware redundancy for its
critical sub-assemblies [8]. Next, software faults are diagnosed, and at
last, system-level methods address the various faults that occur in the
system. On the whole, fault tolerance is implemented in the design of
the host controller, and it proves to be the best possible system
solution, increasing overall system competence and reliability [10].



Fig.7 Fault Tolerant Hierarchy Approach in EHW

V. EMBEDDING WIRELESS PROTOCOL -
IEEE STANDARD 802.15.4

Zigbee is based on the IEEE 802.15.4 standard protocol, and its
implementation is represented in Fig. 8. The Zigbee protocol supports a
large number of sensors at the same time and can be applied to home and
office automation, their remote control, and the formation of a WPAN
(wireless personal area network). The physical-layer radio operates at
carrier frequencies in the 2.4 GHz band with DSSS (Direct Sequence
Spread Spectrum). It supports a range of up to 70 m, a data transfer
rate of 250 kbps and sixteen channels. Each sensor node is fitted with a
Zigbee device to enable communication among the nodes. A Zigbee network
has a Zigbee router, end devices and a controller. The Zigbee router
transfers packets from a neighbouring source to a nearby node on the
path to the destination. The controller connects one Zigbee network with
another or connects it to a WLAN or cellular network. Zigbee end devices
are the transceivers of data [7].
In the multi-sensor system, the Zigbee network is self-organizing and
supports peer-to-peer and mesh networks. Self-organizing means that it
detects nearby Zigbee devices and establishes communication and a
network [3]. Peer-to-peer means that each node in the network functions
as a requesting device as well as a responding device. Mesh networking
means that each node can relay data for other nodes, so traffic can
follow multiple paths.










[Fig. 8 Zigbee network for the wireless multi-sensor network:
Zigbee-protocol embedded devices forming a WPAN.]
VI. CONCLUSION

Sensor networks are exposing a number of software developers to
low-level micro-controller programming, and generating such flexible
software for sensor networks is challenging. Interrupt-driven processing
is one of the finest approaches to help those developers. Moreover,
practical applications require multiple sensors, both in type and in
location, with various types of networking and communication. In this
application, Zigbee provides a 28 ms delay and appropriate area
coverage. The versioning work, increasing the efficiency of the wireless
solution, is left to future developers.

REFERENCES

[1] George C. Necula, Scott McPeak, S. P. Rahul, and Wesley Weimer,
"CIL: Intermediate language and tools for analysis and transformation of
C programs," in Proc. of the Intl. Conf. on Compiler Construction (CC),
pages 213-228, Grenoble, France, April 2002.
[2] A. Pretschner, O. Slotosch, E. Aiglstorfer, and S. Kriebel, "Model
based testing for real," Software Tools for Technology Transfer,
5(2-3):140-157, March 2004.
[3] Lydie du Bousquet, Farid Ouabdesselam, Jean-Luc Richier, and Nicolas
Zuanon, "Lutess: A specification-driven testing environment for
synchronous software," in Proc. of the 1999 Intl. Conf. on Software
Engineering (ICSE), pages 267-276, Los Angeles, CA, 1999.
[4] Philip Koopman and John DeVale, "Comparing the robustness of POSIX
operating systems," in Proc. of the 29th Fault Tolerant Computing Symp.,
Madison, WI, June 1999.
[5] Bart Broekman and Edwin Notenboom, Testing Embedded Software,
Addison-Wesley, 2002.
[6] Ben L. Titzer, Daniel Lee, and Jens Palsberg, "Avrora: Scalable
sensor network simulation with precise timing," in Proc. of the 4th
Intl. Conf. on Information Processing in Sensor Networks (IPSN), Los
Angeles, CA, April 2005.
[7] Nirupama Bulusu, John Heidemann, and Deborah Estrin, "GPS-less
low-cost outdoor localization for very small devices," IEEE Personal
Communications Magazine, 7(5):28-34, October 2000.
[8] M. Blanke et al., Diagnosis and Fault-Tolerant Control, Springer,
2003.
[9] P. Zhang and S. X. Ding, "Fault detection of networked control
systems with limited communication," in Proceedings of the IFAC
Symposium SAFEPROCESS'06, pp. 1135-1140, Beijing, P.R. China, 2006.










































Symbian Phone Forensics - An agent based approach

Deepa Krishnan
Department of Information
Technology
SRM University
Chennai, India
deepa@pointingarrow.com

Satheesh Kumar S
Resource Centre for Cyber
Forensics,
CDAC,
Thiruvananthapuram, India
satheeshks@cdactvm.in

A. Arokiaraj Jovith
Department of Information
Technology
SRM University
Chennai, India,
arokiarajjovitha@ktr.srmuniv.ac.in


Abstract— Smart phones, like the older mobile phones before them, are
fast becoming a lifestyle choice. These sleek devices, with the large
amount of personal information they hold, make smart phone forensics a
key component in any criminal investigation. The paper presents a
contrast between hardware and software approaches and highlights the key
advantage of the software approach, i.e. the speed at which actionable
data can be made available with less technical know-how. Moreover, the
proposed plug-in based agent development provides an extensible
framework to handle customizations that match each unique nuance of
phone platform and model. The paper summarizes the findings by unveiling
a prototype module developed using the platform SDK and an on-phone
agent. The main focus is the simplicity and extensibility of the
proposed approach, but at the same time the paper warns about the
possible impact on device memory and contrasts this with other
alternatives.
Keywords- law enforcement; cyber forensics; mobile
computing; security
I. INTRODUCTION

There is no device that has changed lives and seen worldwide adoption
like our humble mobile phone. With the advent of smart phones and their
integration with the web and social networking, we are at the cusp of
another radical change. We are in an era where phones have horsepower
equivalent to a PC, and where phone-based news reporting has brought
about revolutions and the downfall of corrupt regimes. These powerful
devices in the wrong hands will be equally disruptive. Against this
changing background, forensic analysis of phones has become even more
challenging, especially when our law enforcement agencies need to
process a wide range of handsets quickly and get all available
information to the investigating officer.

Today smart phones have almost all the features of a laptop or a
notebook computer, and in the near future the so-called smart phones may
replace laptop and notebook computers. Analysis of such devices is a
major agenda item for the forensics community. The law enforcement
agencies require sophisticated software as well as hardware tools for
the proper analysis of digital evidence, to bring the culprit before the
court of law.

Figure 1 shows the current penetration of mobile phones in relation to
the world population and how mobile phone usage stacks up in comparison
with other technologies. As smart phones replace the current generation
of phones, we are looking at a massive redefinition of the current
process.

The capabilities and features of each handset determine what information
could potentially be retrieved from it. This is easier to understand if
we look at it from the perspective of what particular tasks the handset
can perform. For example, older phones have limited memory, so what we
can expect to get is usually limited to data from the SIM card. Modern
phones, on the other hand, have huge internal storage, which can be
further extended by an external memory card. Apart from a camera, many
modern phones come equipped with GPS, a compass, a humidity sensor, a
proximity sensor, a gyroscope and much more. Figure 2 shows a comparison
between different categories of phones.



Figure 1. World mobile phone penetration and potential growth of smart phones [9]

Feature        Basic              Advanced                              Smart
OS             Proprietary        Proprietary                           Android, iOS, RIM OS, Palm OS, Symbian, Windows Phone 7
Address Book   Basic phone book   Address book with possible calendar   Elaborate address book, including special apps from the app store
Apps           Non-existent       Basic pre-built apps                  Wide range of app selection from the app store
eMail & Chat   None               SMS chat                              Wide range of chat and e-mail apps from the app store
Web            None               None or WAP gateway                   HTTP
Wireless       None               IrDA or Bluetooth                     IrDA, Bluetooth, WiFi

Figure 2. Common attributes of smart phone

Over the past few months the landscape of smart phone market share has
drastically changed, with Android and iOS making huge gains at the
expense of, and in some cases causing the downfall of, the competition.
The graph below shows the current US smart phone share and a three-way
race between Android, iOS and RIM.

Figure 3. Smart phone platform market share [8]
Before getting into the process details, it is appropriate to look at
the basic information present in smart phones. This can include, but is
not limited to:
1. Handset settings (language, date & time, GPRS, WAP, internet, etc.)
2. Phone book
3. Call logs (incoming, missed and outgoing)
4. SMS messages
5. Tasks
6. Calendar events
7. Stored files (pictures, music, video, audio recordings, etc.)
The intention of the cyber forensics process is to extract and analyze
this information so that the evidence can be brought before the court of
law.

II. FORENSIC PROCESS

While dealing with a digital device, the method used to acquire data
must have as little impact on device memory as possible. Any impact
should be well understood and documented. This is important to ensure
the integrity of the acquired data and also to allow for third-party
verification if it is required at any stage.

Forensics on a cell phone is considerably different from forensics on a
personal computer. Even though the number of platforms we have to deal
with is shrinking, there is still a wide selection of proprietary OSs
along with short product release cycles; hence we will always be dealing
with a moving target, even within a single platform. The methodology and
the sequence in which the phone is handled and data collected are
critical [4]. Turning off the phone has the potential to alter its
memory or data, but leaving the phone ON raises the possibility of new
information arriving over the network and affecting the integrity.
Ideally the phone should be placed in a radio-shielded environment, and
it should only be switched OFF if that is not possible. Along the same
lines, removal of the SIM card or battery from some phones could modify
the contents of phone memory. We recognize the complexity of this
process and are developing an application based on the plug-in model,
which allows for extensive customization based on the phone platform,
model and version. The application will identify the expected steps and
walk the user through them.

The following section describes some of the things a crime scene
technician should consider as he/she goes through the evidence. The
application helps the technician with proper reminders and by logging
the findings.

A. Keeping track of yourself
General guidelines for forensics require that investigators do not
change the digital data content of the device being analyzed. Moreover,
an audit trail of the analysis and investigation process should be
maintained at all times, in such a way that it can be verified by
multiple sources. Each step should be accurately documented, so that
there is enough information for the process to be reviewed by an
independent third party. Finally, the person in charge of the
investigation should maintain compliance with the governing laws.

The forensic method used should minimize changes on the device, be able
to retrieve the full set of data, and finally minimize user interaction
with the device itself. Ideally, the full memory content of a generic
embedded device should be collected, to preserve the full inner state
and obtain a forensically sound acquisition.

It is also recommended to keep track of approach and
progress, by means of an external recording device (e.g.
camera) that will maintain visual breadcrumbs.

B. Background Data
During the intake and processing of the phone evidence, the crime scene
investigator from law enforcement inputs a set of contextual
information. This includes, but is not limited to, where the evidence
was found, the crime file details, the technician's name, etc. Capturing
this kind of context information, which can be kept with the analysis
report, is the first step of the forensic extraction tool.

C. External Forensic Data Source
Information that can be gleaned from outside, vis-à-vis the network, can
be as important as what is in the phone. For example, in the GSM network
environment, a great deal of information might be recovered from the
service provider. The set of information which can be successfully
collected with this method is related to the SIM data set, such as SMS,
MMS, the list of last called numbers, and the location of the
subscriber. Clearly, information such as photos, videos, the phone book,
web browser logs, audio recordings, or the user's notes cannot be
gathered in such a way.

If external forensic data can be gathered, then the request for the
information from the service provider, or notes regarding this, are
recorded by the technician.

D. Physical Data Extraction
Physical acquisition implies a bit-by-bit copy of the entire physical
store. Physical acquisition has its advantages, since it allows deleted
files and data remnants in unallocated memory to be analyzed. Once a
bit-by-bit copy is made, the extracted image needs to be parsed and
decoded manually. Logical extraction of data implies copying data in the
logical file-system partition through OS framework calls; this is a
logical view, not a raw memory view, which has its disadvantages.

There are not many effective tools available to take an effective
physical image and parse it into something meaningful; most forensic
tools for cell phones and SIMs acquire data logically. Physical
acquisition of data also requires more technical know-how and training.
At the minimum, the technician should know how to hook up to diagnosis
or debugging ports such as JTAG, or at the extreme may need to de-solder
the flash chip and connect it to a reader. NOTE: de-soldering the flash
chip is the most invasive method for the equipment, so it may not be the
right approach in all cases, but it is ideal when the phone is damaged.
If physical hardware-based extraction is deemed useful, the technician
records those thoughts in the report log.

E. Mobile Phone Communication Interface
Different interfaces such as IrDA, Bluetooth or a serial cable can be
used to acquire logical content. Extracting data using a serial cable is
the recommended option; wireless options should be used only after
understanding the potential forensic issues. For example, Bluetooth
requires the wireless antenna to be switched ON and requires key entries
on the handset so that it is paired with the forensic workstation and a
good connection is set up; all of this generates integrity concerns.
F. Logical Data Extraction
This is the heart of the process and relies on multiple protocols and
communication methods; some of the things used are the AT command set
(SIM card commands), SyncML, FBUS, MBUS, OBEX, IrMC and APDU. As shown
in later sections, the phone OS platform SDK provides powerful options
to extract data. Because each phone has its own unique approach, the
plug-in model provides an extensible option to pick and choose what
works best for the phone platform and model.

As the phone model is chosen, the correct plug-in that will do the work
is called and used to extract the information. The extracted information
and the final report are then run through a hashing algorithm (MD5)
before being saved, to prevent tampering.
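As an illustration of the hashing step (a minimal sketch only, assuming OpenSSL's MD5 API is available; it is not the tool's actual code), the following C program computes the MD5 digest of an in-memory buffer such as an acquired image:

/* Minimal sketch: hashing an acquired image buffer with MD5 so the
   report can later be verified for tampering. Assumes OpenSSL
   (compile and link with -lcrypto).                                   */
#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>

static void print_md5_of_buffer(const unsigned char *data, size_t len)
{
    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5(data, len, digest);                 /* one-shot MD5 over the buffer */
    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("\n");
}

int main(void)
{
    const char *image = "example acquired image bytes";   /* placeholder data */
    print_md5_of_buffer((const unsigned char *)image, strlen(image));
    return 0;
}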

III. UNDERSTANDING SYMBIAN PLATFORM FOR PHONE
FORENSICS

In the remainder of this paper we dive deeper into developing the
proposed module by choosing one of the smart phone platforms, i.e.
Symbian S60. We start the discussion with an overview of the Symbian
operating system and lay the groundwork for what we are dealing with.
The sections following that describe the different methods employed in
retrieving data.

Figure 4. Symbian OS architecture
A. Symbian OS Architecture
As can be seen in Figure 4, the architecture is modular:
operating-system functionality is provided in separate building blocks,
not in one monolithic unit. Being a single-purpose phone OS, Symbian is
single-user with multi-tasking capability, able to switch CPU time
between multiple threads and so giving the user of the mobile phone the
impression that multiple applications are running at the same time.

The core OS is formed by a microkernel built as a personality layer on
top of a real-time (RTOS) nanokernel. This block is responsible for
primitives such as fast synchronization, timers, initial interrupt
dispatching, and thread scheduling. Generally speaking, Symbian OS is
intended to run on open, small, battery-powered portable computers,
which are modern advanced state-of-the-art mobile phones.

B. Symbian Filesystem
On a Symbian smart phone, the file system [5] can be accessed by means
of the file server component, also referred to as F32, which manages
every file device. It provides services to access the files, directories
and drives on those file-mapped devices. The file server uses the
client/server framework, receiving and processing file-related requests
from multiple clients. Moreover, it is able to deal with different
file-system formats, such as the FAT format used for removable disks, by
using components that are plugged into the file server itself. In
addition, it supports a maximum of 26 drives, each identified, in
DOS-like convention, by a different drive letter in the range A: - Z:.

The main ROM drive, where the firmware resides, is
always designated as Z:. This drive holds system
executables and data, which are referred to as XIP
(eXecutable In Place) because they are directly launched
without being loaded into RAM. Besides this, the firmware,
or ROM image, is usually programmed into Flash memory,
known also as EEPROM, the nonvolatile memory that can
be programmed and erased electronically.

The C: drive is always designated as the main user data
drive, which can be mapped onto the same Flash memory
chip of the firmware, whereas any removable media device
is generally designated as D: or E:. It is worth mentioning
that every access from a client to file server (F32) takes place
via a file server session, by means of RFs server session
class, which implements all the basic services to interact with
the file system. We can obtain information about drives and
volumes, act on directories, obtain notification about the
state of files and directories, analyze file names, verify the
integrity of the file system, and finally, manage drives and
file systems.

C. Platform Security
Platform security [6] on Symbian OS v9 prevents applications from having
unauthorized access to hardware, software and system or user data. The
intention is to prevent malware, or even just badly written code, from
compromising the operation of the phone, corrupting or stealing
confidential user data, or adversely affecting the phone network. Every
Symbian OS process is assigned a level of privilege through a set of
capabilities, which are like tokens. A capability grants a process the
trust that it will not abuse the services related to the associated
privilege. The Symbian OS kernel holds a list of capabilities for every
running process and checks it before allowing a process to access a
protected service.

There are four different types of platform security
capability, when digital signing is considered. The
differences arise because of the sensitivity of the data or
system resources the capabilities protect, and the
requirements that are placed on the developer before they are
given permission to use them. The capabilities of a process
cannot be changed at runtime. The Symbian OS loader starts
a process by reading the executable and checking the
capabilities it has been assigned. Once the process starts
running, it cannot change its capabilities, nor can the loader
or any other process or DLL that loads into it affect the
capability set of the process. A process can only load a DLL
if that DLL is trusted with at least the same capabilities as
that process.

The Symbian OS file system is partitioned to protect
system files (critical to the operation of the phone),
application data (to prevent other applications from stealing
copyrighted content or accidentally corrupting data) and data
files personal to the user (which should remain confidential).
This partitioning is called data caging. It is not used on the
entire file system; there are some public areas for which no
capabilities are required.


Figure 5. Datacaging and capabilities.
However, some directories in the file system can only be accessed using
certain capabilities. Each Symbian OS process has its own private
folder, which can be created on internal memory or removable media. The
folder name is based on the Secure Identifier (SID) of the process. A
SID is required to identify each EXE on the phone and is used to create
its private directory. With previous Nokia phone generations, for
instance the S40 series, logical acquisition of the device content was
possible by means of Symbian OS APIs, which were able to copy the entire
file-system content to an external memory device. With S60 the access is
restricted by data caging; Figure 5 shows a table of folder security
access (data caging) based on the application capabilities.

Interestingly, the file-system restriction policy is fully contained in
the file known as SWIPOLICY.INI [1], located in the folder
z:\system\data\. The original policy file from a Nokia Symbian-based
smart phone is illustrated as follows.

AllowUnsigned = false
MandatePolicies = false
MandateCodeSigningExtension = false
Oid = 1.2.3.4.5.6
Oid = 2.3.4.5.6.7
DRMEnabled = true
DRMIntent = 3
OcspMandatory = false
OcspEnabled = true
AllowGrantUserCapabilities = true
AllowOrphanedOverwrite = true
UserCapabilities = NetworkServices LocalServices
ReadUserData WriteUserData UserEnvironment
AllowPackagePropagate = true
SISCompatibleIfNoTargetDevices = false
RunWaitTimeoutSeconds = 600
AllowRunOnInstallUninstall = false
DeletePreinstalledFilesOnUninstall = true

It is interesting to observe that the capability set defined in the
previous file is limited, and that it restricts interaction with the
file system of the mobile platform. According to the standard
documentation, the various parameters can appear in any order. Moreover,
the UserCapabilities set might be changed by adding the required
capabilities, such as those illustrated in the following modified
version of the policy file.

AllowUnsigned = true
MandatePolicies = false
MandateCodeSigningExtension = false
Oid = 1.2.3.4.5.6
Oid = 2.3.4.5.6.7
OcspMandatory = false
OcspEnabled = true
AllowGrantUserCapabilities = true
UserCapabilities = AllFiles DiskAdmin NetworkServices
LocalServices ReadUserData WriteUserData
UserEnvironment MultiMediaDD NetworkControl
CommDD ReadDeviceData WriteDeviceData
SISCompatibleIfNoTargetDevices = false
AllowRunOnInstallUninstall = true
AllowPackagePropagate = true
DeletePreinstalledFilesOnUninstall = true

The illustrated policy file can be written directly into the original
firmware of the phone and subsequently uploaded by means of re-flashing.
As a result, the complete C: disk content might be collected with
standard self-signed APIs and thus analyzed, to extract the full set of
probatory data which can usually be found on a mobile platform. This is
the usual approach for other mobile platforms as well, where the primary
image, the one which contains the entire set of evidence, can be
obtained without any restrictions. Unfortunately, such a scenario is far
from the reality, and we need to evaluate other ways to access the
digital data content of the smart phone.

So far, if an application needs complete access to the phone file
system, it has to be authorized by means of the Symbian signing
procedure with the AllFiles capability, which requires a special
certificate released by TC Trust Center. Three steps are required to
sign an application. Initially, the installation file generator,
makesis.exe, creates the installation files (extension .sis) from
information specified in the package file (extension .pkg). After that,
if the application is for the international market, it will be signed
with an ACS Publisher ID by means of the Symbian Signed service;
otherwise it will be signed with a user-generated certificate, which
might be created with makekeys.exe. Finally, the Installation File
Signer (signsis.exe) digitally signs the installation files with the
proper certificate, generating, as a result, a .sisx file.

IV. DATA COLLECTION SCHEMES FOR SYMBIAN
PLATFORM

This paper suggests a distinct approach both to developing the tool and
to extracting data from the device. The application is developed as a
set of composable parts: a part provides services to other parts and
consumes services provided by other parts. Each plug-in exposes a
contract identifier so that it can talk to the other parts of the
application. Data extraction is based on a client-server architecture.
During acquisition the tool should have full access to the object store;
as discussed, there are very severe constraints on obtaining a full
physical image, so a logical copy of the object store is proposed. The
client part is installed on the desktop PC, and the server part is
copied onto the mobile device, giving us API access to extract a copy.
The first problem that had to be tackled is the deployment of the tool
onto the device. A number of ways to place the agent-based tool on the
device were considered. The tool can be packaged as a SIS installer, so
it can be sent to the phone using Bluetooth, infra-red or file transfer
using the PC suite. A SIS file is a special software installer for the
Symbian platform. Even though it may change certain parts of the file
system, the changes are very small.
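A minimal sketch of the plug-in idea is given below (our own illustration; the structure, contract identifier and function names are hypothetical and do not come from the described tool). Each phone model supplies its own set of acquisition routines behind a common contract:

/* Illustrative sketch (names hypothetical): a plug-in contract expressed
   as a C struct of function pointers, so each phone model can supply its
   own acquisition routines to the host-side application.                 */
#include <stdio.h>

typedef struct {
    const char *contract_id;                 /* identifies the part's services  */
    int  (*connect)(const char *port);       /* open link to the on-phone agent */
    int  (*acquire_item)(const char *what);  /* e.g. "sms", "contacts", "calls" */
    void (*disconnect)(void);
} AcquisitionPlugin;

/* a stub plug-in for one hypothetical phone model */
static int  s60_connect(const char *port) { printf("connect %s\n", port); return 0; }
static int  s60_acquire(const char *what) { printf("acquire %s\n", what); return 0; }
static void s60_disconnect(void)          { printf("disconnect\n"); }

static const AcquisitionPlugin s60_plugin = {
    "acquisition.symbian.s60", s60_connect, s60_acquire, s60_disconnect
};

int main(void)
{
    const AcquisitionPlugin *p = &s60_plugin;  /* selected from the phone model */
    p->connect("COM3");
    p->acquire_item("sms");
    p->disconnect();
    return 0;
}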
A. Data Acquisition
Data acquisition is the major step in the forensics process. According
to forensics principles, the original data cannot be used for any
forensic analysis, so we need to create a copy of the logical data
present in the mobile device. This is achieved by developing an agent
which is installed on the target device. The module uses the Symbian
SDK, the AT command set (SIM card commands), FBUS and the Connectivity
SDK to read the file system. The module is capable of establishing a
connection and exchanging data with an externally connected host
computer. Figure 6 shows the settings screen of the acquisition GUI,
which allows the investigator to select the phone model; this in turn
enables or disables the features available.


Figure 6. Symbian acquisition process

Since the tool uses standard APIs to access the file system and uses an
agent whose code is signed by the signing authority, we can reasonably
believe that these APIs will not change the device content. The main
issue with this approach is that it requires a piece of software, called
the agent, to be loaded on the device to acquire the content.
B. Data Analysis
The tool creates a logical copy of the data present in the mobile device
as a file on the desktop PC, where the client programme is running. The
tool also supports generating a hash value, which proves the
authenticity of the acquired data. The image created can be loaded into
an analyzer so that the data present in the mobile device can be viewed
for further analysis. The tool provides important forensic information
such as contacts, call logs, SMS, etc. This information will help the
investigation agencies to get cues for further investigation. The
analysis tool shows all this information in separate file viewers: the
incoming, outgoing and missed call details are displayed separately, and
the Inbox, Outbox, Draft, Sent and Deleted SMS are categorized in
separate viewers. The analysis tool also provides keyword and file
search facilities, which are key features of a forensics tool. The user
can add any keywords and file extensions in the box provided, and the
tool will search the entire image for the string entered, showing the
search hits in a separate viewer.
V. CONCLUSION
To summarize, standardized digital forensic methodologies for mobile
phones are still in their infancy: the kind of data we need to look for
and the security paradigms are new. As the platforms evolve and mature
we should see more robust imaging tools, e.g. VMWare-style tools for the
Android platform. For now, the hardware approach seems to be the only
one able to give a bit-by-bit image of the flash memory content, thus
preserving the content of the investigated phone. But the software
approach works for acquiring specific items; for instance, it is
certainly possible to extract the entire set of probatory data, such as
SMS, MMS, pictures, video clips and the phone book, by using application
APIs.
ACKNOWLEDGMENT

We would like to thank Mr. Thomas K L, Joint Director, at
Resource Centre for Cyber Forensics (RCCF), Centre for
Development of Advanced computing (CDAC) Trivandrum,
for his valuable suggestions and support. This work was
done at the RCCF, CDAC, Trivandrum, Kerala, India.

REFERENCES

[1] Symbian Ltd., Symbian OS library for application developers.
Available at: http://www.symbian.com.
[2] B. Morris, The Symbian OS Architecture Sourcebook: Design and
Evolution of a Mobile Phone OS, John Wiley & Sons, Ltd, 2007.
[3] Michael Aubert, with Alexey Gusev et al., Quick Recipes on Symbian
OS: Mastering C++ Smartphone Development, John Wiley & Sons, Ltd, 2008,
pp. 529-551.
[4] Svein Yngvar Willassen, "Forensics and the GSM mobile phone system,"
The International Journal of Digital Evidence, Spring 2003, Volume 2,
Issue 1.
[5] Richard Harrison and Mark Shackman, Symbian OS C++ for Mobile
Phones, Volume 3, John Wiley and Sons, Ltd, pp. 204-206.
[6] Michael Aubert, Quick Recipes on Symbian OS: Mastering C++
Smartphone Development, John Wiley and Sons, Ltd, pp. 60-63.
[7] M. Breeuwsma, M. de Jongh, C. Klaver, et al., "Forensic data
recovery from flash memory," Small Scale Device Forensics Journal, 2007,
1.
[8] Nielsen 2011, "Who is Winning the U.S. Smartphone Battle?" [Online]
Available: http://blog.nielsen.com/nielsenwire/online_mobile/who-is-
winning-the-u-s-smartphone-battle/
[9] Dan Frommer 2011, businessinsider.com, RBC Capital Markets, "CHART
OF THE DAY: 99.7% Of People Still Haven't Bought A Tablet Yet" [Online]
Available: http://www.businessinsider.com/chart-
of-the-day-heres-how-huge-the-tablet-market-could-get-2011
[10] Jo Stichbury, Symbian OS Explained, John Wiley and Sons, Ltd, 2004.
[11] Iain Campbell, Symbian OS Communications, John Wiley and Sons, Ltd,
2007.
[12] OMA (2001), SyncML Sync Protocol, Technical Report 1.0.1, Open
Mobile Alliance.
[13] Symbian Ltd., Carbide.c++: Introductory White Paper, Forum Nokia,
Version 1.1, 2007.


DATA VISUALIZATION MODEL FOR SPEECH
ARTICULATORS
Dr A Rathinavelu
Professor & Head CSE Dept
Dr Mahalingam college of Eng and Tech
Pollachi-642003, TamilNadu, India
E-mail: starvee@drmcet.ac.in
G Yuvaraj
PG Student
Dr Mahalingam college of Eng and Tech
Pollachi-642003, TamilNadu, India
E-mail: g.yuvarajme@gmail.co.in


Abstract—The work describes the study and development of a visualization
tool for the internal articulator movements of Tamil speech sounds. To
improve the accuracy of speech production, a computer-aided articulator
interface has been developed. The interface contains the front and side
views of an animated face model, which displays the various possible
movements of visible inner articulators such as the tongue and lips. The
tongue and lip models play a vital role in displaying the place of
articulation, which improves speech intelligibility for early language
learners as well as in speech therapy for subjects suffering from
hearing impairments or articulation disorders. In this work both the
tongue and the lips are modeled using sets of polygons. The tongue is
made up of two layers, each containing 49 control points arranged in a
7 x 7 grid; from these 49 control points, seven have been identified to
parameterize the tongue. Lip modeling is done with a 6 x 7 grid
consisting of 42 control points; to parameterize the lips, six major
control points have been used. To reconstruct the position of the tongue
and lips, the seven control points of the tongue and the six control
points of the lips are extracted from mid-sagittal Magnetic Resonance
Imaging (MRI) images captured during the articulation of each phoneme.
The focus of our method is to develop an interface that is usable for
training or instruction to improve speech.

Keywords Speech perception, Internal articulators, Speech
production, Computer aided articulator interface, Speech
intelligibility, Speech therapy, MRI
I. INTRODUCTION
Computer Aided Data Visualization system provides an
interesting tool for investigating gain of visual information
which can enhance speech intelligibility. The visualization of
speech production helps subjects to know about the place of
inner articulators and to control their speech organs. Due to
hidden articulators and other social issues, visual speech
perception is a complex task [1]. Perceptual research has been
to a certain degree informative about how visual speech is
represented and processed, but improvements in visual speech
synthesis need to be much more driven by detailed studies of
how real humans produce speech, because human speech
production is a very complex process.
To explain articulatory processes, speech therapists use static pictures
of articulator positions from the mid-sagittal or frontal view. Such
static pictures do not consider the coarticulatory interaction of real
speech trajectories [2]. To provide co-articulation, a talking head with
a three-dimensional model of the vocal tract is important. The dynamic
information from a talking head used as a speech trainer (e.g. for
language acquisition or speech therapy) offers the possibility of
showing the internal articulators when explaining the production of
different speech sounds [3].
The ultimate goal of research on the data visualization model is to
develop an interface which includes an animated face model with visible
inner articulators (tongue, lips and jaw) [4]. Speech production
information is acquired from both the front and the side view of the
animated face model. The interface contains a control panel which helps
to show the various possible movements of each articulator. Our
computer-aided data visualization system can be used to train and
improve the speech intelligibility of second-language learners and
hearing-impaired subjects in the Tamil language.
The rest of this paper is organized as follows. Section II
reviews some of the related works in this context. Section III
explains about the implementation details and techniques used.
Section IV shows some of the experimental results obtained.
Finally, Section V provides a discussion to extend the work of
this system in future.
II. RELATED WORK
In this section, related work in the fields of speech perception,
computer-aided articulator interfaces and articulator modeling is
presented.
A subject with hearing impairments suffers from lack in
auditory feedback and problem in gaining speech production.
With these difficulties most of the subjects do not learn to
speak properly despite a fully functional speech production
system. Speech therapy can improve their speech
intelligibility dramatically. Speech training systems provides
visual feedback of vocal tract shape which are found to be
more useful to know the correct place of articulation. There is
wide range of computer based speech training (CBST)
systems used as therapists for subjects with hearing impaired
and speech impairment. Some of the CBST systems are
SpeechViewer, Box of Tricks, Indiana Speech Training Aid,
Speech Illumina Mentor, Speech Training, Assessment and
Remediation system [10]. These therapists are extensively
used and acknowledged.
In previous work [1] an interface named Computer Aided
Articulatory Tutor (CAAT) was developed using suitable
computer graphics and MRI techniques to visualize the inner
articulatory movements of the animated tutor. Three
dimensional vocal tract articulatory models were developed with
the polygon modeling technique. Polygon modeling is the most
commonly used method to create three dimensional models.
Polygon models are relatively easy to create and can be
deformed easily; however, the smoothness of the surface is
directly related to the complexity of the model.
The tongue model in the CAAT interface was modeled as a
set of polygons. The tongue was visualized as being made up of
50 control points. To construct the entire three dimensional
shape of the tongue, five control points were identified as
major points. To perform articulation for phonemes, the five
major control points of the tongue were extracted from the
mid-sagittal MRI images and stored along with the corresponding
phoneme. Speech articulation was then performed using key
framing and interpolation. With the key framing technique, the
base and the target positions of the tongue were defined [8],
and the intermediate deformation values were computed by
interpolation.
III. SYSTEM DESIGN
The embedded modules are articulator modeling module,
visual articulation module, and control interface module. In
this articulator modeling module models such as animated
face model, tongue model, lip model and lower jaw model
have been developed. The visual articulation module involves
generating a series of parameter settings for virtual articulators.
For each speech sound the visual articulation module provides
a co-articulated target position which is held for a fixed
fraction of the speech duration. The control interface model
assign control to each articulator such as tongue, lip which
allow the following movements:
Lip opening and closing
Lip rounding
Tongue body raise
Tongue contact with palate
Tongue front and back
Tongue tip raise
A. Data Acquisition
Data acquisition is the first step in constructing an
articulatory model. Three dimensional models are based on
geometry, typically a polygon mesh that is deformed to
produce animation. To develop an initial mesh whose
geometry data has to acquire from suitable method or
technique. There are many methods to acquire data on inner
articulators: Magnetic Resonance Imaging (MRI), Kinematic
data from Electropalatography (EPG) and Electromagnetic
Articulography (EMA) has been used [13]. Each of these
methods in isolation can provide useful information [9]. But
MRI is the dominating measurement method for three
dimensional imaging and provides detailed three dimensional
data of the entire vocal tract and tongue [6]. MRI is amenable
to computerized three dimensional modeling and provides
excellent structural differentiation. Due to technical advances,
it is possible to collect full three dimensional data without
subjects needing to sustain the articulation artificially [12, 14].
Moreover, MRI does not cause any known health risks to
subjects. Therefore MRI can be used for large corpus studies
and for repeated measurements with same subjects.
B. Tongue Modeling
The tongue is an important organ in human speech production.
Realistic speech animation requires a tongue model with
clearly visible tongue motion, which is an important cue in
speech reading. Our tongue model is implemented as a set of
polygons, consisting of 98 control points joined by 86 polygons
making up a polygon mesh. The surface of the tongue is rendered
using illumination and shading, giving it a smooth appearance.
The tongue model is made up of two layers, each containing 49
control points arranged in a 7 x 7 grid, as shown in Fig. 1. To
parameterize the tongue, seven major control points have been
identified.


Fig. 1 Tongue model
Our tongue model can perform the following deformations:
tongue tip raise, tongue body raise, tongue forward and
backward movement, and tongue contact with the palate. The
deformations include rotation, scaling, translation and pull.
They are applied to the control points within a defined area of
influence, thus creating the possible movements of the tongue
model.
C. Lip Modeling
The lips consist of two portions, the upper lip and the lower lip.
Lip animation requires the upper and lower lips to move in
parallel [15]. Our lips are modeled using the polygon modeling
technique, which results in smooth shapes and structure. A further
benefit of polygon modeling is that the changing shapes of the
polygon models can be computed much faster [17], and it helps to
achieve the desired lip shapes directly. Our lip model consists of
42 control points forming 28 polygons, each with 5 vertices. The
entire three dimensional lip model is arranged in a 6 x 7 control
grid, shown in Fig. 2. To parameterize the lips, six major control
points have been identified.
Proceedings of AICERA 2011
ISBN: 978-81-921320-0-6 156

Fig. 2 Lip model
In our lip model six types of deformation can take place:
lip rounding, lip protrusion, upper lip raise, lower lip
depression, upper lip retraction and lower lip retraction.
D. Visual Articulation Interface
The visual articulator interface displays a double view of the
animated face, from the front and the side. The surface of the face
is made semitransparent to display the inner articulators such
as the tongue, lips and jaw. This capability is especially useful for
explaining non-visible articulation in language learning
situations. The animated face is described as a polygonal mesh
that is deformed by parameterization. The visual articulator
interface is developed using Java programming, and the necessary
controls are incorporated in the interface. To perform articulation,
the user selects a speech sound from the drop-down list box. Once
a speech sound is chosen, the corresponding picture of the speech
sound is displayed in the interface, and the corresponding
co-articulation is animated in the front and side views of the
animated face.

Table 1. Coordinates of the tongue's base position
Point No.  X-Coordinate  Y-Coordinate  Z-Coordinate
1 -0.0298804 -0.8214912 -1.9494
2 0.0859375 -0.8214912 -1.9494
3 0.1950554 -0.8214912 -1.9494
4 0.3038823 -0.8214912 -1.9494
5 -0.0298804 -0.9450541 -1.9494
6 -0.0298804 -1.0559431 -1.9494
7 -0.0298804 -1.1666168 -1.9494

The visual articulator interface uses key framing and
interpolation to achieve speech articulation. Interpolation is the
most common method of animating three dimensional models [16]. The
basic principle is first to define key frames for the base and
target positions of the articulators; once the key frames are
identified, the in-between frames can be determined by
interpolation. In our work, to perform articulation of different
speech sounds, the seven major control points of the tongue and
the six major control points of the lips are extracted from the
mid-sagittal MRI images captured during the articulation of each
speech sound. These values are stored in a database along with the
corresponding speech sounds. The points corresponding to the base
position of the tongue are shown in Table 1 and the base position
of the lips in Table 2.
Table 2. Coordinates of the lips' base position
Point No.  X-Coordinate  Y-Coordinate  Z-Coordinate
1          -0.02988      -1.56662      -1.99494
2           0.014519     -1.46662      -1.99494
3           0.024519     -1.38662      -1.99494
4           0.03452      -1.32662      -1.99494
5          -0.02988      -1.56662      -1.83494
6          -0.02988      -1.50662      -1.80494

The visual articulation process is depicted in Fig. 3. The
major control points extracted from the MRI images (Fig. 4) for
each speech sound are stored in the database. To reconstruct the
tongue and vocal tract model, the coordinates of the seven major
tongue control points and the six major lip control points are
retrieved from the database, and, using a correction factor, the
complete tongue and lip shapes are plotted.


Fig. 3 Flow diagram for visual articulation
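To make the key framing and interpolation step concrete, the following sketch (our illustration, not the paper's code) linearly interpolates one tongue control point from its base position (Table 1, point 7) towards its target position for the sound THA (Table 3 in Section IV); the number of in-between frames is an illustrative assumption.

import numpy as np

def interpolate(base, target, n_frames):
    """Return n_frames point positions blending linearly from base to target."""
    base, target = np.asarray(base), np.asarray(target)
    return [base + t * (target - base) for t in np.linspace(0.0, 1.0, n_frames)]

base_p7   = [-0.0298804, -1.1666168, -1.9494]   # Table 1, point 7 (base position)
target_p7 = [-0.0298804, -1.1666168, -1.7294]   # Table 3, point 7 (sound THA)
frames = interpolate(base_p7, target_p7, n_frames=5)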
E. Control Panel Interface
The interface comprises of front and side view of animated
face with visible inner articulators, along with set of controls
Proceedings of AICERA 2011
ISBN: 978-81-921320-0-6 157
for each articulator (tongue, lip and jaw). The controls are
used to show different possible movement of each articulator.
The tongue model has four controls which enable following
movements such as: tongue body raise, tongue contact with
palate, tongue forward and backward movement and tongue
tip raise. The lip model has three controls to perform
movements such as: lip open and close, lip rounding and lip
protrusion. The control panel interface is shown in Fig. 5.


Fig. 4 MRI image [AR]

Fig. 5 Interface of Control panel
Our control panel interface is designed for two main reasons.
First, to train subjects who have difficulty producing particular
speech sounds: by showing the movement of the articulators as the
controls are moved, they can understand precisely how particular
speech sounds are articulated. Second, to perform the articulation
process for a new speech sound: instead of using MRI to obtain the
major control points of the tongue and vocal tract, the data can
be obtained from the control panel interface.

IV. EXPERIMENTAL RESULTS
Our visualization tool performed the articulation process for
frequently used Tamil speech sounds. The articulation of the
letter THA results in a target location of the tongue and lips. To
reconstruct the place of articulation for the corresponding speech
sound, the seven major control points of the tongue shown in
Table 3 and the six major control points of the lips shown in
Table 4 were extracted from the MRI images. Using these points,
the entire tongue and lip models were plotted.
Table 3. Coordinates of the tongue's position for the sound THA
Point No.  X-Coordinate  Y-Coordinate  Z-Coordinate
1 -0.0298804 -0.8214912 -1.9494
2 0.0859375 -0.8214912 -1.9494
3 0.1950554 -0.8214912 -1.9494
4 0.3038823 -0.8214912 -1.9494
5 -0.0298804 -0.9450541 -1.8594
6 -0.0298804 -1.0559431 -1.7894
7 -0.0298804 -1.1666168 -1.7294
Table 4. Coordinates of the lips' position for the sound THA
Point No.  X-Coordinate  Y-Coordinate  Z-Coordinate
1          -0.02988      -1.56662      -2.06494
2           0.014519     -1.46662      -2.06494
3           0.024519     -1.38662      -2.06494
4           0.03452      -1.32662      -2.01494
5          -0.02988      -1.56662      -2.22494
6          -0.02988      -1.50662      -2.25494
V. DISCUSSION
This visualization tool is aimed at helping hearing impaired
and second language learners in the acquisition of speech sounds.
The interface has been developed to improve the realism and
accuracy of visible speech production. It comprises an animated
head with modeled tongue and lips, along with visual cues that
help to perceive the position and manner of each speech sound.
The interface serves two purposes: first, as a speech therapy aid
that shows the articulation process of each speech sound; and
second, as a speech articulation trainer from which control
points can be acquired and used as input to perform articulation.
The developed interface is user friendly and can be used without
prior training or instruction.
REFERENCES
[1] A. Rathinavelu, T. Hemalatha and R. Anupriya, "Three Dimensional Articulator Model for Speech Acquisition by Children with Hearing Loss," in C. Stephanidis (Ed.): Universal Access in HCI, Part I, HCII 2007, pp. 786-794.
[2] A. Rathinavelu and T. Hemalatha, "Evaluation of a computer aided 3D lip sync instruction model using VR objects," Int. J. Disabil. Hum. Dev., 2006, 5(2), pp. 127-132.
[3] K. Grauwinkel, B. B. Dewitt and S. Fagel, "Visualization of Internal Articulator Dynamics and its Intelligibility in Synthetic Audiovisual Speech," Proc. ICPhS, Saarbrücken, 2007.
[4] P. Wik and O. Engwall, "Can visualization of internal articulators support speech perception?," Proc. Interspeech 2008, Brisbane, Queensland, Australia, pp. 2627-2630.
[5] O. Engwall, P. Wik, J. Beskow and B. Granström, "Design strategies for a virtual language tutor," Int. Conference on Spoken Language Processing 2004, Vol. III, pp. 1693-1696.
[6] B. J. Kröger, J. Gotto, S. Albert and C. Neuschaefer-Rube, "A visual articulatory model and its application to therapy of speech disorders: a pilot study," in Fuchs, S., Perrier, P., Pompino-Marschall, B. (Eds.): Speech production and perception: Experimental analyses and models, ZAS Papers in Linguistics, vol. 40, 2005, pp. 79-94.
[7] S. Villagrasa and A. Susin, "FACe! 3D Facial Animation System based on FACS," IV Iberoamerican Symposium in Computer Graphics (SIACG 2009), Isla Margarita, pp. 203-209.
[8] G. Bailly et al., "Audio Visual Speech Synthesis," Int. J. of Speech Tech., 2003, Kluwer Academic Publishers, Boston, pp. 331-346.
[9] J. Beskow, O. Engwall and B. Granström, "Resynthesis of Facial and Intraoral Articulation from Simultaneous Measurements," Proc. 15th International Congress of Phonetic Sciences (ICPhS03), Barcelona, Spain, 2003.
[10] O. Bälter, O. Engwall, H. Kjellström and Öster, "Wizard-of-Oz Test of ARTUR - a Computer-Based Speech Training System with Articulation Correction," Proc. 7th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS05), Baltimore, pp. 36-43.
[11] R. Ridouane, "Investigating speech production - A view of some techniques," LPP (CNRS, Paris), 2006.
[12] A. Rathinavelu and G. Anupriya, "Three dimensional tongue modeling and simulation for articulation training," Proc. International Conference on Modeling and Simulation, CIT, Coimbatore, Aug. 2007.
[13] O. Engwall, "Combining MRI, EMA & EPG measurements in a three-dimensional tongue model," Speech Communication, Vol. 41, Oct. 2003, pp. 303-329.
[14] O. Engwall, "A revisit to the Application of MRI to the analysis of Speech production," Proc. 6th International Seminar on Speech Production, Sydney, 2006.
[15] S. S. Salleh et al., "3D Lips Development and Measurement for Visual Speech Synthesis," European Journal of Scientific Research, 2009, ISSN 1450-216X, Vol. 35, No. 2, pp. 159-172.
[16] J. Beskow, "Talking heads - models and applications for multimodal speech synthesis," Ph.D. dissertation, KTH, Stockholm, Sweden, 2003.
[17] O. Engwall, "A 3D vocal tract model for articulatory and visual speech synthesis," Proc. Fonetik 98, The Swedish Phonetics Conference, pp. 196-199.























































Data safe transmission over public network using
Integrated Algorithm
SALINI DEV P V,MAYADEVI P A, PRINCE KURIAN
susalini@gnail.com,mayadevinandakumar@yahoo.co.in, princekurian19@gmail.com
Anna University of Technology, Coimbatore, India.

Abstract - To enhance the security of data transmission, a
hybrid encryption algorithm based on DES and RSA is
proposed. The encryption algorithm currently used to protect
the confidentiality of data in transit between two or more
devices is a 128-bit symmetric stream cipher called E0, which
may be broken under certain conditions with time complexity
O(2^64). Under the dual protection of the DES algorithm and the
RSA algorithm, data transmission in the network system becomes
more secure, while the overall encryption procedure remains
simple and efficient. The confidentiality of the hybrid
encryption algorithm is also discussed. The encryption speed of
the triple DES algorithm is faster than that of RSA for long
plaintexts, while the RSA algorithm distributes keys safely and
easily. The digital abstract algorithm MD5 is adopted to compare
the digital abstract transmitted by the dispatcher with the one
computed by the receiver. This mechanism realizes
confidentiality, completeness, authentication and
non-repudiation, and is an effective method to resolve the
problem of safe transmission over the Internet.
Index Terms - AES, digital abstract, MD5, RSA, safe
transmission, triple DES.

I. INTRODUCTION

With the development of the Internet, the global information
tide has expanded the application of information network
technology, bringing great economic and social benefit along
with its extensive use. However, because the Internet is an open
system facing the public, it must confront many security
problems, including network attacks, hacker intrusion, and the
interception and tampering of network information, which pose a
huge threat. Information security has therefore become a problem
of great concern to society.
The encryption algorithm used in the current encryption process
is the E0 stream cipher. This algorithm has shortcomings: the
128-bit E0 stream cipher can in some cases be cracked with
complexity O(2^64). So, for applications that need to give top
priority to confidentiality, data security must be strengthened.
This paper puts forward a safe mechanism of data transmission to
tackle the security problem of information transmitted over the
Internet. The mechanism provides confidentiality, completeness,
authentication of identity, and non-repudiation. It is based on
the triple DES algorithm and the RSA algorithm; the digital
abstract algorithm MD5 is also included in this mechanism to
protect the safe transmission of information.

II. SYMMETRIC ENCRYPTION ALGORITHM - THE DES ALGORITHM
A. DES Algorithm

DES (Data Encryption Standard) is a traditional encryption
algorithm. It was developed in the 1970s and adopted by the
American government in November 1976. Encryption and decryption
in this algorithm are equivalent. The algorithm is open, but the
key is not released; the security of the system depends on the
secrecy of the key.
The DES algorithm combines several cryptographic techniques,
including substitution and permutation; it is a product cipher.
The plaintext is divided into blocks when encryption begins.
Each block has 64 bits and the key length is 64 bits, of which
56 bits are valid and the remaining 8 bits are used for parity
checking.
First, the 64-bit data block is divided into two 32-bit halves
after an initial permutation. Then the iterative process begins.
The right 32-bit half is expanded to 48 bits and XORed with a
48-bit sub-key derived from the 64-bit key. The result is
compressed back to 32 bits through the S-boxes. After a
permutation, these 32 bits are XORed with the left 32-bit half
from the beginning of the round, producing the right half of the
new round; at the same time, the old right half becomes the left
half of the new round [3].
After 16 rounds a new 64-bit block is generated. One step
deserves attention: the two halves of the last round are not
exchanged, so encryption and decryption can use the same
algorithm. Finally, the 64-bit result undergoes an inverse
permutation, yielding the 64-bit ciphertext.
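The round structure just described can be summarized in a short sketch. This is a toy illustration of the Feistel scheme only: the hypothetical round function f and the toy sub-keys stand in for the real DES expansion, S-box and permutation tables, which are omitted.

def feistel_rounds(block64, subkeys, f):
    """16 Feistel rounds on a 64-bit block (after the initial permutation).
    f(right, k) models the DES round function: expand the 32-bit right half,
    XOR with the sub-key, compress through the S-boxes, then permute."""
    left, right = block64 >> 32, block64 & 0xFFFFFFFF
    for k in subkeys:                     # 16 sub-keys derived from the 64-bit key
        left, right = right, left ^ f(right, k)
    # The halves of the last round are not exchanged (output is R16 || L16),
    # so the same circuit decrypts when the sub-keys are applied in reverse.
    return (right << 32) | left

toy_f = lambda r, k: ((r * 0x9E3779B1) ^ k) & 0xFFFFFFFF   # toy round function
keys = [(i * 0x0F0F0F0F) & 0xFFFFFFFF for i in range(1, 17)]
ct = feistel_rounds(0x0123456789ABCDEF, keys, toy_f)
pt = feistel_rounds(ct, keys[::-1], toy_f)                  # decrypt: reversed sub-keys
assert pt == 0x0123456789ABCDEF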

B. The Shortage and Improvement of DES


Although DES is a safe encryption algorithm, it has security
issues. First, the DES key is too short: only 56 of its 64 bits
are effective. Second, the weak links of DES are the protection
and distribution of the key; once the key is lost, the whole
system becomes worthless. Third, all the calculations of DES are
linear except the S-box calculation.
Because the DES key has these shortcomings, the triple DES
algorithm was proposed. The effective key length increases to
112 bits. This method performs encryption three times using two
different keys. Supposing the two keys are K1 and K2: K1 performs
a DES encryption, K2 decrypts the result of step one, and K1
encrypts the result of step two. For decryption, the encryption
and decryption operations are exchanged while the sequence of
keys stays the same. The specific process is shown in Figure 1.
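A minimal sketch of the two-key EDE composition described above: des_encrypt and des_decrypt are assumed single-DES primitives (stand-ins, not a real DES implementation), and the toy lambdas exist only to make the example runnable.

def tdes_encrypt(block, k1, k2, des_encrypt, des_decrypt):
    """C = E_K1( D_K2( E_K1(P) ) ) -- encrypt, decrypt, encrypt."""
    return des_encrypt(des_decrypt(des_encrypt(block, k1), k2), k1)

def tdes_decrypt(block, k1, k2, des_encrypt, des_decrypt):
    """P = D_K1( E_K2( D_K1(C) ) ) -- the mirror of the EDE composition."""
    return des_decrypt(des_encrypt(des_decrypt(block, k1), k2), k1)

# Toy demonstration only: a trivial invertible pair stands in for single DES.
toy_e = lambda b, k: (b + k) & 0xFFFFFFFFFFFFFFFF
toy_d = lambda b, k: (b - k) & 0xFFFFFFFFFFFFFFFF
c = tdes_encrypt(0x0123456789ABCDEF, 111, 222, toy_e, toy_d)
assert tdes_decrypt(c, 111, 222, toy_e, toy_d) == 0x0123456789ABCDEF
# Note that with k1 == k2 the scheme degenerates to single DES, which is why
# the effective key length quoted above is 112 bits.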
III. THE IDEAS AND PROCESSES OF THE HYBRID ENCRYPTION ALGORITHM
RSA is the first relatively complete public key algorithm. It
can be used for data encryption and also for digital signatures.
The RSA cryptosystem is based on the difficulty of integer
factorization in the group Zn; its security rests on the widely
believed but still unproven assumption that factoring large
integers is hard.
DES is a block cipher which encrypts data in 64-bit groups: a
64-bit block of plaintext enters one end of the algorithm and a
64-bit block of ciphertext comes out of the other. DES is a
symmetric algorithm; encryption and decryption use the same
algorithm, with the key schedule reversed. The key can be any
56-bit value (the key is usually written as a 64-bit binary
number, but every eighth bit, used for parity, is ignored). The
algorithm uses two basic encryption techniques, confusion and
diffusion, and composes them.
In terms of encryption and decryption efficiency, DES is better
than RSA. DES encryption speeds reach several megabytes per
second, so it is suitable for encrypting large amounts of data.
RSA is based on the difficulty of factoring and its computation
is slower than DES, so it is suitable only for encrypting small
amounts of data; the RSA implementation in .NET, for example,
encrypts at most 117 bytes at a time. In terms of key management,
RSA is superior to DES: the RSA encryption key can be distributed
openly and is easy to update, and for different communication
partners only the decryption key must be kept secret. DES
requires a secret key to be distributed before communication,
replacing the key is more difficult, and a different key must be
generated and kept for each communication partner.
Based on the above comparison of the DES and RSA algorithms, and
in order to combine the advantages of the two algorithms while
avoiding their shortcomings, a new encryption algorithm can be
conceived: the DES and RSA hybrid encryption algorithm. Applying
this hybrid encryption algorithm, the current security risks of
technologies such as Bluetooth can be addressed effectively. The
entire hybrid encryption process is as follows: let the sender be
A and the receiver be B, let eB be B's public key and dB B's
private key, and let K be the DES session key (assuming that the
two communicating parties know each other's RSA public keys).
IV. PUBLIC KEY ALGORITHM - THE RSA ALGORITHM
The public key algorithm is also called the asymmetric key
algorithm. Its basic idea is that the key is divided into two
parts, an encryption key and a decryption key, and neither can
be derived from the other. Because the public key is open and
the private key is kept secret, the RSA algorithm overcomes the
difficulty of key distribution. The RSA encryption process is
shown in Figure 2.
The principle of RSA is that, according to number theory, it is
easy to find two big prime numbers, but factoring their product
is hard. In this scheme every user has two keys: an encryption
key PK = (e, n) and a decryption key SK = (d, n). The user
publishes the public key, which anyone who wants to transmit
information can use, and keeps the private key to decrypt the
information. Here n is the product of two big primes p and q
(each exceeding 100 decimal digits), and e and d satisfy a
certain relation such that d cannot be obtained from e and n
alone. The specific content of the algorithm is shown below [4].
A. Encryption and Decryption Algorithm
Let the integer X denote the plaintext and the integer Y the
ciphertext. The encryption operation is
Encryption: Y = X^e mod n
and the decryption operation is
Decryption: X = Y^d mod n
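A toy numeric illustration of these two operations, using deliberately tiny primes (real keys use primes of hundreds of digits, as noted above):

p, q = 61, 53
n = p * q                      # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: d*e = 1 (mod phi)

X = 1234                       # plaintext as an integer smaller than n
Y = pow(X, e, n)               # encryption: Y = X^e mod n
assert pow(Y, d, n) == X       # decryption: X = Y^d mod n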


V. ASYMMETRIC ENCRYPTION DEFINITION
Asymmetric encryption uses different keys for encryption and
decryption. The encryption key is public, so that anyone can
encrypt a message, but the decryption key is private, so that
only the receiver can decrypt the message. It is common to set
up key pairs within a network so that each user has a public and
a private key.

VI. COMBINATION OF SYMMETRIC AND ASYMMETRIC ENCRYPTION
If we want the benefits of both types of encryption algorithm,
the general idea is to create a random symmetric key to encrypt
the data, and then encrypt that key asymmetrically. Once the key
is asymmetrically encrypted, it is added to the encrypted
message. The receiver gets the key, decrypts it with their
private key, and uses it to decrypt the message.



Figure 1. The process of triple DES
Figure 2. The process of RSA.



VII. MODEL OF DATA SAFE TRANSMISSION BASED ON RSA AND TRIPLE DES
A message digest is the result of a hash transformation of the
message. The MD5 algorithm appends the length of the original
plaintext modulo 2^64 (64 bits) to the end of the message, so the
MD5 code includes the length information of the message. Padding
of between 1 and 512 bits is inserted between the message and the
length field, so that after filling the total length is an exact
multiple of 512 bits.
The whole message is then divided into data blocks of 512 bits
each, and each data block is further divided into four smaller
128-bit blocks. The small data blocks are fed into the hash
function, which performs four rounds of calculation. In the end,
the MD5 message abstract is obtained [1].
A digital signature achieves the following three points: the
receiver can check the signature of the message sent by the
dispatcher; the dispatcher cannot deny having signed the message;
and the receiver cannot forge the signature. The encrypted
transmission process of the digital signature is shown in
Figure 3.
Dispatcher A uses his private key (SKA) to encrypt the
signature. The result is encrypted with the receiver's public key
(PKB) to protect the transmission. After the message has
travelled over the network, the receiver uses his private key to
decrypt the signature sent by the dispatcher and uses the
dispatcher's public key (PKA) to verify it.
DES and RSA represent symmetric and asymmetric encryption
algorithms respectively. Because their mechanisms differ, each
has its own merits and shortcomings. The comparison is shown
below:

1) In terms of security, both DES and RSA are strong; no method
is known that breaks either algorithm in a short time.

2) In terms of encryption speed, DES is faster than RSA. Because
the effective DES key has 56 bits, software implementations can
be fast, whereas the RSA calculation involves many operations
such as exponentiation and modular reduction of big integers, so
RSA is slower than DES and not suitable for encrypting long
plaintexts in a congested network.

3) In terms of key management, RSA is better than DES. The
public key is published and the private key is kept by its
holder, so updating keys is easy. DES, however, needs to
distribute a secret key, updating the key is hard, and a
different key must be generated and kept for each partner; at the
same time, the secure transmission of the key over the network is
hard to guarantee [2].

The RSA algorithm can also realize digital signatures and
authentication, in which it is better than DES; RSA achieves the
reliability, completeness and non-repudiation of data
transmission.
The structural drawing of data safe transmission is shown in
Figure 4, and the concrete steps are illustrated below.

Figure 3. The process of digital signature.

1) First, the dispatcher and the receiver each generate a key
pair (public key and private key) according to RSA. The public
keys are published; they are used to sign the digital abstract,
to verify the signature, and to encrypt and decrypt the
symmetric key.

2) The plaintext the dispatcher wants to send is turned into a
128-bit digital abstract according to the MD5 algorithm. The
digital abstract is encrypted with the dispatcher's private key,
generating the digital signature, which guarantees reliability
and non-repudiation during transmission.

3) A triple DES key is generated at the sender side and used to
encrypt the plaintext. The symmetric key is then encrypted with
the receiver's public key.

4) The encrypted symmetric key and the encrypted plaintext are
sent to the receiver through the open network, together with the
encrypted digital abstract.

5) After getting the information from the dispatcher, the
receiver uses his own private key to decrypt the symmetric key,
and then uses the symmetric key to decrypt the message encrypted
by the dispatcher, recovering the plaintext.

6) The receiver uses the dispatcher's public key to decrypt the
digital abstract, and the MD5 algorithm to calculate a digital
abstract from the recovered plaintext. The digital abstract from
the dispatcher and the abstract calculated from the plaintext are
compared. If the results are the same, the transmission is safe;
if they differ, the message has been tampered with.
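The six steps can be traced end to end in the following runnable sketch. It is an illustration only: MD5 comes from Python's hashlib, the RSA values are toy numbers reused from the earlier example, and a simple XOR stream stands in for triple DES purely to keep the example short; none of this reflects the real key sizes or ciphers of the mechanism.

import hashlib, os

def rsa_apply(m: int, exp: int, n: int) -> int:
    return pow(m, exp, n)                          # used for both encryption and signing

def xor_cipher(data: bytes, key: bytes) -> bytes:  # stand-in for the triple DES step
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# toy RSA key pairs for dispatcher A and receiver B (step 1)
nA, eA, dA = 3233, 17, 2753
nB, eB, dB = 3233, 17, 2753

plaintext = b"hello over an open network"
session_key = os.urandom(8)                                 # step 3: session key

digest = hashlib.md5(plaintext).digest()                    # step 2: 128-bit abstract
signature = [rsa_apply(b, dA, nA) for b in digest]          # signed with A's private key
ciphertext = xor_cipher(plaintext, session_key)             # step 3: encrypt plaintext
wrapped_key = [rsa_apply(b, eB, nB) for b in session_key]   # step 3: wrap key with B's public key

# receiver side (steps 5 and 6)
key = bytes(rsa_apply(c, dB, nB) for c in wrapped_key)      # unwrap with B's private key
recovered = xor_cipher(ciphertext, key)
received_digest = bytes(rsa_apply(s, eA, nA) for s in signature)   # open signature with A's public key
assert received_digest == hashlib.md5(recovered).digest()   # equal digests: not tampered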
VIII. COMPARISON CHART

Algorithm   Data     Time (sec)   Average (MB/sec)   Performance
DES         256 MB   10-11        22-23              low
3DES        256 MB   12           12                 low
AES         256 MB   5            51.2               medium

IX. SIMULATION EXPERIMENT OF THE DATA SAFE TRANSMISSION MECHANISM
In a practical transmission, a digital certificate and a secure
network transmission protocol are adopted to implement safe
transmission. The digital certificate includes the public key and
the private key that the client needs; it encrypts the symmetric
key and completes the signature. The secure transmission protocol
generates the symmetric key and encrypts the plaintext.



Figure 4. The structure drawing of data safe transmission.

X. MESSAGE DIGEST ALGORITHM 5
Message-Digest algorithm 5 is a widely used
cryptographic hash function with a 128-bit hash value.
Specified in RFC 1321, MD5 has been employed in a wide
variety of security applications, and is also commonly used
to check the integrity of files. However, it has been shown
that MD5 is not collision resistant; as such, MD5 is not
suitable for applications like SSL certificates or digital
signatures that rely on this property. An MD5 hash is
typically expressed as a 32-digit hexadecimal number.
MD5 was designed by Ron Rivest in 1991 to replace an
earlier hash function, MD4. In 1996, a flaw was found with
the design of MD5. While it was not a clearly fatal
weakness, cryptographers began recommending the use of
other algorithms, such as SHA-1 (which has since been
found also to be vulnerable). In 2004, more serious flaws
were discovered, making further use of the algorithm for
security purposes questionable; specifically, a group of
researchers described how to create a pair of files that share
the same MD5 checksum. Further advances were made in
breaking MD5 in 2005, 2006, and 2007. In an attack on
MD5 published in December 2008, a group of researchers
used this technique to fake SSL certificate validity.
A. MD5 hashes

The 128-bit (16-byte) MD5 hashes (also termed message
digests) are typically represented as a sequence of 32
hexadecimal digits. The following demonstrates a 43-byte
ASCII input and the corresponding MD5 hash:
MD5("The quick brown fox jumps over the lazy dog")
= 9e107d9d372bb6826bd81d3542a419d6


Even a small change in the message will (with
overwhelming probability) result in a mostly different hash,
due to the avalanche effect.
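The avalanche property can be checked directly with Python's hashlib implementation of the algorithm (a quick illustration, not part of the original mechanism):

import hashlib

msg = b"The quick brown fox jumps over the lazy dog"
print(hashlib.md5(msg).hexdigest())          # 9e107d9d372bb6826bd81d3542a419d6
print(hashlib.md5(msg + b".").hexdigest())   # one added '.' changes the whole digest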

B. MD5 Algorithm Description
We begin by supposing that we have a b-bit message as
input, and that we wish to find its message digest. Here b is
an arbitrary nonnegative integer; b may be zero, it need not
be a multiple of eight, and it may be arbitrarily large. We
imagine the bits of the message written down as follows:
m_0 m_1 ... m_{b-1}

The following five steps are performed to compute the
message digest of the message.
Step 1. Append Padding Bits
Step 2. Append Length
Step 3. Initialize MD Buffer
Step 4. Process Message in 16-Word
Blocks
Step 5. Output
C. Applications

MD5 digests have been widely used in the software world to
provide some assurance that a transferred file has arrived
intact. For example, file servers often provide a pre-computed
MD5 checksum (known as md5sum) for the files, so that a user can
compare the checksum of the downloaded file against it.
Unix-based operating systems include MD5 sum utilities in their
distribution packages, whereas Windows users rely on third-party
applications.
XI. FUTURE WORK
A. AES (ADVANCED ENCRYPTION STANDARD)
AES is the block cipher ratified as a standard by the National
Institute of Standards and Technology (NIST) and is the
symmetric key encryption standard adopted by the U.S.
government. The standard comprises three block ciphers, AES-128,
AES-192 and AES-256, adopted from a larger collection originally
published as Rijndael. Each of these block ciphers has a 128-bit
block size with key sizes of 128, 192 and 256 bits respectively.
The AES ciphers have been analysed extensively and are used
worldwide.
AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys
and 14 rounds for 256-bit keys. AES is based on a principle
known as a substitution-permutation network. It is fast in both
software and hardware.


XII. CONCLUSION
Data safe transmission is based on the triple DES and RSA
algorithms. It makes use of the advantage of triple DES, which
has a high encryption speed for plaintext, and exploits the merit
of RSA, which manages keys easily. The receiver can verify
whether the information was tampered with in the network by using
the MD5 algorithm. This mechanism realizes confidentiality,
completeness, authentication and non-repudiation, and is an
effective method to resolve the problem of safe transmission over
the Internet.
ACKNOWLEDGMENT
This paper is supported by the science and
technology research program of Hebei province
(042135117).
REFERENCES

[1] L. P. Zhao, L. B. Yang, The usage of MD5 algorithm in RSA
algorithm, Fujian Computer, vol 22, no. 4, pp. 63-64, May 2005
[2] B. Jiang, Synthesized encryption plan of DES and RSA, Micro-
Computer Science, vol 23, no. 6, pp. 52-54, March 2006.
[3] H. G. Zhang, Y. Z. Liu, Evolution password and DES evolution
research, Chinese Journal of Computer, vol 12, no. 2, pp. 1678-1684,
September 2003.

[4] B. Yang, Modern Cryptography[M], Beijing: Tsinghua University
Press, 2006
[5] S. P. Wang, Y. M. Wang, Digital signature scheme based on DES and
RSA, Journal of Software, vol 14, no. 1, pp. 146-150, June 2003.
[6] Douglas R. Stinson, Cryptography Theory and Practice[M], Beijing:
Publishing House of Electronics Industry, 2002.
[7] G. Duan, Encryption and Decryption, Beijing: Publishing House of
Electronics Industry, 2003.
[8] Y. Z. Wang, X. F. Liao, Cipher system implement and intrusion
tolerance mechanism, Computer Science, vol 7, no. 2, pp. 167-171,
August 2007.
[9] K. C. Lu, Computer Cryptography-data Confidentiality and Security in
Computer Network, Beijing: Tsinghua University Press, 2000.
[10] Y. X. Xu, Java Security Program Example, Beijing: Tsinghua
University Press, 2003.

A Provenance of Black Hole attack on Mobile
Ad-hoc Networks (MANETS) Ad-hoc Demand
Routing (AODV) protocol
Mr. Amol V. Zade

Prof. V. K. Shandilya
M.E. 1st Year, Asst. Professor
Department of Computer Science & Engineering, Department of Computer Science & Engineering,
Sipna's COET, Amravati, Sipna's COET, Amravati
Email: amolzade11@gmail.com vkshandilya@rediffmail.com



ABSTRACT:
In this paper we discuss some basic routing protocols of
Mobile Ad-hoc Networks (MANET). There is an increasing threat of
attacks on MANETs. The black hole attack is a security threat in
which traffic is redirected to a node that actually does not
exist in the network, analogous to a black hole in the universe
into which things disappear. The malicious node presents itself
to the other nodes as having the shortest path, so that it can
attack them and the network. MANETs must have a secure way of
transmission and communication, which is a challenging and vital
issue.
The scope of this paper is to study the effects of the black
hole attack in MANET using the reactive routing protocol Ad-hoc
on Demand Distance Vector (AODV).
Keywords: MANET, AODV, DSDV, TORA, Security, Attacks, Black Hole










I. INTRODUCTION:

A mobile ad-hoc network is an autonomous system in which
nodes/stations are connected with each other through wireless
links. There is no restriction on nodes joining or leaving the
network, so nodes join or leave freely. The mobile ad-hoc
network topology is dynamic and can change rapidly, because the
nodes move freely and can organize themselves randomly. This
property of the nodes makes mobile ad-hoc networks unpredictable
from the point of view of scalability and topology.

Figure 1: Mobile ad-hoc Network (MANET)
When a node wants to communicate with another node, the
destination node must lie within the radio range of the source
node that initiates the communication [1]. The intermediate
nodes within the network aid in routing the packets from the
source node to the destination node. These networks are fully
self organized, having the capability to work anywhere without
any infrastructure. Nodes are autonomous and play the role of
router and host at the same time. A MANET is self governing;
there is no centralized control, and communication is carried
out with blind mutual trust amongst the nodes. The network can
be set up anywhere without any geographical restrictions. One of
the limitations of a MANET is the limited energy resources of
the nodes.
Types of Mobile Ad-hoc Network:

1. Vehicular Ad-hoc Networks (VANETs)
2. Intelligent Vehicular Ad-hoc Networks (InVANETs)
3. Internet Based Mobile Ad-hoc Networks(iMANETs)
1. Vehicular Ad-hoc Networks (VANETs): A VANET is a type of
mobile ad-hoc network in which vehicles are equipped with
wireless interfaces and form a network without the help of any
infrastructure. The equipment is placed inside vehicles as well
as along the road to provide access to other vehicles, in order
to form a network and communicate.
2. Intelligent Vehicular Ad-hoc Networks (InVANETs):
Vehicles form a mobile ad-hoc network for communication using
WiMAX IEEE 802.16 and Wi-Fi IEEE 802.11. The main aim of
designing InVANETs is to avoid vehicle collisions so as to keep
passengers as safe as possible [3]. They also help drivers keep
a secure distance between vehicles and indicate at what speed
other vehicles are approaching. InVANET applications are also
employed for military purposes, to communicate with each other.
3. Internet Based Mobile Ad-hoc Networks (iMANETs):
These are used for linking mobile nodes and fixed Internet
gateways; in these networks the normal routing algorithms do not
apply. A Mobile Ad-hoc Network (MANET) is a special kind of
network in which all the nodes configure themselves and can act
as routers. The topology may change frequently, and each node's
user is free to move while communicating. One node can take a
packet from another node and transmit it to its neighboring
node. This kind of network works in a standalone fashion; Fig. 1
shows a typical ad-hoc network. Unlike wired networks, ad-hoc
networks pose many challenges and issues that are very important
for deployment, for example control message management, dynamic
and fast adaptation, speed, power, frequency of updates or
network overhead, scalability, security and routing. As nodes
are mobile and may disappear at any time, maintaining routing in
such a network is the most challenging part. Mobile ad-hoc
networks have several salient characteristics, such as dynamic
topologies, bandwidth-constrained and variable capacity links,
energy-constrained operation, and limited physical security. Due
to these features, mobile ad-hoc networks are particularly
vulnerable to denial of service attacks launched through
compromised nodes.
II. WORKING OF MOBILE AD-HOC NETWORKS (MANET):
The security of communication in ad hoc wireless networks is
important, especially in military applications. Ad-hoc network
technology is suitable for introducing data communication into
the automobile environment, and many different feasible use
cases for ad-hoc networks in the vehicular environment exist.
MANET technology is very suitable for the collection as well as
the distribution of up-to-date location based information.






















Figure 2: Working of a General Mobile Ad Hoc
Network.
The key attributes required to secure an ad hoc network are:
1. Confidentiality ensures that payload data and header
information are never disclosed to unauthorized nodes.
2. Integrity ensures that a message is never corrupted.
3. Availability ensures that the services offered by a node will
be available to its users when expected, i.e. survivability of
network services despite denial of service attacks.
4. Authentication enables a node to ensure the identity of the
peer node it is communicating with.
5. Non-repudiation ensures that the originator of a message
cannot deny having sent it.

III. MOBILE AD-HOC NETWORK ROUTING PROTOCOLS
MANET routing protocols are typically subdivided into three main
categories [7]: proactive routing protocols, reactive on-demand
routing protocols and hybrid routing protocols, as shown in
Fig. 3. When a mobile node receives new routing information, it
checks whether it already has similar information in its routing
table. If it does, it compares the sequence number of the
received information with the one it holds; the information with
the lower sequence number is discarded. If both sequence numbers
are the same, the node keeps the information that gives the
shortest route, i.e. the least number of hops, to that
destination.
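A minimal sketch of this table-update rule (the field names are illustrative, not a protocol specification):

def update_route(table, dest, seq_no, hops, next_hop):
    """Replace an entry only with fresher information (higher sequence number)
    or, at equal freshness, with a shorter route."""
    entry = table.get(dest)
    if entry is None or seq_no > entry["seq_no"] or (
            seq_no == entry["seq_no"] and hops < entry["hops"]):
        table[dest] = {"seq_no": seq_no, "hops": hops, "next_hop": next_hop}

routes = {}
update_route(routes, "D", seq_no=5, hops=4, next_hop="B")
update_route(routes, "D", seq_no=5, hops=2, next_hop="C")   # same seq, fewer hops wins
update_route(routes, "D", seq_no=4, hops=1, next_hop="E")   # stale information is discarded
assert routes["D"]["next_hop"] == "C"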











Figure 3: MANETs Routing Protocol
3.1 Proactive routing protocols: These maintain regular and up
to date routing information about each node in the network by
propagating route updates at fixed time intervals throughout the
network whenever the network topology changes. As the routing
information is usually maintained in tables, these protocols are
also called table-driven protocols (e.g. DSDV, OLSR).
3.2 Reactive routing protocols: These establish a route only on
demand, when a source node needs to communicate with a
destination, i.e. the ad-hoc on demand
distance vector protocol (AODV), dynamic source
routing (DSR), admission control enabled on-demand
routing (ACOR) and associatively based routing (ABR).
3.3 Hybrid routing protocols: These combine proactive and
reactive routing, e.g. the temporally ordered routing algorithm
(TORA), the zone routing protocol (ZRP), hazy sighted link state
(HSLS) and the order one routing protocol (OORP). Both proactive
and reactive mechanisms are used to route packets [7]: routes
are established with proactive routing, and reactive flooding is
used for new mobile nodes. In this paper we compare MANET
routing protocols from the reactive, proactive and hybrid
categories, taking one protocol from each category: reactive
AODV, proactive OLSR and hybrid TORA.
IV. WORKING OF AODV:
The Ad-hoc On-demand Distance Vector (AODV) routing protocol is
a reactive MANET routing protocol. Unlike DSR, AODV uses a
hop-count field in the route record instead of a list of
intermediate router addresses [10]. Each intermediate router
sets up a temporary reverse link during route discovery; this
link points to the router that forwarded the request, so the
reply message can find its way back to the initiator when a
route is discovered. When intermediate routers receive the
reply, they can also set up corresponding forward routing
entries. To prevent old routing information from being used as a
reply to the latest request, a destination sequence number is
used in the route discovery packet and the route reply packet; a
higher sequence number implies a more recent route request.
Route maintenance in AODV is similar to that in DSR. One
advantage of AODV is that it is loop-free, thanks to the
destination sequence numbers associated with routes [8].
Therefore it converges quickly when the ad-hoc network topology
changes, which typically occurs when a node moves in the
network.
Path Finding:
As shown in Figure 4, five mobile nodes communicate with each
other within their circular radio ranges. Each node has a
limited communication range and can communicate only with its
neighbor nodes.

Figure 4: Node Communication in AODV.
If node 5 wants to communicate with node 3, node 5 broadcasts an
RREQ that is received by its neighbors, node 1 and node 4. Node
4 does not have a route to node 3, so it rebroadcasts the RREQ,
which node 5 drops. Node 1, which does not have a fresh enough
route either, forwards the RREQ to node 2. As node 2 has a route
to node 3, it replies to node 1 by sending an RREP. Node 1 sends
the RREP to node 5, and the route node5-node1-node2-node3 can
then be used to send data packets.
4.1 Route Request Message (RREQ):
A source node that needs to communicate with another node in the
network transmits an RREQ message. AODV floods the RREQ message
using the expanding ring technique: every RREQ carries a time to
live (TTL) value that states the number of hops over which the
RREQ may be forwarded.
4.2 Route Reply Message (RREP):
The node with the requested identity, or any intermediate node
that has a route to the requested node, generates a route reply
(RREP) message back to the originator node.
4.3 Route Error Message (RERR):
Every node in the network monitors the link status to its
neighbor nodes along active routes. When a node detects a link
break in an active route, it generates a route error (RERR)
message to notify other nodes that the link is down.
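The RREQ handling just described can be sketched as follows; the message fields and data structures are illustrative simplifications of ours, not the exact AODV packet format.

def handle_rreq(addr, routes, reverse_routes, rreq):
    """Process one RREQ at node `addr`. `routes` maps dest -> {seq_no, hops};
    `reverse_routes` records where to send the RREP back."""
    reverse_routes[rreq["origin"]] = rreq["sender"]     # temporary reverse link
    route = routes.get(rreq["dest"])
    if addr == rreq["dest"] or (route and route["seq_no"] >= rreq["dest_seq"]):
        return {"type": "RREP", "dest": rreq["dest"], "origin": rreq["origin"],
                "dest_seq": rreq["dest_seq"] if addr == rreq["dest"] else route["seq_no"],
                "hops": 0 if addr == rreq["dest"] else route["hops"]}
    if rreq["ttl"] > 0:                                  # rebroadcast within the ring
        return dict(rreq, sender=addr, ttl=rreq["ttl"] - 1)
    return None                                          # TTL exhausted: drop

# node 2 in the example holds a route to node 3, so it answers node 1's RREQ
rrep = handle_rreq(2, {3: {"seq_no": 7, "hops": 1}}, {},
                   {"type": "RREQ", "origin": 5, "sender": 1,
                    "dest": 3, "dest_seq": 5, "ttl": 3})
assert rrep["type"] == "RREP"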
V. ATTACKS IN MANET:
5.1 Black Hole Attack
In this attack, an attacker uses the routing protocol to
advertise itself as having the shortest path to the node whose
packets it wants to intercept. The attacker listens for route
requests in a flooding-based protocol [9]. When the attacker
receives a request for a route to the destination node, it
creates a reply consisting of an extremely short route. If the
malicious reply reaches the initiating node before the reply
from the actual node, a fake route is created, and the malicious
device has inserted itself between the communicating nodes.
5.2 Gray Hole Attack
In this kind of attack the attacker misleads the network by
agreeing to forward packets. As soon as it receives the packets
from the neighboring node, the attacker drops them. This is a
type of active attack.
5.3 Flooding Attack
The flooding attack is easy to implement but causes the most
damage. It can be carried out either with RREQ or with data
flooding; in RREQ flooding the attacker floods RREQs throughout
the whole network, which consumes a lot of the network
resources.
5.4 Wormhole Attack
The wormhole attack is a severe attack in which two attackers
place themselves strategically in the network, keep listening to
the network, and record the wireless data. In a wormhole attack
the attackers position themselves at strong strategic locations
in the network.
VI. IMPACT OF THE BLACK HOLE ATTACK ON AODV:
Two types of black hole attack can be distinguished in AODV [5].
6.1 Internal Black Hole Attack
This type of attack involves an internal malicious node that
lies on the route between a given source and destination [9]. As
soon as it gets the chance, this malicious node makes itself an
active element of the data route and is then able to launch the
attack once data transmission starts. This is an internal attack
because the node itself belongs to the data route; it is harder
to defend against because detecting the internal misbehaving
node is difficult.
6.2 External Black Hole Attack
External attackers physically stay outside the network and deny
access to network traffic, create congestion, or disrupt the
entire network [4]. An external attack can become an internal
attack when the attacker takes control of an internal malicious
node and uses it to attack other nodes in the MANET. The
external black hole attack can be summarized in the following
points:
1. The malicious node detects the active route and notes the
destination address.
2. The malicious node sends a route reply packet (RREP) whose
destination address field is spoofed to an unknown destination
address; the hop count value is set to the lowest value and the
sequence number to the highest value.
3. The malicious node sends the RREP to the nearest available
node that belongs to the active route; it can also be sent
directly to the data source node if a route is available.
In an AODV black hole attack the malicious node A first detects
the active route between the sender E and the destination node D
[5]. The malicious node A then sends to node C an RREP
containing the spoofed destination address, a small hop count
and a sequence number larger than normal.

Figure 5: Black hole attack in detail

Node C forwards this RREP to the sender node E. This route is
then used by the sender to send data, so the data arrive at the
malicious node, where they are dropped. In this way the sender
and destination node are no longer able to communicate under a
black hole attack [9].
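The forged reply summarized in points 1-3 above can be sketched as follows (field names are illustrative):

def forge_black_hole_rrep(overheard_rreq, attacker_addr):
    """Build the attacker's reply: freshest possible route, minimal hop count."""
    return {"type": "RREP",
            "dest": overheard_rreq["dest"],        # spoofed destination address
            "origin": overheard_rreq["origin"],
            "dest_seq": 2 ** 32 - 1,               # sequence number set to the highest value
            "hops": 1,                             # hop count set to the lowest value
            "sender": attacker_addr}               # traffic is drawn towards the attacker

rrep = forge_black_hole_rrep({"dest": "D", "origin": "E"}, attacker_addr="A")
# Honest nodes prefer this reply (higher seq_no, fewer hops), so data packets
# flow to "A", where they are silently dropped -- the black hole.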
VII. CONCLUSION & FUTURE SCOPE:
This paper shows how black hole attacks are perpetrated in
ad-hoc networks. After the evaluation of this use case for the
protocol, further performance enhancements are planned. One of
the most important issues has been research on security for
MANETs. Since securing MANETs is a very challenging task, no
final overall solution has been developed so far. Many different
approaches exist that can be applied to specific scenarios, some
of which have been described in this paper. Future work should
also concentrate on the combination of security protocols in
order to develop a secure MANET environment.
REFERENCES:
[1] Rutvij H. Jhaveri, Ashish D. Patel,
Jatin D. Parmar, Bhavin I. Shah, MANET Routing
Protocols and Wormhole Attack against AODV,
International Journal of Computer Science and Network
Security, VOL.10 No.4, April 2010.
[2] Hao Yang, Haiyun Luo, Fan Ye, Songwu Lu and Lixia Zhang,
Security on Mobile Ad-hoc Networks: Challenges and Solutions,
1536-1284/04, IEEE Wireless Communications, Feb. 2004.
[3] C.M barushimana, A.Shahrabi, Comparative Study
of Reactive and Proactive Routing Protocols
Performance in Mobile Ad-hoc Networks, Workshop
on Advance Information Networking and Application,
Vol. 2, pp. 679-684, May, 2003.
[4] M.Parsons and P.Ebinger, Performance Evaluation
of the Impact of Attacks on mobile ad-hoc networks
[5] Irshad ullah, Shoaib rehman, Analysis of Black
Hole attack On MANETs Using different MANET
Routing Protocols.
[6] Latha Tamilselvan, Dr. V. Sankaranarayanan,
Prevention of Wormhole Attack in MANET.
[7] Douglas E. Comer Internetworking with TCP/IP
Volume 1 Principles, Protocols, and Architecture.
Prentice-Hall, Inc.
[8] Chia-Ching Ooi, N. Fisal, Implementing a small
scale MANET testbed based on Geocast enhanced
AODV bis routing protocol.
[9]http://www.scribd.com/doc/26788447/Avoiding-
Black-Hole-and-Cooperative-Black-Hole-Attacks-in-
Wireless-Ad-hoc-Networks.
[10]http://www.docstoc.com/docs/30136052/Study-of-
Secure-Reactive-Routing-Protocols-in-Mobile-Ad
[11] Kalyan Kalepu, Shiv Mehra and Chansu Yu,
Experiment and Evaluation of a Mobile Ad Hoc
Network with AODV Routing Protocol.
[12] Md. Golam Kaosar, Hafiz M. Asif, Tarek R.
Sheltami, Ashraf S. Hasan Mahmoud, Simulation-
Based Comparative Study of On Demand Routing
Protocols for MANET.
[13] Anuj K. Gupta, Dr. Harsh Sadawarti, Dr. Anil K.
Verma, Performance analysis of AODV, DSR &
TORA Routing Protocols Vol.2, No.2, April 2010
[14] Rashid Hafeez Khokhar, Md Asri Ngadi & Satria
Mandala, A Review of Current Routing Attacks in
Mobile Ad Hoc Networks International Journal of
Computer Science and Security, volume (2) issue (3)
[15]http://wiki.uni.lu/secan-lab/Ad-hoc+Protocols+
($28) Classification($29).html
[16]Avoiding Black Hole and Cooperative Black Hole
Attacks in Wireless Ad hoc Networks
http://www.scribd.com/doc/26788447/Avoiding-Black-Hole-
and-Cooperative-Black-Hole-Attacks-in-Wireless-Ad-hoc-
Networks.



Human Iris Recognition in Unconstrained Environments

Ali noruzi
Department of Computer Science & Research Branch
Islamic Azad University Branch Dezfoul
Dezfoul, Iran
Ali_norozi4732@yahoo.com
Mohammad Ali azimi kashani
Department of Computer Science & Research Branch
Islamic Azad University Branch Shoushtar
Shoushtar, Iran
Azimi.kashani@gmail.com
Mohamod Mahloji
Department of Computer Science & Research Branch
Islamic Azad University Branch Kashan
Kashan, Iran
m.mahloji@gmail.com

Abstract - Iris recognition is one of the biometric recognition
methods. It uses pattern recognition techniques based on high
quality images of the eye iris. Iris patterns, compared with
other biometric traits, are more stable and reliable. In this
paper we use fractal techniques for iris recognition. Fractals
are important because they can express complicated pictures with
a few simple codes. The iris texture is transformed from
Cartesian to polar coordinates and adjusted for illumination.
Together with the other pre-processing steps, this lowers the
equal error rate (EER), decreases the recognition time and the
computational cost, and improves the classification precision.
Keywords - Biometrics; identity recognition; identity
verification; iris patterns.
I. INTRODUCTION
Biometrics is used to verify the identity of an input sample
against a stored template and, in some cases, to recognize
particular people by their distinguishing properties. Using
passwords or identity cards can create problems such as loss,
forgetting or theft, so using biometric properties, which are
specific to a person, is effective. Biometric parameters are
divided into two groups, as shown in Fig. 1 [1]. Physiological
parameters are related to the human body; behavioral parameters
are related to a person's behavior.


Figure 1. Grouping of some biometric properties
II. AVAILABLE IRIS RECOGNITION SYSTEM
Daugman technique [3,9] is one of oldest iris recognition
system. These systems include all of iris recognition process:
Taking picture, assembling, coding tissue and adaption.
A. Daugman techniques
Daugman algorithm [3,9] is the famous iris algorithm. In
this algorithm, iris medaling by two circles than arent
necessary certified. every circle defined whit there
parameters ( xo , y o , r ) that ( x o , y o ) are center of circle
with r radios . Use - a differential integral performer for
estimating 3 parameter in every circle bound. All pictures
search rather to increasing r radius to maximize following
Equation (1):
\max_{(r, x_0, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2\pi r} \, ds \right|        (1)

In this formula I(x, y) is the image intensity, ds is the arc element of the circular contour, 2\pi r normalizes the contour integral, G_\sigma(r) is a Gaussian filter used for smoothing, and * denotes convolution.
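As an illustration only, the following is a minimal numerical sketch of how Equation (1) can be evaluated: the mean intensity along circles of increasing radius is differentiated with respect to r, smoothed with a Gaussian, and the strongest response is kept. The function names, the nearest-neighbour sampling and the coarse grid search are our own assumptions, not details taken from the paper.

# Minimal sketch of evaluating Daugman's operator in Eq. (1); illustrative only.
import numpy as np

def circular_mean(img, x0, y0, r, n_samples=64):
    # Mean intensity of img along the circle of radius r centred at (x0, y0),
    # sampled with nearest-neighbour interpolation.
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def daugman_operator(img, centres, radii, sigma=2.0):
    # Return the (x0, y0, r) that maximises |G_sigma(r) * d/dr of the contour mean|.
    k = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    g = np.exp(-k**2 / (2.0 * sigma**2))
    g /= g.sum()
    best, best_val = None, -np.inf
    for (x0, y0) in centres:
        means = np.array([circular_mean(img, x0, y0, r) for r in radii])
        deriv = np.gradient(means, radii)            # derivative along r
        smooth = np.convolve(deriv, g, mode="same")  # Gaussian smoothing G_sigma(r)
        i = int(np.argmax(np.abs(smooth)))
        if abs(smooth[i]) > best_val:
            best_val, best = abs(smooth[i]), (x0, y0, float(radii[i]))
    return best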
III. PROPOSED ALGORITHM
In this algorithm we use a new identification method based on fractal techniques; in particular, fractal codes are used to encode the iris texture pattern. To test the proposed method we used images from the iris image database of the University of Bath. The general steps of iris recognition are described below.


Figure 2. Sample images from the iris database of the University of Bath
A. Iris segmentation
The main goal of this step is to locate the iris region in the eye picture. To do so, the inner and outer iris boundaries are approximated by two circles. One segmentation approach uses the fractal dimension: Fig. 3 shows the detected Hough circle and Fig. 4 the iris normalization, where regions whose fractal dimension exceeds a threshold of 1 are retained. To compute the fractal dimension, the picture is divided into blocks 40 pixels wide. As the figures show, the pupil and eyelid areas are recognized well.


Figure 3. Output of the Hough circle detection

Figure 4. Iris normalization
B. Iris normalization
In this step the Cartesian coordinates are converted to polar coordinates. Starting at the pupil radius and moving outward, 128 concentric circles around the pupil center are separated from the iris, and the pixels lying on these circles are written into a rectangle. The ring-shaped iris thereby becomes a rectangle, i.e. the iris is transformed from Cartesian to polar coordinates; Fig. 5 shows the iris texture in polar coordinates, and a minimal unwrapping sketch is given below. Because the illumination level changes, the pupillary boundary of the iris also changes, so the input light has to be controlled. Moreover, the distance between the person and the camera may vary, so the iris size is not the same in different pictures; sampling these 128 concentric circles therefore also normalizes the iris with respect to size.
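A minimal sketch of the normalisation just described, assuming the pupil centre and the pupil and iris radii are already known; the function name and the nearest-neighbour sampling are our own illustrative choices.

# Sketch of iris unwrapping: 128 concentric circles, from the pupil radius outward,
# become the rows of a rectangular (polar) image. Illustrative only.
import numpy as np

def unwrap_iris(img, cx, cy, r_pupil, r_iris, n_rings=128, n_angles=360):
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, n_rings)
    polar = np.zeros((n_rings, n_angles), dtype=img.dtype)
    for i, r in enumerate(radii):
        xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        polar[i] = img[ys, xs]      # one circle becomes one row of the rectangle
    return polar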


Figure 5. Iris texture in polar coordinates

Next, the illumination of the separated iris texture is adjusted: the picture contrast is increased so that the iris texture stands out clearly. Fig. 6 shows a sample of the normalized iris texture.


Figure 6. Normalized iris picture
C. Iris texture coding
In this step the iris texture pixels are encoded, and the resulting code is used for comparing two iris pictures. The proposed method uses fractal codes: the fractal code of the normalized iris is computed and stored as a template in the database, to be used later for recognition and comparison of iris pictures. In the matching step the input picture is encoded with these fractal codes, so all pictures must first be brought to a standard size. Before the fractal code is computed, the normalized iris picture is resized to a rectangle of 64x180 pixels, so that the fractal codes of different irises have the same length (Fig. 7).


Figure 7. Normalized iris picture at 64x180 pixels
D. Mapping domain blocks to range blocks
The main step in fractal image coding is the mapping of domain blocks onto range blocks. For every range block, transformed copies of domain blocks are compared with it. The transformation w is a combination of a geometric change and an intensity change. For a grey-level picture I, if z denotes the pixel intensity at (x, y), w can be written in matrix form as:

w \begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\begin{pmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & s \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} +
\begin{pmatrix} e \\ f \\ o \end{pmatrix}        (2)
The coefficients a, b, c, d, e and f control the geometric part of the transformation, while s and o determine the contrast and brightness, i.e. the intensity parameters (Fig. 8). The geometric part is restricted to a fixed set of isometries [11].


Figure 8. Range and domain blocks

Comparing a range block with a domain block is a three-step process. First, one of the eight base orientations is applied to the selected domain block. Then
the oriented domain block is shrunk until it has the same size as the range block Rk. For the overall transformation to be contractive, the domain block must be larger than the range block [11]. Representing the picture as a set of transformed blocks does not reproduce it exactly, but it gives a good approximation: minimizing the error between Rk and w(Dj) minimizes the error between the reconstructed and the original picture. If r_i and d_i, i = 1, ..., n, are the pixel values of the equally sized range block Rk and the shrunken domain block, the matching error Err is [11]:

Err = \frac{1}{n} \sum_{i=1}^{n} \left( s\, d_i + o - r_i \right)^2        (3)

Err = \frac{1}{n} \sum_{i=1}^{n} \left( s^2 d_i^2 + 2\, s\, o\, d_i - 2\, s\, d_i r_i + o^2 - 2\, o\, r_i + r_i^2 \right)        (4)

\frac{\partial Err}{\partial s} = \frac{2}{n} \left( s \sum_{i=1}^{n} d_i^2 + o \sum_{i=1}^{n} d_i - \sum_{i=1}^{n} d_i r_i \right) = 0,
\qquad
\frac{\partial Err}{\partial o} = \frac{2}{n} \left( n\, o + s \sum_{i=1}^{n} d_i - \sum_{i=1}^{n} r_i \right) = 0        (5)
These conditions are satisfied when [10]:

s = \frac{n \sum_{i=1}^{n} d_i r_i - \sum_{i=1}^{n} d_i \sum_{i=1}^{n} r_i}{n \sum_{i=1}^{n} d_i^2 - \left( \sum_{i=1}^{n} d_i \right)^2},
\qquad
o = \frac{1}{n} \left( \sum_{i=1}^{n} r_i - s \sum_{i=1}^{n} d_i \right)        (6)
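A minimal sketch of Equations (3) and (6) for a single range/domain block pair, written in NumPy; the function name and the flattening of the blocks are our own illustrative choices.

# Least-squares contrast s and brightness o of Eq. (6), and the matching error of
# Eq. (3), for one range block r and one shrunken domain block d of equal size.
import numpy as np

def fit_contrast_brightness(d, r):
    d, r = d.ravel().astype(float), r.ravel().astype(float)
    n = d.size
    denom = n * np.dot(d, d) - d.sum() ** 2
    s = 0.0 if denom == 0 else (n * np.dot(d, r) - d.sum() * r.sum()) / denom
    o = (r.sum() - s * d.sum()) / n
    err = np.mean((s * d + o - r) ** 2)   # Eq. (3) evaluated at the optimum
    return s, o, err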




One advantage of the proposed method is that, when a person is enrolled, the fractal code of the person's iris picture is stored as the template in the database; thanks to the compression property of fractal codes, the database is therefore lightweight.
E. Classification and matching
In this step the input picture is compared with the templates available in the database and the similarity between them is computed. To do this, the normalized iris picture is encoded with the fractal codes stored in the database, and the similarity between the input and the encoded picture is measured as the distance between them; the normalized similarity lies between 0 and 1 [10]. The Minkowski distance based on the Lp norm is defined as:
d_p(x, y) = \left( \sum_{i=0}^{N-1} |x_i - y_i|^p \right)^{1/p}        (7)
As p \to \infty this yields the L_\infty distance:
D_\infty(x, y) = \max_{0 \le i < N} |x_i - y_i|        (8)
F. Simulation of the proposed method
The proposed identification method was evaluated on a subset of the iris picture database of the University of Bath. The subset contains 1000 pictures of 25 different persons: 20 pictures of the left eye and 20 pictures of the right eye of each person. Since the left and right irises of a person are different, the 50 eyes are treated separately; of the 20 pictures per eye, up to 6 are used for training and the rest for testing (Figs. 9, 10 and 11).


Figure 9. ROC curve of the proposed identity verification system

Figure 10. ROC curve of the proposed identity verification system as a function of the number of matches (n = 1, 2, 3, 4, 5)


Figure 11. ROC curve of the proposed identity verification system as a function of the number of matches
TABLE I. IDENTIFICATION ACCURACY OF THE PROPOSED SYSTEM COMPARED WITH THE DAUGMAN METHOD, BY NUMBER OF REGISTERED TRAINING PICTURES (n = 1, 2, 3, 4, 5, 6)

Pictures (n)  | Proposed method | Daugman method
1 picture     | 88%             | 96%
2 pictures    | 86%             | 96%
3 pictures    | 94%             | 96%
4 pictures    | 94%             | 96%
5 pictures    | 96%             | 96%
6 pictures    | 96.13%          | 96%

IV. CONCLUSION
In this paper we have proposed a new method, based on fractal techniques, for identity verification and recognition using eye iris patterns; iris patterns are attractive because they are more stable than other biometric traits. In the segmentation step, intensity-processing techniques and boundary modeling are used to locate the inner iris boundary and any eyelid margins. In the normalization step, circles centred on the pupil and extending outward from the pupil radius are sampled, which also limits the noise originating from eyelashes and eyelids. Since fractal codes are used to encode and match the iris picture, the iris fractal codes are stored as templates in the database; this gives a lightweight database, better security and reasonably good accuracy. When a person is enrolled, the iris picture is encoded with fractal codes in a single step, and the Euclidean or minimum-distance measures can be used for matching. In the normalization part of the proposed system the iris texture is transformed from Cartesian to polar coordinates and adjusted for illumination, and with the other pre-processing steps the error rate (ERR) remains below this level. If the database used for iris identification is large, the search time grows considerably; therefore, in grouping and matching the iris templates, the fractal dimension can be used to reduce identification time and computational cost and to improve classification accuracy. We also suggest using fractal codes as the iris texture feature and using fractal image-set coding to confine the fractal codes and sub-fractal techniques. Finally, for more accurate identification and matching, more varied classification techniques such as k-nearest-neighbour can be used.
REFERENCES
[1] A. K. Jain, R. Bolle, S. Pankanti, Biometrics: Personal Identification in Networked Society. Kluwer Academic Publishers, 1999.
[2] A. K. Jain, A. Ross and S. Pankanti, "Biometrics: A Tool for
Information Security" IEEE Transactions on Information Forensics
and Security 1st (2), 2006.
[3] International Biometric Group, Independent Testing of Iris
Recognition Technology, May 2005.
[4] J. Daugman, How iris recognition works, IEEE Trans. Circuits
Systems Video Technol. v14i1. 21-30, 2004.
[5] J. Daugman, High confidence visual recognition of persons by a test
of statistical independence, IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 15, pp.1148-1161, 1993.
[6] J. Daugman, The importance of being random: Statistical principles of iris recognition, Pattern Recognition, vol. 36, pp. 279-291, 2003.
[7] J. Daugman, New Methods in Iris Recognition, IEEE Transactions on Systems, Man, and Cybernetics, 2007.
[8] J. Daugman, Demodulation by complex-valued wavelets for stochastic pattern recognition, International Journal of Wavelets, Multiresolution and Information Processing, 1(1):1-17, 2003.
[9] J. Daugman, New Methods in Iris Recognition. IEEE
TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS,
2007.
[10] H. Ebrahimpour-Komleh, Fractal Techniques for Face Recognition,
PhD thesis, Queensland University of technology, 2004.
[11] H. Ebrahimpour-Komleh, V. Chandra., and S. Sridharan, "Face
recognition using fractal codes" Proceedings of International
Conference on Image Processing(ICIP), vol. 3, pp. 58-61, 2009.
[12] H. Ebrahimpour-Komleh, V. Chandra, and S. Sridhar an, "Robustness
to expression variations in fractal-based face recognition" Sixth
International, Symposium on Signal Processing and its Applications,
vol. 1, pp. 359-362, 2001.
[13] H. Ebrahimpour-Komleh, V. Chandran, and S. Sridharan, An Application of Fractal Image-set Coding in Facial Recognition, Springer Lecture Notes in Computer Science, Vol. 3072, Biometric Authentication, pp. 178-186, Springer-Verlag, 2004.

SPOS-H: A Secure Pervasive Human-Centric
Object Search Engine
Arun A K
M.Tech Student, CSE Department
SRM University
Chennai, India
arun.cetly@gmail.com
M. Sumathi
Asst: Prof. (Sr.G), CSE Department
SRM University
Chennai, India
msumathi@ktr.srmuniv.ac.in
R. Annie Uthra
Asst: Prof. (Sr.G), CSE Department
SRM University
Chennai, India
annieuthra@ktr.srmuniv.ac.in

Abstract We have come across multiple search engines in the
wide spread world of computing. Popular among them are
Web Search engines that yield information to the user based on
the query term. Yet another variant of search engines are
physical object search engines. Though not much wide spread,
physical object search engines are now getting popular in the
world of pervasive computing. Searching for information on
the web is fast and is supported by associated web crawlers. In
contrast, searching for information pertaining to a physical object in the real world is tedious; it requires knowing the object's present location and also depends on factors such as the object owner's permission to access it. In this paper, we
propose an architecture for a multiple level security
incorporated real world object search engine, SPOS-H aimed
at data retrieval, which is intended to work on a heterogeneous
collection of physical objects attached with low cost sensor
devices. It also discusses techniques for improving search
relevance, human-centric location detection and reducing
battery consumption, which are desiderata lacking in existing systems.
Keywords- Human-centric, pervasive computing, search
engines, security, sensors.
I. INTRODUCTION
"A world where anyone can work and play from anywhere" - for Citrix Systems this tagline was once just a promotion for their remote desktop utility. The caption now features in this paper in another context: a pervasive world where there is computing everywhere [1][2][3]. This paper discusses the architecture of SPOS-H, a human-centric physical object search engine which gives prime importance to data security. The search is centred on the use of low cost sensor devices attached to day-to-day objects in the user's environment.
SPOS-H takes as input from a user a small description of
the object he wants to search for, and k, the number of
relevant items to be listed. The user is returned a list of k
objects their location and landmark. The user can easily click
on to the retrieved result to read more about the object. The
paper discusses the strong security mechanisms starting with
user authentication to data protection via encryption
mechanism. Compression strategy is employed on the data to
reduce communication aimed at minimizing battery power.
The paper is organized as follows; in section II we have
compared SPOS-H to its predecessors. Section III presents
an over view of the entire SPOS-H system with special
importance to the proposed SPOS-H architecture. This
section also briefs on the data associated with various sensor
nodes used in our proposal. Section IV discusses the security
mechanisms we have proposed on SPOS-H. In section V, a
detailed working of the entire SPOS-H search engine is
presented. Section VI throws light on how SPOS-H is
planned to be implemented. Section VII briefs the technique
employed for data compression and section VIII explains
how object mobility is supported by SPOS-H and finally in
section IX we have summarized our conclusion and future
objectives.
II. RELATED WORKS
Very few approaches have been done for search
engines over sensor networks and pervasive domain.
Existing ones [4][5] are restricted to integer value retrieval.
So far there has been no search engine designed to perform
textual search on physical objects in a human centric manner.
Snoogle described in [6] is a system that performs textual
data search, but it uses a very complicated retrieval
mechanism and the search is not human-centric. MAX [7]
performs a human-centric search, but it is not a textual
information retrieval system. Location identification have
been based on predefined databases, tables and maps,
Cricket [8] gave the way of beacon positioning and guided
through in placing the Super Nodes. Microsearch [9] was
probably the initial approach for searching which inspired
this work with its architecture and operation. Yet another
work that inspired this paper is Dyser [10] that yields a
dynamic web based object search. The approach uses mobile
phones and Bluetooth technology along with the sensor
devices which makes the cost factor very high.
III. SYSTEM OVERVIEW
In this section we shall discuss the over all SPOS-H
architecture and data management at the Minor Node, the
Super Node and the Controller.
A. System Architecture
Figure 1 depicts the architecture of SPOS-H. The
nodes are organized in a hierarchy. SPOS-H consists of four
major components- the Controller (C), the Super Node (SN),
the Major Node (MaN) and the Minor Nodes (MiN).
Described below are the major components of SPOS-H.
Controller Node It is a high computation power
laptop or personal computer which is the entry point
to the SPOS-H engine. It implements various
algorithms that ensure the security of the pervasive
environment, which is a central theme of our search.
Super Node Every room inside our pervasive
environment is attached with a sensor device,
carrying a unique id for each room. So the sensor for
each room is called the Super Node and there will be
as many Super Nodes as the number of rooms. Super
Nodes are static and are battery powered. They
possess a microcontroller and radio unit.
Major Node The static objects in a room which can
be user identifiable landmarks like shelf, table etc
are attached with sensor devises and now these are
termed as the Major Nodes. Every room can have
one or more distinct Major Nodes, which are again
battery powered and have radio unit and a
microcontroller.
Minor Node Every object within a room is
attached with a sensor device which carries the
description about the object. The description about
the object is what gets displayed to the user once he
chooses to know more on the object after getting
listed in the search. All Minor Nodes employ the
same radio frequency for communication. Minor
Nodes can be either static or mobile.
B. Data at Minor Nodes.
As mentioned earlier, the Minor Nodes are the day to
day objects in our room. They can be a pencil, a book, a bag
etc. Each object is attached with a sensor that makes it
detectable. The sensor carries the description of the object.
We can categorize the object details as follows.
Metadata - A keyword that describes the object.
The user uses these keywords to search for the
object.
Headline It is a one line descriptor about the
object. The headline is what the search engine first
lists down to the user.
Payload Payload is the entire description of the
object, which the user can read once he chooses to
know more of the object.
Relevance Factor - It is a number that describes
how much relevant is a metadata for the object.
These details are entered into the object by the object
owner in an incremental manner. That means that the owner
can modify these data any time. The metadata is stored after
compression, using the compression technique that is
discussed in later section. Safety of the payload is ensured
through encryption which is also described in later sections
C. Data at Super Nodes
Super Node associates within its memory the id and
metadata of the objects that come under its coverage.
D. Data at Controller
The Controller deals with quite a lot of data for functionalities like security management, user management, object listing, etc. A detailed view of the Controller's role in SPOS-H is presented in the sections that follow.



















Figure 1. SPOS-H Architecture
IV. SECURITY AND PRIVACY
Multiple levels of security have been employed on to the
SPOS-H search. Methods have been incorporated to ensure
object privacy, room privacy and communication message
security.
A. Message Security
The search begins with the user having to
authenticate him against the Controller. He can access the
Controller via a hand held device, say his PDA. He enters his
username and password. After successful login into the
Controller, he can enter the search string. The administration
of SPOS-H would have already provided the user with an id,
an object access group id, a room access group id and a
security certificate. Elliptic Curve Cryptography (ECC) [11][12][13] is chosen to secure the communication messages. The user receives an encrypted search result, which he decrypts with his private key to obtain the exact output. ECC was chosen because it offers security comparable to RSA with much shorter keys, and it is considered a better encryption choice for energy-constrained systems.
B. Room and Object Privacy
As extra privacy measures, the users are all
categorized into various groups. Every MiN carries a list of
permitted users. Only if the querying user has permission to
view that object, the object is listed in the search. Similarly
every SN carries a list of allowed users. Only queries from
permitted users are circulated into the room. That simply
means that not all users are permitted to query all objects and
not all users are permitted to query all rooms. Hence SPOS-
H ensures Object Privacy and Room Privacy. To add more to
safety, the data stored in the MiN is encrypted at the time of
it is loaded, using a symmetric encryption strategy. So even
if an intruder walks away with the object and tries to read the
data within the object, he needs to have the decryption key.

C. ECC Encryption and Key Exchange
The elliptic curve y^2 = x^3 + ax + b, with 4a^3 + 27b^2 ≠ 0, defines the mathematical setting of ECC. Each choice of a and b gives a different elliptic curve. All points (x, y) that satisfy this equation, plus a point at infinity, lie on the elliptic curve. The private key is a random number and the public key is a point on the curve: multiplying the generator point G by the private key yields the public key. The generator point G and the curve parameters a and b, together with a few more constants, constitute the domain parameters of ECC. Some public-key algorithms require a set of predefined constants to be known by all devices taking part in the communication; the domain parameters of ECC are an example of such constants.
Point Multiplication is a dominant operation in ECC
cryptographic schemes. Using the elliptic curve equation a
point P on the curve is multiplied with a scalar k to obtain
another point Q on the same elliptic curve, i.e. kP = Q. Point multiplication is achieved by two basic elliptic curve operations:

Point Addition - Addition of two points A and B on an elliptic curve to obtain another point C on the same curve is called point addition. Consider two points A and B on an elliptic curve as shown in Figure 2(a). If B ≠ -A, the line drawn through A and B intersects the elliptic curve at exactly one more point, -C; the reflection of -C with respect to the x-axis gives the point C, which is the result of the addition, so on an elliptic curve C = A + B. If B = -A, the line through the two points intersects the curve at the point at infinity Z, hence A + (-A) = Z; this is shown in Figure 2(b). Z is the additive identity of the elliptic curve group.

Point Doubling - Point doubling is the addition of a point A on the elliptic curve to itself, to obtain another point C on the same curve, i.e. C = 2A. Consider a point A on an elliptic curve as shown in Figure 3(a). If the y-coordinate of A is not zero, the tangent line at A intersects the elliptic curve at exactly one more point, -C; the reflection of -C with respect to the x-axis gives the point C, which is the result of doubling A, so C = 2A. If the y-coordinate of A is zero, the tangent at this point intersects the curve at the point at infinity Z, hence 2A = Z when y_A = 0; this is shown in Figure 3(b).
Now we shall look at ECDH, the Elliptic Curve Diffie-Hellman key exchange: the key agreement protocol that allows two parties to establish a shared secret key that can be used with private-key algorithms. The involved parties share some
public information to each other. Using this shared public
data and their private data they calculate the shared secret.
An intruder who has no access to the private data of either party will not be able to calculate the shared secret from the available public information. To generate a shared secret between A and B using ECDH, both have to agree upon the elliptic curve domain parameters. Each end has a key pair consisting of a private key d (a randomly selected integer less than k, where k is the order of the curve, an elliptic curve domain parameter) and a public key U = d * G (G is the generator point, an elliptic curve domain parameter). Let (dA, UA) be the private/public key pair of A and (dB, UB) that of B.
1. The end A computes K = (xK, yK) = dA * UB
2. The end B computes L = (xL, yL) = dB * UA
3. Since dAUB = dAdBG = dBdAG = dBUA.
Therefore K = L and hence xK = xL
4. Hence the shared secret is xK

Since it is practically impossible to find the private key dA or dB from the exchanged public keys, a third party cannot obtain the shared secret from the available public information. A toy numerical sketch of this exchange is given below.
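The four ECDH steps above, worked through on a deliberately tiny curve; the curve parameters, generator and private keys below are illustrative teaching values only and offer no real security.

# Toy ECDH sketch over the small curve y^2 = x^3 + 2x + 2 (mod 17); illustrative only.
p, a, b = 17, 2, 2
G = (5, 1)                           # generator point: 1^2 = 125 + 10 + 2 = 137 = 1 (mod 17)
INF = None                           # point at infinity Z

def point_add(P, Q):
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF                                          # A + (-A) = Z
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # point doubling slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # point addition slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    R = INF                          # double-and-add computation of k * P
    while k:
        if k & 1:
            R = point_add(R, P)
        P, k = point_add(P, P), k >> 1
    return R

dA, dB = 11, 9                       # private keys of A and B
UA, UB = scalar_mult(dA, G), scalar_mult(dB, G)   # public keys U = d * G
K = scalar_mult(dA, UB)              # end A computes dA * UB
L = scalar_mult(dB, UA)              # end B computes dB * UA
assert K == L                        # dA*dB*G = dB*dA*G, so the shared secret is xK = xL
print("shared secret x-coordinate:", K[0])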
















Figure 2. Point Addition







Figure 3. Point Doubling















V. SPOS-H WORKING
In this section, we shall explain in detail the working of
SPOS-H. The working of SPOS-H can be categorized into
three heads as summarized below. Happenings at each of the
components of SPOS-H have been briefed below. We shall
see how a user query is dealt with by SPOS-H at its various
hierarchical components.
A. At Controller
Search starts with the user having to login into the
Controller through his PDA or any supporting device.
Controller is in charge of the user validation. Once a user is
authenticated, communication to him is encrypted. He enters
the search string which is a coma separated list of metadata
and the maximum count of objects he prefer to see in his
output screen, say k. There are two methods of search in
SPOS-H, a local search and a distributed search.
Local Search User can query a selected SN, i.e. a
specific room.
Distributed Search User query is relayed to all SN
and further forwarded based on his access
permissions.
Controller passes the query to the rooms. Each SN
returns their best k search results based on their findings. The
controller chooses its best k and then the search results are
presented to the user in his window. If user requests further
read, the payload for the requested object is displayed to him.
Figure 4 shows a sample output screen of the SPOS-H
search.
B. At Room
The SN corresponding to each room checks the
user's rights. If the user is not permitted to query the room,
his query is not forwarded. SN has a list of objects in the
room and what metadata they hold. So the query is now
forwarded to the objects whose metadata values match that
of the query term. Each SN gets back <MiNid, MaNid,
net_relevance_parameter, headline> from the MiNs. Each
SN forwards their best k results back to the Controller.
In case of payload request from a valid user, the SN
just collects it from the object and returns to the Controller.
Payload request will be of the form <u, MiNid>.
C. At Object
The MiN receives the query say <t1, t2, t3, k, u>
where each ti corresponds to the metadata. Say O1 is an
object, it responds to the query only when the user is
permitted access to it. O1 computes the net_relevance_parameter by summing the relevance parameters of every metadata term ti that matches in O1; every object that receives the query performs the same operation. The MiN returns <MiNid, MaNid, net_relevance_parameter, headline>. In case the SN forwarded a payload request, the MiN returns the payload to the SN, which delivers it to the Controller responsible for displaying it on the user's screen. A minimal sketch of this matching step is given below.
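A minimal sketch of the matching step just described. The in-memory representation of a Minor Node and the field names are our own illustrative assumptions, not part of the SPOS-H specification.

# Sketch of how a Minor Node could answer a query <t1, t2, ..., k, u>; illustrative only.
def handle_query(node, query_terms, user_id):
    if user_id not in node["permitted_users"]:
        return None                                    # object privacy: query is ignored
    net_relevance = sum(node["metadata"].get(t, 0) for t in query_terms)
    if net_relevance == 0:
        return None                                    # no metadata term matched
    return (node["min_id"], node["man_id"], net_relevance, node["headline"])

# Example object: a journal paper stored on book rack MaN-2.
paper = {"min_id": "MiN-17", "man_id": "MaN-2",
         "permitted_users": {"u42"},
         "metadata": {"sensor": 5, "search": 3},
         "headline": "A search engine paper"}
print(handle_query(paper, ["sensor", "pervasive"], "u42"))   # ('MiN-17', 'MaN-2', 5, ...)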




















Figure 4. SPOS-H Sample Output Screen
VI. IMPLEMENTATION
SPOS-H has not yet been implemented; most of it is still a proposal on paper.
SPOS-H would have two categories of users, the
administrator and the general users. Administrator is the
environment owner and in addition to normal search, he is
responsible for managing the objects and the general users.
General users are the ones who are permitted by the
administrator to perform search using the SPOS-H interface.
General users fall into various groups as set by the
administrator. A user can be prevented from querying certain
rooms or even certain objects. Controller device that plays
the key role in SPOS-H can be a high computational power
laptop or desktop. SNs, MaNs and MiNs can be affixed with
TelosB Motes.
We plan to implement SPOS-H in a two room Journal
Library. There would be two Super Nodes, one for each
room. Each room will have five book racks, which will play the role of Major Nodes. Journal papers attached with motes will play the role of Minor Nodes, our search objects.
Headline will be the paper title, keywords of the paper would
correspond to metadata and paper content would be the
payload. Librarian and students will take up the role of
administrator and general user. Test case access restrictions
can be applied on to the users and objects.
VII. DATA COMPRESSION
In order to minimize the data communication across the
various nodes the data have been subjected to compression.
The reader must now be clear that when a reference is made
to metadata, it is not the exact metadata that gets transferred,
but a compressed version of the metadata. Bloom Filter
compression technique [14][15] has been found very
effective in compressing the metadata. Bloom filters are
compact data structures for probabilistic representation of a
set in order to support membership queries. This compact
representation is the payoff for allowing a small rate of
false positives in membership queries; that is, queries might
incorrectly recognize an element as member of the set.

Consider a set S = {s1, s2, s3, ..., sn} of n elements. Bloom filters describe the membership information of S using a bit vector V of length m. For this, k hash functions h1, h2, ..., hk, with hi : X -> {1..m}, are used as described below. The following procedure builds an m-bit Bloom filter corresponding to a set S, using the hash functions h1, h2, ..., hk:

Procedure BloomFilter(set S, hash_functions, integer m) returns filter
    filter = allocate m bits initialized to 0
    for each si in S:
        for each hash function hj:
            filter[hj(si)] = 1
        end for each
    end for each
    return filter

Therefore, if si is a member of the set S, all bits of the resulting Bloom filter V that correspond to the hashed values of si are set to 1. Testing for membership of an element elm is equivalent to testing that all the corresponding bits of V are set.

Procedure MembershipTest(elm, filter, hash_functions) returns Yes/No
    for each hash function hj:
        if filter[hj(elm)] != 1 return No
    end for each
    return Yes
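The two procedures above in a minimal runnable form. The k hash functions are simulated by hashing a (j, element) pair with SHA-256; the actual hash functions used in SPOS-H are not specified, so this is illustrative only.

# Runnable sketch of the Bloom filter build and membership test above; illustrative only.
import hashlib

def bloom_positions(elm, k, m):
    # Simulate k independent hash functions h1..hk by hashing "j:elm".
    for j in range(k):
        digest = hashlib.sha256(f"{j}:{elm}".encode()).hexdigest()
        yield int(digest, 16) % m

def bloom_build(S, k=3, m=64):
    filter_bits = [0] * m
    for s in S:
        for pos in bloom_positions(s, k, m):
            filter_bits[pos] = 1
    return filter_bits

def bloom_member(elm, filter_bits, k=3):
    return all(filter_bits[pos] == 1 for pos in bloom_positions(elm, k, len(filter_bits)))

f = bloom_build({"stapler", "stationery", "red"})
print(bloom_member("stapler", f))    # True
print(bloom_member("pencil", f))     # False with high probability (false positives possible)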

The MiN and MaN store and transmit only the compressed metadata, thereby reducing the message size, which helps reduce power consumption.
VIII. SIGNALING AND MOBILITY
Now we shall see how SPOS-H signaling and
mobility of objects work. A study of various location support
systems [16][17] had been done and Cricket [8] provided an
option for better SN placement.
Each Super Node relays a message at fixed interval
informing all MiNs (the objects) that it is the master of that
room. The Major Nodes, too, relay a message at a different frequency. The object, or rather its MiN, thus learns and records
the SN and MaN in its vicinity. MiN then sends its metadata,
associated relevance parameter and identifier to the SN,
which records this information in its memory. The MiN at
regular intervals broadcasts a keep alive message. In case a
keep alive message does not reach the SN in a fixed time
interval, the information regarding that MiN is removed from
SN. The message that a particular MiN has been removed is
notified within the room, so that in case it is still present, it
can resend its information to the SN.
Now when the MiN is moved to another room, it
detects the signal from the other MaN and SN and hence
updates this information in its memory. In case the MiN
receives signals from multiple SNs, the available RSSI values are compared and the SN with the highest value is chosen.
Choice of MaN too is done in the same manner. In case the
room is very vast we can think of placing more than one SN
within the room to cater the need of all MiNs.
IX. CONCLUSION AND FUTURE WORKS
In this paper we have proposed a very innovative and
cost efficient architecture for a physical world human-centric
search engine. As of now, most of SPOS-H lies on paper
only and we are on the move to develop a working prototype
of SPOS-H and bring into light our search engine. According
to our present work, we have not taken into account indexing
of data stored on to the Super Nodes, we have not considered
the case of Controller failure also. Once the first prototype of
SPOS-H is ready, we will concentrate on factors like Controller load balancing, indexing, etc., which can improve the system performance. Another area we would
concentrate is to develop a mechanism to give Super Nodes
live update on object entry and exit thereby reducing
communication overhead. To tighten the security which is a
major concern in most pervasive systems, we would be
planning to bring in better authentication techniques as
discussed by Kerberos [19] or advanced Biometric means
[20] or Smart Cards [21]. The most profound technologies
are those that disappear. They weave themselves into the
fabric of everyday life until they are indistinguishable from
it. We are on a mission to take these words of Mark Weiser
[23] on to a new dimension.
ACKNOWLEDGMENT
The authors would like to thank all reviewers for their
valuable comments and guidelines to improve this paper.
Profound thanks to almighty for all his invisible yet
dominant support.
REFERENCES
[1] M. Satyanarayanan, Carnegie Mellon University, Pervasive
Computing: Vision and Challenges. IEEE Personal
Communications, August 2001
[2] R. Hull, P. Neaves, J. Bedford-Roberts, Towards situated
computing Proc. of the Intl. Conf. on Wearable Computers,
ISWC'97, 1997, pp. 146_153.
[3] Joe Polastre, Sentilla Corp. A New Vision for Pervasive
Computing: Moving Beyond Sense and Send.
http://www.sensorsmag.com/networking-ommunications/wireless-
sensor/a-new-vision-pervasive-computing-moving-beyond-sense-and-
send.
[4] P. Bonnet, J. Gehrke, and P. Seshadri, Towards Sensor Database
Systems, Proc. Second Intl Conf. Mobile Data Management
(MDM01), pp. 3-14, 2001.
[5] S.R. Madden, M.J. Franklin, J.M. Hellerstein, and W. Hong,
TinyDB: An Acquisitional Query Processing System for
SensorNetworks, ACM Trans. Database Systems, vol. 30, pp. 122-
173, 2005.
[6] H. Wang, C.C. Tan, and Q. Li, Snoogle: A Search Engine for the
Physical World, Proc. IEEE INFOCOM, Apr. 2008.
[7] K.-K. Yap, V. Srinivasan, and M. Motani, MAX: Human-Centric
Search of the Physical World, Proc. ACM Conf. Embedded
Networked Sensor Systems (SenSys), 2005.
[8] N.B. Priyantha, A. Chakraborty, and H. Balakrishnan, The Cricket
Location-Support System, Proc. Sixth Ann. Intl Conf. Mobile
Computing and Networking, 2000.
[9] C.C. Tan, B. Sheng, H. Wang, and Q. Li, Microsearch: When Search
Engines Meet Small Devices, Proc. Sixth Intl Conf. Pervasive
Computing, May 2008.
[10] Benedikt Ostermaier, Kay Romery, Friedemann Mattern, Michael
Fahrmairz and Wolfgang Kellererz, A Real-Time Search Engine for
the Web of Things,http:// www.vs.inf.ethz.ch/publ/papers/dyser.pdf
[11] Certicom, Standards for Efficient Cryptography, SEC 1: Elliptic
Curve Cryptography, Version 1.0, September 2000,
http://www.secg.org/download/aid-385/sec1_final.pdf
[12] K. Kaabneh and H. Al-Bdour, Key Exchange Protocol in Elliptic
Curve Cryptography with No Public Point American Journal of
Applied Sciences 2 (8): 1232-1235, 2005 ISSN 1546-9239 2005
Science Publications
[13] Sheueling Chang, Hans Eberle, Vipul Gupta, Nils Gura, Sun
Microsystems Laboratories, Elliptic Curve Cryptography How it
Works http://labs.oracle.com/projects/crypto/HowECCWorks-
USLetter.pdf
[14] M. Mitzenmacher, Compressed Bloom Filters, Proc. 20th
Ann.ACM Symp. Principles of Distributed Computing, 2001.
[15] B.H. Bloom, Space/Time Trade Offs in Hash Coding with
Allowable Errors, Comm. ACM, vol. 13, no. 7, pp. 422-426, 1970.
[16] Ana-Maria Chiselev, Luminia Moraru, Anisia Gogu, Localization
Of An Object Using A Bat Model, Inspired From Biology. Biophys.,
Vol. 19, No. 4, P. 251258, Bucharest, 2009
[17] A. Ward, A. Jones, and A. Hopper, A New Location Technique for the Active Office, IEEE Personal Communications Magazine, 4(5):42-47, October 1997.
[18] Welbourne, E., Balazinska, M., Borriello, G.,and Brunette, W.
challenges for pervasive rfid-based infrastructures. In PERCOMW
07:Proceedings of the Fifth IEEE International Conference on
Pervasive Computing and Communications Workshops (Washington,
DC, USA, 2007), IEEE Computer Society, pp. 388394.
[19] J. G. Steiner, G. Neuman, and J. I. Schiller, Kerberos: An
Authentication Service for Open Network Systems, Proc. Winter
1988 USENIX Tech. Conf., Dallas, TX, Feb., 1988.
[20] A. Jain, L. Hong, and S. Pankanti, Biometric Identification, Commun. ACM, vol. 43, no. 2, Feb. 2000.
[21] N. Itoi, P. Honeyman, Practical Security Systems with Smartcards,
7th IEEE Wksp. Hot Topics in Op. Sys., Rio Rico, AZ, Mar. 1999.
[22] O.P. Sahu and Tarun Dubey. A new approach for self localization of
wireless sensor network. Indian Journal of Science and Technology.
Vol.2 No. 11 Nov. 2009
[23] M. Weiser, The Computer for the 21st Century, Sci. Amer., Sept.,
1991.
COMPUTER USERS STRESS MONITORING USING A BIOMEDICAL
APPROACH & CLASSIFICATION USING MATLAB
Stress monitoring of different types of computer users

Arunraj M
Instrumentation & Control Department,
Kalasalingam University
Krishnankoil, Srivalliputtur - 626 190.
arunrajeie@gmail.com




Dr. M.Pallikonda Rajasekaran
Associate Professor,
Instrumentation & Control Department,
Kalasalingam University
Krishnankoil, Srivalliputtur - 626 190.
mpraja80@gmail.com


Abstract - The prime objective of this paper is to highlight the importance of stress monitoring for computer users. The ongoing revolution in the IT field has increased the risk of stress suffered by software engineers. Software engineers are routinely exposed to continuous work for more than 8 hours, which can lead to life-threatening health conditions ranging from heart attack and brain tumour to computer syndrome. To help prevent such conditions, we consider three parameters that allow the mental stress suffered by computer users to be detected. This mental stress eventually affects the physical state of the computer users, thereby leading to serious health issues. The parameters considered are blood pressure, pulse rate and oxygen saturation. A monitor is used to store the signals of each user. Using these measurements, we classify computer users based on their body type, i.e. lean, average and stout, using the MATLAB Neural Network BPN algorithm. We further classify users based on their age and sex.
Keywords - stress monitoring, biomedical, workload, blood pressure, temperature, pulse rate, oxygen saturation
I. INTRODUCTION

Nowadays, the ongoing revolution in the IT field has pushed software engineers to work continuously for long hours. Anyone who works long hours without a break is almost certain to suffer stress, but many people overlook this fact, which leads to serious consequences [2]. We monitor the stress caused by workload by means of three bio-signals common to the human body: oxygen saturation, pulse rate and blood pressure. These parameters are vital for stress monitoring, since their combination reflects the condition of the heart and the mental state of the computer user [7].

Stress has been defined as a reaction from a calm state to an excited state for the purpose of preserving the integrity of the organism. Stress is divided into:
I) Eustress (positive bias)
II) Distress (negative bias)

Stress with a negative bias, particularly distress caused by an increase in the computer user's workload, is very bad for health. Apart from continuous workload, stress is also caused by content addiction, e.g. game addiction or movie addiction; these phenomena likewise increase the stress of computer users. There is therefore a need to monitor the parameters that indicate stress and to protect users from its future effects [7].
Here we monitor stress by taking into account 3
important parameters. They are,
I) Oxygen Saturation
II) Pulse Rate
III) Blood Pressure

II. PARAMETERS TO BE MEASURED IN
COMPUTER USERS
A. Blood Pressure
Everyday stress of modern life and work can definitely
increase our blood pressure levels by accelerating our heart
rate. But this is a temporary, non permanent rising of blood
pressure levels, used quite normally by the human body to prepare us to respond to 'threats'. This is often called the
fight-or-flight response [1]. However, its not
necessarily correct to say that everyday stress causes
permanently high blood pressure (hypertension).Stress can
cause temporary high blood pressure level, but these high
levels will revert to normal once the source of our stress is
removed and we are able to relax. Scientific research has
also shown that long term stress does play a role in the
increased risk of hypertension, but numerous other factors
need also be considered, among them obesity, exercise,
smoking, and psychological concerns like depression and
anxiety levels [1]. This all becomes a cycle when we
consider that excess stress itself leads to many of the other
contributing factor of hypertension. Highly stressed people
often overeat, take little exercise and smoke more. We record the initial blood pressure of the computer users, so users who suffer from hypertension are identified immediately and given proper treatment before proceeding to further work [4].
B. Oxygen Saturation Measurement:-
Oxygen saturation is a relative measure of the amount of
oxygen that is dissolved or carried in a given medium. It can
be measured with a dissolved oxygen probe such as an
oxygen sensor or an optode in liquid media, usually water. It
has particular significance in medicine and environmental
science. In medicine, oxygen saturation (SO2), commonly
referred to as "sats", measures the percentage of hemoglobin
binding sites in the bloodstream occupied by oxygen. At low
partial pressures of oxygen, most hemoglobin is
deoxygenated. At around 90% (the value varies according to
the clinical context) oxygen saturation increases according to
an oxygen-hemoglobin dissociation curve and approaches
100% at partial oxygen pressures of >10 kPa. A pulse
oximeter relies on the light absorption characteristics of
saturated hemoglobin to give an indication of oxygen
saturation.

C. Pulse Rate Measurement:-
Pulse is the rate at which our heart beats. Our pulse is
usually called our heart rate, which is the number of times
our heart beats each minute (bpm). However, the rhythm
and strength of the heartbeat can also be noted, as well as
whether the blood vessel feels hard or soft. Changes in our
heart rate or rhythm, a weak pulse, or a hard blood vessel
may be caused by heart disease or another problem [7] [3].
As our heart pumps blood through our body, we can feel a
pulsing in some of the blood vessels close to the skin's
surface, such as our wrist, neck, or upper arm. Counting our
pulse rate is a simple way to find out how fast our heart is
beating. Our doctor will usually check our pulse during a
physical examination or in an emergency, but we can easily
learn to check our own pulse. We can check our pulse the
first thing in the morning, just after we wake up but before
we get out of bed. This is called a resting pulse. Some
people like to check their pulse before and after they
exercise. We check our pulse rate by counting the beats in a set period of time (at least 15 to 20 seconds) and multiplying by the corresponding factor to get the number of beats per minute; for example, 20 beats counted over 15 seconds correspond to 20 x 4 = 80 bpm. Our pulse changes from minute to minute. It will be
faster when we exercise, have a fever, or are under stress. It
will be slower when we are resting [3].

III. HARDWARE FOR STRESS MONITORING OF
COMPUTER USERS

The arrangement consists of two modules for measuring
the 3 parameters. They are 1) pulse oximeter 2)
Oscillometric method of blood pressure measurement. The
recordings are stored in the monitor as per the given interval
for readings. In our case we took blood pressure
measurement for every 5 min interval, oxygen saturation for
every 5 min interval and pulse rate for every 5 min interval.
So by finding these parameters with the given interval, we
can determine the mental state of the computer user [6] [5].
The fig1 represents the architecture of stress monitoring for
computer users,



Fig1 Architecture of stress monitoring for
computer users

MEASUREMENT SYSTEM DETAILS:

SpO2            - pulse oximeter
Pulse rate      - pulse oximeter
Blood pressure  - oscillometric method

A. MEASUREMENT MODULES:-
I) PULSE OXIMETER SYSTEM:-
The Pulse Oximeter helps us to measure SpO2 and pulse
rate. A small probe is connected to the thumb in the hand
(or) foot of the computer users. This probe sends signals to
the monitor, where we collect the above said 2 parameters.
II) BLOOD PRESSURE SYSTEM:
A cuff is placed on the left arm of the computer user. Because the monitoring is routine we do not place much emphasis on accuracy, so we opt for a non-invasive type of blood pressure measurement. A digital automatic sphygmomanometer is used because of the complexity involved in manual measurement. The corresponding connection records the mean arterial blood pressure of the computer user in the monitor itself.

IV. BACKPROPAGATION NEURAL NETWORK

Backpropagation is the generalization of the Widrow-
Hoff learning rule to multiple-layer networks and nonlinear
differentiable transfer functions. Input vectors and the
corresponding target vectors are used to train a network
until it can approximate a function, associate input vectors
with specific output vectors, or classify input vectors in an
appropriate way as defined by the user. Networks with biases, a
sigmoid layer, and a linear output layer are capable of
approximating any function with a finite number of
discontinuities [6].

Standard backpropagation is a gradient descent
algorithm, as is the Widrow-Hoff learning rule, in which the
network weights are moved along the negative of the
gradient of the performance function. The term
backpropagation refers to the manner in which the gradient
is computed for nonlinear multilayer networks. There are a
number of variations on the basic algorithm that are based
on other standard optimization techniques, such as
conjugate gradient and Newton methods [6]. The Neural
Network Toolbox software implements a number of these
variations. This chapter explains how to use each of these
routines and discusses the advantages and disadvantages of
each. Properly trained backpropagation networks tend to
give reasonable answers when presented with inputs that
they have never seen. Typically, a new input leads to an
output similar to the correct output for input vectors used in
training that are similar to the new input being presented.
This generalization property makes it possible to train a
network on a representative set of input/target pairs and get
good results without training the network on all possible
input/output pairs. There are two features of Neural
Network Toolbox software that are designed to improve
network generalization: regularization and early stopping.
These features and their use are discussed in Improving
Generalization.

There are generally four steps in the training process (a minimal sketch in code follows the list):

1. Assemble the training data.
2. Create the network object.
3. Train the network.
4. Simulate the network response to new inputs.
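The four steps above, sketched with NumPy rather than the MATLAB Neural Network Toolbox: a small network with a tan-sigmoid hidden layer and a linear output layer, trained by plain gradient-descent backpropagation. The input dimensions follow the five measured parameters, but the numbers used here are made-up placeholders, not the readings reported in the tables.

# The four training steps above as a NumPy sketch (toy data, illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# 1. Assemble the training data: rows = [SYS, DIA, MEAN, pulse, SpO2], one row per class.
X = np.array([[135., 62., 79., 79., 98.],    # lean
              [111., 69., 77., 70., 99.],    # stout
              [122., 68., 80., 80., 98.]])   # average
mu, sd = X.mean(axis=0), X.std(axis=0)
X = (X - mu) / sd                            # normalise the inputs
T = np.eye(3)                                # one-hot targets: lean / stout / average

# 2. Create the network object: weights and biases of a 5-4-3 feedforward network.
W1, b1 = rng.normal(0.0, 0.5, (5, 4)), np.zeros(4)
W2, b2 = rng.normal(0.0, 0.5, (4, 3)), np.zeros(3)

# 3. Train the network with backpropagation (gradient descent on mean squared error).
lr = 0.01
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                 # tan-sigmoid hidden layer
    Y = H @ W2 + b2                          # linear output layer
    E = Y - T
    dW2, db2 = H.T @ E, E.sum(axis=0)
    dH = (E @ W2.T) * (1.0 - H ** 2)         # backpropagate through tanh
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# 4. Simulate the network response to a new input reading.
x_new = (np.array([[120., 80., 90., 72., 97.]]) - mu) / sd
print(np.tanh(x_new @ W1 + b1) @ W2 + b2)    # scores for lean / stout / average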


V. TRANSFER FUNCTION & ARCHITECTURE OF
BPN ALGORITHM

An elementary neuron with R inputs is shown below.
Each input is weighted with an appropriate w. The sum of
the weighted inputs and the bias forms the input to the
transfer function f. Neurons can use any differentiable
transfer function f to generate their output. Multilayer
networks often use the log-sigmoid transfer function logsig.
Alternatively, multilayer networks can use the tan-sigmoid
transfer function tansig. Here we used tan sigmoidal
function to effectively classify the output.


FEED FORWARD NETWORK

Feedforward networks often have one or more hidden
layers of sigmoid neurons followed by an output layer of
linear neurons. Multiple layers of neurons with nonlinear
transfer functions allow the network to learn nonlinear and
linear relationships between input and output vectors. The
linear output layer lets the network produce values outside
the range -1 to +1.


VI. CLASSIFICATION OF STRESS BASED ON AGE
& BODY TYPE

The parameters measured using the biomedical system were then classified according to age, sex and body type. Different sexes, age groups and body types exhibit different levels of blood pressure, oxygen saturation and pulse rate, so users have to be classified along these three axes and a database formed for each type of user. These databases are trained against the ideal conditions for men or women, and that database is then used as the reference for training the single-user-level data. The tables used for classification are shown below.


TABLES FOR READINGS

I) SAMPLE READINGS 1: NAME: Bhaskar, BODY TYPE: Lean, AGE: 17

TIME (min) | SYS (mmHg) | DIA (mmHg) | MEAN (mmHg) | PULSE RATE (bpm) | SPO2 (%)
1          | 135        | 62         | 79          | 79               | 98
6          | 113        | 57         | 73          | 76               | 98
11         | 119        | 66         | 79          | 78               | 98

II) SAMPLE READINGS 2: NAME: Vinoth, BODY TYPE: Stout, AGE: 23

TIME (min) | SYS (mmHg) | DIA (mmHg) | MEAN (mmHg) | PULSE RATE (bpm) | SPO2 (%)
1          | 111        | 69         | 77          | 70               | 99
6          | 113        | 64         | 72          | 73               | 99
11         | 119        | 79         | 93          | 74               | 99

III) SAMPLE READINGS 3: NAME: Sakthi, BODY TYPE: Average, AGE: 21

TIME (min) | SYS (mmHg) | DIA (mmHg) | MEAN (mmHg) | PULSE RATE (bpm) | SPO2 (%)
1          | 122        | 68         | 80          | 80               | 98
6          | 117        | 65         | 78          | 81               | 97
11         | 121        | 70         | 82          | 85               | 98

VII. MATLAB BASED BPN OUTPUTS

The collected data were organized using MS Excel and averaged for each type of user according to sex. These data were imported into MATLAB and trained using the Neural Network Toolbox with a BPN-based feedforward network, comparing the ideal conditions against the average of the database users and, later, against the corresponding test users. Readings are taken for the corresponding sex type in all three cases (blood pressure, pulse rate and SpO2). A database has to be formed for the users we are going to deal with in the future; its purpose is to ensure that users who suffer stress are compared with already existing users. This database needs to be updated until it is accurate enough to detect the correct change in stress level for a particular type of user [2]. The three tables below represent the process carried out: 1) ideal condition, 2) database users' average and 3) test users' average.
TABLES FOR CLASSIFICATION IN BPN

I) CONSIDERED IDEAL CONDITIONS:

SYS (mmHg) | DIA (mmHg) | MEAN (mmHg) | PULSE RATE (bpm) | SPO2 (%)
120        | 80         | 90          | 72               | 97
120        | 80         | 90          | 72               | 97
120        | 80         | 90          | 72               | 97

II) AVERAGE OF DATABASE USERS (LEAN):

SYS (mmHg) | DIA (mmHg) | MEAN (mmHg) | PULSE RATE (bpm) | SPO2 (%)
113.42     | 76.85      | 101.57      | 90.57            | 98
112.14     | 82.14      | 97.71       | 90.28            | 97.42
112.42     | 80.57      | 98.57       | 90.57            | 98.14

III) AVERAGE OF TEST USERS (LEAN):

SYS (mmHg) | DIA (mmHg) | MEAN (mmHg) | PULSE RATE (bpm) | SPO2 (%)
113.42     | 76.85      | 101.57      | 90.57            | 98
112.14     | 82.14      | 97.71       | 90.28            | 97.42
112.42     | 80.57      | 98.57       | 90.57            | 98.1

By comparing table 1(Ideal condition) with table
2(Database average), we got the graph shown below. This
graph below represents the training carried out in neural
network toolbox for ideal (vs) database users.





By comparing table 1(Ideal condition) with table 3(Test
average), we got the graph shown below. The graph below
represents the training carried out in neural network toolbox
for ideal (vs) test users.




VIII. CONCLUSION

Therefore, by acquiring and analyzing the bio-signals of computer users with the aid of biomedical equipment and then classifying the data using a BPN in MATLAB, we obtain an overview of when a computer user is most affected and stressed. By regularly studying the behavior of computer users, we can form a database to predict their mental state before computer usage, thereby effectively preventing future problems. Such a study can help protect many people from stress-related diseases.

IX. REFERENCES

1. Grewal, A. Shekar, A, " The Development of an
Electronic Stress Relief Device that Monitors Physical
Activity Mechatronics and Machine Vision in Practice,
2008, M2VIP 2008. 15th International Conference on 2-4
Dec. 2008 on page(s): 594.
2. Jing Zhai Barreto, A.B. Chin, C. Chao Li,
Realization of stress detection using psychophysiological
signals for improvement of human-computer interaction
sm, SoutheastCon, 2005, Proceedings. IEEE 8-10 April
2005 on page(s): 415.
3. Kim, Desok Seo, Yunhwan Jaegeol Cho, Chul-Ho
Cho,"Detection of subjects with higher self-reporting stress
scores using heart rate variability patterns during the day"
Engineering in Medicine and Biology Society, 2008. EMBS
2008. 30th Annual International Conference of the IEEE on
20-25 Aug. 2008 on page(s): 682.
4. Palvia, S. Lai Lai Tung, " IT use and incidence of
stress by demographic factors: an exploratory study",
TENCON '94, IEEE Region 10's Ninth Annual International
Conference. Theme: Frontiers of Computer Technology.
Proceedings of 1994 22-26 Aug 1994 on page(s): 139.
5. Reddig, D. Karreman, J. van der Geest, T "Watch
out for the preview: The effects of a preview on the usability
of a Content Management System and on the users
confidence level" Professional Communication Conference,
2008, IPCC 2008. IEEE International on 13-16 July 2008 on
page(s): 1.
6. Wenhui Liao Weihong Zhang Zhiwei Zhu Qiang
Ji, A Real-Time Human Stress Monitoring System Using
Dynamic Bayesian Network Computer Vision and Pattern
Recognition - Workshops, 2005, CVPR Workshops. IEEE
Computer Society Conference on 25-25 June 2005 on
page(s): 70.
7. Zhai, J. Barreto, A., Stress Detection in Computer
Users Based on Digital Signal Processing of Noninvasive
Physiological Variables" Engineering in Medicine and
Biology Society, 2006, EMBS '06. 28th Annual
International Conference of the IEEE Aug. 30 2006-Sept. 3
2006 on page(s): 1355.





Segmentation of Heart by Using Texture Features and Shapes
Mrs. Shreyasi Watve, Prof. Mrs. R. Sreemathy
PICT, Pune
Shreya.watve@gmail.com

Abstract - Segmentation of the heart across a cardiac cycle is a problem of interest because the proper function of the left ventricle, pumping oxygenated blood to the entire body, is vital for normal human activity. Having segmentations of the heart over time allows cardiologists to assess the dynamic behavior of the human heart (using, e.g., the ejection fraction). Segmented heart boundaries can also be useful for further quantitative analysis. Texture features have been widely used in object recognition, image analysis and many other tasks, and the Gabor filter has emerged as one of the most popular texture feature extractors; it is defined by its parameters, including the frequencies, orientations and smoothing parameters of its Gaussian envelope. Snakes have been used extensively for locating object boundaries. However, in the medical imaging field many organs in close proximity have similar intensity values, limiting the usefulness of snakes in the segmentation of abdominal organs. The gradient vector flow (GVF) snake is used here to test the benefits of running snakes on texture features obtained from orientation-based Gabor filters. The proposed algorithm combines GVF with an Active Shape Model (ASM) to overcome several drawbacks of the original framework; it is completely automatic and computationally efficient.
Index Terms - Gabor filter, principal component analysis (PCA), gradient vector flow (GVF).
I. INTRODUCTION
Image segmentation is often described as the process that subdivides an image into its constituent parts and extracts the parts of interest. The goal of segmentation is to change or simplify the image into something more meaningful or easier to analyze.
Segmentation of human organs in medical images is
of benefit in many areas of medicine, including
measurement of tissue volume, computer-guided
surgery, diagnosis, treatment planning, and research
and teaching. There are many segmentation
approaches
that have been proposed by various researchers for
different imaging modalities and pathologies; these
techniques have varying degrees of success.
These approaches can be divided into three main
categories [1]: the threshold approach, the active
contour mappings approach, and the model-based
(deformable) approach. In the traditional threshold
approach, the segmentation is performed by
grouping all pixels that pass the predefined intensity
criteria into regions of interest [2]. The active
contour (e.g. snake [5]) is a boundary-based
approach which deforms a manually chosen initial
boundary towards the boundary of the object by
minimizing the image energy function. While the
threshold approach has to be tuned in order to find
the best values for the thresholds producing the
right segmentation, the active contour approach has
to deal with the manual selection of the initial
points on the region's boundary. Furthermore,
when the edge of an object is not sharply different
from the background, the active contour approach
by itself may not differentiate between the region of
interest and background. To overcome this
challenge, most recent work of active contour
mappings combines the active contours with the
level sets approaches [5]. While the threshold and
active contour approaches do not use any a-priori
information, the model-based approach uses a
template to find the region of interest; the template
can also be deformed using certain rules in order to
deal with different scales and positions of the
regions of interest. Since the model-based
segmentation approach is heavily based on shape
information, the approach may fail when detecting
organs where some abnormalities (such as tumors) are
present.
Segmentation of heart by using Texture Features and Shapes
Mrs. Shreyasi Watve, Prof. Mrs. R. Sreemathy
PICT, Pune PICT, Pune
Shreya.watve@gmail.com




Considering image segmentation as the
partition of an image into a set of non-overlapping
regions whose union is the entire image, some rules
to be followed for regions resulting from the image
segmentation can be stated as (Haralick, 1985):
1. They should be uniform and homogeneous with
respect to some characteristics;
2. Their interiors should be simple and without
many small holes;
3. Adjacent regions should have significantly
different values with respect to the characteristic on
which they are uniform; and
4. Boundaries of each segment should be simple,
not ragged, and must be spatially accurate.
The automated segmentation of the heart
endocardium in bright blood cardiac magnetic
resonance (MR) images is a challenging problem.
Exploiting segmentations from a previous or future
frame can improve the segmentation of the heart in
the current frame, because the heart boundaries
exhibit strong temporal correlation. Using such
information would be particularly useful for low
SNR images, in which the observation from a single
frame alone may not provide sufficient information
for an acceptable segmentation. The incorporation
of dynamic models into cardiac segmentation and
tracking is an area of recent and growing interest.
Chalana, et al. [3] and Jolly et al. [4] perform causal
processing using the segmentation from the most
recent frame to initiate a local search for the
segmentation in the current frame. Senegas, et al.
[5] use sample-based methods (using sequential
Monte Carlo) for causal shape tracking on a finite-
dimensional representation of the shape space using
spherical harmonics.
As an essential characteristic of reflective
images, texture feature has been widely used in
object recognition, image content analysis and
others. Texture feature extraction aims to extract
proper features to distinguish different textures.
Most of the segmentation approaches applied in the medical field are based on gray-level intensities. To perform medical image segmentation, the gray levels alone may not be sufficient, as many soft tissues have overlapping gray-level ranges and thus, the use of the properties
of the corresponding anatomical structures is
necessary to perform accurate medical image
segmentation [6]. Since the shape of the same organ
might be different across a sequence of 2-D axial
slices or even more, across different patients,
several texture-based segmentation approaches have
been proposed as a way to quantify the homogeneity
and consistency of soft tissues across multiple
Computed Tomography slices.
II. METHODOLOGY
There are a large number of texture-based
segmentation algorithms in the literature; among the
most commonly used segmentation algorithms
based on the texture features are clustering
techniques, region growing, and split-and-merge
techniques. Segmentation using these traditional
techniques requires considerable amounts of expert
interactive guidance or does not incorporate any
spatial modeling which can result in poor
segmentation results.
In this paper, the proposed hybrid approach for heart segmentation combines the advantages of texture-based image features, which encode the image content, with snakes, which delineate the boundary of the region of interest. The approach consists of
three stages: a) Gabor Filter is calculated for each
image along with five pixel-based image features
[12]. b) Principal Component Analysis (PCA) is
applied to find the linear combinations that capture
the most variance in the data; c) The gradient snake
approach is applied on the most important
combination (principal component) to determine the
boundary of the organ of interest. The entire
process is summarized in figure 1.















Figure 1. Methodology: the original image is passed through texture feature extraction (five feature images), Principal Component Analysis (PC 1 to PC 5), and the snake algorithm.

While the proposed approach targets heart segmentation in MRI and in 2-D/3-D Computed Tomography (CT) images, it can also be applied to the segmentation of any other organ in CT images.
A. Texture feature extraction: Gabor Filter
The Gabor filter is a linear filter whose impulse response is defined by a harmonic function multiplied by a Gaussian function. It is optimally localized, as per the uncertainty principle, in both the spatial and frequency domains [7].
A Gabor filter based feature extractor can be interpreted as a nonlinear function that maps images from the original space to a feature space, where each image is represented by its features.
Gabor filters have the ability to perform multi-resolution decomposition due to their localization in both the spatial and spatial-frequency domains. Texture feature extraction requires simultaneous measurements in both the spatial and the spatial-frequency domains. Filters with smaller bandwidths in the spatial-frequency domain are more desirable because they allow us to make finer distinctions among different textures. On the other hand, accurate localization of texture boundaries requires filters that are localized in the spatial domain. However, the effective width of a filter in the spatial domain and its bandwidth in the spatial-frequency domain are normally inversely related, according to the uncertainty principle. That is why Gabor filters are well suited for this kind of problem [11, 12].
A Gabor function in the spatial domain is a sinusoidally modulated Gaussian. For a 2-D Gaussian curve with spreads σx and σy in the x and y directions, respectively, and a modulating frequency u0, the real impulse response of the filter is given by

h(x, y) = exp{ -(1/2) [ x²/σx² + y²/σy² ] } cos(2π u0 x)   (1)

In the spatial-frequency domain, the Gabor filter becomes two shifted Gaussians at the location of the modulating frequency. The 2-D frequency response of the filter is given by

H(u, v) = A ( exp{ -(1/2) [ (u - u0)²/σu² + v²/σv² ] } + exp{ -(1/2) [ (u + u0)²/σu² + v²/σv² ] } )   (2)

where σu = 1/(2πσx), σv = 1/(2πσy) and A is a constant.
Figure 2 shows the original image and figure 3
shows the images of the five features.
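As a rough illustration of this feature-extraction stage, the following Python sketch builds a small bank of orientation-tuned Gabor filters and convolves an image with each of them to produce texture feature images. The library choice (NumPy/SciPy), the filter size and the parameter values are assumptions for illustration only and are not taken from the paper.

import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(u0, theta, sigma_x=2.0, sigma_y=2.0, size=15):
    # Real part of a 2-D Gabor filter: a Gaussian envelope modulated
    # by a cosine of frequency u0 along the orientation theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
    return envelope * np.cos(2.0 * np.pi * u0 * xr)

def gabor_features(image, u0=0.1, n_orientations=5):
    # One filtered image per orientation (five feature images here).
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    return [convolve(image.astype(float), gabor_kernel(u0, t)) for t in thetas]

if __name__ == "__main__":
    img = np.random.rand(64, 64)        # stand-in for an MR slice
    feats = gabor_features(img)
    print(len(feats), feats[0].shape)   # 5 feature images, same size as the input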






Figure 2. Original image


Figure 3. Texture images

B. Principal component analysis

Principal Components Analysis is a standard tool in
modern data analysis. It is a simple, non-parametric
method for extracting relevant information from
confusing data sets.
Principal Components Analysis (PCA) offers a
method to identify and rank the low-level features
according to the amount of variation within the
image data explained by each feature [8]. PCA uses
the covariance between the features to transform the
feature space into a new space where the features
are uncorrelated. First, the covariance matrix for the
feature data is calculated and then, the eigenvectors
(principal components) are extracted to form a new
linear transformation of the original attribute space:
PC_j = α1·f1 + α2·f2 + α3·f3 + α4·f4 + α5·f5   (3)

where j stands for the j-th principal component (j = 1, ..., 5) and (α1, α2, α3, α4, α5) are the features' contributions to forming the component.
The features with large loadings (weights)
contribute more to the principal component of the
data; the features with lower loadings (weights) can
be considered noise. Using both the loadings
(weights of the original features) and the amount of
variance explained by the principal components, the
importance of the individual features can be
compared and ranked.
After ranking each eigenvector (principal
component) for the amount of dataset variation they
explain, the top ranking eigenvectors are selected to
represent the entire dataset. In our proposed
approach, we consider only the first principal
component for further analysis. That is, instead of
applying the snake on the original gray-level image,
we apply the snake on the first principal component
image formed by replacing each pixels intensity by
its representation in the PCA space:
( ) ( )
( ) ( )(
(
(

n n PC n PC
n PC PC
, 1 ,
, 1 1 , 1
1 1
1 1

(4)
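A minimal sketch of this PCA step is given below, assuming the five texture feature images are stacked so that each pixel becomes a five-dimensional sample; the eigen-decomposition of the covariance matrix gives the loadings, and projecting onto the leading eigenvector yields the first-principal-component image used as the snake input. The function and variable names are illustrative only.

import numpy as np

def first_pc_image(feature_images):
    # feature_images: list of 2-D arrays (the five texture features).
    stack = np.stack([f.ravel() for f in feature_images], axis=1)   # (pixels, 5)
    stack = stack - stack.mean(axis=0)                              # center each feature
    cov = np.cov(stack, rowvar=False)                               # 5x5 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)                          # ascending eigenvalues
    first = eigvecs[:, -1]                                          # largest-variance direction
    print("variance explained by PC1:", eigvals[-1] / eigvals.sum())
    scores = stack @ first                                          # PC1 score per pixel
    return scores.reshape(feature_images[0].shape)

if __name__ == "__main__":
    feats = [np.random.rand(64, 64) for _ in range(5)]
    print(first_pc_image(feats).shape)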

C. Gradient vector flow snake approach

In image segmentation and target
recognition, an important step is to detect and
extract boundaries in the image data. Snakes or
active contours were first introduced by Kass et al.
[9,13] to locate the boundaries of an object of
interest. Since then, snakes have been used in many
applications, such as edge detection, shape
modeling and object tracking .The traditional snake
approach is based on the image energy function and
attempts to conform to an energy minimization
solution by deforming to certain internal and
external forces:
A Snake is parameterized contour defined
within an image domain that can translate and
deform an the image plane under the influence of
internal forces coming from the curve itself and
external force computed from image data. But it has
two major difficulties: one is that capture range is
very narrow, and therefore requires an initial
contour near the real boundary; and the other is the
difficulty in moving into boundary
concavities.So,the GVF Snakes method introduced



an external force called gradient vector flow (GVF)
which is computed in terms of a diffusion of
gradient vectors of the image.
Using a gradient edge map derived from the
image, a diffusion model was applied to deform the
snake such that it was able to converge to concave
boundaries. Furthermore, the map did not rely on
initialization by an expert user as long as the map is
automatically initialized within the vicinity of organ
of interest. Therefore, the GVF model was defined
by the following equation:
x_t(s, t) = α x''(s) - β x''''(s) + v = 0   (5)

where α x''(s) - β x''''(s) is the internal force that resists stretching and bending, while v is a new static external force field that pulls the snake towards the desired object. The snake is iteratively deformed using five parameters (tension/elasticity, rigidity, external force weight, noise filtering, and viscosity), which returns a new set of points that represent the deformation of the snake after a set number of iterations. The GVF model defined by equation (5) is applied on the matrix defined by formula (4).
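The paper does not spell out the discrete computation of the GVF field, so the sketch below shows one common way to obtain it: iterative diffusion of the edge-map gradient. The regularization weight mu, the time step and the iteration count are assumptions chosen only for illustration.

import numpy as np

def gvf_field(edge_map, mu=0.15, iterations=80, dt=1.0):
    # Gradient vector flow: diffuse the gradient of the edge map f so the
    # resulting field (u, v) extends into homogeneous regions and concavities.
    fy, fx = np.gradient(edge_map.astype(float))
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()

    def laplacian(a):
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

    for _ in range(iterations):
        # u_t = mu * Laplacian(u) - (u - f_x) * |grad f|^2, and likewise for v.
        u = u + dt * (mu * laplacian(u) - (u - fx) * mag2)
        v = v + dt * (mu * laplacian(v) - (v - fy) * mag2)
    return u, v

if __name__ == "__main__":
    f = np.zeros((64, 64)); f[20:44, 20:44] = 1.0   # toy binary edge map
    u, v = gvf_field(f)
    print(u.shape, v.shape)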

III. Results

Our experimentation was performed on MRI
images. Using Gabor Filter, 5 texture matrices were
produced from each MRI image. PCA was then
applied, using the matrices as variables and the
indexes as independent cases. The snake works with only one input value per pixel, so the pixel representation with respect to the first principal component is required as the input to the snake algorithm. Working with the principal component instead of the gray level allowed us to work with the most important and largest portion of the variance (99%) in the data while neglecting noise and redundant information.


IV. Conclusion
Segmentation accuracy determines the success or
failure of analysis procedures. Segmentation is
based on partitioning an image into different regions
of similar textures based on a specified criterion.
We have successfully implemented the algorithms
related to segmentation using Gabor filter and
Principal component analysis.
Segmentation approach combines texture-based
features, principal component analysis and the
gradient vector flow. Principal component analysis
has widespread applications because it reveals
simple underlying structures in complex data sets
using analytical solutions from linear algebra.
The snake algorithm produces more precise and smoother segmentation results on texture images than intensity-based segmentation.
References:
[1] R. Susomboon, D.S. Raicu, and J.D. Furst, "Automatic
Single-Organ Segmentation in Computed Tomography
Images", IEEE International Conference on Data Mining,
December 2006.

[2] J. C. McEachen II and J. S. Duncan, Shape-based
tracking of left ventricular wall motion, IEEE Trans. Medical
Imaging, vol. 16, no. 3, pp. 270283, 1997.

[3] V. Chalana, D. T. Linker, D. R. Haynor, and Y. Kim, A
multiple active contour model for cardiac boundary detection
on echocardiographic sequences, IEEE Trans. Medical
Imaging, vol. 15, no. 3, pp. 290298, 1996.

[4] M-P. Jolly, N. Duta, and G. Funka-Lee, Segmentation of
the left ventricle in cardiac MR images, in IEEE Int. Conf. on
Computer Vision, 2001, vol. 1, pp. 501508.

[5] C. Xu and J.L. Prince, Gradient Vector Flow: A New
External Force for Snakes, IEEE Conf. on Computer Vision
and Pattern Recognition, Los Alamitos: Comp. Soc. Press,
June 1997, pp. 66-71.

[6] J. Senegas, T. Netsch, C. A. Cocosco, G. Lund, and A.
Stork, Segmentation of medical images with a shape and
motion model: A Bayesian perspective, in Computer Vision
Approaches to Medical Image Analysis (CVAMIA) and
Mathematical Methods in Biomedical Image Analysis
(MMBIA) Workshop, 2004, pp. 157168.

[7] Thomas P. Weldon and William E. Higgins,Design of
multiple Gabor Filters for texture segmentation, IEEE
transaction on Image Processing ,July 1996

[8] B. Horsthemke, D.S. Raicu, Texture Feature Analysis for
Soft Tissue Organ Classification using PCA and LDA, SPIE
Medical Imaging Conference, San Diego, CA, February 2007.

[9] M. Kass, A. Witkin, and D. Terzopoulos, Snakes - Active
Contour Models, International Journal of Computer Vision,
Vol. 1, No. 4, 1987, pp. 321-331.




[10] J. Duncan, A. Smeulders, F. Lee, and B. Zaret,
Measurement of end diastolic shape deformity using bending
energy, in Computers in Cardiology, 1988, pp. 277280.

[11] Jie Yao, Patrick Krolak, and Charlie Steele, The
Generalized Gabor Transform, IEEE TRANSACTIONS ON
IMAGE PROCESSING, VOL. 4, NO. 7, JULY 1995 : pp. 978-
988.


[12 ]Dennis Dunn,and William E. Higgins, Optimal Gabor
Filters for Texture Segmentation, IEEE TRANSACTIONS ON
IMAGE PROCESSING, VOL 4, NO 7, JULY 1995 : pp. 947-
964.

[13] C. Xu, J. L. Prince, Snakes, Shapes, and Gradient Vector
Flow, IEEE Transactions on Image Processing, March 1998,
pp. 359-369

Image De-noising with Edge Preservation using Soft Computing Techniques

Kartik Sau

Department of Computer Science &
Engineering, Budge Budge Institute of
Technology, Nischintapur. Kolkata
7000137, WBUT, India.

kartik_sau2001@yahoo.co.in

Amitabha Chanda
Department of Computer Science &
Engineering, Guest faculty, UCSTA,
University of Calcutta, West Bengal,
India.
amitabha39@yahoo.co.in

Pabitra Karmakar

Department of Computer science &
Engineering, Institute of Engineering
and Management, WBUT, Salt Lake,
Sec V, Kolkata - 700091, India.
pab_comp@yahoo.co.in
Abstract- Image de-noising is a technique to reduce noise in corrupted images. The aim of image de-noising is to
improve the contrast of the image or perception of information
in images for human viewers or to provide better output for
other automated image processing techniques. This paper
presents a new approach for image de-noising with Fuzzy
Filtering techniques using centroid method for defuzzification.
It preserves any type of edges (including tiny edges) in any
direction. The experimental result shows the effectiveness of
the proposed method.
Keywords: Impulsive noise; Median filter; Fuzzy Systems
and Applications; Defuzzification.


I. INTRODUCTION

For various reasons digital images are often contaminated
with noise at the time of acquisition or transmission. The
noise introduces itself into an image by replacing some of
the pixels of the original image by new pixels having
luminance values near or equal to the minimum or
maximum of the allowable dynamic luminance range. Pre-
processing of an image is conducted with a view to
adjusting the image for further classification and
segmentation. In the process, however, image features
should not be destroyed. This is a difficult task in any
image processing system. For this purpose various types
of filters are used. Among those median filter is an
important class of filters. In the present paper we shall
discuss some of the median filters.

A. Different median filters:

Some important median filters are discussed below in
short.
The standard median filter [1, 2, 10, 26, 28]: In this filter, the luminance values in a window are arranged in order and the median value is selected. This filter reduces noise reasonably well, but in the process some information is also lost at low noise densities.
The weighted median filter [3, 13, 28, 29] and the centre-
weighted median filter [4, 5, 6, 7, 28] have been proposed
to avoid the inherent drawbacks of the standard median
filter by controlling the trade off between the noise
suppression and detail preservation.
The switching median filter [8, 28] is a type of median
filter with an impulse detector. It is so designed that if the
centre pixel is identified by the detector as a corrupted
pixel, then it is replaced with the output of the median
filter, otherwise, it is left unchanged. The tri state median
filter [9] and the multi-state median filter (MSMF) [10] are
two modification of switching median filter.
Two other types of switching median filters worth
mentioning are progressive switching median filter
(PSMF) [11] and Signal-dependent rank-ordered mean
filter (SDROMF) [12,15].
The progressive switching median filter (PSMF) [11] is a
derivative of the basic switching median filter. In this
filtering approach, detection and removal of impulse
noise are iteratively done in two separate stages. Despite
its improved filtering performance it has a very high
computational complexity due to its iterative nature.
Signal-dependent rank-ordered mean filter (SDROMF)
[12,15] uses rank-ordered mean filter.
Adaptive centre weighted median (ACWM) [13, 28] filter
avoids the drawbacks of the CWM filters and switching
median filter. Input data will be clustered by scalar
quantization (SQ) method, this results in fixed threshold
for all of images.
Fuzzy-median filters of different types have gained a lot of importance in image processing. Some such filters [14, 16, 17, 18, 20, 27] are worth mentioning.

II. PROPOSED METHOD FOR DENOISING
A. Phase I
Any given digital image can be represented by a two-dimensional array of size M×N. When we capture or transmit images, noise may be introduced due to improper opening and closing of the shutter, atmospheric turbulence, misfocus of the lens, or relative motion between camera and objects [31]. For removing the noise from the image we can pursue the following steps:

Step 1: Let the input and output images of the median filter be X(i,j) and y1(i,j) respectively. The median filter uniformly replaces the central pixel of a window W by the median of the pixels bounded by the predefined window of size w×w. The output of the median filter [21, 29, 30] is given by

y1(i,j) = median{ X(i-s, j-t) : (s,t) ∈ W }   (1)

and

W = { (s,t) : -n ≤ s, t ≤ n }   (2)

Here we consider n = 1.

Step 2: The gray values of the neighborhood pixels of (i,j)
of original image are sorted in some specific order.

Step 3: The fuzzy membership [19, 20, 22, 26] value is assigned to each pixel in a window of size w×w using a suitable membership function. The membership function used here is given below.
(i) A triangular shaped membership function is used.
(ii) The highest and lowest gray values get the membership value zero.
(iii) The membership value 1 is assigned to the mean value of the gray levels of the window of size w×w.
The triangular membership function [19, 22, 26], also called a bell-shaped function with straight lines, can be defined as follows:

μ(x; a, b, c) = 0                  if x ≤ a
              = (x - a) / (b - a)  if a < x ≤ b    (3)
              = (c - x) / (c - b)  if b < x ≤ c
One typical plot of the triangular membership is given in the
following figure.

Figure 1. Triangular membership function

Step 4: Now defuzzify [19, 22, 26, 30] the membership values using the centroid method by the following formula, and select the output for that window. Let it be y2(i,j) at the point (i,j) in the window of size w×w:

y2(i,j) = Σx [ x · μ(i,j)(x) ] / Σx μ(i,j)(x)   (4)
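A minimal sketch of Phase I for a single window is given below. It assumes the triangular membership is anchored at the window minimum, mean and maximum, as Step 3 suggests; because the exact anchoring used by the authors is not fully specified, the numbers produced here differ slightly from the worked example in Table 1.

import numpy as np

def triangular_mu(x, lo, peak, hi):
    # Triangular membership: 0 at lo and hi, 1 at the peak value.
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def phase1_window(window):
    # Return (median output y1, centroid-defuzzified output y2) for one window.
    vals = np.sort(window.ravel().astype(float))
    y1 = float(np.median(vals))
    lo, hi, peak = vals[0], vals[-1], vals.mean()
    mu = np.array([triangular_mu(v, lo, peak, hi) for v in vals])
    y2 = float((vals * mu).sum() / mu.sum()) if mu.sum() > 0 else y1
    return y1, y2

if __name__ == "__main__":
    w = np.array([[83, 113, 71], [99, 0, 58], [112, 92, 47]])
    print(phase1_window(w))   # median 83; defuzzified value near the window mean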
B. Phase II
If we apply Phase I to the noisy images, the noise is removed. The problem is that, since noisy pixels are represented as 0 and 255, all pixels with value 0 or 255 will also be altered, even those that are not noise. For preserving the actual data we pursue the following steps.

Step 1: For the noisy pixel at the point (i,j), we compute p(i,j) and q(i,j) by the following formulas [24]:

p(i,j) = | f(i,j) - median{ L(f(i,j)) } |

where L(f(i,j)) is the 8-neighborhood of the point (i,j).

q(i,j) = ( | f(i,j) - fc1(i,j) | + | f(i,j) - fc2(i,j) | ) / 2   (5)

Here fc1(i,j) and fc2(i,j) are the closest values of f(i,j) in the filter window of size w×w. Then rearrange p(i,j) and q(i,j) in ascending order for all i, j = 0, 1, 2.

Step 2: Compute w(i,j) = F(p(i,j), q(i,j)) such that

w(i,j) = 1 - (p(i,j) + k1) / (p(i,j) + q(i,j) + k2)   (6)

Here k1 and k2 are real quantities, which depend on the quality of the input images. Select the maximum w(i,j).

Step 3: If y1(i,j) ≠ y2(i,j) then

y3(i,j) = w(i,j)·y1(i,j) + (1 - w(i,j))·y2(i,j)   (7)

else

y3(i,j) = y1(i,j)

Step 4: Continue this process for all the pixels which are noisy.
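A rough sketch of the Phase II correction for one noisy pixel follows, using the p, q and w quantities defined above. The weight formula is the reconstructed form of equation (6), and the constants k1 and k2 are arbitrary illustrative values, since the text only states that they depend on image quality.

import numpy as np

def phase2_pixel(window, y1, y2, k1=1.0, k2=1.0):
    # window: 3x3 neighborhood of a noisy pixel (center included).
    # y1: median output and y2: defuzzified output from Phase I.
    f = float(window[1, 1])
    neigh = np.delete(window.ravel().astype(float), 4)   # the 8-neighborhood
    p = abs(f - float(np.median(neigh)))                  # distance to the neighborhood median
    d1, d2 = np.sort(np.abs(neigh - f))[:2]               # distances to the two closest values
    q = (d1 + d2) / 2.0
    w = float(np.clip(1.0 - (p + k1) / (p + q + k2), 0.0, 1.0))
    # Blend the two Phase I outputs when they disagree, keep the median otherwise.
    return w * y1 + (1.0 - w) * y2 if y1 != y2 else y1

if __name__ == "__main__":
    win = np.array([[83, 113, 71], [99, 0, 58], [112, 92, 47]])
    print(phase2_pixel(win, y1=83.0, y2=68.0))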
The procedure for the computation of the median is
illustrated below.
Example: Consider a 3×3 window of pixels as follows:

 83  113  71
 99    0  58
112   92  47

Original value: 0; Mean value: 75; Median value y1(i,j): 83

Sorted order:       0     47    58    71    83    92    99    112   113
Membership value:   0.00  0.26  0.55  0.89  0.79  0.55  0.37  0.26  0.00

Table 1

According to Table 1 the graph is as follows:


Figure 2

Now we calculate y2(i,j) using the centroid method.

Sector   Area of the sector   Centroid of the sector
a        6.184211             23
b        4.486842             52
c        9.407894             64
d        10.105263            77
e        6.039474             87
f        3.223684             95
g        2.565789             105
h        0.013158             112

Table 2

So, the calculated value of Σx x·μ(i,j)(x) = 2858.328857, and the calculated value of Σx μ(i,j)(x) = 42.02631.
Therefore y2(i,j) = 68 [selected value in Phase I].

Rearranged p(i,j) and q(i,j) are as follows:

p(i,j) q(i,j) w(i,j)
0 7 0.956522
9 7.5 0.207792
12 8 0.509804
16 10 0.388889
25 10.5 0.388278
29 12 0.328947
30 12.5 0.202703
36 17.5 0.47222
83 52.5 0.330275

Table-3
Therefore y3(i,j) = 82.




Figure 3

The figure 3 shows the membership function for the gray
levels in the original test image.

III. FLOW CHART
The flow chart of the proposed method is shown below.



Figure 4

IV. EXPERIMENTAL RESULT
The effectiveness of the proposed method is shown experimentally. It eliminates fixed-value impulse noise and has been tested on different images of different sizes.
The peak signal-to-noise ratio (PSNR) [19, 22, 23, 27, 29,
30] value gives the performance of restoration
quantitatively, which is defined as
PSNR = 10 log10 ( 255² / Σk ( d(k) - y(k) )² )   (8)

Where 255 is the peak gray-level of the image, d(k)
represents the value of the desired output, and y(k)
represents the value of the physical output.
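For reference, a small Python helper implementing the PSNR measure of equation (8) is given below; the sum-of-squared-errors form follows the equation as printed, and the peak value of 255 is taken from the text.

import numpy as np

def psnr(desired, output, peak=255.0):
    # PSNR as in equation (8): 10*log10(peak^2 / sum of squared errors).
    err = np.asarray(desired, dtype=float) - np.asarray(output, dtype=float)
    sse = np.sum(err ** 2)
    if sse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / sse)

if __name__ == "__main__":
    a = np.random.randint(0, 256, (64, 64))
    b = np.clip(a + np.random.randint(-5, 6, a.shape), 0, 255)
    print(psnr(a, b))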
The experimental results were compared with median-
filter (MED) and Combine Fuzzy Median filtering
method outputs.
A. Original Images:


lena girl hair

field paddy aish

B. Noisy Images :





C. Median Filter Output




D. Proposed Filter Output




Image name   PSNR for Median Filter   PSNR for Proposed Filter
lena         13.77654                 13.774332
girl         47.92559                 49.988178
hair         47.04226                 49.370430
field        44.06747                 46.27762
paddy        43.47353                 44.832817
aish         54.17881                 57.420597

Table 4: PSNR Comparison Table


Image name   Original Image   Median Filter Output   Proposed Filter Output
lena         18904            21848                  21889
girl         15468            15481                  15559
hair         21864            21839                  21863
field        29855            29605                  29719
paddy        21329            21265                  21389
aish         11605            11601                  11653

Table 5: Edge Count Comparison Table

From Table 4 we observe that the proposed method is better than the median filter due to its higher PSNR. From Table 5 we conclude that more edges [23] are preserved by the proposed method than by the median filter.


V. CONCLUSION
In this paper, we proposed a new technique for noise removal based on soft computing techniques, which can remove the noise while preserving tiny edges, so that the output image is more or less the same as the original image. From Table 4 we can conclude that the best PSNR is achieved by using the proposed method. The proposed method has been tested on more than 100 pictures and gives better results for different noise levels. The PSNR comparison results of the above stated filters for the hair image with different noise levels are as follows:

Noise Percentage   Median Filter   Proposed Filter
5 %                42.55175        44.916355
10 %               38.80244        40.318722
15 %               35.74166        37.733429
20 %               31.76021        33.796974
25 %               29.41449        30.928938
30 %               25.98153        27.275433
35 %               23.94808        25.14379
40 %               22.5963         23.435760
45 %               20.67396        21.548168
50 %               18.77719        19.788691

Table 6: PSNR comparison with different noise levels


Figure 5

From Table 6 we can say that the proposed filter gives better results than the median filter at any noise level. Figure 5 gives the graphical representation of Table 6; the X axis denotes the noise percentage and the Y axis denotes the PSNR value. So we can say that the proposed method is good enough to de-noise any noisy image while preserving the tiny edges of the image.

REFERENCE
[1] S. E. Umbaugh, Computer Vision and Image Processing,
Prentice-Hall, Englewood Cliffs, NJ, USA, 1998.

[2] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis,
and Machine Vision, PWS Publishing, Pacific Grove, Calif, USA,
1999.

[3] O. Yli-Harja, J. Astola, and Y. Neuvo, Analysis of the
properties of median and weighted median filters using
threshold logic and stack filter representation, IEEE Trans.
Signal Processing, vol. 39, no. 2, pp. 395410, 1991.

[4] S.-J. Ko and Y. H. Lee, Center weighted median filters and their
applications to image enhancement, IEEE Trans. Circuits and
Systems, vol. 38, no. 9, pp. 984993, 1991.

[5] B. Jeong and Y. H. Lee, Design of weighted order statistic filters
using the perceptron algorithm, IEEE Trans. Signal Processing,
vol. 42, no. 11, pp. 32643269, 1994.

[6] T. Sun and Y. Neuvo, Detail-preserving median based filters in
i mage processing, Pattern Recognition Letters, vol. 15, no. 4,
pp. 341347, 1994.

[7] T . Chen and H. R. Wu, Adaptive impulse detection using center-
weighted median filters, IEEE Signal Processing Letters, vol. 8,
no. 1, pp. 13, 2001.

[8] S . Zhang and M. A. Karim, A new impulse detector for
switching median filters, IEEE Signal Processing Letters, vol. 9,
no. 11, pp. 360363, 2002

[9] T. Chen, K.-K. Ma, and L.-H. Chen, Tri-state median filter for image
denoising, IEEE Trans. Image Processing, vol. 8, no. 12, pp. 1834
1838, 1999.

[10] . T. Chen and H. R. Wu, Space variant median filters for the
restoration of impulse noise corrupted images, IEEE Trans. on
Circuits and Systems II: Analog and Digital Signal
processing, vol. 48, no. 8, pp. 784789, 2001.

[11] Z. Wangand D. Zhang , Progressive switching median filter for
the removal of impulse noise from highly corrupted images,
IEEE Trans. on Circuits and Systems II: Analog and Digital
Signal Processing, vol. 46, no. 1, pp. 7880, 1999.

[12] E. Abreu, M. Lightstone, S. K. Mitra, and K. Arakawa, A
new efficient approach for the removal of impulse noise
from highly corrupted images, IEEE Trans Image
Processing, vol. 5, no. 6, pp. 1012-1025, 1996

[13] T.C. Lin, P.-T. Yu, , A new adaptive center weighted median filter
for suppressing impulsive noise in images, Information
Sciences 177 (2007) 1073-1087.

[14] K. Arakawa, Median filters based on fuzzy rules and its
application to image restoration, Fuzzy Sets and Systems 77
(1996) 3-13.
[15] E. Abreu, S.K. Mitra, A signal-dependent rank ordered mean
(SD-ROM) filter. A new approach for removal of impulses
From highly corrupted images, in: Proceedings of IEEE
ICASSP-95, Detroit, MI, 1995, pp. 2371-2374.

[16] T.-C. Lin, P.-T. Yu, Partition fuzzy median filter based on
fuzzy rules for image restoration, Fuzzy Sets and Systems 147
(2004) 75 - 97.

[17] B. Smolka, A. Chydzinski, Fast detection and impulsive noise
removal in color images, Real-Time Imaging 11 (2005)
389 - 402.

[18] J. C. Bezdek, Pattern Recognition with Fuzzy Objective
Function Algorithms. New York: Plenum, 1981

[19] Konar Amit, Computational Intellingence, Springer, New york,
2004

[20] Sau K, Amitabha Chanda, Image De-Noising with A Fuzzy Filtering
Technique, India, 2011

[21] Somasundram K. Shanmugavadivy P. Adaptive iterative order
Statistics Filter, ICGST-GVIP journal Vol. 9 issue 4, pp 23-32, 2009

[22] Ross. T. J, Fuzzy logic with Engineering Applications, 2nd edition,
University of New Mexico, USA, Wiley India.

[23] Gonzalaz. R. C, Digital Image processing. 2
nd
edition TMH

[24] Sadoghi H, Yazdi, Homayouni F. Impulsive Noise Suppression of
Images Using Adaptive Median Filter, International Journal of Signal
Processing, Image Processing and Pattern Recognition Vol. 3, No. 3,
September, 2010

[25] Baudes, A., B. Coll and J, M Morel, 2005. A non-local algorithm for
noise denoising, Computer Vision and Pattern Recognition, CVPR.
IEEE Conf. , 2: 60-65, 20-25.

[26] Lee, C. S. V H Kuo and P. T Yu 1997. Weighted Fuzzy mean filters
for Image processing. Fuzzy Sets Sys., 89, 157-180

[27] Lakshimiprabha S., A new method of image denoising based on fuzzy
logic. International Journal of Soft computing 3(1): 74-77, 2008

[28] H. Hwang and R. A. Haddad 1995, Adaptive Median filters: New
algorithms and results IEEE transaction on Image processing , 4, pp.
499-502

[29] A Hamza and H. Krim, 2001 , Image denoising A non linear robust
statistical approach. IEEE. Trans. Signal Processing 49(2), pp. 3045-
3054

[30] H. J Zimmermann Fuzzy set theory- And its applications, second
edition , Allied publishers limited.

[31] S Jayaraman, S Esakkirajan and T Veerakumar, Digital Image
processing, Tata McGraw Hill Education Private Limited.

ABOUT THE AUTHORS
[1] Kartik Sau


Kartik Sau completed his B.Sc. in
mathematics from R. K Mission
Vidyamandira, University of Calcutta.
And M. Sc. in the same subject from
Indian institute of Technology,
Kharagpur.
M. Tech in Computer Science from Indian School of
Mines, Dhanbad. He has presented many papers in
International and National journals and Conference. His
area of interest includes Digital Image processing,
Artificial Intelligence, Pattern Recognition, Soft
computing, etc. He has more than eight years teaching and
research experience in his area of interest.
[2] Dr. Amitabha Chanda

Dr. Amitabha Chanda: visiting Professor; Department of
Computer Science, UCSTA; Calcutta University. He
completed his B.E in Chemical Engineering (Jadavpur),
M.A. in Pure Mathematics and Ph.D in Mathematics from
university of Calcutta. He was a faculty member of Indian
Statistical Institute (ISI), Kolkata. Now he is also guest
faculty member of ISI, Kolkata; Department of Computer
Science, Rajabazar Science College, Kolkata. His area of
interest includes Digital Image processing, Pattern
Recognition Fuzzy logic, Genetic Algorithms, Computer
Graphics, Control system Turbulence, Fractal, multifractals,
Clifford Algebra, Nonlinear dynamics. He has more than
fifty years teaching and research experience in his area of
interest. He has presented more than 50 papers in
International and National journals and Conference. Dr.
Chanda is an Associate member of American Mathematical
Society.

[3] Pabitra Karmakar


Pabitra Karmakar received his B. Tech
degree in Computer Science &
Engineering from Dumkal Institute of
Engineering & Technology, WBUT,
India. Currently he is doing M.Tech in
Computer Science &
Engineering from Institute of Engineering & Management,
Salt Lake, Kolkata, India. He is also a faculty member of the Institute of Engineering and Management in the Department of Computer Science & Engineering. He is SCJP 1.4 certified from Sun Microsystems, USA, and is also MTA certified by Microsoft Corporation. His area of interest
includes Digital Image processing, Pattern Recognition,
Fuzzy logic, Genetic Algorithms, Neural Networks,
RDBMS and programming languages.


Implementation Of An Edge Oriented Image Scaling
Processor In SYNOPSYS Using Modified Multiplier


Angel.P.Mathew
Department of Electronics and Communication Engineering.
K.S.R College of Technology, Tiruchengode
Namakkal, India
angelmathew75@yahoo.com

Abstract--Image scaling is a very important technique and has
been widely used in many image processing applications. The
edge-oriented area-pixel scaling processor is implemented with
low-complexity. A simple edge catching technique is adopted to
preserve the image edge features effectively so as to achieve better
image quality. Compared with the previous low-complexity
scaling techniques, the edge oriented area pixel scaling method
performs better in terms of both quantitative evaluation and
visual quality.

Keywords: Image scaling, Interpolation, Pipeline architecture,
Image processing.

I. INTRODUCTION

Image scaling [2] is the process of resizing a digital
image. Scaling is a non-trivial process that involves a trade-off
between efficiency, smoothness and sharpness. As the size of
an image is increased, the pixels which comprise the image
become increasingly visible, making the image appear soft.
Conversely, reducing an image will tend to enhance its
smoothness and apparent sharpness. Image scaling is widely
used in many fields [1] ranging from consumer electronics to
medical imaging. It is indispensable when the resolution of an
image generated by a source device is different from the screen
resolution of a target display. For example, we have to enlarge
images to fit HDTV or to scale them down to fit the mini-size
portable LCD.
According to the required computations and memory
space, we can divide the existing scaling methods [3] into two
classes: lower complexity and higher complexity scaling
techniques. The complexity of the former is very low and
comparable to conventional bilinear method. The latter yields
visually pleasing images by utilizing more advanced scaling
methods. Kim et al. presented a simple area-pixel scaling method [4]. It uses an area-pixel model instead of the common point-pixel model, and since the scaling process is included in end-user equipment, it provides a good lower-complexity scaling technique which is simple and suitable for low-cost VLSI implementation. In this model, a maximum of four pixels of the original image is used to calculate one pixel of the scaled image, based on the area coverage of the source pixels under the applied mask.



C.Saranya
Department of Electronics and communication Engineering.
K.S.R College of Technology, Tiruchengode
Namakkal, India
saranya1209@yahoo.com

Andreadis et al. proposed a modified area-pixel scaling algorithm [8], which also exploits the luminosity differences among the source pixels to obtain better edge preservation. The area-pixel
scaling technique [4] is approximated and implemented with
the proper and low-cost VLSI circuit in our design. The
proposed scaling processor can support floating-point
magnification factor and preserve the edge features efficiently
by taking into account the local characteristic existed in those
available source pixels around the target pixel. Furthermore, it
handles streaming data directly and requires only small amount
of memory: one line buffer rather than a full frame buffer.

II. AREA-PIXEL SCALING TECHNIQUE

The source image represents the original image to be
scaled up/down and target image represents the scaled image.
The area-pixel scaling technique performs scale-up/scale-down
transformation [5] by using the area pixel model instead of the
common point model. Each pixel is treated as one small
rectangle but not a point; its intensity is evenly distributed in
the rectangle area. Figure 1 shows the image scale-up process of the area-pixel model, where a source image of 4×4 pixels is scaled up to a target image of 5×5 pixels. The area of a target pixel is less
than that of a source pixel. A window is put on the current
target pixel to calculate its estimated luminance value. The
number of source pixels overlapped by the current target pixel
window is one, two, or a maximum of four.
Let the luminance values of the four source pixels overlapped by the window of the current target pixel at coordinate (k, l) be denoted as F(m,n), F(m+1,n), F(m,n+1) and F(m+1,n+1) respectively. The estimated value of the current target pixel, denoted as FT(k, l), can be calculated by weighted averaging the luminance values of the four source pixels with their area coverage ratios as

FT(k, l) = Σi Σj [ FS(m+i, n+j) · W(m+i, n+j) ],  i, j ∈ {0, 1}   (1)


Where, W( m, n ), W(m+1,n), W(m,n+1), and W(m+1,n+1)
represent the weight factors of a neighboring source pixels for

Fig. 1. Image enlargement using typical area-pixel model. (a) A source image
of 4*4 pixels. (b) A target image of 5*5 pixels. (c) Relations of the target pixel
and source pixels.

the current target pixel at (k, l). Assume that the regions of
four source pixels overlapped by current target pixel window
are denoted as, A(m, n),A(m+1,n),A(m,n+1) and A(m+1,n+1) ,
respectively, and the area of the target pixel window is denoted
as A sum. The weight factors of four source pixels can be
given as

[W(m, n),W(m+1,n),W(m,n+1),W(m+1,n+1) =

[A (m, n)/A
sum
. A (m+1, n)/A
sum .
A (m, n+1)/A
sum .

Am+1, n+1)/ A
sum
]
(2)
Where,
A sum =A (m, n) + A (m+1, n) + A (m, n+1) +A (m+1, n+1)
(3)
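A small sketch of equations (1)-(3) for a single target pixel is shown below, assuming the four overlap areas have already been determined from the window geometry; the numbers in the usage example are arbitrary.

import numpy as np

def target_pixel_value(src_vals, overlap_areas):
    # src_vals: luminance of the up-to-four overlapped source pixels
    #           F(m,n), F(m+1,n), F(m,n+1), F(m+1,n+1).
    # overlap_areas: corresponding overlap areas A(.,.) with the target window.
    src = np.asarray(src_vals, dtype=float)
    areas = np.asarray(overlap_areas, dtype=float)
    a_sum = areas.sum()                    # equation (3)
    weights = areas / a_sum                # equation (2): area coverage ratios
    return float(np.dot(src, weights))     # equation (1): weighted average

if __name__ == "__main__":
    # The target window mostly covers the top-left source pixel here.
    print(target_pixel_value([120, 80, 90, 60], [0.5, 0.2, 0.2, 0.1]))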

The main difficulty in implementing the area-pixel scaling in hardware is that it requires a lot of extensive and complex computations. A total of 13 additions, eight multiplications, and one division in floating point are required to calculate one target pixel. The precision of those floating-point operations is set to 30 b. To obtain lower hardware cost, we adopt an approximate technique to reduce implementation complexity and to improve scaling speed. Most variables are represented with 4-, 6-, or 8-b unsigned integers in our low-cost scaling processor. Furthermore, the typical area-pixel scaling method needs to calculate six coordinate values for each target pixel's estimation.
To reduce the computational complexity, we employ
an alternate approach suitable for VLSI implementation to
determine those necessary coordinate values efficiently and
quickly. The direct implementation of area-pixel scaling
requires some extensive floating-point computations [13] for
the current target pixel at (k, l) to determine the four
parameters, left (k, l), right (k, l), top (k, l) and bottom (k, l) .In
the proposed processor, we use an approximate technique
suitable for low-cost VLSI implementation and calculation of
areas of the overlapped regions as implementation to achieve
that goal properly and implement

[A(m,n), A(m+1,n), A(m,n+1), A(m+1,n+1)] =
[left(k,l)·top(k,l), right(k,l)·top(k,l), left(k,l)·bottom(k,l), right(k,l)·bottom(k,l)]   (4)

Those left (k, l), top (k, l), right (k, l) and bottom (k, l) are all
6-b integers and given as

[left (k, l), top (k, l), right (k, l), bottom (k, l)]=

Appr [left (k, l), top (k, l), right (k, l), bottom (k, l)]
(5)

To obtain better visual quality, a simple low-cost
edge catching technique is employed to preserve the edge
features effectively by taking into account the local
characteristic existed in those available source pixels around
the target pixel. The final areas of the overlapped regions are
given as

[A(m, n),A(m,n+1),A(m+1,n+1),A(m+1,n+1) =

([A(m, n),A(m+1,n),A(m,n+1),A(m+1,n+1)

(6)

III. METHODOLOGIES

A) The Low-Cost Edge-Catching Technique

In the edge oriented area pixel scaling processor, an approximate technique suitable for low-cost VLSI implementation is used. To obtain better visual quality, a simple low-
cost edge catching technique is employed to preserve the edge
features effectively by taking into account the local
characteristic existed in those available source pixels around
the target pixel. To describe the low cost edge catching
technique in detail we have to discuss the approximate method
and edge catching technique.

a) The Approximate Technique

Here a source image of SW×SH pixels is scaled up to the target image of TW×TH pixels [5], and every pixel is treated as one rectangle. The centers of the four corner rectangles in the source and target images are aligned. For simple hardware implementation, each rectangular target pixel is treated as grids of uniform size. Assume that the width and the height of the target pixel window are denoted as Win_w and Win_h. Then the area of the current target pixel window A_SUM can be calculated as Win_w × Win_h. In the case of image enlargement, Win_w = 2^n when 100% <= mf_w <= 200%. When 200% < mf_w < 400%, Win_w is enlarged to 2^(n+1), and so on. In the case of image reduction, Win_w is reduced to 2^(n-1) when 50% <= mf_w < 100%. In a similar way, we can determine the value of Win_h by using mf_h. In the design, the division operation can be implemented simply with a shifter.
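A minimal sketch of how the target-window width could be selected from the horizontal magnification factor, as described above, is given below. The ranges follow the text, while the base exponent n is left as a parameter because its value is not restated here. Since the widths are powers of two, the division by A_SUM in the weight computation reduces to a simple shift, which is the point made in the last sentence above.

def window_width(mf_w, n=4):
    # Return the target-window width Win_w, a power of two chosen from the
    # horizontal magnification factor mf_w (e.g. 1.5 means 150%).
    if 1.0 <= mf_w <= 2.0:
        return 2 ** n                 # moderate enlargement
    if 2.0 < mf_w < 4.0:
        return 2 ** (n + 1)           # larger enlargement widens the window
    if 0.5 <= mf_w < 1.0:
        return 2 ** (n - 1)           # reduction narrows the window
    raise ValueError("magnification factor outside the ranges given in the text")

if __name__ == "__main__":
    print(window_width(1.5), window_width(2.5), window_width(0.75))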

b) Edge Catching Technique

In the design, we take the sigmoid signal [6] as the
image edge model for image scaling. Assume that the pixel to
be interpolated is located at coordinate k and its nearest
available neighbors in the image are located at coordinate m
for the left and m+1 for the right.


(a) (b)

Fig. 2. Local characteristics of the data in the neighborhood of k. (a)
An image edge model. (b) The local Characteristics.

Let s = k-m and E (k) represent the luminance value
of the pixel at coordinate k. If the estimated value of the pixel
E(k) to be interpolated is determined by using linear
interpolation, it can be calculated as

E (k) = (1-s) x E (m) + s x E (m+1)
(7)

An evaluating parameter L is defined [11] to estimate
the local characteristic of the data in the neighborhood of k. It
is given as

L = | E(m+1) - E(m-1) | - | E(m+2) - E(m) |   (8)

If the image data are sufficiently smooth and the luminance changes at object edges can be approximated by sigmoidal functions, we can come to the following conclusions. L = 0 indicates symmetry, so s is unchanged. L > 0 indicates that the variation between E(m+1) and E(m-1) is quicker than that between E(m+2) and E(m). It means that the edge is more homogeneous on the right side, so the pixel located at coordinate m+1 should affect the interpolated [12] value more than the pixel located at coordinate m does. Hence, we can increase s in order to make the estimated value closer to the expected value. On the contrary, L < 0 indicates the edge is more homogeneous on the left side. Thus, we must decrease s to obtain a better estimation. Only a small amount of operations is required to catch the local characteristic of the current pixel.
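A rough sketch of the edge-catching idea for the 1-D case follows: linear interpolation whose fractional position s is nudged according to the sign of L from equation (8). The step size used to adjust s is an assumption; the processor described in the paper works with low-precision fixed-point variables rather than the floating-point arithmetic shown here.

def edge_catching_interp(E, m, s, delta=0.125):
    # E: 1-D sequence of pixel values with valid indices m-1 .. m+2.
    # s: fractional position of the pixel to interpolate between m and m+1.
    L = abs(E[m + 1] - E[m - 1]) - abs(E[m + 2] - E[m])   # equation (8)
    if L > 0:
        s = min(1.0, s + delta)   # edge more homogeneous on the right: move toward m+1
    elif L < 0:
        s = max(0.0, s - delta)   # edge more homogeneous on the left: move toward m
    return (1.0 - s) * E[m] + s * E[m + 1]                # equation (7) with adjusted s

if __name__ == "__main__":
    signal = [10, 10, 12, 200, 205, 205]
    print(edge_catching_interp(signal, m=2, s=0.5))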


IV. RESULTS

The input image used is the LENA image of figure3. The
simulated output is obtained by MATLAB 7.8 software for
scaled up and scaled down images. The performance parameter
is obtained by plotting the SNR curve.




Fig.3. Input - Lena Image




Fig.4. Output - Up sampled image



Fig.5. Output - Down sampled image



Fig. 6. PSNR of up sampled image



Fig.7. PSNR of down sampled image


Optimized area and power of the design can be
obtained by working with SYNOPSYS tool. The schematic
view of the design is shown in Fig. 8.



Fig.8.Schematic View of the work

The functionality using SYNOPSYS tool Verilog
Compiled Simulator (VCS) is shown in Fig.9.




Fig.9.Functionality of the work

The performance using SYNOPSYS tool Design
Compiler (DC) is shown in Fig.10 and in Fig.11. The area and
power are measured for different clock frequencies using this
tool. Area and power for 10MHz and 100MHz clock
frequencies are measured here. The power is increased
correspondingly while increasing the clock frequency.
Similarly the area also decreased accordingly.
This algorithm can be modified by replacing all the
multipliers by modified booth multiplier and can be compared
with various scaling algorithms.


Fig.10. Area report using DC





Fig.11. .Power report using DC

The following table shows the performance
comparisons for different clock frequencies.

Table I. Performance comparisons

Performance            10 MHz clock frequency   100 MHz clock frequency
Total area             38738.086894 µm²         18054.611309 µm²
Total dynamic power    29.5117 µW               208.0457 µW



V. CONCLUSION

A simple edge catching technique is adopted to
preserve the image edge features effectively so as to achieve
better image quality. Compared with the previous low-
complexity techniques, the edge oriented area pixel scaling
method performs better in terms of both quantitative
evaluation and visual quality. It shows better performances in
both objective and subjective image quality than other low-
complexity scaling methods.
To evaluate the performance of the image-scaling algorithm, for each single test image we reduced/enlarged the original image by using the well-known bilinear method, and then employed various approaches to scale the bilinear-scaled image back up/down to the size of the original test image. Thus, the image quality of the reconstructed images can be compared for various scaling methods. Three well-known scaling methods, nearest neighbor (NN), bilinear (BL), and bicubic (BC), two area-pixel scaling methods, Win scale and M Win (modified win scale), and the edge oriented area pixel scaling method are used for comparison [7]-[10].
The MATLAB implementation of the paper is
explained here. The VLSI implementation is performed with
low complexity and edge preservation. The algorithm is finally
implemented using SYNOPSYS design vision. The
performance is measured using Design Compiler (DC) tool.
The functionality is measured using Verilog Compiled
Simulator (VCS) tool.


REFERENCES


[1] Pei-Yin Chen, Member, IEEE, Chih-Yuan Lien, and Chi-Pin Lu,VLSI
Implementation of an edge oriented image scaling processor, IEEE
Transactions on VLSI systems, vol.17, no.9, Sep 2009.
[2] R. C. Gonzalez and R. E.Woods, Digital Image Processing. Reading, MA:
Addison-Wesley, 1992.
[3] J. A. Parker, R. V. Kenyon, and D. E. Troxel, Comparison of
interpolation methods for image resampling, IEEE Trans. Med. Image,
vol. MI-2, no.3, pp. 3139, Sep. 1983.
[4] C. Kim, S. M. Seong, J. A. Lee, and L. S. Kim, Win scale: An image
scaling algorithm using an area pixel model, IEEE Trans. Circuits
Syst.Video Technol. vol. 13, no. 6, pp. 549553, Jun. 2003.
[5] H. A. Aly and E. Dubois, Image up-sampling using total variation
regularization with a new observation model,, IEEE Trans. Image
Process., vol. 14, no. 10, pp. 16471659, Oct. 2005.
[6] G.Ramponi Warped distance for space-variant linear image interpolation,
IEEE Trans. Image Process., vol. 8, no.5, pp. 629639, May 1999.
[7] M. A. Nuno-Maganda and M. O. Arias-Estrada, Real-time FPGA
based architecture for bicubic interpolation: An application for digital
image scaling, in Proc. IEEE Int. Conf. Reconfigurable Computing
FPGAs, 2005.
[8] I. Andreadis and A. Amanatiadis, Digital image scaling, in Proc IEEE
Instrum. Meas. Technol. Conf., May 2005, vol. 3, pp. 2028 2032.
[9] H. S. Hou and H. C. Andrews, Cubic splines for image interpolation and
digital filtering, IEEE Trans. Acoust Speech Signal Process.,
vol.ASSP-26, no. 6, pp. 508 517, Dec. 1978.
[10] J. K. Han and S. U. Baek, Parametric cubic convolution scalar for
enlargement and reduction of image, IEEE Trans.Consumer
Electron.vol. 46, no. 2, pp. 247256, May 2000.
[11] L. J.Wang, W. S. Hsieh, and T. K. Truong, A fast computation of 2-D
cubic-spline interpolation, IEEE Signal Process. Lett. vol. 11, no. 9,pp.
768771, Sep. 2004.
[12] T.Feng, W.L.Xie, and L.X.Yang, An architecture and implementation of
image scaling conversion, in proc. IEEE Int. Conf. Appl. Specific
Integr. Circuits, 2001, pp. 409410.
[13] M. A. Nuno-Maganda and M.O.Arias-Estrada, Real-time FPGA based
architecture for bicubic interpolation: An application for digital image
scaling, in Proc. IEEE Int. Conf. Reconfigurable Computing FPGAs,
2005, pp. 811.




Modified Squeeze Box Filter for Despeckling
Ultrasound Images

Blessy Rapheal M¹, J.A Laxminarayana², V.K Joseph³
Department of Electronics and Telecommunication Engineering
Goa Engineering College
Goa, India
¹blessyrose2000@gmail.com, ²jal@gec.ac.in, ³vkj@gec.ac.in


Abstract Images produced by ultrasound systems are adversely
hampered by a stochastic process known as speckle. The speckle
noise is due to interference between coherent waves which are
backscattered by targeted surfaces and arrive out of phase at the
sensor. This hampers the perception and the extraction of fine
details from the image. Speckle reduction/filtering is used for enhancing the visual quality of the images. A despeckling method based upon removing outliers is proposed. The method is aimed at decreasing pixel variations in homogeneous regions while maintaining or improving differences in mean values of distinct regions.
with the other well known despeckling filters like lee filter,
wiener filter and anisotropic diffusion filter. The evaluations of
despeckling performance are based upon improvements to
contrast enhancement, structural similarity and Signal to Noise
Ratio.

Keywords- Image processing, Speckle noise, Squeeze box filter,
Ultrasound image.
I. INTRODUCTION
An accurate anatomical display of organs such as the heart,
kidney, prostate, liver, etc., would be a beneficial aid to health
practitioners in diagnosing ailments or assessing the health of
that organ. Although there are a wide range of medical
imaging modalities that could be utilized, the use of
ultrasound is advantageous for the following reasons: only
safe nonionizing sound waves are used in the scanning
process; the portability of the hardware; cost is inexpensive
when compared to other medical imaging modalities.
Although there are many advantages of using ultrasound as an
imaging modality, the images produced are hampered by a
phenomenon known as speckle. Speckle is visible in all
ultrasound images as a dense granular noise or snake-like
objects that appear throughout the image. Unfortunately, the
presence of speckle does adversely affect image contrast, edge
detection, and segmentation. Mathematically there are two
basic models of Noise; additive and multiplicative. Additive
noise is systematic in nature and can be easily modeled and
hence removed or reduced easily. Whereas multiplicative
noise is image dependent, complex to model and hence
difficult to reduce.
Speckle noise is the primary factor that limits the contrast
resolution in diagnostic ultrasound imaging, thereby limiting
the detectability of small low-contrast lesions and making the
ultrasound images difficult to interpret. Speckle also limits the
effective application (e.g., edge detection) of automated
computer-aided analysis algorithms. It is caused by the
interference between ultrasound waves reflected from
microscopic scattering through the tissue. Therefore, speckle
is most often considered a dominant source of noise in
ultrasound imaging and should be filtered out without
affecting important features of the image.

II. THEORETICAL BACKGROUND
The experimentation performed in [1] provides evidence that the intensity or pre-logarithm compressed envelope detection amplitudes J(n,m) can be modeled with the multiplicative model given as

J(n,m) = ( P(n,m) * I(n,m) ) · η×(n,m)   (1)

where the multiplicative noise η×(n,m) is sample-wise independent and uncorrelated with the ideal image pixel value I(n,m), and P(n,m) is the point spread function (PSF) of the ultrasound imaging system. The multiplicative model can be made equivalent to the additive model

J(n,m) = ( P(n,m) * I(n,m) ) + η+(n,m)   (2)
A. Wiener Filter
Suppose the detected image is given as the additive model in equation (2), where the PSF P is known or estimated. Applying the Wiener filter as in [9] to the detected image J results in an ideal low-pass filtered version of the ideal image I polluted with the noise component η+(n,m), provided that the PSF P is a low-pass filter. The Wiener filtered image is determined as

Î(n,m) = IDFT{ H(k,l) Ĵ(k,l) }   (3)

where p̂ denotes the 2-D DFT of p (p and p̂ are discrete Fourier transform pairs), likewise for J, I and η+, and

H(k,l) = P̂*(k,l) / ( |P̂(k,l)|² + σ_η² )   if P̂(k,l) ≠ 0,  and 0 otherwise   (4)

where σ_η² is the variance of the noise. The fortuitous side of
applying the Wiener filter is that edges are enhanced.
Unfortunately, the Wiener filter indiscriminately enhances the
speckle present within a homogeneous region. It should be
noted that the Wiener filter is applied in the DFT domain. This
requires a priori knowledge of the PSF, which for ultrasound
images is spatially varying.
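As a point of comparison, a standard frequency-domain Wiener deconvolution sketch in the spirit of equations (3)-(4) is given below. The regularized form used here, and the constant chosen for the noise variance, are assumptions rather than the exact filter of [9].

import numpy as np

def wiener_restore(J, psf, noise_var=1e-2):
    # Restore an image J degraded by convolution with psf plus additive noise,
    # using the classic Wiener filter in the DFT domain.
    P = np.fft.fft2(psf, s=J.shape)                 # zero-padded PSF spectrum
    Jhat = np.fft.fft2(J)
    H = np.conj(P) / (np.abs(P) ** 2 + noise_var)   # Wiener transfer function
    return np.real(np.fft.ifft2(H * Jhat))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
    psf = np.ones((5, 5)) / 25.0                    # simple blur kernel
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
    noisy = blurred + 0.05 * rng.standard_normal(img.shape)
    print(wiener_restore(noisy, psf).shape)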
B. Lee Filter
Lee in [2] proposed methods to contrast enhance an image and to restore an image corrupted by noise. Lee's noisy image models in [2] are

J(n,m) = I(n,m) + η(n,m)   (5)

for the additive noise, and

J(n,m) = I(n,m) · η(n,m)   (6)

for multiplicative noise. In addition to proposing a method to
contrast enhance an image, his paper proposes an adaptive
filter to aggressively smooth, via local averaging, in
homogeneous regions while regions which contain significant
image features such as edges are to be left unmolested.
To enable contrast enhancement, at each pixel Lee's algorithm used a linear rescaling of the local mean summed with a gain applied to the difference between the pixel value and the local mean to determine the new pixel value. Formally, let J(n,m) be the original pixel value of some image J; then the new pixel value Î(n,m) is set as

Î(n,m) = g(μ) + K ( J(n,m) - μ )   (7)

where μ is the local mean. The function g(μ) is a linear rescaling of the mean, that is g(μ) = aμ + b, where the parameters a and b were determined to allow the new pixel value to utilize the full eight-bit dynamic range. As pointed out in [2], if 0 ≤ K ≤ 1, then (7) determines a smoothing filter, i.e., a low pass filter. When the gain K is greater than one, then (7) attempts to sharpen image features, that is, to enhance the edges.
If the image is determined or assumed to be polluted with zero mean additive white noise, then the gain K is adaptively chosen as a function of the local statistics. The new pixel value Î(n,m) is set as

Î(n,m) = μ + K ( J(n,m) - μ )   (8)

where μ is the mean in some window and

K = ( σ² - σ_η² ) / σ²   (9)

where σ² is the local variance in the same window about J(n,m) that determined μ, and σ_η² is the global noise variance. When σ² >> σ_η² ≥ 0, the gain parameter is less than but approximately one, in which case the filter in (8) performs like an identity filter, that is Î(n,m) ≈ J(n,m). If the local variance σ² is greater than but nearly equal to the global noise variance σ_η², then Î(n,m) ≈ μ and the filter specified by (8) serves as a low pass filter. Since the global noise variance can only be greater than or equal to zero, the gain parameter K can never exceed one. Thus, in homogeneous regions K should be set to zero and (8) provides local smoothing. When J(n,m) is determined to be an edge pixel, then K should be set to one and the pixel is left undeteriorated.
When the image is determined or assumed to be degraded by multiplicative noise as in (6), Lee in [2] approximates the image by an additive model of the form

Ĵ(n,m) = A I(n,m) + B η(n,m) + C    (10)

where A, B, C ∈ R are chosen so that the mean square error between J(n,m) of the multiplicative model in equation (6) and Ĵ(n,m) of the additive model in equation (10) is minimized.
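As a concrete illustration of (8) and (9), the following is a minimal sketch of the adaptive Lee filter for additive noise. The square window size and the fallback noise-variance estimate (median of the local variances) are illustrative assumptions of this sketch, not values taken from [2] or from this paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7, noise_var=None):
    """Adaptive Lee filter for additive noise, in the spirit of (8)-(9).

    img       : 2-D float array, the noisy image J
    win       : side length of the square local window (illustrative choice)
    noise_var : global noise variance; if None, estimated here as the
                median of the local variances (an assumption of this sketch)
    """
    img = np.asarray(img, dtype=np.float64)
    mean = uniform_filter(img, size=win)                  # local mean, mu
    mean_sq = uniform_filter(img * img, size=win)
    var = np.maximum(mean_sq - mean * mean, 1e-12)        # local variance, sigma^2
    if noise_var is None:
        noise_var = np.median(var)
    gain = np.clip((var - noise_var) / var, 0.0, 1.0)     # K of (9), kept in [0, 1]
    return mean + gain * (img - mean)                     # eq. (8)
```

With this form of the gain, K is close to zero in homogeneous regions (local smoothing) and close to one near edges (identity), matching the behaviour described above.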
C. Speckle Reduction by Using Anisotropic Diffusion

Perona and Malik in [4] proposed the following nonlinear PDE for smoothing an image on a continuous domain:

∂I/∂t = div[ c(|∇I|) ∇I ],   I(t = 0) = I_0    (11)

where ∇ is the gradient operator, div the divergence operator, |·| denotes the magnitude, c(x) the diffusion coefficient, and I_0 the initial image. They suggested two diffusion coefficients,

c(|∇I|) = 1 / (1 + (|∇I|/k)²)   and   c(|∇I|) = exp[ -(|∇I|/k)² ]    (12)

where k is an edge magnitude parameter. In the anisotropic diffusion method, the gradient magnitude is used to detect an image edge or boundary as a step discontinuity in intensity. If |∇I| >> k, then c(|∇I|) → 0 and we have an all-pass filter; if |∇I| << k, then c(|∇I|) → 1 and we achieve isotropic diffusion.
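A minimal discretized sketch of the diffusion in (11)-(12), using the exponential conductance and a four-neighbour explicit update, is given below; the step size, iteration count and edge parameter k are illustrative values, not taken from [4].

```python
import numpy as np

def perona_malik(img, n_iter=20, k=0.1, step=0.2):
    """Perona-Malik anisotropic diffusion with c(g) = exp(-(g/k)^2)."""
    out = np.asarray(img, dtype=np.float64).copy()
    for _ in range(n_iter):
        # differences towards the four neighbours (periodic boundary for simplicity)
        north = np.roll(out, -1, axis=0) - out
        south = np.roll(out,  1, axis=0) - out
        east  = np.roll(out, -1, axis=1) - out
        west  = np.roll(out,  1, axis=1) - out
        # conductance c(|grad I|): ~1 in smooth areas, ~0 across strong edges
        cN, cS = np.exp(-(north / k) ** 2), np.exp(-(south / k) ** 2)
        cE, cW = np.exp(-(east  / k) ** 2), np.exp(-(west  / k) ** 2)
        # explicit update approximating div(c * grad I); step <= 0.25 for stability
        out += step * (cN * north + cS * south + cE * east + cW * west)
    return out
```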
By extending the PDE versions of the despeckle filters, Yu and Acton in [3] formulated a more general update function

I_{i,j}^{t+Δt} = I_{i,j}^{t} + div[ c(C_{i,j}^{t}) ∇I_{i,j}^{t} ]    (13)

where C_{i,j}^{t} is the coefficient of variation and c(·) is a bounded nonnegative decreasing function of it. The discretized version of the coefficient of variation that is applicable to PDE evolution is called an instantaneous coefficient of variation. First, the local variance estimate of the intensity in a neighbourhood η_{i,j} is written as

σ_{i,j}² = (1/|η_{i,j}|) Σ_{p ∈ η_{i,j}} ( I_p - Ī_{i,j} )²    (14)

The coefficient of variation is then

C_{i,j}² = σ_{i,j}² / Ī_{i,j}²    (15)

and the instantaneous coefficient of variation q (16) is formed from a normalized gradient magnitude |∇I|/I and a normalized Laplacian ∇²I/I, as defined in [3]. It combines a normalized gradient magnitude operator and a normalized Laplacian operator to act like an edge detector for speckled imagery; a high relative gradient magnitude and a low relative Laplacian tend to indicate an edge.

III. MATERIALS AND METHODS
A. Modified Squeeze Box Filter

1) The algorithm is initialized with the original speckled image, Ĵ⁰ = J.
2) Speckle noise with a small variance is initially and periodically added to the current image, Ĵ(n,m) ← Ĵ(n,m) + η(n,m).
3) An iteration i begins by determining the set of locations of the local extrema of Ĵ^(i-1). The locations of these extrema are defined by the set

E = { n | Ĵ^(i-1)(n) meets condition 1 or 2 }

Condition 1 (local maximum): Ĵ^(i-1)(n) > Ĵ^(i-1)(m) for every neighbouring location m.
Condition 2 (local minimum): Ĵ^(i-1)(n) < Ĵ^(i-1)(m) for every neighbouring location m.

Here m = n ± 1 for 1-D signals and m = (m1, m2) = (n1 ± 1, n2 ± 1) for 2-D images. An illustration using a signal of fifty samples is shown in Fig. 1. Fig. 1(a) shows the ideal noise-free signal in green and the noisy signal (the ideal signal with speckle noise added) in black. The red upward-pointing and blue downward-pointing triangles denote the local peaks and valleys of the noisy signal, respectively. These local extrema are considered outliers.
4) Without using the local extrema values, the algorithm replaces each extremum with the local mean taken from the neighbouring samples. For all n ∈ E,

Ĵ^(i)(n) = (1/|N_n|) Σ_{m ∈ N_n} Ĵ^(i-1)(m)

where N_n is some local neighbourhood of n (excluding n itself) and |N_n| is the cardinality of the set N_n. In the 1-D illustration shown in Fig. 1, the local maxima and local minima marked in Fig. 1(a) are replaced with the corresponding local averages, depicted as red or blue circles in Fig. 1(b); that is, each marked value in Fig. 1(a) is replaced with the corresponding circled value in Fig. 1(b). This produces the reduced-local-variance red signal shown in Fig. 1(c).
5) If convergence has not yet been attained, that is, the change between successive iterates exceeds some predefined ε > 0, and the total number of iterations has not been met, then the process is iterated from either step 2 or step 3, depending upon whether more zero-mean small-variance noise should be added. If the total number of iterations is met, then the filtering process is terminated.
By removing outliers at each iteration, this method reduces the
local variance of the signal/image. In effect, this method
produces a converging sequence of signals or images by
squeezing or compressing the stochastically distributed pixel
values to some limiting value. Since the proposed filtering
method described is able to compress the distribution of pixel
values, the proposed method is named the squeeze box filter
(SBF).
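The following is a minimal sketch of one possible implementation of steps 1-5 above, using a 3×3 neighbourhood and a fixed iteration count in place of the convergence test; the neighbourhood size, iteration count and the non-strict extremum test are simplifications made for this sketch, not choices made by the authors.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def squeeze_box_filter(img, n_iter=30, noise_std=0.0, seed=None):
    """Sketch of the squeeze box filter: at every iteration the local extrema
    (outliers) are replaced by the mean of their 3x3 neighbours."""
    rng = np.random.default_rng(seed)
    out = np.asarray(img, dtype=np.float64).copy()
    for _ in range(n_iter):
        if noise_std > 0.0:                     # step 2: optional small-variance noise
            out = out + rng.normal(0.0, noise_std, out.shape)
        # step 3: locate local maxima/minima (non-strict test, a simplification)
        is_max = out >= maximum_filter(out, size=3)
        is_min = out <= minimum_filter(out, size=3)
        extrema = is_max | is_min
        # step 4: local mean of the 3x3 neighbourhood excluding the centre pixel
        nbr_mean = (uniform_filter(out, size=3) * 9.0 - out) / 8.0
        out = np.where(extrema, nbr_mean, out)
    return out
```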


Fig. 1 Illustration of one iteration of the proposed despeckling method. (a) Ideal (green) and noisy (black) signals; the upward and downward triangles mark the local maxima and minima of the noisy signal. (b) The extrema in (a) are replaced with the local means (shown as circles). (c) Resulting (red) signal after one iteration.
IV. IMAGE QUALITY EVALUATION METRICS

The performance of each filter is evaluated quantitatively
for ultrasound image with speckle noise using the quality
metrics like Root Mean Square Error (RMSE), Signal to Noise
Ratio (SNR), Peak Signal to Noise Ratio (PSNR), and
Structural Similarity Index (SSI).

1) SNR: Signal to Noise Ratio (SNR) compares the level
of desired signal to the level of background noise. The higher
the ratio, the less obtrusive the background noise is. The larger
SNR values correspond to good quality image.

SNR = 10 log10( σ_g² / σ_f² )

where σ_g² is the variance of the original image and σ_f² is the variance of the filtered image.

2) RMSE: The Root Mean Square Error (RMSE) is the square root of the squared error averaged over an M×N window:

RMSE = sqrt( (1/(MN)) Σ_n Σ_m ( J(n,m) - Ĵ(n,m) )² )

3) PSNR: The Peak Signal-to-Noise Ratio (PSNR) is computed using

PSNR = 20 log10( I_max / RMSE )

where I_max is the peak intensity value of the image. The PSNR is higher for a better transformed image.
4) SSI: The Structural Similarity Index between two images is computed, as in [7], from the means, variances and covariance of the original and filtered images. Here σ_x² and μ_x are the variance and mean of the original image, and σ_y² and μ_y are the variance and mean of the filtered image. The SSI lies between -1 for bad and 1 for good similarity between the original and despeckled images.
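A small sketch of how the quality metrics above might be computed for a reference image and a filtered image follows; the use of image variances for SNR follows the definition stated here, and the peak intensity used in PSNR is taken as the maximum of the reference image, which is an illustrative assumption.

```python
import numpy as np

def rmse(ref, filt):
    """Root mean square error over the whole M x N image."""
    ref, filt = np.asarray(ref, float), np.asarray(filt, float)
    return np.sqrt(np.mean((ref - filt) ** 2))

def snr_db(ref, filt):
    """SNR as 10*log10 of the ratio of the image variances, per the definition above."""
    return 10.0 * np.log10(np.var(np.asarray(ref, float)) / np.var(np.asarray(filt, float)))

def psnr_db(ref, filt):
    """PSNR = 20*log10(peak / RMSE); the peak is assumed to be max(ref)."""
    return 20.0 * np.log10(np.max(np.asarray(ref, float)) / rmse(ref, filt))
```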

TABLE I. COMPARISON OF PERFORMANCE OF DIFFERENT FILTERS

Sl. No.  Filtering method   SNR       RMSE     PSNR      SSI
1        Wiener             19.2936   0.0805   24.8590   0.4259
2        Lee                11.2310   0.0222   16.0420   0.0957
3        SRAD               25.5392   0.0100   31.0667   0.5725
4        SBF                37.5198   0.0394   43.0103   0.7689
V. RESULTS AND DISCUSSION
The evaluation in this paper provides evidence that
despeckling Ultrasound images to promote robust contrast
enhancement is possible with the adaptive Lee filter, the
Wiener method, the SRAD method, and the SBF method. The
ultrasound image of fetus is taken as a test image. Speckle
noise of variance 0.04 has been added synthetically to the test
image. To investigate the effectiveness of the different speckle
reduction methods, they are applied to the image corrupted
with Speckle noise. Simulations are carried out in MATLAB. The evaluation was based upon the SNR, RMSE, PSNR and SSI, as shown in Table I. The SSI map of the Wiener-despeckled simulated image exhibited better structural similarity than that of the Lee filter. Both SRAD and SBF were able to attain excellent contrast enhancement and a high SSI index. The overall performances are plotted in Fig. 3. The original ultrasound image and the filtered images of a 12-week fetus obtained by the various filtering techniques are shown in Fig. 2. For a better enhancement approach, the RMSE should be low and the PSNR and SNR values should be high.

VI. CONCLUSION

The removal or reduction of speckle while preserving or
enhancing edge information of an ultrasound image is a
challenging task. An evaluation of various despeckling
algorithms is presented. In contrast, an iterative despeckling
method, SBF, that at each iteration smoothes only outlying
pixel values, is proposed. Simulation showed that the
structural similarity performance of the SBF method is
superior to other filter methods like Wiener, Lee and SRAD.








Fig. 3. Filter performance in terms of RMSE, SNR, PSNR and SSI.

ACKNOWLEDGMENT
The authors would like to thank Shri Vivek Kamat, Principal,
Goa College of Engineering, and Dr R B Lohani Professor &
Head, Department of Electronics and Telecommunication
Engineering, Goa College of Engineering, for all the valuable
advice, help and support.



Fig. 2. (a) Original (grayscale) ultrasound image of fetus; (b) noisy image; (c) despeckling using the Wiener filter; (d) despeckling using the Lee filter; (e) despeckling using SRAD; (f) despeckling using SBF.
REFERENCES

[1] R. F. Wagner, S. W. Smith, J. M. Sandrik, and H. Lopez, "Statistics of speckle in ultrasound B-scans," IEEE Trans. Sonics Ultrason., vol. SU-30, no. 3, pp. 156-163, May 1983.

[2] J. S. Lee, "Digital image enhancement and noise filtering by use of local statistics," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-2, no. 2, pp. 165-168, Mar. 1980.

[3] Y. Yu and S. T. Acton, "Speckle reducing anisotropic diffusion," IEEE Trans. Image Process., vol. 11, no. 11, pp. 1260-1270, Nov. 2002.

[4] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 7, pp. 629-639, Jul. 1990.

[5] E. Abreu, M. Lightstone, S. K. Mitra, and K. Arakawa, "A new efficient approach for the removal of impulse noise from highly corrupted images," IEEE Trans. Image Process., vol. 5, no. 6, pp. 1012-1025, Jun. 1996.

[6] P. C. Tay, S. T. Acton, and J. A. Hossack, "Ultrasound despeckling using an adaptive window stochastic approach," IEEE International Conference on Image Processing.

[7] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, Apr. 2004.

[8] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley Publishing Company, 2002.

[9] T. Kailath, "Equations of Wiener-Hopf type in filtering theory and related applications," in Norbert Wiener: Collected Works, vol. III, P. Masani, Ed., Cambridge.

[10] S. T. Acton, "De-convolution speckle reducing anisotropic diffusion," IEEE International Conference on Image Processing, 2005.

[11] C. P. Loizou and C. S. Pattichis, Despeckle Filtering Algorithms and Software for Ultrasound Imaging.

[12] A. K. Jain, Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.

Interference Management Of Femtocells
Mr. N. Umapathi (1), S. Sumathi (2)

(1) Assistant Professor, Dept. of ECE, G.K.M. College of Engineering and Technology. Email: nrumapathi@yahoo.co.in
(2) Student, M.E. (Communication Systems), G.K.M. College of Engineering and Technology, Chennai. Email: sumathi_mano@rediffmail.com
Abstract:- Cellular service is far superior in areas of high
population density compared to scarcely populated areas.
The initial cellular systems were designed for a single
application, voice, but today with the advent of 3G and 4G,
users expect very high data rates and reliable
communication. Future generation networks aim at deployments with low cost and low power consumption. There is a need for future communication systems to be designed in a different way; hence the motivation to move towards smaller cellular base stations in home and small business environments, called femtocells, that operate in a licensed spectrum. There are technical problems like interference management, frequency allocation, coverage area and outage probability due to the mass deployment of femtocells. We propose a novel solution to mitigate the interference for femtocell deployment.

Keywords: Femtocells, Macrocells, interference
management, frequency allocation, coverage area, and
outage probability.


I. INTRODUCTION
A recent research report envisions a very large market potential for femtocells and estimates that by this year there will be 100 million users of femtocell products worldwide. It was found that currently 40% of mobile usage is outdoor, 30% at work, and another 30% at home, but in the future it is expected that outdoor usage will be reduced to 25%, with 75% of mobile usage at home and work (indoor). A femtocell is a home base station that ordinary subscribers can buy and set up themselves easily; this is a valuable way to increase capacity. A macrocell is a cell in a mobile phone network that provides radio coverage served by a high-power cellular base station (tower). Macrocells cannot provide good signal strength indoors with a high Quality of Service (QoS). The backhaul is provided by the broadband connection or Digital Subscriber Line (DSL), which poses serious challenges to the existing cellular network.

A femtocell is a small cellular base station, typically designed for use in a home or small business. It connects to the service provider's network via broadband (such as DSL or cable); current designs typically support 2 to 4 active mobile phones in a residential setting, and more in enterprise settings. The features of femtocells are operation in licensed spectrum, low power consumption, low cost, good quality of service, and higher bandwidth.

Fig.1 Connection using Femtocells

Femtocells are low-power wireless access points
that operate in licensed spectrum to connect standard mobile
devices to a mobile operators network using residential
DSL or cable broadband connections. A femtocell allows
service providers to extend service coverage indoors,
especially where access would otherwise be limited or
unavailable.

II. FEATURES OF FEMTOCELLS
Service Parity: Femtocells support the same voice
and broadband data services that mobile users are
currently receiving on the macrocell network. This
includes circuit-switched services such as text
messaging and various voice features.
Call Continuity: Femtocell networks are well-
integrated with the macrocell network so that calls
originating on either macrocell or femtocell
networks can continue when the user moves into or
out of femtocell coverage.
Security: Femtocells use the same over-the-air
security mechanisms that are used in macrocell
radio networks. But additional security capabilities
need to be supported to protect against threats that
originate from the Internet or through tampering
with the femtocell itself
Self-Installation & Simple Operational
Management: Femtocells are installed by end-
users. Therefore, the femtocell network
architecture must support an extremely simple
installation procedure with automatic configuration
of the femtocell and automated operational
management with zero-touch by the end-user.
Scalability: Femtocell networks can have
millions of access points. Therefore the
femtocell network architecture must be scalable
to grow into large networks while maintaining reliability and manageability.
III. FEMTOCELLS ARCHITECTURE
There are three network elements that are common to any
femtocell network architecture. These are:
Femtocell Access Point (FAP)
Security Gateway (SeGW)
Femtocell Device Management System (FMS)
Two other elements that are in all femtocell network
architectures are entities that enable connectivity to the
mobile operator core. Depending on the specific
architecture used for circuit switched calls, there can be
either a Femtocell Convergence Server (FCS) or a
Femtocell Network Gateway (FNG). This is also shown
in Figure 1. For packet calls, depending on the airlink
technology, there can be either a PDSN or xGSN
(GGSN/SGSN) in the core. In most cases, the PDSN /
xGSN are the same as those used for macro networks.


Fig.2 Femto Access Point connected to Internet
Femtocell Access Point is the primary node in a
femtocell network that resides in the user premises (e.g.,
home or office). The FAP implements the functions of
the base station and base station controller and connects
to the operator network over a secure tunnel via the
internet. In this model, the femtocell connects to a new
core network of the mobile operator that is based on the
SIP/IMS architecture. This is achieved by having the
femtocells behave towards the SIP/IMS network like a
SIP/IMS client by converting the circuit-switched 3G
signaling to SIP/IMS signaling, and by transporting the
voice traffic over RTP as defined in the IETF standards.

Fig. 3 SIP model-Femtocells Architecture


Fig.4 Legal model-Femtocells Architecture

IV. OFDMA vs. CDMA FEMTOCELLS

OFDMA Femtocells have been found as a good
solution not only to deal with indoor coverage problem but
also to manage the growth of traffic within macrocells. The
deployment of a large number of femtocells will impact
existing macrocell networks by affecting their capacity and
performance. Therefore to mitigate this impact, several
aspects of this new technology such as access methods,
frequency band allocation, timing and synchronization must
be investigated before FAPs become widely deployed.
Orthogonal frequency-division multiple access (OFDMA)
femtocells are more suitable than code division multiple
access (CDMA) ones, mainly due to its intracell interference
avoidance properties and its robustness to multipath. The
available spectrum is divided into orthogonal subcarriers
that are then grouped into subchannels in OFDMA. It works
as a multi-access technique by allocating different users to
different groups of orthogonal subchannels. While CDMA
can exploit channel variations only in the time domain,
OFDMA femtocells can exploit channel variations in both
frequency and time domains for the avoidance of
interference.



V.RADIO ISSUES

Interference:
Femtocells operate in licensed spectrum and may share the same spectrum as the macrocell or microcell layer, so interference can arise between them; managing it becomes more important as the number of deployed femtocells grows.
Lawful Interception:
Access point base stations, in common with all other public communications systems, are, in most countries, required to comply with lawful interception requirements.
Equipment Location:
In most countries the operator of a network is required to be able to show exactly where each base station is located and to provide the registered location of the equipment to the emergency services.
Quality of Service:
The uptake of femtocell services will
depend on the reliability and quality of both the
cellular operators network and the third-party
broadband connection, and the broadband
connection's subscriber understanding the
concept of bandwidth utilization by different
applications a subscriber may use.

VI. INTERFERENCE
The placement of a femtocell has a critical effect
on the performance of the wider network, and this is one
of the key issues to be addressed for successful
deployment. Because femtocells can use the same
frequency bands as the conventional cellular network,
there has been the worry that rather than improving the
situation they could potentially cause problems.
To date most of the deployments have used a separate
channel for the femto and the macrocell, but increasingly
carriers are using shared channels and are not
encountering problems. It is notable that the operators
which have spent the longest examining it are exactly
those who have the greatest confidence that today's
interference mitigation techniques work to deliver high
capacity and performance.

VII. SPECTRUM ALLOCATION
The different approaches that can be adopted to manage the OFDMA subchannels are shown in Fig. 5.

Orthogonal channel assignment completely eliminates cross-layer interference by dividing the licensed spectrum into two parts: a fraction of the subchannels is used by the macrocell layer, while another fraction is used by the femtocells. This approach is supported by companies such as Comcast, which have acquired additional spectrum to be used exclusively by their WiMAX femtocells. In orthogonal channel assignment, the spectrum allocation can be either static, depending on the geographic area, or dynamic, depending on the traffic demand and user mobility. This approach is inefficient in terms of spectrum reuse, though it is optimal from a cross-layer interference standpoint.
Co-channel assignment of macrocell and
femtocell layers seems more efficient and profitable for
operators, although far more intricate. If subchannel sharing
is done, then it can be either centralized or distributed. In the
centralized approach, to mitigate cross and co-layer
interference, there would be a central entity in charge of
intelligently telling each cell which subchannel to use. This
entity would need to collect information from the femtocells
and their users and use it to find an optimal or good solution
within a short period of time. But since the number of
femtocells is large, this makes optimization problem too
complex.

A distributed approach can also be used to mitigate cross- and co-layer interference, where each cell manages its own subchannels and thus performs a degree of self-organization. The distributed approach can be either cooperative or non-cooperative. In a non-cooperative approach, each femtocell plans its subchannels in such a way as to maximize the throughput and QoS for its own users. Furthermore, this is done independently of the effect its allocation might cause to the neighboring cells, even if it produces larger interference. The access to the subchannels then becomes opportunistic, and the method degenerates into a greedy one.
On the other hand, in a cooperative approach, each
FAP gathers information about neighboring femtocells and
may perform its allocation taking into account the effect it
would cause to neighbors. In this way, the average femtocell
throughput and QoS, as well as their global performance can
be locally optimized. A co-operative approach if setup
properly, will be very beneficial both in terms of resource
management and interference reduction. But it has the
disadvantage of requiring additional overhead or sensing
mechanisms to gather information about neighboring
femtocells and macrocells.


Fig.5 Sub Channel Allocation Technique

VIII. ANALYTICAL MODEL

A. Path Loss Model and Coverage

It is important to study coverage to analyze the
femtocell performance. To determine coverage, it is
necessary to use a path loss model. A model
recommended by the ITU-R for the path loss between
indoor terminals, known as ITU-R P.1238 has been used.
This model assumes an aggregate loss through furniture,
internal walls and doors represented by a power loss
exponent n that depends on the type of building.
The total path loss model (in decibels between isotropic antennas) is:

L50(r) = 20 log10(fc) + 10 n log10(r) + Lf(nf) - 28

The path loss L50(r) represents the median path loss at a distance r. The total loss has to be considered as the sum of this loss and an additional shadow fade margin, LFM. A further additional loss LW needs to be added to represent the outer wall of the building. Thus the total loss is given by

LT = L50(r) + LFM + LW

In the absence of co-channel interference, or when the femtocell is deployed in a remote location where no macrocell coverage is available, the maximum acceptable path loss required to deliver an adequate pilot channel signal quality Ec/N0 is given by:

Lmax = 10 log10[ (Pmax / NUE) ( PCPICH / (Ec/N0) - 1 ) ]

where Pmax is the maximum power transmitted by the femtocell, NUE is the user equipment receiver noise power, PCPICH is the proportion of the femtocell power allocated to the pilot channel, and Ec/N0 is expressed as a linear ratio.
The parameters that have been considered in
obtaining the coverage plot are shown in the table below.
PARAMETER             MEASURED VALUE
LFM                   7.7 dB
LW                    10 dB
fc                    1.5 GHz
Lf(nf)                9 dB
Ec/N0                 -16 dB
Lf(nf) (2nd floor)    19 dB
UE noise figure       7 dB
Path-loss exponent    n = 2
PCPICH                0.1

Table 1. Parameter table
Using the above parameters and equations, the coverage radius of a femtocell versus its maximum transmitted power is simulated in MATLAB and the result is shown in Fig. 6.
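As an illustration only, the following sketch inverts the link budget above to obtain the coverage radius as a function of the femtocell transmit power. The receiver noise power is derived from the 7 dB UE noise figure and an assumed 3.84 MHz noise bandwidth; these values, and the exact form of the budget, are assumptions of this sketch rather than the settings of the MATLAB simulation behind Fig. 6.

```python
import numpy as np

def coverage_radius_m(p_max_dbm, fc_mhz=1500.0, n=2.0, lf_db=9.0,
                      lfm_db=7.7, lw_db=10.0, p_cpich=0.1,
                      ecno_db=-16.0, ue_nf_db=7.0, bw_hz=3.84e6):
    """Coverage radius (m) obtained by setting Lmax = LT in the equations above.

    Assumptions: ITU-R P.1238 style loss with fc in MHz and r in metres;
    UE noise power = thermal noise over bw_hz plus the UE noise figure.
    """
    # UE receiver noise power in dBm: -174 dBm/Hz + 10*log10(BW) + noise figure
    n_ue_dbm = -174.0 + 10.0 * np.log10(bw_hz) + ue_nf_db
    p_over_n = 10.0 ** ((p_max_dbm - n_ue_dbm) / 10.0)        # Pmax / NUE (linear)
    ecno_lin = 10.0 ** (ecno_db / 10.0)                       # Ec/N0 target (linear)
    l_max_db = 10.0 * np.log10(p_over_n * (p_cpich / ecno_lin - 1.0))
    # Solve Lmax = 20*log10(fc) + 10*n*log10(r) + Lf - 28 + LFM + LW for r
    budget = l_max_db - (20.0 * np.log10(fc_mhz) + lf_db - 28.0 + lfm_db + lw_db)
    return 10.0 ** (budget / (10.0 * n))

# Example: coverage radius for a few assumed transmit powers (dBm)
for p in (0, 10, 20):
    print(p, "dBm ->", round(coverage_radius_m(p), 1), "m")
```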


Fig. 6 Coverage of Femtocells
B. Throughput Analysis

To compare the performance of macrocells and femtocells, I have done a small simulation to demonstrate the performance improvement when femtocells are used. In this example, I have considered a cellular OFDMA system with 100 active users. Two scenarios are considered: the first being a single macrocell serving all 100 users simultaneously, and the second consisting of 50 femtocells, with two active users in each femtocell. The throughput is
computed using the following equation:

C = W log2( 1 + 10^(SIR[dB]/10) )

Fig. 7 Femtocell throughput analysis
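A toy sketch of the two scenarios described above (one macrocell sharing its bandwidth among 100 users versus 50 femtocells with two users each, reusing the band) using the capacity equation is given below; the system bandwidth and the per-user SIR values are illustrative assumptions, not the figures behind Fig. 7.

```python
import numpy as np

def capacity_bps(bw_hz, sir_db):
    """C = W * log2(1 + SIR_linear), with the SIR given in dB."""
    return bw_hz * np.log2(1.0 + 10.0 ** (float(sir_db) / 10.0))

total_bw = 10e6          # assumed 10 MHz OFDMA system bandwidth
n_users = 100

# Scenario 1: a single macrocell shares the bandwidth among all 100 users
macro_tput = n_users * capacity_bps(total_bw / n_users, sir_db=5.0)   # assumed 5 dB SIR

# Scenario 2: 50 femtocells, 2 users each; each femtocell reuses the full band
femto_tput = 50 * 2 * capacity_bps(total_bw / 2, sir_db=20.0)         # assumed 20 dB SIR

print("macro total throughput (Mb/s):", macro_tput / 1e6)
print("femto total throughput (Mb/s):", femto_tput / 1e6)
```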

C. Analytical model



Fig.8 Radio Network Controller


Fig.9 Real Time Actuator



Fig.10 Network controller


IX. CONCLUSIONS

Both the femtocell and macrocell use the same
cellular frequency band. Thousands of femtocells are
overlaid by a macrocell. The wireless channel capacity can be increased by using smaller cell sizes. I also understood how every wireless communication scheme or method involves trade-offs: if it shows better performance, a more complicated network architecture might be required, or there could be an increase in interference. The effects of poor management of interference are discussed in detail. There is an enormous amount of gain to be reaped by the use of femtocells. Femtocells have the potential to provide high-quality network access to indoor users at low cost, while reducing the load on the macrocells, if the challenges of mitigating RF interference are dealt with. Our proposed cost-effective interference mitigation using a dynamic and hybrid frequency reuse technique can very much enable full-scale femtocell network deployment. I also consider the static frequency reuse technique as future work.
Handoff Algorithm for Future Generation Networks

Prof. S. Nanda Kumar
Assistant Professor, Networking Division , SENSE SCHOOL,
VIT University,Vellore, India
snandakumar@vit.ac.in
Rahul Singh
BTECH, Electronics & Communication, SENSE School,
VIT University, Vellore , Tamilnadu, India.
rahulsingh.vit11@gmail.com

Sanjeet Singh
BTECH, Electronics & Communication, SENSE School,
VIT University, Vellore , Tamilnadu, India.
sanjeetsingh87@yahoo.com
Abstract- In a cellular network, handoff is a transition for any
given user from one base station to another geographically
adjacent base station, as the user is free to move around. If
handoff fails, it leads to forced termination of ongoing call.
This is not user friendly at all. The major parameter in any
network is defined by its Quality of Service (QOS) and handoff
decision scheme plays a major role in QOS. In this paper, as
the user is moving, handoff mechanism is analyzed on the basis
of received signal strength from two different base stations
and results are plotted using the Matlab code.

Keywords: Telecommunication, handoff, Received Signal Strength (RSS), Serving Station (SS), Target Station (TS), Mobile Terminal (MT)

I. INTRODUCTION:-

Mobile wireless telecommunications is one of the most
advanced form of human communications ever. The intense
research has led to rapid development in the mobile
communication sector. One of the important objectives in
the development of the new generation is the quality
improvement of cellular service, with handovers nearly
invisible to the mobile station subscriber. Generally, a
handoff takes place, when the link quality between the base
station and the mobile terminal is degraded on movement.
In next-generation wireless systems, it is important for the Radio Resource Management (RRM) functionality to ensure that the system is not overloaded and that the required service guarantees are met. If the system is not properly planned, it provides lower capacity than required and the QoS degrades; the system becomes overloaded and unstable. Therefore, we need an algorithm that adapts itself not only to the Received Signal Strength (RSS) but also to the load status of the target station. Earlier schemes have shortcomings such as insufficient system resources and degraded service quality under a sudden increase in traffic. In addition, previous works do not consider the load status of the neighbouring cells. The proposed algorithm sets the handoff threshold differently based on the traffic of cells. In this paper, we propose a handover-based traffic management algorithm that adaptively controls the handover time according to the load status of cells. Traffic load is considered an important factor for initiating handover in this algorithm. The traffic load can seriously affect the QoS of users; thus it requires efficient management in order to improve service quality.
The rest of this paper is organized as follows. First we develop a handover time algorithm, in which the threshold value is set based on the traffic load of cells, the speed of the mobile terminal, and distance parameters. We then discuss the simulation results and finally the conclusion.


II. ADAPTIVE RSS ALGORITHM

Terms used in algorithm:-
thres_serving: The threshold value of the RSS to initiate the handover process. When the RSS of the SS drops below thres_serving, the Mobile Switching Centre (MSC) registration procedures are initiated for the MT's handover to the TS.
thres_min: The minimum value of RSS required for successful communication between an MT and the TS.
thres_target: The threshold value of RSS from the target station for handoff execution.
In this paper an adaptive algorithm is used for traffic
distribution in the hotspot. The detailed algorithm is shown
in fig.1.Traffic load is the rate of channel occupancy in the
algorithm. To maintain the quality of service and to make
the effective usage of resources, distribution of traffic is
needed. The two Active modes are the HOLD and ON state
between user and base station. The HOLD state has full
downlink and thin uplink channel and ON state has both full
downlink and uplink traffic channel. The load added by the
handoff calls is defined as HANDOFF. The handoff call is
assume to be in the ON state soon after the handoff process
is over. The traffic load can be estimated by calculating the
number of users in the three modes, HOLD, ON and
HANDOFF which is described in Equation 1 [6].


N_T = N_ON + N_HOLD + N_HO    (1)
where N_T is the amount of traffic load, N_ON is the number of users in the ON state, N_HOLD is the number of users in the HOLD state, and N_HO is the number of handoff calls. In Equation (1) an adaptive factor is applied so that the amount of traffic load is normalized to vary from 0 to 1. The value of the traffic load is close to 0 when the current cell is lightly loaded, and as the number of mobile nodes increases the traffic load approaches 1; the current cell then becomes a hotspot. The hd value used in the algorithm is called the hotspot threshold. If the ratio of the number of available resources to the number of total resources is less than hd, then the cell has hotspot status.
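A small sketch of the traffic-load and hotspot test described above follows; the normalisation by the cell capacity and the default hd value are assumptions made for illustration.

```python
def traffic_load(n_on, n_hold, n_ho, capacity):
    """Normalised traffic load in [0, 1] from the ON, HOLD and HANDOFF users,
    in the spirit of Equation (1); normalising by the cell capacity is an
    assumption of this sketch."""
    load = (n_on + n_hold + n_ho) / float(capacity)
    return min(max(load, 0.0), 1.0)

def is_hotspot(available_resources, total_resources, hd=0.6):
    """Hotspot test: the cell is a hotspot when the ratio of available to
    total resources falls below the hotspot threshold hd."""
    return (available_resources / float(total_resources)) < hd

# Example with the cell capacity of 10 used in the simulations
print(traffic_load(n_on=5, n_hold=3, n_ho=1, capacity=10))    # 0.9
print(is_hotspot(available_resources=3, total_resources=10))  # True
```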

Fig.1 Flow chart
Figure 1 shows the Adaptive RSS algorithm. As shown in Figure 1, when the RSS of the serving cell is less than the threshold value, it sends a load information request message to the target cell and receives a load information response message from that cell. The target cell calculates the amount of traffic load using Equation (1). If the amount of available resources of the target cell is less than the hotspot threshold hd, the current serving cell sends a hotspot alarm message to the target cell. The target cell then completes all the pending handover requests and sends a hotspot release message to the serving station, after which the handover is executed to the target station (TS). In this proposed algorithm, the proper threshold value should be carefully selected in order not to degrade the service quality of other users. Previous work used a fixed RSS to initiate the handoff process. An adaptive RSS threshold is recommended so that the mobile has enough time to initiate the handoff process. The algorithm has been modified by applying a mathematical formulation for controlling the handoff time, referred to as an adaptive RSS threshold (thres_min).

III) ADAPTIVE HANDOVER INITIATION TIME

The Adaptive RSS algorithm can recognize the load status
of the neighbouring cells with the load information message
in advance, before handover execution. After receiving the
load information message, the proper threshold value should
be carefully selected in order to initiate the handover
process. In this algorithm, we derive the mathematical
equation to control the handover time and thres_min according to the load status of cells, the mobile's speed and the handover signalling delay.



Fig.2 Analysis of probability of hand off failure & Initiation as MT moves
from SS TO TS



Equations (2) and (3), taken from [4], give the probability of handoff failure (pf) and the probability of false handoff initiation (pa), respectively, in terms of the cell size a, the distance d of the MT from the boundary of the serving BS, the MT speed v, the handoff signalling delay τ, and the angle θ1 = atan(a/2d) shown in Fig. 2. First, we calculate d for a desired value of pf using (2). Then the required value of thres_min is selected using (4), which is derived from the path-loss model with path-loss coefficient α and relates thres_min to RSS(min), a and d (in dB). The thres_min value should be carefully selected in order not to degrade the service quality of other users. An adaptive threshold value avoids too early or too late initiation of the handoff process (registration); registrations are completed before the user moves out of the coverage area of the serving network.

IV. Result And Discussions

The simulations are performed in MATLAB. For the simulations the following scenario and parameters are considered.
- Hexagonal cell radius (a) = 1 km
- Maximum speed of mobile (v) = 100 km/h
- Standard deviation of shadow fading, σ = 8 dB
- Path-loss coefficient, α = 4
- Minimum value of RSS = -64 dBm
- THRES_SERVING = -74 dB
- THRES_TARGET = -79 dB
- THRES_NORMAL = -79 dB
- HYS_ACCEPTABLE = 3 dB
- HYS_MIN = 0 dB
- HYS_NORMAL = 2 dB
- CELL CAPACITY = 10

A. Variation of cell status on the basis of hotspot threshold

Hotspot identification is very important step in the
given algorithm. If target station is a hotspot cell , the
given algorithm delays the handover process to the
target cell.

Fig.3 Variation of cell status on the basis of hd value
In the meantime, the target cell completes all the pending handovers. We can see in Fig. 3 that as the number of available resources increases, the cell loses its hotspot status. It is therefore very important to choose a proper value of the hotspot threshold: if a high threshold value (hd) is chosen, it leads to unnecessary wastage of resources and time. For hd = 0.6, the cell loses its hotspot status when the number of available resources is 6. In Fig. 3, Sa = 1 represents hotspot status and Sa = 0 represents non-hotspot status.

B. False Handoff Initiation Probability Variation With
Cell Size

It is clear from fig. 4, as value of d increases,
probability of false handoff initiation(pa) increases.
This leads to unnecessary wastage of wireless
resources. This also increases load on network as false
handoff initiation takes place. We can see, as the cell
size is decreasing, the pa is increasing. As in next
generation the smaller cell size is required to increase
capacity, it is very much important to select a proper
value of d, therefore thres_serving, to reduce
probability of false handoff initiation.

Fig. 4 Relation between false handoff initiation probability and d for different values of cell size



C. Relation Between Probability of Handoff Failure (pf) and
Velocity(v)
Equation 2 shows , if a fixed value of thres_min is used
(hence, a fixed value of corresponding d) , the handoff
failure probability depends on the speed of mobile terminal.
The probability of handoff failure ( pf ) increases when
MTs speed increases. The relationship between pf and
MTs speed is shown in figure 5. As we can see in the figure
5, the value of v increases, for particular value of d , pf
increases for system handoff. This is because, as speed
increases ,MT require less time to cover the region of
serving station.


Fig. 5 Relation between handoff failure probability and velocity

This relation shows that the value of d, and therefore the value of thres_min, should be adaptive to the velocity of the mobile terminal in order to achieve the required probability of handoff failure (pf).

D. Relationship Between Handoff Failure Probability
and Handoff Signaling Delay

Fig. 6 shows that the handoff failure probability increases as the handoff latency increases. Therefore it is important to determine the handoff type beforehand and then use an adaptive value of thres_min to achieve a particular handoff failure probability. High latency refers to intercellular handoff, whereas low latency refers to intracellular handoff.


Fig.6 Relation between handoff failure probability and handoff latency

E. Relation Between thres_min and Velocity(v)

To determine the relation between thres_min and
mobile terminal speeds(v),firstly the required value of
d is determined. Here pf value is assumed to be 0.02.
Then required value of thres_min is calculated. Fig.7
shows , thres_min increases as mobile speed increases
for particular value of t . This is because for the MT
with slow speed, the handoff initiation should start later
compared to a high moving mobile terminal. The result
also show that thres_min decreases as t decreases.
This implies for the MTs with high t , the hand off
initiation should start later., compared to mobile
terminals having low t .



Fig.7 Relation between thres_min and velocity

F. Relation Between Probability of Handoff Failure(pf)
and Handoff Signalling delay t

In order to analyse this, the velocity of the vehicle is kept constant at 100 km/h and the result is analysed by varying the signalling delay. It is observed in Fig. 8 that as the handoff latency increases, the pf value increases for a fixed value of thres_min; the higher the threshold value, the lower the pf value. Therefore it is essential to predict the handoff delay in advance and then use an adaptive value of the received signal threshold to limit the handoff failure probability (pf) to a desired value. It is also observed that if the proper signalling delay is used, the desired pf value can be obtained.

Proceedings of AICERA 2011
ISBN: 978-81-921320-0-6 217


Fig. 8 Relation between probability of handoff failure and t

G. Comparison between fixed RSS algorithm &
adaptive RSS algorithm

In the fixed RSS algorithm, a fixed value of thres_min is used. It is calculated such that a user with the highest speed is guaranteed the desired value of handoff failure probability (pf). In the Adaptive RSS algorithm, by contrast, d is calculated for the desired handoff failure probability according to the speed of the mobile terminal, and with that calculated value of d, thres_min is computed for the handoff to take place. As seen in Fig. 9, with an increase in velocity, pf for the adaptive handoff algorithm increases, whereas pf for the fixed RSS algorithm remains constant, as it is calculated for the maximum velocity of the mobile terminal (100 km/h). The pf value for the adaptive algorithm is less than that of the fixed one for a particular velocity. Thus thres_min is decided on the basis of the velocity of the mobile terminal, and this leads to a lower handoff call-drop rate compared to fixed RSS.


Fig. 9Comparison between Fixed and Adaptive RSS

V. CONCLUSION

In the paper, we have presented a handover-based algorithm
that adapts according to the load status of cells. A proper
threshold value to control the handover initiation time
according to the load status of cells, mobiles speed and
handover types is used. This algorithm is developed to
efficiently manage overloaded traffic in the cells and roll out
the most precise or ideal time for handoff initiation. Also, a comparison of the probability of handoff failure for the fixed RSS and Adaptive RSS algorithms has been shown. The results prove that a better QoS is achieved with the Adaptive RSS scheme than with the fixed RSS scheme. This algorithm would help in removing problems like call failures and interruptions in data transfer, and it also provides the basis for future work in the area of adaptive techniques in wireless network management.



ACKNOWLEDGMENT

We would like to express cordial thanks to Professors of
Networking Division, SENSE SCHOOL. VIT University,
Vellore for their help in this research

REFERENCES

[1] S. AlQahtani and U. Baroudi, "An Uplink Admission Control for 3G and Beyond Roaming Based Multi-Operator Cellular Wireless Networks with Multi-Services," 4th ACS/IEEE International Conference on Computer Systems and Applications, Dubai/Sharjah, UAE, March 8-11, 2006.
[2] Third Generation Partnership Project (3GPP), Technical Specification Group (TSG) RAN3, "Network Sharing; Architecture and Functional Description (Release 6)."
[3] I. F. Akyildiz, S. Mohanty, and J. Xie, "A Ubiquitous Mobile Communication Architecture for Next Generation Heterogeneous Wireless Systems," IEEE Radio Communications Magazine, vol. 43, no. 6, pp. S29-S36, June 2005.
[4] S. Mohanty and I. F. Akyildiz, "A Cross-Layer (Layer 2 + 3) Handoff Management Protocol for Next-Generation Wireless Systems," IEEE Transactions on Mobile Computing, vol. 5, no. 10, October 2006.
[5] D. Kim, N. Kim, and H. Yoon, "Adaptive Handoff Algorithms for Dynamic Traffic Load Distribution in 4G Mobile Networks," LNCS, 7th International Conference on Advanced Communication Technology, 2005.
[6] D. Kim, M. Sawhney, and H. Yoon, "An effective traffic management scheme using adaptive handover time in next-generation cellular networks," International Journal of Network Management, pp. 139-154, 2007.
[8] S. Das, S. Sen, and R. Jayaram, "A dynamic load balancing strategy for channel assignment using selective borrowing in cellular mobile environment," Proceedings of IEEE/ACM Conference on Mobile Computing and Networking (Mobicom '96), pp. 73-84, 1996.
[9] S. Das, S. Sen, P. Agrawal, and R. Jayaram, "A distributed load balancing algorithm for the hot cell problem in cellular mobile networks," Proceedings of 6th IEEE International Symposium on High Performance Distributed Computing, pp. 254-263, 1997.
[10] M. D. Austin and G. L. Stuber, "Velocity Adaptive Handoff Algorithms for Microcellular Systems," IEEE Transactions on Vehicular Technology, vol. 43, no. 3, pp. 549-561, Aug. 1994.

Comparison of Transparent and Translucent DWDM Optical Networks
Nivedita . G Gundmi
Dept. of Computer Science,
R.V. College of Engg, Bengaluru, India
niveditagg@gmail.com

Soumya A
Dept. of Computer Science,
R.V. College of Engg, Bengaluru, India



E.S Shivaleela
Dept. of Elect. and Comm. Engg,
Indian Institute of Science, Bengaluru, India


Shrikant S Tangade
Dept. of Elect. and Comm. Engg,
Indian Institute of Science, Bengaluru, India


Abstract: Transparent networks carry optical signal from
source to sink entirely in optical domain and offer the
advantage of format and bit rate independency. But in large
area networks transparent operation is not always practical due to the wavelength continuity constraint; moreover, with Amplified Spontaneous Emission (ASE) noise accumulation the Optical Signal-to-Noise Ratio (OSNR) degrades, and measures to bring the OSNR within the Quality of Service (QoS) limits will increase the cost of the network.
two regenerators is called optical reach. Translucent networks
with optical regenerators placed in the range 2500 to 3000km,
called optimal optical reach, are practical and also can be
realized without increasing the cost of the network satisfying
traffic demands and QoS. Deploying regenerators at optimal
positions in the link optical reach also reduces the cost of the
network. In this study, the QoS parameters like throughput,
delay, blocking probabilities and Bit-error-rate (BER) are
analysed for the transparent and translucent Dense
Wavelength Division Multiplexing (DWDM) networks. The
optimal optical reach is applied to translucent network to
include more number of nodes without increasing cost of
network. The Routing and Wavelength Assignment algorithm
(RWA) is used for both types of networks, to route and
establish lightpaths. The study provides best economic option
to choose the type of network suited for the traffic demands.
Key words: Bit-Error-Rate (BER), Regenerators, Optical Reach,
Routing and Wavelength Assignment algorithm (RWA).
I. INTRODUCTION
Present-day optical technology offers high-capacity links for ever-growing bandwidth-hungry applications. It also provides routing and grooming at the network layer and restoration at the optical layer. In DWDM, operating in the 1530-1565 nm C-band window, the spacing between carriers ranges from 100 GHz down to 25 GHz and the bit rate is 40 Gb/s. DWDM is suited for long-distance transmission at high bit rates with efficient
Networks (OTNs), the optical signal from a source to a
destination is handled entirely in the optical domain-meaning
that Optical-Electrical-Optical (O/E/O) conversions are
never performed at transit nodes. Full transparency, however,
is not always achievable in long distance networks due to
physical impairments that degrade the OSNR as optical
signals are attenuated during transmission and accumulation
of noise generated by optical amplifiers, due to which BER
deteriorates. When the impairments accumulated along a
route are excessively high, lightpaths cannot be established
satisfying certain QoS so connection requests are blocked
[1]. In order to geographically expand a transparent OTN, an
operator might need to install one or more regenerators along
required paths, so as to satisfy the QoS requirements.
Clearly, regenerators break up the optical continuity, but
allowing improvement in the OSNR, hence reducing the
BER. The deployment of regenerators turns a transparent
OTN into a translucent network. Wavelength conversion
takes at regenerator node only if the required wavelength is
not available on the fiber in translucent network [5] [6].
During the dimensioning of an OTN, the Routing and
Wavelength Assignment and Regenerator Placement
(RWARP) process requires an effective method for
estimating the potential degradation of an optical signal
along the candidate paths, which is typically achieved by
integrating physical-layer information into the RWARP
process. The least number of hops from source to destination
is considered as the shortest path. BER of each link is added
because BER is cumulative in nature and sum is checked
with threshold BER at each node [7]. Once the regenerators
are placed and the dimensioning phase has concluded, the
role of the RWA process is to route the forecasted traffic
demands according to the planning. The uncertainties and
drifts of the physical-layer parameters from their nominal
values during the operation of the OTN leads to situations
where the performance achieved by the RWA process is
significantly worse than projected. Such transparent and
translucent networks are analysed for their performance. The
contribution of this paper is dimensioning translucent
network by placing minimum number of regenerators. So, as
to satisfy the traffic demands and QoS while incurring
minimum cost. The QoS parameters like throughput, delay
and blocking probabilities are evaluated. With optimal
optical reach translucent network can be extended and can
yield the good performance with less number of regenerators

[2] [3]. The organization of this paper is as follows: II
section is dealt with the Dimensioning of translucent
networks. The section III is dealt with Routing and
wavelength assignment algorithm in which for both
transparent and translucent networks RWA is explained. IV
section narrates the results of the study.
II. DIMENSIONING THE TRANSLUCENT NETWORK
In a network the quality of transmission degrades
due to impairments like Amplified Spontaneous Emission
noise (ASE), Polarization Mode Dispersion (PMD),
Chromatic Dispersion (CD), Filter concatenation (FC), Self-
Phase Modulation (SPM), Impairments that are generated by
other lightpaths are Crosstalk (XT) (intra- and inter-channel
crosstalk), Cross-Phase Modulation (XPM), Four Wave
Mixing (FWM). BER is a very appropriate criterion because
it is a comprehensive parameter that takes all impairment
effects into consideration [4]. Each link is assigned with
weight of 1. The distance between source and destination is
number of hops that are summation of weight of each link on
that path. The integrated model of network layer and
Physical layer in transparent network and in translucent
network is shown in Figure 1 and 2. Our proposed RWA
consists of two phases: finding the m number of shortest
paths and assigning wavelengths. In the first phase BER
value is assigned to each link and during the computation of
shortest path from source to destination, the link BERs are
summed up to get path BER. The path with having below
threshold BER is chosen. In the second phase the
wavelength is assigned and lightpath is established.
















III. ROUTING AND WAVELENGTH ASSIGNMENT
ALGORITHM
A. Transparent Network
1) Phase 1: Finding the shortest path based on number of hops: The algorithm computes the shortest path using the number of hops from a given source to destination. The SumBER value at each node on the selected route should not exceed the given threshold BER; if it does, the call request is blocked [8].
2) Phase 2: Assigning wavelength: For the given source and destination, the shortest path found in the first phase is assigned a wavelength. If the wavelength continuity constraint cannot be maintained, the call is blocked.
Algorithm:
Given: a network graph G(N, L) with N nodes and L links, a number of wavelengths W, a BER value on each link and a BER threshold value; a connection request R(source, destination).
TABLE I. ROUTING AND WAVELENGTH ALGORITHM FOR
TRANSPARENT NETWORK
Step 1: A vector of BER values on each link is defined as B = {B | 10^-12 > B > 10^-15}. The BER threshold is 10^-12. Initially SumBER is set to 0. A shortest-path algorithm finds a path P_w in the graph G(N, L) for the given request with marked source and destination. The BER on each link is summed into SumBER.
Step 2: Check whether the BER threshold is greater than the SumBER of the network path.
Step 3: If the BER threshold is not greater than the SumBER, the call is blocked; else a wavelength is assigned subject to the wavelength continuity constraint.
Step 4: If wavelength continuity cannot be satisfied, the call is also blocked; else the lightpath is established.
Step 5: For the successful calls, throughput and delay are computed; for the blocked calls, the blocking probability is computed.
Step 6: Repeat the process for every request.
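A compact sketch of the two-phase procedure of Table I is given below: a hop-count shortest path is found by breadth-first search, the accumulated link BER is checked against the threshold, and a wavelength satisfying the continuity constraint is then assigned first-fit. The graph representation and the first-fit wavelength policy are assumptions of this sketch.

```python
from collections import deque

def rwa_transparent(links, n_wavelengths, src, dst, ber_threshold=1e-12):
    """links: dict mapping (u, v) -> {"ber": float, "free": set of wavelengths}.
    Returns (path, wavelength) on success, or None if the request is blocked."""
    adj = {}                                   # adjacency for an undirected graph
    for (u, v) in links:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)

    # Phase 1: breadth-first search gives the minimum-hop path
    prev, seen, queue = {src: None}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            break
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    if dst not in prev:
        return None                            # no route: call blocked
    path, node = [dst], dst
    while prev[node] is not None:
        node = prev[node]
        path.append(node)
    path.reverse()

    def link(u, v):
        return links[(u, v)] if (u, v) in links else links[(v, u)]

    # BER check: link BERs are summed along the path (SumBER)
    sum_ber = sum(link(u, v)["ber"] for u, v in zip(path, path[1:]))
    if sum_ber > ber_threshold:
        return None                            # QoS violated: call blocked

    # Phase 2: first-fit wavelength satisfying the continuity constraint
    for w in range(n_wavelengths):
        if all(w in link(u, v)["free"] for u, v in zip(path, path[1:])):
            for u, v in zip(path, path[1:]):
                link(u, v)["free"].discard(w)
            return path, w
    return None                                # no common free wavelength: blocked
```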



























Figure 1. The proposed integrated layer model in transparent network



B. Translucent Network
1) Phase 1: Finding the set of shortest paths based on number of hops: The algorithm computes the shortest paths based on the number of hops from a given source to a destination node. The basic difference is that instead of a single path, a set of paths between the origin and destination node is obtained. The SumBER value at each node on the selected route should not exceed the given threshold BER; if it does, the call request has to follow the second shortest path, provided its SumBER stays below the threshold value. If no route satisfying the BER requirement is available, the call is blocked [8] [9].




























2) Phase 2: Assigning wavelength: For the given source and destination, the shortest path found in the first phase is assigned a wavelength. If the wavelength continuity constraint cannot be maintained, wavelength conversion takes place at a regenerator node. If wavelength conversion is required at a node that is not deployed with a regenerator, the process is repeated for the next alternative path; otherwise the wavelength is assigned [10].

Algorithm
Given: a network graph G(N, L) with N nodes and L links, a number of wavelengths W, a BER value on each link and a BER threshold value; a connection request R(source, destination).
TABLE II. ROUTING AND WAVELENGTH ALGORITHM FOR
TRANSLUCENT NETWORK
Step 1: A vector of BER values on each link is defined as B = {B | 10^-12 > B > 10^-15}. The BER threshold is 10^-12. SumBER is set to 0 initially. A shortest-path algorithm finds a path P_w in the graph G(N, L) for the given request with marked source and destination. As the path moves towards the destination, the BER on each link is added to SumBER.


































Figure 2. The proposed integrated layer model in translucent network

Step 2: Check whether the BER threshold is greater than the SumBER accumulated over the path.
Step 3: If the BER threshold is not greater than the SumBER, try the second alternative path and repeat the procedure; else a wavelength is assigned, with wavelength conversion if required.
Step 4: If the wavelength to be assigned is not available on some link between source and destination, check whether a regenerator has been deployed at the node according to the regenerator placement algorithm.
Step 5: If the node is deployed with a regenerator, wavelength conversion can take place; otherwise try the next alternative wavelength from source to destination and repeat the process.
Step 6: Once a wavelength is assigned on each link, the lightpath is established successfully.
Step 7: For the successful calls, throughput and delay are computed; for the blocked calls, the blocking probability is computed.
Step 8: Repeat for each request.

C. Advantage of Regenerators and Optical Reach
The regenerators are also capable of wavelength conversion. The signal undergoes O/E/O conversion at a regenerator node, so this offers the opportunity to convert wavelengths at no extra cost. This feature helps to achieve utilization levels close to those of networks with full wavelength conversion. A small number of regenerators is deployed in the translucent network within the optimal optical reach. In a network, regenerators are placed where the signal quality degrades. Placing a regenerator has several advantages: signal quality improvement, wavelength conversion, and the ability to monitor all signal quality parameters. This provides wavelength conversion in a highly efficient network. The optical reach reduces the number of regenerators in a translucent network, which in turn reduces the cost of the network [11] [14].
Different algorithms have been reported in the literature for placing regenerators in a network. In this section we propose a modified algorithm based on the previous NDF algorithm [9][13]. This modified NDF regenerator placement algorithm is shown in Table III.
TABLE III. REGENERATOR PLACEMENT ALGORITHM
Step 1: Assign each node a number equal to its in-degree obtained from the link state table.
Step 2: Select a node with the maximum in-degree as a regenerator-capable node and place a regenerator at that node.
Step 3: If multiple such nodes have the same in-degree, select the node among them which has the least SumBER.
Step 4: Repeat the process until the required number of regenerators is placed.
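
A small sketch of the modified NDF placement of Table III, assuming in_degree[n] is taken from the link state table and sum_ber[n] is the accumulated BER associated with node n; both names are illustrative.

def place_regenerators(in_degree, sum_ber, num_regenerators):
    """Select nodes for regenerators: highest in-degree first, ties broken by least SumBER."""
    selected, candidates = [], set(in_degree)
    while len(selected) < num_regenerators and candidates:
        best = min(candidates, key=lambda n: (-in_degree[n], sum_ber[n]))   # Steps 2-3
        selected.append(best)
        candidates.remove(best)
    return selected                                                         # Step 4: repeat until placed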

The cost of a translucent network is proportional to the number of regenerators in that network. The optical reach decides the number of regenerators: as the optical reach increases, the number of regenerators can be reduced drastically. From a number of experiments it is found that the number of regenerators is approximately a linear function of the optical reach. Increasing the optical reach from 1200 to 3500 km eliminates about 80-90% of the regenerators in a network; at a reach of 3500 km, this factor is approximately 90-95%. Increasing the optical reach beyond the range 3700 km to 4500 km generally increases the cost of the networking equipment: it leads to expensive equipment such as Raman amplification and requires more pumps or higher-powered pumps, further adding to the cost of the amplifiers. In addition, more dispersion compensation or more precise gain-flattening filters may be necessitated by a longer optical reach. Transient control in the amplifiers is another area that needs to be addressed in long-reach systems, which adds further cost. In addition to the amplifier cost, transponder costs typically increase with reach due to the need for more complex modulation schemes, more precise lasers and filters, and more powerful error-correcting coding. A proper optical reach for an economical network is 2500-3000 km. Such translucent networks perform on the same scale as transparent networks but are less expensive. The blocking probability is also affected by the optical reach, as its value reduces due to the number of paths available with different wavelengths. In a transparent network the blocking probability is higher than in a translucent network due to the wavelength continuity constraint; moreover, there is no option for optical reach, hence the cost of the network cannot be decreased.
IV. PERFORMANCE ANALYSIS
To implement the proposed algorithm and to compare it with the algorithms reported in the literature, simulations were carried out. Since it is difficult to find a suitable simulator that could support our proposed heuristics, we designed and developed a simulator to implement routing and wavelength assignment in both types of networks for regular and irregular topologies. The simulator is developed in the C++ language [16]. The output data obtained while running this program in C++ is plotted as graphs in MATLAB. The program accepts input parameters such as the number of nodes in the network, the number of wavelengths per fiber, link information with BER weights, and connection requests. Some of the calls may be blocked because of the unavailability of a free wavelength on the links along the route from the source to the destination, or because no path satisfies the BER requirement. The ratio of the total number of calls blocked to the total number of lightpath requests in the network is defined as the blocking probability. Figure 3 shows the network with 12 nodes on which the proposed algorithm is implemented. While executing, the number of nodes is also increased to measure the blocking probability, throughput and delay.
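
The performance metrics reported below can be derived from the per-call records produced by such a simulator; the record field names in the following sketch are assumptions for illustration, not the simulator's actual output format.

def metrics(records):
    """records: per-call dicts, e.g. {"blocked": False, "request_time": t0, "completion_time": t1}."""
    total = len(records)
    successful = [r for r in records if not r["blocked"]]
    blocking_probability = (total - len(successful)) / total          # blocked calls / total requests
    throughput = len(successful) / total                              # successful calls / total calls
    avg_delay = (sum(r["completion_time"] - r["request_time"] for r in successful)
                 / max(len(successful), 1))                           # completion time minus request time
    return blocking_probability, throughput, avg_delay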

A. Performance Evaluation
The proposed RWA algorithms in Table I and Table II are applied to the transparent and translucent networks for the given connection requests. Due to network-layer blocking or physical-layer blocking, most of the requests are blocked in the transparent network [1]. The blocking probability is higher in the transparent network due to the wavelength continuity constraint, as shown in Figure 4.
Figure 3. Network of 12 Nodes

Figure 4. Blocking Probability
Delay is the time between the call completion time and the connection request time. For a successful call it is higher in the translucent network, since the translucent network has to process an alternative path if the first route fails, and the wavelength assignment may involve conversion whenever required. The graph in Figure 5 shows the delay in both types of networks. Throughput is evaluated as the number of successful calls divided by the total number of calls. The variation of throughput versus traffic load is shown in Figure 6.

The cost of both transparent and translucent networks depends on the equipment used and its type, which includes Optical Add Drop Multiplexers (OADMs), transponders, Erbium-Doped Fiber Amplifiers (EDFA), Raman Amplifiers (RA), optical terminals, etc. The cost of the network can be reduced with an optimal optical reach only in the translucent network, as shown in Figure 7. The regenerators are deployed according to the optical reach, not necessarily at adjacent nodes but within the optical reach.



Figure 7. Cost of network with optical reach

Figure 5. Delay

Figure 6. Throughput

Beyond the optimal optical reach, the cost of the network increases due to the expensive equipment needed to compensate for anomalies in the network so as to satisfy the QoS, as explained in Section III. In a transparent network the concept of optical reach cannot be applied, so there is no reduction in cost for the given network.
V. CONCLUSIONS
We have analyzed transparent and translucent networks for various network parameters and also from the economic point of view. Our study shows that the cost of the network goes up beyond 2699 km and that the blocking probability is higher at heavy traffic loads. Translucent networks are preferred when the network has a large number of nodes and high traffic intensity. Within 2500 km and at low traffic intensity, transparent networks are preferred.
ACKNOWLEDGEMENT
This work has been supported by R. V. College of
Engineering, Bengaluru. The author would like to thank the
Indian Institute of Science, Bengaluru for the opportunity of
using the Photonics lab facility.
REFERENCES
[1] B. Ramamurthy, D. Datta, H. Feng, J. P. Heritage, and B. Mukherjee, "Impact of transmission impairments on the teletraffic performance of wavelength-routed optical networks," Journal of Lightwave Technology, vol. 17, no. 10, Oct. 1999.
[2] R. Ramaswami and K. N. Sivarajan, Optical Networks: A Practical Perspective, Morgan Kaufmann Publishers, USA, 2002.
[3] Uyless Black, Optical Networks: Third Generation Transport Systems, 2002.
[4] T. Deng and S. Subramaniam, "Adaptive QoS routing in dynamic wavelength-routed optical networks," IEEE Broadnets, 2005.
[5] B. Ramamurthy, H. Feng, D. Datta, J. P. Heritage, and B. Mukherjee, "Transparent vs. opaque vs. translucent wavelength-routed optical networks."
[6] B. Ramamurthy, S. Yaragorla, and X. Yang, "Translucent optical WDM networks for the next-generation backbone networks," in Proceedings of IEEE Globecom 2001, San Antonio, TX, Nov. 2001.
[7] D. Datta, B. Ramamurthy, H. Feng, J. P. Heritage, and B. Mukherjee, "BER-based call admission in wavelength-routed optical networks," in OSA Trends in Optics and Photonics (TOPS), Optical Networks and Their Applications, ed. R. A. Barry, vol. 20, Sept. 1998, pp. 205-210.
[8] R. Ramaswami and K. N. Sivarajan, "Routing and wavelength assignment in all-optical networks," IEEE/ACM Transactions on Networking, vol. 3, pp. 459-500, Oct. 1995.
[9] X. Yang and B. Ramamurthy, "Dynamic routing in translucent WDM optical networks," in Proceedings of IEEE ICC 2002, New York, NY, Apr. 2002.
[10] Paramjeet Singh, Ajay K. Sharma, and Shaveta Rani, "Routing and wavelength assignment strategies in optical networks," Optical Fiber Technology, vol. 13, 2007, pp. 191-197.
[11] Commetel Pvt. Ltd., Technology paper on high capacity DWDM network, document no. pbt 132.
[12] Ricardo Martinez, Ramon Casellas, Raul Munoz, and Takehiro Tsuritani, "Experimental translucent-oriented routing for dynamic lightpath provisioning in GMPLS-enabled wavelength switched optical networks," Journal of Lightwave Technology, vol. 28, no. 8, April 15, 2010.
[13] Dong Shen, Shuqiang Zhang, and Chun-Kit Chan, "Design of translucent optical network using heterogeneous modulation formats," 15th OptoElectronics and Communications Conference (OECC 2010) Technical Digest, July 2010, Sapporo Convention Center, Japan.
[14] D. Staessens, D. Colle, M. Pickavet, and P. Demeester, "Impact of node directionality on restoration in translucent optical networks," ECOC 2010, Sept. 19-23, Torino, Italy.
[15] Oscar Pedrola, Davide Careglio, Miroslaw Klinkowski, and Josep Sole Pareta, "Modelling and performance evaluation of a translucent OBS network architecture," IEEE Globecom Proceedings, 2010.
[16] Averill M. Law, Simulation Modeling and Analysis, Tata McGraw Hill Education Private Limited, 2011.
VHDL Implementation of Bit Processes for
Bluetooth Bitstream Datapath

G. N. Wazurkar
Assistant Professor,
Department of Electronics Engineering,
Bapurao Deshmukh College of Engineering,
Wardha, India.
E-mail: girishwazurkar@rediffmail.com
D. R. Dandekar
Associate Professor,
Department of Electronics Engineering,
Bapurao Deshmukh College of Engineering,
Wardha, India.


Abstract—In this paper the VHDL implementation of HEC and CRC generation and data whitening for the Bluetooth bitstream datapath is presented. Numerous bit manipulations are performed in the transmitter before the data is transmitted, for reliability and security. The bitstream datapath is a part of the baseband module for processing the bit-intensive baseband protocol functions. An HEC is added to the packet header, the header bits are scrambled with a whitening word, and FEC coding is applied. In the receiver, the inverse processes are carried out. Bluetooth bitstream processing may be power-efficient if implemented in hardware. The results show that the propagation delays of the circuits implemented for HEC generation, CRC generation and data whitening on a Virtex-2P device are 4.64 ns, 2.922 ns and 4.609 ns respectively. Hence the above circuits may be used in the implementation of the bitstream datapath for a baseband processor.

Keywords - Bluetooth, Bitstream Datapath, HEC & CRC
Generation, Data Whitening and VHDL Implementation.
I. INTRODUCTION
Recently there has been an enormous increase in applications using wireless connecting devices, and wireless personal area networking (WPAN) has resulted in various standards such as HomeRF, IEEE 802.11 and Bluetooth. Bluetooth is a short-range radio link intended to replace the wires connecting portable and/or fixed electronic devices. Bluetooth built into cellular mobile phones, laptops, desktops, etc. helps to replace the wires used to connect a laptop to a mobile phone and peripherals to laptops and desktops. Printers, personal digital assistants (PDAs), desktops, fax machines, keyboards, joysticks, and virtually any other digital device can be part of the Bluetooth system [1]. In addition, Bluetooth provides a mechanism to form small private ad-hoc groupings of connected devices away from fixed network infrastructures. Bluetooth establishes ad-hoc voice and data connections and operates in the 2.4 GHz unlicensed ISM band. Its specification is open and royalty-free. The symbol rate is 1 Msps to exploit a maximum available channel bandwidth of 1 MHz. Fast frequency hopping is applied to combat interference and fading. A shaped, binary FM modulation is applied to minimize transceiver complexity. The basic protocols that all Bluetooth systems must have are a radio, a baseband, a link manager, and a logical link controller. The radio takes care of sending and receiving modulated bitstreams. The baseband takes care of the timing and the framing, as well as packets, flow control, error detection, and correction. The link manager takes care of managing states and packets and of controlling flow on the link. The logical link controller takes care of multiplexing user protocols, as well as segmentation and reassembly of larger datagrams into packets, and management [1]. In this paper a hardware architecture for the implementation of HEC and CRC generation and data whitening is presented. The design is simulated using Xilinx ISE synthesis tools and implemented on a Virtex-2P device. The paper is organized as follows: Section II explains the bitstream processing schemes; Section III deals with the hardware architecture of the HEC and CRC generation and data whitening; results and conclusions are presented in Section IV and Section V respectively.
II. BITSTREAM PROCESSING


Figure 1. Header Bit Processes [2]

Figure 2. Payload Bit Processes [2]
Numerous bit manipulations are performed in the
transmitter before the data is transmitted for reliability and
security. The bitstream datapath is a part of the baseband
module for processing the bit-intensive baseband protocol
functions. An HEC is added to the packet header, the header
bits are scrambled with a whitening word, and FEC coding is
applied. In the receiver, the inverse processes are carried out.
Bluetooth bitstream processing may be power-efficient if
implemented in hardware. All header bit processes are
mandatory. In addition to the processes defined for the
packet header, encryption can be applied on the payload.
Only whitening and de-whitening are mandatory for every
payload while all other processes are optional and depend on
the packet type and whether encryption is enabled [2]. Figure
1 shows the processes carried out for the packet header at
transmit and receive side. Figure 2 shows the processes that
may be carried out on the payload. The packet can be
checked for errors or wrong delivery using the channel
access code, the HEC in the header, and the CRC in the
payload. The HEC generator polynomial is given as

g(D) = (D + 1)(D^7 + D^4 + D^3 + D^2 + 1)    (1)

Figure 3 shows the HEC generation and checking at transmit and receive. Initially this circuit shall be pre-loaded with the 8-bit UAP such that the LSB of the UAP goes to the left-most shift register element and the MSB goes to the right-most element. The initial state of the HEC is shown in Figure 4.


Figure 3. HEC Generation and Checking [2]
Position:       0      1      2      3      4      5      6      7
Initial state:  UAP0   UAP1   UAP2   UAP3   UAP4   UAP5   UAP6   UAP7
Figure 4. Initial State of the HEC [2]
Figure 5 shows the CRC generation and checking at transmit and receive. The 16-bit CRC is constructed in a similar way to the HEC; its generator polynomial is given as

g(D) = D^16 + D^12 + D^5 + 1    (2)

For this case, the 8 leftmost bits shall be initially loaded
with the 8-bit UAP while the 8 right-most bits shall be reset
to zero [2]. Initial state of the CRC is shown in figure 6.

Figure 5. CRC Generation and Checking [2]
The header and the payload are scrambled with a data whitening word in order to randomize the data away from highly redundant patterns and to minimize DC bias in the packet before transmission. The scrambling is performed prior to the FEC encoding. At the receiver, the received data is descrambled using the same whitening word generated in the recipient; descrambling is performed after FEC decoding. The whitening word is generated with the polynomial given in equation 3 [2].

g(D) = D^7 + D^4 + 1    (3)

Position:       0      1      2      3      4      5      6      7
Initial state:  UAP0   UAP1   UAP2   UAP3   UAP4   UAP5   UAP6   UAP7
Position:       8      9      10     11     12     13     14     15
Initial state:  0      0      0      0      0      0      0      0
Figure 6. Initial State of the CRC [2]
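
For illustration, a functional software sketch of the HEC and CRC generation follows (a behavioural model, not the VHDL architecture of Section III); the helper name and the simplified UAP preload are ours, and the exact bit and byte ordering details of the specification figures are omitted.

def lfsr_remainder(bits, poly, width, init):
    """Shift data bits through a CRC-style LFSR; poly omits the leading D^width term."""
    reg, mask = init, (1 << width) - 1
    for b in bits:
        feedback = ((reg >> (width - 1)) & 1) ^ b
        reg = (reg << 1) & mask
        if feedback:
            reg ^= poly
    return reg

def hec(header_bits, uap):
    # g(D) = (D + 1)(D^7 + D^4 + D^3 + D^2 + 1) = D^8 + D^7 + D^5 + D^2 + D + 1;
    # the register is preloaded with the 8-bit UAP as in Figure 4.
    return lfsr_remainder(header_bits, poly=0xA7, width=8, init=uap)

def crc16(payload_bits, uap):
    # g(D) = D^16 + D^12 + D^5 + 1; the 8 leftmost bits are preloaded with the UAP
    # and the 8 rightmost bits are reset to zero as in Figure 6.
    return lfsr_remainder(payload_bits, poly=0x1021, width=16, init=uap << 8)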
The whitening word (10010001)2 in binary is XORed with the header and payload. Figure 7 shows the generation of the whitening word using a linear feedback shift register (LFSR). The LFSR is initialized with the portion of the Bluetooth clock, clk[6:1], extended with an MSB of value logic 1. The initialization is done with clk1 at position 0, clk2 at position 1 and so on, and finally clk6 at position 5, with position 6 (MSB) initialized to logic 1 before transmission. After initialization, the packet header and the payload including the CRC are whitened. The payload whitening shall continue from the state the whitening LFSR had at the end of the HEC. No re-initialization of the shift register is done between packet header and payload. The first bit of the data in sequence is the LSB of the packet header. For Enhanced Data Rate packets, whitening is not applied to the guard, synchronization and trailer portions of the packets. During the periods where whitening is not applied, the LFSR is paused.
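
A behavioural sketch of the whitening LFSR (again a software model, not the hardware entity of Section III); the clk[6:1] seeding and the D3/D6 tap follow the description in Sections II and III, while the function names are ours.

def whitening_stream(clk, n_bits):
    """Whitening sequence from the 7-bit LFSR g(D) = D^7 + D^4 + 1."""
    reg = [(clk >> (i + 1)) & 1 for i in range(6)] + [1]   # positions 0..5 <- clk1..clk6, position 6 <- 1
    out = []
    for _ in range(n_bits):
        out.append(reg[6])                 # the bit in position 6 is XORed with the data
        fb = reg[6]
        reg = [fb] + reg[:6]               # shift: FF0 receives D6, the other stages shift along
        reg[4] ^= fb                       # D4 <- D3 XOR D6 (the D^4 tap)
    return out

def whiten(bits, clk):
    """XOR header/payload bits with the whitening sequence (de-whitening is identical)."""
    return [b ^ s for b, s in zip(bits, whitening_stream(clk, len(bits)))]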



Figure 7. Data Whitening [2]
III. HARDWARE ARCHITECTURE FOR BIT PROCESSES
The hardware architectures for the implementation of the HEC and CRC generators are shown in Figure 8 and Figure 9 respectively. The HEC generator polynomial is given in equation 1. Initially this circuit shall be pre-loaded with the 8-bit UAP such that the LSB of the UAP goes to the left-most shift register element and the MSB goes to the right-most element. Then the data is shifted into the register. Finally, when the last data bit has been clocked into the register, the HEC can be read out. The register should be read out from right to left, i.e. the first transmitted bit is in position 7, followed by the bit in position 6. The CRC generator polynomial is given in equation 2. For this case, the 8 leftmost bits shall be initially loaded with the 8-bit UAP while the 8 rightmost bits shall be reset to zero. Similar to the HEC generator, the data is shifted into the register and read out from right to left, i.e. the first transmitted bit is in position 15, followed by the bit in position 14. The digital circuits implemented for the HEC and CRC generators use only 8-bit and 16-bit shift registers respectively, with polynomial generator and read/write logic. No buffers are used to store partial results while writing and reading the registers. This reduces the hardware of the circuit, the power dissipation and the propagation delay.
The hardware architecture entity for the implementation of data whitening is shown in Figure 10. The whitening generator polynomial is given in equation 3. Initially this circuit is pre-loaded with the Bluetooth clock bits clk[6:1] extended with an MSB of value logic 1. The initialization is done with clk1 at position 0, clk2 at position 1 and so on, and finally clk6 at position 5, with position 6 (MSB) initialized to logic 1 before transmission. This is done by shifting the data from the clock counter into the LFSR. Finally, when the last data bit has been clocked into the register, the scrambled word can be read out. The LFSR should be read out from right to left, i.e. the first scrambled bit is obtained by performing an XOR operation between the input data and position 6 of the LFSR. The digital circuit implemented for the generation of the data whitening word uses a 7-bit shift register built from positive-edge-triggered D flip-flops. The bit D6 is connected as the input to flip-flop FF0, and an XOR operation is performed between the output of flip-flop FF3 and bit D6. The read/write logic circuit is used to bypass the XOR operation and feedback during initialization of the LFSR. No buffers are used to store partial results while writing and reading the registers. A clock gating scheme is used to reduce power dissipation when the circuit is idle.


Figure 8. HEC Generator

Figure 9. CRC Generator

Figure 10. Hardware Entity for Data whitening
IV. RESULTS
The work presented in this paper was implemented in VHDL; logic simulation was done in the ModelSim simulator and synthesis was done using the Xilinx ISE synthesis tool.


[Figures 8-10: block diagrams built from an array of registers (8-bit for the HEC, 16-bit for the CRC) together with polynomial generator logic and read/write logic driven by control signals and a clock; the data whitening entity has Datain, Dataout, R/W, Clk and Enable ports.]
The design was synthesized for a Virtex-2P device [3]. The results obtained for logic delay, route delay and total delay are presented in Table I, and the waveforms for HEC generation, CRC generation and the data whitening word are shown in Figures 11, 12 and 13 respectively.
TABLE I. DELAY RESULTS FOR HEC GENERATION, CRC GENERATION AND DATA WHITENING
Circuit           Logic Delay (ns)   Route Delay (ns)   Total Delay (ns)
HEC Generation    3.745              0.895              4.64
CRC Generation    2.59               0.332              2.922
Data Whitening    3.745              0.864              4.609


Figure 11. Waveforms for HEC Generation

Figure 12. Waveforms for CRC Generation

Figure 13. Waveforms for Data Whitening
V. CONCLUSION
VHDL implementation of HEC & CRC generation and
data whitening for Bluetooth bitstream datapath is presented.
The circuit was designed without buffers for low power
consumption and low propagation delay. Clock gating
scheme was used to reduce power dissipation when the
circuit is idle. Bluetooth bitstream processing may be power-
efficient if implemented in hardware. The results show that
the propagation delay for the circuit implemented for HEC &
CRC generation and data whitening using vertex 2p device
are 4.64 ns, 2.922 ns and 4.609 ns respectively. Hence the
above circuit may be used in implementation of bitstream
datapath for baseband processor.

REFERENCES

[1] Sunhee Kim et al., "Design of Bluetooth Baseband Controller using FPGA," Journal of the Korean Physical Society, vol. 42, no. 2, pp. 200-205, Feb. 2003.
[2] Specification of the Bluetooth System, Core Package version 4, www.bluetooth.com, June 2010.
[3] The Programmable Logic Data Book, Xilinx Publication.







Girish N. Wazurkar is currently pursuing
M. Tech in Electronics Engineering and
working as Assistant Professor in the
Department of Electronics Engineering,
BDCOE, Sevagram, Wardha, M.S., India.
He received M.S. in Electronics and
Control from BITS Pilani, India in 1999
and B.E. degree in Electronics Engineering
from RTM Nagpur University, Nagpur in
year 1991. His research interests include
computer networks and VLSI systems.

Deepak R. Dandekar received B.E.
degree in Electronics Engineering from
RTM Nagpur University, Nagpur in year
1990 and M.Tech in Electronics
Engineering from VNIT, Nagpur in 2004.
He is an Associate Professor in the
Department of Electronics Engineering,
BDCOE, Sevagram, Wardha, M.S., India.
His research interests are VLSI circuit
design and wireless sensor network.




Automation of Micro-Architectural Coverage
for Next Generation Graphics Processor
Sruthi Paturi, School of Computing Sciences, VIT University, Vellore 632 014, Tamilnadu, India, sruthi.paturi@gmail.com
RA. K. Sarvananguru, School of Computing Sciences, VIT University, Vellore 632 014, Tamilnadu, India, saravanank@vit.ac.in
Mayank Singhal, Intel Corporation, Bangalore, India, mayank.singhal@intel.com



Abstract
Validation is the quality assurance proof that a
product, service or system accomplishes its intended
requirements. Pre-silicon validation, which is the essential
quality assurance process for VLSI designs at the early
stages, reduces the cost incurred in the process of
validation or re-designs if needed. This paper proposes an
approach for automating the tedious process of coverage
extraction with ease. The coverage data is extracted for
the test cases on the micro-architectural specification of
the graphics processor.
Keywords: Coverage, Coverage driven validation,
Regression, Functional Coverage
I. INTRODUCTION
Pre-silicon validation is an important and challenging phase in processor development due to the increase in design complexity. Pre-silicon validation is broadly classified into two categories, namely formal verification and simulation-based validation. Formal techniques ensure complete validation by proving correctness mathematically. The downside of these techniques is that the difficulty of validation increases with the design size, even though they ensure complete validation [3].
Simulation-based validation finds errors in the design by applying input stimuli from test cases and comparing against expected outputs. These techniques can validate complex designs, although they cannot achieve complete validation. Simulation-based validation involves generating the test cases, simulating them on the design, and comparing the obtained results with the expected results.
Simulation-based techniques assure functional correctness of the design by exhaustive simulation of the test cases, which is possible only for small designs [6]. This technique can be applied for validation at any stage of processor development. The approach in this paper focuses on validation at the design specification stage, since this is an early step in processor development and validating at this stage helps in finding holes at the initial stages of development. This helps in rectifying the cause of the holes as the development process proceeds.
At Intel the functionality of the processor is defined in a document which is the behavioral specification of the processor. This document specifies the micro-architecture of the processor, from which every detail of the functionalities of the processor is obtained. The approach presented in this paper aims at automating the process of obtaining coverage data from the specification document.
This paper gives an overview of the validation process in Section II, the architecture in Section III, an example which illustrates the approach in Section IV, the results in Section V, and the conclusion in Section VI.

II. OVERVIEW
In general the validation methodology is performed at the later stages of processor development. This delays the process of finding holes in the design. Finding holes and rectifying them at those stages is a tricky job, since the later stages of the processor proceed without knowledge of the holes, and the RTL design has to be modified again if needed to cover them, which is a costly job in terms of resources. To address these issues, this paper chooses to validate the functionality at the initial stage of processor development, that is, the specification. Since this is an early stage and the specification is a more static document than the designs, which are volatile in nature, extracting coverage information and identifying holes from this data is easy. Moreover, the holes can be covered easily since the information is obtained at the initial stages. The test cases are intended to test the functionalities of the design specified in the document.
A. Validation Overview:
Processor development involves the comparison of the performance of the design with the design specifications. The operations associated with pre-production testing and comparing the behavior with that of the design specification are known as pre-silicon validation [1]. This paper concentrates on simulation-based validation techniques, the most common in a validation environment. Simulation involves the process where the test cases are simulated on the design under test (DUT) and the results are compared against the golden reference model.
The reference model is a high-level model which simulates the test program according to the functional semantics specified in the architecture manual of the processor. The simulation is carried out until a required coverage level is obtained. Although the coverage data is obtained, it cannot always ensure that all the bugs are found.
The ideal case is that the validation process should find all the design bugs with minimal tests [1]. Each test contains the stimuli needed to test a particular function or set of functions in the DUT; the difficulty lies in designing a comprehensive test case of feasible size. Fig. 1 gives the validation view based on the DUT and the model.
B. Metric for Validation:
Coverage data serves as the metric for the completeness of the validation and directs the test cases to the unexplored areas of the design. Coverage with respect to a metric ensures the detection of all possible errors of a certain type [3].


Fig 1: Validation Overview

Some of the different coverage metrics are code coverage, toggle coverage, FSM coverage and functional coverage [3]. Among these metrics, functional coverage is considered as the metric for validation in this paper. Comprehensive lists of the conditions that need to be verified during simulation are created from the specification document [7]. This coverage directs the test generation towards the unexplored areas, or the functionalities listed from the specification, which provides feedback to the later stages. In this paper, an approach is proposed for functional verification of the processor from the details obtained from the behavioral specification of the processor. The required coverage data is obtained through simulation of the tests on the model.
III. ARCHITECTURE
This paper proposes a methodology for obtaining the
coverage data from the behavioral specification of the
processor. The methodology involves the following
steps
A. Coverage Scope Extraction:
Scope in a validation environment represents the subset of the testing space which is considered as known, with the complement considered as unknown [7]. A directed test exercises cases in the coverage space only, and it is difficult to test the cases which are outside the coverage space [7]. To improve the analysis we go for directed random tests, which increase the performance of coverage analysis. In this approach, the coverage scope is the one obtained from the behavioral specification of the processor. Initially the specification manual is an English document from which we obtain the scope by parsing the document. The details may include the name of the instruction, its format, fields, etc.
B. Data Extraction:
The test cases are simulated on the behavioral simulator for the graphics processor. The stimuli generator generates the directed and the random stimuli for the test cases to test the functionalities of the processor, which are simulated on the reference model. The simulation results in dump files, which are text files containing the data encountered for each instruction during each cycle of the simulation. An intelligent parsing mechanism should be developed to handle files of different formats and to handle large data. This module plays a key role in the coverage analysis, as it handles the actual data obtained from simulation and is the basis of the coverage analysis.
C. Coverage Data Calculation and Analysis:
The coverage analysis is performed on the simulation data and the validation space details. From the validation space, the value range that a particular instruction can hold is obtained, whereas the actual values exercised are obtained from the simulator dump file. With this information the coverage data is calculated, that is, the percentage of covered values out of the total possible values for each instruction. From this information the instructions which have a low percentage of covered data can be identified. From the coverage data obtained in the above step, various kinds of analysis can be performed depending on the user's requirements; a small computational sketch is given after the example in Section IV. The following are a few of the analyses which can be performed on the coverage data obtained.


1) Instruction-Level Analysis:
The instructions which are poorly accessed or never accessed can be found from the coverage data. Instruction-level analysis can be performed on the coverage data, where the level of coverage is analyzed for each instruction. This analysis is useful for identifying the instructions which are rarely accessed, which in turn helps in identifying the test cases acting on them. From this data, only those test cases need be simulated, rather than simulating the entire list even for instructions that already have 100% coverage. This helps in reducing the clock cycles involved in simulation and the cost incurred in validation with respect to computational resources and time.
2) Test-Level Analysis:
This analysis is performed from the test-case perspective. When a list of all test cases is simulated on a unit of the processor, coverage data for each of the test cases with respect to all the instructions of that unit is obtained. From this data, the test cases with low coverage among the list are identified and ranked in order of their coverage. This ranking is helpful to identify the test case which covers most of the functionalities of the unit; only such tests need be simulated in future runs. Although high coverage does not confirm that all the functionalities of the processor are tested, this ranking can serve as a reference for further simulations.
3) Unit-Level Analysis:
When regression is performed, coverage data is analyzed at the unit level. For each unit of the processor, the coverage data with respect to the test cases can be analyzed. This information is useful for identifying the units which have low coverage with respect to the test cases. This data is useful in identifying the functionalities which are not tested, and it further helps to identify the test cases which test the functionalities that are not tested or are poorly tested.



Fig2: Block Diagram for automating process of event
coverage
IV. EXAMPLE
To illustrate the above coverage analysis methodology from the specification document of the processor, consider the following example. In this example, the add instruction takes two operands as input, each of which can be of type int or float. Table 1 shows the exhaustive cases of the add instruction from its definition, which form the coverage scope.

Instruction   Operand1   Operand2
add           int        int
              int        float
              float      int
              float      float
Table 1: Exhaustive cases
Among the above cases, when a test case, say X, is run on the add instruction, only 2 of the cases are exercised. Table 2 shows the cases that are exercised.

Instruction   Test   Operand1   Operand2
add           X      int        float
                     float      float
Table 2: Actual covered cases

From Table 2, the coverage data is calculated as the ratio of the actually executed cases to the total exhaustive cases. For this particular example we get a coverage of 50%. From this data, analysis can be performed at the instruction and test levels depending on the user's requirements. From the analysis, the test cases can be improved to cover the remaining cases which are not accessed. These cases are referred to as holes. Test cases can be directed to cover the holes to attain 100% coverage, which corresponds to the validation completion stage.
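
A minimal sketch of the coverage calculation illustrated by Tables 1 and 2; the sets below stand in for the cases parsed from the specification and from the simulator dump files, and the names are illustrative.

exhaustive = {("int", "int"), ("int", "float"), ("float", "int"), ("float", "float")}
exercised = {("int", "float"), ("float", "float")}      # cases hit by test case X

def coverage(exhaustive_cases, exercised_cases):
    """Return the coverage percentage and the holes (cases never exercised)."""
    holes = exhaustive_cases - exercised_cases
    pct = 100.0 * len(exhaustive_cases & exercised_cases) / len(exhaustive_cases)
    return pct, holes

pct, holes = coverage(exhaustive, exercised)
print(pct, sorted(holes))   # 50.0 and the two uncovered operand combinations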
V. RESULTS
On performing the coverage analysis using the above methodology on a unit of a graphics processor at Intel, we obtained the coverage data. Using this data, analyses at the instruction, test and unit levels were performed. The coverage data obtained at the instruction level is plotted as a graph, as shown in Fig. 3. The x-axis of the graph shows the instructions and the y-axis is the coverage percentage. The red portions of the bars indicate the uncovered cases, while the green portions indicate the covered percentage. From this graph we can identify the instructions which do not have the minimum coverage level. The test cases which act on such instructions are identified, and only those are simulated again, ignoring the other test cases.


Fig 3: Graph for Instruction Level analysis
Similarly, the coverage data can be analyzed at the test and unit levels of the processor.

VI. CONCLUSION
The proposed approach of automating the coverage
extraction aims at the specification level of the design.
This specification level being the initial stages of the
design development, extracting coverage information at
this level helps in finding the bugs at early stages. The
coverage information obtained can be used as a
feedback for tweaking and refining the test stimuli. The
ranking of tests, which is obtained through this
proposed approach, can be used for pruning the
regression test lists. Thus, the proposed approach can be
used effectively for early validation of the design under
test.
REFERENCES
[1] S. Tasiran and K. Keutzer, "Coverage metrics for functional validation of hardware designs," IEEE Design and Test of Computers, 2002.
[2] S. Asaf, E. Marcus, and A. Ziv, "Defining coverage views to improve functional coverage analysis," Proceedings of the 41st Annual Design Automation Conference, ACM, 2004.
[3] K. R. G. Da Silva, E. U. K. Melchur, and G. Araujo, "An automatic testbench generation tool for a SystemC functional verification methodology," Integrated Circuits and Systems Design, SBCCI 2004.
[4] M. See, "Strategy to detect bug in pre-silicon phase," SoC Design Conference (ISOCC) 2009 International, 2010, ieeexplore.ieee.org.
[5] S. Mitra, P. Ghosh, P. Dasgupta, and P. P. Chakrabarti, "Incremental verification techniques for an updated architectural specification," 2009 Annual IEEE, 2010.
[6] B. Aktan and G. W. Greenwood, "Evolutionary computation in pre-silicon verification of complex microprocessors," Evolvable and Adaptive Hardware, WEAH '09, IEEE Workshop, 2009.
[7] Alon Gluska (Intel MG, Haifa), "Practical methods in coverage-oriented verification of the Merom microprocessor," 43rd ACM/IEEE Design Automation Conference, 2006.
[8] Heon-Mo Koo and Prabhat Mishra, "Specification-based compaction of directed tests for functional validation of pipelined processors," CODES+ISSS '08: Proceedings of the 6th IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis.
[9] Prabhat Mishra, Nikil Dutt, Narayanan Krishnamurthy, and Magdy Abadir, "A methodology for validation of microprocessors using symbolic simulation," International Journal of Embedded Systems, vol. 1, no. 1-2, 2005, pp. 14-22.
[10] F. Ferrandi, F. Fummi, G. Pravadelli, and D. Sciuto, "Identification of design errors through functional testing," IEEE Transactions on Reliability, 2003.
FUZZY VAULT USING LOCAL SPECTRAL AND LINE FEATURES OF
PALMPRINT

NISHA SEBASTIAN
MTech Computer Science Specialization in Data Security
TocH Institute of Science and Technology, Arakkunnam
email: nishajossy@gmail.com



Abstract—Palmprint is a popular human feature used in biometric technology because of its uniqueness and stability. It also provides rich feature information that can be analyzed to identify an individual. This paper introduces a new fuzzy vault approach based on spectral and line features of the palmprint. This scheme can be used to reduce the possibility of imposter matches in the current approach. The current palmprint-based fuzzy vault approach is based only on the texture features of the palmprint. In this system the user's identity is characterized through the simultaneous use of two major palmprint representations. The fuzzy vault is created using texture-based DCT features of the palmprint, while the line features are used for primary-level matching. This approach uses a hybrid encryption scheme, a combination of the symmetric and asymmetric approaches, in which a file or message to be sent securely is first encrypted using AES. The secret AES symmetric key is encrypted using the RSA algorithm. Then a texture-based palmprint fuzzy vault is created around the RSA decryption key. The proposed use of the fuzzy vault achieved promising results in a reliable and user-friendly cryptosystem.

Keywords—AES, RSA, DES, DCT, spectral features of palmprint.

I.INTRODUCTION

In a networked society, hacking of information is a crucial problem that needs to be solved properly. A palmprint-based fuzzy vault can be used to develop a reliable and user-friendly cryptosystem. So far there has not been any attempt to create a biometric fuzzy vault cryptosystem using multiple palmprint representations. This paper therefore proposes the use of spectral and line features of the palmprint to develop a secure biometric fuzzy vault. A fuzzy vault is the generation of a secure construct using an unordered set to hide a secret inside it.

II. BACKGROUND

Authentication systems are designed to withstand security attacks when employed in critical security applications, and in this field biometrics is one of the most important and effective solutions. Nowadays, compared with fingerprint- or iris-based personal verification systems, which have been widely used, the palmprint verification system can also achieve satisfying performance. The PVS can provide a reliable recognition rate with fast processing speed.

Cryptography can be employed to improve the security of an information system. The efficiency of the system can be improved by providing cryptography-based security at different stages of biometric authentication. A secure key can be associated with a biometric signature to ensure the integrity and confidentiality of communication in online systems. Biometric-based authentication requires the physical presence of the persons to be authenticated and is therefore reliable, convenient, and efficient. The protection of a secret key in a cryptographic system using a biometric-based fuzzy vault can have several drawbacks due to the different cryptographic approaches: the symmetric cryptographic approaches suffer from authentication problems, and the asymmetric approaches are computationally intensive. Current research in biometrics has focused on fingerprint and face. Recent research on face recognition has some difficulty in feature extraction regarding pose, lighting, orientation and gesture, which makes it less reliable compared to other biometrics.

The fingerprint fuzzy vault [2] has been implemented and is successfully and widely used for recognition purposes. However, fingerprint features are very difficult to extract from elderly, labourer, and handicapped users. So far one approach has been proposed for a palmprint fuzzy vault [1], which uses only local features of the palmprint and has the possibility of producing imposter matches.

III. JUSTIFICATION
As a solution, a new cryptosystem using an efficient palmprint-representation-based fuzzy vault is needed to overcome the hacking problem. The palmprint contains mainly three types of information, i.e., texture information, line information, and appearance-based information. A generic online palmprint-based authentication system considers only texture information while ignoring line- and appearance-based information. Thus the use of a single palmprint representation has become the bottleneck in producing high performance. An ideal palmprint-based personal authentication system should be able to reliably discriminate individuals using all of the available information. Recent research shows that the combination of palmprint representations, on the same palmprint image, can provide better performance than either one individually.

IV. RESEARCH QUESTION.

Can we develop a user-friendly and reliable cryptosystem based on multiple palmprint representations?

V. RESEARCH METHODOLOGY
The design steps of the above system consist of four modules:

1. Hybrid Encryption
2. Extraction of spectral and line features of palm print
3. Creation of Fuzzy vault using palm print Features
4. Unlocking the vault and decryption of message

V.1 Hybrid Encryption
A secret file is encrypted using a fast symmetric algorithm; the secret symmetric key is then encrypted using asymmetric cryptography; the ciphertext (encrypted message) and the encrypted key are finally sent to the recipient. Asymmetric cryptography is computationally intensive, but for small messages it is not too slow. The receiver can easily use his/her private asymmetric key to decrypt the symmetric key. The decrypted symmetric key can then be used to quickly decrypt the message file.

The Symmetric Encryption and Decryption

Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key. Symmetric algorithms such as DES, Triple DES, and Rijndael [3] provide efficient and powerful cryptographic solutions, especially for encrypting bulk data.

Let Z = [x1, x2, x3, ..., xm] be the secret message of A, where the m letters of the message are alphabetic characters. The message is intended for B. A generates its symmetric key, say Ks, and uses this key to lock the secret message Z:

Y = Ks(Z).    (1)

A sends the encrypted message and the symmetric key (Ks) to B. B uses the symmetric key to decrypt the message:

Z = Ks(Y).    (2)

In this system the Advanced Encryption Standard (AES) is used for symmetric encryption, which is an advanced version of the Data Encryption Standard (DES). There are a few shortcomings in the usage of symmetric-key cryptography. Maintaining the integrity of received data and verifying the identity of the source of that data are major issues in ensuring the security of data communication. A symmetric key can be used to check the identity of an individual, as it requires the presence of the symmetric key, but this authentication scheme can have some problems involving trust. The problem is that this scheme cannot discriminate between two individuals who know the shared key.

The Asymmetric Cryptosystem

Asymmetric cryptography is a cryptographic approach which involves the use of asymmetric key algorithms. Unlike symmetric-key algorithms, it does not require a secure initial exchange of one or more secret keys between sender and receiver. The asymmetric key algorithms are used to create a mathematically related key pair: a secret private key and a published public key. For this approach, B generates a related pair of keys: a public key Kpub and a private key Kpri. Kpri is known only to B, whereas Kpub is publicly available to everyone and is therefore accessible by A as well. With the message Z and the encryption key Kpub as input, A forms the ciphertext, denoted Y, as follows:

Y = Kpub(Z), where Y = [y1, y2, y3, ..., ym].    (3)

The intended receiver, being in possession of the matching key, is able to invert the above using the following transformation:

X = Kpri(Y).    (4)

The main problem regarding this approach is the distribution of public keys. Having signed the secret document with B's secret key, A must ensure that the public key available is really B's key and not that of an intruder C. The management and security of the private key are also major concerns. The other important problem with asymmetric cryptography is that the processing requires intense use of the processor, as it is computationally intensive and requires a lot of computation.

Why Hybrid Encryption

The advantages of the symmetric approach are utilized to encrypt bulk data, while the asymmetric approach is used to provide authentication/verification for secure communication. Using hybrid encryption, a file containing a bulk amount of data can be encrypted with the AES approach, while the key is in turn encrypted with the public/private keys of the RSA approach. The key, which is encrypted with the public key of the recipient, can only be decrypted with the recipient's private key. Thus we can establish safe communication between the sender and the verified/authenticated recipient. If the key is encrypted with the private key of the sender, it can only be decrypted with the corresponding public key (which is publicly available); this process authenticates the source of the encryption and therefore prevents any possible repudiation or denial by the message generator.

Let A denote the file to be encrypted and KS be the symmetric key used to encrypt the document. In order to encrypt the file A, the AES algorithm can be used. To improve the security of the system, the generated symmetric key is in turn encrypted with the asymmetric approach using the RSA algorithm. Assume that the public and private keys associated with the RSA cryptosystem are denoted by Kpub and Kpri. The following equations summarize the complete procedure:

Y = KS(A),  T = Kpub(KS),  KS = Kpri(T),  A = KS(Y).    (5)
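
A minimal sketch of this hybrid procedure, assuming the PyCryptodome package is available; the key sizes, the AES mode and the variable names are illustrative choices and not those used in this work.

from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.Random import get_random_bytes

def hybrid_encrypt(data, rsa_pub):
    ks = get_random_bytes(16)                           # symmetric key KS
    aes = AES.new(ks, AES.MODE_EAX)
    y, tag = aes.encrypt_and_digest(data)               # Y = KS(A)
    t = PKCS1_OAEP.new(rsa_pub).encrypt(ks)             # T = Kpub(KS)
    return y, tag, aes.nonce, t

def hybrid_decrypt(y, tag, nonce, t, rsa_priv):
    ks = PKCS1_OAEP.new(rsa_priv).decrypt(t)            # KS = Kpri(T)
    return AES.new(ks, AES.MODE_EAX, nonce=nonce).decrypt_and_verify(y, tag)   # A = KS(Y)

key = RSA.generate(2048)
y, tag, nonce, t = hybrid_encrypt(b"secret document", key.publickey())
assert hybrid_decrypt(y, tag, nonce, t, key) == b"secret document"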




V.2 Extraction of spectral and line features of palm print
The palmprint features are extracted from palmprint images acquired with a digital camera using an unconstrained, peg-free setup in an indoor environment.
Finding ROI
To extract the region of interest (ROI) from the palmprint image, the image is binarized using Otsu's method [4]. Then, using the region-pop method, the region of the palmprint in the image is identified. An ellipse is placed around the palmprint region and a rectangle specifying the ROI is plotted using the major and minor axis lengths of the ellipse.


a) Original image b) binarized image c) ROI of palm print
Fig1 Extracting ROI of palm print image
Extracting Texture based Features
As illustrated in Figure 2, each 300 x 300 pixel palmprint image is divided into 24 x 24 pixel overlapping blocks. The extent of this overlapping is 6 pixels. Thus 289 separate blocks are obtained from each palmprint image. The DCT coefficients of each of these N-square blocks f(x, y) are obtained using the standard two-dimensional DCT.



Fig 2 Localization of 289 overlapping palm print image sub
blocks for feature extraction.

The standard deviation of the
coefficients, obtained from each of the overlapping blocks, is
used to represent the region. Thus we obtain a feature vector of
289 values.
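
A compact sketch of this block-wise spectral feature extraction, assuming NumPy and SciPy are available; the block size and 6-pixel overlap follow the description above, while the exact number of blocks depends on how the overlap is interpreted.

import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """Two-dimensional DCT-II of a square block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def spectral_features(image, block=24, overlap=6):
    """Standard deviation of the DCT coefficients of each overlapping block."""
    step = block - overlap
    feats = []
    for r in range(0, image.shape[0] - block + 1, step):
        for c in range(0, image.shape[1] - block + 1, step):
            coeffs = dct2(image[r:r + block, c:c + block].astype(float))
            feats.append(coeffs.std())      # one value per block (289 in the paper's setup)
    return np.array(feats)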

Extracting Line features

Filtering procedures based on image processing techniques are used for extracting the line features. At first, two main filtering algorithms, gradient masks and closing operators, are employed to detect the lines. Then smoothing, merging with binarization, and connected component labeling are applied to eliminate noise in the original images and the noise generated in the processing steps.

Edge detection is done by a gradient operator as the second filter. Masks of size 2x2 in two directions (0° and 90°) are used, as illustrated in Fig. 4. With this small size, edges can be detected easily with simple computation and less processing time. The smoothed image is then convolved with the masks in the 0° and 90° directions to enhance the edges in the two directions.


Closing is one of the morphological operations; it consists of dilation followed by erosion. The edge image is processed by the closing operator with a disk-shaped structuring element to smooth contours and fill small holes.




Fig 3. Steps in extracting Line features
a) Smoothing, closing, merging and binarization steps; b) image obtained after connected component labeling.
After the edges are detected, the principal lines and the closing operation are used. The gradient image and the closing image are combined by merging with an OR operation. This resultant image represents the contrast between object and background. Binarization with a pre-defined threshold is employed to enhance the object against the background. Then the 2-D DCT is applied to the resultant image in the same manner as in the case of the spectral features.
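
A rough sketch of the line-feature pipeline above using scipy.ndimage; the paper does not name a library, and the kernel sizes, threshold and minimum blob size below are illustrative assumptions.

import numpy as np
from scipy import ndimage

def disk(radius):
    """Disk-shaped structuring element for the closing operator."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def line_image(gray, threshold=30, min_blob=50):
    """Gradient masks (0 and 90 degrees) plus closing, merged by OR, binarized and labelled."""
    smoothed = ndimage.uniform_filter(gray.astype(float), size=3)            # smoothing
    gx = ndimage.convolve(smoothed, np.array([[1.0, -1.0], [1.0, -1.0]]))    # 0-degree 2x2 mask
    gy = ndimage.convolve(smoothed, np.array([[1.0, 1.0], [-1.0, -1.0]]))    # 90-degree 2x2 mask
    edges = np.abs(gx) + np.abs(gy)
    closed = ndimage.grey_closing(edges, footprint=disk(3))                  # closing on the edge image
    merged = (edges > threshold) | (closed > threshold)                      # merge with OR, binarize
    labels, n = ndimage.label(merged)                                        # connected component labeling
    sizes = ndimage.sum(merged, labels, range(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes > min_blob)).astype(np.uint8)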
V.3. Creation of Fuzzy vault using spectral palmprint
Features

The hybrid encryption method efficiently combines the ideas of the AES and RSA systems and minimizes the shortcomings associated with both approaches. The other important concern of the system is the management of the private key. At the end of the hybrid encryption, the security of the entire system depends upon the security of the private decryption key. The security of the private key can be ensured by using the concept of the fuzzy vault. Using the fuzzy vault, our main goal is to hide this decryption key using palmprint features, so as to provide some security to the key and make the whole system suitable for practical usage. The combination of cryptographic keys with biometrics removes the extra key-management effort required by the user and ensures that the key is non-transferable. Cryptographic approaches expect the keys to be identical for every attempt at successful access, but this is clearly not the case with a typical biometric. A solution is to use a suitable coding theory scheme which can tolerate errors.
I used the Reed-Solomon (RS) coding scheme for providing some error tolerance during decryption. This error tolerance is essentially required to overcome the variations in palmprint (biometric) features from the same user during decryption. These differences can be attributed to the scale, orientation, and translational variations in the user's palmprint due to peg-free imaging. The RS coding scheme has an error-correcting capacity of (Z-k)/2, where Z is the length of the code and k is the length of the message, and it is used to encode the decryption key Kpri.
Let the codes generated by the R-S coding scheme be of size D. Then a two-dimensional matrix of size D x 4 is generated such that the i-th row of the matrix contains the i-th element of the code. The remaining three places are filled with random numbers generated during encoding. We designate this matrix as matrix F. Further, a matrix of the same size is generated, and the line features of the palmprint are placed in the fourth column of this matrix. The spectral palmprint features are placed at the same positions as in the case of the RS codes. The remaining two places are filled with numbers such that each row forms an arithmetic progression; we call these numbers the tolerance values. We designate this matrix as G. These two matrices are used to form the vault.
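
A structural sketch of the vault construction just described; rs_codes would come from an actual Reed-Solomon encoder (e.g. a library such as reedsolo), the spectral and line features are assumed to be aligned with the code positions, and the column layout is a simplified reading of the text.

import random

def build_vault(rs_codes, spectral_feats, line_feats, tolerance):
    """Build matrices F and G of size D x 4 as described above."""
    F, G = [], []
    for code, s, l in zip(rs_codes, spectral_feats, line_feats):
        F.append([code] + [random.random() for _ in range(3)])        # code symbol + chaff values
        G.append([s, s + tolerance, s + 2 * tolerance, l])            # arithmetic progression + line feature
    return F, G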
V.4 Unlocking the vault and decryption of message
To unlock the vault, we first compare the line features of the input palmprint with the fourth column of matrix G. Only if a match occurs are the spectral features compared with the spectral features of the input palmprint. This comparison is needed to identify the correct positions of the elements in matrix G. Taking the minimum of the distances, we can locate the positions of the actual spectral palmprint features in matrix G, and hence the corresponding positions of the codes in matrix F. Inverse Reed-Solomon decoding is applied to decode the codes. Suitable values of Z and k can be selected to control the errors caused by the differences in palmprint features.
Once the procedure for locking and unlocking the vault is determined, we fix the criteria for authorized users to successfully open the vault while rejecting imposter attempts. The vault is said to open successfully if the codes retrieved from grid F (created from the R-S codes) using the query palmprint features are identically equal to the codes used at the time of locking. The inverse Reed-Solomon decoding is applied to the retrieved codes to obtain the encrypted symmetric key.
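
Continuing the sketch above, unlocking can be outlined as follows; rs_decode stands in for an inverse Reed-Solomon decoder, and the distance-based matching is a simplified reading of the procedure described in this section.

def unlock_vault(F, G, query_spectral, query_line, line_tol, rs_decode):
    """Recover the coded key symbols for a query palmprint and decode them."""
    recovered = []
    for f_row, g_row, qs, ql in zip(F, G, query_spectral, query_line):
        if abs(g_row[3] - ql) > line_tol:                  # primary matching on line features
            continue
        dists = [abs(v - qs) for v in g_row[:3]]           # locate the spectral feature by minimum distance
        recovered.append(f_row[dists.index(min(dists))])   # corresponding position in matrix F
    return rs_decode(recovered)                            # inverse Reed-Solomon decoding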
VI. EXPERIMENTAL RESULTS
The implementation of the system consists of the generation of the RSA public and private keys. A secret document is encrypted using a symmetric key, and that symmetric key is in turn encrypted using the asymmetric key. After this hybrid encryption, the fuzzy vault is created around the private key by generating grids from the palm print features. The performance is evaluated by varying the tolerance value over a range and computing the corresponding false acceptance rate (FAR) and false rejection rate (FRR). The palm print database consisted of left-hand images from 100 users, with two images per user. One palm print image from each user was employed to lock the vault. Successfully opening the vault with the other enrolled image of the same user was considered a correct match, while opening it with the enrolled test images of the other users (i.e., the remaining 99 users) was considered an imposter match. Thus our performance estimation, i.e., FAR and FRR, is based on 99 x 100 imposter attempts and 100 genuine attempts. The false acceptance rate and false rejection rate depend on the choice of tolerance, and we performed several experiments to select the best value of this tolerance.
Table 1 shows how the experimental results vary with the key length and the corresponding tolerance value. Figure 4 illustrates the performance of the proposed palm print-based vault, showing the variation of the FAR and FRR scores with the tolerance. The RSA cryptosystem used in our program produces keys of slightly varying length; the key length varies from 306 to 309 digits. Since cryptographic keys must be reproduced exactly at each application, the authentication rates and performance can vary with the length of the generated key.
Table 1: Some experimental results.
Key length EER (%) Tolerance
306 0.905 1.060
307 0.375 0.995
308 0.752 1.065
309 2.134 1.118
Figure 4: The variations of the FAR and FRR
characteristics with the tolerance for the palmprint-
based cryptosystem.
The biometric technologies investigated for experimental evaluation in the literature have been quite limited, and most of the prior work has focused on fingerprints. The summary of prior work presented in Table 2 suggests that much of the work has been simulated on small datasets; for example, work on fingerprint minutiae features used 9 users and voice recognition used 10 users, which is too small to draw reliable conclusions on performance.
VII. CONCLUSION
The main advantages of this project can be summarized as follows. Firstly, it proposes a new approach for a fuzzy vault using spectral and line features of the palm print biometric. Secondly, it uses a hybrid cryptosystem, which successfully exhibits the advantages of both symmetric and asymmetric cryptography. RSA encryption is known to be very slow compared with the traditional symmetric approach (the Data Encryption Standard, abbreviated DES); therefore the proposed approach uses symmetric cryptography to encrypt the entire document and then encrypts the symmetric key using the asymmetric (RSA) approach. This method eliminates the drawback (imposter matches) of the existing palm print fuzzy vault, which uses only texture DCT features, and the use of hybrid encryption removes the extra key management process.
REFERENCES
1. Amioy Kumar and Ajay Kumar, "Development of a New Cryptographic Construct Using Palmprint-Based Fuzzy Vault," Biometrics Research Laboratory, Department of Electrical Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110 016, India, and Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, July 2009.
2. U. Uludag, S. Pankanti, and A. K. Jain, "Fuzzy vault for fingerprints," in Proceedings of the 5th Audio- and Video-Based Biometric Person Authentication (AVBPA '05), vol. 3546 of Lecture Notes in Computer Science, pp. 310-319, Springer, Hilton Rye Town, NY, USA, July 2005.
3. W. Stallings, Cryptography and Network Security: Principles and Practices, Prentice Hall, Upper Saddle River, NJ, USA, 3rd edition, 2003.
4. A. Kumar, D. C. M. Wong, H. C. Shen, and A. K. Jain, "Personal authentication using hand images," Pattern Recognition Letters, vol. 27, no. 13, pp. 1478-1486, 2006.
5. F. Hao, R. Anderson, and J. Daugman, "Combining crypto with biometrics effectively," IEEE Transactions on Computers, vol. 55, no. 9, pp. 1081-1088, 2006.
6. A. Juels and M. Sudan, "A Fuzzy Vault Scheme," Proc. IEEE Intl. Symp. Inf. Theory, A. Lapidoth and E. Teletar, Eds., p. 408, 2002.
SQL Injection Identification Using BLAH Algorithm


Justy Jameson
Computer Science Specialization In Data Security
Toc H Institute Of Science And Technology
Arakkunam, Kerala
e-mail: justyjameson@gmail.com,
Sherly K K (Associate Professor)
Computer Science Specialization In Data Security
Toc H Institute Of Science And Technology
Arakkunam, Kerala
e-mail: shrly_shilu@yahoo.com


Abstract— Data security has become a topic of primary discussion for security experts. Vulnerabilities are pervasive, exposing organizations and firms to a wide array of risks. In recent years a large number of software systems are being ported to the Web, and platforms providing new kinds of services over the Internet, such as e-health, e-commerce and e-government, are becoming more and more popular. Code injection is a major concern for web security, and SQL injection is among the most dangerous vulnerabilities for Web applications; it is becoming a frequent cause of attacks as many systems migrate to the Web. This paper presents a new approach for detecting SQL injection. We use a static-analysis-based SQL injection detection technique in which we identify hot spots and apply the BLAH algorithm for string comparison. The BLAH approach has high accuracy, and its processing speed is fast enough to enable online detection of SQL injection.
Keywords- SQL injection, BLAH algorithm, Security.
I. INTRODUCTION
Today's web era expects organizations to concentrate more on web application security. The major challenge faced by every organization is to protect its precious data against malicious access or corruption. Program developers generally show keen interest in developing applications for usability rather than incorporating security policy rules. Input validation becomes a security issue if an attacker finds that an application makes unfounded assumptions about the type, length, format, or range of input data; the attacker can then supply malicious input that compromises the application. When the network and host level entry points are fully secured, the public interfaces exposed by an application become the only source of attack. Cross-site scripting attacks, SQL injection attacks [9] and buffer overflows are the major threats to web application security arising from these input validation issues. SQL injection attacks in particular breach database mechanisms such as integrity, authentication, availability and authorization.
The most worrying aspect of a SQL injection attack is that it is very easy to perform, even if the developers of the application are well aware of this type of attack. Input validation issues can allow attackers to gain complete access to the database. Technologies vulnerable to SQL injection attacks are dynamic scripting languages such as ASP, ASP.NET, PHP, JSP, CGI, etc. Researchers have proposed different techniques to provide solutions for SQLIAs (SQL injection attacks), but many of these solutions have limitations that affect their effectiveness and practicality. In this work, an attempt has been made to increase the efficiency of the above techniques with an empirical method for protecting web applications against SQL injection attacks.
The remainder of the paper is organized as follows: Section II presents the basic idea, Section III describes related work, Section IV presents our approach and its architecture, Section V reports experimental results, and Section VI concludes the paper.
II. BASIC IDEA
SQL injection is one of the main issues in database security. It affects the database without the knowledge of the database administrator and may delete the entire database, records or tables without the knowledge of the respective user or administrator. It is a technique used to exploit the database system through vulnerable web applications. These attacks not only allow the attacker to breach security and steal the entire content of the database, but also to make arbitrary changes to both the database schema and its contents. A SQL injection attack may not be recognized as a compromise of information until long after the attack has passed; in many scenarios, the victims are unaware that their confidential data has been stolen or compromised.
Many researchers have proposed various solutions to SQL injection attacks; mainly four types are present. The following paragraphs describe these solutions.
1. Tainted Data Tracking: The main idea of this run-time
protection mechanism is to taint and track the data that
comes from user input. This can be done via instrumenting
the run time environment of web application or interpreter of
the back-end scripting language, e.g., PHP.
2. Static Analysis Based Intrusion Detection: Hotspots, i.e., locations in the back-end program that submit SQL statements, can be easily identified by examining the source code or byte code of a web application. Using a static string
analysis technique, it is possible to construct a regular
expression that conservatively approximates the set of SQL
statements generated at a hotspot. The information can be
used to statically analyze syntax correctness of SQL
statements [1]. In addition, it can be used to model normal
behaviors of a web application.
3. Black-box Testing: By collecting a library of attack
patterns, Y. W. Huang et al. applied black-box testing on web
applications. The defect of the approach is that without prior
knowledge of source code, it is not as effective as white-box
testing to discover non-trivial attacks.
4. SQL Randomization: SQL randomization is basically an
extension of the instruction randomization technique to
defend against code injection attacks. The key idea is to
instrument a web application and append a random integer
number after each SQL keyword in the constant string fragments that are used to dynamically build SQL statements.
III. RELATED WORK
Xiang Fu et al. [1] propose the design of a static analysis framework (called SAFELI) for identifying SQL injection attack (SIA) vulnerabilities at compile time. SAFELI statically analyses the MSIL (Microsoft Intermediate Language) byte code of an ASP.NET Web application using symbolic execution. The main limitation of Xiang et al.'s work is that the approach can discover SQL injection attacks only in Microsoft-based products.
Buehrer et al. [12] propose a mechanism that filters SQL injection in a static manner. It validates SQL statements by comparing the parse tree of a SQL statement before and after user input, and only allows the statement to execute if the parse trees match. They conducted a study using a real-world web application and applied their SQLGUARD solution to it. Although it stopped all of the SQLIAs without generating any false positives, the solution required the developers to rewrite all of their SQL code to use custom libraries.
R. Ezumalai proposes a system against SQLIAs based on a signature-based approach, which has been used to address security problems related to input validation. This approach uses the Hirschberg algorithm [10] to compare the statement with the specification.
Angelo Ciampa et al [4] propose a heuristic based
approach for detecting sql injection vulnerabilities in web
application. They propose an approach and a tool named
VIp3R for web application penetration testing. This approach
is based on pattern matching of error messages.
William G. J. Halfond et al. [2] propose protecting web applications using positive tainting and syntax-aware evaluation. This approach has both conceptual and practical advantages, and the paper introduces a tool, WASP, for SQL injection identification.
Marco Cova et al [14], propose a mechanism to the
anomaly-based detection of attacks against web applications.
Swaddler analyzes the internal state of a web application and
learns the relationships between the application's critical
execution points and the application's internal state.
William G.J. Halfond, Alessandro Orso, Panagiotis
Manolios [15], proposed the mechanism to keep track of the
positive taints and negative taints. This work outlined a new
automated technique for preventing SQLIAs based on the
novel concept of positive tainting and on flexible syntax
aware evaluation.
Livshits et al. [11] propose another static analysis approach for finding SQL injection using a vulnerability-pattern approach, in which vulnerability patterns are described explicitly. The main issue with this method is that it cannot detect SQL injection attack patterns that are not known beforehand.
IV. OUR APPROACH
Our approach is based on static-analysis intrusion detection and uses four modules to detect security issues. A monitoring module takes the input from the web application and sends it to the analysis module. The analysis module finds the hot spots in the web application and uses the BLAH algorithm, a sequence alignment algorithm employed here for string comparison; its processing speed and accuracy are high. The input is compared with a stored SQL injection database: if any SQL injection is identified, the record is audited, otherwise the transaction is allowed. The following figure shows the architecture of the system to prevent SQL injection attacks using the new approach.
[Figure: system architecture — input from the web application passes to the monitoring module and then to the analysis module, which finds the hot spot and runs the BLAH algorithm against the k-tuple table and the stored SQL-injection data; a detected injection triggers alert generation, auditing and stopping of the transaction, while a valid transaction is completed.]
A. Monitoring module
The monitoring module gets the input from the web application and sends it to the analysis module for further checking.
B. Analysis module
The analysis module receives the input from the monitoring module, finds the hot spots in the application, and uses the BLAH algorithm for string comparison.
1) HOT SPOT:
A hot spot is a line of code that receives input from the user and is vulnerable during execution. This step performs a simple scan of the application code to identify hotspots. A hotspot needs to be recognized in the web application code page. To find a hotspot, we look for the mysql_query() function, which sends a query to the currently active database on the server.
<?php require_once('Connection/conn.php'); ?>
<?php mysql_select_db($database_conn); ?>
<?php
if (isset($_POST['submit'])) {
    $user_name = $_POST['username'];
    $user_pwd  = $_POST['password'];
    $check_login = "SELECT * FROM student WHERE uname='$user_name' AND password='$user_pwd'";
    $getresult = mysql_query($check_login);   // hot spot: the query is sent to the active database
}
?>


2) BLAH algorithm:
2.1) BLAST
BLAST comprises three steps: 1) it compiles a list of high-scoring words (neighbouring words) from a given query sequence; 2) each neighbouring word is compared with the database sequences, and if the neighbouring word is identical to a word (sequence fragment) in the database, a hit is recorded; 3) every hit is extended in both directions, and the extension is stopped as soon as the similarity score falls below a threshold value. All extended segment pairs whose scores are equal to or greater than the threshold are retained.
2.2) SSAHA
SSAHA is a two-stage algorithm: 1) a hash table is constructed from the sequences in the database; 2) query words are looked up in the hash table in the second stage. Let Q be the query sequence and D the set of available sequences in the database. Each database sequence Si in D is divided into k-tuples. The hash table is stored in memory as two data structures: a list of positions L and an array of pointers A into L. The pointer A[w] points to the entry of L that describes the position of the first occurrence of the k-tuple w in the database D; the positions of all occurrences of w in D can be obtained by traversing L. In the second stage, the hash table is used to search for occurrences of a query sequence Q within the database. A list of hits is computed and added to a master list M, which is then sorted, first by index and then by shift. The final search is done by scanning through M.
2.3) BLAH (BLAST + SSAHA)
BLAH is a two-stage algorithm: 1) a clustered k-tuple table is created; 2) database similarity regions are found using the k-tuple table.
2.3.1) Constructing the k-Tuple Table:
The database D is converted into a k-tuple table (KT) in the first stage. The KT consists of three attributes, namely Tuple-weight, Sequence-index and Sequence-offset.
Tuple-weight: Every distinct k-tuple is assigned a unique integer value called the Tuple-weight, W = t_1*w^(k-1) + t_2*w^(k-2) + ... + t_k, where t_i is the integer code of the i-th element of the tuple and w is the Sequence-base.
k-tuple: Let S = <s1, s2, s3, ..., sn> be a sequence of length n. Then any k consecutive elements of this sequence form a k-tuple. Two k-tuples are called overlapping if they share some elements between them.
Sequence-base: The number of distinct elements present in the sequences of the database is called the Sequence-base.
Tuple-offset: The position in a sequence where a k-tuple starts is called the tuple-offset.
Sequence-index: If a database has n sequences, then the sequence-index of the i-th sequence in the database is i.
The KT has a clustered index on Tuple-weight, and there can be (Sequence-base)^k distinct Tuple-weight values in the table. The positions of all occurrences of a k-tuple in D can be obtained from the KT. The KT is constructed by making only one scan through D: each sequence Si of length n is broken into n - k + 1 overlapping k-tuples, the Tuple-weight W of each such k-tuple with offset O is calculated using the formula above, and a row [W, i, O] is inserted into the KT. For example, let S1 = <ABCABCAC>, S2 = <CCACACC> and k = 2. The overlapping tuples are S1: AB, BC, CA, AB, BC, CA, AC and S2: CC, CA, AC, CA, AC, CC. Here w = 3 (A, B and C are the three elements in this database) with codes A = 0, B = 1, C = 2, so the Tuple-weight of AB is W = 0*3 + 1 = 1 (a code sketch of this construction follows Table 1).
TABLE 1
DATABASE k-TUPLE TABLE
k-tuple (Tuple-weight)   Sequence-index   Sequence-offset
AB (1)   1   0
AB (1)   1   3
AC (2)   1   6
AC (2)   2   2
AC (2)   2   4
BC (5)   1   1
BC (5)   1   4
CA (6)   1   2
CA (6)   1   5
CA (6)   2   1
CA (6)   2   3
CC (8)   2   0
CC (8)   2   5

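The following Python sketch shows how such a clustered k-tuple table can be built (BLAH stage 1); the function and variable names are illustrative, not taken from the paper.

from collections import defaultdict

def build_ktuple_table(sequences, k, alphabet):
    # Build the k-tuple table KT: Tuple-weight -> list of (Sequence-index, Sequence-offset).
    w = len(alphabet)                                   # Sequence-base
    code = {ch: i for i, ch in enumerate(alphabet)}     # e.g. A=0, B=1, C=2
    kt = defaultdict(list)
    for seq_index, seq in enumerate(sequences, start=1):
        for offset in range(len(seq) - k + 1):          # n - k + 1 overlapping k-tuples
            tup = seq[offset:offset + k]
            weight = sum(code[ch] * w ** (k - 1 - i) for i, ch in enumerate(tup))
            kt[weight].append((seq_index, offset))
    return kt

# Reproduces Table 1 for S1 = ABCABCAC, S2 = CCACACC with k = 2
kt = build_ktuple_table(["ABCABCAC", "CCACACC"], k=2, alphabet=["A", "B", "C"])
for weight in sorted(kt):
    print(weight, kt[weight])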
2.3.2) Query-Sequence Search Method
In the query-sequence search method, a query
sequence is aligned with the existing sequences using
BLAST. The k-tuple table is used here to choose some
database regions on which the alignment is performed. KT,
thus, is useful in reducing the search space for the alignment
process. The query sequence is broken into overlapping k-
tuples and its Tuple-weight is evaluated. A list of Sequence-
index and Sequence-offset is obtained from KT for each k-
tuple in query sequence. The sequence-indexes are ordered
according to the number of distinct k-tuples present in that
sequence. Let us consider a query sequence <ABCACB>, which gives five overlapping 2-tuples <AB>, <BC>, <CA>, <AC>, and <CB>. We find the positions of these 2-tuples in the existing sequences. As <BC> exists in S1 at offsets 1 and 4, the Sequence-index column contains {1, 1} and the Sequence-offset column contains {1, 4} for <BC> in Table 2. The information shown in Table 2 is generated from the k-tuple table (a code sketch of this search follows Table 3).

TABLE 2
QUERY K TUPLES AND THEIR OCCURRENCE
IN DATABASE

Tuple-weight Sequence-index Sequence-offset
AB1 {1,1} {0,3}
BC5 {1,1} {1,4}
CA6 {1,1,2,2} {2,5,1,3}
AC2 {1,2,2} {6,2,4}
CB7 {} {}

Next, the Sequence-indexes are arranged according
to the number of distinct query k-tuples present in the
database sequence. The ordered Sequence-indexes along
with the positions of k-tuples for the above example are
shown in Table 3.

TABLE 3
ORDERED SEQUENCE-INDEXES
Sequence-index   No. of distinct tuples   Offsets
1                4                        <{0,3} {1,4} {2,5} {6}>
2                2                        <{} {} {1,3} {2,4}>

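Continuing the sketch above, the second stage locates the query k-tuples in the table and orders the sequence-indexes by the number of distinct query tuples they contain (cf. Tables 2 and 3); it reuses kt and build_ktuple_table from the previous snippet and is likewise only illustrative.

def search_query(query, kt, k, alphabet):
    # Map each query k-tuple to its occurrences in the database via the k-tuple table.
    w = len(alphabet)
    code = {ch: i for i, ch in enumerate(alphabet)}
    hits, distinct = {}, {}
    for pos in range(len(query) - k + 1):
        tup = query[pos:pos + k]
        weight = sum(code[ch] * w ** (k - 1 - i) for i, ch in enumerate(tup))
        for seq_index, offset in kt.get(weight, []):
            hits.setdefault(seq_index, []).append((pos, offset))
            distinct.setdefault(seq_index, set()).add(tup)
    # Sequence-indexes ordered by the number of distinct query k-tuples they contain
    order = sorted(hits, key=lambda s: len(distinct[s]), reverse=True)
    return {s: hits[s] for s in order}

# Query <ABCACB>: sequence 1 shares 4 distinct 2-tuples, sequence 2 shares 2 (cf. Table 3)
print(search_query("ABCACB", kt, k=2, alphabet=["A", "B", "C"]))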
If a database sequence contains a large number of distinct query k-tuples, it tends to produce a good alignment score. For a given pair of query and database sequences there can be several possible local alignments with alignment scores above a given threshold; BLAST identifies these alignment regions by extending each small hit segment in both directions. The BLAH algorithm may be stopped once a sufficiently high alignment score is achieved.
TABLE 4
SQL INJECTION DATA BASE
1 1 or 1=1
2 1 or 1=1
3 1=1
4 1 EXE SP_(or EXEC XP_)
5 1 AND 1=1
6 1AND 1=(SELECT COUNT
(*)FROM tablenames);--
7 admin and 1=1 --
8 1\1
9 OR uname IS NOT NULL OR
uname =
10 ) or (1=1
11 or 1=1
12 or 1=1
13 or 1=1--
14 or 1=1--
15 or 1=1--
16 OR 1=1 LIMIT 1
17 admin --
18 hior a=a
19 hior 1=1--
20 hior a=a
21 or 0=0 #
22 UNION SELECT
1,Lee,lee_pwd,1--
23 or 0=0 #
24 union all select * from tbl_auth
where id=1--
25 1AND table_test=1
26 1AND table_test=1
27 Admin/*

How the BLAH algorithm is applied to the query statement: the input to the BLAH algorithm is "select * from studentreg where username='$jesna' or '1=1'";//' and password='$password'";, which is a SQL-injected query statement. This query sequence is broken into overlapping k-tuples; here we use k as 3. The token size depends on the sizes of the strings in the SQL injection database: the database strings are grouped by size, and the query is tokenized according to the size of each group, which gives the k-tuple value. We then find the similarity between the query k-tuples and the corresponding group of the SQL injection database. If any similarity is found in the database, we stop the processing of the website request (a simplified sketch of this check follows Table 5).
TABLE 5 SUBSEQUENCES
sel
st her
'je
ele stu ere jes
lec tud re esn
ect ude
e u
sna
ct den us na'
t * ent use a'
* ntr ser ' o
* f tre ern or
fr reg rna or
fro eg nam r '
rom g w ame '1
om wh me= '1=
m s whe e=' 1=1
='j =1'
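The simplified Python sketch below illustrates this hot-spot check; the signature list, the fixed cap of k = 3 and the overlap threshold are assumptions for illustration and stand in for the full BLAH comparison described above.

def ktuples(text, k):
    # All overlapping k-tuples of a string (cf. Table 5)
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def detect_injection(query, signature_db, threshold=0.6):
    # Compare the query's k-tuples with each stored injection signature.
    for sig in signature_db:
        k = min(3, len(sig))
        sig_tuples = ktuples(sig.lower(), k)
        if not sig_tuples:
            continue
        overlap = len(sig_tuples & ktuples(query.lower(), k)) / len(sig_tuples)
        if overlap >= threshold:
            return True, sig          # similarity found: stop the transaction
    return False, None

signatures = ["' or 1=1", "or 1=1--", "' or 'a'='a", "admin'--", "union select"]
query = "select * from studentreg where username='jesna' or '1=1'"
print(detect_injection(query, signatures))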
C) AUDITING MODULE
The auditing module is used to identify which type of SQL injection attack appears in a query. Information about the injection is recorded, an alert is generated for the administrator, and the transaction is stopped. The auditing module also identifies which user generated the SQL injection and warns them or prohibits them from accessing the site.
D) TRANSACTION MODULE
If any SQL injection is found in the hot spot, the transaction is stopped; otherwise it continues. The BLAH algorithm is used to find SQL injection in the web application by comparing the query, via the k-tuple table, with the predefined SQL injection database. If there is no SQL injection in the hot spot, the system then checks whether the transaction is valid: if it is, the transaction is completed, otherwise it is stopped.

V. EXPERIMENTAL RESULTS
We implemented our method in a web application with a MySQL database at the back end and tested the proposed system. Table 6 contains SQL statements arising from legitimate queries to one of the packages of the web application, while Table 7 contains SQL-injected queries arising from attackers. About 200 different types of SQL injection were identified.
TABLE 6: QUERIES
$query = "insert into studentreg values('$uid1', '$uid2', 'accept')";
$query = "insert into tutorreg values('$uid1', '$uid2', 'null')";
$query = "select * from adminreg where username='$uid' and password='$password'";
$query = "select * from studentreg where username='$uid' and password='$password'";
$query = "select * from tutorreg where userid='$uid' and password='$password' and status='accept'";

TABLE 7: SQL INJECTED QUERIES
$query = "insert into studentreg values('$uid1', 'select top1 acct from bank', 'accept')";
$query = "insert into tutorreg values('$uid1', 'select top1 acct from bank', 'null')";
$query = "select * from adminreg where username='admin' or 1=1';// and password=''";
$query = "select * from studentreg where username='$uid' and password='' OR 1=1";
$query = "select * from tutorreg where userid='$uid' and password=' or 1=1// and status='accept'";
VI. CONCLUSION
This paper has presented a novel, highly automated approach for protecting Web applications from SQLIAs. We conclude: stop SQL injections before they stop you, using this method.
REFERENCES

[1] Xiang Fu, Xin Lu, Boris Peltsverger, Shijun Chen, "A Static Analysis
Framework For Detecting SQL Injection Vulnerabilities", IEEE -
2007
[2] William G.J. Halfond, Alessandro Orso,Panagiotis Manolios,
"WASP: Protecting Web Applications Using Positive Tainting and
Syntax-Aware Evaluation", IEEE -2003
[3] Stephen Thomas and Laurie Williams "Using Automated Fix
Generation to Secure SQL Statements", International workshop on
Software engineering and secure system ", IEEE-2006
[4] A heuristic-based approach for detecting SQL-injection
vulnerabilities in Web applications by Angelo Ciampa ,Corrado
Aaron Visaggio ,Massimiliano Di Penta
[5] SruthiBandhakavi, "CANDID: Preventing SQL Injection Attacks
using Dynamic Candidate Evaluations", ACM, 2007.
[6] Ashish kamra, Elisa Bertino, Guy Lebanon, "Mechanisms for
database intrusion detection and response", Data security & privacy,
Pages 31-36, ACM, 2008.
[7] SQLIPA: An Authentication Mechanism Against SQL Injection by
Shaukat Ali ,Azhar Rauf,Huma Javed
[8] Z. Su and G. Wassermann, "The Essence of Command Injection
Attacks in Web Applications", 33rd ACM, 2006
[9] C. Anley, "Advanced SQL Injection in SQL Server Applications", Next Generation Security Software Ltd., white paper, 2002.
[10] A linear Algorithm for Computing Maximal Common Subsequences
by D.S. Hirschberg Princeton University.
[11] V.B. Livshits and M.S. Lam, "Finding Security vulnerability in java
applications with static analysis", In proceedings of the 14th Usenix
Security Symposium, Aug 2005.
[12] G.T. Buehrer, B.W. Weide and P.A.G. Sivilotti, "Using Parse Tree Validation to Prevent SQL Injection Attacks", in Proc. of the 5th International Workshop on Software Engineering and Middleware (SEM '05), pages 106-113, Sep. 2005.
[13] A. Nguyen-tuong, S. Guarnieri, D. Greene, J.Shirley, and D. Evans,
"Automatically hardening web applications using Precise Tainting",
In Twentieth IFIP Intl, Information security conference(SEC 2005),
May 2005.
[14] Mehdi Kiani, Andrew Clark and George Mohay, "Evaluation of Anomaly Based Character Distribution Models in the Detection of SQL Injection Attacks".
[15] W.G. J. Halfond and A. Orso, "Combining Static Analysis and Run
time monitoring to counter SQL Injection attacks", 3rd International
workshop on Dynamic Analysis, St. Louis, Missouri, 2005, pp.1.
[16] William G.J. Halfond, Alessandro Orso,Panagiotis Manolios,
"WASP:Protecting Web Applications Using Positive Tainting and
Syntax-AwareEvaluation", IEEE Transaction of Software
Engineering Vol 34, Nol,January/February 2008.
[17] R. Ezumalai and G. Aghila Combinatorial Approach for Preventing
SQL Injection Attacks IEEE-2009
[18] BLAST-SSAHA Hybridization for Credit Card Fraud Detection
Amlan Kundu, Suvasini Panigrahi, Shamik Sural, Senior Member,
IEEE, and Arun K. Majumdar, Senior Member

Interaction of STEM and SPAN Topology Management Schemes for Improving Energy
Conservation and Lifetime in Wireless Sensor Network

R.SHARATH KUMAR
Department of ECE
Sri Sivasubramaniya Nadar College of Engineering
Chennai, India
sharathprem@gmail.com

A.JAWAHAR
Department of ECE
Sri Sivasubramaniya Nadar College of Engineering
Chennai, India
jawahara@ssn.edu.in

Abstract— A Wireless Sensor Network is composed of a large number of sensor nodes densely deployed in the field. These nodes monitor the environment, collect data and route it to the sink. The main constraint is that the nodes in such a network have batteries of limited energy, and this limited energy reduces the network lifetime. There are various topology management schemes such as SPAN, STEM, GAF, BEES, etc., for improving network parameters such as energy, lifetime and coverage, but none of them improves all of these parameters. In Sustainable Physical Activity in Neighbourhood (SPAN), some of the nodes become coordinators to form the backbone path, and only they forward messages; non-coordinator nodes check periodically whether they should become coordinators. SPAN preserves network connectivity and capacity and decreases latency, but provides less energy saving. The Sparse Topology and Energy Management scheme (STEM) improves network lifetime by putting the sensor nodes either in a monitoring state or in a transfer state; STEM does not try to preserve capacity, resulting in great energy savings but also in high latency. In order to use sensor networks effectively, the merits of STEM and SPAN are combined and the performance is analysed. With the proposed scheme we obtain better performance in terms of energy conservation and network lifetime: the energy conserved in the network by the combined scheme is greater than with SPAN, and the lifetime improvement factor of the combined scheme is more than 3.5 compared with the network without any topology scheme.

Keywords Sensor Networks, STEM, SPAN, Topology, Energy
conservation, Lifetime Improvement factor
I. INTRODUCTION
A sensor node consists of a processing unit, a transceiver unit, a sensing unit and a power unit. A Wireless Sensor Network [4] consists of a large number of sensor nodes deployed in the field. These nodes monitor the environment, detect whether any event occurs and send the corresponding information to the sink. In sensor networks the nodes operate on batteries of limited power, and in remote locations it is very difficult to replace or recharge the battery frequently. The transceiver unit consumes more power than the other units, so we have to use the sensor nodes efficiently by putting the transceiver unit in the off state; for this purpose various topology schemes have been proposed. All these topology schemes aim at improving some network parameters at the cost of others. Sustainable Physical Activity in Neighbourhood (SPAN) [1] is a topology scheme in which a few nodes, called coordinators, are in the active state and form a backbone path. Because of the definite backbone path, latency is low in SPAN; its drawback is that it provides less energy conservation and a shorter system lifetime. Sparse Topology and Energy Management (STEM) [3] is another topology scheme in which each node has two radios, namely a data-plane radio and a wake-up-plane radio. Usually the data-plane radio is in the off state; if any event occurs, the wake-up-plane radio sends a wake-up message and thereby turns on the data-plane radio of another node. The main advantage of STEM is greater energy conservation and longer system lifetime; its drawbacks are higher latency and lower capacity. In this paper we deploy 80, 90, 100, 110 and 120 nodes in field sizes of 60m*60m, 85m*85m and 105m*105m (15 scenarios) and analyse the interaction of STEM and SPAN in all these scenarios.
The rest of the paper is organised as follows. Section II deals mainly with the two topology management schemes, SPAN and STEM. In Section III, the proposed (combined) scheme combining STEM and SPAN is discussed in detail. Section IV presents the results and analysis of the combined scheme in terms of energy conservation and lifetime.
II. TOPOLOGY MANAGEMENT SCHEMES: SPAN AND STEM
There are various topology management schemes [5] in Wireless Sensor Networks, such as SPAN [1], STEM [3] and GAF [12], used to improve network parameters such as energy, lifetime and capacity. All of these schemes improve some network parameters at the cost of others. In this context, we focus on two topology schemes, namely SPAN and STEM.
A. SUSTAINABLE PHYSICAL ACTIVITY IN NEIGHBOURHOOD
(SPAN)
SPAN [1] is a topology management scheme in Wireless
Sensor Networks which is based on the concept of
coordinators and non-coordinators. In Wireless Sensor
Networks, all the nodes will be active thereby wasting energy.
Hence we have to put only few nodes in active state and
others in sleep state. The nodes which are transferring the data
will be put in active state called coordinators and they form a
definite backbone path in the network. The nodes we put in the sleep state are non-coordinators. SPAN ensures that enough coordinators are elected so that every node is within radio range of at least one coordinator. Only a few nodes are coordinators at any time, so the network lifetime is increased. A node becomes a coordinator based on its energy level and on how many of the other nodes would benefit from it being awake. A non-coordinator node becomes a coordinator according to the coordinator eligibility rule: if two neighbours of a non-coordinator node cannot reach each other, then one of the non-coordinator nodes becomes a coordinator.
Let E_x be the maximum amount of energy available at a node and E_y the remaining amount of energy at the node. A node with a higher ratio E_y/E_x has a greater chance of becoming a coordinator than other nodes. As a definite backbone path exists, the time taken to deliver data to the sink is short. The nodes that do not transfer any data are put in the sleep state; they are the non-coordinators. The nodes alternate between the coordinator and non-coordinator states over time: a node is a coordinator for a particular time interval and later withdraws, whereupon some other node becomes a coordinator. Each coordinator withdraws when its energy level falls. A node that is going to withdraw does not withdraw immediately but waits for some time until another node becomes a coordinator; this is done to ensure connectivity. Thus an equal chance is given to all the nodes to become coordinators. SPAN provides only a modest amount of energy saving; its main advantage is that it preserves capacity and has low latency.
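A minimal Python sketch of this coordinator eligibility check is shown below; the data structures (energy dictionary, neighbour sets) and the ranking by E_y/E_x are illustrative assumptions rather than the exact SPAN implementation.

def eligible_coordinators(nodes, links, coordinators):
    # nodes: node_id -> (remaining energy E_y, maximum energy E_x)
    # links: node_id -> set of neighbour ids within radio range
    # coordinators: current set of coordinator ids
    eligible = []
    for n, (e_y, e_x) in nodes.items():
        if n in coordinators:
            continue
        neigh = links[n]
        # eligible if some pair of its neighbours cannot reach each other directly
        # or through an existing coordinator
        needs_help = any(
            b not in links[a]
            and not any(c in links[a] and b in links[c] for c in coordinators)
            for a in neigh for b in neigh if a != b)
        if needs_help:
            eligible.append((e_y / e_x, n))
    # nodes with a higher remaining-energy ratio are preferred as coordinators
    return [n for _, n in sorted(eligible, key=lambda t: t[0], reverse=True)]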
B. SPARSE TOPOLOGY AND ENERGY MANAGEMENT (STEM)
STEM [3] is Sparse Topology and Energy Management
scheme in Wireless Sensor Networks. Usually in Wireless
Sensor Networks, each and every node will have a single
radio whereas in STEM, it has two radios namely data plane
and wake up plane. The main idea of STEM is to turn on the
sensing and processing circuitries and turn off the transceiver
unit. If any event occurs, then the sensor nodes will wake up
its radio and then transmits the data via many hops to the sink
Here the problem is that the nodes radio of the next hop in
the path to the sink will be turned off. To overcome this
problem, each and every node periodically turns on its radio
for a short time to check whether any of the other nodes wants
to communicate with it. A node which wants to communicate
with other node is Initiator node and node which is been
communicated is Target node.
The initiator node first starts sending beacons to the target node, and after receiving the beacons the target node responds. Once both nodes have turned their radios on, a link is established between them and data is transferred. If the packet is intended for another node, the target node becomes an initiator and sends the packet to the node at the next hop towards the sink, and this process is repeated. To avoid interference between the wake-up beacons and the data transmission, the transceiver uses dual radios operating in different frequency bands. The frequency band f_w (wake-up-plane radio) is used to transmit the wake-up messages. Once the target node has received a wake-up message, both nodes turn on their radios operating in the frequency band f_d; the data packets are transmitted in this band, which is called the data plane (f_d). STEM provides more energy savings, which means a longer network lifetime; its main drawback is higher latency, since there is no definite backbone path.
III. COMBINING STEM AND SPAN (PROPOSED SCHEME)
Here the SPAN and STEM topology management schemes are combined using the dual radio concept and the coordinator election algorithm. We set the radio range of each sensor node to 20 m and deploy the nodes uniformly in the field. We run the SPAN coordinator election algorithm with the new coordinator eligibility rule for the uniform deployment of nodes, and we make use of STEM's dual radio concept in the SPAN network to reduce the energy consumption of the nodes by using a low duty-cycle radio. Since a definite backbone path is set up, the latency of the proposed scheme is low.
A. Uniform Deployment Of Nodes
Nodes are deployed uniformly in the field, thereby covering the entire field. This uniform deployment forms many hexagonal segments and is used in many static applications. Hence we deploy the nodes uniformly, with the distance between two nodes in the horizontal direction denoted r and the distance between two nodes in the vertical direction denoted h. The radio range is R = 20 metres.
The nodes which are placed uniformly at the corners of
hexagon [7] are shown in Fig. 1. Here for a fixed field size,
we can calculate r and h as follows.


Fig. 1 Nodes deployed uniformly at the corners of hexagon
By the Pythagorean theorem,

R^2 = (r/2 + r + r/2 + r/2)^2 + (√3·r/2 + √3·r/2 + √3·r/2)^2    (1)

R^2 = (5r/2)^2 + (3√3·r/2)^2    (2)

R^2 = 13 r^2    (3)

R = √13 · r    (4)

To evaluate the upper bound of the lifetime of the sensor nodes, the side r of the hexagon is set to

r = R / √13    (5)

This is the distance between two nodes in the horizontal direction. Similarly, the distance between two nodes in the vertical direction is given by

h = √3 · r / 2    (6)
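A small Python check of these spacing formulas (eqs. (5) and (6)) for the 20 m radio range used in this paper:

import math

def hexagon_spacing(radio_range):
    r = radio_range / math.sqrt(13)   # horizontal spacing between neighbouring nodes, eq. (5)
    h = math.sqrt(3) * r / 2          # vertical spacing between rows of nodes, eq. (6)
    return r, h

r, h = hexagon_spacing(20)            # R = 20 m
print("r = %.2f m, h = %.2f m" % (r, h))   # approximately r = 5.55 m, h = 4.80 m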
B. Coordinator Eligibility Rule:
A node becomes a coordinator if it has the maximum
energy and also if two neighbours cannot reach each other
either directly or through existing coordinator. Here we
propose a new coordinator eligibility rule for uniform node
deployment thereby there is reduction in energy and reduction
in number of coordinators. The new coordinator eligibility
rule is as follows: We deploy nodes uniformly in the field and
initially all the nodes will have equal energy and hence by
topology, a coordinator is selected. After a coordinator has
been selected initially, remaining node will become
coordinator only when it has maximum energy. Hence the first
rule is to check the amount of energy available. The second
rule is that after checking for amount of energy available, it
will also see how many of its neighbours will benefit from
being awake. If two neighbours cannot reach other directly or
through existing coordinator, then that node becomes a
potential contender to become a coordinator. Each and every
node should have two radios and should be within radio range
of atleast one coordinator.
Let the number of coordinators be C and the number of non-coordinators be N_c. The total number of nodes N is

N = C + N_c    (7)

For N nodes, the coordinator node ratio, μ, is given by

μ = C / N    (8)

The non-coordinator node ratio is given by

N_c / N = 1 − μ    (9)

Thus, the number of coordinators C is given by

C = μ N    (10)

Hence μN nodes will be in the on state and (1 − μ)N nodes will be in the off state. For a field size of X*X, the area of the field is A_f = X^2 (m^2). The number of coordinators is given by

C = 2 A_f / A_h    (11)

where A_h is the area of a hexagon:

C = 2 X^2 / (6 · (√3/4) · R^2)    (12)

Here the radio range is R = 20 m. Thus the number of coordinators is

C = 2 X^2 / (600 √3)    (13)
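A short Python sketch of this coordinator-count estimate (eqs. (11)-(13)); the function name and the loop over the paper's field sizes are illustrative.

import math

def coordinator_count(field_side, radio_range=20.0):
    a_field = field_side ** 2                              # A_f = X^2
    a_hex = 6 * (math.sqrt(3) / 4) * radio_range ** 2      # A_h, hexagon area used in eq. (12)
    return 2 * a_field / a_hex                             # eq. (11)

for side in (60, 85, 105):                                 # field sizes used in the paper
    print(side, round(coordinator_count(side), 1))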
C. Dual Radio Concept:
In the proposed scheme we use the dual radio concept [11] of STEM: each node has two radios, a data-plane radio and a wake-up-plane radio. The wake-up-plane radio is kept on, whereas the data-plane radio is turned off. We run the election algorithm so that coordinators are elected; these coordinators form a definite backbone path, which keeps latency low. Only coordinators are in the active state, while the other nodes are in the sleep state. With the dual radio concept, only the wake-up-plane radio of a coordinator is on and its data-plane radio is off, so more energy is conserved.
IV. RESULTS AND ANALYSIS
We implemented SPAN and then combined SPAN and
STEM and thereby performance is analysed. We can see that
the combined scheme has better performance compared to
SPAN.
A. Simulation Environment
We simulated our work using Network simulator (NS2). In
our simulations we have considered uniform deployment of
nodes over the sensor field. In our simulation we have chosen
the transmission range as R = 20 m, which has the radio
characteristics as shown in the table I. In our simulations,
traffic loads are generated by constant bit rate (CBR) flows.
TABLE I
RADIO CHARACTERISTICS OF 20M RADIO RANGE NODE
Radio Mode Power consumption(W)
Transmit 0.01488
Receive 0.01250
Idle 0.01236
Sleep 0.000016



Fig. 2 Uniform deployment of 100 nodes

Here 80, 90, 100, 110 and 120 nodes are deployed uniformly in field sizes of 60m*60m, 85m*85m and 105m*105m; hence 15 scenarios are considered. We consider a simulation time of 600 seconds. Fig. 2 shows the uniform deployment of 100 nodes.
B. Coordinator Election Of Combined Scheme:
SPAN is a power saving coordination technique which
makes use of coordinator election algorithm. We run the
SPAN coordinator election algorithm and find the number of
coordinators for various field sizes. We then integrate SPAN
and STEM and run the coordinator election algorithm and
thereby coordinator announcement and withdrawal takes place.
We can infer from Fig. 3 that combined scheme has almost
same number of coordinators as that of SPAN as combined
scheme uses same coordination algorithm as that of SPAN.
[Figure: number of coordinators (y-axis, 20 to 50) versus number of nodes (x-axis, 80 to 120) for the combined scheme and the SPAN scheme in the 60m*60m, 85m*85m and 105m*105m fields]
Fig. 3 Coordinators versus number of nodes

Also we can infer from Fig. 3 that as the number of nodes
increases, the coordinators will almost remain constant
because only a smaller fraction of nodes will become
coordinators. We can see that as the field size increases, the
coordinators will increase since large number of nodes will
become coordinators for larger field size.
C. Energy Conservation
This section evaluates the ability of the combined scheme
to conserve energy compared to SPAN. Here each and every
node has initial energy as 1000J. We implemented SPAN and
found the energy consumed by each node after simulation. We
also found the average energy consumed in the network.
Hence the average energy conserved in the network, E_conserved, is given as

E_conserved = E_total − E_consumed    (14)
Similarly we combined STEM and SPAN and find the
average energy conserved in the network. We vary the number
of nodes and we find the values of energy conserved in
network. Then we plot a graph between number of nodes and
energy conserved in network. As the number of nodes
increase, the energy conserved in the network also increases.
We have deployed in three field sizes as 60m*60m, 85m*85m,
105m*105m as shown in Fig. 4, 5, 6 respectively.

TABLE II
ENERGY CONSERVATION IN VARIOUS SCENARIOS

Number of nodes    60m*60m field size    85m*85m field size    105m*105m field size
                   SPAN    COMBINED      SPAN    COMBINED      SPAN    COMBINED
80 55380 57400 55220 57200 54960 56780
90 62500 65600 62200 65300 61900 65100
100 69700 72900 69400 72600 69100 72300
110 77400 80700 76800 80500 76200 80200
120 83960 88180 83640 87960 83340 87840

We obtain the energy conserved in all the scenarios and it is
given in Table II. From Table II, we plot the graphs as
shown in Fig 4, 5, 6.

[Figure: total energy conserved in the network (y-axis, x 10^4 J) versus number of nodes (x-axis, 80 to 120) for the SPAN and combined schemes in the 60m*60m field]
Fig. 4 Energy conservation in network for 60m*60m field

[Figure: total energy conserved in the network (y-axis, x 10^4 J) versus number of nodes (x-axis, 80 to 120) for the SPAN and combined schemes in the 85m*85m field]
Fig. 5 Energy conservation in network for 85m*85m field

In Fig. 4, 5, 6 we can see that the combined scheme
conserves more energy compared to SPAN scheme. Table II
shows the energy conserved in various fields. Thus we can see
that the combined scheme conserves around 4000 joules more
than SPAN scheme in all the fields.

[Figure: total energy conserved in the network (y-axis, x 10^4 J) versus number of nodes (x-axis, 80 to 120) for the SPAN and combined schemes in the 105m*105m field]
Fig. 6 Energy conservation in network for 105m*105m field
D. LIFETIME IMPROVEMENT FACTOR:
Combined Scheme extends the network lifetime compared
to SPAN. As the number of nodes increases, the lifetime
improvement factor of the network increases for all the field
size. The combined scheme has lifetime improvement factor
of more than 3.5 compared to the network without any
topology. Lifetime improvement factor is directly related to
energy conservation. Here there is more lifetime improvement
as the field size is reduced and also when more nodes are
deployed. From Fig. 7, we can infer that combined scheme
has more lifetime improvement factor than SPAN scheme.

[Figure: lifetime improvement factor (y-axis, 3.0 to 4.0) versus number of nodes (x-axis, 80 to 120) for the combined and SPAN schemes in the 60m*60m, 85m*85m and 105m*105m fields]
Fig. 7 Lifetime improvement factor in all the fields
V. CONCLUSION
In this work, two different topology management schemes, Sparse Topology and Energy Management (STEM) and Sustainable Physical Activity in Neighbourhood (SPAN), were analysed. The two schemes were then combined and the performance was evaluated. We implemented SPAN by deploying 80, 90, 100, 110 and 120 nodes uniformly in field sizes of 60m*60m, 85m*85m and 105m*105m. We also combined the STEM and SPAN topology management schemes and analysed the energy conserved in the network in all the scenarios. In both cases, as the number of nodes increases, the total energy conserved in the network increases in all the fields. The combined scheme results in more energy conservation than SPAN because of the dual radio concept, and it also has a higher lifetime improvement factor than SPAN. Thus, in our work we obtain better performance in terms of energy and lifetime. Further, other network parameters such as latency and capacity can also be analysed, and this work can be extended to three-dimensional sensor networks.
REFERENCES
[1] B. Chen, K. Jamieson, H. Balakrishnan and R. Morris (2002), Span:
An Energy-Efficient Coordination Algorithm for Topology
Maintenance in Ad Hoc Wireless Networks, ACM Wireless Networks,
Springer, 8.
[2] Cerpa, A. and D. Estrin (2002), ASCENT: Adaptive Self-Configuring
sEnsor Networks Topology, in Proceedings of the Twenty First
International Annual Joint Conference of the IEEE Computer and
Communications Society (INFOCOM)
[3] Curt Schurgers, Vlasios Tsistsis and Mani B.Srivastava (2001), STEM:
Topology Management for Energy Efficient Sensor Networks,
IEEEAC paper.
[4] I.F. Akyildiz, M.C.Vuran (2010),Wireless Sensor Networks,
Published by John Wiley Publishing Company
[5] I.F. Akyildiz, W. Su , Y.Sankarasubramaniam and E. Cayirci (2002),
Wireless Sensor networks: a survey, Published by Elsevier Science
B.V, Computer Networks, vol.38, pp.392-422
[6] M.A.Labrador, P.M.Wightman, Topology Control in Wireless Sensor
Networks, Published by Springer
[7] Ren Ping Liu, Glynn Rogers, Sihui Zhou, John Zic, Topology Control
with Hexagonal Tessellation, Feb 16, 2003
[8] Rong Yu, Zhi Sun, Shunliang Mei(2007), Scalable Topology and
Energy Management in Wireless Sensor Networks, in the proceedings
of the IEEEWNCA.
[9] T.Manimekalai, Dr.M.Meenakshi and P.Saravanaselvi, RA-SPAN
Protocol for Improving QOS Performance in Wireless Ad Hoc
networks, IEEE 2009
[10] Vivek Kumar, Thenmozhi Arunan, N. Balakrishnan, E-SPAN:
Enhanced-SPAN with Directional Antenna, TENCON 2003.
[11] Vlasios Tsiatsis in Topology Management for Sensor Networks:
Exploiting Latency and Density IEEEAC paper.
[12] Y. Xu, J. Heidemann and D. Estrin (2001), Geography-informed
energy conservation for adhoc routing, MOBICOM01, Rome, Italy,
pp.70-84.
Adaptive Data Hiding Scheme for 8-bit color images
K. Kannadasan (1), C. Vinothkumar (2)
(1) M.E (Communication Systems), SSN College of Engineering, Chennai, Tamil Nadu, India. kannadasan.ec@gmail.com
(2) Assistant Professor, ECE Department, SSN College of Engineering, Chennai, Tamil Nadu, India. vinothkumarc@ssn.edu.in

Abstract— Steganography is the process of hiding one file or piece of data inside another file such that others cannot identify the meaning of the embedded object or even recognise its existence. One of the most common methods for steganography is the Least Significant Bit (LSB) insertion method, in which the LSB of each pixel of the cover image is replaced with a secret message bit to obtain the stego output. Altering the LSB causes only very minor changes in the pixel values of the colour stego-image that are not noticeable to the human eye. This technique works better for 24-bit colour images than for 8-bit colour images, owing to the limited colour variations of the latter. In this method, the secret bits are hidden mostly in edge areas rather than in smoother areas of the cover image, which results in larger variations in the edge areas. To avoid these changes and to achieve better stego-image quality, a novel image data hiding technique based on an adaptive Least Significant Bits (LSBs) substitution scheme can be used. In this method, edge areas are treated as tolerating smaller changes than highly textured or smooth areas; in fact, the method embeds more secret data into noise non-sensitive (smooth) areas than into noise-sensitive (edge) areas. The main objective of this paper is to reduce the variations in 8-bit hidden colour images compared with 24-bit colour images and to achieve a higher embedding capacity than existing methods.

Keywords Adaptive Least Significant Bits Substitution, High
Embedding Capacity, stego-image quality, Human Visual System.
I. INTRODUCTION
Steganography is the art of invisible communication
accomplished by hiding the secret message in other
information. In the word Steganography, stegos means
cover and grafia means writing known as concealed
writing. Through this only the sender and receiver are aware
of the hidden data and if the loaded file falls into the hands of
anyone else they wouldnt suspect the hidden data. Image data
embedding has been a popular research issue in recent years.
It concerns mainly about embedding some proprietary data
into digital media for the purpose of identification, annotation,
and message transmission. Applications can often be found in
two fields: one is the digital watermarking which provides
protection of intellectual property rights and the other is the
hiding of secret data within a host or cover signal. Both are
constrained with a minimum amount of perceivable
degradation to the host signal, which can be an image, audio,
or video.
The common method for steganography is Least
Significant Bit (LSB) Insertion method in which the LSB of
each pixel of cover image is altered with secret message bit to
obtain the stego output. Altering the LSB causes minor
changes in color of stego image and the secret message is not
noticeable to the human eye [2]. This technique works well
for 24-bit color image when compared to 8-bit color file due
to limitations in color variations. This causes artificial noises
in the smooth regions of the image causing degradation to the
visual quality of stego image and also fails to give high
embedding capacity [1].
An Optimal Pixel Adjustment Process (OPAP) is used to improve the efficiency and enhance the visual quality of the stego image generated by simple LSB substitution. Let p_i, p'_i and p''_i be the values of the i-th pixel in the cover image C, the stego-image C' obtained by simple LSB substitution, and the refined stego image obtained after OPAP, respectively. The embedding error between p_i and p'_i is given by delta_i = p'_i − p_i.
The LSB technique replaces the same number of bits in every pixel, but not all pixels in an image can tolerate equal amounts of change without noticeable distortion; therefore the stego image attains low visual quality when the LSBs of all pixels are changed equally. To overcome this problem, LSB-based methods are combined with Human Visual System (HVS) masking characteristics: the secret data are embedded into a variable number of LSBs of each pixel according to a piecewise mapping function derived from the HVS. To determine the embedding capacity of each pixel, the luminance of the residual image formed from the highest bits is considered. To improve the quality of the stego-image and to determine the payload of each pixel, the PVD method, based on the difference value of two neighbouring pixels, can also be used; however, its embedding capacity is very low [6], [7].
To overcome these limitations, Multi-Pixel Differencing (MPD) can be used to estimate the smoothness of each pixel. The text bits are embedded into two consecutive pixels by the LSB replacement method if the difference between the pixel values lies in the lower level (smooth areas), and by the PVD method if the difference lies in the higher level (edge areas). This scheme is used to estimate the hiding capacity of the two pixels, and it embeds more secret data into edge areas than into smooth areas of the host image. In order to further increase the embedding capacity and improve efficiency, the adaptive LSB substitution method together with human visual masking can be used.
Section II discusses the basic adaptive LSB substitution and human visual masking systems. In Section III the proposed method of using adaptive LSB substitution for hiding text data and a colour image in a colour cover image is explained in detail. Results and discussion are presented in Section IV, and the conclusion of this paper is given in Section V.
[Figure: the cover image and the text data are fed to the embedding algorithm, which produces the stego image; the extraction algorithm recovers the text data and the cover image from the stego image]
Fig. 1 Block diagram of steganographic technique
II. ADAPTIVE LSB SUBSTITUTION METHOD
Most of the steganography methods hide the secret bits
more in the edge areas than in smooth areas of the cover
image. This causes serious degradation in actual edge areas. In
order to avoid these abrupt changes in the edge areas and also
to achieve better quality of the stego-image, a novel image
data hiding technique by Adaptive LSB substitution scheme
can be used. The scheme determines the brightness, edge, and texture masking of the cover image to estimate the number of bits that can be changed in each pixel. The text bits are hidden more in the noise non-sensitive regions than in the noise-sensitive regions [3], [4].
A. Human visual masking system
Designing image-adaptive steganographic schemes using human visual perception can achieve higher capacity and lower distortion. Through this method we can exploit the luminance masking, texture masking and edge masking features of the high-order image generated by extracting the high-order bits of the original image, and thereby create a spatial HVS masking model [5]. Human eyes have different sensitivity to different luminance levels and are more sensitive to changes in areas with middle-level luminance. The luminance masking of each pixel can be given as

α(i,j) = |x(i,j) − 128| / 128    (1)
where x(i,j) is the pixel value of the host image I. For a 256-level image block of size (2l + 1) x (2l + 1), the maximum entropy value H_max is computed as

H_max = 2 log2(2l + 1)    (2)

Using H_max as the normalization factor, the normalized entropy β(i,j) of each pixel can be obtained as

β(i,j) = H(i,j) / H_max    (3)

Edges areas have greater variance value, while smooth areas
have smaller variance value. Z (i,j) be the variance of the
image block centered by the pixel I (i,j) to indicate the edge
feature. The maximum variance Z
max
is given by

2
max
) 2 / (Z Z
(4)

where Z is the maximum permissible gray scale value. The
normalized variance (i,j) is represented as follows

max
/ ) . ( ) , ( Z j i Z j i (5)

Based on all the above considerations, the effect of HVS
masking characteristics is expressed by the formula

) , ( ) , ( ) , ( ) , ( j i j i x j i j i (6)

The bit depth k(i, j) of each pixel that can be used for data
hiding can be computed by the following formula

1 )) min( ) (max(
) min(
) , ( ) 1 7 ( ) , ( ' j i x r j i k
(7)

where r represents the highest bits number of each pixel used
to calculate the hiding capacity in each pixel and lies in the
range 0 r 4, 1 k(i,j) 7.
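A hedged Python/NumPy sketch of this bit-depth computation is shown below; the 5x5 window size, the use of the full pixel value for the luminance term and the residual image for the entropy and variance terms are assumptions made for illustration.

import numpy as np

def bit_depth_map(image, r=4, block=5):
    # image: 2-D uint8 array (one colour component); returns k(i,j) in 1..7
    x = image.astype(float)
    resid = (image >> (8 - r)).astype(float)        # residual image from the r highest bits
    alpha = np.abs(x - 128.0) / 128.0               # luminance masking, eq. (1)

    h, w = image.shape
    pad = block // 2
    padded = np.pad(resid, pad, mode="edge")
    beta = np.zeros_like(x)                          # normalised entropy, eq. (3)
    gamma = np.zeros_like(x)                         # normalised variance, eq. (5)
    h_max = 2 * np.log2(block)                       # eq. (2), block = 2l + 1
    z_max = (255.0 / 2) ** 2                         # eq. (4)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + block, j:j + block]
            _, counts = np.unique(win, return_counts=True)
            p = counts / counts.sum()
            beta[i, j] = -(p * np.log2(p)).sum() / h_max
            gamma[i, j] = win.var() / z_max

    lam = alpha * beta * gamma                       # eq. (6)
    rng = lam.max() - lam.min()
    k = np.floor(6 * (lam - lam.min()) / (rng if rng else 1.0)) + 1   # eq. (7)
    return k.astype(int)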
III. PROPOSED METHOD
The data hiding and extraction scheme for hiding text data in a color image is proposed below.
A. Data hiding algorithm
In the data hiding algorithm, the stego image is obtained from the original color image by the following steps.

Step 1: The original color image I is first divided into RGB
component. Convert each component to corresponding 8-bit
binary values.
Step 2: For each character in text data, convert each character
to corresponding 8-bit binary values.
Step 3: Apply HVS masking technique to generate matrices
for RGB components.
Step 4: Generate a pseudo random sequence p of 0s and 1s
from a secret key, defined as

p = { p(i) | 0 ≤ i ≤ t, p(i) ∈ {0, 1} }                (8)

The ultimate secret data w' for hiding is obtained by applying
the element-wise XOR operation between the original secret
message w and the pseudo random sequence p, represented as

w'(i) = w(i) ⊕ p(i),  0 ≤ i ≤ t                (9)
Step 5: For each value n in the matrices generated in Step 3,
replace the n least significant bits of the corresponding pixel
with the next n secret data bits.
Step 6: Convert the resultant binary values to corresponding
decimal values. For example, if b(i, j) = 1001, then d(i, j) = 9.
Step 7: Combine the RGB component values to obtain the stego
image; the resulting RGB component value of each stego pixel
is expressed as

x'(i, j) = x(i, j) − x(i, j) mod 2^k(i,j) + d(i, j)                (10)
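
A minimal sketch of Steps 4 to 7 for a single colour component is given below. The NumPy generator seeded by `key` merely stands in for the key-driven sequence p of Eq. (8), and the k(i, j) matrix is assumed to come from the masking step; this is an illustration, not the authors' code.

```python
import numpy as np

def embed_component(comp, bits, k, key=1234):
    """Minimal sketch of Steps 4-7 for one 8-bit colour component.

    comp : 2-D uint8 array (R, G or B plane of the cover image)
    bits : sequence of 0/1 secret bits (the 8-bit codes of the text)
    k    : integer matrix from the HVS masking step, 1 <= k(i,j) <= 7
    key  : stands in for the secret key that seeds the sequence p (eq. 8)
    """
    rng = np.random.default_rng(key)
    bits = np.asarray(bits, dtype=np.uint8)
    p = rng.integers(0, 2, size=bits.size, dtype=np.uint8)
    w = bits ^ p                                   # eq. (9): XOR with sequence p

    stego = comp.astype(np.int32).copy()
    pos = 0
    for (i, j), n in np.ndenumerate(k):
        if pos >= w.size:
            break
        chunk = w[pos:pos + n]
        pos += chunk.size
        d = int(''.join(map(str, chunk)).ljust(n, '0'), 2)   # n secret bits -> decimal
        x = stego[i, j]
        stego[i, j] = x - (x % (2 ** n)) + d       # eq. (10): replace n LSBs
    return stego.astype(np.uint8), pos
```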
Fig.2 Flow diagram for the data-hiding algorithm
B. Data extraction
In the data extraction process, given the stego-image S, the
hidden messages can be readily extracted by the following
steps (a short sketch is given after Step 5):
Step 1: Decompose the RGB component from the stego
image. Convert each component to corresponding 8-bit binary
values.
Step 2: For each component, identify the value n using the same
matrix values obtained from the HVS masking technique.
Step 3: Rearrange the extracted bits into 8-bit binary data
format and convert the 8-bit binary data to corresponding
decimal values.
Step 4: Repeat Steps 2-3 until all secret data w' is obtained.
Finally, the original secret message w can be recovered by the
element-wise XOR operation of the extracted data w' and the
pseudo random sequence p, which is generated by the same
method as in data hiding.
Step 5: Reconstruct color image with new RGB component
matrix.
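
A matching sketch of the extraction side, under the same assumptions (same stand-in key and the same k matrix used while hiding):

```python
import numpy as np

def extract_component(stego, k, n_bits, key=1234):
    """Minimal sketch of the extraction steps for one colour component.

    stego  : 2-D uint8 array (one plane of the stego image)
    k      : the same k(i,j) matrix used while hiding
    n_bits : number of secret bits that were embedded in this plane
    key    : same stand-in secret key used to regenerate the sequence p
    """
    rng = np.random.default_rng(key)
    p = rng.integers(0, 2, size=n_bits, dtype=np.uint8)

    out = []
    for (i, j), n in np.ndenumerate(k):
        if len(out) >= n_bits:
            break
        lsbs = int(stego[i, j]) % (2 ** n)          # read the n embedded LSBs
        out.extend(int(b) for b in format(lsbs, '0{}b'.format(n)))
    w = np.asarray(out[:n_bits], dtype=np.uint8)
    return w ^ p                                    # undo eq. (9) to recover the text bits
```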
Fig. 3 Flow diagram for the data extraction algorithm

Figure 2 represents the data-hiding algorithm for embedding the
text data into the cover image, in which the text data is first
converted into 8-bit binary values. A random sequence of 0s and
1s is generated. An XOR operation is then performed
between the random bits and the text data bits to form the secret
data bits. The color cover image is decomposed into red, green
and blue components and their pixel values are converted into
8-bit binary values. Based on the human visual masking system,
matrices are generated for each of the three
components. Depending on these matrix values, the secret
data bits are hidden into the LSBs of the corresponding
component of the cover image. The three components along
with the hidden data are combined to form the stego color
image.
Figure 3 shows the data extraction algorithm used to extract the
hidden data bits. Here the stego color image is
decomposed into red, green and blue components, which are then
converted into their corresponding 8-bit binary values. Based on
the same matrix values generated in the data hiding method,
the data bits are extracted from the LSBs of the three different
components. The extracted bits are then rearranged into the
original 8-bit binary form. Then an element-wise XOR
operation is performed between the extracted bits and the
random sequence to obtain the text data bits. These binary
values are then converted into their decimal form to recover the
original hidden text data.


Fig. 4 Six cover images: (a) Lena, (b) Pepper, (c) Bamboo,
(d) Tiffany, (e) Barbara, (f) Pepper1

Fig. 5 Stego images generated by the proposed scheme
IV. EXPERIMENTAL RESULTS
In the experiments, three different text data are hidden
into cover images of size 150 × 150. We assume the size of
the sliding window l to be 1.
Peak Signal-to-Noise Ratio (PSNR) for stego image is
calculated in order to evaluate the quality of the stego images.
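
The quality metrics reported below are the usual MSE and PSNR for 8-bit images (peak value 255). As a reminder, these are the standard definitions, not code taken from the paper:

```python
import numpy as np

def mse_psnr(cover, stego):
    """MSE and PSNR (dB) between cover and stego images of equal size."""
    c = cover.astype(np.float64)
    s = stego.astype(np.float64)
    mse = np.mean((c - s) ** 2)
    psnr = float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
    return mse, psnr
```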
Fig. 4 shows the different color images that are used as
cover images in which three different text data are hidden. The
stego images obtained by hiding the different text data are
shown in Fig. 5. In the proposed scheme, the maximum
number of bit changes in each pixel is r = 4.
In the proposed method, a color image is used as the cover
image. The R, G, B components are decomposed first, and the
text data bits are then embedded into the R component of the
cover image, followed by the G and B components (i.e., the eight
bits of each character of the text data are hidden into the R, G, B
components of the cover image depending on the value generated
by the matrix k(i, j)). As Table 1 shows, when the length of the
text data increases, the mean square error value increases and the
peak signal to noise ratio decreases. The MSE values obtained for
the different cover color images are lower when hiding the text
data with the adaptive LSB method than with plain LSB
substitution, even when the length of the text data is larger in the
proposed method.

Tab. 1 Measurement of MSE and PSNR for color images using the proposed method

Cover     Text data 1 (80 bits)   Text data 2 (120 bits)   Text data 3 (160 bits)
Image     MSE     PSNR            MSE     PSNR             MSE     PSNR
Lena      0.45    51.55           0.67    49.81            1.11    47.40
Pepper    0.00    71.03           0.05    60.72            0.28    53.57
Bamboo    0.01    65.37           0.07    59.29            0.20    55.00
Tiffany   1.04    47.94           2.37    44.37            3.92    42.19
Barbara   0.26    53.84           0.60    50.34            1.18    47.40
Pepper1   0.02    63.73           0.09    58.39            0.23    54.39
V. CONCLUSION
In this paper, the proposed data-hiding scheme for hiding
text data into an 8-bit color image using Adaptive LSB
substitution is presented, and the MSE and PSNR for text data of
different lengths are calculated. The comparison of MSE and
PSNR for gray scale and color images using both LSB
substitution and the Adaptive LSB substitution method for
different cover images is also presented. The MSE value obtained
for the 8-bit color image using the adaptive LSB method is very
low compared to the LSB substitution method, because the
variation introduced in the color image is very small with the
proposed method. The quality of the stego image is also improved
by hiding the text data bits in all three components of the cover
color image based on the human visual masking method.
Further research can be done by hiding one color image into
another color image instead of text data.
REFERENCES
[1] Mamta Juneja, Parvinder S. Sandhu, and Ekta Walia, (2009)
Application of LSB based Steganography Technique for 8-bit color
images, World Academy of Science, Engineering and Technology, pp
423-425.
[2] Chi-Kwong Chan, L.M. Cheng (2004) Hiding Data in images by
simple LSB substitution, Pattern Recognition The Journal of the
Pattern Recognition Society, pp 469-474.
[3] Hengfu yang, Xingming, Guang, (2009) A High-Capacity Image Data
Hiding Scheme Using Adaptive LSB Substitution, Radio engineering,
Vol. 18, pp 509-516.
[4] Cheng-Hsing Yang, Chi-Yao Weng, Shiuh-Jeng Wang, Hung-Min Sun,
(2008), Adaptive data hiding in edge areas of images with spatial LSB
domain systems, IEEE Transactions on Information Forensics and
Security, vol. 3, issue 3, pp. 488-497.
[5] Wen-Nung Lie, Li-Chun Chang, (1999) Data hiding in images with
adaptive numbers of least significant bits based on the human visual
system, IEEE Int. Conf. Image Processing. Kobe (Japan), pp. 286-290.
[6] Wu, H.-C., Wu, N.-I., Tsai, C.-S., (2005) Hwang, M.-S, Image
steganographic scheme based on pixel-value differencing and LSB
replacement methods, IEE Proceedings-Vision, Image and Signal
Processing, vol. 152, no. 5, pp.611-615.
[7] Ali Shariq Imran, M. Younus Javed, and Naveed Sarfraz Khattak (2007)
A Robust Method for Encrypted Data Hiding Technique Based on
Neighbourhood Pixels Information, World Academy of Science,
Engineering and Technology, pp. 330-335

Significance of Alive-supervision Algorithm in Autosar specific Watchdog
Manager

Remya Krishna J.S.
M.Tech Embedded Systems
DOEACC Centre
Calicut, Kerala, India
e-mail: vrindakrishna86@gmail.com
Anikesh Monot
M.Tech Embedded Systems
DOEACC Centre
Calicut, Kerala, India
e-mail: anikeshmonot@gmx.com


Abstract: In automotive electronics, a dependable software
service to monitor individual application software components
at runtime is required in order to improve the overall system
dependability and reliability. This paper suggests an efficient
mechanism for monitoring automotive applications and thus
providing a more reliable and efficient system.
Autosar is an open and standardized automotive software
architecture, which was jointly developed by automobile
manufacturers, suppliers and tool developers. Autosar aims to
master the growing complexity of automotive electronic
architectures by building a common architecture. It separates
the software from the hardware in order to allow software
reuse and smooth evolution, limiting re-development and
validation.
Keywords- Autosar; Alive-Supervision; Watchdog Manager;
Automotive; Embedded Systems
I. INTRODUCTION
The automotive industry has witnessed major changes in the
last few decades. In the early days an automobile company
would give very minimal attention to aspects like safety,
infotainment, luxury and comfort, which in today's market are
considered the most important ones. Automobiles that
fulfil these new demands of customers have very complex
designs, as the number of features to be implemented has
increased exponentially. This complexity also demands more
development time, which in turn increases the cost of the
project. Surveys reveal that an automobile which provides these
modern features has at the minimum 70 ECUs. To deal with
these problems the leading automobile industries have come
up with an open standard called Autosar.
AUTOSAR (AUTomotive Open System ARchitecture)
is an open and standardized automotive software
architecture, jointly developed by automotive OEMs,
suppliers and tool developers whose objective is to create
and establish open standards for automotive E/E
(Electrics/Electronics) architectures that will provide a basic
infrastructure to assist with developing vehicular software,
user interfaces and management for all application domains.
The hierarchical architecture of AUTOSAR consists of
micro controller abstraction layer (MCAL), electronic
control unit (ECU) abstraction layer, service layer, RTE
(runtime environment) and application layer.
II. OVERVIEW OF WATCHDOG STACK
In the Autosar architecture, the Watchdog stack is used to handle
safety related issues. The Autosar Watchdog stack comprises
the Watchdog Manager, Watchdog Interface and Watchdog
Drivers. The Watchdog Manager module, which is in the
service layer, is intended to supervise the reliability of
application execution with respect to periodicity and
maximum timing constraints. The Watchdog
Interface module is in the ECU abstraction layer. In case of
more than one watchdog device and watchdog driver (e.g.
both an internal software watchdog and an external hardware
watchdog) being used on an ECU, this module allows the
Watchdog Manager to select the correct watchdog driver,
and thus the watchdog device. The internal watchdog driver
module, which is in the MCAL layer, provides services for
initialization, changing the operation mode and triggering the
hardware watchdog. No response or wrong timing of any running
application is usually detected by the Watchdog Manager, which
will trigger the hardware watchdog via the Watchdog Interface
and Watchdog Driver(s).
III. PROBLEM DESCRIPTION
Electronic control units and on-board networks for
automotive applications cover a wide variety of functions that
in many cases are responsible for safety-critical behavior of
the vehicle. Safety needs and goals demand that the software
involved in such functions be designed by adopting
appropriate methods and practices.
In an ECU a watchdog timer can be used to prevent an
application from entering a deadlock. Usually the application
keeps triggering the watchdog while the application is
functioning well; the watchdog timer resets the entire
system if the application fails to provide the trigger. The
drawback of this approach is that, irrespective of the criticality
of the application, the watchdog performs an MCU reset,
which affects the reliability and the efficiency of the
system.

IV. BETTER APPROACH
Contrary to the conventional approach, the reliability and
efficiency of the system can be enhanced by an algorithm
that supervises the application and thus prevents it
from entering a state of deadlock. This supervisory algorithm
takes the criticality of all the applications into account, based
on which the reset time can be configured. By doing this
the overall reliability and efficiency of the system are
improved. In Autosar this can be done using an algorithm
called the Alive-Supervision Algorithm.
V. WATCHDOG MANAGER
The Watchdog Manager module, which is in the service
layer, is intended to supervise the reliability of application
execution with respect to periodicity and maximum
timing constraints. Its functionality is always
based on the decoupled alive-supervision of applications
using an Alive-Supervision Algorithm. The Watchdog
Manager will cyclically trigger the hardware watchdog via
the Watchdog Interface and Watchdog Driver(s), where the
triggering is scheduled by the Schedule Manager module.
Within the cyclically scheduled main service of the Watchdog
Manager, alive supervision of the applications is performed to
decide whether to reset the system. Thus, by introducing
this algorithm in the Watchdog Manager, the system can be
made more reliable and efficient.
VI. ALIVE-SUPERVISION
The Alive-supervision of Watchdog Manager is a
mechanism to check periodically the execution status of one
or more supervised entities/applications. The Watchdog
Manager provides an individual port for each supervised
application entity, where the entities have to indicate their
proof of aliveness (update an alive-counter). Within the
cyclic scheduled main-service of the Watchdog Manager, the
alive-counters of all supervised entities are checked against
their own independent timing constraints and tracks the
checkup result as their individual supervision status. From
individual supervision status a global supervision status is
derived to support decision for triggering the hardware
watchdog or not.
In general the alive-supervision supports three different
states, represented within the global and individual
supervision status as the current result of its investigations:
- The first state represents no timing deviation detected.
- The second state is a first and optional escalation step to
  support error recovery within a certain amount of failed
  supervision reference cycles (a supervision reference cycle
  that ends with a detected deviation, including tolerance,
  between the alive counter and the expected amount of alive
  indications), while triggering of the watchdog still goes on.
  If a recovery occurs, alive-supervision goes on and gets back
  to the state where the timing constraints are fulfilled.
- The third state, representing the second escalation step,
  does not support recovery of alive-supervision any more and
  therefore will end up with an ECU reset. But it offers a
  configurable amount of expired supervision cycles to support
  preparations of SW-Cs.
Figure 1. Alive-Supervision of applications.
A. Individual Supervision Status
- If the alive counter value matches the expected amount of
  alive indications (including the tolerance margins), the
  individual supervision status is set to WDGM_ALIVE_OK.
- If the alive counter value does not match the expected amount
  of alive indications (including the tolerance margins), but the
  acceptable amount of failed supervision reference cycles has
  not been exceeded, the individual supervision status is set to
  WDGM_ALIVE_FAILED.
- If the alive counter value does not match the expected amount
  of alive indications (including the tolerance margins) more
  often than the acceptable amount of failed supervision
  reference cycles, the individual supervision status is set to
  WDGM_ALIVE_EXPIRED.
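
The rules above can be written out as a small Python sketch. The status names come from the text; the parameter names are illustrative only and are not AUTOSAR API names.

```python
WDGM_ALIVE_OK = "WDGM_ALIVE_OK"
WDGM_ALIVE_FAILED = "WDGM_ALIVE_FAILED"
WDGM_ALIVE_EXPIRED = "WDGM_ALIVE_EXPIRED"

def individual_status(counter_matches, failed_ref_cycles, allowed_failed_cycles):
    """Sketch of the individual supervision status rules described above.

    counter_matches       : True if the alive counter matched the expected
                            amount of alive indications (tolerances included)
    failed_ref_cycles     : failed supervision reference cycles seen so far
    allowed_failed_cycles : configured acceptable amount of failed cycles
    """
    if counter_matches:
        return WDGM_ALIVE_OK
    if failed_ref_cycles <= allowed_failed_cycles:
        return WDGM_ALIVE_FAILED
    return WDGM_ALIVE_EXPIRED
```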
B. Global Supervision Status
After checkup and update of the individual supervision
statuses at the corresponding supervision cycles (supervision
reference cycles), the Watchdog Manager has to derive a
merged result, the global supervision status, to support the
decision of whether further watchdog triggering shall be
blocked or not.
- The global supervision status shall be set to
  WDGM_ALIVE_OK if the individual supervision statuses of
  all activated supervised entities are equal to WDGM_ALIVE_OK.
- If the individual supervision status of at least one supervised
  entity is equal to WDGM_ALIVE_FAILED, but no individual
  supervision status is equal to WDGM_ALIVE_EXPIRED, the
  global supervision status shall be set to WDGM_ALIVE_FAILED.
- If the individual supervision status of at least one supervised
  entity is equal to WDGM_ALIVE_EXPIRED and the amount of
  expired supervision cycles to postpone the blocking of watchdog
  triggering is not exceeded, the global supervision status shall be
  set to WDGM_ALIVE_EXPIRED.
- If the global supervision status has reached the state
  WDGM_ALIVE_EXPIRED and the amount of consecutive
  expired supervision cycles exceeds the configured limit to
  postpone the blocking of watchdog triggering, the global
  supervision status shall be set to WDGM_ALIVE_STOPPED.
When a transition of the global supervision status from
WDGM_ALIVE_EXPIRED to WDGM_ALIVE_STOPPED occurs,
the Watchdog Manager will perform a microcontroller reset.
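
The derivation of the global status from the individual statuses can likewise be sketched as follows; again, the function and parameter names are illustrative only.

```python
WDGM_ALIVE_OK = "WDGM_ALIVE_OK"
WDGM_ALIVE_FAILED = "WDGM_ALIVE_FAILED"
WDGM_ALIVE_EXPIRED = "WDGM_ALIVE_EXPIRED"
WDGM_ALIVE_STOPPED = "WDGM_ALIVE_STOPPED"

def global_status(individual_statuses, expired_cycles, expired_cycle_limit):
    """Sketch of the global supervision status rules described above.

    individual_statuses : statuses of all activated supervised entities
    expired_cycles      : consecutive supervision cycles spent in EXPIRED
    expired_cycle_limit : configured limit postponing the blocking of triggering
    """
    if WDGM_ALIVE_EXPIRED in individual_statuses:
        if expired_cycles > expired_cycle_limit:
            return WDGM_ALIVE_STOPPED      # triggering blocked -> MCU reset follows
        return WDGM_ALIVE_EXPIRED
    if WDGM_ALIVE_FAILED in individual_statuses:
        return WDGM_ALIVE_FAILED
    return WDGM_ALIVE_OK
```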
VII. ALIVE-SUPERVISION ALGORITHM
For the alive-supervision, an algorithm called Alive-
Supervision Algorithm to detect mismatching timing
constraints of the supervised entities is used. This algorithm
will check alive indications of applications against their
expected alive indications in relationship with the alive-
supervision period.
With this Algorithm, it must be possible to deal with two
different scenarios:
- The alive indications of a supervised entity are expected to
  occur at least one time within one supervision cycle. The
  number of alive indications (AI) within one supervision
  cycle (SC) shall be counted.
- The alive indication of a supervised entity is expected to
  occur less often than the supervision cycle. The number of
  supervision cycles (SC) between two alive indications (AI)
  shall be counted.
The parameter Expected Alive Indications (EAI) which
represents the expected amount of alive indications of the
supervised entity within the referenced amount of
supervision cycles (supervision reference cycle) is also
needed for this algorithm. The value of this parameter should
have been determined during the design phase and defined
by configuration.
The alive-supervision is checked with the following
algorithm:

n(AI) − n(SC) + EAI = 0
To avoid the detection of too many supervision errors for
the supervised entities, the parameters WdgMMinMargin
and WdgMMaxMargin to define tolerances on the timing
constraints can be used.
WdgMMinMargin represents the allowed number of
missing executions of the supervised entity and
WdgMMaxMargin represents the allowed number of
additional executions of the supervised entity.
Therefore the algorithm becomes:

(n(AI) − n(SC) + EAI <= WdgMMaxMargin) and
(n(AI) − n(SC) + EAI >= −WdgMMinMargin)
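
The comparison with margins, as reconstructed above, can be expressed as a one-line check; the parameter names other than WdgMMinMargin/WdgMMaxMargin are illustrative:

```python
def alive_check(n_ai, n_sc, eai, max_margin=0, min_margin=0):
    """Sketch of the alive-supervision comparison with tolerance margins.

    n_ai       : alive indications counted in the supervision reference cycle
    n_sc       : supervision cycles counted in the same window
    eai        : preloaded Expected Alive Indications value (see Scenarios A/B)
    max_margin : WdgMMaxMargin, allowed number of additional executions
    min_margin : WdgMMinMargin, allowed number of missing executions
    Returns True when the timing constraint is considered fulfilled.
    """
    deviation = n_ai - n_sc + eai
    return -min_margin <= deviation <= max_margin
```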
A. Scenario A
To check, if the right amount of alive indications (n (AI))
was proceeded, EAI value is preloaded with the negative
value of the expected alive indications + 1.
e.g. Two alive indications are expected in one
supervision cycle which represents the supervision reference
cycle:
EAI = -2 + 1 = -1
When SC occurs, the number of supervision cycles is
incremented (n(SC) = 1) and the regular checkup is
performed during each supervision cycle (supervision
reference cycle = 1 supervision cycle) with the algorithm.
After performing the check, the current numbers of alive
indications and supervision cycles are reset. For this
example, the Max and Min margins are set to 0 for simplicity,
so the check reduces to n(AI) − n(SC) + EAI = 0.
This brings the compare algorithm to a negative result if
not enough alive indications occurred before the supervision
cycle. If the number of alive indications fits exactly to the
expected number the result is 0. If more alive indications
have occurred, the number is bigger than 0. The result of the
algorithm represents exactly the number of "extra" alive
indications within the last supervision cycle.
B. Scenario B
The supervision cycle occurs more often than the
alive indication. In this case, the algorithm has to count the
supervision cycles, which have occurred, until the alive
counter is incremented again. The check of aliveness should
be performed during each supervision reference cycle and
the same algorithm should be used.
The alive indication must occur at least within a
predefined number of supervision cycles which represent the
supervision reference cycle. The EAI value is pre-configured
with the positive value of the expected supervision cycles
before the next alive indication minus 1.
e.g. One alive indication is expected within 2 supervision
cycles (supervision reference cycle = 2 supervision cycles):
EAI = +2 − 1 = +1
The alive counter shall be incremented by 1 with every
alive indication. Aliveness should be evaluated in the
supervision cycle corresponding to the supervision reference
cycle. The compare-conditions of the algorithm remain in the
same manner, but the detected incrementation of the alive
counter shall also invoke a reset of the alive counter and
supervision counter after this compare-operation.
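
A quick self-contained check of both scenarios, with zero margins and the EAI preload values from the examples above:

```python
def alive_check(n_ai, n_sc, eai, max_margin=0, min_margin=0):
    # same comparison as in the sketch given earlier
    return -min_margin <= (n_ai - n_sc + eai) <= max_margin

# Scenario A: two alive indications expected per supervision cycle
eai_a = -2 + 1                                   # EAI = -(expected AI) + 1 = -1
print(alive_check(n_ai=2, n_sc=1, eai=eai_a))    # True  (exactly as expected)
print(alive_check(n_ai=1, n_sc=1, eai=eai_a))    # False (one indication missing)

# Scenario B: one alive indication expected within two supervision cycles
eai_b = +2 - 1                                   # EAI = (expected SC) - 1 = +1
print(alive_check(n_ai=1, n_sc=2, eai=eai_b))    # True  (indication arrived in time)
print(alive_check(n_ai=0, n_sc=2, eai=eai_b))    # False (no indication in the window)
```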
VIII. INFERENCE
A Watchdog Manager having Alive-Supervision
Algorithm provides means through which various
parameters like WdgMMaxMargin, WdgMMinMargin ,
EAI, failed supervision reference cycles, reset time could be
configured based on the criticality of the application. This
would prevent unwanted reset which would have occurred in
its absence.
IX. CONCLUSIONS
The Watchdog Manager could be implemented with or
without the Alive-Supervision Algorithm. Based on a
comparative study of both situations, it is advisable to use
the Alive-Supervision Algorithm, at the cost of code size.
Using the algorithm makes the system more reliable and
efficient.
ACKNOWLEDGMENT
I am heartily thankful to my guide Ms. Sreeja K.S whose
encouragement, guidance and support from the initial to the
final level enabled me to develop an understanding of the
subject. I would also like to thank Mr. K.M Martin whose
love and wisdom constantly motivated me to write this
paper. Lastly I offer my regards to all my colleagues for their
valuable prayers.
REFERENCES
[1] Module WatchdogManager Specification, AUTOSAR Std. V1.2.0
R3.0 REV 0001.
[2] Module WatchdogManager Requirements, AUTOSAR Std. V2.0.3
R3.0 REV 0001.
[3] (2010) The AUTOSAR website. [Online]. Available:
http://autosar.org/
[4] Layered Software Architecture, AUTOSAR Std. V2.2.1 R3.0 REV
0001.




Figure 2. Alive-supervision algorithm Scenario A



Figure 3. Alive-supervision algorithm Scenario B
Fast Asynchronous Bit-Serial Communication

Ms. Abhila R Krishna
PG Scholar,
Electronics and Communication Department
Noorul Islam University, Thucklay, Nagarkoil, India
e-mail: abhilaktvm@gmail.com

Mr. Dharun V.S
Asst. Professor
Electronics and Communication Department
Noorul Islam University, Thucklay, Nagarkoil, India
e-mail: dharunvs@yahoo.com



Abstract- An asynchronous high-speed wave-pipelined
bit-serial link for on-chip communication is presented as
an alternative to standard bit-parallel links. The link
employs the differential level-encoded dual-rail (LEDR)
two-phase asynchronous protocol, avoiding per-bit
handshake and eliminating per-bit synchronization. The
system is needed because synchronous on-chip parallel
links occupy a large area and suffer from high capacitive
load, high leakage power and cross-coupling noise; a novel
high-performance serial link may provide an alternative
to the parallel link. Synchronous serial links are typically
used for off-chip communication, with a clock as the
common timing mechanism. Alternatively, the proposed
asynchronous data link employs handshake instead of a
clock. Traditional asynchronous protocols are slow and
share data lines, and their performance depends on the
wire delay; the proposed high-speed serial link has a data
cycle of a few gate delays.

Key Words: LEDR, Asynchronous, Per-bit handshake, Gate
delay
I. INTRODUCTION

The high performance of VLSI digital logic has
increased at an exponential rate over the years thanks to
transistor size scaling. While performance of local
interconnect follows a similar trend, global wires do not,
challenging long range on-chip data communications in
terms of latency, throughput and power. High-capacitance of
the global interconnect is one of the main sources for losses
over the wire, leading to a degraded performance in terms of
throughput and power. In addition, as systems-on-chip (SoC)
integrate an ever growing number of modules, on-chip inter-
modular communications become congested and the
modules must turn to serial interfaces, similar to the trend
from parallel to serial chip-to-chip interconnects.

Common synchronous on-chip parallel links (multi-
wire interconnects) occupy a large area, present a high
capacitive load and incur high dynamic and leakage power
and cross-coupling noise, especially when long-range
communication is considered, when utilization is low (e.g.,
network-on-chip) or when interconnect congestion is high
(e.g., routers, cross-bar switches). The clock frequency of
synchronous parallel links is bounded by clock and data
uncertainty that worsens as the links get longer. While
standard synchronous serial links, employing clocks similar
to those of parallel links, are unattractive due to limited
bit-rate, novel high-performance serial links may provide an
alternative to parallel links.

Synchronous serial links are typically employed
for off-chip communications, where pin-out limitations
call for a minimal number of wires per link. Source-
synchronous protocols are often used for those
applications. A common timing mechanism for serial
interconnects injects a clock into the data stream at the
transmitting side and recovers the clock at the receiver.
Such clock-data recovery (CDR) circuits often require a
power-hungry PLL, which may also take a long while to
converge on the proper clock frequency and phase at the
beginning of each transmission. If the receiver and
transmitter operate in different clock domains, the
transaction must also be synchronized at both ends,
incurring additional delay and power. Alternatively, an
asynchronous data link employs handshake instead of
clocks. Traditional asynchronous protocols are relatively
slow due to the need to acknowledge transitions. Some
asynchronous protocols share data lines, but their
performance depends on wire delays.

High-speed serial links, having a data cycle of a
few gate delays (down to a single gate-delay cycle), have
been recently proposed. These fast links employ wave-
pipelining, low-swing differential signalling, fast clock
generators and asynchronous protocols. In addition, these
links require channel optimization to support wide-
bandwidth data transmission over the link wires. The
serializer is based on a chain of MUXes. The link is single-
ended and employs wave-pipelining. Wave-pipelined
multiplexed (WPM) routing employs source-synchronous
communication and its performance is limited by the clock
skew and delay variations. Low-voltage differential pairs
have been employed for on-chip serial interconnect, where
data was sampled at the receiver without any attention to
synchronization issues.
II. ASYNCHRONOUS SERIAL
COMMUNICATION
Communication is called asynchronous if the sender and
receiver do not need to synchronise before each
transmission. A sender can wait arbitrarily long between
transmissions and the receiver must be ready to receive data
when it arrives. A simple communication system uses a
small electric current to encode data, e.g. a negative voltage
represents a one (1) and a positive voltage represents a zero
(0). This is illustrated in the waveform diagram of Figure1

Figure 1: Positive and negative voltages

The sender places a negative voltage on the wire
for a short time and then returns the voltage to zero - this
represents a one. The receiver senses the negative voltage
and records that a one arrived. To ensure that
communications hardware built by different vendors will
interoperate, the specifications for communications systems
are standardized. Organizations such as the Electronic
Industries Association (EIA), the International
Telecommunications Union (ITU) and the Institute of
Electrical and Electronics Engineers (IEEE) publish these
specifications as standards.

i) Full-duplex Asynchronous Communication

All electrical circuits require a minimum of two
wires - the current flows out on one wire and back on
another called the ground. In many applications, we require
data to flow in two directions at the same time, e.g. between
a terminal and a computer. Simultaneous transfer in two
directions is called full-duplex, as distinguished from half-
duplex (one direction or the other, but not at the same time)
and simplex (one direction only).

A computer transmits on pin 2 and receives on pin
3, while a modem transmits on pin 3 and receives on pin 2.
Technically, the computer is a piece of Data
Terminal Equipment (DTE) and the modem is a piece of
Data Communication Equipment (DCE). Our earlier
waveform diagram shows the ideal case. In practice all
electronic devices are analogue in nature and cannot
produce an exact voltage or change from one voltage to
another instantly. In addition, as electric current flows down
wire, the signal loses strength. Figure 2 illustrates how a bit
might appear on a real communication.

Figure 2: Real vs ideal voltages


III. SYSTEM ARCHITECTURE


Figure3: Serial communication link.

High-speed serial links, having a data cycle of a
few gate delays (down to a single gate-delay cycle), have
been proposed. These fast links employ wave-pipelining,
low-swing differential signaling, fast clock generators and
asynchronous protocols. In addition, these links require
channel optimization to support wide-bandwidth data
transmission over the link wires. A wave-front train
serialization link has been presented; the serializer is based
on a chain of MUXes, and the link is single-ended and
employs wave-pipelining. The link data cycle is
approximately 7 FO4 (fan-out of four) delays. A wave-
pipelined multiplexed (WPM) routing technique has also
been presented; WPM routing employs source-synchronous
communication and its performance is limited by the clock
skew and delay variations. Low-voltage differential pairs
have been employed for on-chip serial interconnect, where
data was sampled at the receiver without any attention to
synchronization issues. A three-level voltage swing has also
been presented, requiring non-standard amplifiers. The link
employed novel high-speed serializer and de-serializer
wave-pipelining circuits. Wave-pipelining was also
employed over the interconnect wires, together with
differential encoding. Link throughput was independent of
word width. The high bandwidth of that novel high-speed
asynchronous serial link can be traded off for power and
area, reducing the overall cost of inter-modular
communication by shifting from voltage to current signaling.
The proposed serial link (Figure3) employs low-
latency synchronizers at the source and sink, two-phase
NRZ level encoded dual rail (LEDR) data/strobe (DS)
encoding and an asynchronous handshake protocol
(allowing non-uniform delay intervals between successive
bits), serializer and de-serializer and line drivers and
receivers. Acknowledgment is returned only once per word,
rather than bit by bit, enabling multiple bits to travel in a wave-
pipelined manner over the serial channel. The data over the
link wires can be further encoded differentially. LEDR
signalling is preferred over other serial asynchronous
protocols for lower power and higher rates. The D and S
wires employ (fully shielded) waveguides, enabling multiple
travelling signals.
i) Encoder
The encoder used here is the Level Encoded Dual Rail
(LEDR) encoder. Before going into the details of LEDR,
the things to discuss are the single-rail and dual-rail data
encoding techniques and the handshaking methods for data
representation. A single-rail bundled data path uses one wire
per bit, as shown in Figure 4.

Figure 4: single rail data encoding

The dual-rail system is different from single-rail encoding:
it uses two wires per data bit, as shown in Figure 5; each
dual-rail pair provides both the data value and its validity.


Figure 5: Dual rail data encoding

There are several combinations of data representations:
dual-rail 4-phase, single-rail 4-phase, dual-rail 2-phase,
and single-rail 2-phase; in this system the level-encoded
dual-rail 2-phase data representation method is
used. LEDR (Level-Encoded Dual Rail) signaling is a
delay-insensitive data encoding scheme that uses two wires,
or rails, to encode each bit of data. One rail is a data
wire (rail 0), which holds the value of the bit in a standard
single rail encoding, and the other rail is a parity wire,
which indicates phase by its parity relative to the data wire
(rail 1). LEDR is a two-phase protocol, since no return-to-
zero phase is required.
A novel feature of LEDR is that the encoding
alternates between odd and even phases. In each phase, rail
0 contains the data value. Rail 1 carries a parity value; in an
odd phase, the two rails have odd parity, while in an even
phase they have even parity. The encoding of a bit 1 is 01 in
odd phases or 11 in even phases; the encoding of a bit 0 is
10 in odd phases and 00 in even phases. As a result, not
only is one rail always the data bit, but precisely one rail per
bit changes with each new data wave.
LEDR also has a performance advantage over four
phase protocols, including 1-of-4, for global communication
since a complete four-phase request/acknowledge
transaction incurs a latency penalty of two round-trips over
the network, while an LEDR transaction incurs only one
round-trip delay. Recently, detailed simulations have shown
that LEDR is a viable alternative even to synchronous
interconnect for network-on-chips in certain areas of the
power-latency area design space.
The encoding method of 2-phase NRZ LEDR is as
shown in Figure 6.

Figure 6: LEDR Encoding

The equations behind the representation of the LEDR
encoding are as shown below:

S(i) = B(i)
P(i) = B(i),  if phase i is even
P(i) = ¬B(i), if phase i is odd                (1)
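
Purely as an illustration (the actual design is in VHDL), a small Python sketch of this encoding follows. It assumes the stream starts in an even phase; note that exactly one of the two rails changes for each new bit, which is the key LEDR property.

```python
def ledr_encode(bits):
    """Sketch of 2-phase LEDR encoding: rail S carries the data bit,
    rail P alternates its parity meaning between even and odd phases (eq. 1)."""
    rails = []
    for i, b in enumerate(bits):
        s = b                           # S(i) = B(i): data (state) rail
        p = b if i % 2 == 0 else 1 - b  # P(i): parity rail, phase-dependent
        rails.append((p, s))
    return rails

def ledr_decode(rails):
    """Recover the bit stream: the data rail alone carries the value,
    while a change on exactly one rail marks each new symbol."""
    return [s for (_p, s) in rails]

word = [1, 0, 1, 1]
encoded = ledr_encode(word)
assert ledr_decode(encoded) == word
```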
IV. TRANSMITTER TOP ARCHITECTURE

In the transmitter section the key elements used are the
synchronizer, the dual-rail two-phase encoder, the differential
driver and the serializer; Figure 7 shows the top architecture
of the transmitter section.

Figure 7: Transmitter top architecture

In the transmitter section, a shift register is used to
send the encoded bits serially. It must be fast enough to
communicate between devices, so the shift register designed
must be faster than parallel line-to-line communication. The
shift register architecture is shown in Figure 8; here the
parallel data is loaded into the shift register for encoding the
bits.

Figure 8: Shift register design

The same circuit drives the P and S lines to denote the phase
and state of the bits transmitted over the dual rail. The encoded
data is transmitted through the rails to the desired receiver
location. The receiver section decodes the LEDR-encoded bits
and de-serializes them to obtain the bits in parallel form for the
receiver's further operations. The de-serializer and receiver
section is shown in Figure 9.


Figure9. Receiver de-serializer and decoder

V. TESTING AND EXPERIMENTAL RESULTS

The simulation results of the 2-phase LEDR encoder
and decoder are shown below. The VHDL code for the
encoder and decoder is synthesized using Xilinx ISE 9.2i and
the simulation is done using ModelSim 6.2c.

Figure 10: RTL schematic of LEDR ENCODER

The register transfer level schematic of the encoder is
shown in Figure 10. Here the single-bit operation is
shown, that is, the state and phase bit generation of the
encoding operation, simulated using the ModelSim simulator.


Figure 11: Result (LEDR Encoder)
The register transfer level schematic of the decoder is
shown in Figure 12, and the decoded bits are noted as
shown in Figure 13.

Figure 12: RTL schematic of LEDR DECODER


Figure 13: Result (LEDR Decoder)

VI. CONCLUSION

This paper describes novel high-speed on-
chip serial links that outperform parallel links for long-range
communication. The serial links occupy significantly
smaller area, require less power and reduce routing
congestion and noise. The two-phase transition-based
LEDR encoding and differential signaling are presented.
A 3-level asynchronous protocol for a differential two-wire
communication link is introduced, and the encoding and
decoding of the signals for such a communication part are
prototyped using the VHDL language with the help of Xilinx
ISE and the ModelSim simulator.

VII. REFERENCES
[1] International Technology Roadmap for Semiconductors, 2005.
[Online]. Available: www.itrs.net/Links/2005itrs/Home2005.htm
[2] W. J. Dally and B. Towles, Route packets, not wires: On-chip
interconnection networks, in DAC, 2001, pp. 684-689.
[3] A. Lines, Asynchronous interconnect for synchronous SoC design,
MICRO, vol. 24, no. 1, pp. 32-41, 2004.
[4] G. De Micheli and L. Benini, Networks on Chip. San Mateo, CA:
Morgan Kaufmann, 2006.
[5] C. K. K. Yang, Design of high-speed serial links in CMOS,
Ph.D. dissertation, Stanford Univ., Stanford, CA, 1998.
[6] S. Sidiropoulos, High performance inter-chip signaling, Stanford
Univ., Stanford, CA, Tech. Rep. CSL-TR-98-760, 1998.
[7] R. Ho, J. Gainsley, and R. Drost, Long wires and asynchronous
control, in ASYNC, 2004, pp. 240-249.
[8] R. Dobkin, Y. Perelman, T. Liran, R. Ginosar, and A. Kolodny,
High rate wave-pipelined asynchronous on-chip bit-serial data link,
in ASYNC, 2007, pp. 3-14.
FAULT DIAGNOSIS ON PNEUMATIC
ACTUATOR USING NEURO-FUZZY
TECHNIQUE*
KAUSHIK.S¹*, KANNAPIRAN.B², PRASANNA.R³


Department of Instrumentation and Control Engineering
Kalasalingam University, Krishnankoil-626190, Tamilnadu, India.
kaushikei22@gmail.com, kannapiran79@gmail.com, prasannamtech86@gmail.com
Abstract: The Fault Detection and Diagnosis method
is intended for earlier detection of problems in
the process, which can help to avoid system
shutdown, breakdown and material damage.
Fault Detection and Diagnosis is deployed in critical
processes and in processes in hazardous areas such
as the chemical, sugar and cement industries. In
real time applications, several faults may occur in a
pneumatic actuator. The commonly occurring faults
are incorrect supply pressure, actuator vent blockage
and diaphragm leakage. These faults can be detected
earlier by the Fault Detection and Diagnosis method,
avoiding failure of the process. This paper focuses on
the design and implementation of Fuzzy Logic, the
Adaptive Neuro-Fuzzy Inference System (ANFIS) and
the back propagation algorithm of neural networks
for Fault Detection and Diagnosis (FDD) on a
pneumatic actuator, which can learn from real time
data of the pneumatic actuator under normal and
abnormal operating conditions; their results are compared.
Key words: Fault detection; Pneumatic actuators; Fuzzy
Logic; Neural Network; ANFIS; Lab VIEW; DAQ Card.
I. INTRODUCTION
In automatic control systems a
growing demand for quality, cost efficiency,
availability, reliability and safety can be
observed. At the same time, as the complexity and
riskiness of modern control systems are
increasing, the call for fault tolerance in
automatic control systems is gaining more and
more importance. Methods for on-line fault
monitoring of automated processes have been
considered for enhancing process security,
guaranteeing overall system safety, improving
product quality and plant economy and providing
some environmental protection [1].

A fault is defined as an unexpected
deviation of at least one characteristic property or
parameter of the system from the acceptable /
usual / standard condition. Fault diagnosis is the
determination of the kind, size, location and time
of detection of a fault; it follows fault detection
and includes fault isolation and identification.
Fault detection is the determination of the faults
present in a system and the time of detection.
Fault isolation is the determination of the kind and
location of a fault; it follows fault detection. Fault
identification is the determination of the size and
time-variant behaviour of a fault; it follows fault
isolation [12].

Fault tolerance can be achieved either
by passive or by active strategies. The passive
approach makes use of robust control techniques
to ensure that the closed-loop system becomes
insensitive with respective to faults. In contrast,
the active approach provides fault
accommodation, i.e., the reconfiguration of the
control system when a fault has occurred. The
core of the fault diagnosis methodology is the so-
called model-based approach, where either
analytical, knowledge-based, data based models
or combinations of them are used, applying
analytical or heuristic reasoning.

In case of fault diagnosis in complex
system one is faced with the problem that no or
no sufficiently accurate mathematical models are
available. The use of knowledge based
techniques, either in the framework of diagnosis
expert system or in combination with a human
expert, is then the only feasible way [6]. One of
the most important and difficult tasks is the early
diagnosis of faults, which can incorporate
artificial intelligence. This paper proposes an
adaptive neuro-fuzzy inference system (ANFIS)
and the back propagation algorithm of neural
networks for the fault diagnosis approach, and their
results are compared.

II. PROCESS DESCRIPTION

Fault detection and diagnosis are important
tasks for the pneumatic valve in any process. They
deal with the timely detection, diagnosis and
correction of abnormal conditions or faults in the
plant. Early detection and diagnosis of faults
while the plant is still operating in a controllable
region can help avoid abnormal event
progression and reduce productivity loss.
Building a model for fault diagnosis involves
embedding heuristic knowledge gained by
experience and observations over a period of
time. A pneumatic servo-actuated industrial
control valve is used as the test bed of the
fault detection approach proposed in this paper.
The internal structure of the pneumatic valve is
shown in Figure 1 [7].


Figure 1. Actuator structure

In real time applications several faults may
occur in pneumatic actuators. Three commonly
occurring faults are [7]:
- Incorrect supply pressure
- Actuator vent blockage
- Diaphragm leakage

a. Pneumatic Actuator

The internal structure of the Pneumatic
valve is shown in figure 1.The flow is set by the
position of the rod, which determines the
restricted flow area. The actuator sets the
position of this rod. There are many types of
servo-actuators: electrical motors, hydraulic
cylinders, spring-and-diaphragm pneumatic
servomotor, etc.
The most common type of actuator is the
spring-and-diaphragm pneumatic servomotor due
to its low cost. This actuator consists of a rod that
has, at one end, the valve plug and, at the other
end, the plate. The plate is placed inside an
airtight chamber and connects to the walls of this
chamber by means of a flexible diaphragm.

TABLE 1. SERVO-ACTUATED PNEUMATIC VALVE PARAMETERS [7]

PSP     Positioner of supply air pressure
PT      Air pressure transmitter
FT      Volume flow rate transmitter
TT      Temperature transmitter
ZT      Rod position transmitter
E/P     Electro-pneumatic converter
V1, V2  Cut-off valves
V3      Bypass valve
Ps      Pneumatic servomotor chamber pressure
CVI     Controller output
CV      Control reference value
F       Volumetric flow
X       Servomotor rod displacement

The descriptions of the main parameters of the
servo-actuated valve are given in Table 1. The flow
through the valve is given by F = 100·Kv·f(x)·sqrt(ΔP/ρ),
where Kv is the flow coefficient (m³/h) (given by the
manufacturer), f(x) is the valve opening function,
ΔP is the pressure difference across the valve
(MPa), ρ is the fluid density (kg/m³), F is the
volumetric flow through the valve (m³/h), and x is
the position of the rod (m), which is the same as
that of the plug. The valve opening function f(x)
indicates the normalized valve opening area. It
varies in the interval [0, 1], where the value 0
indicates that the valve is fully closed and the value
1 indicates that it is fully open. The value of X is
defined as the percentage of valve opening.
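
A small sketch of this flow relation follows. The square-root term sqrt(ΔP/ρ) is the reconstructed part of the formula, and a linear opening characteristic f(x) = x is only an assumption for the example.

```python
from math import sqrt

def valve_flow(kv, x, delta_p, rho, f=lambda x: x):
    """Sketch of the flow relation F = 100 * Kv * f(x) * sqrt(dP / rho).

    kv      : flow coefficient in m^3/h (manufacturer data)
    x       : normalised valve opening, 0 (closed) .. 1 (fully open)
    delta_p : pressure difference across the valve in MPa
    rho     : fluid density in kg/m^3
    f       : valve opening characteristic; f(x) = x is assumed here
    Returns the volumetric flow F in m^3/h.
    """
    return 100.0 * kv * f(x) * sqrt(delta_p / rho)

# e.g. a Kv = 40 m^3/h valve, half open, 0.1 MPa drop, water (~1000 kg/m^3)
print(valve_flow(kv=40.0, x=0.5, delta_p=0.1, rho=1000.0))  # about 20 m^3/h
```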

b. Valve Body

The valve body is the component that
determines the flow through the valve. A change of
the restricted area in the valve regulates the flow.
There are many types of valve bodies. The
differences between them relate to the form by
which the restricted flow area changes. This paper
addresses the globe valve case. However, the
results expressed here can easily be applied to other
types of valve bodies. Modeling the flow through
the valve body is not an easy task, since most of the
underlying physical phenomena are not fully
understood.

c. Positioner

The positioner determines the flow of air into
PSP Positioner of supply air pressure
PT Air pressure transmitter
FT Volume flow rate transmitter
TT Temperature transmitter
ZT Rod position transmitter
E/P Electro-pneumatic converter
V
1,

V
2

Cut-off valves
V
3
Bypass valve
Ps Pneumatic servomotor chamber
pressure
CVI Controller output
CV Control reference value
F Volumetric flow
X Servomotor rod displacement
the chamber. The positioner is the control element
that performs the position control of the rod. It
receives a control reference signal (set point) from
a computer controlling the process, to get rid of
noise and abrupt changes of the reference signal,
prior to the PID control action that leads the rods
position to that reference signal. The positioner
comprises a position sensor and an electrical-
pneumatic transducer. The first determines the
actual position of the rod, so that the error between
the actual and the desired position (reference
signal) can be obtained. The E/P transducer
receives a signal from the PID controller
transforming it in a pneumatic valve opening signal
that adds or removes air from the pneumatic
chamber. This transducer is also connected to a
pneumatic circuit and to the atmosphere. If the
controller indicates that the rod should be lowered,
the chamber is connected to the pneumatic circuit.
If, on the other hand, the rod should be raised, the
connection is established with the atmosphere, thus
allowing the chamber to be emptied.

d. Effect of Faults

The actuator vent blockage fault changes
the system dynamics by increasing the
effective damping of the system. When air is supplied
to the lower chamber of the actuator, the pressure
increases, allowing the diaphragm to move upwards
against the spring force. As the diaphragm moves
upward, air that is trapped in the upper chamber
escapes through the vent. When the vent becomes
partially blocked due to debris, the pressure in the
upper chamber increases, creating a pressure surge
that opposes the motion of the diaphragm.
Similarly, when air is purged from the lower
chamber and the vent is partially blocked, a partial
vacuum is created in the upper chamber. Again, the
motion of the diaphragm is hindered and the
performance of the system is impaired. In cases
when the vent is entirely blocked, the valve cannot
be stroked through its full range. Placing an
adjustable needle valve in the vent port emulates this
blockage, which stresses the diaphragm as it flexes. As a
result, fatigue failure of the diaphragm will inevitably
occur [7].
diaphragm will inevitably occur [7].

The diaphragm leakage fault is an indicator of the
condition of the diaphragm. It was simulated
by diverting air around the diaphragm by means of
a flexible hose connecting the output of the PID to
the upper chamber of the actuator. The leakage
flow was controlled by a needle valve, with 100%
leakage (total diaphragm failure) denoting the
adjustment where the valve ceased to respond to
any input signal. The valve clogging fault appears
to be caused by a property of the sewage; on
the other hand, there are also plants in areas
with hard water that are free from clogging [7].

Leakage fault is due to pressure drop. This
leakage fault is caused by the contaminants in the
water system will cause increased leakage and
equipment malfunctions. These particles can also
block orifices thus jamming valve spools. Further
water passes may be restricted resulting in reduced
water flow and increased pressure drop at the inlet
side of pneumatic actuator [7].

The incorrect supply pressure fault arises from the
fact that the supply pressure directly influences the
volume of air that can be delivered to the actuator.
This adversely affects the position response of the
valve. The incorrect supply pressure fault can occur
from a blockage or leak in the supply line, or by
increased demand placed on the plant air supply.

III. NEURO-FUZZY BASED FAULT
DETECTION AND DIAGNOSIS

In this paper, Fuzzy Logic based fault detection, an
Artificial Neural Network (ANN)-based model for
fault detection in the pneumatic actuator system,
and a neuro-fuzzy based model for fault detection
in the pneumatic actuator system are proposed and
their results are compared. An artificial neural network
may be defined as an information-processing model
that is inspired by the way biological nervous
systems, such as the brain, process information.
Back-propagation learning algorithm and ANFIS is
proposed to detect and diagnose the faults in
pneumatic actuator system and their results are
compared.
a. Fuzzy Logic Based fault Detection
The motivation for a fuzzy approach lies in the very
nature of the changes in the attributes. This behaviour
is nonlinear, and in addition, it would be unreasonable to expect that
each time the same level of a particular fault arises,
the attributes would measure exactly the same
values. The boundaries between two levels of a
certain fault or between two faults are not sharply
defined, and therefore the use of a classic true or
false logic is inappropriate, whereas use of a fuzzy
logic instead is highly justified. Membership
functions and the degree of membership, rather
than yes or no membership, give the opportunity,
using previously acquired knowledge about the
system attributes, to define and create a fuzzy rule-
based system that can be used as a diagnosis tool to
monitor the condition of the Pneumatic Actuator
[4] [5].

The diagnosis procedure is based on the
analytical and heuristic knowledge symptoms of
the Pneumatic Actuator behavior. Heuristic
knowledge in the form of qualitative process
models can be expressed as if-then rules. The task
is achieved by a fault decision process which
specifies the type, size and location of the fault as
well as its time of detection.

Figure 2. Scheme of the fuzzy-based diagnosis approach
The real time data measured under normal and
abnormal conditions of the pneumatic actuator system
is given to the BPN algorithm. In total, 4000 data
points are collected under various conditions,
including the no-fault condition. Out of the 4000 data
points, 750 are taken into account for network
training. The next 250 data points are taken into
account for testing the network.
The checked output of Fuzzy Logic algorithm is
shown in Figure 3

Figure 3. Checked Output of Fuzzy Logic Algorithm
Result of Fuzzy Logic Algorithm
Number of I/P Membership Function: 5
Number of O/P Membership Function: 4
Type of Membership Function: trimf
Number of rules: 25
Number of training data: 750
Number of checking data: 250
Classification Result in %: 51.60
Computational time: 0.485772 sec

b. Back Propagation Learning Algorithm
Back propagation learning is the
commonly used algorithm for training the multi-
layer perceptron (MLP). The networks associated
with back propagation learning algorithm are also
called back-propagation networks (BPN). It is a
gradient-descent method minimising the mean
square error between the actual and target output of
a multi-layer perceptron. It is a supervised learning
algorithm [11].
The back propagation algorithm is
different from other networks in respect to the
process by which the weights are calculated during
the learning period of the network. The general
difficulty with the multilayer perceptron is
calculating the weights of the hidden layers in an
efficient way that would result in a very small or
zero output error. When the hidden layers are
increased the network training becomes more
complex. To update weights, the error must be
calculated. The error, which is the difference
between the actual (calculated) and the desired
(target) output is easily measured at the output
layer. It should be noted that at the hidden layers,
there is no direct information of the error.
Therefore, other techniques should be used to
calculate an error at the hidden layer, which will
cause minimisation of the output error, and this is
the ultimate goal.
The training of the BPN is done in three
stages- the feed forward of the input training
pattern, the calculation and back-propagation of the
error, and updating weights. The testing of the BPN
involves the computation of feed forward phase
only. There can be more than one hidden layer
(more beneficial) but one hidden layer is sufficient.
Even though the training is very slow, once the
network is trained it can produce its outputs very
rapidly.
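
The work above was carried out with MATLAB tooling; purely to illustrate the three training stages just described (feed-forward, back-propagation of the error, weight update), a minimal one-hidden-layer NumPy sketch is given below. The paper uses two hidden layers and 750/250 training/checking data; the layer size, learning rate and epoch count here are arbitrary choices.

```python
import numpy as np

def train_bpn(X, T, hidden=8, epochs=1000, lr=0.1, seed=0):
    """Minimal feed-forward / back-propagation sketch (one hidden layer).

    X : (n_samples, n_inputs) training patterns
    T : (n_samples, n_outputs) target outputs (e.g. one-hot fault classes)
    Returns the trained weights and a predict() closure.
    """
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    w2 = rng.normal(0, 0.5, (hidden, T.shape[1]))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(epochs):
        # feed-forward phase
        h = sig(X @ w1)
        y = sig(h @ w2)
        # back-propagation of the mean-squared error
        d_out = (y - T) * y * (1.0 - y)
        d_hid = (d_out @ w2.T) * h * (1.0 - h)
        # weight update
        w2 -= lr * h.T @ d_out / len(X)
        w1 -= lr * X.T @ d_hid / len(X)

    predict = lambda Xn: sig(sig(Xn @ w1) @ w2)
    return w1, w2, predict
```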
Two hidden layers are given for
calculation and back-propagation of the error.
The checked output of BPN algorithm is shown in
Figure 4.

Figure 4. Checked Output of BPN Algorithm
The training error output of BPN algorithm is
shown in Figure 5


Figure 5. Training error output of BPN algorithm


Results of BPN Algorithm
Number of epochs: 1000
Training error: 9.4593e-004

Number of training data: 750
Number of checking data: 250
Classification Result in %: 98.33
Computational time: 4.522 sec

c. Adaptive Neuro-Fuzzy Inference System
ANFIS can serve as a basis for constructing a set of
fuzzy if-then rules with appropriate membership
functions to generate the stipulated input-output
pairs. Here, the membership functions are tuned to
the input-output data and excellent results are
possible. Fundamentally, ANFIS is about taking an
initial fuzzy inference (FIS) system and tuning it
with a back propagation algorithm based on the
collection of input-output data. The basic structure
of a fuzzy inference system consists of three
conceptual components: A rule base, which
contains a selection of fuzzy rules; a database,
which defines the membership functions used in
the fuzzy rules; and a reasoning mechanism, which
performs the inference procedure upon the rules
and the given facts to derive a reasonable output or
conclusion. These intelligent systems combine
knowledge, techniques and methodologies from
various sources. They possess human-like expertise
within a specific domain - adapt themselves and
learn to do better in changing environments. In
ANFIS, neural networks recognize patterns, and
help adaptation to environments. Fuzzy inference
systems incorporate human knowledge and perform
interfacing and decision-making. ANFIS is tuned
with a back propagation algorithm based on the
collection of input-output data [10].

ANFIS stands for Adaptive Neural Fuzzy
Inference System. Using a given input/output data
set, the toolbox function ANFIS constructs a fuzzy
inference system (FIS) whose membership function
parameters are tuned (adjusted) using either a back
propagation algorithm alone, or in combination
with a least squares type of method. This allows
your fuzzy systems to learn from the data they are
modelling.

The basic idea behind these neuro-adaptive
learning techniques is very simple:

These techniques provide a method for the
fuzzy modelling procedure to learn
information about a data set, in order to
compute the membership function parameters
that best allow the associated fuzzy inference
system to track the given input/output data.

This learning method works similarly to that of
neural networks.

The Fuzzy Logic Toolbox function that
accomplishes this membership function parameter
adjustment is called ANFIS. ANFIS can be
accessed either from the command line, or through
the ANFIS Editor GUI. Figure 6 describes an FDI scheme in which several NF models are constructed to identify the fault and the fault-free behaviour of the system.











Figure 6. FDI scheme (plant, neuro-fuzzy model, and on-line parameter estimation)

d. FIS Structure and Parameter Adjustment
A network-type structure similar to that of a
neural network, which maps inputs through input
membership functions and associated parameters,
and then through output membership functions and
associated parameters to outputs, can be used to
interpret the input/output map.
The parameters associated with the membership functions change through the learning process.
The computation of these parameters (or their
adjustment) is facilitated by a gradient vector. This
gradient vector provides a measure of how well the
fuzzy inference system is modeling the input/output
data for a given set of parameters. When the
gradient vector is obtained, any of several
optimization routines can be applied in order to
adjust the parameters to reduce some error
measure. This error measure is usually defined by
the sum of the squared difference between actual
and desired outputs. ANFIS uses either back
propagation or a combination of least squares
estimation and back propagation for membership
function parameter estimation. The first process is
to cluster the acquired data by using C-means
clustering. Secondly, the clustered data are trained
in ANFIS. The checked output is shown in Figure
7. Reduction of error after ANFIS training is shown
in command window Figure 8.
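The workflow just described can be sketched in Python as follows: place one fuzzy rule per cluster of the data, re-estimate the rule consequents by least squares, and adjust the Gaussian membership-function parameters using a numerical gradient of the sum-of-squared-error measure. The data set, rule count and step size are illustrative assumptions; the paper itself uses C-means clustering followed by the MATLAB ANFIS function.

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + 0.05 * rng.normal(size=200)          # placeholder input/output data

k = 4                                                # one rule per cluster
# Initial rule centres; in the paper these would come from C-means clustering.
centers = np.percentile(x, np.linspace(10, 90, k))
sigmas = np.full(k, 1.0)

def predict(c, s):
    # Gaussian membership grades and normalised firing strengths (200 x k).
    w = np.exp(-((x[:, None] - c[None, :]) ** 2) / (2 * s[None, :] ** 2))
    wn = w / w.sum(axis=1, keepdims=True)
    # Zero-order consequents re-estimated by least squares for the current premises.
    p, *_ = np.linalg.lstsq(wn, y, rcond=None)
    return wn @ p

def sse(params):
    e = y - predict(params[:k], params[k:])
    return float(e @ e)                              # sum of squared differences

params = np.concatenate([centers, sigmas])
lr, eps = 0.001, 1e-5
for epoch in range(200):
    base = sse(params)
    # Numerical gradient vector of the error measure w.r.t. the membership parameters.
    grad = np.array([(sse(params + eps * np.eye(2 * k)[i]) - base) / eps
                     for i in range(2 * k)])
    params -= lr * grad
    params[k:] = np.maximum(params[k:], 0.1)         # keep the widths positive

print("final sum-of-squares error:", sse(params))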

Figure 7. Checked output of ANFIS


Figure 8. Training error output of ANFIS
Number of epochs: 1000
Training error: 0.00059605
Number of training data: 750
Number of checking data: 250
Classification result in %: 99.70
Computational time: 12.1 sec
Then the classified data are checked.
e. Comparative Results of BPN and ANFIS
The comparative results of BPN and ANFIS are
given in Table 2.
TABLE 2. COMPARATIVE RESULTS OF BPN AND ANFIS

Parameter               BPN          ANFIS        Fuzzy Logic
No. of training data    750          750          -
No. of checking data    250          250          250
Training error          9.459e-004   0.00059605   -
Classification in %     98.33        99.70        51.66
Computational time      4.522 sec    12.1 sec     0.485772 sec




IV. CONCLUSION
This paper has presented a method for fault detection for the pneumatic actuator. The proposed diagnosis technique is designed using C-ANFIS, which is constructed from input/output data without model information. From this, we can effectively reduce the number of fuzzy rules and the learning time. The effectiveness of the proposed approach for developing the neuro-fuzzy model has been demonstrated using input/output data obtained from the pneumatic actuator. The proposed method has been compared with the Back Propagation neural network and Fuzzy Logic. On all criteria, C-ANFIS gives better results than BPN and Fuzzy Logic. The main challenges of NF-based FDI methods are to minimize false alarms, enhance detectability and isolability, and minimize detection time through hardware implementation.



V. REFERENCES
1. R. Saravana Kumar, "Fuzzy logic based fault detection in induction machine using LabVIEW", International Journal of Computer Science and Network Society, 2009, pp. 226-243.
2. Marco Muenchhof, "Fault-tolerant actuators and drives - structures, fault detection principles and application", Science Direct, 2009, pp. 136-148.
3. Gerasimos G. Rigatos, "Particle and Kalman filtering for fault diagnosis in DC motors", IEEE, 2009, pp. 12-23.
4. Fatiha Zidani, "A fuzzy-based approach for the diagnosis of fault modes in a voltage-fed PWM inverter induction motor drive", IEEE, 2008, pp. 586-593.
5. Christopher J. White, "A fuzzy inference system for fault detection and isolation: application to a fluid system", Science Direct, 2008, pp. 1021-1033.
6. H. T. Mok, "Online fault detection and isolation of nonlinear systems based on neurofuzzy networks", Science Direct, 2007, pp. 171-181.
7. J. M. F. Calado, "FDI approach to the DAMADICS benchmark based on qualitative reasoning coupled with fuzzy neural networks", Science Direct, 2006, pp. 685-698.
8. Mohamed A. Awadallah, "Automatic diagnosis and location of open-switch fault in brushless DC motor drives using wavelets and neuro-fuzzy systems", IEEE, 2005, pp. 104-111.
9. J. L. Sheng, "Fault diagnosis for transformer based on fuzzy entropy", IEEE, 2007, pp. 23-34.
10. Jang-Hwan Park, "C-ANFIS based fault diagnosis for voltage-fed PWM motor drive system", IEEE, 2004, pp. 379-383.
11. Paul M. Frank, "Fuzzy logic and neural network applications to fault diagnosis", IEEE, 1997, pp. 68-67.
12. P. Iserman, "Trends in the application of model-based fault detection and diagnosis of technical process", Science Direct, 1997, pp. 709-719.
13. Zhongming Ye, "Electrical machine fault detection using adaptive neuro-fuzzy inference", IEEE, 2001, pp. 254-265.
14. M. Ayoubi, "Neuro-fuzzy structure for rule generation and application in the fault diagnosis of technical processes", IEEE, 1996, pp. 762-782.


Effective Path Identification Protocol for Wireless
Mesh Networks

M.Riyaz Pasha
ECE Department
GPCET
KURNOOL, ANDHRA PRADESH
e-mail: riyazpasha@gmail.com
B.V.Ramana Raju
ECE Department
GPCET
KURNOOL, ANDHRA PRADESH
e-mail: name@xyz.com


Abstract--Wireless Mesh Networks (WMNs) have emerged as a key technology for next-generation wireless networking. Routing is a key factor for the transfer of packets from source to destination. SrcRR is a widely used protocol for transferring packets from source to destination. This protocol runs Dijkstra's algorithm on its link-state database to find the next alternative path to the destination whenever the ETX metric of a link changes. This is a time-consuming process if the ETX metrics of the links change frequently. This paper therefore eliminates the repeated use of Dijkstra's algorithm and instead uses a search operation for finding the best paths.
Keywords- Mesh networks; Routing; Reactive; Dijkstra's algorithm.
I. INTRODUCTION
In Wireless mesh networks (WMNs) the communication
is through radio nodes organized in the mesh topology. The
primary advantages of a WMN lie in its inherent fault
tolerance against network failures, simplicity of setting up a
network, and the broadband capability. Although by
definition a WMN is any wireless network having a network
topology of either a partial or full mesh topology, practical
WMNs are characterized by static wireless relay nodes
providing a distributed infrastructure for mobile client nodes
over a partial mesh topology. Due to the presence of partial
mesh topology, a WMN utilize multihop relaying similar to
an ad hoc wireless network.
The routing protocols in WMNs are classified into reactive, proactive and hybrid protocols [2]. In reactive protocols the routes are established whenever required. Proactive protocols have routes already defined in their routing tables. Hybrid protocols are a combination of both reactive and proactive protocols. The SrcRR protocol is a reactive protocol and is an extension of the DSR protocol [3].
The special routing metrics of WMN protocols are Expected Number of Transmissions (ETX), Expected Transmission Time (ETT) and Weighted Cumulative ETT (WCETT) [1]. During the early stages, WMNs used many of the ad hoc protocols for routing. But these protocols do not use the ETX, ETT and WCETT metrics, so they failed to achieve reliability, scalability, throughput, load balancing and congestion control over WMNs. Section 2 describes the SrcRR protocol, Section 3 presents the proposed idea and Section 4 concludes the paper.
II. SRCRR ROUTING PROTOCOL
SrcRR is an extension of DSR. SrcRR mainly deals with
throughput by considering link loss and transmission bitrate
and transient bursts. This protocol mainly deals with the
ETX metric [6].
ETX is the expected transmissions required to transmit
the data packet from one node to another. ETX continuously
measures the loss rate in both directions between each node
and its neighbors using periodic broadcasts. It assigns each
link a metric that estimates the number of times a packet
will have to be transmitted before it (and the corresponding
802.11 ACK) are successfully received; thus the best link
metric is one. The ETX route metric is the sum of the link
metrics; thus ETX penalizes both long routes and routes that
include links with high forward or reverse loss rates.
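A small Python illustration of this metric, assuming the standard ETX definition from De Couto et al. [8] in which a link's ETX is 1 / (df x dr), df and dr being the measured forward and reverse delivery ratios; the delivery ratios below are made-up numbers.

def link_etx(forward_ratio, reverse_ratio):
    # Expected transmissions needed for the data packet and its 802.11 ACK.
    return 1.0 / (forward_ratio * reverse_ratio)

def route_etx(links):
    # Route metric = sum of the link metrics; penalises long and lossy routes.
    return sum(link_etx(df, dr) for df, dr in links)

# A two-hop route over good links vs. a one-hop route over a lossy link.
print(route_etx([(0.9, 0.9), (0.95, 0.9)]))   # about 2.40
print(route_etx([(0.6, 0.5)]))                # about 3.33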
Every node running SrcRR maintains a link cache,
which tracks the ETX metric values for links it has heard
about recently. Whenever a change is made to the link
cache, the node locally runs Dijkstra's weighted shortest
path algorithm on this database to find the current,
minimum-metric routes to all other nodes in the network. To
ensure only fresh information is used for routing, if a link
metric has not been updated within 30 seconds it is dropped
from the link cache [6].
When a node wants to send data to a node to which it
does not have a route, it floods a route request. When a node
receives a route request, it appends its own node ID, as well
as the current ETX metric from the node from which it
received the request, and rebroadcasts it. A node will always
forward a given route request the first time it receives it. If it
receives the same route request again over a different route,
it will forward it again if the accumulated route metric is
better than the best metric it has forwarded so far. This
ensures that the target of the route request will receive the
best routes. When a node receives a route request for which
it is the target, it reverses the accumulated route and uses
this as the source-route for a route reply. When the original
source node receives this reply, it adds each of the links to
its link cache, and then source-routes data over the
minimum metric path to the destination. When a SrcRR
node forwards a source-routed data packet, it updates its
entry in the source route to contain the latest ETX metric for
the link on which it received the packet.
This allows the source and destination to maintain up-to-date link caches, and to discover when a route's quality has declined enough that an alternate route would be better. In addition, each data packet includes a field to hold one extra link metric; a forwarding node will randomly, with probability 1/n, where n is the number of nodes in the route, fill that field with the ETX metric to one of its neighbors. This allows the source and destination to learn of the existence and metric of some alternate links. As with all changes to the link cache, this prompts re-computation of all the best routes using Dijkstra's algorithm. All query and data packets contain ETX metrics for the links they have traversed so far. Any node that receives such a packet (including forwarding nodes) copies those metrics to its link cache.
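The piggybacking rule can be sketched as below; the packet layout and data structures are illustrative assumptions, not SrcRR's actual header format.

import random

def maybe_piggyback(packet, route_len, neighbor_links):
    # With probability 1/n (n = nodes in the route) fill the spare field with the
    # ETX metric of one of this node's links, so the endpoints learn alternate links.
    if neighbor_links and random.random() < 1.0 / route_len:
        neighbor = random.choice(list(neighbor_links))
        packet["extra_link"] = (neighbor, neighbor_links[neighbor])
    return packet

pkt = maybe_piggyback({"payload": b"data"}, route_len=5,
                      neighbor_links={7: 1.3, 11: 2.1})
print(pkt)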
Baseline SrcRR broadcasts a 300-byte ETX probe
packet at randomized intervals averaging every ten seconds.
ETX measures the loss rate from each neighbor by counting
the fraction of probes received over the last three minutes
(18 probes).
SrcRR is independent of IP, and operates at a lower
layer. SrcRR uses 32-bit addresses; in the usual case in
which it is carrying IP packets, SrcRR uses IP addresses in its
headers. A SrcRR node maintains a mapping from SrcRR
32-bit addresses to 48-bit 802.11 MAC addresses, derived
implicitly from SrcRR query broadcasts.
A. Advantages
Finds routes with high throughput rates.
The ETX metric penalizes both long routes and routes that include links with high forward or reverse loss rates.
B. Disadvantages
SrcRR is not likely to scale to more than a few hundred nodes. As SrcRR uses Dijkstra's algorithm, every change in the network topology forces the nodes to re-run the algorithm [7].
A node forwards a query if it has not seen the query before, or if the query's total route metric is better (lower) than the best instance of the query the node has yet seen. This increases the amount of query traffic.
III. PROPOSED IDEA
SrcRR mainly uses the weighted Dijkstra algorithm among all the paths to find the path with the lowest ETX metric to the destination.
However, this approach lacks scalability and takes considerable time to run Dijkstra's algorithm when the number of nodes in the network exceeds about 500, because as the number of nodes in the network increases, the number of paths to the destination also increases.
When there is a change in the ETX metric on the current path to the destination, the source runs the local Dijkstra algorithm on the link cache to find the next path. If the link metric changes frequently, then running Dijkstra's algorithm to find the next path will consume a lot of time. The proposed idea improves the protocol by using a search operation on the link-state database to find the best path instead of re-running Dijkstra's algorithm.
Step 1: Whenever there is a change in the ETX metric along a link, the node must include this information in the forwarding packet, i.e., the link whose ETX metric has changed.
Step 2: That node must also include the ETX metrics of the adjacent nodes whose ETX metric is less than the previous ETX metric.
Step 3: Whenever a new node joins the network, it must calculate the ETX metric with its adjacent nodes, and this information must be included in the forwarding packet by the intermediate node.
Step 4: Consider all links up to the point where the link metric has changed. Instead of running Dijkstra's algorithm, use this partial path and do a search operation in the link-state database to find similar paths.
Step 5: As the nodes are obtained from the ACK packet, do the search operation based on these nodes and find the best path to the destination (a minimal sketch is given below).
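A minimal sketch of Steps 4 and 5 in Python, under the assumption that the link-state database already stores the known minimum-metric paths as lists of node IDs together with per-link ETX values:

def alternate_paths(known_paths, prefix):
    # Return stored paths that start with the given prefix, e.g. ['S', 6, 8].
    n = len(prefix)
    return [p for p in known_paths if p[:n] == prefix]

def best_path(paths, link_etx):
    # Pick the path whose links have the smallest total ETX metric.
    return min(paths, key=lambda p: sum(link_etx[(p[i], p[i + 1])]
                                        for i in range(len(p) - 1)))

paths = [['S', 6, 8, 9, 10, 'D'], ['S', 6, 7, 10, 13, 'D'],
         ['S', 6, 8, 11, 14, 'D'], ['S', 3, 4, 11, 14, 'D'],
         ['S', 6, 8, 10, 13, 'D']]
# Candidates still sharing the prefix S-6-8 after link 8-9 degrades; the path
# through node 9 is then discarded and the best of the remaining ones is chosen.
print(alternate_paths(paths, ['S', 6, 8]))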

Consider an example network. The best paths to the destination with the minimum ETX metric values are:

1. S-6-8-9-10-D
2. S-6-7-10-13-D
3. S-6-8-11-14-D
4. S-3-4-11-14-D
5. S-6-8-10-13-D

Among these paths, consider the best path to be S-6-8-9-10-D, i.e., the path with the lowest ETX metric. If the ETX metric of the link 8-9 increases, this information is sent to the source along with the other alternative links from node 8.
In the previous case, Dijkstra's algorithm would have to be applied to the link-state database to find the best path, but in this case the search operation is sufficient. The alternative links to the destination from node 8 are 7, 10 and 11. This information is sent to the source. Instead of running Dijkstra's algorithm, the source does the search operation by considering the prefix S-6-8.
The alternative paths to the destination through node 8 are S-6-8-11-14-D and S-6-8-10-13-D. Among these two paths, the better one is chosen. This way of finding the path eliminates the frequent use of Dijkstra's algorithm by the source. The alternative path found is an effective path for the network, although the throughput may change slightly.
IV. CONCLUSION
Effective path computation is the most challenging factor that has to be considered in all the protocols used so far. The proposed idea eliminates this drawback of the SrcRR protocol and effectively finds the path when the ETX metric of a link has changed. Whenever the nodes are mobile, the use of the search operation greatly reduces the computation needed to find the next path. The use of the search operation therefore makes the protocol work efficiently.
REFERENCES
[1] Yan Zhang, Wireless Mesh Networks: Architectures and Protocols.
[2] "Wireless mesh routing protocols for health communication systems", 11th Panhellenic Conference in Informatics.
[3] Ian F. Akyildiz and Xudong Wang, "Wireless mesh networks: a survey", Computer Networks 47 (2007), pp. 445-487.
[4] Anna Zakrzewska, Leszek Koszalka and Iwona Pozniak-Koszalka, "Performance study of routing protocols for wireless mesh networks", 19th International Conference on Systems Engineering.
[5] "Routing metrics and protocols for wireless mesh networks", IEEE Network.
[6] Daniel Aguayo, John Bicket and Robert Morris, "SrcRR: a high throughput routing protocol for 802.11 mesh networks (draft)".
[7] MIT Roofnet implementation overview.
[8] Douglas S. J. De Couto, Daniel Aguayo, John Bicket and Robert Morris, "A high-throughput path metric for multi-hop wireless routing", in Proceedings of the 9th ACM International Conference on Mobile Computing and Networking (MobiCom '03), San Diego, California, September 2003.
An Adaptive Spectrum Sharing Mechanism for Multi-hop Wireless Networks
Liju Mathew Rajan M.E.,
Department of Electronics & Communication Engineering
SNS College of Technology, Coimbatore
lijumathewr@gmail.com

Abstract---Basically, IEEE 802.11 MAC protocol was
not designed for multi-hop networks. Although it can
support some ad hoc network architecture, it is not
intended to support the wireless mobile ad hoc network,
in which multi-hop connectivity is one of the most
prominent features. Spectrum sharing is a crucial issue
to the overall throughput performance of multi-hop
wireless networks. It is observed that for multi-hop
wireless networks, it is hard to resolve the scheduling
conflict, and most distributed algorithms consider the neighbors' traffic independent of each other and ignore the multi-hop nature of flows, leading to spectrum wastage and inefficiency. By incorporating the multi-hop nature of flows, we propose a new distributed scheme based on the IEEE 802.11 standard, namely Multi-hop MAC with an explicit pacing mechanism, with better pipeline efficiency. Simulation results show that the
proposed scheme outperforms the original 802.11 MAC.

Keywords--- Multi-hop, MAC, IEEE 802.11, spectrum
allocation, RTS/CTS

I. INTRODUCTION
Wireless local area networks (WLANs)
based on the IEEE 802.11 standard are becoming
increasingly popular and widely deployed. IEEE
802.11 MAC protocol is the standard for wireless
LANs; it is widely used in test beds and simulations
in the research for wireless multi-hop ad hoc
networks. However, this protocol was not designed
for multi-hop networks. Although it can support
some ad hoc network architecture, it is not intended
to support the wireless mobile ad hoc network, in
which multi-hop connectivity is one of the most
prominent features. The surprisingly poor
performance of multi-hop wireless networks has attracted more and more attention in the literature.
During recent years, new transmission techniques are
sprouting quickly. However, the traffic rate in multi-
hop wireless networks is not increasing accordingly.
Usually, when the scale of the networks becomes
large, due to the increasing interference and the
increasing number of intermediate hops of flows, the
end-to-end throughput performance starts to
deteriorate.
IEEE 802.11 was originally designed for the
single-hop Wireless LANs. Its performance in multi-
hop scenarios is much below our expectation due to
inefficient resource usage. The standardization of the
IEEE 802.11 Medium Access Control (MAC)
protocol has triggered significant research on the
evaluation of its performance.
Random access MAC provides a roughly
fair mechanism for wireless nodes to access the
medium. The effort of differentiating the uplink and
downlink resource allocation has been first applied to
WLANs because of the observation that as the central
point, APs should occupy more resource than other
nodes. To achieve better performance in multi-hop
networks, several previous schemes attempt to break
the fairness by prioritization. These schemes
heuristically search for better spectrum sharing
mechanism among wireless nodes, by differentiating
the forwarding priority according to the priority tags
of packets or flows. However, when the traffic
pattern is more complicated, these schemes cannot
guarantee significant performance improvement. On
the other side, with centralized approaches,
scheduling based MAC can allocate the resource in a
more efficient way. This approach can find the
optimal solution with knowledge of the topology and
traffic when the network is not large. These two approaches give us insight into how good a performance the networks can achieve. However, they always require perfect scheduling, i.e., a MAC with no collisions and no hidden/exposed terminals, which is almost impossible in multi-hop wireless networks.

II. PROPOSED SCHEME
A. Multi-hop MAC
For practical ad hoc networks the traffic is
not totally ad hoc. Usually traffic aggregates at
some points or areas with certain patterns. So, we
assume that the multi-hop wireless networks
which we concern have certain traffic patterns
and the flows inside have relatively stable traffic
load. For these ad hoc networks, we design an
efficient MAC which can utilize the limited
spectrum resource in a more efficient way.
The basic procedure can be described as
follows. Each node is required to broadcast its
traffic demand to its neighbors, which is the same
as previous works. Meanwhile, each node is
required to notify its neighbors about the traffic
dependency between them, which differentiates
our work from others. Afterwards, each node
allocates the spectrum individually according to the information collected and applies the calculated traffic rate to its transmission.
Traffic dependency information from
different neighbors affects the estimation of
accurate traffic load in different ways. Three
different roles of neighbors are defined in this
paper. When one neighbor has traffic for the
current node to forward, we name this role as
upstream neighbor. Similarly, when the current
node has traffic for its neighbors to forward, these
neighbors are called downstream neighbors.
Other neighbors are called uncorrelated
neighbors. It should be noted that only when the traffic demands of neighbors are correlated with the current node are these neighbors to be seen as upstream or downstream neighbors.
Therefore, one node sees the neighbors who have
traffic ending at itself as uncorrelated neighbors
because the traffic ending at itself will not affect
its traffic demand.
The traffic demand from each node consists
of two parts: the traffic that requires to be
forwarded from its upstream neighbors and the
traffic originated from its upper layer locally. It
can be expressed by the following formula (1):

$TD_i = TDO_i + \sum_{j \in N_i} TDfwding_{j,i}$

where $TD_i$ is the traffic demand of node i, $TDO_i$ is the traffic originated from its local upper layer, $TDfwding_{j,i}$ is the traffic that node i is required to forward for its upstream neighbor j, and $N_i$ is the set of neighbors of node i. The forwarding part depends on the neighbors' traffic demands, while the locally originated part is independent of them. Therefore, an accurate traffic demand of one node should be based on the knowledge of all upstream neighbors' traffic dependency information. In this
scheme, upstream nodes should notify their
downstream neighbors about their forwarding
request. Consequently, the downstream nodes
update their traffic demand accordingly. The
knowledge of accurate local traffic demand is not
enough for ideal spectrum allocation. It is also
important to acquire the correct traffic demand of
the neighbors, TDi. When downstream nodes
broadcast their new traffic demands, since the
downstream neighbors' traffic includes the forwarding requirement from the upstream nodes,
the upstream nodes should be able to extract the
dependent traffic from the messages, thus the pure
change of the original traffic of the downstream
nodes can be known. This knowledge is important
in obtaining the accurate traffic demand of
neighbors.
In
this scheme, each node maintains three tables
which record its own traffic information, its
neighbors' traffic information and the traffic
dependency information. Each node periodically
gets knowledge of its original traffic load and
forwarding traffic load from its upper layers and
updates these tables. It also updates these tables
when it receives/overhears messages from its
neighbors.
Each node should maintain a parameter set as described in Table I, which includes its own traffic information. Each node should also maintain a table which records its neighbors' corresponding information as the potential input of distributed spectrum allocation. The detailed information is listed in Table II.

Fig.1 Table I and Table II

The traffic dependency information is stored in an (n+1) x (n+1) matrix D, with the local traffic demand included. D_{i,j} denotes the forwarding requirement from node i to node j. When i equals j, D_{i,i} stores the original traffic, excluding the forwarding demand from the neighbors in this neighbor set. Note that this original traffic demand may not be purely original: in node i's storage, if neighbor j has some forwarding request from its own neighbor k, which is not a neighbor of node i, then neighbor j's original traffic demand in the matrix includes this part of the forwarding request.
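A small numerical sketch of formula (1) and the dependency matrix D; the neighbour count, the index layout (the last index stands for the local node) and the traffic values are illustrative assumptions only.

import numpy as np

# Dependency matrix for a node with 3 neighbours plus itself (index 3 = local node).
# D[j][i] = traffic neighbour j asks node i to forward; D[i][i] = originated traffic.
D = np.array([
    [0.0, 0.0, 0.0, 2.0],   # neighbour 0 asks the local node to forward 2.0 units
    [0.0, 1.0, 0.0, 0.0],   # neighbour 1 only has its own traffic
    [0.0, 0.0, 0.5, 1.5],   # neighbour 2: 0.5 own + 1.5 to be forwarded locally
    [0.0, 0.0, 0.0, 3.0],   # local node originates 3.0 units itself (TDO_i)
])

local = 3
# Formula (1): TD_i = TDO_i + sum over upstream neighbours of TDfwding_{j,i}
td_local = D[local, local] + sum(D[j, local] for j in range(D.shape[0]) if j != local)
print("local traffic demand:", td_local)   # 3.0 + 2.0 + 1.5 = 6.5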

B. Hidden node problem
The existing IEEE 802.11 MAC protocol only governs single-hop data delivery based on the interference information collected within the scope of a single hop, and lacks support for concerted transmissions among relay nodes in a larger area. Adaptive RTS/CTS (request to send / clear to send) schemes are used in IEEE 802.11 for reducing delay and for collision avoidance. This scheme in its raw form is found to be inefficient for multi-hop networks due to the hidden node problem. So, by reducing the effect of the hidden node problem, we can increase the pipeline efficiency of multi-hop networks. The pipeline efficiency, which is characterized by the simultaneous use of the same spectrum along the path of a data flow, relies on the coordination of transmissions at each relay node and becomes a dominant factor affecting throughput and latency as the hop count grows. So, by increasing the pipeline efficiency, we can also increase the throughput.

The hidden node problem or hidden terminal
problem occurs when a node is visible from a
wireless access point (AP), but not from other nodes
communicating with said AP. This leads to
difficulties in media access control. Hidden nodes in
a wireless network refer to nodes that are out of range
of other nodes or a collection of nodes. RTS/CTS
(Request to Send / Clear to Send) is the optional
mechanism used by the 802.11 wireless networking
protocol to reduce frame collisions introduced by the
hidden terminal problem. A node wishing to send
data initiates the process by sending a Request to
Send frame (RTS). The destination node replies with
a Clear To Send frame (CTS). Any other node
receiving the RTS or CTS frame should refrain from
sending data for a given time (solving the hidden
node problem). The amount of time the node should
wait before trying to get access to the medium is
included in both the RTS and the CTS frame. This
protocol was designed under the assumption that all
nodes have the same transmission range. The main
drawback of this mechanism is that a backlogged
node always attempts to transmit whenever it
considers the channel in its vicinity to be idle,
through physical and virtual carrier sensing.
Ironically, this may result in lower pipeline efficiency
because of the unattended RTS problem. However,
MAC layer pacing can solve the unattended RTS
problem and improve the pipeline efficiency.

C. MAC layer pacing

The proposed coordination scheme is deployed at
the link layer and consists of two steps: (1)
information collection and (2) a pacing mechanism.
The information collection step uses a new control
signal to obtain explicit information on intentional
RTS drops and the associated congestion. This
information is then used by the pacing mechanism to
control and coordinate the rate at which a node makes
transmission attempts.

We use a Token Bucket Filter (TBF) to pace
transmission attempts and provide support for MAC
coordination. In the proposed architecture, a TBF is
inserted between the interface queue and the MAC
function. The TBF controls the rate at which the
MAC layer receives packets and initiates
transmission attempts. The rate at which the TBF
generates tokens is adaptive and changes based on
the network conditions. This adaptive pacing is
accomplished by issuing tokens at a dynamic pace set
forth by a pace tuner. The tuner coordinates the
transmission rate between neighboring nodes through
explicit MAC feedback. The MAC feedback is
provided in the form of information on the incidence
of unattended RTS packets and the rate of token
generation is inversely proportional to it.

An unattended RTS represents an early indication
of throttled spatial reuse. We thus use it as a feedback
for triggering pace adjustments at a sender so that it
may probe for the optimal transmission rate in its
neighborhood. We modify the 802.11 DCF to
incorporate this MAC feedback.

1) Modifications on the MAC Receiver Side: The
receiver is responsible for tracking any unattended
RTS frames and conveying this information to the MAC sender.






Fig.2 CTS frame structure



Fig.2 shows the CTS frame structure. Inside the
2-byte Frame Control field of the CTS frame, there
are two unused subfields named More Fragments
field and Retry field, respectively, with each taking
up one bit and always set to zero. We use these two
single-bit fields to deliver the pacing feedback to the
sender while keeping our scheme compatible with the
original 802.11. We introduce two new bits to replace
the unused old fields: EPF bit for backward
compatibility, and SLW bit for pace tuning.

EPF (Explicit Pacing Feedback): This bit
is set to 1 if explicit pacing feedback is
enabled on the receiver node. For backward
compatibility, it is set to 0 on non-pacing
nodes.
SLW (Slow Pacing): This bit is set to 1 by
the receiver if it successfully receives but
intentionally declines at least one RTS
request due to deferral since its last CTS
transmission; otherwise it is set to 0. This bit
informs the sender whether its transmission
rate is too fast causing unattended RTS and
thus if it should slow down. The SLW bit is
always set to 0 on non-pacing nodes.
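The reuse of the two spare bits can be sketched as follows. The mapping of EPF to the More Fragments position and SLW to the Retry position, and the bit offsets 10 and 11 within the 16-bit Frame Control field, are assumptions made for illustration; the paper only states that the two unused one-bit subfields are reused.

EPF_BIT = 1 << 10   # More Fragments position, reused as Explicit Pacing Feedback
SLW_BIT = 1 << 11   # Retry position, reused as Slow Pacing

def build_cts_frame_control(base_fc, pacing_enabled, declined_rts_since_last_cts):
    fc = base_fc & ~(EPF_BIT | SLW_BIT)      # both bits stay 0 on non-pacing nodes
    if pacing_enabled:
        fc |= EPF_BIT
        if declined_rts_since_last_cts:
            fc |= SLW_BIT                    # tell the sender to slow its pace
    return fc

def parse_pacing_feedback(fc):
    return bool(fc & EPF_BIT), bool(fc & SLW_BIT)

# 0x00C4 is used here only as an example base value for a CTS Frame Control field.
fc = build_cts_frame_control(0x00C4, pacing_enabled=True,
                             declined_rts_since_last_cts=True)
print(parse_pacing_feedback(fc))   # (True, True)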

2) Modifications on the MAC Sender Side: When a
sender receives the CTS frame containing the pacing
feedback, it uses a token bucket filter to update its
transmission rate. Since our scheme uses the same
mechanism for contention-based access as in 802.11
DCF, all routine backoff or deferral operations are
not shown in this algorithm. For each outgoing RTS,
the sender starts a timer to wait for the corresponding
CTS, and retransmits the RTS after a backoff in case
of timeout. The total retransmission attempts should
not exceed a limit, set to 7 in 802.11. Once the
expected CTS is received, the sender proceeds to
retrieve its EPF bit to check if pacing feedback is
carried in this CTS frame. If feedback is available,
the pace is decreased if the SLW bit is set to 1 and
increased otherwise. The pace update method can be
either linear or multiplicative.
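A minimal sketch of this sender-side behaviour: a token bucket filter sits between the interface queue and the MAC, and its token rate is decreased when a CTS carries SLW = 1 and increased otherwise. The initial rate, the bounds and the additive/multiplicative constants are illustrative assumptions.

class PacingTokenBucket:
    def __init__(self, rate=50.0, burst=5.0):
        self.rate = rate          # tokens (transmission attempts) per second
        self.burst = burst
        self.tokens = burst
        self.last = 0.0

    def allow_attempt(self, now):
        # Refill according to the current pace, then spend one token per RTS attempt.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

    def on_cts_feedback(self, epf, slw, multiplicative=True):
        if not epf:
            return                # the peer is a non-pacing node
        if slw:                   # an unattended RTS was reported: slow down
            self.rate = max(1.0, self.rate / 2 if multiplicative else self.rate - 5.0)
        else:                     # probe for a higher transmission rate
            self.rate = min(200.0, self.rate * 1.1 if multiplicative else self.rate + 5.0)

tbf = PacingTokenBucket()
tbf.on_cts_feedback(epf=True, slw=True)
print(tbf.rate)                   # 25.0 after one multiplicative decrease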

III. PERFORMANCE ANALYSIS
NS-2 tool is used for the performance
analysis of the proposed MAC. Constant bit rate (CBR) traffic sources are used. The source-
destination pairs are spread randomly over the
network. Only 512-byte data packets are used. The
number of source-destination pairs and the packet
sending rate in each pair is varied to change the
offered load in the network.

Fig.3 and Fig.4 show the comparison graphs of Multi-hop MAC with the explicit mechanism and Multi-hop MAC without the explicit mechanism. It is clear from the graphs that the number of unattended RTS attempts in Multi-hop MAC with the explicit mechanism is lower than in Multi-hop MAC without the explicit mechanism. At the same time, the packet delivery ratio of the proposed scheme is better than that of the MAC algorithm without the explicit mechanism. Thus the proposed scheme achieves high throughput with a smaller number of unattended RTS attempts.
IV. CONCLUSION
In a multi-hop network, the efficiency of the spectrum sharing scheme depends highly on the nature and topology of the network. In this paper, a new spectrum sharing mechanism, named Multi-hop MAC with explicit pacing mechanism, is proposed.
This scheme incorporates multi-hop consideration in
spectrum allocation and increases the pipeline
efficiency so that the spectrum allocated for one hop
transmission will not be wasted due to lack of
spectrum at the next hop. The proposed adaptive
pacing mechanisms using a token bucket filter can
balance the transmissions on adjacent nodes for better
spatial reuse. Simulation results demonstrated the
performance improvements of our scheme over the
original 802.11 MAC.





Fig.3 Comparison of number of unattended RTS
between Multi-hop MAC with explicit mechanism
and Multi-hop MAC without explicit mechanism



Fig.4 Comparison of Packet Delivery Ratio between
Multi-hop MAC with explicit mechanism and Multi-
hop MAC without explicit mechanism

Solving the Protein Structure Prediction Problem
Through a Genetic Algorithm with Discrete
Crossover
G.Sindhu, S.Sudha
Department of Computer Science and Engineering
Thiagarajar College of Engineering
Madurai, India
sindhug3@gmail.com
ssj@tce.edu


Abstract-- Predicting the tertiary structure of proteins from the amino acid sequence plays a vital role in the field of computational biology. It is a grand challenge problem, since a protein performs its biological function only when it is folded into its native structure. The traditional computational methods are not powerful enough to search for the correct native structure in the huge conformational space. This paper presents a Genetic Algorithm (GA) approach to Protein Structure Prediction (PSP) using Discrete Crossover (DC) with boundary mutation, since the GA has good global search characteristics. We utilize the ECEPP/2 energy model as the fitness function; the protein structure is determined by minimizing this energy fitness function. Results show that the GA with DC and boundary mutation produces the best optimal solution.

Keywords-- Protein structure prediction problem, ECEPP force field, Genetic Algorithm, SMMP tool.
I. INTRODUCTION
The protein function is related to the protein structure. The
protein structure can be described in four levels: primary,
secondary, tertiary and quaternary. The primary structure is a
sequence of amino acids connected by peptide bonds. Amino
acids are the building blocks of the protein. There are 20
amino acid types where each amino acid consists of a main or
backbone and side chain. The main chain is the same in all the
20 amino acid type. Differences are in the side chain. Proteins
differ from each other by the order or number of amino acids.
The secondary structure occurs when the sequence of amino
acids are attracted by hydrogen bonds. Tertiary structure is the
three dimensional arrangements of the atoms. Quaternary
structure consists of more than one amino acid chain [20].
The protein structure prediction problem is regarded as a
grand challenge and is one of the great puzzling problems in
computational biology. It is how to get the structure of the
protein given only its sequence. This problem can be solved
experimentally using experimental methods such as NMR and
X-ray Crystallography. Experimental methods are the main
source of information about protein structure and they can
generate more accurate results. However, they are also time
consuming where the determination of the structure of a single
protein can take months and they are expensive, laborious and
need special instruments as well. Moreover, due to some limitations in the experimental methods, it is not always
feasible to determine the protein structure experimentally
which results in creating a big gap between the number of
protein sequences and known protein tertiary structures. In
order to bridge this gap, other methods are much needed to
determine the protein structure. Scientists from many fields
have worked to develop theoretical and computational
methods which can help provide cost effective solutions for
the protein structure prediction problem. Accordingly, the best
existing alternative is using computational methods which can
offer cost effective solutions. Computational methods can be
traditionally divided into three approaches: Homology
Modelling, Threading and Ab initio [11]. In Homology
Modelling and Fold Recognition methods, the prediction is
performed using the similarities between the target protein
sequence and the sequences of already solved proteins
structures. So, these methods are limited to predict the
structure of proteins which belong to protein families with
known structures. On the contrary, Ab initio methods are not
limited to protein families with at least one known structure
[3]. They are based on the Anfinsen hypothesis which states
that the tertiary structure of the protein is the conformation
with the lowest free energy. To predict the protein structure
using Ab initio method, the problem is formulated as an
optimization problem with the aim to find the lowest free
energy conformation. In order to perform that, protein
conformation must be represented in a proper representation.
This representation is ranged from all atoms representation to
simplified representation. Then, an energy function is used to
calculate the conformation energy and a conformational
search algorithm is utilized to search the conformation search
space to find the lowest free energy conformation [2].
In this paper, we propose a simple GA for protein tertiary
structure prediction. The performance of the real-coded Discrete Crossover (DC) operator in protein structure prediction is shown. The target protein is Met-enkephalin. The results show that the GA has a higher searching capability. In
this investigation we utilize the ECEPP/2 energy model as a
fitness function; the protein structure is determined by
minimizing the energy fitness function.
The rest of the paper is organized as follows. Section 2
deals with the survey of related work. Section 3 highlights the
proposed work of this paper. The experiments and results are presented in Section 4. Finally, Section 5 concludes the paper, stating its future scope.
II. RELATED WORK
Md Tamjidul Hoque et al. [1] investigate the impact of twins and the measures for their removal from the population of a Genetic Algorithm when applied to effective conformational searching.
Twins cause a population to lose diversity, resulting in both
the crossover and mutation operation being ineffectual. In this
paper the efficient removal of twins from the GA population is
achieved with the help of two factors: 1) Chromosome
Correlation Factor (CCF) and 2) Correlated Twin Removal
(CTR) algorithm. It highlights the need for a chromosome
twin removal strategy to maintain consistent performance.
Yunling Liu and Lan Tao [5], considering the deficiencies of simple Genetic Algorithms, such as premature convergence and slow convergence, propose HPGA/GBX (Hybrid Parallel GA / Guide Blend Crossover), which is an improvement of the GA, and evaluate the algorithm with three standard test functions. In the simple Genetic Algorithm, the whole population is taken as input, but in the improved GA the entire population is randomly divided into M sub-populations, which lets the resulting structure handle the prematurity and slow-convergence problems in a better way. The results show that HPGA/GBX performs better in terms of searching and finding the minimum energy for small proteins. In this investigation they utilize the ECEPP energy model as the fitness function. The target protein is Met-enkephalin.
R.Day et al. [6] focuses on an energy minimization
technique and the use of a multiobjective Genetic Algorithm
to solve the Protein Structure Prediction (PSP) problem. They
propose a multiobjective fast messy Genetic Algorithm
(fmGA) to obtain a solution to this problem. They utilize the
CHARMM force field as the energy function. This paper uses a binary string representation of proteins and covers the analysis of two proteins: [Met]-enkephalin and polyalanine. The operators used were the cut and splice operators.
Madhusmita et al. [7] uses a real valued Genetic Algorithm,
a powerful variant of conventional GA to simulate the PSP
problem. The conformations are generated under the
constraints of Ramachandran plot along with secondary
structure information, which are then screened through a set of
knowledge-based biophysical filters, viz. persistence length and radius of gyration. This method uses a torsion-angle representation, and the FoldX force field is used as the fitness function. They use genetic operators such as mutate, variate and crossover; the crossover operator is further split into two types, 2-point crossover and 1-point crossover. In this work they propose a fast, efficient GA-based approach for PSP.
Pallavi M. Chaudhri et al. [8] show how a Genetic Algorithm (GA) can be used efficiently for predicting the protein structure. The test protein is crambin, a plant-seed protein consisting of 46 amino acids. They describe the structure of the protein as a list of three-dimensional coordinates of each amino acid, or even each atom. Genetic Algorithms proved to be an efficient search tool for structural representations of proteins, resulting in highly optimized fitness values.
Jie Song et al. [17] show that the Genetic Algorithm is an efficient approach for finding the lowest-energy conformation for the HP lattice model. They introduce some new operators to speed up the searching process and give results with more biological significance. The additional operators used are the symmetric and corner-change operators. They suggest that high rates of mating and mutation and relatively high elitism are good for obtaining an optimized result, and that the additional operators can speed up the evolution and reduce the computation time.
The prediction problem has been proven to be NP-complete,
implying that a polynomial time algorithm is not feasible
either. Statistical approaches to the PSP problem include
Contact Interaction and Chain Growth. Both these techniques
are characterized by exhibiting lower accuracy as the
sequence length increases and also by being non-reversible in
their move-steps while searching for optimum conformation.
Alternative PSP strategies include Artificial Neural Networks
(ANN), Support Vector Machines (SVM) and Bayesian
Networks (BN), while Hidden Markov Models (HMMs)
which are based on Bayesian learning, have also been used to
convert multiple sequence alignment into position-specific
scoring matrices (PSSM), which are subsequently applied to
predict protein structures. These approaches are often
dependent on the training set and thus mostly applicable to the
homology modelling and threading-based approaches rather
than ab initio PSP problems. In particular, if the training sets
are unrelated to the test sets, then information relating to a
particular motif does not assist in a different motif. For
deterministic approaches to the PSP problem, approximation
algorithms provide an insight, though they are not particularly
useful in identifying minimum energy conformations, and
while linear programming (LP) methods have been used for
protein threading, they have not been applied in ab initio
applications, with the recent LP focus being confined to
approximating the upper bound of the fitness value based on
sequence patterns only. Therefore, non-deterministic search
techniques have dominated attempts to solve the PSP problem,
of which there are a plethora, including Monte Carlo (MC) simulation, Evolutionary MC (EMC), Simulated Annealing
(SA), Tabu Search with Genetic Algorithms (GTB), Ant
Colony Optimization, Immune Algorithm (IA) based on
Artificial Immune System (AIS), Conformational Space
Annealing (CSA), and so on. Due to their simplicity and
search effectiveness, Genetic Algorithms are very attractive
especially for the crossover operation which can build new
conformation by exchanging sub-conformations [1].
In this paper, a Genetic Algorithm with Discrete Crossover (DC) for the test protein Met-enkephalin is proposed. The torsion-angle model is used for protein
representation. ECEPP/2 force field is used as a fitness
function.
III. PROPOSED SCHEME
This section is devoted to describe how the Genetic
Algorithm was adapted to solve the protein conformational
search problem in order to find the lowest free energy
conformation.
A. Protein Conformation Representation
Each amino acid consists of two parts: the main chain and
the side chain (Figure 1) [2]. The main chain torsion angles are φ, ψ and ω; the side chain torsion angles are χ1, ..., χn. As the
overall structure of proteins can be described by their
backbone and side chain torsion angles, the tertiary structure
of a protein can be obtained by rotating the torsion angles
around the rotating bonds. So, the protein conformation is
represented as a sequence of the torsion angles. This
representation is a common protein conformation
representation and it is widely used in protein conformational
search algorithms.



In the torsion angles representation, each conformation is
represented as an array of real values. These values are the
values of the amino acid torsion angles. The length of the
array represents the number of torsion angles of the protein.
Generating conformations is done by changing the values of
the torsion angles randomly.
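A small sketch of this representation in Python: a conformation is an array of torsion-angle values in degrees, and new conformations are generated by randomly changing those angles. The 24 angles match Met-enkephalin as used later in the paper; the perturbation size is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(42)
N_ANGLES = 24

def random_conformation():
    return rng.uniform(-180.0, 180.0, size=N_ANGLES)

def perturb(conformation, step=10.0):
    # Change the torsion angles randomly, keeping them in [-180, 180).
    new = conformation + rng.normal(scale=step, size=N_ANGLES)
    return ((new + 180.0) % 360.0) - 180.0

c = random_conformation()
print(perturb(c)[:5])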
B. Energy Function
The protein energy function is the objective function and
the torsion angles are the variables. The conformation energy is calculated using the ECEPP/2 force field, which is implemented as part of the SMMP (Simple Molecular Mechanics for Proteins) package.
C. The Algorithm
In a GA, a population of chromosomes, representing a
series of candidate solutions (called individuals) to an
optimization problem, generally evolves toward better
solutions. The evolution usually starts from a population of
randomly generated individuals. In each generation, the
fitness of every individual is evaluated, the best individuals
are selected (elitism), and the rest of the new population is
formed by the recombination of pairs of individuals,
submitted to random mutations. The new population is then
used in the next generation of the algorithm. Commonly, as
employed in this problem, the algorithm ends when a
maximum number of generations is reached.
GA is a technique of function optimization derived from
the principles of evolutionary theory. The Genetic Algorithm
is a heuristic method that operates on pieces of information
like nature does on genes in the course of evolution. It has
good global search characteristics. Three operators are used to modify individuals: selection, mutation and crossover. The decision about the application of an operator is
made during run time and can be controlled by various
parameters [5]. The basic outline of a Genetic Algorithm is as
follows:
1) Initialize a population of individuals. This can be done
either randomly or with domain specific background
knowledge to start the search with promising seed individuals.
2) Evaluate all individuals of the initial population.
3) Generate new individuals. Operations to produce new
individuals are: Selection, Mutation and Crossover.
4) Go back to step 2 until either a desired fitness value was
reached or until a predefined number of iterations was
performed (Termination Criteria).
Additionally, the real-coded Discrete Crossover (DC) is used along with boundary mutation, which produces an optimal solution; a minimal sketch is given below.
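The following is a minimal sketch of the GA outlined above with the two real-coded operators used in this paper: discrete crossover (each child gene is copied from a randomly chosen parent) and boundary mutation (a mutated gene is reset to one of its bounds). A surrogate energy function stands in for the ECEPP/2 evaluation performed through SMMP, and the population size, crossover rate and mutation rate echo the settings reported in Section IV.

import numpy as np

rng = np.random.default_rng(7)
N_ANGLES, LOW, HIGH = 24, -180.0, 180.0
POP, GEN, P_CROSS, P_MUT, ELITE = 120, 200, 0.8, 0.01, 2

def energy(conf):
    # Placeholder fitness; the paper minimises the ECEPP/2 energy via SMMP instead.
    return float(np.sum(1.0 - np.cos(np.radians(conf))))

def discrete_crossover(p1, p2):
    mask = rng.random(N_ANGLES) < 0.5
    return np.where(mask, p1, p2)

def boundary_mutation(conf):
    conf = conf.copy()
    hits = rng.random(N_ANGLES) < P_MUT
    conf[hits] = rng.choice([LOW, HIGH], size=int(hits.sum()))
    return conf

def tournament(pop, fit):
    i, j = rng.integers(len(pop), size=2)
    return pop[i] if fit[i] < fit[j] else pop[j]

pop = rng.uniform(LOW, HIGH, size=(POP, N_ANGLES))
for g in range(GEN):
    fit = np.array([energy(ind) for ind in pop])
    order = np.argsort(fit)
    new_pop = [pop[i].copy() for i in order[:ELITE]]          # elitism
    while len(new_pop) < POP:
        a, b = tournament(pop, fit), tournament(pop, fit)
        child = discrete_crossover(a, b) if rng.random() < P_CROSS else a.copy()
        new_pop.append(boundary_mutation(child))
    pop = np.array(new_pop)

print("lowest surrogate energy found:", min(energy(ind) for ind in pop))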
IV. EXPERIMENTS AND RESULTS
The algorithm is implemented in Java in a Linux environment. The SMMP package is used for the ECEPP/2 energy calculation. The algorithm is applied to find the lowest free-energy conformation of Met-enkephalin, a small protein which is extensively used to test conformational search methods. It consists of 5 amino acids with 24 torsion angles. Two types of real-coded crossovers are performed, and their performances are compared. The population size is set to 120 and the number of iterations varies up to 500. The mutation rate is set to 0.01 and the crossover rate is set to 0.8.
The results in Table 1 show that the real-coded crossovers produce conformations with low energy. It is observed that the success rate of the GA with DC is better than that of the GA with simple crossover operators. The energy of the given protein is calculated using the SMMP tool. The SMMP call count and the number of iterations at which the conformation converges are shown in the table below. It is shown that the protein structure prediction problem using a genetic algorithm with discrete crossover and boundary mutation gives an optimal solution.






TABLE I
PERFORMANCE OF CROSSOVERS
(GA operators: Discrete Crossover (DC) with Boundary Mutation)

S.No.   Call of SMMP count   Number of iterations   Result (kcal/mole)
1       14566                85                     -12.91
2       5967                 34                     -12.91
3       7827                 45                     -12.91
4       10326                60                     -12.91
5       6974                 40                     -12.91
6       7148                 41                     -12.91
7       16875                99                     -12.91
8       17386                102                    -12.91
9       6291                 36                     -12.91
10      10998                64                     -12.91

The three-dimensional view of the protein Met-enkephalin
is shown using Visual Molecular Dynamics (VMD). VMD is
a tool used to display the three-dimensional view of the given
protein.




V. CONCLUSION AND FUTURE WORK
In this work the Protein Structure Prediction Problem is solved using a Genetic Algorithm with Discrete Crossover and boundary mutation. The algorithm is tested using the small protein Met-enkephalin. The protein is represented using the
torsion-angle representation. The results indicated that the
algorithm is able to find the lowest free energy conformation
of -12.91 kcal/mol using ECEPP/2 force field. Protein energy
is calculated using SMMP tool. Best results are gained using
Discrete Crossover with boundary mutation.
Further work is needed to compare the performance of the
algorithm on larger proteins and also to improve the
performance of the algorithm by parallelizing and comparing
the performance of the algorithm with other existing
algorithms for protein conformational search.
REFERENCES
[1] Md Tamjidul Hoque, Madhu Chetty, Andrew Lewis, and Abdul Sattar,
Twin Removal in Genetic Algorithms for Protein Structure Prediction
using Low-Resolution Model, IEEE/ACM Transactions on
Computational Biology and Bioinformatics, TCBB-2008-06-0102.R2,
2009.
[2] Heshm Awadh A.Bahamish, Rosni Abdullah and Rosalina Abdul
Salam, Protein Tertiary Structure Prediction Using Artificial Bee
Colony Algorithm, IEEE Third Asia International Conference on
Modeling and Simulation, 2009.
[3] WANG Cai-Yun, ZHU Hao-Dong and CAI Le-Cai, A new prediction
protein structure method based on Genetic Algorithm and Coarse-
grained protein model, IEEE 2nd International Conference on Biomedical Engineering and Informatics (BMEI '09), 2009.
[4] Pawel Widera, Jonathan M.Garibaldi and Natalio Krasnogors,
Evolutionary design of energy functions for Protein structure
prediction. IEEE International Conference-2009.
[5] Yunling Liu, Lan Tao, Protein Structure Prediction based on An
Improved Genetic Algorithm, The 2nd IEEE International Conference
on Bioinformatics and Biomedical Engineering, Shanghai, 2008, p:
577-580.
[6] R.Day, J.Zydallis and G.Lamont, Solving the Protein Structure
Prediction Problem through a Multi-objective Genetic Algorithm,
IEEE International Conference-2008.
[7] Madhusmita, Harijinder Singh and Abhijitis Mitra, "Real valued
Genetic Algorithm based approach for Protein Structure Prediction-
Role of Biophysical Filters for Reduction of Conformational Search
Space, IEEE International Conference-2008.
[8] Mrs.Pallavi M.Chaudhri, Mr.Prasad P.Thute, Application of Genetic
Algorithms in Structural Representation of Proteins, IEEE First
International Conference on Emerging Trends in Engineering and
Technology-2008.
[9] Steffen Schulze-Kremer, Genetic Algorithms for Protein Tertiary
Structure Prediction, Springer-Verlag London, UK, Pages: 262 -
279, ISBN: 3-540-56602-3, 1993.
[10] Wen Yuan Liu,Shui Xing Wang,Bao Wen Wang,Jia Xin Yu,Protein
Secondary Structure Prediction Using SVM with Bayesian Method,
IEEE 2nd International Conference on Bioinformatics and Biomedical
Engineering, 2008. ICBBE 2008.
[11] Christine Kehyayan, Nashat Mansour, Hassan Khachfe,Evolutionary
Algorithm for Protein Structure Prediction, IEEE International
Conference on Advanced Computer theory and Engineering-2008.
[12] T.W.de Lima, P.H.R.Gabriel, A.C.B.Delbern, R.A.Faccioli, I.N.da
Silva, Evolutionary Algorithm to ab initio Protein Secondary
Structure Prediction with Hydrophobic Interactions, IEEE
International Conference-2007.
[13] Michela Taufer, Chahm An, Andreas Kersten, Charles L.Brooks III,
Predictor@Home: A Protein Structure Prediction Supercomputer
Based on Global Computing, IEEE Transactions on Parallel and
Distributed Systems-2006.
[14] Yun-Ling Liu, Lan Tao, An Improved Parallel Simulated Annealing
Algorithm used for Protein Structure Prediction, IEEE Fifth
International Conference on Machine Learning and Cybernetics-2006.
[15] Heshm Awadh A.Bahamish, Rosni Abdullah, Rosalina Abdul Salam,
Protein conformational search Using honey Bee Colony
optimization, IEEE Regional Conference on Mathematics, Statistics
and Application-2006.
[16] Rajkumar Bondugula, Dong Xu, Yi Shang, A fast algorithm for Low-
Resolution Protein Structure Prediction, IEEE International
Conference-2006.
[17] Jie Song,Jiaxing Cheng,TingTing zheng and Junjun mao, A Novel
Genetic Algorithm for HP Model Protein Folding, IEEE Sixth
International Conference on Parallel and Distributed Computing,
Applications and Technologies-2005.
[18] Wanjun Gu, Tong Zhou, Jianmin Ma Xiao Sunand and Zuhong Lu,
Folding Type Specific Secondary Structure Propensities of
Synonymous Codons, IEEE Transactions on NanoBioScience-2003.
[19] Richard O.Day, Gray B.Lamont and Ruth Pachter, Protein Structure
Prediction by Applying an Evolutionary Algorithm, IEEE
International Conference on Parallel and Distributed Processing-2003.
[20] Satya Nanda Vel Arjunan, Safaai Deris and Rosli Md Illias,
Literature Survey of Protein Secondary Structure Prediction, IEEE
Journal Technology-2001, 34(C) 2001:63-72.
[21] Y. Duan and P. A. Kollman , Computational protein folding: From
lattice to all-atom, IBM System Journal, vol .40, no. 2, 2001
[22] J. T. Pedersen and J. Moult, Protein folding simulations with Genetic
Algorithms and a detailed molecular description, J. Mol. Biol.,
vol.269, pp.240-259, 1997.
[23] Vinicius Tragante do 1, Renato Tins1,Diversity Control in Genetic
Algorithms for Protein Structure Prediction. J R Soc Interface. 2006
February 22; 3(6): 139151.
Performance Analysis of MAC Protocol with CDMA Scheduling and Adaptive
Power Control for Wireless Multimedia Networks

S.P.V. Subba Rao, Dr. S. Venkata Chalam, Dr. D. Sreenivasa Rao
ECE Dept, GPCET, Kurnool, A.P, India; ECE Dept, ACE Engineering College, Hyderabad, A.P, India; ECE Dept, JNTU CE, Hyderabad, A.P, India
spvsr2007@gmail.com, sv_chalam2003@yahoo.co.in, dsraoece@gmail.com
Abstract - Existing MAC protocols such as TDMA and 802.11 have many
disadvantages for scheduling multimedia traffic in CDMA wireless
networks. In this paper, a dynamic MAC protocol is developed for
WCDMA wireless multimedia networks. In the design, we combine the
merits of the CSMA and TDMA MAC protocols with WCDMA systems to
improve the throughput of the multimedia WLAN in a cellular
environment. These MAC protocols are used adaptively to handle both
the low and high data traffic of the mobile users. The protocol uses
multiple slots per frame, allowing multiple users to transmit
simultaneously using their own CDMA codes. To reduce transmission
power and to maximize system capacity, an adaptive power control
algorithm is applied at the beginning of each frame. The algorithm
uses a Power Determining Factor (PDF), based on the data traffic
rate, to determine the power: if the observed traffic rate is high,
the PDF is increased and the power is raised accordingly; if it is
low, the PDF is decreased and the power is lowered. Simulation
results show that the proposed MAC protocol achieves high channel
utilization and improved throughput with reduced average delay and
power consumption for both low and high data-rate multimedia traffic.
Key words - Wideband Code Division Multiple Access (W-CDMA), MAC
protocol, Direct Sequence Spread Spectrum (DSSS), Time Division
Multiple Access (TDMA), Carrier Sense Multiple Access (CSMA)


I. INTRODUCTION
Wideband code-division multiple access (WCDMA) [1] forms the basis of
the air interface in 3rd generation cellular mobile communications.
It offers higher speeds and supports more users, as it uses the
direct-sequence spread spectrum method of asynchronous code division
multiple access. WCDMA is a wideband spread-spectrum channel access
method and is used in 3G cellular networks. If there are a large
number of users, the mutual interference between the connections
degrades the QoS of the new user as well as of the ongoing
connections. Many protocols have therefore been developed to access
the medium efficiently.
Uthman Baroudi and Ahmed Elhakeem [2] have analyzed the performance
of an adaptive call admission/congestion control policy. The policy
is based on window measurement, by assessing the status of the buffer
at the base station under the hybrid TDMA/CDMA access scheme. The
window measurement effectively maintains the required QoS,
particularly the blocking probability, call establishment delay and
cell error rate. They inter-relate the physical limitations of the
base station, the instantaneous buffer conditions, call- and
burst-level traffic, and end-to-end bit error performance in one
queuing problem.
Hai Jiang et al. [3] have proposed a MAC scheme which can achieve
bit-level QoS, low overhead, and accurate channel and interference
estimation along with high bandwidth efficiency. The scheme also has
the potential to support packet-level QoS and service
differentiation.
Liang Xu et al. [4] have proposed a class of dynamic fair scheduling
schemes based on the generalized processor sharing (GPS) fair service
discipline, under the generic name code-division GPS (CDGPS). A
credit-based CDGPS (C-CDGPS) scheme is proposed to further enhance
the utilization of the soft capacity by trading off short-term
fairness.
Most of the hybrid approaches, such as [5,6], are based on CSMA and
TDMA, but these have many disadvantages: CSMA suffers from the hidden
terminal problem, and TDMA requires a significant amount of signal
processing for matched filtering and correlation detection in order
to synchronize with a time slot.
In this paper we propose a dynamic MAC protocol for wireless
multimedia networks. In our technique, the merits of the CSMA and
TDMA MAC protocols are combined with WCDMA systems to enhance the
throughput. The MAC protocols are used adaptively in order to handle
both low and high data traffic of the mobile users. Multiple slots
per frame allow multiple users to transmit with the help of their own
CDMA codes. A power control algorithm is applied at the beginning of
each frame to reduce power and to increase capacity.
The paper is organized as follows: In Section II the dynamic MAC
protocol for wireless multimedia networks is developed. In Section
III an adaptive power control mechanism is derived for multimedia
traffic in WCDMA networks. The MAC protocol is evaluated through
simulations in Section IV. The paper is concluded in Section V.

II. DYNAMIC MAC PROTOCOL
A. DS-SS (CDMA) Based MAC Scheme
The information-bearing signal b(t) is multiplied by the spreading
code c(t), so the transmitted signal m(t) may be expressed as:

m(t) = c(t) . b(t)                                             (1)

which is a wideband signal. The received signal r(t) contains the
transmitted signal m(t) plus the additive interference i(t). The
interference signal contains MAI (Multiple Access Interference),
fading and any other external interference signals. Therefore,

r(t) = m(t) + i(t) + n(t) = c(t) . b(t) + i(t) + n(t)          (2)

where n(t) is Additive White Gaussian Noise (AWGN) at the receiver.
To recover the original signal b(t), the received signal r(t) is
multiplied by the same code that was used at the transmitter.
Therefore, the demodulated output z(t) at the receiver is given by

z(t) = c(t) . r(t) = c^2(t) . b(t) + c(t) . i(t) + c(t) . n(t) (3)

Since c^2(t) = 1 (the autocorrelation property of the PN code),

z(t) = b(t) + c(t) . i(t) + c(t) . n(t)                        (4)
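As an illustration of equations (1)-(4), the following minimal Python
sketch spreads a bipolar data signal with a PN code and despreads it
at the receiver. The 7-chip code, the number of bits and the noise
level are arbitrary example values, not parameters from the paper.

    import numpy as np

    def spread(bits, pn_code):
        # Eq. (1): m(t) = c(t)*b(t) with bipolar (+1/-1) bits and chips.
        return np.repeat(bits, len(pn_code)) * np.tile(pn_code, len(bits))

    def despread(received, pn_code, n_bits):
        # Eqs. (3)-(4): multiply by the same PN code and integrate over each bit.
        z = received * np.tile(pn_code, n_bits)
        return np.sign(z.reshape(n_bits, len(pn_code)).sum(axis=1))

    # Example (hypothetical values): 7-chip PN code, 4 data bits, AWGN channel
    pn = np.array([1, -1, 1, 1, -1, -1, 1])
    bits = np.array([1, -1, -1, 1])
    m = spread(bits, pn)                       # transmitted wideband signal m(t)
    r = m + 0.5 * np.random.randn(len(m))      # received signal with noise n(t)
    print(despread(r, pn, len(bits)))          # recovered bits b(t)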

B. Hybrid Approach

In our approach the merits of the CSMA and TDMA MAC protocols are
combined with WCDMA systems to enhance the throughput of multimedia
traffic in a cellular environment. The MAC protocols are used
adaptively in order to handle both low and high data traffic of the
mobile users.
We take only one mobile cell into account, in which there are M
active nodes (or users) that generate messages to be transmitted to
another node, and the base station controls all the nodes within the
cell. Two kinds of links are possible in this model:
1. Uplink: data transmission from a mobile station (MS) to the BS.
2. Downlink: data transmission from the BS to an MS.



C. WCDMA Scheduling

Fig 1: MAC frame

where REQ = Request, REP = Reply.
In the proposed protocol, time is divided into fixed-size frames. A
frame has N time slots for the purpose of communication. The two
special types of slots in each frame are the Request (REQ) and the
Reply (REP) slots, which are divided into mini slots. The REQ mini
slots are used in the uplink and the REP mini slots are used in the
downlink. The REP mini slots are mapped to a matrix of data slots
and CDMA codes, as in Fig. 1. The data slot and CDMA code for a user
are assigned by a scheduling algorithm, and this information is sent
to the user as a REP signal by the BS.
A REQ signal, along with some control information, is sent to the BS
by a user which is ready for transmission. The scheduling algorithm
of the BS enables the user to receive a REP signal carrying the
assigned data slots and CDMA codes. The REP signal thus both conveys
the result of processing the nodes' requests and the scheduling, and
enables the user to transmit its data. In our dynamic scheme, each
terminal transmits in the time slots during which it is allowed to
transmit, using its own code sequences.
The REP is divided into mini slots, each holding information about
the corresponding data slot in the next frame. Each mini slot is
further divided into a grid, where the grid size equals the maximum
number of nodes that can transmit data simultaneously in a data slot.
Each of these grid entries is initialized with a code which the
scheduler allocates to the node that succeeded in getting a
reservation for that slot. We assume that each node generates
messages with a Poisson-distributed arrival rate. The message length
of each node is exponentially distributed. A node cannot generate a
new message until all packets of the current message are transmitted
completely. A node which has generated a message in the current
frame cannot try to access the data slots in the same frame.

D. WCDMA Contention

In this dynamic MAC, traffic is classified as real time (RT) and
non-real time (NRT) and prioritized as high and low, respectively.
For the high-priority data traffic, a node can be in one of two
modes: low data traffic (LDT) or high data traffic (HDT). A node is
in HDT mode only when it has received a High Contention Message (HCM)
from a two-hop neighbor within the last T period; otherwise, the node
is in LDT mode. A node sends an HCM when it experiences high
contention due to high data traffic. Each node calculates its data
traffic (DT) to estimate the contention; if DT exceeds a threshold
value DT_th, the node sends an HCM. In LDT, any node can compete to
transmit in any slot, but in HDT, only the owners of the current slot
and their one-hop neighbors are allowed to compete for channel
access. In both modes, the owners have higher priority over
non-owners. If a slot does not have an owner, or its owner does not
have data to send, non-owners can transmit their data in this slot.
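The following minimal Python sketch illustrates this contention-mode
logic under stated assumptions: the threshold DT_th, the period T and
the traffic values are placeholder numbers, and the helper names are
hypothetical, not identifiers from the paper.

    import time

    DT_TH = 0.75          # hypothetical data-traffic threshold DT_th
    T_PERIOD = 2.0        # hypothetical observation period T (seconds)

    class Node:
        def __init__(self):
            self.last_hcm_time = None   # when an HCM was last received from a two-hop neighbor

        def on_hcm_received(self):
            self.last_hcm_time = time.time()

        def should_send_hcm(self, data_traffic):
            # A node advertises high contention when its measured DT exceeds DT_th.
            return data_traffic > DT_TH

        def mode(self):
            # HDT only if an HCM arrived within the last T period; otherwise LDT.
            if self.last_hcm_time is not None and time.time() - self.last_hcm_time < T_PERIOD:
                return "HDT"
            return "LDT"

        def may_contend(self, is_slot_owner, is_one_hop_neighbor_of_owner):
            # In LDT any node may compete; in HDT only slot owners and their one-hop neighbors may.
            if self.mode() == "LDT":
                return True
            return is_slot_owner or is_one_hop_neighbor_of_owner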

E. Scheduling Algorithm

For the random access protocol, we use the M/M/n/n/K Markov model and
obtain the steady-state equation

x A = O                                                        (5)

where A is the generator matrix, O is a null matrix and x is the
steady-state probability vector, given as

x = {x_0, x_1, x_2, ..., x_n}                                  (6)

For this Markov chain, the recurrent non-null and absorbing
properties are satisfied. K is the number of users and n is the
number of data slots. The average number of packets served by the
system is calculated as

P_A = [ Σ_{i=1}^{n} i C(K,i) T^i ] / [ Σ_{i=0}^{n} C(K,i) T^i ]     (7)

Here T is the traffic offered to the system and is given by

T = λ / μ                                                      (8)

where λ is the (Poisson-distributed) arrival rate and μ is the
(exponentially distributed) service rate.
The probability of packet success, P_SR, is calculated as

P_SR = Σ_{j=0}^{n} x_j (1 − Berr(k))^c                         (9)

where c is the number of CDMA codes allocated to the active users in
a data slot, and the steady-state probabilities are given as

x_0 = 1 / [ Σ_{i=0}^{n} C(K,i) T^i ]                           (10)

x_j = x_0 C(K,j) T^j                                           (11)

and Berr(k) is the bit error rate, which is given by

Berr(k) = (1/2) erfc( [ (k − 1)/(3p) + No/(2 Eb) ]^(−1/2) )    (12)

where
k = number of active users,
p = processing gain of the spread spectrum,
Eb = energy per bit in joules,
No = the two-sided noise power spectral density in Watts/Hz.
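The following Python sketch evaluates equations (10)-(12) as
reconstructed above, i.e. the Engset-type steady-state probabilities
and the Gaussian-approximation bit error rate, for illustrative
parameter values; the function names and the numbers (K, n, T, p,
Eb/No) are placeholders, not values from the paper.

    from math import comb, erfc, sqrt

    def berr(k, p, eb_over_no):
        # Bit error rate for k active users and processing gain p (Eq. 12).
        if k <= 1:
            arg = 2.0 * eb_over_no                      # single user: only thermal noise
        else:
            arg = 1.0 / ((k - 1) / (3.0 * p) + 1.0 / (2.0 * eb_over_no))
        return 0.5 * erfc(sqrt(arg))

    def steady_state(K, n, T):
        # Steady-state probabilities x_0..x_n of the M/M/n/n/K model (Eqs. 10-11).
        denom = sum(comb(K, i) * T**i for i in range(n + 1))
        return [comb(K, j) * T**j / denom for j in range(n + 1)]

    # Example with placeholder values: 20 users per cell, 8 data slots, offered traffic T = 0.3
    x = steady_state(K=20, n=8, T=0.3)
    avg_served = sum(i * xi for i, xi in enumerate(x))  # average number of busy slots (cf. Eq. 7)
    print(avg_served, berr(k=5, p=64, eb_over_no=10.0))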
F. Data Traffic Calculation
Each node calculates the traffic by using the traditional way of
calculating the system capacity for data traffic (DT), which is given
by

DT = (p / SIR) . (1 / (1 + η)) . (1 / ν) . a . P               (13)

where
p and a = the processing gain from spectrum spreading and the gain
due to the sector antenna, respectively,
SIR = signal-to-interference ratio,
η = the interference from other nodes,
P = the power control factor,
ν = the voice/data activity factor.

III. POWER CONTROL

In the proposed algorithm, in addition to the maximum and minimum
step sizes, which are common parameters for all adaptive power
control techniques, an Adaptive Control Factor (ACF) is also
involved. The power control step size is adapted by multiplying the
fixed step size by a factor called the Adaptive Factor (AF). This
factor is updated according to the received Transmit Power Control
(TPC) commands.
We also introduce another parameter, the Power Determining Factor
(PDF), based on the data traffic rate, to determine the power. Based
on this parameter, we determine whether the power should be increased
or decreased. Depending on the traffic rate, the PDF is updated: if
the received message is an HCM, the parameter is increased and the
power is raised accordingly; if the message is a Low Contention (LC)
message, the parameter is decreased and the power is lowered
correspondingly.

A. Proposed Adaptive Power Control Technique

The aim of a power control procedure is to ensure an adequate SIR for
all mobiles in the system with a simple algorithm. The power control
mechanism adjusts the transmitted power to keep the received SIR
equal to the target SIR (SIRtarget) at the receiver. The transmitter
is commanded to increase or decrease the transmit power so that the
received SIR stays close to SIRtarget.
The transmit power can be represented as:

P(t+1) = P(t) + Δ . sign(SIRtarget − SIRest)   [dB]            (14)

where P(t) represents the transmit power at time t, Δ is the power
control step size, and SIRtarget and SIRest are the target and
estimated SIR, respectively. The term sign is the sign function:
sign(x) = 1 when x ≥ 0, and sign(x) = −1 when x < 0.
Note that sign(SIRtarget − SIRest) = −1 is equivalent to a TPC
power-down command, which can be represented by bit 0. From Equation
(14), it can be observed that the transmit power is increased or
decreased by Δ in every time slot; the transmitted power therefore
always changes, even when there is no change in the channel.

B. Adaptive Step Size Estimation

The power control step size is adapted by multiplying a
factor called Adaptive Factor (AF) with the fixed step
size. This factor will be updated according to received
TPC commands. The proposed algorithm increases the step size, i.e.
the AF, when the mobile detects the same sequence of TPC commands.
The proposed algorithm uses the two most recent TPC commands to
compute the AF based on a predefined ACF. The ACF is a constant
chosen by the network. The AF is updated by the following equation:
AF_u(t) = min( max( ACF + DF_u(t)/ACF, Size_min ), Size_max )  (15)

where Size_min and Size_max are the minimum and maximum step sizes,
respectively, and DF_u(t) is the Dynamic Factor of the u-th user at
time t.
The data traffic of the u-th user is therefore given by:

DT_u(t) = Σ_{i=1}^{n} DT_i                                     (16)

PDF_u(t) = DT_u(t)                                             (17)

From Equation (15), AF_u(t) is updated linearly, so only a small
amount of additional complexity is required at the mobiles.
The DF is updated based on the two most recent TPC commands according
to the following equation:

DF_u(t) = DF_u(t−1) − abs( TPC_u(t) − TPC_u(t−1) ) + 1         (18)

where abs(x) is the absolute value of x.
The transmit power is updated according to the following equation:

P_u(t+1) = P_u(t) + AF_u(t) . PDF_u(t) . TPC_u(t)              (19)

where AF_u(t) is the Adaptive Factor of the u-th user at time t, and
TPC_u(t) is the TPC command of the u-th user at time t, corresponding
to sign(SIRtarget − SIRest) in Equation (14), and PDF_u(t) is the
Power Determining Factor. If the received message is an HCM, the
parameter is increased and the power is raised accordingly; if the
message is an LC message, the parameter is decreased and the power is
lowered correspondingly.
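A minimal Python sketch of the adaptive power control update of
equations (14)-(19), under stated assumptions: TPC commands are
encoded as +1/−1, the step-size bounds, ACF and traffic values are
placeholder numbers, and the function names are hypothetical rather
than taken from the paper.

    ACF = 2.0                        # hypothetical Adaptive Control Factor chosen by the network
    SIZE_MIN, SIZE_MAX = 0.5, 4.0    # hypothetical bounds on the adaptive step size

    def tpc_command(sir_target_db, sir_est_db):
        # Eq. (14): +1 requests a power increase, -1 a power decrease.
        return 1 if sir_target_db - sir_est_db >= 0 else -1

    def update_df(df_prev, tpc_now, tpc_prev):
        # Eq. (18): DF grows while consecutive TPC commands agree, shrinks when they alternate.
        return df_prev - abs(tpc_now - tpc_prev) + 1

    def update_af(df):
        # Eq. (15): adaptive factor bounded between SIZE_MIN and SIZE_MAX.
        return min(max(ACF + df / ACF, SIZE_MIN), SIZE_MAX)

    def update_power(p_db, af, pdf, tpc):
        # Eq. (19): transmit power update in dB.
        return p_db + af * pdf * tpc

    # One illustrative iteration with placeholder numbers
    tpc_prev, df, p_db = 1, 1.0, 10.0
    tpc = tpc_command(sir_target_db=8.0, sir_est_db=6.5)
    df = update_df(df, tpc, tpc_prev)
    pdf = 0.6                        # Eqs. (16)-(17): PDF set from the measured data traffic DT_u(t)
    p_db = update_power(p_db, update_af(df), pdf, tpc)
    print(tpc, df, p_db)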

IV. SIMULATION RESULTS
A. Simulation Setup
In this section, we simulate the proposed dynamic
MAC (DMAC) protocol for WCDMA cellular
networks. The simulation tool used is NS2. In the
simulation, mobile nodes move in a 600 meter x 600
meter region for 50 seconds simulation time. Initial
locations and movements of the nodes are obtained
using the random waypoint (RWP) model of NS2. All
nodes have the same transmission range of 250 meters.
The simulation parameters are given in table I.

Area Size 600 X 600
Number of Cells 2
Users Per Cell 20
Slot Duration 2 msec
Radio Range 250m
Frame Length 2 to 8 slots
CDMA codes 2 to 5
Simulation Time 50 sec
Routing Protocol AODV
Traffic Source CBR, VBR
Video Trace JurassikH263-256k
Packet Size 512 bytes
MSDU 2132
Transmission Rate 1Mb,2Mb,5Mb
No. of Users 2,4,6,8 and 10

Table I. Simulation Parameters




B. Performance Metrics
The performance is evaluated mainly according to the following
metrics:
Channel Utilization: the ratio of the bandwidth received to the total
available bandwidth for a traffic flow.
Throughput: the amount of data received successfully, measured in
Mb/s.
Average End-to-End Delay: the end-to-end delay averaged over all
surviving data packets from the sources to the destinations.
Average Energy: the average energy consumed by all nodes in sending,
receiving and forwarding operations.

C. Results of the Dynamic MAC Protocol

In the first experiment, the transmission rate is varied over 1, 2,
3, 4 and 5 Mb and the channel utilization, end-to-end delay and
throughput are measured.

Fig 2. Channel Utilization Vs Rate

Fig. 2 shows the channel utilization obtained for various rates. It
shows that DMAC has better utilization than the TDMA and 802.11
protocols.

Fig 3. Throughput Vs Rate

Fig 4. Delay Vs Rate

When the traffic rate is increased, more traffic flows contend and
hence the fair utilization of the channel decreases. It can be seen
from Fig. 2 that the channel utilization decreases gradually, but
only slightly, as the rate is increased.
Fig. 3 shows the throughput obtained with the proposed DMAC protocol
compared with the TDMA and 802.11 protocols. From the figure, it can
be seen that the throughput of all the protocols decreases
drastically when the rate is increased. It also shows that the
throughput of DMAC remains higher than that of TDMA and 802.11 as the
rate increases.
Fig. 4 shows the delay incurred for various rates. From the figure,
it can be observed that the delay increases gradually as the rate is
increased. The delay of DMAC is significantly less than that of the
TDMA and 802.11 protocols, since it has the adaptive contention
resolution mechanism.
The proposed DMAC-APC is compared with the DMAC protocol without
power control. Figure 5 shows the energy consumption for both DMAC
and DMAC-APC. Because of its adaptive power control technique,
DMAC-APC has reduced energy consumption compared with DMAC. The
reason is that in DMAC-APC the power is lowered for low data traffic
and increased correspondingly for high data traffic, whereas in the
DMAC protocol the users' data is transmitted without power control,
so its energy consumption is higher than that of DMAC-APC.


Fig5: Users Vs Energy


Fig6: Users Vs Throughput

Figure 6 shows the throughput obtained with the DMAC-APC and DMAC
protocols. From the figure, it can be seen that the throughput of
both protocols decreases as the number of users increases, but the
throughput of DMAC-APC remains higher because it minimizes power
consumption.

Fig7: Users Vs Utilization

It can be seen from Figure 7 that the channel utilization increases
gradually as the number of users increases. It shows that DMAC-APC
has better utilization than the DMAC protocol.

V. CONCLUSION

In this paper, we have developed a dynamic MAC protocol for wireless
multimedia networks. In the design, we have combined the merits of
the CSMA and TDMA MAC protocols with WCDMA systems to improve the
throughput of the multimedia WLAN in a cellular environment. These
MAC protocols are used adaptively to handle both the low and high
data traffic of the mobile users. The protocol uses multiple slots
per frame, allowing multiple users to transmit simultaneously using
their own CDMA codes. In the adaptive power control mechanism, a
Power Determining Factor (PDF) is introduced; this factor is updated
according to the received TPC commands and the traffic rate, i.e., if
the observed traffic rate is high, the parameter is increased and the
power is raised accordingly, and if it is low, the parameter is
decreased and the power is lowered correspondingly. Simulation
results show that the proposed MAC protocol achieves high channel
utilization and improved throughput with reduced average delay under
low and high data traffic, and reduces the power consumption of
multimedia traffic.

REFERENCES
[1] Wideband CDMA, http://en.wikipedia.org/wiki/W-CDMA_(UMTS)
[2] Uthman Baroudi and Ahmed Elhakeem, "Adaptive Admission/Congestion
Control Policy for Hybrid TDMA/MC-CDMA Integrated Networks with
Guaranteed QoS", ICECS 2003.
[3] Hai Jiang, Weihua Zhuang, and Xuemin (Sherman) Shen, "Distributed
Medium Access Control for Next Generation CDMA Wireless Networks",
IEEE Wireless Communications, Special Issue on Next Generation CDMA
vs. OFDMA for 4G Wireless Applications, vol. 14, no. 3, pp. 25-31,
June 2007.
[4] Junshan Zhang, Ming Hu and Ness B. Shroff, "Bursty traffic over
CDMA: predictive MAI temporal structure, rate control and admission
control", Elsevier, 2002-2003.
[5] Nuwan Gajaweera and Dileeka Dias, "FAMA/TDMA Hybrid MAC for
Wireless Sensor Networks", in Proc. of the 4th International
Conference on Information and Automation for Sustainability, Colombo,
Sri Lanka, 12-14 December 2008.
[6] T. van Dam and K. Langendoen, "An adaptive energy efficient MAC
protocol for wireless sensor networks", ACM SenSys 2003, pp. 171-180,
November 2003.
[7] Liang Xu, Xuemin (Sherman) Shen and Jon W. Mark, "Dynamic Fair
Scheduling With QoS Constraints in Multimedia Wideband CDMA Cellular
Networks", IEEE Transactions on Wireless Communications, Vol. 3,
No. 1, January 2004.
[8] Jennifer Price and Tara Javidi, "Distributed Rate Assignments for
Simultaneous Interference and Congestion Control in CDMA-Based
Wireless Networks", UW Electrical Engineering Technical Report, 2004.
[9] S.P.V. Subbarao, S. Venkata Chalam and D. Srinivasa Rao, "A
Dynamic MAC Protocol for WCDMA Wireless Multimedia Networks",
International Journal on Network Security (IJNS).
[10] Network Simulator, http://www.isi.edu/nsnam/ns

A New Approach Particle Swarm
Optimization for Economic Load Dispatch

Anish Ahmad, Nitin Singh
Department of Electrical Engineering, Motilal Nehru National Institute of Technology, Allahabad, UP, India
anish_ahmad@yahoo.co.in, nitins@mnnit.ac.in



Abstract - A Particle Swarm Optimization (PSO) technique employed to
solve Economic Dispatch (ED) problems is presented in this paper. In
practical cases, the fuel cost of a generator can be represented as a
quadratic function of its real power generation. The proposed method
is able to determine the output power generation of all the
generating units so that the total cost is minimized. The obtained
results are compared with the Lambda Iteration method and show that
the PSO approach is feasible and efficient.
Keywords - Soft Computing Techniques, Optimization, Particle Swarm
Optimization, Economic Load Dispatch
I. INTRODUCTION
Economic Load Dispatch (ELD) is one of the fundamental problems in
power system operation. In essence, it is an optimization problem
whose objective is to reduce the total generation cost of the units
while satisfying the constraints. ELD is defined as the operation of
generation facilities to produce energy at the lowest cost to
reliably serve consumers, recognizing any operational limits of
generation and transmission facilities [1].
The primary objective of the Economic Dispatch (ED) problem is to
determine the most economic loadings of the generators such that the
load demand in a power system can be met [2]. Previous efforts on
solving ELD problems have employed various mathematical programming
methods and optimization techniques. The common mathematical methods
used to solve the constrained optimization problem of ELD are the
lambda iteration method, the base point and participation factor
method, etc. In these conventional methods, an essential assumption
is that the incremental cost curves of the units are monotonically
increasing functions.
Gerald F. Reid et al. [3] formulated ED as a quadratic programming
problem and solved it using Wolfe's algorithm for both equality and
inequality constraints; convergence is very fast but the results are
not very accurate, and the advantage of the method is that it does
not depend on the selection of a gradient step size or penalty
factors. N. Nabona et al. [4] addressed the nonlinear problem through
the derivation of linear constraints; less computation time is
required due to the second-order approximation of the
power-generation cost function, but the disadvantage of this approach
is that the results are not very accurate because of the
linearization of the problem. Dale W. Ross et al. [5] proposed a
successive-approximations dynamic programming algorithm; with this
method the dimensions of the problem become extremely large and the
computational time also increases when the constraints are taken into
consideration. David C. Walters et al. [6] proposed a GA method,
which is essentially a search algorithm based on the mechanics of
nature and natural genetics; the genetic operators provide
probabilities for finding optimal solutions, but it suffers from
premature convergence, is time consuming, and can fail to locate the
global solution.
Particle swarm optimization (PSO), introduced by Kennedy and Eberhart
[7], is one of the modern optimization algorithms. It was developed
through an analogy with swarms of birds and schools of fish, and has
been found to be robust in solving continuous nonlinear optimization
problems. The PSO technique can generate high-quality solutions
within a shorter calculation time and with a more stable convergence
characteristic than other optimization methods. In this study, a PSO
method for solving the ED problem is proposed. The particles explore
the d-dimensional search space with different velocities and
positions. Each individual makes its decision using its own
experience together with the other individuals' experiences [10].
The main advantages of the PSO algorithm are: the concept is simple,
the implementation is easy, it is robust in controlling the
parameters, and its computational efficiency is better compared with
mathematical algorithms and other optimization techniques.

II. PROBLEM FORMULATION
The objective of the economic load dispatch (ELD) problem is to
minimize the total fuel cost of the thermal power plants subject to
the operating constraints of the power system.
The operating cost of each generator is represented by a quadratic
function, and the total cost is given by [9]

C_t = Σ_{i=1}^{n_g} F_i(P_i)                                   (1)

The fuel cost functions of the generating units are generally
characterized by second-order polynomials as in (2):

F_i(P_i) = α_i + β_i P_i + γ_i P_i^2    $/h                    (2)

where
P_i : output power generation of the i-th unit,
α_i, β_i, γ_i : fuel cost coefficients of the i-th unit.

The following constraints are also imposed.

Power Balance Constraint:

This constraint is based on the principle of equilibrium between the
total system generation and the sum of the total system load (P_D)
and losses (P_L), which must be equal:

Σ_{i=1}^{n_g} P_i − (P_D + P_L) = 0                            (3)

Generator Constraints:

The output power of each generating unit has a lower and an upper
bound, and the unit power must lie between these bounds:

P_i,min ≤ P_i ≤ P_i,max                                        (4)

where P_i,min and P_i,max denote the minimum and maximum output power
generation of unit i, respectively.

The total transmission network loss is a function of the power
outputs and can be represented using B-coefficients:

P_L = Σ_{i=1}^{n_g} Σ_{j=1}^{n_g} P_i B_ij P_j                 (5)

where B_ij are the loss coefficients, which are assumed to be
constant.
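A short Python sketch of the cost and constraint evaluation in
equations (1)-(5). The three-unit coefficients and limits are taken
from Table I of Case 1 below; the uniform B matrix, the penalty
weight and the helper names are illustrative assumptions, not the
authors' code.

    import numpy as np

    # Case 1 coefficients (alpha, beta, gamma) and limits from Table I
    ALPHA = np.array([200.0, 180.0, 140.0])
    BETA  = np.array([7.0, 6.3, 6.8])
    GAMMA = np.array([0.008, 0.009, 0.007])
    P_MIN = np.array([10.0, 10.0, 10.0])
    P_MAX = np.array([85.0, 80.0, 70.0])
    B = np.full((3, 3), 1e-4)          # assumed uniform loss coefficients, Eq. (5)
    P_D = 100.0                        # load demand in MW

    def fuel_cost(P):
        # Total fuel cost, Eqs. (1)-(2), in $/h.
        return np.sum(ALPHA + BETA * P + GAMMA * P**2)

    def losses(P):
        # Transmission losses, Eq. (5).
        return P @ B @ P

    def fitness(P, penalty=1000.0):
        # Cost plus a penalty for violating the power balance constraint, Eq. (3).
        P = np.clip(P, P_MIN, P_MAX)   # enforce generator limits, Eq. (4)
        mismatch = abs(np.sum(P) - (P_D + losses(P)))
        return fuel_cost(P) + penalty * mismatch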

III. PARTICLE SWARM OPTIMIZATION

The PSO algorithm searches in parallel using a group of individuals.
In a d-dimensional search space, the position and velocity of
individual i are represented as the vectors
x_i = (x_i1, x_i2, ..., x_id) and v_i = (v_i1, v_i2, ..., v_id),
respectively. The best previous position of the i-th particle is
recorded and represented as pbest_i = (pbest_i1, pbest_i2, ...,
pbest_id), and gbest = (gbest_1, gbest_2, ..., gbest_d) is the
position of the best particle among all the particles in the group.
The modified velocity and position of each particle can be calculated
using the current velocity and the distances from pbest_i and gbest,
as shown in the following formulas:

v_i^(t+1) = w . v_i^t + C1 . rand() . (pbest_i − x_i^t)
            + C2 . rand() . (gbest − x_i^t)                    (6)

x_i^(t+1) = x_i^t + v_i^(t+1)                                  (7)

where
w : inertia weight parameter,
v_i^(t+1) : particle velocity at the current iteration (t+1),
v_i^t : particle velocity at iteration t,
C1, C2 : acceleration constants,
rand() : random number between 0 and 1,
x_i^(t+1) : particle position at iteration t+1,
x_i^t : particle position at iteration t.

IV. INCREASING THE CONVERGENCE RATE

A descending linear function is used for the inertia weight. The best
range over which to vary this value for convergence and for obtaining
the best possible solution is between 0.9 and 0.4. To modify the
position of each individual, it is necessary to calculate the
velocity of each individual in the next stage, which is obtained from
equation (6). Using the inertia weight in the velocity equation
enables the swarm to fly over a larger area of the search space at
the beginning of the iterations (w = 0.9), while at the end of the
iterations the search area becomes smaller (w = 0.4). By using the
inertia weight, the chance of obtaining the best solution of the
optimization problem is increased.
In this paper, the weighting function is defined as follows:

w = w_max − [ (w_max − w_min) / iter_max ] . iter              (8)

where
w : inertia weight factor,
w_max : maximum value of the weighting factor,
w_min : minimum value of the weighting factor,
iter_max : maximum number of iterations,
iter : current iteration number.


The flow chart of the Particle Swarm Optimization procedure is shown
in Figure 1. Its main steps are: initialize the particle swarm with
random velocity and position vectors; define the number of
generators, the parameter constants C1, C2 and the dimension;
calculate the fitness of each particle; update each particle's local
best and the best of the local bests as gbest; if gbest is not yet
the optimal solution, update the particle velocities and positions
using equations (6) and (7) respectively and repeat; otherwise,
output gbest as the optimal solution.

Figure 1. Flow chart of Particle Swarm Optimization for ELD.
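The following Python sketch follows the flow chart above, combining
the update rules (6)-(8) with a user-supplied fitness function such
as the one sketched in Section II. The population size, iteration
count and helper names are assumptions for illustration, not the
authors' implementation.

    import numpy as np

    def pso_eld(fitness, p_min, p_max, n_particles=100, iters=200,
                c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, seed=0):
        # Minimal PSO loop for ELD with a linearly decreasing inertia weight (Eq. 8).
        rng = np.random.default_rng(seed)
        dim = len(p_min)
        x = rng.uniform(p_min, p_max, size=(n_particles, dim))   # unit outputs in MW
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_val = np.array([fitness(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)].copy()

        for it in range(iters):
            w = w_max - (w_max - w_min) * it / iters             # Eq. (8)
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (6)
            x = np.clip(x + v, p_min, p_max)                     # Eq. (7) + generator limits
            vals = np.array([fitness(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, pbest_val.min()

    # Hypothetical usage with the Case 1 fitness sketch from Section II:
    # best_dispatch, best_cost = pso_eld(fitness, P_MIN, P_MAX)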
V. NUMERICAL EXAMPLES AND RESULTS
To verify the feasibility of the proposed PSO method, two different
power systems were tested. In these examples, the practical
constraints of the units were taken into account, and the proposed
PSO method was compared with the Lambda iteration method. From the
case studies we find that the total cost obtained by the PSO method
is lower and the computation time is considerably reduced.
Matlab 7.0 is used for the simulation analysis, and the PSO
parameters are taken as:

Population size: 100
Max inertia weight: 0.9
Min inertia weight: 0.4
Acceleration constants: 2.0

Case 1

A three-generator system with the following cost function
coefficients and transmission losses:

TABLE I. Cost Function Coefficients, Case 1
Generator No.   α       β      γ        P_min   P_max
1               200     7      0.008    10      85
2               180     6.3    0.009    10      80
3               140     6.8    0.007    10      70

The loss coefficients are taken as B_ij = 1e-4.


The output powers with and without transmission loss, for a load
power of 100 MW, are shown below,

where
TL = total power losses,
TC = total cost.



TABLE. II. Output power with loss
Generator
No.
Lambda iteration PSO
1 15.4245 15.831
2 52.6452 52.68
3 31.9355 31.584
TL 0.005478 0.02453
TC 1210.79 1210.789

TABLE. III. Output power without loss
Generator No. Lambda iteration PSO
1 15.44 15.861
2 52.617 52.652
3 31.937 31.611
TC 1210.75 1210.75


Case 2

A six-generator system with the following cost function coefficients
and transmission losses:

TABLE IV. Cost Function Coefficients, Case 2
Generator No.   α       β      γ        P_min   P_max
1               240     7      0.007    100     500
2               200     10     0.0095   50      200
3               220     8.5    0.009    80      300
4               200     11     0.009    50      150
5               220     10.5   0.008    50      200
6               120     12     0.0075   50      120

The loss coefficients are taken as:

B_ij = 10^-4 .* (B-coefficient matrix)






TABLE. V. Output power with loss
Generator
No.
Lambda iteration PSO
1 312.7130 312.034678
2 72.5253 73.4178826
3 159.8879 159.82243
4 50.00 50.00
5 54.8738 54.73
6 50.00 50.00
TC 8229.38 8228.57


Table V shows that for the six-generator system a considerable amount
of total cost is saved; we can also observe that the power is
dispatched in such a way that the cost is reduced, and the
computation time for this operation is much less than that of the
Lambda iteration method.

The output of the generator system when losses are accounted for, and
its power dispatch, are shown separately in Table VI.

TABLE. VI. Power dispatch
Generator No. Power
1 304.65
2 56.59
3 150.65
4 50
5 69.10
6 50
TC 8249.13
TL 10.9872

VI. CONCLUSION
From the case studies we observe that PSO produces a better dispatch
than the classical method; for the three-generator system the total
cost is not much different, but for the six-generator system the
results show that the total cost is reduced, and as we move to larger
generator systems the total cost and the computation time are
considerably reduced. In summary, the PSO approach has a simple
concept, is easy to implement, is robust in controlling its
parameters, and its computational efficiency is better than that of
mathematical algorithms and other optimization techniques.

REFERENCES
[1] "Economic Dispatch: Concepts, Practices and Issues", FERC Staff,
Palm Springs, California, November 13, 2005.
[2] Hadi Saadat, Power System Analysis, McGraw-Hill, 1999.
[3] Gerald F. Reid et al., "Economic Dispatch Using Quadratic
Programming", IEEE Transactions, 1972.
[4] N. Nabona et al., "Optimization of economic dispatch through
quadratic and linear programming", Proc. IEE, Vol. 120, No. 5, May
1973.
[5] Dale W. Ross et al., "Dynamic economic dispatch of generation",
IEEE Transactions on Power Apparatus and Systems, Vol. PAS-99, No. 6,
Nov/Dec 1980.
[6] David C. Walters et al., "Genetic algorithm solution of economic
dispatch with valve point loading", IEEE Transactions on Power
Systems, Vol. 8, No. 3, August 1993.
[7] J. Kennedy and R. Eberhart, "Particle swarm optimization", in
Proc. IEEE International Conference on Neural Networks (ICNN '95),
Vol. IV, Perth, Australia, 1995, pp. 1942-1948.
[8] I. J. Nagrath and D. P. Kothari, Modern Power System Analysis,
2nd Edition, Tata McGraw-Hill, 1989.
[10] H. Yoshida, K. Kawata, Y. Fukuyama, S. Takayama, and Y.
Nakanishi, "A particle swarm optimization for reactive power and
voltage control considering voltage security assessment", IEEE Trans.
Power Syst., vol. 15, pp. 1232-1239, Nov. 2000.
[11] Zwe-Lee Gaing, "Particle swarm optimization to solving economic
dispatch considering the generator constraints", IEEE Trans. Power
Systems, vol. 18, Aug. 2003.



Comparison of Reactive Routing Protocols in Mobile
Ad-hoc Networks
Gurleen Kaur Walia, Charanjit Singh
UCoE Deptt., Punjabi University, Patiala, India
shaan7_rulez@yahoo.co.in

Abstract - Mobile Ad-Hoc Network (MANET) is a collection of
mobile nodes that are dynamically and arbitrarily located in
such a manner that the interconnections between nodes are
capable of changing on a continual basis. In order to facilitate
communication within the network, a routing protocol is used to
discover routes between nodes. The primary goal of such a
routing protocol is correct and efficient route establishment
between a pair of nodes so that messages may be delivered in a
timely manner. Route construction should be done with a
minimum of overhead and bandwidth consumption. This paper
examines routing protocols for ad-hoc networks and evaluates
these protocols based on a given set of parameters. It provides an
overview of three reactive routing protocols by presenting their
characteristics and functionality, and then provides a
comparison and discussion of their respective merits and
drawbacks.
Keywords: MANET, AODV, DSR, TORA, Delay, ad-hoc

I. INTRODUCTION

Mobile Ad-hoc Network (MANET) is a wireless system that comprises
mobile nodes. It is usually referred to as a decentralized autonomous
system. MANET nodes are equipped with wireless transmitters and
receivers. The term ad hoc tends to imply that the network can take
different forms and can be stand-alone, mobile or networked. Due to
its infrastructure-less, self-configuring nature, MANET has wide
application in industrial and commercial fields involving cooperative
mobile data exchange, and as an inexpensive alternative or
enhancement to cellular-based mobile network infrastructures. MANET
has potential applications in locations where setting up
infrastructured networks is not possible. Military ad hoc networks
detect and gain as much information as possible about enemy
movements, explosions, and other phenomena of interest. Such networks
also have applications in emergency disaster relief operations after
natural hazards like hurricanes or earthquakes. Some wireless traffic
sensor networks monitor vehicle traffic on highways or in congested
parts of a city. Wireless surveillance sensor networks may be
deployed for providing security in shopping malls, parking garages,
and many other areas where direct or wired communication cannot be
established. MANET has gradually entered the wireless communication
world as a common means of human communication, which has challenged
researchers around the world to intensify their work on developing
MANETs. In such advanced communication networks, routing plays a key
role, as it is one of the major aspects of moving data through the
network. Different protocols have been proposed so far by many
researchers. This proliferation of wireless devices leads us to focus
our study on large networks where the hosts involved communicate with
each other in an ad hoc fashion.


II. OVERVIEW OF MANET

MANET is a Wireless Ad-Hoc Network technology.
Mobile nodes in the network will act as clients and servers [1].
Fig. 1 shows the decentralized MANET consisting of mobile
nodes functioning as routers along with the respective mobile
nodes.

Fig 1: A MANET Network


III. MANET CHARACTERISTICS

MANETs do not have any central authority or fixed infrastructure;
unlike traditional networks, this makes a MANET a decentralized
system. MANETs connect themselves by discovering the topology and
delivering messages themselves, which makes a MANET a
self-configuring network. Mobile nodes in a MANET are free to move
randomly. This results in frequent changes in the topology, where
alternative paths are found automatically. Nodes use different
routing mechanisms to transmit data packets to the desired nodes, so
the network exhibits a dynamic topology. A MANET usually operates
over bandwidth-constrained, variable-capacity links. This results in
high bit error rates, low bandwidth, and unstable and asymmetric
links, which lead to congestion problems. Power conservation plays a
key role in a MANET, as the nodes involved generally use exhaustible
battery/energy sources, which makes MANETs energy-constrained.


IV. FACTORS FOR CONSIDERATION WHILE
DEPLOYING A MANET ARE:

Bandwidth:
Wireless links are error-prone and also insecure, due to which we
have lower capacity and higher congestion problems in throughput;
bandwidth is therefore a key factor while deploying a MANET.

Energy efficiency of nodes:
The key goal here is to minimize overall network power consumption.

Topology changes:
This factor is considered because the topology changes with the
movement of mobile nodes, resulting in route changes which can lead
to network partitioning and, in the worst cases, to packet losses.


V. APPLICATION OF MANET

MANET applications typically include communication in battlefield
environments; such networks are referred to as tactical networks.
Monitoring of weather and earth activities and automation equipment
are examples of sensor networks in the MANET area. Emergency services
include medical services such as access to patient records at
runtime, typically in a disaster. Electronic commerce is another
example of a MANET application, which includes receiving payments
from anywhere; customer records are accessed directly from the field,
and local news, weather and road conditions are carried through
vehicular access. Enterprise networking is another example, in which
one can access a Personal Digital Assistant (PDA) from anywhere; the
networks so formed are personal area networks. These applications are
used for educational or business purposes: an educational application
can use virtual conference calls to deliver lectures or hold
meetings, and the network also supports multiuser games and robotic
pets. Using such a network, a call can be forwarded anywhere, and the
actual workspace can be transmitted to the current location. The most
common applications of MANET are inter-vehicle communication for
Intelligent Transportation Systems (ITS), involving accident
information on a highway, collision avoidance at a crossroad and
connection to the Internet. MANETs have further improved the
communication infrastructure of rescue teams operating around the
clock in disaster-hit areas, and the modern military has benefited
greatly from their advancement on the battlefield.


VI. OVERVIEW OF ROUTING PROTOCOLS

A. Proactive (table driven) Routing Protocols:
Proactive routing protocols maintain the routing
information of all the participating nodes and update their
routing information frequently irrespective of the routing
requests. Proactive routing protocols transmit control
messages to all the nodes and update their routing information
even if there is no actual routing request. This makes
proactive routing protocols bandwidth deficient, though the
routing itself is simple having this prior updated routing
information. The major drawback of proactive protocols is the
heavy load created from the need to flood the network with
control messages.

B. Reactive (On demand) Protocols:
Reactive protocols establish a route only when it is required; unlike
the proactive protocols, they do not update their routing information
frequently and do not maintain network topology information. Reactive
protocols use a connection establishment process for communication.

C. Ad Hoc On-Demand Distance Vector Protocol (AODV)
[2]:
AODV is a reactive routing protocol that minimizes the
number of broadcasts by creating routes on demand. To find a
path to the destination, a route request packet (RREQ) is
broadcasted by the source till it reaches an intermediate node
that has recent route information about the destination or till it
reaches the destination. Features of this protocol include loop
freedom and that link breakages cause immediate notifications
to be sent to the affected set of nodes, but only that set.
Additionally, AODV has support for multicast routing and
avoids the Bellman Ford "counting to infinity" problem. The
use of destination sequence numbers guarantees that a route is
"fresh". The algorithm uses hello messages (a special RREP)
that are broadcasted periodically to the immediate neighbors.
These hello messages are local advertisements for the
continued presence of the node and neighbors using routes
through the broadcasting node will continue to mark the
routes as valid. If hello messages stop coming from a
particular node, the neighbor can assume that the node has
moved away and mark that link to the node as broken and
notify the affected set of nodes by sending a link failure
notification (a special RREP) to that set of nodes.

Benefits and Limitations of AODV:
The benefits of AODV protocol are that it favors the least
congested route instead of the shortest route and it also

supports both unicast and multicast packet transmissions even
for nodes in constant movement. It also responds very quickly
to the topological changes that affects the active routes.
AODV does not put any additional overheads on data packets
as it does not make use of source routing. The limitation of
AODV protocol is that it expects/requires that the nodes in the
broadcast medium can detect each others broadcasts. It is also
possible that a valid route is expired and the determination of
a reasonable expiry time is difficult. The reason behind this is
that the nodes are mobile and their sending rates may differ
widely and can change dynamically from node to node. In
addition, as the size of network grows, various performance
metrics start decreasing.


D. Dynamic Source Routing (DSR) [3]:
DSR also belongs to the class of reactive protocols and
allows nodes to dynamically discover a route across multiple
network hops to any destination. Source routing means that
each packet in its header carries the complete ordered list of
nodes through which the packet must pass. DSR uses no
periodic routing messages (e.g. no router advertisements),
thereby reducing network bandwidth overhead, conserving
battery power and avoiding large routing updates throughout
the ad-hoc network. Instead DSR relies on support from the
MAC layer (the MAC layer should inform the routing
protocol about link failures). The two basic modes of
operation in DSR are route discovery and route maintenance.
DSR uses the key advantage of source routing. Intermediate
nodes do not need to maintain up-to-date routing information
in order to route the packets they forward. There is also no
need for periodic routing advertisement messages, which will
lead to reduce network bandwidth overhead, particularly
during periods when little or no significant host movement is
taking place. Battery power is also conserved on the mobile
hosts, both by not sending the advertisements and by not
needing to receive them; a host could go down to sleep
instead.

Benefits and Limitations of DSR:
One of the main benefits of the DSR protocol is that there is no need
to keep a routing table in order to route a given data packet, as the
entire route is contained in the packet header. The limitations of
the DSR protocol are that it does not scale to large networks and
that it requires significantly more processing resources than most
other protocols. Basically, in order to obtain the routing
information, each node must spend a lot of time processing any
control data it receives, even if it is not the intended recipient.

E. Temporally Ordered Routing Algorithm (TORA) [4]:
TORA is a reactive routing protocol that establishes route
quickly. It is a highly adaptive distributed routing algorithm,
which has been tailored for operation in a mobile networking
environment. It is a type of link-reversal algorithm. It is highly
adaptive and well suited for a dynamic mobile network with
limited bandwidth. TORA creates a DAG with the destination
as the head of the graph. Each node keeps a reference value
and the height of reference destination. Query packets are sent
out until they reach the destination or a node with a route to
the destination. This node sends an update to its neighbors
listing its height for that destination. TORA is designed to
minimize the communication overhead associated with
adapting to network topological changes. The scope of
TORA's control messaging is typically localized to a very
small set of nodes near a topological change.

Benefits and Limitations of TORA:
One of the benefits of TORA is that the multiple routes
between any source destination pair are supported by this
protocol. Therefore, failure or removal of any of the nodes is
quickly resolved without source intervention by switching to
an alternate route. TORA is also not free from limitations.
One of them is that it depends on synchronized clocks among
nodes in the ad-hoc network. The dependence of this protocol
on intermediate lower layers for certain functionality
presumes that the link status sensing, neighbor discovery, in
order packet delivery and address resolution are all readily
available. The solution is to run the Internet MANET
Encapsulation Protocol at the layer immediately below
TORA. This will make the overhead for this protocol difficult
to separate from that imposed by the lower layer.

VII. SIMULATION DESIGN
OPNET (Optimized Network Engineering Tool) Modeler 14.5
is used for the design and implementation of this work.
OPNET is a network simulator that provides virtual network
communication environment. It is prominent for the research
studies, network modelling and engineering, R & D Operation
and performance analysis.

Fig. 2: MANET Scenario with 10 nodes

Fig. 2 shows a sample network created with 10 nodes, one
static FTP server, application configuration for the network in
which FTP (File Transfer Protocol has been chosen) as an
application. It depicts a network with 10 mobile nodes whose
behavior has to be analyzed when nodes move in the network
with respect to time to determine the effecting features of each
protocol. In order to evaluate the performance of a generic
scenario in ad-hoc networking, when analyzing mobile
networks, modeling the movement of the set of nodes forming
a MANET is essential. The Random Waypoint model has
been selected to be used in all simulations presented in this
document.

VIII. PERFORMANCE METRICS
The following metrics have been considered to make the
comparative study of these routing protocols through
simulation.
1) End-to-end delay: This metric represents the average end-to-end
delay and indicates how long it took for a packet to travel from the
source to the application layer of the destination. It is measured in
seconds; a small computation sketch is given after this list.
2) Media Access Delay: The time a node takes to access
media for starting the packet transmission is called as media
access delay. The delay is recorded for each packet when it is
sent to the physical layer for the first time.
3) Retransmission Attempts: This metric represents the
number of retransmissions due to some failure in the network.
It is measured in terms of packets.
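As a small illustration of how the first metric can be computed from
a simulation trace, the following Python sketch averages per-packet
end-to-end delays over the surviving packets. The trace format and
field layout are assumptions for illustration, not the OPNET output
format.

    def average_end_to_end_delay(packets):
        # packets: iterable of (send_time_s, receive_time_s) pairs for packets
        # that reached the destination's application layer; None marks a drop.
        delays = [rx - tx for tx, rx in packets if rx is not None]
        return sum(delays) / len(delays) if delays else 0.0

    # Example with made-up timestamps (seconds)
    trace = [(0.10, 0.118), (0.20, 0.223), (0.30, None), (0.40, 0.412)]
    print(average_end_to_end_delay(trace))   # averaged over surviving packets only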

IX. SIMULATION RESULTS

Two important performances metrics are evaluated in all
three routing protocols (AODV, DSR and TORA).

A. End-to-end delay

The end-to-end delay is simulated here for ten mobile nodes with the
different routing protocols, as shown in Fig. 3. The graph shows that
AODV has a rise in end-to-end delay at the start of the simulation
because of beacon broadcasts, but with the passage of time it becomes
stable and the delay is reduced. Similar behaviour from DSR reflects
its cache-table addressing. TORA behaved quite stably because it is
designed to recover a route quickly, and that is why it sustains a
more stable, average delay.
The end-to-end delay varies from 0.020 to 0.004 for AODV after the
routing information is established. DSR started with the highest
delay of 0.023, but after 1 minute of simulation it drops to a
minimal delay. Once AODV and DSR are initialized, they easily manage
their data traffic and reach the lower limits of end-to-end delay.



Fig 3. End-to-end delay

B. Media Access Delay

TORA is more stable because of its linear behaviour, as the
simulation results in Fig. 4 show. AODV took 30 seconds and DSR took
50 seconds of simulation time before transmitting on the network.
Gradually, both protocols approach TORA in media access delay. All of
the protocols stay within acceptable media access delay limits
because the number of nodes is small.


Fig 4. Media Access Delay



C. Retransmission Attempts

The retransmission attempts for AODV and DSR are far fewer than for
TORA, as shown in Fig. 5.



Fig 5. Retransmission Attempts


CONCLUSION

In this paper, the simulated model scenario compared the
performance of AODV, DSR and TORA, the three prominent
on-demand routing protocols for ad-hoc networks. A
comparison of these three routing protocols is provided,
highlighting their features, differences, and characteristics.
While no single protocol or class of protocol is the best for all
scenarios, each protocol has definite advantages and
disadvantages and is well suited for certain situations. Even
though AODV, DSR and TORA share on-demand behavior,
much of the routing mechanisms are different. In particular,
AODV uses routing tables, one route per destination, and
destination sequence numbers, a mechanism to prevent loops
and to determine the freshness of routes. On the other hand, DSR
uses source routing and route caches and does not depend on
any periodic or timer based activity. DSR exploits caching
aggressively and maintains multiple routes per destination.
TORA provides multiple routes to a destination, establishing a
route quickly and minimizing a communication overhead by
localizing algorithmic reaction to topological changes. The
field of ad-hoc mobile networks is rapidly growing. There are
still many challenges that need more attention of researchers.
It is likely that such networks will see widespread use within
the next few years.

REFERENCES

[1] S. Corson and J. Macker, "Mobile Ad-Hoc Networking (MANET):
Routing Protocol Performance Issues and Evaluation Considerations",
IETF NWG, 1999. [Online]. Available:
https://www.dsta.gov.sg/index.php/DSTA-2006-Chapter-6/ [Accessed: Mar
02, 2009]
[2] C. E. Perkins et al., "Ad-hoc On-demand Distance Vector (AODV)
Routing", Internet Draft, IETF Network Working Group, July 2003.
[3] David Johnson, David Maltz and Yih-Chun Hu, "The Dynamic Source
Routing Protocol for Mobile Ad-hoc Networks", Internet Draft,
draft-ietf-manet-dsr-10.txt, July 2004.
[4] Vincent D. Park and M. Scott Corson, "Temporally-Ordered Routing
Algorithm (TORA) Version 1: Functional Specification", Internet
Draft, draft-ietf-manet-tora-spec-04.txt, July 2001.
[5] J. Broch, D. A. Maltz, D. B. Johnson, Y. Hu and J. Jetcheva, "A
Performance Comparison of Multi-hop Wireless Ad Hoc Network Routing
Protocols", in Proc. MobiCom, 1998, pp. 85-97.
[6] J. N. Al-Karaki and A. E. Kamal, "Routing techniques in wireless
sensor networks: A survey", IEEE Wireless Commun. Mag., vol. 11,
no. 6, pp. 6-28, Dec. 2004.
[7] Sunil Taneja and Ashwani Kush, "A Survey of Routing Protocols in
Mobile Ad-hoc Networks", International Journal of Innovation,
Management and Technology, Vol. 1, No. 3, August 2010.

























Implementation of Hand-Held Opto Electronic System for the Estimation of Corrosion of
Metals
M. Narendra Babu and A. Balaji Ganesh
Opto-Electronic Sensor Research Laboratory, TIFAC-CORE, Velammal Engineering College, Chennai-600 066, INDIA
E-mail: naren.embedded@gmail.com and abganesh@velammal.org


Abstract - The paper presents an opto-electronic system which is
based on the principle of reflection and scattering of light on a
metal surface. The sensor system consists of a light source and a
pair of photo detectors which are kept at 45° to each other. This
angle enables the measurement of changes in both the scattered and
the reflected light intensities with respect to changes in the
corrosion of the metal surface. The sensor also consists of
components such as signal conditioning, a computational unit and a
display. The computation is done within a PIC18F452 microcontroller
and the calculated corrosion factor is directly displayed on the
display device. The sensor output is validated against the well-known
weight loss measurement technique and the results are tabulated. The
sensor system is simple, cost effective, field deployable and battery
operated, and offers non-destructive corrosion information.
Keywords - Hand-held opto electronic system; in-situ measurement;
corrosion; SS 304; SS 316
I. INTRODUCTION
Corrosion is a known phenomenon which affects almost
all types of metals. Metals are susceptible to corrosion when
they are continuously exposed to the atmosphere, and the corrosion
forms as a thin film of oxides on the metal surface. The importance
of metals is readily understood from their wide usage in engineering
industries such as boilers, pipelines, etc. Though a significant
number of contributions have been made by researchers towards
mitigating and monitoring corrosion, merely a few address onsite
measurement of corrosion [1-3]. Techniques such as electrochemical
analysis, galvanostatic pulse transient and resistance method
analysis offer the corrosion rate only in laboratory environments [4].
The absence of onsite measurement and the long measurement
duration are the main limitations of existing methods. The
ultrasound method is used to measure corrosion from the metal
surface, but scanning a large surface with this method takes an
enormous amount of time. Currently, only an insignificant number
of real-time sensors have been developed to offer real-time values
[5]. The corrosion estimated by the above methods employs a
complex experimental setup, and the measurement is limited to
laboratory environments. A. Balaji Ganesh et al. have presented an
optical fiber sensor system to estimate the corrosion of metals [7].
In the present study a hand-held optoelectronic sensor is
implemented for the estimation of corrosion, and the proposed
system offers continuous monitoring of corrosion of metal surfaces.
The opto-electronic sensor system consists of a light source and a
pair of matched photo detectors which are placed at different angles
towards the metal surface to capture both the scattered and the
reflected light from the metal surface. The voltage signals from the
photo detectors are processed in the microcontroller and the
corrosion factor is directly displayed.
II. PRINCIPLE OF OPERATION

It is well proven that rough surfaces scatter light in all
directions, whereas smooth surfaces reflect more than they scatter
[6, 7, 8]. Determining both the reflected and the scattered light
intensities can therefore be very useful in analyzing the metal
surface characteristics, and it also enables real-time mapping of
metal surfaces. Based on the information obtained from the
literature, the angle and placement of the light source and the
photodiodes are fixed as shown in Figure 1. The fixture is
simultaneously tested with paper and glass surfaces, and the
understanding is subsequently confirmed by the results obtained.

Figure 1 Fixture of source and detectors

The corrosion factor can be calculated as a ratio of the reflected and
scattered light intensities, which are measured at the two different
angles. The corrosion factor (R) is calculated by using equation (1)
on a relative scale from 0 to 100.

R = 2 K Js / (Jr + Js)        (1)

Where K is the scaling factor, taken as 100, Js is the intensity of the
scattered light and Jr is the reflected light intensity. The measured
corrosion factor correlates well with the estimated weight loss
measurement data.
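As an illustration of equation (1) (using the reconstruction given above), the corrosion factor can be evaluated from the two detector readings with a few lines of code. The sketch below is illustrative only; the function name and the example intensity values are assumptions and do not represent the authors' firmware.

def corrosion_factor(j_reflected, j_scattered, k=100.0):
    # R = 2*K*Js / (Jr + Js), expressed on a relative 0-100 scale
    return (2.0 * k * j_scattered) / (j_reflected + j_scattered)

# A smooth surface reflects far more than it scatters, giving a low R;
# a heavily corroded (rough) surface scatters about as much as it reflects.
print(corrosion_factor(j_reflected=0.90, j_scattered=0.05))   # about 10.5
print(corrosion_factor(j_reflected=0.50, j_scattered=0.45))   # about 94.7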

III. METHODOLOGY
A schematic representation of implemented system is
shown in Figure 2.


Figure 2 A Schematic of the experimental setup
The optoelectronic sensor system consists of a light
source (red LED, part no. LO712500A1, wavelength 660 nm, from
EPIGAP) that illuminates a small portion of the metal surface, and a
pair of matched photo detectors (EPD-660-5/0.5 from EPIGAP,
sensitive to optical radiation of typical wavelength 660 nm): one is
placed at the angle of incidence to capture the reflected light and the
other is placed normal to the metal surface to capture the scattered
light. The output voltages of the photo detectors are found to be
insufficient for a proper interpretation by the Analog-to-Digital
Converter (ADC) of the PIC18F452 microcontroller, so the photo
detector signals are amplified using the MCP602, a dual operational
amplifier from Microchip. The digitized signals are processed and
the results are computed as a corrosion factor at each location on the
metal surface. The display device provides the visual information of
the corrosion factor. The developed system is shown in Figure 3.



Figure 3 Developed Sensor System
IV. SENSOR VALIDATION
The metal test samples, SS 304 and SS 316L stainless steel
plates of size 100 mm x 30 mm, are selected. The surfaces of the
samples are finely polished using silicon carbide papers to a
minimum roughness, as shown in Figure 4, and the initial weight
and corrosion factor are measured and noted.



Figure 4 Stainless steel samples plate

Test solutions prepared from diluted H2SO4 of different
concentrations (0.5 M, 1 M, 2 M, 3 M and 5 M) are sprayed
uniformly to prepare samples with different levels of corrosion.
Time periods are selected uniformly and the changes are noted
accordingly.

The results of the sensor system are validated through the
well-known and universally accepted weight loss method. The
corrosion rate is measured by using the weight loss method: the loss
in weight of each sample is determined to estimate the corrosion
rate using equation (2).

CR = 534 W / (D A T)        (2)

Where W is the weight loss (mg), D is the density of the metal
(g/cm3), A is the area of the specimen (cm2) and T is the time of
exposure (days) [5].
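A hedged, worked example of equation (2) is given below; the sample values (weight loss, density, area and exposure time) are hypothetical and only show how the quantities combine.

def corrosion_rate(weight_loss_mg, density_g_cm3, area_cm2, exposure_days):
    # CR = 534*W / (D*A*T), with W in mg, D in g/cm3, A in cm2, T in days
    return (534.0 * weight_loss_mg) / (density_g_cm3 * area_cm2 * exposure_days)

# Hypothetical coupon: 2 mg lost from a 30 cm2 plate (density ~7.9 g/cm3) in 21 days
print(corrosion_rate(2.0, 7.9, 30.0, 21))   # about 0.21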

V. RESULTS

The sensor system is applied and eventually validated
through the measurement of the corrosion factor of stainless steel
samples (SS 304 and SS 316L). The samples are corroded
artificially using various concentrations of sulphuric acid and the
results are presented in Table 1.










Table 1. Comparison of results obtained

Sample   Days   Corrosion factor        Corrosion rate
                SS 304    SS 316L       SS 304      SS 316L
1        1      39.65     41.02         0.000001    0.000001
2        3      40.02     42.39         0.000003    0.000003
3        6      42.61     45.27         0.000006    0.000007
4        9      43.39     47.82         0.000006    0.000008
5        12     44.40     48.23         0.000007    0.000008
6        15     46.93     49.06         0.000007    0.000008
7        18     47.05     50.61         0.000007    0.000009
8        21     48.64     52.03         0.000008    0.000009
9        24     50.14     54.72         0.000009    0.000009
10       27     51.66     55.16         0.000009    0.000009

The corrosion factor measured using the optoelectronic
sensor is correlated with the well-known weight loss measurement
technique and the results are tabulated. Figure 5 presents the
corrosion factor of the stainless steel 304 sample surfaces corroded
in various concentrations of sulphuric acid.


Figure 5 Corrosion Factor for SS 304

Figure 6 presents the corrosion factor of the stainless steel
316L sample corroded in various concentrations of sulphuric acid.

Figure 6 Corrosion factor for SS 316L

From Figures 5 and 6, it is observed that the corrosion of the
metal increases with the concentration of the corroding agent.
VI. CONCLUSIONS
In this study, a simple hand-held sensor system is
explained and validated for the measurement of corrosion of metals.
The obtained results are used to strengthen the understanding of the
corrosion phenomenon. The PIC18F452 microcontroller is
effectively utilized for the computational analysis. Furthermore, the
optical light characteristics are being studied to understand the
direct correlation with unknown measurand values.

ACKNOWLEDGMENT
The authors gratefully acknowledge the financial support of
DST through the research centre TIFAC-CORE in Pervasive
Computing Technologies at Velammal Engineering College,
Chennai-66.
REFERENCES
[1] Cramer, S. D., Covino, B. S. Jr. (Eds.), 2003. Corrosion: Fundamentals, Testing and Protection. ASM Handbook, Volume 13A. ASM International, New York.
[2] Degueldre, C., Prey, S. O., Francioni, W., 1996. An in-line diffuse reflection spectroscopy study of the oxidation of stainless steel under boiling water reactor conditions. Corr. Sci., 38: 1763-1782.
[3] Newton, C. J., Sykes, J. M.: A galvanostatic pulse technique for investigation of steel corrosion in concrete, Corros. Sci., 1988, 28, (11), pp. 1051-1073.
[4] Gowers, K. R., Millard, S. G.: Electrochemical techniques for corrosion assessment of reinforced concrete structures, Proc. Inst. Civil Engr. Structs. Bldg., May 1999, pp. 129-137.
[5] Giakos, G. C., Fraiwan, L., Patnekar, N., Sumrain, S., Mertzios, G. B., Periyathamby, S.: A sensitive optical polarimetric imaging technique for surface defects detection of aircraft turbine engines, IEEE Trans. Instrum. Meas., 2004, 53, (1), pp. 216-222.
[6] M. Paulvanna Nayaki, A. P. Kabilan, Real-time corrosion mapping of steel surfaces using an optoelectronic instrument based on light wave scattering, IET Sci. Meas. Technol., 2008, Vol. 2, pp. 269-274.
[7] A. Balaji Ganesh, T. K. Radha Krishnan, Fiber-optical sensor for the estimation of microbial corrosion of metals, Optik, Elsevier (2009), Vol. 120, pp. 479-483.
[8] M. Paulvanna Nayaki, A. P. Kabilan, Corrosion estimation of stainless steel in nitric acid by an optoelectronic instrument based on diffuse light scattering pattern measurement, IEEE Sensors Journal, Vol. 10, pp. 1658-1665.
[9] C. Andrade, I. Martinez, Calibration by gravimetric losses of electrochemical corrosion rate measurement using modulated confinement of the current, Materials and Structures, vol. 8, pp. 833-841.
Area and Routing Optimization for Network On Chip Architecture

Denathayalan R
Department of ECE
Anna University of Technology, Coimbatore
Coimbatore, India
Deena03ec13_cit@yahoo.com
Thiruvenkadam K
Department of ECE
Anna University of Technology, Coimbatore
Coimbatore, India
thiruvlsi@gmail.com


Abstract This paper presents the implementation of a high-speed,
traffic-free, energy-efficient and compact router for Network-on-Chip
systems, with minimized area and power consumption, for multi-core
embedded processors. The methodology used to implement the router
for processors in the NoC is to combine the Network Interface (NI)
with the Network Elements (NE). The router contains a FIFO buffer,
FIFO control logic, a configurable switch, a register demultiplexer and
a scheduler. Different kinds of routing and switching algorithms are
experimented with to optimize the area of the router, which in turn
reduces traffic and latency. The experiments are conducted in a
multi-core processor environment using the Altera Cyclone II
architecture and the QUARTUS 9.0 software development tool.

Index Terms Network-on-chip (NoC), field-programmable gate
arrays (FPGAs), network interface (NI), network element (NE).
I. INTRODUCTION
There has been an increase in computation requirements for
embedded systems due to the increasing complexity of new
communication and multimedia standards. This has fostered the
development of high-performance embedded platforms that can
handle the computational requirements of recent complex
algorithms, which cannot be executed on traditional embedded
mono-processor architectures. In addition, the continuous time-to-
market pressure for consumer embedded devices has made it
impossible for a design group to perform a complete redesign each
time a new product needs to be developed. Due to all these
requirements, Multi-Processor System-on-Chip (MPSoC)
architectures have become a very attractive solution for the new
consumer multimedia embedded market, as in [6]. As a matter of
fact, some platforms from the major semiconductor companies are
already available today, exemplifying these paradigms in
heterogeneous platforms [7].

Although MPSoCs promise to significantly improve the
processing capabilities and versatility of embedded systems,
one major problem in their current and future design is the
effectiveness of the interconnection mechanisms between
the internal components [1]-[2], as the number of
components grows with each new technological node. Bus-
based designs are not able to cope with the heterogeneous
and demanding communication requirements of MPSoCs.
Moreover, as the semiconductor industry reaches deep sub-
micron technologies, power density and process variations
become critical design concerns for embedded systems as
well; thus, predictability in the design of on-chip
interconnects is becoming as important as the provided
bandwidth [3]-[5]. Hence, new paradigms and
methodologies that can design power-effective and reliable
interconnects for MPSoCs are a must nowadays.
Networks-on-Chip (NoCs) have been suggested as a
promising solution to the aforementioned scalability
problem of forthcoming MPSoCs. NoCs build on top of the
latest evolutions of bus architectures in terms of advanced
protocols and topology design, and, by bringing packet-
based communication paradigms to the on-chip domain,
they address many of the upcoming issues of interconnect
fabric design better than buses [9]. For example, wire
lengths can be controlled by matching network topology
with physical constraints; bandwidth can be boosted simply
by increasing the number of links and switches.
Furthermore, compared to irregular, bridge based
assemblies of clusters of processing elements, NoCs also
help in tackling design complexity and verification issues
[8].
Using NoCs the interconnect structure and wiring
complexity can be controlled well. When the interconnect is
structured, the number of timing violations that occur during
the physical design (floor planning and wire routing) phase
are minimal. Such design predictability is critical for today's
MPSoCs to achieve timing closure. It leads to faster design
cycle, reduction in the number of design re-spins and faster
time-to-market [11]-[12]. As the wire delay as a fraction of
gate delay is increasing with each technological generation,
having shorter wires is even more important for future
MPSoCs. Early works on NoC topology design assumed
that using regular topologies, such as meshes, like those that
have been used in macro-networks, would lead to regular
and predictable layouts. While this may be true for designs
with homogeneous processing cores and memories, it is not
true for most MPSoCs as they are typically composed of
heterogeneous cores and regular topologies result in poor
performance, with large power and area overhead. This is
due to the fact that the core sizes of the MPSoC are highly
non-uniform and the floorplan of the design does not match
the regular, tile-based floorplan of standard topologies.
Moreover, for most state-of the- art MPSoCs the system is
designed with static (or semi-static) mapping of tasks to
processors and hardware cores, and hence the
communication traffic characteristics of the MPSoC can be
obtained statically [13]-[15]. Thus, an application-specific
NoC with a custom topology, which satisfies the design
objectives and constraints, is critical to have efficient on-
chip interconnects for MPSoCs.
II. BACKGROUND AND MOTIVATION

Networks-On-Chip (NoC) are a challenging research
topic, providing a scalable solution for multiprocessors- on-
chip (MPoC). In embedded reconfigurable systems, NoCs
provide a flexible communication infrastructure, in which
links interconnecting processor/DSP cores, memories, and
other intellectual property (IP) components can be
reconfigured for a certain embedded computing application.
NoC-based MPoCs provide a scalable communication
infrastructure compared with bus-based platforms, which
have limited bandwidth capacity. NoCs combine
performance with design modularity, allowing the
integration of many design elements on single chip die. In
higher level protocols (e.g., cache coherence) of an
interconnected multiprocessor system, multicast messages
can be tied to message types such as request, response, send,
receive, completion, etc. [10]. Although the underlying network is
free from routing deadlocks, message-dependent deadlocks can still
occur because of the dependencies between those different message
types. The message dependencies occur at network endpoints, i.e.,
on the injection and reception resources, and may prevent messages
from sinking at their target nodes.
NoC area is usually obtained from the floorplan after
implementing the network. In earlier work, NoC area was abstracted
by the area of its routers and the number of global links, or
expressed as the summation of the areas of the output buffers, the
input buffers, the arbitration logic, and the crossbar. Analytical area
models have been presented for the routers and the links by
summing up the silicon area starting from the gate level, and
different analytical area models have been proposed for different
communication infrastructures, namely NoC, shared bus, segmented
bus, and peer-to-peer. Contention-free delay models for wormhole
networks have also been extended to take contention into account
by modeling the delay using an M/G/1 queuing system, and both
contention and contention-free delay models have been presented
for the master/slave handshaking process.
III. ROUTER ARCHITECTURE
There is a routing engine (RE) module, which consists of
combination of a router hardware logic (RHL) unit and a
routing lookup table (LUT) unit. The combination is aimed
at supporting a runtime link interconnect configuration. If
the RE units identify a header flit in the output of a FIFO
buffer, then the RHL unit find a routing direction based on
destination address stated in the header flit and current
address (location) of the router. A routing engine consisting
of a routing table and a router hardware logic is allocated at
each incoming port to support routing parallelism (up to five
simultaneous crossbar connections). The router hardware
logic, in which a deadlock-free static or adaptive routing
algorithm is implemented, is an exchangeable module. The
routing algorithm is minimal, deadlock-free by
implementing a turn model (without virtual channels), and
can be applied for routing unicast and multicast packet
headers.
A. Routing Algorithms
Communication performance of a NoC depends heavily
on the routing algorithm used. A large number of distributed
routing algorithms for NoC have been proposed in literature.
In this section we consider only turn model based routing
algorithms which are used in mesh topology NoC. In this
model certain turns are restricted for communication
depending upon the rules used. Most important feature to be
considered in a routing algorithm is deadlock freedom. All
turn model based routing algorithms are deadlock free.
XY Routing Algorithm: It is one of the simplest and most
commonly used routing algorithms in NoC. It is a
static, deterministic and deadlock-free routing algorithm.
According to this algorithm, a packet must always be routed
along horizontal or X axis of mesh until it reaches the same
column as that of destination. Then it should be routed
along vertical or Y axis and towards the location of
destination resource.
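As an illustration of the deterministic behaviour just described, the sketch below routes a packet on a mesh by exhausting the X offset before the Y offset; the coordinate convention and the port names are assumptions made only for this example.

def xy_route(src, dst):
    # Return the list of output ports a packet takes from src to dst (both (x, y) tuples)
    x, y = src
    dx, dy = dst
    hops = []
    while x != dx:                          # route along the X axis first
        hops.append('EAST' if dx > x else 'WEST')
        x += 1 if dx > x else -1
    while y != dy:                          # then along the Y axis
        hops.append('NORTH' if dy > y else 'SOUTH')
        y += 1 if dy > y else -1
    return hops

print(xy_route((0, 0), (2, 1)))             # ['EAST', 'EAST', 'NORTH']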
Odd Even Routing Algorithm: It is a partially adaptive
routing algorithm. It restricts East-North or East-South turn
at any node located in an even column of mesh. Similarly in
any odd column, it restricts the packets to take North-West
or South-West turns.
West First Routing Algorithm: It is also a partially
adaptive routing algorithm. It restricts South-West or North-
West turn at any node in the mesh network. West First
algorithm restricts at least half of the source-destination
communications to one minimal path, while rest of the pairs
can communicate with full adaptivity.
Negative First Routing Algorithm : It is a partially
adaptive routing algorithm. It restricts North-West or East-
South turn at any node in the mesh network. It means that if
a communication requires movement of a packet towards
any negative axis, horizontal or vertical, along with any
other direction, then the packet should be routed first
towards that negative axis direction and in the end towards
the other.
North Last Routing Algorithm : It is another partially
adaptive routing algorithm. It restricts North-West or North-
East turn at any node in the mesh network. It means that if a
communication requires movement of a packet towards
north along with any other direction, then the packet should
be routed first towards the other direction and in the end
towards north.
Source Routing Algorithm : In source routing the
information about the whole path from the source to the
destination is pre-computed and provided in packet header
as opposed to distributed routing, where packet header
contains destination address only and the path is computed
dynamically by the participation of routers on the path. With
source routing, all routing decisions are made inside the
source core before injecting any packet in the network. For
this purpose, each source contains lists or tables that contain
complete route information to reach all other resources in
the network. Instead of storing tables in source, it is also
possible to add extra logic or software in the source
resources that implements any adaptive routing algorithm
and dynamically computes paths for source routing.
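A minimal sketch of the source-routing idea follows: every source pre-computes the complete port sequence to every destination (here using a simple XY path as one possible choice) and stores it in a table whose entry would be copied into the packet header. The mesh size and the single-letter port encoding are assumptions for illustration only.

def xy_path(src, dst):
    # One possible pre-computed path: X offset first, then Y offset
    path = ['E' if dst[0] > src[0] else 'W'] * abs(dst[0] - src[0])
    path += ['N' if dst[1] > src[1] else 'S'] * abs(dst[1] - src[1])
    return path

def build_route_table(src, mesh_w, mesh_h):
    # Table kept at the source core: destination coordinate -> complete route
    return {(x, y): xy_path(src, (x, y))
            for x in range(mesh_w) for y in range(mesh_h) if (x, y) != src}

table = build_route_table(src=(0, 0), mesh_w=3, mesh_h=3)
print(table[(2, 1)])                        # ['E', 'E', 'N']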
The deadlocks that can occur in NoCs can be
broadly categorized into two classes: routing-dependent
deadlocks and message-dependent deadlocks. Routing
dependent deadlocks occur when there is a cyclic
dependency of resources created by the packets on the
various paths in the network. Message-dependent deadlocks
occur when interactions and dependencies are created
between different message types (e.g., requests and
responses) at network endpoints, when they share resources
in the network. Even when the underlying network is
designed to be free from routing dependent deadlocks, the
message-level deadlocks can block the network indefinitely,
thereby affecting the proper system operation. For proper
system operation, it is critical to remove both routing and
message-dependent deadlocks in the network. It is also
important to achieve deadlock freedom with minimum NoC
area and power overhead.

Fig. 1. Conventional NoC architecture

B. Routing Area Minimization
Field programmable gate arrays (FPGAs) typically
connect look-up tables (LUTs) through a two-level routing
hierarchy. First, local routing networks are used to connect
LUTs into logic clusters. Then, global routing networks are
used to connect logic clusters into FPGAs. Since routing
networks usually consume a vast majority of the FPGA area, it
is important to increase their flexibility while minimizing
their area.


Fig. 2. Application Specific NoC architecture
In the generic NoC, a router can have up to four local
connections with neighbor cores. Therefore, neighbors can
exchange data through a single router, reducing
communication latency. Moreover, generic routers can be
customized according to its bandwidth requirements
reducing the area while keeping the average communication
delay. The careful customization of the NoC structure with
the most appropriate number and type of routers and the
proper number of connections and bandwidth will lead us to
a communication structure with less area and lower average
communication latency than the standard NoC (SNoC)
structure, where there is one router associated with each
core and routers have only one local port.
IV. MODELING AND SIMULATION
Mathematical modeling and computer simulations are
indispensable tools in analyzing and designing networking
and communication system for complex process. The
simulation of NoC system with Network Interface and
Network Element are performed and resource consumption
are calculated and optimized for NoC systems. All the
simulations are done using QUARTUS 9.0 SOPC builder.
A. Design of Network Interface
An NI is needed to connect each core to the NoC. NIs
convert transaction requests/responses into packets and vice
versa. Packets are then split into a sequence of Flow control
unITS (FLITS) before transmission, to decrease the physical
wire parallelism requirements. NIs are associated in NoCs to
system masters and system slaves. Many current NoC
solutions leverage static source routing, which means that
dedicated NI Look-Up Tables (LUTs) specify the path that
packets will follow in the network to reach their final
destination. This type of routing minimizes the complexity
of the routing logic in the NoC. As an alternative, routing
can be performed within the topology itself, normally in an
adaptive manner; however, performance advantages, in-
order delivery and deadlock /livelock freedom are still
issues to be studied in the latter case.
In general, two different clock signals can be attached to
NIs: the first one drives the NI front-end, the side to which
the external core is attached, and the second one drives the
NI back-end, the internal NoC side. These clocks can, in
general, be independent. This arrangement enables the NoC
to run at a different (and potentially faster) clock than the
attached cores, which is crucial to keep transaction latency
low. The library function for the Network Interface is developed
using the Quartus tool, with the flexibility of measuring resource
usage and of enabling or disabling the send and receive links.
B. Design of Network Element
In each incoming port, there is a routing engine (RE)
module, which consists of a combination of a router hardware
logic (RHL) unit and a routing lookup table (LUT) unit. The
combination is aimed at supporting a runtime link
interconnect configuration. If the RE units identify a header
flit in the output of a FIFO buffer, then the RHL unit will
find a routing direction based on destination address stated
in the header flit and current address (location) of the router.
A routing direction slot in the LUT unit is then assigned and
indexed based on its ID-tag. The illustration of the routing
direction slots of a LUT unit is presented. In the next time
periods, when the RE units identify payload flits having the
same ID-tag number as a previously forwarded header flit,
then the routing direction will be looked up directly from
the LUT unit. Subsequent flits will then be forwarded in
accordance with their ID-tag and the assignments in the
routing table. The library function for the Network Element is
developed using the Quartus tool with the flexibility of measuring
resource usage, configuring the FIFO size and configuring the
number of ports consistent with the routing algorithm.
C. Area and Resource consumption
Using heterogeneous FPGA blocks, a minimum of logic
elements and core block elements is used to design the Network
Interface and the Network Element. By using the SOPC builder, the
absolute amount of area and logic blocks needed to design a
Network Element and Network Interface is reduced, in contrast to
implementing them as a Hardware Description Language (HDL)
program or on a homogeneous FPGA. Here, the blocks are chosen
so that they are fully utilized and customized for the specific
functions, as listed in Table I.

Fig. 5. Optimized Network On Chip and Router model

Network Interface and Network Element are made as library
functions with configurable parameters such as the number of ports,
the buffer size and the transmission and reception link channels. If a
router communicates with fewer than the maximum number of
routers (four), it can be instantiated with only the required number
of ports. Similarly, some of the cores in the network only receive
FLITS from other cores. For example, a display-driver core and
other output-driving core modules require only information from
other cores; the transmission link in the corresponding network
interface is then disabled in order to save chip area and allow more
application-specific cores to be included in the network. The routers
(R) and core modules (C) of the optimized NoC are represented in
Fig. 5, which shows a router connected to a number of cores and to
a number of routers in the NoC. In the conventional design a router
is connected to only one core through its Network Interface and
vice versa. The application-specific router provides more flexibility:
a core can be interfaced with more than one router and a router with
more than one core, as depicted in Fig. 2.

Fig. 3. Design of Network Interface

Fig. 4. Design of Network Element

D. Timing Analyzer and Simulation
The traffic from any source core experiences three types
of delay in its way to the destination core. These are the
arbitration and propagation delays through routers, the
propagation delay through links, and the serialization and
de-serialization delays through Network Interfaces (NIs).
The propagation delays caused by Network Element and
link delays caused by Network Interface are reduced much
through optimized design. Optimized router ports provides
the flexibility to choose between various adaptive routing
algorithm to reduce traffic and delay time.
Network Element signals are simulated for various
working environment conditions and The latency time
measurements are taken for the FLITS transmission in the
designed router in order to issue deadlock free high speed
data transmission. The FIFO buffers are dynamically
worked out based on measured traffic in the interconnected
nodes and scheduling algorithm. Worst case timing analysis
are formulated in Table II.
V. LIMITATION AND FUTURE WORK
The latency time and area minimization need to be
simulated for various routing algorithms and a higher number of
processors in order to address logic-synthesis problems and to
formulate an enhanced adaptive routing algorithm. The performance
of the NoC and the Network Element needs to be measured for
additional parameters such as power consumption, net switching
power and cell area for the core elements.
A common routing algorithm and configuration needs to
be developed for various Heterogeneous FPGA vendors in
order to avoid the dependency on Hardware Description
Language and its influence on resource consumption.
TABLE I RESOURCE CONSUMPTION
Multiplexer Inputs / Bus Width / Baseline Area / Area if Restructured / Saving if Restructured / Registered / Example Multiplexer Output
3:1 16 bits 32 LEs 16 LEs 16 LEs Yes |lcd|clock1Khz:inst|ccnt[12]
5:1 2 bits 6 LEs 4 LEs 2 LEs Yes |lcd|lcd_control:inst1|lcd_data[1]~reg0
8:1 8 bits 40 LEs 8 LEs 32 LEs Yes |lcd|lcd_control:inst1|wait_cnt[1]
5:1 2 bits 6 LEs 4 LEs 2 LEs Yes |lcd|lcd_control:inst1|lcd_data[5]~reg0
5:1 2 bits 6 LEs 6 LEs 0 LEs No |lcd|lcd_control:inst1|Selector26
5:1 2 bits 6 LEs 4 LEs 2 LEs No |lcd|lcd_control:inst1|Selector30
6:1 2 bits 8 LEs 6 LEs 2 LEs No |lcd|lcd_control:inst1|Selector32
6:1 9 bits 36 LEs 18 LEs 18 LEs No |lcd|lcd_control:inst1|Selector33
TABLE II Timing Analysis and Simulation

Type             Conventional Router                        Optimized Router
                 Actual Execution Time   Latency (P2P)      Actual Execution Time   Latency (P2P)
Worst-case tco   13.244 ns               3.293 ns           12.943 ns               3.233 ns
Worst-case tpd   10.277 ns               3.061 ns           9.714 ns                2.941 ns
Worst-case tsu   5.237 ns                2.854 ns           5.237 ns                2.792 ns
Worst-case th    3.647 ns                0.986 ns           3.647 ns                0.88 ns
Clock Setup 'Clk50Mhz': 286.94 MHz (period = 3.485 ns) for both routers
VI. CONCLUSION
In NoC systems, the power consumption and bandwidth
are limited. Both of these parameters have an inverse trade-off.
When bandwidth is increased by reducing bus width, it
increases the power consumption. These problems can be
overcome by choosing appropriate routing protocol and
minimizing area required for the routing element. In this
paper, an optimized Network-on-Chip system model is
established. In the proposed optimized NI and NE design, the
routing area is reduced by 40-60%. Hence, the resource consumption
of the Network Interface and the corresponding Network Elements
can be reduced considerably on a heterogeneous FPGA by choosing
appropriate multiplexer inputs in the consumed logic elements.
An adaptive routing algorithm is developed using the FIFO
control logic and the scheduler. The LUT and other memory
resources are updated accordingly at runtime based on the measured
latency at the network element and network interface. Thus, traffic
near the network elements is reduced by nearly 20%. The minimized
area and traffic-free routing elements also ensure a reduction in
power consumption.
ACKNOWLEDGMENT
The authors gratefully acknowledge the helpful suggestions
made by the reviewers, and Anna University of Technology,
Coimbatore, for providing extended lab support and technical
content. The authors also thank AICERA 2011 for its support.
REFERENCES
[1] E. Ahmed and J. Rose, The effect of LUT and cluster size on deep submicron FPGA performance and density, IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 12, no. 3, pp. 288-298, Mar. 2004.
[2] A. Ye and J. Rose, Using bus-based connections to improve field programmable gate array density for implementing datapath circuits, IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 14, no. 5, pp. 462-473, May 2006.
[3] A. Roopchansingh and J. Rose, Nearest neighbour interconnect architecture in deep submicron FPGAs, in Proc. IEEE Custom Integr. Circuits Conf., 2002, pp. 59-62.
[4] Zoran Salcic and Chung-Ruey Lee, FPGA-based adaptive tracking estimation computer, IEEE Transactions on Aerospace and Electronic Systems, 2001.
[5] W. Feng and S. Kaptanoglu, Designing efficient input interconnect blocks for LUT clusters using counting and entropy, ACM Trans. Reconfigurable Technol. Syst., vol. 1, no. 2, pp. 6:1-6:28, Jun. 2008.
[6] J. D. Owens, W. J. Dally, R. Ho, D. N. Jayasimha, S. W. Keckler, and L.-S. Peh, Research challenges for on-chip interconnection networks, IEEE Micro, vol. 27, no. 5, pp. 96-108, Sep.-Oct. 2007.
[7] L. Benini and D. Bertozzi, Network-on-chip architectures and design methods, IEE Proc. Comput. Digital Tech., vol. 152, no. 2, pp. 261-272, Mar. 2005.
[8] R. Joost and R. Salomon, Advantages of FPGA-based multiprocessor systems in industrial applications, in Proc. 31st Annual Conf. of the IEEE Industrial Electronics Society (IECON 2005), 6-10 Nov. 2005.
[9] Z. Lu, B. Yi, and A. Jantsch, Connection-oriented multicasting in wormhole-switched network-on-chip, in Proc. IEEE Comput. Soc. Annu. Symp. VLSI (ISVLSI'06), 2006, vol. 6, pp. 1-6.
[10] A. Singh, G. Parthasarathy, and M. Marek-Sadowska, Efficient circuit clustering for area and power reduction in FPGAs, ACM Trans. Des. Autom. Electron. Syst., vol. 7, no. 4, pp. 643-663, Oct. 2002.
[11] J. Liu, L.-R. Zheng, and H. Tenhunen, Interconnect intellectual property for network-on-chip (NoC), J. Syst. Arch., vol. 50, no. 2-3, pp. 65-79, Feb. 2004.
[12] Y. Hoskote, S. Vangal, A. Singh, N. Borkar, and S. Borkar, A 5-GHz mesh interconnect for a teraflops processor, IEEE Micro, vol. 27, no. 5, pp. 51-61, Sep.-Oct. 2007.
[13] I. M. Panades, A. Greiner, and A. Sheibanyrad, A low cost network-on-chip with guaranteed service well suited to the GALS approach, in Proc. 1st Int. Conf. Workshop on Nano-Networks, 2006, pp. 1-5.
[14] P. Gratz, C. Kim, K. Sankaralingam, H. Hanson, P. Shivakumar, S. W. Keckler, and D. Burger, On-chip interconnection networks of the TRIPS chip, IEEE Micro, vol. 27, no. 5, pp. 41-50, Sep.-Oct. 2007.
[15] A. A. Morgan, H. Elmiligi, M. W. El-Kharashi, and F. Gebali, Networks-on-chip topology generation techniques: Area and delay evaluation, in Proceedings of the 3rd IEEE International Design and Test Workshop (IDT'08), Monastir, Tunisia, Dec. 20-22, 2008, pp. 33-38.
[16] G. Leary, K. Srinivasan, K. Mehta, and K. Chatha, Design of network-on-chip architectures with a genetic algorithm-based technique, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 17, no. 5, pp. 674-687, May 2009.
[17] G. A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam, PVM: Parallel Virtual Machine, A User's Guide and Tutorial for Networked Parallel Computing. Cambridge, MA: MIT Press, 1994. [Online]. Available: http://www.csm.ornl.gov/pvm
[18] Altera Corporation, NIOS II Processor Reference Handbook. www.altera.com/literature/hb/nios2/n2cpu_nii5v1.pdf
Classification of Temporal Database Using
Train & Test Approach
S. Nithya Shalini and A. M. Rajeswari
Department of Computer Science and Engineering, Thiagarajar College of Engineering, Madurai, Tamilnadu, India.
E-mail: snithyashalini@gmail.com, amrcse@tce.edu


Abstract--Association rule mining is a highly popular data
mining technique. Association rules show attribute-value conditions
that occur frequently together in a given dataset, and they provide
information in the form of "if-then" statements. When association
rule mining is applied to a raw data set, it produces an extremely
large number of rules. Most of the rules might be irrelevant and the
time required to find them can be impractical. This issue arises
because, in general, association rules are mined on the entire dataset
without validation on an independent sample. To solve this issue, an
algorithm called the Train and Test approach is proposed, which
reduces the number of rules by adding a measure called lift. Lift is a
correlation measure which is used to augment the support-confidence
framework for association rules. The proposed method, which deals
with class-association rules, predicts normal days and ozone days on
the Train dataset of the ozone database and validates the rules on the
Test dataset of the same. The ozone database is then classified using
the class-association rules obtained by the Train & Test approach.
Keywords-Association rules, lift, train and test, classification.
I. INTRODUCTION
Data mining or knowledge discovery refers to the process of
finding interesting information in large repositories of data. The
term data mining also refers to the step in the knowledge
discovery process in which special algorithms are employed in
hopes of identifying interesting patterns in the data.
Association rule mining aims to find association rules that satisfy a
predefined minimum support and confidence in a given database.
The problem is usually decomposed into two sub-problems: (i)
finding those item sets whose occurrences exceed a predefined
threshold in the database (these item sets are called frequent or large
item sets), and (ii) generating association rules from those large
item sets under the constraint of minimal confidence.
Temporal Data Mining is a rapidly evolving area of research
that is at the intersection of several disciplines, including
statistics, temporal pattern recognition, temporal databases,
optimisation, visualisation, high-performance computing, and
parallel computing. Temporal database stores relational data that
include time-related attributes. These attributes may involve
several timestamps, each having different semantics.
A. Objective
The aim of the proposed method is to classify the ozone
database using the temporal association rule obtained by the
Train & Test approach.
B. Problem Statement
Predicting the ozone day and normal day from the ozone
database for the particular period using the Train and Test
approach.
II. LITERATURE REVIEW
One of the main unresolved problems that arise during
the data mining process is treating the data that contains
temporal information. In this case, a complete understanding
of the entire phenomenon requires that the data should be
viewed as a sequence of events. Temporal sequences appear
in a vast range of domains, from engineering, to medicine
and finance, and the ability to model and extract information
from them is crucial for the advance of the information
society. When association rule mining is applied, a large number of
rules is generated. In the proposed system, validation is applied to
the generated association rules to produce a smaller number of more
effective rules.
C. Existing System
The existing system generates the association rules using the
support and confidence measures alone. It produces a large number
of rules; some of the rules might be irrelevant and the time taken to
find them can be impractical.

D. Limitation of the Existing System
Generation of more number of irrelevant rules
Time consuming
E. Proposed System
The Train & Test approach has been used to reduce the
number of irrelevant rules by using the lift measure and a validation
process. It is applied on the ozone database to predict the normal
and ozone days for a particular period.

The steps are as follows (a minimal sketch of this loop is given after the list):
(1) Preprocessing the data.
FOR i = 1 TO t LOOP
(2) Partition the database into Train and Test data sets.
(3) Generation of association rules from the Train data set.
    Phase 1: Finding the frequent sets from the Train dataset which satisfy the minimum support value.
    Phase 2: Generating the association rules from the frequent sets of Phase 1 by computing the confidence and lift measures.
(4) Validation of the association rules on the Test data set: the rules generated in step (3) are checked to see whether they meet the threshold values for support, confidence and lift on the Test dataset.
END LOOP
(5) Finding the common rules generated in step (4) across the different test cycles.
(6) Classifying the database using the common rules.
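The sketch below is not the authors' implementation: the rule representation (item sets), the 70/30 split and the external mine_rules() helper are assumptions used only to make the control flow of steps (2)-(5) concrete.

import random

def measures(rule, transactions):
    # Support, confidence and lift of a rule {lhs -> rhs} over a list of item sets
    n = len(transactions)
    lhs = sum(1 for t in transactions if rule['lhs'] <= t)
    rhs = sum(1 for t in transactions if rule['rhs'] <= t)
    both = sum(1 for t in transactions if (rule['lhs'] | rule['rhs']) <= t)
    sup = both / n
    conf = both / lhs if lhs else 0.0
    lift = conf / (rhs / n) if rhs else 0.0
    return sup, conf, lift

def train_and_test(transactions, mine_rules, cycles, min_sup, min_conf, min_lift):
    kept_per_cycle = []
    for _ in range(cycles):
        random.shuffle(transactions)                        # step (2): partition
        cut = int(0.7 * len(transactions))
        train, test = transactions[:cut], transactions[cut:]
        # step (3): mine_rules is assumed to return {rule_id: {'lhs': set, 'rhs': set}}
        candidates = mine_rules(train, min_sup, min_conf, min_lift)
        kept_per_cycle.append({                             # step (4): validate on Test
            rule_id for rule_id, rule in candidates.items()
            if all(m >= t for m, t in zip(measures(rule, test),
                                          (min_sup, min_conf, min_lift)))})
    return set.intersection(*kept_per_cycle)                # step (5): common rules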
III. METHODOLOGY
F. Architecture of the proposed system

Fig 1 Architecture of the Proposed System
G. Modules
Data preprocessing
Data partition
Training Phase
Testing Phase
Common Rules obtained from different test samples.
Classification of ozone database using the common rules

Data Preprocessing: It involves the following two steps.
(1) Attribute Selection.
(2) Data discretization

The attributes which are more significant to frame the
association rules are selected.
Data discretization is a procedure that takes a data set
and converts the continuous attributes and the numerical
attributes to categorical form. The Weka tool is used to categorize
these selected attributes.

Data Partition: The input database obtained after preprocessing
is partitioned into
Train dataset
Test dataset

Training Phase: It consists of two phases.

Phase 1:Frequent Item set Generation:
Generate all frequent item sets from the Train data set whose
support is greater than or equal to the minimum support (minsup).

Phase 2: Rule Generation:
Generate Association Rules with the frequent item
sets generated in phase 1 by computing confidence and lift
measures which satisfy the minimum threshold values.

Testing Phase: Testing phase is mainly for validating the rules
that have been generated in the Training phase. The rules whose
support, confidence and lift measures satisfy the minimum threshold
values are considered as the validated rules.

Common Rules Obtained from Different Test Samples: The
common rules from the validated rule set obtained from
different test cycles are extracted which remains valid on both
Train and Test dataset.

Classification of ozone database using common rules: The
ozone database is classified using the common rules obtained
using the Train & Test Approach.


H. Measures Used
Support: An objective measure for association rules of the form
X => Y is rule support, representing the percentage of transactions
from the transaction database that the given rule satisfies, i.e.,
P(X U Y), where X U Y indicates that a transaction contains both X
and Y, the union of the item sets X and Y.

Support(X => Y) = P(X U Y)

Confidence: Another objective measure for association rules is
confidence, which assesses the degree of certainty of the identified
association, i.e., P(Y|X), the probability that a transaction containing
X also contains Y.

Confidence(X => Y) = P(Y|X)

Lift: Lift is used to measure the correlation between X and Y. It
is a symmetric measure. A lift well above 1 indicates a strong
correlation between X and Y. A lift around 1 says that
P(X,Y) = P(X)P(Y); in terms of probability, this means that the
occurrence of X and the occurrence of Y in the same transaction are
independent events, hence X and Y are not correlated.

Lift(X => Y) = Confidence(X => Y) / P(Y)

Precision: Precision is defined as the fraction of correct instances
among those that the algorithm believes to belong to the relevant
subset.

Precision = tp / (tp + fp)

Where tp = True Positive and fp = False Positive. True positives are
the positive tuples that were correctly labeled by the classifier; false
positives are the negative tuples that were incorrectly labeled as
positive.

Recall: Recall is defined as the fraction of correct instances
among all instances that actually belong to the relevant subset.

Recall = tp / (tp + fn)

Where tp = True Positive and fn = False Negative; false negatives
are the positive tuples that were incorrectly labeled as negative by
the classifier.
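For concreteness, the two evaluation measures can be computed from confusion-matrix counts as below; the counts used in the example are made-up values, not results from the ozone database.

def precision(tp, fp):
    # fraction of predicted ozone days that really are ozone days
    return tp / (tp + fp)

def recall(tp, fn):
    # fraction of actual ozone days that were predicted as ozone days
    return tp / (tp + fn)

# Hypothetical example: 18 ozone days correctly flagged, 2 normal days wrongly
# flagged, 3 ozone days missed
print(precision(18, 2), recall(18, 3))      # 0.9 and about 0.857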



IV. PERFORMANCE ANALYSIS
The Train & Test approach reduces the number of irrelevant rules
from the rule set generated on the Train dataset by validating the
rules on the Test dataset. It performs much more effectively than the
Apriori algorithm and Apriori using the lift metric. Moreover, the
proposed model is flexible in predicting interesting temporal
patterns using the support, confidence and lift measures.
Table 1 No of Rules generated by Apriori Algorithm for different thresholds

Year   Sup 3%, Conf 20%   Sup 14%, Conf 20%   Sup 5%, Conf 20%
       No of Rules        No of Rules         No of Rules
1998   51                 8                   28
1999   49                 8                   34
2000   49                 10                  28
2001   57                 9                   35
2002   51                 11                  34

Table 2 No of Rules generated by Apriori using Lift Metric for different thresholds

Year   Sup 3%, Conf 20%, Lift 0.18   Sup 14%, Conf 20%, Lift 0.25   Sup 5%, Conf 20%, Lift 0.29
       No of Rules                   No of Rules                    No of Rules
1998   36                            8                              18
1999   40                            8                              26
2000   38                            9                              24
2001   46                            9                              25
2002   42                            10                             28

Table 3 No of Rules generated by Apriori based Train & Test Approach for different thresholds

Year   Sup 3%, Conf 20%, Lift 0.18   Sup 14%, Conf 20%, Lift 0.25   Sup 5%, Conf 20%, Lift 0.29
       No of Rules                   No of Rules                    No of Rules
1998   15                            4                              12
1999   14                            8                              15
2000   15                            5                              15
2001   12                            9                              12
2002   7                             1                              4
Fig 2 No of Rules generated for Apriori, Apriori using Lift and Apriori based Train & Test Approach (rule counts per year, 1998-2002)
The ozone database is classified using the common rules obtained
by the Train and Test approach, and its results are compared with the
classification technique [6] by computing precision and recall for
different thresholds.

Table 4 Precision and Recall values for different thresholds by Apriori based Train & Test Approach

Year   Sup 3%, Conf 20%, Lift 0.70   Sup 14%, Conf 20%, Lift 0.70   Sup 5%, Conf 20%, Lift 0.74
       Precision, Recall             Precision, Recall              Precision, Recall
1998   0.667, 1                      0.72, 1                        0.75, 1
1999   1, 1                          1, 1                           1, 1
2000   0.889, 1                      1, 1                           1, 1
2001   1, 1                          1, 1                           1, 1
2002   1, 1                          0.5, 1                         1, 0.85

Table 5 Precision and Recall values for different thresholds by parametric peak prediction model (classification)

Year   Sup 3%, Conf 20%, Lift 0.70   Sup 14%, Conf 20%, Lift 0.70   Sup 5%, Conf 20%, Lift 0.74
       Precision, Recall             Precision, Recall              Precision, Recall
1998   0.620, 0.571                  0.62, 0.56                     0.69, 0.555
1999   1, 1                          0.713, 0.736                   0.759, 0.712
2000   0.621, 0.545                  0.455, 0.445                   0.655, 0.55
2001   0.745, 0.675                  0.685, 0.64                    0.755, 0.675
2002   0.625, 0.64                   0.705, 0.71                    0.56, 0.545
V. CONCLUSION AND FUTURE SCOPE
The main issue with the existing system is the generation of a large
number of irrelevant rules. This issue is addressed by the proposed
method, the Train and Test approach, which reduces the number of
irrelevant rules through a validation process. The method is
compared with the performance of the Apriori algorithm and the
Apriori algorithm using the lift measure, and is found to be more
efficient.
The ozone database is classified using the common rules obtained
by the Train and Test approach, and its results are then compared
with the classification technique [6] by computing precision and
recall for different threshold values. The results show that the
proposed method is more accurate than the parametric peak
prediction model, which is based on a classification technique.












































REFERENCES
[1] Hoda Meamarzadeh, Mohammad Reza Hayyambashi, Extracting Temporal Rules from Medical Data, IEEE Conference on Computer Technology and Development, 2009.
[2] Susan E. Spenceley, The Intelligent Interface for On-Line Electronic Medical Records using Temporal Data Mining, IEEE Conference on Computer Technology and Development, 2009.
[3] Yuchen Yang, Shingo Mabu, Stock Movement Prediction using Fuzzy Inter-transaction Class Association Rule Mining based on Genetic Network Programming, IEEE International Conference, 2009.
[4] Xiang Lian, Efficient Similarity Search over Future Stream Time Series, IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 1, January 2008.
[5] Carlos Ordonez, Association Rule Discovery with the Train and Test Approach for Heart Disease Prediction, IEEE Transactions on Information Technology, vol. 10, no. 2, April 2006.
[6] Kun Zhang, Wei Fan, Xiaojing Yuan, Ian Davidson, Forecasting Skewed Biased Stochastic Ozone Days: Analyses and Solutions, Sixth IEEE International Conference on Data Mining (ICDM'06).
[7] Franck Le, Sihyung Lee, Detecting Network-wide and Router-Specific Misconfigurations Through Data Mining, IEEE/ACM Transactions on Networking, vol. 17, no. 1, February 2009.
[8] Gang Yang, Hong Zhao, An Implementation of Improved Apriori Algorithm, IEEE International Conference on Machine Learning & Cybernetics, Baoding, 12-15, 2009.
[9] Diang Zhenguao, Wei Qinqin, Ding Xianhua, An Improved FP-Growth Algorithm Based on Compound Single Linked List, IEEE International Conference on Information and Computing Science, 2009.
[10] Qihua Lan, Defu Zhang, Bo Wa, A New Algorithm for Frequent Itemsets Mining Based on Apriori and FP-Tree, Global Congress on Intelligent Systems, 2009.
[11] Hung-Yi Lin, Perfect KDB-Tree: A Compact KDB-tree Structure for Indexing Multidimensional Data, IEEE International Conference on Knowledge and Data Engineering, 2005.














Usage of Malware Analysis Techniques and
Development of Automated Malware
Categorization System
Sai Lohitha. N, Department of Information Technology, SRM University, Chennai, India. E-mail: sailohitha.n@gmail.com
Arokiaraj Jovith. A, Assistant Professor, Department of Information Technology, SRM University, Chennai, India. E-mail: arokiarajjovitha@ktr.srmuniv.ac.in

Abstract: Malware is a major security threat in today's
Internet era. Various static and dynamic analysis techniques
have evolved for analyzing the malware. Human analysis of
samples is very expensive and is also time-consuming. As the
need for malware analysts has been growing, efforts are being
made in automating the process of malware classification and
analysis. In this paper a framework that is useful for analysts
is proposed. This framework is used to automate the malware
categorization based on reports generated by online malware
services like VirusTotal. This will ultimately save time and
money for organizations that are involved in malware analysis.

Keywords: static analysis; dynamic analysis; malware; AV (Anti-
virus); virtualization; network security; portable executable(PE)
I. INTRODUCTION
Most of the serious problems facing the Internet today, such as
spam, botnets, Trojans and worms, largely depend on some form of
malicious code, commonly referred to as malware. Malware is any
malicious program
or file that is harmful to a computer. It includes computer
viruses, worms, Trojan horses, and also spyware,
programming that gathers information about a computer user
without permission. The day-to-day increase in the number of
malware samples has become an acute problem. Unfortunately, the
intricacy of modern malware is making
this problem more challenging. For example, Agobot [3] has
been observed to have more than 580 variants since its
initial release in 2002. Modern Agobot variants have the
ability to perform DoS attacks, steal bank passwords and
account details, propagate over the network using a diverse
set of remote exploits, use polymorphism to evade detection
and disassembly, and even patch vulnerabilities and remove
competing malware from an infected system [3]. According
to the CA Technologies Internet Security Report 2010, the Internet
is the primary source of infection, accounting for 86% of the total
threat landscape in 2010, a growth of 8% compared to 78% in 2009
[4]. Counting and categorizing this volume of samples manually
takes a lot of time, so automated and robust approaches to categorize
and count malware are required. Currently, the most substantial line
of defense against malware is Anti-Virus (AV) software products,
which mainly use signature-based methods to recognize threats.
Viruses range in severity from the harmless to the catastrophic and
downright system-crippling. Given a collection of malware samples,
these AV vendors
first categorize the samples into families so that samples in
the same family share some common personae, and exhibit
some common features like string(s) to detect variants of a
family of malware samples.
For many years, malware categorization has been done by
human analysts, where memorization, looking up descriptions in
malware libraries [15], and searching sample collections are
typically required. The manual process is laborious and slow, while
today's malware samples are created at a rate of millions per day
with the development of malware-writing techniques, so an
automated or mechanized malware analysis system is required.
Many online analyzers, such as Anubis, Norman Sandbox and
Process Explorer, are now available.
II. CONTRIBUTIONS TO THE PAPER
Anubis, Norman Sandbox and VirusTotal are some of the online
malware analyzers available that analyze externally submitted
binaries and generate reports accordingly. However, these analyzers
take a lot of time and require manpower throughout the analysis.
Furthermore, the size of the file that can be submitted varies from
5 MB to 8 MB. Although the VirusTotal analyzer can analyze a
malware sample or file of up to 20 MB, it needs the presence of an
analyst during the entire analysis. To overcome these shortcomings
and reduce manpower, VirusTotal is used, since this malware
analysis service offers an analysis-automation API as one of its
characteristics. The entire process is done in two steps: automation,
and categorization of malware with visualization (Fig. 1). The entire
implementation is done in a virtual environment to prevent the
malware from attacking the native operating system if it is executed.
A comparison between two different antiviruses is also made based
on their detection capability [VI], and the importance of a leading
antivirus is discussed in detail [III-E]. The major categories of
malware,
along with their subcategories and descriptions, are explained in
Section III.
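As a sketch of the categorization step described above, the snippet below tallies the per-engine detection labels found in a scan report and assigns the sample to the most-voted major family. The report structure and the label strings are hypothetical examples and do not reproduce the actual VirusTotal report format.

from collections import Counter

FAMILIES = ('trojan', 'worm', 'virus', 'backdoor', 'spyware')

def categorize(scan_labels):
    # scan_labels: mapping of AV engine name -> detection string (or None)
    votes = Counter()
    for label in scan_labels.values():
        if not label:
            continue
        for family in FAMILIES:
            if family in label.lower():
                votes[family] += 1
    return votes.most_common(1)[0][0] if votes else 'unknown'

# Hypothetical report excerpt
sample_report = {'EngineA': 'Trojan.Generic.123',
                 'EngineB': 'Win32/TrojanDownloader',
                 'EngineC': None}
print(categorize(sample_report))            # trojan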
III. MALWARE DEFINED
Malware means malicious software. Computer malware comprises
a number of dangerous software programs that are threats to all
computer users, even more so if they are connected to the World
Wide Web. Software can be deemed
to be malware depending on the purpose for which it was
created rather than the features. Malware is created for
various purposes; these include intrusion of privacy for
various unethical reasons, vandalism, crimes, spying or just
for pranks. Understanding these threats, their nature and
how to prevent them is very vital to save Internet surfing
and protection of most confidential data. Malware can be
divided into three major groups: Trojan horses, worms and
viruses.
The effect of malware varies from the simple to the
catastrophic. For example, Stuxnet is an Internet worm that infects
Windows computers [17]. It targeted industrial software and
equipment. The worm initially spreads slowly, but it carries a
specialized payload which is designed to target only Siemens
Supervisory Control and Data Acquisition (SCADA) systems that
are configured to control and monitor specific industrial processes.
This attack resulted in a major loss to Iran. The worm infected about
62,867 computers in 2010.
A. Malware Types with Subtypes within Types table
TABLE I

SUBTYPES OF MAJOR CATEGORY VIRUS

Subtype: Description
File virus: Uses the file system of a given OS (or more than one) to propagate. File viruses include viruses that infect executable files, companion viruses and link viruses.
Script virus: Script viruses are able to infect other file formats, such as HTML, if that file format allows the execution of scripts.
Boot sector virus: Infects the boot sector or the master boot record, or displaces the active boot sector, of a hard drive.
Macro virus: Written in the macro scripting languages of word processing, accounting, editing, or project applications. The most widespread macro viruses are for Microsoft Office applications.

TABLE II
SUBTYPES OF MAJOR CATEGORY WORM

Subtype | Description
Network worm | Self-propagating program that spreads over a network, usually the Internet. Worms spread by locating other vulnerable potential hosts on the network.
Mass-mailing worm | Embedded in an email attachment, which must be opened by the intended victim to enable the worm to install itself on the victim's host.
Email worm | Spread via infected email attachments.
Instant messaging (IM) worm | Spread via infected attachments to IM messages or reader access to Uniform Resource Locators (URLs) in IM messages that point to malicious Web sites from which the worm is downloaded.

TABLE III
SUBTYPES OF MAJOR CATEGORY TROJANS

Subtype | Description
Backdoor Trojan | Acts as a remote administration utility that enables control of the infected machine by a remote host. Example: Back Orifice.
Denial of service (DoS) Trojan | If the Trojan infection spreads widely enough, the remote attacker gains the ability to create a distributed denial of service (DDoS) attack.
FTP Trojan | Opens port 21, enabling the remote attacker to connect to the victim's machine via FTP.


IV. VIRTUAL ENVIRONMENT
A. Why Virtualization is Important?
Virtualization is an important tool for malware researchers:
executing the malware in a controlled, virtualized environment
protects the analyst from infection. The goal is to capture the
overall impact that the software has on a system without
concentrating on the program's original code. This gives a high-level
overview of what the program is doing. Looking at the changes
made to the operating system reveals modifications or destructive
activities performed on the system. Common tools such as
Microsoft's Sysinternals utilities monitor system call executions,
modifications to files and registry modifications. Using tools
such as Wireshark, the program's network activity can be monitored.
B. Virtual PC
Virtual PC is a program that emulates Windows 95,
Windows 98, Windows NT, as well as IBM OS/2, or Linux
on a Macintosh personal computer, assuming it's equipped
with a sufficiently fast microprocessor [22]. It is a virtual
machine with an emulated hard disk and emulated hardware;
the operating system installed on the virtual machine does not
run directly on the real hardware. With Virtual PC installed, a
Mac can show the desktop of the emulated operating system on
one part of the display or let it take up the entire screen. One
can run any program that would run under the other operating
systems on "regular" (Intel microprocessor-based) PCs. Using
this virtual PC, new software can be tested before being
committed to the host machine, so the native operating system
of that particular PC remains safe.
V. SYSTEM ARCHITECTURE
[Fig. 1 block diagram: Malware Samples → MD5 Hash Values → VT Report → Antivirus Analysis → Malware Categorization through Visualization]
Fig. 1: Architecture displaying the flow of the entire system
A. Malware Samples
Malware is malicious and can harm the system. Different
types of malware exist, as mentioned above, such as Trojans
and backdoors, along with their subcategories in Tables I, II
and III. Malware samples can be collected from websites such
as www.offensivecomputing.net, from honeypots, etc. Honeypots
such as Nepenthes and Dionaea maintain copies of malware in
the form of binaries [18]. Most malware samples are executable
files or binaries [7]. They can be detected using various tools,
which consumes a lot of time.
B. MD5 Hash
MD5 (Message Digest algorithm) and other hashes are one-way
functions that produce a "fingerprint" [14]. They map something
with a lot of bits down to just a few bits (128 in the case of
MD5) in such a way that collisions are as rare as possible. This
is useful because these small hashes can be compared and stored
much more easily than the entire original sequences. In
cryptography, one-way hashes are used to verify something
without necessarily giving away the original information; they
are irreversible.
Example: 0a079cc2f1e65904698bbe449f8ed653
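As an illustration, the following minimal Python sketch computes such an MD5 fingerprint for a sample file read in chunks; the sample path is hypothetical.

import hashlib

def md5_of_file(path, chunk_size=8192):
    # Return the hexadecimal MD5 fingerprint of a file, reading it in chunks.
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# print(md5_of_file("samples/suspect.exe"))  # hypothetical sample path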
C. VIRUS TOTAL
Virus Total is a service developed by Hispasec Systems
that analyzes suspicious files and URLs enabling the
identification of viruses, worms, Trojans and other kinds of
malicious content detected by antivirus engines and web
analysis toolbars [2].
Virus Total's main characteristics are:
Free, independent service.
Runs multiple antivirus engines.
Runs multiple file characterization tools.
Real time automatic updates of virus signatures.
Detailed results from each antivirus engine.
Runs multiple web site inspection toolbars.
Real time global statistics.
Analysis automation API.
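A minimal sketch of the analysis automation step is given below, assuming the VirusTotal v2 public API endpoints and a valid API key; the exact endpoints, field names and rate limits may differ from the interface used in this work.

import requests

API_KEY = "YOUR_VT_API_KEY"  # assumption: a valid VirusTotal public API key

def submit_sample(path):
    # Submit one binary for scanning (assumed v2 public API endpoint).
    with open(path, "rb") as handle:
        resp = requests.post("https://www.virustotal.com/vtapi/v2/file/scan",
                             data={"apikey": API_KEY},
                             files={"file": (path, handle)})
    return resp.json()  # contains a resource/scan_id for the sample

def fetch_report(resource):
    # Retrieve the report for an MD5 hash or scan_id.
    resp = requests.get("https://www.virustotal.com/vtapi/v2/file/report",
                        params={"apikey": API_KEY, "resource": resource})
    return resp.json()  # includes 'positives', 'total' and per-engine 'scans'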
D. Virus Total Report (VT Report)
The report in Fig. 2 shows the different malware types
detected by the 43 antivirus engines on submission of one
malware sample.
Fig. 2: Virus Total sample Report
The above report is the result of one malware sample.
Submitting a large number of samples at a time through the
web interface is not possible, so an API designed by the
VirusTotal team is used to submit many samples at once. The
report consists of the antivirus name, the AV version, the last
update date and the respective virus detected in the malware
sample. The report is in an uncoordinated format that is
difficult for malware analysts to analyze. Furthermore, the
report contains a ratio such as 39/43, which indicates that 39
out of the 43 antivirus engines present in the VirusTotal
analyzer were able to detect the virus present in the sample.
E. F-SECURE Antivirus Analysis Report
The report generated by VirusTotal is in a clumsy format
when many samples are submitted at a time through the API,
containing the results of all 43 antiviruses along with their
respective detected virus signatures. The result of any single
antivirus can be extracted according to the analyst's choice;
here, the report of the leading antivirus F-Secure is generated
[8]. This is useful for better visualization and quick analysis.

About F-SECURE

F-Secure Internet Security offers better security online
without slowing down the computer [20]. It also provides
enhanced protection against viruses, malware, spam, and
cyber criminals. It is one of the leading antiviruses and has a
good detection rate, as shown in Table IV.

Why F-SECURE?

F-Secure provides enhanced protection against malware
and various kinds of attacks. In the whole-product dynamic
test conducted by AV-Comparatives in 2010, the F-Secure
antivirus achieved a protection rate of 98.9% with an
Advanced+ ranking [8].
F. Malware Categorization through Visualization
Visualization has gained importance in recent years.
Through visualization, large amounts of data can be displayed
in the form of images and charts, so the user can better
understand the data and proceed further; evaluations can be
carried out and conclusions derived. Many online chart-generating
tools are available, such as the Google Chart tool, XML/SWF
Charts and JFreeChart [19].
VI. EXPERIMENTAL RESULTS
The experiment is conducted by taking a large number of
malware samples. These are submitted to the VirusTotal API
and the generated results are categorized as shown below. The
categorization is based on Trojans, worms, backdoors and other
categories among the analyzed files. The total number of files
that could not be analyzed by an antivirus engine is obtained
by subtracting the number of analyzed files from the number
of samples taken. The performance of leading antiviruses in
detecting the malware categories is shown in Table IV, which
lists the detection results of the F-SECURE and ANTIVIR
antivirus engines. The results show that F-Secure has a better
detection rate than ANTIVIR.

TABLE IV
EXPERIMENTAL RESULTS

File Type | F-SECURE Result 1 | F-SECURE Result 2 | ANTIVIR Result 1 | ANTIVIR Result 2
Malware Samples | 732 | 10 | 732 | 10
Trojans | 42 | 4 | 35 | 3
Worms | 492 | 3 | 450 | 4
Backdoors | 7 | 2 | 4 | 1
Others | 63 | 1 | 75 | 1
Total Analyzed | 604 | 10 | 564 | 9
Not Analyzed | 128 | 0 | 168 | 1



Fig.3. Visualization of the malware categories
Fig. 3 above is a pie chart that provides better visualization
of the malware categories. It saves the analyst a lot of time:
it gives the percentage of each malware category so that the
analyst can proceed to derive conclusions within a short span
of time.
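The following Python sketch illustrates how such a categorization and pie chart could be produced from per-sample detection labels; the keyword rules and the use of matplotlib are illustrative assumptions, not the exact chart tool used here.

from collections import Counter
import matplotlib.pyplot as plt

def categorize(labels):
    # labels: sample hash -> detection string reported by one engine (e.g. F-Secure).
    counts = Counter()
    for label in labels.values():
        text = (label or "").lower()
        if "trojan" in text:
            counts["Trojans"] += 1
        elif "worm" in text:
            counts["Worms"] += 1
        elif "backdoor" in text:
            counts["Backdoors"] += 1
        elif text:
            counts["Others"] += 1
        else:
            counts["Not analyzed"] += 1
    return counts

# counts = categorize(report_labels)  # hypothetical dict built from VT reports
# plt.pie(list(counts.values()), labels=list(counts.keys()), autopct="%1.1f%%")
# plt.title("Malware categories")
# plt.show()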
VII. LIMITATIONS AND FUTURE WORK
The system has some limitations and shares common
weaknesses. The malware samples are executed within
VirtualBox, so samples that employ anti-VM evasion techniques
may not exhibit their malicious behavior. To mitigate this
limitation, the samples could be run on a real, non-virtualized
system. Another limitation is the analysis time, which depends
on the speed of the Internet connection and on the time limit
specified by the VirusTotal team; in this experiment, each
binary or malware sample takes about 15 seconds to produce
a result [2]. The experiment displays results only for the main
categories of malware, such as Trojans, backdoors and worms.
This work can be extended by further dividing the main
categories into subcategories such as file virus, email worm,
etc. It can also be extended by analyzing the PE structure of
the malware detected by the various antivirus engines,
performing dynamic analysis, observing registry-level activities,
and clustering based on the similarities exhibited by the
malware during analysis. This helps to find malware samples
that appear with different virus signatures but may serve the
same purpose, such as stealing passwords.

VIII. CONCLUSION
In this paper, the various categories of malware and the
rate at which Internet malware is increasing annually have
been discussed, along with human analysis of malware and
its disadvantages. The disadvantages of existing malware
analyzers such as Anubis and Norman Sandbox, such as the
limits on the size of the malware samples that can be submitted,
were also discussed, along with the drawbacks of the VirusTotal
GUI, such as time consumption and the need for an analyst
to be present during report generation. To address these
limitations, an automated malware categorization system with
visualization is proposed, which helps malware analysts derive
conclusions such as which malware category is most prevalent
among the samples submitted to the VirusTotal API. The
automation allows the malware samples to be recognized and
clustered by their major categories, which reduces manpower
and saves the organization a lot of time and money. Furthermore,
the system demonstrates an interesting way of representing a
large amount of data, namely visualization: using chart-generating
tools, pie charts are generated showing the classification of
the malware samples with the count of each major category.
The performance results of leading antiviruses such as F-Secure
and AntiVir are also presented. Categorization through
visualization helps analysts obtain, very quickly, the total
number of malware samples attempting to attack the system,
so that they can take the necessary measures to prevent future
attacks.

REFERENCES

[1] Free Automated Malware Analysis Services,
http://zeltser.com/reverse-malware/automated-malware-analysis.html
[2] Virus Total, www.virustotal.com
[3] Barford, P. and Yagneswaran, V.: An inside look at botnets. In: Series:
Advances in Information Security, Springer, Heidelberg (2006).
[4] State of the Internet 2010.
http://www.ca.com/files/SecurityAdvisorNews/h12010threatreport_244199
.pdf-statistics report in 2010
[5] Anubis, http://anubis.iseclab.org/.
[6] Malware Analysis: Environment Design and Architecture,
http://www.sans.org/reading_room/whitepapers/threats/malware-analysis-
environment-design-architechture_1841.
[7] Latest Malware Samples,
http://www.offensivecomputing.net/vizsec09/dquist-vizsec09.pdf.
[8] Whole product dynamic test, www.av-comparitives.org.
[9] Oracle VM virtual box,
www.oracle.com/us/technologies/virtualization/.
[10] Norman sandbox,
http://www.norman.com/security_center/security_tools
[11] Marius Gheorghescu, Automated virus classification system. Virus
Bulletin Conference, 2005
[12] M. Bailey, J. Oberheide, J. Andersen, Z. M. Mao, F. Jahanian and
J. Nazario, Automated classification and analysis of internet malware.
RAID, 4637:178-197, 2007.
[13] Karen Mercedes Goertzel, Tools Report on Anti-Malware, 2009.
[14] Cryptography and Network Security (4th Edition), William Stallings.
[15] Malware Library by Uniblue,
http://www.liutilities.com/malware/
[16] Internet Security Threat Report-Symantec connect,
http://www.symantec.com/connect/security/
[17] Schneier on security,
http://www.schneier.com/blog/archives/2010/10/stuxnet.html
[18] Dionaea- catches bugs,
http://dionaea.carnivore.it/
[19] Open Source Graph/Chart Generation Tools,
www.roseindia.net/opensource/graphchartgenerationtools
[20] F-Secure-free online virus scanner,
www.f-secure.com
[21] Malware Analysts Cookbook, Tools and Techniques for Fighting
Malicious Code, Michael Hale Ligh, Steven Adair, Blake Hartstein
Matthew Richard
[22] Virtual PC,
searchservervirtualization.techtarget.com/Virtual-PC
Real Time Energy Optimization Technique for Scratch Pad Memory

Kavitha R. (Student), Ranjith Balakrishnan (Asst. Prof.), Kamal S. (Asst. Prof.)
TIFACCORE in Pervasive Computing Technologies, Velammal Engineering College, Chennai, India.
E-mail: vtkavitha@yahoo.com, rbke@in.com, kamal.saga@gmail.com



Abstract: Solutions based on embedded systems are becoming
more and more popular day by day, and optimizing their energy
consumption is an important issue. This paper presents a new
software technique to save a significant amount of energy in the
memory hierarchy for real-time applications. Scratch pad memory
is more energy efficient than cache, but requires software support
to utilize and optimize it. The proposed framework includes an
optimization suite of SPM energy-reduction techniques for real-time
embedded environments that uses partitioning of the SPM and
allocation of memory from the SPM to the level-1 cache. The
proposed system is to be simulated with a fully functional real-time
operating system (RTOS-VXWORKS). The target processor chosen
for the performance evaluation is the ARM92EJS.

Keywords: Embedded design; memory hierarchy; scratch pad memory; cache memory.
I. INTRODUCTION
In embedded applications, memory behavior is one of the
main factors limiting performance. The most serious issue in
modern embedded real-time systems is excessive energy
consumption. Recent research shows that although SPM has
power characteristics similar to cache, its performance is more
predictable and it is more energy efficient, since data mapping
is under software control. The two contrasting on-chip memory
architectures are cache and scratch pad memory (SPM). Cache
memory is one of the highest energy-consuming components of
a modern microprocessor; the instruction cache alone is reported
to consume up to 27% of the processor energy [1].



Figure1. Cache architecture
The cache consists of hardware-managed tag RAM, with
automatic checking of cache hits/misses as shown in Figure 1.
Unlike the cache, the SPM does not have a tag comparison
circuit, as shown in Figure 2.


Figure2. SPM architecture

On the other hand, a scratch pad memory consists only of
a memory array and address decoders. Therefore, the scratchpad
is both more area efficient and more power efficient than cache
[2]; due to its simplified architecture, SPM has a lower energy
cost than cache.
The target processor, the ARM92EJS, is intended for high
performance and low energy consumption and contains a 32 KB
instruction cache and a 32 KB data cache. It has software-controlled
architectural enhancements to optimize energy consumption in
terms of power management.
The rest of the paper is structured as follows: after the
presentation of related work, Section 3 presents the proposed
approach and Section 4 presents the experimental setup and
results. The paper ends with a conclusion and future work.
II. RELATED WORK
A lot of active research is being carried out on SPM and
its performance optimization. The current approaches to SPM
allocation can be broadly categorized as static and dynamic
schemes. Static allocation schemes initialize the
SPM with the designated program at load time, and the
contents of the SPM do not change at run time. In the dynamic
scheme, the SPM contents change while the program executes.
Applications can be mapped onto the SPM using hardware and
software optimization methods, as detailed in Figure 3 below.


Figure3. Approaches of SPM design

Compiler-assisted techniques require the application source
code to transfer data between the SPM and the external RAM
at run time, whereas the hardware-fitting technique does not
need this; its synthesis is based on the application execution
trace.
The authors of [4] proposed an ILP model for code/data
allocation to the SPM based on the size of the SPM and the
energy consumption of each function; the algorithm reduces
electrical energy consumption by 22% compared to cache. The
authors of [5] introduced a hardware mechanism for efficient
code allocation to the SPM space at run time, and customized
instructions for run-time code allocation were proposed.
Several techniques to improve performance or
energy were applied for multitasking systems. Verma et al.
proposed the SPM partitioning scheme among multiple
tasks [6].
Each task uses a fixed allocation of SPM space for
energy optimization [7]. The authors of [8] proposed an
RTOS-centric approach to utilize the SPM space in multi-
task systems. The authors of [9] suggest a software memory
allocator which assigns data as well as program objects to
the scratch pad or main memory. Program objects placed in
the scratch pad memory lead to reduced energy consumption
during instruction fetch, whereas relocated data objects reduce
the memory access cost of load and store instructions.
The authors of [10] proposed three methods, named the
spatial, temporal and hybrid approaches, for SPM partitioning
and code allocation. The usage of SPM is energy efficient in
preemptive multitasking systems.
III. PROPOSED APPROACH
A. Architecture
The proposed method utilizes the SPM using a
software driven adaptation algorithm. Performance is
enhanced by partitioning the SPM memory so that it acts as
an extension of the cache. Figure 4 shows the block diagram
of cache and memory partition.


Figure4. Block diagram of proposed approach

The memory partitioning in the SPM is responsible for
managing the dynamic flow of data.
B. Methodology for Implementation
Tracking the dynamic memory allocation is used to determine
the memory behavior, and an algorithm then determines the
allocation of SPM memory space in the cache memory hierarchy.
The processing steps are:
- The size of the SPM is assumed and known at compilation time.
- Only the level-1 cache is considered in this approach.
- For flexible utilization of the SPM space, the technique uses a scheduling policy.
The strategy for implementing the algorithm is as follows.
At the start of the program, the first instruction is always
fetched and executed from main memory. When the cache
memory is smaller than the required amount of program memory,
the cache is extended into the SPM space at level 1 at
compilation time. The management of the SPM is important,
since the size of the SPM is limited. The partitioning of the
memory space is dynamic and changes during execution. This
SPM strategy is designed for high performance in embedded
processors.
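The target implementation is in C on VxWorks; purely as an illustration of the compile-time decision described above, the following Python sketch shows one possible partitioning rule (the sizes and the rule itself are assumptions, not the authors' algorithm).

def plan_spm_partition(program_kb, l1_cache_kb, spm_kb):
    # If the program does not fit in the L1 cache, map the shortfall into the
    # SPM (up to its capacity); whatever remains stays in main memory.
    shortfall = max(0, program_kb - l1_cache_kb)
    spm_used = min(shortfall, spm_kb)
    return {"spm_used_kb": spm_used, "main_memory_kb": shortfall - spm_used}

# Example with illustrative sizes: 40 KB program, 32 KB L1 cache, 16 KB SPM
# print(plan_spm_partition(program_kb=40, l1_cache_kb=32, spm_kb=16))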
IV. EXPERIMENTAL SETUP
The technique presented in the previous section is evaluated
using VxWorks and the ARM92EJS. Memory usage in a running
real-time embedded program can be analyzed with MemScope
(a memory analyzer) in VxWorks, which determines the memory
usage of individual processes. In the real-time operating system,
the cache-analysis capabilities of Workbench On-Chip Debugging
monitor execution on one or more targets and identify differences
between data stored in memory and data stored in cache.
Figure 5 gives a brief description of the evaluation process
for the proposed concept. The analysis of memory
needs and the allocation is to be simulated in a real-time
environment, as shown below.



Figure5. Experimental workflow for evaluating performance and energy
optimization

The performance of the estimated hierarchy is determined
by complete program execution. The cache and SPM synthesis
of the ARM processor is carried out in the simulation environment
(RTOS) to extend the cache depending on the needs of the
system. With the help of the System Viewer, energy optimization
is performed.
A. Results
The simulation result for the ARM92EJS cache hierarchy
is displayed, describing the allocation of the individual tasks
and process functions. Figure 6 illustrates this architectural
behavior.


Figure6. Simulated result of allocation view in Vxworks

The above-mentioned process can be viewed graphically
by enabling the System Viewer configuration to view the
memory usage, and it can be refreshed sequentially to upload
the log details. Fig. 7 lists the memory events that occurred
while tracking a dynamic memory allocation.


Fig 7 Memory events generated while tracking cache hierarchy in Vxworks
V. CONCLUSION AND FUTURE WORK
The proposed methodology uses the SPM to lower the
energy consumption and improve the performance of embedded
systems. Memory partitioning and dynamic management of the
data flow between the cache and the scratch pad are obtained
simultaneously to predict the behavior of the energy optimization.
The current work includes synthesis and simulation of the cache
and SPM configuration of an ARM9 processor using a real-time
operating system. In future work, the dynamic management of
the memory partition is to be considered for all levels of cache.
REFERENCES

1. J. Montanaro, A 160 MHz, 32 b, 0.5 W CMOS RISC
microprocessor, IEEE J. Solid-State Circuits, vol. 31, no. 11, pp.
1703-1714, Nov. 1996.
2. R. Banakar, S. Steinke, B.-S. Lee, M. Balakrishnan, and P. Marwedel.
Scratchpad Memory: A Design Alternative for Cache On-chip
Memory in Embedded Systems. In Proc. Of Intl. Sym. on CODES,
Col., USA, May 2002.
3. F. Angiolini, L. Benini, and A. Caprara, An efficient profile-based
algorithm for scratchpad memory partitioning, IEEE Transaction on
Computer-Aided Design of Integrated Circuits and Systems, vol. 24,
no. 11, pp. 1660-1676, 2005.
4. S. Steinke, L. Wehmeyer, B. Lee, and P. Marwedel, Assigning
program and data objects to scratchpad for energy reduction, in
Proceedings of the Conference on Design, Automation and Test in
Europe (DATE), Paris, France, Mar. 2002, pp. 409-415.
5. A. Janapsatya, A. Ignjatovic, and S. Parameswaran, Exploiting
statistical information for implementation of instruction scratchpad
memory in embedded system, IEEE Transactions on Very Large
Scale Integration (VLSI) Systems, vol. 14, no. 8, pp. 816-829, 2006.
6. M. Verma, K. Petzold, L. Wehmeyer, H. Falk, and P. Marwedel,
Scratchpad sharing strategies for multiprocess embedded systems: A
first approach, in Proceedings of IEEE 3rd Workshop on Embedded
System for Real-Time Multimedia (ESTIMedia), Jersey City, USA,
Sep. 2005, pp. 115200.
7. R. Pyka, C. Fasbach, M. Verma, H. Falk, and P. Marwedel,
Operating system integrated energy aware scratchpad allocation
strategies for multiprocess applications, in Proceedings of 10th
International Workshop on Software & Compilers for Embedded
Systems (SCOPES), Nice, France, 2007, pp. 41-50.
8. B. Egger, J. Lee, and H. Shin, Scratchpad memory management in a
multitasking environment, in Proceedings of the 7th ACM
International Conference on Embedded Software (EMSOFT), Atlanta,
USA, Dec. 2008, pp. 265-274.
9. Moving Program Objects to Scratch-Pad Memory for Energy
Reduction Stefan Steinke, Christoph Zobiegala, Lars Wehmeyer,
Peter Marwedel Technical Report # 756,2001
10. Partitioning and allocation of scratch-pad memory for priority-based
preemptive multi-task systems Takase, H.; Tomiyama, H.; Takada,
H.; Grad. Sch. of Inf. Sci., Nagoya Univ., Nagoya, Japan Design,
Automation & Test in Europe Conference & Exhibition ,2010
11. www.windriver.com (VXWORKS PROGRAMMERS GUIDE,
MEMSCOPE USER GUIDE)
12. VXWORKS supplement for ARM architecture.
13. www.nationalinstruments.com(ARM92EJS-LPC-3250)








Enhancing Library Automation in Ubiquitous
Environment
M. Raghini¹, M. Ishwarya Devi¹, S. Amudha¹ and M. Satheesh Kumar¹
¹K.L.N College of Engineering, Department of Information Technology, Pottapalayam, Madurai, Tamilnadu.
raghiragh@gmail.com, ishwaryamurug@gmail.com, amudhadiana@gmail.com & satheesh.becse@gmail.com
Phone no: 9843641426, 9003632648, 9791994531 & 9944171030

ABSTRACT

This paper describes the process of retrieving books from a
library using a wireless detector. The wireless detector used
here is an RFID kit consisting of an RF Reader and an RF tag.
The reader and the tag each contain an antenna built into the
equipment; the antennas emit radio frequency signals and match
with each other, which is known as detection of the product,
and detection of an item is indicated by a glowing light or a
beep. The reader contains a chip used to track or detect items
according to the specified frequency. Each book carries a tag,
which is covered and numbered. The tag includes an EPC
(Electronic Product Code), which is manually updated in the
system; using the EPC the product is tracked accordingly. The
tag can otherwise be called a distribution tag, and it contains
all the information about the book. This process is implemented
using middleware technology, as it includes both hardware and
software components. The ultimate aim of this e-libware is to
access and manage the book retrieval process in an efficient
and accurate way and to reduce manual re-entry.

KeywordsRF Reader, RF tag, Distribution tag, EPC (Electronic
product code), Middleware technology.
I. INTRODUCTION
Library automation has been enhanced using many wireless
detectors such as smart cards, barcodes, etc. Research into
several wireless detectors and sensors for different applications
is ongoing. Here, smart cards and barcodes are analysed
accordingly. A smart card is an electronic card with a dedicated
processor and memory on a chip, where data can be stored and
computation can be performed; hence it also needs an operating
system [1][2]. The operating system provides a standard way of
interchanging information, i.e., interpreting commands and data.
Cards must interface to a computer or terminal
and data. Cards must interface to a computer or terminal
through a standard card reader. Smart barcodes display
detailed information such as title, author and call number.
The smart barcodes come with an online item record for each
book. These book records can be pre-loaded if your
automation vendor performs your retrospective conversion.
The barcodes are simply a barcode number. The process is to
assign these barcodes to each book in the collection. The
advantage of barcode is to save time during the bar-coding
process because items are easily identified. The disadvantage
of this method is of higher costs and inaccuracies between
what is actually on the shelf and what title is assigned to the
barcode. Some items may have barcodes while others
somehow were skipped in the collection. The comparison of
smart and barcode is taken along with Radio Frequency
Identification. These technologies are compared with each
other, which are shown in Table 1.1.
Attribute | RFID | Barcode | Smart Cards
Dynamic Data Update | Yes | No | Yes
Multiple Simultaneous Reads | Yes | No | No
Access Mechanism | NLOS (system dependent) | LOS (optical) | NLOS
Data Storage | High | Low | High
Access Security | High | Low | High
TABLE 1.1. COMPARISON OF TECHNOLOGIES
To overcome these problems, we opt for the radio frequency
identification wireless detector [3-5], in which radio frequency
tags replace both the electromagnetic security strips and the
barcode. The design of the self check-in and check-out process
is demonstrated practically; the design can be implemented in
schools, colleges, public libraries and book stores as well.
a. The Study of RFID:
RFID is a combination of radio-frequency-based technology
and microchip technology. The RFID kit consists of a Reader
and a tag. The RF Reader, which is connected to an antenna,
lights up when it detects the integrated chip [6][7]. The RF tag
also contains an antenna. The RF
reader and tag emit radio frequency signals. The RF Reader
and Tag is shown in Fig.1. These radio signals are
synchronized with each other and tracked. This tracking
process using radio signals is called radio frequency
identification. The study of RF device is analysed according to
its frequency i.e., LF, HF, UHF and microwave. The range of
the signal, the distance taken by the signal to travel and the
application of the signal
[8-10]
is determined according to the
frequency of the signal is shown in Table1.2. Radio Frequency
Identification -based systems move beyond security to become
tracking systems that combine security with more efficient
tracking of materials throughout the library, including easier
and faster charge and discharge, inventorying, and materials
handling. Radio frequency is mainly used for anti-theft
detection, which is innovative and safe. The information is
contained on microchips in the tags affixed to library materials
such as books, compact discs and digital video discs.
Frequency Band | Range | Distance | Applications
Low Frequency | 125-148 kHz | 3 feet | Car key locks
High Frequency | 13.56 MHz | 3 feet | Library book identification
Ultra High Frequency | 915 MHz | 25 feet | Supply chain tracking
Microwave | 2.45 GHz | 100 feet | Highway toll collection
TABLE 1.2. STUDY OF RF DEVICE RANGE
Using this radio frequency technology, the materials are
identified and tracked. The technology does not require line of
sight or a fixed plane to read tags [11-12], and the distance from
the item is not a critical factor except in the case of extra-wide
exit gates.








FIG.1. RF READER AND RF TAG
The toll gates are arranged at the entrance and exits, each
covering a distance of up to two feet via two parallel exit
sensors. These toll gates emit a radio frequency that matches
the tag fixed in the books and identifies whether the book
entry has been made or not. The process of signal detection
from the tag by the reader is shown in Fig. 2.









FIG.2. THE PROCESS OF DETECTION OF SIGNAL
b. Middleware Technology:
Middleware is computer software that connects software
components or people and their applications. It consists of a
set of services that allow multiple processes running on one or
more machines to interact. This technology evolved to provide
interoperability in support of the move to coherent distributed
architectures, which are most often used to support and simplify
complex distributed applications [13-15]. It includes web servers,
application servers,
and similar tools that support application development and
delivery. Middleware is especially integral to modern
information technology based on XML, SOAP, Web services,
and service-oriented architecture. Middleware is essential to
migrating mainframe applications to client/server applications,
or to Java or internet-protocol based applications, and to
providing for communication across heterogeneous platforms.
Middleware products enable information to be shared in a
seamless real-time fashion across multiple functional
departments, geographies and applications. Benefits include
better customer service, accurate planning and forecasting,
and reduced manual re-entry and associated data inaccuracies.
c. Self check-in and check-out process:

The process simplifies the pattern of self check-in and
check-out, provides the ability to handle material without
exception for video and audio tapes, and supports high-speed
inventory and identification of items that are out of proper
order [16-21]. RFID is the latest technology being used in
libraries for easy access and management of book systems.
Using self check-out, there is a marked improvement for users,
because they do not have to carefully place materials within a
designated template and they can check out several items at the
same time; the same applies to self check-in. The administrator
is relieved further when readers are installed in book-drops.
II. PROPOSED MODEL
This process is implemented in the library, where books
are tracked and subsequently identified using radio signals.
RFID has many standards according to which applications can
be handled; some of these standards are listed here: 1. ISO 14443,
used for contactless systems; 2. ISO 15693, used for vicinity
systems such as ID badges; and 3. ISO 18000, used to specify
the air interface for a variety of RFID applications. In the
library, users select their books and update their account using
the RF reader and RF tag. Each book carries an RF tag affixed
inside it. At the entrance, a security check is made for the user
by scanning the barcode on the user's ID card. The user enters
the library and selects a book from the rack. The RF reader
used here is a low-frequency reader operating at 125 kHz,
connected to the system using an RS-232 cable. The emitted
signal is captured within a
range of up to 3 feet. The reader is powered using an external
adapter. The tag and the reader use the same type of antenna,
which can be a patch or a loop antenna. The reader is connected
to the system through HyperTerminal by the system administrator.
The system administrator updates the RF-tagged books in
the database in advance. When a book is selected by the user,
the user takes the book to the administrator, where the system
is connected to the RF reader. The automation system for
library management is designed using PHP as the front end and
MySQL as the back end, and these are connected with the RF
reader through HyperTerminal. The Windows Remote Shell
command-line option in Windows 7 and Vista (WinRS) can be
used by simply opening a command prompt and typing
winrs /?, as shown in Fig. 3. It is basically an SSH replacement
that allows remote command-line access over an encrypted
connection, and it uses the SOAP protocol.
FIG.3. THE HYPERTERMINAL CONNECTION IN
WINDOWS 7 USING WINRS /?.
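As a hedged illustration of reading tag IDs from the low-frequency reader over RS-232, the sketch below uses the pyserial library and assumes the reader emits one ASCII tag ID per line; the port name, baud rate and framing are assumptions, since the paper connects the reader through HyperTerminal.

import serial  # pyserial

# Assumed settings; adjust the port name and baud rate to the actual reader.
with serial.Serial(port="COM3", baudrate=9600, timeout=1) as reader:
    while True:
        line = reader.readline().decode("ascii", errors="ignore").strip()
        if line:
            print("Tag read:", line)  # e.g. ITBK0001, to be looked up in MySQL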
The system is updated by the system administrator. The
entry of books is done by the system administrator, i.e., the
tag ID fixed in each book is entered into the system. The tag
ID information includes the book details such as the department,
the type of book (journal, project book, compact disc or digital
video disc) and the book number. The total number of books
taken into account is 1080. The tag ID for a book is displayed
as ITBK0001, the tag ID for a journal as ITJR0001, and the
tag ID for a project book as ITPR0001. Students and staff have
ID cards for their references, i.e., they are otherwise called
membership holders. This identification card is used for
selecting books inside the library.
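A small Python sketch of decoding such a tag ID is shown below; only the ITBK, ITJR and ITPR codes are given in the text, so the remaining type codes are illustrative assumptions.

TYPE_CODES = {"BK": "Book", "JR": "Journal", "PR": "Project book",
              "CD": "Compact Disc", "DV": "DVD"}  # CD/DV codes are assumed

def parse_tag_id(tag_id):
    # Split a tag ID such as 'ITBK0001' into department, type and running number.
    department, type_code, number = tag_id[:2], tag_id[2:4], tag_id[4:]
    return {"department": department,
            "type": TYPE_CODES.get(type_code, "Unknown"),
            "number": int(number)}

# print(parse_tag_id("ITJR0001"))  # {'department': 'IT', 'type': 'Journal', 'number': 1}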
The user should present the ID card at the library entrance
and at the time of book entry. The card holder's details allow
the user to log into the system, after which the card holder's
details are shown: the card holder's name, the card holder
number and the status of the books in the account. The book
should be held near the reader so that the signal is captured as
quickly as possible. The books selected by the card holder are
then shown to the reader. A tag is affixed or pasted in each
book. When the tag is shown to the RF reader, which is a
low-frequency reader (its range is given in Table 1.2), and the
tag matches with the reader, the information about the book is
displayed: the book number and the book name. An account is
opened for the person concerned for those books. There are two
options, entry and non-entry of the book. When the entry option
is clicked, the book is immediately annexed to the books already
in the card holder's account, and the account is shown to the
card holder with the date of entry and the date of return.
The fine for the books is calculated accordingly. After an
entry has been made, when the card holder wants to return the
book, the book should be placed in the book-drop area, i.e.,
the place where a book can be returned simply by placing it
there. The entry of returned books is taken into account from
the book drop, the book is removed from the card holder's
account, and the fine is calculated according to the date of
return.
After completion of selection and picking up of the items in
the trolley, the trolley would be allowed to be passed through
a toll gate. This toll gate which emits radio frequency signals
continuously checks the items and finds out, whether the items
are already sensed or not and there itself the items can be
either added or removed. While the customer moves his
trolley out of the toll gate, the items purchased will be billed.
After coming out of the toll gate the bill is automatically
printed and becomes ready to be handed over to the customer
for payment. There the customer gets an opportunity to
recheck the items for package and then he can move the items
out of the shopping mall. Further at the time of rechecking
also, the radio signals can be used to deactivate the tag in the
products.

III. IMPLEMENTATION WORK

SCREEN SHOTS

a. Welcome page :

The home page gives complete information about the
RFID-based automation of the library management system.
Here, records of new books can be entered and the details of
books available in the library retrieved; it can otherwise be
called the login page. Books can be issued to students and their
records maintained, and the number of books issued and the
stock available in the library can be checked. The late fines of
students who return issued books after the due date can also be
maintained. Throughout the paper we have focused on presenting
information and comments in an easy and intelligible manner;
this paper will be useful for those who want to know about
library management using RFID. The user can log in with their
staff code as the username and password, and students can use
their roll number as the username and
password. An option to change the password is also provided.
The home page is displayed in Fig. 4.A, and Fig. 4.B shows a
clear view of the electronic library.













FIG.4.A .WELCOME PAGE












FIG.4. B.E-LIBRARY
b. Complete view of the automated library:

This page shows a clear view of the electronic library with
its categories. This is the Information Technology department
library of KLNCE. The entrance view of the library, the journal
section, and the DVDs and CDs are shown in Fig. 5; a Radio
Frequency tag is placed in each book and CD. The side view
and top view of the library are also shown.












FIG.5. VIEW OF ELECTRONIC LIBRARY
c. View of Materials in the e-libware:

This page shows the complete list of books in the library
with the following information:
- Tag ID of the book, book name, author name, publication name and book status.
- This can be viewed by administrator, staff and students.
Without any registration the total list of books can be
viewed as per the need.
- The availability of books is shown in the issue status,
which is shown in Fig.6.
- The books stocks will be updated by the administrator.
The storage of books will be initiated by the administrator
(i.e shortage of books) and updated (the book racks will
be updated with the lagging books).
- After updating of books, tags will be attached with the
books.

FIG.6.BOOK LISTS IN THE LIBRARY
d. Login page:

Here any user can enter into the login with the help of ID
card, whose username and password are provided in advance.

- The book selected by the user will be shown in front of
RF reader. When the tag signal matches with the reader
signal (the same frequency).
- The reader captures the tag information. The information
about the book i.e., book name, author of the book,
edition of the book, publisher of the book and tag ID of
the book, will be shown in the figure.
- To finalize the book - click the submit button. The book
will be updated in the user account with the system time
and total account of the user will be displayed with the
date of entry and date of return.
- The books will be chosen and entry of the books will be
done using the login page. The issue of the book without
any manual help is shown in Fig.7 and Fig.8.




















FIG.7. WHEN THE TAG SIGNAL MATCHES WITH THE READER














FIG.8. TO ISSUE BOOKS AND CDS-
THE ACCOUNT OF THE USER IS SHOWN
e. Issue of project and journals:

The projects and journals will be chosen by the user and
the entry will be made using the tag ID. For journals, the ID is
represented as ITJR0001. This is shown in Fig.9 and Fig.10.










FIG.9. TO ISSUE PROJECTS













FIG.10. TO ISSUE JOURNALS
f. Fine Calculation:

The book is returned by the user at the book-drop area.
From the user's account, the fine is calculated for each user
according to the date of return of the book. The fine is
collected at the time the user leaves the organization. This is
shown in Fig. 10.
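A minimal sketch of this fine calculation is given below; the per-day fine rate is an assumed value, since the text does not specify it.

from datetime import date

def calculate_fine(due_date, return_date, fine_per_day=1.0):
    # Fine grows with the number of days the book is returned after the due date.
    overdue_days = (return_date - due_date).days
    return max(0, overdue_days) * fine_per_day

# Example: due on 10 March, dropped in the book-drop on 15 March (5 days late)
# print(calculate_fine(date(2011, 3, 10), date(2011, 3, 15)))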










FIG.10. FINE CALCULATION
IV. CONCLUSIONS
Tracking of items using radio frequency is implemented in
many areas such as vehicle tracking, toll gates, smart shopping,
animal tracking, military purposes and identification of
individuals. Here, our project is used for automating books in
the library: easy automation of books without any manual work,
i.e., without notebook entry. This easy automation is done using
a radio frequency tag and reader; the antenna emits the frequency
and the tag and reader match with each other. There are thousands
of books in our library, where a tag is affixed to each book and
updated in the database. The books, journals, project books, CDs
and DVDs are automated using RF signals in a ubiquitous
environment. This can be further developed according to future
needs.






These needs can be extended to shelf arrangement of books
using radio frequency signals. The implementation work continues
with this extension, in which the method of arranging books on
the shelf can be supported by tracking an item using sixth-sense
technology, so that our presence or the book can be identified
inside the library.
ACKNOWLEDGMENT

The authors thank the management and the Dean Academic
& HOD/IT of K.L.N College of Engineering, Prof. R.T.
Sakthidaran, for his encouragement and financial assistance. We
wish to thank Dr. S. Ganapathy, Principal of K.L.N College of
Engineering, for useful discussions during the initial phase of
the work.
REFERENCES

[1] Firke, Yogaraj S. RFID Technology for library security. In
Emerging technology and changing dimensions of libraries and
information service by Sanjay Kataria and others. New Delhi,
KBD Publication.2010.
[2] Neelakandan.B, Duraisekar. S, Balasubramani.R, Srinivasa
Ragavan.S Implementation of Automated Library
Management System in the School of Chemistry
Bharathidasan University using Koha Open Source Software,
Volume 1, No1, 2010.
[3] Boss, Richord W, RFID Technology for Libraries.2009.
[4] S. H. Ching, and A. Tai, HF RFID versus UHF RFID
Technology for Library Service Transformation at City
university of Hong Kong.The Journal of Academic
Librarianship, Volume 35, Number 4, 2009
[5] Breeding, Marshall, Library Automation in a Difficult
Economy, Computers in Libraries, Vol. 29, no. 3, 2009.
[6] Bansode, Sadanand Y and Periera, Shamin (2008), A Survey
of Library Automation in College Libraries in Goa State,
India, Library Philosophy & Practice, Vol. 10, no. 2, p17.
[7] D. Hunt, A. Puglia, M. and Puglia, RFID A Guide to Radio
Frequency Identification, John Wiley & Sons, New Jersey,
2007.
[8] Adanu, Theodosia.S.A, Planning and Implementation of
the University of Ghana Library Automation Project,
African Journal of Library, Archives & Information Science,
Vol. 16, no. 2, in 2007.
[9] Simson Garfinkel Henry Holtzman understanding RFID
Technology,June 2, 2005.
[10] K. Coyle, Management of RFID in libraries, Journal of
Academic Librarianship, Vol. 31 No. 5, 2005.
[11] Sudarshan S. Chawathe,et al.Managing RFID Data
Proceedings of the 30th VLDB Conference, Toronto, Canada,
2004.
[12] Ashim A Patil, i-TEK RFID Based Library Management
Suite a White paper. By Infotek Software & Systems PLtd,
Pune, India. 02, April, 2004.
[13] Abdul Azeez, T. A, Tkm College of Engineering Library
Automation System, Annals of Library & Information
Studies, Vol. 51, no. 2, 2004.
[14] Bailey, Penny, INTERNET, Interaction and Intelligence:
Latest Developments in Library Automation, Managing
Information, Vol. 11, no. 3, 2004.
[15] Abraham. J, Computers in modernising Library Information
System and Services: Perspectives of Library Automation,
International Library Movement vol. 18, p3, 1996.
[16] Ward, Diane Marie. Helping you Buy: RFID. Computers in
Libraries 24:3.
[17] Josef, Schuermann. Information Technology Radio
Frequency Identification (RFID) and The World Of Radio
Regulations.
[18] A.Butter, RFID for Libraries A comparison of High frequency
(HF) and Ultra High Frequency (UHF) Options.
[19] Dhanalakshmi M, Uppala Mamatha. RFID Based Library
Management System.
[20] V.NagaLakshmi1 , I.Rameshbabu2, D.Lalitha Bhaskari1 A
Security Mechanism for library management system using low
cost RFID tags .
[21] V Rajasekar, M Arul Dhanakar, R Pandian, R Malliga RFID
Technology in Anna University Library Management: A
Study.

Complementary Acoustic Features for Voice Activity Detection



Ginsha Elizabeth George¹, M.R. Mahalakshmi²
¹,²Department of Electronics and Communication, Sri Muthukumaran Institute of Technology, Chennai
email: ginsha@gmail.com, mahapras2005@yahoo.co.in

Leena Mary
Department of Computer Applications, Rajiv Gandhi Institute of Technology, Kottayam, India
email: leena.mary@rit.ac.in

Abstract-Voice activity detection (VAD) is the determination of
speech/nonspeech portions in a given audio signal. In this
paper, we present a novel VAD algorithm for robust voice
activity detection. This paper proposes the use of
complementary acoustic features such as short time energy,
spectral flatness measure, the most dominant frequency and
voicing information for classifying a given frame of audio
signal. A multilayer feedforward neural network classifier
trained using these multiple features can do the
speech/nonspeech classification for the test data. The
effectiveness of the proposed VAD algorithm is evaluated for
augmented multi-party interaction (AMI) database.

Keywords: Speech Processing, Voice Activity Detection,
Multilayer Feedforward Neural Network, Complementary
features
I. INTRODUCTION
Voice Activity Detection (VAD) is the determination of
speech/nonspeech portions in a given audio signal. It is also
referred as speech/nonspeech classification in the literature.
Speech signal contains sound units such as consonants and
vowels arranged as syllables, words and sentences separated
by appropriate intervals of silence. Voice activity detection
is therefore an important step in developing speech
processing applications, and is a challenging problem for
audio signal recorded in realistic noisy environments. VAD
is very critical in many speech/audio applications including
speech coding, speech recognition, speech enhancement,
Voice over IP and audio indexing [1].
In general, VAD mainly consists of two parts: feature
extraction and classification/decision making. The decision
of determining to which category or class a given signal
belongs is made based on an observation vector, frequently
called feature vector. A feature vector may consist of one or
many features and it serves as the input to a decision rule that
assigns a feature vector to speech/nonspeech class. As the
level of background noise increases, the classifier
effectiveness degrades, thus leading to numerous detection
errors. The selection of an adequate feature vector and a
robust decision rule is required for the high performance of
VADs working under noisy conditions [1].
General block diagram of VAD is shown in Fig. 1. It
consists of i) A/D conversion ii) feature extraction process,
iii) decision module, and iv) decision smoothing stage. The
audio signal recorded using a microphone is converted to a
digital signal by an A/D converter.

Figure 1. General block schematic of VAD
Because of the slowly varying nature of the speech
signal, it is common to process speech in frames over which
the properties of the speech waveform can be assumed to
remain relatively stationary. This leads to the basic principle
of short-time analysis. For feature extraction, the speech
signal is framed and windowed using a Hamming window
[1]. A window based on a raised-cosine shape called the
Hamming window is used to ensure that the frame has no
sudden onset or offset. Each frame is multiplied with a
Hamming window, and then suitable features are extracted.
The decision module defines a method or rule for assigning
a class (speech or nonspeech) to the feature vector. The
output of the classifier or decision module is smoothed by
some decision smoothing algorithms, in order to improve
the performance.
In recent years, there have been several research efforts in
VAD. The major VAD algorithms include entropy-based
approaches, statistical modelling techniques, genetic programming
algorithms, finite state automata and likelihood ratio tests.
There are still many issues to be solved, which reduce the
efficiency and practicability of VAD.
The basic cues for VAD include the short-term energy
and zero-crossing rate [1] that have been widely used
because of their simplicity. However, they easily degrade by
environmental noise. To cope with this problem, various
other features, such as autocorrelation function based
features, spectrum based features, the power in the band-
limited region [3], pitch information [4], Mel-frequency
cepstral coefficients, delta line spectral frequencies and
features based on higher order statistics [3, 16] have been
proposed for VAD.
Using multiple features in parallel has led to more
robustness against different noises. In some previous works,
multiple features are applied in combination with some
modelling and decision algorithms such as CART [5] or
Neural Fuzzy Systems [6]. To overcome degradation of
VAD performance at varying SNR conditions, some
previous works propose noise estimation and adaptation for
improving VAD robustness [3], but these methods are
computationally expensive.
In this paper, we propose a method for VAD that takes
the advantage of multiple acoustic features which are
complementary in nature. The remaining part of the paper is
organized as follows: In Section II we present various
features explored in this study for VAD. Section III gives
proposed algorithm for VAD. Details of experimental study
with results are discussed in Section IV, followed by
conclusions in Section V.
II. FEATURES FOR VAD
A preliminary study was conducted to identify
discriminative features for VAD. Various features based on
energy, spectrum and excitation were explored. The
different types of features include the following: i) Energy
based features such as instantaneous energy, short time
energy, full band energy and root mean square energy. ii)
Spectrum based features such as spectral flatness measure
and the most dominant frequency. iii) LP residual based
features such as Hilbert envelope of LP residual and voicing
information. A frame size of 20 ms and a frame shift of 5 ms
are used for deriving all the above-mentioned features.
A. Energy
One of the basic short-time analysis functions useful for
speech signals is the short-time energy [1]. The short-time
energy, $E_n$, is defined as

$$E_n = \sum_{m=1}^{L}\big(x[m]\,w[n-m]\big)^2 = \sum_{m=1}^{L} x^2[m]\,w^2[n-m] \qquad (1)$$
Where L is the number of samples of the signal, w [n-m]
represents a time shifted window sequence, whose purpose
is to select a segment of the sequence x[m] in the
neighborhood of sample m = n [3]. As there are different
forms for energy representation, we explored different ways
for calculating energy of the speech signal.
Another method of calculating the energy of a speech
signal is the full-band energy and root mean square energy
(RMSE). The full band energy is the average sum of the
squares of the amplitude of the signal samples whereas the
RMSE is the square root of the average sum of the squares of
the amplitude of the signal samples. We observed in RMSE
that the power estimate of a speech signal exhibits distinct
peaks and valleys. While the peaks correspond to speech
activity the valleys can be used to obtain a noise power
estimate. Since we have focused on short-time (frame based)
analysis, short-time energy is selected as one of the features
for our VAD. Fig. 2 shows a speech waveform and
corresponding short time energy. It can be observed that the
energy is relatively low for the unvoiced and silence regions
compared to the high energy voiced regions.
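A minimal NumPy sketch of this short-time energy computation, using the 20 ms frame size and 5 ms shift adopted in this work, is shown below.

import numpy as np

def short_time_energy(x, fs, frame_ms=20, shift_ms=5):
    # Per-frame energy: sum of (x[m] * w[n-m])^2 over a Hamming window (eq. 1).
    frame_len = int(fs * frame_ms / 1000)
    hop = int(fs * shift_ms / 1000)
    window = np.hamming(frame_len)
    energies = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * window
        energies.append(np.sum(frame ** 2))
    return np.array(energies)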

Figure 2. Speech waveform and corresponding (b) Short time energy
B. Spectral Analysis: FFT and LP Analysis
The Discrete Fourier Transform (DFT) is a sampled (in
frequency) version of the Discrete-Time Fourier Transform
(DTFT) of a finite-length sequence, i.e., $X[k] = X(e^{j2\pi k/N})$.
An N-point DFT is defined as

$$X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j(2\pi k/N)n} \qquad (2)$$
DFT of the speech signal can be computed with Fast
Fourier Transform (FFT). Using DFT spectrum, we studied
the effectiveness of spectral based features such as spectral
flatness measure and the most dominant frequency
component of the spectrum, for VAD.
Spectral Flatness [3, 7] is a measure of the noisiness of
spectrum and is a good feature in voiced/unvoiced/silence
detection. This feature is calculated using the following
equation:
$$\mathrm{SFM}_{\mathrm{dB}} = 10\log_{10}(G_m / A_m) \qquad (3)$$

where $A_m$ and $G_m$ are the arithmetic and geometric means of
the speech spectrum, respectively. Fig. 3 shows that the SFM
remains flat and stable in the nonspeech portions compared with
the speech portions.
Another feature is the most dominant frequency (MDF)
component which can be computed by finding the frequency
corresponding to the maximum value of the spectrum
magnitude, |X[k]|. Fig.3 shows the plot of the most
dominant frequency and spectral flatness measure
corresponding to a speech waveform that shows their
effectiveness for VAD.
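The sketch below computes both spectral features for one windowed frame with NumPy; the small constant added to the spectrum is an implementation detail assumed here to avoid log(0).

import numpy as np

def sfm_and_mdf(frame, fs):
    # Spectral flatness in dB (eq. 3) and most dominant frequency of one frame.
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12
    geometric_mean = np.exp(np.mean(np.log(spectrum)))
    arithmetic_mean = np.mean(spectrum)
    sfm_db = 10.0 * np.log10(geometric_mean / arithmetic_mean)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    mdf = freqs[np.argmax(spectrum)]
    return sfm_db, mdf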
The excitation source information can be extracted from
the speech signal by performing LP analysis [8]. Whenever
there is significant excitation to the vocal tract system, it is
indicated by a large error in the LP residual as in Fig. 4 (b).
This can clearly be seen in the case of voiced speech, where
the significant excitation within a pitch period coincides
with the Glottal Closure (GC) event [9]. The GC event is the
instant at which closure of the vocal folds takes place in
each glottal cycle.

Figure 3. (a) Speech waveform (b) Most dominant
Frequency and (c) Spectral flatness measure
Even though the LP residual contains mostly the
excitation source information, there are difficulties in using
it directly for further processing. This is due to fluctuations
caused by the phase of the residual, which results in signal
of random polarity around the instants of significant
excitation. The effect of phase can be reduced by using
amplitude envelope of the analytic signal derived from the
LP residual [9].
The amplitude envelope of the analytic signal e_a[n], which is called the Hilbert envelope h_e[n] of the LP residual e[n], is given by

h_e[n] = |e_a[n]| = \sqrt{e^2[n] + e_h^2[n]}        (4)

where e_h[n] is the Hilbert transform of the LP residual.


Figure 4. (a) Speech waveform and corresponding (b) LP residual and
(c) Hilbert envelope of the LP residual
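A minimal SciPy/NumPy sketch of this step is shown below: autocorrelation-method LP analysis, inverse filtering to obtain the residual, and the Hilbert envelope of Eq. (4). The prediction order and the function name are assumptions made for the example.

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import hilbert, lfilter

def lp_residual_and_envelope(frame, order=10):
    """Return the LP residual e[n] and its Hilbert envelope h_e[n] (Eq. 4) for one frame."""
    # Autocorrelation-method LP analysis: solve the normal equations R a = r
    frame = np.asarray(frame, dtype=float)
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    inverse_filter = np.concatenate(([1.0], -a))            # A(z) = 1 - sum_k a_k z^-k
    residual = lfilter(inverse_filter, [1.0], frame)         # LP residual e[n]
    envelope = np.abs(hilbert(residual))                     # |e_a[n]| = sqrt(e^2[n] + e_h^2[n])
    return residual, envelope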

Voiced speech sounds have a periodic structure when
viewed over short time intervals. These sounds are
perceived by the auditory system as having a quality known
as pitch [4]. Pitch is a physical attribute of the acoustic waveform; it originates from the vibration of the vocal cords and varies within a limited range for every speaker [12].
Regular peaks of Hilbert envelope correspond to the GC
events of the vocal tract [9]. The distance between these peaks denotes the period of vocal cord vibration, and the
corresponding frequency represents the fundamental
frequency of vibration. Pitch contour shown in Fig. 5 gives
the information about voicing or periodicity in speech
signal. Therefore, this information may be useful for VAD.


Figure 5. (a) Speech waveform and corresponding
(b) Pitch contour and (c) Voicing Information.
C. Selection of feature set
Short time energy loses its efficiency in noisy conditions
especially in lower SNRs. Hence other features are
considered. The second feature is Spectral Flatness Measure
(SFM). Besides these two features, it was observed that the
most dominant frequency component [3, 10] of the speech
frame spectrum can be very useful in discriminating
between speech and nonspeech frames. The last feature
extracted is the voicing information from pitch.
In general, none of the above mentioned features will
give perfect solution to the VAD problem due to the varying
nature of human speech and the background noise.
Moreover the features should be robust to variations in
environment such as channels, speakers and transducers.
One feature alone cannot satisfy these requirements for
VAD. Hence it is important to combine various features to
extract complementary information from them. Therefore
multiple features are effectively combined in this work,
rather than using each one independently.
III. PROPOSED VAD ALGORITHM
A. Selection of database
Continuous multispeaker speech with varying
background noise is addressed for VAD in this work. It is a
challenging problem to classify speech/nonspeech in
continuous speech compared with isolated words or digits.
The audio recordings taken from Augmented Multi-party
Interaction (AMI) database are used in this work.
AMI database is a real-time audio recording of meetings
in an instrumented meeting room. This database is collected
by University of Edinburg and Idiap Research Institute [11].
Although the AMI meeting corpus was created for the use
of a consortium that is developing meeting browsing
technology, it is designed to be useful for a wide range of
research areas. Since sixteen microphones are placed at
different distances from the speakers in this room, the
speech amplitude varies dramatically in this database. AMI
includes speech with background noise due to murmuring, coughing, breathing, movements of chairs, paper, fans, etc.

Figure 6. (a) Noisy speech from the AMI database, (b) most dominant frequency, (c) short-time energy and (d) spectral flatness measure
In this work, the robustness of various acoustic features useful for VAD is studied for speech recorded in a realistic environment. The features discussed in Section II, extracted on a frame-by-frame basis from the noisy speech taken from the AMI database, are shown in Fig. 6.
VAD is essentially a binary classification problem.
Therefore, discriminative machine learning techniques such
as artificial neural networks (ANN) are effective for solving
it [6]. A classifier can accommodate as many features as
inputs, and can be trained using hand-labeled speech. A 30-minute meeting recording (with multiple speakers and varying background noise) is selected from the AMI database for training a multilayer feedforward neural network (MLFFNN) classifier. The schematic
diagram of the proposed system is shown in Fig. 7.


Figure 7. Schematic diagram of the proposed system
B. Proposed Voice Activity Detection Algorithm
1. Set Frame size = 20ms and Frame shift = 5ms.
Apply a Hamming window on each speech frame.
Compute the number of frames nframes.
2. For n = 1 to nframes
i) Compute short time energy
ii) Compute pitch information for voicing/
unvoicing detection
iii) Compute FFT
a) Compute spectral flatness measure,
SFM (n).
b) Compute the most dominant frequency,
MDF (n).
3. Normalize each feature dataset.
4. Initialise appropriate MLFFNN structure with
random weights.
5. Train the MLFFNN using hand-labeled output
6. Test the MLFFNN classifier with a test dataset.
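As a rough illustration of steps 3 to 6, the sketch below trains a small feedforward classifier on a precomputed feature matrix with scikit-learn. The MLPClassifier with hidden layers (16, 8) is only an approximation of the 4L 16N 8N 1N MLFFNN described in Section IV, and all variable and function names are assumptions made for the example.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

# X: (n_frames, 4) matrix of [short-time energy, voicing, SFM, MDF] per 20 ms frame (5 ms shift)
# y: hand-labelled flags for the same frames, speech = 1 / nonspeech = 0
def train_vad(X, y):
    scaler = MinMaxScaler()                              # step 3: normalise each feature to [0, 1]
    Xn = scaler.fit_transform(X)
    clf = MLPClassifier(hidden_layer_sizes=(16, 8),      # stand-in for the 4L 16N 8N 1N network
                        activation='tanh', max_iter=2000)
    clf.fit(Xn, y)                                       # step 5: supervised training
    return scaler, clf

def classify_frames(scaler, clf, X_test, threshold=0.5):
    """Step 6: frame-level speech/nonspeech decision with 0.5 as the threshold."""
    probs = clf.predict_proba(scaler.transform(X_test))[:, 1]
    return (probs > threshold).astype(int)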

IV. EXPERIMENTAL RESULTS
A conference recording from AMI database is used for
demonstrating the effectiveness of the proposed VAD
algorithm. A 30-minute recording is used for training and another 5-minute recording is used for testing. In the proposed method, we use the four acoustic features discussed in Section II; these features are extracted, and the frames are hand-labeled, for both training and testing. The features and labels of the training data are then used to train the MLFFNN classifier with structure 4L 16N 8N 1N, where L denotes neurons with a linear activation function, N denotes neurons with a nonlinear activation function, and the numerals denote the number of neurons in each layer. This MLFFNN classifier is then tested using features derived from the test data. The classifier output value for each frame is used for the speech/nonspeech decision with the middle value as the threshold, i.e., any frame with MLFFNN output greater than 0.5 is classified as speech.

TABLE I. PERFORMANCE OF PROPOSED VAD ALGORITHM

Method       Classifier Output    Smoothed Output
                                  5-point    7-point    11-point
Accuracy     83.5%                84.5%      84.7%      84.8%

The experimental results given in Table I show a classification accuracy of 83.5%, as in column 2. The classifier output is then smoothed by 5-point, 7-point and 11-point median filtering, which gives a slight improvement in the results, as shown in columns 3, 4 and 5 of Table I. Accuracy refers to the percentage of correctly detected frames out of the total number of test frames. Fig. 8 shows the test speech in (a) along with the output of the MLFFNN classifier (b) and the smoothed output (c).
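A one-function sketch of this smoothing step is given below, using SciPy's median filter; the function name and the 7-point default are assumptions made for the example.

import numpy as np
from scipy.signal import medfilt

def smooth_decisions(decisions, kernel_size=7):
    """Median-filter the frame-level 0/1 VAD decisions (5-, 7- or 11-point window)."""
    smoothed = medfilt(np.asarray(decisions, dtype=float), kernel_size=kernel_size)
    return (smoothed > 0.5).astype(int)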


Figure 8. (a) Noisy speech (b) Classifier output
(c) Smoothed output of classifier.


V. CONCLUSION
VAD is a challenging problem for speech recorded in
realistic environment [13]. Multiple acoustic features
derived from energy, spectrum and excitation were
employed in this work to address this problem. AMI
database used for the experimental studies provided a
realistic audio signal for the VAD task. The use of
MLFFNN classifier avoids the need for majority voting and
adaptive thresholds used in other algorithms [14, 15, 16].
One disadvantage of the proposed approach is the need for
supervised training of the classifier.
REFERENCES

[1] Lawrence R. Rabiner and Ronald W. Schafer, Introduction to Digital Speech Processing, Foundations and Trends in Signal Processing, Vol. 1, Nos. 1-2, pp. 1-194, 2007.
[2] J. Ramírez, J. M. Górriz and J. C. Segura, Voice activity detection.
Fundamentals and speech recognition system robustness, Robust
Speech Recognition and Understanding, Book edited by: Michael
Grimm and Kristian Kroschel, ISBN 987-3-90213-08-0, pp.460, I-
Tech, Vienna, Austria, June 2007.
[3] M. H. Moattar, M. M. Homayounpour and Nima Khademi Kalantari, A new approach for robust real-time voice activity
detection using spectral pattern, Proc. of ICASSP 2010, pp.4478-
4481, 2010.
[4] W. Hess, Pitch determination of speech signals, Berlin
Heidelberg,New York: Springer-Verlag, 1983
[5] W. H. Shin, "Speech/non-speech classification using multiple features
for robust endpoint detection," In Proceeding of ICASSP 2000, pp.
1399-1402, 2000.
[6] G. D. Wuand and C. T. Lin, "Word boundary detection with Mel
scale frequency bank in noisy environment," IEEE Trans. Speech and
Audio Processing, vol. 8, no. 5, pp. 541-554, 2000.
[7] R. E. Yantorno, K. L. Krishnamachari and J. M.Lovekin, The
spectral autocorrelation peak valley ratio (SAPVR) A usable speech
measure employed as a cochannel detection system, Proc. IEEE Int.
Workshop Intell. Signal Process, Hungary, pp. 193-197, 2001.
[8] John Makhoul,Linear prediction:A tutorial review,Reprinted from
Proc.IEEE,vol.63,no.4, pp. 561-580,Apr.1975.
[9] K. Sri Rama Murty, B. Yegnanarayana and S. Guruprasad, Voice
activity detection in degraded speech using excitation source
information, Proc.of INTERSPEECH, Antwerp, August 2007.
[10] M. H. Moattar and M. M. Homayounpour, A simple but efficient
real-time voice activity detection algorithm, Proc. of Eusipco 2009,
Glasgow, Scotland, pp. 2549-2553, 2009.
[11] www. amiproject.org, Website of AMI corpus
[12] Soheil Shafiee, Farshad Almasganj, Ayyoob Jafari, Speech/non-
speech segments detection based on chaotic and prosodic features,
Proc. of Interspeech 2008, Incorporating SST 2008, Australia, Sept.
2008.
[13] Benyassine, E. Shlomot, H. Y. Su, D. Massaloux, C. Lamblin and J.
P. Petit, "ITU-T Recommendation G.729 Annex B: a silence
compression scheme for use with G.729 optimized for V.70 digital
simultaneous voice and data applications," IEEE Communications
Magazine , vol. 35, pp. 64-73, 1997.
[14] T. Kristjansson, S. Deligne and P. Olsen, Voicing features for robust
speech detection, Proc.of Interspeech, pp. 369-372, 2005.
[15] M. H. Savoji, "A robust algorithm for accurate endpointing of speech," Speech Communication, pp. 45-60, 1989.
[16] K. Li, N. S. Swamy and M. O. Ahmad, An improved voice activity
detection using higher order statistics, IEEE Trans. Speech Audio
Process., 13, pp. 965-974, 2005.

A Modified Algorithm for Thresholding and Detection of Facial Information from Color Images Using Color Centroid Segmentation and Contourlet Transform
Jos Angel George, Jacob A. Abraham, Anna Vinitha Joseph, Anuja S. Pillai,
Department of Electronics and Communication, AmalJyothi College of Engineering, Kottayam, Kerala.
Email: josangel555@gmail.com, anuja421@gmail.com

Sunish Kumar O S,
Asst. Professor, Department of Electronics and Communication Engineering, AmalJyothi College of Engineering, Kottayam, Kerala.
Email: ossunishkumar@amaljyothi.ac.in



Abstract: Human face detection plays an important role in many application areas such as video surveillance, human-computer interfaces, face recognition, face search and face image database management. In human face detection applications, the face region usually forms only a small part of the image. Preliminary segmentation of images into regions that contain "non-face" objects and regions that may contain a "face" can greatly accelerate the process of human face detection. This can be done using skin color segmentation, where the given image is segmented by color into 'skin regions' and 'non-skin regions'; the skin regions may contain a face while the other regions do not. Color-information-based methods attract great attention because color is an obvious and robust visual cue for detection. This paper proposes a method based on RGB color centroids segmentation (CCS) for face detection. The paper has two parts: the first is color image thresholding based on CCS to perform skin color segmentation, and the second is detection of the human face from the detected skin regions. The CCS method has the shortcoming that it fails when the skin color of the subject lacks chroma, which happens especially with subjects having very dark or very light skin tones. This shortcoming of CCS can be overcome using the Contourlet Transform. In this paper, we therefore pursue a two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information.

Index terms: face detection, color centroid segmentation,
thresholding, contourlet transform, skin color segmentation.

I. INTRODUCTION

Nowadays, many application technologies developed for secure access control are based on biometric recognition, such as fingerprints, iris patterns and face recognition. Along with the development of these technologies, computer control plays an important role in making biometric recognition more economically feasible. Face recognition is a major research direction in this field. In recent years face recognition has become an increasingly popular research direction, and it has many applications such as financial transactions, monitoring
system, credit card verification, ATM access, personal PC
access, video surveillance etc. There are numerous
researches going on in this field all over the world. All of
the research studies find its basics in CBIR [5], [6].
This paper proposes a new method of face detection based on Color Centroids Segmentation (CCS) [1] and the Contourlet Transform (CT) [7]. This method is able to handle a wide range of variations in color image sequences, various backgrounds, various lighting conditions and various skin tones to detect the face region efficiently. The rest of this paper is organized as follows. Section II describes how to create the CCS model and use it for thresholding. Section III describes thresholding based on the CT model. Section IV uses the proposed CCS model to detect the face region. Section V presents the experimental results. Section VI gives the conclusions and the problems for future work.

II. COLOR IMAGE THRESHOLDING BASED ONCCS

This section introduces how to threshold the color image with the CBH (color barycenter hexagon) model by analyzing the constructed color triangle and the distribution of its centroid region. It describes how to transform the RGB components of the 3-D RGB color space into a 2-D polar coordinate system, and how to use multiple thresholds to segment the centroid region. With this analysis and processing, the colors of an image can be clustered into 2 to 7 colors using 2 to 7 thresholds, as required, with a better effect than traditional methods.

A. Color Triangle

In image processing the RGB, YCbCr, HSV, HSI, etc. color spaces are widely used. These color spaces use three components to carry the color information; e.g., the RGB color space consists of R, G and B components. This paper transforms the 3-D color space into a 2-D coordinate system by means of a color triangle (Fig. 1). To create the color triangle, a standard 2-D Cartesian coordinate system is used to describe the R, G and B values, which is then transformed to a polar coordinate system as shown in Fig. 1:
R: r(θ_R) = r_R,  (θ_R = 90°,  0 ≤ r_R ≤ 255)
G: r(θ_G) = r_G,  (θ_G = 210°, 0 ≤ r_G ≤ 255)
B: r(θ_B) = r_B,  (θ_B = 330°, 0 ≤ r_B ≤ 255)

The color triangle can be created by the following steps:

Step 1: create a standard 2-D polar coordinate system;
Step 2: create three color vectors to represent the R, G and B colors; each vector's value range is [0, 255] and the vectors are separated from one another by 120°;
Step 3: connect the three apexes.

After these steps the color triangle is created as in Fig. 1. For different R, G and B values the shape of the triangle changes, but no matter how the R, G and B values change, the main structure is fixed.


B. Color Centroids Hexagon Region Distributing

Because the directions of the R, G, B vectors are fixed and their values change from 0 to 255, different combinations of R, G, B values create different colors and change the shape of the color triangle. Triangles of different shape have different centroids, and the region over which the centroids of the color triangle are distributed is the hexagon shown in Fig. 2. This hexagon is divided into 7 regions: R (Red), G (Green), B (Blue), C (Cyan), M (Magenta), Y (Yellow) and L (Luminance, achromatic). In Fig. 2 we use seven threshold curves as the dividing lines for thresholding. By observing the relation between a color and the corresponding centroid position of its color triangle, we find that if the R, G and B values are close to each other, no matter whether they are small or large, the pixel carries only luminance information (weak color information), so the centroids of the corresponding color triangles lie in a circular region (the L region). The other six color regions reflect the color character of the R, G and B combination.
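To make the construction concrete, the following Python sketch maps one RGB pixel to the centroid of its color triangle, expressed as an angle and a radius in the hexagon. The axis angles follow Section II.A, while the function and constant names are assumptions made for the example.

import numpy as np

# Angles of the R, G and B axes in the colour triangle (degrees), as defined in Section II.A
AXIS_ANGLES = (90.0, 210.0, 330.0)

def color_centroid(r, g, b):
    """Centroid of the colour triangle of one RGB pixel, returned as (angle in degrees, radius)."""
    xs, ys = [], []
    for value, angle_deg in zip((r, g, b), AXIS_ANGLES):
        theta = np.deg2rad(angle_deg)
        xs.append(value * np.cos(theta))
        ys.append(value * np.sin(theta))
    cx, cy = sum(xs) / 3.0, sum(ys) / 3.0          # centroid of the three apexes
    radius = np.hypot(cx, cy)
    angle = np.degrees(np.arctan2(cy, cx)) % 360.0
    return angle, radius

# A pure red pixel lands near the upper (Red) peak; R = G = B lands at the origin (L region)
print(color_centroid(255, 0, 0))      # approx (90.0, 85.0)
print(color_centroid(128, 128, 128))  # radius approx 0 -> achromatic, L region

A pixel can then be assigned to the L region when its radius falls below the r_L threshold, and otherwise to one of the six color regions according to its angle, which is exactly what the thresholding described next does.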

C. Color Centroids Segmentation Thresholds Acquisition
for skin color segmentation

The L region is usually not the goal region, and existing methods are usually not efficient at separating white and black regions. The L region is a noise region, so clustering values of this kind into one region effectively removes the influence of white, black and other achromatic regions. Let r_L be the threshold of the L region and θ the angle; the threshold curve is then

r(θ) = r_L    (0° ≤ θ ≤ 360°)        (2)

By observing the distribution of the color centroids in the hexagon region in Fig. 3(b), it can be seen that the centroid distributions of different colors are different. Only when R = G = B is the centroid at the origin of the hexagon region; the color information becomes stronger in proportion to the distance from the origin. For example, (R, G, B) = (255, 0, 0) reflects the red color and its centroid lies at the upper peak of Fig. 2. By analyzing the centroid distribution of the face region, we can see that the color of the face usually lies in the Red region and leans toward the Yellow region. Thus we can choose θ_R and θ_Y as thresholds for clustering the image into face region and non-face region. Using fixed values for thresholding, however, cannot give an ideal result under different image conditions such as varying skin tone, white balance and lighting conditions. Therefore, an automatic threshold acquisition method is proposed here to acquire the thresholds for any given image. Finding the segmentation borderline of the colors directly in Fig. 3(b) is not easy, so to display the distribution of the centroids more clearly, we transform the polar coordinate system to a Cartesian coordinate system as in Fig. 3(c). In Fig. 3(c) the horizontal axis is θ (θ ∈ (0°, 360°]), the vertical axis shows the percentage of regions of the image having the given angle, and the six vertical color lines are the color threshold curves (θ_M, θ_R, θ_Y, θ_G, θ_C and θ_B). The face region belongs to the Red and Yellow regions (θ_face ∈ [θ_M, θ_Y]).


(a) Original image (b) Color centroids distributing



(c) Color centroids distributing conversion and thresholds selection
Figure 3
The color centroid distribution and threshold selection for sample image 1 are shown in the figure above.
For segmentation of the face region, accurate threshold values must be calculated. Because the distribution histogram is not smooth, a filter is used to smooth its shape for analysis. By observing many faces, including images taken under different conditions, we allow the threshold curves θ_M and θ_Y to move left or right by 20° to find the best values, and the left and right valley bottoms are then found as θ_M and θ_Y, respectively, within this fixed range by a histogram analysis method. For calculating r_L we fix the range from 3 to 20 and calculate the average value of every valley bottom (in a color region only one minimum value is calculated). In Fig. 3, (a) is the original image and (b) shows the distribution of the color centroids. By transforming Fig. 3(b) to Fig. 3(c) and calculating θ_M, θ_Y and T_L, we obtain the pre-face region and the binary image:

r(θ) = r_face,    (θ ∈ [θ_M, θ_Y], r ∈ [T_L, 85])

III. THRESHOLDING USING CONTOURLET
TRANSFORMATION

Even though the CCS method of skin color segmentation is very efficient, it fails when the skin tone of the subject lacks chroma. This is because skin tones which lack chroma lie in the L region of the centroid hexagon; since the L region is ignored while thresholding with the CCS algorithm, CCS fails to detect such skin regions. To overcome this shortcoming of CCS, we propose a correcting method based on the Contourlet Transform. In this step we compute the contourlet transform of the given image; the transformed image is processed by the algorithm given in the following section.

IV. FACE DETECTION

A. Thresholding

1. Thresholding Based on CCS

Using CCS can overcome the shortcomings of existing methods based on color analysis, because it can ignore the influence of color intensity and luminance: it only considers the direction (angle) of the color and lets the dark and light regions fall into one cluster, so that some noise is ignored. Using the threshold selection method of Section II, the thresholds θ_M, θ_R and T_L are selected to obtain the threshold curves for thresholding; the other thresholds keep their initial values and are not calculated here. In this way the binary image of Fig. 4(b) is obtained. From the result we can see that the white background region (wall), the pale clothing region and the dark clothing region are clustered to black and only the goal region is clustered to white. This ignores many noise regions, especially excessively bright or dark regions and regions of different color. However, regions whose color is near the face color are still clustered into the pre-face region.
2. Thresholding based on Contourlet Transform

The basic steps involved in the proposed CBIR system include database processing and resizing, creation and normalization of the feature database, comparison and image retrieval. The steps of the proposed algorithm are as follows.

1. Decompose each image in the contourlet domain.

2. Compute the standard deviation (SD) of the CT-decomposed image on each directional sub-band. The standard deviation is given as

\sigma_k = \sqrt{ \frac{1}{M N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( W_k(i, j) - \mu_k \right)^2 }

where
W_k = coefficients of the k-th CT-decomposed sub-band,
\mu_k = mean value of the k-th sub-band,
M x N = size of the CT-decomposed sub-band.

The resulting SD vector, with one entry per directional sub-band, is used as the feature vector of the image.
3. Normalize the SD vector to range [0 1] for every image
in the database

4. Apply query image and calculate the feature vector as
given in steps 2 to 3.

5. Calculate the similarity measure Manhattan distance


6. Retrieve all relevant images to query image based on
minimum Manhattan distance.
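The sketch below illustrates steps 2 to 6 in Python, assuming the contourlet sub-band coefficient arrays have already been produced by some CT decomposition; the function names and the top-k parameter are assumptions made for the example.

import numpy as np

def sd_feature_vector(subbands):
    """Per-sub-band standard deviation, normalised to [0, 1] (steps 2-3)."""
    sds = np.array([np.std(w) for w in subbands])   # np.std matches the sigma_k definition above
    rng = sds.max() - sds.min()
    return (sds - sds.min()) / rng if rng > 0 else np.zeros_like(sds)

def retrieve(query_vec, db_vecs, top_k=5):
    """Rank database images by Manhattan (L1) distance to the query vector (steps 5-6)."""
    dists = np.array([np.sum(np.abs(query_vec - v)) for v in db_vecs])
    return np.argsort(dists)[:top_k]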

3. Correction Using Nonlinear Thresholding

To denoise the incorrect regions, this paper adopts a nonlinear thresholding method to correct the binary image that was thresholded by CCS. Since the frequency of the gray values reflects their distribution more exactly, a nonlinear transform of the original image can divide the gray values, based on luminance information, and quantize them into two clusters; finally, an inverse transform gives the binary image. Fig. 4(a) and (c) are the original image and the binary image obtained by nonlinear thresholding, and the formula is as follows:



where k is the number of clusters; here k = 2. From Fig. 4(c) it can be seen that, with the nonlinear thresholding, the white background of the binary image has been clustered to the same value as the face region, so it is hard to
separate the face region from the background. This binary image can, however, overcome some shortcomings of CCS, for example the region shown in Fig. 4. Sometimes a bright region also reflects some color and its centroid falls in the goal region, although it is in fact a noise region, for example the hair region. Therefore, the image processed by (5) is used to correct the CCS-based result with an AND operation, which gives the ideal result as follows:

f_Final(x, y) = (f_CCS(x, y) + f_Contour(x, y)) × f_Nonlinear(x, y)

Fig. 4(d) is the corrected binary image; the region is corrected effectively. Regions whose color is near the face color, however, cannot be deleted.


Figure 4. Thresholding Result


B. Pre-face Region Decision

After obtaining the ideal binary image, the white regions are the wait-decision regions, because they may include face, hand and other skin regions as well as other regions of similar color (the white regions in Fig. 5(b)). All wait-decision regions are analyzed in a selection process and some of them are accepted according to aspect ratio and size.

Accepted by aspect ratio:

C = 4S / L^2        (7)

Here C is the aspect-ratio measure, L is the length of the boundary and S is the area of the wait-decision region. If C ∈ [1, 1.7], the region is accepted.

Accepted by size:

After acceptance by aspect ratio, the average area S_AVE of all wait-decision regions, excluding the largest and the smallest regions, is calculated. If S_FACE ∈ (0.8 S_AVE, 1.2 S_AVE), the region is accepted.
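A small Python sketch of this selection step is given below. It applies the aspect-ratio measure and the limits exactly as stated above; the region representation (a dict with 'area' and 'perimeter' keys) and the function name are assumptions made for the example.

def accept_regions(regions):
    """Filter wait-decision regions by the aspect-ratio test (Eq. 7) and the size test."""
    # Aspect-ratio test: C = 4*S / L^2 must lie in [1, 1.7]
    by_ratio = [r for r in regions
                if 1.0 <= 4.0 * r['area'] / (r['perimeter'] ** 2) <= 1.7]
    if len(by_ratio) <= 2:
        return by_ratio
    # Size test: average area of the remaining regions, excluding the largest and the smallest
    areas = sorted(r['area'] for r in by_ratio)
    s_ave = sum(areas[1:-1]) / (len(areas) - 2)
    return [r for r in by_ratio if 0.8 * s_ave < r['area'] < 1.2 * s_ave]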

C. Face Region Tracking

Face tracking is done to indicate the position of the detected faces in the given image. This is done by drawing a green circle around each detected face.

Figure 5. Face tracking model

After the face region is fixed, a circle is drawn around it as follows:

Step 1: divide the face region into 9 blocks and threshold each block as in Fig. 6.
Step 2: remove noise regions with a median filter.
Step 3: locate the eye and mouth regions and then calculate the area centroids of the eyes and the mouth.
Step 4: draw the circumcircle (blue circle in Fig. 6) of the triangle formed by the three centroids, then use a concentric circle of 1.5 times its radius (green circle in Fig. 6) to mark the face.

V. EXPERIMENTAL RESULTS

Figure 7 shows an example with a complex background in an outdoor situation: (a) is the original image, (b) is the thresholding result of the proposed method, and (c)~(e) are the results of traditional thresholding methods. Comparing these thresholding results, we can easily conclude that the proposed method is better than the others.

VI. CONCLUSION

From the experimental results and analysis, the proposed method can detect and track faces effectively under varied conditions. Our detection algorithm takes the color image and applies the CCS method and the CT method to detect the valid skin regions. The binary image quantized by nonlinear thresholding is then used to correct the binary image thresholded by the CCS and CT methods. Finally, a close operation and filtering are used to obtain the ideal binary image. For tracking faces, a concentric circle of 1.5 times the radius of the circumcircle of the triangle formed by the facial features is used to mark the face. All the experimental results show that the proposed method can obtain ideal detection and tracking results under complex backgrounds, multiple faces and color influence. Future work includes making the CT method faster and achieving a better thresholding effect for detection.

In the future, we need to complete the following items to improve the performance of our method further:
- Overcome changes of pose and viewpoint.
- Use motion analysis for effective prediction.
- Integrate the detection and tracking information to build a face model for real-time recognition.


Fig.6: Face region decision of sample image


Figure. 7: Result Comparison

VII. REFERENCES

[1] Jun Zhang, Qieshi Zhang, and Jinglu Hu: "RGB Color Centroids
Segmentation (CCS) for Face Detection" IEEE Transaction on
Image Processing, Volume (9), Issue (II), April 2009

[2] T. Gevers, and A.W.M. Smeulders, "Combining Color and Shape
Invariant Features for Image Retrieval," IEEE Transactions on
Image Processing, Vol. 9, pp. 102-119, Jan. 2000.

[3] Y. T. Pai, S. J. Ruan, M. C. Shie, and Y.C. Liu, "A Simple and
Accurate Color Face Detection Algorithm in Complex
Background," 2006 IEEE International Conference on
Multimedia and Expo, pp.1545-1548, Jul. 2006.

[4] Q. Zhang, J. Zhang and S. Kamata: "Face Detection Method Based
on Color Barycenter Hexagon Model", 2008 International Multi-
Conference of Engineers and Computer Scientists, Vol. 1, pp.
655-658, Mar. 2008.
[5] Christopher C Yang, Content Based Image Retrieval: A
comparison between query by example and Image browsing map
approaches, Department of Systems Engineering and
Engineering Management, The Chinese university of Hong Kong,
Hong Kong.
[6] Minakshi Banerjee 1, Malay K. Kundu, Machine Intelligence,
Edge based features for content based image retrieval, Unit,
Indian Statistical Institute, 203, B. T. Road, Kolkata 700 108,
India
[7] Ch.Srinivasa Rao , S. Srinivas kumar, B.N.Chatterji Content
Based Image Retrieval using Contourlet Transform Research
scholar, ECE Dept., JNTUCE, Kakinada, A.P, India. Professor
of ECE, JNTUCE, Kakinada, A.P, India. Former Professor,
E&ECE Dept., IIT, Kharagpur, W.B, India.
A Novel Architecture for Distant Patient
Monitoring System using NI Lab VIEW
Sunish kumar O.S,
Asst. Professor, Department of Electronics and
Communication Engineering,
AmalJyothi College of Engineering, Kottayam, Kerala,
India
ossunishkumar@amaljyothi.ac.in



Sterin Thomas, Stephen varughese, Sumi R., Treasa
Varghese,
Department of Electronics and Communication
Engineering, AmalJyothi College of Engineering,
Kottayam, Kerala, India

Abstract - This paper discusses virtual instrumentation and its extensive applications in the medical field. Numerous research efforts are going on all over the world for efficient and effective medical data transfer for monitoring patients. Since the data are medical data, the transfer mechanism should provide a highly secure and error-free environment for efficient data transfer. This paper suggests a novel architecture for a distant patient monitoring system by exploiting the web publishing tool of LabVIEW.
Key words- Virtual Instruments, Lab VIEW, Patient
monitoring, Web publishing tool.
I. INTRODUCTION
Virtual instrumentation is an interdisciplinary field that merges sensing, hardware and software technologies in order to create flexible and sophisticated instruments for control and monitoring applications. A virtual instrument can also be defined as "an instrument whose general function and capabilities are determined in software". This flexibility is possible because the capabilities of a virtual instrument depend very little on dedicated hardware; commonly, only an application-specific signal conditioning module and the analog-to-digital converter are used as the interface to the external world. Therefore, the simple use of computers or specialized onboard processors for instrument control and data acquisition cannot be defined as virtual instrumentation. An increasing number of biomedical applications use virtual instrumentation to improve insight into the underlying nature of complex phenomena and to reduce the cost of medical equipment and procedures. Measurements in the medical field are peculiar, as they deal with a terribly complex object, the patient, and are performed and managed by another terribly complex instrument, the physician. Finally, we present a distant patient monitoring system which measures the patient parameters, processes the information and displays it on a web server. The web publishing tool of LabVIEW is exploited for the transfer of medical data over the internet. In the LabVIEW environment only a few mouse clicks are required to do this.
II. A BRIEF HISTORY
A history of virtual instrumentation is characterized by
continuous increase of flexibility and scalability of
Measurement equipment. The Instrumentation field has
made a great progress toward contemporary computer
controlled, user-defined, sophisticated measuring
equipment. Instrumentation had the following phases:

Analog measurement devices,
Data Acquisition and Processing devices,
Digital Processing based on general purpose
computing platform, and
Distributed Virtual Instrumentation.

The first phase is represented by early "pure" analog
measurement devices, such as oscilloscopes or EEG
recording systems. They were completely closed dedicated
systems, which included power suppliers, sensors,
translators and displays. They required manual settings,
presenting results on various counters, gauges, CRT
displays, or on the paper. Further use of data was not part of
the instrument package, and an operator had to physically
copy data to a paper notebook or a data sheet. Performing
complex or automated test procedures was rather
complicated or impossible, as everything had to be set
manually. Second phase started in 1950s, as a result of
demands from the industrial control field. Instruments
incorporated rudiment control systems, with relays, rate
detectors, and integrators. That led to creation of
proportional-integral-derivative (PID) control systems,
which allowed greater flexibility of test procedures and
automation of some phases of measuring process.
Instruments started to digitalize measured signals, allowing
digital processing of data, and introducing more complex
control or analytical decisions. However, real-time digital
processing requirements were too high for any but an
onboard special purpose computer or digital signal
processor (DSP). The instruments were still standalone, vendor-defined boxes. In the third phase, measuring instruments became computer-based and began to include interfaces that enabled communication between the instrument and the computer. This relationship started with
the general-purpose interface bus (GPIB) originated in
1960s by Hewlett-Packard (HP), then called HPIB, for
purpose of instrument control by HP computers. As the
speed and capabilities of general-purpose computers
advanced exponentially general-purpose Computers became
fast enough for complex real-time measurements. It soon
became possible to adapt standard, by now high-speed
computers, to the online applications required in real-time
measurement and control. New general-purpose computers
from most manufactures incorporated all the hardware and
much of the general software required by the instruments for
their specific purposes. The main advantages of standard
personal computers are low price driven by the large
market, availability, and standardization.
Nearly all of the early instrument control programs were
written in BASIC, because it had been the dominant
language used with dedicated instrument controllers. It
required engineers and other users to become programmers
before becoming instrument users, so it was hard for them to
exploit potential that computerized instrumentation could
bring. Therefore, an important milestone in the history of
virtual instrumentation happened in 1986, when National
Instruments introduced Lab VIEW 1.0 on a PC platform.
Lab VIEW introduced graphical user interfaces and visual
programming into computerized instrumentation, joining
simplicity of a user interface operation with increased
capabilities of computers. Today, the PC is the platform on
which most measurements are made, and the graphical user
interface has made measurements user-friendlier. The Latest
version (LABVIEW 2010) is out in the markets and on the
internet.
III. VIRTUAL INSTRUMENTS
Simply put, a Virtual Instrument (VI) is a Lab VIEW
programming element. A VI consists of a front panel, block
diagram, and an icon that represents the program. The front
panel is used to display controls and indicators for the user,
and the block diagram contains the code for the VI. The
icon, which is a visual representation of the VI, has
connectors for program inputs and outputs. Lab VIEW uses
the VI. The front panel of a VI handles the function inputs
and outputs, and the code diagram performs the work of the
VI. Multiple VIs can be used to create large scale
applications; in fact, large-scale applications may have several hundred VIs.

A virtual instrument is composed of the following blocks:
Sensor Module,
Sensor Interface,
Medical Information Systems Interface,
Processing Module,
Database Interface, and
User Interface.


Figure 1 The general architecture of a virtual instrument.

The sensor module detects physical signal and transforms
it into electrical form, conditions the signal, and transforms
it into a digital form for further manipulation. Through a
sensor interface, the sensor module communicates with a
computer. Once the data are in a digital form on a computer,
they can be processed, mixed, compared, and otherwise
manipulated, or stored in a database. Then, the data may be
displayed, or converted back to analog form for further
process control. Biomedical virtual instruments are often
Integrated with some other medical information systems
such as hospital information systems. In this way the
configuration settings and the data measured may be stored
and associated with patient records
IV. INTRODUCTION TO LAB VIEW
Programmers develop software applications every day in
order to increase efficiency and productivity in various
situations. Lab VIEW, as a programming language, is a
powerful tool that can be used to help achieve these goals.
Lab VIEW (Laboratory Virtual Instrument Engineering
Workbench) is a graphically-based programming language
developed by National Instruments. Its graphical nature
makes it ideal for test and measurement (T&M),
automation, instrument control, data acquisition, and data
analysis applications. This results in significant productivity
improvements over conventional programming languages.
National Instruments focuses on products for T&M, giving
them a good insight into developing Lab VIEW. Lab VIEW
programs are called virtual instruments, or VIs, because
their appearance and operation imitate physical instruments, such as oscilloscopes and multimeters. LabVIEW contains a
comprehensive set of tools for acquiring, analyzing,
displaying, and storing data, as well as tools to help you
troubleshoot code you write. In Lab VIEW, you build a user
interface, or front panel, with controls and indicators.
Controls are knobs, push buttons, dials, and other input
mechanisms. Indicators are graphs, LEDs, and other output
displays. After you build the user interface, you add code
using VIs and structures to control the front panel objects.
The block diagram contains this code. You can use Lab
VIEW to communicate with hardware such as data
acquisition, vision, and motion control devices, as well as
GPIB, PXI, VXI, RS232, and RS485 instruments.

A. The Front panel

Figure 2 illustrates the front panel of a Lab VIEW VI. It
contains a knob for selecting the number of measurements
per average, a control for selecting the measurement type, a
digital indicator to display the output value, and a stop
button. An elaborate front panel can be created without
much effort to serve as the user interface for an application.


Figure 2 Front Panel

B. The Block Diagram
Figure 3 depicts the block diagram, or source
code, that accompanies the front panel in Figure 2. The
outer rectangular structure represents a While loop, and the
inner one is a case structure. The icon in the center is a VI,
or subroutine, that takes the number of measurements per
average as input and returns the frequency value as the
output. The orange line, or wire, represents the data being
passed from the control into the VI. The selection for the
measurement type is connected, or wired to the case
statement to determine which case is executed. When the
stop button is pressed, the While loop stops execution. Lab
VIEW is not an interpreted language; it is compiled behind
the scenes by the LabVIEW execution engine. Similar to Java, the VIs are compiled into executable code that LabVIEW's execution engine processes at runtime. Every time a
change is made to a VI, Lab VIEW constructs a wire table
for the VI. This wire table identifies elements in the block
diagram that have inputs needed for that element to run.
Elements can be primitive operators such as addition, or
more complex such as a subVI. If Lab VIEW successfully
constructs all the wire tables, you are presented a solid
arrow indicating that the VIs can be executed. If the wire
table cannot be created, then a broken arrow is presented for
the VIs with a problem, and also for each VI loaded in
memory that requires that VI for execution. Lab VIEW runs
in several subsystems, which will be described throughout
this book. All that we need to understand now is that the
main execution subsystem compiles diagrams while you
write them. This allows programmers to write code and test
it without needing to wait for a compiling process, and
programmers do not need to worry about execution speed
because the language is not interpreted.

The wire diagrams that are constructed do not define an order in which elements are executed. This is an important concept for advanced programmers to understand. LabVIEW is a dataflow-based language, which means that elements will be executed in a somewhat arbitrary order: LabVIEW does not guarantee the order in which a series of elements is executed if they are not dependent on each other. A process called arbitrary interleaving is used to determine the order in which elements are executed. You may force an order of execution by requiring that an element take the output of another element as input before it executes. This is a fairly common practice, and most programmers do not even recognize that they are forcing the order of execution. When programming, it will become obvious that some operations must take place before others can. It is the programmer's responsibility to provide a mechanism to force the order of execution in the code design.



Figure 3. The Block Diagram
C. Building a Virtual Instrument
In the following exercises, we will build a VI that generates a signal and displays it in a graph. Select a control knob and a waveform graph from the control palette by right-clicking on the front panel and place them on it. Icons corresponding to the control knob and the waveform graph automatically appear in the block diagram; interconnect them with the wiring tool available in the block diagram. When you complete the exercises, the front panel of the VI will look similar to the front panel in Figure 4.


Figure 4
V. DISTANT PATIENT MONITORING SYSTEM
USING LAB VIEW
In this section a patient monitoring system using LabVIEW is presented. The software algorithm for the system is given in Figure 5.




























Figure 5 Flow chart of the program
The system checks two parameters: the body temperature of the patient and the heart-beat count. The program starts by checking the status register for the presence of a finger in the heart-beat counting module. Whenever the patient inserts a finger in the counting module, the program switches to the sub-VI that counts the heart pulses in connection with the heart-beat counting hardware, which is interfaced to the program through the system parallel port; otherwise it continuously reads the data register to acquire the temperature value. In the heart-beat counter sub-VI the program reads the final value from the counter after one minute; this count gives the number of heart beats per minute. The program can be stopped with a stop button placed on the front panel: whenever the stop button is pressed, the program automatically quits the running VI. This concurrent operation is possible because LabVIEW is a dataflow programming language, which means that the functions in LabVIEW perform their operations whenever data are available, since the flow of data determines the flow of execution.
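The actual system is implemented as a LabVIEW VI, so the following plain-Python sketch is only a text-language rendering of the control flow of Fig. 5. The four callbacks standing in for the parallel-port register reads and the front-panel stop button are assumptions made for the example.

import time

def monitoring_loop(finger_present, read_temperature, read_pulse_counter, stop_pressed):
    """Text sketch of Fig. 5: poll the status register, count pulses for one minute, or read temperature."""
    while not stop_pressed():
        if finger_present():                       # finger detected in the heart-beat module?
            start = time.time()
            while time.time() - start < 60.0:      # let the hardware counter run for one minute
                if stop_pressed():
                    return
                time.sleep(0.1)
            print("Heart rate:", read_pulse_counter(), "beats per minute")
        else:
            print("Body temperature:", read_temperature(), "deg C")
        time.sleep(0.5)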
VI. BRIEF DESCRIPTION OF THE COMPONENTS

The sensor module performs signal conditioning and
transforms it into a digital form for further manipulation.
Once the data are in a digital form on a computer, they can
be displayed, processed, mixed, compared, stored in a
database, or converted back to analog form for further
process control. The database can also store configuration
settings and signal records. The sensor module interfaces a
virtual instrument to the external, mostly analog world
transforming measured signals into computer readable form.
A sensor module principally consists of three main parts:

The sensor,
The signal conditioning part, and
The A/D converter.

The sensor detects physical signals from the environment.
If the parameter being measured is not electrical, the sensor
must include a transducer to convert the information to an
electrical signal, for example, when measuring blood
pressure. According to their position, biomedical sensors
can be classified as:

Implanted sensors, where the sensor is located inside the user's body, for example for intracranial stimulation.

On-the-body sensors are the most commonly used
biomedical sensors. Some of those sensors, such as EEG or
ECG electrodes, require additional gel to decrease contact
resistance.

Noncontact sensors, such as optical sensors and cameras
that do not require any physical contact with an object.

The signal-conditioning module performs (usually analog) signal conditioning prior to A/D conversion, such as amplification, transducer excitation, linearization, isolation, and filtering of the detected signals from undesired noise bursts that tend to get added to the desired measured parameter. The A/D converter changes the detected and conditioned voltage into a digital value. The converter is defined by its resolution and sampling frequency. The converted data must be precisely time-stamped to allow later sophisticated analysis.

In this project ADC 0809 is used for A/D conversion.
The digitally converted data is interfaced to the PC through
the system parallel port. The program written in Lab VIEW
will perform all the function like data acquisition, data
analysis, presentation and control in association with the
hardware counterparts.
VII. CONCLUSION
A detailed introduction to the virtual instrumentation
and its applications in the medical field has been described.
Virtual instrumentation merges sensing technologies with
hardware and software technologies to create flexible and
sophisticated instruments for various control and monitoring
applications. Biomedical applications require sophisticated
and flexible instrumentation, accomplished by using
general-purpose computing platforms with various
application-specific input/output devices. Virtual
instrumentation brings many advantages over
conventional instrumentation. Employing general-purpose
computing platforms significantly decreases the price of
instruments. Standard system interfaces allow seamless
integration of virtual instruments in a distributed system,
whereas software reconfiguration facilitates flexibility and
scalability. Most general virtual instrumentation concepts
are directly applicable in biomedical applications; however,
specific features of the biomedical instrumentation must be
taken into account. Then we demonstrated a novel
architecture for distant patient monitoring system by
exploiting the web publishing tool of Lab VIEW.





AGENT BASED QUERY OPTIMIZATION KB FOR HETEROGENEOUS, AUTONOMOUS & DISTRIBUTED MULTIDATABASE

Shiju George, Assistant Professor, IT, Amal Jyothi College of Engineering, Kerala. Mail id: shijugeorge3@gmail.com
Shelly S. George, Assistant Professor, MCA, Amal Jyothi College of Engineering, Kerala. Mail id: shellysgeorge@gmail.com

Abstract

In recent years, the need has arisen for accessing and updating data from a variety of preexisting databases in the internet world, which differ in their schemas, hardware platforms and software environments. In this paper, through research on query optimization technology and based on a number of optimization algorithms commonly used in distributed queries, an agent-based query optimization KB is designed. Experiments show that this KB technique can compare and select the optimized query plan for a heterogeneous distributed multidatabase platform. It significantly trims down the amount of intermediate result data and effectively compares and reduces the network communication cost for heterogeneous databases, improving the optimization efficiency.

Index Terms - Query Optimization, KB -
Knowledge base, Heterogeneous Distributed
database, Multidatabase .


1. INTRODUCTION

INTEGRATING heterogeneous multidatabases is
an important problem for the next generation of
information systems. Although each system was
designed and developed independently, mainly to
satisfy the needs of data management of its own
organization, the data it manages can be useful to
various departments in the organization. Thus,
there is an obvious need to have a global and
integrated view of the data stored in these pre-
existing database systems to support some global
applications accessing data residing at more than
one departmental system e.g. OLAP applications.
An enterprise may have multiple DBMSs.
Different organizations within the enterprise may
have different requirements and may select
different DBMSs. DBMSs purchased over a period
of time may be different due to changes in
technology. Heterogeneities due to differences in
DBMSs result from differences in data models and
differences at the system level. Each DBMS has an
underlying data model used to define data
structures and constraints. Both representation
(structure and constraints) and language aspects
can lead to heterogeneity. This need for integrating
information from various systems has motivated
the design of multidatabase systems or federated
database. A multi- database system is a database
system implemented to connect distributed,
autonomous and heterogeneous database. With the
development of computer network and database
technology, distributed multidatabase database
is more and more widely used, with the expanding
application, data queries are increasingly complex,
the efficiency requests are increasingly high. With
a distributed database, users have access to the
portion of the database at their location so that
they can access the data relevant to their tasks
without interfering with the work of others, so
query processing is a key issue of the distributed
database system

A MDBS system is a type of meta-database
management system (DBMS) which transparently
integrates multiple autonomous database
systems into a single federated database. The
organizational entities that manage different DBSs
are often autonomous. In other words, DBSs are
often under separate and independent control.
Those who control a database are often willing to
let others share the data only if they retain control.
The constituent databases are interconnected via a
computer network, and may be geographically
decentralized. Since the constituent database
systems remain autonomous, a MDBS system is a
contrastable alternative to the (sometimes
daunting) task of merging together several
disparate databases.
The agent-based query optimization KB for heterogeneous, autonomous and distributed multidatabase applications is developed in such a way that it makes use of current software language trends, databases, operating systems and hardware as far as possible, so that upgradation can be done easily.

This application is not just intended to execute a query plan using the concept of an intelligent agent-based system; it is meant to analyze and select the optimized plan from the various query plans executed in the heterogeneous distributed database environment. The agent-based system ensures optimized output through the processes of learning, matching and inference rules. The important specialty of this application is that it automatically mines a solution in much less time. It keeps a record of the optimized and efficient query in the knowledge base, which is executed when the user provides a query of a similar class. It thus provides a user-friendly atmosphere along with the valuable service of heterogeneity.
A comparison between executions of various
query plans in heterogeneous platform has been
done by analyzing the time complexity of the
given problem. The levels of multiple features
give encouraging results.


2. PROPOSED TECHNIQUE

2.1. The proposed technique includes the
following modules:

Develop an Application Program and
distribute it at different heterogeneous
databases site like DB2 9, SQL Server
and ORACLE.(Same application
program)

Develop an independent Multidatabase
System platform to solve heterogeneous
issue. Any changes in the application
program will be reflected in all other
database packages.

Create set of query plans to demonstrate
the best plan case and identify changes on
the database.

Gather performance information based on
certain Query Plans & response time of
the query output from multidatabase

Forming a knowledge base database by
storing optimized plan for same class of
queries.

To find out better method to extract the
optimized plan output through agent
based tool. The application should be able
to extract information from distributed
database and give the best query for
storage in the knowledgebase

The objective of the paper is to build an application program with good overall performance.
such a way that the future plans of expansion
can be implemented easily without affecting
the existing features. This concept and its
operations are safe in static environment.

2.2 Working Principle

The overall working may be depicted with the
following block diagrams for the users.
























Figure 1. Block diagram of the system


The system consists of five major components. The first is the I/O unit: at the user's end an interface is developed to provide a query plan as input and to obtain its result as output. The I/O interface provides direct access to the application program. The application program is distributed to the various
Proceedings of AICERA 2011
ISBN: 978-81-921320-0-6 345

heterogeneous platforms. Any changes made to the record values of the application program database will be reflected at each site. A query plan is executed on the various heterogeneous platforms. The minimum time taken by the query among the three different DBMS packages to produce the desired results is recorded, and the corresponding plan is stored as the optimized DBMS query plan in the knowledgebase; the result of the query is stored in the KB as well. When a user requests data from the application program database, the requested query is matched against the query plans already stored in the knowledgebase using inference rules (modus ponens, chain rule, substitution, simplification, transposition, conjunction, disjunction). The inference engine finds the solution for the query from this evidence. If a similar class of query is found, the corresponding results are displayed in minimum time. If the query plan is new, it is first executed on the heterogeneous platform; the site (DBMS package) with the lowest time complexity for the plan displays the results and is chosen as the best execution platform, and the optimized query and its results are then stored in the KB.
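As a concrete illustration of this working principle, the sketch below (C#, reusing the QueryPlanRecord and KnowledgeBase types sketched in Section 2.1) times a query on each backend through ADO.NET, keeps the fastest site, and answers repeated queries of the same class from the KB. The Normalize helper is a deliberately crude stand-in for the inference-rule based matching of query classes, and the connection factories are assumed to be supplied by the caller.

using System;
using System.Collections.Generic;
using System.Data.Common;
using System.Diagnostics;

public static class PlanSelector
{
    // Executes the query on every backend, keeps the fastest site as the optimized
    // plan, and serves repeated queries of the same class directly from the KB.
    public static QueryPlanRecord GetOrExecute(
        string sql,
        IDictionary<string, Func<DbConnection>> backends)   // e.g. "SQL Server" -> connection factory
    {
        string key = Normalize(sql);
        if (KnowledgeBase.Plans.TryGetValue(key, out QueryPlanRecord cached))
            return cached;                                   // similar class of query: answer from the KB

        QueryPlanRecord best = null;
        foreach (KeyValuePair<string, Func<DbConnection>> backend in backends)
        {
            using (DbConnection conn = backend.Value())
            {
                conn.Open();
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = sql;
                    Stopwatch watch = Stopwatch.StartNew();
                    object result = cmd.ExecuteScalar();     // a scalar result keeps the sketch short
                    watch.Stop();

                    if (best == null || watch.Elapsed.TotalSeconds < best.ElapsedSeconds)
                    {
                        best = new QueryPlanRecord
                        {
                            QueryClass = key,
                            BestDbms = backend.Key,
                            OptimizedQuery = sql,
                            ElapsedSeconds = watch.Elapsed.TotalSeconds,
                            CachedResult = Convert.ToString(result)
                        };
                    }
                }
            }
        }

        KnowledgeBase.Plans[key] = best;                     // store the optimized plan and its result
        return best;
    }

    // Crude stand-in for the paper's inference based matching of query classes.
    private static string Normalize(string sql) =>
        string.Join(" ", sql.Trim().ToLowerInvariant()
            .Split(new[] { ' ', '\t', '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries));
}

A call such as PlanSelector.GetOrExecute(sql, backends), where backends maps "SQL Server", "Oracle" and "DB2 9" to connection factories, would therefore hit all three packages only on the first occurrence of a query class.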

3. IMPLEMENTATION TECHNIQUE

3.1 Implementation and Demonstration of each
module to accomplish the task

3.2 Develop a common schema for the application program and create the application database in heterogeneous packages such as DB2 9, SQL Server and ORACLE, using the different syntax available in each package. The issues that may differ and arise during the integration process concern the database systems, the operating system and the hardware/system, as summarized in Figure 2.


Database systems
- differences in DBMS data models (structures, constraints, query languages)
- differences in system-level support (concurrency control, commit, recovery)
- semantic heterogeneity

Operating system
- file systems (naming, file types, operations)
- transaction support
- interprocess communication

Communication

Hardware/system
- instruction set
- data formats and representation
- configuration

Figure 2. Heterogeneity issues.
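To make the heterogeneity concrete, the following C# sketch issues the same logical schema to each package through ADO.NET provider factories. The provider invariant names, the connection-string placeholders and the per-package DDL are assumptions for illustration; they stand in for whatever syntax each of DB2 9, SQL Server and Oracle actually requires, and on .NET platforms where provider factories must be registered, that registration would have to happen before this runs.

using System.Collections.Generic;
using System.Data.Common;

public static class SchemaDistributor
{
    // One DDL statement per package, because data types and other syntax differ
    // between the three DBMSs. Provider names and connection strings are placeholders.
    private static readonly Dictionary<string, (string provider, string connection, string ddl)> Sites =
        new Dictionary<string, (string provider, string connection, string ddl)>
        {
            ["SQL Server"] = ("System.Data.SqlClient", "<sql-server-connection-string>",
                "CREATE TABLE Attendance (StudentId INT, OnDate DATE, Present BIT)"),
            ["Oracle"]     = ("Oracle.ManagedDataAccess.Client", "<oracle-connection-string>",
                "CREATE TABLE Attendance (StudentId NUMBER, OnDate DATE, Present NUMBER(1))"),
            ["DB2 9"]      = ("IBM.Data.DB2", "<db2-connection-string>",
                "CREATE TABLE Attendance (StudentId INTEGER, OnDate DATE, Present SMALLINT)")
        };

    public static void CreateEverywhere()
    {
        foreach (var site in Sites)
        {
            DbProviderFactory factory = DbProviderFactories.GetFactory(site.Value.provider);
            using (DbConnection conn = factory.CreateConnection())
            {
                conn.ConnectionString = site.Value.connection;
                conn.Open();
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = site.Value.ddl;
                    cmd.ExecuteNonQuery();          // same logical schema at every site
                }
            }
        }
    }
}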

3.3 Creating an independent multidatabase platform to solve the heterogeneity issue deals with processing all queries submitted to the different DBMSs. Once the application program database is ready in the different packages, any query imposed by the user is transformed into a small set of decomposed units by the global query optimizer. The decomposed set of queries is then optimized again by the local optimizer of each DBMS. Any changes made by the query to the application program at one DBMS are reflected in all other systems as well. The architecture of the heterogeneous platform is shown in Fig. 3.







Figure 3. Heterogeneity for multidatabase architecture (global query optimizer, decomposer, translator and local optimizer on the heterogeneous platform, transforming and constructing queries for the DBMS sites SQL Server, Oracle 10i and DB2 9, each with a local KB feeding the global KB).


3.4 Optimization of a distributed database query is initiated with a set of query plans given by the user as input. The distributed data user module analyzes the user's query request according to the query optimization criteria the user selected, and the corresponding query processing method is thereby confirmed. The syntax analysis module analyzes the query sentence the user sent and generates the corresponding query tree in algebraic form.


Figure 4. Query optimization flow: a set of query plans in a high-level query language is turned into an intermediate form of the query (semantic expression), then into execution plans and code to execute the query, producing the results of the query; the optimized query and an information update are recorded along the way.



The optimization module processes the local query tree according to the query optimization strategy so as to make the total cost smallest. The order processing module distributes the mission to the corresponding server and returns the server's processing results to the user. If the operation relates to the global or a local data KB, there are corresponding operations on the KB: the information statistical operation updates the query sentence knowledge in the KB in accordance with the query frequency, while the information update operation updates the global KB in accordance with the local KB and the latest changes in local server performance, and informs other sites through information broadcast operations.
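A minimal sketch of these two KB operations is given below in C#, reusing the KnowledgeBase and QueryPlanRecord types from Section 2.1. The frequency counter and the Broadcast placeholder are assumptions; the paper only states that query frequency is recorded and that other sites are informed through a broadcast, without fixing a transport.

using System;
using System.Collections.Generic;

public static class KbStatistics
{
    // Query frequency per query class; the information statistical operation
    // bumps this counter every time a plan is served or executed.
    private static readonly Dictionary<string, int> Frequency = new Dictionary<string, int>();

    public static void RecordUse(string queryClass)
    {
        Frequency.TryGetValue(queryClass, out int count);
        Frequency[queryClass] = count + 1;
    }

    // Information update operation: fold a local KB entry into the global KB
    // when the local site observed a faster execution, then notify other sites.
    public static void UpdateGlobal(QueryPlanRecord localEntry)
    {
        if (!KnowledgeBase.Plans.TryGetValue(localEntry.QueryClass, out var globalEntry) ||
            localEntry.ElapsedSeconds < globalEntry.ElapsedSeconds)
        {
            KnowledgeBase.Plans[localEntry.QueryClass] = localEntry;
            Broadcast(localEntry);   // placeholder for the information broadcast operation
        }
    }

    private static void Broadcast(QueryPlanRecord entry)
    {
        // The paper leaves the transport (message queue, web service, ...) open;
        // this stub only records that a broadcast would be sent.
        Console.WriteLine($"Broadcasting updated plan for class: {entry.QueryClass}");
    }
}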

3.5 The time taken to execute a single plan in each of the different DBMS packages is analyzed and recorded, and the DBMS that requires the minimum time to execute the query plan is stored in the knowledgebase. When the same class of query is requested by the user again, it is answered by the agent based KB in minimum time. A new request is checked and inferred by the agent based KB: the input request is processed by the inference unit, which works from the evidence to collect the desired information from the knowledgebase, using inference rules such as modus ponens, unification and simplification to infer the desired results. Fig. 5 represents the knowledgebase system.






Figure 5. Knowledgebase system (I/P and O/P through the inference unit, KB, working memory, time complexity record and case history file, backed by the query processing stack of scanning and parsing, query optimizer, query code generator and runtime database processor over the Oracle / SQL Server / DB2 9 engines, with the KB database, statistical data and query request queue).

The new set of query plans (knowledge) and the desired outputs obtained are stored in the case history file. The learning module learns the existing knowledge from the case history and infers new knowledge using forward chaining and backward chaining.
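The forward-chaining half of this learning step can be sketched as follows in C#; the rule representation (a set of premises with one conclusion) is an assumption, and backward chaining is omitted to keep the sketch short.

using System.Collections.Generic;
using System.Linq;

public static class LearningModule
{
    // A rule fires when all of its premises are already known facts.
    public sealed class Rule
    {
        public HashSet<string> Premises { get; } = new HashSet<string>();
        public string Conclusion { get; set; }
    }

    // Forward chaining: repeatedly fire rules until no new knowledge is inferred.
    public static ISet<string> ForwardChain(IEnumerable<Rule> rules, IEnumerable<string> facts)
    {
        var known = new HashSet<string>(facts);
        bool changed = true;
        while (changed)
        {
            changed = false;
            foreach (var rule in rules)
            {
                if (!known.Contains(rule.Conclusion) && rule.Premises.All(known.Contains))
                {
                    known.Add(rule.Conclusion);  // new knowledge derived from the case history
                    changed = true;
                }
            }
        }
        return known;
    }
}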



4. EXPERIMENTS

To test the ideas, we built an integrated query optimizer for the independent platform. An application program, ATTENDANCE MANAGEMENT SYSTEM (AMS), was created and distributed among all sites. The task is accomplished using a C# and ASP.NET front end and
an SQL Server, Oracle and DB2 9 backend. Each query from the user on AMS is analyzed and optimized for storage. Multiple query processing attempts to collectively optimize the access plans of a set of queries, occurring either simultaneously or not, by utilizing the commonality that exists among the set of queries in terms of accesses to relations, join/semi-join operations and local physical data access.
The results obtained give the actual cost savings, not theoretical cost-saving estimates.
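The commonality among a set of queries can be detected, at its simplest, from the relations they access. The C# sketch below does only this: it pulls relation names out of FROM and JOIN clauses with a regular expression (a real SQL parser would be needed in practice) and groups queries that share a relation, which is where collective optimization becomes worthwhile.

using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

public static class CommonalityDetector
{
    // Rough extraction of relation names following FROM or JOIN; an illustration only,
    // since real SQL parsing (aliases, subqueries, schemas) is considerably more involved.
    public static IEnumerable<string> Relations(string sql) =>
        Regex.Matches(sql, @"\b(?:from|join)\s+([A-Za-z_][A-Za-z0-9_]*)", RegexOptions.IgnoreCase)
             .Cast<Match>()
             .Select(m => m.Groups[1].Value.ToLowerInvariant())
             .Distinct();

    // Queries that touch a common relation are candidates for collective optimization.
    public static ILookup<string, string> GroupByRelation(IEnumerable<string> queries) =>
        queries.SelectMany(q => Relations(q).Select(r => new { Relation = r, Query = q }))
               .ToLookup(p => p.Relation, p => p.Query);
}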

4.1 System Implementation

The approach seeks to use the semantics held in the knowledgebase of the database's application to transform the original query into an alternative form, possibly quite different in its expression, but one that is both equivalent to the original (in terms of the set of records from the database that it qualifies) and more efficient to process, given the existing file structures and access methods. The processing of multiple queries is accomplished by utilizing subset relationships between intermediate results of query executions, which are inferred employing both semantic and logical information.
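The queries Q1-Q6 themselves are not reproduced here, so the following is purely a hypothetical illustration of the kind of transformation meant, written in C# against the assumed AMS schema: if the knowledgebase holds the semantic rule that every Attendance row already carries its student's department, a join against Student can be dropped without changing the set of qualifying records.

public static class SemanticRewriteExample
{
    // Hypothetical original query: counts present CSE students via a join.
    public const string Original =
        "SELECT COUNT(*) FROM Attendance a JOIN Student s ON a.StudentId = s.StudentId " +
        "WHERE s.Department = 'CSE' AND a.Present = 1";

    // Equivalent under the assumed integrity rule, and cheaper to process: no join is needed.
    public const string Optimized =
        "SELECT COUNT(*) FROM Attendance a WHERE a.Department = 'CSE' AND a.Present = 1";
}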

Query    Time    SQL Server    Oracle 10i    DB2 9
Q1       CPU     0.56          0.41          0.39
Q2
Q3


4.2 System Performance

We generated an executable optimizer using the inference rule based system and then examined the performance of this KB query optimizer on the sample database AMS, using queries Q1, Q2 and Q3. For each of these queries, we submitted the original and optimized versions (i.e., queries Q4, Q5 and Q6 respectively) to the KB storage. The semantic optimizer uses the multidatabase platform as a backend to execute the optimized and unoptimized versions of the query; the unoptimized query is optimized and checked against the stored optimized queries. The CPU costs are returned by the UNIX resource usage function, and the page access statistics are returned from the storage access system using a virtual disk. The approximate cost of optimizing the query was obtained from the UNIX time function. The cost of the optimization was less than 5% of the cost of the original query; the net saving from the optimization is thus approximately 90% of the original processing costs and 50% of the number of page accesses. For query Q3, the Scan Reduction heuristic applied: because semantic optimization requires rules for transformation, the relation is not generated by a random process but in such a way that it conforms to a set of rules.

5. DISCUSSION


Traditionally, database systems have been designed on the assumption that for a given query there is a single correct answer. However, it is neither practical nor desirable to enforce this requirement in some environments. For example, a user might issue a query to a 1000-site federated database such as: "find the average salary of professors working at Polish universities". The user is not likely to get the exact result to such a query within a reasonable time, if ever. Several factors such as heterogeneity, availability, usage cost and data quality impose new constraints on a multidatabase system. Therefore, it is more reasonable to consider query processing as an accumulation process: a database system should produce a "rough" answer as quickly as possible and then refine it over time, until the user decides that the result is good enough.
Allowing a flexible approach to query answer quality raises a number of interesting and challenging opportunities for optimization. Of course, this requires new techniques for data estimation and data delivery, and new user interfaces. Recently, this problem has attracted a number of contributions [3, 12, 13, 15].



6. CONCLUSIONS & FUTURE WORK

This paper proposes a novel query optimization KB approach to reduce the cost of executing the same class of queries requested by users. To
efficiently execute a complex multidatabase query,
it is important to reduce unnecessary intermediate
data. The approach presented here can use rich
semantic knowledge to infer the ranges of
intermediate data accurately and yield arbitrarily
large additional savings for complex multidatabase
queries. This approach optimizes a larger class of
queries, exploits more expressive semantic
knowledge, and detects more optimization
opportunities. This optimization approach can be
implemented on top of existing query optimizers
in a heterogeneous environment and, hence,
supports the extensibility of information mediators. The approach optimizes a query plan based on its time complexity in the different database packages and stores it in the KB. Each time the best query plan is added to the KB, frequently used local query results are stored along with it, so the transmission of large amounts of data during a query can be avoided.
With continuous research and improvements in algorithm design, the agent based query optimization KB can be taken as a serious way to improve data availability in less time.

7. REFERENCES

[1] A.P. Sheth and J.A. Larson, Federated database systems for managing distributed, heterogeneous, and autonomous databases, ACM Computing Surveys, vol. 22, no. 3, Sept. 1990, pp. 183-236.

[2] P.A. Bernstein, N. Goodman, E. Wong, C.L. Reeve and J.B. Rothnie, Query processing in a system for distributed databases (SDD-1), ACM Trans. on Database Systems, vol. 6, no. 4, Dec. 1981, pp. 602-625.

[3] J.T. Park and T.J. Teorey, A knowledge-based approach to multiple query processing in distributed database systems, Proc. 1987 ACM-IEEE Fall Joint Computer Conference, Dallas, TX, Oct. 1987, pp. 461-468.

[4] U.S. Chakravarthy and J. Minker, Processing multiple queries in database systems, IEEE Database Eng., vol. 5, Sept. 1982, pp. 38-43.

[5] W.W. Cohen, Integration of heterogeneous databases without common domains using queries based on textual similarity, Proc. of ACM SIGMOD, Seattle, USA, 1998, pp. 201-212.

[6] W. Du et al., Query optimization in heterogeneous DBMS, Proc. of the 18th VLDB Conference, Vancouver, 1992, pp. 277-291.

[7] W. Du, M.-C. Shan and U. Dayal, Reducing multidatabase query response time by tree balancing, Proc. of ACM SIGMOD, San Jose, USA, 1995, pp. 293-303.

[8] C. Evrendilek et al., Multidatabase query optimization, Journal of Distributed and Parallel Databases, vol. 5, no. 1, 1997.

[9] D. Florescu et al., Query optimization in the presence of limited access patterns, Proc. of ACM SIGMOD, Philadelphia, 1999, pp. 311-322.

[10] W. Kim, I. Choi, S. Gala and M. Scheevel, On resolving schematic heterogeneity in multidatabase systems, in Modern Database Systems (ed. W. Kim), Addison-Wesley, 1995, pp. 521-550.

[11] J.J. King, QUIST: A system for semantic query optimization in relational databases, Proc. 7th Int. VLDB Conference, Cannes, France, Aug. 1981, pp. 510-517.

[12] M. Jarke and J. Koch, Query optimization in database systems, ACM Computing Surveys, vol. 16, no. 2, June 1984, pp. 111-152.

[13] Hong Yu, Shao-Zhong, Nan Hai, Hua Ding and Xiu-Kun Wang, Intelligent agent based distributed heterogeneous database system, Proc. Second International Conference on Machine Learning and Cybernetics, 2-5 Nov. 2003.

[14] Yufei Bai, Xiujun Ma, Kunqing Xie, Cuo Cai and Ke Li, Agent based spatial query optimization in grid environment, 2008 International Conference on Computer Science and Software Engineering.

[15] M. Hammer and S.B. Zdonik, Knowledge-based query processing, Proc. Sixth VLDB Conf., 1980, pp. 137-146.

[16] C. Lee, C.-J. Chen and H. Lu, An aspect of query optimization in multidatabase systems, SIGMOD Record, vol. 24, no. 3, 1995, pp. 28-33.

[17] W. Lu et al., On global query optimization in multidatabase systems, Proc. of 2nd Int. Workshop on Research Issues on Data Eng., Tempe, 1992, pp. 217-227.



