
Proceedings of International Conference on

Advancements in
Engineering and Technology
(ICAET-2015)
March 20–21, 2015

[ISBN: 978-81-924893-0-8]

Editors
Prof. (Dr.) Tanuja Srivastava
Dr. Shweta Rani
Dr. Anuj Kumar Gupta
Mr. Manoj Kumar Gupta

Organized by

Bhai Gurdas Institute of Engineering & Technology


Main Patiala Road, Sangrur–148001 (Punjab), India
www.bgiet.ac.in
(An NBA Accredited Institute)
ORGANIZING COMMITTEE

Chief Patrons
Sh. H. S. Jawandha
Chairman, Bhai Gurdas Group of Institutions, Sangrur

Prof. (Dr.) Guninderjeet Singh Jawandha
Dean Colleges & Executive Member, BGGI, Sangrur

Patron & Coordinator
Prof. (Dr.) Tanuja Srivastava
Director, Bhai Gurdas Institute of Engg. & Technology, Sangrur

Convener
Dr. Shweta Rani

Co-Convener
Dr. Gurmeet Singh Cheema
Dr. Anuj Kumar Gupta
Er. Amardeep Singh Kang

Organizing Secretary
Mr. Manoj Kumar Gupta
Er. Sushil Kakkar
Er. Divesh Kumar
Er. Abhinash Singla
Er. Randhir Singh
Dr. Arun Kumar Singh

Executive Members
Dr. Ravi Kant
Dr. Satnam Singh
Mr. Jatinder Rattan
Er. Amandeep Kaur Randhawa
Er. Parminder Pal Singh
Er. Manuraj Moudgil
Er. Simarpreet Singh
Mr. Kamaljit Singh
INTERNATIONAL ADVISORY COMMITTEE

Prof. Worsak Kanok-Nukulchai
President, Asian Institute of Technology, Thailand

Prof. Christos Christodoulou
University of New Mexico, Albuquerque, USA

Prof. M. Affan Badar
Indiana State University, USA

Prof. Yehia Haddad
University of Ottawa, Canada

Prof. Om Parkash Yadav
Wayne State University, USA

Prof. Gauri S. Mittal
School of Engineering, University of Guelph, Canada

Prof. Xiao-Zhi Gao
Aalto University, Finland

Prof. Nitin Tripathi
Asian Institute of Technology, Thailand

Prof. Mushtak Al-Atabi
Taylor's University, Malaysia

Dr. Ravinder Chutani
University of Franche-Comté, France

Dr. Mohd. Nazri Ismail
University of Kuala Lumpur, Malaysia

Dr. Huynh Trung Luong
Asian Institute of Technology, Thailand
NATIONAL ADVISORY COMMITTEE
Prof. Pawan Kapur
Director, PIT, Rajpura
Prof. Buta Singh
Dean Academics, PTU, Jalandhar
Prof. A. P. Singh
Dean RIC, PTU, Jalandhar
Prof. S. K. Mohapatra
Thapar University, Patiala
Prof. D.C. Saxena
SLIET, Longowal
Prof. Kawaljit Singh
Director, UCC, Punjabi University, Patiala
Prof. Brahmjit Singh
NIT, Kurukshetra
Prof. P. K. Tulsi
NITTTR, Chandigarh
Prof. Deepak Garg
Thapar University, Patiala
Dr. Amalendu Patnaik
IIT, Roorkee
Prof. S. S. Patnaik
NITTTR, Chandigarh
Prof. T. S. Kamal
Director, Radiant Engineering College, Abohar
Prof. B. C. Chaudhary
NITTTR, Chandigarh
Prof. Rajesh Bhatia
PEC University of Technology, Chandigarh
Prof. Maitreyee Dutta
NITTTR, Chandigarh
Dr. Sunita Mishra
Senior Scientist & Head, CSIO
Dr. Udit Narayan Pal
Senior Scientist, CEERI, Pilani
MESSAGE FROM CHAIRMAN

With a pledge to make a successful rise in the highly competitive world of engineering and technology, Bhai Gurdas Institute of Engineering and Technology is organizing the 3rd International Conference on Advancements in Engineering and Technology (ICAET-2015) on March 20–21, 2015.

In order to face the emerging challenges on different fronts of engineering and technology, it has become indispensable to explore multifarious integrated and interdisciplinary engineering approaches.

I am sure that this conference will serve as a platform connecting academicians, researchers and scholars, enabling them to go beyond borders in search of new research frontiers and to showcase their innovations and findings.

I wish you success in your deliberations at the conference.

Sh. H. S. Jawandha
Chairman
Bhai Gurdas Group of Institutions
MESSAGE FROM DEAN COLLEGES

With a pledge to make a successful rise in the highly competitive world of engineering and technology, Bhai Gurdas Institute of Engineering and Technology is organizing the 3rd International Conference on "Advancements in Engineering and Technology" on March 20–21, 2015.

In order to face the emerging challenges on different fronts of engineering and technology, it has become indispensable to explore multifarious integrated and interdisciplinary engineering approaches.

I am sure that this conference will serve as a platform connecting academicians, researchers and scholars, enabling them to go beyond borders in search of new research frontiers and to showcase their innovations and findings.

I wish you success in your deliberations to make the event a successful one.

Dr. Guninderjeet Singh Jawandha
Dean Colleges
Bhai Gurdas Group of Institutions
MESSAGE FROM DIRECTOR

Taking a significant step forward, Bhai Gurdas Institute of Engineering and Technology is organizing the 3rd International Conference on Advancements in Engineering and Technology on March 20–21, 2015.

The theme of the conference is indeed important in this millennium, as it will help academicians, researchers and scholars examine the key issues in technological advancements happening worldwide in the industrial and manufacturing sectors.

The International Conference was inspired by the need for a platform to address various emerging issues in engineering. I believe this conference will be valuable to all researchers, who will get real opportunities to showcase their research.

I encourage you to make this significant event part of your professional development for 2015 and look forward to welcoming you to the International Conference on our campus.

Dr. Tanuja Srivastava
Director
Bhai Gurdas Institute of Engineering & Technology
MESSAGE FROM CONVENER

It is my privilege and honor to welcome you to the 3rd International Conference on Advancements in Engineering and Technology, which will be held on March 20–21, 2015.

This conference provides an excellent opportunity to showcase your work and share your expertise, so that we as an international community can move towards developing national and international priorities for advancements in engineering and technology.

This interdisciplinary conference will bring together academics, researchers, administrators, policy makers, industry representatives and students from key government and non-government organizations across the globe to share and enhance knowledge of the latest advancements.

The conference will be dedicated to discussing new technologies that can be transferred from lab to land for the benefit and welfare of society.

Thank you for joining this event; we look forward to an enthusiastic response from you all.

Dr. Shweta Rani
Convener
ICAET 2015
MESSAGE FROM CO-CONVENER

It is my great pleasure to welcome you to the 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015), which takes place at Bhai Gurdas Institute of Engineering & Technology, Sangrur, on March 20–21, 2015. It has been a real honor and privilege to serve as the Co-Convener of the conference.

The conference would not have been possible without the enthusiasm and hard work of a number of colleagues. A conference of this size relies on the contributions of many volunteers, and I would like to acknowledge the efforts of our committee members and referees and their invaluable help in the review process. I am also grateful to all the authors who trusted the conference with their work.

I look forward to exciting and insightful presentations, discussions, and the sharing of technical ideas with colleagues from around the world. I thank you for attending the conference and hope that you enjoy your visit.

Dr. Anuj Kumar Gupta
Co-Convener
ICAET 2015
MESSAGE

It gives me great pleasure to know that Bhai Gurdas Institute of Engineering & Technology (BGIET) is organizing its 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015) on March 20–21, 2015.

The conference will provide a platform to share new research ideas and technological
assets to translate innovations from fundamental research to quality products. I wish the
conference, ICAET-2015, a grand success and congratulate the BGIET leadership for
taking up this noble endeavour.

M. Affan Badar
PhD, CSTM
Professor & former Department Chair
Department of Applied Engineering and Technology Management
Indiana State University, USA
MESSAGE

I am delighted to know that Bhai Gurdas Institute of Engineering & Technology, Sangrur is organizing the 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015) on March 20–21, 2015 on its campus.

I hope that this International Conference will bring together academicians, researchers and industry practitioners from around the globe, working broadly in the areas of research and development and its most important applications, with an emphasis on informal interactions.

This conference will help young researchers share valuable thoughts with experienced academicians and industry practitioners in their respective areas of research. In the process, they will increase the possibility of collaborating with one another in the days to come.

On behalf of PTU, I extend a warm welcome to all the national and international delegates attending the International Conference and hope that the discussions and interactions during the conference will help them enhance their knowledge and provide directions for future research work.

I extend my best wishes for the success of the International Conference.

Dr. Buta Singh Sidhu
Dean (Academics)
Punjab Technical University
Jalandhar
PLENARY TALKS

Prof. (Dr.) M. K. Surappa
Director, IIT, Ropar

Prof. (Dr.) A. K. Ganguli
Director, INST, Mohali

Prof. (Dr.) Amod Kumar
Director, CSIO-CSIR, Chandigarh

Prof. (Dr.) R. K. Bagchi
Director, NISST, Mandi Gobindgarh

Prof. (Dr.) D. C. Saxena
SLIET, Longowal

Prof. (Dr.) Joseph Anand Vaz
NIT, Jalandhar

Prof. (Dr.) Brahmjit Singh
NIT, Kurukshetra

Prof. (Dr.) Amalendu Patnaik
IIT, Roorkee

Prof. (Dr.) Jitendra Chhabra
NIT, Kurukshetra

Prof. (Dr.) O. P. Pandey
Thapar University, Patiala

Prof. (Dr.) Seema Bawa
Thapar University, Patiala

Prof. (Dr.) S K Gupta
NITTTR, Chandigarh

Sh. Paramjit Singh
RCED, Chandigarh
CONTENTS

Electronics & Communication Engineering

S. No. Paper ID Paper Title Page No.


Content Retrieval from Historical Manuscript Images: A Review
1. 3 1
Kitty Gupta and Rishav Dewan
Development of Dual Band Planar Inverted-F Antenna for Wireless Applications
2. 4 5
Jashandeep Singh, Sushil Kakkar and Shweta Rani
Power Oscillation Damping and Voltage Stability Improvement Using SSSC integrated with SMES
3. 5 9
Amritpal Singh and Divesh Kumar
Electronic toll collection based on Category of Vehicle Using RFID
4. 9 14
Gurpreet Singh, Amrik Singh and Harpal Singh
Extended local binary pattern for face recognition
5. 13 17
Jatinder Sharma and Rishav Dewan
Analysis and implementation of full adder circuit using Xilinx software
6. 14 22
Girdhari Agarwal, Bobbinpreet Kaur and Amandeep Kaur
Automatic Detection of Diabetic Retinopathy from Eye Fundus Images: A Review
7. 16 27
Manpreet Kaur and Mandeep Kaur
Various Methods of Road Extraction From Satellite Images: A Comparative Analysis
8. 18 33
Atinderpreet Kaur and Ram Singh
Design of Microstrip Patch Antenna by Introducing Defected Ground Structure
9. 26 38
Harpreet Kaur and Monika Aggarwal
Edge Detection of A Video Using Adaptive Edge Detection Operator
10. 31 43
Pankaj Sharma and Jatinder Sharma
Design of 3-side Truncated Patch Antenna With Open Semicircular Slot for UWB Applications
11. 34 50
Amrik Singh, Sushil Kakkar and Shweta Rani
Gait Recognition Using SVM And LDA With Pal And Pal Entropy Image
12. 35 54
Reecha Agarwal and Rishav Dewan
Enhancement of the Accuracy of Photonic Crystal Fiber by Using ANN
13. 49 63
Amit Goyal and Kamaljeet Singh Sidhu
Effect of Different Feeding Techniques on Slot Antenna
14. 54 67
Sushil Kakkar, Shweta Rani and Anuradha Sonker
Low Cost Planar Antenna for Wireless Applications
15. 55 71
Anand Jain, Sushil Kakkar and Shweta Rani
Artificial Eyes: Brain Port Device
16. 59 74
Lovedeep Dhiman and Deepti Malhotra
High Reflectance Multiple step metal grating for multichannel reflector
17. 60 78
Ramanpreet Kaur, Neetu Sharma and Jaspreet Kaur
Speech Recognition Using Neural Network
18. 66 81
Pankaj Rani, Sushil Kakkar and Shweta Rani
BER Analysis of Turbo Coded OFDM for different Digital Modulation Techniques
19. 67 86
Gurwinder Kaur and Amandeep Kaur
Rectangular Microstrip Patch Antenna with Triangular Slot
20. 73 91
Sandeep Singh and Jagtar Singh Sivia
Design review on high-swing, high performance CMOS operational amplifier
21. 74 94
Harjeet Singh and Tusty Bansal
Application of AHP-VIKOR Hybrid MCDM Approach for 3PL selection: A Case Study
22. 75 97
Arvind Jayant and Priya Singh
Comparison of different types of Microstrip patch antennas
23. 76 105
Sumanpreet Kaur Sidhu and Jagtar Singh Sivia
Artificial vision towards creating the joys of seeing for the blind
24. 81 110
Aastha Bansal, Virpal Kaur and Anjali Gulati
A review: nanotechnology
25. 82 113
Sonali Tyagi, Savaljeet Kaur and Shilpa Rani
A Review: 5G Technology
26. 83 116
Varinder Bansal, Gursimrat Singh and Vipin Bansal
Adaptive Modulation based Link Adaptation for High Speed Wireless Data Networks using Fuzzy Expert System
27. 84 118
Kuldeep Singh, Jatin Sharma and Danish Sharma
Design and analysis of dual band printed microstrip dipole antenna for wlan
28. 91 123
Gurmeet Singh and Lakhwinder Singh Solanki
Koch Fractal Loop antenna using modified ground
29. 92 127
Pravin Kumar, Anuradha Sonker and Varun Punia
Automatic Detection of Diabetic Retinopathy- A Technological Breakthrough
30. 96 131
Chinar Chahar and Deepti Malhotra
A Survey on Hybridization of Wireless Energy Harvesting and Spectrum Sharing in Cognitive Radio
31. 98 134
Ranjeet Singh and Amandeep Singh Bhandari
Hybrid modulation
32. 100 138
Rishav Dewan, Gagandeep Kaur and Harjeet Kaur
Removal of Powerline Interference from EEG using Wavelet-ICA
33. 101 141
Gautam Kaushal, V.K. Jain and Amanpreet Singh
A Survey on Reversible Logic Gates
34. 103 144
Santosh Rani and Amandeep Singh Bhandari
A review of reversible logic gates
35. 105 148
Sukhjeet Kaur and Amandeep Singh Bhandari
A Review of an Adaptive On-demand Routing Protocols for Mobile Ad-hoc Networks
36. 112 151
Navjot Kaur and Deepinder Singh
Exploding Electronic Gadgets
37. 113 156
Avonpreet Kaur, Anmol Sharma
Distinguish Abandons in Gears
38. 114 159
Avonpreet Kaur, Anmol Sharma and Shweta Rani
Study of various aspects related to Wireless Body Area Networks
39. 116 161
Raju Sharma and Hardeep Singh Ryait
How to Reduce Mobile Phone Tower Radiation
40. 118 167
Anmol Sharma, Avonpreet Kaur and Shweta Rani
Optimal ECG Sampling Rate for Non-Linear Heart Rate Variability
41. 119 170
Butta Singh, Vijay Kumar Banga and Manjit Singh

Design of Rectangular Microstrip Patch Antenna Array for S, C and X- Band
42. 127 175
Jagtar Singh Sivia, Amandeep Singh and Sunita Rani
UWB Stacked Patch Antenna using Folded Feed for GPR application
43. 129 179
Mohammad Shahab and Tanveer Ali Khan
Parameters evaluation of semi-circular object in stitched image
44. 131 183
Jagpreet Singh, Ajay Kumar Vishwakarma and Neha Hooda
Arduino based Microcontroller
45. 133 188
Manoj Kumar and Vertika Garg
Application of DGS in Microstrip Patch Antenna
46. 136 191
Sushil Kakkar, A. P. Singh and T.S. Kamal
Wavelength Assignment Problem In WDM Network
47. 145 194
Gunvir Singh, Abhinash Singla and Sumit Malhotra
Comparative Study of Different Algorithms to Implement Smart Antenna Array-A Review
48. 149 199
Gurjinder Kaur and Gautam Kaushal
A review of deterministic energy-efficient clustering protocols for wireless sensor networks
49. 155 203
Gurjit Kaur and Shweta Rani
Performance Analysis of AODV, TORA and AOMDV Routing Protocols in the presence of Blackhole
50. 159 206
Gurmeet Singh, Deepinder Singh Wadhwa and Ravi Kant
Study of Evolutionary Optimisation techniques and their Applications
51. 160 211
Mandeep Kaur and Balwinder Singh Sohi
Soft Computing and its various tools: A review
52. 181 216
Nishi Sharma and Shaveta Kakkar
Passive Optical Networks Employing Triple Play Services : A Review
53. 182 220
Harneet Kaur and Harmanjot Singh
Review of Static Light path Design in WDM network
54. 186 224
Harpreet Kaur and Munish Rattan
A Review Paper on Nanomaterials & Nanotechnology
55. 193 228
Anshu Rao and Ravi Kant
Review of Segmentation of Thyroid gland in Ultrasound image using neural network
56. 194 235
Mandeep Kaur and Deepinder Singh
Review of Robust Document Image Binarization Technique for Degraded Document Images
57. 195 240
Rupinder Kaur and Naveen Goyal
A Review of Multibiometric System with Recognition Technologies and Fusion Strategies
58. 211 244
Cammy Singla and Naveen Goyal
Fractal Reconfigurable Antenna
59. 212 250
Anuradha Sonker, Shweta Rani and Sushil Kakkar
Review of Spectrum Sensing in Cognitive Radio by Using Energy Detection Technique.
60. 213 252
Ajay Jindal
Review of Cognitive Radio by Cyclostationary Feature Based Spectrum Sensing
61. 214 256
Nishant Goyal
Review of simple distributed Brillouin scattering modeling for temperature & strain
62. 215 259
Tushar Goyal and Gaurav Mittal

Computer Science & Engineering

S. No. Paper ID Paper Title Page No.


A Comprehensive Study of AODVv2-02 Routing Protocol in MANET
1. 1 263
Vikram Rao and Anuj Kumar Gupta
A Survey on Zone Routing Protocol
2. 2 267
Nafiza Mann, Abhilash Sharma and Anuj Kumar Gupta
A Review on Cloud Computing & its Current Issues
3. 6 272
Sandeep Kaur and Simarjit Kaur
A Survey On Load Balancing Techniques In Cloud Computing
4. 7 276
Lakhvir Kaur and Simarjit Kaur
Review of Various Fractal Detection Techniques in X-Ray Images
5. 10 280
Tanudeep Kaur and Anupam Garg
A Review of the Techniques used for Face Recognition
6. 11 284
Priyanka Bansal
Enhancement in intrusion detection system for WLAN using genetic algorithms
7. 15 289
Rupinder Singh and Sandeep Kautish
A Review on Reliability Issues in Cloud Service
8. 19 292
Gurpreet Kaur and Rajesh Kumar
Influence of Anti-Patterns on Software Maintenance: A Review
9. 20 296
Sharanpreet Kaur and Satwinder Singh
A review on Data clustering and an efficient k-Means Clustering Algorithm
10. 23 302
Sukhjeet Kaur and Satwinder Singh
Data Mining Technique to Predict Mutations from Human Genetic Information in Bioinformatics: A Review
11. 24 306
Manpreet Kaur and Shivani Kang
Analysis of AODV, OLSR and ZRP Routing Protocols in MANET under cooperative black hole attack
12. 25 310
Sukhman Sodhi, Rupinder Kaur Gurm and Gurjot Singh Sodhi
A Review on content based video retrieval
13. 27 314
Jaspreet Kaur Mann and Navjot Kaur
Web Services: An e-Government Perspective
14. 30 319
Monika Pathak, Gagandeep Kaur and Sukhdev Singh
A review on ACO based and BCO based routing protocols in MANET’s
15. 36 322
Jatinder Pal Singh
A Comprehensive Study on the Basics of Artificial Neural Network
16. 38 328
Neha Singla, Mandeep Kaur, Sandeep Kaur and Amandeep Kaur
A Survey of Routing Protocols in Mobile Ad-Hoc Network
17. 39 333
Preet Kamal Sharma and R.K. Bansal
The Beginning of Statistical Machine Translation System to convert Dogri into Hindi
18. 40 337
Manu Raj Moudgil and Preeti Dubey
A brief study about evolution of Named Entity Recognition
19. 43 342
Varinder Kaur and Amandeep Kaur Randhawa
Comparison between PID, fuzzy, genetic algorithm & particle swarm optimization soft computing techniques
20. 44 348
Rakesh Kumar and Nishant Nakra

A Survey on Routing Protocol for Wireless Sensor Network
21. 46 355
Mahima Bansal, Harsh Sadawarti and Abhilash Sharma
A Survey on Optical Amplifiers
22. 47 360
Kirandeep Kaur and Harsh Sadawarti
Managing Big Data with Apache Hadoop
23. 48 365
Maninderpal Singh Dhaliwal and Amandeep Singh Khangura
Gateway Based Energy Enhancement Protocol For Wireless Sensor Network
24. 52 372
Maninder Jeet Kaur and Avinash Jethi
Literature survey of AODV and DSR reactive routing protocols
25. 56 376
Charu Sharma and Harpreet Kaur
A Survey on Data Placement and Workload Scheduling Algorithms in Heterogeneous Network for Hadoop
26. 57 380
Ruchi Mittal and Harpreet Kaur
A Survey on Parts of Speech Tagging For Indian Languages
27. 58 387
Neetu Aggarwal and Amandeep Kaur Randhawa
Asymmetric Algorithms and Symmetric Algorithms – A Review
28. 61 391
Tannu Bala and Yogesh Kumar

A Novel Approach for Reducing Energy Consumption and Increasing Throughput in Wireless Sensor Network using Network Simulator 2
29. 64 395
Jagdish Bassi and Taranjit Aulakh
Comparison & Analysis of Binarization Technique for Various Types of Images Text
30. 77 399
Kanwaljeet Kaur, Monica Goyal and Rachna Rajput
Page Ranking Algorithms for Web Mining: A Review
31. 80 406
Charanjit Singh and Sandeep Kumar Kautish
Comparative analysis of various data mining classification algorithms
32. 86 411
Indu Bala and Yogesh Kumar
Simulative Investigation on VOIP over Wimax Communication Network
33. 89 416
Ambica and Avinash Jethi
Swarm Intelligence (SI)-Paradigm of Artificial Intelligence (AI)
34. 99 421
Pooja Marken and Renu Nagpal
Hierarchical Nepali Base Phrase Chunking Using HMM With Error Pruning
35. 106 424
Arindam Dey, Abhijit Paul and Bipul Syam Prukayastha
Cloud Computing: A Review on Security and Safety Measures
36. 111 429
Sandeep Kapur and Sandeep Kautish
A Survey on Multiprotocol Label Switching Virtual Private Network Techniques (MPLS VPN)
37. 115 435
Gurwinder Singh and Manuraj Moudgil
Comparison Analysis of TORA Reactive Routing Protocols on MANET based on the size of the network
38. 122 439
Emanpreet Kaur, Abhinash Singla and Rupinder Kaur
Comparison Analysis Of Zone Routing Protocol Based On The Size Of The Network
39. 123 443
Rupinder Kaur, Abhinash Singla and Emanpreet Kaur
The Performance Analysis of LMCS Network Model based on Propagation Environment Factors
40. 132 447
Sarbjeet Kaur Dhillon and Gurjeet Kaur
Performance Evaluation of Delay Tolerant Network Routing Protocols
41. 141 453
Vijay Kumar Samyal, Sukvinder Singh Bamber and Nirmal Singh

Road Traffic Control System in Cloud Computing: A Review
42. 146 457
Kapil Kumar and Pankaj Deep Kaur
A Review on Scheduling Issues in Cloud Computing
43. 147 461
Kapil Kumar, Abhinav Hans, Navdeep Singh and Ashish Sharma
Artificial Intelligence (AI): A Review
44. 151 464
Ishu Gupta, Damandeep Kaur and Preeti
Data Optimization using Transformation approach in Privacy Preserving Data Mining
45. 157 469
Rupinder Kaur and Meenakshi Bansal
Study on Design the Sensor for control the traffic light time as dynamic for efficient traffic control
46. 162 473
Divjot Kaur
Quality Aspects of Open Source Softwares
47. 172 476
Amitpal Singh and Harjot Kaur
A Study of Shannon and Renyi entropy based approaches for Image Segmentation
48. 177 480
Baljit Singh and Parmeet Kaur
Swarm Intelligence and Flocking Behavior
49. 183 486
Himani Girdhar and Ashish Girdhar
Internet Threats and Prevention – A Brief Review
50. 184 490
Sheenam Bhola, Sonamdeep Kaur and Gulshan Kumar
A Comparative performance analysis of Mobile Ad hoc Network (MANETs) Routing Protocol
51. 197 495
Sheenam Madaan
Unified Modeling Language For Database Systems and Computer Applications
52. 198 498
Jyoti Goyal
Securing Information Using Images: A Review
53. 202 501
Anjala Grover, Gulshan Ahuja and Avi Grover
Text document tokenization for word frequency count using rapid miner
54. 216 505
Gaurav Gupta and Sumit Malhotra

Mechanical Engineering

S. No. Paper ID Paper Title Page No.

Comparative High-Temperature Corrosion Behavior of D-Gun Spray Coatings on ASTM-SA213, T11 Steel in Molten Salt Environment
1. 41 509
Ankur Goyal, Rajbir Singh and Gurmail Singh

Analysis And Optimization Of Void Spaces In Single Ply Raw Material Using Finite Element Method & Fused Deposition Modeling
2. 53 517
Harmeet Singh, J.P.S. Oberoi and Rajmeet Singh

Analysis of the Enablers for Selection of Reverse Logistics Service Provider: An Interpretive Structural Modeling (ISM) Approach
3. 85 521
Arvind Jayant and Uttam Kumar
Sensitization behavior of GTAW austenitic stainless steel joints
4. 87 530
Subodh Kumar and Amandeep Singh Shahi

Grinding Fluid Applications using Simulated Coolant Nozzles and their Effect on Surface Properties in a Grinding Process
5. 88 534
Mandeep Singh, Jaskarn Singh and Yadwinder Pal Sharma

Optimization of Machining Parameters for surface roughness in Boring operation using RSM
6. 90 541
Gaurav Bansal and Jasmeet Singh
Elastic - Plastic & Creep Phenomenon In Solids
7. 108 547
Gaurav Verma and Kulwinder Singh
Finite Element Analysis of a Muff Coupling using CAE Tool
8. 121 551
Rajeev Kumar, Mayur Randive and Gurpreet Dhaul
Evaluation Of Supply Chain Collaboration: An Ahp Based Approach
9. 126 556
Arvind Jayant, Veepan Kumar, Ravi Kant and Rakesh Malviya
Impact of pinch strengths on healthy and non-healthy workers in manufacturing unit
10. 134 560
Ahsan Moazzam and Manoj Kumar
Evaluation of Total Productive Maintenance towards Manufacturing Performance: A Review
11. 139 565
Jagvir Pannu, Harmeet Singh and Gopal Dixit
Evaluation of Technological Innovation Capabilities of SMEs
12. 140 574
Taranjit Virk and Gopal Dixit
Techno-economic aspects in micromachining of H11 hot die steel mould using EDM - a case study
13. 163 579
Shalinder Chopra and Aprinder Singh Sandhu
Optimization of Surface Roughness In CNC Turning of Aluminium Using Anova Technique
14. 165 583
Karamjit Singh, Gurpreet Singh Bhangu and Supinder Singh Gill
Effects of Smart Grid Utilization, Performance, Environmental & Security Issues: A Review
15. 187 589
Parminder Pal Singh and Gagandeep Kaur
Recent Advances In Friction Stir Welding For Fabrication of Composite Materials
16. 191 593
Gurmeet Singh Cheema, Prem Sagar and Vikash Jangra
Recent Development In Aluminium Alloys For The Advance Composite Material In Industry
17. 205 597
Mohan Singh and Balwinder Singh Sidhu

Electrical Engineering

S. No. Paper ID Paper Title Page No.


THD Reduction in DVR by BFO-Fuzzy Logic
1. 17 603
Chirag Kalia and Divesh Kumar
Ripple Control in Converter
2. 21 608
Husanpreet Singh and Divesh Thareja
A half and Full-Wave Rectifier With Full Control of the Conducting Angle
3. 22 611
Jashandeep Singh and Simerjeet Singh
Multi-objective Optimization Using Linear Membership Function
4. 28 615
Gurpreet Kaur, Divesh Kumar and Manminder Kaur
Allocation of Multiple DGs and Capacitors in Distribution Networks by PSO Approach
5. 42 620
Satish Kansal, Rakesh Kumar Bansal and Divesh Kumar
A Review of Renewable Energy Supply and Energy Efficiency Technologies
6. 51 626
Ramandeep Kaur, Divesh Kumar and Ramandeep Kaur
Design and Optimization of DGS based T-Stub Microstrip Patch Antenna for Wireless Applications
7. 68 631
Lalit Dhiman and Simerpreet Singh


Feasibility Study of Hybrid Power Generation using Renewable Energy Resources in Tribal Mountainous Region of Himachal Pradesh
8. 70 638
Umesh Rathore, Ved Verma and Vikas Kashyap
Modeling & Simulation of Photovoltaic system to optimize the power output using Buck-Boost Converter
9. 93 643
Shilpa Garg and Divesh Kumar
Modelling and Analysis of Grid Connected Renewable Energy Sources with active power filter
10. 110 648
Harmeet Singh and Jasvir Singh
3D Finite Element Analysis for Core Losses in Transformer
11. 124 654
Sarpreet Kaur and Damanjeet Kaur
Overview Of Power Trading: Meaning, Scenario, Issues And Challenges
12. 128 658
Pooja Dogra
A Review Paper on Wireless Power Transmission Methods
13. 173 663
Ramandip Singh and Yadwinder Singh
Assessment of Bioenergy Potential for Distributed Power Generation from Crop Residue in Indian State Punjab
14. 175 667
Ram Singh
Potential of Microbial Biomass for Heavy Metal Removal: A Review
15. 188 671
Garima Mahajan and Dhiraj Sud
Efficiency comparison of New and Rewound Induction Motors used in Rice Mill
16. 201 673
Ramanpreet Singh and Jasvir Singh

Civil Engineering

S. No. Paper ID Paper Title Page No.


Need of Sustainable Development & Use of Demolished Aggregate for Highway Construction
1. 107 676
Raj Kumar Maini, Amandeep Singh and Saajanbir Singh

Applications of Acoustical & Noise Control Materials & Techniques for Effective Building Performance- A Review
2. 144 679
Jashandeep Kaur, Manu Nagpal and Kanwarjeet Singh Bedi
Nanocement Additives - A Carbon Neutral Strength Enhancing Material
3. 166 684
Jaskarn Singh, Gurpyar Singh, Parampreet Kaur and Akash Bhardwaj
Transpiring Concrete Trends: Review
4. 171 687
Kanwarjeet Singh Bedi, Ivjot Singh and Jaskarn Singh
Feasibility of Flyover on Unsignalised Intersection
5. 180 690
Abhishek Singla, Gurpreet Singh, Bohar Singh and Dapinderdeep Singh
Application of Geoinformatics in Automated Crop Inventory
6. 196 705
Sandeep Kumar Singla, O. P. Dubey and R.D. Garg

Basic & Applied Sciences

S. No. Paper ID Paper Title Page No.


A Study on PRP’s (Protein Rich Pulses) by Irradiating Co-60 Gamma Ray Photons
1. 12 713
Manoj Kumar Gupta, Gurinderjeet Singh, Shilpa Rani and Amrit Singh
Drifting effect of electron in multi-ion plasmas with non-extensive distribution of electrons.
2. 37 716
Parveen Bala and Sheenu Juneja

Second language learner: threads of communication skills in English language
3. 62 720
Chhavi Kapoor

An Experimental Investigation on Aluminium based Composite Material reinforced with Aluminium oxide, Magnesium and Rice Husk Ash Particles through Stir Casting Technique
4. 71 723
Rajiv Bharti, Sanjeev Kumar and Jaskirat Singh
Role of Youth and Media in modern communication system
5. 72 729
Harpreet Kaur
A Review Study On Presentation Of Positive Integers As Sum Of Squares
6. 95 733
Ashwani Sikri
Analysis of Material Degradation in Chlorine Environment of Power Plants
7. 104 744
Harminder Singh
Multiple scattering effects of gamma ray in some titanium compounds
8. 109 747
Lovedeep Singh, Amrit Singh, Pooja Rani and Manoj Kumar Gupta
Measurements of radon gas concentration in soil
9. 117 751
Navpreet Kaur, Amrit Singh, Manpreet Kaur and A S Dhaliwal
Determination of Conductivity of Human Tear Film at 9.8 GHz
10. 120 754
Namita Bansal, A S Dhaliwal and K S Mann
Biological Significance of Nitrogen Containing Heterocyclic compounds-A Mini Review
11. 137 755
Rajni Gupta
Steady State creep Behavior of Functionally Graded composite by using analytical method
12. 138 762
Ashish Singla, Manish Garg and Vinay Kumar Gupta
Getting Energy and A cleaner Environment With Nanotechnology
13. 148 767
Savita Sood
Optimal Real-time Dispatch for Integrated Energy Systems
14. 156 772
Prabhpreet Kaur and Sirdeep Singh
Engineering Fluorescence Lifetimes of II-VI Semiconductor Core/Shell Quantum Dots
15. 167 776
Gurvir Kaur and S.K. Tripathi
Human Values and Ethics in the Modern Technology Driven Global Society
16. 168 777
Sunita Rani, Vandana Sharma and Neetika
RFID: A Boom for Libraries
17. 176 780
Arvind Mittal, Amit Mittal and Uma Sharma
Toolkit for Fast Neutron Removal Cross-Section
18. 192 783
Kulwinder Singh Mann, Manmohan Singh Heer and Asha Rani
Green Computing
19. 199 789
Sarbjit Kaur and Sonika Dhiman

Recent Modified Fibers With Their Technological Developments in Different Fields of Application - An Overview
20. 207 794
Nisha Arora and Amit Madahar
Innovations in Textile Composite Designing and Their Applications
21. 210 799
Rajeev Varshney and Amit Madahar

Food Engineering

S. No. Paper ID Paper Title Page No.

Effect of plasticizer on the properties of pellets made from agro-industrial wastes
1. 142 802
Shumaila Jan, Kulsum Jan, C.S. Riar and D.C. Saxena

Textural and Microstructural Properties of Extruded Snack prepared from rice flour, corn flour and deoiled rice bran by twin screw extrusion
2. 143 807
Renu Sharma, Raj Kumar, Tanuja Srivastava and D.C. Saxena
Extruded Products Analog To Meat
3. 150 813
Renu Sharma, Tanuja Srivastava, D C Saxena and Raj Kumar
Biotechnology & Genetic Engineering: Enhancement in food quality and quantity
4. 174 818
Joni Lal and Kulbhushan Rana
Effect of physical properties on flow ability of commercial rice flour/powder for effective bulk handling
5. 189 820
Shumaila Jan, Syed Insha Rafiq and D.C Saxena
Extraction of starch from differently treated horse chestnut slices
6. 190 825
Syed Insha Rafiq, Shumaila Jan,Sukhcharn Singh and D.C Saxena
Industrial effluents and human health
7. 217 830
Anchla Rupal

Agriculture Sciences

S. No. | Paper ID | Paper Title (Authors) | Page No.

1. 29: Determination of Attenuation Coefficient and Water Content of Broccoli Leaves Using Beta Particles (Komal Kirandeep, Parveen Bala and Amandeep Sharma), p. 832
2. 45: The Study of Cloud Computing Challenges in Agriculture with Special Reference to Sangli District (MS) (Dalvi Teja Satej and Kumbhar S. R.), p. 835
3. 169: Baler Technology for Paddy Residue Management: Need of the Hour (Bharat Singh Bhattu and Ankit Sharma), p. 838
4. 170: A Study on Constraints of Broiler Farming Entrepreneurship in Mansa District of Punjab (Bharat Singh Bhattu, Ankit Sharma and Gurdeep Singh), p. 840

Business Administration

S. No. | Paper ID | Paper Title (Authors) | Page No.

1. 32: Constraints and Opportunities Faced by Women Entrepreneurs in Developing Countries, with Special Context to India (Kamaljit Singh and Deepak Goyal), p. 845
2. 33: A Descriptive Study of the Marketing Mix Strategies of Milkfood Ltd. (Kamaljit Singh and Ramandeep Kaur), p. 851
3. 50: Cohorts in Marketing: A Review Paper (Amandeep Singh Garai and Gurbir Singh), p. 855
4. 78: A Study of Corporate Social Responsibility in India (Sandeep Kaur and Seema Jain), p. 860
5. 79: Emerging Role of Information Technology in Banking Sector's Development of India (Seema Jain and Sandeep Kaur), p. 865
6. 94: Customer Satisfaction towards WhatsApp (Parveen Singla and Rajinder Kumar Uppal), p. 871
7. 130: Opportunities and Challenges in the Era of Globalization in the 21st Century (Pankaj Goyal and Kirna Rani), p. 876
8. 135: Employment Generation in Indian Economy by Micro, Small & Medium Enterprises (MSMEs): An Analytical Approach (Parmjot Singh and Rakesh Kumar), p. 880
9. 153: Public Distribution System as Development Planning & Policy (Gurdeep Kaur Ghuman and Pawan Kumar Dhiman), p. 887

Electronics & Communication Engineering
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Content Retrieval from Historical Manuscript Images: A Review

Er. Kitty Gupta, Student, ECE Department, BGIET Sangrur, Punjab (kittygpt3@gmail.com)
Er. Rishav Dewan, Assistant Professor, ECE Department, BGIET Sangrur, Punjab (rishavdewan@gmail.com)

ABSTRACT
Ancient documents play an important role in history. Information regarding literature, tradition and culture is kept in these documents. These heaps of documents are degraded because of climatic circumstances, low paper quality and inappropriate handling. This paper reviews the techniques used to retrieve the necessary content from these ancient documents. The techniques include preprocessing, image binarization, thresholding methods and post-processing methods. Further, during scanning a document can get corrupted with unwanted lines or signals, termed noise, that should be eliminated.

General Terms
Document Image Processing, binarization, PSNR

Keywords
Degraded document image; preprocessing; thresholding; post-processing.

1. INTRODUCTION
There are many libraries and places in which degraded historical documents are preserved. These manuscripts are degraded because of various fault conditions in the environment or the low quality of the paper used. Another problem with these documents is that, with the decay of time, the ink of the front page gets disfigured onto the facing page. Such problems must be corrected by different techniques. Image binarization is the technique by which the text can be retrieved from the document: binarization breaks the document image into two parts, image background and foreground text, and the document text edges are digitized in the process. Thresholding methods are used for binarization of the image. There are two types of thresholding methods: local and global. Separation of foreground and background of a degraded document image is done by the global thresholding method, while local thresholding is used to get information about the pixels and the local area of the document image. It has also been observed that global thresholding performs best in comparison with local thresholding. Another thresholding method is Otsu's method, named after Nobuyuki Otsu, which is used to detect the text edges. Clear bimodal patterns are not obtained in a degraded document, so to find the text stroke edges in the image, correct contrast construction is very important; providing a uniform background to the degraded document image then allows the text edges to be identified by separating text and background. Edge detection methods are used for detecting the edges. Then, by comparing the intensities in the document image, the contents can be retrieved: for comparison, two values are assigned, '0' for background and '1' for edges. The detected edges have a clear bimodal pattern, obtained by binarization, and the bimodal patterns result in text edge sharpening. In this way useful contents can be retrieved from the document images. This paper focuses on a review of the methods and algorithms by which the quality of degraded documents can be enhanced.

(a) (b)
Figure 1: Degraded Historical Documents

Input image → Pre-processing → Binarization → Post-processing → Output image

Figure 2: Block diagram for enhancing degraded documents

2. METHODOLOGY
Denoising and enhancing degraded historical documents are very important tasks. The existing technology involves the following steps: (a) pre-processing, (b) binarization and (c) post-processing. Each step is explained below.

2.1 Pre-processing
Pre-processing is also known as pixel-level processing. It includes conversion of a coloured image into a grey-scale image. Unwanted noise and lines are reduced by using noise-removal filters. The noise present in document images includes margin noise, Gaussian noise and impulse noise; the filters used to reduce it include the Gaussian filter, median filter, bilateral filter and guided image filter. The bilateral filter reduces noise without preserving edges, and this limitation can be overcome by using the guided image filter.

2.2 Binarization
Binarization is the process of converting a grey image into binary form, separating the foreground and background parts of the image from each other. It is done by thresholding methods; both global and local thresholding fall under this process. Another thresholding method is Otsu's method, which selects the threshold that minimizes the weighted sum of the variances of the two pixel classes.

2.3 Post-processing
To enhance the performance of binarization, a post-processing technique is applied. During scanning, text may deviate from the baseline; this type of problem is corrected in the post-processing step. Text extraction and edge sharpening are also included here.

3. LITERATURE SURVEY
In 2010, Shazia Akram et al. [11] gave an overview of various techniques to enhance document images. The techniques include preprocessing, feature extraction and classification. Preprocessing is the stage that enhances the quality of the image; it is also known as pixel-level processing and includes image acquisition, noise removal and image de-skewing methods. Useful information or data can be extracted by using feature extraction. Classification distributes documents into various categories, which improves the indexing efficiency of document storage. The paper surveys these techniques for document image processing and concludes that new methods must be developed to enhance document images.

C. Patvardhan et al. (2012) [3] proposed the discrete curvelet transform for denoising document images. Document images corrupted by Gaussian and impulse noise, which get added during scanning and transmission, are denoised by the curvelet transform. Hard thresholding and the global Otsu method were also used to smooth the boundaries. In the wavelet scheme, the Haar wavelet was used as it does not blur the image. The wavelet scheme, a wavelet-based scheme with edge preservation and the curvelet transform were compared with respect to (a) F-Measure, (b) Negative Rate Metric, (c) normalized correlation and (d) peak signal-to-noise ratio (PSNR). It was concluded that the curvelet transform gives better results as it preserves the edge features of the noisy image and reduces Gaussian and impulse noise.

B. Gangamma et al. (2012) [1] combined two methods of image processing: filtering and mathematical morphology. The bilateral filter reduces the unwanted noise in the images but is unable to smooth the edges of the image. Mathematical morphology helps in the extraction of edges, shapes and cracks in the text. Further, global thresholding was used to binarize the image. The results of the proposed method were compared with the Gaussian and average filters, and it was concluded that the proposed method performs better on degraded document images.

Hossein Ziaei Nafchi et al. (2013) [5] proposed a post-processing method based on a phase-preserved denoised image and phase-congruency features extracted from the document image. Non-orthogonal log-Gabor filters were used to obtain the phase and amplitude at each point of the image. The maximum moment of phase congruency covariance (MMPCC) and the locally weighted mean phase angle (LWMPA) were used to detect the edges and the structure of the foreground text, respectively. Otsu's method was then applied to get the binarized image, and a median filter was used to remove the noise. The results were evaluated with F-Measure, recall and distance reciprocal distortion (DRD). The methods were tested on the DIBCO and H-DIBCO datasets and showed better results on the DIBCO datasets in terms of F-Measure, recall and DRD.

Md Iqbal Quraishi et al. (2013) [7] compared two approaches to restore degraded historical document images. In the first case, particle swarm optimization (PSO) with a bilateral filter was applied; in the second case, a non-linear filter and a bilateral filter were applied. The two techniques were then compared in terms of PSNR and NAE, and the results favour particle swarm optimization.

In 2014, Jagroop Kaur et al. [8] proposed the use of the guided image filter, an edge-preserving filter. The proposed method works in several steps. In the first step, the guided image filter is applied to smooth the degraded image. Secondly, adaptive image contrast enhancement is applied to combine the contrast and gradient of the local image. Then final binarization is done with thresholding methods. The proposed method was compared with older methods in terms of F-Measure, specificity, geometric accuracy and PSNR, and showed better results. Another advantage of the guided image filter is that it removes noise from the degraded document to a greater extent.
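Several of the surveyed papers score their results with PSNR. As a reference point (our sketch, not code from any of the cited works), the metric for 8-bit images with peak value 255 is:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-size 8-bit images."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(peak ** 2 / mse)

a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 10  # a single corrupted pixel
print(round(psnr(a, b), 2))  # prints 40.17
```

Higher PSNR means the restored image is closer to the reference; the comparisons above use it alongside F-Measure and normalized correlation.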


S. Tamilselvan et al. (2014) [12] proposed a binarization technique for retrieving contents from degraded document images. The binarization is performed in several steps. First, the correct contrast of the image is constructed. Then the edges of the image are detected by using Otsu's method and Canny edge detection. After edge detection, the necessary text is extracted from the image, and finally a post-processing method is applied to sharpen the text edges. A clear bimodal pattern of the text is extracted without blurring the image. From the experimental results, the threshold value of the output image was calculated and ranges from 0.3 to 0.9. It was also concluded that contrast construction is the most valuable step in the proposed method.

Haneen Khader et al. (2014) [4] describe a novel annotation tool for handwritten historical images, applied to English and Arabic texts for text segmentation. K-means and Otsu's thresholding methods were used in image binarization, which comes under the pre-processing step; the suitable binarization method is selected depending on the quality of the image. On the output of thresholding, segmentation is applied to detect the lines and texts in the image. Finally, the annotation tool is applied, which determines whether the text is English or Arabic, and a rectangular box appears on each word. The last step of the proposed method is saving the annotation by creating an XML file. The aim of this tool is to eliminate segmentation errors.

Hossein Ziaei Nafchi et al. (2014) [6] introduced a phase-based binarization method which works in three steps: (a) pre-processing, (b) binarization and (c) post-processing. Denoising of the image is considered in the pre-processing step: a median filter is used to remove unwanted noise and lines, and a Gaussian filter to separate foreground from background. The main binarization is based on MMPCC and LWMPA. In post-processing, a Gaussian filter is applied to enhance the binarization. Further, PhaseGT is used to simplify and speed up the ground-truth generation process. These methods were analyzed on the DIBCO, H-DIBCO, PHIBD and BICKLEY DIARY datasets.

Table 1: Comparison of various methods

S.no | Reference | Method Used | Conclusion
1 | Shazia Akram et al. [11] | Preprocessing, feature extraction and classification | Gives an overview of the techniques used to enhance degraded digital document images.
2 | C. Patvardhan et al. [3] | Discrete curvelet transform, wavelet scheme and wavelet scheme with edge preservation; hard thresholding and Otsu's method in the wavelet scheme | After comparing the three methods, the curvelet transform gives better results as it reduces noise and preserves edge features.
3 | B. Gangamma et al. [1] | Filtering and mathematical morphology; bilateral filter; binarization by global thresholding | One disadvantage of the bilateral filter is that it is not edge-preserving; another filter can be used to preserve edges.
4 | Hossein Ziaei Nafchi et al. [5] | Post-processing method, MMPCC, LWMPA, Otsu's method and median filter | On the DIBCO dataset the proposed method shows better results.
5 | Md Iqbal Quraishi et al. [7] | PSO, bilateral filter and non-linear filters | PSO performs better.
6 | Jagroop Kaur et al. [8] | Guided image filter, binarization | The guided image filter is an edge-preserving filter and can also be used to detect brain tumours.
7 | S. Tamilselvan et al. [12] | Contrast construction, Otsu's method and post-processing | Contrast construction gives better results.
8 | Haneen Khader et al. [4] | Annotation tool, image segmentation and Otsu's method | The annotation tool determines whether the text is English or Arabic.

4. CONCLUSION
This review paper analyzes the various algorithms and techniques used for enhancing degraded historical documents or manuscripts. The documents get degraded due to various environmental conditions. Every technique has its own advantages and disadvantages. Usually, binarization gives better results for improving the quality of an image; it is done by various thresholding methods. Filters are used to remove the noise from a degraded image, and edge-preserving filters show better results. Every algorithm is compared on the basis of parameters such as PSNR, F-Measure and NC. In future, new algorithms may be developed for other historical images, such as text on stone monuments.
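As a companion to the table and conclusion above, the F-Measure used throughout these comparisons combines the precision and recall of the detected foreground pixels against a ground-truth mask. A minimal sketch (ours, with hypothetical toy masks, not data from any cited paper):

```python
def f_measure(predicted, ground_truth):
    """F-measure of foreground (1) pixels; inputs are equal-length 0/1 sequences."""
    tp = sum(p == 1 and g == 1 for p, g in zip(predicted, ground_truth))
    fp = sum(p == 1 and g == 0 for p, g in zip(predicted, ground_truth))
    fn = sum(p == 0 and g == 1 for p, g in zip(predicted, ground_truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gt   = [1, 1, 0, 0, 1, 0]  # ground-truth foreground mask (flattened)
pred = [1, 0, 0, 0, 1, 1]  # binarization output
print(round(f_measure(pred, gt), 3))  # prints 0.667
```

This is the per-pixel variant used in the DIBCO-style evaluations referenced above; DRD and NRM penalize errors differently but are computed from the same confusion counts.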


5. REFERENCES
[1] B. Gangamma, Srikanta Murthy K. and Arun Vikas Singh, "Restoration of Degraded Historical Document Image", Journal of Emerging Trends in Computing and Information Science, Vol. 3, Issue 5, May 2012.
[2] Bolan Su, Shijian Lu and Chew Lim Tan, "Robust Document Image Binarization Technique for Degraded Document Images", IEEE Transactions on Image Processing, Vol. 22, Issue 4, April 2013.
[3] C. Patvardhan, A. K. Verma and C. V. Lakshmi, "Denoising of Document Images Using Discrete Curvelet Transform for OCR Applications", International Journal of Computer Applications (0975-8887), Vol. 55, Issue 55, October 2012.
[4] Haneen Khader and Abeer Al-Marridi, "An Interactive Annotation Tool for Indexing Historical Manuscripts", IEEE, 2014.
[5] Hossein Ziaei Nafchi and Reza Farrahi Moghaddam, "Application of Phase-Based Features and Denoising in Postprocessing and Binarization of Historical Document Images", IEEE 12th International Conference on Document Analysis and Recognition, pp. 220-224, 2013.
[6] Hossein Ziaei Nafchi and Reza Farrahi Moghaddam, "Phase-Based Binarization of Ancient Document Images: Model and Applications", IEEE Transactions on Image Processing, Vol. 23, Issue 7, July 2014.
[7] Iqbal Quraishi and Mallika De, "A Novel Hybrid Approach to Restore Historical Degraded Documents", IEEE International Conference on Intelligent Systems and Signal Processing (ISSP), 2013.
[8] Jagroop Kaur and Rajiv Mahajan, "Improved Degraded Document Image Binarization Using Guided Image Filter", International Journal of Advance Research in Computer Science and Software Engineering, Vol. 4, Issue 9, September 2014.
[9] Konstantinos Ntirogiannis, Basilis Gatos and Ioannis Pratikakis, "Performance Evaluation Methodology for Historical Document Image Binarization", IEEE Transactions on Image Processing, Vol. 22, Issue 2, February 2013.
[10] Manish Yadav, Swati Yadav and Dilip Sharma, "Image Denoising Using Orthonormal Wavelet Transform with Stein Unbiased Risk Estimator", IEEE Students' Conference on Electrical, Electronics and Computer Science, 2014.
[11] Shazia Akram, Mehraj-Ud-Din Dar and Aasia Quyoum, "Document Image Processing: A Review", International Journal of Computer Applications (0975-8887), Vol. 10, Issue 5, November 2010.
[12] S. Tamilselvan and S. G. Sowmya, "Content Retrieval from Degraded Document Images Using Binarization Technique", IEEE International Conference on Computation of Power, Energy, Information and Communication (ICCPEIC), pp. 422-426, 2014.


Development of Dual Band Planar Inverted-F Antenna for Wireless Applications

Jashandeep Singh, M.Tech Student, ECE Department, BGIET, Sangrur (jsdeol1989@gmail.com)
Sushil Kakkar, Assistant Professor, ECE Department, BGIET, Sangrur (sushil.kakkar@bgiet.ac.in)
Shweta Rani, Associate Professor, ECE Department, BGIET, Sangrur (shwetaranee@gmail.com)

ABSTRACT
A dual-band planar inverted-F antenna (PIFA) for wireless applications is presented in this paper. The proposed antenna is compact in size and designed on an FR4 substrate. The antenna consists of a slotted radiator supported by a shorting wall and a small ground plane. A square-shaped slot in the radiating patch has been used to introduce dual-band operation into the proposed antenna. The structure is designed and optimized to operate at 2.02 GHz and 6.1 GHz with achievable bandwidths of 15.53% and 14.23% respectively. These two bands cover the existing wireless communication frequency bands from 1.9-6.5 GHz. Good return loss, antenna gain and radiation pattern characteristics are obtained in the frequency bands of interest. The structural dimensions of the proposed antenna are optimized by using the HFSS EM solver. Details of the dual-band PIFA characteristics are presented and studied.

Keywords
Planar Inverted-F Antenna (PIFA), Return Loss, Gain, Bandwidth

1. INTRODUCTION
In the past decades, cellular communications have become a ubiquitous part of modern life. The desire of people to communicate effectively while being mobile has become an incentive for integrating mobile communications into terrestrial and satellite wireless systems [1]. The mobile communication industry has already developed enormously, bringing fast and reliable infrastructure to people's disposal. Yet wireless communication systems are still at the centre of extensive academic and technological research and development, with constant demand for more compact, faster and more reliable devices and services [2]. In recent years, the demand for compact, smaller-than-palm-size communication devices has increased significantly. Communication systems demand antennas that exhibit standard properties such as reduced size, moderate gain, and broadband and multiband operation [3, 4]. With the increasing interest in covering various frequency bands, attention has been drawn toward the study of multiband antennas. For multiband antennas, achieving the maximum possible number of frequency bands with suitable return loss and radiation pattern is desirable [5].

Planar inverted-F antennas are widely used in a variety of communication systems, especially in mobile phone handsets [6]. PIFAs have features such as small size, light weight, low profile, simple fabrication and relatively low specific absorption rate (SAR). Due to low absorption of energy in the human body, this antenna provides good efficiency. In recent years there have been a number of PIFA designs with different configurations, achieving single- and multiple-band operation by using different shapes of slots; PIFAs can cover two or more standard frequency bands owing to their thin planar structures [7]. Truncated-corner techniques, meandered strips and meandered shapes have been used to create multiple-band operation, and several techniques have been used to improve the bandwidth of PIFA antennas [8].

The introduction of various resonant elements in order to create a multiband PIFA is a very common approach. Another method calls for the addition of parasitic patches with resonant lengths close to the frequency band where the bandwidth improvement is required. The inclusion of slots in the ground plane and in the radiating structure has also been used to enhance the bandwidth. These antennas are generally designed to cover one or more wireless communication bands such as the Global System for Mobile Communications (GSM 900 and 800), Global Positioning System (GPS 1400 and 1575), Personal Communication System (PCS 1800 and 1900), Digital Communication System (DCS 1800), Universal Mobile Telecommunication System (UMTS 2000), 3G IMT-2000, 4G LTE (700, 1700, 2300, 2600), Wireless Local Area Networks (WLAN) and Worldwide Interoperability for Microwave Access (WiMAX) [9].

In this work a compact PIFA is proposed and presented for various wireless applications. The effects of different shorting-wall widths are studied. The proposed antenna satisfies the return loss, VSWR and bandwidth requirements for applications within the frequency range from 1.9-6.5 GHz. The reflection coefficient, radiation pattern, VSWR and gain are characterized.

2. DESIGN AND STRUCTURE
Figure 1 shows the geometry of the proposed antenna with detailed dimensions. The antenna is designed on an FR4 substrate with a dielectric constant εr = 4.4, a loss tangent of 0.02 and a substrate thickness h = 1.57 mm. Air is used as the dielectric between the FR4 substrate and the top radiating patch. The dimensional parameters of the proposed antenna are detailed in Table 1. A square-shaped slot of suitable dimensions is cut in the antenna element to get the required bandwidth; the radiating-element slot produces miniaturization together with dual- and wide-band operation. The proposed antenna has a very small size and is physically thin.

The antenna element is fed by a coaxial probe at a suitable location to get better impedance matching. The thickness of the copper used in the prototype is 0.16 mm. The radiating element of the PIFA is grounded with a shorting strip. The optimization and simulation of the antenna are carried out using the High Frequency Structure Simulator (HFSS). HFSS employs the Finite Element Method (FEM) and adaptive meshing for 3D EM problems, and can be used to calculate parameters such as S-parameters, resonant frequencies and fields [3]. The 3D model of the proposed antenna generated in HFSS is shown in Figure 2. The antenna impedance matching is achieved by controlling the distance between the feed line and the shorting strip. Optimized dimensions of the antenna are given in Table 1.

Table 1. Dimensions of Proposed Antenna

Sr. No. | Parameter | Dimension (mm)
1 | Length of Patch, L1 | 26
2 | Width of Patch, W1 | 25.6
3 | Width of First Supporting Wall, S1 | 2
4 | Height of Supporting Wall, S2 | 3.57
5 | Width of Second Supporting Wall, S3 | 3
6 | Width of Shorting Strip, S4 | 2
7 | Distance b/w Right Corner and Slot, L2 | 12
8 | Distance b/w Left Corner and Slot, L4 | 2
9 | Width of Radiating Element, L3 | 20
10 | Distance b/w Top Corner and Slot, W3 | 2
11 | Distance b/w Lower Corner and Slot, W4 | 11.6

Fig. 1: Radiating Element of Proposed Antenna
Fig. 2: 3-D Model of Proposed PIFA Generated in HFSS

3. RESULTS AND DISCUSSION

3.1 Return Loss
The simulated return loss (S11) characteristics of the proposed antenna are shown in Fig. 3. From the graph it can be seen that the resonant frequencies achieved are 2.06 GHz and 6.11 GHz, with return losses of -24.05 dB and -44.45 dB respectively. The bandwidths defined by S11 < -6 dB for the two bands are 15.53% (1.917-2.233 GHz) for the 2.06 GHz band and 14.23% (5.6-6.53 GHz) for the 6.1 GHz band. These bandwidths satisfy the requirements for various wireless applications.

3.2 VSWR
The Voltage Standing Wave Ratio (VSWR) is the ratio of the peak amplitude to the minimum amplitude of the voltage standing wave, and is always a real, positive number for antennas [7]. The smaller the VSWR, the better the antenna is matched to the transmission line and the more power is delivered to the antenna. Fig. 4 illustrates that the VSWR is 0.8 dB at 2.06 GHz and 0.7 dB at 6.1 GHz. It is also observed from the results that at these resonant frequencies the VSWR is below 2 dB, which is desirable for most wireless applications.

Fig. 3: Return Loss Graph of Proposed PIFA
Fig. 4: VSWR Plot of the Proposed Antenna

3.3 Radiation Pattern
It can be seen from the plot of Fig. 5 that the antenna is a good radiator with an almost omnidirectional radiation pattern, which supports multiple standards.

Fig. 5: Radiation Pattern of Proposed PIFA

3.4 Gain
Gain and efficiency are two important parameters of an antenna. The overall gain of the antenna obtained after simulating the PIFA structure is shown in Fig. 6. A peak gain of 3.70 dB has been achieved. This is a moderate value and is considered good for the overall performance of the antenna.

Fig. 6: 3-D Polar Plot Showing Gain

4. CONCLUSION
In this paper a dual-band planar inverted-F antenna has been presented which covers the frequencies between 1.9 and 6.5 GHz. The proposed antenna has a simple configuration and is simply printed on an FR4 substrate. It has been found that making slots in the radiating patch and slits in the ground plane provides a simple multiband PIFA with enhanced bandwidth. The proposed antenna can cover the UMTS, DCS, PCS, GPS, 3G and 4G bands as well as additional frequency bands, and provides good return loss, VSWR and radiation patterns.

5. REFERENCES
[1] X. Zhang and A. Zhao, "Enhanced Bandwidth PIFA Antenna with a Slot on Ground Plane", Progress In Electromagnetics Research Symposium, China, March 23-27, 2009.
[2] N. Ojaroudi, H. Ojaroudi and N. Ghadimi, "Quad-Band Planar Inverted-F Antenna (PIFA) for Wireless Communication Systems", Progress In Electromagnetics Research Letters, Vol. 45, pp. 51-56, 2014.
[3] H. F. Abu Tarboush, R. Nilavalan and T. Peter, "Multiband Inverted-F Antenna with Independent Bands for Small and Slim Cellular Mobile Handsets", IEEE Transactions on Antennas and Propagation, Vol. 59, No. 7, 2011.
[4] S. Kakkar and S. Rani, "A Novel Antenna Design with Fractal-Shaped DGS Using PSO for Emergency Management", International Journal of Electronics Letters, Vol. 1, No. 3, pp. 108-117, 2013.
[5] N. Kumar and G. Saini, "A Multiband PIFA with Slotted Ground Plane for Personal Communication Handheld Devices", International Journal of

Scientific and Research Publications, ISSN 2250-3153.
[6] S. Kakkar, S. Rani and A. P. Singh, "On the Resonant Behaviour Analysis of Small-Size Slot Antenna with Different Substrates", International Journal of Computer Applications, pp. 10-12, 2012.
[7] S. Rani and A. P. Singh, "On the Design and Optimization of New Fractal Antenna Using PSO", International Journal of Electronics, Vol. 100, No. 10, pp. 1383-1397, 2012.
[8] N. Kumar and G. Saini, "A Multiband PIFA with Slotted Ground Plane for Personal Communication Handheld Devices", International Journal of Scientific and Research Publications, ISSN 2250-3153.
[9] R. R. Raut Dessai and H. G. Virani, "Triple Band Planar Inverted F-Antenna for LTE, Wi-Fi and WiMAX Applications", International Journal of Engineering Research & Technology, Vol. 2, Issue 3, 2013.


Power Oscillation Damping and Voltage Stability Improvement Using SSSC Integrated with SMES

Amritpal Singh, Research Scholar, BGIET Sangrur (amritpal290@gmail.com)
Divesh Kumar, Research Scholar, BGIET Sangrur (diveshthareja@gmail.com)

ABSTRACT
The power system network is becoming more complex nowadays, so maintaining the stability of the power system is very difficult. We have therefore designed a 12-pulse based Static Synchronous Series Compensator (SSSC) which is operated with and without integration of Superconducting Magnetic Energy Storage (SMES) for enhancing voltage stability and power oscillation damping in a multi-area system. A control scheme for the chopper circuit of the SMES coil is designed. The model of the power system is designed in the MATLAB/SIMULINK environment and tested for various conditions; the SSSC with and without SMES is analyzed for various transient disturbances.

Keywords
Static Synchronous Series Compensator (SSSC), Superconducting Magnetic Energy Storage (SMES)

1. INTRODUCTION
Today's modern interconnected power system is highly complex in nature. One of the most important requirements during the operation of the electric power system is reliability and security, and maintaining the stability of such an interconnected multi-area power system has become a cumbersome task. As a countermeasure against these problems, Flexible AC Transmission System (FACTS) devices were proposed. Nowadays, Energy Storage Systems (ESS) are interfaced with FACTS devices to increase their performance. In bulk power transmission systems, power-electronics-based controllers called FACTS, used for simultaneous control of real and reactive power flow, have been proposed in the literature.

Presently, FACTS devices are a viable alternative as they allow controlling voltages and currents of appropriate magnitude for the electric power system at an increasingly lower cost. However, the comparable field of knowledge on FACTS/ESS control is quite limited. Therefore, in this work a methodology is proposed to control the power flow using FACTS controllers with energy storage; this can be carried out with switching power-converter-based FACTS controllers. Among the different models of FACTS devices, the SSSC is proposed as the most adequate for the present application, as discussed below. The DC inner bus of the SSSC allows incorporating a substantial amount of energy storage in order to enlarge the degrees of freedom of the SSSC. This paper proposes a model of the SSSC with and without SMES to carry out power flow control in the electric system. The SMES coil has been connected to the Voltage Source Converter (VSC) through a dc-dc chopper. Details on the operation and control strategy of the SSSC, the chopper control of the SMES, and simulation results for the SSSC with and without SMES are presented in the subsequent sections.

2. SSSC
An SSSC is built with thyristors with turn-off capability, such as GTOs, today IGCTs, or increasingly IGBTs. The static line between the current limitations has a certain steepness determining the control characteristic for the voltage. The advantage of a STATCOM is that the reactive power provision is independent of the actual voltage at the connection point. This can be seen in the diagram, the maximum currents being independent of the voltage, in comparison to the SVC. This means that even during the most severe contingencies the STATCOM keeps its full capability. The basic STATCOM structure and its voltage/current characteristic are shown in Fig. 1.

Fig 1: STATCOM structure and voltage/current characteristic

The three-phase STATCOM makes use of the fact that, on a three-phase, fundamental-frequency, steady-state basis, the instantaneous power entering a purely reactive device must be zero. The reactive power in each phase is supplied by circulating the instantaneous real power between the phases. This is
device and also to exchange active and reactive power with achieved by firing the GTO/diode switches in a manner that
utility grid. Based on a previous study of all energy storage maintains the phase difference between the ac bus voltage ES
technologies currently available, the use of SMES is proposed and the STATCOM generated voltage VS. Ideally it is possible
for the considered application has been presented. Novelreactive to construct a device based on circulating instantaneous power
power controllers for STATCOM and SSSC have been reported. which has no energy storage device (i.e. no dc capacitor).


2.1 CONTROL SCHEME FOR SSSC
The 6-pulse STATCOM using fundamental switching will of course produce the 6N ± 1 harmonics. A 6-pulse STATCOM is shown in fig. 2. There are a variety of methods to decrease the harmonics. These include the basic 12-pulse configuration with parallel star/delta transformer connections, complete elimination of the 5th and 7th harmonic currents using a series connection of star/star and star/delta transformers, and a quasi-12-pulse method with a single star-star transformer and two secondary windings, using control of the firing angle to produce a 30° phase shift between the two 6-pulse bridges.

Fig.2 Six Pulse STATCOM

The dc sides of the converters are connected in parallel and share the same dc bus. The GTO valves are switched at fundamental frequency, and the dc voltage varies according to the phase control technique used to control the output voltage.

The SSSC switching is synchronized with respect to the transmission line current Iline, and its rms magnitude is controlled by transiently changing the phase shift α between Vdc and Vinj. The change in the phase shift between the SSSC output voltage and the line current results in a change of the dc capacitor voltage Vdc, which ultimately changes the magnitude of the SSSC output voltage VSSSC and the magnitude of the transmission line current Iline. The SSSC output voltage VSSSC is controlled by a simple closed loop: the per-unit value of the measured line voltage is compared with the injected voltage, and the error between these two values is passed to a PI controller. The output of the PI controller is the angle α, which is added to the synchronizing signal passed to the gate pulse generator by the current synchronization block. To this signal, an angle of –π/2 or +π/2 is added, since the SSSC output voltage lags or leads the line current by 90° depending on the desired capacitive or inductive operation.
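The closed loop described above can be sketched in Python. This is a minimal illustration of the control idea only, not the actual Simulink implementation; the gain values, time step and signal names below are assumed for the sketch.

```python
import math

class PIController:
    """Discrete PI controller producing the phase shift alpha (radians)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, v_ref_pu, v_inj_pu):
        # Error between measured line voltage and injected voltage (per unit).
        error = v_ref_pu - v_inj_pu
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def gate_signal_angle(alpha, sync_angle, capacitive=True):
    """Add alpha to the synchronizing signal, then shift by -pi/2 or +pi/2
    so the injected voltage lags or leads the line current by 90 degrees."""
    shift = -math.pi / 2 if capacitive else math.pi / 2
    return sync_angle + alpha + shift

# One control step with assumed gains and signals.
pi_ctrl = PIController(kp=0.5, ki=20.0, dt=1e-4)
alpha = pi_ctrl.step(v_ref_pu=1.0, v_inj_pu=0.95)
theta = gate_signal_angle(alpha, sync_angle=0.0, capacitive=True)
```

In operation this step would run once per control cycle, with the resulting angle driving the gate pulse generator.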

3. SMES
The combination of two fundamental principles (current with very limited losses, and energy storage in a magnetic field) provides the potential for highly efficient storage of electrical energy in a superconducting coil. Operationally, SMES is different from other storage technologies in that a continuously circulating current within the superconducting coil produces the stored energy. At several points during the SMES development process, researchers recognized that the rapid discharge potential of SMES, together with the relatively high energy-related (coil) costs for bulk storage, made smaller systems more attractive, and that significantly reducing the storage time would increase the economic viability of the technology. Thus, there has also been considerable development on SMES for pulsed power systems. A SMES device is made up of a superconducting coil, a power conditioning system, a refrigerator and a vacuum vessel to keep the coil at low temperature, see Figure 3.

Fig.3 Superconducting Magnetic Energy Storage device

Energy is stored in the magnetic field created by the flow of direct current in the coil wire. In general, when current is passed through a wire, energy is dissipated as heat due to the resistance of the wire. However, if the wire is made from a superconducting material such as lead, mercury or vanadium, zero resistance occurs, so energy can be stored with practically no losses.

Due to its rapid discharge capabilities, the technology has been implemented on electric power systems for pulsed power and system stability applications. The discharge capabilities of SMES compared to several other energy storage technologies are illustrated in Fig. 4: SMES has a relatively low power system rating, but a high discharge rate.

Fig 4: power rating and the discharge time of several energy storage technologies

The overall efficiency of SMES is in the region of 90% to 99%. SMES has very fast discharge times, but only for very short periods, usually taking less than one minute for a full discharge. Discharging is possible in milliseconds if it is economical to have a PCS capable of supporting this. Storage capacities for SMES can be anything up to 2 MW, although its cycling capability is its main attraction. SMES devices can run for thousands of charge/discharge cycles without any degradation to the magnet, giving a life of 20+ years.
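The "practically no losses" property is what makes the coil a storage element: the energy held in the magnetic field follows the standard inductor relation E = ½LI². As a hedged illustration (the coil parameters below are assumed examples, not values from this paper):

```python
def smes_stored_energy(inductance_h, current_a):
    """Energy stored in the coil's magnetic field: E = 1/2 * L * I^2 (joules)."""
    return 0.5 * inductance_h * current_a ** 2

# Assumed example values: a 0.5 H coil carrying 1000 A stores 0.25 MJ.
energy_j = smes_stored_energy(inductance_h=0.5, current_a=1000.0)
print(energy_j)  # 250000.0
```

Because the current circulates without resistance, this energy is held until the chopper commands charge or discharge.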


3.1 CHOPPER CONTROL FOR SMES
SMES consists of a coil with many windings of superconducting wire that stores and releases energy with increases or decreases in the current flowing through the wire. Although the SMES device itself is highly efficient and has no moving parts, it must be refrigerated to maintain the superconducting properties of the wire materials, and thus incurs energy and maintenance costs. SMES devices are used to improve power quality because they provide short bursts of energy (in less than a second). An electronic interface known as a chopper is needed between the energy source and the VSC. The energy source compensates the capacitor charge through this electronic interface and maintains the required capacitor voltage. A two-quadrant n-phase DC-DC converter is adopted as the interface. The DC-DC chopper allows reducing the ratings of the overall power devices by regulating the current flowing from the superconducting coil to the inverter of the SSSC.
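The regulating action of the chopper can be sketched with a common average-value model of a two-quadrant chopper, in which the mean coil-side voltage is (2D − 1)·Vdc for duty cycle D, so D > 0.5 charges the coil and D < 0.5 discharges it. This is an illustrative sketch under that assumption, not the Simulink circuit used in the paper; all numeric values are assumed.

```python
def coil_average_voltage(duty, v_dc):
    """Average voltage applied to the SMES coil by a two-quadrant chopper.
    duty > 0.5 charges the coil, duty = 0.5 holds, duty < 0.5 discharges."""
    if not 0.0 <= duty <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return (2.0 * duty - 1.0) * v_dc

def coil_current_step(i_coil, duty, v_dc, inductance_h, dt):
    """One explicit-Euler step of dI/dt = V_coil / L for a lossless coil."""
    return i_coil + coil_average_voltage(duty, v_dc) / inductance_h * dt

# Assumed values: duty 0.75 puts +1000 V across a 0.5 H coil,
# raising the coil current by about 2 A per millisecond.
i = coil_current_step(1000.0, duty=0.75, v_dc=2000.0, inductance_h=0.5, dt=1e-3)
```

A controller would move the duty cycle above or below 0.5 to exchange real power between the coil and the dc bus of the SSSC.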

4. MODELING AND CONTROL STRATEGY
A power system network is designed in the MATLAB/SIMULINK environment; the designed model is shown in fig. 5. As shown in the figure, a three-phase power system network is designed in which the power is fed by three generators. All three generators have the same ratings: a voltage rating of 500 kV and an MVA rating of 8500 MVA. Loads of 300 MW and 200 MW are connected at the two ends of the lines. Measurements of line voltage and of active and reactive power are taken using the measuring instruments of MATLAB.

4.1 MATLAB MODEL USING SSSC
The model of the power system is designed in SIMULINK, and a fault is produced to show various comparisons of transients. A FACTS family device, the SSSC, is designed and connected into the network; the model of the power system using the SSSC is shown in fig. 5. The SSSC is connected in series with the transmission line as shown in fig. 5. The purpose of the SSSC is to inject a voltage with controlled angle in series with the line to reduce the power oscillation damping time and the short-circuit current.

Fig.5 Simulink model with SSSC

The voltage, active and reactive power, and current of the system are then measured for a three-phase fault when using the SSSC.

4.2 MATLAB MODEL USING SSSC INTEGRATED WITH SMES
The model is then extended by connecting the SMES in parallel with the capacitor of the SSSC, and the performance of the network is checked with the SMES integrated. The power system model using the SSSC and SMES is shown in fig. 6. The SMES is simply a storage device connected in parallel with the SSSC which is used to increase the magnitude of the capacitor voltage. The voltage, active and reactive power, and current of the system are then measured for a three-phase fault when using the SSSC with SMES, and the resulting values of both circuits are compared.

Fig.6 Simulink model with SSSC and SMES

5. RESULT AND DISCUSSIONS
The simulation network is tested for three cases: Case (a) without SSSC and SMES, Case (b) with SSSC and without SMES, and Case (c) with SSSC and SMES. The results of all cases are then compared. Simulation results of the various cases are shown below.

Case (a)


Case (b)

Case (c)

Fig.7 Waveform of network voltage

Case (a)

Case (b)

Case (c)

Fig.8 Waveform of network current

Case (a)

Case (b)

Case (c)

Fig.9 Waveform of power

The fault occurs at 0.1667 seconds and is cleared at 0.2333 seconds. In fig. 7 the voltages of all the cases are compared. In fig. 8 the currents of all cases are compared: the settling time of the current in case (a) is 0.3433 seconds, which is very large compared to cases (b) and (c), at 0.2370 seconds and 0.2358 seconds respectively. In fig. 9 the waveforms of active and reactive power of all cases are compared, and it is seen that the power oscillations of cases (b) and (c) die out in less time than in case (a): the power oscillation clearing time is 0.3633 seconds for case (a), 0.2380 seconds for case (b) and 0.2315 seconds for case (c).

6. CONCLUSION
The dynamic performance of the SSSC with and without SMES for the test system is analyzed with MATLAB/Simulink. In this paper, the SMES with chopper control plays an important role in real power exchange. The SSSC with and without SMES has been developed to improve the transient stability performance of the power system. It is inferred from the results that the SSSC with SMES is very efficient in transient stability enhancement and effective in damping power oscillations and maintaining power flow through the transmission lines after disturbances.

7. REFERENCES
[1] Thangavel M. and Jasmine S.S., “Enhancement of Voltage Stability and Power Oscillation Damping Using Static Synchronous Series Compensator with SMES”, International Journal of Advanced Research in Technology, Vol. 2, Issue 3, March 2012, pp 94-98.

[2] Barati J., Saeedian A. and Mortazavi S.S., “Damping Power System Oscillations Improvement by FACTS Devices: A Comparison between SSSC and STATCOM”, World Academy of Science, Engineering and Technology, Vol. 4, 2010, pp 135-145.

[3] Vamsee R.M., Bankar D.S. and Sailor J., “Superconducting Magnetic Energy Storage”, National Conference on Recent Advances in Electrical Engineering and Energy Systems, 2010, pp 1-5.

[4] Wen J., Jian X.J., You G.G. and Jian G.Z., “Theory and Application of Superconducting Magnetic Energy Storage”, Australasian Universities Power Engineering Conference, 2006, pp 7-12.

[5] Hingorani N.G., “Role of FACTS in a Deregulated Market”, Proc. IEEE Power Engineering Society Winter Meeting, Seattle, WA, USA, 2006, pp 1-6.

[6] Molina M.G. and Mercado P.E., “New Energy Storage Devices for Applications on Frequency Control of the Power System using FACTS Controllers”, Proc. X ERLAC, Iguazú, Argentina, 14.6, 2003, pp 1-6.

[7] Molina M.G. and Mercado P.E., “Modeling of a Static Synchronous Compensator with Superconducting Magnetic Energy Storage for Applications on Frequency Control”, Proc. VIII SEPOPE, Brasilia, Brazil, 2002, pp 17-22.

[8] Gyugyi L., Schauder C.D. and Sen K.K., “Static Synchronous Series Compensator: A Solid-State Approach to the Series Compensation of Transmission Lines”, IEEE Transactions on Power Delivery, Vol. 12, No. 1, January 1997, pp 406-417.

[9] Choi S.S., Jiang F. and Shrestha G., “Suppression of transmission system oscillations by thyristor controlled series compensation”, IEE Proc., Vol. GTD-143, No. 1, 1996, pp 7-12.


Electronic toll collection based on Category of Vehicle Using RFID

Gurpreet Singh, Guru Kashi University, Talwandi Sabo, gurpreetpeetu@yahoo.in
Amrik Singh, Guru Kashi University, Talwandi Sabo, ratandeep2@gmail.com
Harpal Singh, Guru Kashi University, Talwandi Sabo, harpalghuman11@gmail.com

Abstract—The electronic toll collection approach is based on radio frequency identification, a contactless technique that uses radio waves to identify an object uniquely. With the help of an RFID tag and reader, the system is able to store information about the object when it enters the range of the reader. The paper focuses on automatic electronic toll tax collection based on the category of vehicle: everyone has to pay a different payment to the authority according to the classification.

Keywords—RFID; Tag; Reader.

I. INTRODUCTION
Electronic toll collection based on categories of vehicle is implemented with the help of radio frequency identification (RFID). Electronic toll collection (ETC) is a technology used on highways to electronically deduct toll tax payments from the owner's account automatically when the vehicle passes the toll tax collection point. Registered vehicles are able to pay toll tax electronically to the toll tax authority. Vehicles are classified into different categories, and each has to pay different charges. Two-wheelers and four-wheelers do not belong to the same category; for example, a bike, taxi, bus and truck belong to different categories and have to pay different charges. Four-wheelers are further classified into sub-categories: a taxi and a truck are both four-wheelers, but they pay differently according to the sub-category classification on the basis of load and size. This means the vehicles are divided into different categories, and each vehicle pays a different payment to the toll tax organization.

Overview of the RFID
RFID is a contactless technology that is used to identify any vehicle or object without any physical contact, with the help of radio waves. This system can identify an object uniquely. The components used in RFID technology are given below [1]:
(a) RFID tag
(b) RFID reader

(a) RFID tag
An RFID tag is a small card placed on the vehicle or object. These tags have an internal memory which is used to store some information. The antenna of the tag receives a signal from the reader, which activates the tag; the tag then sends the stored information to the reader [2][3].

Figure 1. RFID Tag

(b) RFID reader
An RFID reader is a hardware device that is used to fetch the information from the tag. The antenna of the reader produces radio waves that are received by tags in range. These waves activate the tags, which send their information to the reader, and the reader gathers the information. The reader is also known as an antenna or interrogator [4].

Figure 2: RFID Reader

II. RELATED WORK
Electronic toll tax collection based on vehicle category is used for automatic toll tax collection on major highways. Only a registered vehicle can pay its toll tax automatically. For registering any vehicle, the toll tax collection organization places an RFID tag (hardware) on the vehicle. A vehicle owner who wants to register has to submit his bank account details for automatic payment. The reader is used to detect the vehicle when it appears in the range of the reader. The classification of all vehicles is based on the vehicle's size and type: large and loaded vehicles must pay more compared to small vehicles. This system collects the payment automatically according to the vehicle category, meaning that different types of


vehicles pay different payments to the toll tax collection organization. The system first identifies the type of vehicle and then calculates its payment; after this process, the system collects the toll tax. This system solves the big problem of cash payment. Nobody has to waste time at the toll tax collection point, because the system automatically identifies the vehicle and deducts the payment. No time is spent on paying, so nobody needs to wait in a line for payment. [5][6]

III. PROPOSED WORK
Under the proposed work, an RFID reader is installed on the highway. The RFID reader is used to identify or track any object or vehicle. This reader is connected with the central processing unit and database. An RFID tag (hardware) is placed on the vehicle at registration time. The RFID tag carries a unique vehicle identification number for uniquely identifying any vehicle. Every vehicle identification number is used to provide information regarding the priority of the vehicle and the type of the vehicle. The system can uniquely identify the object or vehicle and its owner with the help of the vehicle identification number. The VIN is also known as the tag number. [7]

Tag number: Every tag has a unique number. In this system that number is divided into two parts: the first three bits form the first part and the remaining bits the second part. The first three bits (first part) are used for identifying the category of the vehicle; the system identifies the type of vehicle from these three bits. The second part of the tag number is used for uniquely identifying a vehicle and its owner. We can differentiate the vehicles into 8 types: types of vehicle = 2^n, where n = number of reserved bits.

Number of reserved bits = 3

With the help of 2^n we can find the number of types: 2^3 = 8. The system reserves the first three bits and thus makes 8 combinations, or types, of vehicles.
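The tag-number split described above (three category bits plus a unique identifier) can be sketched as follows. The 32-bit tag width and the toll amounts are assumed for illustration only; the paper does not specify them.

```python
# Assumed toll charge per 3-bit category code; the paper defines six
# categories and reserves the remaining two codes for future use.
TOLL_BY_CATEGORY = {
    0b000: 20,   # two-wheelers (bikes, small vehicles)
    0b001: 30,   # three-wheelers
    0b010: 50,   # cars and other small four-wheelers
    0b011: 100,  # buses and trucks
    0b100: 150,  # heavy / loaded goods vehicles
    0b101: 0,    # government vehicles (fire brigade, ambulance): free
}

def split_tag_number(tag, total_bits=32):
    """Split a tag number: top 3 bits = vehicle category, rest = unique ID."""
    category = tag >> (total_bits - 3)
    unique_id = tag & ((1 << (total_bits - 3)) - 1)
    return category, unique_id

def toll_for_tag(tag):
    """Look up the charge for a tag; None for the two reserved codes."""
    category, _ = split_tag_number(tag)
    return TOLL_BY_CATEGORY.get(category)

tag = (0b011 << 29) | 12345          # a bus/truck with unique ID 12345
print(split_tag_number(tag))         # (3, 12345)
print(toll_for_tag(tag))             # 100
```

The control unit would perform this lookup for every ID number forwarded by the reader before debiting the owner's account.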

Types of vehicle: This system divides the vehicles into categories, given below.

The first category includes bikes or small vehicles (all two-wheelers are included in this category). The second category includes three-wheelers. The third category includes cars and other small four-wheelers like jeeps. The fourth category includes buses and trucks. The fifth category includes heavy and loaded vehicles which are used to transfer goods. Government vehicles like fire brigades and ambulances are included in the sixth category. These six categories pay different payments. If the system reserves the first three bits for identification of the vehicle, then we can divide the vehicles into eight categories; here we have six categories of vehicle, and the other two combinations are reserved for the future. [8]

Working of the system
A reader is set up on the road. This reader reads the tags placed on every vehicle. The reader always generates radio waves, which go to a circulation area crossing the road. When any vehicle appears in the range of the reader, the antenna of the tag placed on the vehicle captures the radio wave. After receiving the radio wave, the tag amplifies it and activates its internal circuitry. The activated tag is then capable of sending its information, which is received by the reader for identifying the vehicle. When the reader reads the information from the tag, it forwards that information to the control unit (CU). The control unit has all information about the ID number received by the reader. Here the system identifies the type of vehicle and calculates its payment, then automatically deducts the payment from the owner's bank account. This system is capable of collecting the toll tax from different types of vehicles, with each type paying a different payment to the toll tax organization. [9]

If anyone does not have money in his account, the system sends him a message and collects the payment from the registration fee: when the owner registers the vehicle, he has to pay this fee for emergencies, so that if anyone's account is empty the system collects the payment from the registration fee. [10]

Figure 3: Communication between Tags and Reader

Pseudo code (algorithm) for electronic toll collection based on vehicle type with RFID: the following gives a pseudo code of this system.

1. Start
2. The reader is set up on the road and always generates radio waves
3. If a vehicle enters the range of the reader then
   (a) the tag antenna receives the radio wave and activates the tag
   (b) the activated tag sends its information (ID number)
   (c) the ID is received by the reader


   (d) after receiving the ID, the reader forwards it to the control unit
   (e) the control unit checks the type and calculates the payment
4. If (vehicle is able to pay)
   (a) then the system collects the payment.
   End of inner If statement.
5. Else (government vehicles like fire brigades and ambulances)
   (a) free vehicle.
   End of inner Else statement and end of outer If statement
6. Else
   The reader did not find a tag.
   End of Else statement
7. End.

The flow chart given below represents the flow of the system and algorithm: the system checks the type of vehicle and collects its payment.

CONCLUSION
In the previous electronic system, all vehicles paid equally at the electronic toll collection point using RFID, whereas in the manual system every vehicle pays different charges at the toll tax collection point; electronically, a loaded vehicle and a small vehicle paid the same. In "Electronic toll collection based on Category of Vehicle Using RFID", the payment collected is based on the category of the vehicle: every vehicle pays differently according to its category.

REFERENCES
[1] http://RFID.nordic.se
[2] http://www.technovelgy.com/ct/technology-article.asp?artnum=50
[3] Elisabeth Ilie-Zudor, "The RFID Technology and Its Current Applications", MITIP 2006, ISBN 963 86586 5 7, pp 29-36.
[4] http://rfid-managerialviewpoint.blogspot.in/2011/01/rfid-tag-and-reader.html
[5] Khadijah Kamarulazizi and Widad Ismail, "Electronic toll collection system using passive RFID technology".
[6] Joshue Perez, Fernando Seco, Vicente Milanes, Antonio Jimenez, Julio C. Diaz and Teresa de Pedro, "An RFID-Based Intelligent Vehicle Speed Controller Using Active Traffic Signals".
[7] Harpal Singh, "Intelligent Traffic Lights Based on RFID", International Journal of Computing & Business Research, pp 13.
[8] Sudha Bhalekar, Adesh Chanageri G. and Indra Prakash Chauhan, "Automatic Toll Tax Using RFID", International Journal of Computer Technology and Electronics Engineering (IJCTEE), Volume 3, Special Issue, March-April 2013.
[9] Gurpreet Singh, "Automatic payment collection on Petrol pump Using RFID", International Journal of Advance Research in Computer Science and Management Studies, Volume 2, pp 611-616.
[10] Chong hua Li, "Automatic Vehicle Identification System based on RFID", Anti-Counterfeiting Security and Identification in Communication (ASID), 2010, pp 281-284.


EXTENDED LOCAL BINARY PATTERN FOR FACE RECOGNITION

Jatinder Sharma, BGIET, Sangrur, sharma.jatinder70@gmail.com
Rishav Dewan, BGIET, Sangrur, reshav.dewan@gmail.com

ABSTRACT
This research paper presents a recent use of the extended local binary pattern for face recognition. The Extended Local Binary Pattern (ELBP) technique is more accurate and describes the texture and shape of a digital image. Using 3*3 and 5*5 matrices, we compare the performance of both matrix sizes for recognizing the image. Variance helps to measure continuous output where quantization is needed. An image is divided into several small regions from which the features are extracted; if a match is found, the face image is recognized, otherwise it is not. If we look in the mirror we can see that our face shows different types of human expression; these are the peaks and valleys that make up the different facial features.

Keywords
LBP, Face Recognition, Extended LBP, Histograms.

(1) INTRODUCTION
Facial expressions play a very important role in human life. According to the JAFFE database there are mainly seven types of facial expression: Angry, Fear, Neutral, Surprise, Disgust, Happy and Sadness. Face recognition is basically software and is based on the ability to first recognize a face, which is a technological feat in itself. Facial recognition systems define facial landmarks as nodal points; there are about 80 nodal points on a human face. Facial expression is one of the most powerful and instantaneous means for social beings to communicate their feelings and intentions. It is an easy process of communication in which we exchange distinguishable ideas, information and data from one place to another.

The limitation of the local binary pattern is that it reflects image information encountered only in the first derivative, and it does not capture the velocity of local variations. However, the extended LBP operator works on both the original image and the gradient magnitude. A face can also be seen as a composition of various micro-patterns which can be well described by the LBP operator. The basic LBP operator assigns a label to every pixel of an image by thresholding its 3*3 neighborhood and considering the result as a binary number. In the extended LBP an image is normally divided into small micro-patterns, i.e. 64 regions. The extended LBP neighborhood, with notation (P,R), considers P sample points on a circle of radius R.

1.1. LOCAL BINARY PATTERN
Local Binary Pattern (LBP) is a simple yet very efficient texture operator which labels the pixels of an image by thresholding the neighborhood of each pixel and considering the result as a binary number. Due to its discriminative power and computational simplicity, the LBP texture operator has become a popular approach in many applications. It can be seen as a unifying approach to the traditionally divergent statistical and structural models of texture analysis. Perhaps the most important property of the LBP operator in real applications is its robustness to monotonic gray-scale variations caused, for example, by illumination changes. Another essential property is its computational simplicity, which makes it possible to analyze images in challenging real-time settings. The basic idea for developing the LBP operator was that two-dimensional surface textures can be described by two complementary measures: local spatial patterns and gray-scale contrast. The original LBP operator (Ojala et al. 1996) forms labels for the image pixels by thresholding the 3 x 3 neighborhood of each pixel with the center value and considering the result as a binary number. The histogram of the 2^8 = 256 different labels can then be used as a texture descriptor. This operator, used together with a simple local contrast measure, provided very good performance in unsupervised texture segmentation (Ojala and Pietikäinen 1999). After this, many related approaches have been developed for texture and color texture segmentation. The LBP operator was extended to use neighborhoods of different sizes (Ojala et al. 2002). Using a circular neighborhood and bilinearly interpolating values at non-integer pixel coordinates allows any radius and number of pixels in the neighborhood. The gray-scale variance of the local neighborhood can be used as a complementary contrast measure. In the following, the notation (P,R) will be used for pixel neighborhoods, meaning P sampling points on a circle of radius R. See Fig. 2 for an example of the LBP computation.
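The 3×3 thresholding described above can be sketched as a short function. The clockwise-from-top-left bit ordering used here is one common convention (orderings vary between implementations); with it, the worked example shown in the figure below yields the binary code 00011110, i.e. decimal 30.

```python
def lbp_code(patch):
    """Basic LBP code of the center pixel of a 3x3 patch of gray values.
    Each neighbor >= center contributes a 1 bit; neighbors are read
    clockwise starting from the top-left corner."""
    center = patch[1][1]
    # Clockwise neighbor coordinates starting at the top-left corner.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for (r, c) in coords:
        code = (code << 1) | (1 if patch[r][c] >= center else 0)
    return code

patch = [[20, 30, 22],
         [29, 33, 69],
         [36, 48, 51]]
print(lbp_code(patch))  # 30, matching the worked example in the figure
```

Sliding this over the whole image and histogramming the codes gives the 256-bin texture descriptor described in the text.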


If we assume gray value of central pixel and adjacent


[Figure: Basic LBP operator. The 3x3 neighborhood (20, 30, 22 / 29, 33, 69 / 36, 48, 51), thresholded at the center value 33, gives Binary: 00011110, Decimal: 30.]

Local Binary Pattern (LBP) is a type of feature used for classification in computer vision. LBP was first proposed in 1996 for the analysis of texture in gray-scale images, and it was found to be invariant to small variations in illumination and to small rotations.

1.2. EXTENDED LOCAL BINARY PATTERN

One type of Extended Local Binary Pattern (ELBP), similar to 3DLBP, has been proposed. The ELBP operator does not only perform the binary comparison between the central pixel and its adjacent pixels; it additionally encodes the exact gray-level value differences (GD) between them by adding several further binary units, where the number of additional binary units k is defined by the GD.

Fig: An example of the Extended LBP operator

At the same time, k bounds the maximum GD that is encoded. In the example, the gray values of the two pixels are 253 and 252 respectively; the sign information is extracted by layer 1, and two other binary units {i2, i3} are used to encode the GD.

Local Binary Pattern:- An extension to the original operator is the definition of so-called uniform patterns, which can be used to reduce the length of the feature vector and to implement a simple rotation-invariant descriptor. This extension was inspired by the fact that certain binary patterns occur more commonly in texture images than others. A local binary pattern is called uniform if the binary pattern contains at most two bitwise transitions from 0 to 1 or vice versa when the bit pattern is traversed circularly. For example, the patterns 00000000 (0 transitions), 01110000 (2 transitions) and 11001111 (2 transitions) are uniform, while the patterns 11001001 (4 transitions) and 01010010 (6 transitions) are not. In the computation of the LBP labels, uniform patterns are used so that there is a separate label for each uniform pattern and all the non-uniform patterns are labeled with a single label. For example, when using the (8,R) neighborhood there are a total of 256 patterns, 58 of which are uniform, which yields 59 different labels. Ojala et al. (2002) observed in their experiments with texture images that uniform patterns account for a little less than 90% of all patterns when using the (8,1) neighborhood and for about 70% in the (16,2) neighborhood. Each bin (LBP code) can be regarded as a micro-texton; the local primitives codified by these bins include different types of curved edges, spots, flat areas etc. The following notation is used for the LBP operator: LBP_{P,R}^{u2}. The subscript denotes using the operator in a (P,R) neighborhood; the superscript u2 stands for using only uniform patterns and labeling all remaining patterns with a single label. The histogram of the labeled image then has one bin per label, H_i = sum over (x,y) of I{f(x,y) = i}, i = 0, ..., n-1, in which n is the number of different labels produced by the LBP operator and I{A} is 1 if A is true and 0 if A is false.

In the LBP approach to texture classification, the occurrences of the LBP codes in an image are collected into a histogram, and the classification is then performed by computing simple histogram similarities. However, considering a similar approach for facial image representation results in a loss of spatial information, and thus one should codify the texture information while also retaining its location. One way to achieve this goal is to use the LBP texture descriptors to build several local descriptions of the face and combine them into a global description. Such local descriptions have been gaining interest lately, which is understandable given the limitations of holistic representations. These local feature based approaches are more robust against variations in pose or illumination than holistic approaches.

18
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

The basic methodology for LBP based face description proposed by Ahonen et al. (2006) is as follows: the facial image is divided into local regions and LBP texture descriptors are extracted from each region independently. The descriptors are then concatenated to form a global description of the face. This histogram effectively contains a description of the face on three different levels of locality: the LBP labels for the histogram contain information about the patterns on a pixel level, the labels are summed over a small region to produce information on a regional level, and the regional histograms are concatenated to build a global description of the face. It should be noted that when using the histogram based methods the regions do not need to be rectangular; neither do they need to be of the same size or shape, nor do they necessarily have to cover the whole image. It is also possible to have partially overlapping regions. The 2D face description method has been extended into the spatiotemporal domain for facial expression description by LBP-TOP, and outstanding facial expression recognition performance has been obtained with this method. Since the publication of the LBP based face description, the method has achieved an established position in face analysis research and applications. A famous example is the illumination invariant face recognition system proposed by Li et al. (2007), combining NIR imaging with LBP features and AdaBoost learning. Zhang et al. (2005) proposed the extraction of LBP features from images obtained by filtering a facial image with 40 Gabor filters of different scales and orientations, obtaining outstanding results. Hadid and Pietikäinen (2009) used spatiotemporal LBPs for face and gender recognition from video sequences, while Zhao et al. (2009) adopted the LBP-TOP approach for visual speech recognition, achieving cutting-edge performance without error-prone segmentation of moving lips. In addition to face and facial expression recognition, LBP has also been used in numerous other applications of biometrics, including eye localization, iris recognition, fingerprint recognition, palmprint recognition, gait recognition and facial age classification. References to many of these works can be found in the LBP bibliography.

1.3. FORMULAS & CALCULATIONS

To calculate the Extended Local Binary Pattern we use the formula given below:

2. FLOW OF ALGORITHM

1. Start
2. Divide the Extended LBP features into subsets
3. Select Extended LBP feature candidates by using a boosting learning process
4. Generate a pool of new Extended LBP feature candidates
5. Construct the Extended LBP feature set
6. End
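The region based face description discussed above can be sketched as follows. This is a minimal illustration that assumes the LBP label image has already been computed; `regional_histograms` and `chi_square` are illustrative names, and `n_labels = 59` corresponds to the uniform (8,R) mapping:

```python
import numpy as np

def regional_histograms(label_image, grid=(8, 8), n_labels=59):
    """Split a labeled LBP image into a grid of regions, histogram each region,
    and concatenate the regional histograms into one global face descriptor."""
    h, w = label_image.shape
    rows, cols = grid
    hists = []
    for r in range(rows):
        for c in range(cols):
            region = label_image[r * h // rows:(r + 1) * h // rows,
                                 c * w // cols:(c + 1) * w // cols]
            hist, _ = np.histogram(region, bins=n_labels, range=(0, n_labels))
            hists.append(hist)
    return np.concatenate(hists)

def chi_square(h1, h2, eps=1e-10):
    """Simple histogram similarity used for classification: chi-square distance."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

# A 64x64 label image -> 8*8 regions x 59 bins = a 3776-dimensional descriptor.
labels = np.random.randint(0, 59, size=(64, 64))
desc = regional_histograms(labels)
print(desc.shape)   # (3776,)
```

Classification then amounts to comparing such descriptors, e.g. nearest neighbor under `chi_square`.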


3. RESULT AND CONCLUSION

Extended Local Binary Pattern (ELBP) is used for face recognition. We apply Extended LBP on different window sizes, i.e. 3x3 and 5x5, and compare the results of ELBP with the different window sizes. The invariant texture that has been classified with Local Binary Pattern has powerful texture features; here the variance helps to measure a continuous output where quantization is needed. The basic version of LBP considers measurements from a 3x3 pixel square as the binary code. Face recognition is basically based on the ability to first recognize a face; it is also used in image analysis databases and computer vision databases. It has many applications such as security, authentication, human identity matching, online banking, net banking, airports, defence etc. The best example is that the human eye is used as identity in the Aadhaar card. The face mainly represents the human facial expression, ideas and mentality of a person. As we know, many public places like bus stands, railway stations, banks and cinema halls use surveillance cameras for video recording and capturing photos for security purposes. Face recognition focuses on detecting a face and distinguishing one face from another, for example when you upload a photo on Facebook and are prompted to tag a particular face as a particular friend. A preprocessed face image is also divided into 64 sections. The LBP method has led to significant progress in texture analysis. It is widely used all over the world, both in research and in applications. Due to its discriminative power and computational simplicity, the method has been very successful in many computer vision problems which were not earlier even regarded as texture problems, such as face analysis and motion analysis (Pietikäinen et al. 2011); for a catalogue of LBP related research and links to many papers, see the LBP bibliography.

The LBP feature vector, in its simplest form, is created in the following way:

 Divide the examined window into cells (e.g. 16x16 pixels for each cell).
 For each pixel in a cell, compare the pixel to each of its 8 neighbors (on its left-top, left-middle, left-bottom, right-top, etc.). Follow the pixels along a circle, i.e. clockwise or counter-clockwise.
 Where a neighbor's value is greater than or equal to the center pixel's value, write "1"; otherwise, write "0". This gives an 8-digit binary number.
 Compute the histogram, over the cell, of the frequency of each "number" occurring. Optionally normalize the histogram.

A valuable extension to the original operator is the so-called uniform pattern, which can be used to reduce the length of the feature vector and to implement a simple rotation invariant descriptor. This idea is motivated by the fact that some binary patterns occur more commonly in texture images than others. A local binary pattern is called uniform if the binary pattern contains at most two 0-1 or 1-0 transitions. For example, 00010000 (2 transitions) is a uniform pattern, while 01010100 (6 transitions) is not. In the computation of the LBP histogram, the histogram has a separate bin for every uniform pattern, and all non-uniform patterns are assigned to a single bin. Using uniform patterns, the length of the feature vector for a 3x3 window reduces from 256 to 59.

4. FUTURE SCOPE

Apply Extended LBP on different window sizes, i.e. 3x3 and 5x5, and also compare the results of ELBP with the different window sizes.

5. ACKNOWLEDGEMENTS

Our hearty thanks to the project guide Mr Rishav Dewan, who has contributed towards the development of the above stated paper.

REFERENCES

[1] Ahmed Faisal, Hossain Emam, Bari A.S.M. Hossain and Shihavuddin ASM, "Compound Local Binary Pattern (CLBP) for Robust Facial Expression Recognition",


12th IEEE International Symposium on Computational Intelligence and Informatics, 21-22 November 2011, Budapest, Hungary, pp. 391-395, 2011.

[2] Ahmed Faisal and Kabir Md. Hasanul, "Directional Ternary Pattern (DTP) for Facial Expression Recognition", IEEE International Conference on Consumer Electronics (ICCE), IEEE, pp. 265-266, 2012.

[3] Bartlett Marian Stewart, Littlewort Gwen, Frank Mark, Lainscsek Claudia, Fasel Ian and Movellan Javier, "Recognizing Facial Expression: Machine Learning and Application to Spontaneous Behavior".

[4] Chuang Chao-Fa, Shih Frank Y., "Recognizing facial action units by independent component analysis and support vector machine", Elsevier, pp. 1795-1798, 2006.

[5] Do, Hyungrok, "Image Recognition Technique using Local Characteristics of Subsampled Images", EE368 Digital Image Processing, pp. 1-5, 2007.

[6] Di Huang, Caifeng Shan, Mohsen Ardebilian, Liming Chen, "Facial Image Analysis Based on Local Binary Patterns: A Survey", IEEE.

[7] G. Guo, C.R. Dyer, "Simultaneous feature selection and classifier training via linear programming: a case study for face expression recognition", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2003.

[8] Liao Shu, Fan Wei, Chung Albert C. S. and Yeung Dit-Yan, "Facial Expression Recognition using Advanced Local Binary Patterns, Tsallis Entropies and Global Appearance Features", IEEE, pp. 665-668, 2006.


Analysis and implementation of a Full Adder circuit using Xilinx Software

Girdhari Agarwal, ECE (student), Chandigarh Group of Colleges, Mohali, India, girdhari9933112222@gmail.com
Bobbinpreet Kaur, ECE (AP), Chandigarh Group of Colleges, Mohali, India, bobbinece@gmail.com
Amandeep Kaur, ECE (AP), Chandigarh Group of Colleges, Mohali, India, amandeepece.cgc@gmail.com

ABSTRACT
This paper presents a novel method to analyze and implement a full adder circuit using VHDL technology. The results include successful compilation of the VHDL code in the Xilinx software along with the waveforms that verify the truth table. This paper also shows the effective use of the Xilinx software in the analysis of the full adder circuit. It shows the Register Transfer Level (RTL) schematic diagrams and technology schematic diagrams of the different VHDL architectural styles of modeling, which include dataflow modeling, behavioral modeling and structural modeling. The analysis includes a detailed analysis of the fitter report and the timing report along with the synthesis report of the design summary. It also shows the chip floor plan of the full adder circuit.

General Terms
Wave simulation, chip floor, architectural, truth table, Karnaugh map.

Keywords
Full Adder, Xilinx, VHDL, design.

1. INTRODUCTION
One of the most fundamental arithmetic operations that is widely used in digital systems is addition. Addition is the basic operation of all logic and arithmetic operations that can be performed in a digital system. The term 'digital system' is not only limited to a low-level component that can be designed theoretically on paper; it also extends its hierarchy up to the design of a complete system on a chip or board. It is not always possible to understand the system completely at higher levels of hierarchy due to the increased complexity of the digital system at these levels. Thus, to overcome this complex situation, VHDL technology is used, which makes the design of the system simpler and easier to understand.[6]

This paper describes the analysis and implementation of a full adder circuit with the help of VHDL technology, so that the chip or on-board design of the full adder system becomes easy to implement as it meets the low complexity requirement. One of the most popular and easy approaches to VHDL technology is done with the help of the Xilinx software, which allows us to compile our VHDL code to a machine level program and then implement it on hardware, i.e. on the chip or board. The paper also shows how efficiently our digital system, i.e. a full adder, can be implemented and analyzed completely using this software.[9] The analysis part shows the RTL view, technological map, chip floor plan and waveforms of voltage v/s time with varying inputs and their respective outputs so as to verify the truth table of the full adder circuit.

The following text is divided into five sections. Section 2 refers to the previous papers and the literature reviews of those papers in the same field. Section 3 describes the full adder and its design, section 4 describes the various VHDL architectural styles, section 5 describes the implementation of the full adder circuit using the various architectural designs discussed in section 4, and the last section, i.e. section 6, describes the analysis of the design.

2. LITERATURE REVIEW
Rupesh Prakash Raghatate explained a brief overview of the full adder circuit using VHDL[1] in the paper titled "Design and implementation of Full adder using VHDL and its verification in analog domain". He implemented the circuit using VHDL in Quartus II.

Rajendra Kumar explained the implementation of the full adder in multi-bit addition using carry look-ahead[3] in his paper titled "Performance analysis of different bit carry look ahead adder using VHDL Environment".

3. FULL ADDER DESIGN
A full adder is a combinational digital circuit with three inputs and two outputs. The full adder circuit has the ability to add three 1-bit binary numbers (Input1, Input2, Input3) and results in two outputs (Sum, Carry). Fig. 1 below shows the basic logic diagram of the full adder circuit, Table 1 shows the truth table and fig. 2 shows the K-map reduction for the two outputs from the truth table.[4]

3.1 Logic Design

Fig 1: Basic logic diagram for a full adder[5]


3.2 Truth Table

Table 1. Truth Table for full adder

Input1  Input2  Input3  |  Sum  Carry
  0       0       0     |   0     0
  0       0       1     |   1     0
  0       1       0     |   1     0
  0       1       1     |   0     1
  1       0       0     |   1     0
  1       0       1     |   0     1
  1       1       0     |   0     1
  1       1       1     |   1     1

3.3 Karnaugh Map

Fig 2: K-Map for logic Sum

Fig 3: K-Map for logic Carry

4. VHDL
Hardware Description Languages (HDL) are the basic languages that are used for the design of most digital circuits using software tools. VHDL is one of the most important types of HDL. VHDL is the abbreviation for "Very High Speed Integrated Circuit Hardware Description Language (VHSIC HDL)". [2]
VHDL is supposed to be the heart of the production of the electronic design of any digital circuit. With the advancement of time and the shrinking of semiconductor device dimensions into compact sizes, VHDL has gained a lot of importance.
There are three architectural styles of modeling a VHDL statement. The internal functionality of a digital circuit can be specified using any of the modeling styles discussed below[8]:

4.1 Structural
A structural architectural design refers to the architectural design where all the used components are interconnected to each other.

4.2 Dataflow
A dataflow architectural design refers to the architectural design where a set of concurrent assignment statements is used to design the program.

4.3 Behavioral
A behavioral architectural design refers to the architectural design where a set of sequential assignment statements is used to design the program.

5. IMPLEMENTATION
Entity statement[7]:

entity Full_Adder is
    Port ( Input1 : in  STD_LOGIC;
           Input2 : in  STD_LOGIC;
           Input3 : in  STD_LOGIC;
           Sum    : out STD_LOGIC;
           Carry  : out STD_LOGIC);
end Full_Adder;

Architectural design using:

5.1 Structural
architecture Structural of Full_Adder is
signal xor_1, and_1, and_2 : STD_LOGIC;
component XOR1 port(A, B : in STD_LOGIC;
                    X : out STD_LOGIC);
end component;
component OR1 port(M, N : in STD_LOGIC;
                   Y : out STD_LOGIC);
end component;
component AND1 port(P, Q : in STD_LOGIC;
                    Z : out STD_LOGIC);
end component;
begin
X1: XOR1 port map(Input1, Input2, xor_1);
X2: XOR1 port map(xor_1, Input3, Sum);
A1: AND1 port map(Input1, Input2, and_1);
A2: AND1 port map(xor_1, Input3, and_2);
O1: OR1 port map(and_1, and_2, Carry);
end Structural;
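As a quick sanity check of the structural netlist above (sketched here in Python rather than as a VHDL testbench), the same gate network can be evaluated for all eight input combinations and compared with the expected full adder outputs:

```python
from itertools import product

def full_adder_structural(i1, i2, i3):
    """Mirror of the structural architecture: two XORs, two ANDs, one OR."""
    xor_1 = i1 ^ i2        # X1: XOR1(Input1, Input2) -> xor_1
    s     = xor_1 ^ i3     # X2: XOR1(xor_1, Input3)  -> Sum
    and_1 = i1 & i2        # A1: AND1(Input1, Input2) -> and_1
    and_2 = xor_1 & i3     # A2: AND1(xor_1, Input3)  -> and_2
    carry = and_1 | and_2  # O1: OR1(and_1, and_2)    -> Carry
    return s, carry

for i1, i2, i3 in product((0, 1), repeat=3):
    s, c = full_adder_structural(i1, i2, i3)
    # Sum and Carry together must equal the two-bit value of Input1 + Input2 + Input3.
    assert (c << 1) | s == i1 + i2 + i3
print("structural netlist matches the truth table")
```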


5.2 Dataflow
architecture DataFlow of Full_Adder is
signal xor_1, and_1, and_2 : STD_LOGIC;
begin
xor_1 <= Input1 xor Input2;
and_1 <= Input1 and Input2;
and_2 <= xor_1 and Input3;
Sum   <= xor_1 xor Input3;
Carry <= and_1 or and_2;
end DataFlow;

5.3 Behavioral
architecture Behavioral of Full_Adder is
begin
process(Input1, Input2, Input3)
begin
if (Input1='0') and (Input2='0') and (Input3='0')
    then Sum<='0'; Carry<='0';
elsif (Input1='0') and (Input2='0') and (Input3='1')
    then Sum<='1'; Carry<='0';
elsif (Input1='0') and (Input2='1') and (Input3='0')
    then Sum<='1'; Carry<='0';
elsif (Input1='0') and (Input2='1') and (Input3='1')
    then Sum<='0'; Carry<='1';
elsif (Input1='1') and (Input2='0') and (Input3='0')
    then Sum<='1'; Carry<='0';
elsif (Input1='1') and (Input2='0') and (Input3='1')
    then Sum<='0'; Carry<='1';
elsif (Input1='1') and (Input2='1') and (Input3='0')
    then Sum<='0'; Carry<='1';
elsif (Input1='1') and (Input2='1') and (Input3='1')
    then Sum<='1'; Carry<='1';
end if;
end process;
end Behavioral;

6. ANALYSIS
6.1 Register Transfer Logic (RTL) schematic of the above written architectural designs:

Fig 4: RTL Schematic for the block

6.1.1 Structural

Fig 5: RTL Schematic for structural

6.1.2 Dataflow

Fig 6: RTL for Dataflow

6.1.3 Behavioral

Fig 7: RTL for Behavioral


6.2 Wave Simulation

Fig 8 – Wave simulation

6.3 Fitter Report

TABLE II. SUMMARY

Design Name:      Full_Adder
Fitting Status:   Successful
Software Version: M.81d
Device Used:      XA9536XL-15-VQ44

TABLE III. RESOURCES SUMMARY

Macrocells Used:             2/36 (6%)
Pterms Used:                 6/180 (4%)
Registers Used:              0/36 (0%)
Pins Used:                   5/34 (15%)
Function Block Inputs Used:  6/108 (6%)

TABLE IV. PIN RESOURCES

Signal Type     Required  Mapped
Input           3         3
Output          2         2
Bidirectional   0         0
GCK             0         0
GTS             0         0
GSR             0         0

TABLE V. EQUATIONS

********** Mapped Logic **********

Carry <= ((Input1 AND Input3)
       OR (Input1 AND Input2)
       OR (Input3 AND Input2));

Sum <= NOT (Input2
       XOR (((Input1 AND Input3)
       OR (NOT Input1 AND NOT Input3))));

6.4 Timing Report

TABLE VI. DATA SHEET REPORT FOR PAD TO PAD LIST

Source Pad   Destination Pad   Delay
Input1       Carry             15.500
Input1       Sum               15.500
Input2       Carry             15.500
Input2       Sum               15.500
Input3       Carry             15.500
Input3       Sum               15.500
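The mapped equations reported by the fitter in Table V can be cross-checked exhaustively against the full adder truth table (a quick Python sketch; the Sum equation is the fitter's XNOR form, NOT(Input2 XOR XNOR(Input1, Input3))):

```python
from itertools import product

for i1, i2, i3 in product((0, 1), repeat=3):
    # Carry and Sum exactly as mapped by the fitter in Table V
    carry = (i1 & i3) | (i1 & i2) | (i3 & i2)
    s = 1 - (i2 ^ ((i1 & i3) | ((1 - i1) & (1 - i3))))
    # Compare with the canonical full adder outputs
    assert s == i1 ^ i2 ^ i3
    assert carry == int(i1 + i2 + i3 >= 2)
print("Table V equations match the truth table")
```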


6.5 Chip Floor Diagram

Fig 9 – Chip Floor Diagram

7. CONCLUSION
This paper implements a full adder circuit using Xilinx, which helps us design a full adder with the help of a general 44-pin CMOS device. XA9536XL-15-VQ44 is the device used to successfully fit the layout of the full adder on chip; the XA9536XL-15-VQ44 has 44 pins, out of which pins 30, 31 and 28 are used as inputs and pins 41 and 38 are used as the sum and carry outputs respectively. The wave simulations show all the possible inputs for a full adder and their respective outputs to verify the truth table. Further improvement can be made in the design for an optimized implementation by varying the design methodology. This full adder circuit can further be used in the addition of two multi-bit binary numbers using the concept of carry look-ahead, in which the carry from the least significant bit is carried to its successive significant bit.

REFERENCES
[1] Rupesh Prakash Raghatate, 2013, "Design and implementation of Full adder using VHDL and its verification in analog domain", International Journal of Engineering Science Invention, Volume 2, Issue 4, April 2013, pp. 35-39.
[2] Jayaram Bhasker, A VHDL Primer, PTR Prentice Hall, Englewood Cliffs, New Jersey 07632.
[3] Rajender Kumar, Sandeep Dahiya, 2013, "Performance analysis of different bit carry look ahead adder using VHDL Environment", International Journal of Engineering Science and Innovative Technology, Volume 2, Issue 4, April 2013, pp. 80-88.
[4] A. Anand Kumar, Fundamentals of Digital Circuits, PHI, 2nd edition.
[5] Prashant Gurjar and Rashmi Solanki, 2011, "VLSI implementation of adders for high speed ALU", International Journal of Computer Applications, Vol. 29, No. 10, pp. 11-15.
[6] Floyd, Digital Fundamentals, Pearson Publications, 10th edition.
[7] Xilinx Inc., Xilinx Student Edition 4.2i, PHI, 1st edition, 2002.
[8] Pedroni, 2004, Circuit Design with VHDL, MIT Press, Cambridge, pp. 159-186.
[9] Chiuchisan, Potorac, 2010, "Finite state machine design and VHDL coding techniques", 10th International Conference on Development and Application Systems, Romania, pp. 273-278.


Automatic Detection of Diabetic Retinopathy from Eye Fundus Images: A Review

Manpreet Kaur, Department of Electronics and Communication Engineering, Punjabi University Patiala, Punjab, India, manpreet89sidhu@gmail.com
Mandeep Kaur, Department of Electronics and Communication Engineering, Punjabi University Patiala, Punjab, India, ermandeep0@gmail.com

ABSTRACT
Diabetic Retinopathy (DR) is one of the eye diseases that people with diabetes may face as a complication of diabetes. It can cause severe vision loss or even blindness. Therefore, to prevent the severity of this disease it is essential to detect it at an early stage. Automatic detection of DR has taken over manual screening of eye fundus images, owing to time constraints, as well as to estimate the severity of the disease. The algorithms for automated DR detection involve retinal image preprocessing, blood vessel enhancement and segmentation, and optic disk localization and detection, which eventually lead to the detection of different DR lesions. In this paper, a review of the techniques, algorithms and methodologies used for automatic detection of DR in eye fundus images is presented.

Keywords
Automated detection; diabetic retinopathy; fundus photographs; optic disk; blood vessels.

1. INTRODUCTION
In patients with diabetes, prolonged periods of high blood sugar can lead to the accumulation of fluid in the retina along the blood vessels, which eventually leads to blurred vision. The World Health Organization (WHO) has evaluated that diabetic retinopathy is on the priority list of eye conditions which can be partly prevented and treated. It is recommended that eye care services for diabetic patients be incorporated into strategic VISION 2020 national plans. Well tested international guidelines should be followed to detect diabetic retinopathy at an early stage. Diabetes is a disease that interferes with the body's ability to use and store sugar and causes an imbalance of sugar levels in the body, which eventually can cause many other health problems, including problems of the eyes.

Over time, diabetes affects the circulatory system of the retina. Diabetic retinopathy is the result of damage caused by diabetes to the small blood vessels located in the retina. Blood vessels damaged by diabetic retinopathy can cause vision loss. The longer a person has diabetes, the more likely they are to develop diabetic retinopathy. Diabetic retinopathy is classified into two types: (i) Non-proliferative diabetic retinopathy (NPDR) is the early state of the disease. In NPDR, the blood vessels in the retina are weakened, causing tiny bulges called microaneurysms (MAs) to protrude from their walls. The abnormal bleeding from blood vessels results in hemorrhages having a dark blot configuration in the eye fundus image. Exudates are bright yellow marks in the retina that occur due to leakage of fats and proteins along with water from blood vessels. (ii) Proliferative diabetic retinopathy (PDR) is the more advanced form of the disease. At this stage new fragile blood vessels begin to grow in the retina and into the vitreous. The new blood vessels may leak blood into the vitreous, and these vessels also get damaged due to the extremely high pressure in the eye. If left untreated, PDR can cause severe vision loss and even blindness.

Fig.1. Retinal Fundus Image

However, the risk of vision loss due to diabetic retinopathy can be reduced by effective control of serum glucose and blood pressure and by its early detection and timely treatment. Fundus photography is used to document the anatomical parts of the retina such as the optic disc, fovea and blood vessels, as well as abnormalities related to diabetic retinopathy (damage to the retina from diabetes) such as microaneurysms, hemorrhages, exudates etc., as shown in fig.1. This is because retinal details may be easier to visualize in stereoscopic fundus photographs as opposed to direct examination. Early diagnosis is very important to control the rapid increase in the number of instances of diabetic retinopathy.

The potential solution is to develop an algorithm that can automatically detect the disease at an early stage, so that ophthalmologists could give more time to patients who require their attention rather than examining each and every fundus image. The speedy growth of information processing systems and the emergence of inexpensive ophthalmic imaging devices have led to the development of automated techniques for the detection of diabetic eye diseases. This paper provides a review of various automated techniques along with their strengths and weaknesses. In this review we discuss various

algorithms for automated detection of retinal features based on digital image analysis.

2. METHODOLOGY
Automatic detection of diabetic retinopathy is very important for diagnosing the disease to prevent blindness. With the help of an automated system, the work of ophthalmologists can be reduced, and the cost of detection of diabetic retinopathy can also be reduced. Most of the existing methods of automatic detection work in three stages: (i) image pre-processing, (ii) detection and segmentation of the optic disc and retinal blood vessels, (iii) extraction of lesions (hard or soft or both).

The first stage requires image preprocessing for the reduction of noise and for contrast enhancement. Mostly, image preprocessing is performed on the green color plane of the RGB image, because in the green color plane higher contrast is obtained with the background. The presence of non-retinopathic features such as blood vessels and the optic disc, which may have similar color and contrast to other retinal features, might lead to a non-retinopathic feature being wrongly classified as a retinopathic feature. Hence, in the second stage the optic disc and blood vessels are segmented from the fundus image for the reduction of false positives. After the optic disc and blood vessel network localization, the exudates, hemorrhages and microaneurysms are extracted from the images. Microaneurysm and hemorrhage detection (dark red regions) requires the removal of other, brighter regions such as exudates and the optic disc. Detection of hard exudates, which are brightly colored, is difficult in the presence of other bright regions, i.e. the blood vessel network and the optic disc; hence these must be removed before extracting the exudates. In some cases, the severity of the disease, i.e. normal, mild or severe, has also been detected.

3. EVALUATION AND PERFORMANCE MEASURES
For automated detection of lesions in fundus images two measures are mostly used: sensitivity and specificity. A confusion matrix is used for measuring the sensitivity and specificity.

Predicted class →     C1 (YES)   C2 (NO)
Actual class ↓
C1 (YES)                 TP         FN
C2 (NO)                  FP         TN

Fig.3. Confusion Matrix

The sensitivity is defined as the ability of a test to detect correctly people with a disease or condition:

Sensitivity = TP / (TP + FN)

The specificity is defined as the ability of a test to exclude properly people without a disease or condition:

Specificity = TN / (TN + FP)

TP (True positive): sick people correctly diagnosed as sick.
FP (False positive): healthy people incorrectly identified as sick.
TN (True negative): healthy people correctly identified as healthy.
FN (False negative): sick people incorrectly identified as healthy.
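The sensitivity and specificity defined above follow directly from the confusion matrix counts; a small sketch with made-up example counts:

```python
def sensitivity_specificity(tp, fn, fp, tn):
    """Sensitivity = TP/(TP+FN); Specificity = TN/(TN+FP), from the confusion matrix."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening result: 90 sick patients correctly flagged, 10 missed;
# 5 false alarms among 100 healthy patients.
sens, spec = sensitivity_specificity(tp=90, fn=10, fp=5, tn=95)
print(sens, spec)   # 0.9 0.95
```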

4. LITERATURE SURVEY
In 2010, Hussain F. Jaafar et al. [3] proposed an automated method for the detection of exudates in retinal images. The proposed method was adapted to deal with different types and qualities of images by taking all image information into account. The performance of the proposed method was measured against clinician hand-labelled images. Compared with some recent related works, the proposed method indicates an improvement in the specificity and accuracy measures and a reasonable sensitivity. Candidates were detected using a combination of coarse and fine segmentation. The coarse segmentation was based on a local variation operation to outline the boundaries of all candidates which have clear borders. The fine segmentation was based on an adaptive thresholding and a new split-and-merge technique to segment all bright candidates locally. A limitation of this work was that it occasionally fails to exclude some non-exudate objects, particularly those that have similar features to real exudates. Performance measures show that exudates were detected from a database with 89.7% sensitivity and 99.3% specificity.

Fig.2. Flow chart showing methodology.

Then, Carla Agurto et al. (2010) [4] used a multiscale amplitude-modulation frequency-modulation (AM-FM) method to

discriminate between normal and pathological retinal images. All
the lesions associated with DR, i.e. microaneurysms, exudates and
hemorrhages, were detected. The texture feature vectors consist of
cumulative distribution functions of the instantaneous amplitude,
the instantaneous frequency magnitude, and the relative
instantaneous frequency angle taken from multiple scales. Distance
metrics between the extracted feature vectors are used to measure
inter-structure similarity. Good sensitivity for abnormal versus
normal images and high sensitivity/specificity for IR versus normal
images were obtained by this method. The algorithm allows the user
to obtain a detailed analysis of the images, since the features are
extracted by regions. This paper reports the first time that AM-FM
has been used with a multiscale decomposition of retinal structures
for the purpose of classifying them into pathological and normal
structures. The results demonstrate a significant capability to
differentiate retinal features such as normal anatomy (OD and
retinal vessels) from lesions (MAs, hemorrhages, and exudates).

Usman M. Akram et al. (2011) [5] proposed a digital diabetic
retinopathy system for early detection of diabetic retinopathy.
Algorithms for retinal image preprocessing, blood vessel
enhancement and segmentation, and optic disk localization and
detection, which eventually lead to detection of different DR
lesions using the proposed hybrid fuzzy classifier, are presented.
The first step of the proposed system is preprocessing, in which the
background and noisy areas are separated from the overall image to
enhance the quality of the acquired retinal image and to lower the
processing time. After preprocessing, blood vessels are enhanced
and segmented using a Gabor wavelet and multilayered
thresholding, respectively. The optic disk is then localized using an
average filter and thresholding, and its boundary is detected using
the Hough transform and edge detection. After segmenting the
blood vessels and OD, dark and bright lesions are detected using
the hybrid fuzzy classifier.

Keerthi Ram et al. (2011) [6] proposed a new method for automatic
MA detection from digital color fundus images. MA detection is
formulated as a problem of target detection from clutter, where the
probability of occurrence of the target is significantly smaller than
that of the clutter. A successive rejection-based strategy is proposed
to progressively lower the number of clutter responses. Using a set
of specialized features, the processing stages are designed to reject
specific classes of clutter while passing the majority of true MAs.
The true positives remaining after the final rejector are assigned a
score depending on their similarity to a true MA. A new set of
morphological and appearance-based features is used to
characterize the clutter and MA structures. In previous approaches,
a single classification step is used to distinguish between the target
and multiple subgroups of the clutter class together, which demands
design complexity in the feature space and the underlying classifier.
The proposed processing pipeline separates both classes and further
allows various subgroups of the clutter class to be handled through
a cascade solution. This gives the flexibility to achieve an
independent optimal solution for individual subtasks and helps
simplify the final classification step (L stage) by reducing the
number of clutter subgroups to be handled.

L. Giancardo et al. (2011) [7] proposed a new microaneurysm
segmentation technique based on a novel application of the Radon
transform, which is able to identify these lesions without any
previous knowledge of the retinal morphological features and with
minimal image preprocessing. The Radon-based features allow the
detection of MAs directly on the original image without vessel or
optic nerve segmentation. They are also inherently able to identify
MAs of different sizes without multiscale analysis. The method
does not require a large dataset: once some examples of MAs are
shown to the classifier, it is simply a matter of dynamically
selecting the negative examples on one or two images to make the
algorithm "converge" to the desired performance.

Anitha Mohan et al. (2013) [8] developed an automatic diagnostic
system to detect diabetic retinopathy changes in the eye, so that
only affected persons are referred to the specialist for further
examination and treatment. The aim of this project was to find the
non-healthy parts, i.e. parts affected by the disease, in the eye of
diabetic patients. Here, the FCM (fuzzy c-means) algorithm is used
to find the ratio of the disease. A major advantage of this algorithm
is greater accuracy of exudate detection. To classify the segmented
regions into healthy and non-healthy areas, an artificial neural
network classifier was used. The significant contributions of this
work were: 1) a median filter was used to smooth irregular edges,
2) the hue-saturation-value representation of the input image was
obtained, and 3) the FCM algorithm was applied to find the ratio of
the disease.

Shraddha Tripathi et al. (2013) [9] proposed an automatic method
for exudate detection from colour fundus images based on the
Differential Morphological Profile (DMP). The three main phases
of this method are: (i) pre-processing by Gaussian smoothing and
contrast enhancement; (ii) passing the pre-processed image through
the DMP, which yields an image with highlighted bright regions
indicating exudates and the optic disc; and (iii) feature extraction,
which depends on the location of the optic disc, shape index and
area. The image obtained consists of the actual exudates. The
results show that the proposed method detects exudates more
efficiently and accurately than other state-of-the-art methods.

Syna Sreng et al. (2013) [10] developed an automatic method for
detection of exudates from fundus images. The proposed method
first pre-processes the fundus image to improve its quality, and then
the Optic Disc (OD) is detected and eliminated, to prevent it from
interfering with the exudate detection result, by a combination of
three methods: (i) image binarization, (ii) Region Of Interest (ROI)
based segmentation and (iii) Morphological Reconstruction (MR).
Then, maximum entropy thresholding is applied to filter out the
bright pixels from the OD-eliminated result. Finally, exudates are
extracted using morphological reconstruction.
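The maximum entropy thresholding step used in [10] is commonly implemented as Kapur's method; the sketch below is our own illustrative version, assuming an 8-bit grayscale image as a NumPy array (it is not the code of the paper):

```python
import numpy as np

def max_entropy_threshold(gray):
    """Kapur's maximum-entropy threshold for an 8-bit grayscale image.

    Returns the threshold t that maximizes the summed entropy of the
    background (<= t) and foreground (> t) intensity distributions.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        bg, fg = p[:t + 1], p[t + 1:]
        wb, wf = bg.sum(), fg.sum()
        if wb == 0 or wf == 0:          # one class empty: skip
            continue
        bg, fg = bg[bg > 0] / wb, fg[fg > 0] / wf
        h = -(bg * np.log(bg)).sum() - (fg * np.log(fg)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Toy image: dark background with a few bright "exudate-like" pixels.
img = np.zeros((10, 10), dtype=np.uint8)
img[2:4, 2:4] = 220
t = max_entropy_threshold(img)
bright = img > t                         # candidate bright-lesion mask
```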


Table 1. Comparison of Various Algorithms Discussed in Literature Survey

Sr. No. | Reference | Method used | Results | Advantages / Disadvantages
1 | Hussain F. Jaafar et al. [3] | Adaptive thresholding and a new split-and-merge technique to segment all bright candidates locally | 89.7% sensitivity, 99.3% specificity | Occasionally fails to exclude some non-exudates that have features similar to real exudates.
2 | Carla Agurto et al. [4] | AM-FM with a multiscale decomposition of retinal structures | 100% sensitivity, 88% specificity | The various retinal structures are encoded differently by the AM-FM features, so lesions can be easily detected.
3 | Keerthi Ram et al. [6] | Successive rejection-based strategy to progressively lower the number of clutter responses | 88.46% sensitivity, 90% specificity | Expensive and complex technique.
4 | L. Giancardo et al. [7] | Segmentation based on a novel application of the Radon transform | 36.6% sensitivity at 0.5 FPs | (i) Identifies MAs of different sizes without multiscale analysis; (ii) low sensitivity.
5 | Anitha Mohan et al. [8] | FCM (fuzzy c-means) algorithm to find the ratio of disease | DNP* | Greater accuracy of exudate detection.
6 | Shraddha Tripathi et al. [9] | Gaussian smoothing in the first stage, then a Differential Morphological Profile (DMP) to segment exudates | 97.6% sensitivity, 99.9% specificity | Detects exudates efficiently and accurately.
7 | Syna Sreng et al. [10] | Maximum entropy thresholding to filter out bright pixels | 91% sensitivity | Average processing time of 3.92 seconds per image.
8 | T. Yamuna et al. [11] | Region-based image segmentation for candidate extraction; Adaptive Neuro-Fuzzy Inference System (ANFIS) for classification | 66.67% sensitivity, 62.5% specificity | (i) ANFIS classifies retinal images as normal, mild or severe depending on their severity; (ii) requires training and testing prior to classification.
9 | Sérgio Bortolin Júnior et al. [12] | Mathematical morphological operations | 87.69% sensitivity, 92.4% specificity | Unable to identify some microaneurysms occupying a very small area, which lowers sensitivity.
10 | Sharath Kumar P N et al. [13] | Multi-channel histogram analysis | 88.45% sensitivity, 95.5% specificity | Detects DR-related abnormalities with high accuracy and reliability.
11 | Mahendran Gandhi et al. [14] | Morphological operations for segmentation; Support Vector Machine (SVM) for classification | DNP* | (i) The segmented image shows the location of exudates, confirming the disease; (ii) the SVM classifier assesses disease severity.
12 | Abderrahmane Elbalaoui et al. [15] | Graph cut algorithm for exudate extraction; neural network classifier to classify candidates | 95% sensitivity, 96.65% specificity | Fast segmentation method.

DNP* - data not provided.
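Row 5 of the table relies on fuzzy c-means (FCM) clustering. A minimal one-dimensional sketch of the standard FCM updates (an illustrative implementation of the general algorithm, not the code of [8]):

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on a 1-D feature vector x (e.g. pixel intensities).

    Returns (centers, memberships); memberships[i, k] is the degree to
    which sample i belongs to cluster k, with each row summing to 1.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)          # random fuzzy partition
    for _ in range(iters):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))         # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Toy intensities: a dark background cluster and a bright lesion cluster.
x = np.array([10., 12., 11., 200., 205., 198.])
centers, u = fcm(x)
```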


T. Yamuna et al. (2013) [11] proposed a method that aims to detect
low-contrast abnormalities such as microaneurysms in the retinal
image and to classify them on the basis of their severity. Two
preprocessing techniques, Illumination Equalization and Contrast
Limited Adaptive Histogram Equalization (CLAHE), were used;
morphological operations were used to extract the abnormal
candidates, and features such as area, mean, standard deviation and
entropy were used to classify the various stages of abnormality.
The tool used here for effective screening of retinal abnormalities
is the Adaptive Neuro-Fuzzy Inference System (ANFIS), which
classifies the retinal images as normal, mild or severe depending on
their severity.

Sergio Bortolin Junior et al. (2013) [12] presented a novel approach
for automated detection of red lesions in fundus images. The
approach is based on mathematical morphology and consists of
removing components of the retinal anatomy to detect the lesions.
Five stages are involved: a) pre-processing; b) enhancement of
low-intensity structures; c) detection of blood vessels; d)
elimination of blood vessels; e) elimination of the fovea. The main
contribution of the proposed method is the detection of blood
vessels, for which a solution based on mathematical morphology
was used.

Sharath Kumar P N et al. (2013) [13] used a new method for
preprocessing and elimination of false positives to detect exudates
accurately. In preprocessing, HSV color space conversion is used
to enhance the brightness of fundus images. Gamma correction is
then performed on the red and green components of the image to
emphasize brighter yellow regions (exudates). Histogram analysis
is used to detect exudate candidates. Finally, false positives are
removed using multi-channel histogram analysis. In the future, the
integrated fundus image analysis scheme will be further improved,
and a full-fledged automated system can be developed to detect
other symptoms of mild DR, viz. microaneurysms and
haemorrhages.

Mahendran Gandhi et al. (2013) [14] proposed automatic detection
of diabetic retinopathy by detecting exudates in colour fundus
retinal images, and also classified the severity of the lesions.
Morphological operations such as dilation and erosion are used to
detect the presence and location of exudates. The segmented image
shows the location of the exudates, confirming diabetic
retinopathy. The paper not only confirms the disease but also
measures its severity: an SVM classifier is used to assess whether
the patient is moderately or severely affected.

Abderrahmane Elbalaoui et al. (2014) [15] presented an automated
method for the detection of exudates in retinal color fundus images
with high accuracy. First, the image is converted to the HSI model;
after preprocessing, possible exudate regions are segmented, with
the Optic Disc (OD) removed using a graph-cut algorithm, and Hu
invariant moments are extracted as a feature vector. The regions
are then classified as exudates or non-exudates using a neural
network classifier. The neural network gives better results with
features extracted from the images by GIST descriptors and Hu
moments.

5. CONCLUSION

This review paper analyses the merits and demerits of the existing
automated techniques for the identification of retinal features and
pathologies. Automatic detection of diabetic retinopathy lesions
presents many challenges. The size and color of dark lesions are
very similar to those of the blood vessels, and the intensity of
bright lesions is similar to that of the optic disc. There is a chance
of false positives where anatomical features of the retina overlap
with pathological features, i.e. lesions. So there is a need for an
effective automated detection method, so that diabetic retinopathy
can be treated at an early stage and the resulting blindness can be
prevented. In this paper, some existing methods are reviewed to
give a complete view of the field. On the basis of this work,
researchers can get an idea of automated microaneurysm,
haemorrhage and exudate detection, and can develop more
effective methods for diagnosing diabetic retinopathy.

REFERENCES

[1] J. Olson, F. Strachan, J. Hipwell, K. Goatman, K. McHardy,
J. Forrester, P. Sharp, "A comparative evaluation of digital
imaging, retinal photography, and optometrist examination in
screening for diabetic retinopathy," Diabetic Med., vol. 20,
pp. 528-534, 2003.
[2] D. Fleming, S. Philip, K. A. Goatman, G. J. Williams, J. A.
Olson, and P. F. Sharp, "Automated detection of exudates for
diabetic retinopathy screening," Phys. Med. Biol., vol. 52, pp.
7385–7396, 2007.
[3] Hussain F. Jaafar, Asoke K. Nandi and Waleed Al-Nuaimy,
"Automated Detection of Exudates in Retinal Images Using
a Split-and-Merge Algorithm," in 18th European Signal
Processing Conference (EUSIPCO-2010), EURASIP, 2010.
[4] Carla Agurto, Victor Murray, and Eduardo Barriga,
"Multiscale AM-FM Methods for Diabetic Retinopathy
Lesion Detection," IEEE Transactions on Medical Imaging,
vol. 29, no. 2, February 2010.
[5] Usman M. Akram and Shoab A. Khan, "Automated Detection
of Dark and Bright Lesions in Retinal Images for Early
Detection of Diabetic Retinopathy," Springer, J Med Syst,
vol. 36, pp. 3151–3162, 2012.
[6] Keerthi Ram, Gopal Datt Joshi and Jayanthi Sivaswamy, "A
Successive Clutter-Rejection-Based Approach for Early
Detection of Diabetic Retinopathy," IEEE Transactions on
Biomedical Engineering, vol. 58, no. 3, pp. 664-673, March
2011.
[7] L. Giancardo, F. Meriaudeau, T. P. Karnowski, Y. Li, K. W.
Tobin and E. Chaum, "Microaneurysm Detection with Radon
Transform-based Classification on Retina Images," in 33rd
Annual International Conference of the IEEE EMBS, Boston,
Massachusetts, USA, August 30 - September 3, 2011.
[8] Anitha Mohan and K. Moorthy, "Early Detection of Diabetic
Retinopathy Edema using FCM," International Journal of
Science and Research (IJSR), vol. 2, no. 5, May 2013.


[9] Shraddha Tripathi, Krishna Kant Singh, B. K. Singh,
Akansha Mehrotra, "Automatic Detection of Exudates in
Retinal Fundus Images using Differential Morphological
Profile," International Journal of Engineering and
Technology (IJET), vol. 5, no. 3, Jun-Jul 2013.
[10] Syna Sreng, Jun-ichi Takada, Noppadol Maneerat, Don
Isarakorn, and Ronakorn Panjaphongse, "Automatic Exudate
Extraction for Early Detection of Diabetic Retinopathy," in
5th International Conference on Information Technology and
Electrical Engineering, IEEE, Oct 2013.
[11] T. Yamuna and S. Maheswari, "Detection of Abnormalities
in Retinal Images," IEEE International Conference on
Emerging Trends in Computing, Communication and
Nanotechnology (ICECCN 2013), pp. 236-240, 2013.
[12] Sergio Bortolin Junior and Daniel Welfer, "Automatic
Detection of Microaneurysms and Hemorrhages in Color Eye
Fundus Images," International Journal of Computer Science
& Information Technology (IJCSIT), vol. 5, no. 5, Oct 2013.
[13] Sharath Kumar P N, Rajesh Kumar R, Anuja Sathar, and
Sahasranamam V, "Automatic Detection of Exudates in
Retinal Images Using Histogram Analysis," IEEE Recent
Advances in Intelligent Computational Systems (RAICS),
Dec 2013.
[14] Mahendran Gandhi and R. Dhanasekaran, "Diagnosis of
Diabetic Retinopathy Using Morphological Process and SVM
Classifier," IEEE International Conference on
Communication and Signal Processing, pp. 873-877, April
3-5, 2013.
[15] Abderrahmane Elbalaoui, Mehdi Boutaounte, Hassan Faouzi,
Mohamed Fakir, and Abdelkrim Merbouha, "Segmentation
and Detection of Diabetic Retinopathy Exudates," in
International Conference on Multimedia and Computing
Systems, IEEE, pp. 1625–1627, April 2014.


Various Methods of Road Extraction from Satellite Images:
A Comparative Analysis

Atinderpreet Kaur, Dept. of Computer Engineering, Punjabi University, Patiala
Ram Singh, Dept. of Computer Engineering, Punjabi University, Patiala
atti.punia11@gmail.com

ABSTRACT

Extraction of roads from satellite images is a crucial part of
image segmentation. It is essential because a number of
applications are based on road information, such as vehicle
navigation, transportation management, tourism, industrial
development, and urban or rural development. The images used
in road extraction are mostly satellite or aerial images. Although
satellite images are of high resolution, many obstacles, such as
shadows of trees and buildings, vehicles, interference from the
surroundings, and broken roads, curb the automatic extraction of
roads. In this paper, a review of the techniques, algorithms and
methodologies used for automatic detection of roads is presented.

Keywords: Road extraction, Morphology, Reference Circle,
Gradient Operator, Unmasking, SVM.

1. INTRODUCTION

Roads have become a necessary part of society in today's life, as
road information is important in traffic and fleet management,
vehicle navigation systems, location-based services, tourism and
industrial development. So there is a need for algorithms to
extract roads from satellite images. Various techniques exist to
extract roads, such as rule-based approaches, statistical inference
methods, contour methods, etc. Despite the fact that much work
on automatic approaches for road extraction has taken place, the
desired high level of automation has not yet been achieved. The
reason is the number of obstacles that curb automatic extraction,
such as blurring, broken or misplaced road boundaries, lack of
road profiles, dense shadows, and interference from surrounding
objects. As a result, existing automatic extraction is not
sufficient for practical applications.

Road network information is required for a variety of
applications, and such information is a necessary input to many
decision processes. Because of the large areas to be mapped, it is
important to use highly automated means as well as cheap and
readily available data. Manual extraction of roads from remote
sensing imagery is mostly time-consuming and costly. The
potential solution is to develop an algorithm that can detect roads
automatically, so that road information can be used for fruitful
results. The speedy growth of information processing systems
has led to the development of automated techniques for road
detection. This paper reviews various automated techniques
along with their strengths and weaknesses. In this review, we
discuss the various algorithms for automated detection of roads
based on aerial image analysis.

2. METHODOLOGY

Automatic detection of roads is very important for traffic
management, location-based services and especially for vehicle
navigation systems. Most automatic road detection works in
three stages: (i) image pre-processing, (ii) detection and
segmentation of roads, and (iii) post-processing to link the
segments and remove unwanted results.

The first stage requires image preprocessing for noise reduction
and contrast enhancement. Image preprocessing is mostly
performed on the green color plane of the RGB image, because
in the green plane higher contrast with the background is
obtained. The presence of non-road regions such as roofs of
buildings, parking lots, etc., which may have color and contrast
similar to roads, might lead to a non-road region being wrongly
classified as a road region. Hence, in the second stage, non-road
regions are segmented from the road image to reduce false
positives.

Image Preprocessing (resize, remove noise or other obstacles)
    |
    v
Detection and Segmentation of roads and non-road regions
    |
    v
Extraction of roads and linking of segments (if required)

Fig.1. Flow chart showing methodology

3. EVALUATION & PERFORMANCE MEASURES

For automated detection of roads in aerial images, three
measures are mostly used: Completeness, Correctness and
Quality.

Actual class \ Predicted class | C1 (YES) | C2 (NO)
C1 (YES) | TP | FN
C2 (NO) | FP | TN

Fig.2. Confusion Matrix
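These three measures are conventionally computed from the confusion matrix as completeness = TP/(TP+FN), correctness = TP/(TP+FP) and quality = TP/(TP+FP+FN). A minimal NumPy sketch over binary road masks (the helper and the toy masks are our own illustration):

```python
import numpy as np

def road_metrics(extracted, reference):
    """Completeness, correctness and quality from binary road masks.

    extracted, reference: boolean arrays of the same shape, True = road.
    """
    tp = np.sum(extracted & reference)     # road pixels correctly found
    fp = np.sum(extracted & ~reference)    # non-road labelled as road
    fn = np.sum(~extracted & reference)    # road pixels missed
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    return completeness, correctness, quality

# Toy masks: the reference road is columns 4-5; the extraction misses
# one road pixel and adds one spurious pixel.
ref = np.zeros((8, 8), dtype=bool); ref[:, 4:6] = True
ext = ref.copy(); ext[0, 4] = False; ext[3, 0] = True
comp, corr, qual = road_metrics(ext, ref)
```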


Correctness is defined as the ability of an algorithm to detect the
correct roads in an image.

Completeness is defined as the ability of an algorithm to label all
road pixels of the actual image as road pixels in the extracted
image.

Quality is defined as the overall excellence of the road
extraction.

TP (True positive): roads correctly detected as roads.
FP (False positive): non-roads incorrectly identified as roads.
TN (True negative): non-roads correctly identified as non-roads.
FN (False negative): roads incorrectly identified as non-roads.

4. LITERATURE SURVEY

Liu Xu et al. [1] proposed the Unsharp Mask (USM) sharpening
algorithm for image enhancement and reinforcement of road
features. A rapid method of road extraction from satellite images
was achieved by three operations: a) image preprocessing, b)
threshold segmentation, c) erosion and dilation. In USM, a
Gaussian low-pass filter is used to suppress noise. According to
the experimental results of this paper, it can effectively reduce
non-target noise and accurately extract roads from the images.
A pro of this algorithm is that it enhances the color gradient of
roads against the surrounding environment. However, the
method also has a shortcoming: the color of a processed image
changes when the maximum and minimum pixel values exceed
those of the original image after sharpening, so the choice of the
template is very important.

Xufeng Guo et al. [2] (2011) proposed a method of road
extraction based on automatic seed selection, which starts the
road extraction from a combination of geometry and color
features.

Yan Li and Ronald Briggs [3] proposed a two-stage method of
reference circle and central pixels that is amenable to the
distortions expected over roads. The reference circle provides the
direction and shape features of roads, addressing the major
issues that have caused existing extraction approaches to fail,
such as blurred boundaries, interfering objects and heavy
shadows. Both visual and geometric characteristics of roads are
taken into account. The central pixels play an important role
throughout the extraction process, including filtering,
segmentation, grouping and optimization. Once a centerline is
found, the contour of the road can be derived from all the central
pixels based on a statistical estimation of the road width. The
proposed approach is efficient, reliable, and assumes no prior
knowledge about the road conditions, surrounding objects or
color spectrum of roads.

Juan Wang et al. [4] (2012) proposed a method with four road
extraction strategies (linear road, curvilinear road, road crossing
and road breakage) based on mathematical morphology. Various
mathematical morphology operations, such as erosion, dilation,
opening and closing, are used to keep image features and remove
redundant structure. Different structuring elements are used
depending on the structure of the road. The operations are
simple, flexible and fast.

Beril Sırmacek et al. [5] (2010) proposed a spatial voting method
combined with edge detection for road extraction. Non-linear
bilateral filtering is used to smooth the image. A spatial voting
matrix (SVM) is generated by extracting Canny edges and
gradient information, which is used to signify the possible
locations of road networks. By iteratively processing the voting
matrix, initial road pixels are detected. Finally, a tracking
algorithm is applied to detect the missing road pixels in the
voting matrix. The negative point of the method is that the
tracking algorithm decreases the operation speed.

Shengyan Zhou et al. [6] proposed a road extraction technique
based on the Support Vector Machine (SVM) that focuses on the
problems of feature extraction and classification. It is an
effective approach for self-supervised online learning. It reduces
the possibility of wrongly categorizing road and non-road
classes and improves the road detection algorithm.

Sahar Movaghati et al. [7] proposed extended Kalman filtering
combined with particle filtering, which helps in finding different
road branches. It is based on two modules, EKF and PF. The
EKF module finds and estimates the coordinates of the road
median, while the PF module is only utilized in critical
situations, i.e., when the EKF module stops due to road obstacles
or road junctions. When the EKF module faces a severe obstacle,
the update-profile-cluster procedure cannot successfully create
new profile clusters; to initialize the PF module, the EKF module
then transfers the information about its last successful step on
the present road segment to the PF module. Finally, clustering is
done to merge all branches created in the EKF and PF modules
to extract the road. The PF module slows down the algorithm,
which is the limitation of this method.

Xiangyun Hu et al. [8] (2014) proposed various features to detect
road centerlines from the residual ground points.
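The USM sharpening described for [1] amounts to adding a scaled difference between the image and a Gaussian-blurred copy back to the image. A minimal NumPy sketch (the kernel size, sigma and gain below are illustrative choices, not values from the paper):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def unsharp_mask(img, sigma=2.0, amount=1.5):
    """Sharpen a grayscale float image: img + amount * (img - blurred)."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    # Separable Gaussian low-pass filter (suppresses noise before sharpening).
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return np.clip(img + amount * (img - blurred), 0.0, 255.0)

# Toy image: a bright "road" stripe on a dark background.
img = np.zeros((32, 32)); img[:, 14:18] = 200.0
sharp = unsharp_mask(img)     # stripe edges are accentuated
```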


TABLE 1
COMPARISON OF VARIOUS ALGORITHMS DISCUSSED IN LITERATURE SURVEY

Sr. No. | Reference | Method used | Image (database) | Results | Advantages / Disadvantages
1 | Liu Xu et al. [1] | Unsharp masking with a template to effectively reduce noise | QuickBird panchromatic image | DNP* | 1. Quick and easy method. 2. Improper template selection leads to incorrect extraction of roads.
2 | Yan Li and Ronald Briggs [3] | Reference circle and central pixels, capturing both visual and geometric characteristics of roads | Images from Google Maps | DNP* | Handles blurred, broken or missing road boundaries, heavy shadows, and interference from surrounding objects well.
3 | Juan Wang et al. [4] | Mathematical morphology | Remote sensing image | DNP* | Easy to use.
4 | Beril Sırmacek et al. [5] | Spatial voting matrix combined with edge detection | Panchromatic IKONOS satellite image | 86.85% of the road network; 9.03% false alarm rate | The tracking algorithm decreases the operation speed.
5 | Shengyan Zhou et al. [6] | Support vector machine based on online learning | Self-captured images | DNP* | Automatic updating of training data reduces the possibility of misclassifying road and non-road regions.
6 | Sahar Movaghati et al. [7] | Extended Kalman filter to find the road median and a particle filtering module to handle obstacles | IRS and IKONOS images | Correctness 98%; completeness 92% (IRS) and 85% (IKONOS) | 1. Effectively finds road branches at junctions. 2. The PF module reduces the speed of the algorithm. 3. Not applicable to more complex urban areas.
7 | Xiangyun Hu et al. [8] | Mean shift, tensor voting and Hough transform (MTH) to extract the road centerline | LiDAR data of urban areas | Completeness 81.7%; correctness 88.4% | 1. Effectively detects roads in complex urban scenes with varying road widths and noise. 2. Heavy computational cost in the tensor voting step.
8 | Fatemeh Mazrouei Sebdani et al. [9] | Hough transform on video sequence frames | Road scenes in video sequences taken by stationary traffic cameras | FAR 2%; FRR 11% | 1. No need for prior knowledge of road width, mark length or other parameters. 2. Due to its computational complexity, the algorithm cannot be used in real-time applications.
9 | Jiuxiang Hu et al. [11] | Automatic road seeding method using footprints | Any aerial image | Completeness 84% to 90%; correctness above 81%; quality 82% to 90% | Efficiently trims the paths that leak into the surroundings of the roads.

DNP* - data not provided.
FAR: the number of pixels that are not road lines but are detected as road lines.
FRR: the number of pixels that are road lines but cannot be extracted.
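Several table rows rely on mathematical morphology (erosion, dilation, opening, closing). As a reference, a minimal binary erosion/dilation sketch with a 3x3 structuring element (our own illustration, not code from [4]):

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: 1 where the structuring element hits any foreground."""
    h, w = se.shape
    pad = np.pad(img, (h // 2, w // 2))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.any(pad[i:i + h, j:j + w] & se)
    return out

def erode(img, se):
    """Binary erosion: 1 where the structuring element fits entirely inside."""
    h, w = se.shape
    pad = np.pad(img, (h // 2, w // 2))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.all(pad[i:i + h, j:j + w][se == 1])
    return out

se = np.ones((3, 3), dtype=np.uint8)
img = np.zeros((7, 7), dtype=np.uint8); img[2:5, 2:5] = 1
opened = dilate(erode(img, se), se)   # opening removes structures thinner than the SE
```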


The main purpose of this method is to separate connected
non-road features, like parking areas and bare ground, from the
roads. Three features are applied to detect the roads: adaptive
mean shift finds the center points on the road, stick tensor voting
is applied to enhance the salient linear features of the road, and
the Hough transform is applied to extract the arc primitives of
the road centerline. One problem of this method is the heavy
computational cost of the tensor voting step; another is the
identification of the contextual objects of roads, such as lane
markings, road junction patterns, vehicles, and road edges.

Fatemeh Mazrouei Sebdani et al. [9] (2012) use the Hough
transform to extract the roads. In preprocessing, a region of
interest (ROI) is applied with a Wiener filter to remove noise.
The image is then segmented by thresholding and the Hough
transform method. Morphology operations are applied at the end
to improve the output results. The positive point of this method
is that no prior knowledge of road width, mark length or other
parameters is needed. The negative point is that it cannot be used
for real-time applications, as it is time-consuming, and it is
applicable to 2-D images only.

Hao Chen et al. [10] (2014) proposed an algorithm to overcome
the problem of defective extraction results caused by noise in
traditional methods. The algorithm is based on global features
and uses a top-down approach to extract the roads. Global
topological perception is applied prior to the perception of other
locally featured properties (centerlines), which are extracted
using the Burns algorithm. Topological information about roads
is derived from vector data.

Jiuxiang Hu et al. [11] (2007) used a two-step approach to
extract roads from aerial images: detection and pruning analysis.
The method detects road footprints, tracks the roads, and grows a
road tree. The footprint of a pixel is the local homogeneous
region around it enclosed by a polygon, and a spoke wheel
operator is used to get the road footprint. A toe-finding algorithm
is used to find the direction of footprints. A Bayes decision
model based on the area-to-perimeter (A/P) ratio of the footprint
is used to trim the paths that leak into the surroundings. This
paper helps in efficiently trimming such paths, which improves
the performance, correctness and quality of the extracted roads.

Mingjun Song et al. [12] (2004) used an SVM to extract the
roads; it classifies the image into road and non-road regions.
Region growing based on homogeneity criteria is used for
segmentation; thresholding is then applied to extract roads, and
vectorization and thinning are performed to get the road
centerline. This approach helps in extracting very narrow roads,
but is unable to address roads with tree or building shadows,
gaps, etc.

Rajani Mangala et al. [13] (2011) proposed a work for road
extraction based on the gradient operation and skeletal ray
formation, comparing it with an ANN-based road extraction
method. The method of this paper is applied on rural roads.
Color-to-binary image conversion is performed, and the gradient
is operated twice to remove noise. Thresholding is also used,
after which midpoints are found and connected. Finally, coloring
and morphological operations are performed. This technique
provided 72%, 82% and 81% completeness, correctness and
quality, respectively.

5. CONCLUSION

This review paper throws light on a number of existing
techniques used in road extraction. Automatic detection of roads
from an image presents many challenges. The color of rooftops
is analogous to that of roads, and shadows of trees on the road
result in improper extraction of roads. There is a chance of false
negatives (i.e., not detecting a road even though it actually is a
road) due to these problems. Also, most of the techniques are not
applied to both rural and urban road images. In this paper, some
existing methods are reviewed to give a complete view of the
field. On the basis of this work, researchers can get an idea of
automated road extraction and can develop more effective
methods for it.

REFERENCES

[1] Liu Xu, Tao Jun, Yu Xiang, Cheng JianJie, and Guo LiQian,
"The Rapid Method for Road Extraction from High-Resolution
Satellite Images Based on USM Algorithm."
[2] Xufeng Guo, David Dean, Simon Denman, Clinton Fookes,
Sridha Sridharan, "Evaluating Automatic Road Detection Across
a Large Aerial Imagery Collection," 2011 IEEE International
Conference on Digital Image Computing: Techniques and
Applications.
[3] Yan Li and Ronald Briggs, "Automatic Extraction of Roads
from High Resolution Aerial and Satellite Images with Heavy
Noise."
[4] Juan Wang, Chunzhi Shan, "Extract Different Types of
Roads Based on Mathematical Morphology," 2012 5th
International Congress on Image and Signal Processing (CISP
2012).
[5] Beril Sırmacek and Cem Ünsalan, "Road Network
Extraction using Edge Detection and Spatial Voting," 2010 IEEE
International Conference on Pattern Recognition.
[6] Shengyan Zhou, Jianwei Gong, Guangming Xiong, Huiyan
Chen and Karl Iagnemma, "Road Detection Using Support
Vector Machine based on Online Learning and Evaluation,"
2010 IEEE Intelligent Vehicles Symposium, University of
California, San Diego, CA, USA, June 21-24, 2010.
[7] Sahar Movaghati, Alireza Moghaddamjoo, Senior Member,
operations are performed to extract the road. The proposed work IEEE, and Ahad Tavakoli,” Road Extraction From Satellite

36
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Images Using Particle Filtering and Extended Kalman Filtering” [13] T Rajani Mangala, S G Bhirud “A New Automatic Road
IEEE Transactions On Geoscience And Remote Sensing, vol. 48, Extraction Technique using Gradient Operation and Skeletal
no. 7, July 2010 Ray Formation”, International Journal of Computer
Applications (0975 – 8887) Volume 29– No.1, September 2011.
[8] Xiangyun Hu, Yijing Li, Jie Shan, Member, IEEE, Jianqing
Zhang, and Yongjun Zhang” Road Centerline Extraction in
Complex Urban Scenes From LiDAR Data Based on Multiple
Features”, IEEE Transactions On Geoscience And Remote
Sensing, vol. 52, no. 11, November 2014

[9] Fatemeh Mazrouei Sebdani, Hossein Pourghassem” A


Robust and Real-time Road Line Extraction Algorithm Using
Hough Transform in Intelligent Transportation System
Application” 2012 IEEE.

[10] Hao Chen, Lili Yin, Li Ma ”Research on Road Information


Extraction from High Resolution Imagery Based on Global
Precedence” 2014 Third International Workshop on Earth
Observation and Remote Sensing Applications.

[11] Jiuxiang Hu, Anshuman Razdan, John C. Femiani, Ming


Cui, and Peter Wonka, “Road Network Extraction and
Intersection Detection From Aerial Images by Tracking Road
Footprints” IEEE transactions On Geoscience and Remote
Sensing, vol. 45, no. 12, December 2007.

[12] Mingjun Song and Daniel Civco” Road Extraction Using


SVM and Image Segmentation” Photogrammetric Engineering
& Remote Sensing Vol. 70, No. 12, December 2004, pp. 1365–
1371.
Design of Microstrip Patch Antenna by Introducing Defected Ground Structure

Harpreet Kaur, ECE Department, BGIET, Sangrur, India (er.kaur92@gmail.com)
Monika Aggarwal, ECE Department, BGIET, Sangrur, India (monikaaggarwal176@gmail.com)
ABSTRACT

This paper proposes a novel inset feed microstrip patch antenna with Z and F shaped defects in the ground plane. Initially, a simple inset feed rectangular patch antenna is designed and its results are analyzed. This design is then modified by etching Z and F shaped defects on the ground plane. The Defected Ground Structure is studied to enhance the performance parameters of the microstrip patch antenna. The proposed antenna provides wide bandwidth and reduced return loss, and covers the WLAN 5.2 GHz (5.15-5.35 GHz) band. The performance parameters return loss, bandwidth, gain, directivity, and voltage standing wave ratio (VSWR) have been analyzed for the proposed multiband microstrip patch antenna using the finite element method based High Frequency Structure Simulator (HFSS) software.

Keywords
DGS, HFSS Software, Microstrip Patch antenna, Return loss, VSWR

1. INTRODUCTION
Wireless communication is the fastest growing field in the communication industry, and the antenna is an important element in a wireless system. Handheld devices in wireless communication raise the demand for compact and multiband antennas. The microstrip antenna, because of its small size, is employed in wireless communication. The microstrip patch antenna is widely used because of its numerous advantages such as low profile, low cost and ease of fabrication [1]. Narrow bandwidth and low gain are its main limitations [2]. In the literature, various techniques have been studied to improve the bandwidth of the microstrip patch antenna, such as increasing the substrate thickness, using low dielectric substrates, stacked geometry, shorting pins, cutting slots and slits in the radiating patch, and embedding slots in the ground plane. Rectangular and circular patches are preferred because of their ease of analysis. The Defected Ground Structure has been studied to improve the basic characteristics of the conventional microstrip patch antenna [9]. An antenna with multiband characteristics and high bandwidth is required for wireless applications. Different feeding techniques are available to excite a microstrip patch antenna, and in wireless communication an antenna covering more than one frequency band is required to support more applications with a single antenna. Techniques to achieve multiband operation include probe compensation (L-shaped probe), parasitic patches, direct-coupled patches, slot and slit loaded patches (U or V shaped slots, E patch, U-shaped slit), stacked patches, patches with parasitic strips, and the use of Electromagnetic Band Gap (EBG) structures [9]. Most of these methods require complex feeding techniques and complex structures such as additional layers and parasitic elements. In order to avoid structural complexity, an inset feed rectangular microstrip patch antenna is designed by embedding slots in the ground plane to obtain multiband characteristics.

Fig 1: Basic Microstrip Patch Antenna

2. ANTENNA DESIGN
Three main inputs for the design of a microstrip antenna are the resonant frequency, the dielectric material and the substrate height. The proposed antenna resonates at 5.2 GHz. FR4 epoxy, having dielectric constant 4.4 and loss tangent 0.02, is used as the substrate; the high dielectric constant is used for size reduction. Width and length are calculated using the transmission line model equations.

The width of the patch is calculated by the formula given below:

W = (c / (2 fr)) * sqrt(2 / (ϵr + 1))

With c = 3*10^8 m/s, dielectric constant ϵr = 4.4 and resonant frequency fr = 5.2 GHz, the calculated patch width is W = 17.55 mm.

Effective dielectric constant calculation (ϵreff):

ϵreff = (ϵr + 1)/2 + ((ϵr - 1)/2) * (1 + 12 h/W)^(-1/2)

Substituting ϵr = 4.4, h = 1.5748 mm and W = 17.55 mm gives the effective dielectric constant ϵreff = 3.879.

Length extension calculation:
ΔL = 0.412 h * ((ϵreff + 0.3)(W/h + 0.264)) / ((ϵreff - 0.258)(W/h + 0.8))

Putting ϵreff = 3.879, h = 1.5748 mm and W = 17.55 mm, the obtained length extension is ΔL = 0.718 mm.

Effective length calculation:

Leff = c / (2 fr sqrt(ϵreff))

With c = 3*10^8 m/s and fr = 5.2 GHz, Leff = 14.64 mm.

Actual length calculation:

L = Leff - 2ΔL

With Leff = 14.64 mm and ΔL = 0.718 mm, L = 13.20 mm.

Ground plane dimensions are calculated as follows. The transmission line model assumes an infinite ground plane, but practically the ground plane must be finite. If the ground plane is larger than the patch dimensions by six times the substrate height, results similar to those of an infinite ground plane are obtained. Therefore the width and length of the ground plane are given by

Wg = 6h + W
Lg = 6h + L

3. SIMULATION RESULTS

3.1 Inset feed Simple Rectangular Microstrip Patch Antenna
The inset feed simple rectangular patch antenna is designed with the specifications discussed above, but the patch length is reduced to adjust the frequency. The designed antenna has patch length L = 12.95 mm and width W = 17.55 mm. The ground plane dimensions are Wg = 27 mm and Lg = 22.88 mm.

Fig 2: Geometry of Simple Rectangular Microstrip Patch Antenna (patch 17.55 mm x 12.95 mm on a 27 mm x 22.88 mm ground plane)

Fig 3: Return Loss of Simple Rectangular Microstrip Patch Antenna (markers: m1 5.2312 GHz, -30.8215 dB; m2 5.1095 GHz, -10.0008 dB; m3 5.3507 GHz, -10.0001 dB)

The return loss versus frequency graph is shown in figure 3. The antenna resonates at 5.23 GHz with a return loss of -30.82 dB. The bandwidth of the antenna can be calculated from the return loss graph: it is observed from figure 3 that the bandwidth at 5.23 GHz is 241 MHz (5.1095-5.3507 GHz), which covers the WLAN frequency range 5.15-5.35 GHz.
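The hand calculations in section 2 can be reproduced numerically. The sketch below (plain Python; the variable names are ours, not from the paper) evaluates the transmission-line-model equations quoted above:

```python
import math

# Design inputs from this section: fr = 5.2 GHz, FR4 with eps_r = 4.4,
# substrate height h = 1.5748 mm.
c = 3e8                       # speed of light, m/s
fr = 5.2e9                    # resonant frequency, Hz
eps_r = 4.4                   # dielectric constant of FR4
h = 1.5748e-3                 # substrate height, m

W = c / (2 * fr) * math.sqrt(2 / (eps_r + 1))             # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
     ((eps_eff - 0.258) * (W / h + 0.8))                  # length extension
L_eff = c / (2 * fr * math.sqrt(eps_eff))                 # effective length
L = L_eff - 2 * dL                                        # physical length
Wg, Lg = 6 * h + W, 6 * h + L                             # finite ground plane
```

Running this reproduces the paper's values to within rounding: W near 17.55 mm, ϵreff near 3.879, L near 13.2 mm and Wg near 27 mm.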
Fig 4: 3D Polar Plot for Gain of Simple Rectangular Microstrip Patch Antenna

Figures 4 and 5 show the 3D polar plots for the gain of 5.04 dB and the directivity of 6.62 dB at the resonant frequency of 5.23 GHz.

Fig 6: VSWR of Simple Rectangular Microstrip Patch Antenna (markers: m1 5.2312 GHz, 1.0592; m2 5.1106 GHz, 1.9156; m3 5.3518 GHz, 1.9362)

The VSWR plot for the inset feed simple rectangular microstrip patch antenna is shown in figure 6. The value of the VSWR is 1.05 at 5.23 GHz.
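Return loss and VSWR report the same reflection coefficient, so the VSWR markers can be cross-checked against the return-loss dips with the standard conversion (our own check, not part of the paper's HFSS workflow):

```python
def vswr_from_return_loss(rl_db):
    """Convert a return-loss magnitude in dB to VSWR:
    |Gamma| = 10**(-RL/20),  VSWR = (1 + |Gamma|) / (1 - |Gamma|)."""
    gamma = 10 ** (-rl_db / 20)
    return (1 + gamma) / (1 - gamma)
```

The -30.82 dB dip at 5.23 GHz corresponds to the VSWR marker of about 1.06, and the -10 dB band edges correspond to VSWR of about 1.92, matching the marker values read off figures 3 and 6.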

Fig 5: 3D Polar Plot for Directivity of Simple Rectangular Microstrip Patch Antenna

3.2 Z and F shaped Defected Ground Structure
In the proposed design, Z and F shaped defects are etched on the ground plane to achieve multiband characteristics. The geometry of the proposed design is shown in figure 7. In DGS, the defect on the ground plane is etched intentionally; the size and shape of the defect can be varied according to the design requirement.

Fig 7: Z and F slot DGS antenna (ground plane 27 mm x 22.88 mm, patch 17.55 mm x 12.95 mm, slot width 1 mm)

Figure 8 depicts the return loss of the designed antenna with the DGS structure. It is observed from the return loss versus
frequency graph that the designed antenna resonates at multiple frequencies after etching the Z and F shaped slots on the ground plane. The designed antenna resonates at 5.24 GHz, 5.60 GHz, 10.44 GHz and 11.52 GHz with return losses of -40.16 dB, -23.57 dB, -43.75 dB and -20.50 dB respectively, as shown in figure 8. The antenna has a return loss of -40.16 dB at the fundamental frequency of 5.24 GHz. The proposed antenna resonates at multiple frequencies and has a large bandwidth of 555 MHz (5.1067-5.6624 GHz) at the resonant frequency of 5.2 GHz, as shown in figure 8. The percentage bandwidth of the antenna is 10.32%. It also has 1.696 GHz (10.1036-11.80 GHz) of bandwidth at 10.39 GHz.
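The 10.32% figure quoted above follows from the -10 dB band edges of figure 8, taking the fractional bandwidth relative to the band's centre frequency (a quick check of ours, not part of the paper's workflow):

```python
f_low, f_high = 5.1067, 5.6624     # -10 dB band edges in GHz (from Fig 8)
bw = f_high - f_low                # absolute bandwidth, GHz
f_c = (f_low + f_high) / 2         # band centre frequency, GHz
percent_bw = 100 * bw / f_c        # fractional bandwidth in percent
```

This yields a bandwidth of about 556 MHz and a percentage bandwidth of about 10.32%, consistent with the values quoted in the text.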
Fig 8: Return loss of Z and F slot DGS Antenna (markers: m1 5.2422 GHz, -40.1628 dB; m2 5.6040 GHz, -23.5708 dB; m3 5.1067 GHz, -10.0000 dB; m4 5.6624 GHz, -10.0021 dB; m5 10.4432 GHz, -43.7501 dB; m6 11.5286 GHz, -20.5096 dB; m7 10.1036 GHz, -10.0003 dB; m8 11.8000 GHz, -10.1553 dB)

Fig 9: 3D Polar plot for gain of Z and F slot DGS Antenna

Fig 10: 3D Polar plot for directivity of DGS Antenna

Figure 9 shows the 3D polar plot for the gain at 5.24 GHz. The designed antenna has 4.91 dB gain, and 6.45 dB directivity at 5.24 GHz as shown in figure 10.

Figure 11 shows the VSWR (voltage standing wave ratio) of the proposed multiband antenna. The VSWR indicates the impedance matching of the antenna, and its value should lie between 1 and 2. The minimum VSWR achieved is 1.01 at 5.24 GHz, 1.14 at 5.60 GHz, 1.01 at 10.44 GHz and 1.20 at 11.52 GHz.

Fig 11: VSWR Plot of Z and F slot DGS Antenna (markers: m1 5.2422 GHz, 1.0198; m2 5.6040 GHz, 1.1420; m3 10.4432 GHz, 1.0131; m4 11.5286 GHz, 1.2082; m5 11.8000 GHz, 1.9012)

4. Conclusion
The proposed antenna is designed by etching slots in the ground plane. Better return loss, directivity and VSWR are achieved because of the Defected Ground Structure. The antenna has a wide bandwidth at 5.24 GHz (5.1067-5.6624 GHz), which is in the WLAN frequency band. The designed antenna resonates at four different frequencies. It covers the C band and X band and
can be used for satellite and radar communication. The designed antenna is better in terms of bandwidth and return loss: it has a minimum return loss of -40.74 dB and a bandwidth of 555 MHz (5.1067-5.6624 GHz) at 5.24 GHz.

REFERENCES

[1] Gajera, H.R., and Anoop, C.N., 2011, "The study on Bandwidth enhancement of Rectangular Microstrip Patch Antenna for Wireless Application", International Journal of Electronics & Communication Technology, Vol. 2, Issue 4, ISSN: 2230-7109 (Online), ISSN: 2230-9543 (Print), pp. 171-174.

[2] Kaur, J., and Khanna, R., 2013, "Co-axial Fed Rectangular Microstrip Patch Antenna for 5.2 GHz WLAN Application", Universal Journal of Electrical and Electronic Engineering, pp. 94-98.

[3] Kumar, G., and Ray, K.P., "Broadband microstrip antennas", Artech House.

[4] Matin, M.A., and Sayeed, A.I., 2010, "A design rule for inset-fed rectangular microstrip patch antenna", WSEAS Transactions on Communications, Vol. 9, Issue 1, ISSN: 1109-2742.

[5] Nawale, P.A., and Zope, R.G., 2014, "Design and Improvement of Microstrip Patch Antenna Parameters Using Defected Ground Structure", International Journal of Engineering Research and Applications, Vol. 4, Issue 6, ISSN: 2248-9622, pp. 123-129.

[6] Patel, J.M., Patel, S.K., and Thakkar, F.N., 2013, "Defected Ground Structure Multiband Microstrip Patch Antenna using Complementary Split Ring Resonator", International Journal of Emerging Trends in Electrical and Electronics, Vol. 3, Issue 2, ISSN: 2320-9569, pp. 14-19.

[8] Singh, G., and Kaur, J., 2014, "Design a multiband Rectangular ring antenna Using DGS for WLAN, WiMAX Applications", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 3, Issue 6, ISSN (Online): 2278-1021.

[9] Weng, L.H., Guo, Y.C., Shi, X.W., and Chen, X.Q., 2008, "An Overview On Defected Ground Structure", Progress in Electromagnetics Research B, Vol. 7, pp. 173–189.
EDGE DETECTION OF VIDEO USING ADAPTIVE EDGE DETECTOR OPERATOR

Pankaj Sharma, Research Scholar ECE, B.G.I.E.T Sangrur (pankx001@gmail.com)
Jatinder Sharma, Research Scholar ECE, B.G.I.E.T Sangrur
ABSTRACT

This paper presents the implementation of adaptive edge detection using Simulink, a simulation, modeling and design tool with a GUI based diagram environment. A Simulink based customizable framework is designed for rapid simulation, implementation, and verification of video and image processing algorithms and systems. The edge detection process detects outlines of an object, scene text, and boundaries between objects and the background in the video image. Since edge detection is at the forefront of video processing for object detection, it is crucial to have a good understanding of edge detection methods. In this paper, a comparative analysis of various video image edge detection methods, namely Sobel, Prewitt and Canny, is presented.

KEYWORDS

Canny operator, thresholding, edge detection.

1. INTRODUCTION

1.1. EDGE DETECTION

Unlike the real world, images do not have edges. An edge is a sharp change in intensity of an image. But since the overall goal is to locate edges in the real world via an image, the term edge detection is commonly used. An edge is not a physical entity, just like a shadow. It is where the picture ends and the wall starts, where the vertical and the horizontal surfaces of an object meet. If there were sensors with infinitely small footprints and zero-width point spread functions, an edge would be recorded between pixels within an image. In reality, what appears to be an edge from a distance may contain other edges when looked at close up. The edge between a forest and a road in an aerial photo may not look like an edge any more in an image taken on the ground, where edges may be found around each individual tree; a few inches away from a tree, edges may be found within the texture of its bark. Edges are scale dependent and an edge may contain other edges, but at a certain scale an edge still has no width. If the edges in an image are identified accurately, all the objects are located and their basic properties such as area, perimeter and shape can be measured. Therefore edges are used for boundary estimation and segmentation of the scene. Edge detection is a basic tool in image processing, used mainly for feature detection and extraction; it aims to identify points in a digital image where the brightness changes sharply, i.e. to find discontinuities. The purpose of edge detection is to significantly reduce the amount of data in an image while preserving the structural properties for further image processing. For a noisy image it is difficult to detect edges, as both edges and noise contain high-frequency content, which results in blurred and distorted output. Edge detection is the process of localizing pixel intensity transitions and has been used for object recognition, target tracking, segmentation, and so on; it is therefore one of the most important parts of image processing. There are many edge detection operators available, each designed to be sensitive to certain types of edges; criteria involved in the selection of an edge detection operator include edge orientation. Several edge detection methods exist, notably Sobel, Prewitt and Canny, which have been proposed for detecting transitions in video images. Edge detection is commonly implemented in image segmentation, registration and identification. The concept of the edge is the most fundamental feature of an image, because the edge contains valuable information about the internal objects inside the image. Hence, edge detection is one of the key research areas in image processing and a very important step towards understanding image features. Other image processing applications such as segmentation, identification, and object recognition can take place once the edges of an object are detected.

1.2. TYPES OF EDGES

Edges are classified according to their behaviour: Sharp Step, Gradual Step, Roof, and Trough.

Fig: (a) Sharp step (b) Gradual step (c) Roof (d) Trough
A Sharp Step, as shown in Fig. (a), is an idealization of an edge. Since an image is always band limited, this type of graph cannot actually occur. A Gradual Step, as shown in Fig. (b), is very similar to a Sharp Step, but smoothed out; the change in intensity is not as quick or sharp. A Roof, as shown in Fig. (c), is different from the first two edges: the derivative of this edge is discontinuous. A Roof can have a variety of sharpnesses, widths, and spatial extents. The Trough, shown in Fig. (d), is the inverse of a Roof. Edge detection is very useful in a number of contexts. Edges characterize object boundaries and are therefore useful for segmentation, registration, and identification of objects in scenes.

1.3. CRITERIA FOR EDGE DETECTION

The quality of edge detection can be measured objectively against several criteria. Some criteria are proposed in terms of mathematical measurement, others are based on application and implementation requirements. In all cases a quantitative evaluation of performance requires images where the true edges are known.

1) Good detection: There should be a minimum number of false edges. Usually, edges are detected after a threshold operation. A high threshold leads to fewer false edges, but it also reduces the number of true edges detected.

2) Noise sensitivity: A robust algorithm can detect edges in certain acceptable noise (Gaussian, uniform and impulsive noise) environments. In practice, an edge detector detects and also amplifies the noise simultaneously. Strategic filtering, consistency checking and post-processing (such as non-maximum suppression) can be used to reduce noise sensitivity.

3) Good localization: The edge location must be reported as close as possible to the correct position, i.e. edge localization accuracy (ELA).

4) Orientation sensitivity: The operator should detect not only the edge magnitude but also the edge orientation correctly. Orientation can be used in post-processing to connect edge segments, reject noise and suppress non-maximum edge magnitudes.

1.4. EDGE DETECTION PRELIMINARIES

Certain types of edge variables are involved in choosing a sensitive edge detector:

- Edge Orientation: the geometry of the operator determines a characteristic direction in which it is most sensitive to edges. An operator can be optimized to look for horizontal, vertical or diagonal edges.

- Noise Environment: edge detection is different in noisy images. Since both noise and edges contain high-frequency content, attempts to reduce the noise result in blurred and distorted edges. Operators used on noisy images are typically larger in scope, so they can average enough data to discount localized noisy pixels. This results in less accurate localization of the detected edges.

- Edge Structure: not all edges involve a step change in intensity; effects such as refraction or poor focus can result in objects with boundaries defined by a gradual change in intensity. The operator needs to be responsive to such gradual change, so that we do not have problems of false edge detection, missing true edges, poor edge localization, and high computational time.

Edge detection is one of the most frequently used techniques in digital image processing. The boundaries of object surfaces in a scene often lead to oriented, localized changes in the intensity of an image, called edges. Edge detection is a difficult task, hence the motivation for comparing various edge detection techniques and analyzing their performance under different conditions.

2. EDGE DETECTION TECHNIQUES

a) Sobel Operator:

The operator consists of a pair of 3x3 convolution kernels, one kernel being the other rotated by 90°. The Sobel operator is a discrete differentiation operator used to compute an approximation of the gradient of the image intensity function for edge detection. At each pixel of an image, the Sobel operator gives either the corresponding gradient vector or the normal to the vector. It convolves the input image with the kernels and computes the gradient magnitude and direction. It uses the following two 3x3 kernels:

Gx = [-1 0 +1; -2 0 +2; -1 0 +1]    Gy = [+1 +2 +1; 0 0 0; -1 -2 -1]

These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image, to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined to find the absolute magnitude of the gradient at each point and the orientation of that gradient. The gradient magnitude is given by:

|G| = sqrt(Gx^2 + Gy^2)

Typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy|

where Gx and Gy are the gradients in the horizontal and vertical directions.
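To make the operator concrete, the following sketch (pure Python; the helper name and the toy test image are our own, not from the paper) applies the two kernels to a small grayscale image containing a vertical step edge and reports the approximate magnitude |Gx| + |Gy|:

```python
# 3x3 Sobel kernels for the horizontal (Gx) and vertical (Gy) gradients.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]

def sobel(img):
    """Approximate gradient magnitude |Gx| + |Gy| for the interior
    pixels of a 2-D grayscale image given as a list of lists."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # Slide each kernel over the 3x3 neighbourhood (correlation
            # form; the kernels are antisymmetric, so only the sign
            # differs from true convolution).
            gx = sum(KX[a][b] * img[i + a - 1][j + b - 1]
                     for a in range(3) for b in range(3))
            gy = sum(KY[a][b] * img[i + a - 1][j + b - 1]
                     for a in range(3) for b in range(3))
            out[i][j] = abs(gx) + abs(gy)
    return out

# A vertical step edge: dark left half, bright right half.
step = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
mag = sobel(step)
```

On the step image the response peaks (1020 = 4 x 255) in the two columns adjacent to the step and is zero over the flat regions, which is exactly the "respond maximally to edges running vertically" behaviour described above.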
b) Prewitt Edge Detector:

The Prewitt operator is similar to the Sobel operator and is used for detecting vertical and horizontal edges in images. The Prewitt edge detector is an appropriate way to estimate the magnitude and orientation of an edge. The Prewitt operator is limited to eight possible orientations, although most direct orientation estimates are not exactly accurate. The operator is estimated in a 3x3 neighborhood for eight directions: all eight masks are calculated, and the one with the largest modulus is selected.

Fig: Prewitt Edge detection operator

c) Canny Edge Detector:

The Canny edge detector is known to many as the optimal edge detector. It first smoothes the image to eliminate noise, then finds the image gradient to highlight regions with high spatial derivatives. The algorithm then tracks along these regions and suppresses any pixel that is not at the maximum. To implement the Canny edge detector algorithm, a series of steps must be followed. The first step is to filter out any noise in the original image before trying to locate and detect any edges; because the Gaussian filter can be computed using a simple mask, it is used exclusively in the Canny algorithm. Once a suitable mask has been calculated, Gaussian smoothing can be performed using standard convolution methods. A convolution mask is usually much smaller than the actual image; the mask is slid over the image, manipulating a square of pixels at a time. The larger the width of the Gaussian mask, the lower the detector's sensitivity to noise. The localization error in the detected edges also increases slightly as the Gaussian width is increased. After smoothing the image and eliminating the noise, the next step is to find the edge strength by taking the gradient of the image. The Sobel operator performs a 2-D spatial gradient measurement on an image, from which the approximate absolute gradient magnitude (edge strength) at each point can be found. The Sobel operator uses a pair of 3x3 convolution masks, one estimating the gradient in the x-direction (columns) and the other estimating the gradient in the y-direction (rows), as given in section a).

Fig: Canny Edge Detector

The Canny method finds edges by looking for local maxima of the gradient of I. The gradient is calculated using the derivative of a Gaussian filter. The method uses two thresholds to detect strong and weak edges, and includes the weak edges in the output only if they are connected to strong edges. This method is therefore less likely than the others to be fooled by noise, and more likely to detect true weak edges.

3. WORKING PARAMETERS FOR EDGE DETECTION

a) Sources/From Multimedia File: on Windows, reads video frames and/or audio samples from a compressed or uncompressed multimedia file. Multimedia files can contain audio, video, or audio and video data.

b) Analysis & Enhancement/Edge Detection: finds the edges in an input image using the Sobel, Prewitt, or Canny method.

c) Sinks/To Video Display: displays a video stream.

d) Sinks/Frame Rate Display: calculates and displays the frame rate of the input signal.
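The contrast between the Prewitt operator of section b) and the Sobel operator can be made explicit. The sketch below (our own illustration, not code from the paper) lists the Prewitt kernel pair, whose only difference from Sobel is the unit centre weight, and estimates the gradient orientation with atan2:

```python
import math

# Prewitt kernels: same pattern as Sobel but with uniform (unit) weights.
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[1, 1, 1], [0, 0, 0], [-1, -1, -1]]

def gradient_at(img, i, j, kx=PREWITT_X, ky=PREWITT_Y):
    """Gradient components and orientation (radians) at pixel (i, j)."""
    gx = sum(kx[a][b] * img[i + a - 1][j + b - 1]
             for a in range(3) for b in range(3))
    gy = sum(ky[a][b] * img[i + a - 1][j + b - 1]
             for a in range(3) for b in range(3))
    return gx, gy, math.atan2(gy, gx)

# Vertical step edge: the response is purely horizontal (gy = 0) and
# smaller than Sobel's (765 vs 1020) because of the unit centre weight.
step = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
gx, gy, theta = gradient_at(step, 2, 2)
```

The zero orientation angle (gradient pointing along +x) is what allows the post-processing steps mentioned in section 1.3 to connect edge segments along the perpendicular edge direction.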
Fig: Blockset parameters.

3.1. Flow chart of Sobel and Prewitt operators:

Prewitt operators are simpler to operate than the Sobel operator, but more sensitive to noise in comparison with it.

3.2. Flow chart of Canny edge detector:

3.3. Canny edge detection algorithm

STEP I: Noise reduction by smoothing

Noise contained in the image is smoothed by convolving the input image I(i, j) with a Gaussian filter G. Mathematically, the smoothed resultant image is given by

F(i, j) = G(i, j) * I(i, j)

where * denotes convolution.

STEP II: Finding gradients

In this step we detect the edges where the change in grayscale intensity is maximum. The required areas are determined with the help of the image gradient. The Sobel operator is used to determine the gradient at each pixel of the smoothed image. The Sobel operators in the i and j directions are:

Mi = [-1 0 +1; -2 0 +2; -1 0 +1]    Mj = [+1 +2 +1; 0 0 0; -1 -2 -1]
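STEP I above can be sketched as follows; the 3x3 mask size and sigma = 1.0 are illustrative assumptions of ours, since the paper does not fix them:

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Normalised 2-D Gaussian mask, G(x, y) ~ exp(-(x^2 + y^2) / (2*sigma^2))."""
    k = size // 2
    g = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-k, k + 1)] for y in range(-k, k + 1)]
    s = sum(v for row in g for v in row)
    return [[v / s for v in row] for row in g]

def smooth(img, mask):
    """Convolve the interior of a 2-D image with the (symmetric) mask."""
    h, w, k = len(img), len(img[0]), len(mask) // 2
    out = [row[:] for row in img]            # borders left untouched
    for i in range(k, h - k):
        for j in range(k, w - k):
            out[i][j] = sum(mask[a][b] * img[i + a - k][j + b - k]
                            for a in range(len(mask))
                            for b in range(len(mask)))
    return out
```

Because the mask is normalised to sum to one, smoothing a constant region leaves it unchanged, while isolated noisy pixels are averaged down before the STEP II gradients are taken.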
These Sobel masks are convolved with the smoothed image, giving the gradients Gi and Gj in the i and j directions.

STEP III: Non-maximum suppression

Non-maximum suppression is carried out to preserve all local maxima in the gradient image and delete everything else, which results in thin edges. For a pixel M(i, j):

- First round the gradient direction to the nearest 45°, then compare the gradient magnitudes of the pixels in the positive and negative gradient directions; i.e. if the gradient direction is east, compare with the gradients of the pixels in the east and west directions, say E(i, j) and W(i, j) respectively.

- If the edge strength of pixel M(i, j) is larger than that of E(i, j) and W(i, j), preserve the value of the gradient and mark M(i, j) as an edge pixel; if not, suppress it.

STEP IV: Hysteresis thresholding

The output of non-maximum suppression still contains local maxima created by noise. Instead of choosing a single threshold, two thresholds t_low and t_high are used to avoid the problem of streaking. For a pixel M(i, j) with gradient magnitude G, the following conditions detect the pixel as an edge:

- If G < t_low, discard the edge.
- If G > t_high, keep the edge.
- If t_low < G < t_high, keep the edge if any of its neighbors in a 3x3 region around it has a gradient magnitude greater than t_high.

4. Edge Detection Block Properties:

Table

4.1. Frame Rate Display Block Properties:

Table 2

Table 3
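STEP IV above maps directly to code. The sketch below (pure Python; the demo gradient map and the threshold values are illustrative choices of ours) implements exactly the three conditions listed:

```python
def hysteresis(grad, t_low, t_high):
    """Label each pixel of a 2-D gradient-magnitude map as edge (1) or
    non-edge (0): pixels above t_high are strong edges; pixels between
    t_low and t_high are kept only if some neighbour in their 3x3
    region is strong; pixels below t_low are discarded."""
    h, w = len(grad), len(grad[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            g = grad[i][j]
            if g > t_high:
                out[i][j] = 1
            elif g > t_low:
                if any(grad[a][b] > t_high
                       for a in range(max(0, i - 1), min(h, i + 2))
                       for b in range(max(0, j - 1), min(w, j + 2))):
                    out[i][j] = 1
    return out

# A weak response (60) survives only where it touches a strong one (120).
edges = hysteresis([[0, 60, 120],
                    [0, 60, 0],
                    [60, 0, 0]], t_low=50, t_high=100)
```

This is the single-pass neighbourhood rule stated in STEP IV; it is what lets Canny keep true weak edges that are connected to strong ones while rejecting isolated noise responses.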
4.2. From Multimedia File Block Properties:

Table 4

4.3. Video Viewer Block Properties:

Table 5

4.4. Block Type Count:

5. RESULT AND CONCLUSION

Fig: After detection of edges by Canny, Sobel and Prewitt.

The implementation of this methodology using the Simulink blockset is useful for object edge detection: it detects the outlines of an object, scene text and boundaries, and so supports identification of objects and recognition of scene text. The performance of edge detection in a video image is evaluated both subjectively and objectively. The subjective evaluation of the edge-detected video images shows that the Sobel, Prewitt and Canny operators exhibit progressively better performance. From the experimental results we find that the Canny detector detects weak as well as strong edges evenly and hence enhances the edge detection, which is not the case for Sobel and Prewitt.
REFERENCES

[1] Rashmi, Mukesh Kumar, and Rohini Saxena, "Algorithm and Technique on Various Edge Detection: A Survey", Signal & Image Processing: An International Journal (SIPIJ), Vol. 4, No. 3, June 2013, DOI: 10.5121/sipij.2013.4306, Department of Electronics and Communication Engg., SBBSIET Jalandhar, India.

[2] Ireyuwa E. Igbinosa, "Comparison of Edge Detection Technique in Image Processing Techniques", International Journal of Information Technology and Electrical Engineering, ISSN 2306-708X, Volume 2, Issue 1, February 2013.

[3] "Comparison of Various Edge Detection Techniques in Tree Ring Structure", International Journal of Computer Applications (0975–8887), Volume 90, No. 19, March 2014.

[4] Harsimran Singh and Tajinder Kaur, "Empirical Study of Various Edge Detection Techniques for Gray Scale Images".


Design of 3-Side Truncated Patch Antenna with Semi-Circular Open Slot for UWB Applications

Amrik Singh, M.Tech Student, ECE Department, BGIET, Sangrur, Amrik3011@gmail.com
Sushil Kakkar, Assistant Professor, ECE Department, BGIET, Sangrur, kakkar778@gmail.com
Shweta Rani, Associate Professor, ECE Department, BGIET, Sangrur, shwetaranee@gmail.com

ABSTRACT

A 3-sided truncated microstrip patch antenna with semicircular open slot
for ultra-wideband (UWB) and SHF (Super High Frequency) applications
has been presented in this paper. The proposed antenna is compact in
size and designed on an FR4 substrate. From the simulation and
measurement results, it is shown that the corner-truncated patch scheme
with semicircular open slot is an excellent approach, which can be used
to make the proposed antenna match well over an enhanced impedance
bandwidth of 12.13 GHz (2.26~14.39 GHz) for a -10 dB return loss. In
order to validate the antenna performance, simulated results have been
reported using the HFSS EM solver. The proposed antenna is feasible for
WLAN, WiMAX, Wi-Fi and various other wireless applications.

Keywords
Patch Antenna, Return Loss, UWB

1. INTRODUCTION
In the changing world scenario of wireless communication systems, a
wideband antenna has been playing a very important role for wireless
services. Because of their low profile, wide bandwidth, compact size,
low cost, and ease of fabrication, slot antennas are attractive
candidates for broadband and ultra-wideband (UWB) applications.
Enhancement of bandwidth by introducing slots in the radiating patch
[1, 3, 5, 6] is one of the best methods for enhancing the bandwidth of
a microstrip patch antenna. Feeding techniques can also make a
noticeable enhancement in bandwidth, one of which is the microstrip
patch antenna with edge feed technique [4]. The resonant behavior
analysis of small-size slot antennas with different substrates [2]
provides another method for improving the resonating behavior. These
slot antennas can achieve a good broadband characteristic. But along
with enhancement of bandwidth, another important factor has been the
reduction of antenna size. There is a huge range of UWB applications,
including wireless communication systems, radars, satellite
communication and more. In the presented work, a semicircular-slot
three-side truncated antenna has been employed for UWB and SHF
applications.

2. DESIGN AND STRUCTURE

Figure 1 shows the geometry of the three corner-truncated rectangular
patch antenna with open semicircular slot fabricated on the FR4
substrate. A dielectric constant of εr = 4.4, a loss tangent of 0.02
and a substrate thickness of h = 0.8 mm have been used to design the
microstrip patch antenna. The dimensional parameters of the proposed
antenna are detailed in Table 1. In this work a semicircular slot has
been introduced to further enhance the bandwidth of the antenna. On
the ground, a non-symmetric λ/4 L-shaped slot has been used for
producing miniaturization and a wide operating bandwidth. The
rectangular patch fed by a microstrip line has three truncated corners
with an open semicircular slot. The effect of different radii of the
semicircular slot has been analysed to get the best possible bandwidth
and gain. The proposed antenna is simulated using the Ansoft High
Frequency Structure Simulator (HFSS), which is full-wave
electromagnetic simulation software for microwave and millimeter-wave
integrated circuits. Ansoft HFSS employs the Finite Element Method
(FEM), adaptive meshing, and brilliant graphics to give unparalleled
performance and insight into all 3D EM problems [11, 12]. The 3D model
of the proposed antenna generated in HFSS is shown in Figure 2.

Figure 1: Geometry of proposed antenna showing patch and substrate

TABLE 1: ANTENNA DIMENSIONS

Parameters                   Dimensions
Length of patch (Lp)         6 mm
Width of patch (Wp)          9 mm
Thickness of substrate (h)   0.8 mm
Length of substrate (L)      35 mm
Width of substrate (W)       30 mm
Width of feed (x)            1.53 mm
Length of feed (Lf)          14.55 mm

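As a rough sanity check of the dimensions in Table 1, the standard transmission-line design equations for a conventional rectangular patch (Balanis [13]) can be evaluated for the FR4 parameters used here (εr = 4.4, h = 0.8 mm). This is only an illustrative Python sketch; the design frequency of 7 GHz is an assumption, and the proposed antenna's UWB behaviour comes from the slots and truncations rather than from these formulas:

```python
import math

def patch_dimensions(f0_hz, eps_r, h_m):
    """Conventional rectangular-patch design equations
    (transmission-line model, Balanis [13])."""
    c = 3e8                                            # speed of light, m/s
    W = c / (2 * f0_hz) * math.sqrt(2 / (eps_r + 1))   # patch width
    # effective permittivity accounting for fringing fields
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / W) ** -0.5
    # length extension due to fringing at the radiating edges
    dL = 0.412 * h_m * ((eps_eff + 0.3) * (W / h_m + 0.264)) / \
         ((eps_eff - 0.258) * (W / h_m + 0.8))
    L = c / (2 * f0_hz * math.sqrt(eps_eff)) - 2 * dL  # patch length
    return W, L

# FR4 parameters from Section 2 (eps_r = 4.4, h = 0.8 mm), f0 = 7 GHz assumed
W, L = patch_dimensions(7e9, 4.4, 0.8e-3)
print(f"W = {W * 1e3:.2f} mm, L = {L * 1e3:.2f} mm")
```

The dimensions this yields are of the same order as (though not equal to) the patch size in Table 1, which is expected: the slot loading and corner truncation shift the resonances well away from the conventional single-band design.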

Table 2: Bandwidth and Return Loss for Different Radii

Radius of slot   Bandwidth    Return Loss
2 mm             10.9 GHz     -24 dB at 3.1 GHz and -23.1 dB at 12 GHz
2.4 mm           11.56 GHz    -24 dB at 3 GHz and -45 dB at 12.50 GHz
2.6 mm           12.17 GHz    -29 dB at 2.90 GHz and -26 dB at 12.50 GHz
2.7 mm           12.12 GHz    -28 dB at 2.90 GHz and -21 dB at 12.60 GHz

Figure 2: 3D Ansoft HFSS generated model of proposed antenna

3. RESULTS AND DISCUSSION

3.1 Return Loss
Return loss is the loss of signal power resulting from the
reflection caused at a discontinuity in a transmission line.
This discontinuity can be a mismatch with the terminating load
or with a device inserted in the line. In the proposed work,
Figures 3, 4, 5 and 6 show the return loss graphs for the
circular slot of radius 2, 2.4, 2.6 and 2.7 mm respectively.
For different values of the slot radius the antenna shows
different bandwidth and gain, which are compared in Table 2.
The best results are evident at a radius of 2.6 mm, with a
bandwidth of 12.17 GHz.

Figure 4: Return loss for the antenna with circular slot of radius 2.4 mm
Figure 3: Return loss for the antenna with circular slot of radius 2 mm

Figure 5: Return loss for the antenna with circular slot of radius 2.6 mm

3.2. VSWR
VSWR stands for Voltage Standing Wave Ratio, and is also
referred to as Standing Wave Ratio (SWR). VSWR may be
expressed in terms of the reflection coefficient, which
describes the power reflected from the antenna. The VSWR is
always a real and positive number for antennas. The smaller
the VSWR, the better the antenna is matched to the
transmission line and the more power is delivered to the
antenna. The minimum VSWR is 1.0. Figure 7 shows the VSWR,
which is well below 2 across the operating band.
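The -10 dB return-loss criterion used for the impedance bandwidth corresponds to a VSWR just below 2, which can be verified directly (an illustrative sketch, not part of the HFSS workflow):

```python
def vswr_from_s11_db(s11_db):
    """VSWR implied by a return-loss value S11 (in dB, negative)."""
    gamma = 10 ** (s11_db / 20.0)   # |reflection coefficient|
    return (1 + gamma) / (1 - gamma)

# the -10 dB bandwidth criterion corresponds to VSWR just under 2:
print(round(vswr_from_s11_db(-10.0), 3))   # ≈ 1.925
```

This is why the VSWR curve in Figure 7 staying below 2 and the S11 curve staying below -10 dB delimit the same operating band.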


Figure 6: Return loss for the antenna with circular slot of radius 2.7 mm

Figure 7: VSWR for the three-side truncated patch with circular slot of radius 2.6 mm

3.3. H-Field Distribution
The H-plane is the plane containing the magnetic field vector
and the direction of maximum radiation. The magnetic field or
"H" plane lies perpendicular to the "E" plane. The H-plane
usually coincides with the horizontal/azimuth plane in the
case of a vertically polarized antenna, and with the
vertical/elevation plane in the case of a horizontally
polarized antenna [13, 14]. The H-field of the proposed
antenna is shown in Figure 8.

Figure 8: H-field distribution for the proposed antenna

3.4. E-Field Distribution
An electric field can be visualized by drawing field lines
which indicate the direction and magnitude of the field. The
E-plane contains the electric field vector and the direction
of maximum radiation [4]. Field lines start on positive charge
and end on negative charge. The E-field of the proposed
antenna is shown in Figure 9.

Figure 9: E-field distribution for the proposed antenna

4. CONCLUSION
A UWB antenna printed on an FR4 substrate has been described.
The simulation results of the antenna show that an enhanced
impedance bandwidth can be achieved by using an L-shaped slot
and three truncated corners. Apart from this, a semicircular
slot has been introduced on one of the truncated faces to
further enhance the operating bandwidth of the antenna. In
this work a comparison has been conducted for various values
of the semicircular slot radius, and the best bandwidth is
achieved at R = 2.6 mm. It is seen that the proposed antenna
achieves good performance and compact size, which well meets
the requirements of UWB and SHF applications.

5. REFERENCES
[1] K. Song, Y. Z. Yin, H. H. Xie and S. Zuo, "A Corner-
Truncated Patch Scheme of Bandwidth Enhancement for Open Slot
Antenna," Proceedings of International Symposium on Signals,
Systems and Electronics, 2010.
[2] S. Kakkar, S. Rani and A. P. Singh, "On the Resonant
Behavior Analysis of Small-Size Slot Antenna with Different
Substrates," International Journal of Computer Applications,
pp. 10-12, 2012.
[3] A. Agarwal, N. Naushad and P. K. Singhal, "Truncated Gap
Coupled Wideband Rectangular Microstrip Patch Antenna,"
International Journal of Computer Science and Communication
Engineering, Vol. 2, No. 1, 2013.
[4] A. Singh, R. Kumar and H. S. Dadhwak, "Design of Edge Fed
Rectangular Microstrip Patch Antenna for WLAN Applications,"
VSRD International Journal of Electrical, Electronics and
Communications Engineering, Vol. 2, No. 4, pp. 160-167, 2012.
[5] J. Singh, A. Singh, S. Kakkar, "Design of 3-side Truncated
Patch Antenna for UWB Applications," International Journal


on Recent and Innovation Trends in Computing and
Communication, Volume 2.
[6] Y. W. Jang, "Broadband cross-shaped microstrip-fed slot
antenna," IEEE Electronics Letters, Vol. 36, Issue 25, pp.
2056-2057, Dec 2000.
[7] S. Rani and A. P. Singh, "On the Design and Optimization
of New Fractal Antenna Using PSO," International Journal of
Electronics, Vol. 100, No. 10, pp. 1383-1397, 2012.
[8] S. I. Latif and L. Shafai, "Wideband and reduced size
microstrip slot antennas for wireless applications," IEEE
Antennas and Propagation Society Symposium, Vol. 2, pp.
1959-1962, June 2004.
[9] A. Dastranj, A. Imani, and M. N. Moghaddasi, "Printed
wide slot antenna for wideband applications," IEEE Trans. on
Antennas Propagation, Vol. 56, No. 10, pp. 3097-3102, October
2008.
[10] K. Song, Y.-Z. Yin, and L. Zhang, "A novel monopole
antenna with a self-similar slot for wideband applications,"
Microwave Optical Technology Letters, Vol. 52, No. 1, pp.
95-97, January 2010.
[11] C.-J. Wang and S.-W. Chang, "A technique of bandwidth
enhancement for the slot antenna," IEEE Trans. on Antennas
Propagation, Vol. 56, No. 10, pp. 3321-3324, October 2008.
[12] J.-Y. Jan and L.-C. Wang, "Printed wideband rhombus slot
antenna with a pair of parasitic strips for multiband
applications," IEEE Trans. on Antennas Propagation, Vol. 57,
No. 4, pp. 1267-1270, April 2009.
[13] Balanis, C. A., Antenna Theory: Analysis and Design,
John Wiley & Sons, Inc., USA, 2005.


GAIT RECOGNITION USING SVM AND LDA WITH PAL AND PAL ENTROPY IMAGE

Reecha Agarwal, Student, ECE Department, BGIET, Sangrur, India, reechaagarwal@yahoo.co.in
Rishav Dewan, Associate Professor, ECE Department, BGIET, Sangrur, India, rishavdewan@gmail.com

ABSTRACT
In gait recognition, identification of a person from a far distance is
performed without any cooperation from his side. Its motive is to
develop identification of human beings using a gait recognition method
which provides high security in places such as banks, military sites,
parking slots, airports etc. Human recognition methods such as face,
fingerprints, and iris generally require a cooperative subject,
physical contact or close proximity. These methods are not able to
recognize an individual at a distance; recognition using gait is
therefore a relatively new biometric technique without these
disadvantages. Human identification using gait is a method to identify
an individual by the way he walks, or his manner of moving on foot.
Gait recognition is a type of biometric recognition related to the
behavioral characteristics of biometric recognition. Gait offers the
ability of recognition at a distance or at low resolution.

This paper aims to recognize an individual using his gait features and
proposes a new method for gait recognition using SVM and LDA with the
Gait Pal and Pal Entropy technique. Different parameters are used,
such as the distance between head and feet and the distance between
legs; an additional parameter used by us is the distance between
hands. The majority of current approaches are model-free, which is
simple and fast, but we use a model-based approach for feature
extraction and for matching of parameters with database sequences.
After matching of parameters, the CCR (Correct Classification Rate) is
obtained using the LDA (Linear Discriminant Analysis) and SVM (Support
Vector Machine) techniques. Experimental results show the
effectiveness of the proposed system.

Keywords
Gait Recognition; SVM; LDA; CCR.

I. INTRODUCTION
Surveillance technology is now present everywhere in modern society.
This is due to the increase in the number of crimes as well as the
need to provide a safer environment. Despite the huge increase of
surveillance systems, the question whether current surveillance
systems work as a deterrent to crime is still debatable. Security
systems should not only be able to predict when a crime is about to
happen but, more importantly, they ought to identify the individuals
suspected of committing crimes through the use of biometrics such as
gait recognition. Recently, in surveillance applications, the use of
gait for people identification has attracted researchers from computer
vision. The suitability of gait recognition for surveillance systems
emerges from the fact that gait can be perceived from a distance, as
well as from its non-invasive nature.

Today in banks, metropolitan public transport stations, and other
real-time applications, authentication and verification are always
required. In such applications, biometric authentication methods are
more attractive. Biomechanics research (e.g. gait analysis, sport or
rehabilitation biomechanics, motor control studies) often involves
measuring different signals such as kinematics, forces, and EMG. Gait
is defined as "a manner of walking" in the Webster Collegiate
Dictionary. The extended definition of gait includes both the
appearance and the dynamics of human walking motion. Gait analysis is
the systematic study of human walking, using the eye and brain of
experienced observers, augmented by instrumentation for measuring body
movements, body mechanics and the activity of the muscles. Gait
analysis can give qualitative as well as quantitative values for the
gait parameters. Gait can be detected and measured at low resolution,
and therefore it can be used in situations where face or iris
information is not available in high enough resolution for
recognition.

II. BIOMETRICS
The first important step towards preventing unauthorized access is
user authentication. User authentication is the process of verifying
identity. Traditionally, passwords were set as a string which included
integers or special characters and were used for authentication, but
such passwords can easily be cracked, so biometric authentication is
now used. Biometrics refers to the technologies that analyze and
measure characteristics of the human body such as fingerprint, iris,
voice and facial pattern, DNA etc. It is critical to establish the
identity of an individual in a variety of scenarios, ranging from
issuing a driver's license to granting access to highly secured
resources. The need for reliable identification and authentication
systems has increased due to rapid advancements in networking,
communication and mobility. Traditional passwords and ID cards have
been used for authentication in many applications (e.g. Internet
banking) or facilities (e.g. a library), although such mechanisms have
several limitations. Passwords can be guessed or disclosed to unlawful
users, and ID cards can be stolen or forged, resulting in a breach of
security.

The most general definition of a biometric is:
"A physiological or behavioral characteristic, which can be used to
identify and verify the identity of an individual."

Biometrics can be classified into two categories:
 Physiological: These are biometrics derived from a direct
measurement of a part of the human body. These characteristics are
related to the body. Recognition techniques that come into this
category are fingerprint, face, iris, DNA and palm print.


Figure 1. Fingerprint detection

Figure 2. Face and iris detection

 Behavioural: These extract characteristics based on an action
performed by an individual; they are an indirect measure of the
characteristic of the human form. The main feature of a behavioral
biometric is the use of time as a metric. Established measures include
keystroke-scan and speech patterns. They are related to the behavior
of the person. Voice and gait recognition techniques come into this
category.

As the physiological characteristics do not provide good results at
low resolution and need user cooperation, recognition using gait is
more attractive.

Figure 3. Gait detection

III. GAIT RECOGNITION
Gait recognition aims to identify an individual by the way he walks or
moves. Gait-based recognition is more suitable in video surveillance
applications because of the following advantages:
1. Recognition using gait does not need any user cooperation.
2. The gait of an individual can be captured at a distance.
3. Gait recognition does not require the captured images to be of very
high quality and provides good results at low resolution.

Example: consider analyzing the video stream from surveillance
cameras. If an unauthorized person walks in front of a camera, the
system will compare his gait with stored gait sequences, recognize
him, and alert the appropriate authority for necessary action. The
threat has been successfully detected from a distance. Such a system
has a large number of applications, such as banks, airports and
high-security areas.

Figure 4. Gait Recognition in Security Access Scenario

Gait as a biometric has many advantages as stated above, which make it
an attractive proposition as a method of identification. Gait's main
advantage, unobtrusive identification at a distance, makes it a very
attractive biometric. The ability to identify a possible threat from a
distance gives the user a time frame in which to react before the
suspect becomes an actual threat. Another motivation is that video
footage of suspects is readily available: surveillance cameras are
relatively low cost and installed in most buildings or locations
requiring a security presence, so the video just needs to be checked
against that of the suspect.

An ideal intelligent monitoring system should be able to automatically
analyze the collected video data, give out an early warning before an
adverse event happens, and reduce injury and economic loss. For
example, when the system detects abnormal behavior, it can immediately
determine the identities of all persons in the scene, rapidly
investigate their previous activities, and track the suspects across
regions. This requires that the monitoring system can not only
estimate the quantity, location and behavior, but also obtain the
identity information. Gait is the most suitable biometric in the case
of intelligent visual surveillance: in monitoring scenes, people are
usually distant from the cameras, which makes most other biometric
features no longer available.


Gait Recognition Scenario

Figure 5. Gait Recognition Scenario

Example: in a bank scenario, only a few authorized people are allowed
to go into the lockers room. Here the gait analysis technique is used:
gait sequences of those authorized people are stored in the bank's
database, so whenever an unauthorized person tries to enter the room,
his gait sequences will not match the stored sequences and an alarm
system will be activated for further action.

3.1 Advantages of Gait Recognition
The advantages of gait as a biometric over other forms of biometric
identification techniques can be seen for the following reasons:
• Unobtrusive – The gait of a person walking can be extracted without
the user knowing they are being analyzed and without any cooperation
from the user in the information-gathering stage, unlike
fingerprinting or retina scans.
• Distance recognition – The gait of an individual can be captured at
a distance, unlike other biometrics such as fingerprint recognition.
• Reduced detail – Gait recognition does not require the captured
images to be of a very high quality, unlike other biometrics such as
face recognition, which can be easily affected by low-resolution
images.
• Difficult to conceal – The gait of an individual is difficult to
disguise; by trying to do so the individual will probably appear more
suspicious. With other biometric techniques such as face recognition,
the individual's face can easily be altered or hidden.

Being a biometric, an individual's biometric signature will be
affected by certain factors such as:
• Stimulants – drugs and alcohol will affect the way in which a person
walks.
• Physical changes – pregnancy, an accident or disease affecting the
leg, or severe weight gain/loss can all affect the movement
characteristic of an individual.
• Psychological – a person's mood can also affect an individual's gait
signature.
• Clothing – the same person wearing different clothing may cause an
automatic signature extraction method to create a widely varying
signature for an individual.
Although these disadvantages are inherent in a gait biometric
signature, other biometric measures can easily be disguised and
altered by individuals in order to attempt to evade recognition.

IV. GAIT RECOGNITION SYSTEM
Gait recognition system and background: the block diagram of a general
gait recognition system is shown below in Figure 6. The steps involved
and existing methods for all relevant steps are explained below.

Figure 6. Gait Recognition System

a) Video capture:
When a person enters, his gait is captured through a camera; accurate
tracking of a person in an indoor surveillance video stream is
obtained from a static camera. For example, a video camera at a front
door or anywhere in a multiplex can store gait sequences of a moving
person, so that the video can be used for further processing.

b) Background subtraction:
This is the most common approach in gait recognition. In this
approach, moving objects are first separated from the background of
the scene. After that, only the relevant part of the frame is
considered and the irrelevant part is deleted, i.e. instead of the
complete frame, only the portion in which a person is moving is
identified. Also, while implementing background subtraction, we use a
median filter to help remove noise. Background subtraction techniques
can be categorised into two types:
1. Non-recursive methods: these use a sliding-window approach for
background subtraction.
2. Recursive methods: these use the single Gaussian method and the
Gaussian mixture model. Recursive techniques require less storage.

c) Feature extraction: An important step in gait recognition is the
extraction of appropriate features that will effectively capture the
gait characteristics. When the input data is too large to be processed
and is suspected to be notoriously redundant (e.g. the same
measurement in both feet), the input data is transformed into a
reduced representation set of features (also named a feature vector).
Transforming the input data into the set of features is called feature
extraction.

d) Recognition: This is the last step of gait-based individual
detection. Here, input test video sequences are compared with the
trained sequences in the database. In general, a minimum distance
classifier may be used for gait recognition.

V. PROPOSED METHODOLOGY
In this work, two videos are captured through a mobile phone and made
compatible by coding in MATLAB; they are then converted into frames
(and frames back to video) using MATLAB commands. After that, the
input or live video is compared with the database video, which is also
converted into frames.
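The non-recursive (median/sliding-window) background subtraction described in Section IV(b) can be sketched as follows. This is an illustrative NumPy version with toy frames; the paper's actual implementation is in MATLAB:

```python
import numpy as np

def median_background(frames):
    """Estimate a static background as the per-pixel median of a
    window of frames (a simple non-recursive estimate)."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=25):
    """Pixels differing from the background by more than `thresh`
    are marked as the moving person."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

# toy example: uniform background of 10s, a "person" patch of 200s
bg_frames = [np.full((4, 4), 10, np.uint8) for _ in range(5)]
bg = median_background(bg_frames)
frame = np.full((4, 4), 10, np.uint8)
frame[1:3, 2] = 200          # two bright foreground pixels
mask = foreground_mask(frame, bg)
```

In practice the binary mask would then be cleaned with a median filter, as the paper notes, before the features are extracted from it.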


The steps of the proposed work are as follows:
 Firstly, we take different videos having different extensions; the
input video and the database video are converted into frames so that
we can match them.
 Secondly, a live video compared with a database video is matched
based on the walking style of the person, i.e. on the basis of the
movement of the person: if the walking style of both persons is the
same then the result is "matched", otherwise "not matched".
 Thirdly, we implement the concept of background subtraction so that
the unneeded part is deleted and we concentrate only on the person
rather than the complete frame.
 Fourthly, the concept of feature extraction is implemented, in which
we take parameters based on Hanavan's model, which is a model-based
approach; the SURF feature technique is also used for matching the
video.
 In the final step, recognition, we compare the accuracy of the
previous work (SVM) with the accuracy of our work using SVM and LDA
for better results. Also, we match the input video with the database
video, i.e. we check how efficient our CCR is.

The proposed gait recognition system consists of the steps shown in
the flowchart of Figure 7: START → load the input or live video →
background subtraction → feature extraction (Pal and Pal entropy
technique) → if no database is present, create the database (extract
features from the database video and store its parameters) → match the
current input video with the database video using the model-based
approach, considering different parameters (distance between legs,
head to feet, and distance between hands) → recognition using the
LDA+SVM technique and computation of the CCR (correct classification
rate) → display the results obtained → END.

Figure 7. The Proposed Gait Recognition System


5.1 Input Video
The foremost step is to capture an input video for gait
identification. First the input video is converted into frames, known
as video sequences, and those frames are used for the further gait
recognition process.

Figure 8. GUI figure file for loading video

Figure 9. Loading the video

5.2 Background Subtraction
After converting the video into frames, the next step is background
subtraction. Identifying moving objects from a video sequence is a
fundamental and critical task in many computer-vision applications. A
common approach is to perform background subtraction, which identifies
moving objects as the portion of a video frame that differs
significantly from a background model. A Gaussian mixture model is
used for foreground object estimation, in which an additional step of
filtering by a median filter can be involved to eliminate noise.

Figure 10. GUI figure file for background subtraction

Figure 11. Background subtraction

5.3 Feature Extraction
Feature selection is a crucial step in gait recognition. The feature
must be robust to operating conditions and should yield good
discriminability across individuals. Each gait sequence is divided
into cycles. A gait cycle is defined as: the person starts from rest,
left foot forward, rest, right foot forward, rest. Figure 12 shows the
stances during a gait cycle. The gait cycle is determined by
calculating the sum of the foreground pixels; at rest positions this
value is low.
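Since rest stances give low foreground pixel sums, the gait-cycle boundaries can be located as local minima of that per-frame sum. A minimal sketch (the `sums` values here are synthetic; in practice they come from the background-subtracted frames):

```python
import numpy as np

def rest_frames(fg_sums, window=2):
    """Indices of local minima of the per-frame foreground pixel sum;
    these correspond to the rest stances that delimit gait cycles."""
    mins = []
    for i in range(window, len(fg_sums) - window):
        seg = fg_sums[i - window:i + window + 1]
        # a rest frame is a local minimum that is also below average
        if fg_sums[i] == min(seg) and fg_sums[i] < np.mean(fg_sums):
            mins.append(i)
    return mins

# synthetic foreground sums: low at rest, high mid-stride
sums = np.array([30, 80, 120, 80, 40, 30, 80, 120, 80, 40, 30, 80, 120])
rests = rest_frames(sums)
```

The spacing between consecutive rest frames gives the cycle length, over which the Pal and Pal entropy features are then accumulated.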


Figure 12. Stances during gait cycle

Figure 13. GUI figure file for feature extraction

Figure 14. Cropped image

Figure 15. Canny representation

Figure 16. Prewitt representation


low dimensional space with the LDA algorithm. The objective of


LDA is to perform dimensionality reduction while preserving as
much of the class discriminatory information as possible. In general,
if each class is tightly grouped, but well separated from the other
classes, the quality of the cluster is considered to be high.
In PCA, the shape and the location of the original data sets changes
when transformed to a different spaces whereas LDA doesn’t change
the location but only tries to provide more class separability and draw
decision between the given classes. In discriminant analysis, two
scatter matrices, called within-class (Sw) and between-class (Sb)
matrices, are defined to quantify the quality.
T
=
where = x is the mean of ith class
and m= = x is the global mean.

VI. RESULTS AND DISCUSSIONS


In the following figures, results of following figures are highlighted.

Figure 17. Parameters calculated for feature extraction

5.4 Matching and Recognition


The last step in this is recognition which means to compare the
results i.e. we are comparing the input video with the database stored
video to check whether they are same or not. For matching we use the
surf method technique to match the two different images or frames.
Then these extended vectors are matched to the trained data base with
the help of Support Vector Machine and LDA.

A. SUPPORT VECTOR MACHINE (SVM)
The theory of SVM is based on the idea of structural risk minimization. In many applications, SVM has been introduced as a powerful tool for solving classification problems, and many researchers have used SVM for gait recognition. It is to be noted that SVM is fundamentally a two-class classifier: it first maps the training samples into a high-dimensional space and then finds a separating hyperplane that maximizes the margin between the two classes in this high-dimensional space.
This has two advantages:
a) First, the ability to generate non-linear decision boundaries using methods designed for linear classifiers.
b) Second, the use of kernel functions allows the user to apply a classifier to data that have no obvious fixed-dimensional vector space representation.

Figure 18. GUI figure file for matching the video
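The margin-maximization idea can be illustrated with a minimal linear SVM trained by sub-gradient descent on the hinge loss (an illustrative sketch only; the paper's experiments rely on an off-the-shelf SVM implementation):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimise the regularised hinge loss lam*||w||^2 + mean(max(0, 1 - y*(Xw + b))).

    X: (n, d) samples; y: labels in {-1, +1}. Returns the separating
    hyperplane (w, b); the regulariser lam controls the margin width.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                      # samples violating the margin
        grad_w = 2 * lam * w - (X[mask] * y[mask][:, None]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy two-class data: the learned hyperplane should separate the clusters.
X = np.array([[2.0, 2.0], [2.5, 1.5], [-2.0, -2.0], [-1.5, -2.5]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

A kernelised SVM applies the same margin criterion after mapping the samples into a higher-dimensional feature space, as described above.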

B. LINEAR DISCRIMINANT ANALYSIS (LDA)
LDA is a technique used for feature extraction and dimension reduction. It has been used in many applications involving high-dimensional data, such as image retrieval and recognition. The LDA method is employed to perform training and projection on the original gait features: it reduces the dimensionality of the high-dimensional gait feature with PCA, and then performs optimal classification on

Figure 19. Successful Match

In the above figure the input video was matched with the database image and the SVM and LDA results were calculated. In this case the input video is the same as the database image, and the SVM and LDA results are better.


Table 4.1. Comparison of CCR between the previous and the proposed algorithm

        Previous work (SVM)   Proposed work (SVM and LDA)
CCR     93.79                 99.801

Figure 22. The accuracy value of GPPE with SVM and LDA

This bar graph in figure 23 shows the comparison of accuracy between the base-paper technique, i.e., GPPE (Gait Pal and Pal Entropy) with SVM, and GPPE with SVM and LDA. The accuracy of the proposed technique is better than that of the previous technique, as can be seen from the bar graph; the values are the same as shown in the above table. The bar graph is generated with the bar command in MATLAB.
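The CCR values in Table 4.1 are simply the percentage of samples classified correctly; as a minimal sketch (the data below is hypothetical, not the paper's):

```python
def ccr(predicted, actual):
    """Correct classification rate: percentage of matching labels."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return 100.0 * correct / len(actual)

# Hypothetical example: 9 of 10 probe sequences matched correctly.
rate = ccr([1, 2, 3, 4, 5, 6, 7, 8, 9, 0],
           [1, 2, 3, 4, 5, 6, 7, 8, 9, 1])   # 90.0
```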

Figure 20. GUI figure file for matching the unsuccessful video

Figure 23. Accuracy graph of SVM and LDA

This table in figure 24 shows the comparison of accuracy between the base-paper technique, i.e., GEnI (Gait Entropy Image) with SVM, and the proposed work, GEnI with SVM and LDA. The accuracy of the proposed technique is better than that of the previous technique. The accuracy value has been calculated using the equation and algorithm implemented in MATLAB software; the table is generated with the uitable command in MATLAB.

Figure 21. Unsuccessful match

This table in figure 22 shows the comparison of accuracy between the base-paper technique, i.e., GPPE (Gait Pal and Pal Entropy) with SVM, and the proposed work, GPPE with SVM and LDA. The accuracy of the proposed technique is better than that of the previous technique. The accuracy value has been calculated using the equation and algorithm implemented in MATLAB software; the table is generated with the uitable command in MATLAB.

Figure 24. The accuracy value of GEnI with SVM and LDA

This bar graph in figure 25 shows the comparison of accuracy between the base-paper technique, i.e., GEnI (Gait Entropy Image) with SVM, and GEnI with SVM and LDA. The accuracy of the proposed technique is better than that of the previous technique, as can be seen from the bar graph; the values are the same as shown in the above table. The bar graph is generated with the bar command in MATLAB.


Figure 25. Accuracy graph of GEnI with SVM and LDA

CONCLUSION

We conclude that the system can analyze the video streams from surveillance cameras: if a person whose gait has been previously recorded walks by the camera and is a known threat, the system will recognize him and the concerned authorities can be alerted automatically, so that the person can be detected before he is allowed to become a threat. The threat can be detected from a distance, creating a time buffer for the authorities to take action. In this work we use a Gaussian mixture model for foreground object estimation, in which an extra filtering step through a median filter can be involved to eliminate noise. A moving-target classification algorithm is used to separate human beings (i.e., pedestrians) from other foreground objects (viz., vehicles); shape and boundary information is used for this moving-target classification. The width vector of the outer contour of the binary silhouette and the Gait Pal and Pal Entropy coefficients are used for extracting the feature vector, and these extracted feature vectors are used to recognize individuals. The SURF feature is used for recognizing persons based on gait. Parameters like the distance between head and feet, the distance between hands and the distance between legs are calculated. Finally, the SVM and LDA results are calculated, which are far better in comparison to the previous research paper.

REFERENCES
[1] A. Hayder, J. Dargham, A. Chekima and G. M. Ervin, 2011. "Person identification using gait," International Journal of Computer and Electrical Engineering, vol. 3, no. 4.
[2] Alese, B. K., Mogagi, S. A., Adewale, O. S. and Daramola, O., 2012. "Design and implementation of gait recognition system," International Journal of Engineering and Technology, vol. 2, no. 7.
[3] Anupam Shukla, Ritu Tiwari and Sanjeev Sharma, 2011. "Identification of people using gait biometrics," International Journal of Machine Learning and Computing, vol. 1, no. 4.
[4] Bashir Khalid, Xiang Tao and Gong Shaogang, "Gait recognition using gait entropy image," 3rd International Conference on Crime Detection and Prevention (ICDP 2009).
[5] C. Y. Yam, M. S. Nixon and J. N., 2001. "Extended model based automatic gait recognition of walking and running," Proc. 3rd AVBPA 2001, pp. 278-283.
[6] Davrondzhon Gafurov, Einar Snekkenes and Patrick Bour, 2010. "Improved gait recognition performance using cycle matching," International Conference on Advanced Information Networking and Applications, Perth, Australia, 20-23.
[7] J. Han and B. Bhanu, 2006. "Individual recognition using gait energy image," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 28, no. 2, pp. 316-322.
[8] J. J. Little and J. E. Boyd, 1998. "Recognizing people by their gait: the shape of motion," Videre: J. Computer Vision Research, 1(2), pp. 1-32.
[9] Kaur et al., 2013. "Gait recognition for human identification using ENN and NN," International Journal of Advanced Research in Computer Science and Software Engineering, 3(11), pp. 1154-1161.
[10] Lili Liu, Wei Qin, Yilong Yin and Ying Li, 2011. "Gait recognition based on outermost contour," International Journal of Computational Intelligence Systems, vol. 4, no. 5.
[11] M. Jeevan, Neha Jain et al., 2013. "Gait recognition based on Gait Pal and Pal Entropy image," pp. 4195-4198.
[12] Niyogi, S. and Adelson, E., 1994. "Analyzing and recognizing walking figures in XYT," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, Wash., USA, pp. 469-474.
[13] N. K. Narayanan and V. Kabeer, "Face recognition using nonlinear feature parameter and artificial neural network," International Journal of Computer Intelligence Systems, 3(5), pp. 566-574.
[14] Shaveta Chauhan et al., 2014. "Gait recognition using BPNN and SVM," International Journal of Computer Application and Technology, ISSN: 2349-1841, pp. 63-67.
[15] S. J. McKenna, S. Jabri and Z. Duric, "Tracking groups of people," Comput. Vis. Image Understanding, vol. 80, no. 1, pp. 42-56.
[16] Su-Li Xu and Qian-Jin Zhang, 2010. "Gait recognition using fuzzy principal component analysis," 2nd International Conference on e-Business and Information System Security, IEEE.


ENHANCEMENT OF THE ACCURACY OF THE PHOTONIC STRUCTURE OF PHOTONIC CRYSTAL FIBER BY USING ARTIFICIAL NEURAL NETWORK
Amit Goyal Kamaljeet Singh Sidhu
Department of Electronics and communication Engg. Electronics and communication Engg.
Bhai Gurdas Institute of Engineering &Technology Bhai Gurdas Institute of Engineering &Technology
Sangrur, India Sangrur, India
amitgoyal672@gmail.com sidhuk95@yahoo.in

ABSTRACT
There are several methods introduced for refining the accuracy of photonic structures, but no one has as yet studied the effect of neural networks in refining the accuracy of the photonic structure of photonic crystal fibers. In this paper a simulation is conducted using artificial neural networks to refine the accuracy of the photonic crystal fibers. The artificial neural network is further optimized by varying the number of layers to enhance the accuracy of the photonic structure of the photonic crystal fibers.

Keywords
Photonic structure; crystal fiber; artificial neural network.

1. INTRODUCTION
Photonic-crystal fiber (PCF) is a new class of optical fiber based on the properties of photonic crystals. Because of its ability to confine light in hollow cores or with confinement characteristics not possible in conventional optical fiber, PCF is now finding applications in fiber-optic communications, fiber lasers, nonlinear devices, high-power transmission, highly sensitive gas sensors, and other areas [1,2].

Fig. 1: A lattice with the same symmetry (fcc in this case) may present different topologies: a) isolated dielectric spheres in air, b) interpenetrated dielectric spheres in air, c) isolated air spheres in a dielectric and d) interpenetrated air spheres in a dielectric.

Photonic crystal fiber can provide characteristics that ordinary optical fiber does not exhibit, such as single-mode operation from the UV to IR with large mode-field diameters, highly nonlinear performance for supercontinuum generation, numerical aperture (NA) values ranging from very low to about 0.9, optimized dispersion properties, and air-core guidance, among others [3]. Applications for photonic crystal fibers include spectroscopy, metrology, biomedicine, imaging, telecommunication, industrial machining and the military, and the list keeps growing as the technology becomes mainstream. Photonic crystal fibers are generally divided into two main categories [3,4]: index-guiding fibers that have a solid core, and photonic band-gap (air-guiding) fibers that have periodic micro-structured elements and a core of low-index material (e.g. a hollow core). Structured optical fibers are also called micro-structured optical fibers, and sometimes photonic crystal fibers in case the arrays of holes are periodic. PCF is a type of optical fiber using the properties of photonic crystals [4,5]. Its advantages over a conventional optical fiber are the possibility to control the optical properties and confinement characteristics of the material. Conventional optical fibers simply guide light, and they started a revolution in telecommunication [6]. The principle of total internal reflection has been used for guiding light in the fiber. Nowadays, we have almost reached the maximum of its best properties, which are limited by the optical properties of the solid cores (attenuation of 0.2 dB/km, zero dispersion shifted to the minimum-loss window at 1550 nm) [7,8].

Fig. 2: Point defects in a square lattice made of dielectric rods (radius = 0.2a). Depending on the radius (r) of the point defect, localized states are created within the cPBG.

Low-loss dielectric periodic material with sufficiently different dielectric constants in crystals can control the flow of light. Crystals with photonic band gaps can be designed: we can design structures which prevent light from propagating in certain directions within a specified range of wavelengths. Hollow-core photonic crystal fiber (HC-PCF) is also called holey fiber [9,10]. It enables the guidance of light in the hollow core with lower attenuation than in a solid silica core. The core can be filled by air or gas. There are


many advantages of using PCF over a conventional optical fiber. The biggest ones are the possibility to control the optical properties and confinement characteristics of the material. Since these fibers allow guidance through hollow cores (air holes), there is smaller attenuation than with a solid-core fiber. PCFs with larger cores may carry more power than conventional fibers, and a larger index contrast is available for effective-index guidance [10,11]. Attenuation effects are not worse than for conventional fibers. Dispersion can also be controlled: the size of the air holes may be tuned to shift the zero dispersion into the visible range of light [12].

2. METHODOLOGY

Matlab will be used as the simulation tool. An attempt will be made to enhance the accuracy of the photonic structure of photonic crystal fiber using artificial neural networks. First, the parameters of the photonic structure will be considered. Neural networks have proved themselves proficient classifiers and are particularly well suited for addressing non-linear problems. Given the non-linear nature of real-world phenomena, such as enhancing the accuracy of the photonic structure of photonic crystal fiber, neural networks are certainly a good candidate for solving the problem. The parameters (as shown in TABLE I) will act as inputs to a neural network, and the enhancement of the accuracy of the photonic structure will be the target. The photonic band gaps are calculated for various input parameters by using the plane-wave method: find the unit cells which are perpendicular to the xy plane, then form the equation matrix of 2N by 2N; for this case there is no TE and TM decoupling [11,12]. Find the band gap and save these values into the appropriate band, then plot the variation of the band gap of the SiO2-air PBG. This is repeated for three samples until the desired PBG is obtained (as shown in figs. 1a, 1b, 1c, 1d, 1e); these figures show the photonic band-gap variation in a 2D triangular lattice of the SiO2-air PBG. For figs. 1(a,b,c,d,e), epsa is the dielectric constant of air, epsb is the dielectric constant of SiO2, a is the diameter and f is the spacing between holes.

Fig. 3: Photonic fibers. a) Cross-section image of the omniguide; this guide is based on the omnidirectional Bragg mirrors. b) It consists of a hollow core surrounded by a 2D photonic crystal that confines light within the core.

Finally, in the photonic band-gap graphs, if the region width is < .574 then the value is guided, otherwise it is leaky [14]. Further, given an input which constitutes the measured values of the parameters of the photonic structure, the neural network is expected to identify whether the desired accuracy has been achieved or not. This is achieved by presenting previously recorded parameters to the neural network and then tuning it to produce the desired target outputs; this process is called neural network training. The samples will be divided into training, validation and test sets [14,15]. The training set is used to teach the network, and training continues as long as the network keeps improving on the validation set. The test set provides a completely independent measure of network accuracy. The trained neural network will be tested with the testing samples, and the network response will be compared against the desired target response to build the classification matrix, which provides a comprehensive picture of system performance [16].

2.1. The training data

The training data set includes a number of cases, each containing values for a range of input and output variables. The first decisions to make are which variables to use, and how many (and which) cases to gather. The choice of variables (at least initially) is guided by intuition; expertise in the problem domain will give some idea of which input variables are likely to be influential. As a first pass, any variable that could have an influence should be included; part of the design process will be to reduce this set down. Neural networks process numeric data in a fairly limited range. This presents a problem if data is in an unusual range, if there is missing data, or if data is non-numeric. Fortunately, there are methods to deal with each of these problems [12,13]: numeric data is scaled into an appropriate range for the network, and missing values can be substituted using the mean value (or another statistic) of that variable across the other available training cases [16,17].
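The scaling and mean-substitution steps just described can be sketched as follows (NumPy used for illustration only; the paper's workflow is MATLAB-based):

```python
import numpy as np

def preprocess(X):
    """Impute missing values (NaN) with the column mean, then scale each
    column into [0, 1] -- the limited numeric range the network expects."""
    X = X.astype(float).copy()
    col_mean = np.nanmean(X, axis=0)      # per-variable mean, ignoring NaN
    idx = np.where(np.isnan(X))
    X[idx] = np.take(col_mean, idx[1])    # substitute missing entries
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

# Toy training cases: the NaN is replaced by the column mean (300),
# then both variables are rescaled to [0, 1].
X = np.array([[1.0, 200.0],
              [2.0, np.nan],
              [3.0, 400.0]])
Xs = preprocess(X)
```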


3. RESULTS AND DISCUSSION

In this paper, unary encoding is used in the simulation to perform symbol translation. The first six columns of data represent the characteristics of the structure, and the 7th column represents the target, i.e., whether the desired accuracy is achieved or not. This data is randomly generated. The next step is to preprocess the data into a form that can be used with a neural network, and then to create a neural network that will learn to identify whether the accuracy has been achieved or not. The assumed samples are automatically divided into training, validation and test sets. The training set is used to teach the network, and training continues as long as the network keeps improving on the validation set (as shown in fig 2). The test set provides a completely independent measure of neural network accuracy. The trained neural network is then tested with the testing samples; this gives a sense of how well the network will do when applied to data from the real world [10,11]. The overall architecture of the neural network is stored in the variable net, and simulation of the network returns its output in matrix or structure format. The performance evaluation involves a comparison between the target and the network's output on the testing set (generalization ability) and on the training set (memorization ability), using a function designed to measure the distance/similarity of the target and output [13]. The result shows the graphical representation of percentage accuracy versus number of layers (shown in Fig 3); from this it is clear that two layers are sufficient to achieve 100% accuracy.

Fig 4: Graph shows number of layers vs. percentage accuracy

4. CONCLUSION

Neural networks have proved themselves proficient classifiers and are particularly well suited for addressing non-linear problems [12,13]. Given the non-linear nature of real-world phenomena, such as enhancing the accuracy of the photonic structure of photonic crystal fiber, neural networks are certainly a good candidate for solving the problem [15,16]. From fig 3 it is clear that only two layers are sufficient to achieve an accuracy of 100%.

5. ACKNOWLEDGMENTS
I would like to take this opportunity to express a deep sense of gratitude to Er. Amit Goyal for providing valuable and consistent guidance, generous advice, academic support and encouragement during the preparation of this manuscript. The constant guidance and encouragement received from books and the internet were a great help in carrying out the present work. Our colleagues' peerless experience and knowledge in this field were very helpful for this work.

Fig 2 shows the performance of the neural network.

Fig. 3: Graph shows number of layers vs. percentage accuracy.

REFERENCES
[1] Liu Ming-sheng; Yue Ying-juan; Li Yan, "Birefringence property of asymmetric structure photonic crystal fiber," Communications and Photonics Conference and Exhibition (ACP), 2010 Asia, pp. 699-700, 8-12 Dec. 2010.

Fig 3: Graph shows the performance of the neural network.

[2] Gowre, S.K.C.; Mahapatra, S.; Sahu, P.K.; Biswas, J.C., "Design and analysis of a modified Photonic Crystal Fiber structure," Industrial and Information Systems, 2007. ICIIS 2007. International Conference on, pp. 159-162, 9-11 Aug. 2007.

[3] Rostami, A.; Ghanbari, A.; Soofi, H.; Janabi-Sharifi, F., "Enlarging effective mode area of photonic crystal fibers using defected core structures," Optomechatronic Technologies (ISOT), 2010 International Symposium on, pp. 1-4, 25-27 Oct. 2010.


[4] Li Bing-xiang; Xie Ying-mao, "The band gap structure on the disorder hollow core triangular lattice photonic crystal fiber," Advances in Optoelectronics and Micro/Nano-Optics (AOM), 2010 OSA-IEEE-COS, pp. 1-4, 3-6 Dec. 2010.

[5] Gedik, E.; Topuz, E., "Photonic Band Gaps of the Honeycomb Photonic Crystal Fiber Structure," Signal Processing and Communications Applications, 2007. SIU 2007. IEEE 15th, pp. 1-4, 11-13 June 2007.

[6] Chen, Daru, "Absolutely single polarization photonic crystal fiber based on a structure of sub-wavelength hole pitch," Communications and Photonics Conference and Exhibition (ACP), 2009 Asia, vol. 2009-Supplement, pp. 1-6, 2-6 Nov. 2009.

[7] Morishita, K.; Miyake, Y., "Fabrication and resonance wavelengths of long-period gratings written in a pure-silica photonic crystal fiber by the glass structure change," Journal of Lightwave Technology, vol. 22, no. 2, pp. 625-630, Feb. 2004.

[8] Matsui, T.; Nakajima, K.; Fukai, C., "Applicability of Photonic Crystal Fiber With Uniform Air-Hole Structure to High-Speed and Wide-Band Transmission Over Conventional Telecommunication Bands," Journal of Lightwave Technology, vol. 27, no. 23, pp. 5410-5416, Dec. 1, 2009.

[9] AbdelMalek, F.; Hongbo Li; Schulzgen, A.; Moloney, J.V.; Peyghambarian, N.; Ademgil, H.; Haxha, S., "A Nonlinear Switch Based on Irregular Structures and Nonuniformity in Doped Photonic Crystal Fibers," IEEE Journal of Quantum Electronics, vol. 45, no. 6, pp. 684-693, June 2009.

[10] Najafi, A.; Jalalkamali, M.; Moghadamzadeh, S.; Bolorizadeh, M.A., "Finite Element Method Analysis of Photonic Crystal Fiber Band Structure," Photonics and Optoelectronic (SOPO), 2010 Symposium on, pp. 1-4, 19-21 June 2010.

[11] Juan Juan Hu; Guobin Ren; Ping Shum; Xia Yu; Guanghui Wang; Chao Lu, "Analytical method for band structure calculation of liquid crystal filled photonic crystal fibers," Opto-Electronics and Communications Conference, 2008 and the 2008 Australian Conference on Optical Fibre Technology. OECC/ACOFT 2008. Joint conference of the, pp. 1-2, 7-10 July 2008.

[12] Gedik, E.; Topuz, E., "An Investigation of the Photonic Crystal Fiber Structure with Different Air Hole Diameter," Signal Processing and Communications Applications, 2006 IEEE 14th, pp. 1-4, 17-19 April 2006.

[13] Jung-Sheng Chiang; Tzong-Lin Wu, "A novel periodic structures in photonic crystal fibers," Lasers and Electro-Optics, 2003. CLEO/Pacific Rim 2003. The 5th Pacific Rim Conference on, vol. 1, p. 17, 15-19 Dec. 2003.

[14] Chen, Daru, "Absolutely single polarization photonic crystal fiber based on a structure of sub-wavelength hole pitch," Communications and Photonics Conference and Exhibition (ACP), 2009 Asia, pp. 1-2, 2-6 Nov. 2009.

[15] Wang Guanjun; Liu Jiansheng; Zheng Zheng; Yi Yang; Xiao Jing; Bian Yusheng, "A fast response photonic crystal fiber grating refractometer with a side-opening structure," Lasers and Electro-Optics (CLEO), 2011 Conference on, pp. 1-2, 1-6 May 2011.

[16] Lin, Y.; Herman, P.R.; Valdivia, C.E.; Li, J.; Kitaev, V.; Ozin, G.A., "Photonic band structure of colloidal crystal self-assembled in hollow core optical fiber," Applied Physics Letters, vol. 86, no. 12, pp. 121106-121106-3, Mar. 2005.

[17] Jianhua Li; Rong Wang; Jingyuan Wang; Baofu Zhang; Hua Zhou, "Highly birefringent photonic crystal fiber with hybrid cladding structure," Communications and Photonics Conference and Exhibition (ACP), 2010 Asia, pp. 222-223, 8-12 Dec. 2010.


Effect of Different Feeding Techniques on Slot Antenna

Sushil Kakkar Shweta Rani Anuradha


ECE Department ECE Department ECE Department
BGIET, Sangrur BGIET, Sangrur NIT, Hamirpur
kakkar778@gmail.com shwetaranee@gmail.com sonanu8@gmail.com

ABSTRACT
An analysis of the resonant behavior of a small-size square-shape slot antenna with different feeding techniques is presented in this paper. It has been observed that for nearly the same dimensions of the patch, the microstrip line feed gives better return loss and VSWR in comparison to the CPW and probe feeds, whereas the CPW feed provides better gain and bandwidth than the other two feeding techniques for this particular structure. Results show that the proposed antenna may be used as a small, compact antenna for X-band applications.

The fields at the edges of the patch undergo fringing because the dimensions of the patch are finite along the length and width [1].

Keywords
Feeding Techniques, Slot antenna, Bandwidth

1. INTRODUCTION
The increasing progress in wireless communication systems and an increasing demand to integrate different technologies into small user equipment have remarkably increased the trend of introducing compact antennas [1]. The microstrip patch antenna, because of its small size, low profile, low manufacturing cost and ease of integration with feed networks, finds extensive applications in wireless communication systems [2-3]. Because of their extremely thin profile (0.01 to 0.05 wavelength), printed microstrip antennas have found heavy application in military aircraft, missiles, rockets and satellites [4-5]. One of the most applicable frequency bands is the X-band, ranging from 8 to 12 GHz. The X-band frequencies are used in satellite communications, radar applications and terrestrial communications [6]. Traditional feeding techniques include the use of aperture-coupled microstrip lines, the coaxial probe and the coplanar waveguide (CPW). Using coplanar waveguides offers the advantage of ease of integration with active devices due to their uniplanar design, eliminating the need for vias. The accurate determination of antenna impedance is important because most microwave sources and lines are manufactured with 50 Ohm characteristic impedance [7-10]. In this paper a square-shape slot antenna has been designed to analyze the relationship between the resonant performance of the antenna and the different feeding techniques.

2. DESIGN CONSIDERATIONS
The basic geometry of the proposed antenna for different feeding configurations is shown in Fig. 1. The antenna is a small-size slot antenna having length L = 20 mm and width W = 20 mm, designed with an FR4 substrate having a dielectric constant of Єr = 4.4. The presented antenna structure has similar slots in its design, which are responsible for its small size and low cost. The dimensions of the slots are Ls = 12 mm and Ws = 12 mm.

Fig. 1: Geometrical Construction of Proposed Antenna. (a) Probe feed, (b) CPW feed, (c) Microstrip Line feed
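The fringing at the patch edges noted earlier is commonly folded into an effective dielectric constant; a sketch using the standard closed-form microstrip expression (this formula and the 1.6 mm substrate height are textbook assumptions, e.g. from general microstrip theory, not values given in the paper):

```python
def eps_eff(eps_r, h, w):
    """Effective dielectric constant of a microstrip of width w on a
    substrate of height h (standard closed form, valid for w/h > 1):
    (eps_r+1)/2 + (eps_r-1)/2 * (1 + 12*h/w)**-0.5"""
    return (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / w) ** -0.5

# FR4 (eps_r = 4.4) as used for the proposed antenna; a substrate
# height of 1.6 mm and the 20 mm patch width are assumed here.
e = eps_eff(4.4, 1.6, 20.0)   # ~3.914, i.e. fields partly in air, partly in FR4
```

Because the effective constant is below 4.4, the patch looks electrically larger than its physical size, which is the fringing effect mentioned above.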


3. SIMULATED RESULTS

3.1 Comparison of Resonant Properties with Different Feeding Techniques
The simulation tool adopted for evaluating the performance of the proposed antennas is the IE3D software, which is based on the method of moments. Table 1 shows the resonant performance characteristics of the square-shape slot antenna with different feeding techniques. In this analysis three different feeding techniques have been considered: coaxial probe, microstrip line and coplanar waveguide. It has been observed that when the patch is fed by a microstrip line, it gives better return loss and VSWR in comparison to the other two techniques, while with the coplanar waveguide (CPW) feed, bandwidth and gain increase considerably. The resonant characteristics with the probe feed for the proposed slot antenna lie between those of the other two feeding techniques. Fig. 2 shows the s-parameters of the proposed small-size slot antenna; from the figure it is clear that the antenna resonates at 9.455 GHz (9.27-9.66 GHz) with the microstrip line feed, at 9.495 GHz (8.76-10.3 GHz) with the CPW feed and at 9.697 GHz (9.535-9.98 GHz) with the coaxial feed.

Table 1. Comparison of resonant performance characteristics.

Resonant Characteristics   Microstrip Line Feed   CPW Feed   Probe Feed
Resonant Freq. (GHz)       9.455                  9.495      9.697
Return Loss (dB)           -41.45                 -13.81     -17.05
VSWR                       1.017                  1.513      1.327
Gain (dBi)                 1.687                  4.13       3.09
Bandwidth (%age)           4.12%                  16.1%      4.56%

Fig. 2: S-parameters of the Proposed Antenna with different feeding techniques.

3.2 Gain and Radiation Patterns
The maximum achievable gain with the different feeding techniques of the proposed antenna is shown in Fig. 3 to Fig. 5. It is clearly illustrated that the CPW feeding technique provides the highest gain of 4.13 dBi.

Fig. 3: Simulated gain of proposed antenna with microstrip line feed.

Fig. 4: Simulated gain of proposed antenna with CPW feed.

Fig. 5: Simulated gain of proposed antenna with probe feed.

The radiation characteristics of the proposed slot antenna for the different feeding techniques are shown in Fig. 6 to Fig. 8, and it has been observed that they are nearly similar in nature.
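The VSWR and bandwidth figures in Table 1 follow directly from the simulated return loss and band edges via the standard conversions; a quick sketch (Python here purely for checking, the simulations themselves were done in IE3D):

```python
def vswr_from_return_loss(rl_db):
    """Convert return loss (negative dB) to VSWR through the
    reflection-coefficient magnitude |Gamma| = 10**(rl_db/20)."""
    gamma = 10 ** (rl_db / 20)
    return (1 + gamma) / (1 - gamma)

def fractional_bw(f_low, f_high, f_res):
    """Impedance bandwidth as a percentage of the resonant frequency."""
    return 100 * (f_high - f_low) / f_res

# Microstrip-line-feed values from Table 1:
v = vswr_from_return_loss(-41.45)        # ~1.017, matching the table
bw = fractional_bw(9.27, 9.66, 9.455)    # ~4.12 %, matching the table
```

The same conversions reproduce the CPW (1.513, 16.1%) and probe (1.327, 4.56%) entries, confirming the table is internally consistent.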


The radiation pattern is symmetrical to the antenna axis in the E-plane and nearly omnidirectional in the H-plane.

Fig. 6: Radiation Pattern at 9.454 GHz (a) E-plane (b) H-plane.

Fig. 7: Radiation Pattern at 9.495 GHz (a) E-plane (b) H-plane.

Fig. 8: Radiation Pattern at 9.697 GHz (a) E-plane (b) H-plane.


4. CONCLUSION
The feeding mechanism plays an important part in the resonant performance characteristics of an antenna design. A comparative study of the resonant performance of a square-shape slot antenna with different feeding configurations has been presented in this paper. It has been illustrated that the proposed square-shape slot antenna gives better return loss and VSWR with the microstrip line feed, but with CPW feeding it provides wider bandwidth and the highest gain in comparison to the other two feeding techniques. The proposed antenna is feasible for use as a small-size, low-profile and low-cost antenna for X-band applications.

REFERENCES
[1] A. Balanis, "Antenna Theory," John Wiley & Sons, Inc., 1997.
[2] M. Hirvoven, P. Pusula, K. Jaakkola and K. Laukkanen, "Planar Inverted-F Antenna for Radio Frequency Identification," IEEE Electronic Letters, vol. 40, no. 14, pp. 848-850, 2004.
[3] F. Yong, X. Zhang, X. Ye and Y. Rahmat-Samii, "Wide-band E-shaped patch antennas for wireless communication," IEEE Trans. on Antennas Propagat., vol. 49, no. 7, pp. 1094-1100, Jul. 2001.
[4] Jen-Yea Jan and Jia-Wei Su, "Bandwidth enhancement of a printed wide slot antenna with a rotated slot," IEEE Transactions on Antennas and Propagation, vol. 53, no. 6, pp. 2111-2114, June 2005.
[5] K. Chung, T. Yun and J. Choi, "Wideband CPW-fed monopole antenna with parasitic elements and slots," Electronics Letters, vol. 40, pp. 1038-1040, 2004.
[6] J. Jan and J. Su, "Bandwidth enhancement of a printed wide slot antenna with a rotated slot," IEEE Transactions on Antennas and Propagation, vol. 53, no. 6, pp. 2111-2114, June 2005.
[7] A. P. Singh and S. Rani, "Simulation and Design of Broad-Band Slot Antenna for Wireless Applications," Proceedings of the World Congress on Engineering, WCE 2011, July 6-8, 2011, Imperial College, London, U.K., vol. 2, pp. 1390-1393.
[8] S. Rani and A. P. Singh, "On the Design and Optimization of New Fractal Antenna Using PSO," International Journal of Electronics, vol. 100, no. 10, pp. 1383-1397, 2012.
[9] S. Kakkar and S. Rani, "A Novel Antenna Design with Fractal-Shaped DGS Using PSO for Emergency Management," International Journal of Electronics Letters, vol. 1, no. 3, pp. 108-117, 2013.
[10] S. Kakkar, S. Rani and A. P. Singh, "On the Resonant Behaviour Analysis of Small-Size Slot Antenna with Different Substrates," International Journal of Computer Applications, pp. 10-12, 2012.

70
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Low Cost Planar Antenna for Wireless Applications


Anand Jain Sushil Kakkar Shweta Rani
Ericsson AB ECE Department ECE Department
Stockholm, Sweden BGIET, Sangrur BGIET, Sangrur
anandjain80@gmail.com kakkar778@gmail.com shwetaranee@gmail.com

ABSTRACT
An analysis of the resonant behavior of a small-size planar patch antenna and a slot antenna with finite ground plane is presented in this paper. It is observed that a size reduction of 70.74% has been achieved by taking out a similar shape slot from the '7'-shaped patch. As a result, the new antenna generates an additional band and shifts the primary resonant frequency towards the lower side. The proposed antenna not only has good radiation characteristics, but also has the advantages of low cost, small size and easy manufacture for S-band and C-band wireless applications.

Keywords
Slot antenna, Return loss, Gain.

1. INTRODUCTION
The astonishing progress in wireless communication systems and an increasing demand to integrate different technologies into small user equipment have remarkably increased the trend of introducing compact antennas [1]. The microstrip patch antenna, because of its small size, low profile, low manufacturing cost and ease of integration with feed networks, finds extensive applications in wireless communication systems [2-5]. Because of their extremely thin profile (0.01 to 0.05 wavelength), printed microstrip antennas have found heavy use in military aircraft, missiles, rockets and satellites [6-8]. Microstrip antennas can be divided into two basic types by structure: patch antennas and slot antennas. The slot antenna can be fed by microstrip line, slot line and CPW [9-11]. In this paper a small-size planar antenna and a slot antenna have been designed to analyze the relationship between the resonant performances of these antennas. The radiation properties have also been examined, and it has been observed that they are nearly similar in nature.

2. DESIGN AND STRUCTURE
Fig. 1 shows the basic geometry of the proposed planar patch antenna and slot antenna. The antennas have been designed with a Rogers RT/duroid 5880 substrate (Єr = 2.2) having a height of 1.57 mm. The antenna is a small-size '7'-shaped planar patch antenna having length l = 25 mm and width w = 33 mm on a finite ground plane of length L = 32 mm and width W = 47 mm. The second antenna structure has a similar shape slot as the original patch in its design, which is responsible for its small size and low cost. The dimensions of the slot are Ls = 21 mm and Ws = 31 mm. The introduction of the slot in the structure increases the electrical length of the antenna. Because the dimensions of the patch are finite along the length and width, the fields at the edges of the patch undergo fringing. The amount of fringing is a function of the dimensions of the patch, the height of the substrate and the dielectric constant Єr of the substrate. Because of the fringing effects, the patch of the microstrip antenna electrically looks greater than its physical size [1].

Fig. 1: Geometrical Construction of Proposed Antennas, (a) without slot, (b) with slot.

3. RESULTS AND DISCUSSION
3.1 Resonant Performance Comparison of Planar and Slot Antenna
The IE3D software has been used for evaluating the performance of the proposed antennas. Table 1 shows the resonant performance characteristics of the small-size planar patch antenna and slot antenna. It has been observed that by taking out a similar shape slot from the '7'-shaped planar antenna, there is considerable improvement in the return loss, and also
there is a shifting of resonant frequency towards the lower side, as expected. Above all, the size of the slot antenna has been reduced to 70.74% of its original planar structure, which makes it effectively less expensive compared to the original patch.

Table 1: Comparison of resonant performance characteristics.

Resonant Characteristics    Planar Antenna    Slot Antenna
Resonant Freq. (GHz)        3.545             3.363
Return Loss (dB)            -21.2             -26.39
VSWR                        1.191             1.101
Gain (dBi)                  4.35              4.35

Fig. 2 shows the S-parameters of the proposed small-size planar and slot antennas. From the figure it is clear that the planar patch antenna resonates at 3.545 GHz, whereas the slot antenna resonates at 3.363 GHz and 6.333 GHz but with improved return loss; for analysis purposes we have taken the primary frequency only. The value of VSWR for the slot antenna is also considerably good in comparison with the planar patch antenna, as given in Table 1. With these resonant properties the proposed antenna is feasible for S-band and C-band wireless applications.
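The downward shift of the resonant frequency with increasing electrical length can be cross-checked against the standard rectangular-patch design equations from Balanis [1]. The sketch below is an illustration only: it treats the patch as a plain rectangle with the Section 2 substrate values, not the actual '7'-shaped geometry, so it shows the fringing correction rather than reproducing the simulated frequencies.

```python
import math

C = 3e8  # speed of light, m/s

def effective_epsilon(eps_r, h, w):
    """Effective dielectric constant of a microstrip patch (valid for w/h >= 1)."""
    return (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / w) ** -0.5

def length_extension(eps_eff, h, w):
    """Hammerstad fringing extension delta-L on each radiating edge."""
    return 0.412 * h * ((eps_eff + 0.3) * (w / h + 0.264)) / \
           ((eps_eff - 0.258) * (w / h + 0.8))

# Substrate and patch values from Section 2 (Rogers RT/duroid 5880)
eps_r, h = 2.2, 1.57e-3   # dielectric constant and substrate height (m)
l, w = 25e-3, 33e-3       # physical patch length and width (m)

eps_eff = effective_epsilon(eps_r, h, w)
dl = length_extension(eps_eff, h, w)
l_eff = l + 2 * dl        # electrically, the patch looks longer than l
f_r = C / (2 * l_eff * math.sqrt(eps_eff))
print(f"eps_eff = {eps_eff:.3f}, delta-L = {dl*1e3:.3f} mm, f_r ~ {f_r/1e9:.2f} GHz")
```

The rectangular-cavity estimate lands in the same few-GHz region as the simulated resonances; the '7' shape and the slot shift the actual frequencies, which is why full-wave simulation (IE3D) is used in the paper.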

Fig. 2: S-parameters of the Proposed Patch and Slot Antenna.

3.2 Radiation Patterns and Gain
The radiation characteristics of the proposed planar patch antenna and slot antenna are shown in Fig. 3 and Fig. 4 respectively, and it has been observed that they are nearly similar in nature. The radiation pattern is symmetrical to the antenna axis in the E-plane and nearly omnidirectional in the H-plane.

Fig. 3: Radiation Pattern at 3.545 GHz (a) E-plane (b) H-plane.
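The VSWR and return-loss figures in Table 1 are linked by the standard transmission-line relations. As a quick consistency check (a sketch for the reader, not part of the authors' IE3D workflow), the VSWR can be recomputed from the simulated return loss:

```python
def vswr_from_return_loss(rl_db: float) -> float:
    """Convert return loss (positive dB) to VSWR.

    |Gamma| = 10**(-RL/20); VSWR = (1 + |Gamma|) / (1 - |Gamma|).
    """
    gamma = 10 ** (-rl_db / 20.0)
    return (1 + gamma) / (1 - gamma)

# Values from Table 1 (return loss quoted as -21.2 dB and -26.39 dB)
print(round(vswr_from_return_loss(21.2), 3))   # planar patch -> 1.191
print(round(vswr_from_return_loss(26.39), 3))  # slot antenna -> 1.101
```

Both computed values agree with the tabulated VSWR entries, confirming the table is internally consistent.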
Fig. 4: Radiation Pattern at 3.363 and 6.333 GHz (a) E-plane (b) H-plane.

The gain for both of the proposed antennas is shown in Fig. 5 and Fig. 6. The maximum achievable gain for both antennas is the same, 4.35 dBi, at 3.545 GHz for the patch antenna and at 3.363 GHz for the slot antenna.

Fig. 5: Simulated Gain of Proposed Patch Antenna.

Fig. 6: Simulated Gain of Proposed Slot Antenna.

4. CONCLUSION
A comparative study of the resonant performance of small-size patch and slot antennas has been presented. It has been illustrated that, in addition to the reduction of the antenna size by 70.74% with the introduction of a similar shape slot, there is considerable improvement in return loss and VSWR. The slot antenna also exhibits dual-band characteristics. The simulated results reveal that the proposed antenna is suitable for S-band and C-band wireless applications.

REFERENCES
[1] A. Balanis, "Antenna Theory," John Wiley & Sons, Inc., 1997.
[2] S. Kakkar and S. Rani, "A Novel Antenna Design with Fractal-Shaped DGS Using PSO for Emergency Management," International Journal of Electronics Letters, vol. 1, no. 3, pp. 108-117, 2013.
[3] M. Hirvoven, P. Pusula, K. Jaakkola and K. Laukkanen, "Planar Inverted F-Antenna for Radio Frequency Identification," IEEE Electronic Letters, vol. 40, no. 14, pp. 848-850, 2004.
[4] S. Rani and A. P. Singh, "On the Design and Optimization of New Fractal Antenna Using PSO," International Journal of Electronics, vol. 100, no. 10, pp. 1383-1397, 2012.
[5] F. Yong, X. Zhang, X. Ye, and Y. Rahmat-Samii, "Wide-band E-shaped patch antennas for wireless communication," IEEE Trans. on Antennas and Propagation, vol. 49, no. 7, pp. 1094-1100, Jul. 2001.
[6] S. Kakkar, S. Rani and A. P. Singh, "On the Resonant Behaviour Analysis of Small-Size Slot Antenna with Different Substrates," International Journal of Computer Applications, pp. 10-12, 2012.
[7] J.-Y. Jan and J.-W. Su, "Bandwidth enhancement of a printed wide slot antenna with a rotated slot," IEEE Transactions on Antennas and Propagation, vol. 53, no. 6, pp. 2111-2114, June 2005.
[8] K. Chung, T. Yun, and J. Choi, "Wideband CPW-fed monopole antenna with parasitic elements and slots," Electronics Letters, vol. 40, pp. 1038-1040, 2004.
[9] J. Jan and J. Su, "Bandwidth enhancement of a printed wide slot antenna with a rotated slot," IEEE Transactions on Antennas and Propagation, vol. 53, no. 6, pp. 2111-2114, June 2005.
[10] A. P. Singh and S. Rani, "Simulation and Design of Broad-Band Slot Antenna for Wireless Applications," Proceedings of the World Congress on Engineering, WCE 2011, July 6-8, 2011, Imperial College, London, U.K., vol. 2, pp. 1390-1393.
[11] S. Rani and A. P. Singh, "Design of Slot Antenna with Rectangular Tuning Stub and Effect of Various Substrates on It," Proceedings of 5th International Multi Conference on Intelligent Systems, Sustainable, New and Renewable Energy Technology and Nanotechnology (IISN-2011), Feb 18-20, pp. 238-240, 2011, Institute of Science and Technology, Klawad-133105, Haryana, India.
Artificial Eyes: Brain Port Device


Lovedeep Dhiman Deepti Malhotra
Research Scholar Research Scholar
JMIT, Radaur, India JMIT, Radaur, India
lvdp131313@gmail.com Deeptimalhotra1981@gmail.com

ABSTRACT
We present a device, called the "Brain Port Vision Device", which sends visual input through the tongue in much the same way that seeing individuals receive visual input through the eyes. The device is based on a principle called "electrotactile stimulation for sensory substitution", an area of study that involves using encoded electric current to represent sensory information and applying that current to the skin, which sends the information to the brain.

Keywords: sensory, encoded, substitution

INTRODUCTION

Fig 1: Brain Port Device

The Brain Port is a technology whereby sensory information can be sent to one's brain via a signal from the Brain Port (and its associated sensor) that terminates in an electrode array which sits atop the tongue. It was initially developed by Paul Bach-y-Rita as an aid to people's sense of balance, particularly for stroke victims. Brain Port technology has since been developed for use as a visual aid. For example, the Brain Port has demonstrated its ability to allow a blind person to see his surroundings in polygonal and pixel form. In this scenario, a camera picks up the image of the surroundings; the information is processed by a chip which converts it into impulses that are sent through an electrode array, via the tongue, to the person's brain. The human brain is able to interpret these impulses as visual signals; they are then redirected to the visual cortex, allowing the person to see. Brain Port technology is based on the phenomenon of sensory substitution. For the vision application, visual information is perceived via the sense of touch on the human tongue: the device in the user's mouth sends visual input through the tongue in much the same way that seeing individuals receive visual input through the eyes. All sensory information sent to the brain is carried by nerve fibers in the form of patterns of impulses, and the impulses end up in the different sensory centers of the brain for interpretation. To substitute one sensory input channel for another, you need to correctly encode the nerve signals for the sensory event and send them to the brain through the alternate channel. The brain appears to be flexible when it comes to interpreting sensory input: you can train it to read input from, say, the tactile channel, as visual or balance information, and to act on it accordingly.

1. FUNCTIONING

Fig 2: Brain Port Device Functioning

The Brain Port vision device is an investigational non-surgical assistive visual prosthetic device that translates information from a digital video camera to the tongue through gentle electrical stimulation. The Brain Port vision system consists of a postage-stamp-size electrode array for the top surface of the tongue (the tongue array), a base unit, a digital video camera, and a hand-held controller for zoom and contrast inversion. Visual information is collected from the user-adjustable head-mounted camera (FOV range 3-90 degrees) and sent to the Brain Port base unit. The base unit translates the visual information into a stimulation pattern that is displayed on the tongue. The tactile image is created by presenting white pixels from the camera as strong stimulation, black pixels as no stimulation, and gray levels as medium levels of stimulation, with the ability to invert contrast when appropriate.

1.1 Parts of Brain Port

Brain Port uses the tongue instead of the fingertips, abdomen or back used by other systems. The tongue is more sensitive than other skin areas: the nerve fibers are closer to the surface, there are more of them, and there is no stratum corneum (an outer layer of dead skin cells) to act as an insulator. It requires less voltage to stimulate nerve fibers in the tongue (5 to 15 volts, compared to 40 to 500 volts for areas like the fingertips or abdomen). Also, saliva contains electrolytes, free ions that act as electrical conductors, so it
helps maintain the flow of current between the electrode and the skin tissue.

Fig 3: An Accelerometer

An accelerometer is a device that measures, among other things, tilt with respect to the pull of gravity. The accelerometer on the underside of the 10-by-10 electrode array transmits data about head position to the CPU through the communication circuitry. When the head tilts right, the CPU receives the "right" data and sends a signal telling the electrode array to provide current to the right side of the wearer's tongue. When the head tilts left, the device buzzes the left side of the tongue. When the head is level, Brain Port sends a pulse to the middle of the tongue. After multiple sessions with the device, the subject's brain starts to pick up on the signals as indicating head position (balance information that normally comes from the inner ear) instead of just tactile information. From the CPU, the signals are sent to the tongue via a "lollipop", an electrode array about nine square centimeters that sits directly on the tongue. Each electrode corresponds to a set of pixels. White pixels yield a strong electrical pulse, whereas black pixels translate into no signal. Densely packed nerves at the tongue surface receive the incoming electrical signals, which feel a little like Pop Rocks or champagne bubbles to the user. In the case of the Brain Port vision device, the electronics might be completely embedded in a pair of glasses along with a tiny camera and radio transmitter, and the mouthpiece would house a radio receiver to receive encoded signals from the glasses.

2. WORKING
To produce tactile vision, Brain Port uses a camera to capture visual data. The optical information (light that would normally hit the retina) that the camera picks up is in digital form, and it uses radio signals to send the ones and zeroes to the CPU for encoding. Each set of pixels in the camera's light sensor corresponds to an electrode in the array. The CPU runs a program that turns the camera's electrical information into a spatially encoded signal. The encoded signal represents differences in pixel data as differences in pulse characteristics such as frequency, amplitude and duration. Multidimensional image information takes the form of variances in pulse current or voltage, pulse duration, intervals between pulses and the number of pulses in a burst, among other parameters. The pulses may convey multidimensional information in much the same way that the eye perceives color from the independent stimulation of different color receptors. The electrode array receives the resulting signal via the stimulation circuitry and applies it to the tongue.

2.1 Concept of Electrotactile Stimulation
Electrotactile stimulation for sensory augmentation or substitution is an area of study that involves using encoded electric current to represent sensory information that a person cannot receive through the traditional channel, and applying that current to the skin, which sends the information to the brain. The brain then learns to interpret that sensory information as if it were being sent through the traditional channel for such data. Electrotactile stimulation is a higher-tech method of achieving somewhat similar (although more surprising) results, and it is based on the idea that the brain can interpret sensory information even if it is not provided via the "natural" channel. The idea is to communicate non-tactile information via electrical stimulation of the sense of touch. In practice, this typically means that an array of electrodes receiving input from a non-tactile information source (a camera, for instance) applies small, controlled, painless currents (some subjects report it feeling something like soda bubbles) to the skin at precise locations according to an encoded pattern. The encoding of the electrical pattern essentially attempts to mimic the input that would normally be received by the non-functioning sense. So patterns of light picked up by a camera to form an image, replacing the perception of the eyes, are converted into electrical pulses that represent those patterns of light. When the encoded pulses are applied to the skin, the skin is actually receiving image data; those nerve fibers forward their image-encoded touch signals to the tactile-sensory area of the cerebral cortex, the parietal lobe. Within this system, arrays of electrodes can be used to communicate non-touch information through pathways to the brain normally used for touch-related impulses. All sensory information sent to the brain is carried by nerve fibers in the form of patterns of impulses, and the impulses end up in the different sensory centers of the brain for interpretation. To substitute one sensory input channel for another, you need to correctly encode the nerve signals for the sensory event and send them to the brain through the alternate channel. The brain appears to be flexible when it comes to interpreting sensory input.

Action potentials (APs) thus recorded had amplitudes from 0.1 to 1.0 mV and a 5:1 signal-to-noise ratio (SNR). A circular electrode surrounding the recording site served as the ground reference. Following pre-amplification and band-pass filtering (200-10,000 Hz), a differential amplitude detector identified APs, producing an output pulse whenever the recorded signal entered a predefined amplitude-time window. In the first experiment, electrotactile entrainment currents (iEN) were determined by adjusting the stimulation current from near zero to the minimal value resulting in one AP for each stimulation pulse. These currents exceeded the absolute thresholds (the currents causing occasional APs) by approximately 5%.
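The balance-aid loop in the accelerometer paragraph (tilt right buzzes the right side of the tongue, tilt left the left side, level pulses the middle) reduces to a small decision rule. The dead-band threshold below is an assumed value for illustration only:

```python
def tilt_to_zone(tilt_deg, dead_band=5.0):
    """Map a head-tilt reading (degrees, positive = right) to the tongue
    region the electrode array should stimulate.  dead_band is an assumed
    tolerance inside which the head counts as level.
    """
    if tilt_deg > dead_band:
        return "right"
    if tilt_deg < -dead_band:
        return "left"
    return "middle"

print(tilt_to_zone(12.0))   # right
print(tilt_to_zone(-9.5))   # left
print(tilt_to_zone(1.2))    # middle
```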
3. Experience Observed by Brain Port Users
With the current prototype (arrays containing 100 to 600+ electrodes), study participants have recognized the location and movement of high-contrast objects and some aspects of perspective and depth. In most studies, participants use the device for between two and 10 hours, often achieving the following milestones:
• Within minutes: users perceive where in space stimulation arises (up, down, left, and right) and the direction of movement.
• Within an hour: users can identify and reach for nearby objects, and point to and estimate the distance of objects out of reach.
• Within several hours: users can identify letters and numbers and can recognize landmark information.
• The device provides a new sensory language with which users learn to translate the impulse patterns on the tongue to objects in space. Neuroimaging research suggests that using Brain Port stimulates the visual regions of the brain in blind individuals.

4. WAVEFORM

Fig 4: Relative timing between simultaneous mechanical and electrotactile stimulation. The top trace represents the sinusoidal, 30-Hz, 50-100-μm (0-P) mechanical displacement.

5. RESOLUTION
The resolution of the Brain Port camera varies according to necessity. The images below demonstrate how information from the video camera is represented on the tongue. Today's prototypes have 400 to 600 points of information on a ~3 cm x 3 cm tongue display, presented at approximately 30 frames per second, yielding an information-rich image stream. Our research suggests that the tongue is capable of resolving much higher resolution information, and we are currently working to develop the optimal tongue display hardware and software.

Fig 5: Example showing the difference in resolution.

6. APPLICATIONS
6.1 Current Applications
The current or foreseeable medical applications include:
• Providing elements of sight for the visually impaired.
• Providing sensory-motor training for stroke patients.
• Providing tactile information for a part of the body with nerve damage.
• Alleviating balance problems, posture-stability problems and muscle rigidity in people with balance disorders and Parkinson's disease.
• Enhancing the integration and interpretation of sensory information in autistic people.

6.2 Potential Applications
• Scanning: the Brain Port electrodes would receive input from a sonar device to provide not only directional cues but also a visual sense of obstacles and terrain. Military-navigation applications could extend to soldiers in the field when radio communication is dangerous or impossible, or when their eyes, ears and hands are needed to manage other things that might blow up. Brain Port may also provide expanded information for military pilots, such as a pulse on the tongue to indicate approaching aircraft or to indicate that they must take immediate action. With training, that pulse on the tongue could elicit a faster reaction time than a visual cue from a light on the dashboard, since the visual cue must be processed by the retina before it is forwarded to the brain for interpretation.
• Other potential Brain Port applications include robotic surgery. The surgeon would wear electrotactile gloves to receive tactile input from robotic probes inside someone's chest cavity. In this way, the surgeon could feel what he is doing as he controls the robotic equipment.
• Race car drivers might use a version of Brain Port to train their brains for faster reaction times, and gamers might use electrotactile feedback gloves or controllers to feel what they are doing in a video game.

7. ADVANTAGES AND DISADVANTAGES
7.1 Safety
• Navigating difficult environments, such as parking lots, traffic circles and complex intersections.
• Recognizing quiet moving objects like hybrid cars or bicycles.

7.2 Mobility
• Finding doorways, hall intersections, and the lobby or restaurant in an office or hotel.
• Finding continuous sidewalks, sidewalk intersections and curbs.

7.3 Object Recognition
• Locating people.
• Locating known objects such as shoes, a cane, a coffee mug, or keys.
• The Brain Port device does not replace the sense of sight; it adds to other sensory experiences to give users information about the size, shape and location of objects.
• The device looks like normal sunglasses, so it is unobtrusive.
• It uses a rechargeable battery like that in a normal cell phone.

8. CONCLUSION
Brain Port is indeed one of the finest and most useful technologies. This article offers insights into the pros and cons of the Brain Port technology. The technology is a boon in biomedical engineering, can serve fields like defense, sports, robotics and spy gadgets, and is able to change the lives of physically and mentally impaired persons.

REFERENCES
[1] P. Bach-y-Rita, K. A. Kaczmarek and M. E. Tyler, "A tongue-based tactile display for portrayal of environmental characteristics," in Psychological Issues in the Design and Use of Virtual and Adaptive Environments, L. Hettinger and M. Haas, Eds. Mahwah, NJ: Erlbaum, in press.
[2] K. A. Kaczmarek and M. E. Tyler, "Effect of electrode geometry and intensity control method on comfort of electrotactile stimulation on the tongue," Proc. ASME Dyn. Sys. Contr. Div., Orlando, Florida, pp. 1239-1243, 2000.
[3] P. Bach-y-Rita, M. E. Tyler and K. A. Kaczmarek, International Journal of Human-Computer Interaction, 2003.
[4] L. Prather, "Tongue creates sight for blind: Visually impaired persons will be able to use device to sense images on tongue," Truman State University Index, vol. 98, no. 20, p. 11, 15 February 2007. Retrieved 2009-05-24.
[5] A. Nau, M. Bach and C. Fisher, "Clinical Tests of Ultra-Low Vision Used to Evaluate Rudimentary Visual Perceptions Enabled by the BrainPort Vision Device."
[6] H. Benav, K. U. Bartz-Schmidt, D. Besch, A. Bruckmann, F. Gekeler, U. Greppmaier, … E. Zrenner, "Restoration of useful vision up to letter recognition capabilities using subretinal microphotodiodes," in Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, pp. 5919-5922, 2010.
[7] D. R. Chebat, F. C. Schneider, R. Kupers and M. Ptito, "Navigation with a sensory substitution device in congenitally blind individuals," Neuroreport, vol. 22, pp. 342-347, 2011.
[8] S. B. Eickhoff, M. Dafotakis, C. Grefkes, N. J. Shah, K. Zilles and H. Piza-Katzer, "Central adaptation following heterotopic hand replantation probed by fMRI and effective connectivity analysis," Experimental Neurology, vol. 212, pp. 132-144, 2008.
[9] M. Ptito and R. Kupers, "Cross-Modal Plasticity in Early Blindness," J. Integr. Neurosci., vol. 4, pp. 479-488, 2005.
High Reflectance Multiple-Step Metal Grating for Multichannel Reflector

Ramanpreet Kaur Neetu Sharma Jaspreet Kaur
Student Assistant Professor Assistant Professor
BGIET, Sangrur BGIET, Sangrur RIMT, Mandi Gobindgarh
ramanpreetkaur457@gmail.com neetu4ar@gmail.com jasspreetkaur09@gmail.com

ABSTRACT
A multiple-step subwavelength metal grating with a relief structure is designed and analyzed, in which the profile of the grating has a relief structure with multiple steps. The optical performance of the traditional structure is evaluated and compared in terms of reflectivity over the visible and ultraviolet spectrum with the help of Opti-FDTD. It is shown that, near the ultraviolet band, multiple reflections can be found compared to the traditional metal grating with the same parameters. With these characteristics, the designed metal grating with multiple steps is expected to find applications in optical communication as a multichannel reflector.

Keywords
Metal grating, FDTD, Multichannel reflector.

1. INTRODUCTION
As a vital optical element, gratings play a fundamental role in all types of optical systems. Since the work of Nevdakh et al. [1], the polarization characteristics of subwavelength gratings have become a focus of research. Researchers have concluded that when the grating period is near to or slightly smaller than the wavelength, the grating has good polarization characteristics. With this characteristic, a variety of polarizing devices can be fabricated, such as special wave plates [2], polarizing color filters [8, 9], and polarizing beam splitters [3-7]. But if the wavelength and grating period are far apart, the polarization characteristics are weak. On the other hand, the fabrication technology is very mature. As given by the grating equation, when light propagates onto a subwavelength grating surface, only the zeroth-order diffraction exists [6]. Traditional subwavelength metal grating reflectance is generally about 51% at only one wavelength point in the particular wavelength area.

In the given structure of the traditional metal grating, a glass substrate with refractive index 1.5 is used and the grating is of silicon with refractive index 3.48. The grating period is 1 μm and the grating width 0.5 μm. We have examined the reflectivity of this structure at an input wavelength of 1.55 μm and optimized it using Opti-FDTD in the wavelength range of 400 nm to 900 nm; high reflectivity is obtained at only one wavelength point, which is very poor reflectivity over this range. So, here we want to improve the reflectivity over the visible and ultraviolet region. Various structures have been implemented recently to use gratings as various optoelectronic devices. Integrated planar waveguide circuits are extensively used in optical telecommunication systems, with AWG (arrayed waveguide grating) multiplexers being among the most complex of such circuits [4]. Presently, these commercial waveguide devices are typically made from doped silica glass with a low refractive index contrast. The high-index-contrast (HIC) SOI material system offers the potential of a considerable size and cost reduction of integrated planar waveguide devices, as well as AWGs [5, 11]. Additionally, new applications are arising for miniaturized SOI waveguide devices; for example, a dense high-resolution micro spectrometer has lately been demonstrated [12]. In our design we will show how a multistep grating can be used for multichannel reflections.

Figure 1: Traditional subwavelength metal grating with glass substrate and Si grating, where grating period = 1 μm, grating width = 0.5 μm and height = 0.25 μm.

Figure 2: Reflectivity vs. wavelength characteristic for the traditional grating structure at 1.55 μm input wavelength.
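The claim that only the zeroth diffraction order exists for a subwavelength grating follows directly from the grating equation sin(θm) = sin(θi) + mλ/p. A quick sketch (normal incidence assumed for illustration) counts the propagating orders for the 1 μm period of Figure 1:

```python
def propagating_orders(wavelength_um, period_um, sin_theta_i=0.0):
    """Return the diffraction orders m for which the grating equation
    sin(theta_m) = sin(theta_i) + m * wavelength / period
    yields a real, propagating angle, i.e. |sin(theta_m)| <= 1.
    """
    return [m for m in range(-10, 11)
            if abs(sin_theta_i + m * wavelength_um / period_um) <= 1.0]

# At the 1.55 um input wavelength the 1 um period is subwavelength,
# so only the m = 0 order propagates:
print(propagating_orders(1.55, 1.0))  # [0]
# At 400 nm, inside the studied 400-900 nm band, higher orders appear:
print(propagating_orders(0.40, 1.0))  # [-2, -1, 0, 1, 2]
```

This is why the 1.55 μm excitation sees pure zeroth-order behavior, while the shorter-wavelength end of the studied band can diffract into several orders.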
2. DESIGN AND STRUCTURE
Figure 1 shows a schematic of the traditional subwavelength metal grating; the grating layer used is of silicon (refractive index n = 3.48). Here p is the grating period, w is the grating width and h is the grating height, so the filling factor for this structure can be calculated as f = w/p [10]. This factor plays a significant role in determining the optical properties of gratings. Since different materials have different reflection powers, choosing the right material is a chief problem. In our proposed method we used layered gratings. In order to improve the reflectivity of the traditional subwavelength metal grating, subwavelength metal gratings with relief structures designed using multiple steps are shown in Figure 3. The first layer is of tourmaline (refractive index 1.63), the second layer of purpurite (refractive index 1.84) and the third layer of Lumicera (refractive index 2.08). Here we discuss two-step and three-step gratings. The substrate used has refractive index 1.5. The heights of the different layers are taken as h1, h2 and h3 respectively.

Figure 3: Representation of subwavelength metal gratings with relief structure having multiple steps, (a) two steps, (b) three steps.

3. RESULTS AND DISCUSSION
The subwavelength metal grating used in our structure has a glass substrate, and the grating layers have different refractive indexes which vary layer by layer. The input wavelength used in our design is 1.55 μm. We have calculated the results in the wavelength range of 400 nm to 900 nm, i.e., over the visible and ultraviolet region, and evaluated them using Opti-FDTD. The reflectivity calculated for the two-step grating is about 80%, but for the three-step grating it is about 99.9%, which we can say is a near-perfect result for reflectance. We can see that in our result we got multiple reflections, so we can use our design as a multichannel reflector.

Figure 4: Reflectivity vs. wavelength characteristics for the 2-step grating over the visible and ultraviolet region.

Figure 5: Reflectivity vs. wavelength characteristics for the 3-step grating over the visible and ultraviolet region.

Figure 6: Comparison of reflectivity for the 2-step and 3-step grating structures.

As shown by our reflectivity curves, the reflectivity of the traditional metal grating is about 51%, but with the multiple-step design we can obtain reflectivity of about 99.9%.

4. CONCLUSION
In our design and analysis of the subwavelength metal grating with relief structure using multiple steps, we obtained the result of

79
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

about 99.9% multiple-wavelength reflectivity for the three-step grating and about 80% multiple-wavelength reflectivity for the two-step grating, using OptiFDTD. The SWG with relief structure gives more exciting results than the traditional subwavelength metal grating at an input wavelength of 1.55 μm over the visible and ultraviolet region (400 nm to 900 nm). By further tuning the filling factor, grating period, grating material or sizes, our design can serve further purposes such as a multichannel wavelength filter.

REFERENCES
[1] V. V. Nevdakh, N. S. Leshenyuk, and L. N. Orlov, "Experimental investigation of the polarization properties of reflective diffraction gratings for CO2 lasers," Journal of Applied Spectroscopy, vol. 39, no. 5, pp. 1249–1254, 1983.
[2] D.-E. Yi, Y.-B. Yan, Q.-F. Tan, H.-T. Liu, and G.-F. Jin, "Study on broadband achromatic quarter-wave plate by subwavelength gratings," Chinese Journal of Lasers, vol. 30, no. 5, pp. 405–408, 2003.
[3] L. Zhou and W. Liu, "Broadband polarizing beam splitter with an embedded metal-wire nanograting," Optics Letters, vol. 30, no. 12, pp. 1434–1436, 2005.
[4] S. Janz, P. Cheben, A. Delâge, et al., "Silicon-based integrated optics: waveguide technology to microphotonics," in Proceedings of the Materials Research Society Symposium (MRS '04), vol. 832, pp. 3–14, Boston, Mass, USA, November–December 2004, F1.1.
[5] N. V. Tabiryan, S. R. Nersisyan, T. J. White, T. J. Bunning, D. M. Steeves, and B. R. Kimball, "Transparent thin film polarizing and optical control systems," AIP Advances, vol. 1, no. 2, Article ID 022153, 2011.
[6] Y. Ye, Y. Zhou, H. Zhang, S. Shen, and L. S. Chen, "Polarizing color filter based on a submicron metal grating," Acta Optica Sinica, vol. 31, no. 4, Article ID 0405003, 2011.
[7] P. Cheben, "Wavelength dispersive planar waveguide devices: echelle and arrayed waveguide gratings," in Optical Waveguides: From Theory to Applied Technologies, M. L. Calvo and V. Lakshminarayanan, Eds., chapter 5, CRC Press, London, UK, 2007.
[8] N. Nguyen-Huu, Y.-L. Lo, Y.-B. Chen, and T.-Y. Yang, "Realization of integrated polarizer and color filters based on subwavelength metallic gratings using a hybrid numerical scheme,"
[9] P. Cheben, D.-X. Xu, S. Janz, and A. Delâge, "Scaling down photonic waveguide devices on the SOI platform," in VLSI Circuits and Systems, vol. 5117 of Proceedings of SPIE, pp. 147–156, Maspalomas, Spain, May 2003.
[10] Jing Nie, Hu-Quan Li, and Wen Liu, "Perfect anomalous absorption of TM polarized light in metallic grating situated in asymmetric surroundings," IEEE Photonics Journal, vol. 6, no. 6, December 2014.
[11] A. Densmore, D.-X. Xu, P. Waldron, et al., "A silicon-on-insulator photonic wire based evanescent field sensor," IEEE Photonics Technology Letters, vol. 18, no. 23, pp. 2520–2522, 2006.
[12] P. Cheben, J. H. Schmid, A. Delâge, et al., "A high-resolution silicon-on-insulator arrayed waveguide grating microspectrometer with sub-micrometer aperture waveguides," Optics Express, vol. 15, no. 5, pp. 2299–2306, 2007.
[13] A. Densmore, D.-X. Xu, S. Janz, et al., "Spiral-path high-sensitivity silicon photonic wire molecular sensor with temperature-independent response," Optics Letters, vol. 33, no. 6, pp. 596–598, 2008.
[14] Zhongfei Wang, Dawei Zhang, Qi Wang, Banglian Xu, Qingyong Tang, Yuanshen Huang, and Songlin Zhuang, "High-transmittance subwavelength metal grating with relief structure composed of multiple steps," The Scientific World Journal, vol. 2014, 18 February 2014.


Speech Recognition Using Neural Network


Pankaj Rani Sushil Kakkar Shweta Rani
BGIET, Sangrur BGIET, Sangrur BGIET, Sangrur
8699500928h@gmail.com kakkar778@gmail.com shwetaranee@gmail.com

ABSTRACT
Speech recognition is a subjective phenomenon. Despite a huge amount of research in this field, the process still faces many problems, and different techniques are used for different purposes. This paper gives an overview of the speech recognition process. In this work it is shown how speech signals are recognized using the back propagation algorithm in a neural network. Voices of different persons of various ages are recorded in a silent, noise-free environment with a good quality microphone. The same sentence, of duration 10–12 seconds, is spoken by each of these persons. The spoken sentences are then converted into wave format, and features of the recorded samples are extracted by training these signals using LPC. Learning is required whenever we do not have complete information about the input or output signal. At the input stage, 128 samples of each sentence are applied and passed through hidden layers to the output layer. These networks are trained to perform tasks such as pattern recognition, decision making and motor control.

Key words: Neural network, speech recognition, back propagation, training algorithm.

1. INTRODUCTION
Speech could be a useful interface to interact with machines, and research to improve this type of communication has been going on for a long time. With the evolution of computational power, it has become possible to build systems capable of real-time conversion. But despite the good progress made in this field, speech recognition still faces many problems. These problems are due to variations in the speaker: age, sex, speed of the speech signal and the emotional condition of the speaker all cause differences in pronunciation between persons. The surroundings can add noise to the signal, and sometimes the speaker adds noise as well [4]. In the speech recognition process, an acoustic signal captured by a microphone or telephone is converted to a set of characters. Automatic speech recognition (ASR) is described as an integral part of the future human-computer interface; hence, humans could use speech as a useful interface for interacting with machines. Humans always want to achieve natural, pervasive and simultaneous computing. Elham S. Salam [13] compared the effect of visual features on the performance of a speech recognition system for people with speech disorders against an audio-only speech recognition system; different visual feature selection methods are compared and isolated English words are recognized. The recognition of a simple alphabet may be taken as an easy task for human beings, but due to problems such as high acoustic similarity among certain groups of letters, speech recognition can be a challenging task [11]. The use of conventional Multi-Layer Perceptron neural networks is increasing day by day; these networks work well as effective classifiers for vowel sounds with stationary spectra. Feed-forward multi-layer neural networks, however, are not able to deal with time-varying information like the time-varying spectra of speech sounds. This problem can be overcome by incorporating a feedback structure in the network.

1.1 Procedure of speech recognition process
Speech recognition is mainly done in two stages, named training and testing. Before these, some basic, necessary techniques are applied to the speech signals. The overall pipeline is: different voice signals → pre-processing → feature extraction → classification → output (one detected voice).

Fig.1: Block diagram of speech recognition process

In this process the voice of different persons is recorded by a good quality microphone in an environment where no noise is present. These speech signals are then pre-processed using suitable techniques like filtering, entropy-based end point detection and Mel Frequency Cepstrum Coefficients; these techniques make the speech signal smoother and help in extracting only the required signal, free of noise. Samples are recorded with a microphone; besides the speech signals, they contain a lot of distortion and noise because of the quality of the microphone. First of all, low and high frequency noise is eliminated by performing some digital filtering. The speech signal lies mainly between 300 Hz and 750 Hz. Identical waveforms are never produced by recorded samples: the background noise, length and amplitude may vary. 128 samples are applied


with a sampling rate of 11 kHz, which makes it possible to represent all speech signals.

1.2 Speech Classification
Classification of the speech signal is a very important phenomenon in the speech recognition process. Different models have been introduced by different authors to classify speech, but in this work a neural network is used for classification. A neural network consists of a number of interconnected neurons: processing units which are used for the processing of speech signals. Very simple techniques like pre-processing and filtering are handled by these units. A non-linear weight is computed simply by each unit, and the result is broadcast over its outgoing connections to the other units. Learning is the process in which the values of the appropriate weights are settled; it is necessary whenever we do not have complete information about the input and output signals. The weights are adjusted by the proposed algorithm to match the input and output characteristics of the network with the desired characteristics. The desired response has to be assumed by ourselves with the help of a teacher. In this work, features of the pre-processed speech signals are extracted using MFCC and LPC; this is called training. The networks are usually trained to perform tasks such as pattern recognition, decision making and motor control. Training of the unit is accomplished by adjusting the weights and thresholds; for classification, an SVM classifier is used. Feature extraction may be of two types: temporal analysis and spectral analysis. In temporal analysis, the waveform of the speech signal itself is analyzed; in spectral analysis, the waveform of the speech signal is analyzed through a spectral representation. Besides all this, other tools that are necessary to study are linear predictive coding (LPC) and Mel Frequency Cepstrum Coefficients (MFCC). Linear predictive coding is a tool used in audio and speech processing to represent the spectral envelope of a digital signal in compressed form; LPC is based on the idea of expressing each sample of the signal as a linear combination of the previous samples. Mel frequency cepstrum coefficients are preferred for extracting the features of the speech signal: they transform the speech signal into the frequency domain, and hence training [3] vectors are generated. Another reason for using this method is that human hearing is based on frequency analysis. Before obtaining the MFCC of a speech signal, pre-emphasis filtering is applied to the signal with a finite impulse response filter given by

    H_pre(z) = Σ_{k=0}^{n} a_pre(k) z^(-k)

which for a single coefficient reduces to

    H_pre(z) = 1 + a_pre z^(-1)

The value of a_pre is usually taken between -1.0 and -0.4.

Testing is the process in which different speech signals are tested using the special type of neural network; it is the main step in the speech recognition process. Testing of the speech signals is done after training.

1.3 Speech Recognition Process
Recognition of speech is more difficult than recognition of its printed version, and various techniques have to be used. The basic procedure is shown by the block diagram, which illustrates how speech can be recognized using the different processes. Speech is used effortlessly by humans as a mode of communication with one another, and people want the same kind of easy and natural communication with machines; speech is therefore preferred as an interface over other interfaces like mouse and keyboard. The speech recognition process is a somewhat difficult and complicated phenomenon. Speech recognition systems can further be divided into various classes; they may be classified based on the model of the speaker and the type of vocabulary. A typical speech sentence consists of two main parts: one part carries the speech information, while the other carries the silent and noise sections between the utterances, without any verbal information. At the input side, different voice signals are applied. Before applying these signals to the neural network, pre-processing of the signals is done using filtering, entropy-based end point detection and MFCC. The audio signals are converted into particular waveforms. The next step is to extract the features of the voice signals with a special kind of neural network; the neural network acts like the human brain. The neural networks are trained, and at the end, testing of the voice signals is done and the tested signal is detected as the output. The whole working procedure, step by step, is shown in the block diagram of the speech recognition process.

1.4 Voice Individuality
Before trying to solve the problem described in the goal of this project, we must understand the characteristics of the different voice signals. Acoustic parameters have the greatest influence on voice individuality. Acoustic parameters may be divided into two types: time dimensions, which represent the pitch or fundamental frequency, and frequency dimensions, which represent the vocal tract resonance. We can consider voice signals as quasi-periodic signals. Pitch may be defined as the fundamental frequency of the voice signal. The average pitch, speed, time pattern, gain and fluctuation change from one individual to another and also within the speech of the same speaker. In fact, the frequency response of the vocal tract filter is the shape and gain of the spectral envelope of the signal. Some research on voice individuality has concluded that pitch fluctuation is the most important factor in voice individuality, with the formant frequencies in second place; many other studies conclude that the spectral envelope has the greatest influence on the perception of voice individuality. From the above discussion it is concluded that there is no single parameter that can alone define a speaker: a group of parameters, depending on the nature of the speech material, varies from one individual to another, each with its respective importance.
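The two tools just described can be sketched in a few lines of Python. This is an illustrative fragment, not the paper's MATLAB implementation: the pre-emphasis coefficient a = 0.95 and the LPC order p = 4 are assumed values, and the LPC coefficients are estimated here by plain least squares over the idea stated above (each sample expressed as a linear combination of the previous samples):

```python
import numpy as np

def pre_emphasis(x, a=0.95):
    # y[n] = x[n] - a * x[n-1]  (first-order FIR pre-emphasis)
    return np.append(x[0], x[1:] - a * x[:-1])

def lpc(x, p=4):
    # least-squares LPC: predict x[n] from the previous p samples
    rows = np.array([x[i - p:i][::-1] for i in range(p, len(x))])
    target = x[p:]
    coeffs, *_ = np.linalg.lstsq(rows, target, rcond=None)
    return coeffs

t = np.arange(256)
signal = np.sin(0.2 * np.pi * t)      # toy stand-in for a speech frame
emphasized = pre_emphasis(signal)
a_lpc = lpc(emphasized)               # p = 4 predictor coefficients
```

For a pure sinusoid the predictor is exact, since a sinusoid satisfies a two-term linear recurrence; real speech frames leave a residual that the later stages work with.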


2. CONVERSION OF SPEECH SIGNALS INTO WAVES
The samples of the speech signals are converted into wave format. This is the most general way of representing the signal, but it has the disadvantage that it cannot represent speech-related information; this problem is addressed by the pre-processing technique. The representation shows the change in amplitude spectra over time in three dimensions: the x-axis represents time, the y-axis represents frequency, and the z-axis is a color intensity representing the magnitude of the signal. It is not possible to start samples at exactly the same time, because different persons pronounce sentences differently, slowly or fast, and as a result the intensities at different times might be different.

Fig.2 Wave format of speech signal of a male

This figure shows the wave format of a speech signal obtained in the MATLAB implementation. The change in amplitude spectra over time is shown by the time-domain representation. The complete sample is split into time frames with almost 50% overlap, and we calculate the short-term frequency for each frame. Although the spectrogram provides a good visual representation of the speech signal, variation between samples is still there: samples never start at exactly the same time, sentences are pronounced slower or faster by different persons, and as a result they might have different intensities at different times.

2.1 Algorithm Used
The algorithm used here is back propagation. The back propagation algorithm was originally introduced in the 1970s. In several neural networks described here, back propagation works faster than the earlier approaches used for learning, making it possible to use neural networks to solve problems which had not been solved for a long time. In today's world, the back propagation algorithm is the workhorse of learning in neural networks. Besides this, it gives us detailed insight into how changing the weights and biases changes the overall behavior of the network.

2.2 Role Played by Neural Network in Speech Recognition Process
A neural network works like a human brain: these networks perform the learning phenomenon. A neural network is a computational model inspired by the animal central nervous system which is capable of machine learning as well as pattern recognition. Artificial neural networks are generally presented as systems of interconnected neurons, and they have been used to solve a wide variety of tasks that are difficult to solve using ordinary rule-based programming, including computer vision and speech recognition. This type of network potentially contains a large number of simple processing units, roughly analogous to neurons in the brain. All these units operate simultaneously; apart from the neural network itself, there is no other processor that oversees their activity, and these units perform all the computations in the system. A scalar function is computed simply by each unit, and the result is broadcast to its neighboring units. The curse-of-dimensionality problem that besets many attempts to model non-linear functions with a large number of variables is also kept in check by neural networks. Neural network users collect representative data and then invoke training algorithms, which learn the structure of the data automatically [6]. Lakshami Kanaka [6] described a method for estimating a continuous target for training patterns of neural networks based on the generalized regression neural network, and compared its performance with that of linear and multilayer perceptrons. There are two input units in a network by which data is received from the environment, hidden layers by which the transformation of the data is represented internally, and output units whose function is to take the decision. It is possible to train recurrent neural networks for sequence labeling problems, where the input and output alignment is not known, by end-to-end training methods such as connectionist temporal classification. Hence the neural network plays a great role in recognizing speech in this work.

Fig.3: A Basic Neural Network Used for Training

This is a neural network with an input stage, ten hidden layers and one output stage. 128 samples of each sentence are applied, out of which 70 are used for training and 58 for testing. All the work is carried out in MATLAB with coding. When the number of words to be recognized increases, the number of neurons in the hidden layers also has to increase; the number of neurons required is almost equal to the number of words to be recognized. Whenever we increase the number of hidden layers, the training time grows significantly. The quality of the signal pre-processing should be good, because the performance of the network depends mainly on this unit.

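The back propagation weight updates described above can be sketched as a minimal NumPy fragment. This is a toy two-layer network trained on XOR-style patterns, not the MATLAB toolbox network with ten hidden layers used in this work; the layer sizes, learning rate and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy patterns standing in for speech feature vectors (XOR truth table)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden = 4                                  # small hidden layer for this toy task
W1 = rng.normal(size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, 1)); b2 = np.zeros(1)
lr = 0.5

mse0 = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)  # error before training

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

mse = np.mean((out - y) ** 2)  # mean squared error after back propagation
```

The mean squared error after training is the same performance measure discussed in the results section: it starts at a large value and is driven down by the repeated weight adjustments.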

3. NEURAL NETWORK IMPLEMENTATION
Neural networks have been used by many authors in the past. For the implementation in this work, the MATLAB neural network toolbox has been used to create, train and simulate the network. For each sentence, 128 samples are used; of these, 70 are used for training while the other 58 are used for testing the network. The trained network can also be tested with real-time input from a good quality microphone. The setup of MFCC and a neural network for such an experiment is presented by T. B. Adam [11], who took speech data from the TI46 isolated-alphabet database called TI ALPHA and set the number of output nodes to nine in order to recognize the nine letters of the E-set. The number of hidden neurons can be calculated using the formula h = √(n·m), where n is the number of input nodes and m is the number of output nodes [11].

4. RESULTS & DISCUSSION
From the above data, it can be said that communication can be made modular and efficient by speech. Beyond efficiency, speech makes humans comfortable and familiar; other modalities demand more concentration and restricted movement due to unnatural positions. Speech is identified by machines through the process of Automatic Speech Recognition. In the conventional method of speech recognition, a feature vector helps in representing each word. Artificial Neural Networks (ANN) are biologically inspired tools that process information; a priori information about the speech process is not required by artificial neural networks.

The best result is obtained at epoch 4 in this work. 100% accuracy is not achieved in any of the cases. The best training performance is 2.2596e-20 at epoch 4.

Fig.4: Best Training Performance Obtained

It is shown that, in this work, the best training performance is obtained at epoch 4. We handle the adjustment with a learning rule from which we can also derive a training algorithm for a specific task. Data is trained using the neural network toolbox, and the remaining samples of the 70 are simulated against this trained neural network. The performance of the neural network is seen when it runs. The mean square error (MSE) is a network performance function: the performance of the network is measured according to the mean of the squared errors, defined as the average squared difference between the outputs and the targets. A value of zero means there is no error; a lower value means a better result. In the graph, the mean square error of the network starts at a large value and decreases to a small value.

Table 1. Results obtained for different samples

    Age group   Tested samples   Passed   Failed   Percentage
    3-10        5                4        3        90%
    10-20       4                2        2        80%
    20-30       6                5        1        95%
    30-40       5                4        1        97.23%
    40-50       5                3        2        86.5%

The above table shows the performance on speech signals of different persons of various ages, including male and female. From the results it is concluded that a better percentage of accuracy is obtained for speech signals recorded in a closed room than for those recorded in an open room.

5. CONCLUSION
From the presented work, it is concluded that neural networks can be very powerful models for the classification of speech signals. Some very simplified models can recognize a small set of words. The performance of the neural networks is impacted largely by the pre-processing technique. On the other hand, it is observed that Mel Frequency Cepstrum Coefficients are a very reliable tool for the pre-processing stage and provide very good results. Satisfying results are achieved by the use of both the multilayer feed-forward and the radial basis function neural network with the back propagation algorithm when Mel Frequency Cepstrum Coefficients are used.

REFERENCES
1. John Paul Hosom, Ron Cole and Mark Fanty, "Speech Recognition using Neural Networks," vol. 1, July 1999.
2. Antanas Lipeika, Joana Lipeika, Loimutis Telksnys, "Development of Isolated Word Speech Recognition System," vol. 30, no. 1, pp. 37–46, 2002.
3. Ben Gold and Nelson Morgan, "Speech and Audio Processing," Wiley edition, New Delhi, 2007.
4. Wouter Gevaert, Georgi Tsenov, Valeri Mladenov, "Neural Network used for Speech Recognition," Journal of Automatic Control, vol. 20, pp. 1–7, 2010.


5. Santosh K. Gaikwad, Bharti W. Gawali, Pravin Yannawar, "A Review of Speech Recognition Technique," IJCA, vol. 10, no. 3, November 2010.
6. Lakshami Kanaka, Venkateswarlu Revada, Yasautcha Kumari Rambatla and Koti Verra NagayaAnde, IJCST, vol. 8, issue 2, March 2011.
7. Nidhi Srivastava, "Speech Recognition using Artificial Neural Networks," IJEST, vol. 3, issue 3, May 2014.
8. Dr. R. L. K. Venkates, Dr. R. Vasantcha Kumari, G. Vani Jayasatu, "Speech Recognition using A Radial Basis Function Neural Networks," pp. 441–445, April 2011, 3rd International Conference on Electronics Computer Technology, IEEE.
9. Vansantha Kumari, G. Vani, Dr. R. L. K. Vankateswarlu, Dr. R. Jayasar, "Speech Recognition by using Recurrent Neural Network," IJSER, vol. 2, issue 6, June 2011.
10. Abdul Syapiq B Abdul Sukor, "Speaker Identification System using Mel Frequency Cepstrum Coefficient Procedure and Noise Reduction Method," Master Thesis, University Tun Hussein Onn Malaysia, January 2012.
11. T. B. Adam, Md Salam, "Spoken English Alphabet Recognition with MFCC and Back Propagation Neural Network," IJCA, vol. 42, no. 12, March 2012.
12. R. B. Shinde, Dr. V. P. Pawar, "Vowel Classification Based on LPC and ANN," IJCA, vol. 50, no. 6, July 2012.
13. Elhan S. Salam, Reda A. El-Khoribi, Mahmoud E. Shoman, "Audio Visual Speech Recognition For People with Speech Disorder," vol. 96, no. 2, June 2014.


BER Analysis of Turbo Coded OFDM for different


Digital Modulation Techniques
Gurwinder Kaur
Electronics and Communication Section, YCOE, Talwandi Sabo, Guru Kashi Punjabi University Campus, Patiala

Amandeep Kaur
Computer Science and Engineering, Baba Hira Singh Bhattal Institute of Engineering and Technology, Lehragaga

ABSTRACT
With the rapid growth of digital communication in recent years, the need for high speed data transmission has increased. A common problem found in high speed communication is Inter Symbol Interference (ISI). ISI occurs when a transmission interferes with itself and the receiver cannot decode the transmission correctly; it becomes a limitation in high data rate communication. OFDM avoids this problem by sending many low speed transmissions simultaneously, and is a promising modulation for achieving high rates in a mobile environment due to its resistance to inter symbol interference. In this paper we design a turbo coded OFDM system which can be used for digital broadcasting with powerful error control, and we also simulate this OFDM model for BER performance under different digital modulation techniques.

Keywords: Turbo coded OFDM, digital modulation techniques, BER (bit error rate).

1. INTRODUCTION
Turbo codes are powerful error control codes of practical importance, with error correcting capability very close to the theoretical performance limits. A variable-rate, variable-power MQAM scheme has been shown to exhibit a 20 dB power gain over non-adaptive modulation on a flat Rayleigh fading channel. Adaptive turbo coded modulation adapts the transmit rate by adapting the channel encoder itself. Owing to the complexity of implementing the MAP decoder, Log-MAP and Max-Log-MAP decoders have been designed. The performance of WiMAX using turbo codes and using convolutional product codes has been measured; the AWGN channel gives a gain of approximately 22 dB over a Markov channel. Turbo codes give an improvement in BER performance over convolutional codes, and they perform better at low SNR. By dividing the bandwidth into narrowband flat fading sub-channels, OFDM is more resistant to frequency selective fading than single carrier systems are. Channel coding plays a very important role in OFDM system performance: a turbo code can achieve performance within 3 dB of channel capacity, and random coding of long block lengths may also perform close to channel capacity. MAP and SOVA are the primary decoding strategies for turbo codes. The interleaver randomizes the data sequence, and BER performance is improved if the length of the interleaver is increased [6]. The basic turbo encoder is a parallel concatenation of two RSC codes separated by an interleaver. A reliability value introduced to overcome the drawbacks of the Viterbi decoding used with this encoding is known as the soft output Viterbi algorithm (SOVA) [9]. Interleaver design is both tedious and exhaustive: the encoding of input sequences determines the minimum codeword weight and the number of codewords with that weight, the goal being the largest minimum codeword weight with the lowest number of codewords of that weight [11].

In ASK, the amplitude of the carrier is changed in response to the information and all else is kept fixed. In FSK, the frequency of the carrier is switched between two different frequencies depending on the logic state of the input bit stream. PSK is a digital modulation scheme that conveys data by changing, or modulating, the phase of a reference signal. In QAM, the outputs of two modulators are algebraically summed, and the result is a single signal to be transmitted, containing the in-phase (I) and quadrature (Q) information. In PAM, the amplitude of the carrier is varied in proportion to sample values of a message signal while the pulse duration is held constant [2].

The structure of the rest of the paper is as follows: the second section describes the turbo coded OFDM model; the third section explains the results of the BER analysis for different modulation techniques; in the fourth section the result is concluded.

2. TURBO CODED OFDM MODEL
OFDM [7] is a multi-carrier modulation technique in which a single high rate data stream is divided into multiple low rate data streams, modulated on sub-carriers which are orthogonal to each other. The block diagram of turbo coded OFDM is shown in Figure 2.1. The random data generator (RDG) is used to generate random binary data in serial format. The convolutional encoder (C.Er) is a constituent of the turbo encoder. The interleaver (IL) increases resistance to frequency selective fading. After binary-to-digital conversion (BDC), the next block provides the digital modulation (DM). Then the pilot insertion (PI) block is used so that the amplitude and phase of the transmitted signal can be recovered. The IFFT block converts the frequency domain data into a time domain signal while maintaining the orthogonality of the subcarriers. Cyclic extension (CY.E) is used to mitigate the ISI effect in the original OFDM symbol; the length of the cyclic prefix is chosen as 1/4 of the length of the symbol.

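The IFFT and cyclic-extension steps of the model can be sketched in a few lines of Python. This is a simplified fragment rather than the authors' MATLAB simulation: N = 64 subcarriers and the unit-magnitude subcarrier data are assumed values, while the prefix length of 1/4 of the symbol follows the text:

```python
import numpy as np

N = 64                # assumed number of subcarriers per OFDM symbol
cp_len = N // 4       # cyclic prefix is 1/4 of the symbol length, as in the text

rng = np.random.default_rng(1)
freq_data = np.exp(2j * np.pi * rng.random(N))   # unit-magnitude data on each subcarrier

time_symbol = np.fft.ifft(freq_data)                        # IFFT: frequency -> time domain
tx = np.concatenate([time_symbol[-cp_len:], time_symbol])   # prepend the cyclic extension

# ideal receiver (no channel): drop the prefix, FFT back to the frequency domain
recovered = np.fft.fft(tx[cp_len:])
```

With no channel distortion the recovered subcarrier data matches the transmitted data exactly; the point of the prefix is that, over a dispersive channel shorter than cp_len samples, the ISI falls entirely inside the discarded prefix.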

[Figure 2.1. Turbo coded OFDM Model. Transmitter: RDG -> C.Er -> IL -> BDC -> DM -> PI -> IFFT -> CY.E -> AWGN channel; receiver: Re.C.E -> FFT -> P.SYN -> Dmod -> DBC -> D.IL -> Turbo Decoder.]

The signal is transmitted through the AWGN channel, and the receiver section receives the signal with proper phase synchronisation by the phase synchronisation block (P.SYN). The cyclic extension is removed by the Re.C.E. block. After digital-to-binary conversion (DBC), de-interleaving (D.IL) is done and the data are passed to the Turbo decoder. Bit error analysis plots are then obtained using different digital modulation techniques (ASK, PSK, DPSK, M-QAM and PAM).

3. RESULTS

Turbo coded OFDM has powerful error-correcting capabilities due to its multi-path delay spread tolerance and efficient spectral usage by allowing overlapping of subcarriers in the frequency domain. All the simulations are done to achieve a desired BER of 10^-4.

Table 3.1 SIMULATION PARAMETERS

Sr. No. | Parameter                | Value
1       | Modulation               | QAM, PAM, PSK, DPSK
2       | Coding Technique         | Turbo Coding
3       | No. of subcarriers       | 64
4       | Data Frame Size          | 96
5       | No. of frames            | 100
6       | No. of Pilot Subcarriers | 04
7       | IFFT Size                | 64
8       | Cyclic Prefix Length     | 16
9       | Channel Type             | AWGN

The simulation parameters adopted are listed in Table 3.1. The BER performance of Turbo coded OFDM for various modulation techniques such as QAM, PAM, PSK and DPSK in an AWGN channel is plotted. Moreover, the BER performance of QAM modulated OFDM is plotted after removing the cyclic extension, so as to measure the actual effect of this parameter on BER.

3.1 BER Performance of QAM Modulated Turbo Coded OFDM

BER performance using the Quadrature Amplitude Modulation technique in the turbo coded OFDM system with an AWGN channel, under the simulation parameters of Table 3.1, is shown in Figure 3.1.

[Figure 3.1. BER vs. SNR for QAM modulated turbo coded OFDM.]

The BER decreases monotonically as the SNR increases and reduces to 0 at an SNR of around 20 dB for QAM. A closer analysis of the BER and SNR values: the BER at an SNR of 2 dB is 0.4412 and at 4 dB it is 0.3844, a difference of 0.0568. The BER at 8 dB is 0.3183 and at 10 dB it is 0.2296, a difference of 0.0887, which is larger than in the case above. The BER at 14 dB is 0.0346 and at 16 dB it is 0.0059, a difference of 0.0287. Turbo codes perform better at low SNR.

3.2 BER Performance of PAM Modulated Turbo Coded OFDM

BER performance using the Pulse Amplitude Modulation technique in the turbo coded OFDM system with an AWGN channel, under the simulation parameters of Table 3.1, is shown in Figure 3.2. The BER decreases monotonically as the SNR increases and reduces to 0 at an SNR of around 30 dB. A closer analysis of the BER and SNR values: the BER at an SNR of 2 dB is 0.4231 and at 4 dB it is 0.4167, a difference of 0.0064. The BER at an SNR of 8 dB is


0.3968 and at 10 dB it is 0.3760, a difference of 0.0208, which is larger than in the case above. The BER at 14 dB is 0.3386 and at 16 dB it is 0.2840, a difference of 0.0546. The BER at 22 dB is 0.0393 and at 24 dB it is 0.0085, a difference of 0.0308. Turbo codes perform better at low SNR. The differences in the BER values increase until the SNR is about 18 dB and decrease above 18 dB.

[Figure 3.2. BER vs. SNR for PAM modulated turbo coded OFDM.]

3.3 BER Performance of PSK Modulated Turbo Coded OFDM

BER performance using the PSK modulation technique in the turbo coded OFDM system with an AWGN channel, under the simulation parameters of Table 3.1, is shown in Figure 3.3.

[Figure 3.3. BER vs. SNR for PSK modulated turbo coded OFDM.]

The BER decreases monotonically as the SNR increases and reduces to 0 at an SNR of around 25 dB. A closer analysis of the BER and SNR values: the BER at an SNR of 2 dB is 0.4861 and at 4 dB it is 0.4648, a difference of 0.0213. The BER at 8 dB is 0.4533 and at 10 dB it is 0.4066, a difference of 0.0467, which is larger than in the case above. The BER at 14 dB is 0.3443 and at 16 dB it is 0.2963, a difference of 0.0480. The BER at 22 dB is 0.0204 and at 24 dB it is 0.0021, a difference of 0.0183. Turbo codes perform better at low SNR. The differences in the BER values increase until the SNR is about 18 dB and decrease above 18 dB.

3.4 BER Performance of DPSK Modulated Turbo Coded OFDM

BER performance using the DPSK modulation technique in the turbo coded OFDM system with an AWGN channel, under the simulation parameters of Table 3.1, is shown in Figure 3.4.

[Figure 3.4. BER vs. SNR for DPSK modulated turbo coded OFDM.]

The BER decreases monotonically as the SNR increases and reduces to 0 at an SNR of around 30 dB. A closer analysis of the BER and SNR values: the BER at an SNR of 2 dB is 0.4946 and at 4 dB it is 0.4906, a difference of 0.0040. The BER at 8 dB is 0.4639 and at 10 dB it is 0.4532, a difference of 0.0107, which is larger than in the case above. The BER at 14 dB is 0.3979 and at 16 dB it is 0.3738, a difference of 0.0241. The BER at 22 dB is 0.1470 and at 24 dB it is 0.0472, a difference of 0.0998. Turbo codes perform better at low SNR. The differences in the


BER values increase until the SNR is about 24 dB and decrease above that.

3.5 BER Performance of QAM Modulated Turbo OFDM without Cyclic Extension

BER performance using the Quadrature Amplitude Modulation technique in the turbo coded OFDM system with an AWGN channel, with the cyclic extension removed and the other simulation parameters of Table 3.1 unchanged, is shown in Figure 3.5.

[Figure 3.5. BER vs. SNR for QAM without using cyclic extension.]

The BER decreases monotonically as the SNR increases and reduces to 0 at an SNR of around 20 dB for QAM. A closer analysis of the BER and SNR values: the BER at an SNR of 2 dB is 0.4428 and at 4 dB it is 0.3932, a difference of 0.0496. The BER at 8 dB is 0.3404 and at 10 dB it is 0.2507, a difference of 0.0897, which is larger than in the case above. The BER at 14 dB is 0.0363 and at 16 dB it is 0.0042, a difference of 0.0321.

All of these BER values are higher than those of QAM modulated OFDM with the cyclic extension. BER performances under various modulation techniques in an AWGN channel with channel coding have thus been evaluated. The BER decreases monotonically when channel coders are used and reduces to 0 at an SNR of around 20 dB for QAM, around 26 dB for PSK, and around 30 dB for PAM and DPSK. For QAM modulated OFDM plotted after removing the cyclic extension, the BER reduces to 0 around 20 dB, but compared with the cyclic-extension case its BER is higher between 0 and 20 dB SNR. QAM modulated Turbo OFDM with cyclic prefix performs better than PAM, PSK and DPSK Turbo OFDM and than QAM Turbo OFDM without cyclic prefix. Gain (expressed in decibels) is the reduction in the required SNR to achieve a specified BER.

Table 3.2: Analysis of BER for various modulation techniques with SNR

SNR (dB) | QAM    | PAM    | PSK    | DPSK   | QAM (without cyclic extension)
0        | 0.4687 | 0.4454 | 0.4861 | 0.4950 | 0.4732
2        | 0.4412 | 0.4231 | 0.4861 | 0.4946 | 0.4428
4        | 0.3844 | 0.4167 | 0.4648 | 0.4906 | 0.3932
8        | 0.3183 | 0.3968 | 0.4533 | 0.4639 | 0.3404
10       | 0.2296 | 0.3760 | 0.4066 | 0.4532 | 0.2507
12       | 0.1385 | 0.3555 | 0.3884 | 0.4291 | 0.1481
14       | 0.0346 | 0.3386 | 0.3443 | 0.3979 | 0.0363
16       | 0.0059 | 0.2840 | 0.2963 | 0.3738 | 0.0042
18       | 0.0002 | 0.2319 | 0.1955 | 0.3382 | 0.0007
20       | 0      | 0.1169 | 0.0979 | 0.2528 | 0
22       | 0      | 0.0393 | 0.0204 | 0.1470 | 0
24       | 0      | 0.0085 | 0.0021 | 0.0472 | 0
26       | 0      | 0.0003 | 0      | 0.0096 | 0
28       | 0      | 0.0001 | 0      | 0.0011 | 0

As the BER reduces to 0 at an SNR of around 20 dB for QAM, around 26 dB for PSK, and around 30 dB for PAM and DPSK, there is a gain in SNR that differs between the modulated models. For a BER of 0, the gain of the QAM modulated system over the PAM modulated system is around 10 dB; it is 6 dB over the PSK modulated system and 10 dB over the DPSK modulated system.

4. CONCLUSIONS & FUTURE SCOPE

In this paper, the results clearly show that the turbo code has the capability of reaching very low bit error rates even at small signal-to-noise ratios. The objective of the iterative process, to further reduce bit errors, is accomplished by evaluating BER vs. SNR under various modulation techniques. QAM modulated Turbo OFDM with cyclic prefix performs better than PAM, PSK and DPSK Turbo OFDM and than QAM Turbo OFDM without cyclic prefix. QAM modulated OFDM plotted with the cyclic extension shows better BER performance than that without the cyclic extension.

Future work will include the effect of multipath fading channels: the Nakagami channel model can be studied, as it represents severe ISI conditions due to its nature of producing bursts of errors and is more realistic for outdoor mobile applications. Channel equalization techniques can also help to improve the performance of the system.
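The SNR gains quoted above can be read directly off Table 3.2. A small Python sketch, with the data reproduced from the table (PAM and DPSK only reach zero BER near 30 dB, beyond the tabulated rows, so only QAM and PSK are asserted here):

```python
# BER-vs-SNR values reproduced from Table 3.2 (high-SNR subset)
snr = [14, 16, 18, 20, 22, 24, 26, 28]
ber = {
    "QAM":  [0.0346, 0.0059, 0.0002, 0, 0, 0, 0, 0],
    "PAM":  [0.3386, 0.2840, 0.2319, 0.1169, 0.0393, 0.0085, 0.0003, 0.0001],
    "PSK":  [0.3443, 0.2963, 0.1955, 0.0979, 0.0204, 0.0021, 0, 0],
    "DPSK": [0.3979, 0.3738, 0.3382, 0.2528, 0.1470, 0.0472, 0.0096, 0.0011],
}

def snr_at_zero(mod):
    """First tabulated SNR (dB) at which the measured BER reaches 0."""
    for s, b in zip(snr, ber[mod]):
        if b == 0:
            return s

# Gain of QAM over PSK read off the table: 26 - 20 = 6 dB
assert snr_at_zero("QAM") == 20
assert snr_at_zero("PSK") - snr_at_zero("QAM") == 6
```

This is the same reading of the table that yields the roughly 10 dB gains over PAM and DPSK once their curves reach zero near 30 dB.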


REFERENCES

[1] S. Haykin; 'Digital Communication', Wiley Publications.
[2] B. Sklar, R. P. Kumar; 'Digital Communication Fundamentals and Applications', Pearson Publications.
[3] R. Prasad; 'OFDM for Wireless Communication Systems', Artech House Publishers, 2004.
[4] R. Bose; 'Information Theory, Coding and Cryptography', McGraw-Hill.
[5] V. D. Nguyen and H. P. Kuchenbecker; 'Interleaving Algorithm for soft decision Viterbi decoding in OFDM systems over fading channels', IEEE International Conference on Telecommunication, June 2001, Romania.
[6] A. Burr; 'Turbo Codes: The Ultimate Error Correction Codes', Electronics and Communication Engineering Journal, pp. 155-165.
[7] F. B. Frederiksen and R. Prasad; 'An overview of OFDM and related Techniques towards the development of Future Wireless Multimedia Communications', Proceedings IEEE Conference on Radio and Wireless Communication (RWC 02), pp. 19-22, August 2002.
[8] N. Sharma and R. Lal Dua; 'To Improve Bit Error Rate of OFDM Transmission using Turbo Codes', International Journal of Advanced Research in Computer Engineering and Technology, Volume 1, Issue 4, pp. 37-44, June 2012.
[9] S. Papaharalabos, P. Sweeney and B. G. Evans; 'A new method of improving SOVA Turbo decoding for AWGN, Rayleigh and Rician fading channels', IEEE Proceedings, Volume 5, pp. 2862-2866, 2004.
[10] A. Ebian, M. Shokair and K. Awadalla; 'Comparison between Turbo code and Convolutional product code for WiMAX', World Academy of Science, Engineering and Technology 51, pp. 195-199, 2009.
[11] B. Lu, X. Wang and K. R. Narayanan; 'LDPC based Space Time Coded OFDM Systems over Correlated Fading Channels: Performance analysis and receiver design', IEEE Transactions on Wireless Communications, Volume 1, pp. 213-225, April 2002.
[12] Shang-Kang Deng, Kuan-Cheng Chen and Mao-Chao Lin; 'Turbo Coded OFDM for reducing PAPR and Error Rates', IEEE Transactions on Wireless Communications, Volume 7, pp. 84-89, January 2008.


Rectangular Microstrip Patch Antenna with Triangular Slot

Sandeep Singh Sran, Jagtar Singh Sivia
Yadavindra College of Engineering, Punjabi University Guru Kashi Campus, Talwandi Sabo, Punjab, India
sandeepsra07@yahoo.in, jagtarsivian@yahoo.com

ABSTRACT

This paper describes the design of a Rectangular Microstrip Patch Antenna with Triangular Slot (RMPATS) at a frequency of f = 2.4 GHz. The proposed antenna has been designed by introducing a triangular slot in the rectangular patch. FR4-Epoxy with relative permittivity 4.4 and height 1.6 mm is used as the substrate material. The antenna is fed by a coaxial probe feed. The proposed antenna has acceptable values of return loss (RL) and gain. HFSS software is used for simulation of the proposed antenna.

Keywords
Microstrip; Patch; Triangular; Slot; Antenna; Return loss; Gain.

1. INTRODUCTION
Microstrip patch antennas are widely used in many wireless communication applications, because the microstrip antenna has very attractive features such as low profile, light weight, high efficiency and low cost. Its disadvantage, however, is its narrow bandwidth [1]. The techniques used to enhance bandwidth are to choose a thick substrate with a low dielectric constant [2] and a slotted patch [3-4]. The first technique is limited because the thick substrate requires an increased length of the probe feed, which introduces a large inductance and results in an increase of only a few percent in the bandwidth at the resonant frequency. By using the second technique (slotted patch) the size of the patch reduces, which also lowers the antenna's fundamental resonant frequency.

Microstrip antennas are also called patch antennas. The microstrip antenna consists of three layers: the substrate is sandwiched between a ground plane and a metallic patch [5-6]. The radiating element and the feed line are made by photo-etching on the dielectric substrate. The patch configuration may be square, rectangular, dipole, circular, elliptical, triangular or any other shape, as shown in Fig 1. Square, rectangular, dipole and circular shapes are the most common because of their ease of analysis and fabrication and their attractive radiation characteristics, especially low cross-polarization radiation [3]. In its simplest form a microstrip antenna consists of a patch of metal, usually rectangular or circular (though other shapes are sometimes used), on top of a grounded substrate.

Due to these advantages, microstrip antennas are suitable for various applications such as vehicle-based satellite link antennas [7], global positioning systems (GPS) [8], radar for missiles and telemetry [7], and mobile handheld radios or communication devices.

Basically there are four feeding techniques available while designing an antenna: line feed, probe feed, aperture-coupled feed and proximity-coupled feed. The feed used here is the probe feed (or coaxial feed).

Fig 1: Common available shapes of Microstrip Patch Antenna

In this paper we propose the rectangular microstrip patch antenna with a triangular slot. The triangular slot is cut in the patch, which is mounted on the substrate. We use FR4 epoxy as the substrate material, with a height of 1.6 mm, relative permittivity of 4.4 and loss tangent of 0.02.

2. DESIGN OF PROPOSED ANTENNA
The geometry of the proposed antenna is shown in Fig 2.
2.1 Without Slot
Substrate: Material: FR4 epoxy
Width: 60 mm
Length: 60 mm
Height: 1.6 mm
Position: -30, -30, 0
Patch: Width: 38.04 mm
Length: 29.44 mm
Position: -19.02, -14.72, 1.6
Feed Line: Location: offset from center in y direction (0, 7, 1.6)
Coaxial inner radius: 0.3 mm
Coaxial outer radius: 0.5 mm
2.2 With Slot
All the dimensions are the same as without the slot, but the dimensions of the slot are given below:
Slot: Triangle
1st line length 12.80 mm
From Point 1 (0, -2, 1.6) to Point 2 (-10, -10, 1.6)
2nd line length 20 mm


From Point 1 (-10, -10, 1.6) to Point 2 (10, -10, 1.6)
3rd line length 12.80 mm
From Point 1 (10, -10, 1.6) to Point 2 (0, -2, 1.6)
Feed Line: Location: offset from center in y direction (5, 7, 1.6)
Coaxial inner radius: 0.3 mm
Coaxial outer radius: 0.5 mm

Fig 2: Geometry of RMPATS

3. RESULTS
The results of the Rectangular Microstrip Patch Antenna (RMPA) without the triangular slot and with the triangular slot, in terms of performance parameters such as RL, gain, directivity and bandwidth, are shown in Table 1 and Table 2 respectively.

Table 1. Performance parameters of RMPA without slot

Freq. (GHz) | Return Loss (dB) | Lower Freq. (GHz) | Upper Freq. (GHz) | Gain (dB) | BW (MHz)
2.35 | -13.19 | 2.3194 | 2.3821 | +2.08 | 62.7
3.70 | -11.57 | 3.6643 | 3.7451 | +1.20 | 80.8
4.49 | -10.06 | 4.4855 | 4.5004 | -3.20 | 14.9
7.00 | -11.42 | 6.9678 | 7.0417 | +7.77 | 73.9
7.42 | -20.70 | 7.3302 | 7.4972 | +6.14 | 16.7

Table 2. Performance parameters of RMPA with slot

Freq. (GHz) | Return Loss (dB) | Lower Freq. (GHz) | Upper Freq. (GHz) | Gain (dB) | BW (MHz)
2.15 | -15.27 | 2.1213 | 2.1881 | +7.37 | 66.8
3.69 | -18.58 | 3.6646 | 3.7480 | +1.11 | 103.4
5.43 | -30.73 | 5.3610 | 5.4953 | +6.22 | 134.3
6.10 | -12.51 | 6.0473 | 6.1432 | +2.57 | 95.9
6.80 | -24.65 | 6.6690 | 6.9116 | +4.13 | 242.6
8.19 | -43.98 | 8.1225 | 8.2573 | +3.56 | 134.8
9.79 | -29.97 | 9.6028 | 10.000 | +4.72 | 397.2

From the tables it is clear that by adding the triangular slot, the upper two and lower two resonant frequencies of the antenna without the slot (2.35 GHz, 3.70 GHz, 7 GHz and 7.42 GHz) shift to the lower side (2.15 GHz, 3.69 GHz, 6.10 GHz and 6.80 GHz). The antenna without the slot also has a negative gain at 4.49 GHz, but with the triangular slot the gain of the antenna at all resonant frequencies becomes positive. With the triangular slot the antenna has more frequency bands than without the slot. The gain at 2.15 GHz is 7.37 dB.

3.1 Return Losses

Fig 3: Return loss versus frequency plot without slot
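The patch dimensions used in Section 2 (W = 38.04 mm, L = 29.44 mm) are consistent with the standard transmission-line design equations for a rectangular patch. A quick numerical check, assuming c = 3e8 m/s; these equations are the textbook model, not stated in the paper itself:

```python
import math

c = 3e8               # free-space speed of light (m/s)
f = 2.4e9             # design frequency (Hz)
er, h = 4.4, 1.6e-3   # FR4: relative permittivity, substrate height (m)

# Patch width from the standard transmission-line model
W = c / (2 * f) * math.sqrt(2 / (er + 1))

# Effective permittivity and fringing-field length extension
e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * (e_eff + 0.3) * (W / h + 0.264) / \
     ((e_eff - 0.258) * (W / h + 0.8))

# Physical length = half guided wavelength minus fringing on both ends
L = c / (2 * f * math.sqrt(e_eff)) - 2 * dL

print(round(W * 1e3, 2), round(L * 1e3, 2))   # -> 38.04 29.44 (mm)
```

The computed values reproduce the patch width and length listed in Section 2.1 to two decimal places.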


Fig 4: Return loss versus frequency plot with slot

RL versus frequency plots for the proposed antenna without the slot and with the slot are shown in Fig 3 and Fig 4. From the graphs it is clear that the return loss is improved by cutting a slot compared with the antenna without a slot.

3.2 Gain

Fig 5: Three dimension radiation pattern

The three-dimensional radiation pattern of the proposed antenna at one resonant frequency, 2.15 GHz, is shown in Fig 5, which shows that the proposed antenna has a gain of 7.37 dB at 2.15 GHz.

3.3 Bandwidth
We measured the bandwidth with reference to the -3 dB points on the S11 curve. A return loss of -10 dB is an acceptable value to describe the loss of the power which reflects back from the antenna [7]. The bandwidths of the antenna without and with the slot at different frequencies are shown in Table 1 and Table 2 respectively.

4. CONCLUSION
From the simulation analysis of the proposed antenna it can easily be observed that the designed RMPATS has a gain of 7.37 dB and an optimized return loss of -15 dB at a frequency of 2.15 GHz. It also has improved values of return loss, gain and bandwidth, and a larger number of resonant frequency bands. The RMPATS can be used for wireless communication applications. The proposed antenna achieves good impedance matching, stable radiation patterns and high gain.

REFERENCES
[1] Bhomia, Y., Kajla, A. and Yadav, D., "V-slotted Triangular Microstrip Patch Antenna", International Journal of Electronics Engineering, Vol. 2, 2010.
[2] James, J. R. and Hall, P. S., "Handbook of Microstrip Antennas" (Peter Peregrinus).
[3] Balanis, C. A., "Antenna Theory, Analysis and Design" (John Wiley & Sons).
[4] Wang, H., Huang, X. B. and Fang, D. G., "A single layer wideband U-slot microstrip patch antenna array", IEEE Antennas and Wireless Propagation Letters, Vol. 7, 2008, pp. 9-12.
[5] Goyal, R. and Jain, Y. K., "Compact Bow Shape Microstrip Patch Antenna with Different Substrates", IEEE Conference on Information and Communication Technologies, 2013.
[6] Munir, A., Petrus, G. and Nusantara, H., "Multiple Slot Technique for Bandwidth Enhancement of Microstrip Rectangular Patch Antenna", IEEE Conference on Research and Innovation, 2013.
[7] James, R. J., "Some recent developments in microstrip antenna design", IEEE Trans. Antennas and Propagation, Vol. AP-29, January 1981, pp. 124-128.
[8] Uzunoglu, N. K., Alexopoulos, N. G. and Fikioris, J. G., "Radiation Properties of Microstrip Dipoles," IEEE Trans. Antennas Propagat., Vol. AP-27, No. 6, pp. 853-858, November 1979.


DESIGN REVIEW ON HIGH-SWING, HIGH-PERFORMANCE CMOS OPERATIONAL AMPLIFIER

Harjeet Singh, Tushty Bansal
BHS Institute of Engineering & Technology
Lehragaga (Sangrur), India

ABSTRACT
In this paper a review of high-swing, high-performance CMOS operational amplifiers is carried out and, based on the literature, a design procedure for an HSHPSS (high-swing, high-performance single-stage) CMOS operational amplifier is developed using design equations. The proposed design-equation-based procedure provides a quick and effective mechanism for directly estimating the MOS circuit parameters of the op-amp.

Key-words
Op-amp, Load capacitance, Slew rate, Power Dissipation.

1. INTRODUCTION
We are witnessing the dominance of microelectronics (VLSI) in every sphere of electronics and communications, forming the backbone of the modern electronics industry in wireless mobile communications, state-of-the-art processors, computers etc. All efforts eventually converge on decreasing the power consumption entailed by the ever-reducing size of the circuits enabling portable gadgets.

Designing high-performance analog circuits is becoming increasingly challenging with the persistent trend towards reduced supply voltages. The main bottleneck in an analog circuit is the operational amplifier. At higher supply voltages, there is a trade-off between power, gain and speed. The characteristics under consideration are low offset voltage, high gain, high output swing and high PSRR; these characteristics define the performance of an analog circuit. With reducing supply voltages, output swing becomes the key parameter. Op-amp architectures have evolved over time from the simple two-stage architecture to the high-performance telescopic amplifier offering high gain, low noise, low power consumption, etc.

In this paper the design procedure for an HSHPSS CMOS telescopic operational amplifier is developed using design equations. The high swing of the op-amp is achieved by employing the tail and current-source transistors in the deep linear region. Trade-offs among such factors as bandwidth, output swing, PSRR, gain, bias voltages, slew rate, phase margin, CMRR and power are made evident.

2. HIGH SWING IN OPERATIONAL AMPLIFIERS
In analog circuits where kT/C noise is the dominant noise, the relationship between op-amp performance metrics such as signal-to-noise ratio (SNR), speed and power consumption can be shown to be

    (SNR . Speed) / Power = [alpha (Swing)^2 (gm/C)] / [beta (kT/C) gamma Vsup I]    (1)

where the constants alpha, beta and gamma are the feedback factor of the closed-loop op-amp, the number of kT/C noise contributions at the output of the amplifier, and the ratio of the total current consumption of the op-amp to the current I flowing through one of the input devices, respectively. Here, speed corresponds to the dominant pole location of the op-amp. The above expression can be written as

    (SNR . Speed) / Power ∝ (Swing)^2 / Vsup    (2)

since gm ∝ I, as is the case when the input devices are in weak inversion or in the saturation region of strong inversion. The proportionality constant in the last term is a function of the architecture of the op-amp and of the switched-capacitor circuitry around the op-amp. It is clear from this expression that increasing the swing of the op-amp improves the overall performance, which can be used to achieve lower power or higher SNR or speed [2,10].

3. SWING IMPROVEMENT METHODOLOGY
In the topology shown in Fig. 1, transistors M7–M9 are deliberately driven deep into the linear region. Since these transistors always operate in the linear region, Vmargin is not needed across these devices. Under these conditions, the differential output swing is shown to be

    2Vsup - 6Vds,sat - 2Vmargin - 2Vds,lin-tail - 2Vds,lin-load,

where Vds,lin-tail and Vds,lin-load are the drain-to-source voltages of the tail and load transistors, respectively. With a Vds,sat of 200 mV, Vmargin of 100 mV, Vds,lin-tail of 80 mV and Vds,lin-load of 160 mV, the differential output swing is 2Vsup - 1.88 V, which is larger than that of a telescopic amplifier by about 0.7 V and larger than that of a regular folded-cascode amplifier by roughly 100 mV. The swing enhancement stems not only from the difference between Vds,sat and the voltage across the devices in the linear region, but also from the fact that Vmargin is no longer needed across devices placed in the linear region. It is important to note that any reduction in


voltage across the tail transistor improves the differential swing twofold, as the tail transistor cuts into the output swing from both sides of the amplifier. Eliminating the voltage margin (Vmargin) across the tail and the load devices itself contributes a swing enhancement of 4Vmargin. This benefit of increased swing, obtained by pushing the load and tail transistors into the linear region, is however accompanied by slightly degraded common-mode rejection ratio (CMRR) and differential gain of the amplifier. The positive and negative PSRR remain the same, with less power dissipation and a good slew rate [13,25].

Fig. 1. Methodology for enhancing swing

4. DESIGN SPECIFICATIONS
- Unity gain bandwidth (GB)
- Load capacitance (CL)
- Slew rate (SR)
- Power dissipation (Pdiss)

5. DESIGN STEPS
5.1: The first step of the design is the estimation of the bias current. Assuming the GBW is established by the dominant node, we have

    2 pi fT = [2 Iss / (VGS - VTH)] . (1 / CL)

where Iss is the tail current.

5.2: Design tail transistor M9 and calculate the W and L of this transistor using the transistor characteristics in the deep linear region:

    Iss = (µCox / 2) (W/L)9 [2 (VGS - VT) VDS - VDS^2]

where VGS > VT and VDS < VGS - VT.

5.3: Calculate the bias VB2 of transistor M9 using

    VB2 = VGS9 - VTh

5.4: Design the differential pair of the circuit, assuming both devices operate in saturation. Their aspect ratios can be calculated from the bias current Iss using

    Iss = µCox (W/L)1,2 (VGS - VTh)^2

5.5: Calculate the common-mode voltage that keeps M9 in saturation:

    Vin,cm >= Vsat,9 + VGS1

5.6: Design the high-compliance current mirror and calculate the bias voltage applied to both gates from

    VB1 - V2 - VTh,n = Vsat,3

where VB1 is the bias voltage applied to the high-compliance current mirror, V2 is the voltage at node 2 and VTh,n is the threshold voltage. The aspect ratios of transistors M3 and M4 can be calculated by assuming both transistors are in saturation and matched:

    Iss = µCox (W/L)3,4 (VGS - VTh)^2

where VGS = VB1 - Vsat,2 - VTh,n.

5.7: Design the cascode current mirror stage, in which there are four identical PMOS transistors carrying the same current, with drain and gate tied to each other. Transistors M5 and M6 are in saturation mode while transistors M7 and M8 are deliberately driven into the deep linear region. The current flowing is the same as in the high-compliance current mirror stage. The aspect ratios of M5 and M6 can be calculated from

    I5,6 = µCox (W/L)5,6 (VGS - VTh)^2

where VGS = VDD - 3VTh,p. The topmost cascode load transistors are driven in the deep linear region; their current equation is

    Iss = (µCox / 2) (W/L)7,8 [2 (VGS - VT) VDS - VDS^2]

where VGS > VT and VDS < VGS - VT.

6. RESULTS
The development of a design-equation-based procedure provides a quick and effective mechanism for directly estimating the MOS circuit parameters of the op-amp. Op-amps designed with these calculated circuit values were able to satisfy the performance requirements to a good extent, as evidenced by T-Spice simulations. The design equations highlight the principal factors affecting the performance specifications, which makes it very convenient to redesign the circuit for different sets of specifications.
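As a numeric sanity check of the swing expression in Section 3 and of design steps 5.1 and 5.2, the sketch below uses the voltage values quoted in the text for the swing budget, and otherwise purely illustrative numbers that are not from the paper (fT = 100 MHz, CL = 2 pF, µCox = 100 µA/V², freely chosen overdrives):

```python
import math

# --- Swing budget from Section 3 (voltage values from the text) ---
Vds_sat, Vmargin = 0.200, 0.100            # V
Vds_lin_tail, Vds_lin_load = 0.080, 0.160  # V
loss = 6 * Vds_sat + 2 * Vmargin + 2 * Vds_lin_tail + 2 * Vds_lin_load
# differential swing = 2*Vsup - loss; loss evaluates to 1.88 V as stated

# --- Steps 5.1 and 5.2 with illustrative (non-paper) numbers ---
f_T, CL = 100e6, 2e-12   # target unity-gain bandwidth (Hz), load (F)
Vov_in = 0.2             # (VGS - VTH) of the input pair, V
uCox = 100e-6            # process transconductance parameter, A/V^2

# Step 5.1: 2*pi*fT = 2*Iss / ((VGS - VTH) * CL)  =>  tail current
Iss = math.pi * f_T * Vov_in * CL

# Step 5.2: tail device M9 in the deep linear region:
# Iss = (uCox/2) * (W/L)_9 * (2*(VGS - VT)*VDS - VDS**2)
Vov_tail, Vds_tail = 0.5, 0.08
WL_9 = 2 * Iss / (uCox * (2 * Vov_tail * Vds_tail - Vds_tail ** 2))

print(round(loss, 2), round(Iss * 1e6, 1), round(WL_9, 1))
# -> 1.88 (V), 125.7 (uA), 34.1
```

With these example numbers the procedure yields a tail current of about 126 µA and an aspect ratio of about 34 for M9; changing any specification simply re-runs the same two equations, which is the convenience the results section refers to.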


REFERENCES

[1] Allen Philip E. and Holberg Douglas R., 2003, "CMOS Analog Circuit Design", Oxford University Press, London, Second Edition.
[2] Allstot David J., 1989, "A Family of High-Swing CMOS Operational Amplifiers", IEEE Journal of Solid-State Circuits, Vol. 24, No. 6.
[3] Babanezad J. N., 1991, "A low-output-impedance fully differential op amp with large output swing and continuous-time common-mode feedback," IEEE J. Solid-State Circuits, Vol. 26, pp. 1825–1833.
[4] Baker R. J., Li H. W. and Boyce D. E., 1998, "CMOS Circuit Design, Layout, and Simulation", Piscataway, NJ: IEEE Press.
[5] Brown William C. and Szeto Andrew Y. J., 2000, "Reconciling Spice Results and Hand Calculations: Unexpected Problems", IEEE Transactions on Education, Vol. 43, No. 1.
[6] Bult K. and Geelen G. J. G. M., 1990, "A fast-settling CMOS op amp for SC circuits with 90-dB DC gain," IEEE J. Solid-State Circuits, Vol. 25, pp. 1379–1384.
[7] Chen Fred and Yang Kevin, 1999, EECS240 Term Project Report, "A Fully Differential CMOS Telescopic operational amplifier with class AB output stage", Prof. B. E. Boser.
[8] Das Mrinal and Hellums Jim, Texas Instruments India Ltd., "Improved Design Criteria of Gain Boosted CMOS OTA with high speed optimization", IEEE J. Solid-State Circuits, Vol. 24, pp. 553-559, June 1986.
[9] EE240 Final Project Report, 1999, "High-Gain, 3V CMOS Fully Differential Transconductance Amplifier", Department of Electrical and Computer Sciences, University of California, Berkeley.
[10] Fiez Terri S., Yang Howard C., Yang John J., Yu Choung and Allstot David J., 1989, "A Family of High-Swing CMOS Operational Amplifiers", IEEE J. Solid-State Circuits, Vol. 26, No. 6.
[11] Geiger R. L., Allen P. E. and Strader N. R., 1990, "VLSI Design Techniques for Analog and Digital Circuits", McGraw-Hill Publishing Company.
[12] Gray P. R., Hurst P. J., Lewis S. H. and Meyer R. G., 2001, "Analysis and Design of Analog Integrated Circuits", Fourth Edition, John Wiley & Sons.
[16] Li P. W., Chin M. J., Gray P. R. and Castello R., "A ratio-independent algorithmic analog-to-digital conversion technique," IEEE J. Solid-State Circuits, Vol. SC-19, pp. 1138–1143, Dec. 1984.
[18] Maloberti Franco, "Analog Design for CMOS VLSI Systems", Kluwer Academic Publishers, Boston/Dordrecht/London.
[19] Nicollini G., Moretti F. and Conti M., 1989, "High-frequency fully differential filter using operational amplifiers without common-mode feedback," IEEE J. Solid-State Circuits, Vol. 24, pp. 803-813.
[20] Razavi Behzad, "Design of Analog CMOS Integrated Circuits", Tata McGraw-Hill Publishing Company Limited.
[21] Ribner David B. and Copeland Miles A., 1984, "Design Techniques for Cascode CMOS Op Amps with Improved PSRR and Common-Mode Input Range", IEEE Journal of Solid-State Circuits, Vol. 19, No. 6.
[22] Roewer Falk and Kleine Ulrich, "A Novel Class of Complementary Folded-Cascode Op-Amps for Low Voltage", IEEE J. Solid-State Circuits, Vol. 37, No. 8, Aug. 2002.
[23] Steyaert Michel and Sansen Willy, 1987, "A High-Dynamic-Range CMOS Op Amp with Low-Distortion Output Structure", IEEE Journal of Solid-State Circuits, Vol. SC-22, No. 6, pp. 1204-1207.
[24] Steyaert M. and Sansen W., 1990, "Power Supply Rejection Ratio in Operational Transconductance Amplifiers", IEEE Transactions on Circuits and Systems, Vol. CAS-37, No. 9, pp. 1077-1084.
[25] Tsividis Y., 1990, "Operation and Modeling of the MOS Transistor", Second Ed., Boston: McGraw-Hill.
[26] Tsividis Yannis P., 1978, "Design Considerations in Single-Channel MOS Analog Integrated Circuits – A Tutorial", IEEE Journal of Solid-State Circuits, Vol. SC-13, No. 3, pp. 383-391.
[27] Yang J. and Lee H. S., 1996, "A CMOS 12-bit 4 MHz pipelined A/D converter with commutative feedback capacitor," Proc. IEEE Custom Integrated Circuits Conf., pp. 427-430.
[28] Zeki A. and Kuntman H., "Accurate and high output impedance current mirror suitable for CMOS current output stages", IEE Electronics Letters, online no. 19970700, 1997.
[29] K. Santosh and Sri G. Ramesh, "Design of 32Bit Carry-lookahead Adder using Constant Delay Logic", International Journal of Scientific and Research Publications, Volume 4, Issue 8, pp. 1-12.
[30] Viswas Giri, 2014, "Design and power optimization of a low voltage cascode current mirror with enhanced dynamic range", International Journal for
[13] Gulati Kush and Lee Hae-Seung, "High-Swing CMOS Telescopic Operational Amplifier", IEEE
Journal of Solid State Circuits, Vol. 33, No. 12, Technological Research in Engineering Volume 2,
Dec. 1998. Issue 3, pp. 157-161.
[14] Kang Sung-Mo, Leblebici Yusuf, 2003, “CMOS
Digital Integrated Circuits, Analysis and design”,
Tata McGraw-Hill Edition, Third Edition
[15] Krenik W., Hellums J., Hsu W.C., Nail R,1998, and
Izzi L., “High dynamic range CMOS amplifier
design in reduced supply voltage
environment,”Tech. Dig. Midwest Symp. Circuits
and Systems, pp. 368–370.

96
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Application of AHP-VIKOR Hybrid MCDM Approach for 3PL Selection: A Case Study

Arvind Jayant
SLIET Deemed University, Longowal, Sangrur, Punjab, INDIA
arvindjayant@gmail.com

Priya Singh
SLIET Deemed University, Longowal, Sangrur, Punjab, INDIA
Singhpiaa23@gmail.com

ABSTRACT
Multi-criteria decision making (MCDM) is one of the most common activities in human society. It consists of selecting the optimal one from a set of available alternatives with respect to predefined criteria or attributes. In this paper, a hybrid decision making approach integrating Analytical hierarchical process (AHP) operators into VIKOR is proposed for tackling multi-criteria problems with conflicting and non-commensurable (different units) criteria. A manufacturer produces new products by using original components or remanufactured components. The used products are collected by the manufacturer, the retailer or a third-party logistics operator. Companies can no longer afford to neglect the treatment of recovered products: it needs to be a core capability within the supply chain organization. Understanding and properly managing reverse logistics can not only reduce costs, but also increase revenues. It can also make a huge difference in retaining consumer loyalty and protecting the brand. Due to their intricacies, considerable risks are involved in product recovery operations; therefore, core competency and experience are prerequisites for the successful outsourcing of the reverse logistics process to Third-Party Logistics Providers (3PLPs). The selection of a third-party logistics provider is an intriguing practical and research question. The objective of this work is to develop a decision support system to assist decision-makers in the selection and evaluation of different third-party reverse logistics providers by the Analytical hierarchical process (AHP) and Višekriterijumsko kompromisno rangiranje (VIKOR) methods. A real-life case of a mobile manufacturing company is taken to demonstrate the steps of the decision support system.

General Terms
Reverse logistics, supply chain, MCDM methods, 3PRL selection

Keywords
Višekriterijumsko kompromisno rangiranje (VIKOR); Mobile industry; Analytical hierarchical process (AHP); Reverse logistics operation

1. INTRODUCTION
Multi-criteria decision making (MCDM) is regarded as a main part of modern decision science and operational research, which contains multiple decision criteria and multiple decision alternatives. The increasing complexity of the engineering and management environment makes it less possible for a single decision maker to consider all relevant aspects of a problem. As a result, many decision making processes in the real world take place in group settings (Merigó, 2011; Yang et al., 2012). Therefore, the multiple criteria group decision making (MCGDM) problem is a hot research topic which has received a great deal of attention from researchers recently.

In classical MCDM methods, the ratings and the weights of the criteria are known precisely, whereas in the real world, in an imprecise and uncertain environment, it is an unrealistic assumption that the knowledge and representation of a decision maker or expert are so precise. A suitable approach for dealing with such a problem is to use linguistic assessments instead of numerical ones to represent the subjective judgment of decision makers by means of linguistic variables. A very useful technique for multiple criteria decision making is the VIKOR method (Opricovic and Tzeng, 2002), which is based on ideas of compromise programming. The main advantages of the VIKOR method are that it can solve discrete decision problems with conflicting and non-commensurable (different units) criteria and provide a solution that is the closest to the ideal. The VIKOR method focuses on ranking and selecting from a set of alternatives, and determines compromise solutions for a problem with conflicting criteria, which can help the decision makers reach a final decision. Recently, the VIKOR method has been studied and applied to a wide range of problems. Generally, when using VIKOR in decision making, the separation measures from the best and worst values are calculated by using the weighted average and the maximum weighted method, respectively. In some cases, it may be of interest to consider the possibility of parameterizing the results from the maximum separation to the minimum separation.

The usage of the VIKOR method has been increasing. In the literature, Chen and Wang (2009) optimized partners' choice in IS/IT outsourcing projects by fuzzy VIKOR. In this study, we applied the VIKOR method, which was developed for multi-criteria optimization of complex systems, to find a compromise priority ranking of alternatives according to the selected criteria for a selection problem. Sayadi, Heydari, and Shahanaghi (2009) used an extension of the VIKOR method for the solution of decision making problems with interval numbers. Liou, Tsai, Lin, and Tzeng (2010) used a modified VIKOR method for improving the service quality of domestic airlines, and Chang and Hsu (2009) used the VIKOR method for prioritizing land-use restraint strategies in the Tseng–Wen reservoir watershed. On the other hand, some researchers have evaluated the VIKOR method under a fuzzy environment. For example, Kaya and Kahraman (2010) used an integrated fuzzy VIKOR and AHP methodology for multi-criteria renewable energy planning in Istanbul, and Sanayei, Mousavi, and Yazdankhah (2010) used the VIKOR method for a supplier selection problem with fuzzy sets. The objective of the present work is to develop a decision support system (DSS) to assist decision-makers in the selection and evaluation of different third-party reverse logistics providers by the Analytical hierarchical process (AHP) and Višekriterijumsko kompromisno rangiranje (VIKOR) methods. A real-life case study of a cell phone manufacturing company has been developed to demonstrate the steps of the decision support system.


2. REVERSE SUPPLY CHAIN MANAGEMENT
Recently, product and material recovery has received growing attention throughout the world, with its three main motivators being governmental legislation, the economic value to be recovered, and environmental concerns (Ali Cetin Suyabatmaz et al., 2014). A reverse supply chain, or reverse logistics, is the series of activities required to retrieve a used product from a customer and dispose of it properly or reuse it after processing. The chain connects end users with the manufacturer in the reverse direction. Reverse logistics (RL) thus has an important role in green supply chains by providing customers with the opportunity to return end-of-life products to the manufacturer, re-evaluating them and including them again in the production cycle (Efendigil et al., 2008). Reverse logistics is the process of reclaiming recyclable and reusable materials, returns, and reworks from the point of consumption or use for repair, remanufacturing, redistribution, or disposal. Often, the reprocessing stage requires the highest investment within the reverse logistics network. The process involves disassembly, repair work, reuse in new products and re-assembly. The critical issues involved are how to reduce the uncertainty in the supply of products to be remanufactured, how to ensure a sustainable volume of products to be remanufactured, and whether to outsource remanufacturing (open-loop system) or to integrate it with the existing operation (closed-loop system). Decisions regarding the selection of service and product suppliers are, in general, complex due to the various conflicting objectives involved and, consequently, the various qualitative and quantitative criteria (Choy et al., 2005). Taking this into account, the processes of identifying the best suppliers for services and/or products, or even evaluating the performance of a former supplier, are challenging for decision makers (DM), but essential in business processes (Bozarth and Robert, 2008). Furthermore, growing environmental concerns have motivated businesses to include environmental criteria when selecting service and product suppliers (Efendigil et al., 2008). Returning used products is becoming an important logistics activity due to government legislation and the increasing awareness in society (Kannan et al., 2011).

As shown in Figure 1 (Appendix 1), based on the current demand for reverse logistics activities, companies have basically two options to comply with the law/policy: (i) execute reverse logistics activities internally; or (ii) outsource reverse logistics activities. This article focuses on the second option, and to be able to select the most appropriate 3PRLP, a conceptual framework using MCDA modeling is proposed, which has systematized steps with the purpose of supporting decision makers in these kinds of decisions. The steps assist DMs in structuring the decision problem regarding the selection of a 3PRLP, from identifying the objectives and the set of criteria to defining the most suitable MCDA approach and method, depending on the rationality of the DM. In this paper, AHP-VIKOR has been used for making the strategic decision, in a multi-attribute decision environment, of selecting third-party reverse logistics providers (3PRLPs) for the collection of end-of-life mobile phones.

Reverse Supply Chain Network Characteristics
• Convergent in nature, from end-user to manufacturer
• Reverse flow of used products
• Supply driven
• Relatively slow movement
• Value declines with time while moving upstream
• Very small value addition in some cases
• Material handling (MH) and transportation are often not carried out with care
• Inventory available in different nodes

Drivers of Reverse Supply Chain Initiatives
• Environmental legislation
• Economic value from returns
• Green image
• Material resource constraints for lead and other precious resources

3. AHP METHOD
One of the most popular analytical techniques for complex decision making problems is the analytical hierarchy process (AHP). Saaty (1980, 2000) developed AHP, which decomposes a decision making problem into a system of hierarchies of objectives, attributes (or criteria), and alternatives.

An AHP hierarchy can have as many levels as needed to fully characterize a particular decision situation. A number of functional characteristics make AHP a useful methodology. These include the ability to handle decision situations involving subjective judgments and multiple decision makers, and the ability to provide measures of consistency of preference (Triantaphyllou, 2000). Designed to reflect the way people actually think, AHP continues to be one of the most highly regarded and widely used decision-making methods. AHP can efficiently deal with tangible (i.e., objective) as well as non-tangible (i.e., subjective) attributes, especially where the subjective judgments of different individuals constitute an important part of the decision process. The main procedure of AHP using the radical root method (also called the geometric mean method) is as follows:

Step 1: Determine the objective and the evaluation attributes. Develop a hierarchical structure with the goal or objective at the top level, the attributes at the second level and the alternatives at the third level.

Step 2: Determine the relative importance of the different attributes with respect to the goal or objective.

• Construct a pairwise comparison matrix using a scale of relative importance. The judgments are entered using the fundamental scale of the analytic hierarchy process (Saaty 1980, 2000). An attribute compared with itself is always assigned the value 1, so the main diagonal entries of the pairwise comparison matrix are all 1. The numbers 2, 3, 4 and 5 correspond to the verbal judgments 'moderate importance', 'strong importance', 'very strong importance' and 'absolute importance'. Assuming M attributes, the pairwise comparison of attribute i with attribute j yields a square matrix R(M×M), where rij denotes the comparative importance of attribute i with respect to attribute j. In the matrix, rij = 1 when i = j and rji = 1/rij.

             | 1    …  r1M |
    R(M×M) = | ⋮    ⋱  ⋮   |
             | rM1  …  1   |

• Find the relative normalized weight (wi) of each attribute by (i) calculating the geometric mean of the i-th row, and (ii) normalizing the geometric means of the rows in the comparison matrix. This can be represented as


    GMi = [ Π(j=1 to M) rij ]^(1/M)                                  (1)

    wi = GMi / Σ(i=1 to M) GMi                                       (2)

The geometric mean method of AHP is commonly used to determine the relative normalized weights of the attributes because of its simplicity, the easy determination of the maximum Eigen value, and the reduction in the inconsistency of judgments.

• Calculate matrices A3 and A4 such that A3 = A1 × A2 and A4 = A3/A2 (element-wise), where A1 is the pairwise comparison matrix and A2 = [w1, w2, …, wM]^T.
• Determine the maximum Eigen value λmax, which is the average of the elements of matrix A4.
• Calculate the consistency index CI = (λmax − M)/(M − 1). The smaller the value of CI, the smaller the deviation from consistency.
• Obtain the random index (RI) for the number of attributes used in decision making.
• Calculate the consistency ratio CR = CI/RI. Usually, a CR of 0.1 or less is considered acceptable, as it reflects an informed judgment attributable to the knowledge of the analyst regarding the problem under study.

4. VIKOR PROCEDURE
Multi-criteria decision making (MCDM) is one of the most prevalent methods for resolving conflict management issues (Deng & Chan, 2011). MCDM deals with decision and planning problems by considering multiple criteria and the importance of each (Haleh & Hamidi, 2011). Among the many MCDM methods, VIKOR is a compromise ranking method to optimize a multi-response process (Opricovic, 1998). It uses a multi-criteria ranking index derived by comparing the closeness of each criterion to the ideal alternative. The core concept of VIKOR is the focus on ranking and selecting from a set of alternatives in the presence of conflicting criteria (Opricovic, 2011). In VIKOR, the ranking index is derived by considering both the maximum group utility and the minimum individual regret of the opponent (Liou, Tsai, Lin, & Tzeng, 2011).

VIKOR denotes the m alternatives as a1, a2, …, am. For an alternative ai, the merit of the jth aspect is represented by fij; that is, fij is the value of the jth criterion function for the alternative ai, n being the number of criteria. The VIKOR procedure is divided into the following four steps:

(1) Determine the best fj* and worst fj− values of all criterion functions. If the jth criterion function represents a merit, then

    fj* = max_i fij ,   fj− = min_i fij                              (3)

(2) Compute the values Si and Ri, i = 1, 2, 3, …, m, by the relations

    Si = Σ(j=1 to n) wj (fj* − fij) / (fj* − fj−)                    (4)

    Ri = max_j [ wj (fj* − fij) / (fj* − fj−) ]                      (5)

where wj is the weight of the jth criterion, which expresses its relative importance.

(3) Compute the values Qi, i = 1, 2, 3, …, m, by the relation

    Qi = v (Si − S*) / (S− − S*) + (1 − v) (Ri − R*) / (R− − R*)     (6)

where S* = min_i Si, S− = max_i Si, R* = min_i Ri, R− = max_i Ri, and v is the weight of the strategy of maximum group utility, whereas (1 − v) is the weight of the individual regret. Here, when v is larger than 0.5, the index Qi follows the majority rule.

(4) Rank the alternatives, sorting by the values S, R and Q, in decreasing order.

5. PROBLEM DESCRIPTION
Profitable reuse and remanufacturing of cell phones must meet the challenges of a turbulent business environment, which may include continuous change in design patterns, falling prices for new phone models, disassembly-unfriendly designs, short life cycles, and prohibitive transport, labor and machining costs in high-wage countries. Today, the remanufacturing of expensive, long-living investment goods, e.g. machine tools, jet fans, military equipment or automobile engines, is being extended to a large number of consumer goods with short life cycles and relatively low values. Reuse is an alternative to material recycling to comply with recovery rates and quantities as well as special treatment requirements (Franke, 2006). The industry segment selected for this study is the mobile phone manufacturing industry situated in the northern part of India. The aim of this study is to evaluate logistics service providers for hiring their services to collect & supply End-of-Life (EOL) mobile phones to the company doorstep for reclaiming the useful components for the remanufacturing of mobile phones. According to a Greenpeace report, mobile phones contain toxic materials like polyvinyl chloride (PVC) plastic, phthalates, antimony trioxide, beryllium oxide and brominated flame retardants (BFRs). These toxic materials pose a great threat to the environment and human health if not disposed of in a proper manner. The E-waste Rules, 2011 (Management and Handling Rules) came into effect in May 2012 in India. They place responsibility on the producers for the entire life cycle of a product. Under the electronic waste management rules, the producer (OEM) must set up collection centers to dispose of e-waste and is liable for the collection of the electronic waste of its products; yet three years since the rules were notified, most companies have failed to set up collection centers. An old non-working mobile may fetch anything between Rs. 200 and Rs. 1000 depending on its condition. A laptop may get you a little more; but your old fridge or television may not get you much, primarily because of its high transportation cost to the electronic recycling unit. This new rule, however, may put any law-abiding citizen in a fix because the designated centers where they are actually meant to dispose of the e-waste have not come up in most cities. The effective implementation of the rules looked very unlikely in light of the present circumstances. Mostly, consumers do not know where the e-waste is to be disposed of (Toxicslink).

Once the mobile phones are assembled in the different production units, they have to be shipped through distributors, wholesalers and retailers, and then to customers. After their end of life, consumers do not know where the e-waste is to be disposed of. As there is no mechanism to collect e-waste from homes, it mostly lands in municipal bins. Generally, used mobile phones are collected at the retailers and should be quickly transported to a centralized collection center, where the returned mobile phones are inspected for quality failure and sorted for potential reuse, repair or recycling. After inspection, the useless phones/batteries (not able to be recycled) are disposed of in an eco-friendly manner, and reusable components are transported to


disassembly/recycling plants, and recovered components are used in new phone assembly.

A series of interviews and discussion sessions was held with the mobile phone industry managers, retailers, and state pollution control board officials during this project, and the following problem areas were identified for improvement in the closed-loop supply chain of mobile phones:

• Uncertainty is involved in the supply of used mobile phones to the OEM, and industries are unable to forecast the quantity of EOL mobile phones collected.
• Most of the e-waste generated in India is recycled, but in a very hazardous manner by the informal sector.
• Presence of illegal recycling units in the state carrying out unauthorized mobile collection & PVC recycling operations.
• The company does not have any well-structured model of reverse logistics practice.
• Huge cost is involved in setting up mobile collection centers at prime locations under the new Management & Handling Rules, 2011, Government of India.

To solve the aforesaid problems and improve business performance, the mobile phone manufacturing industry is ready to assign the work of regular supply of End-of-Life (EOL) phones to a logistics service provider. The team of logistics managers must have enough knowledge to define the aims of and benefits from outsourcing the logistics service, and must be able to communicate the goals and desired objectives of the company so that the provider understands exactly what the user wants to achieve. An accurate estimation of the business and service requirements of the company would minimize the need for assumptions on the part of the provider and ensure a high service level. The service level desired from the logistics service providers must include both present and future service standards. The problem addressed here is to build a sound decision support methodology for the evaluation & selection of the best reverse logistics service provider in the mobile phone reverse supply chain, to minimize the forward and reverse supply chain cost comprising procurement, production, distribution, inventory, collection, disposal, disassembly and recycling costs, by making a responsive supply chain environment.

5.1 DSS for the selection of a 3PL Service Provider
The developed decision support methodology provides for the assessment of alternative logistics service providers in two steps: (i) initial screening of the providers by a team of concerned managers from industry, and (ii) an AHP-VIKOR based decision support system for the final evaluation of the service providers. Often, the initial screening of the service providers is an easy task, but the final selection from the list of short-listed providers is a tough one. In this section, we present a methodology for the initial screening of the providers. Later, these short-listed providers are ranked by the AHP-VIKOR based approach.

The various steps of the decision support methodology are listed as follows:
1. Constitution of a team of competent managers & consultants
2. Decision regarding the type of outsourcing service level required and the collection objectives
3. Development of collection and functional specifications of the proposed task
4. Identification of potential reverse logistics service providers
5. Evaluation of requests of RL logistics service providers (RLLSP)
6. Development of a request-for-proposal offer for 3PL reverse logistics service providers
7. Evaluation of the service proposal offers supplied by the logistics service providers
8. Field visits and inspection of the facilities of the logistics service providers
9. Collection of feedback from the existing customers of the service providers
10. Final selection using the AHP-VIKOR approach and agreement for service

The AHP-VIKOR based decision modeling methodology, which is discussed in the next section of the paper, is recommended for the final selection of an RL service provider. For any long-term business relationship, a business contract between the two parties must address scope of work, responsibilities, risks and rewards, remedies, extra services, damage types, individual status, termination, agreement modification, liabilities, rate adjustments, service compensation limitations, compensation, insurance, performance measurement issues, etc.

5.2 Evaluation of 3PL using AHP-VIKOR
The AHP-VIKOR based MCDM approach presented in this work is applied to the evaluation & selection of a 3PL for a mobile phone manufacturing industry. Twenty outsourcing service providers were interested in conducting the reverse logistics operation for the case company. In the preliminary screening, 11 service providers were easily rejected by the company management. The final selection from the remaining nine potential 3PRLPs (A, B, C, D, E, F, G, H and I), who almost all fulfilled the requirements of the case company, was a very tough task. Due to fund limitations and other operational constraints, the case company was keen to apply a scientific technique to evaluate all eligible 3PL service providers and determine the best of the nine bidding for the deal. The company management identified 10 important attributes that were relevant to their business and that they deemed necessary for the 3PL they intended to choose. These attributes were: E-Waste Storage Capacity (EWSC), Availability of Skilled Personnel (AOSP), Level of Noise Pollution (LNP), Impacts of Environmental Pollution (IEP), Safe Disposal Costs (SDC), Availability of a Covered and Closed Area (ACCA), Possibilities to Work with NGOs (PWNGO), Inspection/Sorting and Disassembly Costs (ISDC), Mobile Phone Refurbishing Costs (MPRC) and Mobile Recycling Costs (MRC). Among these attributes, ISDC (thousands of INR), EWSC (in tons), MPRC (INR/hour), MRC (thousands of INR) and safe disposal costs (thousands of INR) are quantitative in nature, having absolute numerical values. The attributes AOSP, LNP, ACCA, IEP and PWNGO have qualitative measures, and for these a ranked value judgment on a scale of 1–5 (where 1 corresponds to lowest, 3 to moderate and 5 to highest) has been suggested. The cost of recycling EOL or used mobile phones ranges from INR 1000 to INR 1600 per unit, and from INR 1200 to INR 2000 per unit for the safe disposal of hazardous waste from a mobile. A single mobile refurbishing technician can test and troubleshoot a donated mobile, make necessary repairs and upgrades, and package it for reuse in 3 hours at an average cost of INR 1500 (Techsoup, 2008). These data were provided by various remanufacturing companies during this research project and were used as the reference for the formulation of the reverse logistics data for the case company dealt with in this paper.


The data for all 3PLs with respect to the various attributes are provided in Table 1. The implementation of the AHP-VIKOR model and the analysis are explained in the following steps:

Step 1: Based upon the information provided by the various concerned companies, the decision matrix has been prepared as shown in Table 1, which illustrates the performance of the nine service providers with respect to all 10 attributes.

Table 1. Decision matrix representing the performance of various RLSP


3PRSPs EWSC ISDC MPRC MRC SDC ACCA PWNGO AOSP LNP IEP
A 150 1600 130 1200 1400 3 4 3 4 5
B 140 1700 150 1300 1800 5 5 4 3 4
C 170 1600 180 1350 1480 4 3 5 5 5
D 180 1650 160 1500 1600 2 3 3 1 2
E 110 1500 160 1500 1400 1 3 5 2 5
F 120 1800 130 1400 1400 5 3 4 4 2
G 130 1650 150 1300 1750 3 2 4 3 5
H 120 1600 130 1550 1800 4 1 2 4 4
I 150 1100 140 1200 1650 5 2 2 4 5
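Equation (3) of the VIKOR procedure reduces to taking column-wise maxima and minima of this decision matrix. A minimal sketch (assuming Python with NumPy, treating every attribute as a benefit criterion; only the first three attribute columns of Table 1 are shown):

```python
import numpy as np

# First three attribute columns of Table 1 (EWSC, ISDC, MPRC)
# for the nine providers A..I.
F = np.array([
    [150, 1600, 130],   # A
    [140, 1700, 150],   # B
    [170, 1600, 180],   # C
    [180, 1650, 160],   # D
    [110, 1500, 160],   # E
    [120, 1800, 130],   # F
    [130, 1650, 150],   # G
    [120, 1600, 130],   # H
    [150, 1100, 140],   # I
])

f_star = F.max(axis=0)    # Eq. (3): best value of each criterion
f_minus = F.min(axis=0)   # Eq. (3): worst value of each criterion
# f_star  -> 180, 1800, 180 (per column)
# f_minus -> 110, 1100, 130 (per column)
```

The same two reductions over the full 9 × 10 matrix yield the rows of Table 3.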

Step 2: In the present research project, five experts, three from mobile manufacturing/recycling companies and two from academia, were consulted for making the required pair-wise comparisons of the attributes. Two senior executives from industry were members of the team. The team members from industry and academia have life-long experience in the field of reverse logistics practice in the electronics goods industry. The pair-wise comparison matrix is given herewith:
Table 2. Pair-wise comparison of attributes
EWSC ISDC MPRC MRC SDC ACCA PWNGO AOSP LNP IEP
EWSC 1 5 4 4 1/3 4 1/2 1/5 2 3
ISDC 1/5 1 3 3 4 1/3 3 1/2 5 4
MPRC 1/4 1/3 1 1/4 3 2 1/3 4 1/5 2
MRC 1/4 1/2 4 1 4 3 1/4 4 2 1/5
SDC 3 1/4 1/3 1/4 1 1/5 2 1/2 3 4
ACCA 1/4 3 1/2 1/3 5 1 4 1/3 2 5
PWNGO 2 1/3 3 4 1/2 1/4 1 2 1/4 3
AOSP 5 2 1/4 1/4 2 3 1/2 1 3 1/5
LNP 1/2 1/5 5 1/2 1/3 1/2 4 1/3 1 2
IEP 1/3 1/4 1/2 5 1/4 1/5 1/3 5 1/2 1
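The weight extraction of Eqs. (1)–(2) and the consistency check (CI, CR) can be sketched as follows; this is a minimal illustration (assuming Python with NumPy) on a hypothetical 3×3 judgment matrix rather than the full 10×10 matrix of Table 2:

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix (illustrative judgments,
# not the paper's Table 2); reciprocal by construction: r_ji = 1/r_ij.
R = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])
M = R.shape[0]

gm = np.prod(R, axis=1) ** (1.0 / M)   # Eq. (1): row geometric means
w = gm / gm.sum()                      # Eq. (2): normalized weights

# Consistency: lambda_max as the mean of (R w) / w, then CI and CR.
lam_max = np.mean((R @ w) / w)
CI = (lam_max - M) / (M - 1)
RI = 0.58            # Saaty's random index for M = 3
CR = CI / RI         # judgments acceptable when CR <= 0.1
```

With these illustrative judgments the weights come out near (0.64, 0.26, 0.11) and CR ≈ 0.03, well inside the 0.1 threshold.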

The weights of the attributes computed using equations (1) and (2) are given below:
EWSC = 0.16, ISDC = 0.14, MPRC = 0.008, MRC = 0.11, SDC = 0.08, ACCA = 0.02, PWNGO = 0.11, AOSP = 0.10, LNP = 0.08 and IEP = 0.06

Step 3: The best fj* and worst fj− values of all criterion functions are determined using equation (3) and are given in Table 3.

Table 3. Best fj* and worst fj− values

      EWSC  ISDC  MPRC  MRC   SDC   ACCA  PWNGO  AOSP  LNP  IEP
fj*   200   1800  180   1550  1800  5     5      5     5    5
fj−   110   1100  130   1200  1400  1     1      2     1    2

Step 4: The value of wj (fj* − fij) / (fj* − fj−) is calculated for each alternative and attribute, as given in Table 4.

Table 4. Values of wj (fj* − fij) / (fj* − fj−)

  EWSC  ISDC  MPRC  MRC   SDC   ACCA  PWNGO  AOSP  LNP  IEP
A 0.088 0.04  0.008 0.11  0.08  0.01  0.027  0.066 0.02 0
B 0.10  0.02  0.005 0.078 0     0     0      0.033 0.04 0.02
C 0.053 0.04  0     0.062 0.064 0.005 0.055  0     0    0
D 0.035 0.028 0.003 0.015 0.04  0.015 0.055  0.066 0.08 0.06
E 0.16  0.06  0.003 0.015 0.08  0.02  0.055  0     0.06 0
F 0.14  0     0.008 0.047 0.08  0     0.055  0.033 0.02 0.06
G 0.124 0.028 0.005 0.078 0.01  0.01  0.082  0.033 0.04 0
H 0     0.04  0.008 0     0     0.005 0.11   0.1   0.02 0.02
I 0.088 0.14  0.006 0.11  0.03  0     0.082  0.1   0.02 0
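Equations (4) and (5) then reduce to a row-wise sum and a row-wise maximum over Table 4. A small cross-check (assuming Python with NumPy) on the first three rows reproduces the Si and Ri entries of Table 5:

```python
import numpy as np

# Rows A, B and C of Table 4: the terms w_j (f_j* - f_ij) / (f_j* - f_j-)
D = np.array([
    [0.088, 0.04, 0.008, 0.11, 0.08, 0.01, 0.027, 0.066, 0.02, 0.0],   # A
    [0.10, 0.02, 0.005, 0.078, 0.0, 0.0, 0.0, 0.033, 0.04, 0.02],      # B
    [0.053, 0.04, 0.0, 0.062, 0.064, 0.005, 0.055, 0.0, 0.0, 0.0],     # C
])

S = D.sum(axis=1)   # Eq. (4): S_A = 0.449, S_B = 0.296, S_C = 0.279
R = D.max(axis=1)   # Eq. (5): R_A = 0.110, R_B = 0.100, R_C = 0.064
```

The remaining six rows check out the same way against Table 5.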

Step 5: Based on Table 4 and Eqs. (4)–(6), the values of Si, Ri and Qi are obtained for each alternative, as shown in Table 5. Here, the Qi value of each alternative is calculated with v = 0.5, giving S* = 0.279, S− = 0.576, R* = 0.064 and R− = 0.16.

Table 5. Qi value and ranking

  Si    Ri    Qi    Rank
A 0.449 0.11  0.52  5
B 0.296 0.1   0.216 2
C 0.279 0.064 0     1
D 0.397 0.08  0.281 4
E 0.453 0.16  0.793 8
F 0.443 0.14  0.672 7
G 0.41  0.124 0.532 6
H 0.303 0.11  0.274 3
I 0.576 0.14  0.896 9
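Steps 5 and 6 can be reproduced directly from the S and R columns of Table 5. A short sketch (assuming Python with NumPy) that recomputes Qi via Eq. (6) with v = 0.5 and recovers the published ranking:

```python
import numpy as np

labels = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
# S and R columns of Table 5
S = np.array([0.449, 0.296, 0.279, 0.397, 0.453, 0.443, 0.41, 0.303, 0.576])
R = np.array([0.11, 0.1, 0.064, 0.08, 0.16, 0.14, 0.124, 0.11, 0.14])

v = 0.5  # equal weight to group utility and individual regret
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))   # Eq. (6)

# Smallest Q is best: the compromise ranking of the nine providers.
ranking = [labels[i] for i in np.argsort(Q)]
# ranking == ['C', 'B', 'H', 'D', 'A', 'G', 'F', 'E', 'I']
```

Note that Q_H and Q_D come out within about 0.002 of each other, so the H-before-D ordering is a close call.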

Step 6: On the basis of the Qi values, the alternatives can be ranked and the case company can choose a 3PL for its operations. The ranking of the 3PLs is C-B-H-D-A-G-F-E-I in decreasing order of preference, as shown in Table 5.

6. CONCLUSIONS & FUTURE SCOPE
The VIKOR method was developed as an MCDM method to determine the preference ranking from a set of alternatives in the presence of conflicting criteria. The obtained compromise solution can be accepted by the decision makers because it provides a maximum group utility for the "majority" and a minimum individual regret for the "opponent". In this paper, we have studied the use of AHP operators in the VIKOR method and developed an integrated AHP-VIKOR method to solve multi-criteria problems with conflicting and non-commensurable criteria, specifically considering the complex attitudinal character of the decision maker. In the present work, the results from the mobile phone case study indicate that


3PL service provider 'C' is the first choice for the case company. An analysis of the data provided by 3PL service provider 'C' reveals that this logistics firm takes care of environmental aspects such as the proper disposal of end-of-life and used products. The results indicate that logistics firm 'C' scored high values on almost all quantitative attributes compared to the other logistics service providers. Environmental issues are gaining more importance day by day in the Indian business environment, so the most important managerial implication of the developed model is that only firms that deal significantly with environmental issues will succeed in a competitive business environment. The proposed hybrid model has identified several significant attributes for the evaluation of logistics firms for the conduct of reverse logistics operations with respect to mobile phone manufacturing companies. This may support management and consultants in making strategic decisions such as the selection of a logistics firm, a new plant site, or a business partner in a competitive business environment. In the present work, 10 relevant attributes have been identified for the evaluation and selection of a 3PL service provider for reverse logistics operations in the mobile phone manufacturing industry. The developed model provides flexibility in accommodating new attributes according to industry needs from time to time for sound decision making.

In future research, a comparative study may be conducted using other multi-criteria decision-making methods to validate the results obtained by the present method. An analytic network process (ANP) approach may be used to consider the interactions between attributes, and the results could be compared with an interpretive structural modeling (ISM) based approach. Matlab version 11 has been used for the calculations in this work. Customized software may be developed to speed up and simplify the calculations.

REFERENCES

[1] Chen, L. Y., & Wang, T.-C. (2009). Optimizing partners' choice in IS/IT outsourcing projects: The strategic decision of fuzzy VIKOR. International Journal of Production Economics, 120, 233-242.
[2] Efendigil, T., Önüt, S., & Kongar, E. (2008). A holistic approach for selecting a third-party reverse logistics provider in the presence of vagueness. Computers and Industrial Engineering, 54, 269-287.
[3] Merigó, J. M. (2011). Fuzzy multi-person decision making with fuzzy probabilistic aggregation operators. International Journal of Fuzzy Systems, 13, 163-174.
[4] Kaya, T., & Kahraman, C. (2011). Fuzzy multiple criteria forestry decision making based on an integrated VIKOR and AHP approach. Expert Systems with Applications, 38, 7326-7333.
[5] Kaya, T., & Kahraman, C. (2010). Multicriteria renewable energy planning using an integrated fuzzy VIKOR & AHP methodology: The case of Istanbul. Energy, 1-11. doi:10.1016/j.energy.2010.02.051.
[6] Liou, J. J. H., Tsai, C.-Y., Lin, R.-H., & Tzeng, G.-H. (2010). A modified VIKOR multiple criteria decision method for improving domestic airlines service quality. Journal of Air Transport Management, 1-5. doi:10.1016/j.jairtraman.2010.03.004.
[7] Opricovic, S. (2011). Fuzzy VIKOR with an application to water resources planning. Expert Systems with Applications, 38, 12983-12990.
[8] Opricovic, S., & Tzeng, G. H. (2007). Extended VIKOR method in comparison with outranking methods. European Journal of Operational Research, 178, 514-529.
[9] Sanayei, A., Mousavi, S. F., & Yazdankhah, A. (2010). Group decision making process for supplier selection with VIKOR under fuzzy environment. Expert Systems with Applications, 37, 24-30.
[10] Sanayei, A., Mousavi, S. F., & Yazdankhah, A. (2010). Group decision making process for supplier selection with VIKOR under fuzzy environment. Expert Systems with Applications, 37, 24-30.
[11] Sayadi, M. K., Heydari, M., & Shahanaghi, K. (2009). Extension of VIKOR method for decision making problem with interval numbers. Applied Mathematical Modelling, 33, 2257-2262.
[12] Suyabatmaz, A. C., Altekin, F. T., & Sahin, G. (2014). Hybrid simulation-analytical modelling approaches for the reverse logistics network design of a third-party logistics provider. Computers & Industrial Engineering, ISSN: 1984-3046, Vol. 7, No. 2, pp. 37-58.
[13] Yang, W. E., Wang, J. Q., & Wang, X. F. (2012). An outranking method for multi-criteria decision making with duplex linguistic information. Fuzzy Sets and Systems, 198, 20-33.


Appendix 1

[Flow chart: environmental legislations drive reverse logistics implementation; RL activities are either executed internally or outsourced. Outsourcing requires selecting the most appropriate 3PRLP through multi-criteria decision aid (identify the DM's rationality, identify the set of criteria, assign weights to the criteria), followed by decision making: evaluate and analyze the 3PRLPs and identify the 3PRLP using MCDA methods.]

Figure 1. Adapted and modified from Guarnieri, P. et al. (2014)


Comparison of Different Types of Microstrip Patch Antennas

Sumanpreet Kaur Sidhu
Yadavindra College of Engineering, Guru Kashi Campus, Punjabi University
Talwandi Sabo, Punjab, India
ersumansidhu@gmail.com

Jagtar Singh Sivia
Yadavindra College of Engineering, Guru Kashi Campus, Punjabi University
Talwandi Sabo, Punjab, India
jagtarsivian@yahoo.com

ABSTRACT
In this paper, microstrip patch antennas with six different shapes, i.e. rectangular, circular, square, elliptical, pentagonal and hexagonal, are implemented using Ansoft HFSS version 13.0.0 software. The antennas are designed on Rogers RT/duroid 5880 material with dielectric constant 2.2 and thickness 3.2 mm. The different performance parameters such as return loss, gain and bandwidth of these antennas are compared. The operating frequency for all these antennas is taken as 7.5 GHz. All the antennas are fed with a probe feed. It is found that the pentagonal microstrip patch antenna has better results at this frequency than all the other shapes.

Keywords
Microstrip; patch; antenna; gain; return loss; bandwidth.

1. INTRODUCTION
Wireless technology provides a less expensive and more flexible way of communication. The antenna is one of the important elements of wireless communication systems. According to the IEEE Standard Definitions, an antenna or aerial is defined as "a means of radiating or receiving radio waves" [1]. In other words, antennas act as an interface for electromagnetic energy propagating between free space and a guided medium.

Microstrip patch antennas are widely used in the microwave frequency region because of their simplicity and compatibility with printed-circuit technology, making them easy to manufacture either as stand-alone elements or as elements of arrays. The advantages of microstrip antennas make them suitable for various applications such as vehicle-based satellite link antennas [2], global positioning systems (GPS) [3], radar for missiles and telemetry [2], and mobile handheld radios or communication devices [3]. In its simplest form, a microstrip patch antenna consists of a patch of metal, generally rectangular or circular (though other shapes are sometimes used), on top of a grounded substrate [9], as shown in fig 1.

Fig 1: Microstrip patch antenna [1]

The commonly available shapes of patch antennas are rectangular, circular, dipole, triangular, square and elliptical, with the rectangular and circular shapes the most common. The various shapes (rectangle, triangle, circular ring, dipole, circle, square, ellipse) are illustrated in fig 2.

Fig 2: Commonly available shapes of microstrip patch antenna

Basically, there are four feeding techniques available while designing an antenna: line feed [5], probe feed [1, 8], aperture coupled feed [4] and proximity coupled feed [6]. The feed used here is the probe feed (or coaxial feed). The antennas are designed using standard equations and simulated with professional software called High Frequency Structure Simulator (HFSS), which proves to be a useful tool for analyzing the working of any antenna [7]. Before designing any antenna, its working and simulation are checked with this software so that any change, if required, can be made.

2. DESIGNING OF ANTENNAS
The designing of the microstrip antennas with circular, rectangular, elliptical, pentagonal, hexagonal and square patches is done with HFSS software [7]. The position and dimensions of the substrate are kept constant throughout. The dimensions of the substrate are as follows:
Position: -50, -45, 0
XSize: 100, YSize: 90, ZSize: 3.2
The radius of the circular patch is taken as 30 mm. For the rectangular patch the length is 60 mm and the breadth is 50 mm. The elliptical patch has a major axis with radius 20 mm and a minor axis with radius 10 mm. The pentagonal, hexagonal and square patches are designed with sides of 44.36 mm, 35 mm and 60 mm respectively. All of these shapes are illustrated in fig 3.
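As one illustrative sketch of the "standard equations" used in patch design, the transmission-line design formulas (as in Balanis [1]) give fundamental-mode dimensions for a rectangular patch at 7.5 GHz on this substrate. Note that they yield a much smaller patch than the 60 mm x 50 mm one used in this paper, so this is a sketch of the method, not a reproduction of the paper's geometry:

```python
# Transmission-line model design equations for a rectangular microstrip
# patch (fundamental TM10 mode), as given in standard texts such as
# Balanis [1]. Illustrative only.
import math

c  = 3e8        # speed of light (m/s)
f  = 7.5e9      # design frequency (Hz)
er = 2.2        # dielectric constant of RT/duroid 5880
h  = 3.2e-3     # substrate thickness (m)

W = c / (2 * f) * math.sqrt(2 / (er + 1))                  # patch width
e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / \
     ((e_eff - 0.258) * (W / h + 0.8))                     # fringing extension
L = c / (2 * f * math.sqrt(e_eff)) - 2 * dL                # patch length

print(f"W = {W * 1e3:.1f} mm, L = {L * 1e3:.1f} mm")   # W = 15.8 mm, L = 11.2 mm
```

The effective dielectric constant accounts for fringing fields in air, and the length extension dL corrects for the electrically longer patch caused by those fields.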


After designing the various patches, the simulation is carried out and the performance parameters such as return loss, gain and bandwidth are found. These results are then compared to find the best of these microstrip antennas.

[Fig 3 shows the six patch geometries with their HFSS placement parameters: (a) circular patch, centre position (0, -10, 3.2), radius 30 mm; (b) rectangular patch, position (-20, -30, 3.2), length 60 mm, breadth 50 mm; (c) elliptical patch, centre position (-3, -3, 3.2), major radius 20 mm, ratio 2; (d) pentagonal patch, centre position (2, 5, 3.2), start position (-30, -15, 3.2), side 44.36 mm; (e) hexagonal patch, centre position (0, -5, 3.2), start position (0, 30, 3.2), side 35 mm; (f) square patch, centre position (-30, -30, 3.2), size 60 mm.]

Fig 3: Microstrip Antenna with (a) Circular patch (b) Rectangular patch (c) Elliptical patch (d) Pentagonal patch (e) Hexagonal patch (f) Square patch

3. RESULTS AND DISCUSSION
Results of the microstrip patch antennas with rectangular, circular, square, elliptical, pentagonal and hexagonal patches, in terms of performance parameters such as return loss, gain and bandwidth, are shown in Table 1.

Table 1: Comparison of various performance parameters of different patches of microstrip antenna

S. No.  Shape of patch                           Return loss (dB)  Gain (dB)  Lower cut-off    Higher cut-off   Bandwidth (GHz)
                                                                              frequency (GHz)  frequency (GHz)
1.      Circle (radius: 30 mm)                   -16.50            8.1756     6.80             7.43             0.63
2.      Rectangle (60, 50, 0)                    -19               6.7619     7.33             8.47             1.14
3.      Ellipse (major axis: 20 mm, ratio: 2)    -25               7.2326     6.90             7.96             1.06
4.      Pentagon (side: 44.36 mm)                -23               9.0943     7.30             8.54             1.24
5.      Hexagon (side: 35 mm)                    -22               7.4406     7.30             8.52             1.22
6.      Square (60, 60, 0)                       -21               8.2799     6.53             7.45             0.92

From the table it is clear that at the operating frequency of 7.5 GHz the minimum return loss [1] is found in the case of the elliptical patch, with -25 dB. At the same time, the gain [1] is maximum in the case of the pentagonal microstrip patch antenna, with a value of 9.0943 dB. The bandwidth [1] is also found to be maximum in the pentagonal microstrip patch antenna, with a value of 1.24 GHz, and the return loss of the pentagonal patch is comparable with that of the elliptical patch. Hence the best results are obtained with the pentagonal microstrip patch antenna.

Fig 4: Return loss vs. frequency and 3D Radiation Pattern of circular microstrip patch antenna

Fig 5: Return loss vs. frequency and 3D Radiation Pattern of rectangular microstrip patch antenna


The graphs of return loss vs. frequency and the 3D radiation patterns of the six different microstrip patch antennas are illustrated in figs 4, 5, 6, 7, 8 and 9. Fig 4 illustrates the return loss vs. frequency and 3D radiation pattern of the circular microstrip patch antenna.

Fig 6: Return loss vs. frequency and 3D Radiation Pattern of elliptical microstrip patch antenna

Fig 7: Return loss vs. frequency and 3D Radiation Pattern of pentagonal microstrip patch antenna

Fig 8: Return loss vs. frequency and 3D Radiation Pattern of hexagonal microstrip patch antenna

Fig 9: Return loss vs. frequency and 3D Radiation Pattern of square microstrip patch antenna

The return loss vs. frequency and 3D radiation patterns of the rectangular patch, elliptical patch, pentagonal patch, hexagonal patch and square patch microstrip antennas are shown in figs 5, 6, 7, 8 and 9 respectively.

4. CONCLUSION
From the simulation analysis of the rectangular, circular, square, elliptical, pentagonal and hexagonal microstrip patch antennas it is observed that at the operating frequency of 7.5 GHz the pentagonal patch antenna gives the best results, with a gain of 9.0943 dB and a bandwidth of 1.24 GHz.

REFERENCES
[1] Constantine A. Balanis, "Antenna Theory: Analysis and Design", 2nd Edition, Wiley India (P.) Ltd., 2007.
[2] Mailloux, R. J., et al., "Microstrip antenna technology", IEEE Trans. Antennas and Propagation, Vol. AP-29, January 1981, pp. 2-24.
[3] James, R. J., et al., "Some recent developments in microstrip antenna design", IEEE Trans. Antennas and Propagation, Vol. AP-29, January 1981, pp. 124-128.
[4] Pozar, D. M. and Targonski, S. D., "Improved Coupling for Aperture Coupled Microstrip Antennas", Electronics Letters, Vol. 27, No. 13, 1991, pp. 1129-1131.
[5] Lee, H. F. and Chen, W., "Advances in Microstrip and Printed Antennas", New York: John Wiley & Sons, 1997.
[6] Pozar, D. M. and Kauffman, B., "Increasing the bandwidth of a microstrip antenna by proximity coupling", Electronics Letters, Vol. 23, No. 8, 1987, pp. 368-369.
[7] Ansoft HFSS version 9: Overview, 2003.
[8] Chen, J. S. and Wong, K. L., "A single-layer dual-frequency rectangular microstrip patch antenna using a single probe feed", Microwave Opt. Technol. Lett., Vol. 11, pp. 83-84, Feb. 5, 1996.
[9] Volakis, J. (Ed.), "Antenna Engineering Handbook", Chapter 7.


Artificial Vision towards Creating the Joys of Seeing for the Blind

Aastha, BGIET Sangrur (bansalaastha4140@gmail.com)
Virpal Kaur, BGIET Sangrur
Anjali Gulati, Chitkara Uni., Rajpura

ABSTRACT
In 1929, it was discovered that stimulating the visual cortex of an individual led to the perception of spots of light, known as phosphenes. The aim of artificial human vision systems is to attempt to utilize the perception of phosphenes to provide a useful substitute for normal vision. Currently, four locations for electrical stimulation are being investigated: behind the retina (the Bionic Eye); in front of the retina (cortical implant); the optic nerve; and the visual cortex. This review discusses artificial human vision technology and requirements. [6]

1. INTRODUCTION
Blindness: In 1997, the World Health Organization estimated that there were close to 150 million individuals with significant visual disability worldwide. In economically developed societies, the leading cause of blindness and visual disability in adults is diabetic retinopathy. [7] In general, more than two thirds of today's blindness could be prevented or treated by applying existing knowledge and technology. Nearly half of all blindness is due to cataract and a quarter of the world's blindness is due to trachoma. Other major causes of blindness are glaucoma, trachoma, onchocerciasis, and exophthalmia. [8]

Blind Mobility: Blind mobility is affected by physical and mental health factors, such as multiple disabilities. Age is a mobility issue, as many of the blind are elderly, which can restrict their ability to use some mobility aids. Many congenitally blind children have hypotonia, or abnormally low muscle tone, which can affect mobility. [10]

Figure 1. The Visual Pathway

In 1996, the US Research Council gave the following summary of information needs for the blind [11]:
• Detection of obstacles in the travel path from ground level to head height for the full body width.
• Travel surface information.
• Detection of objects bordering the travel path.
• Distant object and cardinal direction information.
• Landmark location and identification information.
• Information enabling self-familiarization and mental mapping of an environment.

Most existing mobility aids for the blind provide information in either tactile or auditory form.

2. EXISTING TECHNOLOGIES AND METHODOLOGIES
Artificial vision systems are being studied worldwide, and many types of artificial vision systems have been proposed. The basic concept of an artificial vision system is: "Electrically stimulating nerve tissues associated with vision helps to transmit electrical signals with visual information to the brain through intact neural networks." [9] Many systems are being proposed because of the potential for developing various types of devices with existing technologies. These take into consideration the patient's condition, the nerve tissues subject to stimulation, and their role in the visual network. This review discusses the systems being researched and developed these days. These devices mainly use electrodes and tissues targeted for electrical stimulation. [10] A totally implantable system with all the functions, however, has not yet been developed with present technology for any type of system. The systems in use nowadays are electrodes implanted in the body, which work with several devices worn outside the body.

The main techniques used under this technology nowadays are mentioned below:
• Artificial human vision for the blind by connecting a television camera to the visual cortex
• The Bionic Eye
• Cortical Implant
• Optic Nerve Implant

These techniques are discussed in detail here one by one:

2.1 Artificial vision for the blind by connecting a television camera to the visual cortex:


This new visual prosthesis produces a black and white display of visual cortex "phosphenes". The system was primarily designed to promote independent mobility, not reading. A battery has been provided for the electronic interface that is RF-isolated from line currents for safety. This permits the volunteer to directly watch television and use a computer, including access to the internet. Because of their potential importance for education, and to help integrate blind people into the workplace, such television, computer and internet capabilities may prove even more valuable in the future than independent mobility. This system can also store past trials, which can be viewed by the volunteer through an RF link to a remote videotape recorder and viewing screen. Therefore, this system allows real-time camera monitoring, as well as post-trial analysis, by the volunteer. [2]

The television camera, which is built into a pair of sunglasses (Figure 2), the prosthesis as worn by the blind volunteer (Figure 3), and the complete system are described schematically, including both the television/computer/Internet interface and the remote video screen/VCR monitor. [2]

Figure 2. Blind volunteer with sub-miniature TV camera mounted on the right lens of his sunglasses, and the laser pointer (position monitor) on the left temple piece.

Figure 3. The complete artificial vision system showing the computer and electronics package on the belt with output cable to the electrodes on the brain.

2.2 The Bionic Eye:
This article discusses the first bionic eye implant, in Dianne Ashworth, a 54-year-old blind woman. After the implantation she recalled seeing flashes, shapes and light following the surgery. Using the information given by Ms Ashworth's implant, research is being done on how to make it exactly like a proper, working eye. The bionic eye is a visual prosthesis which helps people with optic impairment see. [1]

The main constituents of the bionic eye are:
• A camera attached to a pair of glasses, which produces high-frequency radio signals.
• A microchip inserted into the retina. The signals are turned into electrical impulses by electrodes in the chip. This replaces the cells in the retina which are connected to the optic nerve. [3]
• These impulses are transferred through the optic nerve to the brain, where they are interpreted into a picture.

The invention is predicted to be most beneficial to those with macular degeneration: damage to the macula, the central part of the retina where light is focused and changed into nerve signals in the brain. Scientists hope to develop an eye that has anywhere from 50 to 100 electrodes [1], which would allow patients to see a much more complete visual image. While the image would undoubtedly still look hazy and unclear to a seeing person, this is considered an enormous leap forward. Currently, the eye allows people to do things such as find their way through a building, find a door or window, and avoid obstacles that might be in their path. [4]

Figure 3. The Bionic Eye Implant [5]

2.3 Cortical Implant:
This system stimulates the visual cortex, the part of the brain responsible for vision, with electrode arrays. The visual cortex is located at the lower back section of the head that bulges out slightly. The brain is the final destination of the visual information network. [6]
The main challenges for this stimulation approach are the many risks involved during and after surgery, and other safety issues.

Figure 4. The Cortical Implant [5]
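As a minimal illustrative sketch (not the actual signal chain of any of the systems above), the camera-to-phosphene mapping of Section 2.1 can be thought of as averaging the camera image over an electrode grid and thresholding each cell to a black-or-white phosphene:

```python
# Hypothetical camera-to-phosphene mapping: average the image over an
# electrode grid, then threshold each cell to decide whether its
# electrode is stimulated (1) or not (0).
def phosphene_map(image, grid=(8, 8), threshold=128):
    """image: 2D list of grayscale pixels (0-255); returns a grid of 0/1."""
    rows, cols = len(image), len(image[0])
    gr, gc = grid
    out = []
    for i in range(gr):
        line = []
        for j in range(gc):
            # pixel block feeding electrode (i, j)
            block = [image[r][c]
                     for r in range(i * rows // gr, (i + 1) * rows // gr)
                     for c in range(j * cols // gc, (j + 1) * cols // gc)]
            line.append(1 if sum(block) / len(block) >= threshold else 0)
        out.append(line)
    return out

# A bright vertical bar on a dark background survives the downsampling.
img = [[255 if 24 <= c < 32 else 0 for c in range(64)] for r in range(64)]
pattern = phosphene_map(img)
print(sum(sum(row) for row in pattern))   # 8 lit electrodes, one per row
```

Even this crude 8x8 pattern preserves the location and orientation of a high-contrast obstacle, which is the kind of information the mobility-oriented systems above aim to convey.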


2.4 Optic Nerve Implant:
The optic nerve is composed of a bundle of nerve fibers from retinal output cells called retinal ganglion cells (RGCs). The bundle exits from the eye and is connected to nerve cells in the brain. An artificial vision system with an electrode array having embedded electrodes in a film substrate wrapped around the optic nerve from outside, as well as a type stimulating the optic nerve head with wire electrodes, have been proposed. This is considered a feasible method. [7]

Figure 5. The Optic Nerve Implant [5]

3. KEY ISSUES
Artificial human vision (AHV) involves the electrical stimulation of a component of the human visual system, which may invoke the perception of phosphenes, or points of light. Four locations for AHV implants are currently utilized: subretinal, epiretinal, optic nerve and the visual cortex (using intra- and surface electrodes). [9] The only commercially available system is the cortical surface stimulation device from the Dobelle Institute. The most impressive gains in vision have been reported from the subretinal device developed by the Optobionics Corp.; however, these results may not be related to the microphotodiode device used. Psychophysical and mobility assessment standards would help in comparing AHV systems with other technical aids for the blind. [10]

REFERENCES
[1] Sarah Irvine Belson, The Special Ed Wiki, Bionic Eye.
[2] Wm. H. Dobelle, ASAIO Journal 2000, Artificial Vision for the Blind by Connecting a Television Camera to the Visual Cortex.
[3] Bridie Smith, 2014, Vision for high-performance bionic eye jeopardized by lack of funds.
[4] Terasawa, Y., Osawa, K., Tashiro, H., Noda, T., Ohta, J., Fujikado, T., 2013, Engineering Aspects of Retinal Prosthesis by Suprachoroidal Transretinal Stimulation.
[5] Terasawa, Y., Ozawa, M., Ohta, J., Tano, Y., 2008, Bulk Micromachining-based Multielectrode Array for Retinal Prostheses.
[6] Jason Dowling, 2005, Artificial Human Vision.
[7] Agnew WF, McCreery DB (Eds.), Prentice Hall, NJ, 1990, Neural Prostheses: Fundamental Studies.
[8] World Health Organization, 1997, Blindness and Visual Disability: seeing ahead - projections into the next century.
[8] World Health Organization, 1999, Blindness and visual disability: other leading causes worldwide.
[9] Chawla H., 1981, Essential Ophthalmology.
[10] World Health Organization factsheet, 1997, Blindness and visual disability: socioeconomic aspects.
[11] Blasch BB, Weiner WR, 1997, Foundations of Orientation and Mobility.


A Review: Nanotechnology
Sonali Tyagi, BGIET, Sangrur
Savaljeet Kaur, BGIET, Sangrur (savaljeet999@gmail.com)
Shilpa Rani, BGIET, Sangrur

ABSTRACT: Nanotechnology is the development of engineering devices in the nanometer range. Within the next few decades, vast manufacturing capability will be made possible by nanotechnology. The purpose of this paper is to look into the present and future aspects of nanotechnology. The paper gives a brief description of what nanotechnology is, its uses, and its applications in various fields. [2]

KEYWORDS: MIT - Massachusetts Institute of Technology; DNA - Deoxyribonucleic Acid; NM - Nanometer.

1. INTRODUCTION
Nanoscience is concerned with the production, characteristics and design of materials having at least one spatial dimension in the size range of 1-100 nanometres. Nanotechnology is the study of devices, products and processes based upon individual or multiple integrated nanoscale components. [1]
An exciting but challenging aspect of nanoscience is that matter acts differently when particles are nanosized. This means that many macro-level concepts of physics cannot be applied to understand nanoscience. [1] For example, we cannot apply the principles of classical physics that are otherwise applicable to the motion of macro-sized objects; at the nanoscale level we have to apply a quantum mechanical description. [2]

2. PROPERTIES OF NANOPARTICLES
The properties of a substance are usually measured by taking a large sample volume. However, when these properties are checked for the same material at the nanoscale level, large differences are observed in many physical properties. This implies that at the nanoscale level, physical properties become size dependent. [1]

2.1 Optical properties: Properties such as colour and transparency are observed to change at the nanoscale level. For example, gold in bulk appears yellow in colour while nanosized gold appears red in colour. The main reason for the change in optical properties at the nanoscale level is that nanoparticles are so small that the electrons in them are not as free to move as in bulk material. Because of this restricted movement of electrons, nanoparticles react differently with light compared to bulk material. [1]

2.2 Electrical properties: These are also observed to change at the nanoscale level. They include the conductivity and resistivity of a material. For example, in the case of a carbon nanotube, conductivity changes with the changing diameter or area of cross section. Carbon nanotubes can be conducting or semiconducting, while graphite is a good conductor of electricity. [3]

2.3 Mechanical properties: It is observed that physical properties like strength and melting point also change at the nanoscale level. For example, bulk steel has higher strength than graphite, yet at the nanoscale level cylinders of carbon are 100 times stronger than steel and are very flexible. [9]

2.4 Chemical properties: The percentage of surface atoms in nanoparticles is large compared with bulk objects; thus the reactivities of nanomaterials are higher than those of bulk materials. [10]

2.5 Magnetic properties: These properties also show drastic change in nanoparticles due to surface effects on magnetic interactions.

3. HISTORY
The history of nanotechnology traces the development of the concepts and experimental work falling under the broad category of nanotechnology. In 1959, Feynman gave an after-dinner talk describing molecular machines building with atomic precision. In 1974, Taniguchi used the term "nano-technology" in a paper on ion-sputter machining, and in 1977 Drexler originated molecular nanotechnology concepts at MIT. In 1981, the first technical paper on molecular engineering to build with atomic precision appeared. In 1986, the first book was published and the first organization was formed. In 1990, Japan's STA began funding nanotech projects, and in 1997 the first company, Zyvex, was founded and the first design of a nanorobotic system was proposed. By 1998 the first DNA-based nanomechanical device had been designed. In 2001, the first report on the nanotech industry appeared and the U.S. announced its first centre for military applications. In 2003, Congressional hearings on the implications were held. The 2006 National Academies nanotechnology report called for experimentation toward molecular manufacturing, and the Technology Roadmap for Productive Nanosystems was released in 2008. The year 2009 took nanotechnology to a new height with an improved walking DNA nanorobot, and by 2011 the first programmable nanowire circuits for nanoprocessors had been demonstrated. [7]

4. USES
Nanotechnology is used to make materials effectively stronger, lighter, more durable, more reactive or better electrical conductors. It benefits us as it is possible to tailor the essential structures of materials at the nanoscale to achieve specific properties. Some of its important uses are:

4.1 Carbon nanotubes: These are sheets of graphite rolled up to make a tube, that is, carbon macromolecules in cylindrical form. The nanotube dimensions are variable and can be as small as 0.4 nm in


diameter. The carbon-carbon bonds are sp2 hybridised; due to this, carbon nanotubes have extra strength. [5]

4.2 Nanofilms: A nanofilm is an assembly of quantum dot layers with a gradient of nanoparticle size, composition or density. Nanofilms are used to protect or treat surfaces in eyeglasses, computers and cameras.

4.3 Nanoscale transistors: These are advanced-design, reduced-size transistors. They minimize the leakage of current when the device is in the off state. In nanoscale transistors the voltage of operation can be reduced without loss of performance, which leads to less power dissipation per operation. [4]

4.4 Water treatment: Nanotechnology offers the potential of nanomaterials for the treatment of surface water, ground water and waste water contaminated by microorganisms and toxic metal ions.

4.5 Antimicrobial bandages: An antibacterial bandage using nanoparticles of silver ions was created by Sir Robert Burrell and is very useful nowadays. These silver ions suppress the cellular respiration of microbes and kill them. [5]

4.6 Scratch-resistant coatings: These types of coatings are commonly used in everything from cars to eyeglasses. It has been discovered that scratch-resistant coatings can be made more effective by adding aluminium silicate nanoparticles, which increase the resistance to chipping and scratching.

4.7 Clothing: A thin layer of zinc oxide nanoparticles is used to coat the fabric of clothes, which gives better protection from ultraviolet radiation. Nanoparticles in the form of hairs or whiskers help repel water and make the cloth stain resistant. [8]

Every day, new products such as wrinkle-resistant cosmetics and liquid crystal displays (LCDs) are coming onto the market using the processes of nanotechnology. There are

5.1.3 Diagnostic techniques: To monitor the level of nitric oxide in the blood stream, sensors have been developed using carbon nanotubes embedded in a gel. As the nanoparticles attach to molecules in the blood stream, they indicate the start of an infection. When the sample is scanned, scattering from the nanoparticles enhances the Raman signal, which allows detection of the molecules and indicates infectious disease at a very early stage.

5.2 Electronics: Nanotechnology holds many applications in the field of electronics, or nanoelectronics. It increases the efficiency of electronic devices and reduces their weight and power consumption. [7]

5.2.1 Flexible circuits: We are aiming for a combination of flexibility, ease of fabrication and lower power requirements using cadmium selenide nanocrystals.

5.3 Silver nanoparticle ink: This is used to form the conductive lines needed for circuit boards. It is a method to print prototype circuit boards using inkjet printers. [8]

5.4 Nanowires: Electrodes made from nanowires enable flat panel displays to be flexible and thinner than current flat panel displays. Semiconductor nanowires are used to build transistors and integrated circuits.

5.5 Food: The importance of nanotechnology is no less in food science. Some nanomaterials are being developed that will affect not only the taste of food but also food safety and the health benefits that food delivers.

5.6 Silver nanoparticles: With the help of these nanoparticles, storage bins are being produced with silver nanoparticles embedded in the plastic. They kill bacteria and minimize health risks.

5.6.1 Clay nanocomposites: In lightweight bottles, cartons and packaging films, clay nanocomposites are used to provide an impermeable barrier to gases like
also many other uses of nanotechnology rather than the oxygen and carbon dioxide.
above discussed.
5.6.2 Zincoxide Nanoparticles:- The strength
5. APPLICATIONS and stability of the plastic film used for packaging can be
improved using zinc oxide nano particles. It blocks
Nanotechnology is not strucked to any particular field. Ultraviolet rays and provide anti bacterial protection.
These technologies are very vast and have many
applications in various fields. Some of the fields are 5.7 Battery:-Nanotechnology reduces the possibility
described below:- of batteries catching fire, increase the available power
from a battery and decrease the time required to recharge
5.1 Medicine: - The application of Nanotechnology the battery.
in the field of medicine is to detect and treat damage to the
human body and disease.[4] 5.7.1 Nitrogen-Doped CNT Catalyst: - This
type of catalyst is used in lithium air batteries to store upto
5.1.1 Drug Delivery:- It is a technique which 10 times as much energy as lithium ion batteries.
reduces damage to healthy cells and allows earlier
detection of disease. In drug delivery technique particles 5.7.2 Lithium Ion Battery:-In this type of battery
are engineered so that they are attracted to diseased cells, silicon nanoparticles are used in the anode of battery that
which allow direct treatment of disease. can recharge battery within 10minutes.[10]

5.1.2 Therapy Techniques:- Nanosponges have 5.7.3 Nanotubes on grapheme:- The electrodes
been developed which absorb toxin and remove them from made from these have very high surface area and very low
blood stream. Nanosponges are polymer nanoparticles electrical resistance.
which are coated by red blood cell membrane. Breast
cancer tumours can be destroyed using targeted heat The application of nanotechnology are also in other fields
therapy. Nanotubes absorb infrared light from laser and such as fuel cells, solar cells, space, fuels, better air
produces heat that burns the tumour. [6] quality, cleaner water, fabric and chemical sensors.[8]


REFERENCES

[1] Randhir Singh, Interactive Physics (book).
[2] Debnath Bhattacharya, Shashank Singh, Niraj Satnalika, "Nanotechnology, Big Things From a Tiny World: A Review", 2009.
[3] Handbook on Nanoscience, Engineering and Technology, 2nd Ed., Taylor and Francis, 2007.
[4] M. Ellin Doyle, "Nanotechnology: A Brief Literature Review", Food Research Institute, University of Wisconsin–Madison, Madison, WI 53706.
[5] Laura Wright, "Nanotech Safety: More on How Little We Know", OnEarth Magazine, December 12, 2007.
[6] Chris Phoenix and Mike Treder, "Safe Utilization of Advanced Nanotechnology", Center for Responsible Nanotechnology (CRN), January 2003, revised December 2003. http://www.crnano.org/safe.htm
[7] "Research Strategies for Safety Evaluation of Nanomaterials, Part IV: Risk Assessment of Nanoparticles".
[8] Jui-Ming Yeh, Shir-Joe Liou, Chih-Guang Lin, Yen-Po Chang, Yuan-Hsiang Yu, Chi-Feng Cheng, J. Appl. Polym. Sci., 2004.
[9] S. Girand, S. Bourbigot, M. Rochery, I. Vroman, L. Tighzert, R. Delobel, Polym. Degrad. Stab., 2002.
[10] F.A.P. Thomas, J.A.W. Stoks, A. Buegman, US Patent 6291054, September 18, 2001.


A Review: 5G Technology
Varinder Bansal Gursimrat Singh Vipin Bansal
BGIET Sangrur BGIET Sangrur BGIET Sangrur

ABSTRACT - This review gives complete information about what is currently in use, what new technologies are being added to the existing technology, and how the technology reviewed here may develop further. The reader will find the history of the earlier generations, their speed standards and bandwidths, and the other conditions required for them.

Keywords - 1G: First Generation; Mbps: Megabits Per Second; FDMA: Frequency Division Multiple Access

1. INTRODUCTION

Mobile and wireless networks have developed greatly nowadays. Wireless communication was first introduced in 1877 and was based on AM schemes, limited to a few systems only. Due to increasing needs, new systems were developed during World War II and with the development of FM radio communication. At present we mainly use 2.5G and 3G networks, which have very limited capacity; a fast wireless network is needed to meet today's needs. 5G Technology, which stands for 5th Generation Mobile technology, is proposed after the network history of 1G, 2G, 2.5G, 3G and 4G. 5G technology may change all present network technologies, offering a wide range of communication with very high speed and bandwidth. The 5G terminals consist of new security techniques and error control systems. 5G wireless uses OFDM and millimeter-wave radio that enable a data rate of 20 Mbps in a frequency band of 2-8 GHz. 5G communication is capable of supporting the wireless World Wide Web (wwww). This review also gives the concept of an intelligent internet phone, where the mobile can prefer the finest connection.

Historical Perspective

a) 1G - This network design began in 1970 in Bell's laboratory and was implemented in 1984. The service under this network is analog voice communication. The bandwidth provided is 1.9 kbps. The multiplexing technique is FDMA. The core network is the PSTN.

b) 2G - This network began in 1980 and was implemented in 1991. The service provided is digital voice communication. The bandwidth is 14.4 kbps. The multiplexing techniques are TDMA and CDMA. The core network is the PSTN.

c) 2.5G - This network began in 1985 and was implemented in 1999. The services provided are higher capacity and packetized data. The bandwidth is 384 kbps. The multiplexing techniques are TDMA and CDMA. The core network is the PSTN and a packet network.

d) 3G - This network began in 1990 and was implemented in 2002. The services under this network are higher-capacity broadband data up to 2 Mbps. The bandwidth is 2 Mbps. The multiplexing technique is CDMA. The core network is a packet network.

e) 4G - This network began in 2000 and was implemented in 2010. The services under this network are completely IP-based, with speeds up to hundreds of Mbps. The bandwidth is 200 Mbps. The multiplexing technique is CDMA. The core network is the Internet.

2. ADVANCEMENT FROM 4G

a) Multi-Mode User Terminals - 4G technology is based on a single-station, single-user system, but 5G is a single-station, multi-user system. This is the major achievement of this technology.

b) Security - The security level increases in this technology. Reconfigurable and lightweight protection mechanisms are designed.

c) Data Encryption - There is a data encryption technique in this technology, which makes it difficult to intercept the data between the transmitter and receiver.

d) A Super-Efficient Mobile Network - It delivers a better performing network for lower investment cost. It addresses the mobile network operators' need for the unit cost of data transport to fall at roughly the same rate as the volume of data demand rises.

e) A Converged Fiber-Wireless Network - This uses wireless Internet access with a bandwidth of 20 to 60 GHz in the millimeter band, so as to allow very wide bandwidth radio channels able to support data access speeds of up to 10 Gbit/s.

f) Speed of Delivery - One of the main benefits of 5G over 4G will not be its delivery speed, which is expected to be between 10 Gbps and 100 Gbps, but its low latency. At present, 4G latency is between 40 ms and 60 ms, which is not low enough to provide real-time response.

3. KEY CONCEPTS OF 5G

1) No zone issues in this network technology.
2) No limited access: the user can access unlimited data.


3) Several technologies such as 2G, 2.5G, 3G and 4G can be connected simultaneously along with 5G.
4) All the technologies can use the same frequency spectrum in a very efficient manner. This is also called smart radio.
5) New features such as online TV, newspapers and research with advanced resolution.
6) High-altitude stratospheric platform station systems.

4. FEATURES OF 5G NETWORK TECHNOLOGY

1) 5G technology has high resolution and high bandwidth.
2) 5G's prospective ultra-low latency could range between 1 ms and 10 ms. This would allow a spectator in a football stadium to watch a live stream from an alternative camera angle.
3) The high-quality services of 5G technology are based on policies to avoid errors.
4) 5G technology has a very large bandwidth, supporting up to 65,000 connections.
5) The uploading and downloading speed of 5G technology will be extremely high.

5. VISIONS AND REQUIREMENTS FOR 5G NETWORKS

a) The Internet of Things - Analysts predict that by the year 2020 each person in the UK will own and use 27 internet-connected devices, and there will be 51 billion connected devices worldwide: smartphones, tablets and smart watches, fridges, cars and even smart clothes. Some of these will require significant data to be shifted at very high speed, while others might just need very small amounts of data to be sent and received. The 5G system can automatically switch and provide bandwidth according to the requirements of the service.

b) Millimeter-Wave Communication - To satisfy the requirements of increased traffic and new technology services, additional frequency bandwidth is required, greater than the 4G network bandwidth. To resolve this, the use of millimeter-wave frequency bands is a must to overcome the problem of scarce spectrum.

6. IMPLEMENTATION

As part of a "heterogeneous network", the points, or cells, will be used for LTE-A. Cells will automatically talk to each other to provide the best and most efficient service no matter where the user is. Larger cells will be used in the same way as they are now, with broad coverage, but urban areas will be covered by multiple smaller cells fitted in lampposts, on the roofs of shops and malls, and in the pillars of buildings. Each of these will make the signal more stable.

7. TECHNOLOGIES BEHIND

5G radio access will be built upon both new radio access technologies (RAT) and evolved existing wireless technologies (LTE, HSPA, GSM and Wi-Fi), together with the idea of utilizing millimeter-wave frequencies. This frequency range lies between 30 and 300 GHz, much higher than today's network frequencies. The main benefit of using this range is that it is scarcely used by other broadcast technologies, so it provides more speed and data for network usage. Millimeter-wave frequencies do not pass through solid objects easily, and it is difficult to keep the link stable over long distances, which is why this range could not be used in previous mobile networks. As a result, a 5G network will use a lot of little base stations rather than relatively few large masts. These smaller base stations will be able to share data between one another as well as with everyone's phones, and easily detect how much data a user requires at a given time.

8. CONCLUSIONS AND FUTURE SCOPE

In conclusion of this survey of 5G mobile communication, 5G technology is designed as an open platform on different layers, from the physical layer to the application layer. The network now in development should offer the best operating system at very low service charges. There have been many improvements from generation 1 to generation 5 in the world of wireless network technology. This upcoming technology is very efficient and less expensive, and a lot is expected from it. 5G technologies will provide high resolution for the use of the network by mobile phone consumers.
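The small-cell argument in Section 7 follows from free-space path loss growing with carrier frequency. As a rough illustration only (free space, no blockage or rain attenuation; the frequencies below are examples, not values from the paper), the standard free-space path loss formula FSPL(dB) = 20 log10(4·pi·d·f/c) shows how much extra loss a millimeter-wave link suffers:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

if __name__ == "__main__":
    d = 100.0  # link distance in metres (illustrative)
    sub6 = fspl_db(d, 2.4e9)  # a typical sub-6 GHz carrier
    mmw = fspl_db(d, 60e9)    # a millimetre-wave carrier
    print(f"2.4 GHz at {d} m: {sub6:.1f} dB")
    print(f"60 GHz at {d} m: {mmw:.1f} dB")
    print(f"extra loss at 60 GHz: {mmw - sub6:.1f} dB")
```

The roughly 28 dB of extra loss at 60 GHz (20·log10(60/2.4)), before any blockage is counted, is why millimeter-wave coverage relies on many small, closely spaced base stations.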


Adaptive Modulation based Link Adaptation for High Speed Wireless Data Networks using Fuzzy Expert System

Kuldeep Singh, Jatin Sharma, Danish Sharma
GNDU Regional Campus, Fattu Dhinga, Kapurthala, Punjab, India
kuldeepsinghbrar87@gmail.com, Jatin.sha11@gmail.com, danishsharma101@gmail.com

ABSTRACT

With the drastic increase in demand for high speed data services, achieving better performance of high speed wireless data networks becomes a challenging task because of the limited available spectrum and the uncertain nature of the wireless communication link. Adaptive modulation based link adaptation is one solution to this problem: it predicts the most efficient modulation technique among those available, depending upon the state of the channel, to ensure high performance of data networks. In this paper, a Fuzzy Expert System is introduced which chooses the most efficient modulation technique among QPSK, 8 QAM, 16 QAM, 32 QAM and 64 QAM depending upon the SNR and BER values and the current modulation type. This system gives satisfactory results for prediction of the better modulation technique to implement adaptive modulation based link adaptation, which further enhances the performance of high speed wireless data networks by ensuring error-free delivery and high spectral efficiency.

General Terms
Adaptive Modulation, Link Adaptation, Spectral Efficiency

Keywords
QPSK, QAM, SNR, BER, Fuzzy Expert System

1. INTRODUCTION

In the era of the information revolution, there has been a drastic rise in internet users over recent years, so it is a challenging task for internet service providers to cope with the requirement for high speed data services from an enormous number of users over a limited frequency spectrum. High speed wireless data systems, such as WCDMA based High Speed Downlink Packet Access (HSDPA), require robust and spectrally efficient communication techniques for transmission of data through noisy and fading channels. One way of achieving a high data rate is to increase the allocated bandwidth, but this is very expensive because of the limited available spectrum, which is shared by a large number of communication systems. One more barrier in high speed wireless communication is the unreliable wireless link suffering from Rayleigh fading, which reduces error performance in a fading environment. Therefore, over the last few years, the research community has been developing several techniques to improve spectral efficiency for serving high data rate requirements over fading channels within the given bandwidth. Link adaptation is one of these techniques [1],[2],[10].

Link adaptation is also called Adaptive Modulation and Coding. In modern high speed wireless data networks, link adaptation is the process of matching the modulation technique and/or coding rate to the conditions on the wireless communication channel, making the necessary changes in modulation and coding depending on the nature of the channel, which may be noisy or clear [3]. The adaptive modulation technique requires some information about the nature of the communication channel, such as the signal to noise ratio (SNR) and bit error rate (BER), at the transmitter side, which is provided through a feedback mechanism in order to increase the overall performance of the wireless data network in terms of spectral efficiency and error-free delivery of data at high speed [2]. An increase in SNR (and correspondingly a decrease in BER) leads to the choice of a higher order modulation technique such as 16 QAM, 32 QAM or 64 QAM, and a decrease in SNR leads to the choice of a lower order modulation technique such as QPSK or 8 QAM. This is because, for a higher value of SNR, the chances of packet loss are reduced, so by choosing a higher order modulation, spectral efficiency can be further enhanced. On the other side, if SNR is low, then the phase of the higher order modulation techniques mentioned above may vary, because of the very small phase difference between adjacent phases, which may lead to distortion or loss of received data packets. Therefore, lower order modulation techniques are preferred for lower values of SNR [2],[4],[5].

In this research paper, a Fuzzy Expert System is introduced for performing the task of adaptive modulation in order to provide efficient link adaptation. The digital modulation techniques considered in this paper are QPSK, 8 QAM, 16 QAM, 32 QAM and 64 QAM. In the fuzzy inference system, SNR, BER and Current Modulation are taken as input parameters, which are used to decide the new modulation technique that will improve spectral efficiency and ensure error-free delivery of data packets.

2. LITERATURE SURVEY

In paper [2], Zalonis et al. studied the problem of link adaptation using adaptive modulation and coding for a multiple antenna OFDM system in the case of a noisy channel. They gave an accurate packet error rate prediction with channel estimation errors on the basis of extrinsic information transfer analysis. In this method, a Gaussian approximation is used to characterize the output of the detector and decoder.
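The SNR-driven selection rule described in the introduction (higher SNR permits a higher order constellation, lower SNR forces a more robust one) is, in its simplest non-fuzzy form, a lookup against switching thresholds. The thresholds below are illustrative placeholders, not values from this paper:

```python
# Illustrative threshold-based adaptive modulation selection.
# The SNR switching points are hypothetical examples, not the paper's values.

# (min SNR in dB, modulation name, bits per symbol), highest order first.
THRESHOLDS = [
    (22.0, "64QAM", 6),
    (18.0, "32QAM", 5),
    (14.0, "16QAM", 4),
    (10.0, "8QAM", 3),
    (0.0, "QPSK", 2),
]

def select_modulation(snr_db: float) -> str:
    """Pick the highest-order modulation whose SNR threshold is met."""
    for threshold, name, _bits in THRESHOLDS:
        if snr_db >= threshold:
            return name
    return "QPSK"  # most robust fallback for very poor channels

if __name__ == "__main__":
    for snr in (3, 12, 19, 27):
        print(snr, "dB ->", select_modulation(snr))
```

The fuzzy system developed in this paper replaces such hard thresholds with overlapping membership functions, so the decision degrades gracefully near the switching points instead of flipping abruptly.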


They also discussed approaches for searching and selecting the best modulation and coding scheme for the link adaptation algorithm.

In paper [4], Parminder Kaur et al. discussed that adaptive modulation systems are better than fixed modulation systems because they change their modulation technique according to present channel conditions. The authors proposed an adaptively modulated OFDM system based on a Back Propagation Neural Network (BPNN), taking into consideration the SNR and BER parameters, and then evaluated the accuracy of this system on the basis of the number of neurons in the BPNN and the mean square error. But this approach requires an initial data set in which the best modulation technique is already known for known values of SNR and BER, which is difficult to obtain.

In paper [5], Parminder Kaur et al. discussed an adaptively modulated Orthogonal Frequency Division Multiplexing (OFDM) system based on Radial Basis Functions (RBF); the performance of the system is evaluated on the basis of the mean square error, and classification accuracy is evaluated according to the number of neurons in the RBF network. Again, this system makes decisions about the best modulation on the basis of a data set of known inputs and known best outputs; such a data set is very difficult to produce, and its reliability in different fading channel environments is doubtful.

In paper [6], Iftekhar Alam et al. discussed various adaptively modulated MC-CDMA systems in a Rayleigh fading channel. They considered various digital modulation techniques such as M-ary PSK, M-ary QAM, M-ary MHPM, M-ary CPM and GMSK with varying bit-duration-bandwidth product. The results of their paper show that dynamic switching of the modulation order can enhance the system performance and capacity per given bandwidth of the fading channel with the expected BER performance. A fuzzy logic system, however, has the capacity to make adaptive modulation much easier and more effective.

3. FUZZY EXPERT SYSTEM

In 1965, Dr. Lotfi Zadeh, a professor of mathematics at U.C. Berkeley, proposed fuzzy theory [7]. Fuzzy logic is a valuable tool which is basically used to solve highly complex real-time problems where mathematical modelling is very difficult to achieve. Fuzzy logic also helps in reducing the complexity of existing solutions as well as increasing the accessibility of control theory.

The fuzzy expert system, also known as a Fuzzy Inference System and suitable for tasks involving logic, has been proposed as an alternative to classical crisp set theory. The "fuzzy set" is employed to expand classical sets, which are characterized by sharp margins, for modelling of real-time problems. Fuzzy logic provides a degree of flexibility for each object belonging to a particular set. This quality is realized by membership functions, which give fuzzy sets the capacity to model linguistic, fuzzy expressions [8].

A fuzzy-logic-based fuzzy expert system is an IF-THEN rule based system in which a set of rules represents a control decision mechanism to correct the effect of a certain cause. The configuration of a fuzzy logic based system has three parts: fuzzification, the inference mechanism and defuzzification. Fuzzification is the process of converting classical or crisp data into fuzzy data, i.e. membership functions (MFs). The fuzzy inference system combines the membership functions with the control rules to derive the fuzzy output. Defuzzification is the process of using different methods to calculate each associated output, putting them into a lookup table, and then picking the output from the lookup table based on the current input during an application. The Fuzzy Expert System is shown in the following figure.

Fig 1: Fuzzy Expert System

4. RESULT ANALYSIS

In this research work, a fuzzy inference system, termed the Fuzzy Expert System, has been implemented using MATLAB. The link adaptation based adaptive modulation work was performed using MATLAB 2014b on a Windows 8.1 machine with an Intel Core i3 processor. For this system, three input variables are taken into consideration: Signal to Noise Ratio (SNR), Bit Error Rate (BER) and Current Modulation; the output of the system is New Modulation. The Fuzzy Inference System model for link adaptation is shown in figure 2.

Fig 2: Link Adaptation: Fuzzy Inference System

The input and output variables of the above mentioned Fuzzy Inference System are discussed below:

• SNR: This input variable is the most important factor for describing the noisy or clear nature of the wireless communication channel and is mainly used to make the decision about the new modulation technique in order to implement adaptive modulation. In this research work, a


total 0 dB to 30 dB SNR range has been taken into consideration, divided into three membership functions: Low, ranging up to 17 dB; Medium, ranging up to 23 dB; and High, above 23 dB [10]. If the value of SNR increases, then there is the possibility of choosing a higher order modulation technique so as to increase spectral efficiency. If the SNR value is low, then a lower order modulation is the favourable choice, because with an increase in channel noise there is the possibility of distortion in a higher order modulated received signal, owing to the smaller phase and amplitude differences between adjacent levels of higher order modulation techniques such as 32 QAM or 64 QAM. For medium values of SNR, there may be no change in modulation technique. The SNR input variable along with its membership functions is shown in the following figure.

Fig 3: Signal to Noise Ratio (SNR) input variable along with its membership functions

• BER: This is the second variable helpful in making the decision about the best modulation technique in the implementation of adaptive modulation for link adaptation. This variable is also divided into three membership functions: LOW, MEDIUM and HIGH. If BER has a high value, then it is considered favourable to choose a higher order modulation technique so as to reduce the value of BER. If BER has a medium value, then it is still favourable to select a higher order modulation so as to further improve the performance of high speed wireless data networks. For a low value of BER, there is no need to change the existing modulation technique. The BER input variable along with its three membership functions is shown in the following figure.

Fig 4: Bit Error Rate (BER) input variable along with its membership functions

• Current Modulation: This input variable describes the different types of modulation techniques used in wireless data networks. In this research work, only five modulation techniques have been taken into consideration: Quadrature Phase Shift Keying (QPSK), 8 Quadrature Amplitude Modulation (8 QAM), 16 QAM, 32 QAM and 64 QAM. As the order of the modulation technique increases, the spectral efficiency of the system also increases. For example, in QPSK the spectral efficiency is 2, but in 64 QAM the spectral efficiency is 6. The Current Modulation input variable, with its five membership functions representing the different modulation techniques as 0, 1, 2, 3 and 4, is shown in the following figure.

Fig 5: Current Modulation input variable along with its membership functions

• New Modulation: This is the output variable which describes the favourable modulation technique for better performance of high speed wireless data networks, depending upon the above mentioned input variables. In this


variable, again five different types of modulation techniques are taken into consideration: QPSK, 8 QAM, 16 QAM, 32 QAM and 64 QAM. The choice of the best modulation technique, depending upon the SNR and BER values of the network, which represent the instantaneous characteristics of the channel, leads to the implementation of adaptive modulation, which in turn leads to efficient link adaptation for better performance of wireless data networks in fading or clear channel environments. The New Modulation output variable, with its five membership functions representing the efficient modulation technique for implementing adaptive modulation, is shown in the following figure.

Fig 6: New Modulation output variable along with its membership functions

Figure 7 shows the fuzzy IF-THEN rules modelled in the Fuzzy Inference System, deciding the efficient modulation technique depending upon the nature of the channel. The unreliable nature of the wireless communication channel can be properly represented by the SNR and BER information obtained at the receiver side, which can be fed back to the transmitter so as to implement adaptive modulation in order to obtain link adaptation for better performance of wireless data networks.

Fig 7: Fuzzy IF-THEN rules

Figure 8 shows the surface viewer diagram for the two input variables SNR and Current Modulation against the output variable New Modulation. This graphical representation indicates the degree of randomness of the IF-THEN rules and shows that all the different possible combinations are properly described in this fuzzy inference system. Similarly, combinations among the other variables also represent a high degree of variability in rule formation.

Fig 8: Surface viewer representing the SNR and Current Modulation input variables

Results obtained from this fuzzy expert system for random values of SNR, BER and current modulation technique are shown in the following table.

Table 1: Evaluation of Fuzzy Expert System

Sr. No. | SNR (in dB) | BER | Current Modulation | New Modulation
--------|-------------|-----|--------------------|---------------
1       | 3           | 0.7 | 1 (8QAM)           | 0 (QPSK)
2       | 27          | 0.9 | 2 (16QAM)          | 3 (32QAM)
3       | 15          | 0.3 | 3 (32QAM)          | 3 (32QAM)
4       | 22          | 0.5 | 4 (64QAM)          | 4 (64QAM)
5       | 18          | 0   | 1 (8QAM)           | 0 (QPSK)

The input variables and corresponding results mentioned in the above table were verified manually, and this reveals that the fuzzy expert system modelled in this paper gives quite satisfactory results which match the corresponding predictions of manual verification. So, the Fuzzy Expert System gives a very good response for the implementation of adaptive modulation in order to obtain link adaptation over an uncertain communication channel, so as to enhance the performance of high speed wireless data networks in terms of error-free delivery and spectral efficiency.

5. CONCLUSION AND FUTURE SCOPE

In this research paper, a fuzzy expert system has been implemented for performing adaptive modulation in order to obtain link adaptation. In this system, SNR, BER and Current Modulation are taken as input variables, which are used to decide the new modulation technique for improving the performance of the wireless data network using fuzzy IF-THEN rules.
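The fuzzification, inference and defuzzification stages described in Section 3 can be sketched in plain Python as a minimal stand-in for the MATLAB implementation. The triangular membership breakpoints and the two rules below are simplified placeholders, not the paper's full rule base, and only the SNR input is modelled:

```python
# Minimal Mamdani-style fuzzy sketch: fuzzify SNR, fire two example rules,
# defuzzify by a weighted (centroid-style) average over modulation indices.
# Membership breakpoints are illustrative, not the paper's exact MATLAB design.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_snr(snr_db):
    """Degrees of membership of SNR (dB) in the Low / Medium / High sets."""
    return {
        "low": tri(snr_db, -1, 0, 17),
        "medium": tri(snr_db, 15, 20, 23),
        "high": tri(snr_db, 21, 30, 31),
    }

def infer_modulation(snr_db):
    """Two toy rules: low SNR -> QPSK (index 0), high SNR -> 64QAM (index 4)."""
    mu = fuzzify_snr(snr_db)
    # Rule firing strengths act as weights on the consequent modulation indices;
    # medium SNR splits its weight between the two consequents.
    weights = {0: mu["low"] + 0.5 * mu["medium"], 4: mu["high"] + 0.5 * mu["medium"]}
    total = sum(weights.values())
    if total == 0:
        return 0  # default to the most robust modulation
    # Centroid-style defuzzification over the discrete modulation indices 0..4.
    return round(sum(idx * w for idx, w in weights.items()) / total)
```

With these placeholder sets, a low SNR of 3 dB defuzzifies to index 0 (QPSK) and a high SNR of 27 dB to index 4 (64 QAM), mirroring the direction of the decisions in Table 1; the full system additionally weighs BER and the current modulation.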


Results reveal that the fuzzy expert system gives satisfactory results for prediction of a better modulation technique to implement adaptive modulation based link adaptation, which further enhances the performance of high speed wireless data networks by ensuring error free delivery and high spectral efficiency.
In this research paper, only two input variables, SNR and BER, are taken into consideration in order to examine the uncertain behavior of the communication channel. There is the possibility of including other parameters so as to improve the results of the fuzzy inference system. Secondly, in this research work, only five modulation techniques have been considered. Other higher and lower order modulation schemes should also be taken into account to extend the benefit of adaptive modulation to a large number of applications.

REFERENCES
[1] Goldsmith, A.J. and Chua, S.G., "Adaptive coded modulation for fading channels", IEEE Transactions on Communications, Vol. 46, No. 5, May 1998, 595-602.
[2] Zalonis, A., Miliou, N., Dagres, I., Polydoros, A., and Bogucka, H., "Trends in Adaptive Modulation and Coding", Advances in Electronics and Telecommunications, Vol. 1, No. 1, April 2010, 104-111.
[3] Tan, P.H., Yan Wu and Sun, S., "Link Adaptation Based on Adaptive Modulation and Coding for Multiple-Antenna OFDM System", IEEE Journal on Selected Areas of Communications, Vol. 26, No. 8, October 2008, 1599-1606.
[4] Parminder Kaur, Kuldeep Singh and Hardeep Kaur, "Adaptive Modulation of OFDM by using Back Propagation Neural Network (BPNN)", in Proceedings of International Multi Track Conference on Science, Engineering & Technical Innovations, Vol. 1, Jalandhar, Punjab, India, June 2014, 511-514.
[5] Parminder Kaur, Kuldeep Singh and Hardeep Kaur, "Adaptive Modulation of OFDM using Radial Basis Function Neural Network", International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), Vol. 3, Issue 6, June 2014, 6886-6888.
[6] Alam, I., Srivastva, V., Prakash, A., Tripathi, R., and Shankhwar, A.K., "Performance Evaluation of Adaptive Modulation Based MC-CDMA System", Wireless Engineering and Technology, Scientific Research, 2013, 54-58.
[7] Zadeh, L.A., "Fuzzy Logic, Neural Networks, and Soft Computing".
[8] Arshdeep Kaur, Sanchit Mahajan and Kuldeep Singh, "Site Selection for Installation of Cellular Towers using Fuzzy Logic Technique", in Proceedings of National Conference on Latest Developments in Science, Engineering & Management (LDSEM-2014), Amritsar, Punjab, India, March 2014, 343-347.
[9] Fuzzy Logic Toolbox for use with MATLAB - User's Guide, 2015.
[10] Kolding, T.E., Pedersen, K.I., Wigard, J., Frederiksen, F., and Mogensen, P.E., "High Speed Downlink Packet Access: WCDMA Evolution", IEEE Vehicular Technology Society News, February 2003, 4-10.


DESIGN AND ANALYSIS OF DUAL BAND PRINTED MICROSTRIP DIPOLE ANTENNA FOR WLAN

Gurmeet Singh
Department of Electronics and Communication Engineering
Sant Longowal Institute of Engineering and Technology, Longowal, Sangrur, Punjab
ergurmeet96@gmail.com

Lakhwinder Singh Solanki
Department of Electronics and Communication Engineering
Sant Longowal Institute of Engineering and Technology, Longowal, Sangrur, Punjab
lakhwinder_singh_solanki@yahoo.co.uk

ABSTRACT
In today's technologically advanced era, the microstrip fed rectangular microstrip dipole antenna is predominantly used in wireless communication. In this paper, a dual band printed microstrip dipole antenna is delineated using HFSS as simulation software. The antenna covers a -10 dB impedance bandwidth of 245 MHz (18.71% at 2.4 GHz) and 1670 MHz (31.77% at 5.2 GHz). This aspect enables the antenna to cover the required bandwidth in the 2.4 GHz and 5 GHz bands. The proposed antenna covers the 2.182-2.633 GHz and 4.465-5.899 GHz bands. A simplified feed structure consists of a pair of parallel metal strips printed on the opposite sides of the dielectric substrate and connected to a 50 Ω microstrip line with a partial ground plane. This feeding network does not depend upon additional conversion devices, such as power dividers, tuning stubs etc.

Keywords
Dipole antenna, impedance bandwidth, microstrip

1. INTRODUCTION
In recent years, there have been fast developments in wireless local area network (WLAN) applications. In order to satisfy the need of today's wireless applications, multi band operation is needed. The antenna should cover the 2.4 GHz band of IEEE 802.11b/g and the 5 GHz band of the 802.11a WLAN standard; dual-band operations in the 2.4 GHz (2400-2484 MHz) and 5 GHz (5150-5350 MHz and 5725-5825 MHz) bands are demanded in practical WLAN for various applications.
A single antenna is highly advantageous if it can operate in double band mode. The antenna should be conformable, low profile, lightweight and compact; therefore, the antenna can easily be fitted in a range of communication devices. There are various feeding methods, but a basic feeding circuit is also an important component, because it is in planar form, which reduces the transmission line length and the radiation losses. For this purpose, some printed antennas have been devised [3-12]. The antenna in [3] uses the printed dipole for dual frequency band operation; the antenna with sufficient gain covers only the 2.4 GHz and 5.2 GHz bands. In [4] the design concept of a printed dipole antenna with reflector has been discussed, which enables the transmission and reception of a single-band frequency; a parasitic element is embedded to enable dual-band operation, and impedance matching is performed by adjusting the width of the microstrip line. In the design [5], new structures with a 50 ohm microstrip feed line and rectangular patch slot antenna for WLAN and WiMAX applications have been discussed; this antenna operates at dual frequency bands of 2.4 GHz and 3.2 GHz. The proposed antenna has an omnidirectional radiation pattern. A dual-band printed dipole antenna is designed in this study by combining a rectangular and two "L" shaped radiating elements, embedded on a single layer structure with relatively small size. The antenna has been supposed to be printed on an FR4 substrate with a thickness of 0.8 mm and relative permittivity of 4.6. The resulting antenna has been found to have a compact size of 25.75x22 mm². The antenna offers dual-band characteristics and is fed by a standard 50 Ω microstrip line using a broadband microstrip-to-coplanar strip line transition. The transition network is realized by a matched T-junction and a narrowband delay line, which not only increases the model complication but also requires long transmission lines. In [5, 6] the antenna consists of a printed monopole and a 50 Ω microstrip line with an open-circuited tuning stub. The tuning stub length was found to be efficient in controlling the coupling of the electromagnetic energy from the microstrip line to the monopole. In [7], a cavity backed double-sided printed dipole antenna is conferred, which imparts broadband performance with compact antenna size. In [8] a printed, double-sided dipole array wideband antenna for application in the 5.2 GHz UNII band, operated in a WLAN access point, has been demonstrated. The printed dipole array antenna consists of one dipole array arranged back to back, can be easily framed by printing on both sides of a dielectric substrate, and is suitable for integration with monolithic microwave integrated circuits (MMIC). In [9] a WLAN/WiMax printed antenna executed by using microstrip feeding and matching is presented. In [10] design and simulation of a microstrip dipole antenna at 2.4 GHz have been proposed; frequency resonance has also been analyzed for different widths of the dipole arm. In [11] modification of two parameters, the bend on the microstrip line and the dipole's gap, of a known printed dipole antenna has been discussed. In [12] a half wave microstrip dipole antenna with tapered arms has been designed at a frequency of 2.4 GHz. A dual-band compact bi-faced printed dipole antenna has been designed for Wi-Fi 802.11n; the return losses in the proposed antenna match the frequency requirements in both bands (measured bandwidth of 10.8% at 2.5 GHz and 25.3% at 5.5 GHz), with nearly omnidirectional radiation patterns and an efficiency of 90%. In this paper, a microstrip-fed dual-band printed dipole antenna is demonstrated. With a spur line fixed into each arm of the dipole, a double-band operation is introduced on a single


antenna. The suggested design does not need isolated dipole arms to acquire two separate operating bands, and it has a reduced arm length compared with the design in [3]. The proposed antenna can easily be excited by a 50 Ω microstrip line and a pair of parallel metal strips between the dipole and the microstrip feed line; good impedance matching can be obtained for operating frequencies within both the 2.4 and 5 GHz bands.

2. ANTENNA DESIGN AND ANALYSIS
In this paper, the microstrip dipole antenna is printed on a double sided FR4 substrate having thickness 1.6 mm and dielectric constant of 4.4. The geometry of the microstrip dipole antenna is shown in Figure 1. The width of the microstrip dipole antenna is given as [2]

W = (λ0/2) · √(2/(εr + 1))

where λ0 is the free space wavelength.

The effective dielectric constant (εreff) has been calculated by using equation (1) [1]:

εreff = (εr + 1)/2 + ((εr − 1)/2) · [1 + 12h/w]^(−1/2)        (1)

where εr = relative dielectric constant of substrate, h = height of substrate, and w = width of patch.

The effective length Leff has been calculated at a particular resonant frequency fr by using equation (2):

Leff = c/(2 fr √εreff)        (2)

The incremented length of the patch is given by

ΔL = 0.412h · (εreff + 0.3)(w/h + 0.264) / [(εreff − 0.258)(w/h + 0.8)]        (3)

The actual length of the patch L has been calculated from equations (2) and (3) as

L = Leff − 2ΔL        (4)

2.1 Antenna Design
The configuration of the printed dipole antenna with a spur line is shown in Fig. 1. It is printed on a FR4 substrate of thickness h = 1.6 mm and relative permittivity εr = 4.4. The top side consists of a microstrip feed line, one of the parallel metal strips and one side of the arms of the printed dipole antenna. The base side consists of a truncated ground plane (50×22 mm), the other parallel metal strip, and the second arms of the dipole antenna printed on the area of L×W mm², printed on the opposite side of the dielectric substrate. The feed structure is composed of a 50 Ω microstrip line and two parallel metal strips. The parallel strips have a uniform width of Ws (Ws = 3.5 mm) and a length of Ls (Ls = 16.6 mm in this study).

Fig. 1: (a) and (b) Geometry of dual band printed dipole antenna.

Both the microstrip line and parallel metal strips must be carefully designed for proper matching. By carefully changing the printed dipole arms of length L and width W, the uncomplicated conventional printed dipole antenna (without spur line) can operate in the different bands. In this design, a spur line is imprinted on each of the two rectangular dipole arms (detailed dimensions are given in Fig. 1). With the existence of the spur lines, two extra dipole arms are obtained, which form a second dipole antenna used to produce a higher resonant mode for 5 GHz band operation with a smaller length and width. In this design the longer dipole arms are used to operate in the 2.4 GHz band. Using the simple microstrip feeding structure, good impedance matching for each of the operating frequencies can be easily obtained.
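As a numerical cross-check of the design equations above, the standard patch formulas cited from [1], [2] can be evaluated directly. This is a sketch only: it computes conventional patch dimensions for the stated FR4 substrate (εr = 4.4, h = 1.6 mm) at an assumed design frequency of 2.4 GHz, and its output is not the dipole arm dimensions reported later in the paper.

```python
import math

def patch_dimensions(fr_hz, eps_r, h_m):
    """Compute microstrip patch width W, effective permittivity eps_reff
    and actual length L from the standard design equations."""
    c = 3e8                                   # speed of light (m/s)
    lam0 = c / fr_hz                          # free space wavelength
    w = (lam0 / 2) * math.sqrt(2 / (eps_r + 1))            # patch width
    eps_reff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / w) ** -0.5
    l_eff = c / (2 * fr_hz * math.sqrt(eps_reff))          # effective length
    dl = 0.412 * h_m * ((eps_reff + 0.3) * (w / h_m + 0.264)
                        / ((eps_reff - 0.258) * (w / h_m + 0.8)))
    return w, eps_reff, l_eff - 2 * dl        # actual length L = Leff - 2*dL

# FR4 substrate as in the text (eps_r = 4.4, h = 1.6 mm), assumed fr = 2.4 GHz.
w, eps_reff, l = patch_dimensions(2.4e9, 4.4, 1.6e-3)
print(f"W = {w*1e3:.1f} mm, eps_reff = {eps_reff:.2f}, L = {l*1e3:.1f} mm")
```

For these inputs the formulas give roughly W ≈ 38 mm, εreff ≈ 4.09 and L ≈ 29.4 mm, which is the usual order of magnitude for a 2.4 GHz FR4 patch.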


Fig. 2: Simulated return loss for proposed antenna

2.2 Measured Results
Fig. 2 shows the simulated return loss for the proposed double sided dipole antenna. Each resonant mode with good impedance matching can be seen. The simulated results were obtained from the simulation software HFSS. In this antenna design the dimensions of the long and short dipoles are 14.16×6.25 mm² and 5.2×2.5 mm². These long and short dipole arms are designed for the 2.4 and 5 GHz bands, respectively.

Fig. 3: Radiation pattern at 5.2 GHz

For the first and lower band, a frequency range of 2182 MHz to 2633 MHz and an impedance bandwidth of 18.7% (for S11 < -10 dB) was obtained. For the second and higher band, a frequency range of 4465 to 5899 MHz and an impedance bandwidth of 27.6% (for S11 < -10 dB) was also obtained. This shows that the impedance bandwidths of the lower and the higher bands cover the 2.4 and 5 GHz bands for WLAN operation. In addition, the overall length (2L+Ws) of the proposed antenna is about 0.25 λ0 (30 mm), where λ0 is about 125 mm. Figs. 3 and 4 show the simulated radiation patterns at 5250 and 2450 MHz, respectively. It can be observed that the radiation patterns are dipole-like for the two operating bands. For the two frequencies, the patterns in the x-y plane (E-plane) still act as a dipole.

Fig. 4: Radiation pattern at 2.4 GHz

3. CONCLUSION
A dual-band printed dipole antenna has been proposed and simulated at 2.4 and 5 GHz. To obtain the lower and higher operating modes, longer and shorter dipole arms were designed by using the above equations in the proposed antenna. The proposed antenna has a simple feeding structure, has a wide operational bandwidth, and has suitable radiation patterns such that it is commercially suitable for use in WLAN applications.

REFERENCES
[1] Constantine A. Balanis 2005, "Antenna Theory: Analysis and Design".
[2] R. Garg, P. Bhartia, I. Bahl, and A. Ittipiboon 2001, "Microstrip Antenna Design Handbook", Artech House.
[3] H.-M. Chen, J.-M. Chen, P.-S. Cheng, and Y.-F. Lin 2004, "Feed for dual-band printed dipole antenna", Electronics Letters, Vol. 40, No. 21.
[4] Jean-Marie Floc'h and Ahmad El Sayed Ahmad 2012, "Dual-Band Printed Dipole Antenna with Parasitic Element for Compensation of Frequency Space Attenuation", International Journal of Electromagnetics and Applications, 2(5), pp. 120-128.
[5] P. Prabhu, M. Ramkumar, S. Pradhap, S. Navin 2013, "Compact microstrip fed dual band printed rectangular patch antenna for WLAN/WiMAX", International Journal of Innovative Research in Science, Engineering and Technology, Vol. 2, pp. 472-478.
[6] Mahmood T. Yassen, Jawad K. Ali, Ali J. Salim, Seevan F. Abdulkareem, Ali I. Hammoodi, Mohammed R. Hussan 2013, "A New Compact Slot Antenna for Dual-band WLAN Applications", International Journal of Science and Modern Engineering (IJISME), Vol. 1, pp. 28-32.
[7] M. Guo, S.S. Zhong, X.B. Xuan 2012, "Conformal double printed dipole antenna", IEEE Topical Conference on Antennas and Propagation in Wireless Communications, pp. 312-314.
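The quoted impedance bandwidths can be reproduced from the band edges. A quick sketch, taking fractional bandwidth as the -10 dB band width divided by the band-centre frequency (the results land within rounding of the 18.7% and 27.6% quoted above):

```python
def fractional_bw(f_low_mhz, f_high_mhz):
    """Fractional impedance bandwidth (%) from the -10 dB band edges,
    referenced to the band-centre frequency."""
    centre = (f_low_mhz + f_high_mhz) / 2
    return 100 * (f_high_mhz - f_low_mhz) / centre

low = fractional_bw(2182, 2633)   # lower WLAN band
high = fractional_bw(4465, 5899)  # upper WLAN band
print(f"lower band: {low:.1f}%  upper band: {high:.1f}%")
```

This gives about 18.7% for the lower band and 27.7% for the upper band, i.e. 451 MHz around 2407.5 MHz and 1434 MHz around 5182 MHz.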


[8] Jhin-Fang Huang, Mao-Hsiu Hsu, and Fu-Jui Wu 2006, "Design of a Double-Sided and Printed Wideband Dipole Array Antenna on 5.2 GHz Band", IEEE 6th International Conference on ITS Telecommunications Proceedings, pp. 430-433.
[9] C. J. Tsai, C. S. Lin and W. C. Chen 2011, "A Dual-Band Microstrip-Matched Printed Antenna for WLAN/WiMax Applications", Proceedings of the Asia-Pacific Microwave Conference, pp. 1726-1729.
[10] M. H. Jamaluddin, M. K. A. Rahim, M. Z. A. Abd. Aziz, and A. Asrokin 2005, "Microstrip Dipole Antenna Analysis with Different Width and Length at 2.4 GHz", Asia Pacific Conference on Applied Electromagnetics Proceedings, pp. 41-44.
[11] Constantinos Votis, Vasilis Christofilakis, Panos Kostarakis 2010, "Geometry Aspects and Experimental Results of a Printed Dipole Antenna", Int. J. Communications, Network and System Sciences, 3, pp. 204-207.
[12] Nitali Garg, Zarreen Aijaz 2012, "A Dual band microstrip dipole antenna for wideband application", International Journal of Computer Technology and Electronics Engineering (IJCTEE), Volume 2, Issue 6, pp. 18-20.


Koch Fractal Loop Antenna Using Modified Ground

Pravin Kumar
Electronics and Communication Engineering Department, NIT Hamirpur, Hamirpur-177005 (H.P.)
pkpravin011@gmail.com

Anuradha Sonker
Electronics and Communication Engineering Department, NIT Hamirpur, Hamirpur-177005 (H.P.)
sonanu8@gmail.com

Varun Punia
Electronics and Communication Engineering Department, NIT Hamirpur, Hamirpur-177005 (H.P.)
punia.1246@gmail.com

ABSTRACT
In the present paper a Koch fractal loop antenna is described using modified ground techniques for multiband applications. A Koch fractal loop antenna is printed on a dielectric substrate. The return loss value at each of the multiple resonating frequencies has been studied and improved for higher iterations of the Koch fractal loop antenna with the use of modified ground feed techniques. These antennas have been modeled and simulated using CST Microwave Studio software. The results of the simulations have been shown and compared for every iteration of the Koch fractal loop antenna.

General Terms
RF and Microwave, Koch fractal loop antennas, Communication.

Keywords
Fractal antenna, Koch fractal loop, multiband antenna, antenna design.

1. INTRODUCTION
Nowadays in RF and wireless communication systems, broadband, multiband and compact size antennas are in great demand for both military and commercial applications [1]. Users are looking for antennas that can operate over multiple frequency bands or are reconfigurable as the challenges on the antenna system change. Furthermore, it is always important for the design of antenna systems to be as miniaturized as possible in many applications. Outstanding solutions for miniature and multi-band antennas have been found in fractal antennas. Fractal antenna theory has made great progress in the study of antenna engineering. The word fractal was originally introduced by Benoit Mandelbrot to describe a family of complex shapes that possess an inherent self-similarity or self-affinity in their geometrical structure. Mandelbrot, the pioneer of classifying this geometry, first coined the term 'fractal' in 1975 from the Latin word "fractus", which means broken. Fractals are space filling contours, meaning electrically large features can be efficiently packed into small areas [2]. Since electrical lengths play such an important role in antenna design, this efficient packing can be used as a viable miniaturization technique. The Iterated Function System [3, 4], which represents a versatile method for convenient generation of a wide variety of useful fractal structures and is based on the application of a series of affine transformations, was used to generate these structures.
There are several fractal geometries that have been found to be useful in developing new and innovative designs for miniaturized and multiband antennas. Some of these classically known fractal geometries, e.g. the Sierpinski gasket, the Sierpinski carpet and the Koch fractal geometry, have been studied for multiband antennas. Fractal antennas based on the Koch fractal shape have been widely explored for size miniaturization and multiband applications [5-9]. The effective length of the Koch fractal curve increases at each iteration of the fractal, resulting in a shift of the frequency response of the antenna. A variety of loop antennas have been combined with a number of fractal geometries. Simple loop antennas have some disadvantages, such as low input resistance and larger size. Here a Koch fractal island is superimposed on a circular loop antenna. The fractal circular loop antenna based on the Koch island curve has the advantage that the increased effective length of the Koch island loop at higher iterations can be packed into a small space of the antenna, as well as increasing the input resistance of the simple circular loop antenna.
In the present work a Koch loop antenna has been modeled and simulated using CST Microwave Studio software for the analysis of the multiband behavior. The reason for carrying out the study of the Koch fractal loop geometry is to see the effect of changing the dimensions of the Koch fractal loop on its resonating frequency and bandwidth. The introduction of modified ground based feed techniques on the Koch fractal loop antenna shows better matching in terms of the return loss value at each resonating frequency for higher iterations of the fractal. The higher iterations of the Koch fractal loop antenna can also be useful to design a miniaturized antenna.

2. DESIGN OF KOCH LOOP ANTENNA
A Koch fractal loop antenna has been designed using CST software [10]. The Koch fractal loop geometries are generated by the IFS method. Multiple iterations of the Koch fractal curve are shown in Figure 1. The Koch curve fractal iterations using the IFS method are imposed on an equilateral triangular shape, and then various iterations of the Koch fractal loop are formed. These Koch fractal loops are superimposed on a circular patch antenna to analyze the frequency and bandwidth response of the antenna.
In the present work, the higher iterations of the Koch loop fractal have been studied for the multiband antenna.
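The Koch construction described above replaces each segment by four segments of one-third length, so the curve length grows by a factor of 4/3 per iteration; this is what packs extra electrical length into the same footprint. A small generative sketch (illustrative only, not the CST model):

```python
import math

def koch_iterate(points):
    """One Koch iteration: replace each segment by four segments of 1/3
    length, with a 60-degree equilateral bump in the middle third."""
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
        a = (x0 + dx, y0 + dy)            # 1/3 point of the segment
        b = (x0 + 2 * dx, y0 + 2 * dy)    # 2/3 point of the segment
        # Apex: rotate the middle-third vector by 60 degrees about point a.
        c60, s60 = math.cos(math.pi / 3), math.sin(math.pi / 3)
        apex = (a[0] + dx * c60 - dy * s60, a[1] + dx * s60 + dy * c60)
        out.extend([a, apex, b, (x1, y1)])
    return out

def length(points):
    """Total polyline length."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

curve = [(0.0, 0.0), (1.0, 0.0)]   # initiator: one unit segment
for _ in range(3):                 # three iterations, as in the paper
    curve = koch_iterate(curve)
print(round(length(curve), 4))     # (4/3)**3 = 2.3704
```

Applying the same iteration to each side of an equilateral triangle yields the Koch island loop used here; the perimeter then grows by the same 4/3 factor per iteration while the enclosed area stays bounded.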


The structure of the 3rd iterated Koch loop patch antenna of radius r = 10 mm is printed on an FR-4 substrate with a thickness of h = 1.5 mm and relative dielectric constant εr = 4.7, as shown in Figure 2 (a), where Ws and Ls represent the width and length of the substrate. A microstrip feed line on a modified ground has been employed in the design of the Koch loop antenna for better impedance matching at each resonating frequency of the antenna, where Lf = 20.3 mm and Wf = 2.6 mm are the length and width of the feed line. The modified ground has dimensions of Lg×Wg = 1.6 × 42 mm, as shown in Figure 2 (b).

Fig. 1. Various iterations of Koch fractal loop geometries.

Fig. 2. (a) Front view of the 3rd iterated Koch fractal loop antenna

Fig. 2. (b) Back view of the modified ground structure

3. RESULTS AND DISCUSSION
In order to analyze the multiband frequency response of the Koch fractal loop antenna, the second and third iterations of the Koch fractal loop antenna were simulated. In Figure 3, the return loss plot shows the multiband nature of the second iterated Koch loop antenna for a radius r of 10 mm. This second iterated Koch loop antenna has three resonating frequencies, at 4.43 GHz, 8.78 GHz and 11.85 GHz respectively. It has been observed from the plots that all three frequencies resonate within the -10 dB return loss level. The three frequencies have enough bandwidth at the -10 dB tolerance level for the functioning of antennas. The resonance nature of the second iterated Koch loop antenna for the three frequencies is tabulated in Table 4.1.
In order to verify the improvement in the S11 (return loss) plot with the use of the modified feed, a third iterated Koch fractal loop antenna of the same radius was simulated. Figure 4 shows the multiple frequencies for the third iterated Koch loop antenna. This third iterated Koch loop antenna also has three resonating frequencies. It has been seen from the plots that all three frequencies resonate within the -10 dB reference return loss level. The last two frequencies show improved return loss, at 8.75 GHz and 11.68 GHz. The three frequencies have enough bandwidth at the -10 dB tolerance level for the functioning of antennas. The resonance nature of the third iterated Koch loop antenna for the three frequencies is tabulated in Table 4.2. It has been noticed from the third iterated Koch loop structure that the higher iterations of the Koch loop fractal reduce the size or area of the antenna.


Fig.3. Resonating frequency for 10 mm 2nd iterated Koch fractal loop antenna.

Fig.4. Resonating frequency for 10 mm 3rd iterated Koch fractal loop antenna

Table 4.1 Resonance nature of 2nd iterated Koch loop antenna for 10 mm radius.

No. of resonance | Resonating Frequency (GHz) | Bandwidth (GHz) at -10 dB tolerance level | S11 / Return loss (dB)
First            | 4.43                       | 0.53                                      | -40.66
Second           | 8.78                       | 0.73                                      | -15.72
Third            | 11.85                      | 0.49                                      | -23.16

Table 4.2 Resonance nature of 3rd iterated Koch loop antenna of 10 mm radius.

No. of resonance | Resonating Frequency (GHz) | Bandwidth (GHz) at -10 dB tolerance level | S11 / Return loss (dB)
First            | 4.43                       | 0.53                                      | -32.69
Second           | 8.75                       | 0.75                                      | -31.96
Third            | 11.68                      | 0.49                                      | -26.86

4. CONCLUSION
This paper discussed the role of the Koch fractal loop in an antenna as a multiband antenna for WLAN band handset applications, radar and navigation, and other higher frequencies of microwave communication. The antenna resonates properly at each resonating frequency with the introduction of the modified ground in the feed method. The use of effective feeding techniques improves the impedance matching at each resonating frequency of the antenna. A similar approach can be useful to design other fractal multiband and miniaturized antennas.

REFERENCES
[1] D. H. Werner and S. Ganguly, "An Overview of Fractal Antenna Engineering Research," IEEE Antennas and Propagation Magazine, Vol. 45, No. 5, pp. 38-57, February 2003.
[2] Douglas H. Werner, Randy L. Haupt, and Pingjuan L. Werner, "Fractal Antenna Engineering: The Theory and Design of Fractal Antenna Arrays," IEEE Antennas and Propagation Magazine, Vol. 41, No. 5, pp. 37-59, October 1999.
[3] D. Kalra, "Antenna Miniaturization Using Fractals," M.Sc. Thesis, University of Deemed, India, 2007.
[4] D. H. Werner, "An overview of fractal antenna engineering," IEEE Antennas and Propagation Magazine, Vol. 45, No. 1, pp. 38-57, February 2003.


[5] K.J.Vinoy, K. A. Jose and V.K.Varadan, “Multiband


Characteristics and fractal dimension of dipole
antenna with Koch curve geometry,” IEEE 2002 AP-
S Inter. Symp., 2002.
[6] D. H Werner and R. Mittra, Frontiers in
Electromagnetic, IEEE press, chapters 1-3, 1999.
[7] H. O. Peitgen, H. Jurgens, and D. Saupe, Chaos and
Fractals: New Frontiers of Science, Springer-
Verlag, New York, 1992
[8] A. Ks. Skrivervik, J. F. Zurcher, O. Staub, and J. R.
Mosig, “PCS Antenna Design: The Challenge of
Miniaturization,” IEEE Antennas and Propagation
Magazine, Vol. 43, No. 4, pp. 12- 26, August 2001.
[9] N. Bayatmaku, P. Lotfi, M. Azarmanesh, and S.
Soltani, “Design of Simple Multiband Patch
Antenna for Mobile Communication Applications
using New E-shape Fractal,” IEEE Antennas and
Wireless Propagation Letters, Vol. 10, pp. 873-875,
September 2011.
[10] www.cst.com


Automatic Detection of Diabetic Retinopathy - A Technological Breakthrough

Chinar
JMIT, Radaur
chinarchahar@gmail.com

Deepti Malhotra
JMIT, Radaur
deeptimalhotra@jmit.ac.in

ABSTRACT
Diabetic Retinopathy is a dangerous eye disease and the most common cause of blindness for the worldwide population. Digital color fundus images are becoming very important as they help in diagnosing Diabetic Retinopathy. With this in mind, new image processing techniques can be applied to improve automatic detection of diabetic retinopathy. The main image processing elements in detecting eye diseases include segmentation, feature extraction, enhancement, pattern matching and image classification. Microaneurysms are the primary sign of DR; an algorithm that automatically detects the microaneurysms in fundus images is therefore a necessary preprocessing step for a correct diagnosis. This review paper aims to develop and test a new method for detecting the microaneurysms in retina images.

General Terms
Detection, image processing, retina, vessels.

Keywords
MA, DR, NPDR, PDR, exudates, fundus, neovascularization, OCT.

1. INTRODUCTION
WHO projects that diabetes will be the 7th leading cause of death in 2030; in 2014, 9% of adults 18 years and older had diabetes, and the global prevalence of diabetes was estimated to be 9% among adults aged 18+ years [1]. Diabetic eye disease is a group of eye problems that people with diabetes face as a complication of diabetes. This can cause severe vision loss or even blindness [2].

fig 1: side view of eye [2]

TYPE 1 DIABETES is also called insulin dependent diabetes and is usually diagnosed in childhood. When the body makes little or no insulin, daily injections of insulin are required to sustain life.
TYPE 2 DIABETES is more common than type 1 and is also known as non-insulin dependent diabetes, making up 90% or more of all cases of diabetes. It usually occurs in adulthood. In patients suffering from type 2 diabetes the pancreas does not make enough insulin to keep blood glucose levels normal, as the body does not respond to insulin. Many patients do not even know they suffer from type 2 diabetes, although it is a serious condition. Type 2 diabetes is becoming more common due to increasing obesity and lack of exercise.
GESTATIONAL DIABETES develops during pregnancy in some women: high blood glucose develops in women who do not otherwise have diabetes [3].
The main diabetic eye diseases are:
1. Diabetic retinopathy - damage to the blood vessels in the retina.
2. Cataract - clouding of the eye's lens.
3. Glaucoma - optic nerve damage and loss of vision due to an increase in fluid pressure inside the eye.

2. DIABETIC RETINOPATHY
DR is becoming a more important problem worldwide. Diabetic retinopathy (DR) is the most common cause of blindness in the age group 20-74 years. It starts with mild nonproliferative abnormalities, characterized by increased vascular permeability, progresses through moderate and severe nonproliferative diabetic retinopathy (NPDR), characterized by vascular closure, and reaches proliferative diabetic retinopathy (PDR), characterized by the growth of new blood vessels on the retina and the posterior surface of the vitreous [4]. According to a survey, around 60 million people in India are diabetic, with youngsters more in number [5]. A person having diabetes can prevent visual loss and blindness with diagnosis and treatment. However, worldwide more than 50% of patients suffering from diabetes don't undergo any form of eye examination. The use of digital photography for examining the retina by experts during screening programs appears to be both sensitive and exact in the detection of the early symptoms of diabetic retinopathy [6]. DR is caused by changes in the retina's blood vessels. Sometimes blood vessels may swell and leak fluid; in other cases, abnormal new blood vessels grow on the surface of the retina. The light-sensitive tissue at the back of the eye is known as the retina, and for good vision a healthy retina is important [2]. 285 million people are estimated to be visually impaired worldwide: 39 million are blind and 246 million have low vision [7]. During the starting stage patients may not notice changes to their vision, but over time, usually affecting both eyes, DR gets worse and causes vision loss. Diabetic retinopathy may also cause macular edema, which happens when fluid leaks into the part of the retina that provides the sharp, central vision needed for reading, driving, and seeing fine details; otherwise, things look blurry [8]. DR is due to long-standing hyperglycemia, wherein retinal lesions (exudates, microaneurysms and hemorrhages) appear that could lead to blindness.

2.1 Stages of Diabetic Retinopathy


Fig 2: comparison between normal retina and DR [9]

2.1.1 Mild Non-Proliferative Retinopathy
In this stage microaneurysms (MAs) develop. An MA is a balloon-like
swelling in the tiny blood vessels of the retina, caused by the
distension of capillary walls. MAs are the first clinical sign of
retinopathy; they are less than 125 µm in size and appear as small
circular reddish dots.

2.1.2 Moderate Non-Proliferative Retinopathy
Vessels supplying blood to the retina get blocked.

2.1.3 Severe Non-Proliferative Retinopathy
As many blood vessels are blocked, some areas of the retina do not
get enough blood supply; these deprived areas send signals to grow
new blood vessels in order to maintain nourishment.

2.1.4 Proliferative Retinopathy
During this stage, the signals sent by the retina trigger the growth
of new blood vessels. The new vessels are abnormal and fragile, and
prone to haemorrhage (blood leakage), which can lead to severe vision
loss and even blindness [9].

Fig 3: Diabetic retinopathy stages [10]

2.2 Diabetic Retinopathy Detection
Some of the eye exams performed on diabetic patients are as follows:

2.2.1 Visual Acuity Measurement
It measures the eye's ability to focus at different distances [11].

2.2.2 Ophthalmoscopy and Slit Lamp Exam
These tests allow the doctor to view the back of the eye and its
structures, including blood vessels, nerve bundles, and underlying
layers within the eye. They can be used to detect clouding of the
lens (cataract), changes in the retina, and other problems.

2.2.3 Optical Coherence Tomography
It is an imaging technique for obtaining high-resolution images of
the retina and the anterior segment of the eye, to determine the
thickness of the retina or the presence of swelling within the
retina. This method is also being used by cardiologists seeking to
develop methods to check for fluid in the retina [12,14,15].

2.2.4 Fundus Photography
Any process resulting in a 2-D image in which the image intensities
represent the amount of reflected light is known as fundus imaging;
when the image intensities represent the amount of reflected light
for a specific waveband, it is called fundus photography. Fundus
photography gives exact images of the back of the eye (the fundus).
An eye doctor can compare images taken at different times to follow
the progression of the disease and see how well treatment is
working [13,14].

2.2.5 Fundus Fluorescein Angiography
It is used to check for and locate any leaking blood vessels in the
retina of diabetic patients showing symptoms that suggest damage to
or swelling of the retina, and it can readily demonstrate the extent
and location of capillary drop-out [14].

2.3 Image Preprocessing Methods
Sometimes the retinal images obtained are of low quality and may
contain artifacts, because the photographer does not have full
control over the patient's eye, so image preprocessing methods are
required. Also, patients cannot hold their eye still for long during
imaging, so retinal images are often unevenly illuminated, with parts
of the image brighter or darker than the rest; images can also be
washed out, with partial or complete loss of contrast [4].

Fig 4: Flow chart for the automated diagnosis of DR using fundus
image [15]

2.3.1 Detection of Retinal Vessel
Vessel detection can determine the severity of the disease and the
effect of treatment, so detection of retinal vessels is both
necessary and important [16]. A 2-D matched filter based method for
detecting blood vessels is found to be more efficient than the Sobel
operator and morphological operators [17]. It has been shown that the
matched filter's sensitivity can be optimized with the help of a
genetic algorithm [4].

Fig 5: animated retinal blood vessel picture [18]

2.3.2 Detection of Neovascularization
For the detection of neovascularization, studying the blood vessels
is very important. Neovascularization is the proliferation of new
blood vessels in the fundus area and inside the optic disk, so
segmentation of the blood vessels is an important step in detecting
neovascularization [4].
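The 2-D matched filter idea cited in [17] can be sketched in a few lines: a vessel cross-section is roughly Gaussian, so a bank of zero-mean, Gaussian-profile kernels is rotated over twelve orientations and the maximum response is kept at each pixel. This is a minimal NumPy illustration; the kernel size, sigma, and the brute-force valid-region convolution are assumptions for the sketch, not the exact filters of [17].

```python
import numpy as np

def matched_filter_kernels(sigma=1.5, length=9, n_angles=12):
    """Bank of Gaussian-profile matched-filter kernels, one per orientation.

    Vessels appear as dark lines with a roughly Gaussian cross-section,
    so each kernel is a negated Gaussian across the line direction,
    made zero-mean so flat background gives zero response.
    """
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for k in range(n_angles):
        theta = k * np.pi / n_angles
        # signed distance perpendicular to a line at angle theta
        d = -xs * np.sin(theta) + ys * np.cos(theta)
        kern = -np.exp(-(d ** 2) / (2.0 * sigma ** 2))
        kern -= kern.mean()              # zero mean: ignores flat regions
        kernels.append(kern)
    return kernels

def vessel_response(img, kernels):
    """Maximum correlation over orientations at every interior pixel."""
    h = kernels[0].shape[0] // 2
    out = np.full(img.shape, -np.inf)    # borders are left unfilled
    for kern in kernels:
        for i in range(h, img.shape[0] - h):
            for j in range(h, img.shape[1] - h):
                patch = img[i - h:i + h + 1, j - h:j + h + 1]
                out[i, j] = max(out[i, j], float((patch * kern).sum()))
    return out
```

Thresholding the response map would then give a binary vessel mask; a genetic algorithm, as noted above, could tune sigma and the threshold.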
2.3.3 Detection of Exudates


The primary sign of DR is exudates. Automatic detection is very
important to slow down the progression of retinopathy, as exudates
are the early lesions of diabetic retinopathy [4]. Malaya Nath et
al. [19] proposed a novel method by which changes appearing in the
color fundus image due to the development of diabetes can be detected
using independent component analysis (ICA) on wavelet sub-bands. For
change detection the proposed method consists of steps such as
splitting into different color channels, preprocessing, wavelet
decomposition, selection of sub-band, formation of the input matrix
for ICA, and fast ICA. The method for exudates detection proposed by
Narasimhan et al. [20] involves three steps: first, color histogram
processing followed by localization of the optic disc; second, edge
detection by segmentation, where the segmentation is obtained by
locating the edges in the smoothened input image; and third, color
histogram thresholding for exudates detection. Computational
intelligence techniques and Fuzzy C-means clustering were also used
for exudates detection [4].

Fig 6: retina showing exudates

2.3.4 Detection of Microaneurysm
Reliable automatic detection of microaneurysms is a major challenge.
Other challenges during the detection of red lesions are the
segmentation of small MAs in areas of low image contrast and the
presence of bright pathologies. To detect MAs, Narasimhan et al. [20]
gave a method involving a morphological white top-hat transformation
to enhance and isolate the microaneurysms. The difference between the
input image and the opened image gives the top-hat transformed image.
To efficiently extract the small circular structures from the image,
the structuring element is rotated in twelve different orientations.

3. ACKNOWLEDGMENT
I am extremely grateful to my family for their constant support. I
would like to express my gratitude to my guide, Assistant Professor
Mrs. Deepti Malhotra, for her guidance and advice.

REFERENCES
[1] http://www.who.int/mediacentre/factsheets/fs312/en/
[2] https://www.nei.nih.gov/health/diabetic/retinopathy
[3] K. Sangeetha, R. Karthiga, K. Jeyanthi, "Advanced Analysis of
Anatomical Structures Using Hull Based Neuro-Retinal Optic Cup
Ellipse Optimization in Glaucoma Diagnosis", 2012 International
Conference on Computer Communication and Informatics (ICCCI-2012),
Jan 10-12, 2012, Coimbatore, India.
[4] Dipika Gadriye, Gopichand Khandale, "Neural Network Based Method
for the Diagnosis of Diabetic Retinopathy", 2014 Sixth International
Conference on Computational Intelligence and Communication Networks.
[5] http://www.tribuneindia.com/2012/20121115/dun.htm
[6] http://www.iovs.org/content/52/7/4866.full
[7] http://www.who.int/mediacentre/factsheets/fs282/en/
[8] http://www.webmd.com/diabetes/h2t-managing-diabetes-11/diabetes-eye-care
[9] http://www.buchananoptometrists.co.nz/Services/Eye+Conditions/Diabetic+Eye+Disease.html
[10] http://mynotes4usmle.tumblr.com/post/42147240126/diabetc-retinopathy-phases-non-proliferative#.VPBBXfmUdws
[11] Keith P. Thompson, Qiushi S. Ren, and Jean-Marie Parel,
"Therapeutic and Diagnostic Application of Lasers in Ophthalmology".
[12] R. K. Ghanta, W. Drexler, U. Morgner, F. Kartner, F. P. Ippen,
J. G. Fujimoto, J. S. Schuman, A. Clermont, S. Bursell, "Ultrahigh
resolution retinal imaging with optical coherence tomography",
CLEO 2000.
[13] Zhuo Zhang, Feng Shou Yin, Jiang Liu, Wing Kee Wong, Ngan Meng
Tan, Beng Hai Lee, Jun Cheng, Tien Yin Wong, "ORIGA-light: An Online
Retinal Fundus Image Database for Glaucoma Analysis and Research",
32nd Annual International Conference of the IEEE EMBS, Buenos Aires,
Argentina, August 31 - September 4, 2010.
[14] Michael D. Abràmoff, Mona K. Garvin, and Milan Sonka, "Retinal
Imaging and Image Analysis", IEEE Reviews in Biomedical Engineering,
vol. 3, 2010.
[15] Arulmozhivarman Pachiyappan, Undurti N. Das, Tatavarti VSP
Murthy and Rao Tatavarti, "Automated diagnosis of diabetic
retinopathy and glaucoma using fundus and OCT images", Lipids in
Health and Disease 2012, 11:73,
http://www.lipidworld.com/content/11/1/73
[16] Deepika Vallabha, R. Dorairaj, K. Namuduri, H. Thompson,
"Automated Detection and Classification of Vascular Abnormalities in
Diabetic Retinopathy".
[17] A. Hoover, V. Kouznetsova, M. Goldbaum, "Locating blood vessels
in retinal images by piecewise threshold probing of a matched filter
response", IEEE Transactions on Medical Imaging, 19, 203-210, 2000.
[18] R. Sivakumar, G. Ravindran, M. Muthayya, S. Lakshminarayanan,
and C. U. Velmurughendran, "Diabetic Retinopathy Analysis", Journal
of Biomedicine and Biotechnology, 2005:1 (2005) 20-27,
DOI: 10.1155/JBB.2005.20.
[19] Malaya Kumar Nath and Samarendra Dandapat, "Detection of Changes
in Color Fundus Images due to Diabetic Retinopathy", CISP 2012,
Proceedings, 81.
[20] K. Narasimhan, V. C. Neha, K. Vijayarekha, "An Efficient
Automated System for Detection of Diabetic Retinopathy from Fundus
Images Using Support Vector Machine and Bayesian Classifiers", 2012
International Conference on Computing, Electronics and Electrical
Technologies (ICCEET).


A Survey on Hybridization of Wireless Energy Harvesting and Spectrum
Sharing in Cognitive Radio

Ranjit Singh
Department of ECE, Punjabi University, Patiala
ranjitpup@gmail.com

Amandeep Singh Bhandari
Department of ECE, Punjabi University, Patiala
singh.amandeep183@gmail.com

ABSTRACT
Wireless Energy Harvesting in Cognitive radio is emerging as
a promising technique to improve the utilization of radio
frequency spectrum. In this paper, we consider the problem of
spectrum sharing among primary (licensed) and secondary
(unlicensed) users. Energy harvesting is a promising solution
to prolong the operation of energy-constrained wireless
networks.

Keywords
Wireless energy harvesting, cognitive radio, radio spectrum,
spectrum sharing.

1. INTRODUCTION
Energy harvesting, also known as power harvesting or energy
scavenging, is a process by which energy is derived from
external sources e.g. solar power, thermal energy, wind
energy, salinity gradients, and kinetic energy, captured, and Fig 1: Spectrum Access
stored. Wireless Energy Harvesting is the process by which .From this definition, two main characteristics of cognitive
energy is derived from external sources (e.g. solar power, radio can be defined:
thermal energy, wind energy, salinity gradients, and kinetic 1) Cognitive capability: Through real-time interaction with
energy), captured, and stored for small, wireless autonomous the radio environment, the portions of the spectrum that are
devices, like those used in wearable electronics and wireless unused at a specific time or location can be identified. CR
sensor networks. Energy harvesters provide a very small enables the usage of temporally unused spectrum, referred to
as spectrum hole or white space. Consequently, the best
amount of power for low-energy electronics the energy source
spectrum can be selected, shared with other users, and
for energy harvesters is present as ambient background and is exploited with the licensed user.
free. For example, temperature gradients exist from the 2) Reconfigurability: A CR can be programmed to transmit
operation of a combustion engine and in urban areas, there is a and receive on a variety of frequencies, and use different
large amount of electromagnetic energy in the environment access technologies supported by its hardware design [4].
because of radio and television broadcasting. Through this capability, the best spectrum band and the most
appropriate operating parameters can be selected and
A cognitive radio is an intelligent radio that can be reconfigured.
programmed and configured dynamically. Its transceiver is
designed to use the best wireless channels in its vicinity. Such 2. LITERATURE SURVEY
a radio automatically detects available channels in wireless In 2013, Liang Liu et al suggested that by assuming a single-
spectrum, then accordingly changes its transmission or antenna receiver that can only decode information of harvest
reception parameters to allow more concurrent wireless energy at any time because of the practical circuit limitation.
communications in a given spectrum band at one location. Therefore, it is important to investigate when the receiver
This process is a form of dynamic spectrum management. [1] should switch between the two modes of information
decoding (ID) and energy harvesting (EH), based on the
The key enabling technologies of CR networks are the instantaneous channel and interference condition. In this
cognitive radio techniques that provides the capability to share paper, the author derived the optimal mode switching rule at t
the spectrum in an opportunistic manner. Formally, a CR is to achieve various trade-offs between wireless information
defined as a radio that can change its transmitter parameters transfer and energy harvesting. [2]
based on interaction with its environment
In 2013, Ali A Nasir et al proposed that an amplify and-
forward (AF) relaying network is considered, where an energy
constrained relay node harvests energy from the received RF


signal and uses that harvested energy to forward the source output to
the destination. Two relaying protocols, namely i) a time
switching-based relaying (TSR) protocol and ii) a power
splitting-based relaying (PSR) protocol, are proposed to enable
energy harvesting and information processing at the relay. In order
to determine the throughput, analytical expressions for the outage
probability and the ergodic capacity are derived for the
delay-limited and delay-tolerant transmission modes, respectively [3].

In 2013, Seunghyun Lee et al suggested that wireless networks can be
self-sustaining by harvesting energy from ambient radio-frequency
(RF) signals. Recently, researchers have made progress on designing
efficient circuits and devices for RF energy harvesting suitable for
low-power wireless applications. Motivated by this, and building upon
the classic cognitive radio (CR) network model, the paper described a
novel method for coexisting wireless networks in which low-power
mobiles in a secondary network, called secondary transmitters (STs),
harvest ambient RF energy from transmissions by nearby active
transmitters of a primary network, called primary transmitters (PTs),
while opportunistically accessing the spectrum licensed to the
primary network [4].

In 2014, Zihao Wang et al developed a wireless energy harvesting and
information transfer protocol in cognitive two-way relay networks, in
which a secondary network scavenges energy from ambient signals of
the primary network while sharing the spectrum by assisting the
primary transmission. In particular, two primary users exchange
information through an energy harvesting secondary user, which first
harvests energy from the received primary signals and then uses the
harvested energy to forward the remaining primary signals along with
the secondary signals [5].

In 2014, Cryril Leung et al described a resource allocation problem
for spectrum sharing in cognitive radio networks. Specifically, the
authors investigated the joint sub-channel, rate and power allocation
for secondary users which share, in a non-disruptive manner, some
frequency bands with primary users using OFDM technology. The authors
considered the resource allocation problem for the downlink, taking
into account the maximum total power constraints of the base station
and the power constraints determined by distributed spectrum sensing
and scanning [6].

In 2014, S. Ali Mausavifar et al proposed a wireless energy
harvesting protocol for a decode-and-forward relay assisted secondary
user network in a cognitive spectrum sharing paradigm, together with
expressions for the outage probability under the following power
constraints: 1) the maximum power that the source and relay in the
secondary user network can transmit from the harvested energy; 2) the
peak interference power from the source and the relay in the SU
network at the primary user network; 3) the interference power of the
PU network at the relay assisted secondary user network [7].

In 2015, Ying Cui et al studied the grid power-delay trade-off in a
point-to-point energy harvesting wireless communication system with
finite energy storage capacity serving delay-sensitive applications.
This communication system is powered by both grid and renewable power
sources. The authors considered average grid power consumption
minimization subject to data queue stability and renewable energy
availability constraints. By exploring the optimality property and
using the theory of random walks, they transform the grid power
minimization problem into an asymptotically equivalent problem [8].

In 2015, Mojtaba Nourian et al proposed a design methodology for
optimal energy allocation to estimate a random source using multiple
wireless sensors equipped with energy harvesting technology. In this
framework, multiple sensors observe a random process and then
transmit an amplified, un-coded analog version of the observed signal
through Markovian fading wireless channels to a remote station. The
sensors have access to an energy harvesting source, which is an
everlasting but unreliable random energy source compared to
conventional batteries with fixed energy storage [9].

3. CHANNEL CHARACTERISTICS IN COGNITIVE RADIO NETWORKS
Because available spectrum holes show different characteristics that
vary over time, each spectrum hole should be characterized
considering both the time-varying radio environment and spectrum
parameters such as operating frequency and bandwidth. Hence, it is
essential to define parameters that can represent a particular
spectrum band, as follows:
1) Interference: From the amount of interference at the primary
receiver, the permissible power of a CR user can be derived, which is
used for the estimation of channel capacity.
2) Path loss: The path loss is closely related to distance and
frequency. As the operating frequency increases, the path loss
increases, which results in a decrease in the transmission range. If
transmission power is increased to compensate for the increased path
loss, interference at other users may increase.
3) Wireless link errors: Depending on the modulation scheme and the
interference level of the spectrum band, the error rate of the
channel changes.
4) Link layer delay: To address different path loss, wireless link
errors, and interference, different types of link layer protocols are
required at different spectrum bands. This results in different link
layer delays.
It is desirable to identify the spectrum bands that combine all the
characterization parameters described previously for accurate
spectrum decision. However, a complete analysis and modeling of
spectrum in CR networks has not been developed yet.

4. SPECTRUM SHARING
The shared nature of the wireless channel requires the coordination
of transmission attempts between CR users. In this respect, spectrum
sharing should include much of the functionality of a MAC protocol.
Moreover, the unique characteristics of CRs, such as the coexistence
of CR users with licensed users and the wide range of available
spectrum, incur substantially different challenges for spectrum
sharing in CR networks. The existing work in spectrum sharing aims to
address these challenges and can be classified by four


aspects: the architecture, spectrum allocation behavior, spectrum
access technique, and scope.
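The interference and path-loss parameters above can be combined into a small numeric sketch: under a log-distance path loss model, the permissible CR transmit power is the interference threshold at the primary receiver plus the path loss between them. The function names, the path-loss exponent, and all numbers here are illustrative assumptions, not values from the paper.

```python
import math

def path_loss_db(d_m, f_hz, n=3.0, d0=1.0):
    """Log-distance path loss: free-space loss at reference distance d0,
    plus 10*n*log10(d/d0) beyond it (n = path-loss exponent)."""
    c = 3.0e8
    fspl_d0 = 20 * math.log10(4 * math.pi * d0 * f_hz / c)
    return fspl_d0 + 10 * n * math.log10(d_m / d0)

def permissible_power_dbm(i_thresh_dbm, d_m, f_hz, n=3.0):
    """Maximum CR transmit power (dBm) such that the interference
    received at a primary receiver d_m metres away stays below
    i_thresh_dbm."""
    return i_thresh_dbm + path_loss_db(d_m, f_hz, n)

# Higher operating frequency -> larger path loss -> a larger transmit
# power is permissible for the same interference budget, but the CR's
# own useful transmission range also shrinks, as noted in section 3.
p_700mhz = permissible_power_dbm(-110.0, 500.0, 700e6)
p_3ghz = permissible_power_dbm(-110.0, 500.0, 3e9)
```

This mirrors the qualitative statements in section 3: the permissible power grows with both distance to the primary receiver and operating frequency.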
The first classification is based on the architecture, which can be
centralized or distributed.

Centralized spectrum sharing: The spectrum allocation and access
procedures are controlled by a central entity. Moreover, a
distributed sensing procedure can be used such that measurements of
the spectrum allocation are forwarded to the central entity, and a
spectrum allocation map is constructed. Furthermore, the central
entity can lease spectrum to users in a limited geographical region
for a specific amount of time. In addition to competition for the
spectrum, competition for users can also be considered through a
central spectrum policy server.

Distributed spectrum sharing: Spectrum allocation and access are
based on local (or possibly global) policies that are performed by
each node distributively. Distributed solutions are also used between
different networks, such that a base station (BS) competes with its
interfering BSs according to the QoS requirements of its users to
allocate a portion of the spectrum.

Recent work comparing centralized and distributed solutions reveals
that distributed solutions generally closely follow the centralized
solutions, but at the cost of message exchanges between nodes.

The second classification is based on allocation behavior, where
spectrum access can be cooperative or non-cooperative.

Cooperative spectrum sharing: Cooperative (or collaborative)
solutions exploit the interference measurements of each node such
that the effect of the communication of one node on other nodes is
considered. A common technique used in these schemes is forming
clusters to share interference information locally. This localized
operation provides an effective balance between a fully centralized
and a distributed scheme.

Non-cooperative spectrum sharing: Only a single node is considered in
non-cooperative (or non-collaborative, selfish) solutions. Because
interference at other CR nodes is not considered, non-cooperative
solutions may result in reduced spectrum utilization. However, these
solutions do not require frequent message exchanges between neighbors
as in cooperative solutions.

Cooperative approaches generally outperform non-cooperative
approaches, as well as closely approximating the global optimum.
Moreover, cooperative techniques result in a certain degree of
fairness, as well as improved throughput. On the other hand, the
performance degradation of non-cooperative approaches is generally
offset by the significantly lower information exchange and, hence,
energy consumption.

The third classification for spectrum sharing in CR networks is based
on the access technology:
• Overlay spectrum sharing: Nodes access the network using a portion
of the spectrum that has not been used by licensed users. This
minimizes interference to the primary network.
• Underlay spectrum sharing: Spread spectrum techniques are exploited
such that the transmission of a CR node is regarded as noise by
licensed users.
Underlay techniques can utilize higher bandwidth at the cost of a
slight increase in complexity. Considering this trade-off, hybrid
techniques can be considered for the spectrum access technology in CR
networks.

Fig 2. Opportunistic (secondary) spectrum usage with underlay
spectrum sharing (low power) and overlay spectrum sharing (higher
power) [9]

Finally, spectrum sharing techniques are generally focused on two
types of solutions: spectrum sharing inside a CR network
(intranetwork spectrum sharing) and among multiple coexisting CR
networks (internetwork spectrum sharing), as explained in the
following:
• Intranetwork spectrum sharing: These solutions focus on spectrum
allocation between the entities of a CR network, as shown in Fig. 6.
Accordingly, the users of a CR network try to access the available
spectrum without causing interference to the primary users.
Intranetwork spectrum sharing poses unique challenges that have not
been considered previously in wireless communication systems.
• Internetwork spectrum sharing: The CR architecture enables multiple
systems to be deployed in overlapping locations and spectrum. So far,
the internetwork spectrum sharing solutions provide a broader view of
the spectrum sharing concept by including certain operator policies.

5. CONCLUSION
Spectrum sharing in CRNs is more challenging than in traditional
wireless networks. Here, the spectrum band varies continuously across
space and time in terms of both availability and quality. This
varying nature of spectrum demands a radio environment-aware,
optimized spectrum allocation mechanism. In this article, we present
a spectrum sharing scheme that schedules the sensed spectrum holes
among cognitive radio (CR) users by considering the changes occurring
in the radio environment as well as the PU's activity on channels
currently in use. In the proposed framework, we assume a slotted
structure where each CR performs a sensing operation at the start of
each slot. The CR monitors the in-use channel for PU activity. If the
channel is still idle, it performs the transmission on the same
channel; otherwise it looks for some other channel for transmission
or remains silent during the entire time slot to avoid interference
with the PU. We also propose a dynamic framing process at the MAC
layer, which can form variable-size frames depending upon the
capacity of the available channels. The simulation results show that
our proposed scheme outperforms in saving transmission power while
ensuring the required throughput and fairness. Moreover, we compare
the service time and throughput of a CR user against different file
sizes. PU arrival activity on the available channels degrades the
performance of the CRN, but in our scheme the periodic monitoring
significantly enhances the performance by reducing the number of
retransmissions.
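The slotted sense-then-transmit behaviour described in the conclusion can be sketched as a per-slot decision rule: sense the in-use channel at the start of the slot; transmit if the PU is absent; otherwise try another idle channel or stay silent. This is a hypothetical illustration only; the function name, the idle probability, and the memoryless channel model are assumptions, not the paper's simulation setup.

```python
import random

def cr_slot_decision(channels, current, p_idle=0.7, rng=random):
    """One CR time slot: sense the in-use channel; if the PU is absent,
    keep transmitting on it; otherwise look for another idle channel,
    or remain silent for the whole slot to avoid interfering with the
    PU. Each channel is modelled as idle with probability p_idle."""
    sensed = {ch: rng.random() < p_idle for ch in channels}  # True = idle
    if sensed[current]:
        return ("transmit", current)
    for ch in channels:
        if ch != current and sensed[ch]:
            return ("switch", ch)
    return ("silent", None)
```

Running this rule once per slot over many slots would reproduce the qualitative behaviour the conclusion describes: retransmissions drop because the CR vacates a channel as soon as periodic monitoring detects PU arrival.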
Journal on Selected Areas in Communications. DOI
10.1109/JSAC.2015.2391691,pp 1-15

REFERENCES [9] Lars Berlemann and Stefan Mangold. 2009. Cognitive


Radio and Dynamic Spectrum Access, John Wiley & Sons
[1]Lu Lu*, Xiangwei Zhou et al 2012, Ten years of research
Ltd.
in spectrum sensing and sharing in cognitive radio, EURASIP
Journal on Wireless Communications and Networking,pp1-6.
[10]Finite Renewable Energy Storage, IEEE Journal on
Selected Areas in Communications DOI
[2] Liang Liu et al. 2013. Wireless Information Transfer with
10.1109/JSAC.2015.2391836, pp 1-16.
Opportunistic Energy Harvesting, IEEE Transactions on
Wireless Communications, VOL. 12, NO. 1, pp 288-300.
[11]M. J. Neely. 2006. Energy optimal control for time-
varying wireless networks, IEEE Trans. Inf. Theory, vol. 52,
[3] Ali A. Nasir et al. 2013. Relaying Protocols for Wireless
no. 7, pp. 2915–2934.
Energy Harvesting and Information Processing, IEEE
Transactions on Wireless Communications, VOL. 12, pp
[12] L. Georgiadis, M. J. Neely, and L. Tassiulas. 2006.
3622-3635.
Resource allocation and cross-layer control in wireless
networks, Foundations and Trends in Networking, vol. 1, no.
[4] Seunghyun Lee et al. 2013. Opportunistic Wireless Energy
1, pp. 1–144.
Harvesting in Cognitive Radio Networks, IEEE Transactions
On Wireless Communications, VOL. 12,pp4788-4799
[13] Y. Cui and V. Lau, “Distributive stochastic learning for
delay-optimal OFDMA power and sub-band allocation,”
[5] Zihao Wang et al. Wireless Energy Harvesting and
IEEE Trans. Signal Process. vol. 58, no. 9, pp. 4848–4858,
Information Transfer in Cognitive Two-Way Relay Networks,
September 2010.
IEEE Trans. Commun., vol. 61,no. 8, pp. 3181–3191
[14] Y. Cui, Q. Huang, and V. Lau, “Queue-aware dynamic
[6] Cryril Leung et al. Energy Harvesting Wireless
clustering and power allocation for network MIMO systems
Communications with Energy Cooperation between
via distributed stochastic learning,” IEEE Trans. Signal
Transmitter and Receiver, 10.1109/TCOMM.2015pp1-12
Process, vol. 59, no. 3, pp. 1229–1238, March 2011.
[7] S. Ali Mausavifar et al. a wireless energy harvesting
[15] Y. Cui, V. Lau, and Y. Wu, “Delay-aware BS
protocol for a decode and forward relay
discontinuous transmission control and user scheduling for
energy harvesting downlink coordinated MIMO systems,”
[7] Ying Cui et al. Grid Power-Delay Tradeoff for Energy
IEEE Trans. Signal Process., vol. 60, no. 7, pp. 3786– 3795,
Harvesting Wireless Communication Systems with IEEE
July 2012.
Transactions On Wireless Communications, VOL. 12, NO. 1,
pp 44-78.


Hybrid Modulation

Rishav Dewan, BGIET, Sangrur
Gagandeep Kaur, BGIET, Sangrur
Harjeet Kaur, BGIET, Sangrur

Abstract: This paper proposes a hybrid single carrier sinusoidal
modulation suitable for multiphase multilevel inverters. Multiphase
multilevel inverters are controlled by hybrid modulation to provide a
multiphase variable voltage and variable frequency supply. The
proposed modulation combines the benefits of fundamental frequency
modulation and single carrier sinusoidal modulation strategies, and
has low computational complexity. It involves hybrid modulation
without coding and hybrid modulation concatenated with convolution
encoding. We introduce a novel method for high data rate transmission
in ultra-wideband communication systems. This is achieved through a
new approach based on hybrid techniques between pulse position
modulation (PPM), pulse amplitude modulation (PAM) and pulse shape
modulation (PSM), using combinations of a minimum possible number of
modified Hermite polynomial (MHP) functions by recalling their
orthogonality property. This paper is organized as follows: section 1
is the introduction, section 2 develops the hybrid PWM algorithm,
section 3 covers ultra-wideband, and section 4 concludes.

Keywords:
PAM: Pulse Amplitude Modulation
PPM: Pulse Position Modulation
PSM: Pulse Shape Modulation
MHP: Modified Hermite Polynomial

1. INTRODUCTION
Hybrid modulation is a combination of two types of modulation. Hybrid
modulation strategies represent a combination of fundamental
frequency modulation and multilevel sinusoidal strategies, designed
from the well-known alternative phase opposition, phase shifted
carrier, carrier based space vector modulation, and single carrier
sinusoidal modulation. Multiphase machine drives have attracted
considerable interest among researchers; the advantages of multiphase
machines include improved reliability and increased fault tolerance,
greater efficiency, high torque density and reduced torque pulsation,
low per-phase power handling requirement, enhanced modularity and
improved noise characteristics. Multilevel converter technology is
based on the synthesis of voltage waveforms from several DC voltage
levels. As the number of levels increases, the synthesized output
voltage produces more steps and a waveform that approaches the
reference more accurately. Major advantages are:
• Low harmonic distortion
• Reduced switching losses
• Increased efficiency
• Good electromagnetic compatibility

Multilevel inverters are also finding increased attention in industry
as a choice of electronic power conversion for medium voltage and
high power applications, because improving the output waveform of the
inverter reduces its harmonic content and, hence, the size of the
filter used and the level of electromagnetic interference (EMI)
generated by switching operation. Various multilevel converter
structures are reported in the literature. The cascaded multilevel
inverter appears to be superior to other multilevel inverters in
applications with high power rating, due to its modular nature of
modulation as well as its control and protection requirements for
each full bridge inverter. The power circuit for the 5-phase, 5-level
cascaded inverter topology used to examine the proposed PWM technique
is shown in Fig. 1. Modulation control of a multiphase multilevel
inverter is quite challenging, and much of the reported research is
based on somewhat heuristic investigations.

Switching loss in high power, high voltage converters is a problem;
any switching transitions that can be eliminated without compromising
the harmonic content of the final waveform are considered
advantageous.

2. Hybrid PWM algorithm development
a) Multiphase Single Carrier Sinusoidal Modulation: Single carrier
sinusoidal PWM is the result of two sinusoidal modulating signals
with fundamental frequency f and amplitude m, and one carrier signal.
This PWM technique is aimed at high power voltage source inverter
systems in utility applications where the output frequency is fixed
to the utility's grid frequency.

b) Carrier Sinusoidal Modulation: The principle behind this hybrid
modulation is to mix fundamental frequency PWM and conventional
SC-SPWM for each inverter module operation, so that the output
contains the features of both. Unfortunately this arrangement causes
uneven switch losses and therefore differential heating among the
switches; in order to overcome this problem, a sequential switching
scheme is embedded into this hybrid modulation. Fig. 2 shows the
general structure of the proposed hybrid SC-SPWM scheme.

138
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

It consists of a base PWM generator and a hybrid PWM controller to generate the new pulses. In this modulation operation, three base PWM signals are needed for each module of a cascaded multilevel inverter. A sequential signal (A) is a square wave with a 50% duty ratio and half the fundamental frequency. This signal makes every power switch operate with SC-SPWM and fundamental frequency PWM sequentially, to equalize power losses among the switches. Fundamental frequency PWM (B) is a square wave signal synchronized with the modulation waveform; B = 1 during the positive half cycle of the modulation signal, and B = 0 during the negative half cycle. An SC-SPWM signal is obtained by comparing the rectified modulation waveform of each module with the carrier waveform. The amplitude of the modulation waveform is defined as ... The SC-SPWM pulse for inverter I (C) is obtained by a comparison between the rectified modulation waveform and the carrier signal, while the SC-SPWM pulse for inverter II (D) is obtained by a comparison between the rectified modulation waveform, with a bias of ..., and the carrier waveform.

3. ULTRA-WIDEBAND

UWB is a baseband communication technique in which very short duration pulses, on the order of sub-nanoseconds, with very low power spectral density (-41.5 dBm/MHz) are directly radiated into the air. This technology has many synonyms in the technical literature, such as baseband, carrier-free or impulse radio. According to the Federal Communications Commission (FCC), a radiator is defined as a UWB transmitter if it has a fractional bandwidth (FB) equal to or greater than 0.2, where FB = signal bandwidth / center frequency.
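The FCC criterion above can be expressed as a small check (an illustrative sketch; the function names and the 3.1-10.6 GHz example band are assumptions, not taken from the paper):

```python
# Sketch: FCC UWB classification by fractional bandwidth.
def fractional_bandwidth(f_low_hz, f_high_hz):
    """FB = signal bandwidth / center frequency."""
    bandwidth = f_high_hz - f_low_hz
    center = (f_high_hz + f_low_hz) / 2.0
    return bandwidth / center

def is_uwb(f_low_hz, f_high_hz):
    # FCC rule quoted in the text: FB >= 0.2 qualifies a radiator as UWB.
    return fractional_bandwidth(f_low_hz, f_high_hz) >= 0.2

print(fractional_bandwidth(3.1e9, 10.6e9))   # example band, FB ~ 1.09
print(is_uwb(3.1e9, 10.6e9))
```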
Due to the very low power in UWB systems, transmitting multiple versions of a pulse for the same information plays an important role in collecting enough energy for detection in conventional modulation techniques such as pulse position modulation (PPM). At the same time, leaving no transmission between any two successive pulses for certain durations guarantees the avoidance of inter-symbol interference (ISI). This duration should be at least as long as the impulse response (IR), which implies low data rates, and at such low data rates the spectrum is not used effectively. Hence, techniques that allow this repetition while avoiding ISI and still providing high data rates become highly desirable. Here we propose a scheme that employs a combination of pulse shape, pulse amplitude and pulse position modulation to generate biorthogonal pulses for the transmission of multiple bits or symbols. This allows more information to be transmitted than in conventional systems in the same amount of time, with even better performance.

a) A Hybrid UWB Transmission Scheme: In the proposed scheme, each bit is represented by Nf pulse pairs, where Nf is a positive integer; the two pulses in each pair are separated by a fixed time Td. The time taken to transmit a symbol is Ts. Each symbol period is partitioned into Nf frames, each of duration Tf, and each frame is partitioned into Nc chips, each of duration Tc, which typically corresponds to a pulse period. The three quantities Ts, Tf and Tc satisfy the relationship Ts = Nf·Tf = Nf·Nc·Tc.
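The frame/chip structure above can be sketched numerically (the values chosen for Nf, Nc, Tc and Td are illustrative assumptions, not from the paper):

```python
# Sketch of the symbol timing: each symbol has Nf frames, each frame has
# Nc chips of duration Tc, so Ts = Nf*Tf = Nf*Nc*Tc.
Nf, Nc = 4, 8          # frames per symbol, chips per frame (assumed values)
Tc = 2e-9              # chip duration in seconds (assumed)
Tf = Nc * Tc           # frame duration
Ts = Nf * Tf           # symbol duration

assert Ts == Nf * Nc * Tc   # the relationship Ts = Nf*Tf = Nf*Nc*Tc

def pulse_pair_times(frame_index, chip_index, Td):
    """Start times of the two pulses of a pair placed at a given frame/chip;
    the pulses of each pair are separated by the fixed delay Td."""
    t0 = frame_index * Tf + chip_index * Tc
    return t0, t0 + Td

print(pulse_pair_times(1, 3, 0.5e-9))
```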
Time-hopping (TH) and polarity scrambling are applied to each symbol to obtain processing gain, to combat multiple-access interference (MAI), and to smooth the signal spectrum. Specifically, each pulse pair is delayed pseudorandomly and scrambled by a polarity code that can take on the values ±1; this can easily be undone at the receiver.

b) Conventional UWB Transmission Systems: In this section we describe typical UWB transmission systems, which serve as motivation for the proposed system. If a multiple-access system were built only from uniformly spaced pulses, collisions could occur in which a large number of pulses from two signals arrive at the same time instants, corrupting the message. Therefore a random PPM sequence is employed. The sequences have period N, and each element of the code provides an additional discrete time shift to each pulse. If this additional time shift is chosen to be greater than or equal to the chip duration, the two sequences become fully orthogonal to each other and better performance is achieved.

c) Proposed System Description Based on Orthogonal Hermite Pulse Shapes: Modified Hermite polynomial (MHP) pulses are orthogonal to each other. Another advantage of MHP pulses is that the time duration and the bandwidth of the pulses do not change as the pulse order increases. A drawback of MHP is that the probability of error increases when a narrower correlation peak is present.
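The orthogonality that Hermite-based pulses rely on can be checked numerically. The sketch below uses physicists' Hermite polynomials, which are orthogonal under the weight exp(-t^2); it illustrates the underlying mathematics only, and is not the paper's pulse generator:

```python
# Numerical check of Hermite-polynomial orthogonality.
import math

def hermite(n, t):
    """Physicists' Hermite polynomial H_n(t) via the recurrence
    H_{n+1}(t) = 2t*H_n(t) - 2n*H_{n-1}(t)."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, 2.0 * t
    for k in range(1, n):
        h_prev, h = h, 2.0 * t * h - 2.0 * k * h_prev
    return h

def inner(m, n, steps=4000, lim=8.0):
    # Riemann-sum approximation of the integral of H_m(t)*H_n(t)*exp(-t^2).
    dt = 2.0 * lim / steps
    total = 0.0
    for i in range(steps):
        t = -lim + i * dt
        total += hermite(m, t) * hermite(n, t) * math.exp(-t * t) * dt
    return total

print(abs(inner(1, 2)) < 1e-6)   # distinct orders: (near) zero inner product
print(inner(1, 1) > 0)           # same order: positive energy
```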
4. CONCLUSION

We have proposed a hybrid UWB modulation scheme that allows reception with good quality. The key idea is to transmit information with better quality: the scheme transmits more information than conventional systems in the same amount of time, with even better performance for the same SNR. In this scheme multipath propagation


is also exploited. This paper also described methods to reduce the switching losses of the conventional single carrier PWM scheme with low computational complexity.

REFERENCES

1) R. van Nee, R. Prasad, OFDM for Wireless Multimedia Communications, Artech House Publishers, 2000.
2) S. H. Han, J. H. Lee, "An Overview of Peak-to-Average Power Ratio Reduction Techniques for Multicarrier Transmission," IEEE Wireless Communications, Vol. 12, pp. 56-65, April 2005.
3) S. H. Muller, R. W. Baumal, J. B. Huber, "OFDM with reduced peak-to-average power ratio by multiple signal representation," Annals of Telecommunications, Vol. 52, pp. 58-67, Feb. 1997.
4) J. G. Proakis, Digital Communications, 4th Ed., McGraw-Hill Series in Electrical Engineering, 2000.
5) A. Latif, N. D. Godhar, "A Hybrid MQAM-LFSK OFDM Transceiver with Low PAPR," 2nd IEEE Int. Conf. on Wireless Communications, Networking and Mobile Computing (WiCom06), Sept. 22-24, 2006, China.
6) M. Wetz, W. G. Teich, J. Lindner, "PAPR Reduction Methods for Noncoherent OFDM-MFSK," Proc. 3rd COST 289 Workshop, Aveiro, Portugal, Jul. 2006.
7) D. Saha, T. G. Birdsall, "Quadrature-Quadrature Phase Shift Keying," IEEE Trans. Comm., Vol. 37, No. 5, pp. 437-448, May 1989.
8) M. G. di Benedetto, T. Kaiser, A. F. Molisch, I. Oppermann, C. Politano, and D. Porcino, Eds., UWB Communication Systems: A Comprehensive Overview. New York: Hindawi, 2006.


Removal of Power Line Interference from EEG using Wavelet-ICA

Gautam Kaushal, Department of E.I.E., S.L.I.E.T. Longowal (Punjab), gautam_kaushal@yahoo.com
V. K. Jain, Department of E.I.E., S.L.I.E.T. Longowal (Punjab)
Amanpreet Singh, Punjab Technical University, Jalandhar (Punjab)

ABSTRACT

Electroencephalogram (EEG) signals have very small amplitudes and so can easily be contaminated by different artifacts. Due to the presence of various artifacts in the EEG, its analysis becomes difficult for clinical evaluation. The major types of artifacts that affect the EEG are power line noise, eye movements, the electromyogram (EMG), and the electrocardiogram (ECG). Of these, power line noise and eye-movement-related artifacts are the most prominent, and various methods have been evolved by different researchers to deal with them. In this paper, a new technique based on wavelet analysis and Independent Component Analysis (ICA) is presented for removing 50 Hz power line noise from a single channel EEG signal. The signal is first decomposed into spectrally non-overlapping components using the Stationary Wavelet Transform (SWT), which decomposes the single channel EEG signal into components at different frequency levels. The ICA algorithm is then applied to derive the independent components. The wavelet-ICA components associated with artifact-related events are selected and cancelled out, and the artifact-free wavelet components are reconstructed to form an artifact-free EEG. The performance of the algorithm is analysed using the Signal to Noise Ratio (SNR).

Keywords
EEG, SWT, ICA, SNR, CWT, DWT

1. INTRODUCTION

The electrical activity of active nerve cells produces currents spreading through the head which can be recorded as the electroencephalogram (EEG); these signals are complex in nature. EEG signals range from 0.5 to 100 μV in amplitude (peak to peak), which is about 100 times lower than ECG signals [1]. EEG waveforms can be categorized into four basic groups: delta (0.4-4 Hz), theta (4-8 Hz), alpha (8-13 Hz) and beta (13-30 Hz) [1]. Due to their very low amplitude, EEG signals are prone to artifacts and noise. The noise can be electrode noise or can be generated by the body itself. The various types of noise that can contaminate the signals during recording are electrode noise, baseline movement, EMG disturbance, eye movements, eye blinks and sometimes ECG disturbance; these noises in the EEG signals are called artifacts. A critical point in EEG signal processing is the need for careful treatment and reduction of these artifacts, which contaminate the EEG signals and can thus lead to wrong results and conclusions. There are many approaches to dealing with these artifacts. Simply rejecting contaminated EEG epochs is one of the common methods, but it involves manually reviewing the data, identifying contaminated segments, and then rejecting those segments. This process is laborious and results in unacceptable data loss when there is a high degree of contamination in the raw data. The alternative is to remove the artifact/noise from the data, for which different methods exist, such as filtering, the Wavelet Transform, and Independent Component Analysis (ICA).

This paper presents a new algorithm based on the joint use of ICA and wavelet analysis. ICA is a multichannel technique [2] and therefore cannot be applied directly to a single channel EEG signal; a technique is needed that can represent the single channel signal as a virtual multichannel signal [7]. The Wavelet Transform is used to decompose the signal, ICA is then applied, and the Wavelet-ICA components are finally reconstructed to form the de-noised signal.

2. EASE OF USE

2.1 Wavelet Transform
The wavelet transform (WT) [3,4,5] is one of the leading techniques for processing non-stationary signals, and is thus well suited for EEG signals, since these are non-stationary. The major feature of the WT is its capacity to decompose a signal into components that are well localized in scale (which is essentially the inverse of frequency) and time [6]. The continuous WT (CWT) uses a family of wavelets obtained by "continuously" scaling and translating a localized function called the mother wavelet. The discrete WT (DWT) is obtained by discretising the scale and translation variables "dyadically", i.e. by using powers of two; this allows the CWT to be implemented on a computer. The DWT is not translation invariant [5]. However, such invariance is required in some applications, like change detection and de-noising [5]. The transform obtained by removing the down-samplers and up-samplers from the DWT is translation invariant, and is naturally called the stationary WT (SWT) [5]. The DWT decomposes the signal into two parts: detail and approximation data on different scales. The approximation domain is sequentially decomposed into further detail and approximation data, and these decompositions of the signal act as the input matrix for the ICA technique. The DWT amounts to choosing subsets of the scales 'a' and positions 'b' of the mother wavelet ψ(t):

ψ(a,b)(t) = 2^(a/2) ψ(2^a t - b)     (1)

Here, the mother wavelet functions are dilated by powers of two and translated by integers. Scales and positions chosen


based on powers of two are called dyadic scales and positions. The discrete wavelet transform does not preserve translation invariance [8]. To preserve the translation invariance property, a new approach, the stationary wavelet transform (SWT), has been defined, which is close to the DWT [9].
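The translation invariance of the SWT can be illustrated with a one-level undecimated Haar step (a pure-Python sketch for intuition, not the toolbox implementation used in the paper):

```python
# Sketch: a one-level stationary (undecimated) Haar transform. Because no
# down-sampling is performed, shifting the input simply shifts the output.
def haar_swt_level1(x):
    """Undecimated Haar step: averages and differences with circular wrap."""
    n = len(x)
    approx = [(x[i] + x[(i + 1) % n]) / 2 for i in range(n)]
    detail = [(x[i] - x[(i + 1) % n]) / 2 for i in range(n)]
    return approx, detail

x = [1.0, 3.0, 2.0, 6.0, 4.0, 0.0, 5.0, 7.0]
shifted = x[-1:] + x[:-1]                 # circularly shifted input

a1, d1 = haar_swt_level1(x)
a2, d2 = haar_swt_level1(shifted)

# Translation invariance: the SWT of the shifted signal is the shifted SWT.
print(a2 == a1[-1:] + a1[:-1] and d2 == d1[-1:] + d1[:-1])
```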
artifacts. To obtain the appearance of signal in the form of
2.2 ICA
Independent Component Analysis (ICA) [2] involves computing the matrix projection of a set of components onto another set of so-called independent components. The objective is to maximize the statistical independence of the outputs. If the inputs to ICA are known to be linear instantaneous mixtures of a set of sources, the ICA process provides an estimate of the original sources. The original source vector S is of size M x N and the mixing matrix A is of size M x M, where M is the number of statistically independent sources and N is the number of samples in each source.

The result of the separation process is the de-mixing matrix W, which can be used to obtain the estimated statistically independent sources Ŝ from the mixtures.

In EEG signal processing, the ICA algorithm is suitable if the number of channels (mixed signals) is greater than or equal to the number of sources; a group of algorithms, collectively called ICA, can then recover these sources. The ICA model is stated by the following equation:

x = A·s     (2)

where x is the mixed signal, s is the vector of sources and A is the mixing matrix. The main aim of ICA is to find the un-mixing matrix W that recovers the independent components under suitable independence criteria:

s* = W·x     (3)

W = A^(-1)     (4)

If the coefficients s* are treated as independent random variables, then we have a generative linear statistical model. Furthermore, if we assume that A is square and invertible, we have the classic ICA model [2,5]. In this paper the FastICA algorithm is applied; it estimates the independent components from the given multi-dimensional signals.
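Equations (2)-(4) can be illustrated in miniature with a known 2x2 mixing matrix (an idealized sketch: FastICA estimates W from the data alone, whereas here W is simply the inverse of an assumed A):

```python
# x = A.s, then s* = W.x with W = A^-1 recovers the sources exactly.
def mat2_inv(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat2_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A = [[1.0, 0.5], [0.3, 1.0]]    # mixing matrix (assumed values)
s = [2.0, -1.0]                 # two source samples
x = mat2_vec(A, s)              # mixed observations, x = A.s   (2)
W = mat2_inv(A)                 # un-mixing matrix, W = A^-1    (4)
s_est = mat2_vec(W, x)          # recovered sources, s* = W.x   (3)

print([round(v, 10) for v in s_est])
```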

2.3 Wavelet-ICA
In the ICA model, the input must be a matrix and not a vector [6]. This means ICA cannot be applied directly to a single channel signal, so it is necessary to construct an input matrix when only single channel data is available. Wavelet decomposition is used to form this input matrix in the Wavelet-ICA technique.

First of all, baseline wandering is removed by taking the average of the signal. Then the SWT is used for wavelet decomposition, with mother wavelet Symlet and decomposition level 8. The decomposed signal acts as the input matrix for ICA. FastICA is applied to the decomposed signal to find the mixing and un-mixing matrices (A and W respectively) along with the matrix of independent components. After this, the desired sources of interest are selected, i.e. the components related to artifacts are removed. To obtain the signal in the form of wavelet components, the sources of interest are multiplied by the mixing matrix A; to recover the signal in its original form, wavelet reconstruction is done using the inverse stationary wavelet transform (ISWT).


3. RESULTS AND DISCUSSION
The procedure described above was applied to 20 different epochs from 4 patients. The data were taken from CSIO Chandigarh; the time period of the signals was 4 seconds, with 1024 samples at a sampling frequency of 256 samples per second. Figure 1 shows the raw EEG signal and the signal after removal of baseline wandering. The FFTs of the raw signal and of the signal filtered with the proposed algorithm are shown in Figures 2 and 3 respectively.

Figure 1. EEG signal in original form: raw signal (red) and raw EEG after removing baseline wandering (blue)

Figure 2. FFT of the original signal

Figure 3. FFT of the filtered signal with the proposed algorithm

Figure 4. Filtered EEG signal using the proposed algorithm

Figure 5. Signals in the frequency domain: raw signal (blue), result from the LPF (green), and the proposed Wavelet-ICA algorithm (red)

The performance of the algorithm was analysed using the signal to noise ratio (SNR). The results of the proposed algorithm are compared with those of a low pass filter in Table 1. The present method shows better results, with higher SNR than the low pass filter, consistently for all the signals. The frequency-domain comparison of the present method and the low pass filter with the original signal is shown in Figure 5, and the filtered signal in the time domain is shown in Figure 4. The low pass filter also generates some distortion in the filtered signal, which can lead to information loss; the present method preserves the signal, and the chance of information loss is minimal.

Table 1. Comparison of SNR of Low Pass Filter and Proposed Method

Subject | SNR of Low Pass Filter | SNR of Proposed Method
   1    |         43.10          |         47.28
   2    |         41.86          |         45.12
   3    |         43.06          |         46.86
   4    |         41.76          |         45.27
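The SNR figures in Table 1 are in decibels. A sketch of the usual power-ratio computation follows (the exact definition used by the authors is not stated, so the residual-noise form below is an assumption):

```python
# SNR in dB: 10*log10(signal power / noise power), where the noise is taken
# here as the residual between the clean signal and the de-noised estimate.
import math

def snr_db(signal, denoised):
    p_signal = sum(v * v for v in signal)
    p_noise = sum((s - d) ** 2 for s, d in zip(signal, denoised))
    return 10.0 * math.log10(p_signal / p_noise)

clean = [1.0, -2.0, 3.0, -4.0]
estimate = [1.1, -1.9, 3.1, -3.9]       # small residual error (toy values)
print(round(snr_db(clean, estimate), 2))
```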

4. CONCLUSION
In this paper, a method is proposed for the removal of power line interference from a single channel EEG signal on the basis of the joint use of wavelet analysis and the FastICA technique. Stationary wavelet analysis decomposes the signal vector into an input matrix for the ICA technique. The target noise was 50 Hz power line noise. When compared with a low pass filter, this method shows better SNR results with negligible information loss.

5. ACKNOWLEDGEMENTS
We are thankful to CSIO, Chandigarh for providing the EEG data and for making their facilities and equipment available for this study.

REFERENCES
[1] M. Teplan, "Fundamentals of EEG measurement," Measurement Science Review, 2002; Vol. 2.
[2] Aapo Hyvärinen, Erkki Oja, "Independent component analysis: algorithms and applications," Neural Networks, 2000; 13: 411-430.
[3] S. Mallat, "A Wavelet Tour of Signal Processing," 2nd ed., Academic Press, 1999.
[4] G. P. Nason, B. W. Silverman, "The stationary wavelet transform and some statistical applications," Lecture Notes in Statistics, 103 (1995), 281-300.
[5] Giuseppina Inuso, Fabio La Foresta, "Wavelet-ICA methodology for efficient artifact removal from electroencephalographic recordings," Proceedings of the International Joint Conference on Neural Networks, 2007; pp. 12-17.
[6] J. Lin, A. Zhang, "Fault feature separation using wavelet-ICA filter," NDT & E Int., 2005; Vol. 38: 421-427.
[7] Monika Sheoran et al., "Wavelet-ICA based Denoising of Electroencephalogram Signal," IJECT, ISSN 0974-2239, Volume 4, Number 12 (2014); pp. 1205-1210.
[8] P. Senthil Kumar, R. Arumuganathan, K. Sivakumar, C. Vimal, "A wavelet based statistical method for de-noising of ocular artifacts in EEG signals," International Journal of Computer Science and Network Security, 2008; Vol. 8(9).
[9] Michel Misiti, Yves Misiti, Georges Oppenheim, Jean-Michel Poggi, Wavelet Toolbox User's Guide, R2013b, 2013; 3: 73-91.


A Survey on Reversible Logic Gates

Santosh Rani, Department of ECE, Punjabi University, Patiala, Sammyrai82@gmail.com
Amandeep Singh Bhandari, Department of ECE, Punjabi University, Patiala, singh.amandeep183@gmail.com

ABSTRACT

Reversible computing is an attractive research area for reducing power dissipation, in contrast to conventional computing, which dissipates more power by losing bits of information. Reversible computing avoids losing bits of information by providing a unique path from each state back to its previous state. The main purposes of designing reversible computing circuits are to reduce the garbage outputs, the gate count and the quantum cost. Reversible computing is used in nanotechnology, low power CMOS design, optical computing and quantum computing.

Keywords
Reversible logic gates, garbage output, quantum cost, power dissipation.

1. INTRODUCTION

Reversibility in computing implies that information about the computational states should never be lost. In conventional computing all logical operations are performed by millions of gates, so bits of information are lost and dissipated in the form of heat. It has been shown that for every bit of information lost in conventional computations, KT·log2 joules of heat energy are generated, where K is Boltzmann's constant and T is the absolute temperature. At room temperature the amount of dissipated heat is small, but not negligible. In [1] Landauer proved that this heat dissipation is avoidable if the system is made reversible.
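The KT·log2 bound can be evaluated directly (a sketch; room temperature T = 300 K is assumed, and the logarithm is taken as the natural log, as in Landauer's bound kT·ln 2):

```python
# Landauer's bound: minimum heat per erased bit at temperature T.
import math

k_B = 1.380649e-23          # Boltzmann's constant, J/K
T = 300.0                   # room temperature in kelvin (assumed)

energy_per_bit = k_B * T * math.log(2)
print(f"{energy_per_bit:.3e} J per erased bit")
```

At 300 K this comes to roughly 2.87e-21 J, which is why the text calls the per-bit heat small but not negligible once millions of gates switch continuously.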
In [2] Bennett showed that in reversible computing, in which no information is destroyed, only a small amount of heat is dissipated. A reversible logic gate must have the same number of input and output lines, with a unique output vector produced from each input vector; this is termed "logical reversibility". Physical reversibility is a process that dissipates no heat in terms of wasted energy. Research on reversible logic therefore plays a significant role in quantum computing, quantum dot cellular automata, spacecraft, optical computing, nanotechnology and low power CMOS design, for the development of more powerful computers and computations.

One of the major constraints in reversible logic is to minimize the number of reversible gates used; the complexity of a circuit becomes much lower when fewer primitive gates are required. Fan-out is not allowed in reversible logic (fan-out is the maximum number of digital inputs that the output of a single logic gate can feed). Another goal is minimizing the garbage outputs produced; a garbage output is an output that is not used for further computation. Constant inputs are also a main parameter of reversible logic gates: a constant input is held at either 0 or 1 in a reversible logic function to make it reversible. Each reversible gate has a cost associated with it, called the quantum cost. The quantum cost of the circuit is calculated by knowing the number of primitive reversible gates required to realize the circuit.

Quantum networks are composed of quantum logic gates, each gate performing an elementary unitary operation on one, two or more two-state quantum systems called qubits. Qubits can perform calculations exponentially faster than conventional bits; each qubit is represented as 0 or 1. Every reversible circuit is built from 1x1 and 2x2 quantum primitives, and its cost is calculated as the sum of the 2x2 gates; the quantum cost of a 1x1 reversible gate is zero. Since there are different cost considerations, such as garbage outputs, gate count and quantum cost, methods for specific cost reductions may be developed.

2. LITERATURE SURVEY

In 1961, R. Landauer described that logical irreversibility is associated with physical irreversibility and requires a minimal heat generation per machine cycle. For irreversible logic computations, each bit of information lost generates kT·log2 joules of heat energy, where k is Boltzmann's constant and T is the absolute temperature at which the computation is performed. In a conventional system millions of gates are used to perform logical operations. The author proved that this heat dissipation is avoidable if the system is made reversible [1].

In 1973, C. H. Bennett described that if a computation is carried out in reversible logic, zero energy dissipation is possible, as the amount of energy dissipated in a system is directly related to the number of bits erased during computation. A design that does not result in information loss is reversible. A set of reversible gates is needed to design a reversible circuit; a reversible gate generates a unique output vector from each input vector and vice versa [2].

In 2008, Majid Mohammadi et al. presented quantum gates to implement binary reversible logic gates, with the quantum gates V and V+ represented in truth table form. The authors showed that several reversible circuit benchmarks are optimized in comparison with existing work, and presented a new behavioral model for the V and V+ quantum gates based on their properties; this model is used to simulate the quantum realization of reversible circuits [3].

In 2010, D. Michael Miller and Zahra Sasanian presented methods for lowering the quantum gate cost of reversible circuits; reducing the quantum cost improves the efficiency of the circuit. The approach to determining a quantum circuit is to first


synthesize a circuit composed of binary reversible gates and then map that circuit to an equivalent quantum gate realization [4].

In 2011, Prashant R. Yelekar et al. described the ability of reversible logic gates to reduce power dissipation, which is a main requirement in VLSI design. Reversible computing requires high energy efficiency, speed and performance, and includes applications such as low power CMOS, quantum computers, nanotechnology, optical computing and self-repair [5].

In 2011, Md. Mazder Rahman et al. presented a quantum gate library that consists of all possible two-qubit quantum gates which do not produce entangled states. These gates are used to reduce the quantum cost of reversible circuits; the proposed two-qubit quantum gate library plays a significant role in reducing the quantum cost of reversible gates [6].

In 2012, B. Raghu Kanth et al. described that implementing reversible logic has the advantages of reducing garbage outputs, gate count and constant inputs. Addition and subtraction operations are realized using the reversible DKG gate and compared with conventional gates. The proposed reversible adder/subtractor circuit can be applied to the design of complex systems in nanotechnology [7].

In 2012, Devendra Goyal presented VHDL code for all reversible logic gates, which enables the design of VHDL code for any complex sequential circuit; the code can be simulated and synthesized using Xilinx software [8].

In 2013, Marek Szyprowski presented a tool for minimizing the quantum cost in 4-bit reversible circuits, and showed that for benchmarks and for designs taken from recent publications it is possible to obtain savings in quantum cost compared with existing circuits [9].

In 2013, Raghava Garipelly surveyed the basic reversible logic gates used in designing more complex systems that have reversible circuits as primitive components and can execute more complicated operations using quantum computers, and introduced some new gates such as BSCL, SBV, NCG and PTR [10].

3. SOME IMPORTANT REVERSIBLE LOGIC GATES

FEYNMAN GATE
The Feynman gate is a 2x2 gate. The input vector is I(A,B) and the output vector is O(P,Q). The outputs are defined by P = A, Q = A⊕B. The quantum cost of the Feynman gate is 1. The Feynman gate can be used as a copying gate.

Figure 1: Feynman gate

BJN GATE
The BJN gate is a 3x3 gate. The input vector is I(A,B,C) and the output vector is O(P,Q,R). The outputs are defined by P = A, Q = B, R = (A+B)⊕C. The quantum cost of the BJN gate is 5.

Figure 2: BJN gate ((a) block diagram, (b) symbol)

PERES GATE
The Peres gate is a 3x3 gate. The input vector is I(A,B,C) and the output vector is O(P,Q,R). The outputs are defined by P = A, Q = A⊕B and R = AB⊕C. The quantum cost of the Peres gate is 4.

Figure 3: Peres gate

FREDKIN GATE
The Fredkin gate is a 3x3 gate. The input vector is I(A,B,C) and the output vector is O(P,Q,R). The outputs are defined by


P = A, Q = A′B⊕AC and R = A′C⊕AB. The quantum cost of the Fredkin gate is 5.

Figure 4: Fredkin gate

NFT GATE
The NFT gate is a 3x3 gate. The input vector is I(A,B,C) and the output vector is O(P,Q,R). The outputs are defined by P = A⊕B, Q = B′C⊕AC′ and R = BC⊕AC′. The quantum cost of the NFT gate is 5.

Figure 5: NFT gate ((a) block diagram, (b) symbol)

NG GATE
The NG gate is a 3x3 gate. The input vector is I(A,B,C) and the output vector is O(P,Q,R). The outputs are defined by P = A, Q = AB⊕C, R = A′C′⊕B′.

Figure 6: NG gate ((a) block diagram, (b) symbol)
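The reversibility of the gates above can be verified exhaustively: an m x m gate must map the 2^m input vectors one-to-one onto 2^m output vectors. A pure-Python truth-table sketch for the Feynman, Peres and Fredkin gates:

```python
# Exhaustive bijectivity check for a few reversible gates.
from itertools import product

def feynman(a, b):            # P = A, Q = A xor B
    return a, a ^ b

def peres(a, b, c):           # P = A, Q = A xor B, R = AB xor C
    return a, a ^ b, (a & b) ^ c

def fredkin(a, b, c):         # P = A, Q = A'B xor AC, R = A'C xor AB
    return a, (c if a else b), (b if a else c)

for gate, width in ((feynman, 2), (peres, 3), (fredkin, 3)):
    outputs = {gate(*bits) for bits in product((0, 1), repeat=width)}
    # A unique output per input means the gate is reversible.
    assert len(outputs) == 2 ** width
print("all gates bijective")
```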
4. PARAMETERS RELATED TO REVERSIBLE GATES

4.1 Reversible Gate
A reversible gate is an m-input, m-output (denoted m x m) circuit that produces a unique output pattern from each input pattern and vice versa. A gate is reversible only if the number of inputs equals the number of outputs.

4.2 Garbage Output
An unused output of a reversible gate is called a garbage output: an output that is used neither as a primary output nor as an input to another gate. Garbage outputs are needed in a circuit to maintain reversibility. For example, when the 3x3 Toffoli gate is used to implement operations like AND and EX-OR, the outputs P = A and Q = B are garbage outputs produced to preserve reversibility.

4.3 Quantum Cost
Each reversible gate has a cost associated with it, called the quantum cost. The quantum cost of a circuit is calculated by knowing the number of primitive reversible gates required to realize the circuit; the quantum cost of a 1x1 reversible gate is zero. The quantum gates V and V+ satisfy the properties given in equations (1), (2) and (3):

V·V = NOT .................... (1)
V·V+ = V+·V = I .............. (2)
V+·V+ = NOT .................. (3)
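These identities can be checked numerically for the standard square-root-of-NOT matrix V = 1/2·[[1+i, 1-i], [1-i, 1+i]] (a common choice for V; the paper does not give the matrix, so this particular form is an assumption, and the right-hand side of the second identity is the 2x2 identity matrix):

```python
# Verify V*V = NOT, V*V+ = I and V+*V+ = NOT for one realization of V.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    # Conjugate transpose, i.e. the V+ of the text.
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12
               for i in range(2) for j in range(2))

V = [[(1 + 1j) / 2, (1 - 1j) / 2],
     [(1 - 1j) / 2, (1 + 1j) / 2]]
NOT = [[0, 1], [1, 0]]
I2 = [[1, 0], [0, 1]]

print(close(matmul(V, V), NOT))                    # property (1)
print(close(matmul(V, dagger(V)), I2))             # property (2)
print(close(matmul(dagger(V), dagger(V)), NOT))    # property (3)
```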

4.4 Restrictions on Reversible Logic Synthesis
When designing a reversible circuit from reversible gates, some restrictions apply: no more than one fan-out is allowed, and feedback (loops) is strictly prohibited.

4.5 Delay of a Reversible Circuit


The delay of a reversible circuit is the delay of its critical path. The critical path refers to the maximum number of gates from any input to any output of the circuit. The delay of a reversible circuit is based on the following assumptions:

- Every reversible gate takes the same amount of time for its internal logic operations in the circuit.
- Before computation starts, all inputs to the circuit are known.

5. CONCLUSION AND FUTURE WORK
Reversible computing has great significance in reducing the complexity of digital circuits. For this purpose various reversible logic gates are available; using these gates, any combinational or sequential circuit can be designed with low power, low complexity, less delay, high speed, etc. Reversible computing is becoming an important research area, with applications including quantum computing, nanotechnology, low power CMOS design, spacecraft, cryptography and digital signal processing.

REFERENCES
[1] R. Landauer, "Irreversibility and Heat Generation in the Computational Process", IBM Journal of Research and Development, 1961, Vol. 5, Issue 3, pp. 183-191.
[2] C. H. Bennett, "Logical Reversibility of Computation", IBM Journal of Research and Development, 1973, Vol. 17, pp. 525-532.
[3] Majid Mohammadi, "Behavioral Model of V and V+ gates to implement the reversible circuits using quantum cost", IEEE, 2008.
[4] D. Michael Miller and Zahra Sasanian, "Lowering the quantum gate cost of reversible circuits", IEEE, 2010.
[5] Prashant R. Yelekar, "Introduction to reversible logic gates and its application", 2nd National Conference on Information and Communication Technology, 2011.
[6] Md. Mazder Rahman, "Two-qubit quantum gates to reduce the quantum cost of reversible circuits", IEEE International Symposium on Multiple-Valued Logic, 2011.
[7] B. Raghu Kanth, "A Distinguish between reversible and conventional logic gates", International Journal of Engineering Research and Applications, 2012, Vol. 2, Issue 2, pp. 148-151.
[8] Devendra Goyal, Vidhi Sharma, "VHDL Implementation of Reversible Logic Gates", International Journal of Advanced Technology & Engineering Research (IJATER), 2012.
[9] Marek Szyprowski, "Reducing quantum cost in reversible Toffoli circuits", IEEE, 2013.
[10] Raghava Garipelly, P. Madhu Kiran, A. Santhosh Kumar, "A Review on Reversible Logic Gates and their Implementation", International Journal of Emerging Technology and Advanced Engineering, March 2013, Vol. 3, Issue 3.


A REVIEW OF REVERSIBLE LOGIC GATES

Sukhjeet Kaur
Department of ECE, Punjabi University, Patiala
sukhjeetsidhu27may@yahoo.com

Amandeep Singh Bhandari
Department of ECE, Punjabi University, Patiala
singh.amandeep183@gmail.com

ABSTRACT

Reversible logic is one of the most interesting topics at present. It is impossible to realize quantum computing without the implementation of reversible logic. The purposes of designing reversible circuits are to reduce quantum cost and garbage outputs and to use fewer constant inputs. They are used in various fields such as low power VLSI design, optical computing, nanotechnology, quantum computers, and the design of low power arithmetic and data paths for digital signal processing (DSP).

Keywords
Quantum cost, Garbage output, Reversible logic gates, Quantum computation

1. INTRODUCTION
The loss of information is associated with laws of physics requiring that one bit of information lost dissipates kTln2 of energy. In conventional circuits information is lost, so interest in reversible computation arises from the desire to reduce heat dissipation. In reversibility there are two terms: one is logical reversibility and the other is physical reversibility. Reversibility in computing implies that no information about the computational states can ever be lost, so we can recover any earlier stage by computing backward or un-computing the results. This is termed logical reversibility. Physical reversibility is a process that dissipates no energy to heat. An absolutely perfect physical computing system is practically unachievable. Power dissipation of reversible circuits under ideal conditions is zero. Computing systems give off heat when voltage levels change from +ve to -ve (bits changing from 0 to 1). Most of the energy needed to make that change is given off in the form of heat. Rather than changing voltages to new levels, reversible circuit elements will gradually move charge from one node to the next. This way one can expect to lose only a minute amount of energy on each transition. Reversible computing will also lead to improvement in energy efficiency.
Reversible circuits or gates are those that have a one-to-one mapping between vectors of inputs and outputs; thus the vector of input states can always be reconstructed from the vector of output states. In reversible logic gates the number of output bits always equals the number of input bits. The fan-out of every signal, including primary inputs, in a reversible gate must be one.

Figure 1: A reversible gate with n inputs and n outputs is called an n*n gate.
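As a small illustrative sketch (not from the paper), the one-to-one property can be seen on the 2x2 Feynman (CNOT) gate, which is its own inverse, so applying it twice reconstructs the inputs:

```python
from itertools import product

def feynman(a, b):
    """2x2 Feynman (CNOT) gate: P = A, Q = A XOR B."""
    return a, a ^ b

# Applying the gate to its own outputs recovers the original inputs,
# so no information is ever lost (logical reversibility).
for a, b in product((0, 1), repeat=2):
    p, q = feynman(a, b)
    assert feynman(p, q) == (a, b)

# Equivalently, the gate is a bijection on 2-bit vectors.
assert len({feynman(a, b) for a, b in product((0, 1), repeat=2)}) == 4
```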
2. LITERATURE SURVEY
In 1961, R. Landauer described that logical irreversibility is associated with physical irreversibility and requires a minimal heat generation, per machine cycle, typically of the order of kT for each irreversible function. A device is logically irreversible if the output of the device does not uniquely define the inputs. Whenever we use a logically irreversible gate we dissipate energy into the environment. The loss of information is associated with laws of physics requiring that one bit of information lost dissipates kTln2 of energy, where k is Boltzmann's constant and T is the temperature of the system. The dissipation as calculated is an absolute minimum [1].
In November 1973, C. H. Bennett described that reversibility in computing implies that information about the computational states should never be lost. The information can be recovered for any earlier stage by computing backwards or un-computing the results. This is termed "logical reversibility". Physical reversibility is a process that dissipates no heat in terms of wastage of energy [2].
In 2008, Majid Mohammadi et al. explained the use of quantum gates in the implementation of binary reversible logic circuits. They proposed behavioral models for the well-known V and V+ quantum gates. They explained that their proposed representation is used to synthesize reversible logic circuits instead of the known unitary matrix form of quantum gates [3].
In March 2010, D. Michael Miller et al. explained that one approach to determining a quantum circuit is to first synthesize a circuit composed of binary reversible gates and to then map that circuit to an equivalent quantum gate realization. In this paper they considered that mapping phase with the goal of reducing the number of quantum gates required. They presented results for a quantum library of NOT, controlled-NOT and square-root-of-NOT gates [4].
In 2011, Prof. Sujata S. Chiwande et al. presented basic reversible gates to build complicated circuits which can be implemented in some sequential circuits as well as in some combinational circuits. They also built adder circuits using basic gates like the Peres and TSG gates. They also proposed a 4-bit reversible asynchronous down counter and explained that this counter is used in applications of digital circuits like timers/counters, building ALUs and reversible processors [5].


In 2011, Md. Mazder Rahman et al. explained a quantum gate library that contains all possible two-qubit quantum gates which do not produce entangled states. The proposed two-qubit quantum gate library plays a significant role in reducing the quantum cost of reversible circuits. They also showed experimental results in which a reduction in quantum cost of more than 20% was observed in more than half of the circuits from RevLib [6].
In April 2012, B. Raghu Kanth et al. compared conventional and reversible logic gates. They realized the addition and subtraction operations using the reversible DKG gate and compared it with conventional logic gates. The authors explained that a 4*4 reversible DKG gate can work singly as a reversible full adder and a reversible full subtractor. They also compared the results of the conventional adder with the reversible one on Xilinx 9.1 software [7].
In May 2012, Mr. Devendra Goyal et al. explained that reversible logic is very essential for the construction of low power, low loss computational structures which are essential for the construction of arithmetic circuits used in quantum computation and other low power digital circuits. In this paper, they presented the combinational circuits of all basic reversible logic gates and also wrote VHDL code for these circuits [8].
In March 2013, Raghava Garipelly et al. explained that quantum networks are composed of quantum logic gates, each gate performing an elementary unitary operation on one or more two-state quantum systems called qubits. They also explained constraints for reversible logic circuits: reversible logic gates do not allow fan-out and require a minimum number of garbage outputs, a minimum number of constant inputs, and also minimum quantum cost. They proposed a new 6*6 BSCL reversible logic gate [9].
In 2013, Marek Szyprowski et al. explained that minimization of gate count in 4-bit circuits is a hard problem. They explained that a tool of practical usage for finding an optimal gate count Toffoli network for 4-variable functions was developed. In this paper, they presented reductions in the quantum cost of 4-bit reversible circuits. They also explained that it is possible to obtain savings in quantum cost of up to 74% compared with previously known circuits [10].

3. Different Types of Reversible Logic Gates

- NOT Gate: It is a 1*1 gate, the simplest reversible logic gate, with input A and output P = NOT A. The quantum cost of NOT is zero.

Figure 3.1: Symbol of NOT Gate.

- Toffoli Gate: It is a 3*3 gate with inputs (A, B, C) and outputs P=A, Q=B, R=AB XOR C. It has quantum cost five.

Figure 3.2: Symbol of Toffoli Gate.

- DPG Gate: It is a 4*4 reversible logic gate with inputs (A, B, C, D) and outputs P=A, Q=A XOR B, R=(A XOR B) XOR D, S=(A XOR B)D XOR AB XOR C.

Figure 3.3: Symbol of DPG Gate.

- TR Gate: It is a 3*3 reversible logic gate with inputs (A, B, C) and outputs P=A, Q=A XOR B, R=AB XOR C. It has quantum cost six.

Figure 3.4: Symbol of TR Gate.

- BJN Gate: It is a 3*3 reversible logic gate with inputs (A, B, C) and outputs P=A, Q=B, R=(A OR B) XOR C. It has quantum cost five.
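As an illustrative check (not part of the paper), the reversibility of such definitions can be verified by exhaustive enumeration; the BJN and DPG mappings above are bijections on 3-bit and 4-bit vectors respectively:

```python
from itertools import product

# Output functions exactly as defined above (XOR = ^, OR = |, AND = &).
def bjn(a, b, c):
    # P = A, Q = B, R = (A OR B) XOR C
    return a, b, (a | b) ^ c

def dpg(a, b, c, d):
    # P = A, Q = A XOR B, R = (A XOR B) XOR D, S = (A XOR B)D XOR AB XOR C
    return a, a ^ b, (a ^ b) ^ d, ((a ^ b) & d) ^ (a & b) ^ c

for gate, width in ((bjn, 3), (dpg, 4)):
    outputs = {gate(*bits) for bits in product((0, 1), repeat=width)}
    # Every input vector maps to a distinct output vector, so the
    # mapping is a bijection and the gate is reversible.
    assert len(outputs) == 2 ** width
```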


Figure 3.5: Symbol of BJN Gate.

Various parameters for determining the complexity and performance of the circuit are: number of reversible gates, number of constant inputs, number of garbage outputs, and quantum cost.

4. QUANTUM COMPUTING

Classical reversible computing manipulates bits, but quantum computers manipulate qubits. Each qubit represents an elementary unit of information corresponding to the classical bit values 0 and 1. In the quantum circuit model, we have logical qubits carried along "wires" and quantum gates that act on the qubits. Each quantum gate acting on n qubits has input qubits carried to it by n wires, and n other wires carry the output qubits away from the gate. The quantum cost of any reversible gate is calculated by counting the number of V, V+ and CNOT gates present in its circuit. V is the square root of the NOT gate and V+ is its Hermitian conjugate. The V and V+ quantum gates have the following properties:

V*V = NOT................. (1)
V*V+ = V+*V = 1.......... (2)
V+*V+ = NOT.............. (3)

Figure 4.1: Quantum implementation of a 3x3 TR gate.

5. CONCLUSION AND FUTURE SCOPE

Reversible computing has great significance in reducing the complexity of digital circuits. For this purpose, various reversible logic gates are available. By using these gates we can design any combinational or sequential circuit with low complexity, less delay, higher speed, etc. They are used in various fields such as low power VLSI design, optical computing, nanotechnology, quantum computers, and the design of low power arithmetic and data paths for digital signal processing (DSP).

REFERENCES
[1] Landauer, R., "Irreversibility and Heat Generation in the Computational Process", IBM Journal of Research and Development, Vol. 5, pp. 183-191, 1961.
[2] Bennett, C. H., "Logical Reversibility of Computation", IBM Journal of Research and Development, pp. 525-532, November 1973.
[3] Majid Mohammadi, Mohammad Eshghi, Abbas Bahrololoom, "Behavioral Model of V and V+ Gates to Implement the Reversible Circuits Using Quantum Gates", IEEE, 2008.
[4] D. Michael Miller and Zahra Sasanian, "Lowering the Quantum Gate Cost of Reversible Circuits", IEEE, 2010.
[5] Prashant R. Yelekar, Prof. Sujata S. Chiwande, "Introduction to Reversible Logic Gates & its Application", 2nd National Conference on Information and Communication Technology (NCICT), 2011.
[6] Md. Mazder Rahman, Anindita Banerjee, Gerhard W. Dueck, and Anirban Pathak, "Two-Qubit Quantum Gates to Reduce the Quantum Cost of Reversible Circuits", 41st IEEE International Symposium on Multiple-Valued Logic, 2011.
[7] B. Raghu Kanth, B. Murali Krishna, M. Sridhar, V. G. Santhi Swaroop, "A Distinguish between Reversible and Conventional Logic Gates", International Journal of Engineering Research and Applications, Vol. 2, Issue 2, April 2012.
[8] Mr. Devendra Goyal, Ms. Vidhi Sharma, "VHDL Implementation of Reversible Logic Gates", International Journal of Advanced Technology & Engineering Research (IJATER), Vol. 2, Issue 3, May 2012, ISSN: 2250-3536.
[9] Raghava Garipelly, P. Madhu Kiran, A. Santhosh Kumar, "A Review on Reversible Logic Gates and their Implementation", International Journal of Emerging Technology and Advanced Engineering, Vol. 3, Issue 3, March 2013.
[10] Marek Szyprowski, Pawel Kerntopf, "Reducing Quantum Cost in Reversible Toffoli Circuits", IEEE, 2013.


A Review of Adaptive On-demand Routing Protocols for Mobile Ad-hoc Networks

Navjot Kaur
BGIET, Sangrur
knavjot911@yahoo.com

Deepinder Singh
BGIET, Sangrur
wadhwadeepinder@gmail.com

ABSTRACT
Mobile ad-hoc networks are most commonly used these days in smart phones, but energy efficiency is a major concern when discovering the neighboring nodes to check local connectivity, as periodically exchanged hello messages are used for it. Overhead goes high because of neighbor discovery in MANET routing protocols such as AODV and DYMO. This paper proposes an adaptive hello messaging scheme to suppress pointless hello messages. We have made both protocols adaptive and analyzed their performance in terms of energy efficiency, and with the proposed scheme we have successfully reduced the energy consumption by 54%.

Keywords
Hello messaging, hello intervals, ad-hoc networks, energy utilization, network overhead, local connectivity.

Figure 1: Difference between a cellular network and a mobile ad hoc network.
1. INTRODUCTION
In the last few years, an expansive utilization of wireless communication has begun. Presently, there is an expanding interest in wireless communication from both an academic and an industrial viewpoint. A mobile ad-hoc network is one comprising a set of portable hosts which can communicate with each other and wander around at will. Nodes in a MANET can act as hosts and also as routers, since they can both create and forward packets [1]. An ad-hoc network is a network without infrastructure such as a wireless or a wired base station (as in a Wi-Fi setup in a school where all PCs communicate with the web using access points). In ad-hoc networks, every node finds its local neighbors, and through those neighbors it communicates with nodes that are out of its transmission range (multi-hop). Subsequently, in such a network nodes are required to act in a cooperative way to create the network "on-the-fly" [2]. Additionally, these networks are confronted with the customary issues fundamental to wireless communication, for example being less dependable than wired media, time-varying channels, interference, and so on. Despite the many design limitations, mobile ad hoc networks offer many advantages. This sort of network is strongly recommended for use in circumstances where a fixed infrastructure is missing, not trusted, of high cost or not dependable. In view of their self-creating, self-organizing and self-supervising capacities, ad hoc networks can be rapidly deployed with minimum user intervention [3].

Two commonly used classes of routing protocols are reactive and proactive. Reactive routing protocols find routes when they are required, e.g. DSR (Dynamic Source Routing) and AODV (Ad-hoc On-Demand Distance Vector). Proactive routing protocols are also known as table-driven protocols; in these, each node has one or more tables that contain up-to-date data of the routes to any node in the system, e.g. DYMO (Dynamic MANET On-Demand) and DSDV (Destination Sequenced Distance Vector). AODV and DYMO protocols are utilized where a new route is found through the RREQ (Route Request) and RREP (Route Reply) packet exchange. In this paper BFOA (Bacterial Foraging Optimization Algorithm) is to be applied over an adaptive scheme of the AODV and DYMO protocols [4], [5].
AODV builds routes using Hello messages, for recognizing whether the neighbor nodes are in range or not. When any node identifies a link failure on active routes, an error message RERR is created by the node and multicast to those nodes which are connected with the failed route. After receiving this message the nodes update their routing tables and remove the entries of affected routes [6]. AODV avoids the loop problem because it uses a sequence number for every route request message.
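As a rough, simplified sketch (illustrative only, not the actual AODV implementation; the graph layout and helper names here are invented), on-demand route discovery can be viewed as a breadth-first flood of RREQs with the RREP unwinding the reverse path:

```python
from collections import deque

def discover_route(links, source, dest):
    """Toy RREQ flood: BFS over the connectivity graph `links`
    (node -> set of neighbors). Returns the path the RREP would
    travel along, or None if dest is unreachable."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == dest:                    # dest answers with a RREP
            path = []
            while node is not None:         # unwind reverse pointers
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in links[node]:             # rebroadcast the RREQ
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return None

# Example: A reaches D only via multi-hop forwarding through B and C.
topology = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
assert discover_route(topology, "A", "D") == ["A", "B", "C", "D"]
```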


Figure 2: AODV route discovery process via RREQ and RREP messages.

The route discovery procedure of the DYMO protocol is the same as that of the AODV protocol, but one new feature is path accumulation. In the route maintenance process, a RERR message is produced by a node when a link failure occurs on the established path, and the RERR message is multicast to those nodes which are concerned with the link failure. Every node updates its routing table after getting this message and deletes the routing entry of the broken link. The source node then stops sending data through this path and reinitiates a new route discovery process if necessary [7].

Figure 3: Route discovery process of DYMO.

In this paper, an adaptive hello messaging scheme is proposed for neighbor discovery that effectively suppresses undesirable Hello messages. This scheme periodically changes Hello intervals, and does not increase the danger that a sender will dispatch a packet through a broken link that has not been detected by Hello messaging; we call this the probability of failure of detection of an inaccessible link (PFD) [4]. To estimate this danger, we use an average event interval, that is, the average time slot between two successive events, i.e., transmitting or receiving a data packet on a node. By examining the event intervals, we can determine how actively a node is involved in sending or forwarding. If a node is not taking part in any communication for a given time period, it does not need to maintain the status of the link; Hello packets transmitted during this period are superfluous. If a uniform Hello interval is used, the danger of attempting to transmit a packet through a broken link diminishes as the event interval increases. Rather than using constant Hello intervals, our proposed scheme uses a constant risk level [8]. As the event interval increases, the Hello interval also increases without increasing the danger. If the event interval is very large, the Hello messaging interval is also large, i.e., Hello messaging is practically suppressed [9]. When a node receives or sends a packet, the Hello messaging interval is reset to a default value so that forwarding data is kept in a neighbor table for active communication. Simulation results demonstrate that our proposed scheme suppresses pointless Hello messaging and decreases the energy utilization by up to 54% [10].

2. RELATED WORK
The hello messaging scheme has been explored broadly in the literature. In [11], an implementation of Ad hoc On-demand Distance Vector (AODV) is used to study the effectiveness of hello messages for determining local connectivity. To increase the effectiveness of the hello messages, the reception characteristics of hello messages ought to be equivalent to those of the data packets; for this the two must have comparable rate and size, and better throughput will result in this case.
In [8], the advantages and disadvantages of both techniques used to detect link failures in ad-hoc routing protocols, hello messaging and MAC feedback, are addressed. Simulation results demonstrated that MAC feedback works better than hello messages with low network load, but as the network load increases, mistaken decisions about link failure also increase, which results in lower throughput. Additionally, when AODV is run over MAC layer protocols, the energy consumed and the control traffic generated by the MAC protocols are greater than when AODV runs over MAC IEEE 802.11.
In [12], the effect of hello protocols on ad-hoc networks is discussed. The essential idea behind the proposed scheme is to send hello messages as rarely as possible; the proposed hello protocols incur less overhead and the network performance increases compared to a periodic hello messaging protocol, while maintaining the accuracy of an identical neighbor table.
In [10], the relationship between the transmission frequency of hello messages and the sensing clock lapse quality is explored along with the network's node mobility.
In [13], neighbor discovery is exploited and the overhead of neighbor discovery arrangements is reduced. However, all these schemes address only conventional hello messaging using a constant hello interval. At the same time, the network topology changes quickly in MANETs because of random node mobility, and for neighbor discovery hello messages are sent after regular intervals, i.e., a constant hello interval; consequently battery utilization increases and energy efficiency decreases.
In [14], shortest path routing is shown to be very effective as it saves time and is beneficial in terms of cost. One of the most critical characteristics of mobile wireless networks is topology dynamics, i.e., the network topology changes over time because of energy conservation or node mobility. Lately, the routing problem has been addressed using intelligent optimization techniques, e.g., Artificial Neural Networks (ANNs), Genetic


Algorithms (GAs), Particle Swarm Optimization (PSO), and so on.

3. PROPOSED SCHEME
In mobile ad-hoc networks using Android phones, the power consumed by no-sleep energy bugs is a significant concern, and local link connectivity data is very essential for route establishment and maintenance. For the local link connectivity information used in neighbor discovery, dynamically exchanged hello messages are used, whereas in such traditional hello messaging schemes no start/end condition is defined. Because of no-sleep energy bugs, hello messages can drain batteries while cell phones are not being used. An adaptive hello messaging scheme for neighbor discovery is proposed that suppresses superfluous hello messages. The proposed scheme dynamically modifies hello intervals and does not increase the danger that a sender will send a packet through a broken link. We show that our proposed scheme suppresses unnecessary hello messaging and decreases energy utilization immediately. We will implement the BFO (Bacterial Foraging Optimization) procedure using multiple objectives for adaptive hello messaging. BFO is an optimization technique with the capacity to tackle complex problems through cooperation. This technique is likewise inspired by social foraging behavior, like ant colony and particle swarm optimization. It attracts researchers because of its efficiency in tackling real-world optimization problems and gives better results than traditional methods of problem solving.
The Hello packets are conveyed on every working router interface. They are utilized to find and maintain neighbor connections. However, because of these periodic HELLO messages, the node's battery drains more rapidly. To avoid this issue, Hello packets ought to be suppressed. The correct solution, however, relies upon deciding the right Hello interval. The greatest interval of time between the transmissions of hello messages is HELLO_INTERVAL. In equation 1, the time during which the node ought to assume that the link is currently broken, if the node does not get any packet from that neighbor within the given time, i.e. the average Td, is given as

Td = (ALLOWED_HELLO_LOSS − 0.5) * HELLO_INTERVAL    (1)

Let us assume a sender and its neighboring node. An event on the neighboring node happens when the sender forwards a packet to the neighboring node. If the neighboring node moves out of the sender's transmission range, there are two conceivable arrangements: (a) the sender is asked to forward; or (b) the sender is not asked to forward. Here, only case (a) can cause a link error. In the event of (b), the link availability does not have to be updated. To prevent a link lapse in the former case, the sender must know the availability of its link to the next-hop node before sending a packet. We observe that most event intervals are less than a default HELLO_INTERVAL; however, we examine the event intervals x that are larger than 1 sec, because the Hello interval is not altered if the interval is less than the default HELLO_INTERVAL. The cumulative distribution function (CDF) in equation 2 demonstrates that all traffic is bounded by the exponential distribution where x > 1. The CDF of x is as follows:

F(x, β) = 1 − e^(−x/β)    (2)

The exponential distribution gives the fraction of intervals that are less than a given interval x. We can interpret x as a link refresh period (Td) and F(x, β) as the likelihood that an event occurs before the link is refreshed, i.e., F(x, β) = PFD. Since a routine Hello messaging scheme utilizes a constant value for Td, PFD varies depending on β. This causes even idle nodes to broadcast Hello packets periodically. We fix F(x, β) (= PFD) and make x a variable so that the Hello interval is adaptive to the average event interval of a node. We can rewrite the CDF of the exponential distribution using the PFD as follows:

1 − e^(−x/β) = PFD
x = −β ln(1 − PFD)    (3)

The probability of failed detection of an unavailable link is thereby held constant under the exponential traffic distribution.

4. SIMULATION RESULTS

Fig 4 compares the energy utilization of AODV and AODV-AH. Every node initially has 150 joules of energy. As the number of nodes increases, AODV consumes more energy. On the other hand, AODV-AH, owing to the proposed scheme, reduces unnecessary hello messages. Consequently energy depletion takes more time because of the reduced transmission and reception of hello messages.

Figure 4: AODV-AH increases the battery lifetime.

Fig 5 compares the energy utilization of DYMO and DYMO-AH. Every node initially has 150 joules of energy. As the number of nodes increases, DYMO consumes more energy and the mobile battery empties faster. On the other hand, DYMO-AH, owing to the proposed scheme, suppresses the unnecessary hello messages. Consequently the energy consumed is less because of the reduced transmission and reception of hello messages.
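Equations (1)-(3) can be exercised numerically. The sketch below uses assumed illustrative values (PFD = 0.05; ALLOWED_HELLO_LOSS = 2, the AODV default) rather than the paper's simulation settings:

```python
import math

ALLOWED_HELLO_LOSS = 2          # AODV default from RFC 3561
PFD = 0.05                      # fixed risk level (assumed value)

def link_timeout(hello_interval):
    # Equation (1): time after which a silent link is declared broken.
    return (ALLOWED_HELLO_LOSS - 0.5) * hello_interval

def adaptive_hello_interval(beta, pfd=PFD):
    # Equation (3): x = -beta * ln(1 - PFD), where beta is the node's
    # average event interval (seconds between sent/received packets).
    return -beta * math.log(1.0 - pfd)

# An idle node (large beta) earns a much longer Hello interval than a
# busy one, while both keep the same probability of failed detection.
busy = adaptive_hello_interval(2.0)
idle = adaptive_hello_interval(60.0)
assert math.isclose(idle / busy, 60.0 / 2.0)          # x scales linearly with beta
assert math.isclose(1 - math.exp(-busy / 2.0), PFD)   # equation (2) gives PFD back
assert link_timeout(1.0) == 1.5
```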


Figure 5: DYMO-AH increases the battery lifetime.

Fig 6 demonstrates the impact of the PFD on the throughput and Hello ratio when the maximum speed varies. A high PFD utilizes a longer Hello interval than a low PFD, yet the high PFD does not make a meaningful difference in throughput from a low PFD.

Fig. 6: Throughput for various max speeds and PFDs.

Fig 7 demonstrates the quantity of Hello packets for different node densities. The proposed scheme diminishes the quantity of Hello packets by as much as half. The impact of the proposed scheme increases as the number of nodes increases.

Figure 7: Hello Packet Overhead.

5. CONCLUSION
In this paper we proposed an adaptive Hello interval to diminish battery drain through effective suppression of superfluous hello messaging. On the premise of the event interval of a node, the Hello interval can be extended without reduced detectability of a broken link, which decreases network overhead and hidden energy usage. Simulation results demonstrate that our proposed scheme suppresses unwanted Hello messages and decreases the energy consumption by up to 54%.

6. REFERENCES

[1] D. Jaafar, M. Ismail, "Mobile ad hoc network overview", Melaka: IEEE, 4-6 December 2007.
[2] Gurpinder Singh, Asst. Prof. Jaswinder Singh, "MANET: Issues and Behavior Analysis of Routing Protocols", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 4, April 2012.
[3] Jagdeep Kaur, Rupinder Kaur Gurm, "Performance Analysis of AODV and DYMO Routing Protocols in MANETs Using Cuckoo Search Optimization", International Journal of Advance Research in Computer Science and Management Studies, Vol. 2, Issue 8, August 2014.
[4] Seon Yeong Han, Dongman Lee, "An Adaptive Hello Messaging Scheme for Neighbor Discovery in On-Demand MANET Routing Protocols", IEEE Communications Letters, Vol. 17, No. 5, May 2013.
[5] G. Kokila, Mr. M. Karnan, Mr. R. Sivakumar, "Immigrants and Memory Schemes for Dynamic Shortest Path Routing Problems in Mobile Ad hoc Networks using PSO, BFO", International Journal of Computer Science and Management Research, Vol. 2, Issue 5, May 2013.
[6] C. Perkins, E. Belding-Royer, S. Das, "Ad-hoc On-Demand Distance Vector (AODV) Routing", July 2003.
[7] D. Chakeres, C. E. Perkins, "Dynamic MANET On-Demand (AODV2)", February 2008.
[8] C. Giruka, M. Singhal, "Hello Protocols for Ad-hoc Networks: Overhead and Accuracy Tradeoffs", IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks.
[9] Harpreet Kaur, Er. Jasmeet Brar, "Adaptive Scheme for Suppressing Unnecessary Hello Messages used for Neighbor Discovery in MANET Routing Protocols",


IOSR Journal of Engineering (IOSRJEN), Vol. 4, Issue 6, June 2014.
[10] J. Abdullah, "Multiobjective GA-Based QoS Routing Protocol for Mobile Ad hoc Networks", International Journal of Grid and Distributed Computing, Vol. 3, Issue 4, December 2010.
[11] P. Divya, S. Hemalatha, "An Adaptive Hello Messaging and Multipath Route Maintenance in On-Demand MANET Routing Protocol", International Journal of Modern Engineering Research (IJMER), Vol. 3, Issue 6, Nov-Dec 2013.
[12] Ian D. Chakeres, Elizabeth M. Belding-Royer, "The Utility of Hello Messages for Determining Link Connectivity", in Proc. 2002 Wireless Personal Multimedia Communications, 2002.
[13] Tracy Camp, Jeff Boleng, V. Davies, "A survey of mobility models for ad hoc network research", 2002.
[14] Sonia Goyal, Kulvir Kaur, "Optimized Routing in Wireless Ad-hoc Networks", March 2011.


EXPLODING ELECTRONIC GADGETS

Avonpreet Kaur
Electronics and Communication Engineering, BGIET, Sangrur
avonpreet007@gmail.com

Anmol Sharma
Electronics and Communication Engineering, BGIET, Sangrur

ABSTRACT

This paper gives insights into explosions in electronic gadgets. Highlighting the occurrence of such explosions does not undermine their use. People should be made aware of the causes of such explosions, and they should also be informed about the precautions to be taken to prevent them. Laptops and mobiles basically explode due to thermal runaway of the batteries of these devices. The most common reasons for mobile and laptop battery explosions are mentioned. Then there are detailed tips on how to keep your laptop battery safe and secure.

Keywords
Thermal runaway, gadgets, etc.

1. INTRODUCTION
Nowadays, life without electronic devices is very tough. We generally come across explosions in laptops, mobile phones and other electronic gadgets. It is necessary to mind the reason behind the cause in order to minimize the damage.

2. LAPTOP EXPLOSIONS
flow from the negative side of cells to the positive side, thereby providing power to it. Similarly, when you are charging the battery, electrons are sent back to the electrode points to prepare the battery for consumption. Plenty of factors come into play here: overcharging, over-discharging, unwanted carbon build-up, etc. A lithium battery, though it looks simple from the outside, has built-in mechanisms to disconnect from the device and adjacent cells in case of overcharging etc. A faulty laptop battery is one that does not provide such mechanisms to protect the cells from overcharging etc. This is just to give you an idea of why companies have recalled products in the past saying the batteries are defective: they are missing one or more such precaution mechanisms. A laptop battery can also explode due to mishandling. Dropping it somewhere disturbs the alignment of the electrodes. This, in turn, will change the alignment of electrons in the cells, which may create overheating, a known factor that explodes batteries. The atmosphere where you are using the laptop may also cause an explosion. If it is humid and hot enough, it will make a good case for a short circuit that in turn will result in overheating and hence explosion. Hibernate, not standby: although placing a laptop in standby mode saves some power and you can instantly resume where you left off, it doesn't save anywhere near as much power as the hibernate function does. Hibernating a PC will
In order to understand about battery explosions, we should actually save your PC’s state as it is, and completely shut
know about the main cause of these explosions.In a laptop,the
part that is most prone to an explosion is its battery.The
smallest part of a battery is a cell.There are minimum three
cells in a laptop and this number can go upto 12.The more the
cells,the more backup for the laptop.The explosion of laptop
batteries will cause more damage as these contain more than
one cell.If one cell is overheated and explodes,it will trigger a
sequence of blasts,where each subsequent explosion would be
more dangerous.The main reasons responsible for laptop
explosions are:-

The most important reason for a laptop to explode is


some sort of fault wih the battery.To understand what
could be the fault you need to understand how down slowly.
the Lithium (Li) Ion battery works. There are different Precautions
types of batteries of which, the popular rechargeable
Always keep laptop batteries in an original packaging
ones are either Acid base or Lithium Ion batteries. You
until new laptop battery is ready to use to prevent
will find an Acid base battery in cars, vehicles, etc.
damages to laptop battery. For example: swelling and
TheLithium Ion batteries are more compact and have
leakage of chemical element from your laptop
one electrode and an anode with suspended electrons in
batteries.The wise precaution for prevention of laptop
form of liquid.When using the laptop on battery,electron
battery explosion is proper maintenance. People tend to


neglect batteries, as they look like simple parts and carry a fairly safe level of voltage. But since their design is complicated, simple negligence can trigger an explosion. Pay attention to battery voltages: to avoid damaging the laptop, never use a battery of a different voltage in it. Purchase manufacturer-recommended products and accessories where possible. If you are not sure whether a replacement battery is compatible with your laptop, it is always good to contact the battery manufacturer and confirm before buying. Do not put high pressure on a laptop battery, as this can cause an internal short circuit leading to overheating. Do not get the battery wet: even though a laptop battery may dry out and operate normally afterwards, the internal circuitry could corrode and pose a hazard. Do not keep a laptop plugged in all the time. Though this is fine most of the time, it is not advisable if you will be away from the laptop for long periods; to avoid the dangers of leaving the laptop on without use for a long time, set up a hibernation timeout so that it shuts down automatically. Using a power stabilizer between the laptop plug and the mains is also recommended: if a higher-voltage surge comes along, at most the fuse will blow and the laptop will disconnect from the mains. If your battery is considerably old, you should replace it; if for some reason you cannot, remove the battery and run the laptop on direct power supply. Older batteries tend to take in more power and can overheat in no time, and continuing with an old, exhausted battery is dangerous. If you are not sure about the health of your battery, Windows 7 and above will tell you if you need to replace it; you can also use a battery health checker or battery maintenance software. PC World recommends replacing lithium-ion batteries after two to three years of use: the aging components begin to break down and may start producing additional thermal runaway, and battery life may also drop considerably after two to three years of use.

How to Choose a Battery Replacement?
Li-ion cells also have much lower levels of self-discharge. These factors, combined with higher production costs, mean that Li-ion cells are comparatively more expensive; this battery type currently holds the major share of the market. The battery is very much the heart of a laptop or notebook, and this is precisely its advantage over the desktop PC. Most laptop batteries can keep working for 2 to 4 hours on one charge, which may not meet some users' needs, so a spare battery is very useful for those who frequently work on the move. Furthermore, the laptop battery is a consumable: if it isn't lasting as long as it used to, or you find yourself charging more often, it may be time to replace it. Here are some tips for finding a quality replacement battery for your laptop. Battery voltage: the voltage of a laptop battery is usually labeled on the back of it, so you can read it yourself. A battery cell is rated at 3.6 V or 3.7 V, and each cell is usually 2200 mAh. You can figure out how many cells a battery has from its voltage and its capacity. For example, if your laptop battery is rated 10.8 V, 4400 mAh, then it has 6 cells: groups of 3 cells connected in series, with 2 such groups in parallel. If your battery is rated 14.4 V, 6600 mAh, then it has 12 cells: groups of 4 cells connected in series, with 3 such groups in parallel.

MOBILE EXPLOSIONS
Mobile phones may be treated like playthings these days. However, these flashy gadgets can prove dangerous if not handled with care. Sometimes the blast happens to be so bad that the victim ends up with severe disfigurement of the body, or even dies.

How and why do mobile phone blasts happen?
Lithium batteries suffer from a problem called "thermal runaway," whereby excess heat promotes even more excess heat, and so on. Because of this, they are equipped with a system that protects against overcharging and prevents hazardous reactions of the chemicals contained within. It is quite often said that even the inexpensive batteries produced by unknown companies are just as reliable as the originals. Lithium-ion batteries are widely used by tech manufacturers because lithium is the least dense metallic element, which means it packs a good amount of power in a lightweight package. But these batteries are also known to explode or produce flames. They are prone to thermal runaway, when energetic positive feedback causes the temperature to rise and produces a "violent reaction" (or explosion). Static electricity or a faulty charger can destroy the battery's protection circuit, making it more vulnerable to the risk of explosion or overheating.
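The cell-count rule of thumb given in the battery-replacement tips above can be made concrete with a small sketch (illustrative only), using the nominal figures quoted there (3.6-3.7 V and about 2200 mAh per cell):

```python
# Sketch of the cell-count arithmetic from the battery-replacement tips
# above (illustrative only): pack voltage divided by the nominal per-cell
# voltage gives the cells in series; pack capacity divided by the nominal
# per-cell capacity gives the number of parallel groups.
CELL_VOLTAGE = 3.6    # nominal volts per Li-ion cell (3.6-3.7 V in the text)
CELL_CAPACITY = 2200  # nominal mAh per cell

def pack_layout(pack_volts, pack_mah):
    """Return (cells_in_series, parallel_groups, total_cells) for a pack."""
    series = round(pack_volts / CELL_VOLTAGE)
    parallel = round(pack_mah / CELL_CAPACITY)
    return series, parallel, series * parallel

# The two examples quoted above:
print(pack_layout(10.8, 4400))  # (3, 2, 6)  -> 6-cell pack
print(pack_layout(14.4, 6600))  # (4, 3, 12) -> 12-cell pack
```

Rounding to the nearest whole cell keeps the estimate robust against the 3.6 V vs 3.7 V labeling difference.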


Another two common reasons for a cell phone to explode are using it while it is being charged, and "call bombing". Charging puts pressure on the motherboard of the phone, and using it during charging increases this pressure manifold; this causes the cheap electronic components in some mobiles to explode. Call bombing refers to calls or missed calls received from international numbers: if one receives or calls these numbers back and the call exceeds a certain amount of time, the phone will blast. There is also malware, or a bug, found in some Android-based smartphones that can cause an explosion by exerting extra pressure on the motherboard during charging.

PRECAUTIONS
Buy a branded phone as far as possible. Ensure that the phone has a proper IMEI number, which is a code that identifies each phone; check that the number on the phone corresponds to that on the box and receipts. It is considered wise to check the accessories such as earphones, battery and charger: make sure the battery description, such as the voltage value, matches that of the charger, to avoid the overcharging that sometimes leads to explosion of the handset. Avoid using the phone while the battery is being charged; if you wish to receive a call during this time, disconnect the phone from the charger before connecting the call. Ensure the battery is not overcharged by removing the electric supply once it is fully charged. If your battery seems to have swollen, replace it immediately. Avoid using public or unsecured Wi-Fi connections: a hacker could access the mobile device through a port that is not secured. Make sure Bluetooth connectivity is not switched on in public places, as it can be used to send malicious files which corrupt the operating system. After removing a phone from water, dismantle it by removing the battery, SIM and memory cards, and switch it off (only the SIM card in the case of an iPhone). Dry each component thoroughly (but gently) with a towel until the phone is dry to the touch. Never use a hair dryer to try to dry the phone quicker: drying it with a heated hair dryer can cause important parts to melt, while forcing water further into the phone. Don't use third-party batteries: OEM batteries ensure not only consistency in components, but also the oversight of the phone's manufacturer during their production, which likely also means better battery life. Avoid hot places and avoid storing batteries near metal: avoiding excess heat and the risk of an electrical short is yet another logical step to ensure the best safety for your smartphone. Overcharging, over-use and knock-off chargers may all increase fire risk.

CONCLUSION
Laptop batteries are always a danger, as are other products such as pressure cookers. Your safety depends on how you use them; the only mantra to stay safe is to handle your gadgets with care. For more information on using batteries, please refer to the product manual pertaining to your product.


Detecting Defects in Gears


Avonpreet Kaur
Electronics and Communication Engineering
avonpreet007@gmail.com

Anmol Sharma
Electronics and Communication Engineering

Shweta Rani
Assistant Professor, BGIET

ABSTRACT
As technological advancement continues, products are increasingly made of plastic, especially in robotics, where components such as gears need to be ultra-lightweight and specialized. As per industry observations, such gears are made of high-density polyethylene (HDPE), which is prone to various kinds of defects during manufacturing. We therefore suggest that a fully robust framework exploiting image-processing techniques (segmentation, non-smooth corner detection, and so on) be investigated to build an efficient solution providing Total Quality Management (TQM) in manufacturing units, which would permit an ecosystem of continuous monitoring and improvement, thereby reducing cost.

Keywords
Digital image processing; count number of teeth; count number of surface defects; thresholding.

1. INTRODUCTION
Plastic moulding is the process of shaping plastic using a rigid frame or mould. The technique allows for the creation of objects of all shapes and sizes, with huge design flexibility for both simple and highly complex designs. A popular manufacturing option, plastics are responsible for many car parts, containers, signs and other high-volume items.

In plastic moulding, the moulding process selected depends upon two main factors:

The geometry of the components to be moulded.
The material from which the product is to be made.

There are two main groups of plastic moulding materials:

1) Thermoplastics: This group of plastics can be softened every time they are heated; they can be recycled and reshaped any number of times. This makes them environmentally attractive. However, some degradation occurs if they are overheated or heated too often, and recycled material should only be used for lightly stressed components. Thermoplastic mouldings are usually made by the injection moulding process, which is suitable for quantity production of both large and small components and is the most widely used moulding process. Examples: polyethylene, PVC, polystyrene, polypropylene, nylon (star wheels, cams, brush handles).

2) Thermosetting plastics: This group of plastics differs from thermoplastic materials in that polymerization is completed during the moulding process and the material can never be softened again. Polymerization during the moulding process is called curing.

Forms of supply:
Both thermoplastic and thermosetting plastic moulding materials are normally available as powder or as granules packed in bags or in drums. Thermoplastic powders and granules are homogeneous materials consisting of the polymer together with the coloring agent (pigment), lubricant and die-release agent. Thermosetting plastic materials come in powder or granule form mixed with additives to make them more economical to use, to improve their mechanical properties, and to improve their moulding properties.

2. RELATED WORK DONE
Tomislav Petković et al. [2] made an attempt to classify various defects encountered in the production of molded plastic products by taking advantage of image processing. The paper discusses the relevance of shape analysis for identifying surface defects in plastic products, and of further classifying these defects using nearest-neighborhood classifiers. Defect detection starts with image acquisition: the image of the product is obtained using background illumination, which provides excellent contrast between the non-transparent black object and the illuminated background. Such images are then segmented by thresholding. All existing boundaries are traced, and for each boundary a pattern vector is calculated. Finally, the pattern vectors are compared against the prototype vectors. The surface inspection subsystem uses several algorithms designed for line detection and for spot and blemish detection; the shape inspection subsystem detects and classifies possible shape defects. The results of the algorithms are the positions of the defects and some of their characteristics, which are compared with the product prototype. The algorithms described were successfully tested with a limited number of instances and require further testing with a greater number of instances.

K. N. Sivabalan et al. made an attempt to identify defects in fabric by using a visual inspection system. Various techniques of feature extraction and segmentation are used to identify the defects in gray-level digital images. In this work the minimum, maximum and median values are calculated for each row of the image to frame the feature vector. The defected area is identified by a sudden variation from the former pixel or from the

median value. Pixels which have an abrupt change of about 60% from the median value, or from a neighboring pixel of about the same value, are considered pixels in the defected area. The comparison process excludes pixels with zero value in order to expedite the defect detection algorithm. This algorithm was most suitable for defects which have low frequency.

Walter Michaeli studied various algorithms that focus on the inspection of plastic material exhibiting irregular texture. He uses the local binary pattern operator for texture feature extraction. Typical defects include holes, friction lines, burn marks, and foreign particles from printing rollers. For classification, supervised and semi-supervised approaches are used. The paper discusses plastic surface inspection systems that are able to identify defects on unstructured surfaces. The algorithms for defect detection are directly based on the brightness of a pixel. The results obtained from the proposed technique were found to be better than those of the other existing methods: experiments with images show a detection rate with 97% accuracy, and real-time tests show that defects can be reliably detected at speeds of 30 m/min.

Marie Vans proposed a system for automatic, on-line visual inspection and print defect detection for Variable-Data Printing (VDP). The system automatically stops the printing process and alerts the operator that problems have occurred. Its main advantage is that it allows a single skilled operator to maintain and monitor several workloads, reducing both the number of operators and the number of defects. Examples of defects that may cause an operator to halt the machine include scratches, spots, missing dot clusters, streaks, and banding. In this work the simplest image-reference approach was used: a reference exists that allows a direct comparison with defective products.

Sangwook Lee applied a digital image processing method to evaluate steel bridge coating condition. The proposed method was designed to recognize the existence of bridge coating rust defects. The image defect recognition method was developed by pairwise comparisons and by calculating eigenvalues, which were chosen as a key feature to distinguish defective images from non-defective ones. Experimental results demonstrated that an eigenvalue-based defect recognition method is effective at distinguishing defective images from non-defective images.

CONCLUSION
To assess the qualitative and quantitative performance of the proposed algorithm, several experiments were conducted on colored and grey-scale images. The effectiveness of the methodology has been justified using different images; the results are evaluated subjectively (visually) and quantitatively using quality measures. The gear images have been converted to black by using the complement code to increase visibility.

REFERENCES
[1] Tomislav Petković, Josip Krapac, Sven Lončarić and Mladen Šercer (2010), "Automated Visual Inspection of Plastic Products".
[2] Walter Michaeli and Klaus Berdel (2011), "Inline Inspection of Textured Plastics Surfaces", Optical Engineering, Vol. 50(2), p. 027205.
[3] Zhenxiang Zhang, Kun Wang, Xun Yang and Lianqing Chen (2010), "Research on Detection of Micro Plastic Gear Tooth Based on Dummy Circle Scan Method", Applied Mechanics and Materials, Vols. 37-38, pp. 1002-1005.
[4] Sangwook Lee (2009), "Automated Defect Recognition Method by Using Digital Image Processing".
[5] Xianghua Xie (2008), "A Review of Recent Advances in Surface Defect Detection using Texture Analysis Techniques", Electronic Letters on Computer Vision and Image Analysis, Vol. 7(3), pp. 1-22.
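The row-median defect rule from the related work (flagging pixels that deviate from the row median by more than about 60%, while skipping zero-valued pixels) can be sketched as follows. This is an illustrative reconstruction, not the authors' code:

```python
# Illustrative sketch of the row-median defect rule described above: for
# each image row, compute the median of the non-zero pixels and flag a
# pixel as defective when its relative deviation from that median exceeds
# 60%. Zero-valued pixels are excluded to speed up the scan.
import numpy as np

def defect_mask(img, threshold=0.6):
    """Return a boolean mask of defective pixels in a grayscale image."""
    img = np.asarray(img, dtype=float)
    mask = np.zeros(img.shape, dtype=bool)
    for r, row in enumerate(img):
        nonzero = row[row > 0]
        if nonzero.size == 0:
            continue                      # all-zero rows carry no information
        med = np.median(nonzero)
        dev = np.abs(row - med) / max(med, 1e-9)   # relative deviation
        mask[r] = (row > 0) & (dev > threshold)    # zero pixels excluded
    return mask

# Tiny example: a uniform row with one over-bright pixel
img = np.array([[100, 100, 100, 200, 100]])
print(defect_mask(img))   # only the 200-valued pixel is flagged
```

The per-row median makes the rule tolerant of slow brightness gradients across the image, which matches the observation that the method suits low-frequency defects.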


Study of various aspects related to Wireless Body Area Networks

Raju Sharma
ECE Deptt. of BBSBEC, Fatehgarh Sahib, Punjab, India
raju.sharma@bbsbec.ac.in

Dr. Hardeep Singh Ryait
ECE Deptt. of BBSBEC, Fatehgarh Sahib, Punjab, India
hardeepsryait@gmail.com

ABSTRACT
In recent years, interest in the application of Wireless Body Area Networks (WBANs) has grown considerably. It is one of the latest technologies in health care diagnosis and management. Use of a WBAN also allows the flexibility of setting up a remote monitoring system via either the internet or an intranet. This paper presents an overview of the various aspects of WBANs, including system architecture, sensors used, wireless technologies used and challenges faced in wireless body area networks. Three network topologies are also simulated using OPNET 18.0.

General Terms
Wireless Body Area Network.

Keywords
WBAN, Network Topology, Bluetooth, ZigBee.

1. INTRODUCTION
The aging population in many developed countries and the rising costs of health care have triggered the introduction of novel technology-driven enhancements to current health care practices. For example, recent advances in electronics have enabled the development of small and intelligent bio-medical sensors which can be worn on or implanted in the human body. These sensors need to send their data to an external medical server where it can be analyzed and stored. Using a wired connection for this purpose turns out to be too cumbersome and involves a high cost for deployment and maintenance. However, the use of a wireless interface enables an easier application and is more cost efficient [1]. The patient experiences greater physical mobility and is no longer compelled to stay in a hospital. This process can be considered the next step in enhancing personal health care and in coping with the costs of the health care system. Where eHealth is defined as health care practice supported by electronic processes and communication, health care is now going a step further by becoming mobile; this is referred to as mHealth [2]. In order to fully exploit the benefits of wireless technologies in telemedicine and mHealth, a new type of wireless network emerges: a wireless on-body network or Wireless Body Area Network (WBAN).

2. System Architecture of WBAN

Fig.1 System Architecture of WBAN

The generalized system architecture of a WBAN can be divided into three fundamental levels or tiers of communication, as described in [3,4,5]:

i) Tier-1 communication (intra-WBAN)
ii) Tier-2 communication (inter-WBAN)
iii) Tier-3 communication (beyond WBAN).

2.1 Tier-1 communication - Intra-WBAN
Tier-1 or intra-WBAN communication refers to the radio communication range of about 2 meters around the human body [3]. Intra-WBAN communication can be sub-categorized as follows:

(a) Communication among body sensors
(b) Communication between body sensors and a personal server (PS)

A PS is any machine that can collect data from sensors and process it to generate some meaningful result, e.g., a cell phone, a personal digital assistant (PDA) or a palmtop. A PS is quite different from a coordinator node or a gateway node, because a PS is a more complicated multipurpose machine that needs to be equipped with a peripheral radio or cable to communicate with the body sensors. A PS should also have software for manipulating or processing sensor data to generate outputs.

Gateway or coordinator nodes are just like sensor nodes in architecture, from which they collect data. Their task is to


forward the data to the AP, and the AP then routes the data over the internet to the remote server/database.

The design of the intra-WBAN tier is far more complicated and challenging than that of the other tiers. Well-known communication techniques for intra-WBAN communication are ZigBee, Bluetooth and UWB.

2.2 Tier-2 communication - Inter-WBAN
The paradigms of inter-WBAN communication are divided into two subcategories as follows:

Fig2. (a) Ad-hoc communication (b) infrastructure based communication

2.2.1 Infrastructure based architecture
The architecture shown in Fig.2 is used in most WBAN applications, as it facilitates dynamic deployment in a limited space such as a hospital, as well as providing centralized management and security control. The AP can act as a database server related to its application [6].

2.2.2 Ad-hoc based architecture
In this architecture, multiple APs transmit information inside medical centers, as shown in Fig.3. The APs form a mesh construction that enables flexible and fast deployment, allowing the network to easily expand, providing larger radio coverage due to multi-hop dissemination, and supporting patient mobility. The coverage range of this configuration is much larger than that of the infrastructure based architecture, and it therefore facilitates movement around larger areas. In fact, this interconnection extends the coverage area of WBANs from 2 meters to 100 meters, which is suitable for both short- and long-term setups [6].

2.3 Tier-3 communication - Beyond-WBAN
Tier-3 involves communication between a WBAN and an outside network, e.g., the internet or some E-care (electronic care) center. The PS and gateway can communicate directly with the outside network, and some base stations can also be involved in between. Fig.3 represents a pictorial representation of the tiers of communication for WBANs. A database is an important component of the beyond-WBAN tier; it maintains, e.g., the user's profile and medical history. According to the user's service priority and/or the doctor's availability, the doctor may access the user's information as needed. At the same time, automated notifications can be issued to his/her relatives based on this data via various means of telecommunication. The design of beyond-WBAN communication is application-specific and should adapt to the requirements of user-specific services. For example, if any abnormalities are found based on the up-to-date body signals transmitted to the database, an alarm can notify the patient or doctor through email or short message service (SMS). If necessary, doctors or other care-givers can communicate with patients directly by video conference via the Internet. In fact, it might be possible for the doctor to remotely diagnose a problem by relying on both video communications with the patient and the patient's physiological data stored in the database or retrieved by a WBAN worn by the patient [3,4,5,7].

3. Types of nodes in a WBAN
A node in a WBAN is defined as an independent device with communication capability. Nodes can be classified into three different groups based on their functionality, implementation and role in the network. The classification of nodes in WBANs based on their functionality is as follows [18]:

3.1.1 Sensor Node
Sensors in a WBAN measure certain parameters of one's body, either internally or externally. A sensor node consists of several components: sensor hardware, a power unit, a processor, memory and a transmitter or transceiver [8]. Different types of commercially available sensors used in WBANs include: EMG, EEG, ECG, temperature, humidity, blood pressure, blood glucose, thermistor, etc.

3.1.2 Personal Device (PD)
This device is in charge of collecting all the information received from sensors and actuators and handles interaction with other users. The PD then informs the user (i.e. the patient, a nurse, a GP etc.) via an external gateway, an actuator or a display/LEDs on the device. Its components are a power unit, a (large) processor, memory and a transceiver. This device may also be called the body gateway, sink, Body Control Unit (BCU) or PDA in some applications [9].

3.1.3 Actuator
A device that acts according to data received from the sensors or through interaction with the user. The components of an actuator are similar to the sensor's: actuator hardware (e.g. hardware for medicine administration, including a reservoir to hold the medicine), a power unit, a processor, memory and a receiver or transceiver.

3.1.4 Implant Node
This type of node is planted in the human body, either immediately underneath the skin or inside the body tissue.

3.1.5 Body Surface Node
This type of node is either placed on the surface of the human body or 2 centimeters away from it.

3.1.6 External Node
This type of node is not in contact with the human body, but rather a few centimeters to 5 meters away from it.

3.1.7 End Nodes
The end nodes in WBANs are limited to performing their embedded application. However, they are not capable of relaying messages from other nodes.

3.1.8 Relay
The intermediate nodes are called relays. They have a parent node, possess a child node and relay messages. In essence, if a node is at an extremity (e.g. a foot), any data sent is required to be relayed by other nodes before reaching the PDA. The relay nodes may also be capable of sensing data.

4. Wireless technology used in WBAN
Three different short-range wireless communication technologies are used for intra-WBAN communication:

4.1 IEEE 802.15.1-Bluetooth [10]
IEEE 802.15.1 is a communication standard used for short-range communication, typically within a 10 m range. The data rate of the Bluetooth standard is 3 Mbps. The bandwidth of the Bluetooth standard is high while its latency is low. The high bandwidth encourages the use of Bluetooth in UHC. However, this standard consumes considerable energy, so it is normally avoided in UHC. It is not well suited to networks with sensitive bandwidth and latency requirements.

4.2 IEEE 802.15.4-2011 [11]
This is another frequently used communication standard. ZigBee is used in communication devices which consume little energy, such as sensor nodes. The standard employs a collision avoidance technique. ZigBee is able to handle complex communication operations while consuming very little energy, about 60 mW. Its data rate is low, about 250 kbps. ZigBee supports encryption, giving reasonably well-protected communication.

4.3 IEEE 802.15.6-2012 [12]
UWB is a high-bandwidth communication standard used in high data rate applications. UWB is the best choice whenever an application requires high bandwidth. In emergency applications, UWB is considered the best choice for communication. UWB is implemented with the Global Positioning System (GPS). This feature provides a short routing path to the medical coordinator. The GPS facility provides routes which carry less traffic; this makes communication faster, so emergency data can easily be forwarded to the medical server in critical situations. The receiver of the UWB band is very complex, which makes it unsuitable for wearable applications.

5. WBAN Challenges
Table 1 presents the challenges faced by WBANs [19].

Table 1. Challenges faced by WBAN [19]

Challenges | WBAN
Scale | As large as human body parts (millimeters/centimeters)
Node Number | Fewer, more accurate sensor nodes required (limited by space)
Node Function | Single sensors, each performs multiple tasks
Node Accuracy | Limited node number, with each required to be robust and accurate
Node Size | Pervasive monitoring and need for miniaturization
Data Protection | High-level wireless data transfer security required to protect patient’s information
Access | Implantable sensor replacement difficult and requires biodegradability
Bio-compatibility | A must for implantable and some external sensors; likely to increase cost
Context Awareness | Very important because body physiology is very sensitive to context change
Wireless Technology | Low-power wireless required, with signal detection more challenging
Data Transfer | Loss of data more significant, and may require additional measures to ensure QoS and real-time data interrogation capabilities

6. Network Topology
The objective of this unit is to gather and collect the intended physiological signal from the surface of the human body, or from an implant. It consists of the following components: power source (battery), sensor hardware, processor and either a transmitter or transceiver [13].

Fig 3. Network topologies for WBANs: (i) star network, (ii) mesh network, (iii) cluster tree

6.1 PAN Coordinator (FFD)
This unit gathers all the transmitted information from the multiple biosensors in the WBAN and processes the data for monitoring or diagnosis purposes [14]. It can either be a mobile device with a limited power supply, or a computing unit with a power source. Data can either be used locally, or


further transmitted to a larger collection unit or a monitoring centre.

6.2 Full Function Device (FFD)
A router is an FFD. A router is used in tree and mesh topologies to expand the network coverage. The function of a router is to find the best route to the destination over which to transfer a message. A router can perform all the functions of a coordinator except establishing a network.

6.3 Reduced Function Device (RFD)
An end device can be an RFD. An RFD operates within a limited set of the IEEE 802.15.4 MAC layer, enabling it to consume less power. The end device can be connected to a router or coordinator. It also operates at a low duty cycle, meaning it consumes power only while transmitting information. Therefore, the ZigBee architecture is designed so that an end device’s transmission time is short. The end device performs the following functions:

1. Joins or leaves a network

2. Transfers application packets

7. Simulation scenarios
In this work, we compare the three ZigBee network topologies (star, mesh, tree) with each other. In each scenario, one ZigBee coordinator, two ZigBee routers and six ZigBee end devices are used. The comparison covers the global statistics: MAC data traffic sent, data traffic received, delay, load and throughput.

7.1 Scenario 1
Fig. 4 shows the star topology. It consists of one coordinator, six end devices and two routers. Here the end devices and routers communicate only with the coordinator. Any traffic exchange between end devices or routers must go through the coordinator.

Fig. 4 Star Topology

7.2 Scenario 2
Fig. 5 shows the tree topology. In this scenario one coordinator, six end devices and two routers are used. The function of the router is to extend the network coverage. Only routers and coordinators can have children.

Fig. 5 Tree Topology

7.3 Scenario 3
Fig. 6 shows the mesh topology. In this scenario one coordinator, six end devices and two routers are used. A mesh topology is a multihop network; packets pass through multiple hops to reach their destination.

Fig. 6 Mesh Topology

8. Results
After configuring the scenarios in the OPNET 18.0 modeler, global statistics can be collected to study the performance of the system in terms of several parameters. Global statistics: MAC data traffic sent, data traffic received, delay, load and throughput.

8.1 Throughput
Throughput represents the total number of bits (in bits/sec) forwarded from the 802.15.4 MAC to higher layers in all WPAN nodes of the network.


Fig. 7 Throughput comparison between the three topologies

8.2 Data Traffic Sent
This is the traffic transmitted by all the 802.15.4 MACs in the network in bits/sec. While computing the size of the transmitted packets for this statistic, the physical layer and MAC headers of the packets are also included. This statistic includes all the traffic that is sent by the MAC via CSMA/CA. It does not include any of the management or control traffic, nor does it include ACKs.

Fig. 8 Data traffic sent comparison between the three topologies

8.3 Data Traffic Received
Data traffic received represents the total traffic successfully received by the MAC from the physical layer in bits/sec. This includes retransmissions.

Fig. 9 Data traffic received comparison between the three topologies

8.4 Delay (sec)
Delay represents the end-to-end delay of all the packets received by the 802.15.4 MACs of all WPAN nodes in the network and forwarded to the higher layer.

Fig. 10 Delay comparison between the three topologies

8.5 Load
Load represents the total load submitted to the 802.15.4 MAC by all higher layers in all WPAN nodes of the network.

Fig. 11 Load comparison between the three topologies

9. Conclusion
In this paper we simulated three scenarios for three topologies (star, tree, mesh) using the OPNET 18.0 modeler. The results (Figs. 7–11) show that throughput, data traffic sent, data traffic received and load are greater in the mesh topology than in the other two topologies, while delay is greatest in the star topology and is almost equal in the tree and mesh topologies.

REFERENCES
[1] D. Cypher, N. Chevrollier, N. Montavont, and N. Golmie, “Prevailing over wires in healthcare environments: benefits and challenges,” IEEE Communications Magazine, vol. 44, no. 4, pp. 56-63, Apr. 2006.

[2] R. S. H. Istepanian, E. Jovanov, and Y. T. Zhang, “Guest editorial introduction to the special section on m-health: Beyond seamless mobility and global wireless health-


care connectivity,” IEEE Transactions on Information Technology in Biomedicine, vol. 8, no. 4, pp. 405-414, Dec. 2004.

[3] M. Chen, S. Gonzalez, A. Vasilakos, H. Cao, and V. Leung, “Body area networks: A survey,” Mobile Networks and Applications, Springer Science+Business Media, 2010, doi:10.1007/s11036-010-0260-8, pp. 171-193.

[4] S. Ullah, H. Higgins, B. Braem, B. Latre, C. Blondia, I. Moerman, S. Saleem, Z. Rahman, and K. Kwak, “A comprehensive survey of wireless body area networks,” Journal of Medical Systems, vol. 36, no. 3, June 2012, pp. 1065-1094.

[5] I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, “Wireless sensor networks: a survey,” Computer Networks, vol. 38, no. 4, 15 March 2002, pp. 393-422.

[6] M. Chen, S. Gonzalez, A. Vasilakos, H. Cao, and V. Leung, “Body area networks: A survey,” Mobile Networks and Applications, vol. 16, pp. 171-193, 2011.

[7] K. Bilstrup, A preliminary study of WBANs, Technical report IDE0854, Halmstad University, 2008.

[8] R. Hoyt, J. Reifman, T. Coster, and M. Buller, “Combat medical informatics: present and future,” in Proceedings of the AMIA 2002 Annual Symposium, San Antonio, TX, November 2002, pp. 335-339.

[9] B. Latré, B. Braem, I. Moerman, C. Blondia, and P. Demeester, “A survey on wireless body area networks,” Wireless Networks, vol. 17, pp. 1-18, Jan. 2011.

[10] IEEE Std. 802.15.1-2005, Part 15.1: Wireless Personal Area Networks (Bluetooth), 2005.

[11] IEEE Std. 802.15.4-2011, Part 15.4: Low-Rate Wireless Personal Area Networks (LR-WPANs), 2011.

[12] IEEE Std. 802.15.6-2012, Part 15.6: Wireless Body Area Networks, 2012.

[13] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, “A survey on sensor networks,” IEEE Communications Magazine, vol. 40, no. 8, pp. 102-114, Aug. 2002.

[14] B. Latré, B. Braem, I. Moerman, C. Blondia, and P. Demeester, “A survey on wireless body area networks,” Wireless Networks, vol. 17, pp. 1-18, January 2011.

[15] IEEE Standards, “Part 15.4: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (WPANs),” IEEE 802.15.4 Standards, Sept. 2006.

[16] C. Li, H.-B. Li, and R. Kohno, “Performance evaluation of IEEE 802.15.4 for wireless body area network (WBAN),” in IEEE International Conference on Communications Workshops (ICC Workshops 2009), June 2009, pp. 1-5.

[17] J. Zeng, H. Minn, and L. S. Tamil, “Time hopping direct-sequence CDMA for asynchronous transmitter-only sensors,” in Proc. IEEE Military Communications Conference (MILCOM’08), Nov. 2008.

[18] S. Movassaghi and J. Lipman, “Wireless Body Area Networks: A Survey,” IEEE Communications Surveys & Tutorials, accepted for publication, 2013.

[19] G. K. Ragesh and K. Baskaran, “Study of various aspects related to Wireless Body Area Networks,” IJCSI International Journal of Computer Science Issues, vol. 9, issue 1, no. 2, January 2012, ISSN (online): 1694-0814, pp. 180-186.
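As a back-of-envelope illustration of the data-rate figures quoted in Section 4 (a sketch of ours, not from the paper; the 1 kB payload size and the overhead-free assumption are ours): at ZigBee’s 250 kbps a 1 kB payload needs about 33 ms of airtime, versus under 3 ms at Bluetooth’s 3 Mbps.

```python
def airtime_ms(payload_bytes, rate_bps):
    """Ideal (overhead-free) transmission time in milliseconds."""
    return payload_bytes * 8 / rate_bps * 1000

print(round(airtime_ms(1024, 250_000), 1))    # ZigBee at 250 kbps -> 32.8
print(round(airtime_ms(1024, 3_000_000), 1))  # Bluetooth at 3 Mbps -> 2.7
```

This ignores MAC/PHY headers, CSMA/CA back-off and ACKs, so real airtime is higher; the point is only the order-of-magnitude gap between the two standards.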


How to Reduce Mobile Phone Tower Radiation

Anmol Sharma, Electronics & Comm. Engg., anmoldear3391@gmail.com
Avonpreet Kaur, Electronics & Comm. Engg., avonpreet007@gmail.com
Shweta Rani, Assoc. Prof., BGIET, Sangrur

ABSTRACT
Wireless telecommunication systems use a large number of mobile phone towers in order to provide telecom facilities to their subscribers spread across different geographical locations. These towers employ multiple antennae that radiate electromagnetic waves. There is a strong perception relating to the existence of a high level of electromagnetic radiation in the vicinity of these towers, which may cause adverse biological effects. According to studies by the World Health Organisation (WHO), INTERPHONE (a 13-country coordinated case-control study), the Independent Expert Group on Mobile Phones (IEGMP) and the Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR), electromagnetic radiation can contribute to health problems, including an increased risk of brain tumours, eye cancer, salivary gland tumours, testicular cancer and leukaemia.

Keywords
EMF, Radiation, Transmission towers, Radiation mitigation.

1. INTRODUCTION
The telecommunications industry is experiencing robust growth on a global scale. Mobile phones, sometimes known as cellular phones or handsets, form an integral part of modern telecommunications and are fast becoming a social lifestyle. Mobile phones are very popular because they allow people to maintain constant and continuous communication without hampering their freedom of movement. The mobile phone and its base station form a two-way radio: they produce radiofrequency (RF) radiation as a means of communicating, and so expose the people near them to RF radiation. The wide use of mobile phones has inevitably raised the question of whether there are any implications for human health. There have been some reports relating to possible adverse health effects, and these have understandably led to some concern among members of the public. As mobile phone base stations and various wireless technologies (such as WiMAX, WiBro, iBurst, EV-DO Advanced, etc.) are rapidly expanding and evolving, the requirement for such towers will also grow. It is therefore high time that a strict regulatory regime is established, as early as possible, to avoid possible fallout. The mobile phone and its base station communicate using two-way radio communication. This radio communication produces electromagnetic fields. The effect of electromagnetic radiation on human health is the subject of recent interest and study. An ICNIRP (International Commission on Non-Ionizing Radiation Protection) study has concluded that the exposure levels due to cell phone base stations are generally around one-ten-thousandth of the guideline levels. Moreover, the WHO has classified mobile phone radiation on the IARC (International Agency for Research on Cancer) scale into Group 2B – possibly carcinogenic to humans.

2. ELECTROMAGNETIC RADIATION (EMR)
Electromagnetic radiation consists of electric and magnetic energy waves moving together through space at the speed of light.

Figure 1

When referring to biological radiation exposures, electromagnetic radiation is divided into two types: ionising and non-ionising. Because the human body is composed of about 60 percent water, ionising and non-ionising refer to whether the RF energy is high enough to break the chemical bonds of water (ionising) or not (non-ionising). Antennas on a cell tower transmit in the frequency ranges of:

• 869 – 890 MHz (CDMA)
• 935 – 960 MHz (GSM900)
• 1805 – 1880 MHz (GSM1800)
• 2110 – 2170 MHz (3G)

3. RADIATION PATTERN OF A MOBILE PHONE TOWER ANTENNA
Propagation of the “main beam” from an antenna mounted on a tower or roof top:

• People living within a 50 to 300 meter radius are in the high radiation zone (dark blue) and more prone to the ill effects of electromagnetic radiation.
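The downlink bands quoted in Section 2 can be collected into a small lookup table (a sketch of ours; `band_of` is a hypothetical helper, and the band edges are exactly those listed above):

```python
# Downlink bands as quoted in Section 2, in MHz
BANDS = {
    "CDMA": (869, 890),
    "GSM900": (935, 960),
    "GSM1800": (1805, 1880),
    "3G": (2110, 2170),
}

def band_of(freq_mhz):
    """Return the system(s) whose downlink band contains freq_mhz."""
    return [name for name, (lo, hi) in BANDS.items() if lo <= freq_mhz <= hi]

print(band_of(947))   # ['GSM900']
print(band_of(2140))  # ['3G']
```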


Figure 2

4. EFFECTS OF MOBILE PHONE TOWER

Figure 3

4.1 Increased cancer cases with proximity to towers

Figure 4

5. RADIATION MITIGATION TECHNIQUES
If the electromagnetic radiation level in any area accessible to people is higher than the prescribed limits, action is strongly required to reduce the radiation level. The different radiation mitigation techniques are:

5.1 Transmitter power reduction
The transmitter power is directly related to the power density and to the square of the electric field strength. So, a reduction in transmitter power results in a reduction of the radiation level. However, this method also reduces the coverage area.

5.2 Increase in antenna height
The power density at any observation point is a function of the antenna height. If the antenna height is increased, then the power density/field strength at the observation point is reduced due to the increase in the distance to the point of observation. The reduction in radiation level is even greater because, at the same time, the elevation angle to the observation point shifts to another part of the vertical radiation pattern of the antenna.

5.3 Increase in antenna gain
The gain of an antenna is a key performance figure that combines the antenna’s directivity and electrical efficiency. It specifies how well the antenna converts input power into electromagnetic waves headed in a specified direction while limiting the radiation in other directions. It is therefore possible to limit the radiation level in the area accessible to people by controlling the directivity of the antenna. Hence, it is recommended to use a low-power transmitter with a high-gain antenna to attain a low level of radiation.

5.4 Multiple methods applied simultaneously
All the methods described above are independent and can be applied either individually or in combination to achieve the required decrease in radiation level.
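As a numerical illustration of how the first two techniques interact (our own sketch, not from the paper; the 20 W transmitter power, linear gain of 50 and the distances are assumed values): in the far field the power density of an antenna is approximately S = P·G/(4πR²), so halving the transmitter power halves S (Sec. 5.1), while doubling the distance to the observation point, e.g. by raising the antenna, quarters it (Sec. 5.2).

```python
import math

def power_density(p_tx_w, gain_linear, distance_m):
    """Far-field power density S = P * G / (4 * pi * R^2), in W/m^2."""
    return p_tx_w * gain_linear / (4 * math.pi * distance_m ** 2)

s0 = power_density(20, 50, 50)             # baseline: 20 W, gain 50, 50 m away
s_half_power = power_density(10, 50, 50)   # Sec. 5.1: halve transmitter power
s_double_dist = power_density(20, 50, 100) # Sec. 5.2: double the distance

assert math.isclose(s_half_power, s0 / 2)
assert math.isclose(s_double_dist, s0 / 4)
```

Since the two effects are independent factors of S, combining them (Sec. 5.4) multiplies the reductions.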


6. CONCLUSION
Operators providing wireless communication should seriously consider this study and the ITU-T recommendations in order to keep the operation of base-station transceivers in compliance with regulations concerning environmental protection against non-ionising radiation. Further, it is important to note that the present threshold limits prescribed by ICNIRP are considered to be rather too generous. Hence, there is a need to review and remedy the situation, and not to wait until it becomes the subject matter of a public-interest petition in the light of possible adverse environmental effects. In order to minimize possible health hazards, some recommendations are: minimization of mobile phone usage; limitation of use by at-risk populations (such as children); adoption of mobile phones and microcells with as-low-as-reasonably-practicable levels of radiation; wider use of hands-free and earphone technologies such as Bluetooth headsets, headsets with ferrite beads and headsets with air tubes; adoption of maximal standards of exposure and greater distances of base-station antennae from human habitations; and so forth.
REFERENCES
[1] Kumar G., 2010. Report on cell tower radiation, submitted to the Secretary, DoT, Delhi. http://www.ee.iitb.ac.in/~mwave/GK-celltower-

[2] Cell Tower Radiation Hazards and Solutions, by Prof. Girish Kumar, IIT Bombay.

[3] S. Sivani and D. Sudarsanam, Impacts of radio-frequency electromagnetic field (RF-EMF) from cell phone towers.

[4] Haumann Thomas, et al., “HF-Radiation levels of GSM cellular phone towers in residential areas,” http://nocelltower.com/German%20RF%20Research%20Article.pdf

[5] Counter-View of the Interphone Study, 2010. http://www.radiationresearch.org/pdfs

[6] Karen J. Rogers, Health Effects from Cell Phone Tower Radiation. http://www.scribd.com/doc/3773284/Health-Effects-from-Cell-Phone-Tower-Radiation

[7] Radiation Mitigation Research Papers by Bill Brodhead. http://www.wpb-radon.com/bills_radon_research_papers

[8] Levitt B. and Lai H., Biological effects from exposure to electromagnetic radiation emitted by cell tower base stations and other antenna arrays, Environ. Rev. 18:369–395.

[9] World Health Organization. (2010). About WHO. Retrieved March 15, 2010 from http://www.who.int/about/en/

[10] World Health Organization. (2010). Electromagnetic fields. Retrieved March 15, 2010 from http://www.who.int/peh-emf/en/


Optimal ECG Sampling Rate for Non-Linear Heart Rate Variability

Manjit Singh, Research Scholar, PTU Jalandhar; Dept. of ECE, Guru Nanak Dev University Regional Campus, Jalandhar, India. manu_kml@yahoo.co.in
Butta Singh, Dept. of ECE, Guru Nanak Dev University, Regional Campus Jalandhar, India. bsl.khanna@gmail.com
Vijay Kumar Banga, Dept. of ECE, Amritsar College of Engineering and Technology, Amritsar, India. vkbanga@gmail.com

ABSTRACT
The principal difficulty with the analysis of heart rate variability (HRV) is that heart rate dynamics are non-linear and non-stationary. Detrended fluctuation analysis (DFA) and correlation dimension (CD) are non-linear HRV measures used to quantify fractal-like autocorrelation properties and to characterize the complex behaviour of nonlinear time series. The optimal ECG sampling rate is an important issue for accurate quantification of HRV. A high ECG sampling rate results in very high processing time, while a low sampling rate degrades signal quality and results in clinically misinterpreted HRV. In this work the impact of ECG sampling frequency on non-linear HRV has been quantified in terms of short-range and long-range DFA and CD on short-term (N=200), medium-term (N=500) and long-term (N=1000) data. Non-linear HRV parameters are found to be sensitive to ECG sampling frequency, and the effect of sampling frequency is a function of data length.

Keywords
ECG, HRV, Sampling frequency, DFA, CD, Signal processing.

1. INTRODUCTION
The electrocardiogram (ECG) is one of the most important physiological signals, and it reflects the ionic activity in the cardiac muscles [1]. Heart rate variability (HRV) is the time variation between RR peaks of the ECG in the cardiac cycle. HRV represents the sympathetic and parasympathetic activities of the autonomic nervous system (ANS) and is a non-invasive tool for the analysis of cardiovascular control of the heart [2]. Time domain, frequency domain and non-linear methods are the three ways in which HRV can be evaluated and analysed [3-4]. The time domain is the simplest measure of HRV but is incapable of differentiating the sympathetic and parasympathetic contributions to HRV, hence the increasing use of spectral methods for the analysis of the tachogram [5-7]. The cardiovascular system consists of multiple sub-systems, has both highly nonlinear deterministic and stochastic properties, and is subject to hierarchical regulation [8-10]. As a result, time series generated by the cardiovascular system are often highly nonlinear, non-stationary, random and complex, due to which standard linear measures of HRV may not be able to detect subtle but important changes in the HR time series [11]. Therefore, nonlinear methods have been developed to quantify the dynamics of HR fluctuations. Approximate entropy (ApEn) [12-13], sample entropy (SampEn) [14], multiscale entropy (MSE) [15], detrended fluctuation analysis (DFA) [16] and correlation dimension (CD) [8] are well-developed nonlinear measures to quantify the chaotic properties, complexity or irregularity of RR interval time series.

A fast ECG sampling rate increases the processing time and the requirement for large memory for data storage and access, whereas signal quality degrades at low ECG sampling rates, which may cause clinically misinterpreted HRV measures; thus the selection of an optimal sampling rate is an important issue for researchers [17-18]. Singh B. et al. investigated the errors in the estimation of ApEn- and SampEn-based non-linear HRV measures at various ECG sampling frequencies [23-24]. The errors in entropy-based HRV were clinically significant when the ECG was sampled at a low sampling frequency of 125 or 250 Hz. Other researchers have also recommended values of the sampling frequency under different conditions, as listed in Table 1.

Table 1. Recommendations for the sampling frequency of ECG

Recommended sampling frequency | Recommended by
250-500 Hz or higher | Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology [4]
1000 Hz | Hejjel et al. [17]
500 Hz | Ziemssen et al. [18]
128 Hz | Abboud and Barnea [19]

Although the Task Force recommended a range of ECG sampling frequencies for accurate time and frequency domain HRV measures, a systematic study is needed to quantify the influence of sampling rate on nonlinearity-based HRV. Analysis of the effect of ECG sampling rate on nonlinear measures of HRV is required for the extensive use of HRV by medical practitioners. This study aims to assess the impact of ECG sampling rate on DFA- and CD-based HRV and to find the optimal ECG sampling rate for non-linear measures of HRV based on computer simulation.

2. DATA ACQUISITION
For this study the ECG data were acquired from 10 healthy subjects, with no history of any cardiac disorder, in the supine posture with normal breathing, in a lab environment at comfortable light and temperature conditions. All subjects


were refrained from alcohol, coffee and smoking for 12 hours prior to data acquisition. Ectopic-free normal RR interval time series were derived for each subject from lead-II ECG recordings on a Biopac® MP150 system with short-term (N = 200), medium-term (N = 500) and long-term (N = 1000) data lengths at sampling frequencies of 125, 250, 500, 1000, 1500 and 2000 Hz, resulting in a total of 180 time series.

3. NON-LINEAR HRV
Recent developments in the theory of nonlinear dynamics have paved the way for analyzing signals generated from nonlinear living systems. It is now generally recognized that these nonlinear techniques are able to describe the processes generated by biological systems in a more effective way.

3.1 Detrended Fluctuation Analysis (DFA)
In 1994 Peng et al. developed DFA, a simple and efficient scaling method commonly used for detecting long-range correlations. It is an efficient technique used to determine the correlations within the signal and to evaluate the fractal characteristics of an RR interval time series. Fractals are composed of subunits (and sub-sub-units, etc.) that resemble or show correlation with the structure of the overall time series. The technique is an enhancement of the root-mean-square approach of random walks applied to non-stationary time series [16, 20]. The technique consists of i) calculating the root-mean-square fluctuation of an integrated and detrended time series at different window sizes and ii) plotting the fluctuation against the size of the window on a log–log scale.

1. Obtain the integrated series y(k) = \sum_{i=1}^{k} [RR(i) - RR_{avg}]. Here k is the total length of the integrated series, y(k) is the kth value of the integrated series, RR(i) is the ith inter-beat interval and RR_{avg} is the average of the RR intervals over the entire series.

2. Then, the integrated time series y(k) is divided into windows of equal length n.

3. A least-squares line is fitted to the RR interval data in each window of length n, representing the local trend in that window. The y coordinates of the straight-line segments are denoted by y_n(k).

4. The trend y_n(k) is subtracted from the integrated signal y(k) in order to obtain the detrended profile y(k) - y_n(k).

5. Then the root-mean-square fluctuation of the integrated and detrended time series is computed by

F(n) = \sqrt{ \frac{1}{N} \sum_{k=1}^{N} [y(k) - y_n(k)]^2 }

6. The procedure is repeated for different box sizes or time scales. Finally, the relationship on a double-log graph between the fluctuations F(n) and the time scales n can be approximately evaluated by a linear model that provides the scaling exponent α:

F(n) \propto n^{\alpha}

The scaling exponent α is viewed as an indicator of the roughness of the original time series: the larger the value of the scaling exponent α, the smoother the time series. The fractal scaling α for normal subjects (healthy young) is close to 1, and this value falls in different ranges for various types of cardiac abnormality.

DFA quantifies the correlations within the time series for different time scales [16]. For HRV analysis, the correlations may be short-term or long-term fluctuations, which are represented by the parameters α1 and α2 respectively. These measures are correlation measures as a function of segment length, approximated by slopes of a log-log plot. In this work, α1 was calculated over the range of 4–16 beats and α2 over the range of 16–64 beats.

3.2 Correlation Dimension (CD)
CD, a method based on phase space techniques to distinguish normal and abnormal cases, has been used extensively for cardiovascular signals.

A phase space plot can be obtained by representing the heart rate RR[n] on the X-axis and the delayed heart rate RR[n+m] on the Y-axis. The minimal mutual information technique is used to calculate an appropriate delay. The method of estimating the embedding dimension from the phase space patterns was proposed by Grassberger and Procaccia [22]. For a steady and unchanging heart rate, the phase plot will be a point; otherwise, the trajectory spreads out to give some pattern. The emerged pattern indicates periodic, chaotic or random behaviour of the heart rate. The CD factor is a quantitative measure of the pattern of the trajectory, and ranges of CD for various cardiac disorders have been identified.

The CD of the attractor is calculated for HRV data using the following definition [22]:

CD = \lim_{r \to 0} \frac{\log C(r)}{\log r}

where the correlation integral C(r) is given by:

C(r) = \frac{1}{N^2} \sum_{x=1}^{N} \sum_{y=1, y \neq x}^{N} \Theta( r - |RR_x - RR_y| )

where N is the number of data points, RR_x and RR_y are the trajectory points in the phase space, r is the radial distance around each reference point and Θ is the Heaviside function.
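The DFA steps of Section 3.1 and the Grassberger–Procaccia correlation integral of Section 3.2 can be sketched in Python as below. This is a minimal illustration of ours, not the authors’ code; the non-overlapping windows, the maximum-norm distance and the embedding parameters m and tau are simplifying assumptions.

```python
import numpy as np

def dfa_alpha(rr, scales):
    """Scaling exponent alpha: slope of log F(n) vs log n (steps 1-6, Sec. 3.1)."""
    rr = np.asarray(rr, dtype=float)
    y = np.cumsum(rr - rr.mean())                  # step 1: integrated series y(k)
    fluctuations = []
    for n in scales:
        n_win = len(y) // n                        # step 2: windows of length n
        f2 = 0.0
        for w in range(n_win):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # step 3: local trend y_n(k)
            f2 += np.sum((seg - trend) ** 2)       # step 4: detrended profile, squared
        fluctuations.append(np.sqrt(f2 / (n_win * n)))    # step 5: rms fluctuation F(n)
    # step 6: F(n) ~ n**alpha, so alpha is the log-log slope
    return np.polyfit(np.log(list(scales)), np.log(fluctuations), 1)[0]

def correlation_integral(rr, r, m=2, tau=1):
    """C(r): fraction of distinct pairs of embedded points within distance r (Sec. 3.2)."""
    rr = np.asarray(rr, dtype=float)
    n_vec = len(rr) - (m - 1) * tau
    emb = np.column_stack([rr[j * tau:j * tau + n_vec] for j in range(m)])
    dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    return (np.sum(dist < r) - n_vec) / float(n_vec * n_vec)  # exclude x == y pairs

# As in the paper: alpha1 over 4-16 beats, alpha2 over 16-64 beats, e.g.
#   a1 = dfa_alpha(rr, range(4, 17)); a2 = dfa_alpha(rr, range(16, 65))
# CD is then estimated as the slope of log C(r) vs log r for small r.
```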

Table 2. Effect of ECG sampling frequency on short-range DFA, long-range DFA and CD based non-linear HRV for ten healthy subjects

171
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Sampling Frequency (Hz) | Short-Range Scaling Exponent (N=200 / 500 / 1000) | Long-Range Scaling Exponent (N=200 / 500 / 1000) | Correlation Dimension (N=200 / 500 / 1000)
125  | 1.0734 / 1.1085 / 1.1812 | 0.7464 / 0.7674 / 0.8703 | 1.1880 / 1.3392 / 1.6205
250  | 1.0818 / 1.1219 / 1.1219 | 0.7556 / 0.7742 / 0.8689 | 1.2029 / 1.3357 / 1.6090
500  | 1.0807 / 1.1295 / 1.1295 | 0.7525 / 0.7763 / 0.8695 | 1.0070 / 1.3363 / 1.5970
1000 | 1.0511 / 1.0847 / 1.0847 | 0.6881 / 0.7286 / 0.8304 | 1.0044 / 1.0994 / 1.2520
1500 | 1.0510 / 1.0772 / 1.0772 | 0.6855 / 0.7311 / 0.8310 | 1.0044 / 1.0998 / 1.2528
2000 | 1.0433 / 1.0837 / 1.0837 | 0.6804 / 0.7128 / 0.8314 | 0.9753 / 1.0975 / 1.2480

Table 3. Relative error (%) in short-range DFA, long-range DFA and CD based non-linear HRV of ten healthy subjects by ECG sampling frequency

Sampling Frequency (Hz) | Short-Range Scaling Exponent (N=200 / 500 / 1000) | Long-Range Scaling Exponent (N=200 / 500 / 1000) | Correlation Dimension (N=200 / 500 / 1000)
125  | 7.3397 / 2.2852 / 1.3218 | 9.7084 / 7.6540 / 4.6857 | 21.8027 / 22.0187 / 29.8467
250  | 3.6857 / 3.5213 / 1.5193 | 11.0610 / 8.6080 / 4.5179 | 23.3386 / 21.6968 / 28.9266
500  | 3.5803 / 4.2180 / 2.3784 | 10.5921 / 8.9108 / 4.5911 | 3.2488 / 21.7537 / 27.9618
1000 | 0.7434 / 0.0849 / 0.3840 | 1.1380 / 2.2172 / 0.1196 | 2.9857 / 0.1740 / 0.3173
1500 | 0.7428 / 0.6021 / 0.2992 | 0.7473 / 2.5690 / 0.0387 | 2.9791 / 0.2028 / 0.3880
2000 | 0.0000 / 0.0000 / 0.0000 | 0.0000 / 0.0000 / 0.0000 | 0.0000 / 0.0000 / 0.0000
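Each entry of Table 3 follows the relative-error definition used in Section 4, RE_k = |X_origin − X_k| / X_origin × 100 (%), computed per subject and then averaged over subjects; the subject-averaged values shown in Table 2 therefore do not reproduce these entries directly (our reading of Sec. 4). A minimal sketch of the formula itself, with arbitrary example numbers:

```python
def relative_error_pct(x_origin, x_k):
    """RE_k = |X_origin - X_k| / X_origin * 100 (%), per Sec. 4."""
    return abs(x_origin - x_k) / x_origin * 100.0

print(relative_error_pct(2.0, 1.5))  # 25.0
```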

Table 4. Correlation coefficient between increase in sampling frequency and decrease in REs of α1, α2 and CD for short-term, medium-term and long-term data lengths

 | Short-Range Scaling Exponent (N=200 / 500 / 1000) | Long-Range Scaling Exponent (N=200 / 500 / 1000) | Correlation Dimension (N=200 / 500 / 1000)
Correlation coefficient | 0.8689 / 0.7860 / 0.8042 | 0.9073 / 0.9233 / 0.8978 | 0.7923 / 0.8932 / 0.8984

4. STATISTICAL ANALYSIS
In order to study the effect of sampling frequency on DFA- and CD-based HRV, the parameters α1, α2 and CD are evaluated. The relative errors (REs) were calculated with respect to the non-linear measures obtained from the RR interval time series derived at the reference ECG sampling frequency of 2000 Hz. The parameters {X1, X2, . . ., Xn} are obtained for the DFA- and CD-based HRV measures of the short-, medium- and long-term data sets at sampling frequencies of 125, 250, 500, 1000 and 1500 Hz. With Xorigin the corresponding non-linear HRV measure at the reference ECG sampling frequency of 2000 Hz, the relative errors REk are computed as |Xorigin − Xk|/Xorigin × 100 (%). For the statistical calculations 150 error values


were derived for each HRV parameter and data length of ECG signal.

5. RESULT
The non-linear HRV measures α1, α2 and CD are computed for RR interval series of long-term (N=1000), medium-term (N=500) and short-term (N=200) data lengths respectively and compared with reference values at a sampling frequency of 2000 Hz. Table 2 shows the effect of ECG sampling frequency on the average non-linear HRV parameters α1, α2 and CD, respectively, of RR interval time series derived from lead-II ECG of ten healthy subjects. The REs in the short-term (α1) and long-term (α2) fluctuations and CD of RR interval series derived from ECG at sampling frequencies of 125, 250, 500, 1000 and 1500 Hz were calculated. These REs were then compared with the reference parameters to evaluate the impact of ECG sampling frequency (Table 3).

For the very low ECG sampling frequency of 125 Hz, the REs in α1, α2 and CD of the RR interval time series with respect to the reference values at 2000 Hz were approximately 7.33, 9.7 and 21.8% for short-term data; 2.28, 7.65 and 22% for medium-term data; and 1.32, 4.6 and 29.8% for long-term data, respectively. At the medium ECG sampling frequency of 1000 Hz, the REs in α1, α2 and CD decreased to 0.7, 1.13 and 2.98% for short-term data; 0.08, 2.21 and 0.17% for medium-term data; and 0.38, 0.11 and 0.31% for long-term data, respectively. The decrease in REs is thus a function of the sampling frequency and the RR interval data length. The REs in CD due to ECG sampling frequency were found to be very high at the low sampling frequency of 125 Hz. The correlation coefficients for the increase in sampling frequency against the decrease in REs of α1, α2 and CD at short-term, medium-term and long-term data lengths are shown in Table 4.

6. CONCLUSION
The influence of ECG sampling frequency on DFA and CD based non-linear HRV parameters at short-, medium- and long-term data lengths has been quantified. At low sampling frequency the REs in DFA and CD based HRV are found to be clinically significant. Further, the REs in HRV measures depend upon the sampling level and the RR interval data length. Thus the non-linear HRV parameters estimated by the DFA and CD algorithms are sensitive to ECG sampling frequency and data length, and erroneous quantification results in a bias in DFA and CD measures and a clinically misinterpreted HRV.

7. REFERENCES
[1] Afonso, V.X., Tompkins, W.J. and Luo, T.Q. 1999. ECG beat detection using filter banks. IEEE Trans. Biomed. Engineering, vol. 46, 192-202.
[2] Jovic, A. and Bogunovic, N. 2012. Evaluating and comparing performance of feature combinations of heart rate variability measures for cardiac rhythm classification. Biomedical Signal Processing and Control, vol. 7, 245-255.
[3] Berntson, G.G., Bigger, J.T., Eckberg, D.L., Grossman, P., Kaufmann, P.G., Malik, M., Nagaraja, H.N., Porges, S.W., Saul, J.P., Stone, P.H., and Vander-Molen, M.W. 1997. Heart rate variability: origins, methods, and interpretive caveats. Psychophysiology, vol. 34, no. 6, 623-648.
[4] Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. 1996. Heart rate variability: standards of measurement, physiological interpretation, clinical use. Circulation, vol. 93, 1043-1065.
[5] Kay, S.M. and Marple, S.L. 1981. Spectrum analysis: a modern perspective. IEEE Proceedings, vol. 69, 1380-1419.
[6] Berger, R.D., Akselrod, S., Gordon, D., and Cohen, R.J. 1986. An efficient algorithm for spectral analysis of heart rate variability. IEEE Transactions on Biomedical Engineering, vol. BME-33, 900-904.
[7] Singh, D., Vinod, K., Saxena, S.C., and Deepak, K.K. 2006. Spectral evaluation of aging effects on blood pressure and heart rate variations in healthy subjects. Journal of Medical Engineering Technology, vol. 30, no. 3, 145-150.
[8] Hoyer, D., Schmidt, K., Bauer, R., Zwiener, U., Kohler, M., Luthke, B. and Eiselt, M. 1997. Nonlinear analysis of heart rate and respiratory dynamics. IEEE Engineering in Medicine and Biology, vol. 16, no. 1, 31-39.
[9] Kanters, J.K., Hojgaard, M.V., Agner, E. and Holstein-Rathlou, N-H. 1996. Short- and long-term variations in non-linear dynamics of heart rate variability. Cardiovascular Research, vol. 31, 400-409.
[10] Merati, G., Reinzo, M.D., Parati, G., Veicsteinas, A. and Castiglioni, P. 2006. Assessment of autonomic control of heart rate variability in healthy and spinal-cord injured subjects: contribution of different complexity based estimators. IEEE Transactions on Biomedical Engineering, vol. 53, no. 1, 43-52.
[11] Goldberger, A.L. 1991. Is the normal heartbeat chaotic or homeostatic? News in Physiological Sciences, vol. 6, 87-91.
[12] Pincus, S.M. 1991. Approximate entropy: a complexity measure for biologic time-series data. Proceedings of the IEEE 17th Annual Northeast Bioengineering Conference, 35-36.
[13] Singh, B., Singh, D., Jaryal, A.K. and Deepak, K.K. 2012. Ectopic beats in approximate entropy and sample entropy-based HRV assessment. International Journal of Systems Science, vol. 43, no. 5, 884-893.
[14] Richman, J.S. and Moorman, J.R. 2000. Physiological time series analysis using approximate entropy and sample entropy. American Journal of Physiology: Heart and Circulatory Physiology, vol. 278, 2039-2049.
[15] Singh, B. and Singh, D. 2012. Effect of threshold value r on multiscale entropy based heart rate variability. Cardiovascular Engineering and Technology, vol. 3, no. 2, 211-216.


[16] Pena, M.A., Echeverria, J.C., Garcia, M.T. and Gonzalez-Camarena, R. 2009. Applying fractal analysis to short sets of heart rate variability data. Medical & Biological Engineering & Computing, vol. 47, 709-717.
[17] Hejjel, L. and Roth, E. 2004. What is the adequate sampling interval of the ECG signal for heart rate variability analysis in the time domain? Physiological Measurements, vol. 25, 1405-1411.
[18] Ziemssen, T., Gasch, J. and Ruediger, H. 2008. Influence of ECG sampling frequency on spectral analysis of RR intervals and baroreflex sensitivity using the EUROBAVAR data set. Journal of Clinical Monitoring and Computing, vol. 22, 159-168.
[19] Abboud, S. and Barnea, O. 1995. Errors due to sampling frequency of the electrocardiogram in spectral analysis of heart rate signals with low variability. Proceedings of IEEE Computers in Cardiology, 461-463.
[20] Rodriguez, E., Echeverria, J.C. and Alvarez-Ramirez, J. 2007. Detrended fluctuation analysis of heart intrabeat dynamics. Physica A, vol. 384, no. 2, 429-438.
[21] Penzel, T., Kantelhardt, J.W., Grote, L., Peter, J.H. and Bunde, A. 2003. Comparison of detrended fluctuation analysis and spectral analysis for heart rate variability in sleep and sleep apnea. IEEE Trans. Biomed. Eng., vol. 50, no. 10, 1143-1151.
[22] Grassberger, P. and Procaccia, I. 1983. Measuring the strangeness of strange attractors. Physica D, 189-208.
[23] Singh, B., Singh, M. and Banga, V.K. 2014. Sample entropy based HRV: effect of ECG sampling frequency. Biomedical Science and Engineering, vol. 2, no. 3, 68-72.
[24] Singh, M., Singh, B. and Banga, V.K. 2014. Effect of ECG sampling frequency on approximate entropy based HRV. International Journal of Bio-Science and Bio-Technology, vol. 6, no. 4, 179-186.


Design of Rectangular Microstrip Patch Antenna Array for S, C and X-Band

Jagtar Singh Sivia, Yadawindra College of Engineering, Punjabi University Guru Kashi Campus, Talwandi Sabo, Punjab, INDIA, jagtarsivian@yahoo.com
Amandeep Singh, M.Tech Student ECE, Yadawindra College of Engineering, Talwandi Sabo, Punjab, INDIA, honey02700@gmail.com
Sunita Rani, Yadawindra College of Engineering, Punjabi University Guru Kashi Campus, Talwandi Sabo, Punjab, INDIA, ersunitagoyal@rediffmail.com

ABSTRACT
In this paper, rectangular microstrip patch antenna arrays have been designed for S-, C- and X-band applications using the IE3D simulator. Roger RT/duroid material with dielectric constant 2.2 and height 1.588 mm has been used as the substrate material. A microstrip inset line feed has been used for feeding the rectangular patch antenna. 1x2 and 1x4 patch antenna arrays have been designed using a series feed network, and performance parameters of the patch antenna arrays such as return loss, gain, directivity and VSWR have been computed and compared.

Keywords
Microstrip antenna; inset line feed; antenna array; return loss.

1. INTRODUCTION
Antennas are a kind of transducer that converts electrical energy into radiated energy. Antennas are also used in a receiver to collect radiation from free space and deliver the energy contained in the propagating wave to the feeder and receiver [1]. It has been observed in recent years that developments in communication systems require light-weight, high-gain, high-directivity and high-efficiency antennas with minimum return loss and minimum cost that work at a number of frequencies. Microstrip patch antennas fulfill all these requirements; in construction, such an antenna consists of a conducting patch, a substrate material and an RF power feed [4]. According to the IEEE, different frequency bands such as L, S, C and X have been defined with different frequency ranges, and each band is used for different applications [5]. Microstrip antennas are also useful in non-satellite-based applications such as remote sensing, medical hyperthermia and cancer detection [2][6]. From past experiments it has been observed that at high frequencies such as the X-band (8 GHz-12 GHz) the microstrip antenna gives a quite effective response compared to other antennas. However, the microstrip antenna has some drawbacks, including narrow bandwidth, low power handling capability and low gain [6]. Here a rectangular microstrip patch antenna has been designed using a microstrip inset line feed, and Roger RT/duroid material has been used, which shows a higher gain than other substrate materials such as Teflon and FR4 [10].

2. DESIGN OF RECTANGULAR MICROSTRIP PATCH ANTENNA
The development of antennas for wireless communication requires an antenna with more than one operating frequency, because there are various wireless communication systems and many telecommunication operators using various frequencies [3]. A number of shapes have been used for the design of microstrip patch antennas, but the rectangular and circular patches are the most common for different applications in different fields, due to the effective radiation characteristics they show when RF power is fed to the patch antenna [4][6].

Fig 1: Rectangular patch antenna with inset feed

Here the dimensions of the rectangular patch have been computed using the equations [6] given below. The width is calculated using equation (1):

  W = \frac{1}{2 f_r \sqrt{\mu_0 \varepsilon_0}} \sqrt{\frac{2}{\varepsilon_r + 1}} = \frac{v_0}{2 f_r} \sqrt{\frac{2}{\varepsilon_r + 1}}    (1)

And the effective dielectric constant is computed as:

  \varepsilon_{reff} = \frac{\varepsilon_r + 1}{2} + \frac{\varepsilon_r - 1}{2} \left[1 + 12 \frac{h}{W}\right]^{-1/2}    (2)

Due to the fringing effect, the increase in length is given by:

  \Delta L = 0.412\, h\, \frac{(\varepsilon_{reff} + 0.3)\left(\frac{W}{h} + 0.264\right)}{(\varepsilon_{reff} - 0.258)\left(\frac{W}{h} + 0.8\right)}    (3)

The actual length of the patch is computed using equation (4).


  L = \frac{1}{2 f_r \sqrt{\varepsilon_{reff}} \sqrt{\mu_0 \varepsilon_0}} - 2\Delta L    (4)

After calculating the dimensions of the RMPA, the inset length of the feed inside the patch is calculated [7] using:

  y_0 = 10^{-4} \left[0.001699\,\varepsilon_r^7 + 0.13761\,\varepsilon_r^6 - 6.1783\,\varepsilon_r^5 + 93.187\,\varepsilon_r^4 - 682.69\,\varepsilon_r^3 + 2561.9\,\varepsilon_r^2 - 4043\,\varepsilon_r + 6697\right] \frac{L}{2}    (5)

Impedance matching between two elements is the main factor affecting the performance of the antenna. The width of the microstrip inset line feed with characteristic impedance Z_0 is given by the following equations [4]:

  \frac{W}{h} = \frac{8\exp(A)}{\exp(2A) - 2}, \quad \text{when } Z_0\sqrt{\varepsilon_r} > 89.91 \text{ and } A > 1.52    (6)

  \frac{W}{h} = \frac{2}{\pi}\left[B - 1 - \ln(2B - 1) + \frac{\varepsilon_r - 1}{2\varepsilon_r}\left(\ln(B - 1) + 0.39 - \frac{0.61}{\varepsilon_r}\right)\right], \quad \text{when } Z_0\sqrt{\varepsilon_r} \le 89.91 \text{ and } A \le 1.52    (7)

where the values of A and B are calculated [4] using equations (8) and (9):

  B = \frac{60\pi^2}{Z_0\sqrt{\varepsilon_r}}    (8)

  A = \frac{Z_0}{60}\left(\frac{\varepsilon_r + 1}{2}\right)^{1/2} + \frac{\varepsilon_r - 1}{\varepsilon_r + 1}\left(0.23 + \frac{0.11}{\varepsilon_r}\right)    (9)

And the length of the transmission feed line is given by equation (10):

  \ell = \frac{\lambda}{4} = \frac{c}{4 f_r \sqrt{\varepsilon_{reff}}}    (10)

Table 1. Dimensions of Single Multiband Rectangular Patch Antenna with Inset Feed

Length of rectangular patch (L)                        40.48 mm
Width of rectangular patch (W)                         48.40 mm
Inset length of feed (yo)                              9.3167 mm
Width of inset feed for patch (Wi)                     4.93 mm
Length of transmission feed (λ/4)                      20.63 mm
Dielectric constant of substrate (εr)                  2.2
Effective dielectric constant of substrate (εreff)     2.032
Gap width (Wo)                                         0.34 mm

The dimensions of the rectangular patch antenna using the inset feed line are given in Table 1.

3. RESULT AND DISCUSSIONS
The use of antenna arrays has increased greatly in communications, as they can transmit signals over long distances without the need for relay stations [8]. Here, rectangular patch antenna arrays have been designed which resonate at S-, C- and X-band frequencies, so such antenna arrays can be used for a number of applications such as wireless communication, radar systems, Wi-Max and the biomedical field [6]. The rectangular patch antenna with inset feed is shown in Figure 1, and the dimensions of the patch antenna are given in Table 1. In Figure 2 and Figure 3, 1x2 and 1x4 multiband RMPA arrays have been designed using the inset feed line, where the antenna elements are combined with each other using the series feed network. Different performance parameters such as return loss, gain, directivity and VSWR have been obtained using the IE3D simulator.

Fig 2: Rectangular 1x2 MPA Array

Fig 3: Rectangular 1x4 MPA Array

3.1 Results of 1x2 RMPA Array
Return loss vs frequency, gain vs frequency, directivity vs frequency and VSWR vs frequency plots of the 1x2 RMPA array are shown in Fig 4 to Fig 7.

Fig 4: Return Loss Vs Frequency
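Equations (1)-(4) and the feed-width synthesis of equations (6)-(9) can be checked numerically against Table 1. A minimal sketch follows; note that the design frequency is not stated in this excerpt, so 2.45 GHz is an assumed value, chosen because together with εr = 2.2 and h = 1.588 mm it closely reproduces the tabulated dimensions:

```python
import math

C0 = 3e8  # speed of light in vacuum (m/s)

def patch_dimensions(fr, er, h):
    """Rectangular patch W, eps_reff, dL and L per equations (1)-(4)."""
    w = (C0 / (2 * fr)) * math.sqrt(2 / (er + 1))                      # eq (1)
    e_reff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / w) ** -0.5    # eq (2)
    dl = 0.412 * h * ((e_reff + 0.3) * (w / h + 0.264)) / \
         ((e_reff - 0.258) * (w / h + 0.8))                            # eq (3)
    l = C0 / (2 * fr * math.sqrt(e_reff)) - 2 * dl                     # eq (4)
    return w, e_reff, dl, l

def feed_width(z0, er, h):
    """Microstrip feed width from the synthesis equations (6)-(9)."""
    a = z0 / 60 * math.sqrt((er + 1) / 2) + \
        (er - 1) / (er + 1) * (0.23 + 0.11 / er)                       # eq (9)
    if a > 1.52:                                                       # eq (6)
        wh = 8 * math.exp(a) / (math.exp(2 * a) - 2)
    else:                                                              # eq (7), with B from eq (8)
        b = 60 * math.pi ** 2 / (z0 * math.sqrt(er))
        wh = (2 / math.pi) * (b - 1 - math.log(2 * b - 1)
              + (er - 1) / (2 * er) * (math.log(b - 1) + 0.39 - 0.61 / er))
    return wh * h

# Assumed design point (fr not given in this excerpt): 2.45 GHz,
# RT/duroid er = 2.2, h = 1.588 mm, 50-ohm feed line.
W, e_reff, dL, L = patch_dimensions(2.45e9, 2.2, 1.588e-3)
Wi = feed_width(50, 2.2, 1.588e-3)
print(round(W * 1e3, 2), round(L * 1e3, 2), round(Wi * 1e3, 2))
```

Under these assumptions the computed W, L and Wi land within a few hundredths of a millimetre of the Table 1 entries (48.40 mm, 40.48 mm and 4.93 mm), which supports the reconstruction of equations (1)-(9) above.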


Table 2. Performance Parameters for Rectangular 1x2 Patch Antenna Array

Frequency (GHz)   Return loss (dB)   VSWR   Gain (dBi)   Directivity (dBi)
2.35              -18.3              1.73   2.45         8.30
4.08              -10.01             1.93   3.54         7.58
5                 -22.52             1.20   1.63         6.88
6.82              -14.54             1.63   7.01         11.68
8                 -19.84             1.23   3.85         10.01
9.17              -27                1.11   9.24         13.78

Fig 5: Directivity Vs Frequency

Fig 6: Gain Vs Frequency

Fig 7: VSWR Vs Frequency

It is clear from Figure 4 that the 1x2 RMPA array resonates at 2.35 GHz, 4.08 GHz, 5 GHz, 6.82 GHz, 8 GHz and 9.17 GHz, and at each resonant frequency the return loss is less than -10 dB, which is a necessary condition for the working of the RMPA [4][11]. The performance parameters (return loss, gain, directivity and VSWR) at each frequency are shown in Table 2. From Figure 5 and Figure 6 it is observed that this multiband patch antenna array has a maximum gain of 9.24 dB and directivity of 13.46 dB at 9.17 GHz.

3.2 Results of 1x4 RMPA Array
The four antenna elements are connected in a series feed network as shown in Figure 3; when this antenna array is simulated in the IE3D software, it shows the following results. Return loss vs frequency, gain vs frequency, directivity vs frequency and VSWR vs frequency plots of the 1x4 RMPA array are shown in Fig 8 to Fig 11.

Fig 8: Return Loss Vs Frequency

Fig 9: Directivity Vs Frequency

It is clear from Figure 8 that the return loss of the 1x4 RMPA array is lower than that of the 1x2 RMPA array, with a minimum return loss of -28 dB at 9.17 GHz. From Figure 9 and Figure 10 it is clear that the 1x4 RMPA array has a maximum gain of 9.93 dB and maximum directivity at 9.17 GHz. These values of gain and directivity are higher than those of the 1x2 RMPA array. It is clear from Figure 11 that each frequency band has a VSWR [9] of less than 2. The different performance parameters for the 1x4 RMPA are shown in Table 3.
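The return-loss and VSWR columns of Tables 2 and 3 are linked through the reflection coefficient, |Γ| = 10^(−RL/20) and VSWR = (1 + |Γ|)/(1 − |Γ|). This relation is standard transmission-line theory rather than something stated in the paper, but it reproduces most table rows to within rounding, e.g.:

```python
def vswr_from_return_loss(rl_db):
    """VSWR implied by a return loss magnitude in dB: |Gamma| = 10**(-RL/20)."""
    gamma = 10 ** (-rl_db / 20)
    return (1 + gamma) / (1 - gamma)

# Check two rows of Table 2 (return-loss magnitudes in dB).
print(round(vswr_from_return_loss(10.01), 2))
print(round(vswr_from_return_loss(19.84), 2))
```

The 10.01 dB and 19.84 dB rows come out at roughly 1.92 and 1.23, matching the tabulated 1.93 and 1.23; the 2.35 GHz row (18.3 dB against VSWR 1.73) departs from this relation, which may reflect a rounding or transcription issue in the original table.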


Fig 10: Gain Vs Frequency

Fig 11: VSWR Vs Frequency

Table 3. Performance Parameters for Rectangular 1x4 Patch Antenna Array

Frequency (GHz)   Return loss (dB)   VSWR   Gain (dBi)   Directivity (dBi)
2.35              -18.32             1.74   2.63         6.84
4.08              -10.02             1.93   3.414        7.44
5                 -22.84             1.20   1.53         6.56
6.82              -15                1.63   6.66         11.34
8                 -19.61             1.23   3.93         10.06
9.17              -28                1.10   9.93         14.44

4. CONCLUSION
The multiband rectangular microstrip patch antenna has been designed for S-, C- and X-band applications using the IE3D simulator. Roger RT/duroid material with dielectric constant 2.2 and height 1.588 mm has been used as the substrate. 1x2 and 1x4 patch antenna arrays have been designed in a series feed network, and it is concluded that the proposed antenna arrays work for S-, C- and X-band applications. When the number of elements in the series feed network is increased, the performance parameters also improve.

REFERENCES
[1] Raja, A. H. 2009. Study of Microstrip Feed Line Patch Antenna. Eng. & Tech. Journal, Vol. 27, No. 2.
[2] Mahalakshmi and Jeyakumar, V. 2012. Design and Development of Single Layer Microstrip Patch Antenna for Breast Cancer Detection. Bonfring International Journal of Research in Communication Engineering, Vol. 2, Special Issue 1.
[3] Chakraborty, U., Chatterjee, S., Chowdhury, S. K., and Sarkar, P. P. 2011. A compact microstrip patch antenna for wireless communication. Progress In Electromagnetics Research C, Vol. 18, 211-220.
[4] Garg, R., Bhartia, P., Bahl, I., Ittipiboon, A. 2001. Microstrip Antenna Design Handbook. Artech House Inc.
[5] Lewis, L.L. 1991. Introduction to frequency standards. The IEEE Proceedings, Vol. 79, No. 7, 927-937.
[6] Balanis, C. A. 2005. Antenna Theory: Analysis and Design, 3rd ed. Hoboken, N.J.: Wiley-Interscience.
[7] Ramesh, M. and Kb, Y. 2003. Design Formula for Inset Fed Microstrip Patch Antenna. Journal of Microwaves and Optoelectronics, Vol. 3, No. 3.
[8] Minervino, D.R. 2013. Arrays of Rectangular Patch Microstrip Antennas for Aerospace Applications. 978-1-4799-1397.
[9] Singh, J., Singh, A.P. and Kamal, T.S. 2011. On the Design of Rectangular Spiral Microstrip Antenna for Wireless Communication. 5th International Multi-Conference on Intelligent Systems.
[10] Jyothi, B., Murthy, V.V.S. and Saha, B.H. 2012. Analysis of a Slot Antenna for Different Substrate Materials. International Journal of Advanced Engineering Research and Studies, E-ISSN 2249-8974, IJAERS, Vol. I, Issue III.
[11] Pozar, D. M. and Schaubert, D. H. The Analysis and Design of Microstrip Antennas and Arrays. John Wiley and Sons Inc., Hoboken, New Jersey.


UWB Stacked Patch Antenna using Folded Feed for GPR Application

Mohammad Shahab, Deptt. of Electronics and Communication Engg., Future Institute of Engineering and Technology, Bareilly, India, shahabbly@gmail.com
Tanveer Ali Khan, Deptt. of Electrical Engg., Future Institute of Engineering and Technology, Bareilly, India, tanveerk881@gmail.com

ABSTRACT
In this paper, a design strategy is presented to achieve enhanced impedance bandwidth for a rectangular patch stacked with an H-shaped rectangular patch and fed with a folded patch feed. It has been found that by selecting an appropriate width of the folded feed and size of the slots on the lower patch, there is considerable improvement in the impedance bandwidth and gain of the antenna. We achieve an impedance bandwidth of 122.60% across the frequency range of 3.58 GHz - 14.88 GHz. The antenna occupies the compact dimensions of 0.419λg by 0.358λg by 0.209λg, where λg is the wavelength at the central operating frequency. The antenna shows good performance, with a congruous radiation pattern and significant gain over the whole frequency band.

Index Terms— folded patch feed; patch antenna; UWB antenna.

I. INTRODUCTION
Ultrawideband (UWB) communication systems have recently received great attention in the wireless world. It is a widely used technology in communications, radar, imaging and remote sensing applications. Ultra wideband was formerly known as "pulse radio", but the FCC and the International Telecommunication Union Radiocommunication Sector (ITU-R) currently define UWB in terms of a transmission from an antenna for which the fractional bandwidth is greater than 20%. Also, according to the Defense Advanced Research Projects Agency (DARPA), UWB antennas are required to have a fractional bandwidth greater than 25%.
According to the U.S. Federal Communications Commission (FCC), the authorized unlicensed use of UWB lies in the frequency range from 3.1 GHz to 10.6 GHz, and the power spectral density emission limit for UWB transmitters is −41.3 dBm/MHz [2]. The advantages of UWB systems over conventional (narrowband) wireless communication systems are their low transmitting power level and high data rates. They require a consistent gain response for sensitivity and stable radiation patterns over the entire bandwidth. The antenna aspects of UWB systems differ significantly from those of narrowband systems. The design of practical antennas that radiate efficiently over an ultrawide bandwidth continues to be a challenging problem. Due to the attractive merits of low profile, light weight, ease of fabrication and wide frequency bandwidth, patch antennas are currently under consideration for use in ultrawideband (UWB) systems. However, the conventional patch antenna typically suffers from only a few percent of bandwidth. For this reason much effort is made to develop techniques and find configurations to broaden its impedance bandwidth. It is a common practice to use stacked patches to increase the gain and/or impedance bandwidth of microstrip antennas. This reduces the impedance variation of the antenna with frequency, thus enhancing the impedance matching across a broad frequency band. Various arrangements of stacked patch structures have been investigated. In [5], the maximum bandwidth is 10%, and the structure consists of stacked patches with a shorting wall to achieve a low profile with two layers of dielectric substrate. For UWB application, a dual-layer stacked patch antenna with 56.8% bandwidth has been proposed in [6], which has a dimension of 26.5 x 18 x 11.5 mm³. More recently, an ultrawideband suspended plate antenna consisting of two layers has achieved an impedance bandwidth of 72.7% [7]. A new antenna feed method [8], namely the folded-patch feed, has been shown to improve the impedance bandwidth of a patch antenna. The rectangular U-shaped-slot patch antenna with a folded patch feed, with dimensions of 18 x 15 x 7 mm³, is shown to have an impedance bandwidth of 53.5% (VSWR ≤ 2) [9]. A stacked patch antenna with a folded feed [10] has been presented for ultrawideband application; that design achieves an impedance bandwidth of 111.08%.

In this paper, a new antenna is proposed with a modified folded patch feed and slots on the lower patch, which improve the impedance bandwidth. An impedance bandwidth of 111.08% has already been reported [10]. The proposed antenna configuration further improves the impedance bandwidth, up to 122.60%, and also improves the gain variation over the frequency band.

II. ANTENNA DESIGN AND STRUCTURE
The configuration of the proposed rectangular stacked patch fed by a modified folded patch feed is illustrated in Fig. 1. The proposed antenna has dimensions of 17 x 14.5 x 8.5 mm³, and the whole radiating element is centered on top of a 90 mm x 90 mm ground plane. The radiating patch, including the shorting wall, is made of 0.2 mm copper sheet, as the substrate of the whole proposed antenna is air. The antenna is supported by the coaxial probe and the shorting wall. To
is supporting by coaxial probe and the shorting wall. To


support the whole radiating element above the ground plane, as well as to increase the impedance bandwidth by a few percent and to reduce the overall size of the antenna, we connected a shorting wall to the ground plane. The probe feed, with a radius of 0.65 mm, is located on the horizontal central line of the folded patch feed, 14 mm away from the left edge. With the aid of the folded patch feed design, the probe length is shortened, leading to a smaller probe inductance. The overall height of the proposed antenna is 8.5 mm, while the length of the probe is only 2.5 mm; because of this, the impedance bandwidth of the antenna increases significantly. The probe length keeps the real part of the input impedance close to 50 Ω. The specific patch shape for the slots and the stacked patch configuration allowed us to obtain a satisfactory matching across the frequency band of interest. The width of the slots has been optimized between 3 mm and 4 mm. The capacitance at the feed point is increased by the horizontal section of the feeding plate, which compensates the increase in inductance due to the long probe across a broad impedance bandwidth. The extra electromagnetic coupling between the stacked patches of the design further enhances the impedance bandwidth.

Fig. 1. Geometry of the proposed antenna: (a) top view, (b) side view, (c) 3D view

III. ANTENNA PERFORMANCE & ANALYSIS

The simulation of the antenna has been done with the finite-element analysis package Ansoft HFSS. The simulated return loss plot is shown in Fig. 2. It is clearly evident that the bandwidth of the proposed antenna is 122.60% (10 dB return loss) over the frequency range of 3.58 GHz-14.88 GHz.
As shown in Fig. 3, the gain of the antenna is above 6 dB and considerable over the frequency range. The VSWR for the antenna is plotted in Fig. 4 and is found to be between 1 and 2. The radiation pattern at 4.5 GHz and 7.4 GHz is shown in Fig. 6. Fig. 4 has been drawn for folded feed widths (x_ff) of 6, 7, 8 and 9 mm with the probe feed (x_feed) taken as 14 mm. The variation of return loss with probe feed values of 11, 12, 13 and 14 mm is shown in Fig. 5.
Fig. 2. Return loss of the proposed antenna
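The quoted 122.60% figure over 3.58-14.88 GHz can be sanity-checked against the standard fractional-bandwidth definition mentioned in the introduction; the small difference from 122.60% presumably reflects rounding of the band edges:

```python
def fractional_bandwidth(f_low_ghz, f_high_ghz):
    """Percentage fractional bandwidth: 2 * (fH - fL) / (fH + fL) * 100."""
    return 2 * (f_high_ghz - f_low_ghz) / (f_high_ghz + f_low_ghz) * 100

fbw = fractional_bandwidth(3.58, 14.88)
print(round(fbw, 2))
```

The result is well above the 20% (FCC/ITU-R) and 25% (DARPA) thresholds cited above, confirming the antenna qualifies as UWB under either definition.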


Fig. 3. VSWR of the proposed antenna

Fig. 4. Return loss with folded feed width variation

Fig. 5. Return loss with probe feed width variation

Fig. 6. Radiation pattern of the proposed antenna at (a) 4.5 GHz, (b) 7.4 GHz

IV. CONCLUSION

In this paper, an impedance bandwidth of 122.60% has been achieved over a frequency range of 3.58 GHz - 14.88 GHz, which is 11.52% wider than the previously reported design [10] having an impedance bandwidth of 111.08%. We optimized the size of the slots on the lower patch, the width of the folded feed and the probe feed position. The variation of return loss with slot width was studied, and it was found that increasing the value above 3 mm decreases the impedance bandwidth. This antenna has a small size and can be fabricated easily from a copper sheet. The results and dimensions of the antenna are given in the figures.


REFERENCES
[1] P. Salonen, M. Keskilammi, and M. Kivikoski, "New slot configurations for dual-band planar inverted-F antenna," Microwave Opt. Technol. Lett., vol. 28, pp. 293-298, Mar. 2001.
[2] P. Li, J. Liang, and X. Chen, "Study of printed elliptical/circular slot antenna for ultrawide band application," IEEE Trans. Antennas Propag., vol. 54, no. 6, pp. 1670-1675, 2006.
[3] S. D. Tragonski, R.B. Waterhouse, and D.M. Pozer, "Design of wide-band aperture-stacked patch microstrip antennas," IEEE Trans. Antennas Propag., vol. 46, no. 9, pp. 1245-1251, 1999.
[4] R. B. Waterhouse, "Design of probe fed stacked patches," IEEE Trans. Antennas Propag., vol. 47, no. 12, pp. 1780-1784, 1999.
[5] J. Ollikainen, M. Fische, and P. Vainikainen, "Thin dual-resonant stacked shorted patch antenna for mobile communications," Electron. Lett., vol. 35, no. 6, pp. 437-438, 1999.
[6] M.A. Matin, B.S. Sharif, and C.C. Tsimenidis, "Dual layer stacked rectangular microstrip patch antenna for ultra wideband applications," IET Microw. Antennas Propag., vol. 1, no. 6, pp. 1192-1196, 2007.
[7] X. N. Low, Z. N. Chen, and W. K. Toh, "Ultrawideband suspended plate antenna with enhanced impedance and radiation performance," IEEE Trans. Antennas Propag., vol. 56, no. 8, pp. 2490-2495, Aug. 2008.
[8] C.Y. Chiu, H. Wong, and C.H. Chan, "Study of small wideband folded-patch-feed antennas," IET Microw. Antennas Propag., vol. 2, no. 1, pp. 501-505, 2007.
[9] C.Y. Chiu, K. M. Shum, C.H. Chan, and K.M. Luk, "Bandwidth enhancement technique for quarter-wave patch antenna," IEEE Antennas Wireless Propag. Lett., vol. 2, pp. 130-132, 2003.
[10] Mohammad Tariqul Islam, Mohammed Nazmus Shakib, Norbahiah Misran and Baharudin Yatim, "A Stacked Patch Antenna for Ultrawideband Operation," IEEE ICUWB 2009, September 9-11, 2009.
[11] M.N. Shakib, M.T. Islam and N. Misran, "Stacked patch antenna with folded patch feed for ultra-wideband application," IET Microw. Antennas Propag., 2010, vol. 4, iss. 10, pp. 1456-1461.
[12] Dawar Awan, Shahid Bashir and Nerijus Riauka, "Parametric Study of UWB Antenna Loaded with Stacked Parasitic Patch and Reflector," 2013 Loughborough Antennas & Propagation Conference, 11-12 November 2013, Loughborough, UK.
[13] David Gibbins, Maciej Klemm, Ian J. Craddock, Jack A. Leendertz, Alan Preece, and Ralph Benjamin, "A Comparison of a Wide-Slot and a Stacked Patch Antenna for the Purpose of Breast Cancer Detection," IEEE Transactions on Antennas and Propagation, vol. 58, no. 3, March 2010.


PARAMETERS EVALUATION OF SEMI-CIRCULAR OBJECT IN STITCHED IMAGE

Jagpreet Singh, SLIET, Longowal, Sangrur, Punjab, singh.jagpreet86@gmail.com
Ajay Kumar Vishwakarma, SLIET, Longowal, Sangrur, Punjab, akguru007@yahoo.com
Neha Hooda, SLIET, Longowal, Sangrur, Punjab, getneha.hooda@gmail.com

ABSTRACT
In the era of image processing, digital image processing provides a reliable and acceptable solution through the use of edge detection techniques. This paper presents the use of image stitching combined with a thresholding and edge detection algorithm. Image stitching in real-time applications has been a challenging field for image processing. Image stitching is the process of joining two images in order to create a single large image. The algorithm is simplified in such a way that the desired parameters, such as the radius, area and other dependent dimensional parameters of selected objects, can be determined successfully. The fundamental concepts and theory of image thresholding are used to calculate the various parameters. Finally, all the results are calculated using MATLAB and compared with the MATLAB software tool 'imtool'. It was found that the evaluated parameters closely approximated the actual values, in good agreement with the proposed algorithm. The percentage error calculated for all the images considered for evaluation in this work is very small, even less than one percent, showing the effectiveness of the algorithm.

Keywords
Feature extraction, image stitching, edge detection, radius estimation.

1. INTRODUCTION
Image stitching is a traditional image process used to create a seamless panorama that shows a new perspective of the world around us. In general, stitching includes feature extraction, registration, matching and blending. Matching the features of two or several images is used to find the relationship between them, and it directly determines the speed and success rate of the image stitching process.

There are many algorithms for matching images and blending them. For matching, there are two approaches: the direct method and the feature detection method. The direct method is sometimes inconvenient and time-consuming because it always needs a high-quality, noise-free image. SIFT, however, is used for high-level feature-based detection and is also applicable to noisy images. This paper describes the use of the SIFT algorithm to merge two small images into a single large image in order to calculate object parameters through an edge detection algorithm.

An edge is a set of connected pixels that lie on the boundary between two regions; points in an image where the brightness changes abruptly are called edges or edge points. There are different types of sharply changing points in an image. Edges can be created by shadows, texture, geometry, and so forth. Edges can also be defined as discontinuities in the image intensity due to changes in image structure. These discontinuities originate from different features in an image. An edge has both magnitude and direction. The direction is used to identify the next possible edge point. Finally, all the edge points are linked together to form an object boundary [5].

Edge detection is one of the most important processes used in image processing. Edges are basic characteristics of an image and carry most of the information about the image objects. The application area reaches from astronomy to medicine, and also covers photogrammetric purposes. The different types of edges present in an image have different geometrical shapes; they are defined in terms of step edges, roof edges, line edges, ramp edges, gray-level edges, texture edges and many others [1, 2]. All such edges are detected by edge detection operations. The edge is an important information source and the basis of shape quality and texture features.

Thresholding is a non-linear operation that is used for image segmentation. Thresholding of an image is necessary for the binary conversion of the available colored (RGB) or black-and-white image. Thresholding follows the same concept as in basic electronics; here it is used to convert a gray-scale image to a binary image consisting of 0s and 1s as pixel values. Image thresholding is most effective in images with a high level of contrast [3]. This paper presents a new approach that combines thresholding with edge detection for estimating the circular objects present in an image. The method is quite efficient for the estimation of circular as well as semi-circular objects within an image, and the estimated parameters are quite close to the actual values.

Chun-ling Fan et al. presented the use of edge detection techniques in vision navigation. Road edge detection helps in analyzing the specific location, speed and size of obstacles on the road, variations in scene illumination, and the direction of road extension [4]. Beant Kaur et al. used a new technique of mathematical morphology for edge detection in remote sensing images. This technique has applications in image segmentation, texture analysis and image enhancement [5, 6].

The paper is organized as follows. Section II briefly explains the SIFT algorithm for stitching the images. Section III explains the edge detection algorithm for detecting the image object. The basic experimental setup, with the flowchart of the proposed algorithm, is discussed in Section IV. Section V shows the experimental results and the observations made. Finally, in Section VI, the paper is concluded and the future scope of the proposed algorithm is discussed.

2. SIFT ALGORITHM
SIFT stands for the scale-invariant feature transform algorithm, introduced by D. G. Lowe in 2004, which is a scale- and rotation-invariant method. It is widely used in object recognition, registration and stitching of images. The steps involved in image stitching using the SIFT algorithm are shown in Fig. 1.
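The matching stage at the heart of this pipeline — finding, for each key-point descriptor, its nearest neighbour by minimum Euclidean distance — can be sketched in a few lines of Python with numpy (an illustrative sketch, not the authors' MATLAB code; the 128-dimensional descriptors below are random stand-ins, and the 0.8 ratio test is an assumption borrowed from Lowe's paper):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    # For each descriptor in image A, find its nearest and second-nearest
    # neighbours in image B by Euclidean distance; accept the match only if
    # the nearest is clearly better than the runner-up (Lowe's ratio test).
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.normal(size=(50, 128))                 # "reference image" descriptors
desc_a = desc_b[[3, 17, 42]] + 0.01 * rng.normal(size=(3, 128))  # noisy copies
matches = match_descriptors(desc_a, desc_b)
```

Because the three query descriptors are near-copies of rows 3, 17 and 42, each is matched back to its source row while unrelated rows fail the ratio test.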

183
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

[Figure: flowchart boxes — input image → construction of scale space → DOG estimation → locate strong features → assign key points → make key-point description → matching/blending → merged image]
Fig.1 Flow chart of the SIFT algorithm

The basic steps involved in this algorithm are as follows.

First, we acquire the two sequential images of a scene through a digital camera, to which the stitching process is to be applied, and find the possible key points to identify locations and scales. Detecting locations that are invariant to scale changes of the image can be accomplished by searching for stable features across all possible scales, using a continuous function of scale known as scale space.

In order to detect local maxima and minima, each sample point is compared to its eight neighbors in the current image and its nine neighbors in each adjacent scale-space image, as shown in Fig. 2.

After assigning a consistent orientation to each key point based on local image properties, the key-point descriptor can be represented relative to this orientation, thereby achieving invariance to image rotation. The best candidate match for each feature point is found by identifying its nearest neighbor in the data set of feature vectors from the input image. The nearest neighbor is defined as the feature point with minimum Euclidean distance [7].

Once we have found the best transformation, we can blend the input image together with the reference image. To overcome visible artifacts, that is, to hide the edges of the component images, we use a weighted-average method over the pixels [8].

[Figure: difference-of-Gaussian images computed from adjacent intervals of the Gaussian scale space]
Fig.2 Difference of two adjacent intervals in the Gaussian scale-space

3. EDGE DETECTION ALGORITHM
Several detector methods are available for edge detection. The Sobel algorithm is one of them. It has advantages such as a small amount of calculation and high calculation speed, but it has the limitations of inaccuracy and sensitivity to noise. Another is the Prewitt operator. The principle of the Prewitt operator is similar to the Sobel operator, except for the weights assigned to the center pixels [9].

Circle detection in basic gray-scale images is explored, based on the concept of J. F. Canny and the non-maximum suppression process for contour tracing described by Neubeck and Van Gool. The detection process involves edge segments if they are part of a candidate circle, and the center and radius of the circle are computed using the edge threshold property.

In this paper we have used the Canny edge detection operator for edge detection in an image. It detects edges in a very robust manner. The basic idea of the Canny operator is to smooth the raw image. This involves the use of a Gaussian function first, then the application of Gaussian derivative filters, thus obtaining the edge gradients and image orientation. The bold image edges are thinned using non-maximum suppression before proceeding to the final step, that is, thresholding of the image. The Canny operator performs the image gradient operation and then obtains the edges by finding pixels with a locally maximum gradient magnitude. Compared to other methods such as Sobel and Prewitt, it keeps a good balance between noise suppression and edge detection [10].

A block diagram of the Canny edge detection algorithm is shown in Fig. 3. The input to the detector can be either a color image or a grayscale image. The output is an image containing the assumed circle edges of the traced semicircular object.

[Figure: block diagram — input image → image smoothing → calculation of edge strength and direction → non-maximum suppression → thresholding → edge map]
Fig.3. Block diagram of the canny edge detection algorithm.

The different steps of Canny edge detection are given below:

1) First, it suppresses as much noise as possible. Noise is high-frequency information that overlaps the original signal and introduces false edges; hence the image is smoothed using a Gaussian filter before the edges are detected.

2) The next step is to calculate the gradient and direction at each point of the image. The image is filtered using a Gaussian first-order differential operator, from which the intensity and direction of the image gradient are obtained.

3) In this step, non-maximum suppression of the gradient is performed, such that points at which the gradient is not a maximum are removed and are not considered part of an edge. Canny thus suppresses the non-maximum edge pixels in this step.

4) The last step of this algorithm is to threshold the edges in order to retain only the significant ones. This detector uses the "hysteresis" method of thresholding instead of global thresholding, so two threshold values are applied to the suppressed image and the edge pixels are detected [11].

Thresholding is another important concept used as part of the edge detection process in this work for the calculation of the parameters of an object in the image. Generally, thresholding is a non-linear operation that is used for image segmentation. Based upon different criteria, thresholding may be understood through one of the following techniques:
• Single thresholding
• Double thresholding
• Global thresholding
• Local thresholding
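As a rough illustration of Canny steps 1, 2 and 4, a simplified pass can be written with plain numpy (a sketch only: the kernel width and the two thresholds are arbitrary choices, the hysteresis is a single growing pass, and the non-maximum suppression of step 3 is omitted):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # Step 1: build a 1-D Gaussian for separable smoothing.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth(img, sigma=1.0):
    # Separable Gaussian smoothing via 1-D convolutions along rows and columns.
    k = gaussian_kernel1d(sigma, radius=3)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def gradient(img):
    # Step 2: first-order differences approximate the intensity gradient.
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def hysteresis_threshold(mag, low, high):
    # Step 4 (simplified): keep strong pixels, plus weak pixels that touch a
    # strong pixel; a full Canny would run non-maximum suppression (step 3)
    # before this stage to thin the edges.
    strong = mag >= high
    weak = (mag >= low) & ~strong
    padded = np.pad(strong, 1)
    neigh = np.zeros_like(strong)
    for dy in (-1, 0, 1):                      # grow strong edges into
        for dx in (-1, 0, 1):                  # 8-connected weak neighbours
            neigh |= padded[1 + dy: padded.shape[0] - 1 + dy,
                            1 + dx: padded.shape[1] - 1 + dx]
    return strong | (weak & neigh)

# A synthetic image with a vertical step edge at column 8.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
mag, _ = gradient(smooth(img))
edges = hysteresis_threshold(mag, low=0.05, high=0.2)
```

On the synthetic step image the gradient magnitude peaks around the transition columns, so the edge map is non-zero there and zero in the flat regions.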


The basic concept of thresholding is to convert the gray-scale image to a binary image consisting of 0's and 1's as pixel values. During the thresholding process, individual pixels in an image are assigned as object or background: a given pixel is assigned as 'object' if its intensity is greater than the threshold, and as 'background' otherwise. The thresholded image is given by the expression:

g(x, y) = 1 if f(x, y) >= T
g(x, y) = 0 if f(x, y) < T

such that the pixels with value 1 are considered object pixels and the pixels with value 0 are considered background pixels [12]. Thresholding in its simplest form involves mapping all pixels above the threshold value to one gray value, say white, while the others are mapped to black. Since the resulting image is defined with two gray values, the process is called bi-level segmentation. When multiple threshold values are used, the result is a multilevel image and the process is called multi-level segmentation. Applications dealing with more complex scenes, however, have to adopt an automatic multilevel image segmentation method [13].

4. EXPERIMENTAL SETUP
This section shows the proposed algorithm in terms of a flowchart, as given in Fig. 4. Here the steps for estimating parameters like the radius, area and other parameters are shown. First we find the edges of the object in an image using Canny edge detection and thresholding of the image. Next, a circle is fitted to the detected edge points using MATLAB software.

4.1 Input image
First of all we acquire the image in which the selected object is available. To this image we apply pre-processing such as de-noising, image dimensioning and conversion from RGB to gray scale for edge detection and thresholding.

4.2 Thresholding of image
It is used to convert the gray-scale or colored image to binary format. Typically an object pixel is given a value of 'one' while a background pixel is given a value of 'zero'.

[Figure: block diagram — input image → thresholding of the image → initialization and tracing of boundary point location → fit a circle to the boundary → calculation of estimated parameters of object → output image]
Fig.4. Block diagram of proposed algorithm.

4.3 Initialization and Tracing of boundary point location
To find the edges of an object, the algorithm picks a suitable row and column pixel value that specifies a point over the semi-circular or circular object within an image and inspects it until a transition from a background pixel (binary 0) to an object pixel (binary 1) occurs. It may need to trace the inner boundary (the outermost pixels of the foreground) or the outer boundary (the innermost pixels of the background) [14].

The tracing point is the starting point from which the circular object boundary is traced, in terms of the image column pixel value, shown by a green line.

4.4 Fit a circle to the boundary
For fitting a circle to the boundary specified by the above step, the basic mathematical equation of a circle is used. For a semi-circular object, the complete circle is formed using the information given by the images of the semi-circular case, as given in Table 3. The radius and area of the semicircular object are calculated. Other parameters (not reported in this paper), such as the circumference, diameter or volume in the case of 3-dimensional image objects, can also be calculated.

5. EXPERIMENTAL RESULTS
This section shows the data set of the experimental setting and the results. First we acquired two sequential images, which were stitched through the SIFT algorithm, and then we evaluated the object parameters in the stitched image.

Various images of different sizes were accessed from a reliable source to check the calculated parameters. The initial parameters of the object in an image, such as its tracing point, size, connectivity, threshold value and num point (contour) value, are shown for each image in Table 1.

Fig.5 Left portion of the image
Fig.6 Right portion of the image

Fig.7. 248 points are genuine matches out of 1524 points

Fig.8. Resulting panoramic image using the SIFT algorithm
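Taken together, the stages of Section 4 — threshold the image, trace the object boundary, fit a circle, compute radius and area — can be sketched in Python with numpy. This is an illustrative reimplementation on a synthetic disc, not the authors' MATLAB code; the algebraic (Kasa) least-squares fit is one simple way to realize the "fit a circle" step:

```python
import numpy as np

def threshold_image(f, T):
    # Section 4.2: g(x, y) = 1 where f(x, y) >= T (object), else 0 (background).
    return f >= T

def boundary_pixels(obj):
    # Section 4.3, simplified: object pixels with at least one 4-connected
    # background neighbour (the "inner boundary" of the foreground).
    inner = obj[1:-1, 1:-1]
    solid = obj[:-2, 1:-1] & obj[2:, 1:-1] & obj[1:-1, :-2] & obj[1:-1, 2:]
    out = np.zeros_like(obj)
    out[1:-1, 1:-1] = inner & ~solid
    return out

def fit_circle(xs, ys):
    # Section 4.4: Kasa least-squares circle fit, solving
    # x^2 + y^2 = 2*a*x + 2*b*y + c for the centre (a, b) and radius.
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)

# Toy gray-scale image: a bright disc of radius 12 centred at (30, 28).
yy, xx = np.mgrid[0:64, 0:64]
f = np.where((xx - 30) ** 2 + (yy - 28) ** 2 <= 12 ** 2, 200, 30)

obj = threshold_image(f, T=128)
ys, xs = np.nonzero(boundary_pixels(obj))
(cx, cy), radius = fit_circle(xs.astype(float), ys.astype(float))
area = np.pi * radius ** 2
```

The fitted centre recovers (30, 28) and the radius lands just under 12, since the traced inner boundary lies slightly inside the ideal circle — the same kind of small discrepancy the paper reports between estimated and 'imtool' values.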


The tracing point of the object can be calculated to read the numbers of rows and columns of the object by using MATLAB commands, as shown in Table 1.

Table 1. Images and their initial parameters

Sr. no. | Image taken | Image size | Tracing point | Connectivity | Image threshold value | Num point
1       | Image 2     | 256 x 256  | 136           | 8            | 0.5020                | 145
2       | Image 3     | 318 x 340  | 118           | 8            | 0.4314                | 224

The num point for curve tracing and the image threshold value, which is the ratio of the white to the black portion of the image, are also given in Table 1 for each of the images. In image processing and image recognition, pixel connectivity is the way in which pixels in 2-D or 3-D images relate to their neighbors. Connectivity for 2-D images may be divided into 4-, 6- and 8-connected pixels. The image connectivity for all the cases here is taken to be 8, because 8-connected pixels are neighbors of every pixel that touches one of their edges or corners; these pixels are connected horizontally, vertically and diagonally.

Table 2. Comparison of evaluated image parameters

Sr. no. | Estimated radius (pixels) | Estimated area (pixels) | Actual radius (pixels) | Actual area (pixels) | %age error in radius | %age error in area
1       | 127.01                    | 50684.41                | 127.02                 | 50661.01             | 0.007                | 0.04
2       | 114.79                    | 41401.64                | 114.22                 | 40965.09             | 0.003                | 0.01

In Table 2 a comparison of the evaluated and actual parameters of the images is made. The comparison shows that the parameters resulting from the proposed algorithm closely approximate the actual values of the parameters calculated using the image processing tool ('imtool') present in the MATLAB toolbox. The percentage error between the actual and estimated results approaches zero, and is thus widely acceptable (Table 2).

A contour plot of the grayscale image automatically sets up the axes so that their orientation and aspect ratio match the image. Contour tracing is a technique that is applied to digital images in order to extract their boundary. It is one of many preprocessing techniques performed on digital images in order to extract information about their general shape.

Once the contour of a given pattern is extracted, its different characteristics are examined and used as features, which are later used in pattern classification. Therefore, correct extraction of the contour will produce more accurate features, which will increase the chances of correctly classifying a given pattern. In conclusion, contour tracing is often a major contributor to the efficiency of the feature extraction process, an essential process in the field of pattern recognition.

The parameters of the images can be calculated by fitting a circle (blue line) to the boundary of the given object in an image using MATLAB code. The radius and area of the semicircular object are calculated. For a semicircular object, the complete circle is formed using the information given by the images of semi-circular form. Other parameters (not reported in this paper), such as the circumference, diameter or volume in the case of 3-D image objects, can be calculated, and a comparison of the evaluated and actual parameters of the images is made.

Table 3 shows the different images taken for experimental evaluation in the paper. The image named Image1 represents the standard geometrical shape of a semicircle. The image named Image2 is a practical image of the back portion of a car, captured with our own camera for the analysis, in which the car wheel represents the semicircular object within the image.

TABLE 3. SIMULATED IMAGES

Sr. no. | Resultant image parameters (through proposed algorithm) | Conventional image parameters (through imtool)
1       | Radius = 127.01 pixels, Area = 50648.41 pixels          | Radius = 127.02 pixels, Area = 50661.01 pixels
2       | Radius = 114.79 pixels, Area = 41401.64 pixels          | Radius = 114.22 pixels, Area = 40965.09 pixels

6. CONCLUSIONS AND FUTURE SCOPE
The estimation of the different parameters of a semi-circular object present in a stitched image has been done using an edge detection algorithm based on MATLAB. The presented work used the edge detection technique on a practical image to show its robustness. The use of the edge detection algorithm for finding parameters like the radius and area of a semi-circular object present in the stitched image has been performed; the results were calculated using MATLAB and compared with the software tool 'imtool'. The estimated parameters closely approximate the actual values, showing the good efficiency of the proposed algorithm. The resulting technique can be applied in the area of far-field object detection, such as telemetry, for fault finding in an object whose captured image is analyzed at a remote location. The presented algorithm has provided accurate results within a few seconds.

But this algorithm has the limitation of being capable of only circular object detection in an image. No other geometrical shapes are


identified or measured. However, the algorithm is quite robust and reliable for circular as well as semi-circular object detection present in an image. A possible extension is to apply the technique over multiple geometrical shapes such as triangles, hexagons, squares, rectangles, etc.

REFERENCES
[1] Wenshuo Gao, Lei Yang, Xiaoguang Zhang, Huizhong Liu, "An Improved Sobel Edge Detection", IEEE Conferences, 2010, pp. 67-71.
[2] Jiang Lixia, Zhou Wenjun, Wang Yu, "Study on Improved Algorithm for Image Detection", IEEE Conferences, 2010, pp. 476-479.
[3] Jaskirat Kaur, Sunil Aggrawal, Renu Vig, "A Comparative Analysis of Thresholding and Edge Detection Segmentation Techniques", International Journal of Computer Applications, vol. 39, no. 15, 2012, pp. 29-34.
[4] Chun-ling Fan, Yuan Ren, "Study on the Edge Detection Algorithms of Road Image", Third International Symposium on Information Processing, IEEE Conferences, 2010, pp. 217-220.
[5] Beant Kaur, Anil Garg, "Mathematical Morphological Edge Detection for Remote Sensing Images", IEEE Conferences, 2011, pp. 324-327.
[6] Yan Wan, Shanshan Jia, Dong Wang, "Edge Detection Algorithm Based on Grey System Theory Combined with Directed Graph", Seventh International Conference on Image and Graphics, IEEE, 2013, pp. 180-185.
[7] D. G. Lowe, "Object Recognition from Local Scale-Invariant Features", International Conference on Computer Vision, vol. 2, pp. 1150-1157, 1999.
[8] Yang Zhang-long, Guo Bao-long, "Image Mosaic Based on SIFT", International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IEEE, pp. 1422-1425, 2008.
[9] Raman Maini, Himanshu Aggarwal, "Study and Comparison of Various Image Edge Detection Techniques", International Journal of Image Processing (IJIP), vol. 3, issue 1, pp. 1-12.
[10] Geng Hao, Luo Min, Hu Feng, "Improved Self-Adaptive Edge Detection Method Based on Canny", Fifth International Conference on Intelligent Human-Machine Systems and Cybernetics, IEEE, 2013, pp. 527-530.
[11] Siti Salmah Yasiran, Abdul Kadir Jumaat, Aminah Abdul Malek, Fatin Hanani Hashim, Rozi Mahmud, "Microcalcifications Segmentation Using Three Edge Detection Techniques", International Conference on Electronics Design, Systems and Applications (ICEDSA), IEEE, 2012.
[12] Poonamdeep, Raman Maini, "Performance Evaluation of Various Thresholding Methods Using Canny Edge Detector", International Journal of Computer Applications, May 2013, pp. 26-32.
[13] Songcan Chen, Min Wang, "Seeking Multi-thresholds Directly from Support Vectors for Image Segmentation", Geometrical Methods in Neural Networks and Learning, 2005, pp. 335-344.
[14] Shohei Kumagai, Kazuhiro Hotta, "Counting and Radius Estimation of Lipid Droplets in Intracellular Images", IEEE International Conference on Systems and Cybernetics, 2012, pp. 67-71.
[15] R. C. Gonzalez and R. E. Woods, "Digital Image Processing", low price edition, Pearson Education.
[16] J. Singh, P. Kumar, "Selective Evaluation of Image Parameters Through Edge Detection Algorithm", IEEE International Conference on Advances in Engineering & Technology Research (ICAETR), pp. 196-200, 2014.


Arduino based Microcontroller

Manoj Kumar
Department of Electronics and Communication
B.G.I.E.T., Sangrur
Manoj95692@gmail.com

Vertika Garg
Department of Electronics and Communication
B.G.I.E.T., Sangrur
Gudia.gary95@gmail.com

ABSTRACT
The Arduino board and programming language are an inexpensive way for faculty to teach embedded system design in introductory courses. In this paper, the Arduino board, programming language and development environment are introduced. Additionally, simple "real-world" electronic interfaces for Arduino systems, such as LEDs, switches and sensors, are presented.

General Terms
Arduino system, open-source hardware platform.

KEYWORDS
Open hardware, prototyping, microcontrollers, open source, education, interactive art.

1. INTRODUCTION
Arduino is a movement, not just a microcontroller. It was founded by Massimo Banzi and David Cuartielles in 2005, based on the Wiring platform, which dates to 2003, as an open-source hardware platform [1]. Its open-source development offers an easy-to-learn language and libraries based on the Wiring language. There are many tools for prototyping with electronics; they are used for everything from new musical instruments to intelligent rooms, custom input devices and interactive art pieces. These tools attempt to reduce the difficulty of working with electronics and expand the number of people who can experiment with the medium [4]. Many of them, however, are either commercial products, expensive and closed, or research projects unavailable for use by most people. Others consist only of circuit boards, providing no tools to simplify their programming. The open-source movement, meanwhile, has shown that useful and robust software can be created by a distributed team of volunteers freely sharing the results of their efforts. Open-source projects often gather strong communities of people working at many levels: some work on the core code, others contribute small extensions, still others write documentation or offer support, with the majority simply making use of a quality product. Can we apply the principles of open source to hardware and electronics? What does it mean to make a circuit board which is open and extensible, but still usable with little effort? How can we make working with electronics easy, cheap, and quick? These are some of the questions that led to the creation of the Arduino prototyping platform. This paper discusses related work, the educational and design context within which Arduino was developed, the philosophy behind it, the platform itself, both hardware and software, and the community that has formed around Arduino.

Arduino comes with an integrated development environment (based on the Processing programming environment), available for Windows, Mac and Linux. Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. It is intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments. Arduino can sense the environment by receiving input from a variety of sensors and can affect its surroundings by controlling lights, motors and other actuators. The microcontroller on the board is programmed using the Arduino programming language and the Arduino development environment. Arduino projects can be stand-alone or they can communicate with software running on a computer (e.g. Flash, Processing, MaxMSP). The boards can be built by hand or purchased preassembled; the software can be downloaded for free.

The Many Flavors of Arduino
1. Arduino Uno
2. Arduino Leonardo
3. Arduino LilyPad
4. Arduino Mega
5. Arduino Mini
6. Arduino Pro Mini
7. Arduino BT

FIG 1.1 Arduino Based Microcontroller


Arduino is a tool for making computers that can sense and control more of the physical world than your desktop computer. It is an open-source physical computing platform based on a simple microcontroller board, and a development environment for writing software for the board. Arduino can be used to develop interactive objects, taking inputs from a variety of switches or sensors and controlling a variety of lights, motors and other physical outputs. Arduino projects can be stand-alone, or they can communicate with software running on your computer. The board can be assembled by hand or purchased preassembled; the open-source IDE can be downloaded for free. The Arduino programming language is an implementation of Wiring, a similar physical computing platform, which is based on the Processing multimedia programming environment. There are many other microcontrollers and microcontroller platforms available for physical computing. The Parallax BASIC Stamp, NetMedia's BX-24, Phidgets, MIT's Handyboard and many others offer similar functionality. All of these tools take the messy details of microcontroller programming and wrap them up in an easy-to-use package. Arduino also simplifies the process of working with microcontrollers, but it offers some advantages over these systems for teachers, students and interested amateurs.

2. RELATED WORK
There are many different microcontroller development tools available for use in teaching and prototyping. Those that are most popular outside the electrical engineering community offer some balance between cost, expandability and ease of use. Arduino also seeks to balance these factors, while making up for some of the shortcomings of existing platforms. This section presents a survey of a few of the more popular tools on the market, followed by an analysis of how their strengths and weaknesses affected the design choices behind Arduino.

At the highest level of abstraction are microcontroller tools such as Infusion Systems' MicroDig, Phidgets, and Stanford's d.tools. Modules at this level are generally not programmable by the end user; instead, they are configured using a desktop tool. These tools are generally not standalone devices, but must be connected to a personal computer in order to be useful. Infusion Systems' MicroDig [5] is a sensor interface box with a MIDI interface. Its hardware interface consists of an analog-to-MIDI controller with 8 analog inputs, and various sensor modules that mate with the controller. Users attach pre-packaged sensors to the inputs, and connect the controller to a MIDI output device. The values of the sensors are output as MIDI values. The MicroDig is handy for teaching students with some knowledge of MIDI but little programming or electronics knowledge how to design hardware interfaces, because it requires little new knowledge. It is an expensive platform, however, with the basic kit costing $399, and it requires that the connecting equipment be MIDI compatible (for a more in-depth comparison, see [2]).

Phidgets [9] is a modular system of sensor controllers, motor controllers, RFID readers, and other special-function devices, all united by a common USB interface and a set of desktop software APIs. Each Phidget device is a self-contained electronic device, whether it is a sensor, motor or LED controller, or a more complex device like an LCD display. The user needs almost no electronics knowledge to use Phidgets. Each device is connected to a desktop computer in order to access its sensor data or to control it. The development team has released application programming interfaces for the system in several languages, including Visual Basic, VBA (Microsoft Access and Excel), LabView, Java, Delphi, C, C++, and Max/MSP. The modules are relatively inexpensive, ranging from $10 to $100. The devices cannot be used as standalone units, however, and must be interfaced to a personal computer. The learning curve for Phidgets is somewhat steeper than for the MicroDig, but it is useful for those familiar with software development who want to begin making hardware interfaces.

d.tools [11] is a high-level hardware and software tool developed at Stanford University's HCI group that addresses some of the shortcomings of others in this class. First, d.tools is a more flexible system: the d.tools software can be used with other hardware platforms, as long as that hardware is running a firmware that can communicate using the d.tools protocol. Wiring, Arduino and Phidgets hardware have been used with d.tools. The software is written in Java as a plugin for the Eclipse universal tool platform, and can theoretically run on any Java-capable operating system. Like Phidgets, the hardware is made up of a series of plug-and-play USB modules, each of which communicates with the d.tools software. d.tools also offers a suite of analysis tools which allow users to see the results from their devices graphed on screen, and time-indexed against a video of the person using the device. d.tools is an open-source platform, and as of this writing, the hardware is not commercially available.

Moving down a level of abstraction, there are a number of mid-level microcontroller modules. Controllers in this range feature a microcontroller with its necessary support electronics (crystal, power regulator, etc.) on a small module. These modules assume users can build input and output circuits to attach to the module. They are usually programmed in BASIC, or some variation of C, and attach to the programming environment on a personal computer using a serial or USB connection. The Parallax BASIC Stamp [8] is the most well-known of these modules. Also in this family are NetMedia's BasicX [7] and BASIC Micro's Basic ATOM [4] processors. Arduino most resembles these. The main advantage offered by the mid-level controllers is programmability in a higher-level language, with a simple programming interface. The disadvantage is generally that the programming languages are very limited, and the lower levels of the controller itself are not accessible to the user at all. Students taught to use these controllers generally reach a problem beyond the module's capability within their first semester. The programming environments are almost all available for Windows operating systems only, though there are some exceptions.

3. WHAT ARDUINO IS NOT
It is not a chip (IC). It is not a board (PCB). It is not a company or manufacturer. It is not a programming language. It is not a computer architecture.

Arduino-like systems
1. Cortino (ARM)
2. Xduino (ARM)
3. LeafLabs Maple (ARM)
4. BeagleBoard (Linux)
5. Wiring Board (Arduino predecessor)

Arduino programming
Arduino programs are written in C and C++. The Arduino IDE comes with a software library called "Wiring", from the original Wiring project, which makes many common input/output operations much easier. Users only need to define two functions to

189
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

make a runnable cyclic executive program:
* setup(): a function run once at the start of a program that can initialize settings.
* loop(): a function called repeatedly until the board powers off.

Fig 1.2 Arduino Software

Arduino is open-source hardware: the Arduino hardware reference designs are distributed under a Creative Commons Attribution Share-Alike 2.5 licence and are available on the Arduino web site. Layout and production files for some versions of the Arduino hardware are also available. The source code for the IDE is available and released under the GNU General Public Licence, version 2. "Arduino is an open-source electronics platform based on easy-to-use hardware and software. It's intended for anyone making interactive projects."

CONCLUSION
In this guide we have demonstrated that it is indeed possible for the Arduino to juggle multiple independent tasks while remaining responsive to external events like user input. We have learned how to define tasks as state machines that can execute independently of other state machines at the same time, and how to encapsulate these state machines into C++ classes to keep our code simple and compact. These techniques will not turn your Arduino into a supercomputer, but they will help you get the most out of this small but surprisingly powerful little processor.

ACKNOWLEDGMENTS
Our thanks to Dr. Shweta Rani and Mr. Vinay Khator, who have contributed towards the development of this paper.

REFERENCES
1. J. Sarik and I. Kymissis, "Lab Kits Using the Arduino Prototyping Platform," Frontiers in Education Conference, Washington DC, 27-30 October 2010, pp. 1-5.
2. J. Provost, "Why the Arduino Won and Why It's Here to Stay," Tech. Ref.
3. General information on the Arduino, http://arduino.cc/es/main/Arduino Board Diecimila, last consulted 2 Dec. 2010.
4. L. Buechley, M. Eisenberg, J. Catchen, and A. Crockett, "The LilyPad Arduino: Using Computational Textiles to Investigate Engagement, Aesthetics, and Diversity in Computer Science Education," in Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems, ser. CHI '08, 2008, pp. 423-432. [Online]. Available: http://doi.acm.org/10.1145/1357054.1357123
5. 8-bit AVR Microcontroller, ATMEL, 2011. [Online]. Available: http://www.atmet.com/dyn/resources/prod-documents/doc8161.pdf
6. Barragan, Hernando, "Wiring: Prototyping Physical Interaction Design," IDI Ivrea, June 2004, people.interactionivrea.it/h.barragan/thesis/, 25 Jan. 2007.
7. Reas, Casey and Fry, Ben, "Processing.org: A Networked Context for Learning Computer Programming," ACM SIGGRAPH, 2005.
8. Infusion Systems, "MicroSystem," Infusion Systems, 25 Jan. 2007.
9. W. Durfee, "Arduino Microcontroller Guide," University of Minnesota, www.me.umn.edu/courses/me2011/arduino/, October 2011.
10. J. O. Lim and T. Oppus, "Arduino Laboratory Manual: Using the Advanced Board," Bughow Electronic Solutions and Technologies, Inc., 2011.
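The multitasking pattern this conclusion describes, several independent state machines serviced from one non-blocking loop, can be mocked off-board. The following is a hypothetical plain-Python sketch of the idea only (real sketches are C++ on the board, and the names Blinker, run and the simulated millisecond clock are illustrative, not code from the paper):

```python
class Blinker:
    """A task written as a state machine: it flips its output state
    whenever its interval has elapsed, without blocking anyone else."""
    def __init__(self, interval_ms):
        self.interval_ms = interval_ms
        self.state = False
        self.last_change = 0
        self.toggles = 0

    def update(self, now_ms):
        if now_ms - self.last_change >= self.interval_ms:
            self.state = not self.state
            self.toggles += 1
            self.last_change = now_ms

def run(tasks, total_ms):
    """The cyclic executive: setup happened in __init__; each pass of
    the loop just offers every task the current time."""
    for now in range(total_ms + 1):  # simulated millisecond clock
        for task in tasks:
            task.update(now)

fast, slow = Blinker(10), Blinker(25)
run([fast, slow], 100)  # fast toggles every 10 ms, slow every 25 ms
```

Each task keeps its own timestamp, so neither ever blocks the other; on real hardware, millis() would supply the clock inside loop().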


Application of DGS in Microstrip Patch Antenna

Sushil Kakkar, Assistant Professor, ECE Department, BGIET, Sangrur, kakkar778@gmail.com
A. P. Singh, Professor, ECE Department, SLIET, Sangrur, aps.aps67@gmail.com
T. S. Kamal, Professor, ECE Department, RIET, Abhohar, tsk1005@gmail.com

ABSTRACT
This paper aims to analyze the effect of the application of a defected ground structure (DGS) on the resonating properties of a microstrip patch antenna. In order to obtain comparative results, a small-size DGS has been introduced in the ground plane exactly under the feeding strip of the patch antenna. The representative results obtained from the simulations reveal that the proposed antenna with DGS possesses better impedance matching and wider bandwidth than its original structure without DGS. The presented antenna is a small-size, low-profile antenna suitable for X-band applications.

Keywords
Bandwidth, DGS, Return Loss.

1. INTRODUCTION
A defected ground structure (DGS) can be used as an alternative approach to improve the resonating performance of microstrip patch antennas [1]. It can be formed by making a simple slot in the ground of the antenna, which affects the distribution of the current in the ground plane, leading to a controlled propagation of electromagnetic (EM) waves through the dielectric layer [2,3]. As per the literature reviewed, DGS provides considerable miniaturization, good impedance matching and wider bandwidth in the design engineering of patch antennas [4]. DGS structures are periodic lattices which can provide effective and flexible control over the propagation of the EM waves within a particular band [5]. The main limitation of microstrip patch antennas is their narrow band, and so far several solutions have been provided in the literature to increase the bandwidth, such as the introduction of slots in the patch, higher-dielectric substrates, the coplanar waveguide feeding method, and DGS. Among them, DGS can be the better option because, besides wider bandwidth, it also provides miniaturization of the antenna structure [6-8]. Following a similar approach, an attempt has been made in this paper to introduce defects in the ground plane exactly beneath the feed strip to provide better impedance matching and bandwidth.

2. DESIGN AND STRUCTURE
The proposed antenna structure is a simple patch antenna with a square patch having a side (S2) of 10 mm. The feeding strip is designed in such a way as to obtain 50 ohms impedance; the width of the strip (W) is 5 mm. A square shape is the candidate structure for the ground plane, having a side (S1) of 20 mm. In order to further enhance the resonating performance of the proposed antenna, a small-size rectangular DGS has been introduced in the structure. The length (L1) of the DGS is 10 mm and the width (W1) of the defect is 1 mm. The structure of the proposed antenna with and without DGS is shown in fig. 1.


Fig 1: Geometry of proposed antenna (a) without DGS (b) with DGS (c) bottom view of DGS

3. RESULTS AND DISCUSSION
3.1 Resonant Parameters
The analysis of the presented antenna has been done using the method-of-moments-based IE3D EM solver. Graphical plots of the scattering parameters and input impedance of both antennas (with and without DGS) are given in fig. 2 and fig. 3. The obtained results reveal that the antenna structure with the DGS implemented under its feeding strip provides wider bandwidth and better impedance matching in comparison to its original structure without DGS. The detailed resonating parameters of both antennas are given in table 1.

Fig 2: Simulated s-parameters of proposed antenna for both with DGS and without DGS structure.

Fig 3: Simulated input impedance of proposed antenna for both with DGS and without DGS structure.

Table 1. Comparison of resonating properties with and without DGS

Structure     Resonating Frequency (GHz)   Return Loss (dB)   Input Impedance (ohms)   Bandwidth (GHz)
Without DGS   11.26                        -18.95             41.14 - j5.25            2.91
With DGS      11.63                        -29.18             48.09 - j2.82            3.48

3.2 Radiation Patterns
The radiation patterns of the proposed antenna with DGS are shown in fig. 4. The presented results reveal that the radiation pattern in the E-plane is bidirectional and in the H-plane is omnidirectional.
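The improvement reported in Table 1 can be sanity-checked from standard definitions alone: return loss maps to a reflection coefficient |Gamma| = 10^(-RL/20) and hence to a VSWR, and each absolute bandwidth corresponds to a fractional bandwidth BW/f0. The following is an illustrative cross-check script, not part of the original analysis:

```python
def reflection_coefficient(return_loss_db):
    """|Gamma| from return loss: RL(dB) = -20*log10(|Gamma|)."""
    return 10 ** (-abs(return_loss_db) / 20.0)

def vswr(return_loss_db):
    """Voltage standing wave ratio implied by a given return loss."""
    g = reflection_coefficient(return_loss_db)
    return (1 + g) / (1 - g)

def fractional_bandwidth(bw_ghz, f0_ghz):
    """Absolute bandwidth as a fraction of the resonant frequency."""
    return bw_ghz / f0_ghz

# Values taken from Table 1
vswr_without = vswr(-18.95)   # about 1.25
vswr_with = vswr(-29.18)      # about 1.07, i.e. a closer match to 50 ohms
fbw_without = fractional_bandwidth(2.91, 11.26)
fbw_with = fractional_bandwidth(3.48, 11.63)
```

Both figures of merit move the same way as the table: the DGS version is better matched and covers a larger fraction of its resonant frequency.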


Fig 4: Radiation pattern of the proposed antenna with DGS (a) E-plane (b) H-plane

3.3 Gain
The gain vs. frequency plot of the proposed antenna with DGS is shown in fig. 5. The maximum achievable gain of the antenna is 6.77 dBi at the 11.63 GHz resonant frequency.

Fig 5: Gain of the proposed antenna with DGS

4. CONCLUSION
This paper proposed a simple microstrip patch antenna with a DGS introduced beneath the feeding strip. In order to obtain the effect of the DGS, a critical simulation study has been done using the IE3D simulator. The presented results reveal that the proposed antenna with DGS provides better impedance matching and wider bandwidth than its original structure. The proposed antenna is a small-size, low-profile antenna that can be used for various wireless applications in the X-band region.

REFERENCES
[1] Weng et al., "An Overview on Defected Ground Structure," Progress In Electromagnetics Research B, vol. 7, pp. 173-189, 2008.
[2] D. Schlieter and R. Henderson, "High Q Defected Ground Structures in Grounded Coplanar Waveguide," Electronics Letters, vol. 48, no. 11, May 2012.
[3] S. Rani and A. P. Singh, "Fractal Antenna with Defected Ground Structure for Telemedicine Applications," International Journal on Communications, Antenna and Propagation, vol. 1, pp. 1-15, 2012.
[4] A. Arya, M. Kartikeyan and A. Patnaik, "On the Size Reduction of Microstrip Antenna Using DGS," Proceedings of IEEE Conference, 2010.
[5] A. Nouri and G. Dadashzadeh, "A Compact UWB Band-Notched Printed Monopole Antenna with Defected Ground Structure," IEEE Antennas and Wireless Propagation Letters, vol. 10, 2011.
[6] S. Rani and A. P. Singh, "On the Design and Optimization of New Fractal Antenna Using PSO," International Journal of Electronics, vol. 100, no. 10, pp. 1383-1397, 2012.
[7] S. Kakkar, S. Rani and A. P. Singh, "On the Resonant Behaviour Analysis of Small-Size Slot Antenna with Different Substrates," International Journal of Computer Applications, pp. 10-12, 2012.
[8] R. Azaro, L. Debiasi, M. Benedetti, P. Rocca and A. Massa, "A Hybrid Prefractal Three-Band Antenna for Multistandard Mobile Wireless Applications," IEEE Antennas and Wireless Propagation Letters, vol. 8, pp. 905-908, 2009.


Wavelength Assignment Problem In WDM Network


Gunvir Singh, Bhai Gurdas College of Engg. & Tech., Sangrur, Punjab, gunvirsingh@gmail.com
Abhinash Singla, Bhai Gurdas College of Engg. & Tech., Sangrur, Punjab, abhinash11@gmail.com
Sumit Malhotra, Bhai Gurdas College of Engg. & Tech., Sangrur, Punjab, sumitmalhotra.mail@gmail.com

ABSTRACT
Given a WDM network, the problem of routing and assigning wavelengths to lightpaths is of paramount importance. Clever algorithms are needed to ensure that the Routing and Wavelength Assignment (RWA) function is performed using a minimum number of wavelengths. The number of available wavelengths in a fiber link plays a major role in these networks and is increasing day by day. Once the route is defined for a connection, what is left is wavelength assignment. For wavelength assignment a number of algorithms have been proposed, but there is an urgent need for a wavelength assignment algorithm of low complexity. From the literature survey we can conclude that further work can be done in this field to reduce the blocking probability.

1. INTRODUCTION
Wavelength division multiplexing (WDM) is a very promising technology to meet the ever-increasing demands for high capacity and bandwidth. In a WDM network several optical signals are sent on the same fiber using different wavelength channels. Sometimes the term dense wavelength division multiplexing (DWDM) is used to distinguish the technology from broadband WDM systems, where two widely separated signals (typically 1310 nm and 1550 nm) share a common fiber; in DWDM up to 40 or 80 signals are combined on the same fiber. WDM networks are a viable solution for emerging applications, such as supercomputer visualization and medical imaging, which need to provide a high data transmission rate, low error rate and minimal propagation delay to a large number of users [1]. Traditionally, only a small fraction of the fiber capacity was used, but by using WDM it is possible to exploit this huge capacity more efficiently [2]. The possibility of using the existing fibers more efficiently makes WDM very attractive commercially, as it is very expensive to install new fibers in the ground. This is the case especially in densely populated areas like cities, where fibers must be dug under streets. WDM technology has been recognized as one of the key components of future networks, and its commercialization is progressing rapidly. Most important for the development of WDM technology was the invention of the Erbium-Doped Fiber Amplifier (EDFA), an optical fiber amplifier, in 1987. The optical fiber amplifier is a component capable of amplifying several optical signals at the same time without converting them first to the electrical domain (opto-electronic amplification). It is also important to note that EDFAs can amplify signals of different bit rates and modulations. Other important WDM components include lasers, receivers, wavelength division multiplexers, wavelength converters, optical splitters and tunable filters, amongst others. There is also wide interest in optical networking in the academic community, as it offers a rich research field from the component level up to the network protocols.

If there are multiple feasible wavelengths between a source node and a destination node, then a wavelength assignment algorithm is required to select a wavelength for a given lightpath. The wavelength selection may be performed either after a route has been determined, or in parallel with finding a route. The wavelength assignment problem is divided into two parts: the on-line and the off-line wavelength assignment problem.

There are two constraints that have to be kept in mind when trying to solve RWA:
i. Distinct wavelength assignment constraint: all lightpaths sharing a common fiber must be assigned distinct wavelengths to avoid interference. This applies not only within the all-optical network but in access links as well.
ii. Wavelength continuity constraint: the wavelength assigned to each lightpath remains the same on all the


links it traverses from source end-node to destination end-node.

The first constraint holds for the wavelength assignment problem in any wavelength routing network. The second constraint applies only to the simple case of wavelength routing networks that have no wavelength conversion capability inside their nodes.

Based on the order in which the wavelengths are searched, wavelength assignment methods are classified into most-used, least-used, fixed-order and random-order. In the most-used method, wavelengths are searched in non-increasing order of their utilization in the network; this method tries to pack the lightpaths so that more wavelength-continuous routes are available for requests that arrive later. In the least-used method, wavelengths are searched in non-decreasing order of their utilization in the network; this method spreads the lightpaths over different wavelengths, the idea being that a new request can find a shorter route and a free wavelength on it. In the fixed-order method, the wavelengths are searched in a fixed order: the wavelengths may be indexed, and the wavelength with the lowest index is examined first. In the random method, the wavelength is chosen randomly from the free wavelengths.

One possible way to overcome the bandwidth loss caused by the wavelength continuity constraint is to use wavelength converters at the routing nodes. A wavelength converter is an optical device capable of shifting one wavelength to another [13, 14]. The capability of a wavelength converter is characterized by its degree of conversion: a converter capable of shifting a wavelength to any one of D wavelengths is said to have conversion degree D, and the cost of a converter grows with increasing conversion degree. The basic function of the wavelength converter is to convert an input wavelength to a possibly different output wavelength within the operational bandwidth of the WDM system, in order to improve overall efficiency; hence the reuse factor is increased. Due to multi-wavelength fiber optic systems, interest is growing in converting signals from one wavelength to another.

2. PROBLEM FORMULATION
Although a lot of research work has been done on wavelength assignment for optimizing blocking performance in wavelength division multiplexed networks, not enough research has been done on comparing these algorithms. By analyzing how conventional algorithms perform, their shortcomings can be found, and further research can be done on removing those shortcomings.

3. CONVENTIONAL ALGORITHMS
The algorithms used for simulation are conventional algorithms such as the first-fit algorithm and the random algorithm. These algorithms can be illustrated as below:

1. First-fit algorithm: firstly, the wavelengths of the traffic matrix are sorted in non-decreasing order. The algorithm then steps through this sorted list to select candidate chains to be joined. Let uij be the next highest wavelength element in the sorted list. If nodes i and j are the end nodes of two chains, a larger chain is formed by joining the two ends; otherwise the next highest element is considered. This process is carried on until all chains have been merged into a single chain representing a linear topology.

2. Random algorithm: a wavelength is selected randomly from the available wavelengths. A number is generated randomly and the wavelength corresponding to this randomly generated number is assigned.

Algorithm First-fit
begin
  sort elements of U in non-decreasing order;
  while (two or more chains exist) do
  begin
    let uij be the next highest element in U;
    if (i and j are the end nodes of the two chains 'ki' and 'jl') then
      connect i and j to get the chain 'kl';
    discard uij;
  end;
end

Figure 3.1: First-fit Algorithm
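The path-based forms of these assignment rules (first-fit, random, and the most-used/least-used search orders described earlier) can be sketched as follows. This is an illustrative sketch rather than the simulator used in the paper: the helper names are hypothetical, `usage` maps each link to the set of wavelengths already assigned on it, and the chain-joining variant of first-fit in Figure 3.1 differs in detail.

```python
import random

def free_on_path(path_links, usage, w):
    """Continuity + distinct-wavelength check: w is usable only if it
    is unassigned on every link of the path."""
    return all(w not in usage[link] for link in path_links)

def first_fit(path_links, usage, num_wavelengths):
    """Lowest-indexed wavelength free on the whole path, else None."""
    for w in range(num_wavelengths):
        if free_on_path(path_links, usage, w):
            return w
    return None  # call is blocked

def random_fit(path_links, usage, num_wavelengths, rng=random):
    """Uniform choice among the wavelengths free on the whole path."""
    free = [w for w in range(num_wavelengths)
            if free_on_path(path_links, usage, w)]
    return rng.choice(free) if free else None

def utilization_order(usage, num_wavelengths, most_used=True):
    """Search order for most-used / least-used assignment: wavelengths
    sorted by how many links currently carry them."""
    count = {w: 0 for w in range(num_wavelengths)}
    for links in usage.values():
        for w in links:
            count[w] += 1
    return sorted(count, key=lambda w: count[w], reverse=most_used)
```

Feeding `utilization_order` into the same scan as `first_fit` turns it into the most-used or least-used strategy; only the search order changes.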


The algorithm for the random wavelength assignment is very simple, being limited to the generation of a random number, but the algorithm for first-fit is a bit more complex; it is illustrated in figure (3.1).

4. ANALYSIS OF WAVELENGTH ASSIGNMENT STRATEGIES
Here, the conventional wavelength assignment strategies are analyzed and compared with each other on the basis of blocking probability and fairness. The Erlang-B formula is used to compute the blocking probability. We have developed approximate analytical models for the clear-channel blocking probability of a network with arbitrary topology, both with and without wavelength translation. The goal of our analysis is to calculate and compare the blocking probability of different algorithms. The following assumptions are made in the analysis:

- The network is connected in an arbitrary topology. Each link has a fixed number of wavelengths.
- Each station has an array of W transmitters and W receivers, where W is the number of wavelengths carried by the fiber.
- Traffic is point to point.
- There is no queuing of connection requests; a blocked connection is immediately discarded.
- Link loads are mutually independent.
- Static routing is assumed.

We have considered the blocking probability for wavelength-non-convertible networks. The two constraints followed for the wavelength assignment are:

1. Wavelength continuity constraint: a lightpath must use the same wavelength on all links along the path from source to destination edge nodes.
2. Distinct wavelength constraint: all lightpaths using the same link must be allocated distinct wavelengths.

If there is no free wavelength available on any link, the call will be blocked. In simple terms the blocking probability can be calculated as the ratio of calls blocked to the total number of calls generated, as given in equation (4.1):

P_b = (number of blocked calls) / (total number of calls generated)    (4.1)

Also, the blocking probability on a link can be calculated by the famous Erlang-B formula, as given by Milan Kovacevic, in equation (4.2):

P_b(A, W) = (A^W / W!) / (sum_{k=0}^{W} A^k / k!)    (4.2)

where P_b(A, W) is the blocking probability for offered load A and W wavelengths.

5. PROPOSED LEAST-USED WAVELENGTH CONVERSION ALGORITHM
In this section we propose an efficient wavelength assignment algorithm for dynamic provisioning of lightpaths. The proposed algorithm is an improvement of the least-used wavelength assignment algorithm. We have used a mathematical model of WDM optical networks for minimization of blocking probability. The results of the proposed algorithm are compared with conventional wavelength assignment algorithms such as first-fit, best-fit, random and most-used. Simulation results prove that the proposed approach is very effective in minimizing the blocking probability of optical WDM networks.

6. COMPARISON OF PROPOSED WAVELENGTH ASSIGNMENT ALGORITHM WITH THE CONVENTIONAL ALGORITHMS
The blocking probability of the proposed algorithm is compared with the first-fit, best-fit, random and most-used wavelength assignment algorithms, as shown in figures (6.1) to (6.4). For this comparison we have fixed the number of channels to 20; the total number of links used in the network is also fixed at 20; and the total number of wavelengths used and the load per unit link are varied.

Figure 6.1: Comparison of the algorithms for load = 10 Erlangs; W = 20
Figure 6.2: Comparison of the algorithms for load = 30 Erlangs; W = 20
Figure 6.3: Comparison of the algorithms for load = 10 Erlangs; W = 10
Figure 6.4: Comparison of the algorithms for load = 10 Erlangs; W = 30

7. CONCLUSIONS
The blocking probability of the proposed algorithm is low in comparison to the conventional algorithms, so in situations where the algorithm of the given system can be changed, the proposed algorithm can be used. The proposed algorithm gives a blocking-free environment. Also, the simulation results prove that the blocking probability (percentage) increases with an increase in the number of nodes.

REFERENCES
[1] Anwar Alyatama, "Wavelength decomposition approach for computing blocking probabilities in WDM optical networks without wavelength conversions," Computer Networks, vol. 49, pp. 727-742, 2005.
[2] Junjun Wan, Yaling Zhou, Xiaohan Sun, Mingde Zhang, "Guaranteeing quality of service in optical burst switching networks based on dynamic wavelength routing," Optics Communications, vol. 220, pp. 85-95, 2003.
[3] I. Alfouzan, M. E. Woodward, "Some new load balancing algorithms for single-hop WDM networks," Optical Switching and Networking, vol. 3, pp. 143-157, 2006.
[4] F. Matera, D. Forin, F. Matteotti, G. Tosi-Beleffi, "Numerical investigation of wide geographical transport networks based on 40 Gb/s transmission with all optical wavelength conversion," Optics Communications, vol. 247, pp. 341-351, 2005.
[5] Abhisek Mukherjee, Satinder Pal Singh, V. K. Chaubey, "Wavelength conversion algorithm in an intelligent WDM network," Optics Communications, vol. 230, pp. 59-65, 2004.
[6] P. Rajalakshmi, Ashok Jhunjhunwala, "Wavelength reassignment algorithms for all-optical WDM backbone networks," Optical Switching and Networking, vol. 4, pp. 147-156, 2007.
[7] Mahesh Sivakumar, Suresh Subramaniam, "Blocking performance of time switching in TDM wavelength routing networks," Optical Switching and Networking, vol. 2, pp. 100-112, 2005.
[8] Raja Datta, Bivas Mitra, Sujoy Ghose, and Indranil Sengupta, "An Algorithm for Optimal Assignment of a Wavelength in a Tree Topology and its Application in WDM Networks," IEEE Journal on Selected Areas in Communications, vol. 22, no. 9, pp. 1589-1600, November 2004.
[9] Jianping Wang, Biao Chen and R. N. Uma, "Dynamic Wavelength Assignment for Multicast in All-Optical WDM Networks to Maximize the Network Capacity," IEEE Journal on Selected Areas in Communications, vol. 21, no. 8, pp. 1274-1284, October 2003.
[10] Nen-Fu Huang and Shiann-Tsong Sheu, "An Efficient Wavelength Reusing/Migrating/Sharing Protocol for Dual Bus Lightwave Networks," Journal of Lightwave Technology, vol. 15, no. 1, pp. 62-75, January 1997.
[11] Arun K. Somani and Murat Azizoglu, "Wavelength Assignment Algorithms for Wavelength Routed Interconnection of LANs," Journal of Lightwave Technology, vol. 18, no. 12, pp. 1807-1817, December 2000.
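The Erlang-B formula of equation (4.2) above is easiest to evaluate with its standard recurrence, which avoids computing large factorials. A minimal illustrative sketch (A is the offered load in Erlangs, W the number of wavelengths on the link):

```python
def erlang_b(load, wavelengths):
    """Blocking probability B(A, W) via the stable Erlang-B recurrence:
    B(A, 0) = 1;  B(A, w) = A*B(A, w-1) / (w + A*B(A, w-1))."""
    b = 1.0
    for w in range(1, wavelengths + 1):
        b = load * b / (w + load * b)
    return b
```

For example, erlang_b(2, 2) evaluates to 0.4, matching the closed form (2^2/2!) / (1 + 2 + 2^2/2!) = 2/5, and adding wavelengths at a fixed load always lowers the blocking probability.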


Comparative Study of Different Algorithms to Implement


Smart Antenna Array-A Review
Gurjinder Kaur, Punjabi University, er.gurjinderkaur90@gmail.com
Gautam Kaushal, Punjabi University, gautamkaushal@yahoo.com

ABSTRACT
Smart antennas are antenna arrays used to estimate spatial signal parameters, such as the DOA of a signal, to find beamforming vectors, and to steer the antenna beam in the desired direction of the target. Adaptive beamforming algorithms allow the antenna to steer the beam in the desired direction of interest while cancelling the interfering signals. The rapid growth of smart antennas makes use of different algorithms to implement them; this review paper compares those algorithms.

General Terms
Algorithms: LMS, SMI, QRD-RLS.

Keywords
Direction of arrival (DOA), Least mean square (LMS), Sample covariance matrix inversion (SMI), QR-decomposition recursive least squares (QRD-RLS).

1. INTRODUCTION
Smart antenna systems are generally used in acoustic signal processing, sonar, radio telescopes, radio astronomy and, above all, in cellular systems like WCDMA and UMTS. A smart antenna has two functions: DOA estimation and beamforming. DOA estimation uses the data received by the array to estimate the direction of arrival of the signal. Beamforming is the method used to shape the radiation pattern of the antenna array by constructively adding the phases of the signals in the directions of the targets and nulling the pattern towards the undesired mobiles. Smart antenna systems can generally be classified as either switched-beam or adaptive array systems. In a switched-beam system, multiple fixed beams in predetermined directions are used to serve the users. Adaptive beamforming uses antenna arrays, aided by strong signal-processing capability, to automatically change the beam pattern in correspondence with the changing signal environment. It not only directs maximum radiation in the direction of the desired target but also introduces nulls in undesirable directions while tracking the desired mobile user at the same time [1]. Fig. 1 shows a smart antenna system. Smart antenna technology can achieve a number of benefits: it can increase system capacity, greatly reduce interference, and increase power efficiency.

Fig.1 Smart Antenna System

The smart antenna electronically steers a phased array by weighting the amplitude and phase of the signal at each array element in response to changes in the propagation environment. Capacity improvement is achieved by effective co-channel interference cancellation and multipath fading mitigation. Figure 2 shows the concept of a smart antenna [2].

Fig.2 Concept of Smart Antenna

2. SMART ANTENNA ALGORITHMS
The LMS algorithm is used to cancel the unwanted interference [3-6]. The LMS algorithm uses continuous adaptation: the weights are adjusted as the data is sampled, such that the resulting weight vector sequence converges to the optimal solution. It is an iterative beamforming algorithm that estimates the gradient vector from the available data, making successive corrections to the weight vector in the direction of the negative of the gradient vector, which finally converges to the minimum MSE. Fig. 3 shows an adaptive beamforming network. LMS is an adaptive beamforming algorithm defined by the following equations for input signal u(n):

y(n) = w^T(n-1) u(n)
e(n) = d(n) - y(n)
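These relations, together with the weight update w(n) = w(n-1) + mu*e(n)*u*(n) that follows, form the whole LMS recursion. A minimal single-weight sketch in plain Python (illustrative only, not the papers' simulation):

```python
def lms(u, d, mu):
    """One LMS pass in the single-channel case.
    u: list of complex input samples; d: desired (pilot) samples.
    Returns the final weight and the error magnitude per step."""
    w = 0j
    err = []
    for un, dn in zip(u, d):
        y = w * un                       # y(n) = w^T(n-1) u(n)
        e = dn - y                       # e(n) = d(n) - y(n)
        w = w + mu * e * un.conjugate()  # w(n) = w(n-1) + mu e(n) u*(n)
        err.append(abs(e))
    return w, err

# A constant pilot through a unit channel: the weight should move
# toward 1 and the error should shrink on every step.
w, err = lms([1 + 0j] * 20, [1 + 0j] * 20, mu=0.5)
```

With an array, w and u become vectors and the same three lines apply element-wise; the pilot sequence d(n) plays exactly the training role described above.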


w(n) = w(n-1)+µe(n)u*(n) SMI employs direct matrix inversion the convergence of this
algorithm is much faster compared to the LMS algorithm.
where y(n) is the filter output, e(n) is the error signal
between filter output and desired signal d(n) at step n . d(n) is
the training sequence of known symbols (also called a pilot
signal), is required to train the adaptive weights. Enough
training sequence of known symbols must be present to assure
convergence. w(n) update function for the LMS algorithm.
Least Mean Square (LMS) algorithm, proposed by Widrow
and Hoff in 1959 [9] is an adaptive algorithm, which uses a
gradient-based method of steepest decent. It is used in
adaptive antennas which is a is a multi-beam adaptive array
with its gain pattern being adjusted dynamically [1-3]. In
recent decades, it has been mostly used in different areas such
as mobile communications, radar, sonar, medical imaging,
radio astronomy etc. Especially with the increasing demand
for improving the capacity of mobile communications,
adaptive antenna is introduced into mobile systems to
decrease the effect of interference and improve the spectral efficiency. The SMI algorithm for adaptively adjusting the array weights uses block adaptation. The statistics are estimated from a temporal block of array data and used in an optimum weight equation. In the literature, there have been many studies on different versions of the LMS and SMI algorithms used in adaptive antennas [10]. The SMI algorithm has a faster convergence rate since it employs direct inversion of the covariance matrix. Sample Matrix Inversion (SMI) is also known as the block adaptive approach because it involves block implementation or block processing, i.e., a block of samples of the filter input and the desired output are collected and processed together to obtain a block of output samples. The process involves the serial-to-parallel conversion of the input data, parallel processing of the collected data and parallel-to-serial conversion of the resulting output data. The computational complexity can be further decreased by parallel processing of the data samples. Thus, in this type of algorithm the weights are adapted block by block, which increases the convergence rate of the algorithm and further reduces the computational complexity. Here the input is partitioned into blocks 1, 2, 3, ..., k, where k is the number of blocks.

Compared to the SMI algorithm, the LMS algorithm is relatively simple; it does not require correlation function calculation nor does it require matrix inversions.

The RLS algorithm provides the solution to the slow convergence of LMS and SMI with the help of a QRD least-squares processing solution. The antenna array contains three types of processing cells: boundary cells, internal cells and output cells. The boundary cells perform "vectoring" on complex input samples to cancel out their imaginary part. Rotation angles are formed by the internal cells. The output cells in the linear array process the elements of the upper triangular array to perform the required back substitution [2] to produce the beamformer weights.

Fig.3 LMS Adaptive beamforming network

Fig.4 Adaptive RLS beamformer

The QRD-RLS algorithm is used to solve least-squares problems. The decomposition is the basis for the QR algorithm, which is used to produce the eigenvalues of a matrix. QR decomposition is one of the best numerical procedures for solving the recursive least-squares estimation problem. It involves the use of numerically well-behaved unitary rotations and operates on the input only [3]. Fig.4 shows the adaptive RLS beamformer. A direct RLS implementation would require floating-point precision, or very long fixed-point word lengths, due to its numerical ill-conditioning. In addition to multiply/add operations, a standard RLS implementation also requires divide operations.

2.2 The QRD process is formed by a sequence of two operators. These are the unitary rotations that convert complex input data to real data with an associated angle, and element combiners that decimate the selected elements of the input data set one by one. The QRD implementation was attained by using the Xilinx System Generator [12] for DSP model-based design flow. This is a tool chain that extends the MathWorks Simulink framework with FPGA hardware generation capabilities. System Generator is a visual design environment that allows the system developer to work at a suitable level of abstraction from the target hardware, and to use the same computation graph not only for simulation and verification, but for FPGA hardware implementation. System Generator blocks are bit- and cycle-true behavioral models of FPGA intellectual property components, or library elements.

3. FPGA based QRD

The QR-decomposition (QRD)-based recursive least squares (RLS) algorithm is usually used as the weight-update algorithm for its fast convergence and good numerical properties. The updated beam-former weights are then used
200
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
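As an illustration of the gradient-based steepest-descent update behind the LMS beamformer described in Section 2, the following Python/NumPy sketch adapts the array weights sample by sample. The signal model, function name and step size here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def lms_beamformer(X, d, mu=0.01):
    """Adapt array weights sample by sample with the complex LMS rule.

    X  : (num_snapshots, num_elements) complex array snapshots
    d  : (num_snapshots,) desired (reference) signal
    mu : step size controlling the steepest-descent update
    """
    num_snapshots, num_elements = X.shape
    w = np.zeros(num_elements, dtype=complex)  # initial weights
    for n in range(num_snapshots):
        x = X[n]
        y = np.vdot(w, x)             # array output w^H x
        e = d[n] - y                  # error against the reference
        w = w + mu * np.conj(e) * x   # steepest-descent weight update
    return w
```

The small step size trades convergence speed for stability, which is exactly the slow-convergence behavior the text attributes to LMS.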
for multiplication with the data that has been transmitted through the dedicated physical data channel (DPDCH). Maximal ratio combining (MRC) of the signals from all fingers is then performed to yield the final soft estimate of the DPDCH data.

Fig.4 QR-decomposition-based least squares

The least-squares algorithm attempts to solve for the coefficient vector c from X and y. Fig.4 shows the QRD-based least-squares process [13]. To realize this, the QR-decomposition algorithm is first used to transform the matrix X into an upper triangular matrix R (an N x N matrix) and the vector y into another vector u such that Rc = u. The coefficient vector c is then computed using a procedure called back substitution, which involves solving these equations:

c_N = u_N / R_NN

c_i = (u_i - Σ_{j=i+1..N} R_ij c_j) / R_ii, for i = N-1, N-2, ..., 1

The beamformer weight vector c is related to the R and u outputs of the triangular array as Rc = u. Since R is an upper triangular matrix, c can be solved by the back-substitution procedure [14]. The substitution procedure works on the outputs of the QR-decomposition and involves mostly multiply and divide operations that can be accurately executed in FPGAs with embedded soft processors. The software can then complete the multiply operation in a single clock cycle. Since hardware dividers generally are not available, the divide operation can be implemented as a custom logic block that becomes part of the FPGA-resident microprocessor. Between the multiply and divide accelerators, back substitution becomes easy and efficient. There are a number of beamforming architectures and adaptive algorithms that give good performance under different scenarios, such as transmit/receive adaptive beamforming and transmit/receive switched beamforming. FPGAs with embedded processors are flexible by nature, providing options for various adaptive signal-processing algorithms. The standards for next-generation networks are still evolving, and this generates an element of risk for beamforming ASIC implementations. Transmit beamforming, for example, utilizes feedback from the mobile terminals [15-16]. The number of bits provided for feedback in the mobile standards can determine the beamforming algorithm that is used at the base station. Moreover, future base stations are expected to support transmit diversity, including space/time coding and multiple-input, multiple-output (MIMO) technology. Since FPGAs are remotely upgradeable, they minimize the risk of depending on evolving industry standards while providing another option for gradual deployment of additional transmit diversity schemes.

Each element of the antenna array may also be described as a CORDIC block [17]. CORDIC describes a method to perform a number of functions including trigonometric, hyperbolic and logarithmic functions.

Fig.5 Configuration of systolic CORDIC device

Systolic arrays are known to reduce processing time by parallel implementation of an RLS algorithm based on QR decomposition. Here u1(n) is the input signal series for the first tap and u2(n) the input signal series for the second tap; the values of the tap coefficients can also be obtained as an output signal e [18-19]. Fig.5 shows the configuration of the systolic CORDIC device. In order to perform processing of a sequential least-squares algorithm based on QR decomposition, a final cell provides the estimated error from the calculated values of the boundary cells. QRD utilizes a unidirectional processor array structure for the smart antenna to cancel out the undesired side lobes, increase receiver sensitivity and eliminate errors. It is the most suitable algorithm, compared with LMS and SMI, to implement a smart antenna. Therefore, one could select the systolic array structure of [7] when the polynomial order is small while preferring the unidirectional array of [9] for higher polynomial orders.

4. CONCLUSION

Algorithms for efficient direction-of-arrival estimation and for electronically steering the beam of a smart antenna are compared. The LMS algorithm is simple: it does not require correlation function calculation or matrix inversion, but it has a slow speed of convergence and lower numerical stability. SMI is faster than the LMS algorithm but its computational burden can cause problems. QRD-RLS is less complex than LMS and SMI and provides fast convergence. It eliminates almost all errors and has good numerical properties. In further work, QRD-RLS with simulation results will be used to implement a smart antenna array.
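The back-substitution step described in Section 3 above, c_N = u_N/R_NN and c_i = (u_i - Σ_{j>i} R_ij c_j)/R_ii, can be sketched directly in Python/NumPy. This is only an illustrative sketch: NumPy's QR factorization stands in for the Givens-rotation triangular array, and the data values are made up:

```python
import numpy as np

def back_substitute(R, u):
    """Solve R c = u for c, where R is upper triangular (N x N).

    Implements c_N = u_N / R_NN and, for i = N-1, ..., 1,
    c_i = (u_i - sum_{j>i} R_ij c_j) / R_ii.
    """
    N = R.shape[0]
    c = np.zeros(N, dtype=R.dtype)
    for i in range(N - 1, -1, -1):  # bottom row upwards
        c[i] = (u[i] - R[i, i + 1:] @ c[i + 1:]) / R[i, i]
    return c

# Usage: QR-factorize the data matrix X, rotate y into u = Q^H y,
# then back-substitute for the weight vector c.
X = np.array([[2.0, 1.0], [0.0, 3.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 1.0])
Q, R = np.linalg.qr(X)           # X = Q R
c = back_substitute(R, Q.T @ y)  # least-squares solution of X c ≈ y
```

As the text notes, only multiplies and one divide per coefficient are needed, which is what makes this step cheap on an FPGA with a soft processor.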
5. ACKNOWLEDGEMENTS

I thank Professor Gautam Kaushal, who contributed towards the development of this review research.

REFERENCES

[1] Suraya Mubeen, A. M. Prasad and A. Jhansi Rani, "Smart Antenna: its Algorithms and Implementation," 2012.

[2] L. C. Godara, "Applications of Antenna Arrays to Mobile Communications, Part I: Performance Improvement, Feasibility, and System Considerations," Proc. IEEE, vol. 85, pp. 1031-1060, July 1997.

[3] J. Blass, "Multidirectional Antenna: A New Approach to Stacked Beams," IRE International Conference Record, vol. 8, part 1, 1960.

[4] S. Mano, et al., "Application of Planar Multibeam Array Antennas to Diversity Reception," Electronics and Communications in Japan, Part 1, vol. 79, no. 11, pp. 104-112, 1996.

[5] J. Winters, "Smart Antennas for Wireless Systems," IEEE Personal Communications, vol. 1, pp. 23-27, Feb. 1998.

[6] G. Tsoulos and M. Beach, "Wireless Personal Communications for the 21st Century: European Technological Advances in Adaptive Antennas," IEEE Communications Magazine, Sep. 1997.

[7] L. Acar and R. T. Compton, "The Performance of an LMS Adaptive Array with Frequency Hopped Signals," IEEE Transactions on Aerospace and Electronic Systems, vol. 21, no. 3, pp. 360-371, 1985.

[8] Y. Ogawa, et al., "An LMS Adaptive Array for Multipath Fading Reduction," IEEE Transactions on Aerospace and Electronic Systems, vol. 23, no. 1, pp. 17-23, 1987.

[9] B. Widrow and M. A. Lehr, "30 years of adaptive neural networks: Perceptron, madaline, and backpropagation," Proc. IEEE, vol. 78, pp. 1415-1442, 1990.

[10] R. Yonezawa and I. Chiba, "A Combination of Two Adaptive Algorithms SMI and CMA," IEICE Trans. on Communications, vol. 84, no. 7, 2001.

[11] E. N. Frantzeskakis and K. J. R. Liu, "A class of square root and division free algorithms and architectures for QRD-based adaptive signal processing," IEEE Trans. Signal Process., vol. 42, no. 9, pp. 2455-2469, Sep. 1994.

[12] B. M. Lee and R. J. P. de Figueiredo, "Adaptive predistorter for linearization of high-power amplifiers in OFDM wireless communications," Circuits Syst. Signal Process., vol. 25, no. 1, pp. 59-80, Feb. 2006.

[13] Ray Andraka, "A Survey of CORDIC Algorithms for FPGA Based Computers," International Symposium on Field Programmable Gate Arrays, 1998.

[14] Siva D. Muruganathan and Abu B. Sesay, "A QRD-RLS Based Predistortion Scheme for High Power Amplifier Linearization," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 53, no. 10, October 2006.

[15] A. van Zelst, "MIMO-OFDM for Wireless LANs," Ph.D. thesis, 2004.

[16] A. Öcalan, A. Savaşçıhabeş, A. Görgeç, İ. Ertuğ and E. Yazgan, "Compact space-multimode diversity stacked circular antenna array for 802.11n MIMO-OFDM WLANs," LAPC 2009, Loughborough, 2009.

[17] A. Öcalan, A. Savaşçıhabeş, A. Görgeç, İ. Ertuğ and E. Yazgan, "Compact space-multimode diversity stacked circular microstrip antenna array for 802.11n MIMO-OFDM WLANs," LAPC 2009, Loughborough, 2009.

[18] J. E. Volder, "The CORDIC Trigonometric Computing Technique," IRE Trans. Electronic Computing, vol. EC-8, pp. 330-334, 1959.

[19] Dongdong Chen and Mihai Sima, "Fixed-Point CORDIC-Based QR Decomposition by Givens Rotations on FPGA," International Conference on Reconfigurable Computing and FPGAs, 2011.
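CORDIC, referenced above in [13] and [18-19], reduces magnitude and angle computation to shift-and-add iterations, which is what makes the boundary cells' "vectoring" operation hardware-friendly. A minimal floating-point sketch of vectoring mode follows; real hardware implementations use fixed-point arithmetic and a precomputed gain constant, so this is an illustration only:

```python
import math

def cordic_vectoring(x, y, iterations=32):
    """Rotate (x, y) onto the x-axis using shift-add CORDIC steps,
    returning (magnitude, angle). The CORDIC gain K is compensated
    at the end (hardware would use a precomputed constant)."""
    angle = 0.0
    K = 1.0  # accumulated CORDIC gain
    for i in range(iterations):
        d = -1.0 if y > 0 else 1.0               # drive y towards zero
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        angle -= d * math.atan(2.0**-i)          # accumulate rotation angle
        K *= math.sqrt(1.0 + 2.0**(-2 * i))
    return x / K, angle
```

Each iteration multiplies only by a power of two (a shift in hardware), so no hardware multiplier or divider is needed inside the loop.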
A REVIEW OF DETERMINISTIC ENERGY-EFFICIENT CLUSTERING PROTOCOLS FOR WIRELESS SENSOR NETWORKS

Gurjit Kaur, M.Tech (ECE) Student, BGIET, Sangrur, gurjit.sidhu510@gmail.com
Shweta Rani, Assistant Professor, BGIET, Sangrur, shwetaranee@gmail.com
ABSTRACT

Wireless Sensor Network (WSN) technologies have recently been used for monitoring purposes in various domains, from industry to our home environments, owing to their ability to intelligently monitor remote locations. The DEC (Deterministic Energy-Efficient Clustering) protocol is fast, distributed, self-organizing and more energy efficient than the existing protocols. DEC uses a simplified approach that minimizes the computational overhead needed to self-organize the sensor network. Our simulation results demonstrate better performance with respect to energy consumption, which is reflected in the network lifetime in both homogeneous and heterogeneous settings, when compared with the existing protocols.

Keywords:
Wireless Sensor Networks, clustering, DEC protocol, energy efficient, sink.

1. INTRODUCTION

A wireless sensor network (WSN) is a wireless network that consists of distributed sensors that monitor certain conditions at different locations. A WSN is often described as a network of small, low-complexity devices, referred to as sensor nodes, that sense the environment, gather information from the sensing field and communicate through wireless links; the collected data is forwarded, via multi-hop relaying, to a sink that may use it locally or pass it on to other networks [1]. The sensor nodes are normally scattered in a sensor field as indicated in Fig. 1. Each of these scattered sensor nodes has the ability to collect data and route it back to the sink and on to the end users. Data is routed back to the end user through a multihop infrastructure via the sink, as shown in Fig. 1. The sink may communicate with the task manager node through the Internet or satellite [2]. Designing protocols and applications for such networks has to be energy aware in order to prolong the lifetime of the network, because replacement of the embedded batteries is a very troublesome task once these nodes are deployed. Sensors bridge the physical world and the digital world by capturing real-world phenomena and converting them into a form that can be processed, stored, and so on [3]. Sensors offer significant advantages when integrated into varied devices, machines, and environments. They can help avoid catastrophic infrastructure failures, conserve valuable natural resources, improve productivity, enhance security, and enable new applications such as context-aware systems and smart home technologies. The miniaturization of computing and sensing technologies permits the development of small, low-power, and inexpensive actuators, sensors, and controllers [4].

Fig. 1: Sensor nodes scattered in a sensor field

In this paper, a deterministic energy-efficient clustering protocol that guarantees a better election of cluster heads is reviewed. This protocol uses the sensor node's residual energy solely as the election criterion. The DEC protocol ensures a better election of cluster heads on the basis of residual energy information. It is more energy efficient than LEACH (Low Energy Adaptive Clustering Hierarchy) and some other existing energy protocols [4]. The Deterministic Energy-Efficient Clustering protocol uses the residual energy of every node in the cluster for the election of the CH (Cluster Head). The uncertainties in the cluster head elections have been minimized in DEC. The
setup phase used in the LEACH protocol is modified, while the steady-state phase is kept the same as that of the LEACH protocol. The CH election procedure is redesigned by using the residual energy (RE) of every node, with each node's energy generally determined a priori [5]. In DEC, the sink or Base Station (BS) generally chooses Nopt cluster heads at round m for the network. The BS takes part in the election of CHs only if m=1.

2. RELATED WORK

Several studies have used clustering to manage WSNs. The clustering approach involves electing leaders among the sensor nodes. Once the cluster heads are elected, they gather the data from their respective cluster members, refine it using data compression techniques and then report the aggregated data to the base station (BS) [6]. On the other hand, being a cluster head can be an energy-consuming task. By rotating the cluster head role, a node can save much more energy than if the role were fixed. Consequently, one of the most important factors determining the success of a good protocol design for distributed WSNs is how capably it manages energy consumption. Previously, the rotation of cluster heads was done in a randomized way and the election was not guaranteed to be optimal [7]. In this work, a deterministic energy-efficient clustering protocol that guarantees a better election of cluster heads is considered. This protocol uses the sensor node's residual energy solely as the election criterion.

The work in [8] minimizes the energy dissipation in wireless sensor networks. LEACH is one of the first hierarchical routing approaches for sensor networks. In this algorithm the formation of clusters is carried out on the basis of the received signal strength. The principal objective of LEACH is to provide data aggregation for sensor networks [9]. Drawbacks of the LEACH protocol are the additional overhead of dynamic clustering, and that LEACH is not able to cover a large area.

The work in [10] extends the basic design of LEACH by using residual energy as the primary parameter; network topology characteristics (e.g. node degree, distances to neighbors) are only used as secondary parameters to break ties between candidate cluster heads, as a metric for cluster selection to achieve power balancing.

In [11], the enhanced version of LEACH, instead of forming clusters, is based on forming chains of sensor nodes. One node is in charge of routing the aggregated data to the sink. Every node aggregates the collected data with its own data, and then passes the aggregated data along the chain. The difference from LEACH is the use of multi-hop transmission and the selection of only one node to transmit to the sink or base station. Since the overhead created by dynamic cluster formation is eliminated, and multi-hop transmission and data aggregation are used, PEGASIS outperforms LEACH.

The work in [12] uses an improved approach which minimizes the computational overhead needed to self-organize the sensor network. Its simulation results demonstrate better performance with respect to energy consumption, which is reflected in the network lifetime in both homogeneous and heterogeneous settings when compared with the existing protocols. It is worthy of note that this

3. PROPOSED WORK

The proposed work comprises the procedure to be used in order to improve the performance of wireless sensor networks. The routing protocol is implemented to work in homogeneous and heterogeneous environments. DEC uses a clustering methodology. In Figure 2 there are two base stations in two separate networks. These base stations are connected to cluster heads for communication purposes. The cluster head is chosen on the basis of the ratio of residual energy to the maximum energy held by the sensor nodes. Direct communication takes place and networks with different energy levels have been created. Node replacement takes place in order to re-energize the network and to increase the network lifetime.

Figure 2: Scenario Case Diagrams

The Deterministic Energy-Efficient Clustering protocol uses the residual energy of every node in the cluster for the election of the CH (Cluster Head). DEC appears to be close to an ideal solution, as demonstrated in Figure 3.

Figure 3: Behavior of Node Energy Consumption Over Time

4. CONCLUSION

In this paper a purely deterministic protocol, DEC, that better uses the most valuable network resource (energy) in a WSN is presented. DEC outperforms the probabilistic models we have considered, by ensuring that a fixed number of cluster heads
are chosen every round. At different rounds, cluster heads are chosen using the local information of their residual energies within every cluster to pick the suitable cluster heads. The behavior of DEC is extremely attractive as it is close to an ideal solution. Overall, DEC improves the lifetime of wireless sensor networks by an order of magnitude, which is significant when compared with LEACH, SEP and SEP-E. DEC exploits local information, i.e. the residual energy of every node, to improve energy utilization. In our future work, we expect to adapt the DEC protocol to a real-world application setting, for example in rural farmland for fertilizer-spraying operations. It is our hope that this technique can give more insight into optimizing WSN energy utilization in real-world situations.

REFERENCES

[1] F. Comeau, "Optimal Clustering in Wireless Sensor Networks Employing Different Propagation Models and Data Aggregation Techniques," 2008.

[2] S. Gamwarige and C. Kulasekere, "An algorithm for energy driven cluster head rotation in a distributed wireless sensor network," International Conference on Information and Automation, 2005.

[3] M. Haase and D. Timmermann, "Low energy adaptive clustering hierarchy with deterministic cluster-head selection," IEEE Conference on Mobile and Wireless Communications Networks (MWCN), 2002.

[4] F. A. Aderohunmu, J. D. Deng and M. K. Purvis, "Enhancing Clustering in Wireless Sensor Networks with Energy Heterogeneity," International Journal of Business Data Communications and Networking, 2011.

[5] Rajesh Chaudhary and Dr. Sonia Vatta, "Performance Optimization of WSN Using Deterministic Energy Efficient Clustering Protocol: A Review," IOSR Journal of Engineering (IOSRJEN), March 2014.

[6] G. Smaragdakis, I. Matta and A. Bestavros, "SEP: A Stable Election Protocol for clustered heterogeneous wireless sensor networks," International Workshop on SANPA, 2004.

[7] P. Samundiswary and M. Raj Kumar Naik, "Performance Analysis of Deterministic Energy Efficient Clustering Protocol for WSN," International Journal of Soft Computing and Engineering (IJSCE), January 2014.

[8] W. R. Heinzelman, A. Chandrakasan and H. Balakrishnan, "Energy efficient communication protocol for wireless microsensor networks," International Conference on System Sciences, 2000.

[9] W. R. Heinzelman, A. Chandrakasan and H. Balakrishnan, "An Application-Specific Protocol Architecture for Wireless Networks," IEEE Transactions on Wireless Communications, 2002.

[10] F. Xiangning and S. Yulin, "Improvement on LEACH Protocol of Wireless Sensor Network," International Conference on Sensor Technologies and Applications, Washington, DC, USA, 2007.

[11] O. Younis and S. Fahmy, "HEED: A Hybrid, Energy-Efficient, Distributed Clustering Approach for Ad Hoc Sensor Networks," IEEE Transactions on Mobile Computing, 2004.

[12] L. Qing, Q. Zhu and M. Wang, "Design of a distributed energy-efficient clustering algorithm for heterogeneous wireless sensor networks," Computer Communications, 2006.
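The deterministic election rule reviewed above — pick the nodes with the highest residual energy as cluster heads each round, instead of LEACH's probabilistic threshold — can be sketched in Python. This is an illustrative simplification, not the DEC implementation; the function and variable names are assumptions:

```python
def elect_cluster_heads(nodes, n_opt):
    """Deterministic cluster-head election: choose the n_opt nodes
    with the highest residual energy for the current round.

    nodes: dict mapping node id -> residual energy (illustrative units)
    """
    # Rank nodes by residual energy, highest first (deterministic,
    # unlike LEACH's randomized threshold-based election).
    ranked = sorted(nodes, key=nodes.get, reverse=True)
    return ranked[:n_opt]
```

Re-running this each round rotates the energy-expensive cluster-head role towards the freshest nodes, which is the balancing effect the DEC simulations measure as extended network lifetime.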
Performance Analysis of AODV, TORA and AOMDV Routing Protocols in the Presence of BLACKHOLE

Gurmeet Singh, ECE Department, BGGPC, Patiala
Deepinder Singh Wadhwa, ECE Department, BGIET, Sangrur
Dr. Ravi Kant, ECE Department, BGIET, Sangrur
ABSTRACT

This paper gives a comparative study of Mobile Ad Hoc Networks (MANETs) and analyzes the Blackhole attack, which is one of the possible attacks in ad hoc networks. In a Blackhole attack, a malicious node impersonates a destination node by sending a spoofed route reply packet to a source node that initiates a route discovery. By doing this, the malicious node can deprive the source node of its traffic. In this paper, we simulate the Ad hoc On-Demand Distance Vector routing protocol (AODV), TORA and AOMDV under Blackhole attack, considering different performance metrics. The simulation results show the effectiveness of the Blackhole attack on the AODV, TORA and AOMDV protocols. In the past few years MANET systems have achieved new milestones with increased data rates. A MANET is a collection of multi-hop wireless mobile nodes that can communicate with each other. There is no central control system and no infrastructure is used. The nodes in a MANET are responsible for continuously changing their position in the network and finding new nodes. A MANET also has a certain number of characteristics which make security difficult. Performance comparison of AOMDV, AODV and TORA with ns-2 (version 2.34) simulations shows that AOMDV is able to achieve a remarkable packet delivery fraction and almost similar throughput.

Keywords-- MANET, AODV, AOMDV, TORA, SECURITY, BLACKHOLE, WORMHOLE ATTACK.

1. INTRODUCTION

A Mobile Ad hoc Network (MANET) consists of a variety of mobile nodes which form a temporary network without any fixed infrastructure, since it is very difficult to locate routers and other infrastructure in such a network. In this type of network all the nodes are self-organized and collaborate with each other. All the nodes as well as the routers can move about freely and thus the network topology is highly dynamic. Due to their self-organizing and rapid-deployment capability, MANETs can be applied to different applications including battlefield communications, emergency relief scenarios, law enforcement, public meetings, virtual classrooms and other security-sensitive uses. Mobile Ad hoc Networks (MANETs) are open to a wide range of attacks due to their unique characteristics like dynamic topology, shared medium, absence of infrastructure, multi-hop scenario and resource constraints. In such a network, each mobile node operates not only as a host but also as a router, forwarding packets for other nodes that may not be within direct wireless transmission range of each other. Thus, nodes must discover and maintain routes to other nodes. Data packets sent by a source node may reach the destination node via a number of intermediate nodes. In the absence of a security mechanism, it is easy for an intermediate node to insert, intercept or modify the messages, thus attacking the normal operation of MANET routing. Therefore, in this paper, we aim to analyze the performance of MANET routing agents under Blackhole attack. The paper is organized as follows. In Section II, we provide a brief overview of MANET routing protocols. Section III describes the security of MANETs. Section IV gives the details of security attacks in MANETs. Section V gives the details of our performance simulations for the protocols with an outlook to future work. Section VI gives the results and Section VII gives the conclusion along with future research directions.

2. ROUTING PROTOCOLS OF AD-HOC

The aim of routing protocols in an ad-hoc network is to establish a minimum path (minimum hops) between source and destination with minimum overhead and minimum bandwidth use, so that packets are transmitted in a timely manner. A MANET protocol should function over a large range of networking contexts, from small ad-hoc groups to larger mobile multihop networks [4]. In an ad-hoc network the nodes continue to move, and the data passes through many paths. A number of MANET routing protocols have been proposed in past years. These protocols are classified according to their routing strategy, and their performance can be assessed by the type of traffic, the number of nodes, the rate of mobility and many other factors. Three routing protocols were studied in this paper, namely AOMDV, AODV and TORA. Below is a brief description of the protocols [1]. Routing protocols can be categorized into proactive, reactive and hybrid protocols, depending on the routing topology. Proactive protocols are typically table-driven; an example of this type of protocol is Destination Sequenced Distance Vector (DSDV). Reactive or source-initiated on-demand protocols, in contrast, do not regularly update the routing information; it is circulated to the nodes only when necessary. Examples of this type of protocol are Dynamic Source Routing (DSR) and Ad hoc On-Demand Distance Vector (AODV). Hybrid protocols make use of both reactive and proactive approaches; an example of this type of protocol is the Zone Routing Protocol.

AODV [1]: The Ad hoc On-Demand Distance Vector routing protocol uses a broadcast discovery mechanism, similar to but modified from that of DSR. To ensure that routing information is up-to-date, a sequence number is used. Path discovery is initiated whenever a node wishes to communicate with another, provided that it has no routing information for the destination in its routing table. Path discovery starts by broadcasting a route request control message "RREQ" that propagates along the forward path. If a neighbor knows the route to the destination, it replies with a route reply control message "RREP" that propagates through the reverse path. Otherwise, the neighbor will re-broadcast the RREQ. The process does not continue indefinitely, however; the authors of the protocol proposed a mechanism known as "Expanding Ring Search", used by originating nodes to set limits on RREQ dissemination. AODV maintains paths by using control messages called Hello messages, used to detect whether neighbors are still within range of connectivity. If for any reason a link is lost (e.g. nodes moved
away from range of connectivity), the node immediately engages a route maintenance scheme by initiating route request control messages. The destination sequence numbers in control packets ensure loop freedom and freshness of routing information [2].

Temporally-Ordered Routing Algorithm (TORA): TORA can work in environments where mobility is highly dynamic. TORA's algorithmic concept is link reversal, which has the feature of loop-free and adaptive distributed routing. It provides multiple routes for any required source/destination pair, which is why it falls in the source-initiated category. Localization of control messages is the key design of TORA, which adapts to topological changes very quickly [25].

3. SECURITY & CHALLENGES IN MANET

The security issues in MANETs become more complicated because of several compelling situations. The AODV and DSR protocols are studied in [3]. Scarcity of resources - Mobile nodes often have limited resource availability, including battery power, computational power, memory, speed etc. Due to these restrictions, security solutions consuming higher resources, e.g. public key cryptography, are not practically deployable. Physical security threats - Mobile wireless networks are more open to physical security threats. Due to the small size of nodes and their permitted mobility, they are more prone to theft and physical mishandling. Topological variations - Due to the transient and moving nature of nodes, locational dependency is less assured. Lack of regulating authorities - Unlike infrastructure-based networks, central regulating authorities do not exist in a MANET. Shared wireless medium - In a MANET the wireless communication is broadcast based; hence all data is available to all the nodes for tampering, resulting in more complexity and challenges for security. Insufficient rules for association - A MANET lacks proper authentication rules and mechanisms for associating nodes in the network. Unlike in a general network, an intruder can easily join the network and create security hazards. Hostile and insufficient operational environment - Since MANETs are deployed in more complicated environments like battlefields, there are more hazards to security.

4. SECURITY ATTACKS ON MANET

Wireless networks are more vulnerable than a wired network. There is a range of attacks aimed at the weaknesses of MANETs. When a black hole node receives an RREQ message for any destination, it instantly responds with an RREP message that contains the highest sequence number, and this message is received as if it were coming from the destination or from a node which has a fresh enough route to the destination. The source considers that the destination is behind the black hole and rejects the other RREP packets coming from other nodes [4]. In the wormhole type of attack, the attacker disrupts routing by short-circuiting the usual flow of routing packets. A wormhole attack can be carried out by one node alone, but generally two or more attackers connect via a link called a "wormhole link". They capture packets at one end and replay them at the other end using a private high-speed network. Wormhole attacks are relatively easy to deploy but may cause great damage to the network. The wormhole attack is a kind of replay attack that is particularly challenging to defend against in a MANET, even if the routing information is protected [4].

2.1 Identity Spoofing
Media Access Control (MAC) and Internet Protocol (IP) addresses are frequently used in MANETs to verify and ascertain the identity of nodes. However, these addresses may be easily spoofed using tools that are publicly available, which leads to a spoofing attack [6]. In this attack, the malicious user attempts to acquire the identity of a legitimate node in the network. Masquerading as a legitimate user allows the malicious node to avail of privileged services that are otherwise accessible only to genuine nodes, and to become an authorized entity in the network. This attack aims to establish a connection that will enable the attacker to access the sensitive data of the other hosts.

2.2 Denial of Service (DoS)
DoS is one of the most well known attacks on computer networks, largely because of the impact it has on the smooth functioning of the network. This kind of attack is especially damaging to MANETs owing to the limited communication bandwidth and resources of the nodes [6]. In the AODV protocol, for instance, a large number of RREQs (route requests) are sent to a destination node on the network that is non-existent. As there is no reply to these RREQs, they flood the entire network, leading to consumption of node battery power along with network bandwidth, and this could lead to denial of service.

2.3 Black Hole Attack
The black hole attack [4] is an active insider attack. It has two properties: first, the attacker consumes the intercepted packets without any forwarding; second, the node exploits the mobile ad hoc routing protocol to announce itself as having an accurate
All data packets should pass through many intermediate nodes route to a destination node, even though the route is counterfeit,
before reaching destination. Each node maintains route entry to with the intention of intercepting packets. The goal of the
other nodes in two ways either node itself initiates the route malicious node in this attack is to drop all packets that are
discovery or other nodes push to discover routes. Hence it directed to it instead of forwarding them as intended. It uses its
maintains proper routing table entry and it becomes an essential routing protocol in order to advertise itself as having the
job of mobile network communications. Route discovery and shortest route to the target node or to any packet that it wants to
maintenance phase are always monitored by malicious node, intercept. The malicious node advertises its availability of new
make other nodes to follow fake route entry and disrupt the routes without checking its routing table [6]. In this way the
directions of the routing protocols.[8] The black hole attack[5] malicious node will always have availability of routes while
is an active insider attack, it has two properties: first, the replying to the route request and hence intercept the data packet.
attacker consumes the intercepted packets without any As a result of the dropped packets, the amount of retransmission
forwarding. Second, the node exploits the mobile ad hoc routing consequently increases leading to congestion. There is a more
protocol, to announce itself as having a accurate route to a subtle form of this attack wherein the attacker selectively
destination node, even though the route is counterfeit, with the forwards packets instead of dropping all of them altogether.
intention of intercepting packets. In an ad-hoc network that uses Packets originating from some particular nodes may be
the AODV protocol, a black hole node pretends to have a fresh modified or suppressed while leaving the data from the other
enough routes to all destinations requested by all the nodes and nodes unaffected, thus limiting the suspicion of its malicious
absorbs the network traffic. When a source node broadcasts the behavior by the other nodes [6].
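The route-advertisement behaviour described above can be illustrated with a small, self-contained sketch. This is a hypothetical toy model, not the ns-2 setup used in this paper: honest relays advertise a random route "freshness", the source always picks the relay with the freshest advertised route (mimicking AODV's preference for higher destination sequence numbers), and a black hole relay always claims the best route and then silently drops the data.

```python
import random

# Toy illustration (not ns-2): relays advertise a route freshness value;
# the source routes each packet via the relay with the freshest advert.
def simulate(num_relays, num_packets, blackhole=None, seed=1):
    rng = random.Random(seed)
    delivered = 0
    for _ in range(num_packets):
        adverts = {r: rng.random() for r in range(num_relays)}
        if blackhole is not None:
            adverts[blackhole] = 2.0   # attacker always claims the best route
        chosen = max(adverts, key=adverts.get)
        if chosen != blackhole:        # honest relays forward; black hole drops
            delivered += 1
    return delivered / num_packets     # packet delivery ratio (PDR)

print(simulate(10, 1000))               # no attacker: PDR = 1.0
print(simulate(10, 1000, blackhole=0))  # attacker absorbs all traffic: PDR = 0.0
```

Because the attacker wins every route selection, the delivery ratio collapses to zero, which is exactly the dropped-packet and retransmission effect described above.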

207
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Figure 2.1 Illustration of black hole attack in a network

5. PERFORMANCE EVALUATION

Simulations have been performed in the network simulator ns-2 [22] to determine the impact of mobility on the performance of routing protocols. We have used network simulator version 2.34 for the evaluation of our work. The NS-2 simulator software was developed at the University of California at Berkeley and the Virtual InterNetwork Testbed (VINT) project in fall 1997. We have used the Ubuntu 9.04 Linux environment. Our simulation setup [24] is a network with randomly placed nodes within an area of 1315 m x 572 m. We have chosen a wireless channel with the two-ray ground propagation model, a radio propagation range of 250 m and an interference range of 550 m. We evaluate three MANET protocols (AODV, AOMDV and TORA) against the black hole attack. This allows us to compute protocol performance under different network scenarios.

6. RESULT
In this paper we have considered several metrics in analyzing the performance of routing protocols. These metrics are as follows.

Data packet delivery ratio: the total number of delivered data packets divided by the total number of data packets transmitted by all nodes. This performance metric gives us an idea of how well the protocol performs in terms of packet delivery at different speeds using different traffic.

Throughput (messages/second): the total number of delivered data packets divided by the total duration of the simulation. We analyze the throughput of the protocol in terms of the number of messages delivered per second.

Average end-to-end delay (seconds): the average time it takes a data packet to reach the destination. This metric is calculated by subtracting the "time at which the packet was transmitted by the source" from the "time at which the data packet arrived at the destination". It includes all possible delays caused by buffering during route discovery.

Figure 1.1 Comparison of energy versus time of AODV, TORA and AOMDV with the black hole attack

Figure 1.1 shows the comparison of energy versus time for TORA, AODV and AOMDV. It shows that the energy is minimum using TORA compared to AODV and AOMDV; AODV has the highest energy of all the protocols. This comparison is analyzed with the black hole attack.

Figure 1.2 Comparison of PDR versus time of AODV, TORA and AOMDV with the black hole attack

Figure 1.2 shows the comparison of packet delivery ratio for TORA, AODV and AOMDV. It shows that the PDR using TORA is high compared to AODV and AOMDV. This comparison is analyzed with the black hole attack.
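The three metrics defined above can be computed directly from per-packet timestamps. The sketch below assumes a simple record format of (send_time, receive_time) pairs with None marking a lost packet; this format is an illustrative assumption, not the ns-2 trace syntax, and the delay is averaged over all delivered packets.

```python
def packet_delivery_ratio(records):
    """Delivered data packets / data packets transmitted."""
    delivered = sum(1 for _, recv in records if recv is not None)
    return delivered / len(records)

def throughput(records, duration):
    """Delivered data packets per second of simulation time."""
    delivered = sum(1 for _, recv in records if recv is not None)
    return delivered / duration

def average_end_to_end_delay(records):
    """Mean of (arrival time - transmission time) over delivered packets."""
    delays = [recv - sent for sent, recv in records if recv is not None]
    return sum(delays) / len(delays)

# Four packets sent; the third one is lost.
records = [(0.0, 0.4), (1.0, 1.6), (2.0, None), (3.0, 3.2)]
print(packet_delivery_ratio(records))               # 0.75
print(throughput(records, duration=10.0))           # 0.3
print(round(average_end_to_end_delay(records), 3))  # 0.4
```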


Figure 1.3 Comparison of throughput versus time of AODV, TORA and AOMDV with the black hole attack

Figure 1.3 shows the comparison of throughput versus time for TORA, AODV and AOMDV. It shows that the throughput is minimum using AODV compared to TORA and AOMDV; AOMDV has the highest throughput of all the protocols. This comparison is analyzed with the black hole attack.

Figure 1.4 Comparison of end-to-end delay versus time of AODV, TORA and AOMDV with the black hole attack

Figure 1.4 shows the comparison of end-to-end delay versus time for TORA, AODV and AOMDV. It shows that the end-to-end delay is minimum using AOMDV compared to AODV and TORA; TORA has the highest end-to-end delay of all the protocols. This comparison is analyzed with the black hole attack.

7. CONCLUSION AND FUTURE WORK
This paper discusses common possible attacks on the different protocols used in MANETs. We have tried to analyze them so as to prevent attackers from intruding into wireless networks. As a continuation of this research work, it would be very interesting to evaluate other protocols that have been suggested for important operations in MANETs, such as those for performing multicast and broadcast communication. A MANET requires a reliable, efficient, scalable and, most importantly, secure protocol, as such networks are highly insecure, self-organizing, rapidly deployed and use dynamic routing. A mobile ad hoc network is likely to be attacked by the black hole attack and the wormhole attack. We plan to extend our work by comparing and analyzing other routing attacks, viz. the wormhole attack, selfish attack, etc., for some of the very popular on-demand and even secure routing protocols, and also by implementing and evaluating our proposed solution mechanism for the same.

ACKNOWLEDGMENT
The authors would like to thank the reviewers for their help in improving the document.

REFERENCES
[1] Al-Maashri, A. and Ould-Khaoua, M. "Performance Analysis of MANET Routing Protocols in the Presence of Self-Similar Traffic." pp. 801-807.
[2] Arora, V. and Krishna, C.R., 2010. "Performance Evaluation of Routing Protocols for MANETs under Different Traffic Conditions." 2nd International Conference on Computer Engineering and Technology, vol. 6, pp. 79-84.
[3] Agrawal, C.P., Vyas, O.P. and Udaykumar, P., 2010. "Analysis of MANET Security - Challenges, Threats & Solutions." International Journal of Computer Science and Applications, vol. 3, no. 1, pp. 13-18.
[4] Bhosle, A.A., Thosar, T.P. and Mehatre, S., 2012. "Black-Hole and Wormhole Attack in Routing Protocol AODV in MANET." International Journal of Computer Science, Engineering and Applications, vol. 2, no. 1, pp. 45-54.
[5] Balachandran, N., 2012. "Surveying Solutions to Securing On-Demand Routing Protocols in MANETs." Int. J. Advanced Networking and Applications, vol. 4, pp. 1486-1491.
[6] Das, M., Panda, B.K. and Sahu, B., 2012. "Analysis of Effect of Mobility on Performance of AODV in Mobile Ad hoc Network." International Conference on Computer Communication and Informatics.
[7] Devi, P. and Kannammal, A., 2012. "Security Attacks and Defensive Measures for Routing Mechanisms in MANETs - A Survey." International Journal of Computer Applications, vol. 42, no. 4, pp. 27-32.
[8] Ghazizadeh, S., Ilghami, O. and Sirin, E., 2002. "Security-Aware Adaptive Dynamic Source Routing Protocol." Conference on Local Computer Networks.
[9] Yovanof, G.S. and Erikci, K., 2004. "Performance Evaluation of Security-Aware Routing Protocols for Clustered Mobile Ad Hoc Networks." International Workshop on Wireless Ad-Hoc Networks, pp. 286-290.
[10] Goyal, P., Batra, S. and Singh, A., 2010. "A Literature Review of Security Attack in Mobile Ad-hoc Networks." International Journal of Computer Applications, vol. 9, no. 12, pp. 11-15.
[11] Huhtonen, A., 2004. "Comparing AODV and OLSR Routing Protocols." Seminar on Internetworking, pp. 1-9.
[12] Hu, Y.C., Johnson, D.B. and Perrig, A., 2003. "SEAD: Secure Efficient Distance Vector Routing for Mobile Wireless Ad Hoc Networks." pp. 175-192.
[13] Juwad, M.F. and Al-Raweshidy, H.S. "OPNET Performance Comparisons between SAODV & AODV."
[14] Keshtgary, M. and Babaiyan, V., 2012. "Performance Evaluation of Reactive, Proactive and Hybrid Routing Protocols in MANET." International Journal on Computer Science and Engineering, vol. 4, no. 2, pp. 248-254.
[15] Kamboj, D., Kumar, P.S. and Nath, R., 2012. "Performance Evaluation of Secure Routing in Ad-hoc Network Environment." 1st Int'l Conf. on Recent Advances in Information Technology.
[16] Kannhavong, B., Nakayama, H. and Jamalipour, A., 2008. "SA-OLSR: Security Aware Optimized Link State Routing for Mobile Ad Hoc Networks." pp. 1464-1468.


[17] Karlsson, J., Dooley, L.S. and Pulkkis, G., 2012. "Routing Security in Mobile Ad-hoc Networks." vol. 9, pp. 269-383.
[18] Kumar, K., Kumar, Y. and Pruthi, G., 2011. "A Literary Review of MANET Security Protocols." International Journal of Computer Science and Management Studies, vol. 11, pp. 64-68.
[19] Khokhar, R.H., Ngadi, M.A. and Mandala, S. "A Review of Current Routing Attacks in Mobile Ad Hoc Networks." International Journal of Computer Science and Security, vol. 2, pp. 18-29.
[20] Li, H. and Dhawan, A.P., 2010. "MOSAR: A Secure On-demand Routing Protocol for Mobile Multilevel Ad Hoc Networks." International Journal of Network Security, vol. 10, no. 2, pp. 121-134.
[21] Maan, F. and Mazhar, N., 2011. "MANET Routing Protocols vs Mobility Models: A Performance Evaluation." pp. 179-184.
[22] Moses, G.J., Kumar, D.S., Varma, P.S. and Supriya, N., 2012. "A Simulation Based Study of AODV, DSR, DSDV Routing Protocols in MANET Using NS-2." International Journal of Advanced Research in Computer Science and Software Engineering, vol. 2, no. 3, pp. 42-51.
[23] Nishat, H., Pothalaiah, S. and Rao, D.S. "Performance Evaluation of Routing Protocols in MANETs." International Journal of Wireless & Mobile Networks, vol. 3, no. 2, pp. 67-75.
[24] Naeem, M., Ahmed, Z., Mahmood, R. and Azad, M.A., 2010. "QoS Based Performance Evaluation of Secure On-Demand Routing Protocols for MANETs." IEEE.
[25] Rastogi, M. and Ahirwar, K.K., 2012. "Adaptive Threat Modeling for Secure MANET Routing Protocol." International Journal of Scientific & Technology Research, vol. 1, pp. 63-66.
[26] Rai, P. and Singh, S., 2010. "A Review of MANET's Security Aspects and Challenges." IJCA Special Issue on Mobile Ad-hoc Networks, pp. 162-166.
[27] Shrestha, A. and Tekiner, F., 2009. "On MANET Routing Protocols for Mobility and Scalability." International Conference on Parallel and Distributed Computing, Applications and Technologies, pp. 451-456.
[28] Schmidt, R.D.O. and Trentin, M.A.S., 2008. "MANETs Routing Protocols Evaluation in a Scenario with High Mobility." pp. 883-886.
[29] Shrivastava, L., Bhadauria, S.S. and Tomar, G.S., 2011. "Performance Evaluation of Routing Protocols in MANET with Different Traffic Loads." International Conference on Communication Systems and Network Technologies, pp. 13-16.
[30] Sharma, P. and Sinha, A.K., 2012. "Statistical Approach for Behavior Detection of Routing Metrics in MANET." International Conference on Computer Communication and Informatics.
[31] Sreenath, N., Amuthan, A. and Selvigirija, P., 2012. "Countermeasures against Multicast Attacks on Enhanced-On Demand Multicast Routing Protocol in MANETs." International Conference on Computer Communication and Informatics.
[32] Supriya and Khari, M., 2012. "MANET Security Breaches: Threat to a Secure Communication Platform." International Journal on AdHoc Networking Systems, vol. 2, no. 2, pp. 45-51.
[33] Taneja, S. and Kush, A., 2010. "A Survey of Routing Protocols in Mobile Ad Hoc Networks." International Journal of Innovation, Management and Technology, vol. 1, no. 3, pp. 279-285.
[34] Taneja, S. and Kush, A., 2012. "Energy Efficient, Secure and Stable Routing Protocol for MANET." Global Journal of Computer Science and Technology, vol. 12, pp. 25-38.
[35] Zhao, Z., Hu, H., Ahn, G. and Wu, R., 2012. "Risk-Aware Mitigation for MANET Routing Attacks." IEEE Transactions on Dependable and Secure Computing, vol. 9, no. 2, pp. 250-260.
[36] Woungang, I., Dhurandher, S.K., Peddi, R.D. and Obaidat, M.S., 2012. "Detecting Blackhole Attacks on DSR-based Mobile Ad Hoc Networks."


Study of Evolutionary Optimisation Techniques and their Applications

Mandeep Kaur
Research Scholar
Punjab Technical University, Kapurthala Road (Punjab)
mandeepkaur03@gmail.com

Dr. B.S. Sohi
Pro-Vice Chancellor
Chandigarh University, Gharuan (Punjab)

ABSTRACT
Bio- and nature-inspired optimization techniques are extensively used in the engineering field to analyze complex problems for which conventional methods are unsuitable or difficult to use. The capability to find a global optimum when a large number of variables is involved motivates this study of optimization techniques. All optimization techniques are intended to achieve global optima through local search and global search, and researchers are working on improving existing methods or developing new tools. This paper is focused on the study of available optimization techniques such as GA, PSO, ACO, BFOA and IWD and their use in engineering problems. Improving the performance of wireless sensor networks is a hot research topic, and these techniques can be applied to achieve good results in the performance evaluation of WSNs.

Keywords:
Optimization techniques: GA, PSO, ACO, BFOA and IWD

1. INTRODUCTION
Swarm intelligence is a sub-field of evolutionary computing. Evolutionary computing is the collective name for a range of problem-solving techniques based on principles of biological evolution, such as natural selection and genetic inheritance. These techniques are being applied increasingly widely to a variety of problems, ranging from practical applications in industry and commerce to leading-edge scientific research [2].
When network size scales up, routing becomes more challenging and critical. Lately, biologically inspired intelligent algorithms have been deployed to tackle this problem. Using ants, bees and other social swarms as models, software agents can be created to solve complex problems, such as traffic rerouting in busy telecommunication networks [1]. Many issues in WSNs are formulated as multidimensional optimization problems and approached through bio-inspired techniques. Issues of node deployment, localization, energy-aware clustering and data aggregation are often formulated as optimization problems [3].
Ant colony optimization (ACO) uses many ants (or agents) to traverse the solution space and find locally productive areas. While usually inferior to genetic algorithms and other forms of local search, it is able to produce results in problems where no global or up-to-date perspective can be obtained, and where the other methods therefore cannot be applied.
Particle swarm optimization (PSO) is a computational method for multi-parameter optimization which also uses a population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known positions and by the swarm's global best known position. Like genetic algorithms, the PSO method depends on information sharing among population members. In some problems PSO is more computationally efficient than GAs, especially in unconstrained problems with continuous variables.
Intelligent Water Drops, or the IWD algorithm, is a nature-inspired optimization algorithm modeled on natural water drops, which change their environment to find a near-optimal or optimal path to their destination. The memory is the river's bed, and what is modified by the water drops is the amount of soil on that bed. This paper gives a detailed study of evolutionary optimisation techniques and their applications in the engineering field.

2. GENETIC ALGORITHM (GA)
Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection and crossover. Genetic algorithms find application in bioinformatics, phylogenetics, computational science, engineering, economics, chemistry, manufacturing, mathematics, physics, pharmacometrics and other fields.

Genetic Algorithms (GAs) are search methods based on the principles and concepts of natural selection and evolution. These optimisation methods operate on a group of trial solutions in parallel, and they operate on the coding of the function parameters rather than on the parameters directly. In the GA each variable is represented as a binary code called a 'gene'. These genes are then arranged and combined to form a chromosome. Each chromosome has an associated fitness value, or 'cost', assigning a value of merit to the chromosome; a high fitness value is the characteristic of a good chromosome.

After the starting chromosomes have been created in the GA, a selection strategy determines which chromosomes will take part in the evolution process. These chromosomes mate with one another to produce new offspring, which consist of genetic material from the two parent chromosomes. The new set of chromosomes produced by the mating process makes up the next 'generation' of chromosomes, although chromosomes from the previous generation may also be saved and inserted into the new generation. The number of chromosomes in every generation is kept constant. This process of selection and mating is repeated until a set number of generations has been completed.
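The selection-mating-generation cycle described above can be sketched in a few lines. The one-max fitness function (count the 1-bits of a binary chromosome) and all parameter values below are illustrative assumptions, not taken from this paper; selection here uses a tournament scheme.

```python
import random

rng = random.Random(0)
GENES, POP, GENERATIONS, P_MUT = 16, 20, 40, 0.02

def fitness(chrom):
    return sum(chrom)                  # toy "one-max" objective: count 1-bits

def tournament(pop):
    a, b = rng.sample(pop, 2)          # pick two individuals at random...
    return max(a, b, key=fitness)      # ...and the fitter one wins

def crossover(p1, p2):
    cut = rng.randrange(1, GENES)      # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(chrom):
    return [g ^ 1 if rng.random() < P_MUT else g for g in chrom]

pop = [[rng.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):           # population size stays constant
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

print(max(fitness(c) for c in pop))    # best fitness after evolution
```

Each generation is built entirely from selected parents via crossover and bit-flip mutation, so the population size stays constant, mirroring the cycle described in the text.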


Genetic Algorithm Parameters

Selection Strategies: Selection strategies determine which chromosomes will take part in the evolution process.
Population Decimation: In this strategy the chromosomes are ranked according to their fitness or cost values from highest to lowest. This is the recommended strategy, as it leads to proper convergence and produces the best solutions compared to the other strategies.
Proportionate Selection: In this selection strategy the probability of a chromosome being selected is proportionate to the fitness of the chromosome as compared to the fitness of the total population. That is, a chromosome with a 'good' fitness has a higher probability of being selected than one with a low fitness value.
Tournament Selection: In tournament selection two individuals are randomly selected and the one with the highest fitness 'wins'. This process continues until the required number of chromosomes has been reached.
Mating Schemes: While the selection strategies are concerned with selecting which individuals will take part in the evolution process (be parents), the mating schemes select which two parent chromosomes will mate with one another. Existing mating schemes include Best-Mates-Worst, Adjacent Fitness Pairing and Emperor Selective mating.
Crossover Point: A crossover occurs when two parent chromosomes mate with one another. When this occurs, the two parent chromosomes are both dissected at the same predefined crossover point. The two pieces from the first parent chromosome mate with the two complementary pieces from the second parent chromosome to form two new chromosomes. As mentioned above, the crossover point determines the point at which a chromosome will be dissected. The allowable input range for the crossover point is 0 to 1, where 0 indicates a crossover point that is determined randomly, and 1 indicates that no crossover will occur.
Mutation: A mutation occurs in a chromosome with a small probability Pmutation. When a mutation occurs in a chromosome, a random bit in the binary chromosome is inverted; for example, a '1' is changed to a '0' and vice versa.
Chromosomes and Generations: In the genetic algorithm each chromosome represents a specific antenna design/configuration. The number of chromosomes used in a generation and the number of generations are both user-defined inputs. The number of chromosomes determines the number of antenna configurations that will be evaluated in each generation, and the number of generations determines how many iterations the GA optimizer will run through before coming to completion. [10]

Genetic Algorithms (GA) are a direct, parallel, stochastic method for global search and optimization, which imitates the evolution of living beings, as described by Charles Darwin. GAs are part of the group of Evolutionary Algorithms (EA). Evolutionary algorithms use the three main principles of natural evolution: reproduction, natural selection and diversity of the species, maintained by the differences of each generation from the previous one. A genetic algorithm works with a set of individuals representing possible solutions of the task. The selection principle is applied by using a criterion that gives an evaluation of each individual with respect to the desired solution, and the best-suited individuals create the next generation. The large variety of problems in the engineering sphere, as well as in other fields, requires the usage of algorithms of different types, with different characteristics and settings.

Mardukhi F. et al. developed a genetic algorithm based quality model that depends on a set of quality attributes categorized into two main types: positive and negative. The objective is to maximize the values of the positive properties (e.g. throughput and availability), whereas the values of the negative properties (e.g. price and response time) are to be minimized [7].
Kuila P. et al. proposed a novel GA-based load-balanced clustering algorithm for WSN. The proposed algorithm is shown to perform well for both equal and unequal loads of the sensor nodes, and its results are compared with some evolutionary approaches and other related clustering algorithms [4].

3. ANT COLONY OPTIMIZATION

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. This algorithm is a member of the ant colony algorithms family, in swarm intelligence methods, and it constitutes a metaheuristic optimization. Initially proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems and, as a result, several variants have emerged, drawing on various aspects of the behavior of ants.

With an ACO algorithm, the shortest path in a graph between two points A and B is built from a combination of several paths. It is not easy to give a precise definition of which algorithms are or are not ant colonies, because the definition may vary according to the authors and uses. Broadly speaking, ant colony algorithms are regarded as populated metaheuristics, with each solution represented by an ant moving in the search space. Ants mark the best solutions and take account of previous markings to optimize their search. They can be seen as probabilistic multi-agent algorithms using a probability distribution to make the transition between iterations. In their versions for combinatorial problems, they use an iterative construction of solutions. According to some authors, the thing which distinguishes ACO algorithms from other relatives (such as algorithms for estimating the distribution, or particle swarm optimization) is precisely their constructive aspect. In combinatorial problems, it is possible that the best solution is eventually found even though no single ant proves effective. Thus, in the example of the travelling salesman problem, it is not necessary that an ant actually travels the shortest route: the shortest route can be built from the strongest segments of the best solutions. However, this definition can be problematic in the case of problems in real variables, where no structure of 'neighbours' exists. The collective behavior of social insects remains a source of inspiration for researchers.

Cobo L. et al. proposed a QoS routing algorithm, AntSensNet, for WMSNs, based on an ant colony optimization framework and a biologically inspired clustering process. AntSensNet outperforms the standard AODV in terms of delivery ratio, end-to-end delay and routing overhead [1]. Liao W.H. et al. proposed a deployment strategy to prolong the network lifetime, while ensuring complete coverage of the service region, and


modeled the sensor deployment problem as the Multiple Knapsack Problem (MKP) based on the ACO algorithm [5]. Lin Y. et al. propose an ACO-based approach that can maximize the lifetime of heterogeneous WSNs. The methodology is based on finding the maximum number of disjoint connected covers that satisfy both sensing coverage and network connectivity [6].

4. PARTICLE SWARM OPTIMISATION
Particle swarm optimization (PSO) has emerged as a proficient stochastic approach of evolutionary computation. It has since been employed in various fields of application and research and is successful in yielding optimized solutions. This algorithm mimics the social behavior exhibited by the individuals in a bird flock or fish school while searching for the best food location (the global optimum). The PSO algorithm depends neither on the initial condition nor on gradient information. Since it depends only on the value of the objective function, the algorithm is computationally inexpensive and simple to implement. The low CPU and memory requirement is another advantage. However, some experimental results show that the local search ability around the optima is very poor, though the global search ability of PSO is quite good. This results in premature convergence in problems where multiple optima exist, and hence the performance is degraded [8].

In computer science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. PSO optimizes a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position but is also guided toward the best known positions in the search space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.

PSO is originally attributed to Kennedy, Eberhart and Shi and was first intended for simulating social behaviour, as a stylized representation of the movement of organisms in a bird flock or fish school. PSO is a metaheuristic, as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as PSO do not guarantee that an optimal solution is ever found. More specifically, PSO does not use the gradient of the problem being optimized, which means PSO does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. PSO can therefore also be used on optimization problems that are partially irregular, noisy, change over time, etc.

Kulkarni, R.V. and Venayagamoorthy, G.K. discussed particle swarm optimization (PSO), a simple, effective, and

5. ARTIFICIAL BEE COLONY ALGORITHM
In the ABC model, the colony consists of three groups of bees: employed bees, onlookers and scouts. It is assumed that there is only one artificial employed bee for each food source; in other words, the number of employed bees in the colony is equal to the number of food sources around the hive. Employed bees go to their food source, come back to the hive and dance in this area. The employed bee whose food source has been abandoned becomes a scout and starts to search for a new food source. Onlookers watch the dances of the employed bees and choose food sources depending on the dances. The main steps of the algorithm are given below:
1. Initial food sources are produced for all employed bees.
2. REPEAT
- Each employed bee goes to a food source in her memory and determines a neighbour source, then evaluates its nectar amount and dances in the hive.
- Each onlooker watches the dances of the employed bees and chooses one of their sources depending on the dances, then goes to that source. After choosing a neighbour around it, she evaluates its nectar amount.
- Abandoned food sources are determined and are replaced with the new food sources discovered by scouts.
- The best food source found so far is registered.
3. UNTIL (requirements are met)

In ABC, a population-based algorithm, the position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The number of employed bees is equal to the number of solutions in the population. At the first step, a randomly distributed initial population (food source positions) is generated. After initialization, the population is subjected to repeated cycles of the search processes of the employed, onlooker and scout bees, respectively. An employed bee produces a modification of the source position in her memory and discovers a new food source position. Provided that the nectar amount of the new one is higher than that of the previous source, the bee memorizes the new source position and forgets the old one; otherwise she keeps the position of the previous one in her memory. After all employed bees complete the search process, they share the position information of the sources with the onlookers on the dance area. Each onlooker evaluates the nectar information taken from all employed bees and then chooses a food source depending on the nectar amounts of the sources.

As in the case of the employed bee, she produces a modification of the source position in her memory and checks its nectar amount. Provided that its nectar is higher than that of the previous one, the bee memorizes the new position and forgets the old one. The abandoned sources are determined, and new sources are randomly produced to be replaced with the abandoned ones by
computationally efficient optimization algorithm. It has been artificial scouts.
applied to address WSN issues such as optimal deployment, node Sahoo R. et. al. presented a trust based secure and energy
localization, clustering, and data aggregation. It has outlined issues competent clustering method in wireless sensor network using
in WSNs, introduces PSO, and discusses its suitability for WSN Honey Bee Mating Algorithm (LWTC-BMA). The proposed
applications. [3] LWTC-BMA prolong the life time of the network by depriving
malicious nodes to become a cluster head. [9]
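The PSO velocity and position update described above can be sketched in a few lines of code. This is a minimal illustration, not the implementation used in any of the surveyed works; the sphere objective, bounds and parameter values (inertia w, coefficients c1, c2) are illustrative assumptions.

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over [-5, 5]^dim with a basic global-best PSO."""
    lo, hi = -5.0, 5.0
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                       # each particle's best position
    pbest_val = [f(x) for x in X]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = f(X[i])
            if val < pbest_val[i]:                  # update personal best
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:                 # update global best
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: sum(t * t for t in x), dim=3)
print(round(val, 4))  # typically very close to 0 for the sphere function
```

Note how the update blends the particle's own memory (pbest) with the swarm's shared memory (gbest), which is exactly the social behaviour the text describes.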
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
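The employed/onlooker/scout cycle of ABC described above can be sketched as a short program. This is a generic illustration, not code from the surveyed papers; the bounds, colony size, iteration count and abandonment limit below are illustrative assumptions.

```python
import random

def abc_minimise(f, dim=3, n_sources=10, iters=200, limit=20):
    """Basic artificial bee colony: employed, onlooker and scout phases."""
    lo, hi = -5.0, 5.0
    src = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    val = [f(s) for s in src]
    trials = [0] * n_sources                      # stagnation counter per source

    def neighbour(i):
        # modify one dimension of source i using a random partner source k
        k = random.choice([j for j in range(n_sources) if j != i])
        d = random.randrange(dim)
        cand = src[i][:]
        cand[d] += random.uniform(-1, 1) * (src[i][d] - src[k][d])
        cand[d] = min(hi, max(lo, cand[d]))
        return cand

    def greedy(i, cand):
        # keep the better of the old source and the candidate (greedy selection)
        cv = f(cand)
        if cv < val[i]:
            src[i], val[i], trials[i] = cand, cv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                # employed bee phase
            greedy(i, neighbour(i))
        fits = [1.0 / (1.0 + v) for v in val]     # nectar amount -> selection weight
        total = sum(fits)
        for _ in range(n_sources):                # onlooker bee phase (roulette wheel)
            r, acc, i = random.uniform(0, total), 0.0, 0
            for j, ft in enumerate(fits):
                acc += ft
                if r <= acc:
                    i = j
                    break
            greedy(i, neighbour(i))
        for i in range(n_sources):                # scout bee phase
            if trials[i] > limit:                 # abandon an exhausted source
                src[i] = [random.uniform(lo, hi) for _ in range(dim)]
                val[i] = f(src[i])
                trials[i] = 0
    b = val.index(min(val))
    return src[b], val[b]

best, best_val = abc_minimise(lambda x: sum(t * t for t in x))
print(round(best_val, 4))
```

The roulette-wheel step mirrors the dance-area behaviour in the text: better sources (more nectar) receive proportionally more onlooker visits.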
6. INTELLIGENT WATER DROPS ALGORITHM (IWD)
The IWD algorithm is a swarm-based, nature-inspired optimization algorithm. It contains a few essential elements of natural water drops and of the actions and reactions that occur between a river's bed and the water drops that flow within it. The IWD algorithm falls into the categories of swarm intelligence and metaheuristics. Intrinsically, the IWD algorithm can be used for combinatorial optimization; however, it may be adapted for continuous optimization too. The IWD was first introduced for the traveling salesman problem in 2007. Since then, multitudes of researchers have focused on improving the algorithm for different problems. Almost every IWD algorithm is composed of two parts: a graph that plays the role of distributed memory, on which the soils of different edges are preserved, and the moving part of the algorithm, a small number of intelligent water drops. These Intelligent Water Drops (IWDs) both compete and cooperate to find better solutions, and by changing the soils of the graph, the paths to better solutions become more reachable. It is mentioned that IWD-based algorithms need at least two IWDs to work.

7. BACTERIAL FORAGING OPTIMISATION (BFO)
Bacterial foraging optimization (BFO), proposed by Passino in the year 2002, is based on natural selection, which tends to eliminate animals with poor foraging strategies. After many generations, poor foraging strategies are eliminated while only the individuals with good foraging strategies survive, signifying survival of the fittest. BFO formulates the foraging behavior exhibited by E. coli bacteria as an optimization problem. Over certain real-world optimization problems, BFO has been reported to outperform many powerful optimization algorithms in terms of convergence speed and final accuracy. [8]

The bacterial foraging optimization algorithm (BFOA) has been widely accepted as a global optimization algorithm of current interest for distributed optimization and control. BFOA is inspired by the social foraging behavior of Escherichia coli. BFOA has already drawn the attention of researchers because of its efficiency in solving real-world optimization problems arising in several application domains. Application of the group foraging strategy of a swarm of E. coli bacteria in multi-optimal function optimization is the key idea of the algorithm. Bacteria search for nutrients in a manner that maximizes the energy obtained per unit time; an individual bacterium also communicates with others by sending signals. A bacterium takes foraging decisions after considering these two factors. The process in which a bacterium moves by taking small steps while searching for nutrients is called chemotaxis, and the key idea of BFOA is mimicking the chemotactic movement of virtual bacteria in the problem search space.

Since its inception, BFOA has drawn the attention of researchers from diverse fields of knowledge, especially due to its biological motivation and graceful structure. Researchers are trying to hybridize BFOA with other algorithms in order to explore its local and global search properties separately. It has already been applied to many real world problems and has proved its effectiveness over many variants of GA and PSO. Mathematical modeling, adaptation, and modification of the algorithm might be a major part of the research on BFOA in future. [11] The bacterial foraging optimization algorithm is inspired by the group foraging behavior of bacteria such as E. coli and M. xanthus. Specifically, the BFOA is inspired by the chemotaxis behavior of bacteria, which perceive chemical gradients in the environment (such as nutrients) and move toward or away from specific signals.

8. APPLICATIONS
These optimization techniques can be used in applications such as the multidimensional knapsack problem (MKP), vehicle routing, MANET routing algorithms, economic load dispatch, the travelling salesman problem (TSP), texture feature selection, continuous optimization, scheduling, optimal data aggregation trees in wireless sensor networks, test data generation based on test path discovery, code coverage, optimization of manufacturing process models, and optimization of routing protocols.

9. CONCLUSION
A study has been provided in this paper on various optimization techniques, such as the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO) and the artificial bee colony (ABC). In the field of wireless sensor networks, these techniques have been used to improve network lifetime, energy efficiency and end to end delay.

FUTURE SCOPE
Hybrid algorithms can be studied and developed based on bio-inspired algorithms, and these can be used to improve the performance of wireless sensor networks. It is proposed in this paper to study the possibilities of optimization techniques to improve the performance of wireless sensor networks. The best features of individual optimization techniques can be taken into account, and further hybridization of these optimization techniques can also be implemented in future research.

REFERENCES
[1] Cobo, L., Quintero, A. and Pierre, S., 2010 "Ant-based routing for wireless multimedia sensor networks using multiple QoS metrics" Journal of Computer Networks, 54 pp. 2991–3010
[2] Eiben, A.E. and Smith, J.E., "Introduction to Evolutionary Computing" Springer, Natural Computing Series, 2nd Edition, 2007, ISBN: 978-3-540-40184-1
[3] Kulkarni, R.V. and Venayagamoorthy, G.A., 2011 "Particle Swarm Optimization in Wireless-Sensor Networks: A Brief Survey" IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 41(2) pp. 262–267
[4] Kuila, P., Gupta, S.K. and Jana, P.K., 2013 "A Novel Evolutionary Approach for Load Balanced Clustering Problem for Wireless Sensor Networks" International Journal of Swarm and Evolutionary Computation 12 pp. 48–56
[5] Liao, W.H., Kao, Y. and Wu, R.T., 2011 "Ant Colony Optimization based sensor deployment protocol for Wireless Sensor Networks" International Journal of Expert Systems with Applications, 38 pp. 6599–6605
[6] Lin, Y., Zhang, J., Chung, H., Ip, W.H., Li, Y. and Shi, Y.H., 2012 "An Ant Colony Optimization Approach for Maximizing the Lifetime of Heterogeneous Wireless Sensor Networks" IEEE
Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, 42(3) pp. 408–420
[7] Mardukhi, F., Bakhsh, N.N., Zamanifar, K. and Barati, A., 2013 "QoS decomposition for service composition using Genetic Algorithm" International Journal of Applied Soft Computing, 13 pp. 3409–3421
[8] Patnaik, S.S. and Panda, A.K., "Particle Swarm Optimization and Bacterial Foraging Optimization Techniques for Optimal Current Harmonic Mitigation by Employing Active Power Filter" Hindawi Publishing Corporation, Journal of Applied Computational Intelligence and Soft Computing, Volume 2012, Article ID 897127
[9] Sahoo, R.R., Singh, M., Sahoo, B.M., Majumder, K., Ray, S. and Sarkar, S.K., 2013 "A Light Weight Trust Based Secure and Energy Efficient Clustering in Wireless Sensor Network: Honey Bee Mating Intelligence Approach" International Journal of Procedia Technology, 10 pp. 515–523
[10] Super NEC's GA optimizer manual.
[11] https://softcomputing.net/bfoa-chapter.pdf, website accessed on 05/05/2014
Soft Computing and its various tools: A review

Nishi Sharma
Department of CSE
B.G.I.E.T., Sangrur
naisha_sharma@yahoo.com

Shweta Rani
Department of ECE
BGIET, Sangrur
shwetaranee@gmail.com
ABSTRACT
Soft Computing, as its name implies, is an evolutionary approach to producing intelligent systems which have human-like expertise. The human mind is incomparable to anything: it does not work on hard and fast rules, i.e. it behaves according to the situation, and Soft Computing follows the same approach. It is a breakthrough in science and engineering fields, as it can solve problems that could not be solved by conventional approaches, and it yields rich knowledge which enables intelligent systems to be constructed at low cost. This remarkable idea of producing machines which behave like human minds came to L.A. Zadeh in 1965 when he proposed fuzzy sets. After this, enhancement of this field never stopped, and these emerging ideas changed the whole world. Today we have microwave ovens and washing machines that can decide by themselves how to perform their tasks optimally. The main objective of this paper is to introduce the latest trends in soft computing, as well as hybrid computing, which leverages the advantages of two or more models.

Keywords
Computational Intelligence, Fuzzy Logic, Soft Computing, Neural Networks, Genetic Algorithm.

1. INTRODUCTION
To build machines which have human-like knowledge, intelligence is required. Intelligence basically derives the answer and does not simply arrive at the answer. Broadly, it is classified into two categories: Artificial Intelligence and Computational Intelligence. Artificial Intelligence techniques are time consuming and complex; under artificial intelligence we have hard computing. Today we require low-cost and less time-consuming solutions to a problem. This can be made possible by Soft Computing. In 1965, L.A. Zadeh started his work on Soft Computing with the initial concept of "Fuzzy Sets". From 1965 to the end of the 20th century, true progress on computation with imprecise concepts took place [1]. In 1968 he opened up several fields of research by writing an article on "Probability measures of fuzzy events" [2]. After the establishment of BISC (Berkeley Initiative in Soft Computing), researchers all over the world started work on various tools of Soft Computing. These include fuzzy logic, Neural Networks, Genetic Algorithms, Particle Swarm Optimization, etc. [3]. Soft Computing is defined as an approach to construct computationally intelligent systems which have human-like expertise and the ability to adapt and learn in a changing environment at low cost. A single computational approach cannot be used to produce these kinds of machines. Therefore, Soft Computing encompasses a group of different methodologies, such as fuzzy, neural and genetic algorithms, which have the information processing capabilities to solve real-life problems. Soft Computing techniques can tolerate imprecision, uncertainty and partial truth to produce HMIQ (High Machine Intelligence Quotient) [4]. AI techniques deal only with precision and certainty; in contrast, soft computing exploits the tolerance for imprecision, uncertainty and partial truth to achieve low solution cost, tractability, fewer errors, and results that agree better with reality. It is used in conjunction with rule-based expert systems in the form of if-then rules. Different approaches have been proposed in recent years, for example for detecting intrusion, where hybrid methods such as neuro-fuzzy, neuro-genetic, fuzzy-genetic, and neuro-fuzzy-genetic are among the most popular techniques [5]. These techniques are different from our conventional approach of Hard Computing, which is time consuming and complex. Soft Computing is a wide-ranging group of techniques: neural networks, fuzzy systems, genetic algorithms and many more. Each of these technologies has its own strengths. The main characteristic of soft computing is its capability to create hybrid systems based on the integration of various technologies; that is why it is called a collection of techniques. This integration provides reasoning methods that are complementary rather than competitive. Whenever various methods are used to implement a particular machine, it is called Hybrid Soft Computing. Hybrid Soft Computing combines the strengths and novelty of two or more computational models. The first and most successful hybrid approach so far is neuro-fuzzy systems [6], although more hybridizations are being developed with great success, for example genetic systems [7].
Figure 1: Hierarchy of Intelligence (Intelligence divides into Computational Intelligence and Artificial Intelligence; Artificial Intelligence covers Hard Computing, while Computational Intelligence covers Soft Computing, which comprises Neural Networks, Genetic Algorithms and Fuzzy Logic)

2. ARTIFICIAL NEURAL NETWORK
An artificial neural network is a continuing approach to mimicking the functioning of the brain. An artificial neuron is basically an engineering approximation of a biological neuron: a device with many inputs and one output. The network consists of a large number of simple processing elements that are interconnected with each other and arranged in layers [8][9]. Even the brain of an animal works so well that a machine cannot match it. Researchers are now aware that our brain, too, is structured: it works on neurons, and each neuron is very complicated in structure, with myriad parts, sub-systems and control mechanisms. Artificial neural networks attempt to follow a pattern of this kind so that they can work like the human mind.

Figure 2: Artificial Neural Network (neurons arranged in an input layer, a hidden layer and an output layer)

The artificial neural network simulates the basic functions of a biological neuron, as shown in Figure 2 [10]:

1. Input layer: Various inputs can be provided to the system, either from input files or directly from electronic sensors in real-time applications.

2. Hidden layer: At this step, each input is multiplied by a respective weight factor, after which summation is done over all inputs. Many different types of operation can be selected, depending on the requirement, and the weights on the input of each processing element can be modified as per the need.

3. Output layer: The output of the hidden layer is then transferred to the output layer, which converts it into the real output according to the devices available.

3. GENETIC ALGORITHMS
The term genetic algorithm, abbreviated as GA, was first used by John Holland [11] in 1975. Genetic algorithms (GAs) are computer programs that mimic the processes of biological evolution in order to solve problems and to model evolutionary systems. John Holland presented the theoretical framework for adaptation. A GA is basically a method for moving from one population of chromosomes to a newly generated population, using selection together with the genetically inspired operators of crossover, mutation and inversion [12]. It is basically a search technique that can map data for problems where no particular formula can be applied.

The operators followed by a genetic algorithm [13] are described by the flowchart in Figure 3: choose the possible solutions; if an optimum solution has been found, stop; otherwise apply selection, crossover and mutation, and repeat.

Figure 3: Flowchart of Genetic Algorithms
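The loop in the Figure 3 flowchart can be sketched as a short program. This is a generic illustration rather than code from the paper; the all-ones bit-string target, population size and mutation rate are illustrative assumptions.

```python
import random

TARGET = [1] * 16                       # illustrative goal: an all-ones bit string

def fitness(ind):
    # number of bits matching the target (higher is better)
    return sum(a == b for a, b in zip(ind, TARGET))

def genetic_algorithm(pop_size=30, generations=100, p_mut=0.02):
    # choose the possible solutions: a random initial population
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):       # optimum solution? -> stop
            break
        parents = pop[: pop_size // 2]           # selection: keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))   # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(len(child)):              # mutation: rare bit flips
                if random.random() < p_mut:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = genetic_algorithm()
print(fitness(best))  # typically reaches the optimum of 16
```

The same selection/crossover/mutation skeleton applies to any encoding; only the fitness function and chromosome representation change from problem to problem.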
1. Initialization: In the first step, all solutions to the problem are randomly generated; this set is called the population. Its size depends upon the type of problem. Basically, the whole range of possible solutions is considered.

2. Selection: A good selection method is applied to pick a proportion of the existing population. The choice of selection method should be given prime importance, because it decides the best possible solutions among the hundreds or thousands of candidates.

3. Reproduction: This step produces the next generation of solutions from those selected, by crossover and mutation. The best-fit individuals are selected for reproduction, and the new individuals are then evaluated. The process continues in this way until the least-fit population is weeded out.

4. Termination: The generation process terminates when all the combinations have been tried, a fixed number of generations has been reached, or some other termination condition is satisfied.

4. FUZZY LOGIC
The idea of fuzzy logic was conceived by Lotfi Zadeh in 1965 [1]. Basically, Fuzzy Logic (FL) is a multivalued logic that allows intermediate values to be defined between conventional evaluations like true/false, yes/no, high/low, etc. In a fuzzy logic system, a rule base is constructed to control the output variable. A fuzzy rule is a simple IF-THEN rule with a condition and a conclusion [14]. Zadeh's concept makes machines smarter, behaving to some extent according to the requirement. Fuzzy logic also mimics the human brain, as neural networks do, but the concept is different.

Figure 4: Fuzzy Logic (rules feeding a fuzzifier, inference engine and defuzzifier)

1. Definition: The problem to be solved is defined properly, i.e. there should be a basic understanding of what has to be done.

2. Using the rule-based structure of fuzzy logic, the problem is broken into a series of IF-THEN-ELSE rules that define the desired output for the given inputs.

3. Evaluation and testing are done; moreover, the inputs can be changed to obtain the desired results.

5. HYBRID TECHNIQUES
Hybridization of intelligent systems is a promising field of modern intelligence for the development of next-generation controllers [16]. Integration of various soft computing techniques can solve complex problems in the real world. Hybrid techniques provide more robust and reliable problem-solving models than standalone models. Integrating these techniques enhances the overall strengths and lessens the weaknesses, thereby helping to solve the overall control problem in an effective way. Various strategies, models and designs have been suggested by researchers to integrate various intelligent systems for practical applications. The main goal of integration is to take advantage of the strengths of each model to achieve effectiveness and efficiency. Hybridization can be done in any manner, depending upon the requirement. Neuro-fuzzy, genetic-fuzzy and neuro-genetic systems are useful techniques for various real-time complex problems [16, 17].

6. CONCLUSION
This paper has presented three methods of Soft Computing. A combining (hybrid) approach has also been discussed to improve performance on real-world applications. The primary contribution of this paper lies in showing the possibilities of the various techniques of soft computing, which are not as strict as our mathematical approaches.

REFERENCES
[1] Zadeh, L.A., 1965 "Fuzzy Sets" Information and Control, pp. 338–353.
[2] Zadeh, L.A., 1968 "Probability measures of fuzzy events" Journal of Mathematical Analysis and Applications, Vol. 23, No. 2.
[3] Dote, Y. and Ovaska, S.J., "Industrial Applications of Soft Computing: A Review" Proceedings of the IEEE, Vol. 89, No. 9, 2001.
[4] Trillas, E., "Lotfi A. Zadeh: On the man and his work" Scientia Iranica D (2011), pp. 574–579.
[5] Prasad, B., "Introduction to Neuro-Fuzzy Systems" Vol. 226 of Advances in Soft Computing Series, Springer-Verlag, 2000.
[6] Cordón, O., Alcalá, R., Alcalá-Fdez, J. and Rojas, I., "Genetic Fuzzy Systems: What's next?" IEEE Trans. Fuzzy Systems, Vol. 15, pp. 533–535.
[7] Maind, S.B. and Wankar, P., "Research Paper on Basic of Artificial Neural Network" IJRITCC, Vol. 2, Issue 1, pp. 96–100.
[8] Li, Eldon Y., "Artificial Neural Networks and their Business Applications" Taiwan, 1994.
[9] Stergiou, C. and Siganos, D., "Neural Networks".
[10] Maind, S.B. and Wankar, P., "Research Paper on Basic of Artificial Neural Network" IJRITCC, Vol. 2, Issue 1, pp. 96–100, 2014.
[11] Holland, J., "Adaptation in Natural and Artificial Systems" University of Michigan Press, 1975.
[12] Reeves, C., "Genetic Algorithms" School of Mathematics and Information Sciences.
[13] Mitchell, M., "Genetic Algorithms: An Overview" MIT Press, pp. 31–39, 1995.
[14] Kose, U., "Fundamentals of Fuzzy Logic with an Easy-to-use, Interactive Fuzzy Control Application" IJMER, Vol. 2, Issue 3, pp. 1198–1203, 2012.
[15] Joseph, B., "Paradigm shift: an introduction to fuzzy logic" IEEE Potentials, 2006.
[16] Malhotra, R., Singh, N. and Singh, Y., "Soft Computing Techniques for Process Control Applications" IJSC, Vol. 2, No. 3, pp. 32–44, 2011.
[17] Cho, S.B., "Fusion of Neural Networks with Fuzzy Logic and Genetic Algorithm" ICAE, pp. 363–372, 2002.
Passive Optical Networks Employing Triple Play Services: A Review

Harneet Kaur
M.Tech student (ECE)
Punjabi University
Patiala, India
harneet9556@gmail.com

Harmanjot Singh
Assistant Professor
Punjabi University
Patiala, India
harman.dhaliwal.nba@gmail.com
ABSTRACT
Passive optical networks in the scenario of triple play services are reviewed in this paper. In triple play services there is a combination of data, video and audio signals. A passive optical network is a point-to-multipoint architecture and is regarded as one of the best choices for the broadband access network of the future. By developing this network, larger transmission capacity at higher bit rates and longer transmission distances can be achieved.

Keywords
Optical Line Termination (OLT), Optical Network Unit (ONU), Passive Optical Networks (PON).

1. INTRODUCTION
With the advancement of communication systems, there is a need for large bandwidth to send more data at higher speed. Residential subscribers demand high speed networks for voice and media-rich services. Similarly, corporate subscribers demand broadband infrastructure so that they can extend their local-area networks to the Internet backbone. This demands networks of higher capacity at lower cost. Our current "age of technology" is the result of many brilliant inventions and discoveries, but it is our ability to transmit information, and the media we use to do it, that is perhaps most responsible for its evolution [1]. Progressing from the copper wire of a century ago to today's fiber optic cable, our increasing ability to transmit more information, more quickly and over longer distances, has expanded the boundaries of our technological development in all areas [2]. Optical communication technology provides the solution for higher bandwidth. By developing optical networks, larger transmission capacity, higher bit rates and longer transmission distances can be achieved. A PON uses a dedicated optical fiber to provide virtually unlimited bandwidth, without using any active component within the network. It offers a true triple play service of voice, video and data on one network.

2. FIBER TO THE HOME (FTTH)
The access network, also known as the "first-mile network," connects the service provider central offices (COs) to businesses and residential subscribers. This network is also referred to in the literature as the subscriber access network, or the local loop [3]. Demand for larger bandwidth and triple-play delivery to customers (voice, data, and video) ignited competition among service providers. Thus service providers got motivated to invest in FTTx (Fiber To The x). FTTx is an acronym for a number of optical access technologies, such as Fiber to the Node (FTTN), Fiber to the Curb (FTTC), Fiber to the Business, or Building, (FTTB), Fiber to the Home (FTTH), and Fiber to the Premises (FTTP). These networks have various advantages, like long reach, high bandwidth and low loss [4]. There are mainly two FTTH architectures of current interest: the Active Optical Network (AON) and the Passive Optical Network (PON).

2.1 Active optical network
An active optical system uses electrically powered switching equipment, such as a router or a switch aggregator, to manage signal distribution and direct signals to specific customers. This switch opens and closes in various ways to direct the incoming and outgoing signals to the proper place. In such a system, a customer may have a dedicated fiber running to his or her house [5].

2.2 Passive optical network (PON)
A passive optical network (PON) is a telecommunications network that uses point-to-multipoint fiber to the premises, in which unpowered optical splitters enable a single optical fiber to serve multiple premises by dividing the signal towards each user. The network is called passive because no powered element is used between the central office and the subscribers. Hence the cost of the network and its installation is reduced. A PON reduces the amount of fiber and central office equipment required compared with point-to-point architectures. PON is a form of fiber-optic access network, and it offers a true triple play service of voice, video and data on one network [5].

3. PON ARCHITECTURE
The PON architecture consists of three main network elements: the Optical Line Terminal (OLT), the passive optical splitter and the Optical Network Unit (ONU). The optical line terminal (OLT) is located at the central office (CO). It modulates the light wave and transmits it through fiber to the optical network units (ONUs), which are located at the end users. It is designed to provide virtually unlimited bandwidth to the subscriber. We can describe it as a point-to-multipoint (P2MP) topology, as it uses a single optical fiber to serve multiple users, usually between 32 and 128 [9-13]. A PON is a single, shared optical fiber (the shared feeder fiber) that uses a passive optical splitter to divide the signal towards individual subscribers, as shown in Figure 1.
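Because one feeder fiber is shared among N users through a passive splitter, an ideal 1:N splitter alone costs 10*log10(N) dB of optical power, which directly limits reach and split ratio. The short sketch below works through this arithmetic; the launch power, receiver sensitivity, per-kilometre fibre loss and excess loss used are illustrative assumptions, not values from the reviewed papers.

```python
import math

def ideal_split_loss_db(n_users):
    """Ideal power-splitting loss of a 1:N passive splitter, in dB."""
    return 10 * math.log10(n_users)

def power_budget_db(tx_dbm, rx_sens_dbm, fibre_km, n_users,
                    fibre_loss_db_per_km=0.35, excess_db=1.0):
    """Remaining link margin after splitter, fibre and excess losses (assumed values)."""
    loss = ideal_split_loss_db(n_users) + fibre_loss_db_per_km * fibre_km + excess_db
    return (tx_dbm - rx_sens_dbm) - loss

# a 1:32 split ideally costs 10*log10(32) dB
print(round(ideal_split_loss_db(32), 2))          # 15.05
# e.g. +3 dBm launch, -28 dBm sensitivity, 20 km reach, 32-way split
print(round(power_budget_db(3, -28, 20, 32), 2))  # 7.95 dB of margin left
```

This kind of budget is why the standards below quote both a maximum reach (10 or 20 km) and a maximum split ratio: doubling the split ratio adds roughly 3 dB of loss, which must be traded against fibre length.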
3.1.3 EPON (Ethernet passive optical network)


ONU EPON is IEEE 802.3 standard, and is a category of networks
Fiber optic in based on the Ethernet technique. An EPON combines the low-
distribution cost Ethernet equipment and fiber infrastructure, and transmits
Ethernet data frames directly. It can provide 1 Gbps capacity
Shared feeder fiber in both upstream and downstream directions. These features
enable EPONs to transmit data, voice and video traffic. EPON
also uses the 1310 nm window for upstream and 1490 nm
OLT window for downstream transmission. This standard allows
 . transmission in downstream and upstream under only 1
 . Central single-mode fiber with a maximum range of 10 km between
splitter and ONU, and there is provision for extending the
 . Splitter office distance to 20 km. The EPON standard establishes a dedicated
wavelength for the broadcast of video from the OLT to the
ONUs. The wavelengths are:

Downstream channel: λ=1480-1500 nm


Figure 1: Passive Optical Network (PON) Upstream channel: λ=1260-1360 nm
Video: λ=1550-1560 nm [7]

3.1 PON standards 3.1.4 GPON (Gigabit PON)


There are several PON standards like APON, BPON, EPON The more advanced standard which is still working is
or GEPON, according to the distance between OLT and ONU evolution of the BPON. To meet rapidly growing demand and
and data rates in downstream and upstream transmission. to work better with changes in communication technologies a,
ITU-T created the series of standards ITU-T G.984.x for
3.1.1 APON (Asynchronous transfer mode PON) Gigabit capacity PON, which are the basis of the standard
These networks are referred as APON (ATM Passive Optical GPON (Gigabit PON). Varied transmission rates are allowed
Network), and are standardized under ITU-T standard G.983.1 by GPON in the range between 622 Mbps to 2,488 Gbps in
based on ATM cell transmission. It was the first network that
was defined by FSAN (Full Service Access Network). APON
offers a maximum rate of 155 Mbps shared between the
connected ONUs. At first it was limited to 155 Mbps, which
was later increased to 622 Mbps. ATM PON connects up to 32
subscribers to the PON [6].

3.1.2 BPON (Broadband Passive Optical Network)
BPON (Broadband Passive Optical Network) is an ITU-T
G.983.x standard. It emerged as an evolution of APON and has
the same speed limitation. BPON networks are also based on
ATM cell transmission, but they differ from APON in that they
support other broadband standards. BPON networks were first
defined with a fixed transmission rate of 155 Mbps for both
uplink and downlink, but were later amended to introduce
asymmetric channels:

Downlink: 622 Mbps
Uplink: 155 Mbps

For 1 fiber per ONU, sharing upstream and downstream:
Downstream channel: λ = 1480-1500 nm
Upstream channel: λ = 1260-1360 nm
Video: λ = 1550-1560 nm

For 2 fibers per ONU, one for upstream and one for downstream:
Downstream channel: λ = 1260-1360 nm
Upstream channel: λ = 1260-1360 nm
Video: λ = 1550-1560 nm [6].

the downstream channel. Like BPON, this standard allows both
symmetric and asymmetric data transmission, where the
transmission rates for each are:

Symmetric transmission: rates between 622 Mbps and
2.488 Gbps are offered for both the downstream and upstream
channel.

Asymmetric transmission: different rates for the downstream
and upstream channel:
Downstream channel: up to 2.488 Gbps
Upstream channel: up to 1.244 Gbps

The working wavelengths set by the GPON standard vary
depending on whether 1 or 2 fibers are used for each ONT,
although for both cases it sets a dedicated wavelength for video
broadcast from the OLT to the ONTs, different from those used
for voice and data transmission. For 1 fiber per ONT, shared
for transmission and reception:
Downstream channel: λ = 1480-1500 nm
Upstream channel: λ = 1260-1360 nm
Video: λ = 1550 nm

For 2 fibers per ONT, one for transmission and another one for
reception:
Downstream channel: λ = 1260-1360 nm
Upstream channel: λ = 1260-1360 nm
Video: λ = 1550 nm [8].

The following table summarizes the common standards
available for PON.
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Table 1. Common PON Standards

Technology | Standard | Distance (km) | Downstream (Mbps) | Upstream (Mbps)
BPON       | G.983    | 20            | 155, 622, 1244    | 155, 622
EPON       | 802.3    | 10            | 1244              | 1244
GPON       | G.984    | 20            | 1244, 2488        | 155 to 2488

3.2 WDM PON architecture at 5 Gbps accommodating 32 users at 15 km reach
Evaluation of a WDM-PON (wavelength division multiplexed) access network
architecture accommodating 32 users was carried out successfully. Results
affirmed that network performance degrades more linearly over increasing
secondary SMF (single mode fiber) lengths when avalanche photodiodes (APDs)
were used, compared to the exponential decay in signal quality found when
using PIN photoreceivers. Simulation work revealed that performance gains of
around 15 km in system reach and up to 5 Gbps in data-carrying capacity per
user can be attained if APDs are used at the receiver side in the downstream
direction [9].

3.3 Bidirectional fiber accommodating 32 users and covering 20 km at 155 Mbps and 622 Mbps
Extensive research had been carried out to evaluate a Broadband Passive
Optical Network (BPON) for both downstream and upstream traffic, offering
highly scalable solutions to service providers to make fiber reach the end
user. The system was developed to accommodate up to 32 users using
bidirectional fiber of length 20 km. The system was analyzed on the basis of
data rate, fiber length, coding technique, number of users and wavelengths,
and their effects on bit error rate (BER) as the key performance parameter.
Remarkable results had been achieved, and a novel relation had been developed
between the data rate and the accommodated users. The transmitter was
configured to generate both 155 Mbps and 622 Mbps data rates for downstream
traffic. It had been revealed that, in the downstream direction, doubling the
number of users only requires switching to the lower data rate in order to
maintain identical BER over the same fiber length. Additionally, the relation
had also been tested on Return-to-Zero (RZ) and Non-Return-to-Zero (NRZ)
coding and was found to be unaffected by the coding format used [10].

3.4 FTTH triple play services using GEPON architecture for 56 users at 20 km reach and at 2 Gbps
Evaluation and comparison of a fiber-to-the-home (FTTH) GEPON (gigabit
Ethernet passive optical network) link design for 56 subscribers at 20 km
reach and a 2 Gbps bit rate was carried out to provide residential
subscribers with triple play services. A 1:56 splitter was used as the PON
element that creates communication between the Central Office and the
different users, and a boosting amplifier is employed before the fiber span,
which tends to decrease BER and allows more users to be accommodated. This
architecture was investigated for different values of data rate from the CO
(Central Office) to the PON in terms of BER (Bit Error Rate). BER was
considered the major technical issue in realizing the GEPON-based FTTH access
network. The simulation work reports the case of 56 users at 2 Gbps; on
further increasing the data rate of the system, say to 5 Gbps, there was a
sharp increase in BER. Similarly, in the variation of BER with respect to
transmission distance, BER increases as the transmission distance
increases [11].

3.5 Long reach (100 km) 32 channel FTTH downstream link employing triple play at 2.5 Gb/s
The performance of a high capacity, long reach, 32 channel FTTH downstream
link employing triple play services had been investigated. DWDM was employed
for bandwidth optimization. The triple-play service was realized as a
combination of data, voice, and video signals. The Internet component is
represented by a data link with a high speed of 2.5 Gb/s downstream. The
voice component was represented as VoIP and then combined with the data
component. The video component was represented as an RF video signal. The
reach of the WDM-PON system can be severely limited by chromatic dispersion;
therefore, 80 km of non-linear fiber was employed in combination with 20 km
of reverse dispersion fiber to negate the accumulated chromatic dispersion,
which ensured a long reach of the modeled FTTH system. Investigations
revealed effective bandwidth optimization using DWDM. High quality factor and
low BER results confirmed the feasibility of the proposed high capacity, long
reach FTTH link. The investigations further revealed that the system reach
can be extended up to 100 km with efficient chromatic dispersion
management [12].

3.6 Bi-directional PON in scenario of triple play service covering 40 km at 10 Gbps for 128 users
The performance of a bi-directional passive optical network (BPON) had been
evaluated and compared at different bit rates in the scenario of triple play
service. The triple-play service is realized as a combination of data, voice
and video signals. This architecture was investigated for symmetrical data
traffic for uplink and downlink transmission, and its performance was also
evaluated in terms of Q-factor and eye height at different transmission
distances. The Q-factor results show acceptable performance at a 10 Gbps data
rate for downstream and upstream transmission, as the system accommodates 128
optical network units (ONUs) covering a transmission distance of 40 km [13].

4. CONCLUSION
PON is a point-to-multipoint mechanism and is one of the best choices for the
broadband access network to achieve higher data rates and longer transmission
distances. GPON is the most advanced PON protocol and offers higher bandwidth
and longer transmission distance when compared to ATM- and Ethernet-based PON
technologies. Link length and data rate can be increased further to meet the
increasing demand for triple play services. Use of amplifiers increases the
link length and
system performance, but it adds cost and noise to the system. Therefore a
correct amplifier design is required to amplify the signal in the Gigabit
Passive Optical Network. Bidirectional Passive Optical Networks are more
useful, but with the increase in number of users, data rate, link length and
non-linear effects, system performance degrades. Work has to be done on these
parameters for better bidirectional GPON systems.

REFERENCES
[1] P. Piskarskas, A. P. Stabinis, and V. Pyragaite, "Ultrabroad bandwidth of optical parametric amplifiers," IEEE J. Quantum Electron., vol. 46, no. 7, pp. 1031-1038, July 2010.
[2] J.H. Lee, et al., "Extended-reach WDM-PON based on CW supercontinuum light source for colorless FP-LD based OLT and RSOA-based ONUs," J. Opt. Fiber Technol., vol. 15, pp. 310-319, March 2009.
[3] Jingjing Zhang and Nirwan Ansari, "Scheduling hybrid WDM/TDM passive optical networks with nonzero laser tuning time," IEEE/ACM Trans. Netw., pp. 1014-1027, April 2011.
[4] Xianbin Yu, et al., "System Wide Implementation of Photonically Generated Impulse Radio Ultra-Wideband for Gigabit Fiber-Wireless Access," Journal of Lightwave Technology, vol. 31, pp. 264-274, January 2013.
[5] Konstantinos Kanonakis, "Offset-Based Scheduling with Flexible Intervals for Evolving GPON Networks," Journal of Lightwave Technology, vol. 27, pp. 3259-3268, August 2009.
[6] Sang-Heung Lee, et al., "A Single-Chip 2.5-Gb/s Burst-Mode Optical Receiver with Wide Dynamic Range," IEEE Photonics Technology Letters, vol. 23, pp. 85-87, January 2011.
[7] Hesham A. Bakarman, et al., "Simulation of 1.25 Gb/s Downstream Transmission Performance of GPON-FTTx," International Conference on Photonics, pp. 1-5, July 2010.
[8] Jingjing Zhang and Nirwan Ansari, "Scheduling hybrid WDM/TDM passive optical networks with nonzero laser tuning time," IEEE/ACM Trans. Netw., pp. 1014-1027, April 2011.
[9] Erik Weis, et al., "GPON FTTH trial - lessons learned," Proc. SPIE-OSA-IEEE Asia Communications and Photonics, vol. 7633, pp. 6330J-1 to 6330J-7, July 2009.
[10] S.F. Shaukat, U. Ibrahim, and Saba Nazir, "Monte Carlo Analysis of Broadband Passive Optical Networks," IDOSI Publications, vol. 12, no. 8, ISSN 1818-4952, 2011.
[11] D. Kocher, R.S. Kaler, and R. Randhawa, "Simulation of fiber to the home triple play services at 2 Gbit/s using GE-PON architecture for 56 ONUs," Optik, vol. 121, pp. 5007-5010, May 2013.
[12] J.S. Malhotra, M. Kumar, and A.K. Sharma, "Performance optimization of high capacity long reach 32 channel FTTH downstream link employing triple play services," Optik, vol. 124, pp. 2424-2427, 2013.
[13] Simranjit Singh, "Performance evaluation of bi-directional passive optical networks in the scenario of triple play service," Optik, vol. 125, pp. 5837-5841, 2014.
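Two quick back-of-the-envelope checks tie together the numbers quoted throughout this review: the per-subscriber share of a PON line rate behind a splitter, and the standard Gaussian-noise mapping between Q-factor and BER, BER = 0.5 * erfc(Q / sqrt(2)). The sketch below is illustrative only; the function names are our own, and the 1:32 split is just the example ratio used by the standards above.

```python
import math

def per_onu_rate_mbps(line_rate_mbps: float, split_ratio: int) -> float:
    """Average downstream capacity per subscriber on a shared PON splitter."""
    return line_rate_mbps / split_ratio

def ber_from_q(q: float) -> float:
    """Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# GPON line rate (2488 Mbps) shared over a 1:32 split, per Table 1
print(f"GPON, 1:32 split: {per_onu_rate_mbps(2488, 32):.2f} Mbps per ONU")

# Q = 6 corresponds to the classic BER ~ 1e-9 acceptance threshold
print(f"Q = 6  ->  BER = {ber_from_q(6.0):.2e}")
```

Note how the Q-to-BER mapping explains why the papers above treat a "sharp increase in BER" and a drop in Q-factor as the same observation: BER grows very steeply as Q falls.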
Review of Static Light path Design in WDM network

Harpreet Kaur, BBSBEC, Fatehgarh Sahib, reet.mavi82@gmail.com
Munish Rattan, GNDEC, Ludhiana

ABSTRACT: As use of the internet is increasing rapidly, we need networks
supporting high bandwidth and data rates. The need for high bandwidth drives
the need for WDM networks and for routing information over them. In this
paper we review various routing and wavelength assignment algorithms used in
the literature. We also describe the contribution of nature inspired
techniques in enhancing the performance of optical networks.

Keywords: Genetic Algorithm, Differential Evolution, Variable Neighborhood
Search, Multi-objective, Optimization.

1. INTRODUCTION
In WDM networks, the whole capacity is divided into multiple channels that
have different wavelengths. WDM networks consist of WDM nodes across which
optical traffic can be transmitted using light paths. If there are no
wavelength converters, then a light path must use the same wavelength over
all the fiber links it traverses; this is known as the wavelength continuity
constraint. Light paths that share a physical link cannot be assigned the
same wavelength; this is called the wavelength clash constraint [1, 6].

For a given set of light paths to be established, the process of finding
routes for the light paths and assigning wavelengths to the routes is called
the routing and wavelength assignment (RWA) problem. The RWA problem can be
considered under two types of traffic, i.e. static and dynamic. Static RWA
means that connection requests are known in advance; one has to route each
request and allot a suitable wavelength for it. This is also called offline
RWA. Dynamic RWA means that connection requests arrive in random fashion with
random settling and tearing times. Ultimately, an objective function needs to
be optimized in either type of RWA.

The rest of the paper is organized as follows. In the next section we present
a review of already existing and proposed techniques for static routing and
wavelength assignment in optical networks. Further, we discuss the use of
nature inspired techniques for routing and wavelength assignment in optical
networks. Finally we provide the summary with future scope to conclude the
paper.

2. TECHNIQUES FOR STATIC RWA
In static routing, the light paths are known in advance, so the main
objective is to minimize the number of wavelengths when the traffic requests
are routed for a given topology. The problem may be multi-objective, i.e.
minimizing blocking probability, throughput or failures among the
connections, etc. [2].

According to the objective taken, a number of techniques have been proposed
in the literature. Mostly, RWA is solved by decomposing the problem into two
parts, i.e. the routing problem and the wavelength assignment problem. The
proposed routing algorithms are:

2.1 Shortest Path Routing (SPR): In this type, firstly the shortest path is
selected using the Dijkstra algorithm based on the current state of the
network. Path length is measured in terms of number of hops. Then a
wavelength is assigned using the first fit algorithm: if the request is
unsuccessful using the first wavelength, then the second wavelength is tried,
and the process repeats. If the request is unsuccessful for all wavelengths,
then the call is said to be blocked. In this scheme there will be
unconditional delay due to the selection of shortest paths [14, 11].

2.2 Resource Utilization based Routing Algorithms (RUR): In this algorithm, a
set of K shortest paths is calculated based on minimum resource utilization,
i.e. taking into account the number of regenerators, hop count and lastly
length [14].

2.3 Bin packing based Routing Techniques [6]:
2.3.1 FF RWA: In this routing, only one copy of graph G is created and higher
indexed bins are formed if they are required. The connection request is
routed in this graph (bin Gi) if it has a shortest path between Si
and Di such that its length, denoted by Pi, is less than H. Wavelength i is
assigned to the route; once the bin is used to route the request, all the
edges of Pi are deleted. Once all the edges in a bin vanish, it means the bin
has no capacity left to route any other request.

2.3.2 BF RWA: In this routing, if a number of bins have been created, then
the best fitted bin is the one having the shortest path between the source
and destination pair compared to the other bins.

2.3.3 FFD RWA (First Fit Decreasing RWA): In this technique, the longer light
path is routed first in a bin, and the remaining space is filled with shorter
light paths.

2.3.4 BFD RWA (Best Fit Decreasing RWA): It first arranges requests in
decreasing order of their shortest paths in G and then follows BF-RWA. These
bin based routing algorithms minimize not only the number of wavelengths but
also the physical length between source and destination.

These bin based algorithms not only reduce the hop length and the number of
wavelengths required, but also reduce the chance of a light path request
being blocked.

2.4 Fixed Alternate Routing: In this routing, a routing table is formed which
consists of all the possible routes between source and destination, starting
from the shortest path up to the largest possible route. When a connection
request arrives, a route is selected, starting from the shortest path, on
which a wavelength can be allotted. If there exists no route on which a
wavelength can be allotted, then the connection is blocked [2].

This technique makes the task of setting up and tearing down light paths
easier, provides fault tolerance, and reduces the blocking probability
compared to shortest path routing.

2.5 Adaptive Routing: In this routing, when a connection request arrives, the
route selected among the N shortest paths is the one which has the maximum
number of free wavelengths for a single hop connection. And in a multi hop
connection, the route selected for a source destination pair is the one which
does not have any common wavelength among the N other shortest paths. This
routing improves the blocking probability of the network [15].

This technique improves the link utilization and hence improves the capacity
of the network.

2.6 Waveband Switching Routing: In this technique, firstly K shortest paths
are selected between source destination pairs when the connection request
arrives; then rerouting techniques are employed so as to reduce the number of
ports. It also outperforms traditional routing in terms of performance [4].

2.7 Priority and Maximum Revenue based routing and wavelength assignment: In
this technique, firstly the connection requests are divided into two sets,
i.e. HPM (high priority matrix) and LPM (low priority matrix). Light paths in
HPM must be formed, whereas light paths in LPM can be set up to optimize
revenue when there are not enough resources to accommodate all. Since
requests in HPM need to be fulfilled, their cost contribution will always be
fixed; the problem is to maximize the cost contribution from LPM. For this,
two algorithms are proposed, i.e. SILP and SR.

Maximize

    sum over (s,d) in D of [ sum over 1 <= i <= n of Esd^i * Psd^i + Bsd * PRI ]

with the following constraints:

    sum (Esd^i + Bsd^i) <= 1,  for all (m,n) in E      (1)
    sum Esd^i <= Theta_sd,     for all (s,d) in D      (2)
    sum Bsd^i <= lambda_sd,    for all (s,d) in D      (3)

Equation 1 represents the constraint that no two routes can use the same
link. Equations 2 and 3 ensure that the number of routes found is less than
the desired number. The value of PRI is kept larger than the highest market
price so that the algorithm fulfills all requests in HPM [16].

SR Algorithm: In this algorithm, the connection requests in HPM are arranged
in decreasing order of their shortest path and stored in list A, and the
connection requests in LPM are arranged in decreasing order of their cost and
stored in list B. Firstly, a route and wavelength for the first request in
list A is found; then the resources used by it are removed from the network
resource availability data. The process is repeated for the next request in
list A and so on. If the algorithm cannot find any light path for a request,
then it terminates. After all requests of list A are fulfilled, the algorithm
works to find a light path for the first request in list B. If it succeeds,
the network resource availability is updated and the process repeats [16].

2.8 Alternate Shortest Path Algorithm: In this algorithm, for every
connection request a route is selected on the shortest path with at least one
unoccupied channel; if that is not possible, an alternate shortest path with
an unoccupied channel is selected; if still not possible, then the call is
blocked [11].
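Most of the schemes above combine two primitives: a shortest-path search and a wavelength scan that respects the wavelength continuity constraint. The sketch below illustrates the Dijkstra-plus-first-fit idea of Section 2.1; the graph representation, function names and the 4-node ring topology are our own illustrative choices, not taken from any cited paper.

```python
import heapq

def dijkstra_path(graph, src, dst):
    """Hop-count shortest path; `graph` maps node -> iterable of neighbours.
    Returns the node list from src to dst, or None if dst is unreachable."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v in graph[u]:
            if d + 1 < dist.get(v, float("inf")):
                dist[v], prev[v] = d + 1, u
                heapq.heappush(heap, (d + 1, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def first_fit_assign(path, used, num_wavelengths):
    """First fit under wavelength continuity: return the lowest-indexed
    wavelength free on every link of `path`, marking it busy, or None if
    the call is blocked.  `used` maps link -> set of busy wavelengths."""
    links = [frozenset(e) for e in zip(path, path[1:])]
    for w in range(num_wavelengths):
        if all(w not in used.get(l, set()) for l in links):
            for l in links:
                used.setdefault(l, set()).add(w)
            return w
    return None  # blocked

# 4-node ring 0-1-2-3-0: two 2-hop routes exist between nodes 0 and 2
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
route = dijkstra_path(ring, 0, 2)
usage = {}
print(route)                              # e.g. [0, 1, 2]
print(first_fit_assign(route, usage, 4))  # lowest free wavelength: 0
print(first_fit_assign(route, usage, 4))  # next call on same route gets 1
```

The second call on the same route illustrates the continuity constraint: wavelength 0 is now busy on both links, so the scan falls through to wavelength 1.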
2.9 Maximum empty channel routing: In this type of routing, as soon as the
call request arrives, all the possible paths are inspected and the route is
established on the one having the maximum number of empty channels [11].

2.10 Resource Utilization based Routing Algorithm: It is a fixed alternate
routing in which a set of 'n' shortest paths is selected which uses minimum
resources, provided the hop count, path length and number of regenerators are
minimized. In this, a weight is assigned to each link between nodes i and j:

    Cij = ((W - aij - 2)/W) * ((R - rj - 2) * Lij / R)   when Rj = 1
    Cij = ((W - aij - 2)/W) * Lij / R                    when Rj = 0

    Csd = sum of Cij

where
W = total number of equipped wavelengths
aij = number of available wavelengths on link (i, j)
R = total number of equipped regenerators
Rj = 1 if regeneration is required at node j
Rj = 0 if regeneration is not required at node j
rj = number of equipped regenerators
Lij = length of link (i, j) [14].

2.11 Resource balance based Link Disjoint Routing (RBR):
In this routing, link disjoint paths are selected provided a balance is kept
between the minimum resources used and the disjointness level of the paths.
If a completely disjoint path is used, it will definitely use more resources,
so it is better to use a partially disjoint path. In this, a weight is
calculated for every path:

    Coff^i = 2 * Rnode^i * (H^i - Rnode^i) * 10 * log(L^i)

where
Rnode^i = the necessary number of 3R regenerators in the path
H^i = total number of hops
L^i = total length

The path having minimum Coff^i is selected as the first best path, and the
2nd best path is evaluated using

    Coff^i = Coff^i + Cbest * [Lcommon] / [Llast]

where
Cbest = weight of the last best path selected
Lcommon = set of common links with the best path selected
Llast = links of the last best path selected [14].

2.12 Resource Optimization based with customized Link Disjoint Degree Routing (ROR):
In this technique, a path is chosen according to the priority of minimum
network resource consumption, provided the required link disjoint degree 'D'
is met, where

    Ddegree = 100 * (1 - (Llast - Lpath)/Llast)

Llast = links of the last best path selected
Lpath = links of the path under consideration [14].

3. EVOLUTIONARY TECHNIQUE FOR SINGLE OBJECTIVE STATIC RWA
The static RWA problem in WDM optical networks has also been solved with a
genetic algorithm for a single optimization objective, i.e. minimizing the
number of wavelengths. A hybrid approach is used for the initialization of
the population; this approach depends on the K shortest paths for every
source destination pair. A special cost function based on the frequency of
occurrence of an edge in different S-D paths is used to find the fitness of a
chromosome. An m-point crossover is used to maintain diversity in the
solution space. Wavelength assignment is done using a graph coloring
technique. This technique outperforms the first fit algorithm [5].

4. MULTI-OBJECTIVE RWA USING HYBRID EVOLUTIONARY APPROACH
In this technique two objective functions are optimized, i.e. maximizing the
number of accepted commodities and minimizing the number of wavelengths on
each network edge. In this technique a genetic algorithm is used for routing
and minimum degree first for wavelength assignment (GA-MDF) [8], and a fast
non dominated sorting GA (Genetic Algorithm) is used to search for non
dominated solutions (i.e. best solutions). Normally there are a number of non
dominated solutions, so in this algorithm a pruned optimal mechanism is used
to reduce the number of non dominated solutions [3, 13].

The objective function used is

    Fobj = Wc * (Q - Qa)/Q + Ww * (Ka/Kmax)

Wc = weight for maximizing the number of commodities
Ww = weight for minimizing the required number of wavelengths
Q = total number of requests
Qa = total number of accepted requests
Ka = minimum number of required wavelengths
Kmax = total number of wavelengths

Advantages: Results of NSGA-II are more diverse than the weighted sum
approach.

Disadvantages: It is more time consuming [9].

A hybrid evolutionary technique is used for static RWA in which two different
multi objective algorithms are combined. One algorithm is population based,
i.e. differential evolution [12], and the other is variable neighborhood
search. Differential evolution basically adds the weighted difference between
two population vectors to a third vector. Also, the concept of a Pareto
tournament (DEPT) is added to compare individuals, keeping only the non
dominated solutions in the population. In DEPT, each individual receives a
scalar value based on the number of individuals it dominates and the number
of individuals by which it is dominated.

Variable neighborhood search [7] tries to escape the local optima trap by
modifying the neighborhood space. This algorithm is changed to adapt it to
the multi-objective context, i.e. MO-VNS. Results of the algorithm are
compared with various multi-objective ant colony optimization algorithms and
with NSGA-II (fast non dominated sorting genetic algorithm), and proved to be
an improvement [1].

5. CONCLUSION & FUTURE WORK
In this paper we have reviewed the already proposed techniques for the static
RWA problem in optical networks. We have also discussed the significance of
using nature inspired techniques for improving the performance of the RWA
problem. A comparative study of all proposed algorithms revealed that the
usage of hybrid evolutionary techniques for multi objective optimization is
quite promising for static RWA. So, a future line of research in optical
networks is the use of hybrid evolutionary techniques for multi-objective
optimization with dynamic traffic patterns.

REFERENCES
[1] A.R. Largo, M.A. Vega-Rodriguez, 2013. Applying MOEAs to solve the static Routing and Wavelength Assignment problem in optical WDM networks. J. Engineering Applications of Artificial Intelligence, 26, pp. 1602-1619.
[2] H. Zang, J.P. Jue, and B. Mukherjee, 2000. A Review of Routing and Wavelength Assignment Approaches for Wavelength Routed Optical WDM Networks. Baltzer Science Publishers, pp. 47-60.
[3] H.A. Taboada, D.W. Coit, 2008. Multiobjective scheduling problems: determination of pruned Pareto sets. IIE Transactions, 40, pp. 552-564.
[4] L. Guo, X. Wang, W. Ji, W. Hou, T. Wu, and F. Jin, 2008. A New Waveband Switching Routing Algorithm in WDM Optical Networks. In Proceedings of International Conference on Advanced Communication Technology, Gangwon-Do, pp. 2151-2154.
[5] N. Banerjee and S. Sharan, 2004. An Evolutionary Algorithm for Solving the Single Objective Static Routing and Wavelength Assignment Problem in WDM Networks. In Proceedings of International Conference on Intelligent Sensing and Information Processing, pp. 13-18.
[6] N.S. Kapov, 2007. Routing and wavelength assignment in optical networks using bin packing based algorithms. European Journal of Operational Research, 177, pp. 1167-1179.
[7] P. Hansen and N. Mladenovic, 2001. Variable Neighbourhood Decomposition Search. Journal of Heuristics, 7, pp. 335-350.
[8] P. Leesutthipornchai, N. Wattanapongsakorn, C. Charnsripinyo, 2009. Multiobjective design for routing wavelength assignment in WDM networks. In Proceedings of International Conference on New Trends in Information Service Science, Beijing, China (June 30-July 2), pp. 1315-1320.
[9] P. Leesutthipornchai, C. Charnsripinyo and N. Wattanapongsakorn, 2010. Solving multi-objective routing and wavelength assignment in WDM using hybrid evolutionary computation approach. J. Computer Communications, pp. 2246-2259.
[10] R. Ramaswami and K.N. Sivarajan, 1995. Routing and wavelength assignment in all optical networks. IEEE/ACM Transactions on Networking, 3, pp. 489-500.
[11] R. Randhawa and J.S. Sohal, 2010. Static and dynamic routing and wavelength assignment algorithms for future transport networks. J. Optik, 121, pp. 702-710.
[12] R. Storn and K. Price, 1997. Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization, 11, pp. 341-359.
[13] S.K. Konak, D.W. Coit, F. Baheranwala, 2008. Pruned Pareto-optimal sets for the system redundancy allocation problem based on multiple prioritized objectives. Journal of Heuristics, 14, pp. 335-337.
[14] T. Chap, X. Wang, S. Xu, and Y. Tanaka, 2011. Link-Disjoint routing algorithms with link-disjoint degree and resource utilization concern in translucent WDM optical networks. In Proceedings of International Conference on Advanced Communication Technology, pp. 357-362.
[15] Y. Sun, J. Gu, and D.H.K. Tsang, 2001. Routing and wavelength assignment in all optical networks with multihop connections. Journal of Electronics and Communications, 55, pp. 10-17.
[16] Y. Wang, T.H. Cheng, and M. Ma, 2007. Priority and Maximum Revenue based Routing and Wavelength Assignment for All Optical WDM Networks. In Proceedings of International Conference on Research, Innovation and Vision for the Future, pp. 135-138.
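As a closing worked illustration of the weighted-sum objective defined in Section 4, Fobj = Wc*(Q - Qa)/Q + Ww*(Ka/Kmax) can be evaluated directly; lower values mean fewer rejected requests and fewer wavelengths in use. All the weights and counts below are hypothetical, chosen only to show the trade-off, not taken from any cited paper.

```python
def f_obj(wc: float, ww: float, q_total: int, q_accepted: int,
          k_used: int, k_max: int) -> float:
    """Weighted-sum fitness Fobj = Wc*(Q - Qa)/Q + Ww*(Ka/Kmax):
    penalises rejected requests and wavelength usage simultaneously."""
    return wc * (q_total - q_accepted) / q_total + ww * k_used / k_max

# Hypothetical instance: 100 requests, 92 accepted, 12 of 16 wavelengths used
print(round(f_obj(wc=0.7, ww=0.3, q_total=100, q_accepted=92,
                  k_used=12, k_max=16), 3))   # 0.281
```

Varying Wc versus Ww shifts the search between admitting more requests and conserving wavelengths, which is exactly the tension the pruned Pareto mechanism in Section 4 is designed to expose.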
A Review Paper on Nanomaterials & Nanotechnology

Anshu Rao, Department of Applied Science, BGIET, Sangrur, anshurao77@gmal.com
Ravi Kant, Department of ECE, BGIET, Sangrur, rk2005mail@gmail.com

ABSTRACT
In recent years, nanostructured materials have received steadily growing
interest because of their unique properties and various potential
applications in the fabrication of future nanodevices. Various nanometer
semiconductor materials such as ZnO, CdS, TiO2 and SnO2 have been synthesized
and studied. In the current paper, the history of nanotechnology and
nanomaterials has been studied. The emergence of this most recent branch of
science, particularly in physics, is discussed. The discovery of various
nanoscale compounds is discussed, and it is also explained how the term
"nanotechnology" came into existence. The foundations of nanotechnology have
emerged over many decades of research in many different fields. Computer
circuits have been getting smaller. Chemicals have been getting more complex.
Biochemists have learned more about how to study and control the molecular
basis of organisms. Mechanical engineering has been getting more precise.

General Terms
Nanodevices, nanoparticles.

Keywords
Nanomolecules, carbon nanorods, AFM, SEM, XRD.

1. INTRODUCTION
With the development of human society, environmental pollution and energy
waste have become increasingly serious. Therefore, exploring solution
mechanisms and seeking new sources of energy have been among the crucial
topics that numerous scientists focus on [1]. Nanotechnology and
nanomaterials are an answer to this problem due to their unique properties.

Nanotechnology is the branch of science that deals with particles of size
~10^-9 m. It is the manipulation of matter on an atomic or molecular scale.
Nanotechnology is a broad field including fields of science as diverse as
surface science, organic chemistry, molecular biology, semiconductor physics
and microfabrication. It has wide applications in medicine, electronics,
biomaterials and energy production.

Due to their unique physical and chemical properties, such as low cost,
harmlessness, high photosensitivity and stability [2, 3], nanomaterials have
attracted more and more attention in materials science. Their applications in
gas sensors [4], photocatalysts [5], solar cells [6, 7], ultraviolet light
emitting materials [8], field effect transistors [9] and transparent
conductors [10-12] are commendable. Moreover, nanostructures also exhibit
fascinating morphologies such as nanorods [13], nanotubes [14],
nanowires [15], hierarchical nanobranches [16] and nanocastles [17].

Doping is an effective and facile method to modify the physical properties
(e.g. optical, magnetic and electrical) of base materials, and this extends
the applications of the base materials. So, doping of nanoparticles such as
ZnO, CdS and TiO2 is done with materials such as Ag, Bi and Al to study and
enhance their optical, magnetic and morphological properties.

2. RESULTS & DISCUSSION
The history of nanotechnology and the formation of various nanoparticles has
been studied, as discussed below.

The term nanotechnology was first anticipated by James Clerk Maxwell in 1867.
He had proposed a thought experiment involving a small entity named Maxwell's
Demon capable of handling individual molecules [18].

Richard Adolf Zsigmondy was the very first to use the nanometer for
characterizing particle size, in 1914. He defined it as 1/1,000,000 of a
millimeter, from which he created the very first classification technique
based on particle size in the nanometer range [19].

Moore's Law [20-22] best codified this notion. Gordon Moore, who later
co-founded Intel, predicted in 1965 how modern circuitry would pack far more
features as ever more devices were made for the marketplace. This law has
held strong for practically 50 years. Moore's law also predicts (as shown in
Fig. 1) that by about 2020 we must be working at the molecular level to stay
competitive. Molecular nanotechnology is the likeliest candidate for
providing this ultimate precision in manufacturing [23].

Fig. 1. Moore's Law
The notion of constructing machines at microscopic sizes and making them function like construction robots for generating, organizing and rearranging objects at the molecular level was not easy to conceive when no such technology existed. This notion was put forward by Richard Feynman in 1959[24] in his talk "There's Plenty of Room at the Bottom", an invitation to enter a new field of physics. The transcript of this classic talk, which Feynman gave on December 29, 1959 at the annual meeting of the American Physical Society at the California Institute of Technology (Caltech), was later published[24]. Feynman described a process for manipulating individual atoms and molecules that might be developed, using one set of precise tools to build and operate another proportionally smaller set, and so on down to the needed scale. In the course of this, he noted, scaling issues would arise from the changing magnitude of various physical phenomena: gravity would become less important, while surface tension and Van der Waals attraction would become more important [25].

In the late 1970's, Eric Drexler began to envision what would become molecular manufacturing. He quickly realized that molecular machines could control the chemical manufacture of complex products, including additional manufacturing systems, which would be a very powerful technology[26].

The Japanese scientist Norio Taniguchi of the Tokyo University of Science used the term "nano-technology" at a 1974 conference to describe semiconductor processes, such as thin film deposition and ion beam milling, exhibiting characteristic control on the order of a nanometer. His definition was: "'Nano-technology' mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or one molecule"[27-28].

In 1980, Drexler encountered Feynman's provocative 1959 talk "There's Plenty of Room at the Bottom" while preparing his initial scientific paper on the subject, "Molecular Engineering: An approach to the development of general capabilities for molecular manipulation," published in the Proceedings of the National Academy of Sciences in 1981[25]. In this paper he advanced the proposal that the molecular machinery found in living systems demonstrates the feasibility of doing advanced molecular engineering to produce complex, artificial molecular machines.

The invention of the scanning tunneling microscope in 1981 provided unprecedented visualization of individual atoms and bonds, and it was successfully used to manipulate individual atoms in 1989. The microscope's developers, Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory, received the Nobel Prize in Physics in 1986[29-30].

The term "nanotechnology" (which paralleled Taniguchi's "nano-technology") was independently applied by Drexler in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" that would be able to build a copy of itself and of other items of arbitrary complexity. He also first published the term "grey goo" to describe what might happen if a hypothetical self-replicating machine, capable of independent operation, were constructed and released. Drexler's vision of nanotechnology is often called "Molecular Nanotechnology" (MNT) or "molecular manufacturing."

Christoph Gerber invented the first atomic force microscope (AFM) in 1986; the first commercially available atomic force microscope was introduced in 1989. The AFM is one of the foremost tools for imaging, measuring and manipulating matter at the nanoscale. Information is gathered by "feeling" the surface with a mechanical probe. Piezoelectric elements that facilitate tiny but accurate and precise movements on (electronic) command enable very precise scanning. In some variations, electric potentials can also be scanned using conducting cantilevers.

A document from Japan's Ministry of International Trade and Industry, "Suggested Investigations in the Human Frontier Science Program" (November 1987), suggests a serious interest in studying and developing molecular machines. It calls for "prediction of tertiary protein structures . . . to predict the functional change due to the structural modification," "investigations of the functions of movement at the molecular level, molecular assembly level, and tissue level," and "development of artificial molecular assembly technique based on the mechanism of biomolecules." It notes that "techniques to control the shapes and the structures of biomaterials of the functional molecular aggregates, and the techniques to synthesize these materials by controlling one-, two- and three-dimensional molecular arrangements are highly required." While mixed in with many other goals, the understanding, design, and synthesis of molecular machines stands out as a major theme of the Human Frontier Science Program. The Institute (a news supplement to the IEEE Spectrum) reports that a delegation from Japan's Ministry of International Trade and Industry (MITI) visited Washington in spring 1986 to discuss the Human Frontier Science Program.

IBM researcher Don Eigler was the first to manipulate atoms using a scanning tunneling microscope, in 1989. He used 35 xenon atoms to spell out the IBM logo[31]. He shared the 2010 Kavli Prize in Nanoscience for this work[32].

The discovery of carbon nanotubes (as shown in fig 2) is largely attributed to Sumio Iijima of NEC (Nippon Electric Company) in 1991, although carbon nanotubes had been produced and observed under a variety of conditions prior to 1991[33]. Iijima's discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods in 1991[34], and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made they would exhibit remarkable conducting properties[35], sparked wide interest. Nanotube research accelerated greatly


following the independent discoveries by Bethune at IBM[36-38] and Iijima at NEC[33] of single-walled carbon nanotubes, and of methods to produce them specifically by adding transition-metal catalysts to the carbon in an arc discharge.

Fig. 2. Carbon Nanotubes

A first industry analysis of military applications, "Revolution in Military Affairs", a document published in June of 1995 by Hughes Aircraft Company, foresees a "revolution" in military technology that nanotechnology could account for. The Feynman Prize in Nanotechnology was also awarded in 1995, for the synthesis of complex three-dimensional structures with DNA molecules.

In 1996, the Feynman Grand Prize of $250,000 was announced and the first European conference was held. NASA also began work in computational nanotechnology, and the first nano-bio conference was held in the same year.

In 1997, Zyvex, the first company in the field of nanotechnology, was founded, and the first design of a nanorobotic system was developed. The Feynman Prize in Nanotechnology was awarded for work in computational nanotechnology and for using scanning probe microscopes to manipulate molecules.

In 1998, the first NSF (National Science Foundation) forum was held in conjunction with the Foresight Conference, and the first DNA-based nano-mechanical device was invented. At the conference, "Laser Assisted Deposition of Bacteriorhodopsin Assemblies" was presented, "Simulation and Experiments on Friction and Wear of Diamond: A material for MEMS and NEMS applications" was performed[39], and studies of fullerenes and carbon nanotubes by an extended bond order potential were carried out. The Feynman Prize in Nanotechnology was awarded for computational modeling of molecular tools for atomically-precise chemical reactions and for building molecular structures through the use of self-organization.

In 1999, the first nano-medicine book was published and the first safety guidelines were issued. Congressional hearings on the proposed National Nanotechnology Initiative were held, and the Feynman Prize in Nanotechnology was awarded for the development of carbon nanotubes for potential computing device applications and for modeling the operation of molecular machine designs in the same year.

In 2001, the first report on the nanotech industry was made. In this report, nanotechnology promised world peace, pre-programmable drug-delivery robots swimming inside the bloodstream battling cancer, the creation of pollution-free energy systems, and eternal life. It was the cornucopia of materials science, engineering, physics, chemistry and biology. The U.S. announced the first center for military applications in the same year. The theory of nanometer-scale electronic devices was discussed, and the synthesis and characterization of carbon nanotubes and nanowires were carried out in the same year.

In 2002, the first nanotech industry conference was held and regional nanotech efforts multiplied. This was a comprehensive event, sponsored by ICS in 2002, when the word "nanotechnology" was still little known, by Takahiro Matsui, then General Manager of the Exposition Department at ICS (now General officer); "nano tech 2002" attracted more than 10,000 researchers and engineers from various fields. A questionnaire survey conducted then not only helped to grasp the latest information on research and development but also highlighted the trend toward practical use. This proved to be a valuable reference for formulating the road map in the industry and attracted not only domestic but also global interest. Also in the same year, DNA was used to enable the self-assembly of new structures and advanced our ability to model molecular machine systems[40].

In 2003, there were Congressional hearings on societal implications, and a call for balancing the NNI research portfolio was made. The Drexler/Smalley debate was published in Chemical & Engineering News: Richard Smalley, a co-discoverer of the fullerenes, was involved in a public debate with Eric Drexler about the feasibility of molecular assemblers. The Feynman Prize in Nanotechnology was awarded for modeling the molecular and electronic structures of new materials and for integrating single-molecule biological motors with nano-scale silicon devices[40].

In 2004, the first policy conference on advanced nanotech was held. At this conference, recent progress in designing novel protein structures and protein-protein interactions was discussed by David Baker, with focus on the computational design and experimental characterization of TOP7, a novel hyper-stable protein, which opened new avenues for nano-scale engineering. A study of current nano-medicine was also made, focusing on targeted nano-particles and self-assembled nanostructures. It was expected that in 10-20 years, the methods of massively-parallel molecular manufacturing would allow the construction of complex diamondoid medical nanorobots. These nanorobots would be used to maintain tissue oxygenation in the absence of respiration, repair and recondition the human vascular tree, eliminating heart disease and stroke damage, and instantly staunch bleeding after traumatic injury. Other medical nanorobots would eliminate microbial infections and cancer,


and even replace chromosomes in individual cells, thus reversing the effects of genetic disease and other accumulated damage to our genes. The first center for nano-mechanical systems was also formed in 2004. The Feynman Prize in Nanotechnology was awarded for designing stable protein structures and for constructing a novel enzyme with an altered function in the same year.

In 2004, single-crystal ZnO nano-wires were synthesized (as shown in fig 3) using a vapor-trapping chemical vapor deposition method and configured as field-effect transistors. Electrical transport studies showed n-type semiconducting behavior with a carrier concentration of ~10^7 cm^-1 and an electron mobility of ~17 cm^2/V·s [42].

Fig. 3. ZnO Nanotubes

In 2005, at the Nanoethics meeting, Roco announced that the nanomachine/nanosystem project count had reached 300, and Dr. Christian Joachim designed a wide variety of single-molecular functional nanomachines and synthesized macromolecules of intermediate sizes with designed shapes and functions[43].

In 2006, a National Academies nanotechnology report called for experimentation toward molecular manufacturing, and Dr. Erik Winfree and Dr. Paul W.K. Rothemund worked in molecular computation and algorithmic self-assembly, producing complex two-dimensional arrays of DNA nanostructures[44].

In 2007, molecular machine systems that function in the realm of Brownian motion, and molecular machines based upon two-state mechanically interlocked compounds, were constructed. A molecular switch is a molecule that can be reversibly shifted between two or more stable states. The molecule may be shifted between the states in response to environmental stimuli, such as changes in pH, light, temperature, an electric current, the microenvironment, or the presence of a ligand. Currently, synthetic molecular switches are of interest in the field of nanotechnology for application in molecular computers[45].

In 2007, the synthesis and optical properties of nanocrystalline powders of V-doped ZnO (i.e. Zn0.95V0.05O, Zn0.90V0.10O, and Zn0.85V0.15O) by a simple sol-gel method, using metal acetylacetonates of Zn and V and poly(vinyl alcohol) as precursors, were reported [46], as shown in fig. 4.

Fig. 4. SEM micrographs of nanocrystalline V-doped ZnO powders calcined in air at 600 °C for 1 h: (a) Zn0.95V0.05O, (b) Zn0.90V0.10O and (c) Zn0.85V0.15O

In 2008, the Technology Roadmap for Productive Nanosystems was released, and protein catalysts were designed for non-natural chemical reactions. Work in molecular electronics and the synthesis of molecular motors and nanocars was carried out, and theoretical contributions to nanofabrication and sensing were generated[47].

In 2009, an improved walking DNA nanorobot was formed. Structural DNA nanotechnology arrays were devised to capture molecular building blocks, and a 'from scratch' design of a small protein was obtained that performed the function of natural globin proteins. Functional components were organized on addressable DNA scaffolds. Experimental demonstrations of mechanosynthesis were made using AFM to manipulate single atoms, along with computational analysis of molecular tools to build complex molecular structures[48].

In 2010, DNA-based 'robotic' assembly began, and work in single-atom manipulations and atomic switches, together with the development of quantum mechanical methods for theoretical predictions of molecules and solids, was carried out by Gustavo E. Scuseria[49].

In 2011, the first programmable nanowire circuits for nanoprocessors were invented. A nanoprocessor constructed from intrinsically nanometer-scale building blocks is an essential component for controlling memory, nanosensors and other functions proposed for nanosystems assembled from the bottom up[50-52]. DNA molecular robots learnt to walk in any direction along a branched track[53], and mechanical manipulation of silicon dimers on a silicon surface was made[54].
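The two-state molecular switches discussed above can be caricatured as a tiny state machine: a molecule sits in one of two stable states and flips when a stimulus crosses some threshold. The sketch below is purely illustrative; the `MolecularSwitch` class, its stimulus names, and the threshold values are invented for demonstration and are not taken from the cited work.

```python
# Toy model of a two-state molecular switch: the "molecule" occupies one of two
# stable states and flips when an environmental stimulus crosses a threshold.
# All names and threshold values here are illustrative, not from the cited papers.

class MolecularSwitch:
    def __init__(self):
        self.state = "A"  # one of the two stable states, "A" or "B"

    def apply_stimulus(self, kind, value):
        """Flip the state when a stimulus (e.g. pH, temperature) exceeds a threshold."""
        thresholds = {"pH": 7.0, "temperature_K": 300.0}
        if kind in thresholds and value > thresholds[kind]:
            self.state = "B" if self.state == "A" else "A"
        return self.state

switch = MolecularSwitch()
print(switch.apply_stimulus("pH", 8.5))  # above threshold: flips A -> B
print(switch.apply_stimulus("pH", 6.0))  # below threshold: stays B
```

A real molecular switch is driven by chemistry rather than explicit thresholds, but the two-state abstraction is what makes such compounds interesting for molecular computing.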


In 2012, undoped and Mg-doped ZnO thin films were deposited on Si(1 0 0) (as shown in fig 5) and on quartz substrates by the sol-gel method. The thin films were annealed at 873 K for 60 min[55].

Fig. 5. XRD spectra of Mg-doped ZnO thin films annealed at 873 K for 60 min.

In 2013, copper-doped ZnO nanoparticles were prepared by the precipitation method. The dopant contents in the samples were 0.24, 0.35 and 1.07 at.%. A set of techniques including XRD, XPS, TG-DTA, EPR and BET analysis was applied to characterize the Cu-doped ZnO samples. The results showed that the crystallite sizes of ZnO and Cu-doped ZnO nanoparticles were within the range of 45-49 nm[56].

In 2014, a carbon nanotube sponge capable of soaking up water contaminants, such as fertilizers, pesticides and pharmaceuticals, more than three times more efficiently than previous efforts was presented in a new study[57]. Thus nanotechnology is also beneficial to the field of agriculture. Also, solar energy has long been used as a clean alternative to fossil fuels such as coal and oil, but it could only be harnessed during the day, when the sun's rays were strongest. In 2014, researchers led by Tom Meyer at the Energy Frontier Research Center at the University of North Carolina at Chapel Hill built a system (as shown in fig 6) that converts the sun's energy not into electricity but hydrogen fuel and stores it for later use, allowing us to power our devices long after the sun goes down.

Fig. 6. Solar nano device

The device, a dye-sensitized photoelectrosynthesis cell, generates hydrogen fuel by using the sun's energy to split water into its component parts. After the split, hydrogen is sequestered and stored, while the byproduct, oxygen, is released into the air[58].

Also recently, North Carolina State University researchers have used silver nanowires to develop wearable, multifunctional sensors that could be used in biomedical, military or athletic applications, including new prosthetics, robotic systems and flexible touch panels. The sensors (as shown in fig 7) can measure strain, pressure, human touch and bioelectronic signals such as electrocardiograms[59].

Fig. 7.

3. CONCLUSION

The early 2000s saw the beginnings of the use of nanotechnology in commercial products, although most applications are limited to the bulk use of passive nanomaterials. Examples include titanium dioxide and zinc oxide nanoparticles in sunscreen, cosmetics and some food products; silver nanoparticles in food packaging, clothing, disinfectants and household appliances such as Silver Nano; carbon nanotubes for stain-resistant textiles; and cerium oxide as a fuel catalyst. As of March 10, 2011, the Project on Emerging Nanotechnologies estimated that over 1300 manufacturer-identified nanotech products were publicly available, with new ones hitting the market at a pace of 3-4 per week.

REFERENCES
[1] HAN S T, XI H L, SHI R X, FU X Z, WANG X X. Prospect and progress in the semiconductor photocatalysis [J]. Chemical Physics, 2003, 16: 339-349.
[2] YIN J, LIU Z G, LIU H, WANG X S, ZHU T, LIU J M. The epitaxial growth of wurtzite ZnO films on LiNbO3 (0001) substrates [J]. Journal of Crystal Growth, 2000, 220: 281-285.
[3] LEE G H, YAMAMOTO Y, KOUROGI M, OHTSU M. Blue shift in room temperature photoluminescence from photo-chemical vapor deposited ZnO films [J]. Thin Solid Films, 2001, 386: 117-120.
[4] T. Gao, T.H. Wang, Appl. Phys. A 80 (2005) 1451.


[5] S. Anandan, A. Vinu, T. Mori, N. Gokulakrishnan, P. Srinivasu, V. Murugesan, K. Ariga, Catal. Commun. 8 (2007) 1377.
[6] X.L. Chen, B.H. Xu, J.M. Xue, Y. Zhao, C.C. Wei, J. Sun, Y. Wang, X.D. Zhang, X.H. Geng, Thin Solid Films 515 (2007) 3753.
[7] H. Chen, A.D. Pasquier, G. Saraf, J. Zhong, Y. Lu, Semicond. Sci. Technol. 23 (2008) 045004.
[8] Y.R. Ryu, J.A. Lubguban, T.S. Lee, H.W. White, T.S. Jeong, C.J. Youn, B.J. Kim, Appl. Phys. Lett. 90 (2007) 131115.
[9] Z.X. Xu, V.A.L. Roy, P. Stalling, M. Muccini, S. Toffanin, H.F. Xiang, C.M. Che, Appl. Phys. Lett. 90 (2007) 223509.
[10] I. Kim, K.S. Lee, T.S. Lee, J.H. Jeong, B.K. Cheong, Y.J. Baik, W.M. Kim, J. Appl. Phys. 100 (2006) 063701.
[11] K.M. Lin, P. Tsai, Thin Solid Films 515 (2007) 8601.
[12] M. Lv, X. Xiu, Z. Pang, Y. Dai, L. Ye, C. Cheng, S. Han, Thin Solid Films 16 (2008) 2017.
[13] C. Yan, D. Xue, J. Phys. Chem. B 110 (2006) 25850.
[14] C. Yan, D. Xue, Electrochem. Commun. 9 (2007) 1247.
[15] B. Wen, Y. Huang, J.J. Boland, J. Phys. Chem. C 112 (2008) 106.
[16] T. Zhang, W. Dong, R.N. Njabon, V.K. Varadan, Z. Ryan Tian, J. Phys. Chem. C 111 (2007) 13691.
[17] X. Wang, J. Song, Z.L. Wang, Chem. Phys. Lett. 424 (2006) 86.
[18] Cargill Gilston Knott (1911). "Quote from undated letter from Maxwell to Tait". Life and Scientific Work of Peter Guthrie Tait. Cambridge University Press. p. 215.
[19] Richard Zsigmondy, Jerome Alexander (1914). "Colloids & Ultramicroscope: A Manual of Colloid Chemistry and Ultramicroscopy".
[20] Moore, Gordon E. (1965). "Cramming more components onto integrated circuits". Electronics Magazine. p. 4. Retrieved 2006-11-11.
[21] "Excerpts from A Conversation with Gordon Moore: Moore's Law". Intel Corporation. 2005. p. 1. Retrieved 2013-09-12.
[22] "1965 - 'Moore's Law' predicts the future of integrated circuits". Computer History Museum. 2007. Retrieved 2009-03-19.
[23] J. Markoff, "Has Size of Chips Reached its Limits?", San Jose Mercury News, Oct. 9, 1999.
[24] Caltech Engineering and Science, Volume 23:5, February 1960, pp. 22-36.
[25] Gribbin, John; Gribbin, Mary (1997). Richard Feynman: A Life in Science. Dutton. p. 170. ISBN 0-452-27631-4.
[26] Drexler, K. Eric. Molecular Machinery and Manufacturing with Applications to Computation (Ph.D. thesis). Massachusetts Institute of Technology.
[27] Kazlev, M. Alan (24 May 2003). "History of Nanotechnology". Retrieved 12 May 2011.
[28] Taniguchi, Norio (1974). "On the Basic Concept of 'Nano-Technology'". Proceedings of the International Conference on Production Engineering, Tokyo, 1974, Part II (Japan Society of Precision Engineering).
[29] Binnig, G.; Rohrer, H. (1986). "Scanning tunneling microscopy". IBM Journal of Research and Development 30: 4.
[30] "Press Release: the 1986 Nobel Prize in Physics". Nobelprize.org. 15 October 1986. Retrieved 12 May 2011.
[31] Shankland, Stephen (28 September 2009). "IBM's 35 atoms and the rise of nanotech". CNET. Retrieved 12 May 2011.
[32] "The Kavli Prize Laureates 2010". The Norwegian Academy of Science and Letters. Retrieved 13 May 2011.
[33] Monthioux, Marc; Kuznetsov, V. (2006). "Who should be given the credit for the discovery of carbon nanotubes?" (PDF). Carbon 44 (9): 1621. doi:10.1016/j.carbon.2006.03.019.
[34] Iijima, Sumio (7 November 1991). "Helical microtubules of graphitic carbon". Nature 354 (6348): 56-58. doi:10.1038/354056a0.
[35] Mintmire, J.W.; Dunlap, B.I.; White, C.T. (1992). "Are Fullerene Tubules Metallic?". Physical Review Letters 68 (5): 631-634. doi:10.1103/PhysRevLett.68.631. PMID 10045950.
[36] Bethune, D. S.; Klang, C. H.; De Vries, M. S.; Gorman, G.; Savoy, R.; Vazquez, J.; Beyers, R. (1993). "Cobalt-catalyzed growth of carbon nanotubes with single-atomic-layer walls". Nature 363 (6430): 605-607. doi:10.1038/363605a0.
[37] Iijima, Sumio; Ichihashi, Toshinari (1993). "Single-shell carbon nanotubes of 1-nm diameter". Nature 363 (6430): 603-605. doi:10.1038/363603a0.
[38] "The Discovery of Single-Wall Carbon Nanotubes at IBM". IBM.
[39] X. Li, B. Bhushan, K. Takashima, C.W. Baek, Y.K. Kim, Ultramicroscopy, 2003, Elsevier.
[40] Ming Zheng, Anand Jagota, Ellen D. Semke, Bruce A. Diner, Robert S. Mclean, Steve R. Lustig, Raymond E. Richardson & Nancy G. Tassi, Nature Materials 2, 338-342 (2003), 6 April 2003. doi:10.1038/nmat877.
[41] Chen Jianrong, Miao Yuqing, He Nongyue, Wu Xiaohua, Li Sijiao, Biotechnology Advances, Volume 22, Issue 7, September 2004, pp. 505-518.
[42] Quanchang Li, Vageesh Kumar, Yan Li, Haitao Zhang, Tobin J. Marks, and Robert P. H. Chang, "Fabrication of ZnO Nanorods and Nanotubes in Aqueous Solutions", Chem. Mater., 2005, 17 (5), pp. 1001-1006.
[43] Leonhard Grill, Karl-Heinz Rieder, Francesca Moresco, Gorka Jimenez-Bueno, Cheng Wang, Gwénaël Rapenne, Christian Joachim, "Imaging of a molecular wheelbarrow by scanning tunneling microscopy", Volume 584, Issues 2-3, 20 June 2005, pp. L153-L158.
[44] Hareem T. Maune, Si-ping Han, Robert D. Barish, Marc Bockrath, William A. Goddard III, Paul W. K. Rothemund & Erik Winfree, "Self-assembly of carbon nanotubes into two-dimensional geometries using DNA origami templates", Nature Nanotechnology 5, 61-66 (2010), 8 November 2009. doi:10.1038/nnano.2009.311.
[45] Molecular Machines & Motors (Structure and Bonding), J.P. Sauvage, Ed. ISBN 3-540-41382-0.
[46] Santi Maensiri, Chivalrat Masingboon, Vinich Promarak, Supapan Seraphin, "Synthesis and optical properties of nanocrystalline V-doped ZnO powders", Optical Materials 29 (2007) 1700-1705.
[47] Dreyfus, R.; Baudry, J.; Roper, M. L.; Fermigier, M.; Stone, H. A.; Bibette, J., "Microscopic artificial swimmers". Nature 2005, 437, 862-5.
[48] Robert A. Freitas Jr., "Meeting the Challenge of Building Diamondoid Medical Nanorobots," Intl. J. Robotics Res. 28 (April 2009): 548-557. doi:10.1177/0278364908100501.
[49] Gustavo E. Scuseria, Rice University, "Quantum mechanical methods and computational programs", Foresight Nanotech Institute. 20 December 2010. Retrieved 10 April 2011.
[50] Lu, W. & Lieber, C. M. "Nanoelectronics from the bottom up". Nature Mater. 6, 841-850 (2007).


[51] Lu, W., Xie, P. & Lieber, C. M. "Nanowire transistor performance limits and applications". IEEE Trans. Electron Dev. 55, 2859-2876 (2008).
[52] Das, S. et al. "Designs for ultra-tiny, special-purpose nanoelectronic circuits". IEEE Trans. Circuits Syst. Regul. Pap. 54, 2528-2540 (2007).
[53] Richard A. Muscat, Jonathan Bath, and Andrew J. Turberfield, "A Programmable Molecular Robot", Clarendon Laboratory, Department of Physics, University of Oxford, Parks Road, Oxford OX1 3PU, U.K. Nano Lett., 2011, 11 (3), pp. 982-987.
[54] A. Sweetman, S. Jarvis, R. Danza, J. Bamidele, L. Kantorovich, and P. Moriarty, "Manipulating Si(100) at 5 K using qPlus frequency modulated atomic force microscopy: Role of defects and dynamics in the mechanical switching of atoms", Phys. Rev. B 84, 085426. Published 25 August 2011.
[55] Kai Huang, Zhen Tang, Li Zhang, Jiangyin Yu, Jianguo Lv, Xiansong Liu, Feng Liu, "Preparation and characterization of Mg-doped ZnO thin films by sol-gel method", Applied Surface Science 258 (2012) 3710-3713.
[56] K. Milenova, I. Stambolova, V. Blaskov, A. Eliyas, S. Vassilev, M. Shipochka, "The effect of introducing copper dopant on the photocatalytic activity of ZnO nanoparticles", Journal of Chemical Technology and Metallurgy, 48, 3, 2013, pp. 259-264.
[57] Qingyu Peng, Yibin Li, Xiaodong He, Xuchun Gui, Yuanyuan Shang, Chunhui Wang, Chao Wang, Wenqi Zhao, Shanyi Du, Enzheng Shi, Peixu Li, Dehai Wu and Anyuan Cao, "Graphene Nanoribbon Aerogels Unzipped from Carbon Nanotube Sponges", Advanced Materials.


Review of Segmentation of Thyroid Gland in Ultrasound Image Using Neural Network

Mandeep Kaur
M.Tech (ECE) Student
BGIET, Sangrur
manibhangu1991@gmail.com

Deepinder Singh
Assistant Professor
BGIET, Sangrur
wadhwadeepinder@gmail.com

ABSTRACT
The thyroid gland is a highly vascular organ that lies in the anterior part of the neck, just below the thyroid cartilage. In medical practice, there are many ways to examine the affected interior part of the thyroid gland, such as CT/MRI and ultrasound imaging, but CT/MRI are expensive techniques compared with US imaging, while US images are blurred and noisy. In the existing method, feed-forward neural network techniques can be used to segment the thyroid gland in US images. In the proposed method, a new technique will be used to improve the US images.

KEYWORDS
Feed Forward Neural Network, Feature Extraction, Image Processing, Thyroid Segmentation, Ultrasound images.

1. INTRODUCTION

Digital Image Processing is a wide area that includes a large number of sub-areas, but fundamentally it is a single block in which an image or a video is the input and the output is either an image, a video, or a set of parameters associated with the image. It is widely used in medical imaging, where it helps radiologists diagnose problems that would otherwise consume a lot of time manually, so it saves time and is comparatively less laborious.[1] Medical image analysis plays an important role in detecting different kinds of human diseases.[2] With technology advancements, computerized diagnosis has become an active research area and provides accurate judgements. The basic principle of computerized diagnosis is image processing, which includes image acquisition, image pre-processing, image segmentation, etc., according to requirements.

In the case of medical image processing, different modalities are available to examine the human body, such as CT scans, MRI, X-rays, OCT and US. Ultrasound (US) is the most widely used tool[3] because it has a number of advantages over other techniques, such as non-invasiveness, low cost, and short acquisition times[1]. Also, ultrasound images have the capacity to give immediate information and other vital characteristics, and they do not involve ionizing radiation.[4] But ultrasound images contain speckle noise in addition to grain noise.[1] Noise is the consequence of errors that can degrade the quality of an image; in order to improve the quality of the image, or to make the image noise-free, image enhancement is necessary. Various filtering routines exist to enhance the image, such as the AWM (Adaptive Weighted Median) filter[1], the anisotropic diffusion model[5], and so forth.

The thyroid gland is a butterfly-shaped organ consisting of two cone-shaped lobes. It belongs to the endocrine system and is found in the neck, just in front of the larynx. It helps control the secretion of thyroid hormone, which regulates human body temperature and also significantly influences childhood intelligence and growth as well as adult metabolism.[6] Undesirable growth of cells on the thyroid forms a mass of tissue called a thyroid nodule.[7] Nodules are only one kind of disorder; the vast majority of thyroid nodules are benign, but some may cause growth or become malignant. We can also say that thyroid nodules are solid or cystic lumps formed in the thyroid gland, which may be caused by different sorts of thyroid disorders.[3] The risk of developing a palpable thyroid nodule in a lifetime ranges between 5% and 10%, and about half of affected individuals present with single nodules.[4]

Figure 1. Thyroid Gland [14]

Image segmentation is the process of partitioning a digital image into multiple segments, or sets of pixels, also known as superpixels.[8] It is used for locating objects and boundaries in images, such as lines, curves, and so forth. The motivation behind segmentation is to change the representation of an image into a structure that is more meaningful and simpler to analyse. The resulting segmented image comprises a set of segments that together cover the complete image, or, equivalently, a set of contours (outlines) extracted from the image. When it is applied to a large number of images, as in the case of medical imaging, after segmentation of an image the


resulting contours (outlines) can be used to create 3D reconstructions using interpolation algorithms. Pixels within a segment are similar with respect to some properties, such as intensity, colour or texture, and neighbouring regions are significantly different with respect to the same characteristics.[8]

Image segmentation is applicable in various areas like:
 Machine Vision
 Medical Imaging: locating tumors and other pathologies, surgery planning, measuring tissue volumes, diagnosis, study of anatomical structure, etc.
 Object Detection, such as face detection and locating objects in satellite images (roads, forests, crops, etc.)
 Recognition Tasks, such as face recognition, fingerprint recognition, iris recognition, etc.
 Traffic Control Systems.

Various image segmentation methods are given as follows:
 Thresholding
 Clustering methods
 Compression-based methods
 Histogram-based methods
 Edge Detection
 Optimization Algorithms
 Region growing methods & graph partitioning methods.
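Of the methods listed above, thresholding is the simplest to sketch: each pixel is assigned to foreground or background by comparing its intensity with a cut-off. The tiny 4x4 "image" and the threshold of 128 below are made-up demonstration values, not data from any cited work.

```python
# Minimal illustration of threshold-based segmentation: each pixel is assigned
# to foreground (1) or background (0) by comparison with a fixed threshold.
# The 4x4 "image" and the threshold of 128 are made-up demonstration values.

def threshold_segment(image, threshold=128):
    """Return a binary mask: 1 where the pixel intensity exceeds the threshold."""
    return [[1 if pixel > threshold else 0 for pixel in row] for row in image]

image = [
    [ 10,  50, 200, 220],
    [ 12,  60, 210, 230],
    [ 15, 140, 180,  90],
    [ 20, 160, 170,  80],
]
mask = threshold_segment(image)
print(mask)  # 1s mark the bright (foreground) region, 0s the dark background
```

In practice the threshold is usually chosen from the image histogram (e.g. Otsu's method) rather than fixed in advance, which is what makes thresholding a histogram-based technique as well.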
An artificial neural network is a collection of interconnected nodes, akin to the vast network of neurons in a brain. Each circular node represents an artificial neuron and an arrow represents a connection from the output of one neuron to the input of another, as shown below [9]. Basically, a neural network has three layers, i.e. the input layer, the hidden layer and the output layer. These layers communicate with each other over a large number of weighted connections.

Figure 2 Neural Networks [15]

In machine learning, artificial neural networks (ANNs) are a family of statistical learning algorithms inspired by biological neural networks (central nervous systems such as the brain) and are used to estimate functions that can depend on a large number of inputs and are generally unknown [9]. Generally, they are presented as interconnected neurons that compute values from inputs and are capable of machine learning and pattern recognition.

There is no single formal definition of an artificial neural network. However, a class of statistical models may commonly be called "neural" if they consist of sets of adaptive weights and are capable of approximating non-linear functions. Adaptive weights are connection strengths between neurons, which are activated during the prediction as well as the training phase.

2. RELATED WORK
In this section a literature survey of previous work and approaches for thyroid segmentation of ultrasound images is presented. Several modalities exist to image the human body, such as CT scans, MRI, X-rays, OCT, US and so forth. In the case of segmentation, most of the work is performed on ultrasound (US) images due to their advantages: non-invasiveness, low cost, short acquisition times, the capability to provide fast data, and the absence of ionizing radiation.
In our literature review we have considered numerous research approaches. Our aim is to study and analyze the feature extraction, classification and segmentation approaches for segmenting the thyroid gland and to observe the results.
Garg H. & Jindal A. [1] presented a paper discussing a Feed-Forward Neural Network for segmentation of thyroid gland ultrasound images. For the segmentation of US images, pixel intensity and texture are taken as criteria in a hybrid approach. First, image enhancement is performed in order to remove the speckle noise. Afterwards, the required features are extracted to measure the texture and to train the neural network. The authors then implemented a Feed-Forward Neural Network for intensity-based classification, and their experimental results show that the neural network gives better results.
Chang C.Y. et al. [6] presented a paper on thyroid gland segmentation and volume estimation of the thyroid. First, image preprocessing is carried out to suppress noise in the image, and features are extracted to train the neural network. Then, the Radial Basis Function


(RBF) Neural Network has been applied to classify the blocks into thyroid and non-thyroid gland areas. To recover the exact shape of the thyroid gland region, the authors applied the region growing method. For the volume estimation of the thyroid gland, they applied the Particle Swarm Optimization (PSO) algorithm. They compared the volume estimation of the proposed method with other techniques such as the standard GA.
Gopinath B. & Gupta B.R. [7] presented a paper that includes the segmentation and classification of papillary carcinoma and medullary carcinoma cells in FNAB (Fine Needle Aspiration Biopsy) microscopic cytological images of thyroid nodules. They used mathematical morphology for image segmentation in order to remove the background staining information from the microscopic images. Then, feature extraction is carried out by the Discrete Wavelet Transform (DWT) and the Gray Level Co-occurrence Matrix (GLCM). For classification they used the k-Nearest Neighbor (kNN) classifier. The diagnostic accuracy reported for DWT and GLCM is 97.5% and 75.84% respectively. The authors applied a majority voting rule to improve the diagnostic accuracy of GLCM to 90%.
Savelonas M.A. et al. [4] proposed a novel active contour model for accurate delineation of thyroid nodules of diverse shapes on the basis of their texture and echogenicity in ultrasound images. The proposed model, named JET (Joint Echogenicity Texture), incorporates regional image intensity and statistical texture feature distributions (LBP distributions) using the Mumford-Shah functional, otherwise called the minimal partition problem. The Chan-Vese model is also used in the formulation of the JET model in order to maintain the intensity information. The authors applied JET for segmenting US images containing hypoechoic, hyperechoic and isoechoic nodules. The delineation performance of JET is comparable to the VBAC (Variable Background Active Contour) model; moreover, it extends its capability from delineation of hypoechoic to isoechoic nodules and copes with the limitations of the hybrid multiscale model. A drawback of the proposed model is that it is not always able to distinguish structures such as bigger blood vessels from actual nodules.
Helmy A. et al. [10] proposed an application called thermography which non-invasively diagnoses thyroid gland disease. Based on this non-invasive framework, a PC-based prototype device was designed which can detect and display the relative variations of skin temperature and can diagnose hyperactivity inside the thyroid gland. They also presented an FEA (Finite Element Analysis) of a thyroid nodule to evaluate the temperature distribution in combination with the prototype and to determine the required sensor resolution. They used non-contact IR (infrared) thermocouple sensors and a Data Acquisition (DAQ) system, a 518 card with 16-bit resolution. Simulation results predicted that a thermocouple temperature resolution of 0.1 °C would be sufficient for solving the problem. Drawbacks of the system are that it involves variability in the sensor-skin distance, limited resolution, and optical overlap during focus alignment. Future scope includes extending the system with wireless links between the patient's location and the doctor's office, and the use of fiber optics in order to focus the infrared range for scanning and to give better resolution.
Iakovidis D.K. et al. [11] presented a paper on segmentation of US images in order to delineate thyroid nodules accurately. They implemented a method called GA-VBAC, i.e. a combination of GA (Genetic Algorithm) and VBAC (Variable Background Active Contour model). VBAC uses variable background regions in order to minimize the intensity inhomogeneity effects in thyroid US images. They used the GA to automatically and efficiently search the VBAC parameters without requiring any specialized skills. Experimental results show that GA-VBAC is an effective, efficient and highly objective framework for delineation of the thyroid in US images. GA-VBAC obtained an average overlap value of 92.5% whereas the experts obtained 91.8%, so it is clear that GA-VBAC is capable of comparatively high delineation accuracy. Future work is to enhance GA-VBAC by speeding up the training stage to make it practical for many US images, and to design an integrated framework that combines heterogeneous data for the diagnosis of thyroid nodules.
Maroulis D.E. et al. [12] presented a paper on VBAC (Variable Background Active Contour model) for delineation of thyroid nodules in US images. It combines the benefits of ACWE (Active Contour Without Edges) with variable background regions; unlike classical active contour models, which are sensitive in the case of intensity inhomogeneities, the proposed model also incorporates the variable background regions. They then evaluated VBAC and compared it with the ACWE model on both synthetic and real US images. Experimental results show that VBAC outperforms ACWE when inhomogeneity is considered. Moreover, its delineations of hypoechoic nodules are accurate to 91%, better than that of ACWE in fewer iterations, and comparable to expert radiologists, which indicates that it is feasible in clinical practice. VBAC converged in 9.6% fewer iterations than ACWE and its execution time is also faster by 8.2%, as the average segmentation time is 1 min 33 s for VBAC while for ACWE it is 1 min 41 s.
Kollorz Eva N.K. et al. [13] presented a paper on volumetric quantification of the thyroid using 3-D US imaging. They proposed a semi-automatic segmentation approach for classification, and the analysis is carried out on the basis of the 3-D US data. Images are scanned in 3-D; then preprocessing in order to remove speckle noise, filtering and segmentation are carried out. The extension of the geodesic active contour level set is examined in detail. They combined two anisotropic diffusion filters with a level-set-based diffusion algorithm. The sensitivity and specificity of the segmentation were 75% and 97% respectively. The mean Hausdorff distance is required to be under 3 mm for clinical use.
Maroulis D.E. et al. [4] presented a paper on thyroid nodule detection in US images using a computer-aided technique that considers the inhomogeneity of US images. They implemented VBAC, which includes a variable subset of the image as its background that can change shape to diminish the background inhomogeneity effects. The results were evaluated using US images of 35 thyroid patients and show that VBAC (Variable Background Active Contour model) gives fast convergence and improved accuracy compared with ACWE (Active Contour Without Edges), as with VBAC they reached convergence in 10% fewer computation cycles than ACWE. Future work is to embed textural features to guide the contour evolution, which would enable detection of non-hypoechoic nodules.
Keramidas E.G. et al. [14] presented a paper in which they described TND (Thyroid Nodule Detector), a computer-aided diagnosis system for ultrasound images and


features to identify the nodular tissue. The proposed technique incorporates four components with novel contributions, such as the TBD-2 algorithm, which incorporates thyroid parenchyma for automatic definition of the ROI, and the fusion of FLBP (Fuzzy Local Binary Patterns) and FGLH (Fuzzy Gray Level Histogram). They examined the feasibility of the proposed system on real US images. Experimental results show that the combination of FLBP and FGLH is more effective than other methods. They also showed that the TND framework can be applied clinically; the accuracy in thyroid nodule detection has been estimated to exceed 95%. Future scope is to enhance TND with the help of modern automatic methods such as GA and suitable shape approaches for delineation of the detected nodules.

3. PROBLEM FORMULATION
1. Existing techniques are not always capable of distinguishing structures such as bigger blood vessels from actual nodules.
2. Existing segmentation techniques generally result in broken edges.
3. Existing techniques suffer from limited resolution and the boundary leakage problem.
4. Existing techniques suffer from the problem of optical overlap during focus alignment.
5. Existing techniques are device dependent, as they require a different set of parameters for the segmentation if the images are acquired from different ultrasound devices.

4. METHODOLOGY
The proposed methodology follows the steps shown in Figure 3: the ultrasound image is read, preprocessed to generate the ROI, features are extracted, a neural network is trained, and classification yields the segmented thyroid region.

5. DISCUSSION
For the evaluation of the proposed technique we have used MATLAB R2012a, installed on a Windows 8.1 platform. The proposed algorithm is applied to the images in the data set and the required region of the thyroid gland is extracted. Various experiments were performed to show the ability of the proposed method. The evaluated parameters were Accuracy, Specificity, Sensitivity, False Negative Rate and False Positive Rate. The five measuring indices are defined as follows:
Accuracy = (TP + TN) / (AP + AN)
Sensitivity = TP / AP
Specificity = TN / AN
TP Rate = TP / (TP + FN)
FP Rate = 1 - TN / (TN + FP)
We will compare all parameters with existing results by using our proposed techniques.
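The five indices above can be computed directly from the confusion-matrix counts. A small Python sketch (the counts below are made-up numbers; AP and AN denote the actual positives and actual negatives):

```python
def evaluation_indices(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity, TP rate and FP rate from raw counts."""
    ap = tp + fn            # actual positives
    an = tn + fp            # actual negatives
    return {
        "accuracy":    (tp + tn) / (ap + an),
        "sensitivity": tp / ap,
        "specificity": tn / an,
        "tp_rate":     tp / (tp + fn),
        "fp_rate":     1 - tn / (tn + fp),
    }

m = evaluation_indices(tp=40, tn=50, fp=5, fn=5)
```

Note that with AP = TP + FN, the Sensitivity and TP Rate formulas above are the same quantity written two ways.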

Figure 3. Thyroid segmentation steps: Start → Read ultrasound image → Preprocessing & ROI generation → Feature extraction → Training of neural network → Classification → Segmentation of thyroid region → Stop.

REFERENCES
[1] Hitesh Garg and Alka Jindal, "Segmentation of thyroid gland in ultrasound image using neural network", Chandigarh, India, 2013.
[2] Sheeja Agustin A. et al., "Thyroid classification as normal and abnormal using SCG based feed forward back propagation neural network algorithm", 2013.
[3] Dimitris K. Iakovidis, Ioannis Legakis, Dimitris Maroulis, and Michalis A. Savelonas, "Active contours guided by echogenicity and texture for delineation of thyroid nodules in ultrasound images", 2009.
[4] M.A. Savelonas, S.A. Karkanis, D.K. Iakovidis, N. Dimitropoulos, and D.E. Maroulis, "Computer-aided thyroid nodule detection in ultrasound images", Greece, 2005.
[5] Jie Zhao, Wei Zheng, Li Zhang, and Hua Tian, "Segmentation of ultrasound images of thyroid nodules for assisting fine needle aspiration cytology", 2012.
[6] Chuan-Yu Chang, Yue-Fong Lei, Chin-Hsiao Tseng, and Shyang-Rong Shih, "Thyroid segmentation and volume estimation in ultrasound images", 2010.
[7] B. Gopinath and B.R. Gupta, "Majority voting based classification of thyroid carcinoma", India, 2010.


[8] en.wikipedia.org/wiki/Image_segmentation.
[9] en.wikipedia.org/wiki/Artificial_neural_network.
[10] Ahdy Helmy, Michael Holdmann, and Maher Rizkalla, 2008.
[11] Dimitris K. Iakovidis, Michalis A. Savelonas, Stavros A. Karkanis, and Dimitris E. Maroulis, "A genetically optimized level set approach to segmentation of thyroid ultrasound images", Greece, 2007.
[12] Dimitris E. Maroulis, Michalis A. Savelonas, Dimitris K. Iakovidis, Stavros A. Karkanis, and Nikos Dimitropoulos, "Variable background active contour model for computer-aided delineation of nodules in thyroid ultrasound images", 2007.
[13] Eva N. K. Kollorz, Dieter A. Hahn, Rainer Linke, Tamme W. Goecke, Joachim Hornegger, and Torsten Kuwert, "Quantification of thyroid volume using 3-D ultrasound imaging", 2008.
[14] Eystratios G. Keramidas, Dimitris Maroulis, and Dimitris K. Iakovidis, "TND: a thyroid nodule detection system for analysis of ultrasound images and videos", 2010.


Review of Robust Document Image Binarization
Technique for Degraded Document Images
Rupinder Kaur Naveen Goyal
M.TECH (ECE) Student Assistant Professor
BGIET, Sangrur BGIET, Sangrur
cheema1579@gmail.com goyal.naveen2006@gmail.com

ABSTRACT
Segmentation of badly degraded document images is performed to discriminate the text from the background, but it is a very challenging task. To produce robust document images, many binarization techniques have been used so far, but in existing binarization techniques thresholding and filtering remain unsolved problems. In the existing method, an adaptive contrast map is first constructed, then binarized and combined with a Canny edge map to identify text stroke edge pixels; the document is further segmented by a local threshold. The existing method is thus divided into four main steps, of which the last two use two different algorithms. In the proposed method, we modify the algorithms, test them on degraded document images, and then compare the results with the results of the previous paper.

KEYWORDS
Adaptive Binarization Techniques, Document Segmentation,
Image Processing, Denoising.
Figure 1 Figure 2
1. INTRODUCTION
Document images, as a substitute for paper documents, basically comprise common elements such as handwritten or machine-printed characters, symbols and graphics [1]. In many practical applications we only need to keep the content of the document, so it is sufficient to represent text and diagrams in binary format, which is more efficient to transmit and process than the original gray-scale image. It is crucial to binarize the document image reliably in order to extract useful information and enable further processing such as character recognition and feature extraction, particularly for those low-quality document images with shadows, non-uniform illumination, low contrast, large signal-dependent noise, smear and smudge [2]. Thus, thresholding a scanned gray-scale image into two levels is the first step and a critical part of most document image analysis systems, since any error in this stage will propagate to all later stages.

Document image binarization aims to segment the foreground text from the document background and is performed in the preprocessing stage of document analysis [3]. For subsequent document image processing tasks such as optical character recognition (OCR), a fast and accurate document image binarization method is essential. Although document image binarization has been developed for many years, the thresholding of degraded document images is still an unsolved problem because of the high inter/intra-variation between the document background and the text stroke across different document images.

As represented in Fig. 1, the handwritten text within the degraded documents often shows a certain amount of variation in terms of the stroke width, stroke brightness, stroke connection, and document background. Likewise, historical documents are frequently degraded by bleed-through, as shown in Fig. 1 and Fig. 2, where the ink of the other side seeps through to the front. Furthermore, historical documents are often degraded by different types of imaging artifacts. These different types of document degradation tend to induce thresholding errors and make degraded document image binarization a major challenge for most techniques [4]. Document image binarization plays a key role in document processing since its performance affects the degree of success in subsequent character segmentation and recognition. In general, image binarization is divided into two main classes: (i) global and (ii) local. Binarization is a preprocessing stage for document analysis and is used to segment the foreground text from the document background. This step enables faster and more accurate document image processing tasks [5]. Most document analysis algorithms are built on underlying binarized image data. The use of bi-level data reduces the computational load and allows simplified analysis techniques compared with 256 levels of gray-scale or color image data. Document image understanding methods require consistent and semantic content preservation for thresholding. Although document image binarization has been studied for a


long time, the thresholding of images is still a challenging task because of the high variation between the text stroke and the document background. For an input image, some processing stages should be applied before the text extraction; one of these steps is binarization.

2. RELATED WORK
Ioannis Pratikakis et al. [6] have discussed a contest. The general objective of the contest is to identify current advances in document image binarization for both machine-printed and handwritten document images, using evaluation performance measures that conform to document image analysis and recognition. The contest details have been described, including the evaluation measures used, the performance of the 23 submitted methods and a short description of each method.
Abdenour Sehad et al. [7] have presented an efficient scheme for binarization of ancient and degraded document images, grounded on texture characteristics. The proposed method is adaptive-threshold based; the threshold is computed using a descriptor based on a co-occurrence matrix. The scheme is evaluated objectively on the DIBCO dataset of degraded documents, as well as subjectively, using a set of ancient degraded documents provided by a national library. The results are acceptable and promising, showing an improvement over established approaches.
Hossein Ziaei Nafchi et al. [8] have concluded that the preprocessing and postprocessing stages decisively improve the performance of binarization methods, especially in the case of severely degraded ancient documents. An unsupervised postprocessing method is introduced, based on the phase-preserved denoised image and on phase congruency features extracted from the input image. The central part of the procedure involves two powerful mask images that can be used to remove the false positive pixels in the output of a binarization method. First, a mask with a very good recall value is obtained from the denoised image with the help of morphological techniques. In parallel, a second mask is obtained based on phase congruency features. Then, a median filter is used to remove noise on these two masks, which are then used to correct the output of any binarization method.
Lopes N.V. et al. [9] have presented an automatic histogram threshold approach based on a fuzziness measure. Using the concepts of fuzzy logic, the problems involved in finding the minimum of a criterion function are avoided. Similarity between gray levels is the key to finding an optimal threshold. Two initial regions of gray levels are defined at the boundaries of the histogram. Then, using an index of fuzziness, a similarity process is started to find the threshold point. A significant contrast between objects and background is assumed. Histogram equalization is used for images having a small contrast difference.
Pai Y.T. et al. [10] present an adaptive algorithm for efficient document image binarization with low computational complexity and high performance. It is particularly suitable for use in mobile devices such as PDAs and cellular telephones, which are constrained by their limited memory space and low computational capacity. This method divides the document image into several blocks by incorporating the idea of global and local methods. After that, a threshold surface is built based on the diversity and the intensity of each region to derive the binary image. Experimental results demonstrate the effectiveness of the proposed method.
Zhou et al. [11] present a binarization method based on edge information for video text images. It attempts to handle images with a complex background and low contrast. The contour of the text is detected; then a local thresholding method is used to search for the inner side of the contour, and thus the contours of the characters are filled to form characters that are recognizable to OCR software.
Ntirogiannis et al. [12] present a new document image binarization method, as an improved version of the adaptive logical level technique (ALLT). The original ALLT makes use of fixed windows for extracting basic features (e.g., the character stroke width). However, characters with several different stroke widths may occur within a region, which may lead to inaccurate results. In this paper, local adaptive binarization is used as a guide for adaptive stroke width detection. The skeleton and the contour points of the binarization output are combined to detect the stroke width locally. Also, an adaptive local parameter is defined that enhances the characters and improves the overall performance, achieving more accurate binarization results for both handwritten and printed documents, with a specific focus on degraded historical documents.
Stathis P. et al. [13] proposed a new method for the validation of document binarization algorithms. The authors claim that the proposed method is simple in its execution and can be applied to any binarization algorithm since it requires nothing besides the binarization stage. As a demonstration of the proposed method, the case of degraded historical documents is used. The proposed method is evaluated with 30 binarization algorithms for performance comparison.
Bradley et al. [14] present a real-time adaptive thresholding technique using the integral image of the input. The proposed method is robust to illumination changes in the image, making it suitable for processing live video streams at a real-time frame rate, which makes it suitable for interactive applications.

3. PROBLEM FORMULATION
In the existing system the authors have used an edge detection procedure for detecting the edges of old document manuscripts. Although the technique was new and the outputs were improved over the existing method, they were not very accurate. In the proposed system we will attempt to implement the existing system using morphological operators and will improve the values of parameters such as PSNR, F-Measure and NRM.

4. PROPOSED WORK
This section describes the proposed document image binarization technique. Given a degraded document image, an adaptive contrast map is first constructed and the text stroke edges are then detected through the combination of the binarized adaptive contrast map and the Canny edge map. The text is then segmented based on the local threshold that is estimated from


the detected text stroke edge pixels. Some post-processing is further applied to improve the document binarization quality.
In the proposed document image binarization technique, four main steps occur:
1. Construction of the contrast image
2. Detection of text stroke edge pixels
3. Estimation of the local threshold
4. Noise reduction
Merit: in the output, the noise level is very low compared to the previous method.

5. CONCLUSION
This paper presents an adaptive image contrast based document image binarization technique that is tolerant to different types of document degradation such as uneven illumination and document smear. The proposed technique is simple and robust; only a few parameters are involved. Moreover, it works for different kinds of degraded document images. The proposed technique makes use of the local image contrast that is evaluated based on the local maximum and minimum. The proposed method has been tested on various datasets. In the output, the noise level is very low compared to the previous method in terms of the F-measure, pseudo F-measure, PSNR, NRM, MPM and DRD.
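The contrast-image construction (step 1) and the local thresholding (step 3) described in Section 4 can be sketched on a one-dimensional slice of pixels. The window size, the exact contrast definition (local max minus min, normalized by their sum) and the toy values below are illustrative assumptions, not the precise formulas of the referenced technique:

```python
def local_contrast(row, half_window=1):
    """Contrast per pixel: (max - min) / (max + min + eps) over a sliding window."""
    eps = 1e-6
    out = []
    for i in range(len(row)):
        lo = max(0, i - half_window)
        hi = min(len(row), i + half_window + 1)
        w = row[lo:hi]
        out.append((max(w) - min(w)) / (max(w) + min(w) + eps))
    return out

def local_threshold(row, half_window=1):
    """Binarize each pixel against the mean of its neighbourhood."""
    result = []
    for i in range(len(row)):
        lo = max(0, i - half_window)
        hi = min(len(row), i + half_window + 1)
        mean = sum(row[lo:hi]) / (hi - lo)
        result.append(1 if row[i] < mean else 0)   # 1 = text (dark stroke)
    return result

row = [200, 198, 60, 55, 200, 200]      # bright background with a dark stroke
contrast = local_contrast(row)
binary = local_threshold(row)
```

The contrast values peak around the stroke edges, which is what makes them useful for locating text stroke edge pixels before the local threshold is applied.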
Figure 3: Block diagram of the proposed system (Degraded document image → image normalization → contrast image construction → contrast image binarization combined with the Canny edge map → text stroke edge pixel detection → local threshold estimation → binary document image → noise reduction → output document image).

REFERENCES
[1] Yang Yibing and Hong Yan, "An adaptive logical method for binarization of degraded document images", 2000.
[2] Bir Bhanu, Multistrategy Learning for Computer Vision, California Univ Riverside Coll of Engineering, 1997.
[3] Basilios Gatos, Ioannis Pratikakis, and Stavros J. Perantonis, "Adaptive degraded document image binarization", 2006.
[4] Bolan Su, Shijian Lu, and Chew Lim Tan, "Robust document image binarization technique for degraded document images", 2013.
[5] Basilios Gatos, Ioannis Pratikakis, and Stavros J. Perantonis, "Improved document image binarization by using a combination of multiple binarization techniques and adapted edge information", 2008.
[6] Ioannis Pratikakis, Basilis Gatos, and Konstantinos Ntirogiannis, "ICDAR 2013 Document Image Binarization Contest (DIBCO 2013)", in Document Analysis and Recognition (ICDAR), 2013 12th International Conference on, IEEE, 2013.
[7] Abdenour Sehad et al., "Ancient degraded document image binarization based on texture features", in Image and Signal Processing and Analysis (ISPA), 2013 8th International Symposium on, IEEE, 2013.
[8] Hossein Ziaei Nafchi, Reza Farrahi Moghaddam, and Mohamed Cheriet, "Application of phase-based features and denoising in postprocessing and binarization of historical document images", in Document Analysis and Recognition (ICDAR), 2013 12th International Conference on, IEEE, 2013.
[9] N.V. Lopes, P.A. Mogadouro do Couto, H. Bustince, and P. Melo-Pinto, "Automatic histogram threshold using


fuzzy measures", 2010.

[10] Y.T. Pai, Y.F. Chang, and S.J. Ruan, "Adaptive thresholding algorithm: efficient computation technique based on intelligent block detection for degraded document images", 2010.
[11] Z. Zhou, L. Li, and C.L. Tan, "Edge based binarization for video text images", in Proceedings of the 20th International Conference on Pattern Recognition (ICPR), 2010.
[12] K. Ntirogiannis, B. Gatos, and I. Pratikakis, "A modified adaptive logical level binarization technique for historical document images", 2009.
[13] P. Stathis, E. Kavallieratou, and N. Papamarkos, "An evaluation technique for binarization algorithms", 2008.
[14] D. Bradley and G. Roth, "Adaptive thresholding using the integral image", 2007.


A REVIEW OF MULTIBIOMETRIC SYSTEM WITH
RECOGNITION TECHNOLOGIES AND FUSION
STRATEGIES
Cammy Singla Naveen Goyal
M.TECH (ECE) Student Assistant Professor
BGIET, Sangrur BGIET, Sangrur
Cammysingla931@gmail.com goyal.naveen931@gmail.com

ABSTRACT
Biometrics means the technology of measuring and analyzing physiological or biological characteristics of a living body for identification and verification purposes. A biometric system
provides automatic recognition of an individual based on
some sort of unique feature or characteristic of the individual.
Biometric systems are based on fingerprints, facial features, voice, hand geometry, handwriting, the retina and the iris. User
verification systems that use a single biometric indicator often
have to contend with noisy sensor data, restricted degree of
freedom, non-universality of the biometric trait and
unacceptable error rates. So the need of using multimodal
biometric system occurred. A multimodal biometric system
combines the different biometric traits and provides better
recognition performance as compared to the systems based on
single biometric trait. This paper presents a review of
multibiometric systems including its recognition technologies,
level of fusion and feature extraction for fingerprint and iris. Fig 1: Illustrations of some biometric characteristic
Features like minutia points from fingerprint and texture from
iris are extracted. In unimodal biometric frameworks we confront a mixture of
issues, for example, loud information, intra-class varieties,
KEYWORDS limited degrees of opportunity, non-all inclusiveness, parody
Biometrics, Multimodal,fingerprint,iris,recognitiontechniques, assaults, and unsatisfactory lapse rates. The constraints forced
level of fusion and feature extraction of iris and fingerprints. by unimodal biometric frameworks can be overcome by
utilizing various wellsprings of data for securing character.
Such frameworks are known as multimodal biometric
1. INTRODUCTION frameworks. Multimodal framework is a blend of two or more
In the modern world, there is a high demand to authenticate
than two biometric characteristics of a single person for the
and identify individuals automatically. Hence, the recognizable proof purposes.
development of technology such as personal identification
number (PIN), smartcard or passwords have been introduced. 2. INDIVIDUAL DISTINGUISHMENT
However, those technologies are inadequate since they are
2.1Fingerprint distinguishment
disclosable and transferable. For example, PIN and smart card Unique mark distinguishment is a standout amongst the most
can be duplicated, misplaced, stolen or lost, long password famous and remarkable biometrics. Due to their uniqueness,
can be hard to remember by client and short password can be fingerprints have been utilized for distinguishment for more
guessed easily by the imposter.In order to overcome these than a century. Fingerprints are different to every individual in
problems, biometric-based authentication and identification light of an exceptional papillary gimmicks which are diverse
methods are introduced in late 90s. even in twins. Unique finger impression examples stay
unaltered all through the whole grown-up life and that is the
Biometric systems work by first capturing a sample of the
reason effortlessly utilized for recognizable proof. Regardless
feature for example taking a digital color image for face if a finger is harmed, different fingers that are already selected
recognition or recording a digital sound signal for voice into the framework can likewise be utilized for ID.
recognition or taking a fingerprint samples of fingers. Then
some sort of mathematical functions are applied on the
samples. The biometric template will provide an efficient and
highly discriminating representation of the feature.

Fig 2: Cases of some biometric characteristics

Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Fingerprint processing can be done in two ways: using
hardware, and using software. In hardware fingerprint
processing, special biometric scanners are used to capture
fingerprints. There are three types of fingerprint scanner:
capacitive, clear and optical.

3. LEVEL OF FUSION
The vital issue in designing a multibiometric system is to
determine the sources of information and the fusion strategy.
Depending on the type of information to be fused, the fusion
scheme can be classified into different levels. According to
Sanderson and Paliwal [8], the level of fusion can be classified
into two categories, fusion before matching (pre-classification)
and fusion after matching (post-classification), as shown in
Fig. 5.

Fig 5: Level of fusion


Fig 3: Ridges and valleys of fingerprints

2.2 Iris recognition
Owing to its many colours, the iris takes its name from the
Greek goddess of the rainbow. The thin region between the
dark pupil and the white sclera is the iris. The iris is the
annular ring between the sclera and the pupil boundary and
contains a flowery pattern unique to each individual; in the
human eye, the iris is the coloured part set behind the cornea.
In software processing there are two matching techniques.
Minutiae matching is the widely used recognition technique
and is based on the minutiae points, especially the direction
and location of each point. In pattern matching, two images
are simply compared to see how similar or dissimilar they are
to one another.

Fig 4: Human eye

Image processing techniques can be used to extract the unique
iris pattern from a digitized image of the eye and encode it
into a biometric template, which can be stored in a database.
This biometric template contains an objective mathematical
representation of the unique information stored in the iris and
allows comparisons to be made between templates. A template
is compared with the other templates stored in the database
until either a matching template is found and the subject is
recognized, or no match is found and the subject remains
unidentified.

3.1 Fusion before Matching
3.1.1 Sensor level fusion
At this level, the raw data from the sensors are combined, as
indicated in Fig. 6. However, the source of information can be
expected to be degraded by noise, for example non-uniform
illumination, background clutter and others [10]. Sensor level
fusion can be performed in two situations: data of the same
biometric trait are obtained using different sensors, or data
from multiple snapshots of the same biometric trait are
obtained using a single sensor [11, 12].

Fig 6: Sensor level fusion process flow
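The second sensor-level situation above (multiple snapshots of the same trait from one sensor) can be sketched as raw-data averaging before any feature extraction takes place. This is a minimal illustration; the synthetic stripe pattern and noise levels are assumptions for demonstration, not taken from the paper:

```python
import numpy as np

def sensor_level_fusion(captures):
    """Fuse several raw captures of the same biometric trait by
    pixel-wise averaging, which suppresses independent sensor noise."""
    stack = np.stack([np.asarray(c, dtype=float) for c in captures])
    return stack.mean(axis=0)

# Two noisy snapshots of the same (synthetic) fingerprint patch.
rng = np.random.default_rng(0)
clean = np.tile([0.0, 1.0], (8, 4))            # ridge/valley stripes, 8x8
snap1 = clean + rng.normal(0, 0.2, clean.shape)
snap2 = clean + rng.normal(0, 0.2, clean.shape)

fused = sensor_level_fusion([snap1, snap2])
# The fused image lies closer to the clean pattern than a single snapshot.
err_fused = np.abs(fused - clean).mean()
err_single = np.abs(snap1 - clean).mean()
```

The fused raw image would then be passed on to a single feature-extraction and matching stage, as in the process flow of Fig. 6.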


3.1.2 Feature level fusion
In feature level fusion, the different feature vectors extracted
from multiple biometric sources are combined into a single
feature vector, as depicted in Fig. 7. This approach involves
two stages: feature normalization and feature selection.
Feature normalization is used to adjust the location and scale
of the feature values via a transformation function, which can
be done using appropriate normalization schemes [2]. For
example, the min-max and median methods have been used
for hand and face [9], and the mean scores from the speech
signal and lip-reading image scores have been used in feature
level fusion.

3.2.2 Decision level fusion
Fusion at the decision level is carried out after a match
decision has been made by each individual biometric source,
as depicted in Fig. 9. Many strategies have been applied to
combine the individual decisions into a final decision, such as
"AND" and "OR" rules [4], majority voting, weighted
majority voting, Bayesian decision fusion, Dempster-Shafer
theory of evidence and behaviour knowledge space [7].
Ramli et al. [14] implemented decision fusion using
spectrographic and cepstrographic feature extraction with
UMACE filters as classifiers, to reduce the error due to the
variation of the data.
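The simplest of the decision-level rules named above operate only on the per-matcher accept/reject decisions. A minimal sketch, with a hypothetical three-matcher system:

```python
def fuse_decisions(decisions, rule="majority"):
    """Combine boolean accept/reject decisions from individual
    biometric matchers into one final authentication decision."""
    if rule == "and":        # accept only if every matcher accepts
        return all(decisions)
    if rule == "or":         # accept if any matcher accepts
        return any(decisions)
    if rule == "majority":   # accept if more than half accept
        return sum(decisions) > len(decisions) / 2
    raise ValueError("unknown rule: " + rule)

# e.g. fingerprint, iris and face matchers (assumed for illustration)
votes = [True, False, True]
final = fuse_decisions(votes, rule="majority")  # 2 of 3 accept -> accept
```

The "AND" rule trades a lower false-accept rate for a higher false-reject rate, and the "OR" rule the reverse; majority voting sits between the two.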

Figure 7: Feature level fusion process flow

3.2 Fusion after Matching
3.2.1 Score level fusion
In score level fusion, the match outputs from multiple
biometrics are combined to enhance the matching performance
when verifying or identifying an individual, as indicated in
Fig. 8 [13]. Fusion at this level is the most popular approach
in the biometric literature because of its simple method of
score aggregation, and it is also practical to apply in a
multibiometric system. Moreover, the matching scores contain
sufficient information to distinguish genuine from impostor
cases [6]. Nonetheless, there are several factors that can
affect the combination process and thus degrade the biometric
performance. For instance, the matching scores generated by
the individual matchers may not be homogeneous, because
they lie in different scales/ranges or follow different
probability distributions. To overcome this limitation, three
fusion schemes have been introduced: density-based,
transformation-based and classifier-based schemes [7].

Figure 8: Score level fusion process flow

Fusion at the decision level is a rather loosely coupled system
architecture, with each subsystem performing like a single
biometric system. This architecture has therefore become
increasingly popular with biometric vendors, often advertised
under the term "layered biometrics". Many different strategies
are available to combine the distinct decisions into a final
authentication decision. They range from majority votes to
sophisticated statistical methods.
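A transformation-based score-level scheme of the kind described in Sec. 3.2.1 can be sketched as min-max normalization of each matcher's raw score to [0, 1] followed by a weighted sum. The score ranges, weights and threshold below are illustrative assumptions:

```python
def min_max(score, lo, hi):
    """Map a raw matcher score from its native range [lo, hi] to [0, 1]."""
    return (score - lo) / (hi - lo)

def score_level_fusion(scores, ranges, weights, threshold=0.5):
    """Normalize heterogeneous matcher scores, fuse them with a
    weighted sum, and compare the fused score against a threshold."""
    normed = [min_max(s, lo, hi) for s, (lo, hi) in zip(scores, ranges)]
    fused = sum(w * n for w, n in zip(weights, normed))
    return fused, fused >= threshold

# Assume a fingerprint matcher scoring in [0, 100] and an iris matcher
# scoring in [0, 1]; equal weights for both modalities.
fused, accept = score_level_fusion(
    scores=[80.0, 0.7], ranges=[(0, 100), (0, 1)], weights=[0.5, 0.5])
```

Normalization is the step that makes the two score distributions commensurable; without it, the matcher with the larger numeric range would dominate the sum.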


Figure 9: Decision level fusion process flow

4. PROPOSED SCHEME
4.1 Minutiae point extraction from fingerprints
Fingerprint recognition is carried out using several features,
such as minutiae points (bifurcations and ridge endings). The
overall procedure can be divided into the following
operations:
1. Load the image
2. Binarization
3. Thinning
4. Minutiae extraction
5. Output image
Minutiae extraction is carried out using a mask operation on:
a) bifurcations, b) ridges.

4.1.1 Implementation of the fingerprint recognition
framework

Fig 10: Original image

Fig 11: Thinned image

Fig 12: Fingerprint feature extraction with minutiae points

4.2 Feature extraction of iris
Iris recognition is an effective means of user authentication.
The iris has several important attributes: 1) the iris has a
highly distinctive texture; 2) the right eye differs from the left
eye; 3) twins have different iris textures. The steps involved in
iris feature extraction are as follows:
1. Load the image
2. Segmentation
3. Normalization
4. Canny edge detection
5. Daugman's rubber sheet model
Iris feature extraction is obtained through a Gabor filter. The
general procedure is to load the eye image into the system and
extract the texture feature using the Gabor filter; the feature is
then represented in the iris code.

The Hough transform is used to determine geometric elements
such as lines and circles. The circular Hough transform is used
to detect the radius and centre coordinates of the pupil and the
iris. The equation for detecting the circles is as follows, where
k and l are the x and y coordinates and g is the radius of the
circle:

k² + l² = g²  (Eq. 1)
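The circular Hough transform built on Eq. 1 can be sketched as a brute-force accumulator: every edge pixel votes for all the circle parameters (k, l, g) it could lie on, and the most-voted cell gives the pupil/iris centre and radius. A toy illustration on a synthetic edge map (the point set and radius range are assumed for demonstration):

```python
import math
from collections import Counter

def circular_hough(edge_points, radii):
    """Vote in (k, l, g) space: an edge point (x, y) lies on the circle
    (x - k)^2 + (y - l)^2 = g^2, so it votes for every candidate centre
    (k, l) at distance g from itself."""
    acc = Counter()
    for (x, y) in edge_points:
        for g in radii:
            for t in range(360):            # sample candidate directions
                k = round(x - g * math.cos(math.radians(t)))
                l = round(y - g * math.sin(math.radians(t)))
                acc[(k, l, g)] += 1
    return acc.most_common(1)[0][0]         # best (centre_x, centre_y, radius)

# Synthetic "iris boundary": points on a circle of centre (20, 20), radius 8.
pts = [(20 + round(8 * math.cos(math.radians(a))),
        20 + round(8 * math.sin(math.radians(a)))) for a in range(0, 360, 10)]
k, l, g = circular_hough(pts, radii=range(5, 12))
```

In practice the accumulator is run over the Canny edge map of the eye image, once for the pupil boundary and once for the iris boundary.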


Canny edge detection is used to detect the edges of the eye; it
extracts only the edges from the eye image. Having isolated
the iris region, Daugman's rubber sheet model is used to
obtain a representation of constant dimensions.

Fig 13: Daugman's rubber sheet model

Daugman's model remaps every point of the iris region to
polar coordinates (r, θ), where r lies in the range [0, 1] and θ
in the range [0, 2π]. The remapping is carried out from the
circle's x and y coordinates, converting them into polar
coordinates according to the following equation:

R = √(ag) ± √(ag² − a − r²)  (Eq. 2)
where a = σx² + σy² and g = cos(π − arctan(σy/σx) − θ)

Here σx and σy determine the displacement between the
centre of the iris and the centre of the pupil. The result is
obtained in the form of a rectangular strip.

a) Original image  b) Segmented image  c) Normalized image
d) Edge detection  e) Extracted feature of iris

Fig 14: Different images during feature extraction of the eye

Multimodal biometrics is aimed at security-conscious users. A
multimodal biometric system has some great advantages, for
example: 1) improved accuracy; 2) verification or
identification even when sufficient data cannot be extracted
from a given biometric template; 3) the ability to protect
confidential data from spoof attacks. The steps involved in the
proposed multimodal approach for cryptographic key
generation are:
1) Feature extraction from the fingerprint.
2) Feature extraction from the iris.
3) Fusion of the fingerprint and iris features.

4.3 Fusion of fingerprint and iris features
The next step is to fuse the two sets of features to obtain a
multimodal biometric template for authentication. The fused
image, the fused features of the two templates (fingerprint and
iris), is shown below. These systems are able to meet the
performance requirements of various applications. They
address the problem of non-universality, since multiple traits
ensure sufficient population coverage. Spoofing is unlikely in
a multimodal biometric system because it would be difficult
for an impostor to spoof multiple biometric traits of a genuine
user simultaneously.

Fig 15: Fused features of two templates (fingerprint and
iris)

5. DISCUSSION AND CONCLUSION
Multibiometric systems, which integrate information from
multiple biometric traits, are gaining popularity because they
are able to overcome the limitations of unimodal biometrics
and thus provide a more robust verification process. Up to this
point, most research in multimodal biometrics has focused on
combining information at the decision or score level. In recent
years there has been a notable increase in research activity
directed at understanding all aspects of biometric information
representation and its use for decision-making support, for use
by public and security services, and for understanding the
complex processes behind biometric matching and
recognition. The description of the levels of fusion was also
presented in this paper. The study reveals that the performance
of multibiometric systems can be further improved if a
suitable fusion strategy is used, especially for systems
operating in uncontrolled environments. In our work, we have
been using two modalities, fingerprint and iris, fused using the
principal component analysis technique. The fused biometric
template obtained is then used for encryption with a selective
encryption method. Selective encryption is now widely used
because it gives useful results for multimedia data, and it can
be used for further applications such as compression of
encrypted images on the web.


REFERENCES
[1] G. Williams, "More than a Pretty Face", Biometrics and
SmartCard Tokens, SANS Institute reading room, 2002, pp.
1-16.

[2] A. K. Jain, K. Nandakumar, U. Uludag, and X. Lu,
"Multimodal Biometrics: Augmenting Face with Other Cues",
in W. Zhao and R. Chellappa (eds), Face Processing:
Advanced Modelling and Methods, Elsevier, New York, 2006,
pp. 675-705.

[3] J.P. Campbell, D.A. Reynolds, and R.B. Dunn, "Fusing
High and Low Level Features for Speaker Recognition", in
Proceedings of EUROSPEECH, 2003, pp. 2665-2668.

[4] C.C. Lip and D.A. Ramli, "Comparative Study on Feature,
Score and Decision Level Fusion Schemes for Robust
Multibiometric Systems", Advances in Intelligent and Soft
Computing, Vol. 133, 2012, pp. 941-948.

[5] D.R. Kisku, P. Gupta, and J.K. Sing, "Feature Level
Fusion of Biometrics Cues: Human Identification with
Doddington's Caricature", in International Conference on
Security Technology, Communications in Computer and
Information Science, 2010, pp. 157-164.

[6] M.X. He, S.J. Horng, P.Z. Fan, R.S. Run, R.J. Chen, J.L.
Lai, M.K. Khan, and K.O. Sentosa, "Performance Evaluation
of Score Level Fusion in Multimodal Biometric Systems",
Pattern Recognition, Vol. 43, No. 5, 2010, pp. 1789-1800.

[7] A. Ross and A.K. Jain, "Fusion Techniques in
Multibiometric Systems", in R.I. Hammoud, B.R. Abidi and
M.A. Abidi (eds), Face Biometrics for Personal Identification,
Springer Berlin Heidelberg, 2007, pp. 185-212.

[8] C. Sanderson and K.K. Paliwal, "Noise Compensation in a
Person Verification System Using Face and Multiple Speech
Features", Pattern Recognition, Vol. 2, 2003, pp. 293-302.

[9] A. Jain, K. Nandakumar, and A. Ross, "Score
Normalization in Multimodal Biometric Systems", Pattern
Recognition, Vol. 38, 2005, pp. 2270.

[10] S.S. Iyengar, L. Prasad, and H. Min, Advances in
Distributed Sensor Technology, Prentice Hall, 1995.

[11] R. Singh, M. Vatsa, A. Ross, and A. Noore, "Performance
Enhancement of 2D Face Recognition via Mosaicing", in
Proceedings of the 4th IEEE Workshop on Automatic
Identification Advanced Technologies (AutoID), 2005, pp.
63-68.

[12] A. Ross and R. Govindarajan, "Feature Level Fusion
Using Hand and Face Biometrics", in Proceedings of SPIE
Conference on Biometric Technology for Human
Identification, Vol. 5779, 2005, pp. 196-204.

[13] A.K. Jain and A. Ross, "Multibiometric Systems",
Communications of the ACM, Special Issue on Multimodal
Interfaces, Vol. 47, No. 1, 2004, pp. 34-40.

[14] R. Brunelli and D. Falavigna, "Person Identification
Using Multiple Cues", IEEE Transactions on Pattern Analysis
and Machine Intelligence, Vol. 17, No. 10, 1995, pp. 955-966.

[15] K. Chang, K. Bowyer, and P. Flynn, "Face Recognition
Using 2D and 3D Faces", Workshop on Multi Modal User
Authentication (MMUA), 2003, pp. 25-32.


Fractal Reconfigurable Antenna


Anuradha Sonker Shweta Rani Sushil Kakkar
Assistant Professor Associate Professor Assistant Professor
ECE Department ECE Department ECE Department
NIT Hamirpur BGIET, Sangrur BGIET, Sangrur
anuradhasonker@yahoo.co.in shwetaranee@gmail.com kakkar778@gmail.com

ABSTRACT
This paper presents the use of a PN junction diode on a
fractal monopole antenna with a microstrip line for
reconfigurable multiband antennas. The developed multiband
antennas focus on applications for Wi-Fi, cordless telephony,
and satellite and radar. A fractal reconfigurable antenna was
designed using a known fractal geometry, the Sierpinski
gasket, fed by a strip line using a coaxial feed. The antenna's
frequencies are turned on and off by the position and biasing
of the PN diode on the strip line. Simulations were done in the
CST software for the analysis of the reconfigurable patch
antenna.

Keywords
Smart antennas, slot antennas, fractal antennas, Sierpinski
gasket antenna.

1. INTRODUCTION
Antennas have become an essential and critical element of all
personal electronic devices, microwave and satellite
communication systems, radar systems, and military
surveillance and investigation platforms. In many of these
systems, there is a requirement to perform a multitude of
functions across several frequency bands and operating
bandwidths, particularly in the area of cognitive radio.
Reconfigurable antennas can thus provide great benefit in
applications such as cognitive radio, MIMO systems, smart
antennas, RFIDs etc. The multitude of different standards in
cell phones and other personal mobile devices requires
compact multi-band antennas and smart antennas with
reconfigurable features. The use of the same antenna for a
number of different purposes, preferably in different
frequency bands, is highly desirable. A number of different
reconfigurable antennas, planar and 3-D, have been
developed. Some of them were developed for radar
applications [1] and other planar antennas were designed for
wireless devices [2]. Most of those research works
demonstrated only frequency reconfigurability. Different
types of reconfigurable antennas have been studied for
ultra-wideband and multiband applications. Slot antennas
have been used for the UWB frequency range: a rectangular
patch antenna is modified by a slot insertion at a particular
location in order to alter one resonant mode more than the
other and achieve dual-band operation. Examples of different
techniques that are used to design dual-band slot antennas are
presented in [3], where a single-element dual-band CPW-fed
slot antenna with similar radiation patterns at both bands is
studied.

A class of fractal antennas has been studied for multiband
operation. A fractal antenna based on the Sierpinski gasket
geometry is described by an infinite number of iterations,
resulting in a very complex antenna structure with an infinite
number of frequency bands. Electronic reconfigurability is
usually achieved by incorporating switches, variable
capacitors, phase shifters, or ferrite materials in the topology
of the antenna [4-6]. Most frequently, lumped components
such as PIN diodes, varactor diodes, or MEMS switches or
varactors are used in the design of reconfigurable antennas.
These components may be used to electronically change the
frequency response, radiation patterns, gain, or a combination
of different radiation parameters of such antennas.

In the present work, a Sierpinski gasket monopole antenna
was designed at three resonating frequencies. A PN diode was
inserted on the microstrip line feed system. The position and
bias of the PN diode were used to turn the resonating
frequencies of the Sierpinski fractal antenna on and off.

2. DESIGN OF THE SIERPINSKI ANTENNA
The Sierpinski gasket monopole is one of the antenna
structures that have received the most attention in the fractal
antenna category. The monopole antenna based on the
Sierpinski gasket has been studied extensively as an excellent
candidate for multiband applications.

Fig.1 Front view of the Sierpinski gasket monopole antenna

The structure is printed on a dielectric substrate of εr = 2.5
and thickness 1.5 mm. The overall height H of the gasket
antenna was taken as 22 mm for the design. The Sierpinski
gasket monopole antenna consists of a series of scaled
triangles with a scale factor of r = 0.5; the heights of the
scaled triangles are H/2 and H/4. These three heights make
the antenna operate at three resonating frequencies
respectively, as shown in Fig.1.
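The self-similar scaling described above (overall height H = 22 mm, scale factor r = 0.5) gives the sub-gasket heights directly; a quick sketch of the geometric progression:

```python
def gasket_heights(H, r, iterations):
    """Heights of the successively scaled Sierpinski triangles:
    H, H*r, H*r**2, ... Each height supports one resonant band."""
    return [H * r ** n for n in range(iterations)]

# Design values from the paper: H = 22 mm, r = 0.5, three iterations.
heights = gasket_heights(H=22.0, r=0.5, iterations=3)   # [22.0, 11.0, 5.5] mm
```

It is this log-periodic spacing of the sub-gasket heights that produces the antenna's multiband behaviour.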


In the bottom part of the antenna system, a microstrip line was
printed on the substrate, and a PN diode was placed on it to
produce the reconfigurability in the antenna's resonating
frequencies. The position of the diode is used to turn the
resonating frequencies of the antenna on and off. A coaxial
feed has been used to excite the structure through the ground
plane on the back side of the antenna.
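The switching scheme just described, where the forward-biased diode selects which strip-line length, and hence which gasket height, is matched, can be summarized as a small lookup. The pairing of heights with the observed bands (1.8, 5.5 and 8.5 GHz for H, H/2 and H/4) follows the measurements reported in the Results section:

```python
H = 22.0  # overall gasket height in mm

# Diode position on the strip line (mm) -> matched height label and the
# resonant frequency (GHz) that becomes dominant at that position.
BANDS = {
    H:     ("H",   1.8),   # full length matched  -> lowest band dominant
    H / 2: ("H/2", 5.5),   # half length matched  -> middle band dominant
    H / 4: ("H/4", 8.5),   # quarter length       -> highest band dominant
}

def dominant_band(diode_position_mm):
    """Return (matched height label, dominant frequency in GHz) for a
    forward-biased diode placed at the given strip-line position."""
    return BANDS[diode_position_mm]

label, freq = dominant_band(11.0)
```

This is only a summary of the reported behaviour, not an electromagnetic model; the actual S11 curves come from the CST simulations.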

3. RESULTS AND DISCUSSION
The Sierpinski monopole antenna resonates at three
frequencies corresponding to the heights H, H/2 and H/4
respectively. The three frequencies were observed as 1.8, 5.5
and 8.5 GHz respectively without the diode on the microstrip
line. In Figure 2, the S11 plot shows the frequency response
curve. All three frequencies satisfied the accepted level of
minimum return loss of 10 dB.

Fig.2 S11 vs frequency plot of gasket monopole antenna
without diode

In Figure 3, the diode is forward biased and placed at a
position of 22 mm on the strip line. The 22 mm strip line
length matches the overall height H of the antenna, so the first
frequency becomes more dominant and the other two
frequencies become less dominant.

Fig.3 S11 vs frequency plot with diode position at 22 mm

Similarly, when the diode is forward biased and placed at
11 mm on the strip line, the strip line length matches the H/2
height of the antenna. The middle frequency is then more
dominant with respect to the other two frequencies, as shown
in Figure 4.

Fig.4 S11 vs frequency plot of gasket monopole antenna
with diode position at 11 mm

In Figure 5, the strip line length of 5.5 mm matches the H/4
height of the antenna, so the higher frequency is seen to be
more dominant.

Fig.5 S11 vs frequency plot with diode position at 5.5 mm

4. CONCLUSION
A reconfigurable multiple-frequency fractal antenna was
designed at three frequency bands. First, the designed antenna
needs to be well matched, by the proper feeding technique, at
all frequencies of interest. The forward-biased switching
diode and its location were used to turn the frequencies on
and off for the desired applications. With the adopted
approach, the developed reconfigurable antenna works for
Bluetooth, GSM and satellite applications.

5. REFERENCES
[1] E. R. Brown, "RF-MEMS switches for reconfigurable
integrated circuits," IEEE Trans. Microwave Theory and
Techniques, vol. 46, no. 11, pp. 1868-1880, Nov. 1998.
[2] J. C. Chiao, Y. Fu, I. M. Chio, M. DeLisio, and L.-Y. Lin,
"MEMS reconfigurable Vee antenna," in Proc. IEEE MTT-S
Int. Microwave Symp. Dig., vol. 4, pp. 1515-1518, Jun.
13-19, 1999.
[3] W. H. Weedon, W. J. Payne, and G. M. Rebeiz, "MEMS-
switched reconfigurable antennas," in Proc. Antennas and
Propagation Society Int. Symp., vol. 3, pp. 654-657, Jul.
8-13, 2001.
[4] E. R. Brown, "On the gain of a reconfigurable-aperture
antenna," IEEE Trans. Antennas Propag., vol. 49, no. 10, pp.
1357-1362, Oct. 2001.
[5] J. Kiriazi, H. Ghali, H. Ragaie, and H. Haddara,
"Reconfigurable dualband dipole antenna on silicon using
series MEMS switches," in Proc. Antennas and Propagation
Society Int. Symp., vol. 1, pp. 403-406, Jun. 22-27, 2003.
[6] C. Puente, M. Navarro, I. Romeu, and R. Pous,
"Variations on the fractal Sierpinski antenna flare angle," in
Proc. Antennas and Propagation Society Int. Symp., vol. 4,
pp. 2340-2343, 1998.


Review of Spectrum Sensing in Cognitive Radio by


Using Energy Detection Technique
Ajay Jindal
ECE Department
BGIET, Sangrur
ajayjindal786@gmail.com

ABSTRACT:
To sense the channel, detect the presence of the primary user,
and provide the vacant band to secondary users, we use the
energy detection technique in cognitive radio. Simulations of
suitable code detect and display all the required quantities,
such as the presence of primary and secondary users and the
levels of noise and attenuation. The behaviour of the energy
detection scheme in cognitive radio mainly depends on three
parameters, the probability of detection, the probability of
false detection and the probability of miss detection, which
are likewise evaluated using the developed MATLAB code.
The energy detection technique is the best strategy for
cognitive radio at low SNR.

KEYWORDS:
Spectrum sensing and opportunity, sensor clustering, sensing
scheduling, energy and feature detection.

1. INTRODUCTION
Over the last decade, wireless technologies have grown
rapidly, and more spectrum resources are needed to support
the various emerging wireless services. Within the current
spectrum regulatory framework, however, most of the
frequency bands are exclusively allocated to specific services,
and no violation by unlicensed users is allowed. The problem
of spectrum scarcity is becoming more evident and worries
wireless system designers and telecommunications policy
makers. Interestingly, a recent survey of spectrum utilization
made by the Federal Communications Commission (FCC) has
shown that the actual licensed spectrum is largely under-used
across vast temporal and geographic dimensions [1].

To resolve the conflict between spectrum scarcity and
spectrum under-utilization, cognitive radio (CR) technology
has recently been proposed. It can improve spectrum
utilization by allowing secondary networks to borrow unused
radio spectrum from primary licensed networks or to share
the spectrum with the primary networks [2].

As an intelligent wireless communication system, a cognitive
radio is aware of the radio frequency environment. It selects
the communication parameters, such as carrier frequency,
transmission bandwidth and transmission power, to optimize
the spectrum usage, and adapts its transmission and reception
accordingly.

One of the most essential components of cognitive radio
technology is spectrum sensing. By sensing and adapting to
the environment, a cognitive radio is able to fill spectrum
holes and serve its users without causing harmful interference
to the licensed user.

One of the great challenges in implementing spectrum sensing
is the hidden terminal problem, which occurs when the
cognitive radio is shadowed, in severe multipath fading or
inside buildings with high penetration loss, while a primary
user (PU) is operating in the vicinity [3]. Because of the
hidden terminal problem, a cognitive radio may fail to notice
the presence of the PU and will then access the licensed
channel and cause interference to the licensed system. To deal
with the hidden terminal problem in cognitive radio networks,
multiple cognitive users can cooperate to conduct spectrum
sensing [4-8]. It has been shown that spectrum sensing
performance can be greatly improved as the number of
cooperating partners increases. In this letter, we consider the
optimization of cooperative spectrum sensing with energy
detection to minimize the total error rate [9]. It should be
mentioned that optimal spectrum sensing under data fusion
was investigated in earlier work, where the optimal decision
function of weighted data fusion was obtained [10]. In other
recent works, optimal sensing through a tradeoff was
considered, and optimal distributed signal detection with a
likelihood ratio test using reporting channels from the CRs to
the fusion centre has been addressed. Here we investigate the
optimality of cooperative spectrum sensing using the sensing
channels between the primary transmitter and the CRs when
energy detection and distributed decision fusion are applied
to a


cognitive radio system [11]. In particular, we determine the
optimal voting rule, i.e., the optimal value of n for the
"n-out-of-K" rule. We also determine the optimal detection
threshold to minimize the error rate [12]. We further propose
a fast spectrum sensing algorithm for large cognitive networks
which requires only a few, not all, cognitive radios in
cooperative spectrum sensing to reach a target error rate.

2. CHARACTERISTICS
The two primary characteristics of the cognitive radio are:

• Cognitive capability - This refers to the ability of the
cognitive radio to sense the environment or channels used for
transmission and derive information about the state of the
channel. It encompasses all the essential functions of the
cognitive radio, such as spectrum sensing, spectrum analysis
and spectrum selection. Finding the vacant bands and
selecting the most efficient of all available options is the main
character of a cognitive radio.

• Reconfigurability - This refers to programming the radio
dynamically without making any changes to its hardware. A
cognitive radio is a software-based radio, not a hardware-
based one, so it has the ability to switch between different
wireless protocols and also supports multiple applications.
This software-based approach gives the cognitive radio its
reconfigurability: it can easily switch between frequencies,
change modulation schemes and control power levels without
affecting any of the hardware.

3. PARAMETERS
There are basically three parameters on which the
performance of the energy detection technique depends, so
much care has to be taken about these parameters. They are:

• Probability of detection
• Probability of miss detection
• Probability of false alarm

We now discuss them one by one.

Probability of Detection
It is the capacity of the cognitive radio to detect the signal, by
Monte Carlo realization to a certain degree, as determined by
the noise uncertainty factor. To understand it simply: if the
received power P exceeds the threshold T, the signal is
present, meaning there is no vacant band, and vice versa.
From this it is clear that a high SNR is required for better
utilization, so this parameter must be high.

Probability of False Detection
Due to the presence of much more noise, the noise can
dominate the level of the actual incoming signal; then P > T
holds because of the noise, the cognitive radio takes it as the
power level of an incoming signal, and a signal is detected.
This is called false detection; sometimes the cognitive radio
can falsely detect a signal in the channel, so the value of this
parameter must be low.

Probability of Miss Detection
Sometimes the cognitive radio becomes unable to detect the
presence of a signal; this is called miss detection, so the value
of this parameter should also be low. A graphical illustration
of both of these parameters is shown combined.

4. RELATED WORK
Spectrum sensing is essential work for providing service to
many users from the free vacant bands; these users are called
secondary users, while the bands reserved for some
organization belong to the primary users. As we have seen,
many approaches are used to accomplish this, but the
parameters of energy detection are not treated properly for
better utilization, so our work tries to provide some more
essential information about them.

Subhashri G. Mohapatra, Ambarish G. Mohapatra, Dr. S. K.
Lank [1]: In this paper three spectrum sensing techniques,
energy detection, matched filter and cyclostationary-based
detection, in a cognitive radio network environment were
examined along with their performance, suitability and
effectiveness under different transmission conditions. They
evaluated the performance of cognitive radio with energy-
based and cyclostationary-based detection using different
windowing techniques. Simulation results demonstrated that
the cyclostationary-based approach gives better results under
low SNR conditions with some windows and
accessibility essential flag by contrasting approaching sign with rest of windows execution is not tasteful when SNR is
and the edge one. For better execution estimation of this in scope of -20 dB.
parameter ought to be high. For this we require limit esteem M.Lakshmi, R.Saravanan, R.Muthaiah[2]In this paper four
we taken it as chi square yet straightforward distinctive procedure of range sensing specifically vitality
comprehension we take it as T and approaching sign is location, coordinated channel, Cyclostationary based
taken as P it implies it is a force level of approaching sign. discovery, multi determination range sensing were talked
what's more for vitality identification we must need a high about. Out of these four primary centre of this paper is on
SNR for better execution on the grounds that this technique MRSS (multi determination range sensing) in view of
for discovery is falls flat at lower SNR. We characterize the wavelet based change for multi-determination sensing
SNR as the proportion of the normal got sign force to the gimmick. Reproductions results demonstrated MRSS
normal clamor power we require the likelihood of false analyzed wide range with low power utilization, Faster
caution Pf Then the limit is discovered in view of the distinguishment, fast operation.
formulae in Section For correlation, we likewise reenact the Xushiynu, Zhaozj, Shangjn[3] In this paper range sensing
vitality discovery with or without commotion vulnerability in light of Cyclostationary were talked about. Creators
for the same framework. The edge for the vitality location proposed Combination identification strategy utilizing
is given in. At commotion instability case, the limit is numerous recognition point for sensing. Reproductions
constantly situated in light of the accepted/assessed clamor results demonstrated that better location execution were
force, while the genuine clamor force is changing in every
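The threshold test P > T and the three probabilities defined above can be illustrated with a short Monte Carlo simulation. The paper's experiments use MATLAB; the sketch below is an equivalent in Python/NumPy, and the window length, trial count, SNR and the 5% false-alarm target are illustrative choices, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000          # samples per sensing window (illustrative)
trials = 2000     # Monte Carlo trials
snr_db = -5.0     # received SNR in dB (illustrative)
snr = 10 ** (snr_db / 10)

# Threshold T taken from the noise-only energy statistics (chi-square
# distributed) for a 5% target false-alarm rate, with unit noise power.
noise_energy = (rng.normal(size=(trials, N)) ** 2).mean(axis=1)
T = np.quantile(noise_energy, 0.95)

# Signal-plus-noise trials: P > T  ->  band occupied (primary user present).
sig_energy = ((rng.normal(size=(trials, N)) +
               rng.normal(scale=np.sqrt(snr), size=(trials, N))) ** 2).mean(axis=1)

Pf = (noise_energy > T).mean()   # false alarm: noise alone crosses T
Pd = (sig_energy > T).mean()     # detection: primary signal correctly found
Pm = 1 - Pd                      # miss detection
print(f"T={T:.3f}  Pf={Pf:.3f}  Pd={Pd:.3f}  Pm={Pm:.3f}")
```

With these settings the empirical Pf sits near the 5% design target while Pd stays close to 1; lowering `snr_db` drives Pd down, which is exactly the low-SNR weakness of energy detection discussed later in the problem formulation.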

253
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

achieved using this technique, and some work was also carried out on reducing complexity.
Mayank Sachan, Shipla Gupta, Anjali Kansal [4]: In this paper two techniques for energy detection based on a cooperative scheme, named the P-out-of-I cooperative approach and the hybrid cluster approach, were discussed on the basis of parameter accuracy and speed. Simulation results showed that these schemes gave better detection of the primary user than various previous schemes.
Varaka Uday Kanth, Kelli Ravi Chandra, Rayala Ravi Kumar [5]: The authors discussed different spectrum sharing techniques in cognitive radio networks for effectively utilizing the frequency spectrum. Spectrum sharing based on architecture, spectrum allocation behaviour and spectrum access techniques was presented. The conclusion indicated that spectrum sharing techniques utilize the spectrum in a more effective way.
Soudilya P. [6]: In this paper a combined design (for channel access and spectrum sensing) was discussed for secondary nodes to better access the channel and minimize the effects of channel sensing errors. Simulation results showed a considerable increase in secondary user access probabilities, which increases throughput and decreases delay for both primary and secondary networks.
Bodepudi Mounika, Kelli R. Chandra, R. R. Kumar [7]: In this paper sensing techniques and the issues which lead to uncertainty in sensing were discussed. The interference-based detection approach introduced the idea of ultra-wideband technology allowing a cognitive radio to coexist and simultaneously transmit with the primary user. Various issues which should be kept in mind when dealing with CR approaches for effective detection were covered.
Shunqing Zhang, Tianyu Wu, Vincent K. N. Lau [8]: In this paper an energy-detection-based cooperative sensing scheme for cognitive radio systems was proposed. The scheme significantly reduces the period overhead and the sensing-reporting overhead of the secondary systems, and the power scheduling algorithm dynamically assigns the transmission power of the cooperative sensor nodes. Simulation results showed that the false alarm and miss detection performance of this cooperative sensing scheme improves as the number of cooperative sensor nodes increases.
Anirudh M. Rao, B. R. Karthikeyan, Dipayan Mazumdar, Govind R. Kadambi [9]: A principal component analysis scheme for energy-detection spectrum sensing was examined. The authors introduced a modification factor into principal component analysis to match the signal-to-noise power of the received signal to the SNR of the genuine signal. Simulation results showed that modified energy detection can sense spectrum holes more accurately.
In the reference base paper the authors proposed windowing techniques for cyclostationary detection and compared them with energy detection, but there are some points which require careful consideration. This information is also needed for better utilization of bandwidth, so that we can increase the efficiency of cognitive radio.

5. PROBLEM FORMULATION
1) Nothing is clearly stated about the probability of detection (Pd), the probability of false alarm (Pf) and the probability of miss detection (Pm). As we know, correct working knowledge of these parameters is required for work in the cognitive radio field.

2) The energy-based detector fails to detect the signal at low SNR.

6. METHODOLOGY
For the successful completion of our work we choose software familiar to all, MATLAB. It is well suited to any kind of programming and simulation with high accuracy.

We use MATLAB coding here to overcome the problems of previous systems, especially regarding SNR. We pay particular attention to the Rayleigh fading case in the energy detection scheme, where we compare the incoming signal with the threshold to detect the presence of the primary user.
Our method is less computationally complex and easy to implement, and improves the SNR behaviour of energy detection compared with previous results.

7. DISCUSSION

In the energy detection scheme, the energy of the received signal in a certain frequency band is compared with the threshold energy to take the decision about the presence or absence of the primary signal. The figures of our discussion show that in order to obtain reliable performance in difficult propagation environments, collaboration among secondary users is essential. The performance of a single secondary user operating alone is considerably worse in a Rayleigh fading channel than in AWGN. Collaboration among secondary users brings the overall detection performance in Rayleigh fading to the same level as the overall collaborative detection performance in AWGN. Collaboration provides spatial diversity and thus reduces the impact of fading on the overall detection performance. That is, the probability that every secondary user is simultaneously in a deep fade becomes smaller as the number of spatially separated secondary users increases. Using multiple cyclic frequencies further improves the performance. The performance improvement is 1-2 dB.

8. REFERENCES
[1] Federal Communications Commission, "Spectrum Policy Task Force," Rep. ET docket no. 02-135, Nov. 2002.
[2] J. Mitola and G. Q. Maguire, "Cognitive radio: making software radios more personal," IEEE Personal Commun., vol. 6, pp. 13-18, Aug. 1999.
[3] D. Cabric, S. M. Mishra, and R. W. Brodersen, "Implementation issues in spectrum sensing for cognitive radios," in Proc. Asilomar Conf. Signals, Systems, Computers, Nov. 2004, vol. 1, pp. 772-776.
[4] W. Zhang and K. B. Letaief, "Cooperative spectrum sensing with transmit and relay diversity in cognitive radio networks," IEEE Trans. Wireless Commun., vol. 7, pp. 4761-4766, Dec. 2008.
[5] G. Ganesan and Y. G. Li, "Cooperative spectrum sensing in cognitive radio networks," in Proc. IEEE Symp. New Frontiers in Dynamic Spectrum Access Networks (DySPAN'05), Baltimore, USA, Nov. 2005, pp. 137-143.
[6] A. Ghasemi and E. S. Sousa, "Collaborative spectrum sensing for opportunistic access in fading environments," in Proc. IEEE Symp. New Frontiers in Dynamic Spectrum Access Networks (DySPAN'05), Baltimore, USA, Nov. 2005, pp. 131-136.
[7] S. M. Mishra, A. Sahai, and R. Brodersen, "Cooperative sensing among cognitive radios," in Conf. Rec. IEEE Int. Conf. Commun. (ICC'06), Turkey, June 2006, vol. 4, pp. 1658-1663.
[8] K. B. Letaief and W. Zhang, "Cooperative communications for cognitive radio," Proc. IEEE, vol. 97, no. 5, pp. 878-893, May 2009.
[9] Z. Quan, S. Cui, and A. H. Sayed, "Optimal linear cooperation for spectrum sensing in cognitive radio networks," IEEE J. Sel. Topics Signal Process., vol. 2, no. 1, pp. 28-40, Feb. 2008.
[10] E. Peh and Y.-C. Liang, "Optimization for cooperative sensing in cognitive radio networks," in Proc. IEEE Wireless Commun. Networking Conf., Hong Kong, Mar. 2007, pp. 27-32.
[11] Y.-C. Liang, Y. Zeng, E. Peh, and A. T. Hoang, "Sensing-throughput tradeoff for cognitive radio networks," IEEE Trans. Wireless Commun., vol. 7, pp. 1326-1337, Apr. 2008.
[12] B. Chen and P. K. Willett, "On the optimality of the likelihood-ratio test for local sensor decision rules in the presence of nonideal channels," IEEE Trans. Inf. Theory, vol. 51, pp. 693-699, Feb. 2005.
[13] A. Sahai, N. Hoven, and R. Tandra, "Some fundamental limits on cognitive radio," in Proc. Allerton Conf. Commun., Control, Computing, Monticello, Oct. 2004.
[14] F. F. Digham, M.-S. Alouini, and M. K. Simon, "On the energy detection of unknown signals over fading channels," in Conf. Rec. IEEE Int. Conf. Commun. (ICC'03), Anchorage, AK, USA, May 2003, pp. 3575-3579.
[15] P. K. Varshney, Distributed Detection and Data Fusion. Springer, 1997.


Review of Cognitive Radio by Cyclostationary Feature Based Spectrum Sensing

Nishant Goyal
M.Tech (ECE) Student
BGIET, Sangrur
nishant91.goyal@gmail.com

ABSTRACT
The principle of cognitive radio systems is to utilize the licensed spectrum as long as their interference to primary users can be maintained below a certain threshold. Thus, to successfully coexist, cognitive users must have awareness of a primary user's presence in the vicinity. As most communication signals exhibit statistical periodicities, cyclostationary feature detection can be used to perform the task of sensing the spectrum for a primary user's presence. A second-order statistical approach is most widely used to perform cyclostationary feature detection, in which a set of lags must be chosen for statistical testing. The optimal method for choosing multiple lags requires knowledge of the 4th-order cyclic cumulants of the primary user's signals, which can be a burden in practice. In this work, a new idea for lag set selection is presented, which avoids the mentioned 4th-order cumulant burden. The results are verified via analysis and simulation. It is shown that the performance of the proposed method is comparable to the optimal one in the low signal-to-noise ratio region, where it is most critical for CR applications.

KEYWORDS
Cognitive radio, Cyclostationary Feature Detection, Spectrum Sensing.

1. INTRODUCTION
Cognitive radio is a radio for wireless communications in which either a network or a wireless node changes its transmission or reception parameters based on the interaction with the environment, to communicate effectively without interfering with the licensed users [1]. The main interest for cognitive radio users is the frequency range which remains empty for most of the time, that is, to bring such frequency ranges to unlicensed users in such a way that interference to the licensed users is minimized. It can also be defined as a radio or system that senses its operational electromagnetic environment and can dynamically and autonomously adjust its radio operating parameters to modify system operation. Those system operations can maximize throughput, mitigate interference, facilitate interoperability, and access secondary markets [2].
Cognitive radio is a paradigm that has been proposed so that the frequency spectrum can be better utilized. The formal definition of cognitive radio is given as "Cognitive Radio is a radio for wireless communications in which either a network or a wireless node changes its transmission or reception parameters based on the interaction with the environment to communicate effectively without interfering with the licensed users" [3].

Figure 1: Spectrum utilization [1]

Spectrum sensing, that is, checking the frequency spectrum for empty bands, forms the foremost part of the cognitive radio. There are a number of schemes for spectrum sensing, like the energy detector, the matched filter and cyclostationary-based spectrum sensing. We have to implement a detector which can perform well under low SNR conditions with complexity not as high as the matched filter; the cyclostationary detector turned out to be the choice for such specifications [4]. Cyclostationary-based sensing uses the periodicity property of signals. The signals used in most applications are generally coupled with sinusoid carriers, cyclic prefixes, spreading codes, pulse trains etc., which result in periodicity of their statistics, such as the mean and auto-correlation. Such periodicities are easily highlighted when the cyclic spectral density (CSD) of such signals is found. Primary user signals which have these periodicities can easily be detected by taking their correlation: the Fourier transform of the correlated signal results in peaks at frequencies which are specific to a signal, and searching for these peaks helps in determining the presence of the primary user. Noise is random in nature, has no such periodicities, and thus does not get highlighted on taking the correlation [5]. The main category of interest for cognitive radio users is the first category, in which the hardly used or empty bands are classified. In layman's terms, cognitive radio is nothing but a methodology wherein the first category of the frequency range is brought into use for unlicensed users in such a way that interference to the licensed users is minimized [6]. Spectrum sensing, the checking of the frequency spectrum for empty bands, forms the foremost part of the cognitive radio. There are a number of schemes for spectrum sensing, like the energy detector, the matched filter and cyclostationary-based spectrum sensing. The need is to implement a detector which can perform
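The mechanism described above, in which periodicities hidden in a modulated signal produce spectral lines after correlation while noise does not, can be sketched in a few lines. This is an illustrative Python fragment, not code from the paper: the carrier frequency, record length and the simple x·x product (a crude second-order statistic standing in for the full cyclic spectral density) are assumptions chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096
n = np.arange(N)
f0 = 0.125                      # normalized carrier frequency (illustrative)

def cyclic_peak_ratio(x):
    """Peak-to-average magnitude of the FFT of x[n]*x[n]: a carrier
    produces a spectral line at 2*f0, while pure noise produces no
    dominant line."""
    spec = np.abs(np.fft.rfft(x * x))
    spec[0] = 0.0               # discard the DC term
    return spec.max() / spec.mean()

carrier = np.cos(2 * np.pi * f0 * n)   # "primary user" at roughly -3 dB SNR
noise = rng.normal(size=N)

with_signal = cyclic_peak_ratio(carrier + noise)
noise_only = cyclic_peak_ratio(noise)
print(with_signal, noise_only)         # the first ratio is much larger
```

Searching for such peaks, rather than measuring raw energy, is what lets the detector keep working when the signal power is comparable to the noise power.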


well under low SNR conditions with complexity not as high as the matched filter; the cyclostationary detector turned out to be the choice for such specifications [7]. The cognitive radio then works well under low SNR conditions and has the ability to distinguish the primary user from noise.
The parameters that can be used for cognitive radio are as follows:
SNR (signal-to-noise ratio): SNR is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. It is defined as the ratio of signal power to noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise. While SNR is usually quoted for electrical signals, it can be applied to any form of signal (for example, isotope levels in an ice core or biochemical signaling between cells) [2].
Probability of detection: According to the theory, there are various determiners of how a detecting system will detect a signal, and where its threshold levels will lie. The theory can explain how changing the threshold will affect the ability to detect, often revealing how well the system is adapted to the task, purpose or goal at which it is aimed.

2. RELATED WORK
In [8], it is described that to detect the presence of the primary user signal, spectrum sensing is an essential requirement to achieve the goal of cognitive radio (CR). It explains that cyclostationary detection is the preferred technique to detect primary users receiving data within the communication range of a CR user at low SNR. Using the proposed detection technique, it is observed that the MIMO cognitive radio enjoys a 6 dB SNR advantage over a single antenna when using four receive antennas, for all values of the probability of detection.
In [9], new sensing methods are designed based on the eigenvalues of the covariance matrix of the signal received at the secondary users. In particular, two sensing algorithms are proposed: one is based on the ratio of the maximum eigenvalue to the minimum eigenvalue, the other on the ratio of the average eigenvalue to the minimum eigenvalue. The methods can be used for various signal detection applications without knowledge of the signal, the channel or the noise power. Simulations based on randomly generated signals, wireless microphone signals and captured DTV signals have been carried out to verify the methods.
In [4], techniques for detection and classification of radio signals in a cognitive radio (CR) environment are proposed and examined. Simulation results show that the method can detect the incoming signals, even at low SNR, if the number of observation blocks is sufficiently large. One of the technique's advantages is that it does not require any a priori information about the transmitted signal except rough information on the signal bandwidth. It allows the CR device, which tries to access a particular channel in an opportunistic way, to blindly detect active or licensed user signals in that channel. Signal classification can be performed with high accuracy if the observation length is sufficiently long. In this work, the cycle frequency domain profile (CDP) is used for signal detection and preprocessing for signal classification.
In [10], the new field of cognitive radios is investigated with a special emphasis on one particular aspect of these radios: spectrum sensing. Firstly, it identifies two key issues related to the cognitive radio front end: dynamic range reduction and wideband frequency agility. Primary user detection can be further improved by advanced feature detection schemes such as cyclostationary detectors, which exploit the inherent periodicity of modulated signals. Further, individual sensing is not sufficient for reliable detection of primary users because of shadowing and multipath effects. In such a case cooperative decision making is the key to reducing the probability of interference to primary users.
In [3], recently proposed dynamic spectrum management and sharing schemes are discussed, for example medium access control, spectrum handoff, power control, routing and cooperation enforcement. By tuning the frequency to the temporarily unused licensed band and adapting operating parameters to environment variations, cognitive radio technology provides future wireless devices with additional bandwidth, reliable broadband communications, and flexibility for rapidly growing data applications. In this review, the fundamental ideas about cognitive radio characteristics, functions, network architecture and applications are presented.
In [5], the limited available spectrum and the inefficiency of spectrum usage are described as requiring a new communication paradigm to exploit the existing wireless spectrum opportunistically. This new networking paradigm is referred to as next-generation (xG) networks as well as dynamic spectrum access (DSA) and cognitive radio networks. xG networks, equipped with the intrinsic capabilities of the cognitive radio, will provide an ultimate spectrum-aware communication paradigm in wireless communications. In this survey, the inherent properties and current research challenges of xG networks are presented. The author investigates the particular challenges in xG networks by a bottom-up approach, starting from the capabilities of cognitive radio techniques up to the communication protocols that need to be developed for efficient communication.
In [11], schemes are formulated to detect and nullify the effect of malicious nodes for the case where energy detectors are used by the sensing devices. A simple and fast average combination scheme is used to improve the decision process at the access point. Using simulations, the author has verified that the proposed schemes can detect "Always Yes" users, "Always No" users and malicious nodes producing extreme values.
In [12], cyclostationary spectrum sensing of primary users in a cognitive radio system is considered. Single-user multi-cycle CFAR detectors are proposed and extended to accommodate user collaboration. Moreover, a censoring technique is proposed for reducing energy consumption and the number of transmissions of local test statistics during collaboration. Unlike energy detection, the proposed cyclostationary approach is able to distinguish among primary users, secondary users and interference. Furthermore, it is not susceptible to noise uncertainty, and it is nonparametric in the sense that no assumptions on data or noise distributions are required. Collaboration among secondary users is crucial for mitigating the effects of shadowing and fading, and consequently for shortening the detection time. In mobile applications battery life is a limited resource that must be conserved. A censoring scheme in which only informative test
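The maximum-to-minimum eigenvalue idea attributed above to [9] can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the smoothing factor L, the moving-average "primary" signal and the amplitudes are arbitrary choices made only to show that the statistic needs no knowledge of signal, channel or noise power.

```python
import numpy as np

rng = np.random.default_rng(2)

def mme_statistic(x, L=8):
    """Ratio of the largest to the smallest eigenvalue of the L x L
    sample covariance matrix of x (max-min eigenvalue detector):
    near 1 for white noise, larger when a correlated signal is present."""
    # Stack delayed copies of x as rows of length L.
    frames = np.lib.stride_tricks.sliding_window_view(x, L)
    R = frames.T @ frames / frames.shape[0]   # sample covariance (L x L)
    eig = np.linalg.eigvalsh(R)               # ascending order
    return eig[-1] / eig[0]

N = 5000
noise = rng.normal(size=N)
# A correlated (low-pass filtered) primary-user-like signal at about -3 dB SNR.
sig = np.convolve(rng.normal(size=N), np.ones(8) / 8, mode="same")
received = noise + 2.0 * sig

print(mme_statistic(noise))      # close to 1 for white noise
print(mme_statistic(received))   # noticeably larger with a signal present
```

Because white noise has a covariance matrix close to the identity, both eigenvalues coincide and the ratio stays near 1; any temporal correlation introduced by a real signal spreads the eigenvalues apart.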


statistics are transmitted to the FC has been proposed. The proposed censoring scheme has been found to be a suitable approach for significantly reducing the reporting overhead without sacrificing performance.
In [13], a frequency-domain entropy-based spectrum sensing scheme for CRNs is proposed and shown to improve the detection performance with respect to energy detectors and cyclostationary detectors. The entropy of the measured signal is estimated in the frequency domain with the probability space partitioned into fixed dimensions. It is analytically demonstrated that the proposed scheme is robust against noise uncertainty. Through Monte Carlo tests, it is shown that the proposed detector greatly outperforms energy detectors and cyclostationary detectors, with 6 dB and 5 dB performance improvement respectively. The sample size is significantly reduced by the proposed scheme compared to the energy detector under the same detection performance.

3. PROBLEM FORMULATION
The issue of reducing complexity is a challenging one in cognitive radio. The energy detection technique works only at high SNR; due to this property of the energy detection mechanism, it will not detect the signal at low SNR. To overcome this disadvantage of energy detection, the cyclostationary process can be used for spectrum sensing. As the cyclostationary process does not depend upon SNR, it can detect any of the signals arriving in its range: it can detect the presence of the primary user without the intervention of SNR. All the functionalities call for a spectrum-aware communication protocol. It can check the periodicity of the signal, and thus it can evaluate the presence of the primary user if the signal is periodic.

4. METHODOLOGY
The issue of reducing complexity is a challenging one in cognitive radio. The energy detection technique works only at high SNR; because of this property, it will not detect the signal at low SNR. To overcome this shortcoming of energy detection we use the cyclostationary process for spectrum sensing. As the cyclostationary approach does not rely on SNR, it can detect any of the signals arriving in its range, and it can detect the presence of the primary user without the intervention of SNR. All the functionalities require a spectrum-aware communication protocol. It can check the periodicity of the signal, and can therefore assess the presence of the primary user if the signal is periodic.
Since the cognitive radio has to adapt to environment changes, there must be a high level of coordination among the different protocol stack layers. This stands in contrast to conventional communication between layers in the case of fixed frequency-allocated applications. All such research work on improving the performance gain can be broadly classified under the term cross-layer design. In the cross-layer design field there have been various interpretations of the idea, as it is still not standardized, and therefore people are working independently to propose different designs.

5. CONCLUSION
The proposed method is able to detect the presence of the primary user at low SNR. This work has investigated cyclostationary feature detection using multiple lags, since here we detect the presence of the primary user by checking the periodicity of the signal independently of the SNR. Optimal and suboptimal methods for selecting multiple lags have been presented and analyzed under various conditions. It was demonstrated that the proposed suboptimal method, compared to other existing suboptimal methods, can lead to superior detection performance in the low-SNR region. It is also evident that the analytical and simulation results closely match.

6. REFERENCES
[1] H. L. Hurd and A. Miamee, Periodically Correlated Random Sequences: Spectral Theory and Practice, 2007.
[2] P. Rostaing, T. Pitarque, and E. Thierry, "Performance analysis of a statistical test for presence of cyclostationarity in a noisy observation," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Atlanta, GA, May 1996.
[3] Wang and Raylin, "Reasonable cyclostationary-based spectrum sensing for cognitive radio with smart antennas," IEEE Transactions on Vehicular Technology, May 2010.
[4] Kyouwoong Kim, "Cross-layer design: a survey and the road ahead," IEEE Communications Magazine, Dec. 2005.
[5] I. F. Akyildiz, "Soft sensing and optimal power control for cognitive radio," IEEE Transactions on Wireless Communications, Dec. 2010.
[6] M. Oner and F. Jondral, "Cyclostationarity based air interface recognition for software radio systems," in Proc. IEEE Radio and Wireless Conf., Atlanta, GA, Sep. 2004, pp. 263-266.
[7] M. Kim, P. Kimtho, and J.-I. Takada, "Performance enhancement of cyclostationarity detector by utilizing multiple cyclic frequencies of OFDM signals," in Proc. IEEE Symp. New Frontiers in Dynamic Spectrum, 2010.
[8] Rajarsh Mahapatre and Krusbeel, "An efficient multiple lags selection method for cyclostationary feature based spectrum sensing," IEEE Signal Processing Letters, Feb. 2013.
[9] Juei-Chin Shen and Emad Alsusa, "An efficient multiple lags selection method for cyclostationary feature based spectrum sensing," IEEE Signal Processing Letters, vol. 20, no. 2, Feb. 2013.
[10] Danijela Cabric, "Next generation/dynamic spectrum access/cognitive radio wireless networks: a survey," Computer Networks Journal, 2006.
[11] Praveen Kaliyineedi, "Spectrum sensing techniques for cognitive radio systems with multiple antennas," M.S. thesis, Izmir Institute of Technology, 2010.
[12] Arslan and Yucek, "Spectrum Policy Task Force report," ET docket no. 02-155, Nov. 2002.
[13] Won-Yeol Lee, "Optimal linear cooperation for spectrum sensing in cognitive radio networks," IEEE Journal of Selected Topics in Signal Processing, Feb. 2008.
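The multiple-lag test at the heart of this work can be illustrated with a toy statistic. The sketch below is not the paper's algorithm: it uses a hand-picked lag set (not the optimal 4th-order-cumulant selection) and a rectangular-pulse BPSK signal whose symbol rate supplies the cycle frequency, merely to show that a sum of cyclic autocorrelations over several lags separates a cyclostationary signal from white noise.

```python
import numpy as np

rng = np.random.default_rng(3)

def cyclic_autocorr(x, alpha, tau):
    """Cyclic autocorrelation R_x^alpha(tau) estimated from the samples."""
    n = np.arange(len(x) - tau)
    return np.mean(x[:len(x) - tau] * x[tau:] * np.exp(-2j * np.pi * alpha * n))

def multilag_statistic(x, alpha, lags):
    """Sum of |R_x^alpha(tau)|^2 over a chosen lag set: the kind of
    multiple-lag test statistic this paper studies, with the lag set
    chosen by hand here rather than by the optimal rule."""
    return sum(abs(cyclic_autocorr(x, alpha, t)) ** 2 for t in lags)

N, sps = 16384, 8                       # sps: samples per symbol
symbols = rng.choice([-1.0, 1.0], N // sps)
signal = np.repeat(symbols, sps)        # rectangular-pulse BPSK baseband
noise = rng.normal(size=N)              # 0 dB SNR when added to the signal

alpha, lags = 1.0 / sps, (1, 2, 3)      # cycle frequency = symbol rate
print(multilag_statistic(signal + noise, alpha, lags))  # clearly nonzero
print(multilag_statistic(noise, alpha, lags))           # near zero
```

Because the noise contributes no component at the cycle frequency, its statistic shrinks as 1/N, while the signal's contribution stays fixed; this is why the test keeps working at SNRs where plain energy detection fails.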


Review of Simple Distributed Brillouin Scattering Modeling For Temperature and Strain

Tushar Goyal
M.Tech (ECE) Student
BGIET, Sangrur
tushar91.goyal@gmail.com

Gaurav Mittal
Assistant Professor
BGIET, Sangrur
Mitsgaurav@gmail.com

ABSTRACT
This paper presents a synthesis of distributed Brillouin scattering modeling in optical fibers using a newly developed algorithm. Simulations of a distributed fiber optic sensor are carried out with the aim of temperature and strain sensing. The behavior of Brillouin scattering in optical fibers is studied through the backscatter signals under various operating parameters along the optical fibers, using the developed MATLAB codes. An analysis of the backscatter signal characteristics when affected by temperature and strain is presented. All simulated models show excellent accuracy against published measurement results. The work carried out paves the way for more intricate distributed Brillouin scattering modeling.

KEYWORDS
Distributed Fiber-Optic Sensors; Brillouin Scattering; MATLAB; Temperature; Strain; Sensing

1. INTRODUCTION
Distributed fiber optic sensing can be used over short distances, typically up to a few kilometers. Optical systems have been widely used for data communication since the development of the laser in 1960. Distributed sensing offers high flexibility and speed of measurement. This kind of sensing is capable of measuring temperature and strain at a large number of points along a single fiber, which makes the distributed sensing approach different from other kinds of sensing techniques [1]. Essentially, linear and nonlinear scattering processes occur in optical fibers [2]; these have been used to measure distributed temperature and strain along the length of the fiber.

Distributed sensors based on Brillouin scattering, Rayleigh scattering and Raman scattering find their application mainly in structural health monitoring (SHM). Brillouin-based distributed sensors have been used widely to monitor strain and temperature in SHM. The distributed sensing technique makes use of the fact that the reflection characteristics of a laser beam traveling through an optical fiber vary with temperature and strain along the whole length. By using the distributed sensing approach, it is possible to take real-time readings of both temperature and strain.

Compared with conventional electronic sensors, fiber optic sensors exhibit several qualities, such as light weight, robustness and higher sensitivity, which help to monitor environmental variations [3]. Small temperature changes can be detected using this technique, making it possible to avoid the overheating problem in optical fibers. Fiber optic sensors use the optical fiber itself as the sensing element; they may be intrinsic or extrinsic. Their main advantage is immunity to electromagnetic interference, which is a problem faced in all communication systems. Fiber optic sensors have many advantages compared with other sensing techniques. Depending on the operating principle, fiber optic sensors are of different types, such as Bragg grating sensors, distributed sensing, quasi-distributed sensing and so on [4].

Figure 1: Distributed optic fiber sensing system

From the above diagram it is clear that any change in the measured field, such as temperature, pressure or strain variations, is coupled through a fiber optic link, and information is retrieved through the signal detection and processing block. The principle behind the distributed sensing approach is linear and nonlinear scattering, such as Rayleigh scattering, Brillouin scattering, Mie scattering, Raman scattering and so forth.

Figure 2: Various scattering mechanisms

2. RELATED WORK


In paper [5] the authors describe a technique for measuring the Brillouin gain spectrum in single-mode optical fibers. Only a single laser is used, together with an external modulator. The authors tested different sets of fibers with different refractive indices. The single laser helps generate an inherently highly stable, jitter-free probe signal as well as coherent pump and probe waves. The authors experimentally showed that in back-to-back measurements the same fiber shows a deviation of 100 kHz in the Brillouin frequency shift; this measurement is therefore considered one of the most accurate measurements of the BGS. The use of a modulator for generating the probe signal brought further advantages.

A. Voskoboinik and team [6] presented a Brillouin scattering based sensing technique which is frequency-sweep free in nature. The main characteristics of Brillouin scattering, the Brillouin gain and the Brillouin frequency shift, are obtained by using multiple frequency tones for both the pump and probe waves.

Daniele Inaudi and Branko Glisic [7] presented a distributed fiber optic sensor for measuring temperature and strain using a simulation setup. They modeled Rayleigh scattering using the backscattered signal. Besides this, they considered the effect of the dominant noise source, i.e., coherent Rayleigh noise (CRN). The key point was that, to compensate the Brillouin power for changes in the input power or for fiber losses such as attenuation, bends and splices, they used a Brillouin power trace normalized to the Rayleigh backscattered signal; in short, they introduced the concept of the Landau-Placzek ratio (LPR). They also modeled both Rayleigh and Brillouin scattering in the time domain reflectometry setup, and analyzed the scattered optical power for various fiber lengths.

In [8] the authors explain an improvement in the spatial resolution of distributed fiber Brillouin strain sensing obtained with continuous-wave techniques based on the correlation process. The main achievement was that the techniques can be useful for a partially stretched fiber, which can serve as a nerve system for smart materials.

K. Krebber [9] contributed Brillouin gain spectrum measurement in dispersion-shifted fibers for simultaneous temperature and strain sensing. Better results are obtained by using low-cost non-zero dispersion-shifted fibers (NZDSF). The NZDSF ensures efficient measurement of the temperature and strain distribution using Brillouin scattering in optical silica fibers. The main principle was based on the broadening of the BGS around the excited NZDSF used as the sensing fiber.

M. A. Soto and team [10] presented a technique for measuring temperature and strain simultaneously by using optical pulse coding. It provides an enhancement in signal-to-noise ratio, which allows accurate Brillouin power and frequency measurements. Optical pulse coding improves the temperature and strain resolution with respect to a Brillouin sensor operating at the same peak power level.

T. Schneider, Danny Hannover and Markus Junker [11] present a method of generating millimeter waves in optical fiber with the help of the stimulated Brillouin scattering process. They obtained the optimal parameters using numerical simulation. The main finding was the stronger dependence of Brillouin scattering on the fiber length than on the pump power. The working principle of this method is the separate amplification and heterodyne detection of the double sidebands of a selected frequency, maintaining the correlation between the sideband phases during the amplification process over the optimal length.

T. Schneider, Danny Hannover and Markus Junker [12] outlined a distributed Brillouin sensor whose performance rests entirely on optical differential parametric amplification (OPPA). The OPPA provides a narrow-band parametric Brillouin gain spectrum. The proposed technique can be used for measuring the distributed Brillouin gain spectrum, and hence distributed strain or temperature, and it demonstrates a high spatial resolution; a slope-based approach assists calibration.

R. Bernini, L. Crocco [13] and their team developed a frequency-domain approach for distributed fiber optic Brillouin sensing, in which all processing is performed in the frequency domain. The reconstruction is obtained with the help of a cost function used to estimate the unknown strain and temperature values. They gave a flow chart of the reconstruction algorithm.

3. PROBLEM FORMULATION
An optical fiber is a cylindrical dielectric waveguide (non-conducting waveguide) that transmits light along its axis by the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer, both made of dielectric materials. In order to confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The physical parameters of an optical fiber can be influenced by temperature and strain, which becomes the essence of distributed fiber optic sensing. Distributed sensing techniques are generally based on some kind of light scattering mechanism (e.g., Rayleigh and Brillouin scattering) occurring inside the fiber. The ability of distributed optical fiber sensing systems to measure physical parameters as a function of position along the fiber has generated intensive research interest. Distributed optical fiber sensors are essential when large structures, such as bridges, dams and mine shafts, are to be monitored.

This paper concentrates on distributed optical fiber sensors based on the phenomenon of stimulated Brillouin scattering (SBS). In this kind of fiber optic sensor, a pulse of laser light is launched into one end of the sensing fiber through a directional coupler, and the time-dependent characteristics of the light backscattered to the same fiber end are measured. This arrangement is based on the principle of Optical Time Domain Reflectometry (OTDR). As the pulse propagates along the fiber, it is scattered by Brillouin scattering mechanisms back to the launch end, where it is detected by the optical receiver. At the same time, the propagating pulse also experiences Rayleigh scattering, which appears as noise on the backscattered sensing signal. Brillouin scattering refers to the scattering of an incident light wave by the acoustic phonons of a medium, i.e., the backscattering of light due to the interaction between an incident photon and an acoustic phonon. When this process occurs in an optical fiber, the backscattered light experiences a frequency shift known as the Brillouin shift, and the frequency shift of the Brillouin gain spectrum is sensitive to the local temperature and strain.


Since the Brillouin frequency shift depends on both the effective refractive index of the fiber mode and the speed of acoustic waves inside the fiber, it changes whenever these quantities change in response to local environmental variations, and it can therefore be used to find the temperature and strain along the fiber.

The project begins by building a simple mathematical Brillouin scattering model using the coupled mode equations that relate the forward signal with the backscattered signal. Then, a modified mathematical formulation is developed.

Figure 3: Optical Time Domain Reflectometer

Because the Brillouin frequency shift is sensitive to temperature and strain, it becomes a very useful effect for building fiber optic sensors. In Rayleigh scattering, on the other hand, as a single optical wave travels along the core of an optical fiber, Rayleigh light is scattered in all directions from spatial fluctuations of the refractive index in the optical fiber; this is due to random and incoherent thermal fluctuations.
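Since the sensing arrangement follows the OTDR principle, the position of a scattering event along the fiber can be recovered from the round-trip time of the backscattered light. A minimal sketch of this standard mapping (the refractive index value is an assumption for illustration):

```python
# OTDR distance mapping: a backscattered event detected after round-trip
# time t originates at z = c * t / (2 * n), because the light travels to
# position z and back at the group velocity c/n. Values are illustrative.
c = 2.998e8          # speed of light in vacuum (m/s)
n = 1.45             # effective refractive index of silica fiber, assumed

def event_position(t_round_trip_s):
    """Distance along the fiber (m) for a given round-trip delay (s)."""
    return c * t_round_trip_s / (2 * n)

# a 10 microsecond round-trip delay corresponds to roughly 1 km of fiber:
z = event_position(10e-6)
print(f"{z:.1f} m")   # ~1033.8 m
```

This mapping is what lets a single pulse interrogate every position along the fiber: each sample of the backscatter trace corresponds to a distinct distance z.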

4. METHODOLOGY
This section presents the various equations used in the simulation of the optical fiber sensor. The evolution of Brillouin scattering in optical fibers is governed by a set of two interrelated equations under the steady-state condition. They are called the rate equations:

dI_P/dz = -g_B·I_P·I_S - α·I_P    (1)

dI_S/dz = -g_B·I_P·I_S - α·I_S    (2)

Here I_P, I_S, g_B and α represent the pump intensity, the Stokes intensity, the Brillouin gain coefficient and the losses at the pump/Stokes frequencies, respectively.
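As a rough illustration of how the rate equations can be integrated numerically (the paper uses MATLAB's ode45; below is a Python counterpart using SciPy's adaptive fourth/fifth-order Runge-Kutta integrator, with all parameter values assumed for demonstration, not taken from the paper):

```python
# Integrate the steady-state rate equations (1)-(2) along the fiber with
# an adaptive 4th/5th-order Runge-Kutta method (SciPy's RK45, the
# analogue of MATLAB's ode45). All parameter values are assumptions.
from scipy.integrate import solve_ivp

g_B   = 5e-11    # Brillouin gain coefficient (m/W), assumed
A_eff = 8e-11    # effective mode area (m^2), assumed
alpha = 4.6e-5   # fiber loss (1/m), about 0.2 dB/km, assumed

def rate_eqs(z, I):
    """Coupled equations (1) and (2); powers converted to intensities via A_eff."""
    I_P, I_S = I
    coupling = g_B / A_eff * I_P * I_S
    return [-coupling - alpha * I_P,    # dI_P/dz, equation (1)
            -coupling - alpha * I_S]    # dI_S/dz, equation (2)

z_f = 1000.0                 # final distance zf: 1 km of sensing fiber
sol = solve_ivp(rate_eqs, (0.0, z_f), [20e-3, 1e-3], method="RK45",
                rtol=1e-8)   # launch 20 mW pump, 1 mW Stokes (assumed)
I_P_end, I_S_end = sol.y[:, -1]
print(f"pump {I_P_end*1e3:.3f} mW, Stokes {I_S_end*1e3:.3f} mW at z = zf")

# Brillouin frequency shift, relation (3): v_B = 2*n*V_A / lambda_0
n, V_A, lam0 = 1.45, 5960.0, 1550e-9   # typical silica values, assumed
v_B = 2 * n * V_A / lam0
print(f"Brillouin shift: {v_B/1e9:.2f} GHz")   # about 11 GHz near 1550 nm
```

The same structure carries over to the MATLAB workflow described in the Discussion section: the main script passes the parameters to a derivative function and the solver returns the pump and Stokes intensities along the fiber.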

The first term on the right-hand side of equation (1) describes the Brillouin gain, while the first term on the right-hand side of equation (2) expresses the pump depletion. The second terms of the above equations express the propagation losses of the signal and pump intensities, respectively. The basic idea underlying the use of Brillouin scattering for fiber sensors can be understood from the Brillouin frequency shift relation, which is given by

v_B = 2·n·V_A / λ_O    (3)

where V_A is the speed of acoustic waves in the fiber, n is the effective refractive index of the fiber mode, and λ_O is the wavelength of the propagating light. As the Brillouin frequency shift depends on both the effective refractive index of the fiber mode and the speed of the acoustic waves, it changes with the local temperature and strain.

5. DISCUSSION
The main codes call a function to solve the ODEs. The MATLAB command ode45 performs a direct numerical integration of a set of differential equations I' = f(z, I) up to some final distance zf. The main codes pass the defined values to the function; the function uses the given parameters to solve the set of ODEs and returns the solution. The intensities are then extracted and processed to obtain the output. ode45 simultaneously uses fourth- and fifth-order Runge-Kutta formulas to make error estimates and adjust the step size as needed.

6. REFERENCES
[1] Daniele Inaudi, Branko Glisic, “Distributed fiber optic strain and temperature sensing for structural health monitoring”, IABMAS, July 2006, Portugal.


[2] John M. Senior, “Optical Fiber Communications: Principles and Practice”, 3rd edition, 2012.
[3] K. Fidanboylu and H. S. Efendioglu, “Fiber optic sensors and their applications”, 5th IAIS, May 2009, Turkey.
[4] F. T. S. Yu and S. Yin, “Fiber Optic Sensors”, New York, Marcel Dekker, Inc., 2002.
[5] M. Nikles, L. Thevenaz and P. A. Robert, “Brillouin gain spectrum characterization in single-mode optical fibers”, Journal of Lightwave Technology, Vol. 15, No. 10, October 1997.
[6] A. Winsock, K. Krebber, “Simultaneous measurement of temperature and strain distribution using Brillouin scattering in dispersion shifted fibers”, 978-1-4244, 2011.
[7] A. H. Reshak, M. M. Shahimin, S. A. Z. Murad, S. Azizan, “Simulation of Brillouin and Rayleigh scattering in distributed fiber optic for temperature and strain sensing application”, Sensors and Actuators A 190 (2013) 191-196.
[8] T. Schneider, Danny Hannover and Markus Junker, “Investigation of Brillouin scattering in optical fibers for the generation of millimeter waves”, Journal of Lightwave Technology, Vol. 24, No. 1, January 2006.
[9] K. Hotate and M. Tanaka, “Distributed fiber Brillouin strain sensing with 1-cm spatial resolution by correlation-based continuous-wave techniques”, IEEE Photonics Technology Letters, Vol. 14, No. 2, February 2002.
[10] A. Voskoboinik, J. Wang et al., “SBS-based fiber optical sensing using frequency domain simultaneous tone interrogation”, Vol. 29, No. 11, June 1, 2011.
[11] Y. Hi, X. Bao et al., “A novel distributed Brillouin sensor based on optical differential parametric amplification”, IEEE Journal of Lightwave Technology, Vol. 28, No. 18, September 15, 2010.
[12] M. A. Soto, G. Bolognini et al., “Enhanced simultaneous distributed strain and temperature fiber sensor employing spontaneous Brillouin scattering and optical pulse coding”, IEEE Photonics Technology Letters, Vol. 21, No. 7, April 1, 2009.
[13] R. Bernini, L. Crocco et al., “All frequency domain distributed fiber optics Brillouin sensing”, IEEE Sensors Journal, Vol. 3, No. 1, February 2003.

Computer Science & Engineering

A comprehensive study of AODVv2-02 routing protocol in MANET

Vikram Rao                            Anuj Kumar Gupta
Research Scholar                      Associate Professor
CSE Dept.                             CSE Dept.
BGIET, Sangrur                        BGIET, Sangrur
vikramrao@live.in                     anuj21@hotmail.com

ABSTRACT
A MANET (Mobile Ad hoc Network) is a collection of self-governing mobile nodes that can communicate with each other through wireless links. These are fully distributed networks and can work at any place without any pre-existing infrastructure. Many protocols are available for such networks. AODVv2-02 is a revised version of AODVv2 (also known as DYMO) developed by the IETF. In this paper we discuss its working and the features which make it different from other routing protocols. The main goal of this revised version of AODVv2 is to maintain a received RREQ table and compare incoming RREQ messages against it, for the elimination of redundant or duplicate RREQ messages.

Keywords
MANET (Mobile Ad hoc Network), AODV (Ad hoc On-Demand Distance Vector routing), Routing Protocols, DYMO (Dynamic On-demand MANET routing protocol), AODVv2-02

1. INTRODUCTION
A mobile ad-hoc network (MANET) is an infrastructure-less and self-governing network of mobile nodes, in which all participating nodes can freely transmit packets through the wireless transmission medium to any remote node in the network. An ad hoc network does not have any centralized administration or server; instead, control of the network is allocated among the participating nodes. In a MANET, mobile nodes are assumed to move with more or less relative speed in random directions, so there is no long-term guaranteed path from one node to another. MANETs have very progressive use in emergency scenarios like military operations and disaster relief operations, where an immediate communication network is needed after some major event, or for temporary requirements like conferences and meetings at a new place where no pre-existing network infrastructure is available. Figure 1 shows a view of a mobile ad-hoc network [12], and the various characteristics of a MANET are as follows:

1. Distributed network: There is no background network for the central control of the network operations; control of the network is divided among the nodes. The nodes involved in a MANET should cooperate with each other and communicate among themselves, and each node acts as a relay as needed to implement specific functions such as routing and security [14].

2. Multi-hop routing: When a node tries to send information to other nodes that are out of its communication range, the packet should be forwarded through one or more intermediate nodes.

3. Light-weight terminals: In most cases, the nodes in a MANET are mobile devices with low CPU capability, limited power storage and small memory size.

4. Self-governing nodes: In a MANET, each mobile node is an independent node, which can function as both a host and a router.

5. Shared physical medium: The wireless communication medium is accessible to any node with the relevant equipment and sufficient resources. Accordingly, access to the channel cannot be restricted.

6. Dynamic topology: Nodes are free to move promptly with different speeds; thus the network topology may change randomly and at unpredictable times. The nodes in the MANET dynamically establish routing among themselves as they travel around, forming their own network.

Figure 1. A mobile ad-hoc network

2. CLASSIFICATION OF ROUTING PROTOCOLS
MANET routing protocols can be classified in many ways, but mostly the classification depends on the routing strategy and the network structure. According to the routing strategy, these routing protocols can be categorized as table-driven, on-demand and hybrid MANET routing protocols. The classification of routing protocols is shown in figure 2.


Figure 2. Classification of MANET routing protocols (table-driven/proactive: DSDV, FSR, OLSR; on-demand/reactive: AODV, DSR, ABR, CGSR; hybrid: ZRP)

2.1 Table-Driven Routing Protocols (Proactive)
These protocols maintain route information from each node to every other node in the network. Each node maintains a routing table containing routing information for the whole network and updates it regularly, so that every node knows its routes in advance. Whenever a node wants to send a message to another node, the path is already known; thus, if a route is known before traffic arrives, transmission starts without delay. Otherwise, message packets must wait in a queue until the node receives routing information for the destination. These protocols generally use link-state algorithms, which maintain and update the routing table by flooding the link information about neighbor nodes. This creates more routing-table overhead, since node information entries must be maintained and updated for each and every node in the network.

2.2 On-Demand Routing Protocols (Reactive)
In reactive protocols, there is no need to maintain routing information between nodes in the network if there is no communication [12]. Only when a node wants to send packets to another node in the network does it start a route discovery process throughout the network. This process runs until routing information is determined or all possible permutations have been searched. Once a route has been determined, it is maintained by a route maintenance process, either until the route is no longer required or until the destination becomes inaccessible along every path from the source. Therefore, the communication overhead is theoretically decreased, at the cost of route discovery delay [4].

2.3 Hybrid Protocols
A hybrid protocol integrates the features of both proactive and reactive protocols [4]. It is a combination of proactive and reactive routing; it is based upon a distance vector protocol but contains many features and advantages of link-state protocols. Hybrid protocols enhance interior gateway routing protocols.

2.4 COMPARISON OF TABLE-DRIVEN AND ON-DEMAND ROUTING PROTOCOLS
1. In reactive protocols, the average end-to-end delay, i.e. the time taken to send data from source to destination, is variable, whereas it remains constant in proactive protocols for a given mobile ad hoc network.

2. In on-demand protocols, the delivery of packet data is much more efficient than in proactive protocols.

3. Reactive protocols are faster in performance than proactive protocols.

3. AODVv2-02 PROTOCOL OVERVIEW
AODVv2 (DYMO) is the successor of the AODV reactive routing protocol used for MANETs. With its many changes, DYMO is designed with future enhancements in mind. It has been implemented by many researchers to analyze its performance in comparison to other routing protocols designed for MANETs. The IETF has published its revised version as the draft “draft-ietf-manet-aodvv2-02”, which is still in progress. It is proposed by C. E. Perkins, S. Ratliff and J. Dowdell [9]. Using AODVv2 as the basis, AODVv2-02 borrows path accumulation and multi-hop routing from AODVv2. With the help of the AODVv2-02 MANET routing protocol, reactive and multi-hop routing can be done between the different participating nodes that want to communicate with each other [13]. Some characteristics of the AODVv2-02 protocol are given below:

- The AODVv2-02 protocol has low routing overhead.
- Protocol implementation becomes simple and easy using the path accumulation function.
- The basic operation of the AODVv2-02 protocol involves route discovery and route maintenance.
- AODVv2-02 can be used with both IPv4 and IPv6.
- AODVv2-02 is a better routing protocol for multi-hop networks.
- The protocol is energy efficient for large networks.

3.1 WORKING OF AODVv2-02 PROTOCOL
Similar to DYMO, the basic operations of the AODVv2-02 protocol are route discovery and route maintenance [7]. Route discovery is performed by a sender node for a target node to which it does not have a valid route, and route maintenance is performed in order to avoid removing existing routes from the routing table and also to decrease packet dropping due to route breakage or node failure. The protocol can work as both a reactive and a proactive protocol.

3.2 AODVv2-02 Route Messages
The AODVv2-02 protocol implements three types of route messages during routing operations: Route Request (RREQ), Route Reply (RREP) and Route Error (RERR) [7]. These route control messages are used to find and maintain a path from a source node to any particular target node.
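To make the three message types concrete, they can be sketched as simple records; this is a hypothetical illustration (the field names are ours, not the draft's normative packet format), with the RREQ carrying the accumulated path used by AODVv2's path accumulation:

```python
# Hypothetical sketch of the three AODVv2-02 control messages named in
# section 3.2. Field names are illustrative, not the IETF draft's format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RREQ:                      # Route Request: broadcast toward the target
    orig_node: str
    targ_node: str
    seq_num: int
    hop_count: int = 0
    path: List[str] = field(default_factory=list)  # accumulated addresses

    def forward(self, via: str) -> "RREQ":
        """Each intermediate node appends its address before rebroadcasting."""
        return RREQ(self.orig_node, self.targ_node, self.seq_num,
                    self.hop_count + 1, self.path + [via])

@dataclass
class RREP:                      # Route Reply: unicast back along the path
    targ_node: str
    path: List[str]

@dataclass
class RERR:                      # Route Error: reports a broken/invalid route
    unreachable_node: str

# Node A discovers a route to node I via D and E (figure 3's example):
req = RREQ("A", "I", seq_num=1, path=["A"])
req = req.forward("D").forward("E")
print(req.path, req.hop_count)   # ['A', 'D', 'E'] 2
```

The accumulated path is what lets the target (and every forwarder) learn the backward route without a separate exchange, which is why the paper notes that path accumulation keeps the implementation simple.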


3.3 Route Discovery
During the route discovery process, the originating node starts broadcasting a Route Request (RREQ) message throughout the network to find a path to a particular target node. Due to the AODVv2-02 path accumulation function, each intermediate node attaches its own address to the RREQ message, and each intermediate node that propagates the RREQ message makes a note of the backward path. Figure 3 shows the AODVv2-02 route discovery process, where node A is the source node and node I is the target node. Node A generates an RREQ message which contains its own address, hop count, sequence number and target node address, and then broadcasts it on the network. Each intermediate node having a valid path to the target keeps adding its own address and sequence number to the RREQ message, as shown in figure 3 with nodes D and E, until the target is reached. After sending the RREQ message into the network, the source node waits for an RREP (Route Reply) message; the target node replies with an RREP message. In case no RREP is received within the particular wait time, the source node may try again by sending another RREQ message after some time to discover the route. In this way AODVv2-02 discovers the route from the source node to the target node [7, 12]. One special feature of AODVv2-02 is energy efficiency: if any node is low on energy, it has the option not to engage in the route discovery process.

Figure 3. AODVv2-02 Route Discovery

3.4 Route Maintenance
During the route maintenance process, each node continuously monitors the status of its links and maintains the latest updates within the routing table. If a route to the target is lost, or a route to the target is not known, then an RERR message is sent towards the message source node to specify that the route towards a particular node is invalid or missing. Upon receiving the RERR message, the route table is updated and the entry with the invalid link is deleted. As shown in figure 4, node D receives a packet that wants to go to node I, but the route from node D to node I is found to be broken. After this, an RERR message is generated by node D and forwarded back to the source node A. All the intermediate nodes on the path immediately update their route table entries with the new information regarding the invalid path and the new route changes. After updating the new route information, the packet is forwarded from node D to node F and then to node I in order to reach its target node.

Figure 4. AODVv2-02 Route Maintenance

3.5 WORKING DIFFERENCE OF AODVv2-02 MANET ROUTING PROTOCOL
The main difference of the AODVv2-02 MANET routing protocol is that it maintains a received RREQ table, in order to eliminate duplicate RREQ messages by comparing incoming RREQ messages with the received RREQ table entries. Two incoming RREQ messages can be compared if they were sent to find a route for the same destination, with the same metric type, by the same AODVv2 router. Whenever a router receives an RREQ message, it must check it against previous RREQ messages, in order to ensure that its RREP response would not contain any duplicate information. To avoid retransmission of these duplicate RREQ messages, each AODVv2 router needs to save a list of certain information about recently received RREQ messages.

This list of RREQ messages is called the AODVv2-02 received RREQ message table. Two RREQ messages can be compared if they have the same metric type, OrigNode address and TargNode address. The RREQ table contains the following fields for each RREQ entry:

Table 1. AODVv2-02 Received RREQ Table
  Metric type      | Sequence number
  OrigNode address | TargNode address
  Metric           | Timestamp

3.6 Elimination of redundant/duplicate RREQ messages
Whenever RREQ messages are multicast in the network for route discovery, in common situations an AODVv2 router might reply with redundant or duplicate information to some recently received RREQ message. An AODVv2 router must suppress/eliminate these duplicate RREQ messages before replying. Whether a recently received incoming RREQ message contains new information or not is determined by checking the received RREQ table list maintained by AODVv2 routers, as given below:
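The duplicate-suppression checks enumerated below can also be sketched in code; this is a hypothetical illustration using the Table 1 fields, not the draft's normative processing rules:

```python
# Illustrative sketch (not from the AODVv2-02 draft) of the received-RREQ
# table used to suppress redundant RREQ messages. Field names follow
# Table 1; the three acceptance rules follow section 3.6 of this paper.
import time

class RreqTable:
    def __init__(self):
        # key: (OrigNode, TargNode, metric type) -> (seq number, metric, timestamp)
        self.entries = {}

    def accept(self, orig, targ, metric_type, seq_num, metric):
        """Return True if the incoming RREQ carries new information
        (and record it); False if it is redundant and must be suppressed."""
        key = (orig, targ, metric_type)
        entry = self.entries.get(key)
        if entry is None:
            self.entries[key] = (seq_num, metric, time.time())
            return True                   # rule 1: no matching entry, add new
        old_seq, old_metric, _ = entry
        if seq_num > old_seq:
            self.entries[key] = (seq_num, metric, time.time())
            return True                   # rule 2: newer sequence number
        if seq_num == old_seq and metric < old_metric:
            self.entries[key] = (seq_num, metric, time.time())
            return True                   # rule 3: same seq, better (lower) metric
        return False                      # redundant: suppress, do not reply

table = RreqTable()
print(table.accept("A", "I", "hopcount", 1, 3))   # True  (first RREQ seen)
print(table.accept("A", "I", "hopcount", 1, 3))   # False (exact duplicate)
print(table.accept("A", "I", "hopcount", 1, 2))   # True  (better metric)
print(table.accept("A", "I", "hopcount", 2, 5))   # True  (newer sequence number)
```

Here "better metric" is modeled as a lower value (as for hop count); a real implementation would compare according to the metric type.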


1. The AODVv2 router checks the received RREQ table for an entry with the same OrigNode, TargNode and metric type as the incoming RREQ message. If there is such an entry, the RREQ message is suppressed/eliminated; otherwise a new entry is added to the RREQ table.

2. Even if there is an entry in the received RREQ table, if the incoming RREQ carries a new sequence number, the previous table entry is updated with the new sequence number to reflect the new entry. Otherwise, the incoming RREQ message must be suppressed/eliminated to reduce the overhead of replying to these redundant messages again and again.

3. Similarly, if the new RREQ message has the same sequence number but offers a better metric, then the new incoming RREQ message is not eliminated, and the received RREQ table entry must be updated in order to reflect the new metric.

4. CONCLUSIONS
In this paper we have discussed various features of MANET (mobile ad-hoc network) routing protocols. These protocols are classified into three main categories, table-driven, on-demand and hybrid, and their comparison has also been carried out. The AODVv2-02 protocol and its basic operations, route discovery and route maintenance, have been discussed in detail. The main differentiating factor of the AODVv2-02 routing protocol is that it maintains a received RREQ table at each node and compares new RREQ messages against it, in order to eliminate duplicate RREQ messages. Due to the changing topology and security attacks in mobile ad-hoc networks, a lot of research and development on AODVv2-02 is still required. A comparison of the AODVv2-02 routing protocol with different reactive protocols can be done in order to evaluate its performance.

REFERENCES
[1] Amit Shrivastava, Aravinth Raj Shanmogavel, Avinash Mistry, Nitin Chander, Prashanth Patlolla, Vivek Yadlapalli, “Overview of Routing Protocols in MANETs and Enhancements in Reactive Protocols”.
[2] Surendra H. Raut, Hemant P. Ambulgekar, “Proactive and Reactive Routing Protocols in Multihop Mobile Ad hoc Network”, International Journal of Advanced Research in Computer Science and Software Engineering, April 2013, Volume 3, Issue 4, pp. 152-157.
[3] Olivia, “Difference Between Reactive and Proactive Protocols”, 2011.
[4] Mr. L. Raja and Capt. Dr. S. Santhosh Baboo, “Comparative study of reactive routing protocols (AODV, DSR, ABR and TORA) in MANET”, International Journal of Engineering and Computer Science, March 2013, Volume 2, Issue 3, pp. 707-718.
[5] Amandeep, Gurmeet Kaur, “Performance Analysis of AODV Routing Protocol in MANETs”, International Journal of Engineering Science and Technology (IJEST), August 2012, Vol. 4, No. 08, pp. 3620-3625.
[6] Salman Bhimla, Neeru Yadav, “Comparison between AODV Protocol and DSR Protocol in MANET”, International Journal of Advanced Engineering Research and Studies, Oct.-Dec. 2012, Vol. II, Issue I, pp. 104-107.
[7] Anuj K. Gupta, Harsh Sadawarti and Anil K. Verma, “Implementation of DYMO Routing Protocol”, International Journal of Information Technology, Modeling and Computing (IJITMC), May 2013, Vol. 1, No. 2, pp. 49-57.
[8] Narendran Sivakumar, Satish Kumar Jaiswal, “Comparison of DYMO protocol with respect to various quantitative performance metrics”, IRCSE ’09, IDT workshop on interesting results in computer science and engineering, October 2009.
[9] C. E. Perkins, S. Ratliff, J. Dowdell, “Dynamic MANET On-demand (AODVv2) Routing”, Internet Draft, draft-ietf-manet-aodvv2-02, work in progress, 2014.
[10] Sujata V. Mallapur, Siddarama R. Patil, “Survey on Simulation Tools for Mobile Ad-Hoc Networks”, International Journal of Computer Networks and Wireless Communications (IJCNWC), April 2012, Vol. 2, No. 2, pp. 241-248.
[11] Nitin Kumar, Kunj Vashishtha, Kishore Babu, “A Comparative Study of AODV, DSR, and DYMO routing protocols using OMNeT++”, International Journal on Recent and Innovation Trends in Computing and Communication, September 2013, Volume 1, Issue 9, pp. 735-739.
[12] Jatinder Pal Singh and Anuj Kumar Gupta, “A Review on Dynamic MANET On-Demand Routing Protocol in MANETs”, International Journal of Advanced Trends in Computer Science and Engineering, ISSN 2278-3091, Volume 2, No. 2, March-April 2013.
[13] Saloni Sharma and Anuj Kumar Gupta, “A Comprehensive Study of DYMO Routing Protocol”, International Journal of Computer Applications (0975-8887), Volume 73, No. 22, July 2013.
[14] Aarti and Dr. S. S. Tyagi, “Study of MANET: Characteristics, Challenges, Applications and Security Attacks”, International Journal of Advanced Research in Computer Science and Software Engineering, ISSN 2277-128X, Volume 5, Issue 5, May 2013.

266
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

A Survey on Zone Routing Protocol

Nafiza Mann
PG Student, CSE Dept., RIMT-Institute of Engineering and Technology, Mandi-Gobindgarh
nafizamann@gmail.com

Abhilash Sharma
Assistant Professor, CSE Dept., RIMT-Institute of Engineering and Technology, Mandi-Gobindgarh
abhilash583@yahoo.com

Anuj Kumar Gupta
Associate Professor, CSE Dept., Bhai Gurdas Institute of Engineering and Technology, Sangrur
anuj21@hotmail.com

ABSTRACT
In this paper, the Zone Routing Protocol (ZRP) is surveyed with respect to its parametric performance. ZRP is a hybrid routing protocol built from several routing components: the Intra-Zone Routing Protocol (IARP), which routes within a node's routing zone; the Inter-Zone Routing Protocol (IERP), which routes outside the routing zone; the Bordercast Resolution Protocol (BRP); and the query control mechanisms, namely Query Detection (QD1/QD2), Early Termination (ET) and Random Query Processing Delay (RQPD). The Multicast Zone Routing Protocol and the Two-Zone Routing Protocol, along with the security of ZRP, are also considered. The performance of ZRP over a variety of parameters, such as Packet Delivery Ratio (PDR), average jitter, average throughput, average end-to-end delay, route acquisition latency, control traffic and overhead, is compared across different simulation environments, both under normal conditions and under a blackhole attack.

Keywords
Hybrid Routing, Proactive Routing, Reactive Routing, Routing Zone, Blackhole Attack, Performance Parameters.

1. INTRODUCTION
ZRP is among the most popular hybrid routing protocols. The Zone Routing Protocol is a prominent protocol combining both the proactive and the reactive nature of routing: it is an ad hoc protocol in which the proactive procedure is followed only within the scope of the local neighborhood, or routing zone. ZRP is composed of the Intra-Zone Routing Protocol (IARP), the Inter-Zone Routing Protocol (IERP) and the Bordercast Resolution Protocol (BRP), along with various query control mechanisms. IARP has a limited scope, defined by the zone radius; within this radius it maintains the topology information of its local zone. IERP acts as the global routing component of ZRP. Whenever a node needs to send information outside its routing zone, or the route it needs is not available in the local neighborhood, IERP is used to send the data. As is traditional for reactive routing protocols, route discovery and route maintenance are also performed by IERP. The Bordercast Resolution Protocol is used to reduce routing overhead. Using the information provided by IARP, it directs route requests outward. The outward request is multicast in nature, sent to a certain set of surrounding peripheral nodes. If there is no reply after BRP, these nodes in turn bordercast to their own peripheral nodes. Two main approaches may be followed for bordercasting: root-directed bordercast and distributed bordercast. The query control mechanisms in ZRP include Query Detection (QD1/QD2), Early Termination (ET) and Random Query Processing Delay (RQPD). In the bordercast tree, all nodes can detect a query (QD1) and so avoid redundant queries in a node's routing zone. Any node within transmission range may also overhear a query, which extends detection (QD2). While relaying a query, a node can prune covered or already-relayed nodes, resulting in ET. With RQPD, a relaying node gets another chance to prune downstream nodes.

2. ZRP ARCHITECTURE
ZRP acts as a framework for other protocols. The local and global neighborhoods in ZRP are separated in such a way as to gain the advantages of each routing technique. The local neighborhood, called a "zone", contains a number of nodes and may overlap with other zones (which may have different sizes). The size of a zone in ZRP is defined by the number of hops it takes to reach its peripheral nodes, called the "zone radius" [1]. This hybrid behavior decides which of the two techniques to follow: the route-determination procedure is initiated on demand, but at a limited search cost [2], while the proactive part of the protocol minimizes the waste associated with that reactive search. The Zone Routing Protocol consists of several components, which only together provide the full routing benefit of ZRP [14].

Fig 1: ZRP Architecture


Each component works independently of the others, and they may use different technologies in order to maximize efficiency in their particular area. The components of ZRP are IARP, IERP and BRP. The relationship between the components is illustrated in Figure 1. IARP is responsible for proactive maintenance, while IERP handles the reactive part. Bordercasting leverages IARP's up-to-date view of the local topology to efficiently guide route queries away from the query source [3].

2.1 Intra-Zone Routing Protocol (IARP)
This protocol communicates with the interior nodes of the zone, whose size is limited by the zone radius. In IARP, a change in topology results in a change in the local neighborhood. IARP handles routing inside the zone and always keeps its routing information up to date [4]. IARP also helps in removing node redundancy and in tracking link failures. Figure 2 shows the routing zone concept with a radius of 2 hops. Here S is the source node, with a zone radius of 2 hops. In this case, nodes A and B are interior nodes, having a hop count less than the zone radius. Nodes C, D and G are peripheral nodes, having a hop count equal to the zone radius. Nodes E and F have a hop count greater than the zone radius, i.e. they lie outside the specified zone.

Fig 2: A routing zone of radius 2 hops

However, it should be kept in mind that a zone is not a description of physical distance [3]. The Media Access Control (MAC) protocol and the Neighbor Discovery Protocol (NDP) can be used to identify neighbors. IARP operates by broadcasting "hello" beacons; the reception of these beacons indicates that a connection is established.

2.1.1 When to send
A source node sends new routing information if:
- there is a change in topology or a link failure,
- there is a change in the routing zone of the node,
- the node has not sent a packet in its previous time slot.

2.2 Inter-Zone Routing Protocol (IERP)
IERP is the global reactive routing component of ZRP. It is responsible for acquiring routes to destinations located beyond the routing zone. With the help of the knowledge gained about the local topology, IERP performs on-demand routing [15]. When no route is available, it issues route queries. Bordercasting helps to minimize the delay caused by route discovery, and redundant coverage of already-covered nodes is avoided. An example is illustrated in Figure 3.

Fig 3: IERP operation

S prepares to send data to destination D. S first checks whether D exists in its local neighborhood; if so, the route is already known to S. Otherwise, S sends a query to its peripheral nodes (C, G, H). These peripheral nodes in turn check for D in their own routing zones. Here, when H sends the query to B, B recognizes D as a node in its routing zone and responds to the query. The path established is S-H-B-D [3].

2.3 Bordercast Resolution Protocol (BRP)
Whenever a route is requested with the global reactive technique, BRP is used to direct the request and maximize its effectiveness. BRP uses the routing information constructed by IARP from the map provided by the local proactive technique. It maintains the redundancy-removal phenomenon by pruning the nodes it has already covered (i.e., that have received the query). When a node receives a query packet for a destination that does not lie within its local routing zone, a bordercast tree is constructed so that the packet can be forwarded to its neighbors. Upon reception of the packet, these nodes reconstruct the bordercast tree so they can determine whether or not they belong to the tree of the sending node. If a node does not belong to the bordercast tree of the sending node, it continues to process the request, determines whether the destination lies within its own routing zone, and takes the appropriate action, so that the nodes within the zone are marked as covered [1]. The two approaches of BRP are:

2.3.1 Root-Directed Bordercast
Here the source node and the peripheral nodes construct multicast trees, to which the forwarding instructions of the routing query packet are appended. This causes additional routing overhead, which increases with the zone radius.

2.3.2 Distributed Bordercast
Here an extended routing zone is established and maintained by each node. This increases the local routing information exchange but reduces the cost of route discovery.
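The hop-count-based zone membership used by IARP in Section 2.1 can be illustrated with a small breadth-first search. The topology below is hypothetical (it is not the exact graph of Figure 2); the classification rule follows the text: interior nodes lie strictly inside the zone radius, peripheral nodes sit exactly at it, and the rest are outside.

```python
# Minimal sketch of IARP zone membership by hop count (hypothetical topology).
from collections import deque

def classify_zone(adj, source, radius):
    """BFS from source; bucket every reachable node by hop count vs. radius."""
    hops = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    interior = sorted(n for n, h in hops.items() if 0 < h < radius)
    peripheral = sorted(n for n, h in hops.items() if h == radius)
    exterior = sorted(n for n, h in hops.items() if h > radius)
    return interior, peripheral, exterior

# A hypothetical 2-hop zone around S
adj = {
    "S": ["A", "B"], "A": ["S", "C"], "B": ["S", "G"],
    "C": ["A", "E"], "G": ["B", "F"], "E": ["C"], "F": ["G"],
}
print(classify_zone(adj, "S", 2))
# interior: ['A', 'B'], peripheral: ['C', 'G'], exterior: ['E', 'F']
```

A real IARP would build this view incrementally from "hello" beacons and link-state exchanges rather than from a global adjacency map, but the membership rule is the same.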


2.4 Query Control Mechanisms
Under the ZRP strategy, querying is performed more efficiently than by directly flooding route requests, but due to heavy overlapping of zones, multiple forwarded route requests can result in more control traffic than flooding. This happens because each bordercast query efficiently covers the node's complete routing zone, so excess route query traffic is the result of redundant query messages. ZRP therefore introduces a collection of query control mechanisms.

2.4.1 QD1/QD2
With the help of BRP, the relaying nodes in the tree become able to detect redundant queries (QD1); BRP uses the bordercast tree hop by hop for this process. It is also possible for queries to be detected by any node within the transmission range of a relaying node, which extends the query detection capability (QD2).

Fig 4: Query Detection (QD1/QD2)

In the example, when node A bordercasts to its peripheral nodes (B-F), the intermediate (relaying) nodes (G, H, I, J) are able to detect the query by QD1. Using QD2, node K is able to detect node G's transmission even though K does not belong to node A's bordercast tree. Even with this high level of query detection, QD2 does not guarantee that the whole routing zone is informed: here, node L does not overhear the query and thus cannot know whether its routing zone is covered by it.

2.4.2 Early Termination (ET)
In general, it may not be possible to know whether a query has properly propagated outward to the uncovered zones, but the information obtained from QD1 and QD2 can very well support Early Termination.

2.4.3 Random Query Processing Delay (RQPD)
During a bordercast, a node's routing zone is covered almost instantly, but the query still takes a finite amount of time to make its way along the bordercast tree, during which it can be detected by the QD mechanisms. Within this time it is possible that a neighboring node re-bordercasts the message simultaneously. This problem can be addressed by RQPD: while a waiting node sits out a scheduled random delay, it has the chance to detect a previous bordercast covering the same areas. In this way RQPD can significantly improve performance, up to a point [3].

In the Multicast Zone Routing Protocol, multicast tree membership information is maintained proactively, and Multicast ZRP makes on-demand route requests through the Multicast Inter-Zone Routing Protocol with an efficient query mechanism [9].

For MANETs there also exists an extension of ZRP named the Two-Zone Routing Protocol (TZRP). Here two zones, which may have different topologies and route update mechanisms, are used to decouple the protocol's ability to adapt to traffic characteristics from its ability to adapt to mobility. TZRP provides a framework that balances the tradeoff between purely proactive and purely reactive routing more effectively than ZRP [9].

The security of any protocol is a big issue. Securing ZRP aims to tackle problems such as excess bandwidth usage and long route request delays. Several mechanisms can provide security: identity-based key management, in which an identifier with a strong cryptographic binding is chosen; secure neighbor discovery; securing the routing packets themselves; and alarm messages raised in the presence of malicious node(s) [12].

3. ANALYSIS OF ZRP
ZRP has been analyzed on a number of platforms with different kinds of methods. As per the simulation in [6], a proactive protocol shows a roughly constant number of flooded packets as the transmission radius increases, while ZRP, in comparison, shows a drastic increase in the number of flooded packets. As concluded in [2], the amount of intra-zone control traffic required to maintain the zone increases with the size of the routing zone; nevertheless, a suitable ZRP configuration can provide a good reduction (25%) in control traffic compared to a traditional flood search. The reactive nature of ZRP is observed to be more suitable for networks with smaller spans and larger transmission radii. For highly volatile networks, it is concluded that ZRP provides 20% less delay than purely reactive routing. In real-time scenarios, ZRP (with mobile nodes) may attain good performance on realistic network and traffic configurations, as per the experimentation performed in [10][13].

Fig 5: Proactive routing overhead when time is 200 sec
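The duplicate-query suppression behind QD1/QD2 and Early Termination in Section 2.4 can be modeled minimally as a per-node cache of already-detected queries. This is our own sketch, not the ZRP specification; the class and method names are hypothetical.

```python
# Hedged sketch of query detection (QD1/QD2) plus early termination: each node
# remembers query IDs it has relayed or merely overheard, and a relaying node
# prunes (drops) a query it has already detected instead of re-bordercasting it.

class QueryDetector:
    def __init__(self):
        self.seen = set()  # (source, query_id) pairs detected so far

    def note(self, source, query_id):
        """Record a query that was relayed (QD1) or overheard (QD2)."""
        self.seen.add((source, query_id))

    def should_relay(self, source, query_id):
        """Early termination: relay only queries not detected before."""
        if (source, query_id) in self.seen:
            return False
        self.note(source, query_id)
        return True

qd = QueryDetector()
print(qd.should_relay("A", 1))  # first time seen, so relay
qd.note("A", 2)                 # overheard a neighbour's relay (QD2)
print(qd.should_relay("A", 2))  # already detected, so pruned
print(qd.should_relay("A", 1))  # duplicate, so pruned
```

Random Query Processing Delay would sit on top of this: before relaying, a node waits a short random time, giving `note()` a chance to fire for a neighbour's bordercast and turn the relay into a prune.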


Fig 6: Reactive routing overhead when time is 200 sec

Fig 7: Route acquisition latency when time is 200 sec

The simulations performed in [5] show that the proactive part is able to communicate a large amount of routing information at very low overhead, as shown in Figure 5. A high value of MAXHOPS results in small route acquisition latencies, but brings higher routing overhead and higher information storage cost. In networks that are largely idle, the proactive part may cause unnecessary overhead.

In the simulations performed in [3] with the query control mechanisms, it is observed that under the proactive technique the traffic (number of packets) increases with the zone radius, while under the reactive technique the traffic falls as the zone radius increases.

Fig 8: IARP + NDP traffic per route update per node

Fig 9: IERP traffic per route discovery per node

An increase in traffic also increases the optimal value of the zone radius. In the jitter evaluation of ZRP [8], the protocol shows poor performance as the number of nodes and packets increases. As observed in the simulation in [4], ZRP does not perform well in the presence of a blackhole attack: a fake or malicious node advertises a shortest route to the destination, so that data is discarded or misused. The attack can be mounted by a single node or by a group of nodes. The performance of ZRP can be further enhanced, as done in [11], where various parameters are taken into consideration and it is observed that a low zone radius (2 there) is optimal for small and medium loads, while a medium value (3 there) is optimal for heavy, high-density loads.

4. COMPARISON OF PARAMETERS
In this survey, we have compared the following parameters of ZRP against the zone radius, as shown in Table 1.

Table 1. Comparison of parameters with the zone radius

Parameter                       Low zone radius    High zone radius
Control Traffic (Proactive)     Low                High
Control Traffic (Reactive)      High               Low
Overhead                        Low                High
Mobility                        Increase           Decrease
Route Acquisition Delay         High               Low
Packet Delivery Ratio           Moderate           Moderate


Table 1 (continued)

Parameter                                        Low zone radius    High zone radius
Packet Delivery Ratio (with blackhole attack)    Moderate           Deteriorates
Av. Jitter                                       Low                High
Av. Jitter (with blackhole attack)               High               High
Av. End-to-End Delay                             Moderate           High
Av. End-to-End Delay (with blackhole attack)     High               High
Av. Throughput                                   Moderate           Moderate
Av. Throughput (with blackhole attack)           Low                Low

5. CONCLUSION
In this survey of the Zone Routing Protocol, it is concluded that, in comparison with purely proactive and purely reactive protocols, this hybrid routing protocol is more efficient, but only when restricted to small-area networks. For larger networks, ZRP is unable to show as good a performance on parameters such as control traffic, packet delivery ratio, end-to-end delay, jitter, throughput and overhead as it does in small networks. These parameters have been examined and compared on the basis of the simulation results surveyed here. Their efficiency also depends on the proactive or reactive nature of the routing. The overall behavior of ZRP depends on its zone radius: as the routing zone radius increases or decreases, the performance of ZRP rises or falls considerably, and a lower zone radius has proved better for ZRP than a greater one.

6. REFERENCES
[1] Shaily Mittal, Prabhjot Kaur, "Performance Comparison of AODV, DSR and ZRP Routing Protocols in MANETs", 978-0-7695-3915-7/09, IEEE 2009.
[2] Zygmunt J. Haas, Marc R. Pearlman, "The Performance of a New Routing Protocol for the Reconfigurable Wireless Networks", School of Electrical Engineering, Cornell University, Ithaca, NY 14853, IEEE 1998.
[3] Zygmunt J. Haas, Marc R. Pearlman, "The Performance of Query Control Schemes for the Zone Routing Protocol", IEEE/ACM Transactions on Networking, Vol. 9, No. 4, August 2001.
[4] Harjeet Kaur, Manju Bala, Varsha Sahni, "Performance Evaluation of AODV, OLSR and ZRP Routing Protocols under the Black Hole Attack in MANET", Vol. 2, Issue 6, June 2013.
[5] Rohit Kapoor, Mario Gerla, "A Zone Routing Protocol for Bluetooth Scatternets", 0-7803-7700-1/03, IEEE 2003.
[6] Zygmunt J. Haas, "A New Routing Protocol for Reconfigurable Wireless Networks", 0-7803-3777-8/97, IEEE 1997.
[7] Brijesh Patel, Sanjay Srivastava, "Performance Analysis of Zone Routing Protocols in Mobile Ad Hoc Networks", 978-1-4244-6385-5/10, IEEE 2010.
[8] Swati Bhasin, Puneet Mehta, Ankur Gupta, "Comparison of AODV, OLSR and ZRP in Mobile Ad-hoc Network on the Basis of Jitter", ISSN 2277-9140, July 2012.
[9] Xiaofeng Zhang, Lillykutty Jacob, "Multicast Zone Routing Protocol in Mobile Ad Hoc Wireless Networks", Proceedings of the 28th Annual IEEE International Conference on Local Computer Networks (LCN'03), 0742-1303/03, IEEE 2003.
[10] Julian Hsu, Sameer Bhatia, Mineo Takai, Rajive Bagrodia, Michael J. Acriche, "Performance of Mobile Ad Hoc Networking Routing Protocols in Realistic Scenarios", IEEE 2003.
[11] A. Loutfi, M. Elkoutbi, "Evaluation and Enhancement of ZRP Performances", IEEE 2010.
[12] Ibrahim S. I. Abuhaiba, Hanan M. M. Abu-Thuraia, "Securing Zone Routing Protocol in Ad-Hoc Networks", I. J. Computer Network and Information Security, 2012, 10, 24-36.
[13] Anuj K. Gupta, Harsh Sadawarti, Anil K. Verma, "Performance Analysis of MANET Routing Protocols in Different Mobility Models", International Journal of Information Technology and Computer Science (IJITCS), ISSN: 2074-9015, 5(6): 73-82, May 2013.
[14] Anuj K. Gupta, Harsh Sadawarti, Anil K. Verma, "A Review of Routing Protocols for Mobile Ad Hoc Networks", WSEAS Transactions on Communications, ISSN: 1109-2742, 11(10): 331-340, November 2011.
[15] Anuj K. Gupta, Harsh Sadawarti, Anil K. Verma, "Performance Analysis of AODV, DSR and TORA Routing Protocols", International Journal of Engineering and Technology (IJET), Article No. 125, 2(2): 226-231, April 2010.


A Review on Cloud Computing & its Current Issues

Sandeep Kaur
Student, Department of CSE, Bhai Gurdas Institute of Engineering and Technology, India
sandeepmarahar23@gmail.com

Simarjit Kaur
Assistant Professor, Department of CSE, Bhai Gurdas Institute of Engineering and Technology, India
er.simar0126@gmail.com

ABSTRACT
Cloud Computing is based on the concepts of distributed computing, grid computing, utility computing and virtualization. It is internet-based computing that provides shared resources, software packages and other resources as per client requirements at a specific time. Cloud Computing gives us a means to access applications as utilities over the internet. This paper presents the basics of cloud computing, the various cloud models, and the challenges and issues in cloud computing.

Keywords
Cloud Computing, Cloud Models, Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), Current Issues.

1. INTRODUCTION
Cloud Computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications and services, which can be rapidly provisioned and released with minimal management effort or service provider interaction [1]. Cloud Computing provides us a means to access applications as utilities over the Internet, and allows us to create, configure and customize applications online. It has emerged as a popular solution for cheap and easy access to externalized IT (Information Technology) resources. Within the last few years, the cloud computing paradigm has witnessed an enormous shift towards adoption and has become a trend in the information technology space, as it promises significant cost reductions and new business potential to its users and providers. Advantages of using cloud computing include:
- reduced hardware and maintenance cost,
- accessibility around the globe, and
- flexibility and highly automated processes, wherein the customer need not worry about mundane concerns like software upgrades [2].
Cloud Computing may be applied to solve problems in many domains of Information Technology, such as GIS (Geographical Information Systems), scientific research, e-governance systems, decision support systems, ERP, web application development and mobile technology [1],[2]. Cloud Computing is a technique to deliver services over the network. Progress in virtualization technology, processors, disk storage, broadband internet access, and fast, economical and powerful servers have all combined to make cloud computing a realistic and compelling paradigm [3]. Fig. 1 shows a basic cloud computing environment. The remainder of this paper deals with the characteristics, issues and challenges of cloud computing.

Fig 1: Cloud Computing Environment

2. CLOUD DEPLOYMENT STRATEGIES
This section explains the basic cloud deployment strategies. A cloud can be deployed using any of the strategies described below.

2.1 Public Cloud
The public cloud allows systems and services to be easily accessible to the general public. A public cloud may be less secure because of its openness, e.g., e-mail. Public cloud services are characterized as being available to clients from a third-party service provider via the Internet. The term "public" does not always mean free, even though it can be free or fairly inexpensive to use. Examples of public clouds include Microsoft Azure and Google App Engine [2].

2.2 Private Cloud
The private cloud allows systems and services to be accessible within an organization. It offers increased security because of its private nature. The difference between a private cloud and a public cloud is that in a private cloud-based service, data and processes are managed within the organization, without the restrictions of network bandwidth and security exposures. Eucalyptus Systems is one example of a private cloud [2].


2.3 Community Cloud
The community cloud allows systems and services to be accessible by a group of organizations. A community cloud is controlled and used by a group of organizations that have shared interests, such as specific security requirements or a common mission. The members of the community share access to the data and applications in the cloud [1]. An example of a community cloud is Facebook [2].

2.4 Hybrid Cloud
The hybrid cloud is a mixture of public and private clouds: critical activities are performed using the private cloud, while non-critical activities are performed using the public cloud. An example of a hybrid cloud is Amazon Web Services [2].

Fig 2: Cloud Deployment Model [1]

3. CLOUD DELIVERY MODELS
This section describes the various cloud delivery models. A cloud can be delivered in three models, namely SaaS, PaaS and IaaS.

3.1 Software-as-a-Service (SaaS)
In this model, a complete application is offered to the customer as a service on demand. A single instance of the service runs on the cloud and serves multiple end users. On the customers' side, there is no need for upfront investment in servers or software licenses, while for the provider the costs are lowered, since only a single application needs to be hosted and maintained. Today SaaS is offered by companies such as Google, Salesforce, Microsoft and Zoho.

3.2 Platform-as-a-Service (PaaS)
Here, a layer of software, or a development environment, is encapsulated and offered as a service, upon which other, higher levels of service can be built. The customer has the freedom to build his own applications, which run on the provider's infrastructure. To meet the manageability and scalability requirements of the applications, PaaS providers offer a predefined combination of OS and application servers, such as the LAMP platform (Linux, Apache, MySQL and PHP) or Google's App Engine.

3.3 Infrastructure-as-a-Service (IaaS)
IaaS provides basic storage and computing capabilities as standardized services over the network. Servers, storage systems, networking equipment, data centre space etc. are pooled and made available to handle workloads. The customer would typically deploy his own software on the infrastructure. Some common examples are Amazon, GoGrid and 3Tera.

Fig 3: Cloud Computing Service Delivery Models [2]

4. CHALLENGES & ISSUES
In this section we explain the challenges and issues cloud computing has to face. Fig. 4 depicts the summary of the survey conducted by us on the basic issues of cloud computing. The client's primary concerns are taken into account; hence only the percentages for ratings 4 and 5 are shown. The following are the issues that a cloud computing environment still has to resolve:

4.1 Security
Security is the biggest concern about cloud computing. Since data management and infrastructure management in the cloud are provided by a third party, it is always a risk to hand over sensitive information to such providers. Although cloud computing vendors ensure more secure, password-protected accounts, any sign of a security breach would result in loss of clients and businesses. When using cloud-based services, one is entrusting one's data to a third party for storage and security, and cloud computing presents specific challenges to privacy and security. Can one assume that a cloud-based company will protect and secure one's data (back it up, check for data errors, defend against security breaches) if one is using their services at a very low cost, or often for free? Once data is entrusted to a cloud-based service, with which third parties is the information shared? [1]
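One way to summarize the delivery models of Section 3 is by asking which layers of the stack the provider manages and which remain with the customer. The tabulation below is our own rough approximation, not taken from the cited sources; the layer names and the exact split are simplifying assumptions.

```python
# Rough, hypothetical summary of the provider/customer responsibility split
# under IaaS, PaaS and SaaS. The stack layers and boundaries are our own
# simplification, not a definition from the cited papers.

STACK = ["application", "runtime", "os", "virtualization", "hardware"]

# Layers the provider manages in each model; the customer handles the rest.
PROVIDER_MANAGED = {
    "IaaS": {"virtualization", "hardware"},
    "PaaS": {"runtime", "os", "virtualization", "hardware"},
    "SaaS": set(STACK),
}

def customer_managed(model):
    """Return the stack layers left to the customer under a delivery model."""
    return [layer for layer in STACK if layer not in PROVIDER_MANAGED[model]]

print(customer_managed("IaaS"))  # ['application', 'runtime', 'os']
print(customer_managed("PaaS"))  # ['application']
print(customer_managed("SaaS"))  # []
```

Read this way, the three models form a spectrum: IaaS hands the customer raw infrastructure, PaaS leaves only the application, and SaaS leaves nothing to manage at all.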


Fig 4: Graph depicting the concerns of clients on cloud computing issues [1]

4.2 Performance
Cloud computing suffers from severe performance issues. The cloud provider must ensure that the performance of the service being provided remains consistent throughout. Peak-time breakdowns, internal flaws and technical snags may arise; load balancers, data replicators and high-end servers must be installed when needed.

4.3 Reliability & Availability of Service
The challenge of reliability comes into the picture when a cloud provider delivers on-demand software as a service. The software needs to have a reliability quality factor so that users can access it under any network conditions (such as during slow network connections). A few cases have been identified that stem from the unreliability of on-demand software; one example is Apple's MobileMe cloud service, which stores and synchronizes data across multiple devices [2].

4.4 Cost
Cloud computing can have high costs due to its requirements for both an "always on" connection and moving large amounts of data back in-house [1].

4.5 Regulatory Requirements
To what legislative, judicial, regulatory and policy environments is cloud-based information subject? This question is hard to answer due to the decentralized and global structure of the internet, as well as of cloud computing. The information stored by cloud services is subject to the legal, regulatory and policy environments of the country of domicile of the cloud service, as well as of the country in which the server infrastructure is based. This is further complicated by the fact that some data in transit may also be regulated.

4.6 Bandwidth, Quality of Service and Data Limits
Cloud computing requires broadband of considerable speed. While many websites are usable on non-broadband or slow broadband connections, cloud-based applications are often not. Connection speed, in kilobytes per second (or MB/s and GB/s), is important for the use of cloud computing services. Also important is Quality of Service (QoS), indicators for which include the amount of time connections are dropped, response time and loss of data (packet loss) [2].

4.7 Energy Resource Management
Significant savings in the energy of a cloud data center, without sacrificing SLAs, are an excellent economic incentive for data center operators and would also make a significant contribution to greater environmental sustainability [2]. Designing energy-efficient data centers has recently received considerable attention, and the problem can be approached from several directions. For example, energy-efficient hardware architectures that enable slowing down CPU speeds and turning off partial hardware components have become commonplace. Energy-aware job scheduling and server consolidation are two other ways to reduce power consumption, by turning off unused machines. Recent research has also begun to study energy-efficient network protocols and infrastructures. A key challenge in all the above methods is to achieve a good trade-off between energy savings and application performance; in this respect, a few researchers have recently started to investigate coordinated solutions for performance and power management in a dynamic cloud environment. The Global Energy Management Center (GEMC) can help companies monitor energy consumption patterns from multiple sources.

Cloud computing provides higher reliability than previous technologies (grid computing, distributed computing etc.), but reliability is still a primary component to be considered in a cloud computing environment. The challenge of reliability arises when a cloud service provider delivers on-demand software as a service that must be accessible under any network conditions (including slow connections). The main purpose of discussing reliability in this paper is to highlight failures in cloud services: from the failure characteristics in the cloud we can identify the availability of a cloud service when several of its components fail. A cloud is more reliable and available if it is more fault-tolerant; fault-tolerance mechanisms like FTCloudSim and MCS are used for recovering from and evaluating failures in the cloud computing environment.

5. CONCLUSION
In this paper the basics of cloud computing are discussed. Cloud computing enables the user to have convenient, on-demand access to a shared pool of computing resources such as storage, networks, applications and services. This paper presents the various issues and challenges of cloud computing and paves the way for further research in this area.

6. REFERENCES
[1] Srinivas, J., Reddy, K. Venkata, Qyser, Dr. A. Moiz, "Cloud Computing Basics", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 1, Issue 5, July 2012.
[2] Nazir, Mohsin, "Cloud Computing: Overview & Current Research Challenges", IOSR Journal of Computer Engineering (IOSR-JCE), ISSN: 2278-0661, Volume 8, Issue 1 (Nov.-Dec. 2012), pp. 14-22.
[3] Jaiswal, A. A., Jain, Dr. Sanjeev, "An Approach towards the Dynamic Load Management Techniques in Cloud Computing Environment", IEEE 2014.
274
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

[4] Buyya, Rajkumar, "Introduction to the IEEE Transactions on Cloud Computing", IEEE Transactions on Cloud Computing, Vol. 1, No. 1, January-June 2013.
[5] Gowri, G. and Amutha, M., "Cloud Computing Applications and their Testing Methodology", International Journal of Innovative Research in Computer and Communication Engineering, Vol. 2, Issue 2, February 2014.
[6] Vouk, Mladen A., "Cloud Computing – Issues, Research and Implementations", Journal of Computing and Information Technology - CIT 16, 2008, 4, 235–246.
[7] Mohamaddiah, Mohd Hairy, Abdullah, Azzizol, Shamala, Subramaniam and Hussin, Masnida, "A Survey on Resource Allocation and Monitoring in Cloud Computing", International Journal of Machine Learning and Computing, Vol. 4, No. 1, February 2014.
[8] Sriram, Ilango and Hosseini, Ali Khajeh, "Research Agenda in Cloud Technologies".


A SURVEY ON LOAD BALANCING TECHNIQUES IN CLOUD COMPUTING

Lakhvir Kaur, Student, Department of CSE, Bhai Gurdas Institute of Engineering and Technology, India, sweetlucy16@gmail.com
Simarjit Kaur, Assistant Professor, Department of CSE, Bhai Gurdas Institute of Engineering and Technology, India, er.simar0126@gmail.com

ABSTRACT
Cloud computing is associated with internet computing. It is growing very fast and provides an alternative to conventional computing. Load balancing is one of the issues of cloud computing; it involves dividing the load equally so that throughput is high and response time is low. For this purpose various load balancing techniques and algorithms have been proposed. In this paper we study the different types of load balancing techniques and make a comparative analysis of the existing techniques.

Keywords
Cloud computing; Load balancing; Virtual Machine; Static and Dynamic load balancing algorithms.
1. INTRODUCTION
In the last few years cloud computing has become very popular. It provides a flexible and easy way to store and retrieve cloud services, and it makes large data sets and files available to a growing number of users around the world. Cloud computing is an evolutionary outgrowth of prior computing approaches, building upon existing and new technologies. It is an emerging computing technology that is rapidly consolidating itself as the next big step in the development and deployment of an increasing number of distributed applications. The cloud has created a new way to align IT and business visions. It provides Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) in a virtualized cloud environment. Cloud computing power is made possible through distributed computing and advanced communication networks. The cloud works on the principle of virtualization of resources with an on-demand, pay-as-you-go model [2].

The main aim of cloud computing is to provide a satisfactory level of performance to the user. In cloud computing there are various techniques to handle large services and the operations performed on them. To improve the performance of user operations and storage utilization, it is important to research certain areas of cloud computing. One important issue associated with this field is load balancing, or task scheduling. There are various load balancing algorithms used in various environments. The main aim of a load balancing algorithm is to assign tasks to the cloud nodes efficiently, so that the response time of a request is minimal and request processing is done efficiently [1].

Fig 1: View of the Cloud Computing Environment

2. LOAD BALANCING IN CLOUD COMPUTING
Nowadays implementation of local clouds is popular, and organizations are becoming aware of the power consumed by unutilized resources. Reducing power consumption has been an essential requirement for cloud environments, not only to decrease operating cost but also to improve system reliability. Load balancing divides the workload between two or more computers so that more work gets done in the same time and, in general, all users get faster service. Load balancing is one of the central issues involving many computers, processes, disks or other resources. It is a method in which the workload on the resources of a node is spread to the respective resources of another node in a network without disturbing the running task. In other words, load balancing in clouds is a mechanism that distributes the excess dynamic local workload evenly across all the nodes [2]. Load balancing is one of the important and critical concepts in cloud computing, and proper load balancing can help utilize the available resources optimally, thereby minimizing resource consumption. Thus load needs to be distributed over the resources in a cloud-based architecture, so that each resource does approximately the same amount of work at any point in time; this is performed by a load balancer. The load balancer determines which web server should serve a request: it uses various scheduling algorithms to decide which server should handle the request and forwards the request to the selected server.


Fig 2: Load Balancer in Cloud Computing [2]

3. CLASSIFICATION OF LOAD BALANCING ALGORITHMS
Based on the current state of the system, load balancing algorithms are of two types [2].

3.1 Static Load Balancing
Static load balancing refers to load balancing algorithms that distribute the workload based strictly on a fixed set of rules related to the characteristics of the input workload. Static load balancing algorithms are not preemptive; each machine has at least one task assigned to it. The Round Robin algorithm is a static load balancing technique.

3.1.1 Round Robin Algorithm
This is the simplest algorithm, built on the concept of time quanta or slices. Time is divided into multiple slices, each node is given a particular time quantum or time interval, and within this quantum the node performs its operations. Round Robin assigns load by random sampling, which means some servers may end up heavily loaded while others remain lightly loaded [2].

3.2 Dynamic Load Balancing
A dynamic load balancing algorithm does not consider the previous state or behavior of the system and needs no prior knowledge; it depends only on the current state of the system. It allows processes to be moved from an overutilized machine to an underutilized machine dynamically for faster execution. This means it allows process preemption, which is not supported in the static load balancing approach. An important consequence of this approach is that decisions are based on the current state of the system, which improves overall system performance.

3.2.1 Equally Spread Current Execution
This is a spread-spectrum technique in which the load balancer spreads the load of the job in hand across multiple virtual machines. The load balancer maintains a queue of the jobs that need to use, or are currently using, the services of the virtual machines, and continuously scans this queue and the list of virtual machines. If a VM is available that can handle the request of the node/client, that VM is allocated to the request [3]. If, however, one VM is free while another VM needs to be relieved of its load, the balancer distributes some of the tasks of the loaded VM to the free one so as to reduce its overhead. Jobs are submitted to the VM manager; the balancer also maintains a list of the jobs, their sizes and the resources requested, and selects the job that matches the criteria for execution at the present time. Though this algorithm offers better results, as shown in a later section, it requires a lot of computational overhead. The following figure shows how ESCEL works:

Fig 3: Working of ESCEL
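As a concrete illustration of the round-robin policy described in Section 3.1.1, a minimal dispatcher simply cycles through the VM list in fixed order. This is an illustrative sketch only, not code from the surveyed papers; the request and VM names are hypothetical:

```python
from itertools import cycle

def round_robin_dispatch(requests, vms):
    """Assign each request to the next VM in fixed cyclic order.

    Current VM load and job processing time are ignored, which is
    the main drawback of the static round-robin approach.
    """
    vm_cycle = cycle(vms)  # endless iterator over the VM list
    return {req: next(vm_cycle) for req in requests}

# Hypothetical request and VM identifiers
print(round_robin_dispatch(["r1", "r2", "r3", "r4"], ["vm1", "vm2"]))
# → {'r1': 'vm1', 'r2': 'vm2', 'r3': 'vm1', 'r4': 'vm2'}
```

Because the assignment order is fixed in advance, the dispatcher needs no information about the system's current state, which is exactly what makes the algorithm static.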


3.2.2 Throttled Load Balancer
The throttled load balancer [3] is a dynamic load balancing algorithm. In this algorithm, the client first requests the load balancer to find a suitable virtual machine to perform the required operation. In cloud computing there may be multiple instances of virtual machines; these virtual machines can be grouped based on the type of requests they can handle. Whenever a client sends a request, the load balancer first looks for the group that can handle this request and allocates the process to the lightly loaded instance of that group.

Fig 4: Working of Throttled Load Balancer

3.2.3 Honeybee Foraging Algorithm
This algorithm is derived from the behavior of honey bees in finding and reaping food. There is a class of bees called forager bees which forage for food sources; upon finding one, they come back to the beehive to advertise it using a dance called the waggle dance. The display of this dance gives an idea of the quality or quantity of the food and also its distance from the beehive. Scout bees then follow the foragers to the location of the food and begin to reap it. They then return to the beehive and do a waggle dance, which gives an idea of how much food is left and hence results in more exploitation or abandonment of the food source. In the case of load balancing, as the demand on web servers increases or decreases, the services are assigned dynamically to regulate the changing demands of the user. The servers are grouped under virtual servers (VS), each VS having its own virtual service queues. Each server processing a request from its queue calculates a profit or reward, which is analogous to the quality that the bees show in their waggle dance. One measure of this reward can be the amount of time that the CPU spends on processing a request. The dance floor in the case of honey bees is analogous to an advert board here. This board is also used to advertise the profit of the entire colony. Each of the servers takes the role of either a forager or a scout.

3.2.4 Biased Random Sampling
Biased Random Sampling [3] is a dynamic load balancing algorithm. It uses random sampling of the system domain to achieve self-organization, thus balancing the load across all nodes of the system. In this algorithm, a virtual graph is constructed, with the connectivity of each node representing the load on the server. Each node is represented as a vertex in a directed graph, and each in-degree represents the free resources of that node. Whenever a client sends a request to the load balancer, the load balancer allocates the job to a node which has at least one in-degree. Once a job is allocated to the node, the in-degree of that node is decremented by one; after the job is completed, the node creates an incoming edge and increments its in-degree by one. The addition and deletion of processes is done by the process of random sampling. Each process is characterized by a parameter known as the threshold value, which indicates the maximum walk length. A walk is defined as the traversal from one node to another until the destination is found. At each step of the walk, a neighbor of the current node is selected as the next node.

3.2.5 Active Clustering
This is a self-aggregation algorithm that works on the principle of grouping similar nodes together and working on these groups. The process consists of iterative execution of the following steps:

 A node initiates the process (the initiator node) and selects another node, called the matchmaker node, from its neighbors, satisfying the criterion that it should be of a different type than the former.

 The matchmaker node then creates a link between one of its neighbours which is of the same type as the initiator node.

 The matchmaker node then removes the link between itself and the initiator node.

If the variety of nodes is increased, the algorithm performs poorly compared to the honeybee foraging approach [4].

3.2.6 Join-Idle Queue
The Join-Idle-Queue load balancing algorithm was proposed for dynamically scalable web services. This algorithm provides large-scale load balancing with distributed dispatchers by first load balancing idle processors across dispatchers and then assigning jobs to processors to reduce the average queue length at each processor. By removing the load balancing work from the critical path of request processing, it effectively reduces the system load, incurs no communication overhead at job arrivals and does not increase the actual response time [4].

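The in-degree bookkeeping and bounded random walk of Biased Random Sampling (Section 3.2.4) can be sketched as follows. This is an illustrative sketch under assumed data structures (a dict of in-degrees and an adjacency list), not the authors' implementation:

```python
import random

def allocate_job(in_degree, neighbors, start, walk_limit):
    """Walk at most walk_limit steps from `start`; allocate the job to the
    first node whose in-degree is >= 1, i.e. that has a free resource."""
    node = start
    for _ in range(walk_limit):
        if in_degree[node] >= 1:
            in_degree[node] -= 1               # one resource becomes busy
            return node
        node = random.choice(neighbors[node])  # step to a random neighbor
    return None                                # no free node within the walk

def complete_job(in_degree, node):
    """On completion the node regains an incoming edge (a free resource)."""
    in_degree[node] += 1

# Two-node example: node 'a' is fully loaded, 'b' has two free resources
in_degree = {"a": 0, "b": 2}
neighbors = {"a": ["b"], "b": ["a"]}
print(allocate_job(in_degree, neighbors, "a", walk_limit=3))  # → b
print(in_degree)  # → {'a': 0, 'b': 1}
```

The threshold (walk_limit) bounds how long a request may wander before the balancer gives up, matching the "maximum walk length" parameter described above.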

4. COMPARISON OF LOAD BALANCING ALGORITHMS

Table 1. Comparison of different load balancing algorithms

Algorithm | Benefit | Drawback
Round Robin | Equal distribution of workload | Job processing time is not considered
Equally Spread Current Execution | Response time and processing time of a job are improved | Not fault tolerant because of a single point of failure
Throttled Load Balancing | Assigns jobs to the appropriate VM | Does not consider the current load on the VM
Honey Bee Foraging | Works well under heterogeneous resources | An increase in resources does not improve throughput equally
Active Clustering | Balances load efficiently | Performs poorly in a heterogeneous environment
Biased Random Sampling | Necessitates high storage in all nodes | Slower than other algorithms
Join-Idle Queue | Reduces system load; no communication overhead at job arrivals | More power consumption

5. CONCLUSION
Since cloud computing has the potential to effectively handle future computing requirements, it is necessary to optimally handle the major issues arising during computing over clouds. Load balancing is one such important issue, which affects the utilization of resources and the performance of the cloud system. A lot of research work has therefore been done to efficiently balance the overall workload over the available resources. This paper has surveyed some static and dynamic load balancing algorithms with their advantages and disadvantages.

6. REFERENCES
[1] Ghutke, Bhushan and Shrawankar, Urmila, "Pros and Cons of Load Balancing Algorithms for Cloud Computing", International Conference on Information Systems and Computer Networks, IEEE, 2014, pp. 123-127.
[2] Shoja, Hamid, Nahid, Hossein and Azizi, Reza, "A Comparative Survey On Load Balancing Algorithms in Cloud Computing", 5th ICCCNT 2014, IEEE - 33044, 2014.
[3] Jaiswal, A. A. and Jain, Dr. Sanjeev, "An Approach towards the Dynamic Load Management Techniques in Cloud Computing Environment", IEEE, 2014, pp. 112-122.
[4] Shaw, Subhadra Bose and Singh, Dr. A. K., "A Survey on Scheduling and Load Balancing Techniques in Cloud Computing Environment", 5th International Conference on Computer and Communication Technology (ICCCT), IEEE, 2014, pp. 87-95.
[5] Hans, Abhinav and Kalra, Sheetal, "Comparative Study of Different Cloud Computing Load Balancing Techniques", International Conference on Medical Imaging, m-Health and Emerging Communication Systems (MedCom), IEEE, 2014, pp. 395-397.
[6] Vouk, Mladen A., "Cloud Computing – Issues, Research and Implementations", Journal of Computing and Information Technology - CIT 16, 2008, 4, 235–246.
[7] Mohamaddiah, Mohd Hairy, Abdullah, Azzizol, Shamala, Subramaniam and Hussin, Masnida, "A Survey on Resource Allocation and Monitoring in Cloud Computing", International Journal of Machine Learning and Computing, Vol. 4, No. 1, February 2014.
[8] Sriram, Ilango and Hosseini, Ali Khajeh, "Research Agenda in Cloud Technologies".


Review of Various Fracture Detection Techniques in X-Ray Images

Tanudeep Kaur, Student, Department of Computer Science and Engineering, BGIET, Sangrur, tanudeepkaur@gmail.com
Anupam Garg, Assistant Professor, Department of Computer Science and Engineering, BGIET, Sangrur, er.anugarg@gmail.com

ABSTRACT
This paper presents the detection and segmentation of bone fractures in X-ray images using edge detection algorithms. Edge detection is applied to find a fracture in a bone of the body (skull, hand, leg, chest or spine). A fracture is a medical condition in which there is a break in the continuity of a bone. The work of various researchers on fractures present in X-ray images and their detection techniques is discussed.

General Terms
Methodology for Bone Fracture Detection

Keywords
Image Segmentation, Edge Detection, Fracture X-Rays, SVM.

1. INTRODUCTION
Digital image processing is applied in progressive transmission of images, teleconferencing, digital libraries, image databases, remote sensing and other specialized uses. For the interpretation of remote sensing images, and to extract as much information as possible from an image, many image processing and analysis techniques have been developed and used. The large collection of digital images has been accompanied by improvements in digital storage media and image capturing devices such as scanners, web cameras and digital cameras, along with the rapid development of the internet. Digital image processing is used for rapid and efficient retrieval of visual information in different fields of life such as medicine, art, architecture, education, crime prevention, etc.

Due to technological and software advancements, medical image processing is gaining wide acceptance in the healthcare industry. It plays an important role in disease diagnosis and improved patient care by helping medical practitioners during decision making. Human organs in digital form are produced by several state-of-the-art modalities such as X-ray, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT). Of these, X-ray is non-invasive, painless and economical, and is one of the oldest and most frequently used techniques. An X-ray can image any bone in the body. A typical bone ailment, which occurs when a bone cannot withstand an outside force and the person is unable to move that part of the body, is called a fracture. A fracture is a crack in a bone and is defined as the breaking up of a continuous bone. Because of a fracture the patient can suffer severe pain, dissatisfaction and expensive litigation, so detection and corrective treatment should be done quickly. Detection of a fracture can be done by passing X-ray emission through that part of the body with the help of an X-ray machine. With the advancement of computer processing capabilities, medical imaging has made it possible to approach the problem of automated diagnosis. As the fracture cannot always be seen by the naked eye, a radiologist can


experience difficulties in reading and understanding X-ray images due to the presence of noise or the lack of a proper illumination source. By building such a system, one can hope to help the radiologist detect bone anomalies properly.

The methodology followed by a fracture detection system involves the following steps: preprocessing, segmentation and fracture detection. Preprocessing enhances the X-ray input by removing noise or other unwanted effects present in the X-ray, making it more suitable for segmentation. In the segmentation process, the fractured part is separated from the body structure and identified. The most used algorithm for segmentation is edge detection, which separates the boundary of the object from the background. Fracture detection, which is a tough task, then verifies whether the segmented image is fractured or not. This image processing uses edge detection methods and an SVM classifier, which is expected to minimize the error in detecting bone fractures.

2. GENERAL APPROACH TO CLASSIFICATION
Classification, also known as pattern recognition, discrimination, supervised learning or prediction, is a task that involves the construction of a procedure that maps data into one of several predefined classes. It applies a rule, a boundary or a function to a sample's attributes in order to identify its class. Classification can be applied to databases, text documents, web documents, web-based text documents, etc. Classification is considered a challenging field and contains much scope for research. It is considered challenging for the following reasons:

 Information overload – The information explosion era is overloaded with information, and finding the required information is prohibitively expensive.

 Size and dimension – The amount of information stored is very high, which in turn increases the size of the database to be analyzed. Moreover, the databases have a very high number of "dimensions" or "features", which again pose challenges during classification.

A classification technique is a systematic approach to building classification models from an input data set. Examples include Decision Tree classifiers, Rule-Based classifiers, Neural Networks, Support Vector Machines and Naïve Bayes classifiers. Each technique employs a learning algorithm to identify a model that best fits the relationship between the attribute set and the class label of the input data. The model generated by a learning algorithm should both fit the input data well and correctly predict the class labels of records it has never seen before. Therefore, a key objective of the learning algorithm is to build models with good generalization capability, i.e., models that accurately predict the class labels of previously unknown records. First, a training set consisting of records whose class labels are known must be provided. The training set is used to build a classification model, which is subsequently applied to the test set, which consists of records with unknown class labels [1].

3. RELATED WORKS
At present, a large amount of research is under way in image processing for the medical field as per the specifications of diagnosis, while research concerning X-ray images is scarcer. The general segmentation techniques that can be used are edge based, region based, edge detection algorithms and watershed algorithms. There are general classification approaches such as neural networks, rule-based classifiers and SVM; the main focus of this review is SVM.

Bielecki, A., Korkosz, M. and Zielinski, B. [2], [3] proposed an automated algorithm to compute the joint width in X-ray images of the hand. Such a process is essential in age assessment as well as in the diagnosis of hand diseases (such as rheumatoid arthritis) and their prognosis. Their approach performs dilation of the image followed by a filtering step


using a Gauss function. Then a thinning procedure is used to define the skeleton of the hand, and an analysis of the branches is performed to find the correct branches of the fingers. Based on these, joint locations are detected and their widths are computed.

Mahendran and Baboo [4] presented a fusion classification technique for automatic detection of the existence of fractures in the tibia bone (one of the long bones of the leg). The authors start with preprocessing steps of contrast adjustment, edge enhancement, noise removal and segmentation before extracting texture features. For the classification step, the authors propose combining the results of three common classifiers, viz., feedforward backpropagation Neural Networks (NN), Support Vector Machines (SVM) and Naive Bayes (NB), using a simple majority vote technique.

SP. Chokkalingam and K. Komathy [5] implemented a new scheme to diagnose the presence of rheumatoid arthritis through a series of image processing techniques, which are claimed to be more effective than other methods performing the same task and hence provide a more effective approach to computer-aided diagnosis. The system may be further enhanced by improving the edge detection as well as finding a better segmentation technique. Gray-level co-occurrence matrix (GLCM) features such as mean, median, energy, correlation and bone mineral density (BMD) are extracted and stored in a database. This dataset is trained with inflamed and non-inflamed values, and with the help of a neural network all new images are properly checked for their status.

He et al. [6] used a hierarchical SVM classifier system for fracture detection in femur bones. To use a hierarchical classifier, the problem is divided into smaller sub-problems. This is done in the SVM's kernel space instead of the feature space due to the complexity of the problem and the limited dataset. Each sub-problem is handled by an optimized SVM classifier, and to ensure that the hierarchy performs well, lower-level SVMs should complement the performance of higher-level SVMs.

Y. Jia and Y. Jiang [7] presented a method of segmentation that outlines fractured bones in an X-ray image of a patient's arm within casting materials, and displays the alignment between the fractured bones. Geodesic active contour models with global constraints are applied to segment the bone region. A prior shape is collected and used as a global constraint of their model. A maximum-likelihood function is derived to provide feedback for each evolving process. Experimental results show that the method produces the outlines of the fractured bones on low-contrast X-ray images robustly and accurately [8].

Martin Donnelley et al. [9] developed a method of automatically detecting fractures in long bones. First the edges are extracted from the X-ray image using a non-linear anisotropic diffusion method (the affine morphological scale space) that smoothes the image without losing critical information about the boundary locations within the image. Then a modified Hough transform with automatic peak detection is used to determine parameters for the straight lines that best approximate the edges of the long bones. The parameters used to approximate the long bone edges are then used for centerline approximation, diaphysis segmentation and fracture detection in the segmented region.

C. Linda et al. [10] proposed a procedure for crack detection in X-ray images which is based on the minimization of a fuzzy measure. The image histogram is divided into three fuzzy sub-sets using an iterative approach to obtain the subsets' parameters. The obtained parameters were used as initial estimates, and each pixel in the fuzzy regions was classified as belonging to one of the sub-sets by minimizing the fuzzy index. After segmenting the image into three regions, the background and skin regions are removed to detect the cracks in the bone region. A binary image thus obtained contains cavities or holes. A hole-filling step utilizing a morphological operation is then applied to the binary image to fill these spots and create a temporary image. The temporary image is subtracted from the original binary image to isolate the small spots. Morphological filtering functions (erosion followed by dilation) are then


used to screen noise or undesirable spots, using the iteration number as an operational parameter. The morphological operation can eliminate or retain spots in the image according to their area size.

4. SUMMARY
This paper surveyed various segmentation models that segment the bone structure and the fractured region in an X-ray image. The steps used to extract the bone fracture from the X-ray image are preprocessing, edge detection algorithms, feature extraction and an SVM classifier, applied in a serial fashion.

REFERENCES
[1] Mahendran, S. K. and Baboo, S. Santhosh (2011), "Automatic Fracture Detection Using Classifiers – A Review", International Journal of Computer Science Issues, Vol. 8, Issue 6, No. 1, pp. 340-345.
[2] Bielecki, A., Korkosz, M. and Zielinski, B. (2008), "Hand radiographs preprocessing, image representation in the finger regions and joint space width measurements for image interpretation", Pattern Recognition, Vol. 41, Issue 12, pp. 3786–3798.
[3] Zielinski, B., "A fully-automated algorithm dedicated to computing metacarpophalangeal and interphalangeal joint cavity widths", Schedae Informaticae, Vol. 16, pp. 47–67.
[4] Mahendran, S. K. and Baboo, S. S. (2011), "An enhanced tibia fracture detection tool using image processing and classification fusion techniques in X-ray images", Global Journal of Computer Science and Technology, Vol. 11, Issue 14, Version 1.0, pp. 23-28.
[5] Chokkalingam, SP. and Komathy, K. (2014), "Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images", World Academy of Science, Engineering and Technology, International Journal of Computer, Information, Systems and Control Engineering, Vol. 8, Issue 5, pp. 834-843.
[6] He, J. C., Leow, W. K. and Howe, T. S. (2007), "Hierarchical classifier for detection of fractures in x-ray images", in Computer Analysis of Images and Patterns, Springer, 2007, pp. 962–969.
[7] Jia, Y. and Jiang, Y. (2006), "Active contour model with shape constraints for bone fracture detection", Computer Graphics, Imaging and Visualization, International Conference on, IEEE, 2006.
[8] Jacob, Nathanael E. and Wyawahare, M. V. (2013), "Survey of Bone Fracture Detection Techniques", International Journal of Computer Applications, pp. 31-34.
[9] Montejo-Raez, A. (2005), "Automatic Text Categorization of documents in the High Energy Physics domain", CERN-THESIS-2006-008, ISBN: 84-338-3718-4.
[10] Linda, C. Harriet and Jiji, G. Wiselin (2011), "Crack detection in X-ray images using fuzzy index measure", Applied Soft Computing, Vol. 11, Issue 4, pp. 3571-3579.

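The serial pipeline summarized above (preprocessing, edge detection, feature extraction, then an SVM classifier) can be sketched in Python. This is an illustrative reconstruction, not any surveyed author's implementation: the Sobel kernels, the four summary features, the toy "fracture" images and the tiny hinge-loss trainer standing in for a full SVM library are all assumptions.

```python
import numpy as np

def sobel_edges(img):
    """Edge map via Sobel gradients (preprocessing + edge detection)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img, float)
    gy = np.zeros_like(img, float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def features(img):
    """Crude feature vector: intensity and edge-strength statistics."""
    e = sobel_edges(img)
    return np.array([img.mean(), img.std(), e.mean(), e.max()])

def train_linear_svm(X, y, epochs=200, lr=0.01, lam=0.01):
    """Tiny linear SVM via sub-gradient descent on the hinge loss
    (a stand-in for a library SVM; illustrative only)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):          # labels y are in {-1, +1}
            if yi * (xi @ w + b) < 1:     # margin violated: hinge update
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                         # only regularization shrinkage
                w -= lr * lam * w
    return w, b

# Toy data: "fractured" images contain a bright discontinuity line.
rng = np.random.default_rng(0)
def toy_image(fractured):
    img = rng.normal(0.5, 0.05, (16, 16))
    if fractured:
        img[8, :] = 1.0                   # sharp edge, mimicking a crack
    return img

X = np.array([features(toy_image(k % 2 == 1)) for k in range(20)])
y = np.array([1 if k % 2 == 1 else -1 for k in range(20)])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print("training accuracy:", (pred == y).mean())
```

The pipeline stages run in the same serial order the summary describes; only the feature set and classifier details are placeholders.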
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

A Review of the Techniques used for Face Recognition

Priyanka
Dept. of Computer Science
Bhai Gurdas Institute of Engineering and Technology, Sangrur
priyankabansal3006@gmail.com

ABSTRACT
Face recognition presents a challenging problem in the field of image analysis and computer vision; as such, a large number of face recognition algorithms have been developed in the last decade. In this paper I first present an overview of face recognition and discuss its applications and technical challenges. Thereafter I describe the various face recognition techniques, including PCA, LDA, ICA, Gabor wavelets, soft computing tools such as ANN, and various hybrid combinations of these techniques. This review examines all these face recognition methods against challenges such as illumination, pose variation and facial expression.

General terms
Face recognition, Feature extraction, Face detection

Keywords
Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Artificial Neural Networks (ANN).

1. INTRODUCTION
Over the last ten years, face recognition has become a specialized application area within the larger field of computer vision, and it has become one of the most widely used biometric authentication techniques. Face recognition is an interesting and successful application of pattern recognition and image analysis. Face recognition is used for two primary tasks [1]:

1. Verification (one-to-one matching): When presented with a face image of an unknown individual along with a claim of identity, verify whether the individual is who he/she claims to be.

2. Identification (one-to-many matching): Given an image of an unknown individual, determine that person's identity by comparing (possibly after encoding) that image with a database of (possibly encoded) images of known individuals.

Face verification is thus a 1:1 match that compares a face image against a template face image whose identity is being claimed. By contrast, face identification is a 1:N problem that compares a query face image against all image templates in a face database. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery.

A basic face recognition system therefore contains the following sub-modules:

Input image/video → Face Detection → Feature Extraction → Face Recognition → Identification/Verification

Fig 1: Generic face-recognition system

2. HISTORICAL PERSPECTIVE
The earliest work on this subject was done in the 1950s, in psychology; it came attached to other issues such as facial expression, the interpretation of emotion and the perception of gestures. Engineering started to show interest in face recognition in the 1960s. Pioneers of automated facial recognition include W. Bledsoe, H. C. Wolf and C. Bisson. During 1964 and 1965, Bledsoe, along with Chan and Bisson, worked on using the computer to recognize human faces [2, 3, 4]. He was proud of this work, but because the funding was provided by an unnamed intelligence agency that did not allow much publicity, little of the work was published. He later continued his research at Stanford Research Institute. His system was semi-automatic: some face coordinates were selected by a human operator, and computers then used this information for recognition. He described most of the problems that face recognition still suffers from even 50 years later: variations in illumination, head rotation, facial expression and aging. Research on this matter still continues, trying to measure subjective face


features such as ear size or between-eye distance. For instance, this approach was used at Bell Laboratories by A. Jay Goldstein, Leon D. Harmon and Ann B. Lesk. They described a vector containing 21 subjective features, such as ear protrusion, eyebrow weight or nose length, as the basis for recognizing faces using pattern classification techniques.

3. APPLICATION AREAS
There are numerous application areas in which face recognition can be exploited, a few of which are outlined below [1]:

Table 1: Applications of face recognition

Biometrics: Person identification (national IDs, passports, voter registrations, driver licenses), automated identity verification.
Information Security: Access security (OS, databases), data privacy, user authentication (trading, online banking).
Investigation of reports: Missing person identification, criminal identification.
Access management: Secure access authentication, permission-based systems, access logs or audit trails.
Law Enforcement: Video surveillance, suspect identification, suspect tracking, simulated aging, forensic reconstruction of faces.
Personal security: Home video surveillance systems, expression interpretation (driver monitoring systems).

4. TECHNICAL CHALLENGES
There are some key factors that can significantly affect face recognition performance:

1. Illumination: Variations arise from skin reflectance properties and from internal camera control. Several 2D methods do well in recognition tasks only under moderate illumination variation, while performance drops noticeably when both illumination and pose changes occur.
2. Pose: Pose changes affect the authentication process because they introduce projective deformations and self-occlusion. Even though methods dealing with up to 32 degrees of head rotation exist, they do not solve the problem, considering that security cameras can create viewing angles outside this range when positioned.
3. Expression: With the exception of extreme expressions such as a scream, the algorithms are relatively robust.
4. Time delay: The face changes over time, in a nonlinear way over long periods. In general this problem is harder to solve than the others, and not much has been done, especially for age variations.
5. Occlusions: Occlusion can dramatically affect face recognition performance, in particular if located on the upper side of the face.
6. Image orientation: Face images vary directly with rotations about the camera's optical axis.
7. Imaging conditions: When the image is formed, factors such as lighting (source distribution and intensity) and camera characteristics (sensor response, lenses) affect the appearance of a face.
8. Presence or absence of structural components: Facial features such as beards, mustaches and glasses may or may not be present, and there is a great deal of variability among these components, including shape, color and size.

5. TECHNIQUES FOR FACE RECOGNITION

5.1 Principal Component Analysis (PCA)
The PCA method is one of the most widely used algorithms for face recognition. The eigenfaces technique is based on the Karhunen-Loeve transform, in which Principal Component Analysis (PCA) is applied. This method is successfully used to perform dimensionality reduction, and PCA is used for both face recognition and detection [5]. Mathematically, eigenfaces are the principal components that divide the face into feature vectors, and the feature vector information can be obtained from the covariance matrix. These eigenvectors are used to quantify the variation between multiple faces. The faces are characterized by the linear combination of the highest eigenvalues; each face can be considered as a linear combination of the eigenfaces, and can be approximated by using the eigenvectors having the largest eigenvalues. The best M eigenfaces define an M-dimensional space, called "face space". Principal Component Analysis was also used by L. Sirovich and M. Kirby to efficiently represent pictures of faces. They showed that a face image can be approximately reconstructed using a small collection of weights for each face together with a standard face picture. The weights describing each face are obtained by projecting the face image onto the eigenpictures. Eigenfaces are a practical approach to face recognition: because of the simplicity of the algorithm, an eigenface recognition system is easy to implement, and it is efficient in processing time and storage. PCA reduces the dimension of an image in a short period of time, and there is a high correlation between the training data and the recognition data. The accuracy of eigenfaces depends on many things: since pixel values are used as the basis for the projection, accuracy decreases with varying light intensity, so preprocessing of images is required to achieve satisfactory results. An advantage of this algorithm is that the eigenfaces were invented exactly for this purpose, which makes the system very efficient. A drawback is that it is sensitive to lighting conditions and that finding the eigenvectors and eigenvalues is time-consuming. This limitation is overcome by Linear Discriminant Analysis (LDA). LDA is among the most dominant algorithms for feature selection in appearance-based methods [6], but many LDA-based face recognition systems first use PCA to reduce dimensionality and then apply LDA to maximize the discriminating power of feature selection. Due to this, a modified PCA algorithm for face recognition was proposed in [7].

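The eigenface construction described in Section 5.1 can be sketched with plain NumPy. This is an illustrative sketch, with random vectors standing in for face images; it is not the modified algorithm of [7]:

```python
import numpy as np

rng = np.random.default_rng(1)
faces = rng.random((12, 32 * 32))        # 12 "images", flattened to vectors

mean_face = faces.mean(axis=0)           # the "standard face picture"
A = faces - mean_face                    # center the data

# Eigenfaces = principal components of the centered images.
# SVD avoids forming the (1024 x 1024) covariance matrix explicitly.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
M = 5                                    # keep the best M eigenfaces
eigenfaces = Vt[:M]                      # rows span the M-dimensional "face space"

# Each face is described by its weights: the projection onto the eigenfaces.
weights = A @ eigenfaces.T               # shape (12, M)

# Recognition: nearest neighbour in weight space.
probe = faces[3] + rng.normal(0, 0.01, faces.shape[1])   # noisy copy of face 3
w_probe = (probe - mean_face) @ eigenfaces.T
match = np.argmin(np.linalg.norm(weights - w_probe, axis=1))
print("best match:", match)              # should recover index 3
```

The projection step is exactly the "small collection of weights" idea attributed to Sirovich and Kirby above: each face is stored as M numbers rather than 1024 pixels.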

This method was based on the idea of reducing the influence of eigenvectors associated with large eigenvalues by normalizing each feature vector element by its corresponding standard deviation. The simulation results show that the proposed method performs better than conventional PCA and LDA approaches, while its computational cost remains the same as that of PCA and much less than that of LDA.

5.2 Linear Discriminant Analysis (LDA)
Linear discriminant analysis (LDA) is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition and machine learning to find a linear combination of features which characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier or, more commonly, for dimensionality reduction before later classification. LDA is a powerful method for face recognition: it yields an effective representation that linearly transforms the original data space into a low-dimensional feature space where the data is well separated. However, in face recognition the within-class scatter matrix becomes singular and the classical LDA problem cannot be solved; this is the undersampled problem of LDA (also known as the small sample size problem). A subspace analysis method for face recognition called kernel discriminant locality preserving projections was proposed in [8], based on the analysis of LDA, LPP and kernel functions. The resulting nonlinear subspace not only preserves the local facial manifold structure but also emphasizes discriminant information. Combined with the maximum margin criterion (MMC), a new method called maximizing margin and discriminant locality preserving projections (MMDLPP) was proposed to find the subspace that best discriminates different face changes while preserving the intrinsic relations of the local neighborhood within the same face class according to prior class label information. This method, however, suffers from illumination variation; to address this, illumination adaptive linear discriminant analysis (IALDA) was proposed to solve illumination variation problems in face recognition. The recognition accuracy of the suggested method (IALDA) is far higher than that of the PCA and LDA methods, though lower than that of the Logarithmic Total Variation (LTV) algorithm; the LTV algorithm, however, has high time complexity and is therefore not practically applicable. At the same time, this indicates that the proposed IALDA method is robust to illumination variations. David Monzo et al. [9] compared several approaches for extracting facial landmarks and studied their influence on face recognition. In order to obtain fair comparisons, they used the same number of facial landmarks and the same type of descriptors for each approach. The comparative results, obtained using the FERET and FRGC [1] datasets, showed that better recognition rates are obtained when landmarks are located at real facial fiducial points. In this work, comparison was done using Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Orthogonal Linear Discriminant Analysis (OLDA). OLDA is one of the many variations of LDA, and aims to tackle the problem of undersampling; its key idea is that the discriminant vectors are orthogonal to each other, and Ye provides an efficient way of computing it. Implementing LDA directly results in poor extraction of discriminating features. For this reason, in some methods a Gabor filter is used to filter frontal face images, PCA is used to reduce the dimension of the filtered feature vectors, and LDA is then used for feature extraction. LDA is better than PCA when a method with better computational time is needed, but due to the small sample space its usability is limited.

5.3 Independent Component Analysis (ICA)
ICA is a widely used subspace projection technique that projects data from a high-dimensional space to a lower-dimensional space. The technique is a generalization of PCA that decorrelates the high-order statistics in addition to the second-order moments. ICA provides a more powerful data representation than PCA, as its goal is an independent rather than merely uncorrelated image decomposition and representation. A fast incremental principal non-Gaussian directions analysis algorithm, called IPCA_ICA, was proposed in [10]. This algorithm computes the principal components of a sequence of image vectors incrementally, without estimating the covariance matrix, and at the same time transforms these principal components into the independent directions that maximize the non-Gaussianity of the source. IPCA_ICA achieves a higher average success rate than the eigenface, Fisherface and FastICA methods.

5.4 Gabor Wavelet
Gabor wavelets have proven to be good at local and discriminative image feature extraction, as they have characteristics similar to those of the human visual system. The Gabor wavelet transform [12, 13] allows description of the spatial frequency structure in the image while preserving information about spatial relations, which is known to be robust to some variations, e.g., pose and facial expression changes. Although the Gabor wavelet is effective in many domains, it suffers from one limitation: the dimension of the feature vectors extracted by applying the Gabor wavelet to the whole image through a convolution process is very high. To solve this dimension problem, subspace projection is usually used to transform the high-dimensional Gabor feature vector into a low-dimensional one. To enhance face recognition, high-intensity feature vectors extracted from the Gabor wavelet transformation of frontal face images were combined with ICA in [14]. Gabor features have been recognized as one of the best representations for face recognition. In recent years, Gabor wavelets have been widely used for face representation by face recognition researchers, because the kernels of the Gabor wavelets are similar to the 2D receptive field profiles of mammalian cortical simple cells, which exhibit desirable characteristics of spatial locality and orientation selectivity. Previous works on Gabor features have also demonstrated impressive results for face recognition. Typical methods include the dynamic link


architecture (DLA) [15], elastic bunch graph matching (EBGM) [16], the Gabor Fisher classifier (GFC) [17] and AdaBoosted GFC (AGFC) [18]. The Gabor phases are sensitive to local variations; they can discriminate between patterns with similar magnitudes, i.e. they provide more detailed information about the local image features. Therefore, the Gabor phases can work comparably well with the magnitudes, as long as their sensitivity to misalignment and local variations is compensated carefully. P. Latha used Gabor wavelets to represent faces and applied a neural network to classify views of faces, with the dimensionality reduced by principal component analysis. A technique extracts the feature vector of the whole face in an image database by using Gabor filters, known to be invariant to illumination and facial expression. This network achieved a higher recognition rate and better classification efficiency when the feature vectors had low dimensions.

5.5 Artificial Neural Networks (ANN)
Neural networks are used in many applications, such as pattern recognition problems, character recognition, object recognition and autonomous robot driving. The main objective of the neural network in face recognition is the feasibility of training a system to capture the complex class of face patterns. To get the best performance from a neural network, it has to be extensively tuned (number of layers, number of nodes, learning rates, etc.). Neural networks are nonlinear, which makes them a widely used technique for face recognition, and the feature extraction step may be more efficient than Principal Component Analysis. The authors achieved 96.2% accuracy in the face recognition process when 400 images of 40 individuals were analyzed. The disadvantage of the neural network approach is that its ability decreases as the number of classes increases. Due to this, a Multi-Layer Perceptron (MLP) with a feed-forward learning algorithm was chosen for the proposed system, for its simplicity and its capability in supervised pattern matching; it has been successfully applied to many pattern classification problems. A new approach to face detection with Gabor wavelets and a feed-forward neural network was presented. The method used the Gabor wavelet transform and a feed-forward neural network both for finding feature points and for extracting feature vectors. The experimental results show that the proposed method achieves better results than other successful algorithms such as graph matching and eigenfaces. A hybrid neural network was presented which combines local image sampling, a self-organizing map (SOM) neural network and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample. The convolutional neural network (CNN) provides partial invariance to translation, rotation, scale and deformation. The PCA+CNN and SOM+CNN methods are both superior to the eigenfaces technique, even when there is only one training image per person, and the SOM+CNN method consistently performs better than PCA+CNN. After that, a new face detection method was proposed using a polynomial neural network (PNN); the PCA technique is used to reduce the dimensionality of the image patterns and extract features for the PNN. Using a single network, the author achieved a fairly high detection rate and a low false positive rate on images with complex backgrounds; in comparison with a multilayer perceptron, the performance of the PNN is superior. To best reflect the geometry of the 3D face manifold and improve recognition, Spectral Regression Kernel Discriminant Analysis (SRKDA), based on regression and spectral graph analysis, was introduced; when the sample vectors are non-linear, SRKDA can efficiently give exact solutions where ordinary subspace learning approaches cannot. It not only solves high-dimensional and small-sample-size problems, but also enhances feature extraction from the non-linear structure of a face. SRKDA only needs to solve a set of regularized regression problems, with no eigenvector computation involved, which is a huge saving in computational cost. This approach yields better performance in face recognition rate and accuracy.

6. SUMMARY
This paper has attempted to review a significant number of papers in order to cover recent developments in the field of face recognition. The present study reveals that, for enhanced face detection, new algorithms have to evolve using hybrid methods of soft computing tools, such as ANN together with a Gabor filter as feature extractor, that yield better performance in terms of face detection rate and accuracy.

7. ACKNOWLEDGMENTS
I would like to thank all the experts and authors whose work helped me a lot. I am also thankful to my professors for their support.

REFERENCES
[1] Reddy, M. Janga. "A Survey of Face Recognition Techniques."

[2] Patel, Ripal, Nidhi Rathod, and Ami Shah. "Comparative analysis of face recognition approaches: a survey." International Journal of Computer Applications 57.17 (2012): 50-69.

[3] Eleyan, Alaa, and Hasan Demirel. "PCA and LDA based face recognition using feedforward neural network classifier." Multimedia Content Representation, Classification and Security. Springer Berlin Heidelberg, 2006. 199-206.

[4] Ahonen, Timo, Abdenour Hadid, and Matti Pietikainen. "Face description with local binary patterns: Application to face recognition." Pattern Analysis and Machine Intelligence, IEEE Transactions on 28.12 (2006): 2037-2041.

[5] Gupta, Bhaskar. "Performance Comparison of Various Face Detection Techniques."

[6] Swets, Daniel L., and John Juyang Weng. "Using discriminant eigenfeatures for image retrieval." IEEE Transactions on Pattern Analysis and Machine Intelligence 18.8 (1996): 831-836.

[7] Luo, Lin, M. N. S. Swamy, and Eugene I. Plotkin. "A modified PCA algorithm for face recognition." Electrical and Computer Engineering,


2003. IEEE CCECE 2003. Canadian Conference on. Vol. 1. IEEE, 2003.

[8] Huang, Rongbing, et al. "Kernel Discriminant Locality Preserving Projections for Human Face Recognition." (2010).

[9] Monzo, David, Alberto Albiol, and J. M. Mossi. "A comparative study of facial landmark localization methods for face recognition using HOG descriptors." Pattern Recognition (ICPR), 2010 20th International Conference on. IEEE, 2010.

[10] Dagher, Issam, and Rabih Nachar. "Face recognition using IPCA-ICA algorithm." Pattern Analysis and Machine Intelligence, IEEE Transactions on 28.6 (2006): 996-1000.

[11] Yang, Ming-Hsuan, David Kriegman, and Narendra Ahuja. "Detecting faces in images: A survey." Pattern Analysis and Machine Intelligence, IEEE Transactions on 24.1 (2002): 34-58.

[12] Jones, Judson P., and Larry A. Palmer. "An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex." Journal of Neurophysiology 58.6 (1987): 1233-1258.

[13] Liu, Chengjun. "Gabor-based kernel PCA with fractional power polynomial models for face recognition." Pattern Analysis and Machine Intelligence, IEEE Transactions on 26.5 (2004): 572-581.

[14] Kar, Arindam, et al. "High Performance Human Face Recognition using Independent High Intensity Gabor Wavelet Responses: A Statistical Approach." arXiv preprint arXiv:1106.3467 (2011).

[15] Lades, Martin, et al. "Distortion invariant object recognition in the dynamic link architecture." Computers, IEEE Transactions on 42.3 (1993): 300-311.

[16] Wiskott, Laurenz, et al. "Face recognition by elastic bunch graph matching." Pattern Analysis and Machine Intelligence, IEEE Transactions on 19.7 (1997): 775-779.

[17] Liu, Chengjun, and Harry Wechsler. "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition." Image Processing, IEEE Transactions on 11.4 (2002): 467-476.

[18] Shan, Shiguang, et al. "AdaBoost Gabor Fisher classifier for face recognition." Analysis and Modelling of Faces and Gestures. Springer Berlin Heidelberg, 2005. 279-292.


ENHANCEMENT IN INTRUSION DETECTION SYSTEM FOR WLAN USING GENETIC ALGORITHMS

Rupinder Singh, BGIMT, Sangrur, rupinder.bgimt@hotmail.com
Sandeep Kautish, Guru Kashi University, Talwandi Sabo, sandeepkautish@yahoo.com

ABSTRACT
The rapid growth of Information Technology, and of wireless usage in particular, demands a great deal of security in order to keep all data sources and equipment secure, so we need a secured network. Data communication over the Internet, or over any other network, is always under threat of intrusion and misuse, so Intrusion Detection Systems have become necessary for network security. Various approaches have been used in Intrusion Detection Systems before, but all of them have some flaws, so there is always scope for improvement. In this paper we propose an Intrusion Detection System (IDS) that applies a Genetic Algorithm (GA) to effectively detect network intrusions. The various parameters and evolution processes for the GA are discussed in detail and implemented so as to overcome the flaws present in previous Intrusion Detection Systems.

Keywords
Intrusion Detection, wireless networks, IDS, GA, Chromosomes, Genes

1. INTRUSION DETECTION SYSTEM
A wireless network is not as secure as a wired network: in a wired network we can put a check on the wires, but in a wireless network data is transferred through the air, so an intruder can access the data using various hacking techniques. In wireless networks it is very difficult to secure the network permanently and to detect the various attacks by intruders. Some commonly used attacks are more frequent in a wireless environment than in a wired one, and extra effort is needed to prevent them. An Intrusion Detection System aims to detect the different attacks against networks and systems. Because of the multitude of intrusion methods, there are several reasons why an IDS is essential to any network, both wired and wireless. While wireless IDS technology is new, we need to find out its capabilities and how it can help in providing a robust level of security for wireless networks. Additionally, we need to know what types of IDS are available and the drawbacks that come with using a wireless IDS.

2. NETWORKING ATTACKS
This section is an overview of the four major categories of networking attacks. Every attack on a network can comfortably be placed into one of these groupings.

a) Denial of Service (DoS): A DoS attack is a type of attack in which the hacker makes computing or memory resources too busy or too full to serve legitimate networking requests, hence denying users access to a machine; e.g. apache, smurf, neptune, ping of death, back, mail bomb and UDP storm are all DoS attacks.

b) Remote to User Attacks (R2L): A remote to user attack is an attack in which a user sends packets over the internet to a machine to which s/he does not have access, in order to expose the machine's vulnerabilities and exploit privileges which a local user would have on the computer; e.g. xlock, guest, xnsnoop, phf, sendmail dictionary.

c) User to Root Attacks (U2R): These attacks are exploitations in which the hacker starts off on the system with a normal user account and attempts to abuse vulnerabilities in the system in order to gain super-user privileges; e.g. perl, xterm.

d) Probing: Probing is an attack in which the hacker scans a machine or a networking device in order to determine weaknesses or vulnerabilities that may later be exploited so as to compromise the system. This technique is commonly used in data mining; e.g. saint, portsweep, mscan, nmap.

3. CLASSIFICATION OF INTRUSION DETECTION SYSTEM
IDS can be classified into two main categories, shown below:

i) Host Based Intrusion Detection:
HIDSs check the information found on single or multiple host systems, including the contents of the operating system, operating system files and various application files.


ii) Network Based Intrusion Detection: 6. IDS USING OUR GENETIC


NIDSs evaluate information taken from various network
ALGORITHM
communications, analyzing each packet that are routed By using various ways IDS can be implemented. We have
from source to destination. It also takes into account the chosen Genetic Algorithm to make our IDS. A genetic
stream of packets across the network. Algorithm has many operator, processes and parameters
which decide its arrival to an optimal solution. A short
4. COMPONENTS OF INTRUSION description of the parameters, operators and processes is
handy. The genetic algorithms start processing by initially
DETECTION SYSTEM selecting a random population of chromosomes. Each
chromosome is composed of a finite number of genes,
An intrusion detection system normally consists of three which is predefined in every implementation. These
functional components. The first component of an intrusion chromosomes are the data representing the problem. This
detection system, also known as the event generator, is a data source. The second component of an intrusion detection system is known as the analysis engine. This component takes information from the data source and examines the data for symptoms of attacks or other policy violations. The analysis engine can use one or both of the following analysis approaches:

Misuse/Signature-Based Detection: This type of detection engine detects intrusions that follow well-known patterns of attacks (or signatures) that exploit known software vulnerabilities. The main limitation of this approach is that it only looks for known weaknesses and may not detect unknown future intrusions.

Anomaly/Statistical Detection: An anomaly-based detection engine searches for something rare or unusual [26]. It analyses system event streams, using statistical techniques to find patterns of activity that appear to be abnormal. The primary disadvantages of this approach are that it is highly expensive and that it can misclassify intrusive behavior as normal behavior because of insufficient data.

The third component of an intrusion detection system is the response manager. In basic terms, the response manager acts only when inaccuracies (possible intrusion attacks) are found on the system, by informing someone or something in the form of a response.

The initial population is refined to a high-quality population of chromosomes, where each chromosome satisfies a predefined fitness function. According to the requirements of the solution needed, different gene positions in a chromosome are encoded as numbers, bits, or characters. Each population is refined by applying mutation, crossover, inversion, and selection processes. The working of a genetic algorithm applied to intrusion detection can be viewed as the following sequence of steps:

i) The packet capturing module or sniffer present in the intrusion detection system collects information about the network traffic or logs.

ii) The intrusion detection system applies genetic algorithms to the captured data. The genetic algorithm at this stage has classification rules learned from the information collected.

iii) The intrusion detection system then applies the set of rules produced in the previous phase to the incoming traffic. Application of rules to captured data results in the population initialization, which in turn results in the creation of a new population with good qualities. This population is then evaluated and a new generation with better qualities is created. Then genetic operators are applied to the newly created generation until the most suitable individual is found.

5. PROBLEMS WITH EXISTING SYSTEMS

i) The information used by an Intrusion Detection System is obtained from audit data or from packets on a network. Packets have to travel a long distance from the origin to the IDS and finally to the destination, and in this process they can potentially be destroyed or modified by an attacker.

ii) The Intrusion Detection System monitors continuously even when no intrusion is occurring, because the components of the IDS have to run all the time. This is pure wastage of resources.

7. CONCLUSION

Intrusion detection methods based on genetic algorithms have attracted much attention from the research community and industry during the past many years. The requirements for building efficient intrusion detection systems and the features of genetic algorithms are the main reasons behind genetic algorithms receiving such attention from the intrusion detection research community. This survey provides an introduction to intrusion detection and genetic algorithms. The generics of genetic algorithm based intrusion detection systems are discussed. Also, the work done by different researchers in the direction of applying genetic algorithms for intrusion detection is surveyed. In the near future we will try to improve our intrusion detection system with the help of more statistical analysis and with better and maybe more complex equations.

290
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

REFERENCES
[1] J. P. Planquart, “Application of Neural Networks to Intrusion Detection”, SANS Institute Reading Room.

[2] R. G. Bace, “Intrusion Detection”, Macmillan Technical Publishing, 2000.

[3] S. Kumar, E. Spafford, “A Software Architecture to Support Misuse Intrusion Detection”, in The 18th National Information Security Conference, pp. 194-204, 1995.

[4] K. Ilgun, R. Kemmerer, P. A. Porras, “State Transition Analysis: A Rule-Based Intrusion Detection Approach”, IEEE Transactions on Software Engineering, 21(3): pp. 181-199, 1995.

[5] S. Kumar, “Classification and Detection of Computer Intrusions”, Purdue University, 1995.

[6] V. Bobor, “Efficient Intrusion Detection System Architecture Based on Neural Networks and Genetic Algorithms”, Department of Computer and Systems Sciences, Stockholm University / Royal Institute of Technology, KTH/DSV, 2006.

[7] H. G. Kayacık, A. N. Zincir-Heywood, M. I. Heywood, “Selecting Features for Intrusion Detection: A Feature Relevance Analysis on KDD 99 Intrusion Detection Datasets”, May 2005.

[8] G. Folino, C. Pizzuti, G. Spezzano, “GP Ensemble for Distributed Intrusion Detection Systems”, ICAPR, pp. 54-62, 2005.

[9] W. Lu, I. Traore, “Detecting New Forms of Network Intrusion Using Genetic Programming”, Computational Intelligence, vol. 20, no. 3, Blackwell Publishing, Malden, pp. 475-494, 2004.

[10] M. M. Pillai, J. H. P. Eloff, H. S. Venter, “An Approach to Implement a Network Intrusion Detection System using Genetic Algorithms”, Proceedings of SAICSIT, pp. 221-228, 2004.

[11] S. M. Bridges, R. B. Vaughn, “Fuzzy Data Mining and Genetic Algorithms Applied to Intrusion Detection”, Proceedings of 12th Annual Canadian Information Technology Security Symposium, pp. 109-122, 2000.


A Review on Reliability Issues in Cloud Service


Gurpreet Kaur, Department of CSE, Bhai Gurdas Institute of Engineering and Technology, India, preetsidhu004@gmail.com
Rajesh Kumar, Department of CSE, Bhai Gurdas Institute of Engineering and Technology, India, rajeshkengg@gmail.com

ABSTRACT
Cloud computing is a recently developed technology that provides a model for resource sharing over the internet, which is different from the resource sharing of grid computing systems. Cloud computing is designed to deliver computing resources as a service to consumers over the internet from large-scale data centers, or “clouds”. Reliability is one of the major issues in cloud computing. Cloud reliability analysis and modeling are not easy tasks because of the complexity and large scale of the system. This paper presents a systematic review of basic cloud computing concepts. It also analyzes the key research challenges concerning the reliability of cloud services.

Keywords
Cloud Architecture, Cloud Computing, Cloud Management System (CMS), Reliability, FTCloudSim, Monte Carlo Simulation.

Fig 1: A bird’s eye view of cloud computing [3].


1. INTRODUCTION
This section gives an introduction to basic cloud computing concepts and models. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [1].

Cloud computing is a combination of various earlier technologies such as “grid computing”, “distributed computing”, “utility computing” and “autonomous computing”. Cloud computing provides easy and cheap access to all resources, and lets consumers scale their service requirements up or down. With the help of virtualization, different computational needs are accomplished on the same physical infrastructure. Usually cloud computing services are delivered by a third-party provider.

Cloud computing is a general term for providing resources as a service over the internet. A bird’s eye view of cloud computing is shown in Fig. 1 [3].

1.1 Basic Concepts
There are working models that make cloud computing feasible and accessible to end users. The working models for cloud computing are the following:

1.1.1 Deployment Models
Deployment models are defined by the type of access to the cloud, i.e., how the cloud is located. A cloud can have any of four types of access: public, private, hybrid and community.

 Public Cloud: developed, deployed and maintained by a third-party service provider. The services within a public cloud are developed for the general public. Public clouds lack security due to their openness to the public, e.g., e-mail.

 Private Cloud: services are developed, deployed and maintained for a single enterprise. A private cloud provides more security and greater control than a public cloud.

 Community Cloud: a cloud that is developed for sharing of resources by several organizations. These clouds are designed for a specific purpose, such as particular security requirements.

 Hybrid Cloud: a cloud built by combining the above deployment models. It provides the features of the clouds from which it is composed.
1.1.2 Service Models
Service models define the main models on which cloud computing is based. There are three basic service models, as listed below:


Fig 2: Cloud computing deployment and service models [5].

 Software as a Service (SaaS): In SaaS, ready-made software is provided to consumers on a pay-per-use basis. SaaS software is hosted as a service and accessed through the internet.

 Platform as a Service (PaaS): PaaS provides a platform to develop programs and applications without the need to install software. Applications are developed using a set of programming languages and tools supported by the PaaS provider. It also provides an infrastructure to test cloud applications.

 Infrastructure as a Service (IaaS): IaaS provides an environment for sharing resources like servers, networks and storage. IaaS lets users deploy and run their applications on these shared resources.

Figure 2 provides an overview of the common deployment and service models in cloud computing, in which the three service models can be deployed on top of any of the four deployment models.

2. CLOUD SERVICE RELIABILITY
This section gives an overview of the architecture and the various failures of a cloud service system. Reliability is one of the key factors to be considered in a cloud computing environment. Reliability is defined as the probability that a given item will perform its intended function for a given period of time under a given set of conditions.
Cloud reliability means how available the cloud is to provide its services even when several of its components fail. A cloud is more reliable if it is more fault-tolerant and more adaptable to changing situations. It is impossible to have a cloud that is completely free from failures or failure resistant. Various types of failures are interleaved in the cloud computing environment, such as overflow failure, timeout failure, resource missing failure, network failure, hardware failure, software failure, and database failure [6].

The reliability of cloud computing is critical but hard to analyze, because a cloud is built from a combination of many factors, such as wide-area networks and heterogeneous software/hardware components, with many complicated interactions among them. Hence, reliability models defined for pure software/hardware or conventional networks cannot simply be applied to study and evaluate cloud reliability.

2.1 Cloud Computing System and Failure Analysis
Cloud computing differs from distributed computing in its focus on massive-scale service sharing.

2.1.1 Cloud Service System Architecture
The architecture of the cloud service system is described in Fig. 3. There is a cloud management system (CMS) defined by a set of servers (either centralized or distributed). The CMS mainly fulfills four different functions, as shown in Fig. 3 [6]:

 To manage a request queue that receives job requests from different users for cloud services;

 To manage computing resources (such as PCs, clusters, supercomputers, etc.) all over the Internet;

 To manage data resources (such as databases, publicized information, URL contents, etc.) all over the Internet; and

 To schedule a request, divide it into subtasks and assign the subtasks to different computing resources that may access different data resources over the Internet.

Fig 3: Cloud Service System [6]
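Since reliability is defined above as a probability, a back-of-the-envelope estimate for a request that must avoid every failure type is the product of the per-mode survival probabilities. The failure modes below are the ones listed in this section; the numeric probabilities are invented for illustration (not values from [6]) and the modes are assumed independent.

```python
# Assumed per-request failure probabilities for each mode (illustrative only).
FAILURE_PROBS = {
    "overflow": 0.01,
    "timeout": 0.02,
    "data_resource_missing": 0.005,
    "computing_resource_missing": 0.005,
    "software": 0.01,
    "database": 0.004,
    "hardware": 0.003,
    "network": 0.02,
}

def service_reliability(probs):
    """R = product of (1 - p) over all failure modes a request must avoid."""
    r = 1.0
    for p in probs.values():
        r *= 1.0 - p
    return r

print(round(service_reliability(FAILURE_PROBS), 4))
```

With these assumed numbers, eight individually small failure probabilities already push the per-request success probability down toward 0.93, which is why fault-tolerance mechanisms matter even when each component looks reliable in isolation.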


2.1.2 Failure Analysis of Cloud Service
There are various types of failures that may affect the success/reliability of a cloud service, including overflow, timeout, data resource missing, computing resource missing, software failure, database failure, hardware failure, and network failure [6].

 Overflow: this failure occurs if the number of job requests exceeds the maximum number of requests allowed in the request queue. Beyond that maximum, new requests are discarded, the users are unable to get the service they want, and an overflow failure occurs. If new requests have to wait too long, this also leads to more timeout failures.

 Timeout: the due time is the time set by the users or the service provider for the result of a requested job in the cloud service. If the waiting time of a job/request exceeds the due time, a timeout failure occurs.

 Data resource missing: in the CMS, all data resources are registered on the data resource manager (DRM). If data resources registered on the DRM are removed but the DRM is not updated, then when those data resources are assigned to a job request, they cause a data resource missing failure.

 Computing resource missing: computing resources in the cloud are any physical or virtual components. Similar to data resource missing, a computing resource may also go missing, for example when a PC turns off without notifying the CMS.

 Software failure: software failures are due to faults or unexpected results in programs running on the same or different computing resources.

 Database failure: the database that stores the required data resources may also fail, due to mistakes in the database design, failure to connect to the database, or database crashes, so that running subtasks cannot access the required data.

 Hardware failure: the computing resources and data resources both run on hardware (such as computers or servers), which may also encounter hardware failures, e.g. crashes in the storage devices.

 Network failure: network problems arise from bad design of the communication channels or from broken communication channels when subtasks access remote data.

We classify the above failures into two groups [6]:

1) Request Stage Failures:
 Overflow
 Timeout

2) Execution Stage Failures:
 Data resource missing,
 Computing resource missing,
 Software failure,
 Database failure,
 Hardware failure, and
 Network failure.

Therefore, the two groups of failures can be considered independent, but failures within each group are strongly correlated.

3. CLOUD SERVICE RELIABILITY ENHANCEMENT MECHANISM
The downtime of a cloud data center has a negative effect on cloud service reliability. Evaluating and enhancing reliability in cloud computing is not an easy task. However, there are tools that can be used to evaluate and enhance cloud service reliability. This section introduces the tools that help enhance the reliability of a cloud service.

3.1 FTCloudSim
FTCloudSim is developed by extending the basic functionality of CloudSim. FTCloudSim, a CloudSim-based tool, provides an extensible mechanism to enhance cloud service reliability. FTCloudSim can handle failure events with the help of a check-pointing mechanism.

Fig 4: FTCloudSim Framework [9]

3.1.1 Design of FTCloudSim
As shown in Fig. 4, FTCloudSim is developed by adding six modules to CloudSim (fat-tree data center network construction, failure and repair event triggering, checkpoint image generation and storage, checkpoint-based cloudlet recovery, and results generation) [9], which are described in this section.

 Fat-tree data center network construction. FTCloudSim automatically constructs a fat-tree data center network. With the help of fat-tree construction the networks can be used with any bandwidth and


with any communication technology. This feature helps cloud service providers and consumers achieve reliable communication of the data resources.

 Failure and repair event triggering. With the help of FTCloudSim, all failure and repair events are saved to a file so that experiments can be repeated for improving the reliability of the cloud.

 Checkpoint image generation and storage. A checkpoint image is generated and stored so that a task can be resumed from that stored point in the event of a failure.

 Checkpoint-based cloudlet recovery. A task is resumed from the latest stored checkpoint image in the event of a host failure. If there is no accessible checkpoint image, it fetches the necessary data from the central database and restarts the interrupted task from the beginning.

 Results generation. This module reports all the failure, repair and checkpoint results to the user.

3.2 MCS (Monte Carlo Simulation)
MCS is a computer-based mathematical technique for the quantitative analysis of the risks occurring in a system. MCS also analyzes the behavior of components that cause uncertainty. MCS is a stochastic simulation tool which comes in two varieties: non-sequential and sequential. The general non-sequential MCS algorithm is used for evaluating reliability [11]. All four steps of the MCS algorithm (sampling, classification, calculation, and convergence) depend on an efficient representation of individual states [11]. Non-sequential Monte Carlo Simulation (MCS) is used for evaluating cloud service reliability.

4. CONCLUSION
Cloud computing provides higher reliability than previous technologies (grid computing, distributed computing, etc.), but reliability remains a primary concern in a cloud computing environment. The challenge of reliability arises when a cloud service provider delivers on-demand software as a service, i.e., accessible through any network conditions (including slow connections). The main purpose of discussing reliability in this paper is to highlight the failures in cloud service. From failure characteristics in the cloud we can identify the availability of a cloud service when several of its components fail. A cloud is more reliable and available if it is more fault-tolerant. Fault-tolerance mechanisms like FTCloudSim and MCS are used for recovering from and evaluating failures in the cloud computing environment.

REFERENCES
[1] Srinivas J., Venkata K. and Qyser Dr. A. Moiz, “Cloud Computing Basics”, International Journal of Advanced Research in Computer and Communication Engineering, Vol. 1, Issue 5, July 2012.

[2] Nazir Mohsin, “Cloud Computing: Overview & Current Research Challenges”, IOSR Journal of Computer Engineering (IOSR-JCE), ISSN: 2278-0661, ISBN: 2278-8727, Volume 8, Issue 1 (Nov.-Dec. 2012), pp. 14-22.

[3] Buyya Rajkumar, “Introduction to the IEEE Transactions on Cloud Computing”, IEEE Transactions on Cloud Computing, Vol. 1, No. 1, January-June 2013.

[4] Gowri G., Amutha M., “Cloud Computing Applications and their Testing Methodology”, International Journal of Innovative Research in Computer and Communication Engineering, Vol. 2, Issue 2, February 2014.

[5] Sriram Ilango and Khajeh-Hosseini Ali, “Research Agenda in Cloud Technologies”.

[6] Dai Yuan-Shun, Yang Bo, Dongarra Jack and Zhang Gewei, “Cloud Service Reliability: Modeling and Analysis”.

[7] Vouk Mladen A., “Cloud Computing – Issues, Research and Implementations”, Journal of Computing and Information Technology - CIT 16, 2008, 4, pp. 235–246.

[8] Yadav Nikita, Singh V. B. and Kumari Madhu, “Generalized Reliability Model for Cloud Computing”, International Journal of Computer Applications (0975 – 8887), Volume 88, No. 14, February 2014.

[9] Zhou Ao, Wang Shangguang, Sun Qibo, Zou Hua and Yang Fangchun, “FTCloudSim: A Simulation Tool for Cloud Service Reliability Enhancement Mechanisms”.

[10] Zhang Congyingzi, Green Robert and Alam Mansoor, “Reliability and Utilization Evaluation of a Cloud Computing System Allowing Partial Failures”, 2014 IEEE International Conference on Cloud Computing.

[11] Snyder, Brett W., “Tools and Techniques for Evaluating the Reliability of Cloud Computing Systems”, Theses and Dissertations, Paper 211, 2013.
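The non-sequential MCS procedure outlined in Section 3.2 (sampling, classification, calculation, convergence) can be sketched as a minimal simulation. The component availabilities and the 2-out-of-3 "service up" criterion below are assumptions made for the example, not values or rules taken from [11].

```python
import random

random.seed(7)

# Assumed availabilities of three hosts backing the service (illustrative only).
AVAIL = [0.99, 0.97, 0.95]

def system_up(state):
    # Classification step: call the service "up" if at least 2 of 3 hosts are up.
    return sum(state) >= 2

def mcs_reliability(n_samples=50_000):
    up = 0
    for _ in range(n_samples):
        # Sampling step: draw each component state independently.
        state = [random.random() < a for a in AVAIL]
        if system_up(state):               # classification
            up += 1
    return up / n_samples                  # calculation; convergence via n_samples

print(round(mcs_reliability(), 4))
```

For this 2-out-of-3 system the exact reliability can be computed analytically (about 0.9977), so the simulation mainly illustrates the mechanics; MCS earns its keep when the system state space is too large or too irregular for closed-form evaluation.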


Influence of Anti-Patterns on Software Maintenance:


A Review
Sharanpreet Kaur, Mata Gujri College, Fatehgarh Sahib, Punjab, sharancgm@gmail.com
Satwinder Singh, Baba Banda Singh Bahadur Engineering College, Fatehgarh Sahib, Punjab, satwinder.singh@bbsbec.ac.in

ABSTRACT
Anti-patterns are defects which affect system quality negatively. An indication of the existence of anti-patterns in software is known as a “Code Smell”, which leads to refactoring of the system; maintenance thus becomes difficult to manage. The more smells there are, the more refactoring is needed. Different approaches have been identified for the detection of anti-patterns in a system. The paper aims at investigating the impact of anti-patterns on classes and which kinds of anti-patterns have a higher impact than others. Finally, conclusions are drawn for future studies on open source systems. The paper is divided into four sections, in which the introduction is followed by the types of anti-patterns; furthermore, the related work is examined carefully, with a brief conclusion. The paper thus reveals different approaches for the identification of code smells in software systems. The detection of smells will help provide more reliability during the testing and maintenance phases by predicting anti-patterns and faults before the delivery of the product. Moreover, the identification of anti-patterns will be of use to the community of software engineers and managers for improving software development and maintenance activities.

Keywords
Anti-patterns, Code Smells, Refactoring and Maintenance.

1. INTRODUCTION
Software systems require continuous maintenance and intelligent development to achieve good quality. Quality can be improved by various quality assurance activities like formal technical reviews, testing, and enforcement of standards. Every time new software is to be developed, already present capital (source code documents, design templates) is used in the development process. But in most cases the reusability of software components degrades the performance and quality of the software. Some common examples of software reusability are: software libraries, design patterns, frameworks, and systematic software reuse. In software design, when a problem occurs time and again, a typical solution is needed, and a way is provided by software developers through the term “Design Patterns”. But in the majority of cases the design patterns start performing like “ANTI-PATTERNS”. Such design defects are called code smells or anti-patterns. When code smells are present, maintenance becomes a need of the system. A code smell is an indication of the presence of anti-patterns in the system; the more smells, the more maintenance is needed. Anti-patterns are defined as a classical style which provides a solution to a problem but generates negative results. Hence, anti-patterns are treated as negative solutions that yield more problems than they solve. Therefore the system starts demanding refactoring [6].

The purpose of the paper is to disclose the commonly occurring anti-patterns in systems, which are generally specified in [5] and [6]. The work done will be helpful to the community of software engineers and managers for improving software development and maintenance activities.

2. ANTI-PATTERNS IN SOFTWARE SYSTEM
Software quality is characterized by good design. The design crumbles with the passage of time as changes are made to the structure due to changing user requirements. The defects of the software disclose themselves in the form of “anti-patterns” or “code smells” [6]. There is a very fine line between an anti-pattern and a code smell. Anti-patterns are considered to be a bad programming practice but not an error. Due to lack of experience and relevant knowledge of software developers, anti-patterns are introduced when solving a specific problem. A code smell is a manifestation that indicates a problem in the software system [5]. It is an implication that admits anti-patterns. Code smells are technically not wrong, but they indicate weakness in design; they may lead to failure of the system and risk of bugs in the future. Hence the system demands “refactoring”: changing the existing software code without affecting the external behavior. Nearly 20 code smells are specified in [6], which are embodied in source code where refactoring is needed. The following types of anti-patterns/code smells are suggested by the following authors.

Table 1. Types of Anti-Patterns

M. Fowler [6]: Lazy Class, Large Class, Long Method, Long Parameter List, Message Chain, Duplicate Code, Divergent Change, Shot Gun Surgery, Feature Envy, Data Clumps, Primitive Obsession, Switch Statements, Parallel Inheritance Hierarchies, Middle Man, Speculative Generality, Inappropriate Intimacy, Temporary Field

B. F. Webster [1] and W. J. Brown et al. [5]: Blob, Spaghetti Code, Conditional Complexity, Anti-Singleton, Class Data Should Be Private (CDSBP), Refused Parent Bequest (RPB), Swiss Army Knife

3. RELATED WORK
Anti-patterns are derived from work on patterns. They are considered to be poor design choices but not an error. Code


Smells are evidence of the presence of anti-patterns. The term anti-pattern was originated by Andrew Koenig, who introduced the perception of patterns in software engineering [3]. Both terms, anti-patterns and code smells, are used interchangeably.

Several authors have studied the influence of anti-patterns and code smells in software systems. The work explores anti-patterns and code smells in the context of software engineering activities. The first book on smells was written by Webster [1], which includes risks of quality assurance, coding, etc. According to [5], attention is needed towards object oriented systems. More than 35 smells were specified in [5], including the well-known design smell “Blob”. The concept of refactoring was presented in [6], motivated by the presence of nearly 20 code smells. The author provided a detailed insight into the term “refactoring”, which is a process of altering the software system without changing its external behavior. The author revealed different types of code smells like Lazy Class, Long Method, Long Parameter List, Shotgun Surgery, etc. The above-mentioned authors provide detailed knowledge on code smells and anti-patterns. However, the approach followed by these authors for the identification of anti-patterns was completely manual; it was a time-consuming and error-prone job for large projects. Thus some researchers proposed automatic, visualization based and statistical detection techniques.

3.1 Traditional Detection Techniques
In the traditional detection techniques the researchers introduced different methods by which anti-patterns and code smells can be identified in the system manually. This was the first step towards the later semi-automatic and automatic anti-pattern detection techniques. They vary from software reading methods to metric-based and template-driven approaches. Some of the approaches are discussed here:

The manual detection strategy was suggested by Travassos and F. Shull [7]. It was a software reading technique which provided assistance in the detection of smells in object oriented systems. Different types of reading techniques had been included for the purpose of detection, i.e., defect-based reading, perspective-based reading, and use-based reading. A project was developed by a team of students which was reviewed on two factors: horizontal review and vertical review. The approach was completely manual and time consuming.

Connie U. Smith and Lloyd G. William [8] investigated the effect of the anti-pattern God Class on the system and showed how to resolve it. They also proposed three new performance anti-patterns that often occur within software systems.

R. Marinescu [14] introduced a metric-based approach for detection of anti-patterns. The technique was realized in the tool iPlasma. More than 8 anti-patterns were detected with nearly the same number of techniques. Threshold values were compared against values of metrics combined with set operators.

M. J. Munro [16] suggested a template-driven model to detect anti-patterns. The template consists of three components: the name of the smell, a textual description of the properties of the code smell, and heuristics for the detection of the smell. He explored product metrics for the recognition of “bad smells” in Java source software. The paper aimed at using the metrics to identify the peculiarity of a code smell. Interpretation rules had been applied to calculate the metrics results, which are applied to Java source code. Hence, based on the calculated results, the location of bad smells in the Java code could be easily identified. For the implementation, a prototype tool had been used on two case studies.

E. H. Alikacem and H. Sahraoui [20] provided a language which identifies the overlooking of quality factors and provides a methodology for identification of smells in object oriented systems. The terminology provided the guidelines with the help of fuzzy logic, metrics, association and inheritance. However, it was not validated on any real world project.

Other related detection approaches are discussed by R. Allen and D. Garlan [2], [4], and E. M. Dashofy and A. van der Hoek et al. [17].

3.2 Visualization Based Detection Techniques
In visualization based detection techniques the researchers used different approaches, i.e., metric-based visualization techniques, visualized design defect detection strategies, domain specific languages, etc. Some of the approaches are discussed here.

F. Simon et al. [10] suggested a powerful technique to inspect the internal quality of the software using a metric-based visualization approach. Four types of source code refactoring had been analyzed: move function, move attribute, extract class and inline class. An enhanced metric tool, Crocodile, had been used. The approach enabled software engineers to identify “code smells” with a click of the mouse by following the visualization rules.

G. Langelier et al. [19] specified a visualization approach for the quality analysis of large scale systems. A framework had been provided which was implemented on open source systems. A geometrical 3D box was used for the representation of classes. Analysis had been done on the values of metrics, i.e., for coupling, cohesion, inheritance and size complexity the CBO, LCOM5, DIT and WMC metrics had been used.

K. Dhambri et al. [25] introduced a visualization-based design defect detection strategy. The approach is validated for three types of anomalies: Blob, Functional Decomposition and Divergent Change. The study is being further extended to an automatic detection based approach in the near future.

Cedric Bouhours et al. [29] worked on the investigation of bad smells in the designing process. Spoiled patterns had been targeted for the identification of bad smells. Spoiled patterns are defined as patterns which do not provide proper functionality to the system for which they were designed. A comparison had been made between design patterns and spoiled patterns.

Naouel Moha et al. [31] specified a domain specific language based on DECOR for unmasking anti-patterns. It is a mechanism which provides a track for the description of anti-patterns by going through a sequence of steps: description analysis, specification, processing, detection, and validation. It casts a detection system, DETEX, which plays the role of a reference instantiation of DECOR. More than 15 types of code smells had been identified on 11 open source systems.

3.3 Automatic Detection Techniques
In the automatic detection strategies, fully automatic detection tools had been used. Different types of anti-patterns are identified, and a few of these approaches are validated on real world systems. Some of the approaches are discussed here:

Yann-Gael Gueheneuc et al. [9] classified three types of design defects, i.e., intra-class (within a class), inter-class (among classes) and of semantic nature. A meta-model had been used to describe design patterns. Inter-class design defects could be resolved easily with the help of the Ptidej tool.

Eva van Emden and Leon Moonen [11] specified an approach by which the java source code software’s quality can be
297
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
improved. The concluded results can also be used in a tool for automatic software inspection. The jCOSMO code smell browser has been developed to disclose the smells in Java source code. The tool was validated on the CHARTOON system.

Jagdish Bansiya and Carl G. Davis [12] introduced a hierarchical prototype for the evaluation of quality attributes (reusability, flexibility, understandability, etc.) in object-oriented designs. Architectural and detectable equities of classes and their objects are calculated using design metrics such as DAM, DCC, and CAM. The model provided an approach that can easily be implemented on real-world projects.

Yann-Gael Gueheneuc [15] introduced a tool suite, "Ptidej", which has the capability to accurately reverse-engineer different programming languages to UML class diagrams. Ptidej generates the UML class diagrams which help in the identification of code smells at a higher level of abstraction. The author provided a brief outline of different reverse engineering tools like Rational Rose, ArgoUML version 0.14.1, Chava, Fujaba version 4.0.1, IDEA, Borland Together, and Womble.

S. Counsell and Y. Hassoun [21] described the refactoring of seven open source Java systems: MegaMek, JasperReports, Antlr, Tyrant, PDFBox, Velocity and HSQLDB. The results demonstrated that the most common re-engineering operations in open source systems are renaming and moving fields/methods among the code.

Yann-Gael Gueheneuc and Giuliano Antoniol [26] presented the Design Motif Identification Multilayered Approach (DeMIMA) for the detection of micro-architectures (complementary to design motifs). It was a three-layered architecture in which the first two layers provided a miniature of the source code, and the third layer identified design patterns. The approach provided 100% recall on open source and industrial systems as well, using explanation-based constraint programming.

Stephane Vaucher et al. [28] carefully examined God Classes to detect the occurrence of bad smells in software. Xerces and Eclipse JDT (open-source systems) had been studied for the investigation of God Classes.

Salima Hassaine et al. [32] introduced IDS (Immune-based Detection Strategy), a machine learning process which was inspired by the immune system of the human body. A system could be easily examined for the presence of code smells and anti-patterns. Gantt Project v1.10.2 and Xerces v2.7.0 were manually checked for the existence of smells.

Foutse Khomh et al. [34] proposed Bayesian Detection Expert (BDTEX), a Goal Question Metric (GQM) based approach to construct Bayesian Belief Networks (BBNs) from the descriptions of anti-patterns. The BBN examines whether a class is an anti-pattern or not. BDTEX is validated for three anti-patterns, Blob, Functional Decomposition, and Spaghetti Code, on two open source systems, Gantt Project and Xerces. The approach is also applied to two industrial projects, Eclipse and JHotDraw.

Satwinder Singh and K.S. Kahlon [35] investigated the importance of software metrics and encapsulation for revealing code smells. A software metric model had been introduced that provided the categorization of smells in the code. The Firefox open source system had been investigated for the validation of results.

Satwinder Singh and K.S. Kahlon [36] introduced a metric model for investigating the smelly classes in a system. The paper revealed that the results obtained from the metrics could be helpful in determining the code smells and faulty classes.

Francesca Arcelli Fontana et al. [37] observed that various software analysis code smell detection tools are available in the market, but the accuracy of their judgment is still not very clear. Therefore, six versions of Gantt Project had been explored for the detection of four types of code smell, using more than six tools.

Daniele Romano, Paulius Raila et al. [38] studied the system by considering source code changes (SCC) obtained from 16 Java open source systems. Three anti-patterns, Complex Class, Spaghetti Code, and Swiss Army Knife, have been identified. It had been detected that the number of code changes in anti-pattern classes is greater than the number of changes in classes with no anti-pattern.

Foutse Khomh et al. [40] investigated the effect of anti-patterns on classes. More than 50 releases of four systems, ArgoUML, Eclipse, Mylyn, and Rhino, had been considered. 13 types of anti-patterns have been identified. The relation between the habitation of anti-patterns and change-tendency and fault-tendency is investigated. It had been detected that classes participating in anti-patterns are faultier than others.

Hui Liu et al. [41] aimed at detecting bad smells in the code. A detection strategy had been introduced that reduces the effort of detecting bad smells by a factor of 17 to 20%.

Abdou Maiga, Nasir Ali et al. [42] described "SMURF", an anti-pattern detection approach. More than 290 experiments have been conducted on three systems, i.e. ArgoUML, Xerces, and Azureus. Four types of anti-patterns, Blob, Spaghetti Code, Functional Decomposition, and Swiss Army Knife, have been identified. The authors revealed that the accuracy rate of SMURF is greater than that of DETEX and BDTEX for the detection of anti-patterns in a system.

Kwankamol Nongpong [43] carried out research by combining code smell detection with the tools needed for refactoring. A tool had been generated, called JCodeCanine, which could easily identify the code smells and provide information on where refactoring was needed.

Fehmi Jaafar, Yann-Gael Gueheneuc et al. [44] provided a relationship between anti-patterns and design patterns. Three open source systems, ArgoUML, JFreeChart and XercesJ, had been considered for the evaluation of the relationship. It had been concluded that a relationship exists between anti-patterns and design patterns, but on a temporary basis. The classes present in such anti-patterns had more error tendency.

Harshpreet Kaur Saberwal et al. [45] explored open source systems for the identification of code smells in classes. An empirical model had been designed for the detection of smells in the system. The work carried out is validated on versions of a real-world project, JFreeChart.

Pandiyavathi and Manochandar [47] suggested methods for revealing the code smells in a system. An overview of a refactoring technique had been proposed which would be time saving. An algorithm had been proposed to implement the refactoring methods.

Francis Palma et al. [48] specified the detection of anti-patterns in business processes. A rule-based approach has been presented for improving the quality of BPEL (Business Process Execution Language) processes by detecting BP anti-patterns. Seven BP anti-patterns have been specified, and four have been detected with three example BPEL processes.

Francis Palma et al. [49] proposed that the quality of service-based systems is affected by the use of anti-patterns. Based on the data collected from the SBS FraSCAti, it was shown that the services suspected of anti-patterns require more maintenance than non-pattern services.

Satwinder Singh and K.S. Kahlon [50] revealed the importance of metrics and threshold values in software quality assurance. Analysis of risk in a software system was explored against the threshold values for the detection of bad smells. Hence, based on threshold values, faulty classes could
be easily identified. The study is validated on three versions of the open source system Mozilla Firefox.

Jiang Dexun, Ma Peijun et al. [51] suggested that classes which are functionally unrelated could generate problems in software maintenance. Hence, the detection and refactoring of such classes is needed. A bad smell was proposed by the authors, named Functional Over-Related Classes (FRC). A detection strategy was suggested to identify the bad smell. The work was validated on four open source systems: HSQLDB, Tyrant, ArgoUML and JFreeChart.

3.4 Empirical Detection Techniques
The following are the empirical detection techniques which explore the work done on code smells and anti-patterns. Different types of anti-patterns have been considered by different authors. Some of the proposed work is given below:

Mika Mantyla et al. [13] presented research work done on bad code smells. The paper provided a taxonomy for making the smells more understandable. The authors revealed different classes of bad smells, like Bloaters, Encapsulators, Dispensables, and Couplers. A survey was performed at a Finnish software company, which provided a correlation between the smells.

Foutse Khomh et al. [24] introduced the concept of software quality maintenance by avoiding the use of harmful anti-patterns. The paper revealed that the quality of the software is affected by the use of anti-patterns.

S. Olbrich et al. [27] considered the historical data of Lucene and Xerces. It had been identified that classes with the anti-patterns Blob and Shotgun Surgery have a higher change frequency than non-anti-pattern classes.

Min Zhang, Tracy Hall et al. [33] provided a deep insight into the literature by going through more than 300 papers on bad code smells since 2000. The paper disclosed that research work is needed on the repercussions of code smells. It had been concluded that the smell Duplicated Code is studied more than other code smells.

Rabia Bashir [39] identified the impact of anti-patterns on open source software development. The paper revealed the anti-patterns which are prevalent in open source software development and the solutions to avoid them.

Harvinder Kaur and Puneet Jai Kaur [46] examined various types of anti-pattern detection techniques, i.e. manual (metric-based approach, metric-based heuristics, ad hoc domain-specific language), semi-automated (DCPP matrix) and SVM-based anti-pattern detection techniques (DETEX, BDTEX, SMURF).

4. CONCLUSION

In this paper a vast literature survey has been done to highlight the effect of anti-patterns on source code. Our study reveals different approaches for the detection of anti-patterns and code smells. It has been concluded that the research community has analyzed its results on the basis of within-company projects only. The need has been identified to examine the results for different company projects. Further, it has been analyzed that not many researchers have done work on large projects to identify anti-patterns. Only a few have worked on a large number of anti-patterns to disclose the impact of these smells. Therefore, the need has arisen for the identification of anti-patterns, and the kinds of anti-patterns, with their impact on classes in object-oriented open source projects. Thus, future work will also be possible on the identification of commonly occurring anti-patterns in open source systems and on investigating the impact of anti-patterns using software metrics. Hence, the results obtained will be beneficial to the software industry to improve the quality of software systems by predicting faulty classes during the testing phase. It helps in providing more reliability during the testing and maintenance phases by predicting anti-patterns and faults before delivery of the product. Thus, the results produced will also be of interest to engineers, as they can predict which classes are to be tested more precisely. The study will be valuable for software engineers and managers for improving their maintenance activities by eliciting the code smells.

REFERENCES

[1] B.F. Webster, 1995. Pitfalls of Object-Oriented Development, 1st Ed. M & T Books.

[2] D. Garlan, R. Allen and J. Ockerbloom, 1995. "Architectural Mismatch: Why Reuse Is So Hard," IEEE Software, Vol. 12, No. 6, pp. 17-26.

[3] A. Koenig, 1995. "Patterns and Anti-patterns," Journal of Object-Oriented Programming, Vol. 8, pp. 46-48.

[4] R. Allen and D. Garlan, 1997. "A Formal Basis for Architectural Connection," ACM Trans. Software Eng. and Methodology, Vol. 6, No. 3, pp. 213-249.

[5] W.J. Brown, R.C. Malveau, W.H. Brown, H.W. McCormick III, and T.J. Mowbray, 1998. "AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis," 1st Ed. Wiley, New York.

[6] M. Fowler, 1999. "Refactoring: Improving the Design of Existing Code," 1st Ed. Addison-Wesley.

[7] G. Travassos, F. Shull, Michael Fredericks and Victor R. Basili, 1999. "Detecting Defects in Object-Oriented Designs: Using Reading Techniques to Increase Software Quality," In Proceedings of 14th Conf. Object-Oriented Programming, Systems, Languages, and Applications, pp. 47-56.

[8] Connie U. Smith and Lloyd G. Williams, 2000. "Software Performance Anti-patterns," ACM Soft. Engg. Research, pp. 127-136.

[9] Yann-Gael Gueheneuc, Herve Albin-Amiot and Ecole des Mines de Nantes, 2001. "Using Design Patterns and Constraints to Automate the Detection and Correction of Inter-class Design Defects," Paper accepted at TOOLS USA.

[10] F. Simon, F. Steinbruckner and C. Lewerentz, 2001. "Metrics Based Refactoring," In Proceedings of Fifth European Conf. Software Maintenance and Re-eng., p. 30.

[11] Eva van Emden and Leon Moonen, 2002. "Java Quality Assurance by Detecting Code Smells," In Proceedings of Ninth Working Conference on Reverse Engg., IEEE.

[12] Jagdish Bansiya and Carl G. Davis, 2002. "A Hierarchical Model for Object-Oriented Design Quality Assessment," IEEE Trans. on Software Eng., Vol. 28, No. 1, pp. 4-17.

[13] Mika Mantyla, Jari Vanhanen and Casper Lassenius, 2003. "A Taxonomy and an Initial Empirical Study of Bad Smells in Code," In Proceedings of the Inter. Conference on Software Maintenance, IEEE, pp. 381-384.
[14] R. Marinescu, 2004. "Detection Strategies: Metrics-Based Rules for Detecting Design Flaws," In Proceedings of 20th Int. Conf. Software Maintenance, pp. 350-359.

[15] Yann-Gael Gueheneuc, 2004. "A Systematic Study of UML Class Diagram Constituents for their Abstract and Precise Recovery," 11th Asia-Pacific Conference on Soft. Engg., pp. 265-274.

[16] M.J. Munro, 2005. "Product Metrics for Automatic Identification of 'Bad Smell' Design Problems in Java Source-Code," In Proceedings of 11th IEEE Int. Software Metrics Symp.

[17] E.M. Dashofy, A. van der Hoek and R.N. Taylor, 2005. "A Comprehensive Approach for the Development of Modular Software Architecture Description Languages," ACM Trans. Software Eng. and Methodology, Vol. 14, No. 2, pp. 199-245.

[18] Yann-Gael Gueheneuc, 2005. "Ptidej: Promoting Patterns with Patterns," In Proceedings of 1st ECOOP Workshop on Building a System using Patterns (BSUP), pp. 1-9, Springer-Verlag.

[19] G. Langelier, H.A. Sahraoui and Pierre Poulin, 2005. "Visualization-Based Analysis of Quality for Large-Scale Software Systems," ACM Inter. Conf. on Automated Soft. Engg., pp. 214-223.

[20] E.H. Alikacem and H. Sahraoui, 2006. "Generic Metric Extraction Framework," In Proceedings of 16th Int. Workshop Software Measurement and Metrik Kongress, pp. 383-390.

[21] S. Counsell and Y. Hassoun, 2006. "Common Refactorings, a Dependency Graph and some Code Smells: An Empirical Study of Java OSS," IEEE Inter. Symposium on Empirical Soft. Engg., pp. 288-296.

[22] Yann-Gael Gueheneuc, 2007. "Ptidej: A Flexible Reverse Engineering Tool Suite," IEEE Inter. Confer. on Soft. Maintenance, pp. 529-530.

[23] Naouel Moha, Yann-Gael Gueheneuc, Anne-Francoise Le Meur and Laurence Duchien, 2008. "A Domain Analysis to Specify Design Defects and Generate Detection Algorithms," In Proceedings of 11th Int. Conf. on Fundamental Approaches to Soft. Engg., Springer, New York, pp. 276-291.

[24] Foutse Khomh and Yann-Gael Gueheneuc, 2008. "Do Design Patterns Impact Software Quality Positively?" In Proceedings of 12th Conf. on Soft. Maintenance and Reengineering, IEEE, pp. 274-278.

[25] K. Dhambri, H. Sahraoui, and P. Poulin, 2008. "Visual Detection of Design Anomalies," In Proceedings of 12th European Conf. Software Maintenance and Reng., pp. 279-283.

[26] Yann-Gael Gueheneuc and Giuliano Antoniol, 2008. "DeMIMA: A Multilayered Approach for Design Pattern Identification," IEEE Trans. on Software Eng., Vol. 34, No. 5, pp. 667-684.

[27] S. Olbrich and D.S. Cruzes, 2009. "The Evolution and Impact of Code Smells: A Case Study of Two Open Source Systems," In 3rd Inter. Symposium on Empirical Soft. Engg. and Measurement, pp. 390-400.

[28] Stephane Vaucher, Foutse Khomh, Naouel Moha and Yann-Gael Gueheneuc, 2009. "Tracking Design Smells: Lessons from a Study of God Classes," In 16th Working Conference on Reverse Engg.

[29] Cedric Bouhours and Herve Leblanc, 2009. "Bad Smells in Design and Design Patterns," Journal of Object Techn., Vol. 8, No. 3, pp. 43-63.

[30] Naouel Moha, Yann-Gael Gueheneuc, Anne-Francoise Le Meur, Laurence Duchien and Alban Tiberghien, 2010. "From a Domain Analysis to the Specification and Detection of Code and Design Smells," Springer Verlag (Germany), pp. 345-361.

[31] Naouel Moha, Yann-Gael Gueheneuc, Laurence Duchien, and Anne-Francoise Le Meur, 2010. "DECOR: A Method for the Specification and Detection of Code and Design Smells," IEEE Trans. on Software Eng., Vol. 36, No. 1, pp. 20-36.

[32] Salima Hassaine, Foutse Khomh, Yann-Gael Gueheneuc, and Sylvie Hamel, 2010. "IDS: An Immune-Inspired Approach for the Detection of Software Design Smells," 7th IEEE Inter. Conference on the Quality of Infor. and Comm. Tech., pp. 343-348.

[33] Min Zhang, Tracy Hall and Nathan Baddoo, 2011. "Code Bad Smells: A Review of Current Knowledge," Journal of Software Maintenance and Evolution: Research and Practice, Vol. 23, pp. 179-202.

[34] Foutse Khomh, Stephane Vaucher, Yann-Gael Gueheneuc and Houari Sahraoui, 2011. "BDTEX: A GQM-based Bayesian Approach for the Detection of Anti-patterns," J. Syst. Softw., Vol. 84, No. 4, pp. 559-572.

[35] Satwinder Singh and K.S. Kahlon, 2011. "Effectiveness of Refactoring Metrics Model to Identify Smells and Error Prone Classes in Open Source Software," ACM SIGSOFT Soft. Engg. Notes, Vol. 36, No. 5, pp. 1-11.

[36] Satwinder Singh and K.S. Kahlon, 2012. "Effectiveness of Encapsulation and Object Oriented Metrics to Refactor Code and Identify Error Prone Classes using Bad Smells," ACM SIGSOFT Soft. Engg. Notes, Vol. 37, No. 2, pp. 1-10.

[37] Francesca Arcelli Fontana, Pietro Braione and Marco Zanoni, 2012. "Automatic Detection of Bad Smells in Code: An Experimental Assessment," Journal of Object Technology, Vol. 11, No. 2, pp. 1-38.

[38] Daniele Romano and Paulius Raila, 2012. "Analyzing the Impact of Anti-patterns on Change-Tendency Using Fine-Grained Source Code Changes," Proc. of the 19th Working Conference on Reverse Engineering (WCRE), IEEE Computer Society Press.

[39] Rabia Bashir, 2012. "Anti-patterns in Open Source Software Development," Int. Journal of Computer Applications, Vol. 44, No. 3.

[40] Foutse Khomh, Massimiliano Di Penta, Yann-Gael Gueheneuc and Giuliano Antoniol, 2012. "An exploratory
study of the impact of anti-patterns on class change- and fault-tendency," Springer Science+Business Media, LLC, Aug.

[41] Hui Liu, Zhiyi Ma, Weizhong Shao, and Zhendong Niu, 2012. "Schedule of Bad Smell Detection and Resolution: A New Way to Save Effort," IEEE Trans. on Software Engg., Vol. 38, No. 1.

[42] Abdou Maiga, Nasir Ali, Neelesh Bhattacharya, Aminata Sabane, Yann-Gael Gueheneuc, and Esma Aimeur, 2012. "SMURF: A SVM-based Incremental Anti-pattern Detection Approach," Presented at 19th Working Conference on Reverse Engineering, pp. 466-475.

[43] Kwankamol Nongpong, 2012. "Integrating Code Smells Detection with Refactoring Tool Support," Ph.D. Dissertation, University of Wisconsin-Milwaukee.

[44] Fehmi Jaafar, Yann-Gael Gueheneuc, Sylvie Hamel, and Foutse Khomh, 2013. "Analysing Anti-patterns Static Relationships with Design Patterns," In Proceedings of the First Workshop on Patterns Promotion and Anti-patterns Prevention, EASST, Vol. 59.

[45] Harshpreet Kaur Saberwal, Satwinder Singh and Sarabjit Kaur, 2013. "Empirical Analysis of Open Source System for Predicting Smelly Classes," Inter. Journal of Engineering Research & Technology, Vol. 2, Issue 3, pp. 1-6.

[46] Harvinder Kaur and Puneet Jai Kaur, 2014. "A Study on Detection of Anti-Patterns in Object-Oriented Systems," Inter. Journal of Computer Applications, Vol. 93, No. 5, pp. 25-28.

[47] Pandiyavathi and Manochandar, 2014. "Detection of Optimal Refactoring Plans for Resolution of Code Smells," Inter. Journal of Advanced Research in Computer and Comm. Engg., Vol. 3, No. 5, pp. 6-11.

[48] Francis Palma, Naouel Moha and Yann-Gael Gueheneuc, 2013. "Detection of Process Anti-patterns: A BPEL Perspective," 17th IEEE Int. Workshop on Enterprise Distributed Object Computing, pp. 173-177.

[49] Francis Palma and Le An, 2014. "Investigating the Change-proneness of Service Patterns and Anti-patterns," 7th Inter. Conf. on Service-Oriented Computing and Applications, IEEE, pp. 1-8.

[50] Satwinder Singh and K.S. Kahlon, 2014. "Object Oriented Software Metrics Threshold Values at Quantitative Acceptable Risk Level," CSI Transactions on ICT, Springer, Vol. 2, No. 3, pp. 191-205.

[51] Jiang Dexun, Ma Peijun, Su Xiaohong and Wang Tiantian, 2014. "Functional Over-related Classes Bad Smell Detection and Refactoring Suggestions," International Journal of Software Engineering & Applications (IJSEA), Vol. 5, No. 2, pp. 29-47.
A review on Data clustering and an efficient k-Means Clustering Algorithm

Sukhjeet Kaur, BBSBEC, Fatehgarh Sahib
Satwinder Singh, BBSBEC, Fatehgarh Sahib
ABSTRACT
Clustering is the task of grouping and organizing a set of similar objects together. The sets of objects organized with the help of clustering are known as clusters. Clustering of software artifacts provides an automatic technique for discovering high-level abstract entities within a system. The k-means clustering algorithm, which came into existence in 1955, is efficient, understandable and an easy way to group objects of similar behavior. Since then, many algorithms have evolved over the years, but k-means is still widely used. The reason is the complication of designing a clustering algorithm and the ill-posed nature of the clustering problem. The history and a brief overview of popular clustering methods, key issues in designing clustering algorithms, and useful research on clustering are discussed in this work. A heuristic for k-means clustering known as Lloyd's algorithm, also named the filtering algorithm, is implemented and analyzed. The filtering algorithm requires a kd-tree as the only major data structure for clustering. The practical efficiency of the filtering algorithm is discussed in two ways in the scientific community: first, a data-sensitive analysis of the algorithm's running time, and second, a number of empirical studies both on synthetically generated data and on real data sets.

Keywords
Clustering, K-means, Developments, Filtering algorithm
I. INTRODUCTION
The growth of data increases day by day, so different kinds of data are available; images and videos in particular contain large amounts of data. It was estimated by Gantz [1] that the digital universe consumed approximately 281 exabytes in 2007, and it was projected to be 10 times that size by 2011 (1 exabyte is 1,000,000 terabytes). Most of this data is unstructured, which leads to difficulty in analyzing it. Tukey [2] broadly classified data analysis techniques into two major types: (i) exploratory or descriptive, meaning the researcher is interested in understanding general characteristics or structure of the high-dimensional data, but pre-specified models or hypotheses are not required, and (ii) confirmatory or inferential, meaning that the researcher is interested in confirming the validity of a hypothesis/model or a set of assumptions given the available data. Data clustering is a very good technique for organizing and analyzing such data.

A. Data Clustering
A differentiation is made between learning problems by Duda [3]: they are (i) classification (supervised) or (ii) clustering (unsupervised). Clustering is a more complicated problem than classification. In general, in classification there is a set of predefined classes and we want to know which class a new object belongs to. Clustering tries to group a set of objects and find whether there is some relationship between the objects.

The aim of data clustering, also known as cluster analysis, is to discover the natural grouping(s) of a set of patterns, points, or objects. Webster (Merriam-Webster Online Dictionary, 2008) defines cluster analysis as "a statistical classification technique for discovering whether the individuals of a population fall into different groups by making quantitative comparisons of multiple characteristics."

Importance of data clustering:
Cluster analysis is applicable in any discipline that involves the analysis of multivariate data. A search via Google Scholar (October 2014) found 2,600,000 entries with the words data and clustering that appeared in 2014. It is complex to comprehensively record the innumerable scientific fields and applications that have used clustering techniques, and the many published algorithms. Image segmentation, a significant problem in computer vision, can be framed as a clustering problem. Data clustering is applicable for the following main intents: underlying structure, natural classification and compression.

Background development:
According to JSTOR [4], data clustering first appeared in the title of a 1954 article dealing with anthropological data. Data clustering is also known as Q-analysis, typology, clumping, and taxonomy. Major approaches to clustering will be briefly reviewed. Frank
and Todeschini [5] defined the Jarvis–Patrick algorithm, which states the similarity between a pair of points as the number of common neighbors they share. Dempster [6] proposed the EM algorithm, which is usually applied to infer the parameters in mixture models. Agrawal [7] defined CLIQUE to find subspaces in the data with high-density clusters. The representation of the data points as nodes in a weighted graph is used in graph-theoretic clustering, also known as spectral clustering.

Clustering algorithms can be subdivided into hierarchical and partitional. K-means is the simplest and most popular partitional algorithm.

II. LITERATURE REVIEW

B. k-means clustering:
Jain and Dubes [8] defined the main steps of the K-means algorithm as: 1. Select an initial partition with K clusters; repeat steps 2 and 3 until cluster membership stabilizes. 2. Generate a new partition by assigning each pattern to its closest cluster center. 3. Compute new cluster centers. The K-means algorithm depends upon three user-specified parameters: the number of clusters K, the cluster initialization, and the distance metric. Choosing the value of K is the most critical. There are different extensions of the K-means algorithm; some of them tackle enlarged heuristics that involve a minimum cluster size and the merging and splitting of clusters. Two popular alternatives to K-means in the pattern recognition literature are ISODATA, proposed by Ball and Hall [9], and FORGY, proposed by Forgy [10]. Fuzzy c-means is an extension of K-means where each data point can be a member of multiple clusters with a membership value (soft assignment); it was proposed by Dunn [11] and later improved by Bezdek [12].

1) k-means' Filtering algorithm
A simple iterative scheme for finding a locally minimal solution is the basis of the well-known heuristic for solving the k-means problem, known as the k-means algorithm. Many variations of this algorithm are available. Here Lloyd's algorithm is discussed, whose original result was for scalar data.

A center is optimally placed at the centroid of the associated cluster; this is the fundamental idea of Lloyd's algorithm. Given any set of k centers Z, for each center z belonging to Z, let V(z) denote its neighborhood, that is, the set of data points for which z is the nearest neighbor. In geometric terminology, V(z) is the set of data points lying in the Voronoi cell of z [48]. Each stage of Lloyd's algorithm moves every center point z to the centroid of V(z) and then updates V(z) by re-computing the distance from each point to its nearest center. These steps are repeated until a stopping condition occurs.

Lloyd's algorithm presumes that the data are memory resident. Lloyd's algorithm is simple and flexible, so it is preferred for statistical analysis. The final distortion of any other clustering algorithm can be improved by applying Lloyd's algorithm as a post-processing stage; this can provide a significant improvement. An undemanding implementation of Lloyd's algorithm can be sluggish: the reason is the computing cost required for finding the nearest neighbors. A well-organized and simple implementation of Lloyd's algorithm, known as the filtering algorithm, is presented in this paper. The very first step of this algorithm is the storage of the data points in a kd-tree. For the purpose of computing and moving each center to the centroid of its associated neighbors, a subset of candidate centers is maintained for each node of the tree. The candidates for each node are propagated to the node's children as they are condensed or filtered. As the kd-tree is computed for the data points rather than for the centers, an update of this structure at each stage of Lloyd's algorithm is not required. This is not a new clustering method, but rather an efficient implementation of Lloyd's algorithm. An analysis can be done of the time spent in each stage of the filtering algorithm. Traditional worst-case analysis is not really appropriate here since, in principle, the algorithm might encounter scenarios in which it degenerates to brute-force search. Empirical analysis can be done on synthetic and real data to determine the variations in running time as a function of cluster separation, data set size, and dimension.

Fig. 1. The filtering algorithm

Fig. 2. Candidate z is pruned because g lies entirely on one side of the bisecting hyperplane r.

C. Difficulties and challenges for users regarding clustering
The elementary challenges associated with clustering, highlighted by Jain and Dubes [13], are as follows:
(a) What is a cluster?
(b) What features should be used?
(c) Should the data be normalized?
(d) Does the data contain any outliers?
(e) How do we define the pair-wise similarity?
(f) How many clusters are present in the data?
(g) Which clustering method should be used?
(h) Does the data have any clustering tendency?

Some of these challenges are highlighted and illustrated below. When the data representation is good, making clusters becomes easy: the clusters become compact and isolated, and even a simple clustering algorithm can easily find them. The purpose of grouping therefore plays an important role in the data representation. Most methods for automatically determining the number of clusters cast it as a problem of model selection. Usually, the clustering algorithm is run with different values of k, and the best value of k is selected based on a predetermined criterion. The minimum message length (MML) criteria, proposed by Wallace and Boulton and by Wallace and Freeman [14], were used by Figueiredo and Jain [15] in conjunction with the Gaussian mixture model (GMM) to estimate K. Cluster validity indices can be defined based on three different criteria: internal, relative, and external.

D. Comparison of clustering algorithms:
Even when different clustering algorithms are applied to the same data, the results can be entirely different partitions. FORGY, ISODATA, CLUSTER, and WISH are partitional algorithms that minimize the squared error criterion. MST (minimum spanning tree) works as a single-link hierarchical algorithm, and JP is a nearest neighbor clustering algorithm. A partition can be generated from a hierarchical algorithm by specifying a threshold on the similarity.

Clustering algorithms were formally analyzed by Fisher and van Ness [16] with the objective of comparing them and providing guidance in choosing a clustering procedure. They defined a set of admissibility criteria that test the sensitivity of clustering algorithms with respect to changes that do not alter the essential structure of the data. A clustering is called A-admissible if it satisfies criterion A.

E. Trends in clustering

The information explosion creates large amounts of highly diverse data, both structured and unstructured. Most clustering approaches ignore the structure in the objects and use a feature-vector-based representation for both structured and unstructured data. The traditional view of data partitioning based on vector-based feature representation does not always serve as an adequate framework. A brief summary of some of the recent trends in data clustering is presented below.

1) Developments:
Fred and Jain [17] stated that the development of ensemble methods for unsupervised learning has been motivated by the success of ensemble methods for supervised learning. The principal idea is that, by taking different looks at the same data, different partitions of the same data can be generated. By combining the resulting partitions, it is possible to obtain a good data partitioning even when the clusters are not compact and well separated. A co-occurrence matrix that provided a good separation of the clusters was then used to combine these partitions, and the resulting clustering is obtained from the data based on this new pair-wise similarity. Many different ways of generating a clustering ensemble and then combining the partitions are available. For example, multiple data partitions can be generated by: (i) applying different clustering algorithms, (ii) applying the same clustering algorithm with different values of parameters or initializations, and (iii) combining different data representations (feature spaces) and clustering algorithms.

Semi-supervised clustering:
Chapelle [18] stated that clustering algorithms that utilize such side information are said to be operating in a semi-supervised mode. There are two open questions: (i) how should the side information be specified? and (ii) how is it obtained in practice? A must-link constraint specifies that the point pair connected by the constraint belongs to the same cluster. On the other hand, a cannot-link constraint specifies that the point pair connected by the constraint does not belong to the same cluster. Other approaches for including side information include (i) "seeding", where some labeled data is used along with a large amount of unlabeled data for better clustering, as stated by Basu [19], and (ii) methods that allow encouraging or discouraging some links, as stated by Law [20] and Figueiredo [21].

Large-scale clustering:
Large-scale data clustering addresses the challenge of clustering millions of data points that are represented in thousands of features. The application of large-scale data clustering to content-based image retrieval is reviewed below.

The goal of Content Based Image Retrieval (CBIR) is to retrieve images visually similar to a given query image. Despite being studied for the past 15 years, this topic has not seen much success. Datta [22] stated that a 2008 survey on CBIR highlights the different approaches used for CBIR over time. Recent approaches for CBIR use key-point-based features.

On the other hand, text retrieval applications are much faster: it takes about one-tenth of a second to search 10 billion documents in Google. A large number of clustering algorithms have been developed to efficiently handle large-size data sets. Most of these studies can be classified into five categories: 1. Efficient Nearest Neighbor (NN) search, 2. Data summarization, 3. Distributed computing, 4. Incremental clustering, and 5. Sampling-based clustering.

Multi-way clustering:
Objects or entities that have to be clustered are often formed from a combination of related heterogeneous components. An object can be converted into a pooled feature vector of its components, but this is not a natural representation of the objects and may result in poor clustering performance. Hartigan [23] and Mirkin [24] defined co-clustering as aiming to cluster both features and instances of the data simultaneously, to identify the subset of features where the resulting


clusters are meaningful according to a certain evaluation criterion. Bi-dimensional clustering, double clustering, coupled clustering, and bimodal clustering are its other names. Bekkerman [25] stated that the co-clustering framework was extended to multi-way clustering, to cluster a set of objects by simultaneously clustering their heterogeneous components. The problem is much more challenging because of the different relationships involved.

Heterogeneous data:
Heterogeneous data refers to data where the objects may not be naturally represented using a fixed-length feature vector.

Rank data: Consider a dataset generated by the ranking of a set of n movies by different people, where only some of the n objects are ranked. The task is to cluster the users whose rankings are similar and also to identify the 'representative rankings' of each group.

Dynamic data: Dynamic data, as opposed to static data, can change over the course of time, e.g., blogs, Web pages, etc. A data stream is a kind of dynamic data that is transient in nature and cannot be stored on a disk.

Graph data: Several objects, such as chemical compounds, protein structures, etc., can be represented most naturally as graphs.

Relational data: Another area that has attracted considerable interest is clustering relational (network) data. Unlike the clustering of graph data, where the objective is to partition a collection of graphs into disjoint groups, the task here is to partition a large graph (i.e., network) into cohesive subgraphs based on their link structure and node attributes.

III. CONCLUSION

The various developments, methods and improvements made over the years for the different clustering techniques are discussed in this paper. Users still face many difficulties and challenges regarding clustering that remain to be resolved. The k-means algorithm is the simplest and best-known algorithm and can be easily implemented, so its heuristic, Lloyd's algorithm, is presented. The kd-tree is the main significance of this algorithm, as it needs to be built only once for the given data points. Because these data points do not vary, efficiency is achieved.

IV. FUTURE WORK

Future research is required to make the vital amendments in clustering methods to advance the competence and implementation of clustering algorithms.

REFERENCES

[1] Gantz, John F., "The diverse and exploding digital universe", http://www.emc.com/collateral/analyst-reports/diverse-exploding-digital-universe.pdf
[2] Tukey, John Wilder, "Exploratory Data Analysis". Addison-Wesley. Umeyama, S., 1988. An eigendecomposition approach to weighted graph matching problems. IEEE Trans. Pattern Anal. Machine Intell. 10(5), 695–703.
[3] Duda, R., Hart, P., Stork, D., "Pattern Classification". John Wiley and Sons, New York.
[4] JSTOR, 2009. JSTOR. <http://www.jstor.org>.
[5] Frank, Ildiko E., Todeschini, Roberto, 1994. "Data Analysis Handbook". Elsevier Science Inc., pp. 227–228.
[6] Dempster, A.P., Laird, N.M., Rubin, D.B., 1977. "Maximum likelihood from incomplete data via the EM algorithm". J. Roy. Statist. Soc. 39, 1–38.
[7] Agrawal, Rakesh, Gehrke, Johannes, Gunopulos, Dimitrios, Raghavan, Prabhakar, 1998. "Automatic subspace clustering of high dimensional data for data mining applications". In: Proc. ACM SIGMOD, pp. 94–105.
[8] Jain, Anil K., Dubes, Richard C., 1988. "Algorithms for Clustering Data". Prentice Hall. Jain, Anil K., Flynn, P., 1996. Image segmentation using clustering. In: Advances in Image Understanding. IEEE Computer Society Press, pp. 65–83.
[9] Ball, G., Hall, D., 1965. ISODATA, a novel method of data analysis and pattern classification. Technical report NTIS AD 699616. Stanford Research Institute, Stanford, CA.
[10] Forgy, E.W., 1965. Cluster analysis of multivariate data: Efficiency vs. interpretability of classifications. Biometrics 21, 768–769.
[11] Dunn, J.C., 1973. A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. J. Cybernet. 3, 32–57.
[12] Bezdek, J.C., 1981. "Pattern Recognition with Fuzzy Objective Function Algorithms". Plenum Press.
[13] Jain, Anil K., Dubes, Richard C., 1988. "Algorithms for Clustering Data". Prentice Hall.
[14] Wallace, C.S., Boulton, D.M., 1968. "An information measure for classification". Comput. J. 11, 185–195.
[15] Figueiredo, Mario, Jain, Anil K., 2002. "Unsupervised learning of finite mixture models". IEEE Trans. Pattern Anal. Machine Intell. 24(3), 381–396.
[16] Fisher, L., van Ness, J., 1971. "Admissible clustering procedures". Biometrika.
[17] Fred, A., Jain, A.K., 2002. "Data clustering using evidence accumulation". In: Proc. Internat. Conf. Pattern Recognition (ICPR).
[18] Chapelle, O., Scholkopf, B., Zien, A. (Eds.), 2006. "Semi-Supervised Learning". MIT Press, Cambridge, MA.
[19] Basu, Sugato, Banerjee, Arindam, Mooney, Raymond, 2002. "Semi-supervised clustering by seeding". In: Proc. 19th Internat. Conf. Machine Learning.
[20] Law, Martin, Topchy, Alexander, Jain, A.K., 2005. "Model-based clustering with probabilistic constraints". In: Proc. SIAM Conf. on Data Mining, pp. 641–645.
[21] Figueiredo, M.A.T., Chang, D.S., Murino, V., 2006. "Clustering under prior knowledge with application to image segmentation". Adv. Neural Inform. Process. Systems 19, 401–408.
[22] Datta, Ritendra, Joshi, Dhiraj, Li, Jia, Wang, James Z., 2008. "Image retrieval: Ideas, influences, and trends of the new age". ACM Computing Surveys 40(2) (Article 5).
[23] Hartigan, J.A., 1972. "Direct clustering of a data matrix". J. Amer. Statist. Assoc. 67(337), 123–132.
[24] Mirkin, Boris, 1996. "Mathematical Classification and Clustering". Kluwer Academic Publishers.
[25] Bekkerman, Ron, El-Yaniv, Ran, McCallum, Andrew, 2005. "Multi-way distributional clustering via pairwise interactions". In: Proc. 22nd Internat. Conf. Machine Learning, pp. 41–48.


Data Mining Technique to Predict Mutations from Human Genetic Information in Bioinformatics: A Review

Manpreet Kaur, Department of Computer Science & Engg, BGIET, Sangrur, Punjab, India, manpreet033@gmail.com
Shivani Kang, Department of Computer Science & Engg, BGIET, Sangrur, Punjab, India, kangshivani@gmail.com

ABSTRACT
With the widespread use of databases and the explosive growth in their sizes, there is a need to effectively utilize these massive volumes of data. To handle this data effectively, new techniques and tools are needed to acquire useful information from the large amount of data. This need gave rise to a new research field called Knowledge Discovery in Databases (KDD) or Data Mining. Based on the type of knowledge that is mined, data mining techniques can be mainly classified into association rules, decision trees and clustering. Until recently, biology lacked the tools to analyze massive repositories of information. Data mining techniques are effectively used to extract meaningful relationships from these data and can also monitor the changing trends in data.

Keywords
Data mining, Nucleotide, Mutations, Association.

1. INTRODUCTION
In recent years biological data have been growing rapidly in size and complexity. A huge amount of data is available for extracting high-level information in the form of interesting patterns from the database. Biological data contain DNA sequences and protein sequences. Bioinformatics involves the manipulation, searching and data mining of DNA sequence data. The development of techniques to store and search DNA sequences has led to widely applied advances in computer science, especially string searching algorithms, machine learning and database theory. Research in bioinformatics includes the development of applications using data mining techniques to solve biological problems. Therefore, there is great potential to increase the interaction between data mining and bioinformatics.

1.1 Bioinformatics
The term bioinformatics was coined by Paulien Hogeweg in 1979 for the study of informatic processes in biotic systems. Bioinformatics can be defined as the application of computer technology to the management of biological information. Bioinformatics is the science of storing, extracting, organizing, analyzing, interpreting and utilizing information from biological sequences and molecules. It has been mainly used in DNA sequencing and mapping techniques. Over the past few decades, rapid developments in genomic and other molecular research technologies, combined with developments in information technologies, have produced a tremendous amount of information related to molecular biology. The primary goal of bioinformatics is to increase the understanding of biological processes. [3]

Bioinformatics and computational biology are concerned with the use of computation to understand biological phenomena and to acquire and exploit biological data, often large-scale data. Methods from bioinformatics and computational biology are increasingly used to augment or leverage traditional laboratory and observation-based biology. These methods have become critical in biology due to recent changes in our ability and determination to acquire massive biological data sets, and due to the ubiquitous, successful biological insights that have come from the exploitation of those data. This transformation from a data-poor to a data-rich field began with DNA sequence data, but is now occurring in many other areas of biology. [1]

1.1.1 Molecular biology information - DNA and mutation

1.1.1.1 DNA
DNA is made of a long sequence of smaller units strung together. There are four basic types of unit: A, T, G, and C. These letters represent the type of base each unit carries: adenine, thymine, guanine, and cytosine.

The sequence of these bases encodes instructions. Some parts of your DNA are control centers for turning genes on and off, some parts have no function, and some parts have a function that we don't understand yet. Other parts of your DNA are genes that carry the instructions for making proteins, which are long chains of amino acids. These proteins help build an organism.

1.1.1.2 Mutations
A mutation is a change in DNA, the hereditary material of life. An organism's DNA affects how it looks, how it behaves, and its physiology. So a change in an organism's DNA can cause changes in all aspects of its life.

Types of mutations:

a) Substitution: A substitution is a mutation that exchanges one base for another (i.e., a change in a single "chemical letter" such as switching an A to a G). Such a substitution could:
 change a codon to one that encodes a different amino acid and cause a small change in the protein produced.
 change a codon to one that encodes the same amino acid and cause no change in the protein produced. These are called silent mutations.
 change an amino-acid-coding codon to a single "stop" codon and cause an incomplete protein. This can have serious effects, since the incomplete protein probably won't function.
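The point-mutation types described here can be sketched as plain string operations. This is a hypothetical minimal illustration (0-based positions, our own helper names); the inputs reproduce the CTGGAG and 'the fat cat sat' examples given in the text.

```python
def substitute(seq, pos, base):
    """Exchange the base at pos for another (substitution)."""
    return seq[:pos] + base + seq[pos + 1:]

def insert_bases(seq, pos, bases):
    """Insert extra bases at a new place in the sequence (insertion)."""
    return seq[:pos] + bases + seq[pos:]

def delete_bases(seq, pos, n):
    """Lose a section of n bases starting at pos (deletion)."""
    return seq[:pos] + seq[pos + n:]

def codons(seq):
    """Read a coding sequence in groups of three bases."""
    return [seq[i:i + 3] for i in range(0, len(seq), 3)]

print(substitute("CTGGAG", 4, "G"))      # CTGGGG
print(insert_bases("CTGGAG", 4, "TGG"))  # CTGGTGGAG
print(delete_bases("CTGGAG", 2, 2))      # CTAG
# frameshift: deleting a single base shifts every downstream codon
print(codons(delete_bases("thefatcatsat", 0, 1)))  # ['hef', 'atc', 'ats', 'at']
```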


Eg. CTGGAG will change to CTGGGG after substitution.

b) Insertion: Insertions are mutations in which extra base pairs are inserted into a new place in the DNA.

Eg. CTGGAG will change to CTGGTGGAG after insertion.

c) Deletion: Deletions are mutations in which a section of DNA is lost, or deleted.

Eg. CTGGAG will change to CTAG after deletion.

d) Frameshift: Since protein-coding DNA is divided into codons three bases long, insertions and deletions can alter a gene so that its message is no longer correctly parsed. These changes are called frameshifts.

Eg. Suppose the sequence is 'the fat cat sat'; if a t gets deleted, after the frameshift the sequence becomes 'hef atc ats at'. [2]

1.2 KDD, Data Mining and its Techniques
Researchers identify two fundamental goals of data mining: prediction and description. Prediction makes use of existing variables in the database in order to predict future values of interest, while description focuses on finding patterns describing the data and their subsequent presentation for user interpretation. The relative importance of prediction and description differs with the primary application and the technique. With the enormous amount of data stored in files, databases, and other repositories, it is increasingly important, if not necessary, to develop powerful means for the analysis and explanation of data and for the extraction of interesting knowledge that could help in decision-making. Data mining is also popularly known as Knowledge Discovery in Databases (KDD).

Fig 1 - Steps of KDD Process

The overall process of finding and interpreting patterns from data involves the repeated application of the following steps (KDD process):

First Stage: Selecting a data-set and focusing on the data samples on which the discovery is to be performed.

Second Stage: Preprocessing the selected data to remove irrelevant data, i.e. outliers, noise etc. Missing data fields are also handled in this stage.

Third Stage: Finding useful features of the data depending on the goal of the task, and using dimensionality reduction or transformation methods to reduce the effective number of variables under consideration.

Fourth Stage: Depending on the goal of the KDD process, choosing a data mining task such as clustering, regression, classification etc., then choosing the appropriate data mining algorithm for searching patterns satisfying the constraints of the overall KDD process.

Fifth Stage: Searching for patterns of interest (which are in a particular representational form). Interpretation of mined patterns and consolidating discovered knowledge is the main objective of this stage.

Data mining is a step in the knowledge discovery process. It refers to the application of algorithms for extracting patterns from data without the additional steps of the KDD process.

1.2.1 Techniques of Data Mining:

1. Characterization: This kind of functionality in data mining summarizes general features of objects in a target class. For example: characterize graduate students in Science.

2. Discrimination: This functionality is the comparison of general features of objects between a target class and a contrasting class. For example: compare students in Science and students in Arts.

3. Association: Given a database with n Boolean attributes A1, A2, ..., An, the task of discovering association rules is to derive implications of the form X -> Y, where X and Y are sets of attributes from the Ai (i = 1 to n) and X ∩ Y = ∅, with support S% and confidence C% (user defined), where support S% indicates that S% of transactions in the database contain both X and Y, and confidence C% indicates that C% of the transactions containing X in the database also contain Y. For example: Bread -> Milk with support 0.4 and confidence 0.7 indicates that when people buy Bread they also buy Milk 70% of the time, and that 40% of transactions contain both these items.

4. Prediction: In this functionality, some unknown or missing attribute values are predicted based on other information. For example: forecast the sale value for next week based on available data.

5. Classification: A set of objects (records), called the training data-set T, is available. This data-set belongs to a population which is partitioned into K mutually exclusive and exhaustive groups or classes. Each object in T is labeled with exactly one class and serves as an example. The objective is to classify or assign an unlabelled object (belonging to the parent population) to one of the classes under conditions of uncertainty, based on the observations made on the training data-set T, i.e. the classification problem strives to construct a classification scheme which classifies an unlabelled object (i.e. whose class is unknown) into the correct class with some known confidence.

6. Clustering: Here, data mining organizes data into meaningful sub-groups (clusters) such that points within a group are similar to each other, and as different as possible from the points in the other groups. It is an Unsupervised


Classification. For example: group crime locations to find distribution patterns. In this, interclass similarity is minimized and intra-class similarity is maximized.

7. Outlier Analysis: Here, data mining is done to identify and explain exceptions. For example, in market basket data analysis, an outlier can be a transaction which happens unusually, i.e. surprisingly.

8. Time-series Analysis: Here, data mining analyzes trends and deviations, regression, sequential patterns and similar sequences. [10]

1.3 Need of Data Mining in Bioinformatics

The entire human genome, the complete set of genetic information within each human cell, has now been determined. Understanding these genetic instructions promises to allow scientists to better understand the nature of diseases and their cures, to identify the mechanisms underlying biological processes such as growth and ageing, and to clearly track our evolution and its relationship with other species. The key obstacle lying between investigators and the knowledge they seek is the sheer volume of data available.

Biologists, like most natural scientists, are trained primarily to gather new information. Until recently, biology lacked the tools to analyze massive repositories of information such as the human genome database. Luckily, the discipline of computer science has been developing methods and approaches well suited to help biologists manage and analyze the incredible amounts of data that promise to profoundly improve the human condition. Data mining is one such technology.

Applications of data mining to bioinformatics include gene finding, protein function domain detection, function motif detection, protein function inference, disease diagnosis, disease prognosis, disease treatment optimization, protein and gene interaction network reconstruction, data cleansing, and protein sub-cellular location prediction.

For example, microarray technologies are used to predict a patient's outcome. On the basis of patients' genotypic microarray data, their survival time and risk of tumour metastasis or recurrence can be estimated. Machine learning can be used for peptide identification through mass spectroscopy. Correlation among fragment ions in a tandem mass spectrum is crucial in reducing stochastic mismatches for peptide identification by database searching. An efficient scoring algorithm that considers the correlative information in a tuneable and comprehensive manner is highly desirable.

2. Related Works

Data mining is used in various applications of bioinformatics. The various applications of bioinformatics where data mining can be used have been surveyed by [3], who discussed the grand areas of research in bioinformatics such as sequence analysis, genome annotation, analysis of gene expression, analysis of protein expression, analysis of mutations in cancer, comparative genomics etc.

Many data mining techniques are available to discover meaningful patterns and rules. These techniques, discussed in [4], include statistics, artificial intelligence techniques, the decision tree approach, genetic algorithms, visualization etc.

The work of mining gene expression databases for association rules has been proposed by [5]. In this research work, they developed an algorithm for effectively mining association rules from gene expression data, using a data set of 300 expression profiles for yeast. By analysing these rules, numerous associations between certain genes can be extracted.

The work of data mining in bioinformatics using WEKA has been proposed by [6]. WEKA is based on machine learning techniques. The WEKA explorer provides a graphical user interface. It is used for automatic classification, clustering and feature selection, and can also be used for comparing various machine learning algorithms.

The work of exploring DNA sequences using pattern mining has been proposed by [7]. In this analysis, they performed a comparison between the nucleotides of a normal breast gene and the BRCA1 gene to predict whether the patient is suffering from breast cancer or not.

The work of predicting the outcome of thoracic surgery by data mining techniques has been proposed by [8]. The dataset contains the information of 470 patients, of whom 400 survived a year after surgery and 70 failed to survive at least 1 year after the surgery. For the analysis of the data they used the WEKA toolkit, and for prediction they used data mining techniques like Naïve Bayes, Simple Logistic regression, J48 and Multilayer perceptron.

The work of mining DNA sequences to predict sites at which mutations cause genetic diseases has been proposed by [9]. They developed a data mining system called rSNP_Guide to discover regulatory sites in DNA sequences and analyse changes in nucleotides to predict the type of mutations which could be the cause of genetic diseases.

Diseases caused by abnormalities in an individual's genome are among the most focused areas of research at present. Genetic mutations accompanied by disorders make the problem more complex.

DNA is made of a long sequence of smaller units strung together. There are four basic types of unit: A, T, G, and C. These letters represent the type of base each unit carries: adenine, thymine, guanine and cytosine. Mutations are changes in the pattern of DNA due to some diseases or due to some treatment, such as cancer treatment.

We can predict mutations in the pattern of DNA using data mining tools by extracting information about the patterns followed by nucleotide chains of DNA. By monitoring changes in patterns, we will better be able to recognize patterns of nucleotides which are actually changing due to some mutations or disorders. This information can be useful in medical research for developing tools and techniques to handle those nucleotide patterns.

To extract this information about changes in the DNA pattern, we can use a data mining technique like association: we find the association rules, and from these association rules we extract the information about mutations.

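The support/confidence computation behind the association technique described here can be sketched as follows. The transaction data and thresholds are purely hypothetical, chosen only to illustrate the two measures.

```python
# toy market-basket data (hypothetical, for illustration only)
transactions = [
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"bread"},
    {"milk"},
    {"bread", "milk", "butter"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs, transactions):
    """Of the transactions containing lhs, the fraction also containing rhs."""
    return support(lhs | rhs, transactions) / support(lhs, transactions)

print(support({"bread", "milk"}, transactions))       # 0.6
print(confidence({"bread"}, {"milk"}, transactions))  # 0.75
```

A rule such as bread -> milk would be reported whenever both values exceed the user-defined support and confidence thresholds.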

Although data mining has become popular as an emerging technique, several issues still have to be resolved to make it useful in diverse domains. Some of the issues faced by data mining are quality of data, inter-operability, security and privacy etc. Beyond all this, one major issue with data mining is its limited support for the analysis of real-time data, e.g. time series data and evolving data streams. Data mining does not capture all these changes; it is a static method which analyses the (static) data it is given as input.

To address these shortcomings, we need to develop a new data mining technique to capture the changing trends in databases.

3. CONCLUSION

So far, many genetic diseases have no effective cure, and scientists are facing difficulties in finding out the causes. Bioinformatics researchers have identified techniques to analyze huge chunks of genetic information. The techniques called next generation data mining have revolutionized a diversity of research in biology. Data mining will help us to detect diseases from sequences of genomic data. Scientists can effectively handle the large volume of data and can easily extract the useful information. This useful information will help us find the defective genome and will help in curing the disease. This technique can be used in cancer treatment, drug treatment, AIDS etc. Data mining in the future will help us study genomic data in more detail and provide information about how our body is formed and how its working is associated with the genome. It is therefore an effective technique to solve the problem of enormous data faced by researchers in their quest to solve the puzzles of our life.

We can use this data mining technique in various fields. This application can be extended to other fields like market basket analysis, to tell how things should be placed in baskets; media applications (trends of news and media quality) etc.; and fraud detection techniques, such as finding outliers or frauds in criminal investigation data and detecting worms and viruses in cyber attacks.

REFERENCES

[1] Ezziane, Zoheir 2006. Applications of artificial intelligence in bioinformatics: A review. Expert Systems with Applications (Elsevier), Vol. 30, pg. 2-10.
[2] Lodish, H., Berk, A., Zipursky, S. Lawrence, Matsudaira, P., Baltimore, D. and Darnell, J. 2000. Molecular Cell Biology, 4th Edition.
[3] Raza, Khalid 2006. Application of Data Mining in Bioinformatics. Indian Journal of Computer Science and Engineering, Vol. 1, No. 2, p. 114-118, ISSN.
[4] Lee, Sang Jun and Siau, Keng 2001. A review of Data Mining Techniques. Vol. 101/1, p. 41-46, ISSN.
[5] Creighton, Chad and Hanash, Samir 2003. Mining gene expression databases for association rules. University of Michigan. Vol. 19, No. 1, pg. 79-86.
[6] Frank, E., Hall, M., Trigg, L., Holmes, G. and Witten, Ian H. 2004. Data Mining in Bioinformatics Using Weka. University of Waikato. Vol. 20, No. 15, pg. 2479-2481.
[7] Kawade, Dipak R. and Oza, Kavita S. 2013. Exploration of DNA Sequences Using Pattern Mining. International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS), pg. 144-148, ISSN.
[8] Harun, Ahasan Uddin and Alam, Nure 2015. Predicting Outcome of Thoracic Surgery By Data Mining Techniques. International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 5, Issue 1, pg. 7-10, ISSN.
[9] Ponomarenko, J., Merkulova, T., Orlova, G., Fokin, O., Gorshkov, E. and Ponomarenko, M. 2002. Mining DNA sequences to predict sites which mutations cause genetic diseases, Vol. 15, pg. 225-233, ISSN.
[10] Saurkar, Anand V., Bhujade, V., Bhagat, P. and Khaparde, A. 2014. A Review Paper on various Data Mining Techniques, Vol. 4, Issue 4, pg. 98-101, ISSN.

309
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Analysis of AODV, OLSR and ZRP Routing Protocols in MANET under Cooperative Black Hole Attack

Sukhman, Dept of Computer Science, RIMT, Mandi Gobindgarh, India
Rupinder Kaur Gurm, Dept of Computer Science, RIMT, Mandi Gobindgarh, India
Gurjot Singh Sodhi, Dept of Computer Science, Tangori, Mohali, India

ABSTRACT
An ad hoc network (MANET) is a set of different types of mobile nodes connected with each other through wireless links. A MANET can be deployed at any time and at any place at low cost. In a MANET, routing protocols are used to connect nodes which are not in the direct range of each other. These protocols are mainly of three types, i.e. reactive (on-demand), proactive (table-driven) and hybrid routing protocols. This research effort focuses on a comparative investigation of routing protocols under the cooperative black hole attack: a scenario is created and simulated, and the performance is investigated in terms of packet delivery ratio, average end-to-end delay and throughput.

Figure I Mobile Ad hoc network (MANET)

Keywords
MANET, AODV, OLSR, ZRP, Black hole.

1. INTRODUCTION
A mobile ad hoc network (MANET), as its name implies, is a collection of mobile nodes that can communicate with each other without the use of a predefined infrastructure or centralized administration. Mobile ad hoc networks have attributes such as wireless connection, continuously changing topology, distributed operation and ease of deployment. MANETs face different levels of challenges due to their varying mobile characteristics. The major goal of these networks is to bring the idea of mobility into real-life networks. These networks are known for their infrastructure-less characteristics. The nodes are free to move anywhere, and hence the communication links may be broken at any moment.

2. ROUTING PROTOCOLS
In MANETs, routing protocols are used for communication. They are classified into different categories according to the methods used during the route discovery and route maintenance procedures.

Figure II Routing Protocols in MANET

2.1 AODV (Ad hoc On-demand Distance Vector)
The Ad hoc On-Demand Distance Vector (AODV) routing protocol [14] is designed for use in ad hoc mobile networks. AODV is a reactive protocol: routes are created only when they are needed. It uses traditional routing tables, with one entry per destination, and sequence numbers to determine whether routing information is up to date and to prevent routing loops. An important feature of AODV is the maintenance of time-based states in each node: a routing entry that has not been used recently expires, and if a route is broken the neighbours can be notified. Route discovery is based on query and reply cycles, and route information is stored in all intermediate nodes along the route in the form of routing table entries. The following control packets are used: a route request message (RREQ) is broadcast by a node requiring a route to another node, a route reply message (RREP) is sent back to the source of the RREQ, and a route error message (RERR) is sent to notify other nodes of the loss of a link. HELLO messages are used for detecting and monitoring links to neighbours.
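The RREQ/RREP cycle described above can be sketched as a toy model. This is an illustration, not the AODV specification: the network is a static adjacency dict and the `discover_route` helper is hypothetical; real AODV additionally maintains sequence numbers, entry timeouts and RERR handling.

```python
from collections import deque

def discover_route(adj, src, dst):
    """Illustrative AODV-style discovery: flood an RREQ hop by hop, let each
    node remember the neighbour it first heard the RREQ from (its reverse
    route), then trace the RREP back along that reverse route."""
    reverse_route = {src: None}          # node -> neighbour towards src
    queue = deque([src])
    while queue:                         # RREQ flood (breadth-first)
        node = queue.popleft()
        if node == dst:
            break
        for nbr in adj[node]:
            if nbr not in reverse_route:     # nodes ignore duplicate RREQs
                reverse_route[nbr] = node
                queue.append(nbr)
    if dst not in reverse_route:
        return None                      # no RREP ever comes back
    path = [dst]                         # RREP travels the reverse route,
    while reverse_route[path[-1]] is not None:   # installing forward entries
        path.append(reverse_route[path[-1]])
    return path[::-1]                    # forward route src -> dst

adj = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "C"],
       "C": ["A", "B", "D"], "D": ["C"]}
print(discover_route(adj, "S", "D"))     # ['S', 'A', 'C', 'D']
```

Because the flood is breadth-first, the first RREQ to reach the destination traces a minimum-hop route, which mirrors why AODV tends to return short paths.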


Figure III (a) AODV Route Discovery

Figure III (b) Route Error Message in AODV

2.2 OLSR (Optimized Link State Routing)
The Optimized Link State Routing protocol (OLSR) [4] is developed for mobile ad hoc networks and is well suited to large and dense mobile networks. It operates as a table-driven, proactive protocol; that is, it regularly exchanges topology information with the other nodes of the network. Each node selects a set of its neighbour nodes as "multipoint relays" (MPRs) [2], [6]. MPRs are responsible for forwarding control traffic and declaring link state information in the network, and they provide an efficient mechanism for flooding control traffic by reducing the number of transmissions required.

Figure IV Topology graph of network

2.3 ZRP (Zone Routing Protocol)
ZRP is a wireless ad hoc routing protocol that uses both proactive and on-demand routing policies [4]. Proactive routing consumes needless bandwidth to maintain routing information, while reactive routing involves route acquisition latency; reactive routing also inefficiently floods the entire network during the route establishment phase. The aim of the Zone Routing Protocol (ZRP) is to address these problems by joining the best properties of both approaches, so ZRP can be called a hybrid reactive/proactive routing protocol [5].

2.3.1 Architecture
The architecture of the Zone Routing Protocol is based on the concept of zones: a large network is partitioned into a number of zones, a routing zone is defined for each node separately, and the zones of neighbouring nodes overlap. A routing zone has a radius r expressed in hops, so the zone includes the nodes whose distance from the node in question is at most r hops. Figure V illustrates that the routing zone of S consists of the nodes A–I, but not K. In the illustration the radius is drawn as a circle around the node. It should, however, be noted that the zone is not a physical distance; rather, it is defined in hops.

Figure V Example routing zone with r=2

The nodes of a zone are partitioned into interior nodes and horizon nodes. Horizon nodes are those whose minimum distance from the centre node is exactly equal to the zone radius r; nodes whose minimum distance is less than r are interior nodes; and nodes whose distance is greater than r are exterior nodes. In Figure V, the nodes A, B, C, D, E and F are interior nodes, the nodes G, H, I and J are horizon nodes, and node K lies outside the routing zone. The node G can be reached in two ways, one with hop count 2 and another with hop count 3; it is nevertheless said to be within the zone, because the shortest path is less than or equal to the zone radius [6].

3. BLACK HOLE ATTACK
Black holes refer to places in the network where incoming traffic is dropped without informing the source that the data did not reach its intended recipient. In a black hole attack, a node uses the routing protocol to advertise itself as having the shortest path to the node for which the packet is destined. A black hole attack can occur when a malicious node present in the network directly attacks the data traffic and intentionally drops, delays or alters the data traffic passing through it [8].
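The effect just described can be sketched as a toy forwarding model. Everything here is hypothetical (the node names, the static routes and the `delivery_ratio` helper): it only illustrates that a malicious intermediate node on an advertised route silently discards the data traffic.

```python
def delivery_ratio(flows, black_holes):
    """Fraction of packets delivered over several (path, packet_count) flows.
    A flow loses all its packets if any intermediate hop on its route is a
    black hole node, which drops traffic without notifying the source."""
    sent = sum(count for _, count in flows)
    delivered = sum(count for path, count in flows
                    if not any(n in black_holes for n in path[1:-1]))
    return delivered / sent

flows = [(["S", "A", "D"], 100),       # clean route
         (["S", "M", "D"], 100)]       # M advertised the "shortest" path
print(delivery_ratio(flows, black_holes=set()))    # 1.0
print(delivery_ratio(flows, black_holes={"M"}))    # 0.5
```

In a cooperative attack, several such nodes would appear in `black_holes` at once, so even routes that avoid one malicious node can be swallowed by another.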


In a black hole attack, the black hole node acts like a black hole in the universe: it consumes all the traffic directed towards itself and does not forward it to other nodes. There are two types of black hole attack.

3.1 Single black hole node
In a black hole attack with a single malicious node, only one node acts as a malicious node in a zone; the other nodes of the zone are authentic [10].

Figure VI Single malicious node

3.2 Collaborative black hole attack
In a collaborative black hole attack, multiple nodes in a group act as malicious nodes. It is also known as a black hole attack with multiple malicious nodes [10].

Figure VII Multiple malicious nodes

4. RELATED WORKS
1. Mistry et al. (2010) proposed a modified AODV protocol and justified the solution with an appropriate implementation and simulation using NS-2.33.

2. Shivahare et al. (2012) compared AODV, DSR and DSDV under black hole attack for parameters such as route discovery, network overhead, periodic broadcast and node overhead.

3. Gupta et al. (2013) proposed an algorithm, the Secure Detection Technique (SDT), for the ZRP protocol which can be used to prevent black hole attacks in MANETs.

4. Satveer et al. (2013) compared FSR, DYMO and LANMAR to evaluate their performance under black hole attack over various parameters.

5. Singh et al. (2014) simulated and analysed the performance of AODV, DSR and TORA for an e-mail application under black hole attack.

6. Arora et al. (2014) studied and analysed the performance of MANET routing protocols such as DSDV, DSR, AODV, OLSR and ZRP with and without black hole attack.

6. NS2 SIMULATION
NS2 is the simulator most widely used by researchers. It is an event-driven, object-oriented simulator, developed with C++ as the back end and OTcl as the front end; to deploy a network, TCL (Tool Command Language) is used as the scripting language together with C++ [11].

Simulation parameters
For the simulation we use the NS2 network simulator. The mobility scenario is generated by the random waypoint model, taking 50 nodes in a simulation area of 1500*1500 m. We use the following parameters.

Parameter              Value
Simulator              NS2
Routing protocols      AODV, OLSR and ZRP
MAC layer              802.11
Packet size            512 bytes
Terrain size           1500*1500
Nodes                  50
Mobility model         Random waypoint model
Data traffic type      CBR
No. of sources         5, 10, 15, 20, 25, 30
Simulation duration    30 sec
CBR traffic rate       8 packets/sec
Maximum speed          0-20 m/sec (30 sec pause time)

7. CONCLUSION
In future, the study will be performed with the decided values of the parameters mentioned in this paper, and the effect of these parameters on the various performance metrics, i.e. packet delivery ratio, average end-to-end delay and throughput, will be observed.

REFERENCES
[1] Prabhu and Subramanium (2012), "Performance comparison of routing protocols in MANET", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 2, Issue 9, pp. 388-392.

[2] Arunima Patel and Ashok Verma (2012), "A review evaluation of AODV protocol in MANET with and without black hole attack", International Journal of Emerging Technology and Advanced Engineering, Volume 2, Issue 11, pp. 673-677.


[3] Muzamil Basha and Raghuveer Matam (2013), "Improved performance of DSDV, AODV and ZRP under black hole attack in MANETs", IJECT, Volume 4, Issue 4.

[4] Pradish Dadhania and Sachin Patel (2013), "Performance evaluation of routing protocols like AODV and DSR under black hole attacks", International Journal of Engineering Research and Applications, Volume 3, Issue 1, pp. 1487-1491.

[5] Monika Verma and Dr. N.C. Barwar (2014), "A comparative analysis of DSR and AODV protocols under black hole attack and grey hole attack in MANET", IJCSIT, Volume 5, pp. 1228-7231.

[6] Neeraj Arora and Dr. N. C. Barwar (2014), "Evaluation of AODV, OLSR and ZRP routing protocols under black hole attack", International Journal of Application or Innovation in Engineering and Management, Volume 3, Issue 4.

[7] Lovepreet Singh, Navdeep Kaur and Gurjeevan Singh (2014), "Analysis of the performance of MANET protocols under black hole attack for E-mail applications", International Journal of Computer Applications, Volume 3, Issue 4.
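The three performance metrics this paper sets out to measure (packet delivery ratio, average end-to-end delay and throughput) can be computed from per-packet records. The sketch below is a hypothetical post-processing step, not NS2 trace-file parsing; the record format (send time, receive time or None if lost, payload bytes) is an assumption made for illustration.

```python
def evaluate(records, duration):
    """Compute (PDR, average end-to-end delay, throughput in bits/sec)
    from (send_time, recv_time, size_bytes) tuples, where recv_time is
    None for packets lost, e.g. swallowed by a black hole node."""
    sent = len(records)
    delivered = [(s, r, b) for s, r, b in records if r is not None]
    pdr = len(delivered) / sent                         # fraction delivered
    avg_delay = sum(r - s for s, r, _ in delivered) / len(delivered)
    throughput = sum(b for _, _, b in delivered) * 8 / duration
    return pdr, avg_delay, throughput

# 4 CBR packets of 512 bytes; one never arrives
records = [(0.0, 0.5, 512), (1.0, 1.5, 512),
           (2.0, None, 512), (3.0, 3.5, 512)]
pdr, delay, tput = evaluate(records, duration=4.0)
print(pdr, delay, tput)   # 0.75 0.5 3072.0
```

Under a cooperative black hole attack, the expectation is that more records carry `None`, so PDR and throughput fall while the averages are taken over fewer delivered packets.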


A REVIEW ON CONTENT BASED VIDEO RETRIEVAL

Jaspreet Kaur Mann, Department of Computer Engineering, Punjabi University, Patiala
preetsmann23@gmail.com

Navjot Kaur, Department of Computer Engineering, Punjabi University, Patiala
navjot_anttal@yahoo.co.in

ABSTRACT
Advancements in data capturing, data storage and communication technologies have made a huge amount of video data available for consumer and business applications. The retrieval of relevant data by users is becoming more difficult day by day. In the past, the conventional method of video retrieval was adequate because the number of videos was small, and videos could be retrieved using manually annotated keywords. Today, manual annotation does not give relevant results; besides, it depends on human perception and is very time consuming. Processing video is somewhat difficult, as video contains varied types of information: audio, text and images. A content based video retrieval system retrieves videos based on the content of the video, i.e. color, texture, shape, audio and motion. These low-level features help in retrieving relevant videos. In this paper the various existing techniques of content based video retrieval are discussed. Content based video retrieval has many useful applications in areas such as news broadcasting, video archiving, video surveillance etc.

Keywords
Content based video retrieval, distance measure, shot boundary, key frame, GLC, LBP.

1. INTRODUCTION
Increased use of multimedia has resulted in the development of an efficient technique for video retrieval, i.e. content based video retrieval systems. Usual query-by-text retrieval cannot satisfy users' requirements in finding the desired videos effectively. Content based video retrieval considers the actual content of the video, i.e. low-level features like color, texture and shape. Processing videos is more difficult than processing images: a number of factors must be dealt with, such as shot boundary detection, key frame extraction, feature extraction and similarity measurement. For enhancing conventional search engines, there is abundant room in the field of video retrieval for exploiting rich media contents. This has transformed content based video retrieval (CBVR) into a promising direction for creating future video search engines [1]. Video has a complex structure: a video is divided into scenes, then shot boundary analysis is done to form shots, and key frame analysis is done to extract key frames.

Fig.1. a) video divided into shots b) Shots into key frames.

In content based video retrieval, various query techniques can be used. Querying by example is a method in which the user provides the CBVR system with an example video and the system searches the database using the example. Querying by direct specification of image features and semantic queries can also be used, like "find videos of Barack Obama". This paper provides an introduction to content based video retrieval systems and reviews various techniques used for content based video retrieval.

2. ARCHITECTURE OF CBVR
Content-based video retrieval uses the visual contents of a video, such as color, shape, texture and motion, to represent and index the video. In content based video retrieval we detect the shots of each video, extract the key frames from the shots of the videos in the database, and then perform feature extraction on the key frames; the features are stored in the database. Similarly, when a user inputs a query video, the system divides the video into shots and the shots into key frames, and these frames are represented using feature vectors. The distances between the feature vector of the query frame and those of the frames in the database are then calculated and retrieval is performed. The lower the distance value, the higher the similarity between frames.
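The distance-based matching step of the retrieval pipeline can be sketched as follows. The feature vectors, video names and the `rank_videos` helper are illustrative stand-ins for real key-frame descriptors, not part of any cited system.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def manhattan(u, v):
    """Manhattan (city block) distance between two feature vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

def rank_videos(database, query, dist=euclidean):
    """Sort database entries by distance between their key-frame feature
    vector and the query's: lower distance means higher similarity."""
    return sorted(database, key=lambda item: dist(item[1], query))

# Toy 3-bin colour-histogram features standing in for key-frame descriptors
database = [("news_clip", [0.2, 0.5, 0.3]),
            ("match_clip", [0.7, 0.2, 0.1]),
            ("interview", [0.25, 0.45, 0.3])]
query = [0.24, 0.46, 0.3]
print([name for name, _ in rank_videos(database, query)])
# ['interview', 'news_clip', 'match_clip']
```

Swapping `dist=manhattan` into the call changes the metric without touching the rest of the pipeline, which is why the surveyed systems can compare several distance functions on the same features.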


Fig.2. Block diagram of Content Based Video Retrieval

3. METHODOLOGY OF CBVR
3.1 Shot Boundary Detection
A shot consists of a set of contiguous frames all acquired through a continuous camera motion. Each shot can be represented by one or more key frames. The various techniques used for shot detection are pixel based, block based and histogram based shot boundary detection.

3.2 Key Frame Selection
Key frame extraction is one of the important steps in content based video retrieval. Key frame selection helps in reducing redundancy. A key frame is a frame which can represent the significant content of a shot. The key frames extracted must reflect the characteristics of the video; taken together along the time sequence, the key frames give the user a visual outline of the video.

3.3 Feature Extraction
Feature extraction is another crucial step in video retrieval. The color, texture and shape features are extracted from the key frames and represented in the form of a feature vector.

3.3.1 Color
The color feature is widely used for many applications, such as image retrieval and video retrieval, and is relatively robust and simple to represent. Color histograms, color moments and color coherence vectors are commonly used for representing the color feature.

3.3.2 Texture
Texture involves the spatial distribution of gray levels; it is the property of surfaces that describes visual patterns. Techniques used for texture analysis include GLCM, edge histograms, auto-correlation and LBP.

3.3.3 Shape
The shape of an object is a binary image representing the extent of the object. The human visual system relies on shape properties. Shape representations can be divided into two categories, boundary-based and region-based. Shape is invariant to translations, rotations and scaling. For representing the shape feature, chain codes, edge histograms and Fourier descriptors are used.

3.4 Similarity Measurement
A distance function such as Euclidean distance, Manhattan distance, Chebyshev distance, chi-square etc. can be used for matching the feature vectors of the query video against those of the videos in the database. Videos whose feature vectors are closest to the feature vector of the query are returned as the best results.

4. LITERATURE SURVEY
B.V. Patel et al. in [1] proposed a method of retrieving videos by using entropy and black-and-white points on edges. The proposed content based video retrieval system works in two phases. In the first phase a GLC matrix is constructed for every frame and its entropy is calculated; these entropy values are stored in the database. In the second phase, when a query video frame is received, its entropy is again calculated using the GLC matrix and the black-and-white points are calculated using Prewitt edge detection; then a similarity measure is applied between the query frame and every key frame in the database to get relevant videos. The experimental results showed that combining both features is more effective than using only a single feature.

Asha S and Sreeraj M in [2] proposed a robust content based video retrieval system. The system retrieves similar videos based on a local feature descriptor, SURF. Manhattan and Euclidean distances are used for similarity matching, and the performance of the system is evaluated using recall and precision. The experimental results show that the system provides an average precision of 75% with 83% recall.

Chaoqun Hong et al. in [3] proposed a new method to efficiently find objects and locate them. First, tracked stable features are used to represent the image and are encoded by a compressed feature description. This description is fast to process and lossless, and it takes less space than traditional feature descriptors to store the features. Second, the fast region searching method finds the positions of the query target with very low computational complexity. Besides, it is able to handle scale changes and locate multiple targets in one frame. The experimental results are satisfactory.


Ja-Hwung et al. in [4] proposed a method for high quality content based video retrieval by discovering temporal patterns in video contents, applying efficient indexing and sequence matching techniques to improve retrieval accuracy. In this method, first pre-processing was done and shots were detected; then color layout and edge histogram features were extracted from the shots. Indexing was done using the FPI-tree and AFPI-tree, and searching is performed using FPI-search to retrieve videos. Experimental results show that this approach is very good for improving content-based video retrieval efficiency.

A. Vadivel et al. in [5] compared the performance of different distance metrics on different color histograms. Manhattan distance, Euclidean distance, Vector Cosine Angle distance and Histogram Intersection distance were used for the comparison. The results show that the Manhattan distance performs better than the other distance metrics for all five types of histograms.

T.N. Shanmugam et al. in [6] proposed a method of retrieving video using a video query clip. In this method the video was first divided into shots, and the video shots were segmented using a 2-D correlation coefficient technique. Motion, edge histogram and color histogram features were extracted from the videos and stored in a feature database; the same features are then extracted from the query clip, and the Kullback-Leibler distance was used for similarity measurement. The Kullback-Leibler distance was effective in retrieving similar videos.

Shradha Gupta et al. in [7] proposed a method of retrieving video from a database by giving an object as the query. In this method the video was divided into frames and segmentation was applied to separate the object from the frame. Features were then extracted from the object using SIFT features, and the features of the videos in the database and of the query object were matched by a nearest-neighbour matching algorithm. The performance of the system was more than 95%; it was more effective as it was invariant to illumination changes.

Fazal Malik et al. in [8] carried out an analysis of distance metrics in CBIR using statistical quantized histogram texture features in the DCT domain. In this method the quantized histogram statistical features are first extracted from the DCT blocks of the image using the DC coefficient and the first 3 AC coefficients. Various distance measures were used to measure similarity using the texture features, with the Corel image database. The precision values showed that the Euclidean distance, city block distance and sum of absolute differences metrics give good retrieval results using quantized histogram texture features in the DCT domain.

Poonam O. Raut et al. in [9] provide an overview of general techniques used in content based video retrieval. In the proposed method the video is first divided into shots, using a statistical difference method for boundary detection and a reference-frame-based method for key frame extraction. For feature extraction, the color RGB moment is used, LBP is used for texture, and the edge histogram is used for shape; clustering is then used to group frames based on these low-level features. Finally, the similarity between the clustered frames and the query frames is measured. The results show that clustering helps in reducing the number of comparisons, thus providing efficient video retrieval.

Manimala Singha et al. in [10] proposed a method of content based retrieval using texture and color features. The texture and color features were extracted using the wavelet transform and a color histogram. The experimental results showed that the proposed method was better than other methods in terms of precision; the computational complexity was also reduced due to the use of the wavelet transformation.

Madhav Gitte et al. in [11] give an overview of a content based video retrieval system. First, video segmentation is done; then a key frame is selected using Euclidean distance, and features are extracted from the frame and stored in a feature vector. For indexing of frames a hierarchical clustering tree algorithm was used. For retrieving relevant videos, similarity matching is done between the query and the feature vectors in the database. The performance was evaluated using precision and recall; indexing and clustering improved the efficiency of the system.

B.V. Patel et al. in [12] provided a review of the visual features that can be extracted from video for efficient retrieval, and similarity measurements are discussed. A comparison of various feature extraction algorithms is performed, covering color-texture classification, character identification, semantic video retrieval etc. Different types of algorithms are suitable for different applications. It is concluded that the features must be chosen carefully and that user interaction must be increased for better retrieval.

Shimna Balakrishnan et al. in [13] discuss existing techniques and use various combinations of algorithms for the color, texture and shape features for video retrieval. Various algorithms are compared; the results vary depending upon the type and size of the video. It is concluded that the combination of color moment, geometric moment and co-occurrence gave the best results.
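Several of the surveyed systems rest on histogram comparison for shot boundary detection (Section 3.1). A minimal sketch, with made-up 3-bin histograms and an arbitrary threshold:

```python
def detect_shot_boundaries(histograms, threshold):
    """Histogram-based shot boundary detection: declare a boundary between
    consecutive frames whose L1 histogram difference exceeds a threshold.
    `histograms` holds one normalized histogram per frame."""
    boundaries = []
    for i in range(1, len(histograms)):
        diff = sum(abs(a - b) for a, b in zip(histograms[i - 1], histograms[i]))
        if diff > threshold:
            boundaries.append(i)      # shot change between frame i-1 and i
    return boundaries

# Toy 3-bin histograms: frames 0-2 look alike, frame 3 starts a new shot
frames = [[0.6, 0.3, 0.1], [0.58, 0.32, 0.1],
          [0.6, 0.31, 0.09], [0.1, 0.2, 0.7]]
print(detect_shot_boundaries(frames, threshold=0.5))   # [3]
```

In a real system the threshold would be tuned (or made adaptive), since gradual transitions such as fades spread the histogram change over many frames.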

Table 1. Comparison of various techniques discussed in literature survey

1. B.V. Patel et al. [1]
   Method: Entropy (GLC) and black-and-white points on edges (Prewitt edge detection).
   Results: DNP*
   Advantages/Disadvantages: i) Integration of both techniques gives better results. ii) Frequency histogram and data mining techniques can be used to enhance results.

2. Asha S et al. [2]
   Method: SURF used for feature extraction; Euclidean and Manhattan distance used for similarity measurement.
   Results: Precision = 75%, Recall = 83%
   Advantages/Disadvantages: i) SURF is robust against rotation, scaling and noise. ii) It is low dimensional and therefore faster.

3. Chaoqun Hong et al. [3]
   Method: Compressed feature description (CFD) using motion, and fast region search (FRS).
   Results: DNP*
   Advantages/Disadvantages: i) Low cost, high accuracy, lossless and fast to process. ii) Takes less space. iii) FRS has very low computational complexity.

4. Ja-Hwung et al. [4]
   Method: Temporal patterns used for indexing (FPI and AFPI tree); feature extraction using color and edge histograms.
   Results: Precision = 90%, Recall = 20%
   Advantages/Disadvantages: i) Decreases computation cost. ii) Improves retrieval accuracy.

5. T.N. Shanmugam et al. [6]
   Method: Motion (FFT), edge (edge histogram), color (histogram) and texture (Gabor filter) features used; Kullback-Leibler distance used for similarity.
   Results: Precision = 10-89%, Recall = 27-90% (depends on type of query clip)
   Advantages/Disadvantages: i) Segments video into shots proficiently. ii) Efficient method.

6. Shradha Gupta et al. [7]
   Method: SIFT used for feature extraction.
   Results: Precision = 95%
   Advantages/Disadvantages: Invariant to illumination changes.

7. Poonam O. Raut et al. [9]
   Method: Shot detection is done on the video and key frames are extracted; color (RGB), texture (LBP) and shape (edge histogram) features used; clustering of frames is done.
   Results: DNP*
   Advantages/Disadvantages: Clustering reduces the number of comparisons.

8. Madhav Gitte et al. [11]
   Method: Video segmentation is done; indexing is done using a hierarchical clustering tree algorithm and clustering (K-means algorithm used).
   Results: Precision = 20-80%, Recall = 7-75%
   Advantages/Disadvantages: Indexing improves efficiency.

9. Shimna Balakrishnan et al. [13]
   Method: Various combinations of average RGB, color moment, edge direction histogram, geometric moment, co-occurrence and auto-correlation used.
   Results: Average Precision = 42%
   Advantages/Disadvantages: i) More accuracy, as all three features (color, shape and texture) are used. ii) Time complexity reduced.

DNP* - data not provided
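Precision and recall, the measures quoted throughout Table 1, can be computed from the returned result list and the set of truly relevant videos. The video IDs below are made up for illustration.

```python
def precision_recall(retrieved, relevant):
    """precision = relevant hits / number of items retrieved;
    recall    = relevant hits / total number of relevant items."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

retrieved = ["v1", "v3", "v7", "v9"]       # what the CBVR system returned
relevant = ["v1", "v3", "v5", "v7", "v8"]  # ground-truth relevant videos
p, r = precision_recall(retrieved, relevant)
print(p, r)   # 0.75 0.6
```

The trade-off visible in Table 1 (e.g. 90% precision with 20% recall in row 4) follows directly from these definitions: returning fewer, safer results raises precision but leaves relevant videos unretrieved.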

5. CONCLUSION
Content-based retrieval of visual information is an emerging area of research. This paper provided a brief overview of the major content based video retrieval techniques. The techniques explained are widely used for video retrieval in many fields. Combinations of various feature extraction techniques can be used for efficient video retrieval, and data mining techniques like classification and clustering can be used for better results.

REFERENCES
[1] B. V. Patel, A. V. Deorankar and B. B. Meshram, Content Based Video Retrieval using Entropy, Edge Detection, Black and White Color Features, IEEE, 2nd International Conference on Computer Engineering and Technology, 2010, Volume 6.

[2] Asha S and Sreeraj M, Content Based Video Retrieval using SURF Descriptor, 2013 Third International Conference on Advances in Computing and Communications.

[3] Chaoqun Hong, Na Li, Mingli Song, Jiajun Bu and Chun Chen, An efficient approach to content-based object retrieval in videos, Elsevier, 2011.

[4] Ja-Hwung Su, Yu-Ting Huang, Hsin-Ho Yeh and Vincent S. Tseng, Effective content-based video retrieval using pattern-indexing and matching techniques, Expert Systems with Applications, Elsevier, 2009.

[5] A. Vadivel, A. K. Majumdar and Shamik Sural, Performance comparison of distance metrics in content-based image retrieval applications.

[6] T.N. Shanmugam and Priya Rajendran, An Enhanced Content Based Video Retrieval system based on query clip, International Journal of Research and Reviews in Applied Sciences, Volume 1, Issue 3, December 2009.

[7] Shradha Gupta, Neetesh Gupta and Shiv Kumar, Evaluation of Object Based Video Retrieval Using SIFT, International Journal of Soft Computing and Engineering (IJSCE), Volume 1, Issue 2, May 2011.

[8] Fazal Malik and Baharum Baharudin, Analysis of distance metrics in content-based image retrieval using statistical quantized histogram texture features in the DCT domain, Journal of King Saud University - Computer and Information Sciences (2013) 25, 207-218.

[9] Poonam O. Raut and Nita S. Patil, Performance Analysis of Content Based Video Retrieval System Using Clustering, International Journal of Science and Research (IJSR), Volume 3, Issue 8, August 2014.

[10] Manimala Singha and K. Hemachandran, Content Based Image Retrieval using Color and Texture, Signal & Image Processing: An International Journal (SIPIJ), Vol. 3, No. 1, February 2012.

[11] Madhav Gitte, Harshal Bawaskar, Sourabh Sethi and Ajinkya Shinde, Content Based Video Retrieval System, International Journal of Research in Engineering and Technology, Volume 03, Issue 06, June 2014.

[12] B. V. Patel and B. B. Meshram, Content Based Video Retrieval Systems, International Journal of UbiComp (IJU), Vol. 3, No. 2, April 2012.

[13] Shimna Balakrishnan and Kalpana S. Thakre, Video Match Analysis: Comprehensive Content Based Video Retrieval System, International Journal of Computer Science and Application, 2010.

Web Services: An E-Government Perspective


Monika Pathak, Research Scholar, monika_mca@yahoo.co.in
Gagandeep Kaur, Research Scholar, mailtogagandeep@gmail.com
Sukhdev Singh, Research Scholar, tomrdev@gmail.com

ABSTRACT
e-Government is increasing in popularity through a number of web services. Web services in e-Government are aimed at providing services to citizens, businesses and government agencies. They deliver relevant government information to citizens in digital form, in a timely and cost-saving manner. A number of web services are available on the web, maintained by different departments of government. The aim of the current paper is to provide exploratory research into the latest advancements in e-Governance through web services. Different types of web services are available, such as Agmarknet, Bhuiyan, the Examination Results Portal, JUDIS, the Passport Website, Value Added Tax (VAT), etc. The study also analyzes the problems and challenges being faced by the service providers.

Keywords: E-Governance, Web Services, Web Portal, Agmarknet, Bhuiyan, Judis.

1. INTRODUCTION
Information and communication technology (ICT) is widely used in providing government services, which has resulted in a new technological service platform known as e-Governance through web services. Moreover, with the increase in internet users and mobile connections, people are now well informed, through web services, about government services. In the early stage of e-governance in India [1], the focus was on developing applications for economic monitoring, planning, and managing census and tax data. Nowadays, e-governance services are almost everywhere, and all government functioning is highly influenced by e-Governance, where the term e-Governance is used to describe government services provided by means of ICT.

Web-based services are one of the initiatives that have been taken under the umbrella of e-Governance [2]. This study aims to explore different web services for citizens and businesses, and to analyze the difficulties being faced by the service providers. The objective of a web service is to reach all sections of people irrespective of distance and language. Web services are usually provided in the form of a web portal, where information about different government services is published. For instance, the passport portal is a web service used as a tool to file applications for a passport [3, 4]. Several web services were considered for this study, and some of them are discussed in this paper in detail. Their purpose is to provide government services in a more transparent manner, so that everybody who is eligible to avail a service has an equal chance to do so. It has been seen through the literature survey [8-10] that the challenging task for any government service provider involves a lot of effort in terms of manpower and money. The study is aimed at exploring different issues related to the success and failure of web services.

2. WEB SERVICES THROUGH E-GOVERNANCE
Web services are not only a source of information gathered and stored at one place; the information also needs to be organized in such a manner that it can be accessed with ease. A web portal [4] is not merely a systematic arrangement/classification of information but is dynamic in nature and provides user interaction. If government information is gathered at one place without any arrangement or management, it proves difficult for users to find the required information. Managing information is therefore an important aspect of e-Governance which requires technical skills. A web service is a platform where a user can access important information and avail other useful services. On the basis of the literature review [5-10], we have listed some of the popular web services which are available in the form of web portals.

Agmarknet (Agricultural Information Portal) – http://www.agmarknet.nic.in
Its function is to share information related to agricultural production and wholesale markets in the country. The portal has been developed to strengthen the interfaces among government and non-government organizations. It acts as a platform for communication among farmers, traders, exporters, policy makers, academic institutions, etc.

Bhuiyan (Land Records) – http://cglrc.nic.in
This web-based application is for the online retrieval of land information. These details of land are generally required for loan purposes and for the sale/purchase of property. The application facilitates the easy retrieval of these details. In addition, under admin access one can view the abstract of Khasara, Khatauni, area-wise details, farmer-wise details, and revenue collection details of a particular land.

Results Portal – http://results.nic.in
The online results of various academic, entrance and recruitment examinations conducted by the various government agencies are available on the portal "results.nic.in". The portal also includes results of CBSE, State Education Boards, various universities, and professional institutes (Engineering, Medical, MBA, CA, etc.).

Judis – http://www.judis.nic.in
Judis is a comprehensive online library of case law that contains all reportable judgments of the Supreme Court of India and the various High Courts of India.

Passport Website – http://passport.nic.in
This site provides information about passports, visas and other related matters.

RuralBazar – http://ruralbazar.nic.in
This site is aimed at providing business opportunities for poor people. Products manufactured by rural and poor people are promoted and sold through this site.

Value Added Tax (VAT) – http://megvat.nic.in
The application is used by the Taxation Department to monitor the revenue generated by the state government in the form of collected taxes and to monitor the sales returns from commercial establishments in the state. The major functions are registration, challans, way bills, transit documents, etc.

Table 1: Web services in e-Governance
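The portals above amount to a simple name-to-URL service directory. As a minimal illustration only (our own sketch; none of these portals actually exposes such a programmatic API), the mapping from service name to portal can be modelled as a small lookup table:

```python
# Hypothetical service directory mirroring Table 1; for illustration only.
PORTALS = {
    "Agmarknet":  ("http://www.agmarknet.nic.in", "agricultural production and wholesale market information"),
    "Bhuiyan":    ("http://cglrc.nic.in",         "online retrieval of land records"),
    "Results":    ("http://results.nic.in",       "examination results of government agencies and boards"),
    "Judis":      ("http://www.judis.nic.in",     "reportable judgments of the Supreme Court and High Courts"),
    "Passport":   ("http://passport.nic.in",      "passport and visa information"),
    "RuralBazar": ("http://ruralbazar.nic.in",    "marketplace for products of rural producers"),
    "VAT":        ("http://megvat.nic.in",        "VAT registration, challans, way bills and transit documents"),
}

def lookup(service):
    """Return the (url, description) pair for a service name, or None if unlisted."""
    return PORTALS.get(service)

print(lookup("Judis")[0])  # http://www.judis.nic.in
```

A real single-window e-Governance directory would additionally need localisation and availability handling, which is exactly where the challenges discussed below arise.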

3. CHALLENGES FACED IN IMPLEMENTING WEB SERVICES IN E-GOVERNANCE
It has been observed that a lot of effort goes into the development of various web-based services in the field of e-governance, and technical infrastructure has been developed to provide availability of web services. In spite of these efforts, the current system faces various challenges, such as lack of IT literacy and lack of information technology infrastructure; some of them are listed below:

Lack of ICT literacy: It has been observed that, in spite of the availability of web services, most people are not able to access information due to lack of IT literacy. Web services require working knowledge of internet access, which in turn requires training in computers and the internet.

Awareness regarding benefits of web services: There is a general lack of awareness regarding the benefits of e-Governance through web services. There must be awareness programmes regarding the facilities being provided through the web.

Lack of existing ICT infrastructure: The lack of ICT infrastructure is one of the major problems. A country like India faces the problem of poverty to a great extent, and the availability of ICT infrastructure for a large population is a great challenge. On the other hand, even where high-performance systems exist, under-utilization of the computers is another issue that needs to be addressed.

Attitude of government servants towards adopting new technology: The psychology of government servants is quite traditional and they still stick to the traditional working style. The attitude of government departments is often pathetic and non-serious; most official websites are not updated on a daily basis or kept consistent.

Lack of coordination between Govt. departments and solution developers: There is a lack of communication between government organizations and solution providers, which results in inadequate applications such as web services. The government authority has little knowledge of the technical issues, while solution providers fail to understand the requirements of the customers.

Resistance to digitalizing information and its online availability: Successful implementation of web services depends on the availability of digital information: the more information is digital, the higher the rate of success of a web service. Unfortunately, most government institutions are not willing to provide information in digital format. The authenticity of information is another problem being faced; the content collected or maintained by various e-Governance portals is therefore unreliable or full of gaps. In such a scenario, it is difficult for any e-Governance solution to achieve its intended results.

Lack of infrastructure for sustaining e-Governance: Web services under e-Governance require a lot of infrastructure in the form of networks, ICT devices, software and manpower.

Table 2: Challenges faced in implementing web services in e-Governance


4. CONCLUSION
It is evident from the above analysis that the web service is one of the devices of e-governance that is available to the large population via the internet. The objective of the web service is to overcome geographical barriers and provide an equal opportunity to all to avail government services. It is an instrument for transforming India. The complete transformation requires the people of India to come out of the traditional psychological mindset and move forward to provide appropriate information. It requires a fundamental change in work culture and goal orientation. Foremost among these is to create a culture of maintaining, processing and providing updated information to web portals, so that it can be delivered to the people in a timely manner.

REFERENCES
1. Gilbert, D., Balestrini, P., & Littleboy, D. (2004). "Barriers and benefits in the adoption of e-government", The International Journal of Public Sector Management, 17(4/5), pp. 286-301.
2. Wong, K. F., Tam, M. K. W., & Cheng, C. H. (2006). "E-government—A Web Services framework", Journal of Information Privacy & Security, 2(2), pp. 30-50.
3. Nickerson, J. A., Hamilton, B. H., & Wada, T. (2001). "Market Position, Resource Profile, and Governance: Linking Porter and Williamson in the Context of International Courier and Small Package Services in Japan", Strategic Management Journal, pp. 251-273.
4. Singh, G., Pathak, R. D., & Naz, R. (2010). "Service Delivery through E-Governance: Perception and Expectations of Customers in Fiji and PNG", Public Organization Review, ISSN 1566-7170, pp. 1-14, Springer Science+Business Media, LLC.
5. Yadav, N., & Singh, V. B. (2012). "E-Governance: Past, Present and Future in India", International Journal of Computer Applications.
6. Shah, M. (2007). "E-Governance in India: Dream or reality?", International Journal of Education and Development using Information and Communication Technology, Vol. 3, Issue 2, pp. 125-137.
7. Dwivedi, S. K., & Bharti, A. K. (2010). "E-Governance in India – Problems and Acceptability", Journal of Theoretical and Applied Information Technology, pp. 2005-2010.
8. Kalsi, N. S., Kiran, R., & Vaidya, S. C. (2009). "Effective e-Governance for Good Governance in India", International Review of Business Research Papers, Vol. 5, pp. 212-229.
9. Farooquie, J. A. (2011). "A Review of E-Government Readiness in India and the UAE", International Journal of Humanities and Social Science.
10. Paul, A., & Paul, V. (2014). "A Framework for e-Government Interoperability in Indian Perspective", International Journal of Computer Information Systems and Industrial Management Applications, pp. 582-591.


A Review on ACO-Based and BCO-Based Routing Protocols in MANETs

Jatinder Pal Singh
A.P., CSE Department
SLIET Longowal, Punjab
sachdeva.jp@gmail.com

ABSTRACT
Networks have changed our lifestyle drastically in the last 10 years. We always want to stay connected to an external network like the Internet or an intranet with the help of wireless communication. A lot of research has been done in this field to explore the best and optimum protocols which can be used in WSNs, WMNs and ad-hoc networks. Identification of the optimal route for data communication, efficient utilization of energy, providing congestion-free communication, offering scalability, and maintaining the Quality of Service are a few research issues in ad-hoc networks. To overcome these issues, Swarm Intelligence (SI) inspired routing algorithms have become a research focus in recent years due to their self-organizing nature, which is very well suited to the routing problems in Mobile Ad hoc Networks (MANETs). In this paper, we focus on routing algorithms for MANETs based on Ant Colony Optimization (ACO) and Bee Colony Optimization (BCO) that have been designed according to the principles of swarm intelligence.

General Terms
Mobile ad-hoc networks, Swarm Intelligence, Ant Colony Optimization, Bee Colony Optimization, Routing protocol.

Keywords
MANETs, Ad-hoc, ACO, BCO, SI.

1. INTRODUCTION
A MANET [1] is one of those types of wireless networks which do not require any infrastructure to set themselves up; we can set up this type of network anywhere with the help of the wireless facility of its nodes, as shown in the figure below:

Figure 1: A MANET Scenario

This type of network, which can be set up without any centralized control or infrastructure, is also known as an infrastructure-less network. Such a network can operate in standalone fashion or be connected to the Internet after one of the nodes is configured as a gateway, as shown in the figure below:

Figure 2: MANET connected to the Internet

The nature of this kind of network makes it suitable for deployment in places where network infrastructure is hard to build and maintain. Applications of ad hoc networks are emergency search-and-rescue operations, war-like situations, VANETs, or meetings in which persons wish to quickly share information.

Challenges in MANET Routing Protocols:
 MANETs are usually made up of small or tiny nodes equipped with little memory, limited non-rechargeable batteries, low-end processors, and small-bandwidth links. As a result, MANET protocol designers face strict constraints on the use and availability of node resources.
 The majority of target applications for MANETs require the deployment of the sensor nodes in large numbers, ranging from thousands to millions. Hence, the scalability of the used protocols is also a major concern.
 Individual nodes can potentially generate huge amounts of data. The transmission of every data bit to a common sink node would use a large amount of energy, bandwidth, and processing power. Therefore, possibly redundant information needs to be detected and filtered, in order to reduce the in-network traffic.

MANETs are dynamic, flexible and require few resources, but at the same time they suffer from a variety of problems: lack of centralized management, resource availability, dynamic topologies, device discovery, security, reliability, quality of service and internetworking. Many different approaches dealing with these problems exist; one of them is swarm intelligence. The rest of the paper is organized as follows. In Section 2, we briefly discuss Swarm Intelligence (SI) and the SI-based Ant Colony Optimization and Bee Colony Optimization techniques. The third and fourth sections are the main part of this paper and give a description of the most commonly used ACO-based and BCO-based routing protocols. In the fifth section we compare ACO-based and BCO-based routing protocols.

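The multi-hop, infrastructure-less routing idea described in the introduction can be made concrete with a small sketch (our own illustration, not taken from any surveyed protocol): the network is an adjacency list of radio neighbours, and a route is discovered by flooding a request hop by hop, much as reactive MANET protocols broadcast route requests:

```python
from collections import deque

def discover_route(topology, source, destination):
    """Flood a route request hop by hop and return the first path found,
    mimicking how reactive MANET protocols discover routes on demand.
    `topology` maps each node to the neighbours currently in radio range."""
    frontier = deque([[source]])   # each entry is the path the request travelled
    visited = {source}             # nodes that have already rebroadcast it
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == destination:
            return path            # the destination would reply along this path
        for neighbour in topology.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None                    # destination unreachable

# A five-node ad-hoc topology; node 'E' is the only link to 'D'.
topology = {
    'A': ['B', 'C'],
    'B': ['A', 'E'],
    'C': ['A', 'E'],
    'E': ['B', 'C', 'D'],
    'D': ['E'],
}
print(discover_route(topology, 'A', 'D'))  # ['A', 'B', 'E', 'D']
```

When 'E' moves out of range the route breaks, which is the dynamic-topology problem that motivates the adaptive, swarm-based approaches surveyed next.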

2. SWARM INTELLIGENCE
Swarm intelligence (SI) [4,5,6] is a new discipline of study that offers a better optimization approach for problem solving, inspired by the social behavior of insects and animals. Examples of SI in natural systems include ant colonies, bird flocking, animal herding, bacterial growth, fish schooling and microbial intelligence. The different swarm intelligence fields are shown in Figure 3 below. This paper gives brief information about swarm intelligence based ACO and BCO routing protocols. The nature of swarms largely resembles mobile ad-hoc networks (MANETs), and that is why ideas from swarm animals like ants and bees are used for creating suitable routing protocols for MANETs. Bio-inspired SI [4,5,6] approaches are promising for MANETs for the following reasons:
 Self-organizing behavior & multiple path availability
 Failure backup & dynamic topology

Figure 3: Subdivision of Swarm Intelligence

The following sub-sections describe the ant colony and bee colony optimization techniques which can be used in conjunction with MANETs.

2.1 ANT COLONY OPTIMIZATION
Ant Colony Optimization [2,3] is a paradigm of Swarm Intelligence that is inspired by the collective behaviour of ants. The basic idea of ant colony optimization is taken from the food-searching behavior of real ants. When ants search for food, they start from the nest and walk towards the food. When an ant reaches an intersection, it has to decide which branch to take next. Ants deposit a substance called pheromone, which marks the route taken; the concentration of pheromone decreases over time due to diffusion effects. This behavior of ants can be used to find the shortest route in networks. The ACO meta-heuristic computational approach was proposed by Marco Dorigo in 1996. The basic principle of ACO is the simulation of the ability of ants to find the shortest path between their nest and a food source, without any visible, central and active coordination mechanism. Real ants naturally drop pheromone, a chemical from their bodies, on the path, and this guides their decisions; the rest of the ants reach the food source by following this pheromone trail. The general steps in ACO are shown in Figure 4. A higher concentration of pheromone on a path means that the path's selection probability is greater than that of the others.

Figure 4: Steps in ACO

Some important features of ad hoc networks favor the design of swarm intelligence based protocols:
 Dynamic Topology: The dynamically changing topology causes bad performance of most routing algorithms in mobile multi-hop ad hoc networks. The working principle of ACO is based on agent systems working individually, which favours high adaptation to the current topology of the network.
 Local Work: ACO-based algorithms rely only on local information, so they do not need the transmission of routing tables or other information to neighbour nodes in the network.
 Support for Multi-path: The selection decision is based on the pheromone value at the current node, which provides multi-path selection choices.

2.2 BEE COLONY OPTIMIZATION
Similar to ACO, Bee Colony Optimisation (BCO) [2,3] is a nature-inspired meta-heuristic which can be applied to find solutions for optimization problems; it mimics the food-foraging behaviour of swarms of honey bees. BCO is the name given to the collective food-foraging behavior of the honey bee. The bee system is a standard example of organized team work and simultaneous task performance with well-knit communication. In a bee colony there are different types of bees: a queen bee, many male drone bees and thousands of worker bees.
 Queen Bee: The main work of the queen bee is laying the eggs from which a new colony develops; there is only one queen bee in the hive.
 Male Drone Bees: In the hive there are two types of male drone bees. The first is the food packer bees, whose work is to serve the queen and help it in laying the eggs. The second is the nurse group bees, which are responsible for feeding the queen and the babies.
 Worker Bees: There are two types of worker bees, namely scouts and foragers. The scouts start from the hive in search of a food source randomly, keeping on with this exploration process until they are tired. The movement of scout bees in a typical bee hive is shown in Figure 5. When they return to the hive, they convey to the foragers information about the odor of the food, its direction, and its distance with respect to the hive by performing dances. A round dance indicates that the food source is nearby, whereas a waggle dance indicates that the food source is far away. Waggling is a form of dance made in an eight-shaped circular direction and has two components: the first component is a straight run whose direction conveys information about the direction of the food; the second component is the speed at which the dance is repeated, which indicates how far away the food is. Bees repeat the waggle dance again and again, giving information about the quality of the food source. The better the quality of the food, the greater the number of foragers recruited for harvesting. The Bee Colony Optimization (BCO) meta-heuristic has been derived from this behavior and has been satisfactorily tested on many combinatorial problems. The flowchart in Figure 5 gives the basic structure of the working of the bee colony system.
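Returning to the pheromone mechanism of Section 2.1, a minimal sketch (our own illustration; the function names and table layout are hypothetical, not taken from any specific protocol) of how a node can keep a pheromone value per (destination, next hop) pair, turn it into next-hop selection probabilities, and apply evaporation and reinforcement might look like this:

```python
def selection_probabilities(pheromone, destination):
    """Normalize pheromone values into next-hop selection probabilities,
    as in the ant decision rule: stronger trails are chosen more often."""
    entries = {nxt: tau for (dst, nxt), tau in pheromone.items() if dst == destination}
    total = sum(entries.values())
    return {nxt: tau / total for nxt, tau in entries.items()}

def evaporate(pheromone, rho=0.1):
    """Diffusion/evaporation: every trail decays by a factor (1 - rho)."""
    for key in pheromone:
        pheromone[key] *= (1.0 - rho)

def reinforce(pheromone, destination, next_hop, deposit=1.0):
    """A backward ant returning over `next_hop` strengthens that trail."""
    pheromone[(destination, next_hop)] = pheromone.get((destination, next_hop), 0.0) + deposit

# Pheromone table at one node: (destination, next hop) -> pheromone value.
pheromone = {('D', 'B'): 3.0, ('D', 'C'): 1.0}
print(selection_probabilities(pheromone, 'D'))  # {'B': 0.75, 'C': 0.25}
evaporate(pheromone)             # all trails decay ...
reinforce(pheromone, 'D', 'B')   # ... and the path actually used is reinforced
```

Because unused trails keep evaporating while used ones are reinforced, the table adapts automatically when the topology changes, which is exactly the property the protocols in the next sections exploit.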


Figure 5. Flowchart for Basic BCO Steps.

3. ACO Based Routing Protocols for MANETs
In this section we describe three Ant Colony Optimization based routing protocols [7,8], namely ARA, AntNet and AntHocNet. These three protocols are covered in the next subsections.

3.1 ARA
Gunes et al. proposed the Ant Colony Based Routing Algorithm (ARA) [10], which reduces overhead because routing tables are not interchanged among nodes. ARA consists of three phases, namely Route Discovery, Route Maintenance and Route Failure Handling. It is a purely reactive MANET routing algorithm and does not use any HELLO packets to explicitly find its neighbours. When a packet arrives at a node, the node checks whether routing information is available for the destination d in its routing table. If so, it forwards the packet over that route; if not, it broadcasts a forward ant (FANT) to find a path to the destination. In ARA, route discovery is done either by the FANT flooding technique [13] or by the FANT forwarding technique. In the FANT flooding scheme, when a FANT arrives at any intermediate node, it is flooded to all the node's neighbours; by introducing a maximum hop count on the FANT, the flooding can be reduced. In the FANT forwarding scheme, when a FANT reaches an intermediate node, the node checks its routing table to see whether it has a route to the destination over any of its neighbours. If such a neighbour is found, the FANT is forwarded only to that neighbour; otherwise, it is flooded to all neighbours as in the flooding scheme.

Figure 6. Route discovery phase in ARA showing a forward ant (F) and backward ant (B)

In ARA [10], a route is indicated by a positive pheromone value, in the node's pheromone table, over any of its neighbours towards the destination. When a FANT reaches the destination it is sent back along the path it came on, as a backward ant; all ants that reach the destination are sent back along their paths. Nodes modify their routing table information when a backward ant is seen, according to the number of hops the ant has taken. In the Route Maintenance phase, a DUPLICATE ERROR flag is set for duplicate packets to prevent looping problems. ARA [10] also allows for the evaporation of pheromone through a decrementing factor in the route table. In the Route Failure Handling phase, a node deactivates a path by reducing its pheromone value to 0 in the corresponding route table entry, and returns to the Route Discovery phase to select a path and send packets to the destination over that path [26].

3.2 AntNet
AntNet [11] was proposed by G. Di Caro and M. Dorigo. It is an agent-based routing algorithm influenced by the behaviour of real ants. In AntNet, ants (software agents) explore the network to find the optimal paths between randomly selected source-destination pairs. While exploring the network, the ants update the probabilistic routing tables and build a statistical model of the nodes' local traffic; ants use these tables to communicate with each other. The idea in AntNet is to use two different network exploration agents, i.e. forward ants (FANTs) and backward ants (BANTs), which collect information about the delay, the congestion status and the path followed in the network. FANTs are emitted at regular time intervals from each node to a randomly selected destination. This transmission occurs asynchronously and concurrently with the data traffic. As soon as a FANT arrives at the destination, a BANT moves back to the source node, reversing the path taken by the FANT. The subdivision into forward and backward ants has the following reasons: the FANTs are employed only for data aggregation of the trip times and node numbers of the path taken, without performing any routing table updates at the nodes.


The BANTs get their information from the FANTs and use it to achieve the routing table updates at the nodes. By using the two different types of ant agents, FANT and BANT, each node in the network obtains a shortest path to all destinations. The ants cause some extra overhead, which reduces the traffic capacity available for actual data communication. AntNet is designed in such a way that the forward ants carry information about the status of the links they traverse; this status information can be captured and used to find the best path. AntNet [11] is thus one of the dynamic routing algorithms capable of learning new routes.

Figure 7. Trip times table in the AntNet algorithm

An example AntNet [11] trip time table is given in Figure 7. The routing table at each node stores the list of reachable nodes and their pheromone values. It is represented as a structure consisting of the following fields:
 destination_id – the address of the destination node.
 next_id – the address of the adjacent node used to reach the destination node.
 pheromone – the value used by the node to calculate the probability of each adjacent node being the next hop on the way to the destination.
An example of such a routing table is given in the table below:

Table 1. Routing table based on Pheromone Values

Routing Table at node 1
Destination    Next Node    Pheromone Value
0              2            0.5
1              1            1

3.3 AntHocNet (Ant Agents for Hybrid Multipath Routing)
AntHocNet [12] is a hybrid algorithm proposed by Di Caro, Ducatelle and Gambardella in 2004, consisting of reactive and proactive components. It does not maintain routes to all possible destinations at all times (like AntNet), but only sets up paths when they are needed at the start of a data session. It is reactive in the sense that a node only starts gathering routing information for a specific destination when a local traffic session needs to communicate with the destination and no routing information is available. It is proactive because, as soon as the communication starts, and for the entire duration of the communication, the nodes proactively keep the routing information related to the ongoing flow up to date with network changes, for both topology and traffic.

When a data session is started between a source and a destination, the source node S first checks whether it has up-to-date routing information for destination D. If no such information is available, it reactively sends out ant-like agents, called reactive FANTs, to look for paths to D. These ants gather information about the quality of the path they followed, and on their arrival at D they become BANTs, which trace back the path and update the routing tables. The routing table Ti in node i contains a pheromone value for each destination D and each possible next hop n. In this way, pheromone tables in different nodes indicate multiple paths between S and D, and data packets can be routed from node to node as datagrams. Once paths are set up and the data session is running, S starts to send proactive FANTs to D. These ants follow the pheromone values similarly to data packets; in this way they can monitor the quality of the paths in use. Moreover, they have a small probability of being broadcast, so that they can also explore new paths. In case of link failures, nodes either try to locally repair paths, or send a warning to their neighbors so that these can update their routing tables.

4. BCO Based Routing Protocols for MANETs
In this section we describe three routing protocols based on Bee Colony Optimization [7,8,9], namely BeeAdHoc, BeeSensor and BeeIP. These three protocols are covered in the next subsections.

4.1 BeeAdHoc
BeeAdHoc [13] is a nature-inspired routing protocol for MANETs based on the foraging principles of honey bees. It is a reactive source routing algorithm based on the use of four different bee-inspired types of agents: packers, scouts, foragers, and bee swarms. Scouts are used to discover routes and foragers to transport data. Figure 8 gives an overview of the BeeAdHoc architecture. In this architecture each node maintains a hive with an Entrance, a Packing Floor and a Dance Floor.

Figure 8. Overview of the BeeAdHoc architecture

 Packers: Packers mimic the task of a food-storekeeper bee. They reside inside a network node, and receive and store data packets from the upper transport layer. Their main task is to find a forager for the data packet at hand. Once the forager is found and the packet is handed over, the packer is killed.


 Scouts: Scouts discover new routes from their launching node to their destination node. A scout is broadcast to all neighbors in range using an expanding time to live (TTL). At the start of the route search a scout is generated; if after a certain amount of time the scout is not back with a route, a new scout is generated with a higher TTL in order to incrementally enlarge the search radius and increase the probability of reaching the searched destination. When a scout reaches the destination, it starts a backward journey on the same route that it followed while moving forward toward the destination. Once the scout is back at its source node, it recruits foragers for its route by dancing. A dance is abstracted into the number of clones that can be made of the same scout.
 Foragers: Foragers are bound to the bee hive of a node. They receive data packets from packers and deliver them to their destination in a source-routed modality. To attract data packets, foragers use the same metaphor of a waggle dance as scouts do. Foragers are of two types: delay and lifetime. From the nodes they visit, delay foragers gather end-to-end delay information, while lifetime foragers gather information about the remaining battery power. Delay foragers try to route packets along a minimum-delay path, while lifetime foragers try to route packets in such a way that the lifetime of the network is maximized. A forager is transmitted from node to node using a unicast, point-to-point modality. Once a forager reaches the searched destination and delivers the data packets, it waits there until it can be piggybacked on a packet bound for its original source node. In particular, since TCP (Transmission Control Protocol) acknowledges received packets, BeeAdHoc piggybacks the returning foragers in the TCP acknowledgments. This reduces the overhead generated by control packets, saving energy at the same time.
 Bee swarms: Bee swarms are the agents that are used to explicitly transport foragers back to their source node when the applications are using an unreliable transport protocol like UDP (User Datagram Protocol). The algorithm reacts to link failures by using special hello packets and by informing other nodes through Route Error Messages (REM).

In BeeAdHoc, each MANET node contains at the network layer a software module called the hive, which consists of three parts: the packing floor, the entrance floor, and the dance floor. The entrance floor is an interface to the lower MAC layer; the packing floor is an interface to the upper transport layer; the dance floor contains the foragers and the routing information.

4.2 BeeSensor
BeeSensor [14] (Saleem and Farooq, 2007) is an algorithm based on the foraging principles of honey bees with an on-demand route discovery (as in AODV). BeeSensor focuses on minimising the energy costs using bee agents. The algorithm works with four types of agents:
 Scouts: The scouts are of two sorts: forward scouts and backward scouts. A scout is uniquely identified by its agent ID and the source node ID. Forward scouts propagate in the network using the broadcasting principle. During the exploration of the network, they do not construct a source routing header; the result – that their size becomes independent of the path length – helps BeeSensor scale to large networks. Once a forward scout reaches a sink node, it delivers the event to the upper layer and starts its return journey as a backward scout. Its task is to build a path leading from the sink to the source node (or vice versa) and to report the quality of the discovered path once it reaches the source node.
 Foragers: As in BeeAdHoc [13], foragers are the main workers in BeeSensor. Their major role is to carry events to the sink nodes through a predetermined path that is selected stochastically at the source node. Foragers that follow the same path are grouped together in BeeSensor. Foragers traverse in point-to-point mode by utilizing the forwarding information stored at intermediate nodes; they index the table using their path identifier (PID). Foragers also evaluate the quality of their path and report it back to fellow foragers at the source node.
 Swarms: To save energy, foragers are implicitly piggy-backed in the link layer acknowledgment packets to the source node. However, sometimes they need to be explicitly transported back to their source nodes. A swarm agent serves exactly this purpose. Foragers wait for a certain amount of time at the sink node and then take the initiative to build a swarm of waiting foragers. A swarm can transport multiple foragers in its payload back to the source node. A swarm, like an individual forager, is also routed on the reverse links.

In BeeSensor [14], route discovery is achieved by using forward scouts and backward scouts. Results show that the honeybee-inspired protocol is able to transmit more packets than an energy-optimised version of AODV, achieving less control overhead and lower energy consumption.

4.3 BeeIP
BeeIP [15] is a new honeybee-inspired adaptive routing protocol based on the collaborative behaviours of honeybee foragers. Following a reactive approach, BeeIP's honeybee agents explore the topology only when data are required to be transmitted between nodes. The model uses three types of agents in the form of data packets: the scout, the ack scout, and the forager.
 Scout: Scouts are sent when a scouting process is initialized in order to discover new paths towards a given destination. A scout is transmitted using broadcast to all neighbouring nodes. This technique benefits not only the propagation of the initial request, but also the introduction of the transmitting node to its
agents: packers, scouts, foragers and swarms.A brief description of neighbourhood. Apart from the details of the scouting process,
each type of agent is as follows. scouts also carry important information about their sender’s state.
A node’s state is a group of attributes that describe the situation in
 Packers: Packers behave like the food-storer bees in a hive. Their which the node is at the time of broadcasting the scout packet.
major responsibility is to receive packets coming from the upper  Ack scout: Once the scout reaches its destination the scouting is
layer and locate an appropriate forager (route) for them. Once a considered successful and an ack scout packet is created. Ack
forager is found, packet is encapsulated in its payload and the scouts use a source routing fashion to travel back to the source,
packer starts waiting for the next packet. Failure in locating a using unicast transmission. Therefore, the route that was followed
forager is an indication to the packer that no route exists to a sink. towards the destination is used in reverse. On their way back, ack
 Scouts: Like their natural counterparts, scouts explore the scouts acknowledge the success of the scouting to both the
network in search of a potential sink node. Scouts are of two intermediate nodes and the source node.
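BeeSensor's point-to-point forager forwarding described in Section 4.2 — each intermediate node looks up the next hop in a table indexed by the forager's path identifier (PID) — can be sketched minimally. The paper gives no pseudocode, so the class and field names below are invented purely for illustration.

```python
# Illustrative sketch of PID-indexed forwarding at an intermediate
# BeeSensor node. All names here are hypothetical; the paper only
# states that foragers look up forwarding information by their PID.

class ForwardingTable:
    def __init__(self):
        self._entries = {}          # pid -> next-hop node id

    def install(self, pid, next_hop):
        self._entries[pid] = next_hop

    def next_hop(self, pid):
        # A miss means this node holds no forwarding state for the path.
        return self._entries.get(pid)

def forward_forager(table, forager):
    """Return the next hop for a forager, or None if the path is unknown."""
    return table.next_hop(forager["pid"])

# Example: node B knows that path 7 continues towards node C.
table = ForwardingTable()
table.install(7, "C")
print(forward_forager(table, {"pid": 7, "payload": "event"}))   # -> C
print(forward_forager(table, {"pid": 9, "payload": "event"}))   # -> None
```

The point of the PID index is that a forager carries only a small identifier instead of the full route, while the per-hop state lives at the intermediate nodes.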

326
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
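BeeIP's reactive behaviour described in Section 4.3 — queue data packets for an unknown destination, scout for a path, and release the queue once an ack scout confirms the path — can be sketched as follows. This is only a simplified illustration, not the protocol implementation; the class and method names are hypothetical.

```python
# Simplified illustration of BeeIP's queue-then-scout behaviour:
# packets for an unknown destination wait in a local queue until an
# ack scout confirms a path. All names are hypothetical.

from collections import defaultdict, deque

class BeeIPNode:
    def __init__(self):
        self.routes = {}                      # dest -> confirmed path
        self.pending = defaultdict(deque)     # dest -> queued packets
        self.sent = []                        # packets handed to the link layer

    def send(self, dest, packet):
        if dest in self.routes:
            self.sent.append((dest, packet))
        else:
            # No route yet: queue the packet and (conceptually) broadcast a scout.
            self.pending[dest].append(packet)

    def on_ack_scout(self, dest, path):
        # The ack scout confirms the path exists, so flush the waiting queue.
        self.routes[dest] = path
        while self.pending[dest]:
            self.sent.append((dest, self.pending[dest].popleft()))

node = BeeIPNode()
node.send("D", "pkt1")          # queued, scouting starts
node.send("D", "pkt2")          # also queued
node.on_ack_scout("D", ["A", "B", "D"])
node.send("D", "pkt3")          # route now known, sent immediately
print(node.sent)                # all three packets, in order
```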

 Forager: When BeeIP[15] is unable to transmit a data packet, it stores it in a local queue and starts a new scouting process for its destination. This decreases the packet loss due to incomplete routing information. Once an ack scout returns and acknowledges the existence of a path, all packets for the corresponding destination in the queue are transmitted. They are transmitted using the most important agent type of BeeIP[15], the forager. Foragers are specially crafted packets that have three important roles. Firstly, they carry (in the form of payload) the data packets from the source to the destination. Secondly, they are used to update neighbouring nodes' states and links' information, just like scouts did in the first place. Thirdly, foragers constantly monitor the path they traverse for any improvements.

5. Summary of ACO and BCO based Routing Protocols

Routing Protocol | Year | Authors                                      | SI Technique | Routing Type | Strength
AntNet           | 1998 | Gianni Di Caro, Marco Dorigo                 | ACO          | Proactive    | Scalable
ARA              | 2002 | Mesut Gunes, Udo Sorges, Imed Bouazizi       | ACO          | Reactive     | Less overhead
AntHocNet        | 2004 | G.A. Di Caro, F. Ducatelle, L.M. Gambardella | ACO          | Hybrid       | PDR, delay
BeeAdHoc         | 2005 | Horst F. Wedde, Muddassar Farooq             | BCO          | Reactive     | Energy efficient
BeeSensor        | 2007 | Muhammad Saleem, Muddassar Farooq            | BCO          | Reactive     | Energy efficient
BeeIP            | 2010 | Alexandros Giagkos, Myra S. Wilson           | BCO          | Reactive     | PDR, delay

6. CONCLUSION AND FUTURE SCOPE
The objective of this study is to briefly review the major contributions of swarm intelligence based MANET routing protocols. For this purpose, swarm intelligence based ACO and BCO protocols are reviewed and put to partial comparison. The agents in Ant Colony inspired routing algorithms communicate indirectly through the environment (stigmergy), and the agents provide positive feedback to a solution by laying pheromone on the links. Moreover, they have negative feedback through evaporation and aging mechanisms, which avoids stagnation. Bee Colony algorithms, in contrast, allow for direct agent-to-agent communication, which makes them more responsive to changes in the network. It is shown that, by using ideas taken from the simple behaviour of ants and bees, optimization and innovation in routing protocols can be achieved that help outperform the standard MANET routing protocols, because SI based protocols ensure a higher packet delivery ratio, lower overhead, lower energy consumption and lower delay. These algorithms have a lot of scope for future improvement. The future work lies in incorporating factors like signal strength into these algorithms, so that the routing algorithms will adapt in a dynamically changing topology.

REFERENCES
[1] A. Boukerche, M. Ahmad, B. Turgut, D. Turgut, "A taxonomy of routing protocols in sensor networks," in: A. Boukerche (Ed.), Algorithms and Protocols for Wireless Sensor Networks, Wiley, 2008, pp. 129-160 (Chapter 6).
[2] M. Dorigo, T. Stutzle, Ant Colony Optimization, MIT Press, Cambridge, 2004.
[3] D. Teodorovic, T. Davidovic, M. Selmic, "Bee colony optimization: The applications survey," ACM Transactions on Computational Logic, 2011, pp. 1-20.
[4] E. Bonabeau, M. Dorigo, G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, Oxford, 1999, ISBN 0-19-513159-2.
[5] E. Bonabeau, M. Dorigo, G. Theraulaz, "Inspiration for optimization from social insect behaviour," Nature, Vol. 406, No. 6, July 2000, pp. 39-42.
[6] A. Kaur, S. Goyal, "A Survey on the Applications of Bee Colony Optimization Techniques," International Journal on Computer Science and Engineering (IJCSE), ISSN: 0975-3397, Vol. 3, No. 8, August 2011.
[7] X. Wang, Y. Zhan, L. Wang, L. Jiang, "Ant Colony Optimization and Ad-hoc On-demand Multipath Distance Vector Based Routing Protocol," IEEE Fourth International Conference on Natural Computation, pp. 589-593, 2008.
[8] G.A. Di Caro, F. Ducatelle, L.M. Gambardella, "Swarm intelligence for routing in mobile ad hoc networks," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 76-83, Pasadena, USA, June 2005, IEEE Press.
[9] M. Dorigo, G.A. Di Caro, L.M. Gambardella, "Ant algorithms for discrete optimization," Artificial Life, 5(2):137-172, 1999.
[10] G.A. Di Caro, M. Dorigo, "AntNet: Distributed stigmergetic control for communications networks," Journal of Artificial Intelligence Research (JAIR), 9:317-365, 1998.
[11] M. Gunes, U. Sorges, I. Bouazizi, "ARA - the ant-colony based routing algorithm for MANETs," in Proceedings of the International Conference on Parallel Processing Workshops, 2002, pp. 79-85.
[12] G.A. Di Caro, F. Ducatelle, L.M. Gambardella, "AntHocNet: an ant-based hybrid routing algorithm for mobile ad hoc networks," in Proceedings of PPSN VIII, Volume 3242 of LNCS, pp. 461-470, Springer, 2004 (Best paper award).
[13] H.F. Wedde, M. Farooq, T. Pannenbaecker, B. Vogel, C. Mueller, J. Meth, R. Jeruschkat, "BeeAdHoc: an energy efficient routing algorithm for mobile ad-hoc networks inspired by bee behavior," in Proceedings of GECCO, pp. 153-161, 2005.
[14] M. Saleem, M. Farooq, "BeeSensor: A Bee-Inspired Power Aware Routing Protocol for Wireless Sensor Networks," Springer-Verlag, 2007.
[15] A. Giagkos, M. Wilson, "BeeIP - A Swarm Intelligence based routing for wireless ad hoc networks," Information Sciences, Elsevier, 2014.


A Comprehensive Study on the Basics of Artificial Neural Network

Neha Singla, BGIET, Sangrur (neha_singla93@yahoo.in)
Mandeep Kaur, BGIET, Sangrur (kaurmandeep796@yahoo.com)
Sandeep Kaur, BGIET, Sangrur (gill28sandeep@gmail.com)
Amandeep Kaur, BGIET, Sangrur (amehak07@gmail.com)

ABSTRACT
Artificial Neural Network (ANN) is an information processing system that is inspired by the way biological nervous systems, such as the brain, process information. Like the brain, it consists of various interconnected neurons, also known as processing elements, that exchange information to solve specific problems. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. This paper gives an overview of the Artificial Neural Network, its working, training and architecture. It also explains the applications and advantages of ANN.

Keywords: Artificial Neural Network, Feedback Network, Feed-Forward Network.
Fig 1:- Neural Network in Human Body [2]
1. INTRODUCTION
Artificial Intelligence (AI) is a combination of computer science, physiology, and philosophy. AI is the area of computer science focusing on creating machines that can engage in behaviours that humans consider intelligent. [5]
Lotfi Zadeh, the father of fuzzy logic, has classified computing into hard computing and soft computing. Computations based on Boolean algebra and other crisp numerical computations are defined as hard computing, whereas fuzzy logic, neural networks and probabilistic reasoning techniques, such as genetic algorithms and parts of learning theory, are categorized as soft computing. [6]
The study of the human brain is thousands of years old. The human brain is a collection of more than 10 billion interconnected neurons. Each neuron is a cell that uses biochemical reactions to receive, process, and transmit information. [8]
The concept of the ANN is basically introduced from the subject of biology, where the brain plays a key role in the human body. In the human body, work is done with the help of the brain. Like the brain, a neural network is just a web of interconnected neurons, millions and millions in number. With the help of these interconnected neurons all the parallel processing is done in the human body, and the human body is the best example of parallel processing. [2]
The first step toward artificial neural networks came in 1943, when Warren McCulloch, a neurophysiologist, and a young mathematician, Walter Pitts, wrote a paper on how neurons might work. They modelled a simple neural network with electrical circuits. [1]
In machine learning, Artificial Neural Networks (ANNs) are used to estimate or approximate functions that can depend on a large number of inputs that are generally unknown. These ANNs are generally presented as systems of interconnected "neurons" which can compute values from inputs, and are capable of machine learning as well as pattern and image recognition due to their adaptive nature.

1.1 What is a Neural Network?
The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided by the inventor of one of the first neurocomputers, Dr. Robert Hecht-Nielsen. He defines a neural network as:
"...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs." [16]
Artificial Neural Networks are considered a major soft-computing technology and have been extensively studied and applied during the last two decades. The most general applications where neural networks are most widely used for problem solving are pattern recognition, data analysis, control and clustering. Artificial Neural Networks have abundant features, including high processing speeds and the ability to learn the solution to a problem from a set of examples. [14]
In an ANN, simple artificial nodes, known as "neurons", "neurodes", "processing elements", "units" or "computational elements", are connected together to form a network which mimics a biological neural network. These computational


elements or nodes are connected via weights that are typically adapted during use to improve performance. [7]

1.2 Working of ANN
The working of an Artificial Neural Network depends on different types of layers. The whole processing is done between these layers. There are three different types of ANN layers. These are as follows:
1. Input Layer: The layer of input neurons receives the data either from input files or directly from electronic sensors in real-time applications. [1]
2. Hidden Layer: Between the input and output layers there can be many hidden layers. These internal hidden layers contain many of the neurons in various interconnected structures. The inputs and outputs of each of these internal hidden neurons simply go to other neurons. All functions and algorithms are performed in this layer, which then sends the final result to the neurons of the output layer.
3. Output Layer: The output layer sends information directly to the outside world, to a secondary computer process, or to other devices such as a mechanical control system. [1]

Fig 2:- General Structure of ANN [1]

Fig 3:- Different Types of Layers in ANN [16]

1.3 Architecture of ANN
The neurons are usually arranged in a series of layers, bounded by input and output layers, encompassing a variable number of hidden layers, connected in a structure which depends on the complexity of the problem to be solved. Connections usually feed from the input to the output layers (a feed-forward network), although feedback connections from the hidden layer to the input layer are also possible. [11] There are two types of topologies in the architecture of ANN.

Fig 4:- ANN Architecture [2]
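The layer-by-layer flow described in Section 1.2 — inputs enter, each hidden neuron forms a weighted sum and applies an activation function, and the output layer emits the result — can be sketched numerically. The weights, biases and the choice of a sigmoid activation below are illustrative assumptions, not values taken from the paper.

```python
# A minimal feed-forward pass through input -> hidden -> output layers.
# Weights, biases and the sigmoid activation are illustrative choices.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron: weighted sum of its inputs plus a bias, then activation.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, hidden_w, hidden_b, out_w, out_b):
    hidden = layer(inputs, hidden_w, hidden_b)   # hidden layer
    return layer(hidden, out_w, out_b)           # output layer

# Toy network: 2 inputs, 2 hidden neurons, 1 output neuron.
hidden_w = [[0.5, -0.4], [0.3, 0.8]]
hidden_b = [0.1, -0.2]
out_w = [[1.0, -1.0]]
out_b = [0.0]

y = forward([1.0, 0.5], hidden_w, hidden_b, out_w, out_b)
print(y)   # a single activation between 0 and 1
```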


1. Feed-forward: In feed-forward networks, signals are processed from a set of input units at the bottom to output units at the top, layer by layer. [4] This allows only one-directional signal flow. In this architecture, the processing elements or neurons cannot send their results in the backward direction. Most feed-forward neural networks are organised in layers.

Fig 5:- ANN Feed-forward Topology [4]

2. Feedback: On the other hand, in feedback networks the synapses are bidirectional. Activation continues until a fixed point has been reached, reminiscent of a statistical mechanics system. [4]

Fig 6:- ANN Feedback Topology [1]

1.4 Why use Neural Networks over Conventional Computers?
Neural networks take a different approach to problem solving than conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don't exactly know how to do. Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example. They cannot be programmed to perform a specific task. The disadvantage is that, because the network finds out how to solve the problem by itself, its operation can be unpredictable. On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small unambiguous instructions. These instructions are then converted to a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault. Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks that are more suited to an algorithmic approach, like arithmetic operations, and tasks that are more suited to neural networks. Even more, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency. [1]
The other characteristics that are not present in conventional computers are:
 massive parallelism,
 distributed representation and computation,
 learning ability,
 generalization ability,
 adaptivity,
 inherent contextual information processing,
 fault tolerance, and
 low energy consumption. [10]

1.5 Trainings of ANN
Once a network has been structured for a particular application, that network is ready to be trained. To start this process, the initial weights are chosen randomly. Then the training, or learning, begins. There are several approaches to training:
1. Supervised: In supervised training, both the inputs and the outputs are provided. The network then processes the inputs and compares its resulting outputs against the desired outputs. Errors are then propagated back through the system, causing the system to adjust the weights which control the network. The set of data which enables the training is called the "training set". During the training of a network, the same set of data is processed many times as the connection weights are ever refined. The current commercial network development packages provide tools to monitor how well an artificial neural network is converging on the ability to predict the right answer. These tools allow the training process to go on for days, stopping only when the system reaches some statistically desired point, or accuracy. However, some networks never learn. This could be because the input data does not contain the specific information from which the desired output is derived. Networks also don't converge if there is not enough data to enable complete learning. [1] The back-propagation algorithm belongs to this category. [15]
2. Unsupervised: Unsupervised training is used to perform some initial characterization of the inputs. In unsupervised training, the network is provided with inputs but not with desired outputs. The system itself must then decide what features it will use to group the input data. This is often referred to as self-organization or adaptation. At the present time, unsupervised learning is not well understood. [1] It does not require a correct answer associated with each input pattern in the training set. It explores the underlying structure in the data, or correlations between patterns in the data, and organizes patterns into categories from these correlations. The Kohonen algorithm belongs to this category. [15]
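The supervised procedure described in Section 1.5 — present the inputs, compare the resulting outputs against the desired outputs, and propagate the errors back to adjust the weights — is what the back-propagation algorithm does. The sketch below shows its smallest case, training a single sigmoid neuron by gradient descent; the training set (logical OR), learning rate and epoch count are illustrative assumptions.

```python
# Sketch of supervised training with error back-propagation on a single
# sigmoid neuron (the smallest case of back-propagation). The training
# set, learning rate and epoch count are illustrative.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny "training set": learn the logical OR of two inputs.
training_set = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [0.0, 0.0]
b = 0.0
lr = 0.5                        # learning rate (illustrative)

for epoch in range(2000):       # the same data is processed many times
    for inputs, target in training_set:
        out = sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
        err = target - out                   # compare with the desired output
        grad = err * out * (1 - out)         # sigmoid derivative term
        w = [wi + lr * grad * xi for wi, xi in zip(w, inputs)]
        b += lr * grad                       # adjust the weights

predictions = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + b))
               for inputs, _ in training_set]
print(predictions)   # [0, 1, 1, 1] once training has converged
```

Note how the loop mirrors the description above: the same training set is processed many times, and the weights are refined a little on every pass.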

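The unsupervised (Kohonen-style) approach described above receives no desired outputs; it groups inputs by their underlying structure. A heavily simplified, one-dimensional sketch of this competitive learning is shown below: the unit closest to each input "wins" and is nudged towards it, so the units drift into the data's natural clusters. The data, unit count and learning rate are illustrative assumptions.

```python
# Minimal sketch of Kohonen-style competitive learning: each input
# pulls the closest "unit" towards itself, so units settle into the
# natural clusters of the data. All parameters are illustrative.

data = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]    # two obvious clusters
units = [0.4, 0.6]                          # initial unit positions
lr = 0.3

for _ in range(50):
    for x in data:
        # Competition: find the unit closest to the input.
        winner = min(range(len(units)), key=lambda i: abs(units[i] - x))
        # Adaptation: move only the winning unit towards the input.
        units[winner] += lr * (x - units[winner])

print(sorted(units))   # roughly one unit near each cluster centre
```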

3. Hybrid: This method combines supervised and unsupervised training. Part of the weights are determined through supervised learning and the others are obtained through unsupervised learning. [15]
4. Reinforced: In this method a trainer, though available, does not present the expected answer, but only indicates whether the computed output is correct or incorrect. The information provided helps the network in its training process. A reward is given for a correct answer and a penalty for a wrong answer. However, reinforced training is not one of the popular forms of training. [12]

1.6 Applications
The various real-time applications of Artificial Neural Networks are as follows:
1. Function approximation, or regression analysis, including time series prediction and modelling.
2. Call control: answer an incoming call (speaker-ON) with a wave of the hand while driving.
3. Data processing, including filtering, clustering, blind signal separation and compression.
4. Application areas of ANNs include system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition, etc.), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, and data mining (or knowledge discovery in databases, "KDD"). [1]
5. A classic application for ANNs is image recognition. A network that can classify different standard images can be used in several areas:
 Quality assurance, by classifying whether a metal welding holds the quality standard.
 Medical diagnostics, by classifying x-ray pictures for tumour diagnosis.
 Detective tools, by matching fingerprints against a database of suspects. [13]
6. Other usages of ANNs are in:
 Airline Security Control.
 Investment Management and Risk Control.
 Prediction of Thrift Failures.
 Prediction of Stock Price Index.
 OCR Systems.
 Industrial Process Control.
 Data Validation.
 Risk Management.
 Target Marketing.
 Sales Forecasting.
 Customer Research.
 Prediction/Forecasting.
 Optimization.
 Content-addressable Memory. [2,15]

1.7 Advantages
There are several advantages of ANNs in today's world. These are as follows:
1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
2. Self-Organisation: An ANN can create its own organisation or representation of the information it receives during learning time.
3. Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
4. Pattern recognition: This is a powerful technique for harnessing the information in the data and generalizing about it. Neural nets learn to recognize the patterns which exist in the data set.
5. The system is developed through learning rather than programming. Neural nets teach themselves the patterns in the data, freeing the analyst for more interesting work.
6. Neural networks are flexible in a changing environment. Although neural networks may take some time to learn a sudden drastic change, they are excellent at adapting to constantly changing information.
7. Neural networks can build informative models whenever conventional approaches fail. Because neural networks can handle very complex interactions, they can easily model data which is too difficult to model with traditional approaches such as inferential statistics or programming logic.
8. Performance of neural networks is at least as good as classical statistical modelling, and better on most problems. Neural networks build models that are more reflective of the structure of the data in significantly less time. [1]
9. Distributed memory: The network does not store information in a central memory. Information is stored as patterns throughout the network structure. The state of the neurons represents a short-term memory, as it may change with the next input vector. The values in the weight matrix (the connections) form a long-term memory and are changeable only on a longer time basis. Gradually, short-term memory will move into long-term memory and modify the network as a function of the input experience. [9]

2. CONCLUSION
The computing world has a lot to gain from Artificial Neural Networks. Their ability to learn by example makes them flexible and powerful. In this paper we discussed the Artificial Neural Network: its working, architecture, training, advantages and applications. By studying ANNs we conclude that, as technology develops day by day, the need for Artificial Intelligence is increasing because of its major advantage of parallel processing, which conventional computers cannot provide.

REFERENCES
[1] Sonali B. Maind & Priyanka Wankar, "Research Paper on Basic of Artificial Neural Network", International Journal on Recent and Innovation Trends in Computing & Communication, Vol. 2, Issue: 1, ISSN: 2321-8169, pp. 96-100.
[2] Vidushi Sharma, Sachin Rai & Anurag Dev, "A Comprehensive Study of Artificial Neural Networks", International Journal of Advanced Research in Computer


Science & Software Engineering, Oct. 2012, Vol. 2, Issue: 10, ISSN: 2277-128X, pp. 278-284.
[3] Bogdan M. Wilamowski, "Neural Network Architecture & Learning", ICIT 2003, IEEE, 0-7803-7852-0, pp. 1-12.
[4] Carsten Peterson & Thorsteinn Rognvaldsson, "An Introduction to Artificial Neural Networks", Sweden, pp. 113-170.
[5] Jyoti Singh & Pritee Gupta, "Advance Applications of Artificial Intelligence and Neural Networks: A Review", pp. 1-4.
[6] D. P. Salapurkar, "Introduction to technique of Soft Computing: Artificial Neural Networks", International Journal of Engineering & Computer Science, ISSN: 2319-7242, Vol. 4, Issue: 1, Jan. 2015, pp. 9868-9873.
[7] Richard P. Lippmann, "An introduction to computing with neural nets", IEEE ASSP Magazine, April 1987, pp. 4-22.
[8] Ajith Abraham, "Artificial Neural Networks", Stillwater, OK, USA, 2005.
[9] Eldon Y. Li, "Artificial Neural Networks and their Business Applications", Taiwan, 1994.
[10] Anil K. Jain, Jianchang Mao & K. M. Mohiuddin, "Artificial Neural Networks: A Tutorial", Michigan State University, 1996.
[11] A. S. Miller, B. H. Blott & T. K. Hames, "Review of neural network applications in medical imaging and signal processing", Sept. 1992, pp. 449-450.
[12] Girish Kumar Jha, "Artificial Neural Networks and its applications", pp. V41-V49.
[13] Fiona Nielsen, "Neural Networks - algorithms and applications", Niels Brock Business College, Dec. 2001, pp. 1-19.
[14] Koushal Kumar & Gour Sundar Mitra Thakur, "Advanced Applications of Neural Networks and Artificial Intelligence: A Review", IJITCS, ISSN: 2074-9007, Vol. 4, No. 6, June 2012, pp. 57-68.
[15] http://www.cse.unr.edu/~bebis/MathMethods/NNs/lecture.pdf
[16] http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.htm


A Survey of Routing Protocols in Mobile Ad-Hoc Network

Preet Kamal Sharma
Assistant Professor, Department of Computer Applications
Post Graduate Govt. College for Girls, Sector 42, Chandigarh (Punjab University)
preetkamal20@gmail.com

R. K. Bansal
Dean Research, Gurukashi University, Talwandi Sabo, Bathinda
bansalrajk2009@gmail.com

ABSTRACT
In recent years the demand for wireless communication has risen, due to which mobile ad hoc networks, also called MANETs, have become very popular and a lot of research is being done on them. An ad hoc network is a self-configuring network of wireless links connecting mobile nodes. These nodes may be routers and/or hosts. Each node or mobile device is equipped with a transmitter and receiver. Such networks can take different forms in terms of topologies and of the devices used. Since the wireless links developed in this type of network are more error prone and can easily lose their signal because of the mobility of nodes, the selection of an optimal routing protocol in a MANET is a very critical task. DSR, AODV and TORA are some of the well-known routing protocols suitable for MANETs. This paper provides an overview of different proposed routing protocols and also provides a comparison between them.

Keywords: MANET, AODV, DSR, TORA.

1. INTRODUCTION
Wireless networks can be classified into two major categories: infrastructure networks and ad hoc networks. Infrastructure networks are those in which the base stations are fixed; as a node goes out of the range of one base station, it gets into the range of another. One drawback of using an infrastructure network is that it carries the overhead of maintaining long, complex routing tables. See fig. 1 below, which gives a better description of an infrastructure network.
An ad hoc network is usually thought of as a collection of wireless nodes maintaining a temporary network without any fixed infrastructure. Hence the topology of the network is much more dynamic, and the changes are often unpredictable, in contrast to the Internet, which is an infrastructure network. An ad hoc network is a decentralized type of wireless network. The network is ad hoc because it does not rely on a pre-existing infrastructure, such as routers in wired networks or any centralized access points in infrastructure wireless networks. Ad hoc networks do not have a fixed topology or a central coordination point. In a MANET, each node acts as both host and router. This flexibility of allowing nodes to join, leave and transfer data makes an ad hoc network more critical and error prone. Following are some of the challenges faced by MANETs: bandwidth constraints, variable capacity links, limited physical security, dynamic network topology and frequent routing updates.

2. ROUTING PROTOCOLS
Routing protocols define a set of rules which are responsible for the delivery of a message from a source node to a destination node in a network. A large number of routing protocols have been suggested for ad hoc networks. These protocols find a route for packet delivery and deliver the packet to the correct destination. The study of various aspects of routing protocols has been an active area of research for many years. Ad hoc network routing protocols fall into two classes: a) table-driven (proactive) routing and b) on-demand (reactive) routing.

Table Driven or Proactive routing: This type of protocol maintains fresh lists of destinations and their routes by periodically distributing routing tables throughout the network. Here each node maintains one or more tables containing routing information to every other node in the network. All nodes keep updating these tables to maintain the latest information about the network. Therefore, if a route already exists before traffic arrives, transmission is possible without any delay. Some of the existing table-driven or proactive protocols are: Dynamic Destination-Sequenced Distance Vector (DSDV), Wireless Routing Protocol (WRP), Optimized Link State Routing Protocol (OLSR) and Cluster Gateway Switch Routing (CGSR). The main disadvantages of


such kind of algorithms are: i) the amount of data required for maintenance, and ii) slow reaction to restructuring and failures.

i) Destination-Sequenced Distance-Vector (DSDV)

DSDV is a table-driven routing algorithm for ad hoc mobile networks based on the Bellman–Ford algorithm. It was developed by C. Perkins and P. Bhagwat in 1994. The algorithm was basically developed to solve the problem of routing loops. Each entry in the routing table contains a sequence number; the sequence numbers are generally even if a link is present, else an odd number is used. The number is generated by the destination, and the emitter needs to send out the next update with this number. Routing information is distributed between nodes by sending full dumps infrequently and smaller incremental updates more frequently.

ii) Wireless Routing Protocol (WRP)

WRP comes under the category of path-finding algorithms, known as the set of distributed shortest-path algorithms that calculate paths using information about the length and the last hop of the shortest path to each destination. WRP reduces the number of cases in which a temporary routing loop can occur. For the purpose of routing, each node maintains four structures: 1. a distance table, 2. a routing table, 3. a link-cost table and 4. a message retransmission list. WRP uses periodic update message transmissions to the neighbours of a node. The nodes in the response list should send acknowledgments. If there is no change since the last update, the nodes in the response list should send an idle Hello message to ensure connectivity.

iii) Optimized Link State Routing Protocol (OLSR)

OLSR is an IP routing protocol optimized for mobile ad hoc networks, which can also be used on other wireless ad hoc networks. OLSR is a proactive link-state routing protocol, which uses hello and topology control messages to discover and then disseminate link-state information throughout the mobile ad hoc network. Individual nodes use this topology information to compute next-hop destinations for all nodes in the network using shortest-hop forwarding paths.

iv) Cluster Gateway Switch Routing (CGSR)

CGSR considers a clustered mobile wireless network instead of a "flat" network. To structure the network into separate but interrelated groups, cluster heads are elected using a cluster-head selection algorithm. By forming several clusters, this protocol achieves a distributed processing mechanism in the network. However, one drawback of this protocol is that frequent change or re-election of cluster heads can be resource hungry and can affect routing performance.

On Demand or Reactive Protocols: These protocols search for a route on demand by flooding the network with Route Request packets. A node initiates route discovery only when it wants to send packets to its destination. The process is completed once a route is determined or all possible cases have been examined. Once a route has been established, it is maintained by a route maintenance process until either the destination becomes inaccessible along every path from the source or the route is no longer desired. In reactive protocols, nodes maintain routes only to active destinations; a route search is needed for every unknown destination. The main disadvantages of such algorithms are: i) high latency in route finding, and ii) excessive flooding, which can lead to network clogging. Some of the existing On Demand routing protocols are: Cluster Based Routing Protocol (CBRP), Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), Temporally Ordered Routing Algorithm (TORA), Signal Stability Routing (SSR) and Location Aided Routing (LAR).

Every On Demand routing protocol has its own advantages and limitations. In this paper we concentrate on the three main routing protocols which are equally likely competitors of each other. The three protocols best suited for ad hoc networks are DSR, AODV and TORA.

i) Dynamic Source Routing (DSR)

DSR is designed for use in multi-hop wireless ad hoc networks of mobile nodes. DSR is an ad hoc routing protocol based on source routing rather than table-based routing: the protocol is source-initiated rather than hop-by-hop. DSR is based on the link-state algorithm, in which the source initiates route discovery on demand. DSR was designed for multi-hop networks of small diameter. It is a beaconless protocol: no HELLO messages are exchanged between nodes to notify them of their neighbours in the network. DSR has a unique advantage by virtue of source routing. As the route is part of the packet itself, routing loops, whether short-lived or long-lived, cannot be formed, since they can be immediately detected and eliminated. This property opens the protocol up to a variety of useful optimizations. It must be noted that neither AODV nor DSR assures the shortest path. If the destination alone can respond to route requests and the source node is always the initiator of the route request, the initial route may be the shortest one.

ii) Ad-Hoc On-Demand Distance Vector Routing (AODV)

AODV is a variation of the Destination-Sequenced Distance-Vector (DSDV) routing protocol and draws on both DSDV and DSR. In AODV, the network is silent until a connection is needed. At that point the network node that needs a connection broadcasts a request for connection. Other AODV nodes forward this message and record the node that they heard it from, creating an explosion of temporary routes back to the needy node. When a node receives such a message and already has a route to the desired node, it sends a message backwards through a temporary route to the requesting node. AODV aims to minimize the number of system-wide broadcasts as far as possible. It does not maintain routes from every node to every other node in the network; rather, routes are discovered as and when needed and maintained only as long as they are required.
The key steps of this algorithm for the establishment of unicast routes are: a) Route Discovery, b) Expanding Search Technique, c) Setting up of the Forward Path and d) Route Maintenance.

Advantages

The main benefit of this protocol is that routes are established on demand and destination sequence numbers are used to find the latest route to the destination. The connection setup delay is lower.
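The AODV-style discovery just described can be sketched as a breadth-first flood. This is our own illustration, not the authors' code: each node that forwards the route request (RREQ) remembers which neighbour it heard it from, and the route reply (RREP) walks those reverse-path pointers back to the source.

```python
# Minimal sketch of on-demand route discovery: flood an RREQ, record
# reverse-path pointers, and let the RREP retrace them to the source.
from collections import deque

def route_discovery(neighbours, source, dest):
    """BFS flood of an RREQ; returns the discovered route source -> dest."""
    heard_from = {source: None}          # reverse-path pointers
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == dest:                 # destination answers with an RREP
            path, hop = [], dest
            while hop is not None:       # RREP follows the reverse path
                path.append(hop)
                hop = heard_from[hop]
            return path[::-1]
        for nxt in neighbours.get(node, []):
            if nxt not in heard_from:    # each node rebroadcasts only once
                heard_from[nxt] = node
                queue.append(nxt)
    return None                          # destination unreachable

topology = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["D"]}
print(route_discovery(topology, "S", "D"))  # ['S', 'A', 'C', 'D']
```

Note how no routing state exists until the request is made, which is exactly the trade-off against the table-driven protocols above: lower maintenance traffic, but a discovery delay on the first packet.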


Limitations

The first limitation of this protocol is that intermediate nodes can lead to inconsistent routes if the source sequence number is very old and the intermediate nodes have a higher, but not the latest, destination sequence number, thereby holding stale entries. Also, multiple Route Reply packets in response to a single Route Request packet can lead to heavy control overhead. Another limitation of AODV is unnecessary bandwidth consumption due to periodic transmissions.

iii) Temporally Ordered Routing Algorithm (TORA)

The main feature of TORA is that the control messages are localized to a very small set of nodes near the occurrence of a topological change. To achieve this, the nodes maintain routing information about adjacent nodes. TORA defines a parameter termed height, a measure of the responding node's distance to the required destination node. In the route discovery phase, this parameter is returned to the querying node. TORA attempts to achieve a high degree of scalability using a "flat", non-hierarchical routing algorithm. In its operation the algorithm attempts to suppress, to the greatest extent possible, the generation of far-reaching control message propagation. To achieve this, TORA does not use a shortest-path solution, an approach which is unusual for routing algorithms of this type. TORA builds and maintains a Directed Acyclic Graph (DAG) rooted at a destination. No two nodes may have the same height. Information may flow from nodes with higher heights to nodes with lower heights; information can therefore be thought of as a fluid that may only flow downhill. The TORA protocol has three basic functions: a) Route Creation, b) Route Maintenance and c) Route Erasure.

Advantages

The main advantage of TORA is that multiple routes between any source-destination pair are supported by this protocol. Therefore, failure or removal of any of the nodes is quickly resolved without source intervention by switching to an alternate route.

Disadvantage

The main disadvantage of TORA is that it depends on synchronized clocks among the nodes in the ad hoc network. The protocol's dependence on intermediate lower layers for certain functionality assumes that link status sensing, neighbour discovery, in-order packet delivery and address resolution are all readily available.

Performance Metrics

There exist various metrics that can be used to compare reactive routing protocols; most of them are qualitative. The following metrics have been considered to make the comparative study of these routing protocols through simulation.
1) Routing overhead: This metric describes how many routing packets for route discovery and route maintenance need to be sent in order to propagate the data packets.
2) Average Delay: This metric represents the average end-to-end delay and indicates how long it takes for a packet to travel from the source to the application layer of the destination. It is measured in seconds.
3) Throughput: This metric represents the total number of bits forwarded to higher layers per second, measured in bps. It can also be defined as the total amount of data a receiver actually receives from the sender divided by the time taken by the receiver to obtain the last packet.
4) Media Access Delay: The time a node takes to access the medium to start a packet transmission is called the media access delay. The delay is recorded for each packet when it is sent to the physical layer for the first time.
5) Packet Delivery Ratio: The ratio between the number of data packets sent out and the number of data packets actually received.
6) Path Optimality: This metric can be defined as the difference between the path actually taken and the best possible path for a packet to reach its destination.
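Three of these metrics can be computed directly from a per-packet trace. The sketch below is our own illustration (trace values invented), with times in seconds and sizes in bits, following the definitions given above.

```python
# Packet delivery ratio, average end-to-end delay, and throughput
# computed from a toy per-packet trace.

sent = 5                                     # data packets handed to routing
received = [                                 # (send_time, recv_time, bits)
    (0.0, 0.4, 4096),
    (0.1, 0.6, 4096),
    (0.2, 0.9, 4096),
    (0.5, 1.0, 4096),
]

pdr = len(received) / sent                   # metric 5: packet delivery ratio
avg_delay = sum(r - s for s, r, _ in received) / len(received)   # metric 2, s
last_recv = max(r for _, r, _ in received)
throughput = sum(b for _, _, b in received) / last_recv          # metric 3, bps

print(pdr, round(avg_delay, 3), round(throughput, 1))
```

One dropped packet out of five gives a delivery ratio of 0.8; throughput divides the total received bits by the arrival time of the last packet, matching the second definition of metric 3.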
3. CONCLUSIONS

In this paper a number of routing protocols for MANETs, broadly categorized as proactive and reactive, have been discussed. A comparative study of reactive and proactive routing protocols has been presented in the form of a table, as an attempt towards a comprehensive performance comparison of three commonly used mobile ad hoc routing protocols (DSR, AODV and TORA). There are various shortcomings in the different routing protocols, and it is difficult to choose a routing protocol for a given situation as there are trade-offs between the various protocols. Various challenges still need to be met before these networks see widespread use in the future.

ACKNOWLEDGEMENT

Our special thanks to the reviewers and editors for their valuable suggestions and expert comments that helped to improve the paper.

REFERENCES

[1] C. K. Toh, Ad Hoc Mobile Wireless Networks, Prentice Hall Publishers, 2002.
[2] Robinpreet Kaur & Mritunjay Kumar Rai, A Novel Review on Routing Protocols in MANETs, Undergraduate Academic Research Journal (UARJ), ISSN 2278-1129, Volume 1, Issue 1, 2012.
[3] Ammar Odeh, Eman AbdelFattah and Muneer Alshowkan, Performance Evaluation of AODV and DSR Routing Protocols in MANET Networks, International Journal of Distributed and Parallel Systems (IJDPS), Vol. 3, No. 4, July 2012.
[4] Mina Vajed Khiavi, Shahram Jamali, Sajjad Jahanbakhsh Gudakahriz, Performance Comparison of AODV, DSDV, DSR and TORA Routing Protocols in MANETs, International Research Journal of Applied and Basic Sciences, Vol. 3 (7), 1429-1436, 2012, ISSN 2251-838X, Victor Quest Publications.
[5] Sachin Dnyandeo Ubarhande, Performance Evolution of AODV and DSR Routing Protocols in MANET Using NS2, International Journal of Scientific & Engineering Research, Volume 3, Issue 5, May 2012, ISSN 2229-5518.
[6] G. Vijaya Kumar, Y. Vasudeva Reddyr, Dr. M. Nagendra, Current Research Work on Routing Protocols for MANET: A Literature Survey, International Journal on Computer Science and Engineering, Vol. 02, No. 03, 2010, 706-713.
[7] Tarek Sheltami and Hussein Mouftah, "Comparative study of on demand and Cluster Based Routing protocols in MANETs", IEEE conference, pp. 291-295, 2003.
[8] Dr. Kamaljit I. Lakhtaria, Analyzing Reactive Routing Protocols in Mobile Ad Hoc Networks, Int. J. Advanced


Networking and Applications, Volume 03, Issue 06, Pages 1416-1421 (2012), ISSN 0975-0290.
[9] Johnson, D. and Maltz, D. A., "Dynamic source routing in ad hoc wireless networks", in Mobile Computing (Imielinski and H. Korth, eds.), Kluwer Academic Publishers, 199.
[10] Perkins, C. E., Royer, E. M., Chakeres, I. D. (2003), Ad hoc On-Demand Distance Vector (AODV) Routing, IETF Draft, October 2003.
[11] Toh, C.-K. (1996), A Novel Distributed Routing Protocol to Support Ad Hoc Mobile Computing, Proceedings of the 1996 IEEE 15th Annual International Phoenix Conference on Computers and Communications: 480-486.
[12] Elizabeth M. Royer (University of California, Santa Barbara) and Chai-Keong Toh (Georgia Institute of Technology), "A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks", IEEE Personal Communications, pp. 46-55, April 1999.
[13] Krishna Gorantala, "Routing Protocols in Mobile Ad-hoc Networks", Master's thesis in computer science, pp. 1-36, 2006.
[14] Perkins, C. E., Bhagwat, P. (1994), Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers, Proceedings of ACM SIGCOMM 1994: 234-244.
[15] Cheng, C., Riley, R., Kumar, S. P. R., Garcia-Luna-Aceves, J. J. (1989), A Loop-Free Extended Bellman-Ford Routing Protocol Without Bouncing Effect, ACM SIGCOMM Computer Communications Review, Volume 19, Issue 4: 224-236.
[16] J. Broch, D. A. Maltz, D. B. Johnson, Y.-C. Hu, J. Jetcheva, "A performance comparison of multi-hop wireless ad-hoc network routing protocols", in Proceedings of the 4th International Conference on Mobile Computing and Networking (ACM MOBICOM '98), October 1998, pages 85-97.
[17] Md. Golam Kaosar, Hafiz M. Asif, Tarek R. Sheltami, Ashraf S. Hasan Mahmoud, "Simulation-Based Comparative Study of On Demand Routing Protocols for MANET", available at http://www.lancs.ac.uk
[18] Per Johansson, Tony Larsson, Nicklas Hedman, Bartosz Mielczarek, "Routing protocols for mobile ad-hoc networks - a comparative performance analysis", in Proceedings of the 5th International Conference on Mobile Computing and Networking (ACM MOBICOM '99), August 1999, pages 195-206.
[19] P. Chenna Reddy, Dr. P. Chandrasekhar Reddy, "Performance Analysis of Adhoc Network Routing Protocols", Academic Open Internet Journal, ISSN 1311-4360, Volume 17, 2006.
[20] R. Misra, C. R. Manda, "Performance Comparison of AODV/DSR On-Demand Routing Protocols for Ad Hoc Networks in Constrained Situation", IEEE ICPWC 2005.
[21] S. Gowrishankar, T. G. Basavaraju, M. Singh, Subir Kumar Sarkar, "Scenario based Performance Analysis of AODV and OLSR in Mobile Ad hoc Networks", available at http://www.ijcim.th.org


The Beginning of Statistical Machine Translation System to convert Dogri into Hindi

Manu Raj Moudgil, Research Scholar (Ph.D.), JJT University, Jhunjhunu, manu.moudgil@gmail.com
Preeti Dubey, Assistant Professor, Central University of Jammu, Jammu, preetidubey2000@yahoo.com

ABSTRACT

Research in the field of machine translation has picked up pace in the last few years. Statistical machine translation is the latest approach to MT, using which many machine translation systems are being developed, producing excellent results. This paper provides a brief introduction to SMT and the various tools required for developing an SMT system. Some systems that have been developed using this approach are also discussed. This paper presents the idea and the work plan for developing an SMT system to convert Dogri into Hindi, which will be the first system for the language pair.

Keywords: Statistical machine translation, Decoding, Translation, Language model

1. MACHINE TRANSLATION

Machine Translation (MT) is a subfield of artificial intelligence which focuses on automatically translating one natural language, called the source language, into another natural language, called the target language. It can be applied to both text and speech. MT was the first computer-based application related to natural language processing [1]. MT research dates back to the 1930s, when the first patents for translation were filed, to develop dictionaries. The use of computers for translation was first discussed by Warren Weaver in 1949 [2]. In 1951, the first full-time researcher, Yehoshua Bar-Hillel, was appointed at MIT for research on machine translation. MT research geared up in the 1950s; that is the first generation of translation, using the direct approach, where the focus of translation was mainly on dictionaries. The first machine translation systems were developed using dictionaries, i.e. they were based on word-to-word translation. The second generation of translation came in the mid-60s; these systems were based on the transfer approach. Since the late 80s, corpus-based research methods have emerged [3]. Many systems have been developed based on the above mentioned approaches. Some systems developed using SMT are listed below:

• Google Translate (2007) uses a statistical machine translation approach. It was created by Franz-Josef Och, who is now the head of Google's machine translation group. Google Translate initially used SYSTRAN for its translation, till 2007. Currently, it provides translation among 51 language pairs. It includes only one Indian language, Hindi. The accuracy of translation is good enough to understand the translated text. [4]

• English to Malayalam Translation (2008) (A Statistical Approach): The system uses statistical models for undertaking translation. A monolingual corpus of Malayalam is used, and a bilingual corpus is used for English. This system uses various pre-processing techniques for undertaking translation. The structural difference between the English-Malayalam pair is resolved by applying order conversion rules. The system verifies the translation using the BLEU, F-measure and WER evaluation metrics. [5]

• Machine Translation at IBM India Research Lab: IBM's India Research Lab has been working on SMT systems for Indian languages. IBM Research has developed several NLP tools which can be used to speed up the development of SMT systems and improve the quality of translation. They have developed a framework for SMT for Indian languages and have built a Hindi-English SMT system and an English-Hindi SMT system. [6]

• Microsoft Bing Translator (2009) is a service provided by Microsoft as part of its Bing services which allows users to translate texts or entire web pages into different languages. All translation pairs are powered by Microsoft Translation (previously Systran), developed by Microsoft Research, as its backend translation software. The translation service also uses a statistical machine translation strategy to some extent. [7]

• The EILMT System aims to design and deploy a machine translation system from English to Indian languages in the tourism and healthcare domains. The project is funded by the Department of Information Technology, MCIT, Government of India. It uses a statistical model and its primary objective is to initially build an English-Hindi translation system capable of translating free-flow text and gradually adapt it to other Indian language pairs as well. The consortium members of the EILMT system are C-DAC Mumbai, IISc Bangalore, IIIT Hyderabad, C-DAC Pune, IIT Mumbai, Jadavpur University Kolkata, Utkal University Bangalore, Amrita University Coimbatore and Banasthali Vidyapeeth, Banasthali. [8]

• Unnikrishhanan et al. (2010) [9] explained the development of a machine translation system using the statistical approach for translating English to South Dravidian languages like Malayalam and Kannada. The tools used for the development of the system are: SRILM for creating the language model, GIZA++ for training the translation model and the MOSES decoder for translating English to Malayalam (or Kannada). Other tools used at various levels of the translation process are: the Stanford statistical parser, Roman-to-Unicode and Unicode-to-Roman converters, Malayalam, Kannada and English morphological analyzers, Malayalam and Kannada morphological generators, and a transfer rule file. The architecture of the system is:


Fig 1: Architecture of English to Dravidian Language SMT System

The main ideas implemented and proven very effective for the English to South Dravidian languages SMT system are: (i) reordering the English source sentence according to Dravidian syntax, (ii) using root-suffix separation on both English and Dravidian words and (iii) use of morphological information for improving the quality of translation. The BLEU score using the baseline system for English to Malayalam is 15.9 (and for English to Kannada, 15.4). When the syntax and morphological information was incorporated into the baseline system, the BLEU scores show considerable improvement, i.e. 24.9 and 24.5 respectively for the above systems. The low BLEU scores are due to the small training corpus size. The performance evaluation shows that when reordering and morphological information was applied, the BLEU score increased by approximately 9.0.

2. RESEARCH PROPOSAL

The objective of this research work is to develop a machine translation system for automatic conversion of text from Dogri into Hindi using the statistical approach. Dogri and Hindi are closely related languages. Both are written from left to right in the Devanagari script. The close relationship between Hindi and Dogri is established by a study by Dr. Preeti Dubey, Dr. Devanand et al. [10]; the authors concluded in their paper that Hindi and Dogri are closely related languages. There is no existing work on translating Dogri into any language; the only available system is a Hindi to Dogri MTS developed using the direct approach of translation [11]. A brief introduction to SMT is given below.

3. STATISTICAL MACHINE TRANSLATION

Early efforts based on this idea are example-based translation systems that have been built especially in Japan since the 1980s. These systems try to find a sentence similar to the input sentence in a parallel corpus, and make the appropriate changes to its stored translation. In the late 1980s, the idea of statistical machine translation was born in the labs of IBM Research in the wake of successes of statistical methods in speech recognition. By modelling the translation task as a statistical optimization problem, the Candide project put machine translation on a solid mathematical foundation. New dimensions of machine translation were explored, which included Statistical Machine Translation (SMT). It was introduced by IBM researchers in a workshop sponsored by the US National Science Foundation and Johns Hopkins University's Centre for Language and Speech Processing [12, 13].
The statistical MT model takes the view that every sentence in the target language is a translation of the source language sentence with some probability. The best translation, of course, is the sentence that has the highest probability. The key activities in statistical MT are: availability of a large parallel corpus, estimating the probabilities of translation, and efficiently finding the sentence with the highest probability. Other problems include sentence alignment, compound words, idiom translation, morphology and out-of-vocabulary words.

3.1 Overview of Proposed Statistical MT

The system proposed in this research paper is an SMT system for the Dogri-Hindi language pair. Suppose we have a sentence, f, given in Dogri and we want to find a good Hindi translation, e. There are many possible translations of f into Hindi, and different translators will have different opinions about the best translation. We can model these differences of opinion with a probability distribution P(e|f) over possible translations e, given that the Dogri text was f. A reasonable way of choosing the "best" translation is to choose the e which maximizes the conditional probability P(e|f).
The problem with this strategy is that we do not know the conditional probability P(e|f). To solve this problem, suppose we have a Dogri-Hindi parallel corpus as a seed corpus. We will use this corpus to infer a model estimating the conditional probabilities P(e|f).
The standard approach to estimating the conditional probability P(e|f) is to use Bayes' theorem, by which P(e|f) is rewritten as

P(e|f) = P(f|e) P(e) / P(f)

Because f is fixed, the maximization over e is thus equivalent to maximizing P(f|e)P(e). The problem of machine translation is broken into three steps:
(1) Build a language model which allows us to estimate P(e);
(2) Build a translation model which allows us to estimate P(f|e); and
(3) Search for the e maximizing the product P(f|e)P(e).
Each of these problems is itself a rich problem which can be solved in many different ways. The next three sections describe simple approaches to these problems.

3.2 The language model

Suppose we break a Hindi sentence e up into words e = e1 e2 ... em. Then we can write the probability for e as a product of conditional probabilities:

P(e) = P(e1) P(e2|e1) P(e3|e1 e2) ... P(em|e1 ... em-1)

For example, in the sentence fragment "इक ।", the next word with the highest conditional probability can be " ", certainly much higher than the probability of its occurrence at a random point in a piece of text. The challenge in building a good language model is that there are so many distinct conditional probabilities that need to be estimated. The
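The three-step decomposition above can be illustrated with a toy sketch. This is our own example, not the proposed system: all candidate sentences and probabilities are invented, and the "decoder" simply picks the candidate e maximizing the product P(f|e) * P(e).

```python
# Toy noisy-channel selection: for one Dogri sentence f, score each
# hypothetical Hindi candidate e by translation model * language model.

candidates = {                       # invented candidates for one input f
    "e1": {"tm": 0.30, "lm": 0.10},  # tm = P(f|e), lm = P(e)
    "e2": {"tm": 0.25, "lm": 0.30},
    "e3": {"tm": 0.40, "lm": 0.05},
}

best = max(candidates, key=lambda e: candidates[e]["tm"] * candidates[e]["lm"])
print(best)  # 'e2'  (0.25 * 0.30 = 0.075 beats 0.030 and 0.020)
```

Note that the candidate with the highest translation probability alone (e3) loses: the language model penalizes it, which is exactly why the product P(f|e)P(e) is maximized rather than either factor on its own.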


following assumptions about the probabilities may simplify the problem.
The most drastic assumption that could be made is to assume that the probability of seeing a word is independent of what came before it, i.e.,

P(ej | e1 ... ej-1) = P(ej)

The probabilities P(ej) can be estimated by taking a very large corpus of Hindi text and counting words, i.e. P(e1) = C(e1)/N, where C(e1) represents the number of times e1 appears in the Hindi corpus and N is the total number of words in the corpus. The problem is that this model is not very realistic.
A more realistic model is the bigram model, which assumes the probability of a word occurring depends only on the word immediately before it:

P(ej | e1 ... ej-1) = P(ej | ej-1)

A simple approach to estimating conditional probabilities in the bigram model is to take a large corpus of Hindi text, count the number of occurrences C(e1 e2) of a particular word pair in that corpus, and then set P(e2|e1) = C(e1 e2) / C(e1).
More realistic still is the trigram model, which assumes the probability of a word occurring depends only on the two words immediately before it:

P(ej | e1 ... ej-1) = P(ej | ej-2 ej-1)

A similar approach for the trigram model is to set P(e3|e1 e2) = C(e1 e2 e3) / C(e1 e2).
The problem with this procedure is that there are a lot of bigrams. If we assume that there are 50,000 words in Hindi, say, then there are 2.5 billion possible bigrams. Even if you take a large corpus of training data (say, a billion words), it's reasonably likely that there will be some bigrams which don't appear in the corpus and are thus assigned zero probability, yet they are likely to appear in translations of some Dogri sentences. That is, this kind of training procedure is likely to underestimate the probability of bigrams which don't appear in the training set, and overestimate the probability of those which do. The problem is even worse for trigrams.
There's no obvious best solution to this problem. Many different ad hoc solutions have been tried, and a quick survey of the literature suggests that there's as yet no broad agreement about the best way of solving it. The two basic approaches used in the literature are given here.
The first approach is to move away from a pure bigram model, and instead to use linear interpolation between the unigram and bigram models. A large and diverse enough corpus of text is likely to give pretty good estimates for nearly all single-word probabilities P(ej). So one way of estimating P(e2|e1) is:

P(e2|e1) = λ C(e1 e2)/C(e1) + (1 - λ) C(e2)/N

Here N is the total number of words in the corpus, and λ is a parameter in the range 0 ≤ λ ≤ 1 which needs to be determined. This can be done using a second corpus of text, setting λ so that the average probability of the bigrams in that corpus is maximized.
A second approach is to apply a discount factor to the conditional probabilities for the bigrams which appear in the training corpus, essentially reducing their probability. The easiest way to proceed is simply to multiply them all by some constant k in the range 0 < k < 1, and then to spread the remaining probability uniformly over all bigrams which don't appear in the corpus.

3.3 The Translation Model

In this section a simple translation model is constructed to compute P(f|e). Intuitively, when we translate a sentence, words in the source text generate words in the target language. In the sentence pair (एह् साढ़ा ग्ाां ऐ। | यह हमारा गाांव है।) / (eh sāṛhā grāṃ ai | yaha hamārā gāṃva hai), we intuitively feel that एह्/eh corresponds to यह/yaha, साढ़ा/sāṛhā to हमारा/hamārā, and ग्ाां/grāṃ to गाांव/gāṃva. Of course, there is no need for the word correspondence to be one-to-one. Sometimes, a word in Hindi may generate two or more words in Dogri; sometimes it may generate no word at all. Despite these complications, the notion of a correspondence between words in the source language and in the target language is so useful that it is formalized through alignment. Consider the following aligned sentence pair: (बेरोजगारी दे कारण युवकें च नशे दी आदत ववकसत होंदी जा'रदी ऐ।) / (berojagārī de kāraṇa yuvakeṃ ca naśe dī ādata vikasata hoṃdī jā'radī ai), aligned as बेरोजगारी/berojagārī (1) के/ke (2) कारण/kāraṇa (3) युवकों/yuvakoṃ (4) में/meṃ (5) नशे/naśe (6) की/kī (7) आदत/ādata (8) ववकससत/vikasita (9) होती/hotī (10-11) जा रही/jā rahī (12) है/hai।
In this example, the numbers in parentheses tell us that युवकों corresponds to the 4th word युवकें in the Dogri sentence. One notion derived from alignments, particularly useful in building up our translation model, is fertility. It is defined as the number of Hindi words generated by a given Dogri word. So, in the example above, युवकें has fertility 1, since it corresponds to only one word in the Hindi sentence. The other notion is distortion. In most sentences, a Hindi word and its corresponding Dogri word or words appear in the same part of the sentence, near the beginning, perhaps, or near the end; such words are translated roughly without distortion, while words which move a great distance have high distortion. Since Dogri and Hindi are closely related languages with the same sentence structure, the problem of distortion can be ignored without affecting the accuracy of translation. The translation model can be created with the following parameters:
• The fertility probability P(n|e): the probability that the Hindi word e has fertility n.
• The translation probability P(f|e): one for each Dogri word f and Hindi word e in the alignment a.
A translation model P(f,a|e) is defined as the probability that the Dogri sentence f is the correct translation of the Hindi sentence e, with a particular alignment a.
What remains to be done is to estimate the parameters used in constructing the translation model: the fertility and translation probabilities.
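The interpolated bigram estimate described above can be sketched on a tiny toy corpus. This is our own illustration (corpus and λ invented): P(e2|e1) = λ·C(e1 e2)/C(e1) + (1-λ)·C(e2)/N, so a bigram unseen in training still receives non-zero probability from the unigram term.

```python
# Interpolated bigram language model on a toy corpus.
from collections import Counter

corpus = "the dog ran the dog slept the cat ran".split()
N = len(corpus)
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_bigram(e2, e1, lam=0.7):
    """Interpolated conditional probability P(e2 | e1)."""
    bg = bigrams[(e1, e2)] / unigrams[e1] if unigrams[e1] else 0.0
    ug = unigrams[e2] / N
    return lam * bg + (1 - lam) * ug

# A pair seen in the corpus gets most of its mass from the bigram count;
# the unseen pair ("cat", "slept") still gets non-zero probability.
print(round(p_bigram("dog", "the"), 3), round(p_bigram("slept", "cat"), 3))
```

In a real system λ would be tuned on a held-out corpus, as the text describes, rather than fixed at 0.7.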


3.4 Decoding

It was explained in the beginning how translation from Dogri to Hindi can be viewed as the problem of finding e which maximizes P(e|f), and that this was equivalent to maximizing P(f|e)P(e). Considering the above translation model based on alignment, the problem is reformulated as: search for a pair (e,a) which maximizes P(e,a|f). Using Bayes' theorem, this is equivalent to finding (e,a) which maximizes

P(f,a|e) · P(e) / P(f)

Because f is fixed, the denominator P(f) can be omitted, so the search is equivalent to finding a pair (e,a) which maximizes

P(f,a|e) · P(e)

Our translation problem is to find (e,a) maximizing the above expression. Unfortunately, there are far too many possibilities to do an exhaustive search. But there are some heuristics which work pretty well. One is to use a greedy algorithm which gradually builds up partial translations and alignments. Using the translation and language models described earlier, the probabilities for each of these partial alignments can be computed to find the product P(f,a|e)P(e).

3.5 Resources Required

For the development of the said system a large number of resources are required. Some of them are listed below:
- A huge Hindi corpus, required for the development of the Language Model
- SRILM, for creating the Language Model
- A Dogri-Hindi parallel corpus, required for the development of the Translation Model
- A word alignment tool, required for improving the word alignment probability
- GIZA++, for creating the Translation Model
- The Moses decoder, for translation
- A Morphological Analyzer and Generator, for handling OOV words

3.6 Resources Available

The only work on automatic translation done for Dogri is the Hindi-Dogri machine translation system developed using the direct approach of translation by Dr. Preeti Dubey et al. [9]. Therefore there are hardly any resources available; almost everything has to be done from scratch. Some required resources are available with various sources and can be requested and acquired for use in this system. The compatibility of these resources is doubtful and they may require modification. The details of the available resources are given in Table 1.

Table 1: Resources and tools available for developing a SMT system

Tool                                  | Available at
Dogri Corpus (Dogri unigrams,         | Not available at present; will be developed
bigrams, trigrams along with          | from scratch.
their frequencies)                    |
Dogri-Hindi Parallel Corpus           | Not available at present; will be developed
                                      | from scratch using the Hindi-Dogri machine
                                      | translation system developed by Dr. Preeti
                                      | Dubey at the Department of CS&IT,
                                      | University of Jammu.
Word Alignment Tool                   | Available with the Advanced Centre for
                                      | Technical Development of Punjabi Language,
                                      | Literature & Culture, Punjabi University,
                                      | Patiala; developed by Dr. Tejinder Saini.
Hindi Parser                          | Language Technologies Research Centre,
                                      | IIIT Hyderabad.
Hindi Morphological Analyzer          | Language Technologies Research Centre,
and Generator                         | IIIT Hyderabad.
Dogri Morphological Analyzer          | Developed by Dr. Preeti Dubey at the
and Generator                         | Department of CS&IT.
SRILM                                 | www-speech.sri.com/projects/srilm/download.html
GIZA++                                | http://www.fjoch.com/GIZA++.html
Moses                                 | http://www.statmt.org/moses/

4. CONCLUSION

This paper provides a brief introduction of how the statistical machine translation system for the Dogri-Hindi language pair shall be developed. This system will be developed from scratch as it is the first work where Dogri is the source language. It will promote Dogri literature and culture to the world.

5. ACKNOWLEDGEMENT
My sincere thanks to Dr. Vishal Goyal, Assistant Professor,
Department of Computer Science, Punjabi University, for his
guidance.


REFERENCES
[1] W. John Hutchins and Harold L. Somers (1992), "An Introduction to Machine Translation", London: Academic Press. [ISBN: 0-12-362830-X]
[2] Hutchins, W. John, "Machine Translation: A Brief History", in: Concise History of the Language Sciences: From the Sumerians to the Cognitivists, edited by E.F.K. Koerner and R.E. Asher, Oxford: Pergamon Press, 1995, pages 431-445.
[3] Sneha Tripathi et al., "Approaches to Machine Translation", Annals of Library & Information Studies, Vol. 57, Dec 2010, pp. 388-393.
[4] Google, "Translator", translation in various languages, http://translate.google.com/
[5] Sampark, "Machine Translation among Indian Languages", funded by the TDIL program, Department of Information Technology, Govt. of India, http://sampark.iiit.ac.in
[6] Vishal Goyal, "Development of a Hindi to Punjabi Machine Translation System", PhD Thesis, Department of Computer Science, Punjabi University, Patiala, 2010.
[7] Microsoft, "Translator in various languages text", http://www.microsofttranslator.com/
[8] EILMT, "Machine Translation System from English to Indian Languages in Tourism and Healthcare Domains", http://www.cdacmumbai.in/e-ilmt
[9] Unnikrishnan P. et al. (2010), "A Novel Approach for English to South Dravidian Language Statistical Machine Translation System", (IJCSE) International Journal on Computer Science and Engineering, Vol. 02, No. 08, pp. 2749-2759.
[10] Preeti Dubey, Shashi Pathania, Devanand, "Comparative Study of Hindi and Dogri Languages with regard to Machine Translation", Language in India, Volume 11:10, October 2011, ISSN 1930-2940.
[11] Preeti Dubey, Devanand, "Machine Translation System for Hindi-Dogri Language Pair", IEEE Conference (ICMIRA), Dec 2013, ISBN: 978-0-7695-5013-8, pages 422-425.
[12] Steven J. Vaughan-Nichols, "Statistical Language Approach Translates into Success", Technology News, November 2003.
[13] Christoph Tillmann and Tong Zhang, "An Online Relevant Set Algorithm for Statistical Machine Translation", IEEE Transactions on Audio, Speech and Language Processing, Vol. 16, No. 7, September 2008.
[14] Lopez A., "Statistical Machine Translation", ACM Computing Surveys, 40, 3, Article 8, 49 pages, 2008. DOI = 10.1145/1380584.1380586, http://doi.acm.org/10.1145/1380584.1380586
[15] Sanjay Kumar Dwivedi and Pramod Premdas Sukhadeve, "Machine Translation System in Indian Perspectives", Journal of Computer Science 6 (10), ISSN 1549-3636, pp. 1082-1087, 2010.


A brief study about the evolution of Named Entity Recognition

Varinder Kaur                          Amandeep Kaur Randhawa
BGIET, Sangrur                         BGIET, Sangrur
varikhangura86@gmail.com               amehak07@gmail.com

ABSTRACT

The objective of this research paper is to provide a brief overview of diverse conferences like MUC-6 (Message Understanding Conference), IREX, the ACE (Automatic Content Extraction) project, CoNLL-2002 and 2003, and NERSSEAL, which contributed highly to the emergence of Named Entity Recognition. All these conferences provided their own guidelines for the evaluation of Named Entity Recognition, to increase the system's performance. This paper reports on the various approaches and methods proposed, the domain set, entity types and relevant features essential in a Named Entity Recognition system. It also includes the evaluation procedure followed, which is used for measuring performance metrics like recall, precision and F-score.

Keywords: Named Entity Recognition, ACE, CoNLL, IREX, MUC-6

1. INTRODUCTION

1.1 Natural Language Processing

It is a branch of artificial intelligence that deals with analyzing, understanding and generating the languages that humans use naturally, in order to interface with computers using natural human languages instead of computer languages.

In the past decade, successful natural language processing applications have become part of our everyday experience, from spelling and grammar correction in word processors to machine translation on the web, and from email spam detection to automated question answering and the very upcoming research area of named entity recognition.

1.2 Named Entity Recognition

A named entity (NE) is a phrase which serves as a name for something or someone (person, organization, location, number, time, measure etc.). According to this definition, the phrase in question must be a noun phrase (NP). Clearly, not all NPs are named entities.

Named Entity Recognition (NER) (which might also be called "proper name classification") is a computational linguistics task in which we classify every word in a document as falling into some predefined categories: person, location, organization, date, time, percentage, monetary value and "none-of-the-above". NER involves two tasks: identification of NPs that are NEs, and classification of NEs into different types, such as organization, location, person names etc.

2. CONFERENCES ON NER

2.1 Message Understanding Conference-6

The Message Understanding Conferences were initiated by NOSC to assess and to foster research on the automated analysis of military messages containing textual information. They are designed to promote and evaluate research in information extraction.

2.1.1 Early history

MUC-1 (1987): Each group designed its own format for recording the information in the document, and there was no formal evaluation.

MUC-2 (1989): The task had crystallized as one of template filling. One receives a description of a class of events to be identified in the text; for each of these events one must fill a template with information about the event.

MUC-3 (1991): The task shifted to reports of terrorist events in Central and South America, as reported in articles provided by the Foreign Broadcast Information Service, and the template became somewhat more complex (18 slots).

MUC-4 (1992): The same task was used, with a further small increase in template complexity (24 slots).

MUC-5: The main change was the use of a nested template structure, which further increased the task complexity.

MUC-6: The first goal was to identify, from the component technologies being developed for information extraction, functions which would be of practical use, would be largely domain independent, and could in the near term be performed automatically with high accuracy. To meet this goal the committee developed the "named entity" task, which basically involves identifying the names of all the people, organizations, and geographic locations in a text. The final task specification, which also involved time, currency, and percentage expressions, used SGML markup to identify the names in a text. Figure 1 shows a sample sentence with named entity annotations. The tag ENAMEX ("entity name expression") is used for both people and organization names:

Mr. <ENAMEX TYPE="PERSON">Dooner</ENAMEX> met with <ENAMEX TYPE="PERSON">Martin Puris</ENAMEX>, president and chief executive officer of <ENAMEX TYPE="ORGANIZATION">Ammirati & Puris</ENAMEX>, about <ENAMEX TYPE="ORGANIZATION">McCann</ENAMEX>'s acquiring the agency with billings of <NUMEX TYPE="MONEY">$400 million</NUMEX>, but nothing has materialized.

Figure 1: A sample sentence with named entity annotations
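The SGML markup of Figure 1 can be read back programmatically. A minimal sketch (an illustration, not part of the official MUC tooling):

```python
import re

# Matches <ENAMEX TYPE="...">...</ENAMEX>, and likewise TIMEX and
# NUMEX spans; the backreference \1 makes opening and closing tags agree.
TAG = re.compile(r'<(ENAMEX|TIMEX|NUMEX)\s+TYPE="([^"]+)">(.*?)</\1>')

def extract_entities(marked_text):
    """Return (tag, entity_type, surface_text) triples."""
    return [m.groups() for m in TAG.finditer(marked_text)]
```

For the sentence of Figure 1 this yields, among others, the triples ('ENAMEX', 'PERSON', 'Dooner') and ('NUMEX', 'MONEY', '$400 million').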

2.1.2 MUC evaluations

In MUC events, a system is scored on two axes: its ability to find the correct type (TYPE) and its ability to find the exact text (TEXT). A correct TYPE is credited if an entity is assigned the correct type, regardless of boundaries, as long as there is an overlap. A correct TEXT is credited if the entity boundaries are correct, regardless of the type. For both TYPE and TEXT, three measures are kept:
• the number of correct answers (COR)
• the number of actual system guesses (ACT)
• the number of possible entities in the solution (POS)

The final MUC score is the micro-averaged f-measure (MAF), which is the harmonic mean of precision and recall calculated over all entity slots on both axes. A micro-averaged measure is performed on all entity types without distinction (errors and successes for all entity types are summed together). The harmonic mean of two numbers is never higher than the geometrical mean. It also tends toward the least number, minimizing the impact of large outliers and maximizing the impact of small ones. The F-measure therefore tends to privilege balanced systems.

In MUC, precision is calculated as COR / ACT and recall as COR / POS. For the previous example, COR = 4 (2 TYPE + 2 TEXT), ACT = 10 (5 TYPE + 5 TEXT) and POS = 10 (5 TYPE + 5 TEXT). The precision is therefore 40%, the recall is 40% and the MAF is 40%.

This measure has the advantage of taking into account all possible types of errors. It also gives partial credit for errors occurring on one axis only. Since there are two evaluation axes, each complete success is worth two points. The worst errors cost these two points (missing both TYPE and TEXT) while other errors cost only one point.

2.2 IREX: IR and IE Evaluation project in Japanese

The main goal is to have a common platform in order to evaluate systems with the same standard. We believe such projects are useful not only for comparing system performance but also to address the following issues:
• To share and exchange problems among researchers.
• To accumulate large quantities of data.
• To let other people know the importance and the quality of Information Retrieval and Information Extraction techniques.
• To attract young researchers into the field.
• To start a long term and larger-size project of this kind.

2.2.1 Tasks in IREX

Information Retrieval task (IR): IR is the task of retrieving documents relevant to a given topic from a database of newspaper articles. Each topic is expressed by a description using a few noun phrases and a narrative using a few sentences.

Named Entity task (NE): NE is the task of extracting Named Entities, such as names of organizations, persons, locations, and artifacts, time and numeric expressions, and money and percentage expressions. There are 8 kinds of NEs.

2.3 Automatic Content Extraction (ACE) Program

The objective of the ACE program is to develop technology to automatically infer from human language data the entities being mentioned, the relations among these entities that are directly expressed, and the events in which these entities participate. Data sources include audio and image data in addition to pure text, and Arabic and Chinese in addition to English.

2.3.1 Task Definitions

The Automatic Content Extraction (ACE) program, a new effort to stimulate and benchmark research in information extraction, presents four challenges:

1. Recognition of entities, not just names. In the ACE entity detection and tracking (EDT) task, all mentions of an entity, whether a name, a description, or a pronoun, are to be found and collected into equivalence classes based on reference to the same entity. Therefore, practical co-reference resolution is fundamental.

2. Recognition of relations. The relation detection and characterization task (RDC) requires detection and characterization of relations between (pairs of) entities. There are five general types of relations, some of which are further sub-divided, yielding a total of 24 types/subtypes of relations:
• Role, the role a person plays in an organization, which can be subtyped as Management, General-Staff, Member, Owner, Founder, Client, Affiliate-Partner, Citizen-Of, or Other,
• Part, i.e., part-whole relationships, subtyped as Subsidiary, Part-Of, or Other,
• At, location relationships, which can be subtyped as Located, Based-In, or Residence,
• Near, to identify relative locations, and
• Social, subtyped as Parent, Sibling, Spouse, Grandparent, Other-Relative, Other-Personal, Associate, or Other-Professional.

3. Event extraction. Though not in any previous ACE evaluation, event detection and characterization is planned for the 2004 evaluation (August-September 2004). Details of the task definition, annotation guidelines, and scoring are being worked out at the time of writing this paper.

4. Extraction is measured not merely on text, but also on speech and on OCR input. Moving beyond name finding is a crucial leap for modalities other than text, since the ability to relate two strings (as in ACE) in very noisy input may degrade much more than finding strings in isolation (as in named entity recognition). Furthermore, the lack of case and punctuation, including the lack of sentence boundary markers, poses a challenge to full parsing of speech.

2.3.2 ACE evaluation

ACE has a complex evaluation procedure. It includes mechanisms for dealing with various evaluation issues (partial match, wrong type, etc.). The ACE task definition is also more elaborate than previous tasks at the level of named entity "subtypes" and "class" as well as entity mentions (coreferences), and more, but these supplemental elements will be ignored here.


Basically, each entity type has a parameterized weight and contributes up to a maximal proportion (MAXVAL) of the final score (e.g., if each person is worth 1 point and each organization is worth 0.5 point, then it takes two organizations to counterbalance one person in the final score). Some entity types such as "facility" may account for as little as 0.05 points, according to ACE parameters. In addition, customizable costs (COST) are used for false alarms, missed entities and type errors. Partial matches of textual spans are only allowed if the named entity head matches on at least a given proportion of characters. Temporal expressions are not treated in ACE.

The final score, called Entity Detection and Recognition Value (EDR), is 100% minus the penalties.

ACE evaluation may be the most powerful evaluation scheme because of its customizable cost of error and its wide coverage of the problem. It is however problematic because the final scores are only comparable when parameters are fixed. In addition, complex methods are not intuitive and make error analysis difficult.

2.4 CoNLL-2002 shared task: language independent NER

Named entities are phrases that contain the names of persons, locations, organizations, times and quantities. This task considers four entity types: persons, locations, organizations and miscellaneous. The participants of the shared task were offered training and test data in the Spanish and Dutch languages, to be used for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data could also be used in this shared task.

2.4.1 Data evaluation

The data consists of two columns separated by a single space. Each word has been put on a separate line and there is an empty line after each sentence. The first item on each line is a word and the second the named entity tag. The tags have the same format as in the chunking task: a B denotes the first item of a phrase. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). Here is an example:

DATA        TAG
Wolff       B-PER
,           O
currently   O
a           O
journalist  O
in          O
small       O
company     ORG
of          O
china       LOC

The data consists of three files per language: one training file and two test files, testa and testb. The first test file is used in the development phase for finding good parameters for the learning system. The second test file is used for the final evaluation. The Spanish data is a collection of newswire articles made available by the Spanish EFE News Agency. The Dutch data consist of four editions of the Belgian newspaper "De Morgen" of 2000 (June 2, July 1, August 1 and September 1).

2.4.2 Results

Twelve different systems have been applied to data covering two western languages: Spanish and Dutch.

Spanish test       Precision  Recall   F-score
Carreras et al.    81.38%     81.40%   81.39
Florian            78.70%     79.40%   79.05
Cucerzan et al.    78.19%     76.14%   77.15
Wu et al.          75.85%     77.38%   76.61
Burger et al.      74.19%     77.44%   75.78
Tjong Kim Sang     76.00%     75.55%   75.78
Patrick et al.     74.32%     73.52%   73.92
Jansche            74.03%     73.76%   73.89
Malouf             73.93%     73.39%   73.66
Tsukamoto          69.04%     74.12%   71.49
Black et al.       60.53%     67.29%   63.73
McNamee et al.     56.28%     66.51%   60.97
Baseline           26.27%     56.48%   35.86

Dutch test         Precision  Recall   F-score
Carreras et al.    77.83%     76.29%   77.05
Wu et al.          76.95%     73.83%   75.86
Florian            75.10%     74.89%   74.99
Burger et al.      72.69%     72.45%   72.57
Cucerzan et al.    73.03%     71.62%   72.31
Patrick et al.     74.01%     68.90%   71.36
Tjong Kim Sang     72.56%     68.88%   70.67
Jansche            70.11%     69.26%   69.68
Malouf             70.88%     65.50%   68.08
Tsukamoto          57.33%     65.02%   60.93
McNamee et al.     56.22%     63.24%   59.52
Black et al.       62.12%     51.69%   56.43
Baseline           64.38%     45.19%   53.10

Here are some remarks on these results:
• The baseline results have been produced by a system which only selects complete named entities which appear in the training data.
• The system performs worse than the baseline system when processing the Dutch data because the authors used


a poor representation of the data: they had removed all sentence breaks.

2.5 Named Entity Recognition in Indian Languages

A hybrid NER system for Indian languages was designed for the IJCNLP NERSSEAL shared task competition, the goal of which is to perform NE recognition on 12 types of NEs: person, designation, title-person, organization, abbreviation, brand, title-object, location, time, number, measure and term. The shared task was defined to build NER systems for 5 Indian languages - Hindi, Bengali, Oriya, Telugu and Urdu - for which training data was provided. The workshop included two tracks. The first track was for regular research papers, while the second was organized on the lines of a shared task.

2.5.1 Previous Work

A variety of techniques has been used for NER. The two major approaches to NER are:
1. Linguistic approaches.
2. Machine Learning (ML) based approaches.

The linguistic approaches typically use rules manually written by linguists. There are several rule-based NER systems, containing mainly lexicalized grammars, gazetteer lists, and lists of trigger words, which are capable of providing 88%-92% f-measure accuracy for English.

ML based techniques for NER make use of a large amount of NE annotated training data to acquire high-level language knowledge. Several ML techniques have been successfully used for the NER task, of which Hidden Markov Model (HMM), Maximum Entropy (MaxEnt) and Conditional Random Field (CRF) are the most common. Combinations of different ML approaches are also used.

2.5.2 Maximum Entropy Based Model

This model is used to build the baseline NER system. MaxEnt is a flexible statistical model which assigns an outcome for each token based on its history and features.

2.5.3 Features

The following features were identified and used to develop the Indian language NER systems:

Static Word Feature: The previous and next words of a particular word are used as features.
Context Lists: Context words are defined as the frequent words present in a word window for a particular class.
Dynamic NE tag: Named Entity tags of the previous words (t_{i-m} ... t_{i-1}) are used as features.
First Word: If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.
Contains Digit: If a token 'w' contains digit(s), then the feature ContainsDigit is set to 1.
Word Suffix: Word suffix information is helpful to identify the NEs.
Word Prefix: Prefix information of a word may also be helpful in identifying whether it is a NE.
Root Information of Word: Indian languages are morphologically rich. Words are inflected in various forms depending on number, tense, person, case etc. The task becomes easier if, instead of the inflected words, the corresponding root words are checked for being NEs.
Parts-of-Speech (POS) Information: The POS of the current word and the surrounding words may be a useful feature. The POS information is also used by defining several binary features. An example is the NomPSP binary feature. The value of this feature is defined to be 1 if the current token is nominal and the next token is a PSP.

2.5.4 Annotation Constraints

The annotated corpus was created under severe constraints. The annotation was to be for five languages by different teams, sometimes with very little communication during the process of annotation. As a result, there were many logistical problems. There were other practical constraints, like the fact that this was not a funded project and all the work was mainly voluntary. Another major constraint for all the languages except Hindi was time. There was not enough time for cross validation as the corpus was required by a deadline.

2.5.5 Evaluation Measures

As part of the evaluation process for the shared task, precision, recall and F-measure had to be calculated for three cases: maximal named entities, nested named entities and lexical matches. Thus, there were nine measures of performance:

• Maximal Precision: Pm = cm / rm
• Maximal Recall: Rm = cm / tm
• Maximal F-Measure: Fm = 2·Pm·Rm / (Pm + Rm)
• Nested Precision: Pn = cn / rn
• Nested Recall: Rn = cn / tn
• Nested F-Measure: Fn = 2·Pn·Rn / (Pn + Rn)
• Lexical Precision: Pl = cl / rl
• Lexical Recall: Rl = cl / tl
• Lexical F-Measure: Fl = 2·Pl·Rl / (Pl + Rl)

where c is the number of correctly retrieved (identified) named entities, r is the total number of named entities retrieved by the system being evaluated (correct plus incorrect) and t is the total number of named entities in the reference data. The participants were encouraged to report results for specific classes of NEs.
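All nine measures instantiate the same precision/recall/F pattern over their respective counts. A small sketch (with c, r and t as defined above, and division by zero guarded):

```python
def prf(c, r, t):
    """Precision, recall and F-measure from c (correctly retrieved),
    r (total retrieved by the system) and t (total in the reference data)."""
    precision = c / r if r else 0.0
    recall = c / t if t else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```

Calling prf with the maximal, nested and lexical counts in turn yields the nine reported measures.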

2.5.6 Results

Table 1: Results for the experiments on a baseline for the five South Asian languages

Measure      Precision            Recall               F-measure
Language     Pm     Pn     Pl     Rm     Rn     Rl     Fm     Fn     Fl
Bengali 50.00 44.90 52.20 07.14 06.90 06.97 12.50 11.97 12.30
Hindi 75.05 73.61 73.99 18.16 17.66 15.53 29.24 28.48 25.68
Oriya 29.63 27.46 48.25 07.60 12.18 09.11 13.94 11.91 19.44
Telugu 00.89 02.83 22.85 00.20 00.67 5.41 00.32 01.08 08.75
Urdu 47.14 43.50 51.72 18.35 16.94 18.94 26.41 24.39 27.73
m: maximal; n: nested ; l: lexical

Table 2: Baseline results for specific named entity classes (F-Measures for nested lexical match)
Class     Bengali   Hindi   Oriya   Telugu   Urdu

NEP 06.62 26.23 28.48 00.00 04.39


NED 00.00 12.20 00.00 00.00 00.00
NEO 00.00 15.50 03.30 00.00 11.98
NEA 00.00 00.00 00.00 00.00 00.00
NEB NP NP 00.00 00.00 00.00
NETP 00.00 NP 11.62 00.00 00.00
NETO 00.00 05.92 04.08 00.00 00.00
NEL 03.03 44.79 25.49 00.00 40.21
NETI 34.00 47.41 22.38 01.51 38.38
NEN 62.63 62.22 10.65 03.51 09.52
NEM 13.61 24.39 08.03 00.71 07.15
NETE 00.00 00.18 00.00 00.00 00.00
NP: Not present in the reference data

3. Comparisons of multifarious methods by some factors

3.1 Language factor
German is well studied in CoNLL-2003 and in earlier works. Similarly, Spanish and Dutch are strongly represented, boosted by a major devoted conference: CoNLL-2002. Japanese has been studied in the MUC-6 conference, the IREX conference and other work. Moreover, Arabic and Chinese, along with English, are studied in the ACE project.

3.2 Domain factor
The MUC-6 collection is composed of newswire texts, and of a proprietary corpus made of manual translations of phone conversations and technical emails. For the ACE project, data is collected from the web and is composed of text, audio and video.

3.3 Entity type factor
In the ACE program, the type "facility" subsumes entities of the types "location" and "organization". The type "GPE" is used to represent a location which has a government, such as a city or a country.


The type "miscellaneous" is used in the CoNLL conferences and includes proper names falling outside the classic "enamex". The class is also sometimes augmented with the type "product" (e.g., E. Bick 2004). The "timex" (another term coined in MUC) types "date" and "time" and the "numex" types "money" and "percent" are also quite predominant in the literature.

4. CONCLUSION

In this paper, we covered diverse conferences like MUC-6, IREX, ACE, and CoNLL-2002 and 2003, which set different procedures for the evaluation of NER system performance. By addressing all aspects of these conferences, we also compared them on factors like language, domain set and entity type.

REFERENCES

[1] Mohammad Hasanuzzaman, Asif Ekbal and Sivaji Bandyopadhyay, "Maximum Entropy Approach for Named Entity Recognition in Bengali and Hindi", International Journal of Recent Trends in Engineering, Vol. 1, No. 1, May 2009.
[2] David Nadeau, Satoshi Sekine, "A Survey of Named Entity Recognition and Classification".
[3] Satoshi Sekine, Hitoshi Isahara, "IREX: IR and IE Evaluation Project in Japanese".
[4] George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, Ralph Weischedel, "The Automatic Content Extraction (ACE) Program: Tasks, Data, and Evaluation".
[5] Sujan Kumar Saha, Sanjay Chatterji, Sandipan Dandapat, Sudeshna Sarkar, Pabitra Mitra, "A Hybrid Approach for Named Entity Recognition in Indian Languages", in Proceedings of the IJCNLP-08 Workshop on NER for South and South East Asian Languages, pages 17-24.
[6] Erik F. Tjong Kim Sang, "Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition", CNTS - Language Technology Group.
[7] Erik F. Tjong Kim Sang, Fien De Meulder, "Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition", CNTS - Language Technology Group.
[8] Thoudam Doren Singh, Kishorjit Nongmeikapam, Asif Ekbal and Sivaji Bandyopadhyay, "Named Entity Recognition for Manipuri Using Support Vector Machine", in Proc. of the 23rd Pacific Asia Conference on Language, Information and Computation, pages 811-818.
[9] Asif Ekbal, Sivaji Bandyopadhyay, "Named Entity Recognition in Bengali: A Multi-Engine Approach", Northern European Journal of Language Technology, 2009, Vol. 1, Article 2, pp. 26-58.
[10] Asif Ekbal, Sivaji Bandyopadhyay, "Named Entity Recognition Using Support Vector Machine: A Language Independent Approach", Int. J. of Electrical and Electronics Engineering, 2010, Vol. 4, Issue 2.
[11] Sobhana N.V., Pabitra Mitra, S.K. Ghosh, "Conditional Random Field Based Named Entity Recognition in Geological Text", International Journal of Computer Applications (0975-8887), 2010, Volume 1, No. 3, pages 119-122.
[12] K. S. Hasan, Md. A. R. and V. Ng, "Learning Based Named Entity Recognition for Morphologically Resource Scarce Languages", in Proc. of the 12th Conf. of the European Chapter of the Association for Computational Linguistics (EACL), pp. 354-362, March 2009, Athens, Greece.
[13] K. Church, "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text", in Proc. of the 2nd Conf. on Applied Natural Language Processing, pp. 136-143, 1988, Association for Computational Linguistics, Austin, Texas.
[14] Tom M. Mitchell, "The Role of Unlabeled Data in Supervised Learning", in Proceedings of the Sixth International Colloquium on Cognitive Science, San Sebastian, Spain, 1999.
[15] Andrei Mikheev, Marc Moens and Claire Grover, "Named Entity Recognition without Gazetteers", in Proceedings of EACL'99, Bergen, Norway, 1999, pp. 1-8.
[16] Michael Collins and Yoram Singer, "Unsupervised Models for Named Entity Classification", in Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, University of Maryland, MD, 1999.


Comparison between PID, fuzzy, genetic algorithm and particle swarm optimization soft computing techniques
Rakesh Kumar Nishant Nakra
Assistant Prof. Assistant Prof.
Department of Electrical Engineering Department of Electrical Engineering
B.H.S.B.I.E.T. Lehragaga B.H.S.B.I.E.T. Lehragaga.
rkd5975@gmail.com

ABSTRACT

The goal of this work is to make a comparison between PID, fuzzy, genetic algorithm and particle swarm optimization soft computing techniques in the case of linear dynamic process control systems. This comparative study is made using computer simulation. The main objective of the investigators is to compare the performance of the various controllers when various soft computing techniques are used for the tuning of their parameters. For this comparison, performance parameters such as steady-state error, rise time, overshoot, settling time, gain margin and phase margin are evaluated. The PID controller is an important part of industrial plants and control engineering because it is simple and robust. Proportional-integral-derivative (PID) controllers are widely applied in industrial processes owing to their simplicity and effectiveness for both linear and nonlinear systems. Time-domain and frequency-domain characteristics, such as rise time, settling time, overshoot, steady-state error, gain margin and phase margin, of the controlled processes/systems are compared using the various soft computing techniques. The second part of this paper is a description of a simulated system and a presentation of the simulated controllers; in it, the fuzzy controller is examined in detail. The sensitivity of the fuzzy logic controller to design parameters, and to different shapes and superpositions of membership functions, is tested. Moreover, the simulations are done for different types of fuzzy reasoning and defuzzification methods.

Keywords: Process plants, steam temperature control, industrial system, multi-objective control, optimal tuning, PID control, fuzzy logic control, genetic algorithms, nonlinear control, optimal control, PID controller tuning using GA, PSO and fuzzy logic.

1. INTRODUCTION

The well-known proportional-integral-derivative (PID) controller is the most widely used in industrial applications because of its simple structure. On the other hand, conventional PID controllers with fixed gains do not yield reasonable performance over a wide range of operating conditions and systems (time-delayed systems, nonlinear systems, etc.). A wide variety of fuzzy PID-like controllers have been developed. In most cases, fuzzy controller design is accomplished by trial-and-error methods using computer simulations. Significant studies based on the closed-form analysis of fuzzy PID-like controllers started with the work of Ying, Siler and Buckley, who used a simple four-rule controller similar to that of Murakami and Maeda. More analytical work in this regard was subsequently reported for the four-rule controllers and for linear-like fuzzy controllers. Palm has analytically demonstrated the equivalence between the fuzzy controller and sliding-mode controllers.

Fig. 1. Fuzzy Control System

A block diagram of a fuzzy control system is shown in Figure 1. The fuzzy controller is composed of the following elements:

a. A fuzzification interface, which converts controller inputs into information that the inference mechanism can easily use to activate and apply rules. Typical shapes of membership functions are triangular, trapezoidal and bell, but the shape is generally less important than the number of curves and their placement. From 3 to 7 curves are generally enough to cover the intended range of an input value.

b. A rule base (a set of IF-THEN rules), which contains a fuzzy-logic quantification of the expert's linguistic description of how to achieve good control. In other words, the rule base drives an "inference engine" or "fuzzy inference" module, which emulates the expert's decision making in interpreting and applying knowledge about how best to control the plant. The processing stage is basically a group of logic rules in the form of IF-THEN statements, where the IF part is called the "antecedent" and the THEN part is called the "consequent".
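For illustration, the triangular membership shape mentioned in (a) can be sketched as a small helper function; the breakpoints in the example call are arbitrary values, not taken from the paper:

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b.

    Returns the degree of membership of x, a value in [0, 1].
    """
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)      # rising edge
    return (c - x) / (c - b)          # falling edge

# x = 0.005 sits halfway down the falling edge of a triangle peaking at 0
mu = tri_mf(0.005, -0.01, 0.0, 0.01)
```

With 3 to 7 such overlapping triangles per input, every input value falls under at least one curve with a non-zero degree of membership.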

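As background for the comparisons throughout this paper, a discrete-time PID law of the kind being tuned can be sketched as follows; the gains, sample time and first-order plant are illustrative assumptions, not values from the paper:

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order plant (dy/dt = u - y) toward a 125-degree setpoint
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
y = 0.0
for _ in range(500):
    u = pid.step(125.0, y)
    y += (u - y) * 0.1  # explicit Euler step of the assumed plant
```

The integral term removes the steady-state error that a pure proportional controller would leave; the tuning methods compared in this paper differ only in how Kp, Ki and Kd are chosen.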

This paper is an attempt to make a comparison between PID, fuzzy, genetic algorithm and particle swarm optimization soft computing techniques in the case of linear dynamic process control systems. We propose a new methodology for the optimal design of fuzzy PID controllers. In the proposed work, the main objective of the investigators is to compare the performance of the various controllers, i.e. to compare time-domain and frequency-domain characteristics, such as rise time, settling time, overshoot, steady-state error, gain margin and phase margin, of the controlled processes/systems using the various soft computing techniques.

2. FUZZY LOGIC – A BRIEF HISTORY

Fuzzy logic, invented by Lotfi Zadeh in the mid-1960s, provides a representation scheme and a calculus for dealing with vague or uncertain concepts. It is a paradigm for an alternative design methodology which can be applied in developing both linear and nonlinear systems for embedded control. Zadeh originally devised the technique as a means of solving problems in the soft sciences, particularly those involving interactions between humans, and/or between humans and machines [6]. Since then there have been rapid developments in the theory and application of fuzzy logic to control systems. Fuzzy logic controllers are being increasingly applied in areas where system complexity, development time and costs are the major issues [7]. In Japan, Terano, inspired by Zadeh's work, introduced the idea to the research community in about 1972. This led to active research and a host of commercial applications, almost entirely in the area of physical system control. In 1990 a research institute named LIFE (Laboratory for International Fuzzy Engineering) started functioning under the leadership of Terano [6]. Japanese researchers have been a primary force in advancing the practical implementation of fuzzy theory and now hold more than 2000 patents in the area.

3. BASIC NOTIONS OF FUZZY LOGIC

3.1 Fuzzy set

A general definition of a set is that a set is a collection of distinct, perfectly specified objects [Kaufmann88]. A part of a set is a subset. For example, let E be a finite referential set:

E = {a, b, c, d, e}

We can form a crisp subset of E, for example:

A = {b, d, e}

We can also present A in another form, writing under each element of E its degree of membership in A:

     a  b  c  d  e
A =  0  1  0  1  1

In classical set theory an element can either belong to a set or not, and this property can be represented by a degree of membership. In the case shown above, the element b belongs to A and its degree of membership is 1; the element a does not belong to A and its membership is 0. We can form a function which represents this property:

μA(x) = 1 if x ∈ A
μA(x) = 0 if x ∉ A

This concept is basic in classical set theory. The main concept of fuzzy theory is the notion of a fuzzy set, which is an extension of the crisp set. Zadeh [Zadeh65] gave the following definition:

A fuzzy set is a class of objects with a continuum of grades of membership. Such a set is characterized by a membership (characteristic) function which assigns to each object a grade of membership ranging between zero and one.

4. GENETIC ALGORITHM

Genetic algorithms are a stochastic global search method that mimics the process of natural evolution; they are one of the methods used for optimization. John Holland formally introduced this method in the United States in the 1970s at the University of Michigan. The continuing performance improvement of computational systems has made genetic algorithms attractive for some types of optimization. The genetic algorithm starts with no knowledge of the correct solution and depends entirely on responses from its environment and on evolution operators, such as reproduction, crossover and mutation, to arrive at the best solution. By starting at several independent points and searching in parallel, the algorithm avoids local minima and convergence to suboptimal solutions.

5. PARTICLE SWARM OPTIMIZATION

PSO is one of the optimization techniques and a kind of evolutionary computation technique. The technique is derived from research on swarms, such as bird flocking and fish schooling. In the PSO algorithm, instead of using evolutionary operators such as mutation and crossover, for a d-variable optimization problem a flock of particles is put into the d-dimensional search space with randomly chosen velocities and positions, knowing their best values so far (pbest) and their positions in the d-dimensional space [9]. The velocity of each particle is adjusted according to its own flying experience and the flying experience of the other particles [9]. For example, the ith particle is represented as

xi = (xi,1, xi,2, ..., xi,d) ....(1)

in the d-dimensional space. The best previous position of the ith particle is recorded as


Pbesti = (Pbesti,1, Pbesti,2, ..., Pbesti,d) ....(2)

The index of the best particle among all particles in the group is gbestd. The velocity of particle i is represented as

Vi = (Vi,1, Vi,2, ..., Vi,d) ....(3)

The modified velocity and position of each particle can be calculated from the current velocity and the distances from Pbesti,d and gbestd, as shown in the following formulas:

Vi,m(t+1) = W * Vi,m(t) + c1 * rand() * (Pbesti,m - xi,m(t)) + c2 * Rand() * (gbestm - xi,m(t)) ....(4)

xi,m(t+1) = xi,m(t) + Vi,m(t+1)

where

i = 1, 2, ..., n
m = 1, 2, ..., d
n = number of particles in the group
d = number of dimensions
t = iteration (generation) counter
Vi,m(t) = velocity of particle i at iteration t
W = inertia weight factor
c1, c2 = acceleration constants
rand(), Rand() = random numbers between 0 and 1
xi,m(t) = current position of particle i at iteration t
Pbesti = best previous position of the ith particle
gbestm = best particle among all the particles in the population

6. PLAN OF WORK AND RESEARCH METHODOLOGY

The proposed work will be carried out by adopting the following step-by-step methodology:

 Detailed literature survey of various tuning methods for the PID controller.
 Collection of real-time data on PID constants and on temperature, pressure, level, flow, etc. in process industries.
 Implementation of PID controller tuning using GA, PSO and fuzzy logic.
 Analysis of the system performance.
 Comparison of time-domain and frequency-domain characteristics, such as rise time, settling time, overshoot, steady-state error, gain margin and phase margin, of the controlled processes/systems using the various soft computing techniques.

7. SOME UNIQUE PROBLEMS IN AGRO INDUSTRIES

Managing the properties of oil, starting from the input stage, with the aim of controlling them is not an easy task, for several reasons.

 There are many parameters in the agro industry that must be taken into consideration in parallel. A single sensory property like colour or texture can be linked individually to several dimensions recorded by the human brain.
 The agro industry works with non-uniform, variable raw materials that, when processed, should be shaped into a product that satisfies a fixed standard.
 The processes controlled in the agro industry are highly nonlinear, and their variables are coupled.
 Little data are available in traditional manufacturing plants that produce, for example, sunflower oil or rice bran, and this situation applies to most of the agro processing industry.
 In addition to the temperature changes during a heating or cooling process, there are biochemical (nutrient, colour, flavour, etc.) or microbial changes that should be considered.
 The moisture in oil fluctuates constantly, through loss or gain throughout the process, which can affect the flavour, texture, nutrient concentration and other properties.
 Other properties of oil, such as density, thermal and electrical conductivity, specific heat, viscosity, permeability and effective moisture diffusivity, are often a function of composition, temperature and moisture content, and therefore keep changing during the process.
 The system is also quite non-homogeneous, and such detailed input data are not available.
 Often, irregular shapes are present.

8. SIMULATION RESULTS

Oil Tank Temperature Controller

This PID temperature controller is used to control the temperature of raw oil in an oil tank. In this controller the set value of the temperature is 125 degrees, and the PID controller reaches the set value in a period of 3 hours 35 minutes.

A fuzzy model was developed using error, change in error and fuzzy output to improve the time-domain and frequency-domain characteristics, such as rise time, settling time, overshoot, steady-state error, gain margin and phase margin, of the controlled process/system using the various soft computing techniques.
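A minimal sketch of Eqs. (1)–(4) above, applied to a simple test function; the swarm size, inertia weight W and acceleration constants c1, c2 below are illustrative choices, not values from the paper:

```python
import random

def pso_minimize(f, dim, lo, hi, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal PSO: velocity update per Eq. (4), position update x(t+1) = x(t) + v(t+1)."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                   # best position seen by each particle
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # best position seen by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for m in range(dim):
                v[i][m] = (w * v[i][m]
                           + c1 * rng.random() * (pbest[i][m] - x[i][m])
                           + c2 * rng.random() * (gbest[m] - x[i][m]))
                x[i][m] += v[i][m]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

# Minimize the 2-D sphere function f(x) = x1^2 + x2^2; the swarm should approach the origin
best, best_val = pso_minimize(lambda p: sum(c * c for c in p), dim=2, lo=-5.0, hi=5.0)
```

For PID tuning, f would instead simulate the closed loop for a candidate (Kp, Ki, Kd) particle and return a cost such as the integral of absolute error.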


To develop the fuzzy controller, first the error signal (e) is calculated by subtracting the output of the PID temperature controller from the set temperature; then the change in error (∆e) is calculated by subtracting the previous error from the current error. Taking the error and the change in error as inputs and the fuzzified output as the output, membership functions are created for each input and output. The membership functions for these quantities are defined in Table 1, and they are shown in schematic form in Fig. 2.

Table 1. Fuzzy system for the flow controller

Membership functions of the Error input

Linguistic variable    Initial value   Peak value   Final value
Very Small (VS)        -0.03           -0.02        -0.01
Small (S)              -0.02           -0.01        0
Medium (M)             -0.01           0            0.01
High (H)               0               0.01         0.02
Very High (VH)         0.01            0.02         0.03

Membership functions of the Change in Error input

Linguistic variable    Initial value   Peak value   Final value
Very Small (VS)        -1.5            -1           -0.5
Small (SM)             -1              -0.5         0
Medium (MD)            -0.5            0            0.5
Large (L)              0               0.5          1
Very Large (VL)        0.5             1            1.5

Membership functions of the Fuzzy output

Linguistic variable    Initial value   Peak value   Final value
Very Small (VS)        30              45           60
Small (S)              45              62           80
Medium (M)             65              82           100
Large (L)              90              105          122
Very Large (VL)        115             121          125

(a) Membership functions of the Error input. (b) Membership functions of the Change in Error input. (c) Membership functions of the Fuzzy output.

Fig. 2. Fuzzy system for the oil tank temperature controller

A rule base was developed for the fuzzy model using simple IF-THEN rules.
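The error and change-in-error computation described above can be sketched as follows; tri implements the triangular membership shape of Table 1, and the two controller readings are invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_inputs(set_temp, readings):
    """Yield (error, change_in_error) pairs for a stream of controller outputs."""
    prev_error = None
    for y in readings:
        error = set_temp - y                       # e = set temperature - controller output
        delta = 0.0 if prev_error is None else error - prev_error
        prev_error = error
        yield error, delta

# Two invented controller readings against the 125-degree set value
pairs = list(fuzzy_inputs(125.0, [124.995, 125.005]))
# e = 0.005 falls halfway down the "Medium" error set (-0.01, 0, 0.01) of Table 1
mu_medium = tri(pairs[0][0], -0.01, 0.0, 0.01)
```

Each (e, ∆e) pair is then fuzzified against all five membership functions of its input before the rule base is consulted.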


A rule base has now been developed for the fuzzy model using simple IF-THEN rules. On the basis of this rule base, a fuzzified output is calculated.

The rule base is summarized in Table 2.

Table 2. Rule base

Fuzzy output (Fz) for each combination of Error (e) and Change in Error (∆e)

Error (e)   N    SM   M    L
VS          VL   VL   VL   -
S           VL   VL   VL   -
M           VL   L    L    -
H           -    -    -    -
VH          N    SM   SM   -

This fuzzy model is simulated in the MATLAB Fuzzy Logic Toolbox GUI, and results are obtained. The results are then plotted, along with the actual temperature and the set temperature obtained from the process, in Fig. 3.

Fig. 3. Response curve of PID vs. fuzzy flow controller in the oil tank

The red graph shows the output of the fuzzy model of the oil tank temperature controller, the black line represents the output of the PID temperature controller, and the blue line represents the set value of the temperature controller. The fuzzy output possesses some oscillations. Improvements have been made by revising the membership functions and the rule base, as shown in Table 3. Using the revised rule base, the oscillations in the fuzzy response decrease, and the steady-state error is also reduced compared with the previous fuzzy model.
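A single inference step over IF-THEN rules of this form can be sketched with max-min composition. The three rules below are taken from Table 2; the membership breakpoints are the Table 1 values for the error and the first three change-in-error sets, mapped onto the rule table's column labels as an assumption, since the paper uses different label sets in the two tables:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# (error label, change-in-error label) -> output label, from three cells of Table 2
RULES = {("VS", "N"): "VL", ("M", "SM"): "L", ("VH", "M"): "SM"}

# Membership parameters (a, b, c) for the labels used above
ERROR_MF = {"VS": (-0.03, -0.02, -0.01), "M": (-0.01, 0.0, 0.01), "VH": (0.01, 0.02, 0.03)}
DELTA_MF = {"N": (-1.5, -1.0, -0.5), "SM": (-1.0, -0.5, 0.0), "M": (-0.5, 0.0, 0.5)}

def infer(e, de):
    """Fire each rule at min(mu_e, mu_de); keep the max strength per output label."""
    strengths = {}
    for (e_lbl, d_lbl), out in RULES.items():
        w = min(tri(e, *ERROR_MF[e_lbl]), tri(de, *DELTA_MF[d_lbl]))
        strengths[out] = max(strengths.get(out, 0.0), w)
    return strengths

result = infer(0.0, -0.5)   # e at the "M" peak, de at the "SM" peak
```

The resulting per-label strengths would then be defuzzified (e.g. by the centroid of the clipped output sets) to obtain the crisp fuzzy output.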

The error input has five membership functions, and the change in error and the fuzzy output also have five membership functions each.

Table 3. New improved rule base

Membership function for Error

Linguistic variable    Initial value   Peak value   Final value


Very Small (VS)        -0.03           -0.02        -0.01
Small (S)              -0.02           -0.01        0
Medium (M)             -0.01           0            0.01
High (H)               0               0.01         0.02
Very High (VH)         0.01            0.02         0.03

Membership function for Change in Error

Linguistic variable    Initial value   Peak value   Final value
Very Small (VS)        -1.5            -1           -0.5
Small (S)              -1              -0.5         0
Medium (M)             -0.5            0            0.5
High (H)               0               0.5          1
Very High (VH)         0.5             1            1.5

Membership function for Fuzzy output

Linguistic variable    Initial value   Peak value   Final value
Very Small (VS)        30              45           60
Small (S)              45              62           80
Medium (M)             65              82           100
High (H)               90              105          122
Very High (VH)         115             121          125

Revised rule base: Fuzzy output (Fz) for each combination of Error (e) and Change in Error (∆e)

Error (e)   N    SM   M    L
VS          VL   VL   VL   -
S           VL   VL   VL   -
M           VL   L    L    -
H           -    -    -    -
VH          N    SM   SM   -

The MATLAB simulation results, along with the actual temperature and the set temperature obtained from the process, are plotted in Fig. 4.

Here the steady-state error is decreased and the settling time is also improved. It is observed from Fig. 4 that the oscillations are reduced and the steady-state error is likewise reduced; the settling time is also found to be improved. Further improvements in the fuzzy output and the rule base have been made: to achieve this, the ranges of the last two membership functions of the fuzzy output have been changed.

Fig. 4. Improved response curve of PID flow controller vs. fuzzy flow controller

CONCLUSIONS

A PID controller cannot be applied to systems whose parameters change quickly, because that would require changing the PID constants over time. The fuzzy-based controller gives the best performance, but the control engineer faces various challenges in designing such a controller. The expected conclusions of the proposed research are that designing the PID controller using the various soft computing techniques:

 Would dramatically improve the steady-state and transient response of the system.
 Would reduce the overshoot, rise time and settling time of the system response, so production can surely be increased by adopting such intelligent controllers.
 Would prove suitable for controlling various process parameters, such as temperature, pressure, level and flow, in process industries, and would be more efficient and effective.

REFERENCES

[1] Hyung-Soo Hwang, Jeong-Nae Choi, "A Tuning Algorithm for the PID Controller Utilizing Fuzzy Theory," Wonkwang University, IEEE Transactions on Fuzzy Systems, pp. 2210-2215, 1999.


[2] F. Karray, W. Gueaieb, "Soft Computing Techniques as Applied to Expert Tuning of PID Controllers," Proceedings of the 15th IEEE International Symposium on Intelligent Control, Greece, pp. 91-96, 17-19 July 2000.

[3] Dong Hwa Kim, "Comparison of PID Controller Tuning of Power Plant Using Immune and Genetic Algorithms," Proceedings of the IEEE International Symposium on Computational Intelligence for Measurements and Applications, Lugano, Switzerland, pp. 169-174, 29-31 July 2003.

[4] Andrey Popov, Adel Farag and Herbert Werner, "Tuning of a PID Controller Using a Multi-objective Optimization Technique Applied to a Neutralization Plant," Proceedings of the 44th IEEE Conference on Decision and Control, Spain, pp. 7139-7143, 12-15 December 2005.

[5] Zhenyu Yang, "Automatic Tuning of PID Controller for a 1-D Levitation System Using a Genetic Algorithm – A Real Case Study," Proceedings of the 2006 IEEE International Symposium on Intelligent Control, Munich, Germany, pp. 3098-3103, October 4-6, 2006.

[6] Tae-Hyoung Kim, Ichiro Maruta and Toshiharu Sugie, "Particle Swarm Optimization based Robust PID Controller Tuning Scheme," Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, pp. 200-205, Dec. 12-14, 2007.

[7] J. L. Calvo-Rolle, "Developed an Expert System of an Empirical Method to Choose Correct Expressions for PID Controllers Tuning in Open Loop," Proceedings of the IEEE Conference on Expert Systems, pp. 2044-2049, 2009.

[8] M. Sridhar, K. Vaisakh, "Adaptive PSO Based Tuning of PID-Fuzzy and SVC-PI Controllers for Dynamic Stability," Proceedings of the Second International Conference on Emerging Trends in Engineering and Technology, ICETET-09, pp. 985-990, 2009.

[9] B. Nagaraj, "A Comparative Study of PID Controller Tuning Using GA, EP, PSO and ACO," Proceedings of the IEEE International Conference on Trends in Engineering and Technology, ICCCCT-10, pp. 305-313, 2010.

[10] Hwan Gil Bae, "Comparative Study of PID Controller Designs Using Particle Swarm Optimizations for Automatic Voltage Regulators," Proceedings of the IEEE Conference on Expert Systems, 2011.


A Survey on Routing Protocol for Wireless Sensor Network

Mahima Bansal, PG Student, CSE Dept., RIMT-Institute of Engineering and Technology, Mandi-Gobindgarh, mahima0911@gmail.com
Dr. Harsh Sadawarti, Director, RIMT-Institute of Engineering and Technology, Mandi-Gobindgarh, harshsada@yahoo.com
Abhilash Sharma, Assistant Professor, CSE Dept., RIMT-Institute of Engineering and Technology, Mandi-Gobindgarh, abhilash583@yahoo.com

ABSTRACT

Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks, where energy awareness is an essential consideration. Most of the attention, however, has been given to the routing protocols, since they might differ depending on the application and network architecture. This paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued. The three main categories explored in this paper are data-centric, hierarchical and location-based. Some routing protocols are described and discussed under the appropriate category. Moreover, protocols using contemporary methodologies such as network flow and QoS modeling are also discussed.

Keywords
Wireless Sensor Network, Routing Protocols, Classification of Protocols.

1. INTRODUCTION

Wireless sensor networks can provide a low-cost solution to a variety of real-world problems. Sensors are low-cost tiny devices with limited storage, computational capability and power. The large-scale deployment of wireless sensor networks is expected to guarantee real-time communication. Devices in sensor networks have a much smaller memory, a constrained energy supply, and less processing power and communication bandwidth. Topologies of sensor networks are constantly changing due to a high node-failure rate, occasional shutdowns and abrupt communication interference. Due to the nature of the applications supported, sensor networks need to be densely deployed and have anywhere from thousands to millions of sensing devices, which is orders of magnitude larger than traditional ad hoc mobile networks. In addition, energy conservation becomes the center of focus due to the limited battery capacity and the impossibility of recharging in a hostile environment. Wireless sensor networks (WSN) allow flexible, powerful, automated data collection and monitoring systems to be created. Many routing protocols have been proposed to facilitate data transport from sensor nodes to a base station. The surveys of Al-Karaki and Kamal [2] and of Jiang and Manivannan [3] identify a wide range of approaches to routing in wireless sensor networks. Few of these protocols have been formally verified or operationally deployed, however. The Minimum Cost Forwarding (MCF) routing protocol [1] has been proposed; its application is restricted to networks possessing a single sink node and multiple source nodes, but it offers several potential advantages for sensor nodes with limited resources.

2. RELATED WORK

The growing interest in WSNs and the continual emergence of new techniques have inspired efforts to design communication protocols for this area. Communication protocols take on the task of data transmission in the large-scale network and are important for achieving the best possible performance. Current routing protocols can typically be classified into four main categories, namely data-centric protocols, hierarchical protocols, location-based protocols, and flow-based and QoS-aware protocols [4]. Of course, there are also some hybrid protocols that fit under more than one category. The typical data-centric routing protocols proposed for WSNs include SPIN [5] and Directed Diffusion [6], which are obviously different from traditional address-based routing; location-based protocols such as MECN, GAF and GEAR require location information for sensor nodes and are energy-aware. Hierarchical protocols are scalable to a larger number of sensors covering a wider region of interest, overcoming the defects of the single-gateway architecture. LEACH is one of the first hierarchical routing approaches for WSNs [7]. Although the above three categories are promising in terms of energy efficiency, more attention should be paid to addressing the issues of network flow and QoS posed by real-time applications. One of the protocols for WSNs that includes some notion of QoS in its routing decisions is Sequential Assignment Routing (SAR) [9]. The SAR protocol creates trees rooted at the one-hop neighbors of the sink by taking into account the QoS metric, the energy resources on each path, and the priority level of each data packet. By using the created trees, multiple paths from sink to sensors are formed, and one of these paths is selected according to the energy resources and achievable QoS on each path. Akkaya et al.


extend SAR by selecting a path from a list of candidate paths that meets the end-to-end delay requirement and maximizes the throughput for best-effort traffic. Their protocol does not require sensors to be involved in route setup, so the overhead problems of the SAR approach can be avoided.

The minimum cost forwarding protocol is a kind of flow-based routing protocol. It aims at finding the minimum-cost path in large-scale sensor networks and is simple and scalable. The data flows over the minimum-cost path, and resources on the nodes are updated after each flow. Ye et al. also propose a cost-field-based protocol, Minimum Cost forwarding Routing (MCR) [1]. In this design, they present a novel backoff-based cost-field setup algorithm that finds the optimal cost of all nodes to the sink with a single message overhead at each node. Once the field is established, the message, carrying dynamic cost information, flows along the minimum-cost path in the cost field. Each intermediate node forwards the message only if it finds itself to be on the optimal path, based on dynamic cost states.

SPIN Messages

SPIN nodes use three types of messages to communicate:
 ADV – new data advertisement. When a SPIN node has data to share, it can advertise this fact by transmitting an ADV message containing meta-data.
 REQ – request for data. A SPIN node sends a REQ message when it wishes to receive some actual data.
 DATA – data message. DATA messages contain actual sensor data with a meta-data header.

Because ADV and REQ messages contain only meta-data, they are smaller, and cheaper to send and receive, than their corresponding DATA messages. Using the SPIN routing algorithm, sensor nodes can conserve energy by sending the meta-data that describes the sensor data instead of sending all the data. SPIN can reduce the power consumption of an individual node, but it may decrease the lifetime of the whole network due to the extra messages.
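The ADV/REQ/DATA negotiation above can be sketched as a toy exchange between one sender and one receiver; the data name and payload are invented, and real SPIN negotiates with many neighbours over a shared wireless medium:

```python
def spin_exchange(meta, data, receiver_seen):
    """One ADV/REQ/DATA round: the receiver requests only data it has not already seen."""
    log = [("ADV", meta)]              # sender advertises meta-data only (cheap)
    if meta in receiver_seen:
        return log                     # receiver already has it: no REQ, no DATA
    log.append(("REQ", meta))          # receiver asks for the advertised data
    log.append(("DATA", data))         # sender ships the full payload (expensive)
    receiver_seen.add(meta)
    return log

seen = set()
first = spin_exchange("temp/zone1", b"25.4C", seen)   # new data: full three-message round
second = spin_exchange("temp/zone1", b"25.4C", seen)  # duplicate: only the cheap ADV
```

The second round illustrates where the savings come from: a node that already holds the named data never triggers the expensive DATA transmission.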
3. DATA CENTRIC PROTOCOLS

Data-centric protocols are query-based. In many applications of sensor networks it is not feasible to assign global identifiers to each node. The sink sends queries to certain regions and waits for data from the sensors located in those regions. Attribute-based naming is necessary to specify the properties of the data.

Data-centric protocols are
 Flooding
 Gossiping
 Sensor Protocols for Information via Negotiation (SPIN)
 Directed Diffusion
 Energy-aware Routing
 Rumor Routing
 Gradient-Based Routing (GBR)
 Constrained Anisotropic Diffusion Routing (CADR)
 COUGAR
 Active Query Forwarding In Sensor Networks (ACQUIRE)

3.1 Sensor Protocols for Information via Negotiation (SPIN)

Heinzelman et al. [5] proposed a family of adaptive protocols called Sensor Protocols for Information via Negotiation (SPIN) that disseminate all the information at each node to every node in the network, assuming that all nodes in the network are potential base stations. This enables a user to query any node and get the required information immediately. These protocols make use of the property that nodes in close proximity have similar data, and hence there is a need to distribute only the data that other nodes do not possess. One of the advantages of SPIN is that topological changes are localized, since each node needs to know only its single-hop neighbors. SPIN provides much greater energy savings than flooding, and its meta-data negotiation almost halves the redundant data.

3.2 Directed Diffusion (DDiff)

C. Intanagonwiwat et al. [6] proposed a popular data aggregation paradigm for WSNs, called directed diffusion. Directed diffusion is a data-centric (DC) and application-aware paradigm in the sense that all the data generated by sensor nodes is named by attribute-value pairs. The main idea of the DC paradigm is to take the data coming from different sources and combine it by eliminating redundancy, minimizing the number of transmissions, thus saving network energy and prolonging network lifetime. But power consumption is still high.

Therefore, we propose an algorithm that tries to prolong the network lifetime by compromising between minimum energy consumption and fair energy consumption, without additional control packets. It also improves the data packet delivery ratio, minimizes delay and maximizes the throughput of the network.

4. HIERARCHICAL PROTOCOLS

Hierarchical protocols aim at clustering the nodes so that cluster heads can do some aggregation and reduction of data in order to save energy. Scalability is one of the major design attributes of sensor networks. A single-tier network can cause the gateway to overload with the increase in sensor density. Such overload might cause latency in communication and inadequate tracking of events. The single-gateway architecture is not scalable for a larger set of sensors covering a wider area of interest.

Hierarchical protocols maintain the energy consumption of sensor nodes
 by multi-hop communication within a particular cluster;
 by data aggregation and fusion, decreasing the total number of transmitted packets.

Hierarchical protocols are
 LEACH: Low-Energy Adaptive Clustering Hierarchy
 PEGASIS: Power-Efficient Gathering in Sensor Information Systems
 Hierarchical PEGASIS


 TEEN: Threshold-sensitive Energy Efficient sensor Network protocol
 APTEEN: Adaptive Threshold TEEN
 Energy-aware routing for cluster-based sensor networks
 Self-organizing protocol

4.1 Low-Energy Adaptive Clustering Hierarchy (LEACH)

LEACH is a clustering-based protocol that minimizes energy dissipation in sensor networks [7]. LEACH randomly selects sensor nodes as cluster heads, so the high energy dissipation of communicating with the base station is spread over all the sensor nodes in the network. However, data collection is centralized and performed periodically. LEACH collects data from distributed micro-sensors and transmits it to a base station. LEACH uses the following clustering model: some of the nodes elect themselves as cluster heads; these cluster heads collect sensor data from the other nodes in the vicinity and transfer the aggregated data to the base station. Since data transfers to the base station dissipate much energy, the nodes take turns with the transmission: the cluster heads "rotate". This rotation of cluster heads leads to a balanced energy consumption across all the nodes and hence to a longer network lifetime. This protocol is therefore most appropriate when there is a need for constant monitoring by the sensor network. LEACH can suffer from the clustering overhead, which may result in extra power depletion.

5. LOCATION-BASED PROTOCOLS

Many routing protocols for sensor networks require location information for the sensor nodes. There is no addressing scheme for sensor networks like IP addresses, but location information can be utilized to route data in an energy-efficient way. Protocols designed for ad hoc networks with mobility in mind are applicable to sensor networks as well; only energy-aware protocols are considered here.

Location-based protocols are
 MECN & SMECN: Minimum Energy Communication Network
 GAF: Geographic Adaptive Fidelity
 GEAR: Geographic and Energy Aware Routing

6. NETWORK FLOW & QOS-AWARE PROTOCOLS

These are based on general network-flow modeling and on protocols that strive to meet some QoS requirements along with the routing function. In addition to minimizing energy consumption, it is also important to consider quality-of-service (QoS) requirements, in terms of delay, reliability and fault tolerance, when routing in WSNs. In this section we review sample QoS-based routing protocols that help find a balance between energy consumption and QoS requirements.

Network flow: maximize the traffic flow between two nodes, respecting the capacities of the links.
4.2 Power-Efficient Gathering in Sensor
Information Systems (PEGASIS) QOS-aware protocols
PEGASIS [8] is an extension of the LEACH protocol, which Consider end-to-end delay requirements while setting up
forms chains from sensor nodes so that each node transmits paths
and receives from a neighbor and only one node is selected
from that chain to transmit to the base station (sink). The data Network Flow & QOS-aware Protocols are
is gathered and moves from node to node, aggregated and  Maximum Lifetime Energy Routing
eventually sent to the base station. The chain construction is  Maximum Lifetime Data Gathering
performed in a greedy way. Unlike LEACH, PEGASIS avoids  Minimum Cost Forwarding
cluster formation and uses only one node in a chain to  Sequential Assignment Routing
transmit to the BS (sink) instead of using multiple nodes. A  Energy Aware QOS Routing Protocol
sensor transmits to its local neighbors in the data fusion phase  SPEED
instead of sending directly to its CH as in the case of LEACH.
In PEGASIS routing protocol, the construction phase assumes 6.1 Sequential Assignment Routing (SAR)
that all the sensors have global knowledge about the network, SAR [9] is one of the first routing protocols for WSNs that
particularly, the positions of the sensors, and use a greedy introduces the notion of QoS in the routing decisions. It is a
approach. When a sensor fails or dies due to low battery table-driven multi-path approach striving to achieve energy
power, the chain is constructed using the same greedy efficiency and fault tolerance. Routing decision in SAR is
approach by bypassing the failed sensor. In each round, a dependent on three factors: energy resources, QoS on each
randomly chosen sensor node from the chain will transmit the path, and the priority level of each packet. The SAR protocol
aggregated data to the BS, thus reducing the per round energy creates trees rooted at one-hop neighbors of the sink by taking
expenditure compared to LEACH. QoS metric, energy resource on each path and priority level of
PEGASIS has been shown to outperform LEACH by about each packet into consideration. By using created trees,
100–300% for different network sizes and topologies. multiple paths from sink to sensors are formed. One of these
paths is selected according to the energy resources and QoS
5. LOCATION-BASED PROTOCOLS on the path. Failure recovery is done by enforcing routing
IT utilizes the position information to relay the data to the table consistency between upstream and downstream nodes on
desired regions rather than the whole network. Most of the each path. Any local failure causes an automatic path
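The three-factor path choice just described can be sketched in a few lines. This is a plain-Python illustration, not code from the SAR paper; the additive QoS metric, the energy threshold, and all example numbers are assumptions made for the sketch:

```python
# Sketch of a SAR-style routing decision over multiple candidate
# paths to the sink. Each path is described by a QoS metric (assumed
# additive, e.g. total delay) and the residual energy of its weakest
# node; the packet carries a priority weight. All names and numbers
# are illustrative, not taken from the SAR paper.

def choose_path(paths, packet_priority, min_energy=10.0):
    """Pick the path minimizing the priority-weighted QoS metric,
    preferring paths whose residual energy is above a threshold."""
    usable = [p for p in paths if p["energy"] >= min_energy]
    if not usable:           # all paths drained: fall back to everything
        usable = paths
    return min(usable, key=lambda p: p["qos"] * packet_priority)

paths = [
    {"id": "A", "qos": 5.0, "energy": 80.0},  # modest delay, healthy
    {"id": "B", "qos": 3.0, "energy": 4.0},   # lowest delay, nearly dead
    {"id": "C", "qos": 9.0, "energy": 60.0},
]

print(choose_path(paths, packet_priority=2.0)["id"])  # "A": B fails the energy check
```

With the energy threshold relaxed to zero, the lowest-delay path B would win instead, which is exactly the trade-off between QoS and energy resources that SAR balances.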


Any local failure causes an automatic path restoration procedure locally. The objective of the SAR algorithm is to minimize the average weighted QoS metric throughout the lifetime of the network. Simulation results showed that SAR offers lower power consumption than the minimum-energy metric algorithm, which considers only the energy consumption of each packet and not its priority. SAR maintains multiple paths from the nodes to the sink. Although this ensures fault tolerance and easy recovery, the protocol suffers from the overhead of maintaining the tables and states at each sensor node, especially when the number of nodes is huge.

7. COMPARISON OF ROUTING PROTOCOLS
In this paper we compared the following routing protocols according to their design characteristics:
 SPIN [5]: Sensor Protocols for Information via Negotiation
 DD [6]: Directed Diffusion
 RR: Rumor Routing
 GBR: Gradient-Based Routing
 CADR: Constrained Anisotropic Diffusion Routing
 COUGAR
 ACQUIRE: ACtive QUery forwarding In sensoR nEtworks
 LEACH [7]: Low-Energy Adaptive Clustering Hierarchy
 TEEN & APTEEN: [Adaptive] Threshold-sensitive Energy Efficient sensor Network
 PEGASIS [8]: Power-Efficient GAthering in Sensor Information Systems
 VGA: Virtual Grid Architecture routing
 SOP: Self-Organizing Protocol
 GAF: Geographic Adaptive Fidelity
 SPAN
 GEAR: Geographical and Energy Aware Routing
 SAR [9]: Sequential Assignment Routing
 SPEED: a real-time routing protocol

Table 1 presents the classification and comparison of routing protocols in WSNs, and Table 2 presents routing protocol selection for particular applications in WSNs. Both tables are based on the survey of Ref. [10].

Table 1. Classification and comparison of routing protocols in WSNs

Table 2. Routing protocol selection for particular applications in WSNs

8. CONCLUSION
One of the main challenges in the design of routing protocols for WSNs is energy efficiency, owing to the scarce energy resources of sensors. The ultimate objective behind routing protocol design is to keep the sensors operating for as long as possible, thus extending the network lifetime. The energy consumption of the sensors is dominated by data transmission and reception; routing protocols designed for WSNs should therefore be as energy efficient as possible, to prolong the lifetime of individual sensors and hence of the network.
This is an evolving field which offers scope for a great deal of research. Moreover, unlike MANETs, sensor networks are in general designed for specific applications, so designing efficient routing protocols that suit sensor networks serving various applications is important. In this paper we have surveyed a sample of routing protocols by taking into account several classification criteria, including location information, network layering and in-network processing, data centricity, path redundancy, network dynamics, QoS requirements, and network heterogeneity. For each of these categories we have discussed a few example protocols and also compared and contrasted the existing routing protocols. As our study reveals, it is not possible to design a routing algorithm which will have good performance under all scenarios and for all applications. Although many routing protocols have been proposed for sensor networks, many issues still remain to be addressed.
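The dominance of transmission and reception costs noted above is usually made concrete with the first-order radio model from the LEACH literature. The sketch below uses commonly quoted illustrative constants (assumptions, not values from this survey) to show why a short hop to a cluster head beats a long direct transmission to the base station:

```python
# First-order radio energy model widely used to compare WSN routing
# protocols (constants are illustrative values from the LEACH
# literature, not measurements from this survey).
E_ELEC = 50e-9       # J/bit spent by TX/RX electronics
EPS_AMP = 100e-12    # J/bit/m^2 spent by the transmit amplifier

def tx_energy(bits, dist_m):
    """Energy to transmit `bits` over `dist_m` metres (d^2 path loss)."""
    return E_ELEC * bits + EPS_AMP * bits * dist_m ** 2

def rx_energy(bits):
    """Energy to receive `bits`."""
    return E_ELEC * bits

# Why rotating cluster heads pays off: a 2000-bit packet sent 100 m
# straight to the base station costs far more than a 10 m hop to a
# cluster head plus the head's reception cost.
direct = tx_energy(2000, 100)
via_head = tx_energy(2000, 10) + rx_energy(2000)
print(direct > via_head)  # True
```

Because the amplifier term grows with the square of the distance, aggregating traffic at rotating cluster heads, as LEACH and PEGASIS do, spreads this expensive long-range cost across the whole network.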

REFERENCES
[1] F. Ye, A. Chen, S. Lu, and L. Zhang, "A scalable solution to minimum cost forwarding in large sensor networks," in Proc. IEEE 10th Int. Conf. on Computer Communications and Networks, Scottsdale, Arizona, Oct. 2001.
[2] J. Al-Karaki and A. E. Kamal, "Routing techniques in wireless sensor networks: a survey," IEEE Trans. Wireless Commun., Dec. 2004.


[3] Q. Jiang and D. Manivannan, "Routing protocols for sensor networks," in Proc. IEEE CCNC, Jan. 2004.
[4] K. Akkaya and M. Younis, "A survey on routing protocols for wireless sensor networks," Ad Hoc Networks, 2005.
[5] W. Heinzelman, J. Kulik, and H. Balakrishnan, "Adaptive protocols for information dissemination in wireless sensor networks," in Proc. 5th ACM/IEEE MobiCom Conference (MobiCom '99), Seattle, WA, Aug. 1999.
[6] C. Intanagonwiwat, R. Govindan, and D. Estrin, "Directed diffusion: a scalable and robust communication paradigm for sensor networks," in Proc. ACM MobiCom 2000, Boston, MA, 2000.
[7] L. Subramanian and R. H. Katz, "An architecture for building self-configurable systems," in Proc. IEEE/ACM Workshop on Mobile Ad Hoc Networking and Computing, Boston, MA, Aug. 2000.
[8] S. Lindsey and C. S. Raghavendra, "PEGASIS: power-efficient gathering in sensor information systems," in Proc. IEEE Aerospace Conference, vol. 3, Big Sky, MT, Mar. 2002.
[9] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "Wireless sensor networks: a survey," Computer Networks (Elsevier), vol. 38, no. 4, Mar. 2002.
[10] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, 2002.


A Survey on Optical Amplifiers

Kirandeep Kaur
PG Student, CSE Dept.
RIMT-Institute of Engineering and Technology, Mandi-Gobindgarh
kiranrimt@gmail.com

Dr. Harsh Sadawarti
Director
RIMT-Institute of Engineering and Technology, Mandi-Gobindgarh
harshsada@yahoo.com

ABSTRACT
An optical fiber is an enabling and promising technology used in almost all the trunk lines of existing networks. It allows the transmission of many signals over long distances because of its huge transmission bandwidth (in THz) and low losses. In WDM networks, optical fibers are employed to transmit information in the form of light pulses between the transmitter and the receiver. WDM systems have the potential to transmit multiple signals simultaneously, but the light signals degrade in intensity when they travel a long distance inside the fiber, so all the light signals must be amplified simultaneously after a certain interval of propagation to regain the original signal. Optical amplifiers are generally used to amplify the light pulses. In this paper, the three main categories of optical amplifiers (semiconductor optical amplifier, Raman amplifier, and erbium-doped fiber amplifier) and a comparison between these amplifiers are explored.

Keywords
WDM, CWDM, DWDM, SOA, EDFA, Raman.

1. INTRODUCTION
In the modern era, the demand for communication has increased to a large extent due to the introduction of new communication techniques. As the number of clients increases day by day, the need arises to send more and more data over optical fibers. Residential subscribers demand high-speed networks for voice and media-rich services; similarly, corporate subscribers demand broadband infrastructure so that they can extend their local-area networks to the Internet backbone [2]. Huge bandwidth, high capacity, and high network speed are thus the main requirements for providing good quality of service to clients. This is only possible with a fiber-optic communication system, which carries light signals in the highest frequency range, i.e., terahertz (THz) [4]. Such a system can utilize different types of multiplexing, but wavelength division multiplexing (WDM) maintains good quality of service without congestion and with less complicated instruments and efficient utilization of bandwidth. When the signal that carries the information is transmitted over a long distance it becomes weak, so it needs to be amplified at regular intervals. An optical amplifier is a device which amplifies the optical signal directly, without ever converting it to electricity; the light itself is amplified. Reasons to use optical amplifiers are:
 Reliability
 Flexibility
 Wavelength division multiplexing (WDM)
 Low cost

RELATED WORK

2. FIBER OPTIC COMMUNICATION SYSTEM
Fiber-optic communication is a communication technology that uses light pulses to transfer information from one point to another through an optical fiber. The information transmitted is essentially digital information generated by telephone systems, cable television companies, and computer systems. An optical fiber is a dielectric cylindrical waveguide made from low-loss materials, usually silicon dioxide. The core of the waveguide has a refractive index slightly higher than that of the outer medium (cladding), so that light pulses are guided along the axis of the fiber by total internal reflection [4].
An optical fiber communication system has three basic components, the transmitter, the receiver, and the transmission path, as shown in the figure below.

Fig 1: Block Diagram of Optical Fiber Communication System

On the transmitter side, the input signal is generated by a data source. The optical source is a laser which generates an optical light signal at a certain wavelength. The data source and the optical signal are fed to the modulator, and the resulting modulated pulse signal propagates through the transmission path, which is an optical fiber. On the receiver side, the optical signal is detected by an optical detector; the detected signal then passes through the demodulator to obtain the desired output signal. An optical fiber is a flexible thin filament of silica glass; the system accepts electrical signals as input, converts them to an optical signal, carries the optical signal along the fiber length, and reconverts the optical signal to an electrical signal at the receiver side.

Francis Idachaba, Dike U. Ike, and Orovwode Hope [1] surveyed the future trends in fiber-optic communication. Advancement in technology and the demand for optical fiber have driven the evolution of fiber-optic


communication, and the development of new and advanced technology is expected to continue. Some of the envisioned future trends in fiber-optic communication mentioned in that paper are:
 All-optical communication networks
 Multi-terabit optical networks
 Improvements in laser technology
 Ultra-long-haul optical transmission
 Intelligent optical transmission networks
 Laser neural network nodes
 Polymer optical fibers
 Improvements in optical transmitter/receiver technology
 High-altitude platforms
 Improvements in optical amplification technology
 Improvements in WDM technology
 Improvements in glass fiber design and component miniaturization

3. WAVELENGTH DIVISION MULTIPLEXING (WDM)
In fiber-optic communications, WDM is a technique which multiplexes multiple optical carrier signals of different laser-light wavelengths onto a single optical fiber [2]. It is an analog multiplexing technique and allows multiple optical signals to be transferred along the same piece of fiber at the same time. It enables bidirectional communication over one strand of fiber as well as multiplication of signal capacity, i.e., the transmission capacity per fiber is multiplied by the number of signal wavelengths.

Fig 2: WDM System

The working of a WDM system is illustrated in Fig. 2. It consists of a multiplexer at the transmitter side to combine the signals and a demultiplexer at the receiver side to split them apart. With the proper choice of fiber, it is possible to have a device that does both multiplexing and demultiplexing simultaneously and can function as an optical add-drop multiplexer. Each optical channel is completely independent of the other optical channels; it may use its own encodings and protocols and run at its own rate (speed) without any dependence on the other channels at all. Signals at different wavelengths (represented as colors) of laser light are combined with the help of the multiplexer and then transmitted over a single fiber. The signals can be voice, video, or data, in either analog or digital format. The different carriers are separated with the help of the demultiplexer before photo-detection of the individual signals [3].

Sachin Chaugule and Ashish More [5] explain the principle by considering the fact that we can see many different colors of light, as in a rainbow, all at once. The colors are transmitted through the air together and may mix, but they can easily be separated with a simple device like a prism, just as we separate the white light of the sun into a spectrum of colors with a prism.

Fig 3: Separating a beam of light into its colors

Depending on channel resolution and the number of channels, there are two types of WDM systems:
i. Conventional or coarse wavelength division multiplexing (CWDM)
ii. Dense wavelength division multiplexing (DWDM)

CWDM supports 4 to 8 wavelengths per fiber, sometimes more, with large channel spacing, i.e., 1.6 nm to 25 nm; it is used in metro, short-haul networks. DWDM supports 8 or more wavelengths with denser channel spacing, i.e., 1.6 nm or less, and is designed for long-haul networks.

4. OPTICAL AMPLIFIERS
Hidenori Taga [6] showed that with the demand for longer transmission lengths, optical amplifiers have become an essential component in long-haul fiber-optic systems. When a signal travels in an optical fiber, it suffers various losses such as fiber attenuation losses, fiber tap losses, and fiber splice losses. Due to these losses it is difficult to detect the signal at the receiver side, so in order to transmit a signal over a long distance in a fiber (more than 100 km) it is necessary to compensate the losses in the fiber. Initially the optical signals were converted to electrical signals, amplified, and then reconverted to optical signals, but this was a complex and costly procedure. The introduction of optical amplifiers allowed signal amplification in the optical domain; there was no need to convert the optical signal to an electrical signal [4].

Fig 4: Block Diagram of basic optical amplifier
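How quickly a signal falls below the detection threshold, and hence why periodic amplification is needed, can be seen from a simple loss budget. In this sketch, the 0.2 dB/km attenuation, the launch power, and the receiver floor are typical illustrative values assumed here, not numbers taken from the paper:

```python
# Loss budget showing why long links need in-line amplification.
# The attenuation figure, launch power, and receiver floor are
# typical illustrative values, not numbers from this survey.
ALPHA_DB_PER_KM = 0.2     # silica fiber attenuation near 1550 nm

def span_loss_db(length_km, alpha=ALPHA_DB_PER_KM):
    return alpha * length_km

launch_dbm = 0.0          # 1 mW launched into the fiber
receiver_floor_dbm = -28.0

# Unamplified 400 km link: received power is hopelessly low.
received = launch_dbm - span_loss_db(400)
print(received < receiver_floor_dbm)  # True: roughly -80 dBm arrives

# With an in-line amplifier every 100 km, each amplifier must supply
# a gain equal to one span's loss (about 20 dB here) to keep the
# signal level constant along the link.
gain_per_amp_db = span_loss_db(100)
print(gain_per_amp_db >= 19.9)  # True
```

The same arithmetic explains the placement choices that follow: a booster raises the launch power, in-line amplifiers cancel each span's loss, and a preamplifier lifts the received level above the detector floor.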


Applications of optical amplifiers [5]:

 Power amplifier: power or booster amplifiers are placed just after the transmitter to boost the signals. This helps to increase the transmission distance by 10-100 km, depending on the amplifier gain and the fiber loss.

Fig 5: Power Amplifier

 In-line optical amplifier: in a single-mode optical fiber the signal suffers loss due to attenuation, so after a certain distance the signal must be regenerated and amplified. An in-line optical amplifier can be used to compensate the attenuation loss and increase the distance between regenerative repeaters.

Fig 6: In-line Optical Amplifier

 Preamplifier: an optical amplifier can be used as a front-end preamplifier just before an optical receiver. Weak optical signals are amplified by the preamplifier so that the SNR degradation is minimized; a preamplifier also shows high gain and better bandwidth.

Fig 7: Preamplifier

The three types of optical amplifiers are classified as follows:
 Semiconductor optical amplifier (SOA)
 Raman amplifier
 Erbium-doped fiber amplifier (EDFA)

4.1 Semiconductor Optical Amplifier
Aruna Rani and Sanjeev Dewra [7] report that semiconductor optical amplifiers (SOAs) are under rapid development to achieve polarization-independent gain, low facet reflectivity, good coupling to optical fibers, and high gain-saturation power. SOAs have been employed to overcome distribution losses in optical communication applications and are pursued for metropolitan-area networks as a low-cost alternative to fiber amplifiers. SOAs are essentially laser diodes, without end mirrors, which have fiber attached to both ends. They amplify any optical signal that comes from either fiber and transmit an amplified version of the signal out of the second fiber. SOAs are typically constructed in a small package, and they work for 1310 nm-1550 nm systems.

Fig 8: Semiconductor optical amplifier

4.1.1 Working
An optical signal is amplified by a semiconductor optical amplifier as follows:
 An electrical current passed through the device excites the electrons in the active region.
 When photons (light) travel through the active region, they can cause these electrons to lose some of their extra energy in the form of more photons that match the wavelength of the initial ones.
 An optical signal passing through the active region is therefore amplified and is said to have experienced "gain".

Fig 9: Working of SOAs

4.2 Raman Amplifier
Raman amplifiers are based on the phenomenon called stimulated Raman scattering (SRS), which is a nonlinear process [5]. Raman gain arises from the transfer of power from one optical beam to another that is downshifted in frequency by the energy of an optical phonon, a vibrational mode of the medium. Raman amplifiers utilize pumps to transfer energy from the pumps to the transmission signal through the Raman-effect mechanism. Raman scattering does not require population inversion, as it is an inelastic scattering mechanism.
T. N. Nielsen [8] summarized that stimulated Raman scattering is a nonlinear optical process in which intense pump light interacts with a signal of lower frequency, simultaneously amplifying the signal and producing an optical phonon. Raman scattering occurs in all optical fibers, with its strength depending only on the type of optical fiber and the frequency offset and power of the interacting waves. Maximum gain occurs for a frequency offset between pump and signal of 13.2 THz. The gain is directly proportional to the pump power and inversely proportional to the length and the fiber cross-sectional area.
Three important points are:
 SRS can occur in any fiber.
 Raman gain can occur at any signal wavelength.


 The Raman gain process is very fast, differing from the EDFA.

Fig 10: Raman amplifier configuration

4.2.1 Principle
Sachin Chaugule and Ashish More [5] describe the principle of the Raman amplifier, stimulated Raman scattering (SRS): an incident photon excites an electron to a virtual state, and stimulated emission occurs when the electron de-excites down to a vibrational state of the glass molecule.

Fig 11: Energy levels and transitions involved in SRS

4.2.2 Raman Amplification
Raman amplification requires no special doping in the optical fiber. It is usually accomplished as "distributed amplification": it happens throughout the length of the actual transmission fiber, rather than all in one place in a small box (as is the case with an EDFA).

Fig 12: Raman Amplification

4.2.3 Types of Raman Amplifier
There are two types of Raman amplifiers:
 Distributed Raman amplifier: the transmission fiber itself is utilized as the gain medium, by multiplexing a pump wavelength with the signal wavelength.
 Discrete (lumped) Raman amplifier: a dedicated, shorter length of fiber provides the amplification. It is a highly nonlinear fiber with a small core, used to increase the interaction between the signal and pump wavelengths and thereby reduce the length of fiber required.

4.3 Erbium-Doped Fiber Amplifier
EDFAs consist of erbium-doped fiber having a silica-glass host core doped with active Er ions as the gain medium. Erbium is a chemical element of the lanthanide series of the periodic table; its symbol is Er and its atomic number is 68. Erbium looks like a silvery-white solid metal when artificially isolated.
The basic elements of an EDFA are shown schematically in Fig. 13. Erbium-doped fiber is usually pumped by semiconductor lasers at 980 nm or 1480 nm. The signal propagates along a short span of this special fiber and is amplified along the way. The amplifier is pumped by a semiconductor laser, which is coupled in by a wavelength-selective coupler, also known as a WDM coupler, that combines the pump laser light with the signal light. The pump light propagates either in the same direction as the signal (co-propagation) or in the opposite direction (counter-propagation). Optical isolators are used to prevent oscillations and excess noise due to unwanted reflections.

Fig 13: Scheme of EDFA

4.3.1 Principle
To make the principle work, the erbium atoms need to be set in an excited state. This is done by 980 nm and/or 1480 nm lasers: the laser diode in the diagram generates a high-powered beam of light at a wavelength such that the erbium ions absorb it and reach their excited state. The pumping laser power is usually controlled via feedback.

V. Bobrovs and S. Berezins [9] showed that an increase in input signal power reduces the EDFA gain and the ASE noise. According to their observations, the most effective way to decrease the ASE noise created by the amplifier is not to have too weak a signal at the EDFA input. There is always an optimum EDFA length, depending on the pumping laser power: if the fiber is too short, the whole potential of the amplifier is not realized and some laser energy remains unused; if the EDFA fiber is longer than the optimal value, the erbium inversion level at the end of the EDFA will be less than 50% and the fiber will start to absorb the signal. The second key component of the EDFA, besides the erbium-doped fiber, is the pumping laser. All simulation results showed the advantage of a 980 nm co-directional laser in the amplifier setup compared to a 1480 nm laser: higher gain and lower noise values can be achieved with the same pumping power.

5. COMPARISON OF OPTICAL AMPLIFIERS [10]
In this survey we have compared the characteristics, advantages, disadvantages, and application areas of EDFA, SOA, and Raman amplifiers, as shown in Table 1 and Table 2.
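The gain compression with increasing input power reported above follows the textbook saturated-gain relation for a homogeneously broadened amplifier, G = G0 * exp(-(G - 1) * P_in / P_sat) (see, e.g., Agrawal [4]). A small numerical sketch; the small-signal gain and saturation power below are assumed values, not measurements from the cited study:

```python
import math

def saturated_gain(g0, p_in_mw, p_sat_mw):
    """Solve G = g0 * exp(-(G - 1) * P_in / P_sat) for G by bisection.

    The left side minus the right side is strictly increasing in G,
    negative at G = 1 and positive at G = g0, so the root is unique.
    """
    lo, hi = 1.0, g0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid - g0 * math.exp(-(mid - 1) * p_in_mw / p_sat_mw) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

G0 = 1000.0    # 30 dB small-signal gain (assumed)
P_SAT = 10.0   # saturation power in mW (assumed)

weak = saturated_gain(G0, 0.001, P_SAT)   # -30 dBm input signal
strong = saturated_gain(G0, 1.0, P_SAT)   #   0 dBm input signal
print(weak > strong)  # True: higher input power compresses the gain
```

This is the qualitative behavior the Bobrovs and Berezins observations describe: a weak input sees nearly the full small-signal gain, while a strong input drives the amplifier deep into saturation.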


Table 1. Comparison of characteristics of EDFA, SOA and Raman amplifiers

Table 2. Comparison of advantages, disadvantages and application areas of EDFA, SOA and Raman amplifiers

6. CONCLUSION
In this survey it is concluded that WDM provides good efficiency without congestion and with less complicated instruments and efficient utilization of bandwidth, and that optical amplifiers are used to transmit signals over long distances: they compensate attenuation losses and boost the power level of multiple lightwave signals.
Comparing EDFA, SOA, and Raman amplifiers, it is concluded that SOAs offer certain advantages over the more commonly used optical fiber amplifiers, such as low power consumption, compactness, and nonlinear gain properties. A particular attraction of EDFAs is their large gain bandwidth, typically tens of nanometers, which is actually more than enough to amplify data channels with the highest data rates without introducing any effects of gain narrowing; before such fiber amplifiers were available, there was no practical method for amplifying all channels. The only competitors to erbium-doped fiber amplifiers in the 1.5-μm region are Raman amplifiers, which profit from the development of higher-power pump lasers.

REFERENCES
[1] Francis Idachaba, Dike U. Ike, and Orovwode Hope, "Future trends in fiber optics communication," in Proc. World Congress on Engineering 2014, vol. I, WCE 2014, July 2-4, 2014, London, U.K.
[2] A. Banerjee, Y. Park, F. Clarke, H. Song, S. Yang, G. Kramer, K. Kim, and B. Mukherjee, "Wavelength-division-multiplexed passive optical network (WDM-PON) technologies for broadband access," Journal of Optical Networking, vol. 4, no. 11, 2005.
[3] Jun Zheng and Hussein T. Mouftah, "Optical WDM Networks: Concepts and Design," IEEE Press/John Wiley & Sons, pp. 1-4, 2004.
[4] G. P. Agrawal, "Fiber-Optic Communication Systems," John Wiley and Sons, New York, 1997.
[5] Sachin Chaugule and Ashish More, "WDM and optical amplifier," IEEE, vol. 2, V2-232, 2010.
[6] Hidenori Taga, "Long distance transmission experiments using the WDM technology," Journal of Lightwave Technology, vol. 14, p. 1287, 1996.
[7] Aruna Rani and Sanjeev Dewra, "Semiconductor optical amplifiers in optical communication systems," IJERT, vol. 2, issue 10, October 2013.
[8] T. N. Nielsen, "Raman amplifiers in WDM systems," IEEE, 0-7803-5634-9/99, 1999.
[9] V. Bobrovs and S. Berezins, "EDFA application research in WDM communication systems," Elektronika ir Elektrotechnika, ISSN 1392-1215, vol. 19, no. 2, 2013.
[10] Mahmoud Ibrahim and Amin Abubake, "A comparison of optical amplifiers in optical communication systems: EDFA, SOA and Raman," International Journal of Current Research, vol. 6, issue 9, pp. 8738-8741, September 2014.


Managing Big Data with Apache Hadoop


Amandeep Singh Khangura
Maninderpal Singh Dhaliwal
Deptt Of Computer Science, PTU Jalandhar
Deptt Of Computer Science, PTU Jalandhar
LCET,katani Kalan,Ldh,Punjab
LCET,katani Kalan,Ldh,Punjab
khangurasony@gmail.com
ermdhaliwal85@gmail.com

ABSTRACT

Over the last few years, various organizations have made a strategic decision to turn big data into competitive advantage. The challenge of extracting value from big data is comparable in many ways to the age-old problem of distilling business intelligence from transactional information. To overcome this problem, a process called "Extract, Transform & Load" (ETL) is used: data is extracted from several sources, converted to fit our analytical requirements, and loaded into a data warehouse for subsequent analysis. The nature of big data requires that the infrastructure for this process scale cost-effectively. Apache Hadoop* has emerged as one of the standards for managing big data. This paper examines some of the platform hardware as well as software considerations in using Hadoop for ETL, as provided by companies like Intel.

Keywords: Apache Hadoop, ETL with Apache Hadoop, Big Data, MapReduce and HDFS, offload ETL with Hadoop, Infrastructure for ETL.

1. THE EXTRACT, TRANSFORM & LOAD BOTTLENECK IN BIG DATA ANALYTICS

Big Data refers to large amounts of data, at least terabytes, of poly-structured information that flows through and around organizations; it includes video, text, and sensor logs as well as transactional records. The business benefits of analyzing this data can be significant. According to a recent study by the MIT Sloan School of Management, organizations that use analytics are twice as likely to be top performers in their industry.

Business analysts at a large company like Intel, for example, with its large-scale market and complex supply chain, have long sought insight into consumer demand by analyzing far-flung data points culled from market information and business transactions. Increasingly, the data we need is embedded in financial reports, discussion forums, news sites, social and professional networks, climate reports, tweets, and various blogs, as well as transactions. By analyzing all the available data, decision-makers can better assess threats, anticipate changes in customer behavior, strengthen supply chains, improve the effectiveness of marketing campaigns, and enhance their business stability.

Many of these benefits are not new to organizations that have mature processes for incorporating business intelligence (BI) and analytics into their decision-making, but most organizations have yet to take full advantage of fresh technologies for handling big data. Put simply, the cost of the technologies needed to store and analyze large volumes of diverse data has dropped, thanks to open source software running on industry-standard hardware. The cost has dropped so far, in fact, that the key strategic question is no longer what data is relevant, but rather how to extract the most value from all the available data we have.

Rapidly ingesting, storing, and processing big data demands a cost-effective infrastructure that can scale with the quantity of data and the scope of analysis. Most organizations with traditional data platforms, normally relational database management systems (RDBMS) coupled to enterprise data warehouses (EDW) using ETL tools, find that their legacy infrastructure is either technically incapable or financially impractical for storing and analyzing big data.

A traditional ETL process extracts information from a number of sources, then cleanses, formats, and loads it into a data warehouse for further analysis, as shown in Figure 1. When the source data sets are huge, fast, and unstructured, traditional ETL can become the bottleneck: it is too difficult to develop, too expensive to operate, and takes too long to execute.

By most accounts, more than 80 percent of the development effort in a big data project goes into data integration and only 20 percent goes toward data analysis. Furthermore, a long-established EDW platform can cost upwards of USD 60K per terabyte. Analyzing one petabyte, the amount of data Google processes in one hour, would cost USD 60M. Obviously "more of the same" is not a big data strategy that any CIO can pay for. So, Apache Hadoop is an excellent choice.

2. APACHE HADOOP FOR BIG DATA

When companies like Yahoo, Google, and Facebook extended their services to web scale, the amount of data they collected regularly from user interactions online would have overwhelmed the capabilities of long-established IT architectures, so they built their own. In the interest of advancing the development of core infrastructure components rapidly, they published papers and released code for many of the components into open source. Among these components, Apache Hadoop has quickly emerged as the de facto standard for managing large volumes of unstructured data.

Apache Hadoop is an open source distributed software platform for storing and processing data. Written in Java, it runs on a cluster of industry-standard servers configured with direct-attached storage. Using Hadoop, we can store petabytes of data reliably on tens of thousands of servers while scaling performance cost-effectively by merely adding inexpensive nodes to the cluster.
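The extract, transform, and load steps described here can be sketched in a few lines of Python (the field layout, sample record, and date rule below are illustrative, not taken from the paper):

```python
import csv
import io

def extract(csv_text):
    # Extract: read delimited records from a source (here, an in-memory CSV).
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    # Transform: split the customer name into first/last and normalize
    # US-style dates (e.g., 07-24-13) to ISO 8601 (2013-07-24).
    out = []
    for row in rows:
        first, _, last = row["name"].partition(" ")
        mm, dd, yy = row["order_date"].split("-")
        out.append({"first": first, "last": last,
                    "order_date": f"20{yy}-{mm}-{dd}"})  # assumes 20xx years
    return out

def load(rows, warehouse):
    # Load: append the cleansed rows to the target store
    # (a plain list stands in for a data warehouse table).
    warehouse.extend(rows)

warehouse = []
source = "name,order_date\nJohn Smith,07-24-13\n"
load(transform(extract(source)), warehouse)
print(warehouse[0])  # {'first': 'John', 'last': 'Smith', 'order_date': '2013-07-24'}
```

In a production pipeline the same three functions would read from files or databases and write to warehouse tables, but the division of responsibilities is the same.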

365
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Central to the scalability of Apache Hadoop is the distributed processing framework called MapReduce (Figure 2). MapReduce helps programmers solve data-parallel problems in which the data set can be sub-divided into small parts and processed independently. MapReduce is an important advance because it allows ordinary developers, not just those skilled in high-performance computing, to use parallel programming constructs without worrying about the complex details of intra-cluster communication, task monitoring, and failure handling. MapReduce simplifies all that.

The system splits the input data set into multiple chunks, each of which is assigned a map task that can process the data in parallel. Each map task reads the input as a set of (key, value) pairs and produces a transformed set of (key, value) pairs as the output. The framework shuffles and sorts the outputs of the map tasks, sending the intermediate (key, value) pairs to the reduce tasks, which group them into the final results. MapReduce uses JobTracker and TaskTracker mechanisms to schedule tasks, monitor them, and restart any that fail.

The Apache Hadoop platform also includes the Hadoop Distributed File System (HDFS), which is designed for scalability and fault tolerance. HDFS stores large files by dividing them into blocks (usually 64 or 128 MB) and replicating the blocks on three or more servers. MapReduce applications can read and write data in parallel through the APIs provided by HDFS. Capacity and performance can be scaled by adding DataNodes, and a single NameNode manages data placement and monitors server availability. HDFS clusters in production today reliably hold petabytes of data on thousands of nodes.

In addition to MapReduce and HDFS, Apache Hadoop comes with many other components, some of which are very valuable for ETL.

A. Apache Flume* is a distributed system for collecting, aggregating, and moving large amounts of data from several sources into HDFS or another central data store. Enterprises usually collect log files on application servers or other systems and archive the log records in order to comply with regulations. The ability to ingest and analyze that structured or semi-structured data in Hadoop can turn this passive resource into a significant asset.

B. Apache Sqoop* is a tool for transferring data between Hadoop and relational databases. We can use Sqoop to import data from a MySQL or Oracle database into HDFS, run MapReduce on the data, and then export the data back into a relational database management system. Sqoop automates these processes, using MapReduce to import and export the data in parallel with fault tolerance.
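The map, shuffle/sort, and reduce flow of (key, value) pairs described above can be imitated in plain Python; this is a single-process sketch of the programming model, not the distributed implementation:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # Map: emit intermediate (key, value) pairs; here a word-count style job.
    for line in records:
        for word in line.split():
            yield (word, 1)

def shuffle_sort(pairs):
    # Shuffle and sort: group intermediate pairs by key, as the framework
    # does between the map and reduce phases.
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield key, [value for _, value in group]

def reduce_phase(grouped):
    # Reduce: fold each key's list of values into the final result.
    return {key: sum(values) for key, values in grouped}

result = reduce_phase(shuffle_sort(map_phase(["big data", "big cluster"])))
print(result)  # {'big': 2, 'cluster': 1, 'data': 1}
```

In Hadoop itself the map and reduce functions would be written as Java Mapper and Reducer classes, and the shuffle/sort step is performed by the framework across the cluster.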
C. Apache Hive* and Apache Pig* are two programming languages that simplify development of applications employing the MapReduce framework. HiveQL is a dialect of SQL and supports a subset of its syntax. Although slow, Hive is being actively improved by the developer community to enable low-latency queries on Apache HBase* and HDFS. Pig Latin is a procedural programming language that provides high-level abstractions for MapReduce. We can extend it with User Defined Functions written in Java, Python, and other languages.

D. ODBC/JDBC Connectors for HBase and Hive are often proprietary components included in distributions of Apache Hadoop software. They provide connectivity to SQL applications by translating standard SQL queries into HiveQL commands that can be executed on the data in HDFS or HBase.

III. HADOOP ARCHITECTURE

Hadoop is a powerful platform for big data storage and processing. On the other hand, its extensibility and novelty renew questions around data integration, data quality, governance, data security, and a host of other issues that organizations with extremely mature BI processes have long taken for granted. Even with these challenges of integrating Hadoop into a traditional BI environment, ETL has proven to be a frequent use case for Hadoop in enterprises. Why is it so popular? Because ETL, ELT, and ETLT are quite easy with Apache Hadoop. ETL tools move data from one place to another by performing three functions:

A. Extract data from ERP or CRM applications. During the extract step, we may need to gather data from numerous source systems and in various file formats, such as flat files with delimiters (CSV) and XML files. We may also need to collect data from legacy systems that store data in arcane formats no one else uses anymore. This seems easy, but can in fact be one of the major obstacles in getting an ETL solution off the ground.

B. Convert that data into a common format that fits other data in the warehouse.

The transform step may consist of multiple data manipulations, such as moving, splitting, translating, merging, sorting, and pivoting. For example, a customer name might be split into first and last names, and dates might be changed to the standard ISO format (e.g., from 07-24-13 to 2013-07-24). Generally this step also involves validating the data against data quality rules.

C. Load the data into the data warehouse for analysis. This action can be done in batch processes or row by row, more or less in real time.

Early on, before ETL tools existed, the only way to combine data from different sources was to hand-code scripts in languages like COBOL, RPG, and PL/SQL. Antiquated though it seems, about 45 percent of all ETL work today is still done by such hand-coded programs. Even though they are error-prone, slow to develop, and difficult to maintain, they have loyal users who appear to be impervious to the charms of ETL tools, like Oracle Warehouse Builder,* which can generate code for databases.

Code generators also have limitations, since they work with only a limited set of databases and are frequently bundled with them. In contrast, the next generation of ETL tools includes a general-purpose engine that performs the transformation, plus a set of metadata that stores the transformation logic. Because engine-based ETL tools like Pentaho Kettle* and Informatica PowerCenter* are independent of source and target data stores, they are more versatile than code-generating tools.

A traditional ETL architecture (Figure 3) accommodates several ETL iterations as well as an intermediate step, performed in the "staging area," which gets data out of the source system as quickly as possible. A staging area may use a database or just plain CSV files, which makes the process quicker than inserting data into a database table. Supplementary ETL iterations may be implemented to move data from the EDW into data marts that support particular analytic purposes and end-user tools.

Much has changed in data warehousing over the past two decades. Databases have become vastly more powerful. RDBMS engines now support complex transformations in SQL, including in-database data mining, data quality validation, profiling, statistical algorithms, cleansing, and hierarchical as well as drill-down functionality. It has turned out to be more efficient to execute most types of "transformation" within the RDBMS engine.

As a result, ELT emerged as a substitute approach in which data is extracted from the sources, loaded into the target database, and afterwards transformed and integrated into the required format. All the heavy data processing takes place within the target database. The benefit of this approach is that a database system is much better suited for handling huge workloads in which hundreds of millions of records need to be integrated. RDBMS engines are also optimized for disk I/O, which increases throughput. As long as the RDBMS hardware scales up, the system performance scales with it.

But ETL is not dead yet. Traditional ETL vendors have improved their tools with the addition of pushdown SQL capabilities, in which transformation can take place either in the ETL engine (for operations not supported by the target database) or after loading inside the database. The end result is the ETLT approach, supported in the same way by leading database vendors like Microsoft (SQL Server Integration Services) and Oracle (Oracle Warehouse Builder).

None of these solutions is cheap or simple, and their cost and complexity are compounded by big data. Consider eBay, which in 2011 had over 200 million items for sale, broken into 50,000 categories, and bought and sold via 100 million registered users, all of which entailed approximately 9 petabytes of data. Google reportedly processes over 24 petabytes of data every day. AT&T processes 19 petabytes through its networks every day, and the video game World of Warcraft uses 1.3 petabytes of storage space. All of these figures are already out of date as of this writing, because online data is growing so fast.

Under these conditions, Hadoop brings at least two significant advantages to traditional ETLT:

1) Ingest huge amounts of data without specifying a schema on write. An important characteristic of Hadoop, called "no schema on write," is that there is no need to pre-define the data schema before loading data into Hadoop. This is true not only for structured data (such as point-of-sale transactions, call detail records, ledger transactions, and call center transactions), but also for unstructured data (like user comments, doctor's notes, insurance claim details, and web logs) and social media data (from sites such as Facebook, LinkedIn, Pinterest, and Twitter). Regardless of whether our arriving data has explicit or implicit structure, we can quickly load it as-is into Hadoop, where it is available for downstream analytic processes.

2) Offload the transformation of raw data by parallel processing at scale. Once the data is in Hadoop, we can execute the traditional ETL tasks of cleansing, normalizing, aligning, and aggregating data for our EDW by employing the massive scalability of MapReduce. Hadoop allows us to avoid the transformation bottleneck in our usual ETLT by off-loading the ingestion, transformation, and integration of unstructured data into the data warehouse (Figure 4). Because Hadoop enables us to embrace more data types than ever before, it enriches our data warehouse in ways that would otherwise be infeasible or prohibitive. Because of its scalable performance, we can drastically increase the speed of the ETLT jobs. Furthermore, because data stored in Hadoop can persist over a much longer duration, we can provide more granular, detailed data through the EDW for high-fidelity analysis.

IV. OFFLOAD ETL WITH HADOOP

By using Hadoop in this manner, the organization gains an additional ability to store and access data that it "might" need, data which may never be loaded into the data warehouse. For example, data scientists might wish to use the large amounts of source data from social media, web logs, or third-party stores (from curators such as data.gov) stored on Hadoop to build new analytic models that drive research and discovery. They can store this data cost-effectively in Hadoop, and retrieve it as needed by using Hive or other analytic tools native to the platform, without affecting the EDW environment.

Regardless of whether our enterprise takes the ETL, ELT, or ETLT approach to data warehousing, we can cut the operational cost of the overall BI/DW solution by offloading common transformation pipelines to Apache Hadoop and using MapReduce on HDFS to provide a scalable, fault-tolerant platform for processing large amounts of heterogeneous data.

V. PHYSICAL PLANNING FOR ETL WITH HADOOP

The rule of thumb for Hadoop infrastructure planning has long been to "throw more nodes at the problem." This is a fine approach where the size of the cluster matches a web-scale challenge, like returning search results or personalizing web pages for hundreds of millions of online shoppers. But a typical Hadoop cluster in an enterprise has around 100 nodes and is supported by IT resources that are significantly more constrained than those of Yahoo! and Facebook. Organizations managing these clusters must adhere to the capacity planning and performance tuning processes typical of other IT infrastructures. They then need to provision, configure, and tune their Hadoop cluster for the particular workloads their businesses run.

Workloads vary widely, so it is essential to select and configure compute, storage, network, and software infrastructure to match exact needs. But before considering each of these issues, let us examine the general misunderstanding that all Hadoop workloads are I/O-bound. Over the last few years, developers at organizations like Intel have been testing the performance of successive releases of Hadoop on successive generations of Intel processor-based servers using a reference set of test workloads. Many of these workloads resemble real-world applications, such as ETL and analytics. On the basis of in-depth instrumentation of the test clusters, they observed that I/O and CPU utilization vary extensively across workloads and within the stages of MapReduce in every workload.
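The "no schema on write" behavior in point 1) above amounts to schema-on-read: raw records are stored untouched, and a structure is imposed only when the data is read. A minimal single-process sketch (the log format below is invented for illustration):

```python
# Store raw, heterogeneous lines without declaring any schema first,
# the way Hadoop accepts files as-is on load.
raw_store = [
    "2015-03-20|click|user42",
    "free-form comment left by a user",
]

def read_with_schema(store):
    # Apply a schema only at read time; lines that do not fit are kept as
    # unstructured text instead of being rejected at load time.
    for line in store:
        parts = line.split("|")
        if len(parts) == 3:
            yield {"date": parts[0], "event": parts[1], "user": parts[2]}
        else:
            yield {"text": line}

records = list(read_with_schema(raw_store))
print(records[0]["event"])  # click
```

Contrast this with a relational warehouse, where the table schema must exist, and the data must conform to it, before a single row can be loaded.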


A. TeraSort (map: CPU bound; reduce: I/O bound). TeraSort transforms data from one representation to another. As the size of the data remains the same from input through shuffle to output, TeraSort tends to be I/O bound. Still, when we compress map output so as to minimize disk and network I/O during the shuffle phase, TeraSort shows very high CPU utilization along with moderate disk I/O during the map and shuffle phases, and moderate CPU utilization with heavy disk I/O during the reduce phase.

B. WordCount (CPU bound). WordCount extracts a small quantity of interesting information from a huge data set, which means that the map output and the reduce output are much smaller than the job input. As a result, the WordCount workload is mostly CPU bound (especially during the map stage), with high CPU utilization and light disk and network I/O.

C. Nutch indexing (map: CPU bound; reduce: I/O bound). The input to this workload is about 2.4 million web pages generated by crawling Wikipedia. Nutch indexing decompresses the crawl data in the map stage, which is CPU bound, and then converts intermediate results to inverted index records in the reduce stage, which is I/O bound.

D. PageRank search (CPU bound). The PageRank workload represents the original inspiration for Google search, which is based on the PageRank algorithm for link analysis. This workload spends most of its time on iterations of several jobs, and these jobs are normally CPU bound, with low to medium disk I/O and memory utilization.

E. Bayesian classification (I/O bound). This workload implements the trainer part of the Naïve Bayesian classifier, a popular algorithm for knowledge discovery and data mining. The workload contains four chained Hadoop jobs, mostly disk I/O bound, except for the map tasks of the first job, which also have high CPU utilization and turn out to be the most time-consuming.

F. K-means clustering (CPU bound in iteration; I/O bound in clustering). This workload first computes the centroid of each cluster by running a Hadoop job iteratively until the iterations converge or the maximum number of iterations is reached; this is CPU bound. After that, it runs a clustering job (I/O bound) that assigns each sample to a cluster.

Even a cursory scan of these representative workloads shows that the system resource utilization of Hadoop is even more critical for enterprises than it has been for early adopters, and much more complex than many have supposed. Companies like Intel have worked closely with a number of customers to help develop and deploy a balanced platform for diverse real-world deployments of Hadoop. These assessments and benchmarking efforts have led to several recommendations for customers considering infrastructure hardware and distributions of Apache Hadoop software.

A processor from the Intel® Xeon® E5 family provides a strong foundation for many Hadoop workloads. A number of features built into the Intel Xeon processor E5 family are particularly well suited for Hadoop. One such feature is Intel® Integrated I/O, which helps reduce I/O latency by up to 32 percent and increases I/O bandwidth by as much as 2x.2,3 Another is Intel® Data Direct I/O Technology (DDIO), which allows Ethernet adapters to communicate directly with processor cache memory, rather than only with main memory. This feature helps deliver more I/O bandwidth along with lower latency, which is particularly beneficial when processing large data sets.

Another important feature that has captured the attention of Hadoop users is Advanced Encryption Standard New Instructions (AES-NI), which accelerates common cryptographic functions to remove the performance penalty typically associated with encryption and decryption of big-data files.

In several surveys conducted by Gartner, data center managers reported that power and cooling issues were the largest challenge they faced in the data center.4 Maximizing energy efficiency can be especially important for Hadoop clusters, which grow with data volume.

With its system-on-a-chip (SoC) design and power envelopes as low as 6 watts, the Intel® Atom™ processor offers improved density and energy efficiency for some workloads. Since Intel Atom processors are based on the industry-standard x86 instruction set, they are compatible with all application code that runs on Intel Xeon processors. This helps cut total cost of ownership by reducing software development, porting, and system management costs. In addition, Intel offers a power and thermal management software product called Intel® Data Center Manager (Intel® DCM). Intel DCM uses the instrumentation in Intel Xeon and upcoming Intel Atom processors and integrates with existing management consoles using standard APIs. We can use it to monitor power and thermal data in real time for individual servers and blades, as well as for racks, rows, and logical server groupings. We can cap power by setting limits or workload-related policies, configure alerts for power and thermal events, and store and analyze data to improve capacity planning.

VI. MEMORY

Sufficient system memory is necessary for high throughput of large numbers of parallel MapReduce tasks. Hadoop typically requires 48 GB to 96 GB of RAM per server, and 64 GB is optimal in most cases. Always balance server memory across the available memory channels to avoid memory-bandwidth bottlenecks.

Memory errors are one of the most common causes of server failure and data corruption, so error-correcting code (ECC) memory is highly recommended.5 ECC is supported in servers based on the Intel Xeon processor family as well as in micro-servers based on the Intel Atom processor family.

VII. STORAGE

Every server in a Hadoop cluster needs a relatively large number of storage drives to avoid I/O bottlenecks. Two hard drives per processor core usually deliver excellent results. However, a single solid-state drive (SSD) per core can deliver higher I/O throughput, reduced latency, and improved overall cluster performance. Intel® SSD 710 Series SATA SSDs offer significantly faster read/write performance compared to mechanical hard drives, and this extra performance can be helpful for latency-sensitive MapReduce applications. It can also accelerate jobs as intermediate output files are shuffled between the map and the reduce phases. Intel tests show that replacing mechanical drives with Intel SSDs can boost performance by approximately 80 percent. We can also use mechanical drives and SSDs together, using Intel® Cache Acceleration Software. This storage model provides some of the performance benefits of SSDs at a lower cost.

If we use hard drives, 7,200 RPM SATA drives provide an excellent balance of cost and performance. Run the drives in Advanced Host Controller Interface (AHCI) mode with Native Command Queuing (NCQ) enabled to improve performance when multiple read and write requests are invoked at the same time. Although we can use RAID 0 to logically combine smaller drives into a larger pool, RAID is not recommended because Hadoop automatically orchestrates data provisioning and redundancy across nodes.

VIII. NETWORK

A fast network not only allows data to be imported and exported quickly, but can also improve performance for the shuffle phase of MapReduce applications. A 10 Gigabit Ethernet network provides a simple, cost-effective solution. Intel tests have shown that using 10 Gigabit Ethernet rather than 1 Gigabit Ethernet in a Hadoop cluster can improve performance for key operations by up to 4x when using conventional hard drives. The performance improvement is even greater when using SSDs, up to 6x.7 The greater improvement with SSDs can be attributed to faster writes into the storage subsystem.

As a Hadoop cluster grows to include multiple server racks, we can improve network performance by connecting each of the 10 Gigabit Ethernet rack-level switches to a 40 Gigabit Ethernet cluster-level switch. As requirements continue to rise, we can interconnect multiple cluster-level switches and include an uplink to a higher-level switching infrastructure.

IX. SOFTWARE

Apache Hadoop is open source software that is available free of cost from the apache.org source code repository. A number of early adopters and test deployments directly download the source and build their platform on the Apache Hadoop distribution. Enterprises that require a vendor-supported platform, though, look to one of several independent software vendors (ISVs) to provide a complete product with software, the latest updates, and services. The Intel® Distribution for Apache Hadoop software (Intel® Distribution) is an enterprise-grade software platform that includes Apache Hadoop together with other software components. The Intel Distribution contains a number of exceptional features.

A. Built from the silicon up for performance and security. Running the Intel Distribution on Intel® processors enables Hadoop to fully utilize the performance and security features available in the x86 instruction set in general and the Intel Xeon processor in particular. For example, the Intel Distribution includes enhancements that take advantage of Intel AES-NI, available in Intel Xeon processors, to accelerate cryptographic functions, erasing the usual performance penalty of encryption and decryption of files in HDFS.

B. Automated performance tuning. This integrated method saves time while delivering a more optimized configuration. The Intel Distribution also includes a management console that provides an intelligent and powerful mechanism for configuring the Hadoop cluster. Intel Active Tuner uses a genetic algorithm to try a variety of parameters and converge quickly on the optimal configuration for a given Hadoop application.

C. Support for a broad range of analytic applications.

X. CONCLUSION

The recent technology of big data is generating new opportunities and new challenges for businesses across every industry. The challenge of data integration, incorporating data from social media and other unstructured sources into a traditional BI environment, is one of the most pressing issues facing CIOs and IT managers. Apache Hadoop gives us a cost-effective and scalable platform for ingesting big data sets and preparing them for analysis. Hadoop can be used to offload the traditional ETL processes and can decrease time to analysis by hours or even days. Running the Hadoop cluster economically means selecting an optimal infrastructure of servers, storage, networking, and software. Companies like Intel provide software as well as hardware platform components to help design and deploy an efficient, high-performing Hadoop cluster optimized for big data ETL. We can take advantage of suggested architectures, available training, specialized services, and the technical support provided by companies like Intel to speed up deployment and help reduce risk.

REFERENCES

[1] Hadoop HDFS User Guide at http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html.

[2] Hadoop Map-Reduce Tutorial at http://hadoop.apache.org/common/docs/current/mapred_tutorial.html.

[3] The Age of Big Data. Steve Lohr. New York Times, Feb 11, 2012. http://www.nytimes.com/2012/02/12/sunday-review/big-datas-impact-in-the-world.html.

[4] Big data: The next frontier for innovation, competition, and productivity. James Manyika, Michael Chui, Brad Brown, Jacques Bughin, Richard Dobbs, Charles Roxburgh, and Angela Hung Byers. McKinsey Global Institute. May 2011.

[5] Dumbill, Edd. What is big data? An introduction to the big data landscape. O'Reilly Radar. [Online] 11 January 2012 at http://radar.oreilly.com/2012/01/what-is-big-data.html.

[6] Apache Hive. http://sortbenchmark.org/

[7] HADOOP-3759: Provide ability to run memory intensive jobs without affecting other running tasks on the nodes. https://issues.apache.org/jira/browse/HADOOP-3759

[8] J. Dean and S. Ghemawat, "MapReduce: Simplified data processing on large clusters," in USENIX Symposium on Operating Systems Design and Implementation, San Francisco, CA, Dec. 2004, pp. 137-150.

[9] Jeffrey Dean and Sanjay Ghemawat, MapReduce: A Flexible Data Processing Tool, Communications of the ACM, Volume 53, Issue 1, January 2010.

[10] Marcin Jedyk, Making Big Data, Small: Using distributed systems for processing, analysing and managing large data sets, Software Professional's Network, Cheshire Data Systems Ltd.

[11] Brad Brown, Michael Chui, and James Manyika, Are you ready for the era of 'big data'?, McKinsey Quarterly, McKinsey Global Institute, October 2011.

[12] Dunren Che, Mejdl Safran, and Zhiyong Peng, From Big Data to Big Data Mining: Challenges, Issues, and Opportunities, DASFAA Workshops 2013, LNCS 7827, pp. 1-15, 2013.

[13] MIKE 2.0, Big Data Definition, http://mike2.openmethodology.org/wiki/Big_Data_Definition

[14] A Navint Partners White Paper, "Why is BIG Data Important?" May 2012, http://www.navint.com/images/Big.Data.pdf.

[15] http://bigdataarchitecture.com/

[16] http://www.informationweek.com/software/business-intelligence/sas-gets-hip-tohadoop-for-big-data/240009035?pgno=2

[17] G. Noseworthy, Infographic: Managing the Big Flood of Big Data in Digital Marketing, 2012, http://analyzingmedia.com/2012/infographic-big-flood-of-big-data-in-digitalmarketing/

Figure 2. MapReduce, the programming paradigm implemented by Apache Hadoop, distributes a batch job into several smaller tasks for parallel processing on a distributed system. HDFS, the distributed file system, stores the data reliably.


Figure 1. A traditional ETL process

Figure 3. The traditional ETL architecture has served enterprises well for the past 20 years and many deployments still use it.

Figure 4. Using Apache Hadoop,* any organization can ingest, process, as well as export huge amounts of diverse data at scale.


Gateway Based Energy Efficient Protocol For


Wireless Sensor Network
Maninder Jeet Kaur, BGIET, Sangrur, Punjab, India, maninderkaur865@gmail.com
Avinash Jethi, BGIET, Sangrur, Punjab, India, avinashjethi@ymail.com

ABSTRACT
In the research on wireless sensor networks, energy conservation is the main issue. For designing an efficient protocol for a WSN, it is important to monitor the energy usage of the network. Further, the energy usage of a WSN depends upon the design of the network, so a number of protocols are used for designing the network structure of a WSN. These protocols are mainly of two types: single-hop and multi-hop energy-efficient protocols. But there is still wide scope for improvement and research. In this paper, we implement and analyze the results of the basic SEP (Stable Election Protocol) for WSNs. We also propose a gateway-based energy-efficient protocol for wireless sensor networks, which utilizes minimum energy and resolves the issue of the long distance between the sink and the sensing nodes. The proposed protocol resolves problems of SEP, such as the long-distance problem, to some extent.

Keywords: Wireless Sensor Network, Mobile Gateway, Clustering, Lifetime, Multi-hop.

1. INTRODUCTION
A wireless sensor network (WSN), sometimes called a wireless sensor and actor network (WSAN), consists of spatially distributed sensors that monitor physical or environmental conditions, such as temperature, sound and pressure, and cooperatively pass their data through the network to a main location. More modern networks are bi-directional, also enabling control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control and machine health monitoring.

Fig.: Clustering in WSNs.

2. RELATED WORK
Energy consumption and network lifetime are the most important considerations in the design of a wireless sensor network. This study presents clustering-based routing for WSNs. Many clustering-based protocols, such as LEACH, PEGASIS and HEED, are homogeneous. In these mechanisms the cluster heads (CHs) collect data from their member nodes and then forward them to the base station; this overloads the CH, which consumes a lot of energy. LEACH is the basic protocol from which the other WSN protocols derive. In LEACH, CHs are selected periodically, and energy is consumed uniformly by selecting a new CH in each round; a node becomes CH in the current round on the basis of a probability p. LEACH performs well in homogeneous networks but is not considered good for heterogeneous networks.

In [2] the author introduces an approach in which a new heterogeneous network with different types of nodes having different initial energy levels is considered, called SEP (Stable Election Protocol) for WSNs. SEP is then compared with existing protocols such as LEACH and DEEC in terms of lifetime and energy consumption. Nodes in SEP are heterogeneous in terms of their initial energy: normal nodes and advanced nodes. The probability of becoming a CH depends upon the initial energy of the node. It is found that SEP provides a better network lifetime than the existing protocols. He then


proposed a new EEEP protocol for heterogeneous WSNs, which gives better performance as compared to SEP.

In [3] the author analyzed the principle of SEP in wireless sensor networks and proposed a multi-hop routing scheme (MH-SEP) based on different spatial densities of nodes. The network is divided into areas of different sizes according to the multi-hop routing scheme, and the CHs are connected via a multi-hop communication algorithm, such as a spanning tree with the BS at the root. The paper mainly analyzes the performance of the scheme for different spatial densities of nodes. The scheme works at minimum communication energy consumption, because the spatial density decreases as the monitored area grows. MH-SEP thus has an advantage over SEP.

In [4] the authors proposed MEEP (Multi-hop Energy Efficient Protocol) for heterogeneous wireless sensor networks. The proposed protocol combines the ideas of clustering and multi-hop communication. Heterogeneity is created in the network by using some nodes of high energy. Low-energy nodes are selected as CHs based on their residual energy, while high-energy nodes act as relay nodes for the low-energy cluster heads when they are not performing the duty of a cluster head, to save their energy further. Results show that the proposed scheme is better than SEP in energy efficiency and network lifetime.

3. RADIO ENERGY DISSIPATION MODEL
We assume a simple model for the radio hardware energy dissipation in which the transmitter dissipates energy to run the radio electronics and the power amplifier, and the receiver dissipates energy to run the radio electronics, as shown in Fig. A. Using this radio model, to transmit a k-bit message over a distance d the radio expends

E_Tx(k, d) = E_Tx-elec(k) + E_Tx-amp(k, d) ........ (i)

E_Tx(k, d) = E_elec · k + ε_amp · k · d² ........ (ii)

and to receive this message the radio expends

E_Rx(k) = E_Rx-elec(k) = E_elec · k

Fig. A: Radio energy dissipation model

4. SIMULATION SETUP
Let the total number of sensor nodes be N, distributed randomly over the field. The number of nodes in the network is taken as 100 as the optimal value, and it can be increased from 100 to 1000 for the different protocols.

The maximum distance of any node from the sink is approximately 70 m. The initial energy of a normal node is set to E0 = 0.5 J. The base station, marked with an 'x', is located at the center of the field at the point (100, 100).

The size of the message that nodes send to their cluster heads, as well as the size of the (aggregate) message that a cluster head sends to the sink, is set to 4000 bits.

The radio characteristics used in our simulations are summarized in Table 1.

Table 1: Parameter settings
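The radio model of Section 3 and SEP's weighted cluster-head election can be sketched in a few lines. The numeric constants below (E_elec = 50 nJ/bit, ε_amp = 100 pJ/bit/m², and the SEP parameters p_opt, m, α) are typical values from the first-order radio model and SEP literature, assumed here because Table 1 is not reproduced in the text:

```python
# Sketch of the first-order radio model (equations i-ii) and of SEP's
# weighted CH election.  Constants are assumed typical literature values,
# not values taken from Table 1.
E_ELEC = 50e-9      # electronics energy, J per bit
EPS_AMP = 100e-12   # amplifier energy, J per bit per m^2

def tx_energy(k, d):
    """Energy to transmit a k-bit message over distance d (eqs. i-ii)."""
    return E_ELEC * k + EPS_AMP * k * d ** 2

def rx_energy(k):
    """Energy to receive a k-bit message."""
    return E_ELEC * k

def sep_probabilities(p_opt=0.1, m=0.1, alpha=1.0):
    """SEP election probabilities: advanced nodes (fraction m, carrying
    alpha times extra initial energy) are elected CH more often."""
    p_normal = p_opt / (1 + alpha * m)
    p_advanced = p_opt * (1 + alpha) / (1 + alpha * m)
    return p_normal, p_advanced

# With the Section 4 settings: 4000-bit messages, nodes at most ~70 m
# from the sink.
cost_tx = tx_energy(4000, 70)   # J spent by a sender at maximum range
cost_rx = rx_energy(4000)       # J spent by the receiver
```

Note how the d² amplifier term dominates at 70 m, which is why reducing the CH-to-sink distance (as the gateway scheme proposed later aims to do) saves transmit energy.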


5. SIMULATION RESULTS OF SEP

5.1 Graph showing packets sent to the cluster heads in SEP: The graph showing the packets (data) sent to the cluster heads is shown below.

Figure 5.3: SEP packets to Cluster Head

5.2 Graph showing packets sent to the base station in SEP: The graph showing the packets (data) sent to the base station is shown below.

Figure 5.7: SEP packets to Base Station

5.3 Graph showing energy dissipated in SEP: The graph showing the energy dissipated is shown below.

Figure 5.11: SEP energy dissipation

5.4 Graph showing dead nodes in SEP: The graph showing the dead nodes in the case of SEP is shown below.

Figure 5.15: Dead nodes in SEP

6. PROPOSED WORK
After the study of the various existing protocols for heterogeneous/homogeneous WSNs, we propose a new energy-efficient scheme for heterogeneous WSNs which uses gateway nodes between the cluster heads and the sink. These gateway nodes are mobile, can move all around the sensing field, and are rechargeable. With them it is possible to solve the problem of the long distance between a cluster head and the sink. The proposed work can also improve the lifetime and the energy consumption of the WSN. Once a WSN is set up, it is difficult to replace a sensor within the circuit, so an efficient network design technique is always required for further enhancement.

There are a number of single-hop energy-efficient protocols that are used for designing an energy-efficient network for a wireless sensor network, such as LEACH, SEP and MEEP, but the scope for improvement is still there, because in all these protocols the CHs are overloaded: the cluster heads have


the responsibility to collect the data from all the other member nodes, aggregate them, and then send them to the base station, and in doing so a CH consumes a large amount of energy. The proposed technique can also resolve this problem.

Fig.: Layout of the proposed work (legend: gateway nodes, cluster heads, normal nodes, X = sink)

7. CONCLUSION
From the analysis of the basic SEP we obtain the above results, but these results can be improved further by modifying the network structure. There is also scope for improvement in the types of nodes that are used in the network for data transmission.

REFERENCES
[1] Pallavi Jain and Harminder Kaur, "An Improved Gateway Based Multi Hop Routing Protocol for Wireless Sensor Network," International Journal of Information and Computation Technology, Vol. 4, 2014.

[2] Chengzhi Long, et al., "A Multi-Hop Routing Scheme Based on Different Spatial Density of Nodes in WSNs," Journal of Theoretical and Applied Information Technology, Vol. 46, No. 1, December 2012.

[3] Amanjot Singh Toor, "Implementation and Analysis of Stable Election Protocol," International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3, Issue 9, 2013.

[4] Surender Kumar, M. Parteek, et al., "Multi-hop Energy Efficient Protocol for Heterogeneous Wireless Sensor Network," International Journal of Computer Science and Telecommunications, Vol. 5, Issue 3, March 2014.

[5] Pallavi Jain, "An Improved Gateway Based Multi Hop Routing Protocol for Wireless Sensor Network," International Journal of Information & Computation Technology, Vol. 4, No. 15, pp. 1567-1574, 2014.

[6] Jin Wang, Jian Shen, et al., "An Improved Stable Election Based Routing Protocol with Mobile Sink for WSN," IEEE International Conference on Green Computing and Communications and IEEE Cyber, Physical and Social Computing, pp. 945-950, 2013.

[7] Chunyao Fu, Zhifang Jiang, et al., "An Energy Balanced Algorithm of LEACH Protocol in WSN," IJCSI International Journal of Computer Science Issues, Vol. 10, 2013.

[8] Rajesh Patel, Sunil Pariyani, et al., 2011, International Journal of Computer


Literature Survey of AODV and DSR Reactive Routing Protocols

Charu Sharma, Department of CSE, Rayat Bahra Institute of Engg. & Tech., Patiala, Punjab, charusharma23s@gmail.com
Harpreet Kaur, Department of CSE, Rayat Bahra Institute of Engg. & Tech., Patiala, Punjab, preet.harry11@gmail.com

ABSTRACT
A MANET (mobile ad-hoc network) is a type of network in which mobile nodes communicate with each other via radio waves. A MANET is a self-configuring, infrastructure-less and robust network, which is why its routing protocols have nowadays become an active research area. In this paper, a literature survey of the on-demand AODV and DSR routing protocols is presented along with their comparison.

Keywords
Ad-hoc network, AODV, DSR, MANET.

1. INTRODUCTION
An ad-hoc network provides wireless communication among the nodes in a network, and the participating nodes also act as routers when they serve as intermediate nodes between the source node and the destination node for data transmission beyond the network range. The ad-hoc network is a vast and emerging research area which has produced an efficient application known as the MANET.

MANETs bring rapid deployment, flexibility, robustness, energy efficiency, self-administration and full distribution to ad-hoc networks. MANETs provide various routing protocols to make the communication among nodes effective and efficient.

MANET routing protocols fall into three categories:

Table-driven or proactive routing protocols: Each node has its own routing table containing routing information about every other node, and these routing tables are updated periodically to maintain the latest view of the routes in the network, which consumes more battery power. Some proactive routing protocols are DSDV, DSF, GSR, WRP, ZRP and many more [8].

On-demand or reactive routing protocols: There are no predefined routing tables at each node. Routes are discovered on demand, only when one node needs to transmit data to another node, which saves battery power. Each node has a route cache instead of a routing table for keeping the information about the latest paths from source to destination. DSR, AODV, TORA, ABR etc. are reactive routing protocols.

Hybrid routing protocols [16]: These are a combination of both proactive and reactive protocols and take the benefits of both; due to this combination, routes can be found easily and quickly in the network area. Some hybrid routing protocols are ZRP, Hazy Sighted Link State etc.

Section 2 of this paper gives an overview of the MANET background, since discussing the protocols requires some knowledge of the root concepts. Section 3 describes the AODV and DSR protocols, their operation, and their advantages and disadvantages. Section 4 surveys the work done on both protocols; some of the surveyed papers also provide performance comparisons with different parameters and scenarios. Section 5 gives the conclusion of this survey, and Section 6 describes the resources used for the whole survey.

2. BACKGROUND
Routing is the process of transferring information from a source node to a destination node in a network. Routing is categorized as static or dynamic. Static routing [5] refers to a routing strategy stated manually, or statically, in the router; it maintains a routing table usually written by a network administrator, and the table does not depend on whether the destination is active or not. Dynamic routing enables routers to select routes according to real-time changes in the network topology; it allows the creation, maintenance and updating of routing tables dynamically while the destination is active.

Several routing algorithms have been devised for ad-hoc networks, each with advantages and disadvantages. Among MANET routing protocols, reactive routing protocols are more suitable than proactive ones [9]. Reactive routing has two phases: route discovery and route maintenance. The route discovery phase depends on route request (RREQ) and route reply (RREP) queries issued repeatedly, which increases the cost and decreases the performance of the network. To decrease the cost, two techniques are used: route caching [6] and local flooding.
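As a toy illustration of the route caching just mentioned, a node can keep a cache of complete routes and fall back to (costly) route discovery only on a miss. All identifiers here are invented for illustration and are not taken from the AODV or DSR specifications:

```python
# Toy sketch of on-demand routing with a route cache (illustrative only;
# names are invented, not from the AODV/DSR specifications).
class RouteCache:
    def __init__(self):
        self.routes = {}             # destination -> ordered list of hops

    def lookup(self, dest):
        return self.routes.get(dest)

    def store(self, dest, path):
        self.routes[dest] = path

    def invalidate(self, dest):
        # Called when a RERR reports a broken link on the cached path.
        self.routes.pop(dest, None)

def send(cache, dest, discover):
    """Use a cached route if present, otherwise run route discovery."""
    path = cache.lookup(dest)
    if path is None:                 # cache miss: flood a RREQ (costly)
        path = discover(dest)
        cache.store(dest, path)
    return path

cache = RouteCache()
first = send(cache, "D", lambda d: ["S", "A", "B", d])   # triggers discovery
second = send(cache, "D", lambda d: [])                  # served from cache
```

The second call never invokes the discovery callback, which is exactly the saving that route caching buys at the price of possibly serving stale routes.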


3. DESCRIPTION OF REACTIVE PROTOCOLS
A. AODV

AODV is an on-demand extension of the destination-sequenced distance vector (DSDV) protocol [2].

Route discovery: When a node finds that there is no available route to its destination, the source node starts its route discovery process by broadcasting a RREQ query to all the neighboring nodes. The RREQ query includes the source ID, the destination ID, a sequence number of the source, the last known sequence number of the destination, and the maximum number of hops the RREQ can be forwarded. Nodes receiving the RREQ check whether they have already seen it; if so, they drop the RREQ query. If the RREQ has not been seen before, they simply increment the hop count and rebroadcast the RREQ query. If an intermediate node has a route to the destination with a sequence number equal to or greater than the last sequence number of the destination mentioned in the RREQ query, it generates a RREP query; otherwise it just stores the information about the previous hop from which it received the RREQ. This information is used during the RREP process. The destination node, after receiving the RREQ query, copies all the information included in it and generates a RREP query with an updated sequence number. This RREP query is unicast back to the source node. This is the route discovery phase of the AODV protocol.

Route maintenance: The source node sends HELLO messages periodically to the destination node to check the route's activeness. If a HELLO from an active node is not received within a specific time interval, the route is considered unreachable, a route error query (RERR) is broadcast to all nodes, and another cycle of RREQ queries is broadcast. As only active routes can be used to send data packets, the route table also retains invalid routes for an extended period of time; these invalid routes can provide information for repairing routes and for later RREQ queries, and after some time interval they are deleted.

B. DSR

DSR is the Dynamic Source Routing protocol. It also has two mechanisms: route discovery and route maintenance. When a source node needs to send data to a destination node, it first searches for a route in its route cache and, failing that, initiates the route discovery process by issuing RREQ and RREP queries. When a route failure occurs, DSR sends a RERR query to the source for a new route. Unlike AODV, DSR does not require sending any periodic route maintenance messages. Each data packet sent carries in its header the complete, ordered list of nodes through which the packet must pass, allowing packet routing to be loop-free and avoiding the need for up-to-date routing information in the intermediate nodes [5]. In DSR, route discovery overhead can be avoided by using a caching strategy in which multiple routes to a destination occur.

4. RELATED WORK
Asad Amir Pirzada and Chris McDonald [1] proposed an efficient scheme for securing the AODV routing protocol that protects a MANET from various attacks carried out by malicious nodes. The proposed protocol works in three parts: key exchange, secure routing and data protection. It applies a registration-with-certification-authority constraint on the nodes before they join any network. Session keys are then used by key exchange protocols. For route discovery, keys are used for point-to-point encryption, and for data packets, keys are used for end-to-end encryption. To protect the network from attacks by malicious nodes, key verification and multilayered enciphering schemes are used.

Kulasekaran A. Sivakumar and Mahalingam Ramkumar [2] proposed some modifications to the SAODV routing protocol to make it more secure, yielding the SAODV-2 protocol. SAODV-2 uses proactive maintenance of a secure, reliable delivery neighborhood by each node and uses a BE-based authentication strategy for mutable fields. These modifications use a two-hop secret while maintaining only the one-hop topology, without knowledge of the two-hop topology.

Alekha Kumar Mishra and Bibhu Dutta Sahoo [3] discuss various security threats to the AODV protocol in MANETs. Due to a lack of resources, MANETs face several challenges, so the analysis of such challenges and of the types of attacks on the protocols is necessary to make them more efficient and better performing. The security threats discussed in the paper are attacks by modification and impersonation, attacks using fabrication, and atomic and compound misuses.

FAN Ya-qin, FAN Wen-yong and WANG Lin-zhu [4] analyze the performance of the DSR routing protocol using the OPNET simulation tool with MANET models of different sizes in which the number of nodes varies. The performance metrics used are average route discovery time, average route length, throughput, data network latency and data loss rate. They conclude that DSR is suitable for small-scale MANETs and that it is necessary to improve the DSR protocol for large-scale MANETs.

Amer O. Abu Salem, Ghassan Samara and Tareq Alhmiedat [11] evaluate the performance of the DSR routing protocol using NS-2 simulation with parameters including delivery ratio, end-to-end delay and throughput, based on different cache sizes and varying speeds. The evaluation used 50 nodes in the simulation setup with a CBR traffic model. Two caches, primary and secondary, are discussed, and it is concluded that the greater the cache size, the greater the end-to-end delay, and vice versa. Based on this result, the best cache size for high speeds should be no more than 10 entries for the primary cache and 20 for the secondary cache. Keeping this in mind, a new caching strategy can be developed in future research.

Nidhi Sharma and R. M. Sharma [5] analyze the AODV and DSR protocols and compare their performance on the NS-2 simulator. They also evaluate the quality of service with parameters including packet delivery ratio, average time delay and routing load overhead, evaluated over different network sizes and transmission ranges of the respective nodes, with setups of between 10 and 45 nodes. The result is that DSR performs better than AODV in less dense scenarios and AODV outperforms DSR in more dense scenarios. This suggests evaluating the two protocols with other parameters and varying numbers of nodes in future work.

Kumar Prateek, Nimish Arvind and Satish Kumar Alaria [8] compare the performance of the DSDV, AODV and DSR routing protocols for MANETs using NS-2 simulation.


AODV and DSR are reactive protocols, while DSDV is proactive. Both reactive protocols performed better in high-mobility scenarios than the DSDV protocol. High mobility results in a highly dynamic topology, i.e. frequent route failures and changes, and the DSDV protocol fails to respond to real-time changes in the network topology; its routing overhead remains almost constant. The results show that DSR performed better than all the other protocols for delivery ratio, while AODV outperforms the others for average delay.

Amith Khandakar [7] also compares the DSR, AODV and DSDV protocols using the NS-2 simulator; the metrics taken are packet delivery fraction, end-to-end delay and normalized routing load, with varying numbers of nodes, speeds and pause times. He also provides a step-by-step, assumption-based scheme for carrying out such a comparative study so that it can be used in future work. This implementation shows that DSDV has a slightly higher packet delivery fraction than AODV and DSR in all scenarios, and that DSR has a slightly higher packet delivery fraction than AODV.
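The RREQ flooding described in Section 3-A can be compressed into a small sketch. This is a deliberately simplified toy, assuming an in-memory adjacency dict: real AODV also carries sequence numbers, builds reverse routes hop by hop, and lets intermediate nodes with fresh routes answer early, all of which are omitted here:

```python
from collections import deque

def aodv_route_discovery(graph, src, dst):
    """Toy flood of a RREQ over an adjacency dict.  Returns the first
    path over which the RREQ reaches dst, i.e. the route along which
    the destination's RREP would travel back, or None if unreachable."""
    seen = {src}                      # nodes drop RREQs they have already seen
    queue = deque([[src]])            # each entry records the path taken so far
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path               # destination unicasts a RREP back
        for neigh in graph.get(node, []):
            if neigh not in seen:     # unseen: increment hops and rebroadcast
                seen.add(neigh)
                queue.append(path + [neigh])
    return None                       # no route: the source retries or gives up

net = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["D"], "D": []}
route = aodv_route_discovery(net, "S", "D")
```

Because the flood expands hop by hop, the first copy of the RREQ to reach the destination has traveled a minimum-hop path, which mirrors how AODV tends to select short routes.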

Table 1. Difference between AODV and DSR routing protocols

AODV | DSR
More routing overhead | Less routing overhead
Less normalized MAC overhead | More normalized MAC overhead
Combination of DSR and DSDV mechanisms | Based on source routing
Performs better in high mobility | Performs better in low mobility
Frequent route discovery process | Less frequent route discovery process
Uses one route per destination | Uses multiple routes per destination
Consumes more battery power | Saves battery power
Relies on timer-based activities | Does not rely on timer-based activities
Uses route tables to check the available routes to the destination | Uses the route cache aggressively for route discovery
Gathers limited routing information | Gathers a large amount of routing information by virtue of source routing
More controlled: the fresher route is always chosen | No explicit mechanism to expire stale routes in the cache or to choose the fresher of multiple routes

5. ACKNOWLEDGMENTS
I would like to thank Er. Harpreet Kaur, AP & H.O.D. of the C.S.E. Department, Bahra Group of Institutes, Bhedpura, Patiala, for having faith in me, allowing me to work on all terms and conditions, and advising me from time to time about this survey paper.

6. REFERENCES
[1] Asad Amir Pirzada and Chris McDonald, 2005, Secure Routing with the AODV Protocol, Asia-Pacific Conference on Communications, Perth, Western Australia, 3-5 October.

[2] Kulasekaran A. Sivakumar and Mahalingam Ramkumar, 2007, Safeguarding Mutable Fields in AODV Route Discovery Process, Proceedings of the 16th International Conference on Computer Communications and Networks (ICCCN 2007), 13-16 Aug., pp. 645-651, IEEE.

[3] Alekha Kumar Mishra and Bibhu Dutta Sahoo, 2009, Analysis of Security Attacks for AODV Protocol in MANET, Proceedings of the National Conference on Modern Trends of Operating Systems (MTOS), pp. 54-57.

[4] FAN Ya-qin, FAN Wen-yong and WANG Lin-zhu, 2010, OPNET-based Computer Simulation of the MANET Routing Protocol DSR, WASE International Conference on Information Engineering, 978-0-7695-4080-1/10, IEEE, DOI 10.1109/ICIE.

[5] Nidhi Sharma and R. M. Sharma, 2010, Provisioning of Quality of Service in MANETs: Performance Analysis & Comparison (AODV and DSR), 978-1-4244-6349-7/10, IEEE.

[6] Liu Yujun and Han Lincheng, 2010, The Research on an AODV-BRL to Increase Reliability and Reduce Routing Overhead in MANET, International Conference on Computer Application and System Modeling (ICCASM).


[7] Amith Khandakar, 2012, Step by Step Procedural Comparison of DSR, AODV and DSDV Routing Protocols, 4th International Conference on Computer Engineering and Technology (ICCET 2012), IPCSIT Vol. 40.

[8] Kumar Prateek, Nimish Arvind and Satish Kumar Alaria, February 2013, MANET: Evaluation of DSDV, AODV and DSR Routing Protocols, International Journal of Innovations in Engineering and Technology, Vol. 2, Issue 1.

[9] Rajesh Sharma and Seema Sabharwal, July 2013, Dynamic Source Routing Protocol (DSR), International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 7.

[10] Diya Naresh Vadhwani, Deepak Kulhare and Megha Singh, May 2013, Behaviour Analysis of DSR MANET Protocol with HTTP Traffic Using OPNET Simulator, International Journal of Innovative Research in Computer and Communication Engineering, Vol. 1, Issue 3.

[11] Puneet Mittal, Paramjeet Singh and Shaveta Rani, August 2013, Comparison of DSR Protocol in Mobile Ad-Hoc Network Simulated with OPNET 14.5 by Varying Internode Distance, IJAIEM, Volume 2, Issue 8.

[12] Pooja Singh, Anup Bhola and C. K. Jha, April 2013, Simulation based Behavioral Study of AODV, DSR, OLSR and TORA Routing Protocols in MANET, International Journal of Computer Applications (0975-8887), Volume 67, No. 23.

[13] Amer O. Abu Salem, Ghassan Samara and Tareq Alhmiedat, February 2014, Performance Analysis of Dynamic Source Routing Protocol, Journal of Emerging Trends in Computing and Information Sciences, Vol. 5, No. 2.

[14] Naisha Taban Khan and Prof. Nitin Agarwal, 2014, Adaptive Routing in Mobile Adhoc Networks and Comparison between Routing Protocols AODV and DSR for APU Strategy, IOSR Journal of Computer Science (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, pp. 58-65.

[15] Monika Verma and Dr. N. C. Barwar, 2014, A Comparative Analysis of DSR and AODV Protocols under Blackhole and Grayhole Attacks in MANET, IJCSIT, Vol. 5 (6).

[16] Jyoti Rani and Dr. Pardeep Kumar Mittal, June 2014, Review of Energy Efficient AODV and DSR Routing Protocols, International Journal of Advanced Research in Computer Science and Software Engineering, Volume 4, Issue 6.

[17] V. V. Mandhare and R. C. Thool, January 2015, Comparison Estimation of Various Routing Protocols in Mobile Ad-hoc Network: A Survey, IJARCSSE, Volume 5, Issue 1.

[18] Mr. Vikas Kumar, Mr. Amit Tyagi and Mr. Amit Kumar, January 2015, Mobile Ad-hoc Network: Characteristics, Applications, Security Issues, Challenges and Attacks, IJARCSSE, Volume 5, Issue 1.

[19] Mostafa Rajabzadeh, Arash Mazidi and Mehdi Rajabzadeh, January 2015, SG-AODV: Smart and Goal Based AODV for Routing in Mobile Ad hoc Networks, IJARCSSE, Volume 5, Issue 1.

[20] Harvaneet Kaur, January 2015, A Survey on MANET Routing Protocols, IJARCSSE, Volume 5, Issue 1.


A Survey on Data Placement and Workload Scheduling


Algorithms in Heterogeneous Network for Hadoop

Ruchi Mittal, Department of CSE, Rayat Bahra Institute of Engg. & Tech., Patiala, Punjab, Email: mittal.ruchi29@gmail.com
Harpreet Kaur, Department of CSE, Rayat Bahra Institute of Engg. & Tech., Patiala, Punjab, Email: preet.harry11@gmail.com

ABSTRACT Hadoop is an open source implementation of the MapReduce


model for parallel processing of the large datasets.
The elastic scalability and fault tolerance of the cloud computing Heterogeneity and Data locality are the main factors affecting
has led to a wide range of real world applications. However, the performance of Hadoop system in the Hadoop architecture.
processing requirements of Big Data in these applications pose a In the classic homogeneous Hadoop system, all the nodes have
humongous challenge for achieving desired performance levels. the same processing ability and hard disk capacity. All the nodes
MapReduce is an effective parallel distributed programming are assigned same workloads and data required for their
model for handling large unstructured datasets in cloud processing is often local requiring lesser data movement
applications. Hadoop, an open source implementation of the between the nodes. The data needed to be written in HDFS are
MapReduce model, is currently being employed for high partitioned in to several smaller data blocks of similar size and
performance processing of Big Data. The current Hadoop are assigned to the nodes equally. In such homogeneous
implementation considers the nodes of a cluster in a environment load balancing can be maintained easily.
homogeneous environment where each node has the same
computing capacity and workload. But in real world applications However in real world applications the nodes may be of
the nodes may have different computing capacities and different processing ability and hard disk capacity. With the
workloads resulting in a heterogeneous environment. In such default Hadoop strategy, the faster nodes may finish their tasks
heterogeneous environment, the default Hadoop implementation does not yield the expected performance. This paper includes a survey of the algorithms proposed by different authors for (a) data placement strategies and (b) workload scheduling for Hadoop in heterogeneous networks.

Keywords
Cloud Computing, Big Data, MapReduce, Hadoop, Heterogeneous Network.

1. INTRODUCTION
Nowadays there has been an exponential growth in the data produced every minute. To store, process and analyze such a large volume of data, called Big Data, has proven to be a big challenge these days. Parallel computing is a method adopted for the purpose. MapReduce, a programming model, has proven to be an efficient parallel data processing paradigm for large-scale clusters. Hadoop, an open source implementation of the MapReduce model, can be used to process thousands of megabytes of data on the Linux platform. Hadoop is also adopted by Amazon and Facebook for the processing and analysis of large volumes of data.

The MapReduce model partitions a program into multiple smaller tasks which are executed individually and in parallel. Their results are combined by the model to give a single output. As compared to earlier parallel programming models, the MapReduce model provides several benefits. First, in this model, new nodes can be added easily to the cluster without much modification and the system works the same. Second, the system is not much affected by the failure of a single node, as the task is automatically allocated to some other idle node. Each node first works with its local data at a great speed. After finishing the tasks with local data, faster nodes can work with non-local data which may be present on slower nodes; this requires more data movement between the nodes, thus affecting the Hadoop performance.

The rest of the paper is organized as follows. Section II presents an overview of the basic terms of the MapReduce programming model, Hadoop and the Hadoop Distributed File System (HDFS). Section III comprises the work presenting and describing the different algorithms for workload scheduling and data placement strategies. Sections IV and V explain the conclusion drawn from the survey and opportunities for future research work.

2. BACKGROUND

2.1 MapReduce Model
MapReduce is a parallel programming model used for processing large sets of data in a distributed environment with a large number of computing nodes. This model was proposed by Google in 2004. An application which is to be executed in the MapReduce model is called a job and is divided into 'map tasks' and 'reduce tasks'. This model works on the strategy of 'divide and conquer'. The large set of data which is stored for processing is divided into blocks of the same size. These blocks are then allocated to the nodes, which process them in parallel. The result of the blocks with the same map function is composed of <key, value> pairs. The reduce nodes combine these intermediate outputs and generate the final output data.

2.2 Hadoop
Hadoop is an open source implementation of the MapReduce model supported by the Apache Software Foundation.
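The map/reduce flow described in Section 2.1 (equal-sized input blocks, map tasks emitting <key, value> pairs, reduce tasks merging them into a single output) can be sketched as a toy word count in Python; this is an illustration of the model only, not Hadoop's actual API:

```python
from collections import defaultdict

def map_task(block):
    """Map: emit a <key, value> pair for every word in one input block."""
    return [(word, 1) for word in block.split()]

def reduce_task(key, values):
    """Reduce: merge all intermediate values for one key."""
    return key, sum(values)

def run_job(data, block_size=2):
    # Split the input into equal-sized blocks, as HDFS does with files.
    words = data.split()
    blocks = [" ".join(words[i:i + block_size])
              for i in range(0, len(words), block_size)]
    # Map phase: each block is processed independently
    # (in parallel on a real cluster).
    intermediate = defaultdict(list)
    for block in blocks:
        for key, value in map_task(block):
            intermediate[key].append(value)
    # Reduce phase: combine the intermediate outputs into the final result.
    return dict(reduce_task(k, v) for k, v in intermediate.items())

print(run_job("big data big cluster big data"))
# {'big': 3, 'data': 2, 'cluster': 1}
```

On a real cluster the map calls run in parallel on the nodes holding each block, and a shuffle step groups the intermediate pairs by key before the reduce phase.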

380
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

MapReduce and the Hadoop Distributed File System (HDFS) are the two main parts of the Hadoop system. MapReduce handles the tasks of parallel computing and HDFS manages the data. Jobs are divided into tasks and are executed in parallel by MapReduce. Data are divided into blocks and are handled by HDFS. These tasks and data blocks are assigned to the nodes of the cluster. Hadoop adopts a master/slave architecture in which the master is the JobTracker and the slave is the TaskTracker.

The JobTracker is responsible for job scheduling and task distribution, and the TaskTracker is responsible for performing the tasks and returning the results to the JobTracker. They use heartbeat messages for communication. High-end PCs are not necessary for high-performance computing while working with Hadoop. Several ordinary PCs can be used for the purpose, with which a high-performance platform can be built, saving large amounts of money.

2.3 Hadoop Distributed File System (HDFS)

Fig 1: An overview of HDFS Read and Write [5]

Based on the Google File System, HDFS is implemented by Yahoo! and is used with the MapReduce model. In HDFS (figure 1), the master is the NameNode and the slave is the DataNode. The complete file system and file information is stored and managed by the NameNode. The files written in HDFS are also partitioned into same-sized blocks and assigned to the DataNodes by the NameNode. The data blocks are stored by the DataNodes.

3. RELATED WORK
The structured survey starts from the definition of the terms workload scheduling and data placement strategies for Hadoop, which will guide the work description further.

3.1 Workload Scheduling
There are three main schedulers which come along with Hadoop: FIFO, Fair Scheduling and Capacity Scheduling. In FIFO, all the jobs are loaded into a queue and are scheduled to execute in the order in which they were loaded into the queue. In Fair Scheduling, a minimum number of map and reduce slots are allocated to the jobs, i.e. each job receives a fair share of the cluster's resources. In Capacity Scheduling, each queue shares the cluster's computational resources according to its priority. These schedulers were designed to work in homogeneous networks. But in real-world applications, cluster nodes have different computing capabilities and storage capacities, so the expected output cannot be obtained from these schedulers in heterogeneous networks. In this section, we present the new scheduling algorithms proposed by different authors to work in heterogeneous networks to enhance Hadoop's performance.

3.2 Data Placement
The MapReduce data placement strategy consists of two functions: Map and Reduce. Jobs are divided into map and reduce tasks to be executed by the Mapper and Reducer. First, the input is loaded into HDFS and the data are partitioned into equal-sized data blocks, and each block is handled by a mapper for data processing. As an intermediate output, <key, value> pairs are generated and given to the reducer, which merges them to generate a single output (figure 2).

Fig 2: An overview of MapReduce Model [5]
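Section 3.2 and the placement strategies surveyed below share one core idea: instead of the default even split, block counts should follow node capability. A minimal sketch of capacity-proportional placement, with hypothetical capacity ratios (not taken from any of the surveyed papers), is:

```python
def place_blocks(num_blocks, capacities):
    """Assign block counts in proportion to each node's computing capacity.

    capacities: dict of node name -> relative computing capacity
    (hypothetical ratios, e.g. measured from historical task times).
    """
    total = sum(capacities.values())
    # Ideal (fractional) share for every node.
    shares = {n: num_blocks * c / total for n, c in capacities.items()}
    placement = {n: int(s) for n, s in shares.items()}
    # Hand any leftover blocks to the nodes with the largest remainders.
    leftover = num_blocks - sum(placement.values())
    for n in sorted(shares, key=lambda n: shares[n] - placement[n],
                    reverse=True)[:leftover]:
        placement[n] += 1
    return placement

# A 3:2:1 heterogeneous cluster: the fast node receives half of the blocks.
print(place_blocks(12, {"fast": 3.0, "medium": 2.0, "slow": 1.0}))
# {'fast': 6, 'medium': 4, 'slow': 2}
```

A faster node thus finishes its larger local share in roughly the same wall-clock time as a slow node finishes its smaller one, reducing the non-local data movement described in the introduction.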


Table 1. Proposed workload scheduling algorithms along with their performance.

Authors: Julio C.S. Anjos, Ivan Carrera, Wagner Kolberg, Andre Luis Tibols, Luciana B. Arantes, Claudio R. Geyer
Publication: Future Generation Computer Systems, Jan. 2015
Title: MRA++: Scheduling and data placement on MapReduce for heterogeneous environments
Proposed work: The MRA++ design considers the heterogeneity of the nodes during the distribution of data, the scheduling of tasks and in job control. A training task is set for information gathering before the distribution of data.
Results: The new algorithm attains a performance gain of 70% or more in 10 Mbps networks by nullifying the delay introduced in the setup phase.

Authors: Zhou Tang, Min Liu, Almoalmi Ammar, Kenli Li, Keqin Li
Publication: The Journal of Supercomputing, Nov. 2014
Title: An Optimized MapReduce Workload Scheduling Algorithm for Heterogeneous Computing
Proposed work: This paper considers the large data processing workflow as a DAG which consists of MapReduce jobs. The proposed algorithm calculates the priorities of the jobs by categorizing them into I/O intensive and computing intensive, and the slots are allocated accordingly. Then the workflow tasks are scheduled according to data locality.
Results: The experimental results show that the schedule length and the parallel speedup for the workflow task can be improved with the proposed algorithm.

Authors: Feng Yan, Ludmila Cherkasova, Zhuoyao Zhang, Evgenia Smirni
Publication: 7th IEEE International Conference on Cloud Computing, July 2014
Title: Optimizing Power and Performance Trade-offs of MapReduce Job Processing with Heterogeneous Multi-Core Processors
Proposed work: Based on different job types, such as large jobs requiring faster output or small interactive jobs requiring faster response time, this scheduler, called DyScale, is designed to allocate jobs accordingly to fast or slow cores in a heterogeneous cluster by creating virtual resource pools and using priority scheduling.
Results: With this scheduler, smaller jobs can be executed up to 40% faster, and the output of large jobs can be up to 40% higher in the simulation study.

Authors: Jessica Hartog, Renan DelValle, Madhusudhan Govindaraju, Michael J. Lewis
Publication: IEEE International Congress on Big Data, July 2014
Title: Configuring A MapReduce Framework For Performance-Heterogeneous Clusters
Proposed work: The proposed MapReduce framework, called MARLA, divides a task into subtasks and delays the binding of data to the subtask's process.
Results: The proposed work can improve performance for a few upgraded nodes but does not affect all nodes equally.

Authors: Xiaolong Xu, Lingling Cao, Xinheng Wang
Publication: IEEE Systems Journal, 2014
Title: Adaptive Task Scheduling Strategy based on Dynamic Workload Adjustment (ATSDWA) for Heterogeneous Hadoop Clusters
Proposed work: Using ATSDWA, the TaskTrackers can adjust themselves according to the load change at runtime and can obtain tasks according to their computing capacity while realizing self-regulation.
Results: The proposed algorithm is beneficial both for the JobTracker, by avoiding its overloading, and for the TaskTrackers, by reducing task execution time, giving more stability to the heterogeneous Hadoop cluster. It can be applied to both heterogeneous and homogeneous environments.

Authors: Aysan Rasooli, Douglas G. Down
Publication: Future Generation Computer Systems, 2014
Title: COSHH: A Classification And Optimization Based Scheduler for Heterogeneous Hadoop Systems
Proposed work: A new Hadoop scheduling algorithm is designed and implemented considering heterogeneity both at the application and cluster levels. The main goal of the proposed algorithm is to improve the average completion time of jobs.
Results: As compared to the well-known scheduling algorithms FIFO and Fair Scheduling, this algorithm yields moderate output under the minimum share satisfaction, fairness and locality metrics.

Authors: Bin Ye, Xiaoshe Dong, Pengfei Zheng, Zengdong Zhu, Qiang Liu, Zhe Wang
Publication: 8th ChinaGrid Annual Conference, IEEE, 2013
Title: A Delay Scheduling Algorithm based on History Time in Heterogeneous Environments
Proposed work: By considering the history time of completed tasks and the Delay Scheduler's strategy, a new algorithm is proposed for multi-user Hadoop clusters.
Results: With the proposed scheduler, more jobs can be assigned to suitable slots for better system performance.

Authors: Quan Chen, Minyi Guo, Qianni Deng, Long Zheng, Song Guo, Yao Shen
Publication: The Journal of Supercomputing, June 2013
Title: HAT: History based Auto-Tuning MapReduce in Heterogeneous Environments
Proposed work: The proposed History based Auto-tuning MapReduce scheduler tunes the weight of each map and reduce task by its value in history tasks and uses these weights to calculate the progress of current tasks. It then adapts automatically to the continuously changing environment by regularly monitoring the progress of tasks.
Results: The performance of MapReduce applications can be increased by 37% with Hadoop and by 16% with the LATE scheduler by applying the proposed HAT scheduler to them.

Authors: Matei Zaharia, Andy Konwinski, Anthony D. Joseph, Randy Katz, Ion Stoica
Publication: 8th USENIX Symposium on Operating Systems Design and Implementation, ACM, 2008
Title: Improving MapReduce Performance in Heterogeneous Environments
Proposed work: A robust scheduling algorithm, Longest Approximate Time to End (LATE), is proposed, which uses the estimated completion times of tasks that are expected to hurt the response time the most.
Results: On Amazon's Elastic Compute Cloud, the proposed LATE algorithm performs much better than Hadoop's default speculative scheduling algorithm.

Authors: Quan Chen, Daqiang Zhang, Minyi Guo, Qianni Deng, Song Guo
Publication: 10th IEEE International Conference on CIT, 2010
Title: SAMR: A Self-adaptive MapReduce Scheduling Algorithm In Heterogeneous Environment
Proposed work: The proposed algorithm classifies the nodes into slow nodes using historical information and further classifies them into slow map nodes and slow reduce nodes. It then launches backup tasks in the meanwhile.
Results: In heterogeneous environments, the proposed algorithm reduces the execution time by 24% for Sort applications and by 17% for Wordcount applications as compared to the default Hadoop scheduling mechanism.

Authors: Visalakshi P and Karthik TU
Publication: IJCSNS, April 2011
Title: MapReduce Scheduler Using Classifiers for Heterogeneous Workloads
Proposed work: The proposed scheduler executes at the JobTracker. When a message is received by the JobTracker from a TaskTracker, the scheduler selects, from the list of MapReduce jobs, a task that is expected to yield maximum throughput.
Results: This maintains a balance between the CPU-bound and I/O-bound jobs by effectively classifying them and preventing the re-launching of jobs.
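Several schedulers in Table 1 (LATE, SAMR, HAT) rest on the same primitive: estimating a running task's remaining time from its progress score, then launching a speculative backup copy of the task expected to finish last. A minimal sketch of that estimate, using hypothetical progress numbers rather than any paper's actual measurements, is:

```python
def time_left(progress, elapsed):
    """Estimate a task's remaining time from its progress score (0..1),
    following LATE's (1 - progress) / progress_rate heuristic."""
    rate = progress / elapsed            # progress per second so far
    return (1.0 - progress) / rate

def pick_straggler(tasks):
    """Choose the task expected to finish farthest in the future,
    i.e. the best candidate for a speculative backup copy."""
    return max(tasks, key=lambda t: time_left(t["progress"], t["elapsed"]))

# Hypothetical tasks: task B runs on a slower node and progresses slowly.
tasks = [
    {"name": "A", "progress": 0.8, "elapsed": 40},   # about 10 s left
    {"name": "B", "progress": 0.4, "elapsed": 60},   # about 90 s left
]
print(pick_straggler(tasks)["name"])  # B
```

On a heterogeneous cluster this matters because a slow node is not a failed node: re-executing the task with the longest estimated time to end on a fast, idle node shortens the job's overall response time.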


Table 2. Proposed data placement algorithms along with their performance.

Authors: Chia-Wei Lee, Kuang-Yu Hsieh, Sun-Yuan Hsieh, Hung-Chang Hsiao
Publication: Elsevier Big Data Research, July 2014
Title: A Dynamic Data Placement Strategy for Hadoop in Heterogeneous Environments
Proposed work: The proposed Dynamic Data Placement (DDP) algorithm works in two phases: in the first phase, the input is written into the Hadoop distributed file system and data are allocated to the nodes; in the second phase, the capacity of the nodes is calculated and data are reallocated accordingly.
Results: The performance of the algorithm is evaluated on two functions, Wordcount and Grep, mainly in a heterogeneous Hadoop environment. The algorithm performs 24.7% better for the Wordcount application with an average improvement of 14.5%, and 32.1% better for the Grep application with an average improvement of 23.5%.

Authors: Xiaofei Hou, Ashwin Kumar T K, Johnson P Thomas, Vijay Vardharajan
Publication: 4th International Conference on Big Data and Cloud Computing, IEEE, 2014
Title: Dynamic Workload Balancing for Hadoop MapReduce
Proposed work: By analyzing the information obtained from Hadoop's log files, this proposed dynamic algorithm balances the workload between the busiest racks on the Hadoop cluster by shifting tasks between them and idle racks.
Results: The simulation results show that the remaining time of the tasks which belong to the busiest racks in the Hadoop cluster can be decreased by more than 50%.

Authors: Krish K.R., Ali Anwar, Ali R. Butt
Publication: 22nd International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems, IEEE, 2014
Title: ΦSched: A Heterogeneity-Aware Hadoop Workflow Scheduler
Proposed work: Information regarding the behavior of various leading Hadoop applications in the heterogeneous Hadoop cluster is merged into the hardware-aware ΦSched scheduler to improve the resource-application match. An instance of the Hadoop Distributed File System (HDFS) is also configured on all the participating clusters to ensure data locality.
Results: As compared to hardware-oblivious scheduling, a performance improvement of 18.7% can be achieved with the proposed method by managing four different clusters. The I/O throughput and the average I/O rate can be enhanced by 23% and 26% respectively with the HDFS improvement.

Authors: Ashwin Kumar T K, Jongyeop Kim, K M George, Nohpill Park
Publication: ICACCI, IEEE, 2014
Title: Dynamic Data Rebalancing in Hadoop
Proposed work: Depending on the number of incoming parallel MapReduce jobs, the proposed algorithm balances the data by dynamically replicating it with minimum cost of data movement.
Results: For Hadoop with the proposed algorithm, MapReduce job service time can be decreased by 30% and resource utilization can be improved by up to 50%.

Authors: Zhao Li, Yao Shen, Bin Yao, Minyi Guo
Publication: International Journal of Parallel Programming, 2013
Title: OFScheduler: A Dynamic Network Optimizer for MapReduce in Heterogeneous Cluster
Proposed work: OFScheduler is a dynamic network optimizer which relieves the network traffic during MapReduce job execution by reducing bandwidth competition.
Results: For a multipath heterogeneous cluster, the proposed scheduler's simulation results show 24% to 63% better performance of MapReduce jobs and bandwidth utilization.

Authors: Yuanquan Fan, Weiguo Wu, Haijun Cao, Huo Zhu, Xu Zhao, Wei Wei
Publication: 7th ChinaGrid Annual Conference, IEEE, 2012
Title: A Heterogeneity Aware Data Distribution and Rebalance Method in Hadoop Cluster
Proposed work: The proposed method first computes the nodes' computing capability based on log information about historical tasks. Data are then divided into different-sized blocks according to the nodes' computing capacity. Further, a dynamic data migration policy transfers data from slow DataNodes to the headmost DataNodes during execution time.
Results: The experimental results show that the execution time is reduced by 5% in the Wordcount benchmark and by 9.6% in the Sort benchmark. The data locality can also be increased by approximately 18.8% with Wordcount and by 8.3% with Sort.

Authors: Jiong Xie, Shu Yin, Xiaojun Ruan, Zhiyang Ding, Yun Tian, James Majors, Adam Manzanares, Xiao Qin
Publication: International Symposium on Parallel and Distributed Processing, Workshops and PhD Forum, 2010
Title: Improving MapReduce Performance through Data Placement in Heterogeneous Hadoop Clusters
Proposed work: With the proposed method, data across the nodes can be balanced adaptively to improve the performance of data-intensive applications running on a Hadoop cluster.
Results: Using the proposed data placement scheme, the output of Wordcount and Grep can be increased by up to 33.1% and 10.2%, with averages of 17.3% and 7.1% respectively.
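The rebalancing entries in Table 2 (Fan et al., Xie et al., Kumar et al.) follow a common pattern: compute a per-node target share from measured capability, then migrate blocks from over-loaded to under-loaded nodes with few moves, since each move costs network traffic. A toy sketch of this pattern, under assumed capability ratios and not matching any single surveyed algorithm, is:

```python
def rebalance(blocks, capacities):
    """Move data blocks from over-loaded to under-loaded nodes until each
    node's block count matches its relative computing capability.

    blocks:     dict node -> current block count (updated in place)
    capacities: dict node -> relative computing capability (hypothetical)
    Returns a list of (src, dst) moves; fewer moves means cheaper migration.
    """
    total_blocks = sum(blocks.values())
    total_cap = sum(capacities.values())
    # Target share for each node, proportional to capability
    # (rounding may leave a block unmoved in the general case).
    target = {n: round(total_blocks * capacities[n] / total_cap) for n in blocks}
    moves = []
    donors = [n for n in blocks if blocks[n] > target[n]]
    takers = [n for n in blocks if blocks[n] < target[n]]
    for src in donors:
        for dst in takers:
            while blocks[src] > target[src] and blocks[dst] < target[dst]:
                blocks[src] -= 1
                blocks[dst] += 1
                moves.append((src, dst))
    return moves

# Uniform placement on a 3:1 cluster: four blocks migrate to the fast node.
blocks = {"fast": 8, "slow": 8}
moves = rebalance(blocks, {"fast": 3.0, "slow": 1.0})
print(blocks)  # {'fast': 12, 'slow': 4}
```

Real schemes add what this sketch omits: replication, per-block sizes, and migrating only during idle periods so rebalancing does not itself slow running jobs.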

4. CONCLUSION
A lot of research work is being carried out on Hadoop these days, especially for heterogeneous networks. In a homogeneous environment, the nodes utilize the resources very efficiently. But when the nodes are of different processing capabilities, i.e. in a heterogeneous environment, the performance of Hadoop clusters is affected by the load imbalance created by the heterogeneity, which causes the additional overhead of data transfers between the nodes. Due to this, we wish to propose an algorithm for Hadoop in the heterogeneous environment and will evaluate its performance against the homogeneous environment.

5. FUTURE SCOPE
According to my survey, there is a lot of scope for research work in heterogeneous environments. A lot of work has been proposed for homogeneous environments. But in real-world applications, homogeneous environments are impractical, as in the real world nodes are of a heterogeneous nature, with different computing capacities and different storage capacities. So, in future, algorithms can be proposed for the heterogeneous environment to increase Hadoop's performance.

6. ACKNOWLEDGEMENTS
I would like to thank Er. Harpreet Kaur, AP & HOD of the CSE Department, Rayat Bahra Group of Institutes, Patiala, for having faith in me and guiding me from time to time in the completion of this survey paper.

7. REFERENCES
[1] Julio C.S. Anjos, Ivan Carrera, Wagner Kolberg, Andre Luis Tibols, Luciana B. Arantes, Claudio R. Geyer, "MRA++: Scheduling and data placement on MapReduce for heterogeneous environments", in Future Generation Computer Systems, vol. 42, pp. 22-35, January 2015.

[2] Xiaofei Hou, Ashwin Kumar T K, Johnson P Thomas, Vijay Vardharajan, "Dynamic Workload Balancing for Hadoop MapReduce", in proceedings of 4th International Conference on Big Data and Cloud Computing, IEEE, Dec. 2014.

[3] Zhou Tang, Min Liu, Almoalmi Ammar, Kenli Li, Keqin Li, "An Optimized MapReduce Workload Scheduling Algorithm for Heterogeneous Computing", in The Journal of Supercomputing, Nov. 2014.

[4] Krish K.R., Ali Anwar, Ali R. Butt, "ΦSched: A Heterogeneity-Aware Hadoop Workflow Scheduler", in proceedings of 22nd International Symposium on Modelling, Analysis & Simulation of Computer and Telecommunication Systems, IEEE, pp. 255-264, Sep. 2014.

[5] Chia-Wei Lee, Kuang-Yu Hsieh, Sun-Yuan Hsieh, Hung-Chang Hsiao, "A Dynamic Data Placement Strategy for Hadoop in Heterogeneous Environments", in Big Data Research, vol. 1, pp. 14-22, July 2014.

[6] Feng Yan, Ludmila Cherkasova, Zhuoyao Zhang, Evgenia Smirni, "Optimizing Power and Performance Trade-offs of MapReduce Job Processing with Heterogeneous Multi-Core Processors", in proceedings of 7th IEEE International Conference on Cloud Computing, pp. 240-247, July 2014.

[7] Jessica Hartog, Renan DelValle, Madhusudhan Govindaraju, Michael J. Lewis, "Configuring A MapReduce Framework For Performance-Heterogeneous Clusters", in proceedings of IEEE International Congress on Big Data, pp. 120-127, July 2014.

[8] Aysan Rasooli, Douglas G. Down, "COSHH: A Classification and Optimization Based Scheduler for Heterogeneous Hadoop Systems", in Future Generation Computer Systems, vol. 36, pp. 1-15, July 2014.

[9] Xiaolong Xu, Lingling Cao, Xinheng Wang, "Adaptive Task Scheduling Strategy based on Dynamic Workload Adjustment (ATSDWA) for Heterogeneous Hadoop Clusters", in IEEE Systems Journal, issue 99, pp. 1-12, June 2014.

[10] Ashwin Kumar T K, Jongyeop Kim, K M George, Nohpill Park, "Dynamic Data Rebalancing in Hadoop", in proceedings of IEEE/ACIS 13th International Conference on Computer and Information Science, pp. 315-320, June 2014.

[11] Zhao Li, Yao Shen, Bin Yao, Minyi Guo, "OFScheduler: A Dynamic Network Optimizer for MapReduce in Heterogeneous Cluster", in International Journal of Parallel Programming, Oct. 2013.

[12] Bin Ye, Xiaoshe Dong, Pengfei Zheng, Zengdong Zhu, Qiang Liu, Zhe Wang, "A Delay Scheduling Algorithm based on History Time in Heterogeneous Environments", in proceedings of 8th ChinaGrid Annual Conference, IEEE, pp. 86-91, Aug. 2013.

[13] Sutariya Kapil B., Sowmya Kamath S., "Resource Aware Scheduling in Hadoop for Heterogeneous Workloads based on Load Estimation", in proceedings of 4th International Conference on Computing, Communications and Networking Technologies, pp. 1-5, July 2013.

[14] Quan Chen, Minyi Guo, Qianni Deng, Long Zheng, Song Guo, Yao Shen, "HAT: History based Auto-Tuning MapReduce in Heterogeneous Environments", in The Journal of Supercomputing, vol. 64, pp. 1038-1054, June 2013.

[15] Yuanquan Fan, Weiguo Wu, Haijun Cao, Huo Zhu, Xu Zhao, Wei Wei, "A Heterogeneity Aware Data Distribution and Rebalance Method in Hadoop Cluster", in proceedings of 7th ChinaGrid Annual Conference, IEEE, pp. 255-264, Sep. 2012.

[16] Visalakshi P and Karthik TU, "MapReduce Scheduler Using Classifiers for Heterogeneous Workloads", in IJCSNS, vol. 11, no. 4, April 2011.

[17] Jiong Xie, Shu Yin, Xiaojun Ruan, Zhiyang Ding, Yun Tian, James Majors, Adam Manzanares, Xiao Qin, "Improving MapReduce Performance through Data Placement in Heterogeneous Hadoop Clusters", in proceedings of International Symposium on Parallel and Distributed Processing, Workshops and PhD Forum, pp. 1-9, Apr. 2010.

[18] Quan Chen, Daqiang Zhang, Minyi Guo, Qianni Deng, Song Guo, "SAMR: A Self-adaptive MapReduce Scheduling Algorithm In Heterogeneous Environment", in proceedings of 10th IEEE International Conference on CIT, pp. 2736-2743, 2010.

[19] Matei Zaharia, Andy Konwinski, Anthony D. Joseph, Randy Katz, Ion Stoica, "Improving MapReduce Performance in Heterogeneous Environments", in proceedings of 8th USENIX Symposium on Operating Systems Design and Implementation, pp. 29-42, ACM Press, 2008.

[20] Ivanilton Polato, Reginaldo Re, Alfredo Goldman, Fabio Kon, "A comprehensive view of Hadoop research", in Journal of Network and Computer Applications, vol. 46, pp. 1-25, Nov. 2014.

[21] B G. Babu, Shabeera T P, Madhu Kumar S D, "Dynamic Colocation Algorithm for Hadoop", in proceedings of IEEE International Conference on Advances in Computing, Communications and Informatics, pp. 2643-2647, Sep. 2014.

[22] S. Sujitha, Suresh Jaganathan, "Aggrandizing Hadoop in terms of Node Heterogeneity & Data Locality", in proceedings of IEEE International Conference on Smart Structures & Systems, pp. 145-151, Mar. 2013.


A Survey on Parts of Speech Tagging for Indian Languages
Neetu Aggarwal, BGIET, Sangrur, neetuaggarwal4491@gmail.com
Amandeep Kaur Randhawa, BGIET, Sangrur, amehak07@gmail.com

ABSTRACT
This paper describes a survey on POS (Part of Speech) tagging for various Indian languages. Various approaches concerned with POS tagging of sentences written in Indian languages are discussed in this paper. Indian languages are morphologically rich, so a number of problems occur while tagging sentences written in these languages. A lot of POS tagging work has been done by researchers for various languages using different approaches such as HMM (Hidden Markov Model), SVM (Support Vector Machine), ME (Maximum Entropy) etc.

Keywords: Natural Language Processing, Part of Speech tagging, Tagset, Indian Languages

1. INTRODUCTION
The main objective of Natural Language Processing is to facilitate the interaction between human and machine. POS tagging is the process of attaching the best grammatical tag to each word of a sentence of some language. A word in a sentence can act as a verb, noun, pronoun, adjective, adverb, conjunction, preposition etc., so POS is defined as the grammatical information of each word of a sentence. While assigning a POS tag it is necessary to determine the context of the word, i.e. whether it is acting like a noun, adjective, verb etc. Sometimes a word can act as a noun in one sentence and in another sentence it can give the sense of a verb. So before selecting a POS tag for a word, the exact context of the word must be clear.

For Indian languages it is a difficult task to assign the correct POS tag to each word in a sentence because of unknown words in these languages. The earlier work that was done for Indian languages was based on rule-based approaches. But the rule-based approach requires thorough language knowledge and hand-written rules. Most natural language processing work has been done for Hindi, Tamil, Malayalam and Marathi, and several part-of-speech taggers have been applied for these languages. The set of tags assigned by a part-of-speech tagger may contain more than a dozen tags, and such a big tagset can create difficulties in the tagging process. POS tagging is helpful in various NLP tasks like Information Retrieval, Machine Translation, Information Extraction, Speech Recognition etc. For Indian languages, researchers find difficulty in writing linguistic rules for rule-based approaches because of morphological richness. The other main issue, after the morphological richness of Indian languages, is ambiguity. Researchers have developed some better POS taggers using various approaches. It is a very time-consuming process to assign a POS tag to each word according to its context in a sentence by hand, and that is why POS tagging is becoming a popular problem in the field of NLP for study.

2. POS TAGGING APPROACHES
There are three categories of POS tagging, called rule based, empirical based and hybrid based. In rule-based tagging, hand-written rules are used. Empirical POS taggers are further divided into stochastic based taggers, which are either HMM based or cue-based, the latter using decision trees or maximum entropy models. There are two types of stochastic taggers: supervised and unsupervised.

2.1 Rule Based Approach
In the rule-based approach, handwritten rules and contextual information are used to assign POS tags to words in training data. These rules are often known as context frame rules. For example, a context frame rule might say something like: "If an ambiguous/unknown word X is preceded by a Determiner and followed by a Noun, tag it as an Adjective". "Brill's tagger" is a widely used English POS tagger that is based on the rule-based approach.

2.2 Empirical Based POS Tagging Approach
The empirical approach to part-of-speech tagging is divided into two categories: the example-based approach and the stochastic based approach.

2.2.1 Stochastic based POS tagging
The stochastic approach is helpful to find the most frequently used tag for a specific word in the annotated training data, and it uses this information to tag that word in unannotated text. In the stochastic approach various methods are used, like n-grams, maximum-likelihood estimation (MLE) or Hidden Markov Models (HMM). A large training corpus is required for the stochastic approach. The two types of stochastic approach are:

Supervised models
For the supervised POS tagging approach, a pre-annotated corpus is required. The pre-annotated corpus is used for extracting information about the tagset, rule sets and word tags. For this approach, the larger the corpus, the better the results of evaluation will be. Examples of supervised POS taggers are:

Hidden Markov Model (HMM) based POS tagging:
It calculates the probability of a given sequence of tags. By calculating the probability, it specifies the most suitable tag for a word or token of a sentence given that it occurs with the n previous tags, where the value of n is set to 1, 2 or 3 for practical purposes. The most useful algorithm for implementing an n-gram approach is the HMM's Viterbi Algorithm for tagging new text.

Support Vector Machines Approach:

SVM is a machine learning algorithm that has been applied to various practical problems like NLP. The SVM approach is used for dealing with all the requirements of modern NLP technology by combining simplicity, flexibility, robustness, portability and efficiency. This is achieved by offering NLP researchers a highly customizable sequential tagger generator working in the Support Vector Machines (SVM) learning framework.

Unsupervised models
While in the supervised POS tagging approach a pre-annotated training corpus is required, in the unsupervised approach there is no requirement for a pre-annotated corpus. Instead, researchers use advanced computational techniques like the Baum-Welch algorithm to automatically induce tagsets, transformation rules, etc. For stochastic taggers they calculate the probabilistic information, or they induce the contextual rules needed by rule-based systems or transformation-based systems.

2.3 Transformation-based POS Tagging Approach
In general, a large pre-annotated corpus is required in the supervised tagging approach, but in transformation-based tagging a pre-annotated corpus is not required. One approach to automatic rule induction is to run an untagged text through a tagging model and get the initial output. After getting the output, all the errors are corrected manually by a human. Then the tagger compares the two sets of data to learn the correction rules. This process is repeated a number of times so that the tagger can obtain better performance.

3. TAGSET
A tagset consists of tags that are used to represent the grammatical information of a language. The number of tags used for a language depends upon the amount of information that we want to represent using a tag. A tagset can be large and can consist of dozens of tags. Various tags are used to represent the context of words in a sentence of training data: if a word is acting as a noun then the NN tag is used, and likewise Pronoun (PRP), Verb (V), Adjective (JJ) and Conjunction (CC) tags can be used. For the Punjabi language two POS taggers have been developed, and both taggers use the same tagset. A new tagset for the Punjabi language suggested by TDIL (Technical Development of Indian Languages) is used; TDIL proposed 36 POS tags for the Punjabi language.

4. LITERATURE SURVEY FOR INDIAN LANGUAGES
Different approaches have been used for part-of-speech tagging, and different researchers have developed POS taggers for various languages. Foreign languages like English, Arabic and other European languages have more POS taggers than Indian languages. Indian languages for which POS taggers have been developed are Hindi, Bengali, Punjabi and Tamil.

Antony P J and Dr. Soman presented a survey on the development of different POS tagger systems as well as POS tagsets for Indian languages and the existing approaches that have been used to develop POS tagger tools. They concluded that almost all existing Indian language POS tagging systems are based on statistical and hybrid approaches.

POS tagging and chunking with Conditional Random Fields (CRF) was carried out by Himashu and Amni Anirudh. After evaluation they found that the strength of Conditional Random Fields can be seen on large training data, and that CRF performs better for chunking than it does for POS tagging when trained on the same sized data. With training on 21000 words with the best feature set, the CRF based POS tagger is 82.67% accurate, while the chunker performs at 90.89% when evaluated with the evaluation script from CoNLL 2000.

POS tagging for the Punjabi language using a Hidden Markov Model was carried out by Sapna Kanwar, Mr Ravishankar and Sanjeev Kumar Sharma, who used a bi-gram Hidden Markov Model to solve the part-of-speech tagging problem. From their experimental results they note that the general HMM based method doesn't perform well due to the data deficiency problem.

A machine learning algorithm for Gujarati part-of-speech tagging was used by Chirag Patel and Karthik Gali. The machine learning part is performed using a CRF model. The algorithm achieved an accuracy of 92% for Gujarati texts, where the training corpus is of 10,000 words and the test corpus is of 5,000 words. From the experiments they observed that if language-specific rules can be formulated into features for CRF, then the accuracy can reach very high extents.

Sanjeev Kumar Sharma et al., 2011 developed a system using a Hidden Markov Model to improve the accuracy of a Punjabi part-of-speech tagger. A module was developed that takes the output of the existing POS tagger as input and assigns the correct tag to the words having more than one tag. The system was evaluated over a corpus of 26,479 words and achieved an accuracy of 90.11%.

M. S. Thirumalai and Sam Mohanlal used a hybrid POS tagger for Indian languages. The experiment on twelve Indian languages regarding POS tagging based on the LDC-IL POS tagset v0.3 using the hybrid POS tagger shows that the most frequent errors occur with respect to the Noun, Verb and Adjective categories. It is also observed that due to unknown tokens the time taken to assign the tag is greater, as the system goes through several modules to assign the most appropriate tag. However, its efficiency and accuracy increase as the training data size increases.

Sumeer Mittal used an N-gram model for part-of-speech tagging of the Punjabi language. A bi-gram model was used to solve the part-of-speech tagging problem. An annotated corpus was used for training and estimating bi-gram probabilities. From his experimental results he noted that the general N-gram based method doesn't perform well due to the unknown words problem (foreign language words or spelling mistakes).

Vasu Ranganathan proposed a Tamil POS tagger based on a lexical phonological approach. Ganesan proposed another POS tagger based on the CIIL corpus and tagset. An improvement over rule-based morphological analysis and POS tagging in Tamil was developed by M. Selvam and A.M. Natarajan in 2009. Dhanalakshmi V, Anand Kumar, Shivapratap G, Soman KP and Rajendran S of AMRITA university, Coimbatore developed two POS taggers for Tamil
using their own developed tagset in 2009.
A CRF (Conditional Random Fields) based part of speech
tagger and chunker for Hindi had been used by Aggarwal
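Several of the surveyed systems (Sharma's HMM tagger, Mittal's bigram model) rely on bigram HMM decoding. A minimal Viterbi sketch, using hypothetical toy probabilities that are not from any of the surveyed systems, illustrates how such taggers choose a tag sequence:

```python
# Toy bigram HMM POS tagger with Viterbi decoding.
# The probability tables below are hypothetical, for illustration only.
trans = {  # P(tag_i | tag_{i-1}); "<s>" marks the sentence start
    ("<s>", "PRP"): 0.6, ("<s>", "NN"): 0.4,
    ("PRP", "V"): 0.7, ("PRP", "NN"): 0.3,
    ("NN", "V"): 0.6, ("NN", "NN"): 0.4,
    ("V", "NN"): 0.8, ("V", "PRP"): 0.2,
}
emit = {  # P(word | tag)
    ("PRP", "he"): 0.5, ("NN", "book"): 0.4,
    ("V", "reads"): 0.5, ("NN", "reads"): 0.05,
}
TAGS = ["PRP", "NN", "V"]

def viterbi(words):
    # best[tag] = (probability of the best path ending in tag, that path)
    best = {t: (trans.get(("<s>", t), 1e-9) * emit.get((t, words[0]), 1e-9), [t])
            for t in TAGS}
    for w in words[1:]:
        new = {}
        for t in TAGS:
            # pick the best previous tag for the transition into t
            prev, (p, path) = max(
                ((pt, best[pt]) for pt in TAGS),
                key=lambda kv: kv[1][0] * trans.get((kv[0], t), 1e-9))
            new[t] = (p * trans.get((prev, t), 1e-9) * emit.get((t, w), 1e-9),
                      path + [t])
        best = new
    return max(best.values(), key=lambda v: v[0])[1]

print(viterbi(["he", "reads", "book"]))
```

With these toy tables the sentence "he reads book" is tagged PRP V NN; in a real tagger the tables are estimated from an annotated corpus, as the surveyed systems do.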
388
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
Kavi Narayana Murthy and Srinivasu Badugu proposed a new approach to automatic tagging that requires no machine learning algorithm or training data, using a morphological analyzer and a fine-grained hierarchical tagset. They have worked on the Telugu and Kannada languages. They argue that the critical information required for tagging comes more from word-internal structure than from the context, and they show how a well designed morphological analyzer can assign correct tags and disambiguate many cases of tag ambiguity.

A comparison of unigram, bigram, HMM and Brill's POS tagging approaches for some South Asian languages was done by Fahim Muhammad Hasan, who compared the performance of n-gram, HMM and transformation-based POS taggers on three South Asian languages: Bangla, Hindi and Telugu. He found that the HMM-based tagger might perform better for English, but for South Asian languages, using corpora of different sizes, the transformation-based Brill's approach performs significantly better than any other approach when using a 26-tag tagset and pre-annotated training corpora consisting of a maximum of 25,426, 26,148 and 27,511 tokens for Bangla, Hindi and Telugu respectively.

Navneet Garg, Vishal Goyal and Suman Preet used a rule-based part-of-speech tagger for Hindi. The system is evaluated over a corpus of 26,149 words with 30 different standard part-of-speech tags for Hindi. The evaluation is done on different domains of the Hindi corpus, including news, essays and short stories, and the system achieved an accuracy of 87.55%.

Kristina Toutanova and Christopher D. Manning present results for a maximum-entropy-based part-of-speech tagger, which achieves superior performance principally by enriching the information sources used for tagging. The best resulting accuracy for the tagger on the Penn Treebank is 96.86% overall, and 86.91% on previously unseen words. Their work explored just a few information sources in addition to the ones usually used for tagging. They incorporated into a maximum-entropy-based tagger more linguistically sophisticated features, which are non-local and do not look just at particular positions in the text. They also added features that model the interactions of previously employed predictors. All of these changes led to modest increases in tagging accuracy.

Manjit Kaur, Mehak Aggerwal and Sanjeev Kumar Sharma improved a Punjabi part-of-speech tagger by using a reduced tagset. The effort to improve the accuracy of the HMM-based Punjabi POS tagger was made by reducing the tagset from more than 630 tags to 36 tags. They observed a significant improvement in tagging accuracy: their proposed tagger shows an accuracy of 92-95%, whereas the existing HMM-based POS tagger was reported to give an accuracy of 85-87%.

Adwait Ratnaparkhi used a maximum entropy model for POS tagging. He presents a statistical model which trains from a corpus annotated with part-of-speech tags and assigns them to previously unseen text with state-of-the-art accuracy (96.6%). The model can be classified as a maximum entropy model and simultaneously uses many contextual "features" to predict the POS tag. Furthermore, he demonstrates the use of specialized features to model difficult tagging decisions.

5. PROBLEMS OF PART OF SPEECH TAGGING
The main problem in part-of-speech tagging is ambiguity. A word in a sentence can have more than one meaning and therefore more than one possible tag; such situations give rise to the problem of ambiguity. To solve this problem we consider the context instead of taking a single word. For example:
auh ie`k imhnqI kuVI sI ausdy mW-bwp ny ausdw pUrw swQ id`qw qy auh swry ausdI sPlqw qy bhuq KuS sn[
In this example the word 'ਉਹ' acts both as a singular pronoun and as a plural pronoun. Since the word ਉਹ occurs in the middle of the sentence and the word next to it is not a noun, it may be a pronoun; the previous word of the sentence determines whether the pronoun is singular or plural. By looking at the context of the word, the correct POS of a word in a sentence can be identified.

6. FEATURES FOR POS TAGGING
The following features have been found to be very useful in POS tagging:
Suffixes: The ending characters of the current token are used as a feature.
Prefixes: The beginning characters of the current token are used as a feature.
Context-pattern-based features: Context patterns, e.g. word prefix and suffix context patterns, are helpful for POS tagging.
Word length: The length of a particular word is a useful feature.
Static word feature: The previous and next words of a particular word are used as features.
Presence of special characters: Special characters surrounding the current word are used as features.

7. EVALUATION METRICS
The evaluation metrics for the data set are precision, recall and F-measure, defined as follows:
Recall = Number of correct answers given by the system / Total number of words in the corpus.
Precision = Number of correct answers / Total number of words tagged by the system.
F-Measure = 2 × (Recall × Precision) / (Recall + Precision)

8. CONCLUSION
In this paper we tried to give a brief idea of the existing approaches that have been used to develop POS tagger tools, and we presented a survey on the development of different POS tagger systems for Indian languages. We found from the survey that rule-based, supervised, unsupervised and transformation-based POS tagging approaches have been used for Indian languages, and they have given good results. In each research work the most challenging task is to build the most efficient POS tagger from a large training corpus, one that gives the best performance for different languages.
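The precision, recall and F-measure used to evaluate the surveyed taggers can be computed with a small helper; the counts below are hypothetical, for illustration only:

```python
def evaluation_metrics(correct, tagged, total_words):
    """Precision, recall and F-measure for a POS tagger.

    correct     -- number of tags the system got right
    tagged      -- number of words the system assigned a tag to
    total_words -- number of words in the evaluation corpus
    """
    precision = correct / tagged
    recall = correct / total_words
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Hypothetical counts: 4500 correct tags out of 4800 tagged, corpus of 5000 words.
p, r, f = evaluation_metrics(4500, 4800, 5000)
print(round(p, 4), round(r, 4), round(f, 4))  # 0.9375 0.9 0.9184
```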
9. REFERENCES
[1] Antony P J, Research Scholar, Computational Engineering and Networking (CEN) Research Centre, Amrita Vishwa Vidyapeetham University, Coimbatore, India, and Dr. Soman K P, Professor and Head, Computational Engineering and Networking (CEN) Research Centre.
[2] Aggarwal Himashu and Amni Anirudh, "Part of Speech Tagging and Chunking with Conditional Random Fields", in the proceedings of the NLPAI Contest, 2006.
[3] Sapna Kanwar and Mr Ravishankar, Lecturers, LPU, Jalandhar, and Sanjeev Kumar Sharma, Associate Professor, B.I.S College of Engineering and Technology, Moga – 142001, India.
[4] Nidhi Mishra and Amit Mishra (2011), "Part of Speech Tagging for Hindi Corpus", International Conference on Communication Systems and Network Technologies.
[5] Chirag Patel and Karthik Gali, Language Technologies Research Centre, International Institute of Information Technology, Hyderabad, India.
[6] Sumeer Mittal, Adhesh College of Engineering, Faridkot; Mr Navdeep Singh Sethi, Lecturer, Adhesh Institute of Engineering and Technology, Faridkot; and Sanjeev Kumar Sharma, Assistant Professor, DAV University, Jalandhar.
[7] M. S. Thirumalai, Ph.D. (Managing Editor); B. Mallikarjun, Sam Mohanlal, B. A. Sharada, A. R. Fatihi, Lakhan Gusain, Jennifer Marie Bayer, S. M. Ravichandran, G. Baskaran and L. Ramamoorthy (Editors). Volume 11:9, September 2011, ISSN 1930-2940.
[8] Kavi Narayana Murthy and Srinivasu Badugu, School of Computer and Information Sciences, University of Hyderabad, India.
[9] Kristina Toutanova, Dept. of Computer Science, Gates Bldg 4A, 353 Serra Mall, Stanford, CA 94305-9040, USA, and Christopher D. Manning, Depts. of Computer Science and Linguistics, Gates Bldg 4A, 353 Serra Mall, Stanford, CA 94305-9040, USA.
[10] Navneet Garg and Vishal Goyal, Department of Computer Science, Punjabi University, Patiala, and Suman Preet, Department of Linguistics and Punjabi Lexicography, Punjabi University, Patiala.
[11] Manjit Kaur and Mehak Aggerwal, Department of Computer Science and Engineering, Lala Lajpat Rai Institute of Engg. & Technology, Moga, India, and Sanjeev Kumar Sharma, Assistant Professor, Department of Computer Science and Application, DAV University, Jalandhar.
[12] Shambhavi B R, Department of CSE, R V College of Engineering, Bangalore, and Ramakanth Kumar P, PhD, Department of ISE, R V College of Engineering, Bangalore.
[13] Erlyn Manguilimotan and Yuji Matsumoto, Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama-Cho, Ikoma, Nara 630-0192, Japan.
Asymmetric Algorithms and Symmetric Algorithms: A Review

Tannu Bala
Research Scholar (MTech)
BGIET, Sangrur
tannu.garg18@gmail.com

Yogesh Kumar
Assistant Professor
BGIET, Sangrur
yksingla37@gmail.com
ABSTRACT
These days securing a network is an important issue, and many techniques are provided to secure networks. Cryptography is a technique of transforming a message into a form which is unreadable, and then retransforming that message back to its original form. Cryptography works in two techniques: symmetric key, also known as secret-key cryptography algorithms, and asymmetric key, also known as public-key cryptography algorithms. In this paper we review different symmetric and asymmetric algorithms.

Keywords: RSA (Rivest Shamir Adleman), ElGamal, Symmetric cryptography, Asymmetric cryptography.

1. INTRODUCTION
1.1 Cryptography
Information security plays a very important role when communication is provided over the internet. It is very important for people who use e-transaction services. [1] There are various cryptography methods that provide security to passwords and payments that rely on the internet. To achieve this level of security, various security protocols of symmetric-key and asymmetric-key type have been developed. Cryptography is necessary for secure communications. Cryptography has many uses and applications, such as protecting private company information. It allows the user to order a product on the internet without the fear of their credit card number being stolen and misused. [1] Cryptography is all about increasing the level of privacy of individuals and groups.

1.2 Terms used in Cryptography
Plain text: The original message that a person wants to send is known as plain text. For example, Tom is a sender who wants to send the message "hello, where are you?" to Bob, who is at the receiver side. The message "hello, where are you?" is the plain text.

Cipher text: When plain text is coded using encryption, the generated text is known as cipher text; this message cannot be understood by anyone else. For example, "unjn122%$if" is a cipher text produced for the plain text "hello, where are you?".

Encryption: Converting plain text to cipher text is referred to as encryption. A message in its original form is known as plaintext; for security reasons this message is then coded using a cryptographic algorithm, and this process is called encryption. An encrypted message is known as cipher text. Encryption requires two things: an encryption algorithm and a key. The encryption algorithm is used by the sender.

Decryption: Converting cipher text back to plain text is known as decryption. This also needs two things: a decryption algorithm and a key. Decryption is done by the receiver.

Key: A key is a combination of numeric or alphanumeric text or special symbols, and it is used at the time of encryption or decryption. The encryption and decryption processes directly depend upon it, so the key is very important.

Fig 1: Encryption and decryption with a key

1.3 Approaches
The cryptographic methods used for securing information are classified into the following categories:
• Symmetric Key Cryptography
• Asymmetric Key Cryptography

Symmetric-key cryptography (also known as single-key encryption and private-key encryption) is a type of encryption in which the same secret key is used to encrypt and decrypt information. A secret key can be a number, a word, or a simple string of random letters. The secret key is applied to the plain text to change its content; in the simplest case this is done by shifting each letter a number of places. In this technique both the sender and the receiver have to know the secret key, so that they can encrypt and decrypt the information. In any symmetric-key encryption technique, both encryption and decryption are carried out using a single key. DES is a symmetric-key algorithm. [2]
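The letter-shifting idea just mentioned is the classical Caesar cipher. A minimal sketch (a toy, not a secure algorithm like DES or AES) shows how one shared key drives both encryption and decryption:

```python
# Caesar shift: a toy symmetric cipher where the shared secret key is the
# shift amount. Illustrative only -- real symmetric ciphers are far stronger.
def caesar(text, key, decrypt=False):
    shift = -key if decrypt else key
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

cipher = caesar("hello, where are you?", 3)
print(cipher)                           # khoor, zkhuh duh brx?
print(caesar(cipher, 3, decrypt=True))  # hello, where are you?
```

Note that the same key (3) is used on both sides, which is exactly the key-distribution problem discussed next.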
These algorithms have many advantages:
1. Efficient and secure.
2. Execute at high speeds.
3. Consume fewer computer resources (memory and processor time).

However, symmetric-key cryptographic techniques suffer from several problems:
1. The key distribution problem.
2. The key management problem.
3. The inability to digitally sign a message.

Fig 2: Symmetric key cryptography process [3]

Asymmetric Key Cryptography: The problem with secret keys is exchanging them over the Internet while preventing them from being stolen: anyone who knows the secret key can decrypt the message. To overcome this problem we have the asymmetric encryption technique, in which there is a related pair of keys. [4] A public key is available to anyone who might want to send you a message; a second, private key is kept secret, so that only the receiver knows it. Any message (text, binary files, or documents) encrypted using the public key can only be decrypted by applying the same algorithm with the matching private key, and any message encrypted using the private key can only be decrypted using the matching public key. This means that you do not have to worry about passing public keys over the Internet. A problem with asymmetric encryption, however, is that it is slower than symmetric encryption: it requires far more processing power to both encrypt and decrypt the content of a message. Instead of a single key, every person has a pair of keys. One key, called the public key, is known to everyone, and the other, the private key, is known only to the owner. There is a mathematical relationship between these keys: if any message 'm' is encrypted using one of the keys, it can be decrypted with the other. Various asymmetric encryption algorithms (RSA, ElGamal) have been implemented.

Fig 3: Asymmetric key cryptography process [3]

2. LITERATURE REVIEW
Ankit Gambhir (2014) in this paper [1] implemented a performance comparison between two cryptographic algorithms, RSA and DES. There are two techniques of cryptography: symmetric key, also called secret-key cryptography algorithms, and asymmetric key, also called public-key cryptography algorithms. DES is a secret-key based algorithm and RSA is a public-key based algorithm. Both algorithms are very efficient.

Fig 4: The process of encryption and decryption [5]

The DES algorithm, with its steps for encryption and decryption, was discussed, and the RSA algorithm was likewise discussed with all its steps.

Fig 5: DES process [5]

A comparison table shows the difference between the DES and RSA algorithms based on four parameters: type of cryptography, key used, throughput and confidentiality.

Table 1: Comparison between DES and RSA algorithms [5]

Annapoorna Shetty et al. (2014) in this paper [4] studied cryptography in detail. Cryptography is the art and science of protecting information from unwanted persons by converting it into a form which is not easily breakable. The main aim of cryptography is keeping data secure from
unauthorized persons. Data cryptography mostly deals with the content of data, such as text, image, audio and video data. Converting that data into coded form (cipher text) is called data encryption; the reverse process is called data decryption. The paper first discusses the different goals of cryptography: confidentiality, authentication, integrity, non-repudiation and access control. Different types of attacks that may damage data are mentioned; nine different attacks are discussed, including the ciphertext-only attack, the known-plaintext attack and the chosen-plaintext attack. Two types of cryptography are discussed, and the paper gives details of two asymmetric algorithms, RSA and ElGamal. A summary table gives details of the two algorithms based on the factors analyzed: key length, type of algorithm, security attacks, simulation speed, scalability, key used, power consumption, and hardware/software implementation differences between RSA and ElGamal.

Bryce D. Allen (2008) in this paper [5] combined the two earlier cryptography techniques, asymmetric and symmetric cryptography, into one known as hybrid cryptosystems; hybrid cryptosystems combine them to gain the advantages of both. The paper implements two attacks, the basic meet-in-the-middle attack and the two-table attack. Several variations of the basic meet-in-the-middle attack are implemented, and all implementations are done in C++. Table 1.1 shows the splitting probability of the experiment. The ElGamal cryptography algorithm with the discrete log problem was discussed. The paper covers the meet-in-the-middle attack in detail with different parameters (its requirements and assumptions, problem solution, implementation, running time and memory usage), and then the second attack, the "two-table attack", is discussed with the same parameters. Implementing a cryptosystem securely requires far more than an understanding of the basic algorithm: the implementer must be aware of possible attacks on the system, and choose keys and parameters to make those attacks infeasible. This paper discussed attacks which rely on the underlying mathematics; however, timing attacks have been discovered against various cryptosystems which gain information based on how long the computer takes to perform encryption or decryption operations.

Franck Lin (2010) [6] explains cryptography in a book divided into two parts. The first part contains descriptions of symmetric and asymmetric key algorithms with examples. A stream cipher attempts to imitate a one-time pad: since it is impractical to have a key that is at least the same size as the plaintext, stream ciphers take a smaller 128-bit key. Block ciphers represent a major advancement in cryptography and have little vulnerability; most block ciphers rely on substitution-permutation rounds, where in each round data is broken up into 8-bit sections, substituted according to a key, recombined, and then rearranged according to a key. The second part describes the digital age and cryptography. The report confirms the feasibility and strength of quantum cryptography, highlighting an almost certain legal battle and information technology revolution.

Sombir Singh et al. (2013) in this paper [7] explained the symmetric DES algorithm. The Data Encryption Standard (DES) is a private-key cryptography system that provides security in communication systems, but DES suffers from brute-force attacks. To improve the security of the DES algorithm, a transposition technique is added before DES performs its process. If the transposition technique is applied before the original DES algorithm, an attacker must first break the original DES algorithm and then the transposition technique, so the security is approximately doubled compared to simple DES. The paper includes four techniques to provide security: DES; Double DES (2DES), whose process is the same as DES but repeated twice using two keys K1 and K2; Triple DES, which is DES applied three times; and the transposition technique, which does not replace one alphabet with another like the substitution technique but performs a permutation on the plain text to convert it into cipher text. The designed system improved the security of the original DES. The only drawback of the enhanced DES is that extra computation is needed, but today's computers have parallel and high-speed computation power, so this drawback is negligible given that the main aim is to enhance the security of the system.

3. DETAILS OF DIFFERENT CRYPTOGRAPHIC ALGORITHMS

Data Encryption Standard (DES): The DES algorithm is a secret-key based algorithm in which the same key is used for encryption and decryption; DES is a symmetric-key algorithm. DES is a block cipher, an algorithm that operates on fixed-length strings of plaintext bits; in the case of DES, the block size is 64 bits. The key originally consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used for checking parity and are thereafter discarded, hence the effective key length is 56 bits. [2] When implemented in hardware and software, DES gives better performance in hardware than in software [8]. DES consumes low power compared to other cryptographic algorithms such as RSA.

Advanced Encryption Standard (AES): AES is an advanced encryption algorithm proposed to provide very strong security and to overcome the problems of the DES algorithm. AES is a block cipher; in the case of AES, the block size is 128 bits. The standard key size is 128 bits, but if more security is required the key size may be increased to 192 or 256 bits. The number of rounds depends upon the key size: a 128-bit key uses 10 rounds, a 192-bit key uses 12 rounds, and a 256-bit key uses 14 rounds. AES is now used worldwide.

Rivest Shamir Adleman (RSA): The RSA algorithm is the most commonly used and secure public-key encryption and authentication algorithm. It can be used to encrypt a message without the need to exchange a secret key separately. It is included as part of the web browsers from Microsoft and Netscape, and is also part of Lotus Notes, Intuit's Quicken and many other products. The encryption system is owned by RSA Security; the company licenses the algorithm technologies and also sells development kits. The technologies are part of existing or proposed Web, Internet and computing standards. RSA security depends on the difficulty of factoring large integers. It is generally considered to be secure when sufficiently long keys are used (512 bits is insecure, 768 bits is moderately secure and 1024 bits is good, for now).
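The principle that it is easy to multiply two primes but hard to factor their product can be illustrated with a toy RSA sketch using deliberately tiny, insecure primes (real RSA keys are 1024 bits or more):

```python
# Toy RSA with tiny primes -- insecure, purely to illustrate the key setup
# and the encrypt/decrypt round trip described above.
from math import gcd

p, q = 61, 53                 # two small primes (real RSA uses huge ones)
n = p * q                     # modulus n = p*q, part of both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

m = 42                        # the message, encoded as an integer < n
cipher = pow(m, e, n)         # encrypt with the public key (e, n)
plain = pow(cipher, d, n)     # decrypt with the private key (d, n)
print(n, plain)               # plain recovers the original message 42
```

Factoring n = 3233 back into 61 × 53 is trivial here, which is exactly why real keys use primes hundreds of digits long. (The three-argument `pow` for the modular inverse requires Python 3.8+.)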
RSA computation occurs with integers modulo n = p*q. It requires keys of at least 1024 bits for good security, and keys of 2048 bits provide the best security. RSA is widely used for secure communication channels and for authentication to identify the service provider. RSA is too slow for encrypting large volumes of data, but it is widely used for key distribution. [9] RSA has the disadvantage that it is not efficient in both hardware and software implementations. The principle of the RSA algorithm is that "it is easy to multiply two prime numbers but difficult to factor them". As RSA is an asymmetric-key cryptographic algorithm, there are different keys for encryption and decryption. [1]

ElGamal Algorithm: ElGamal encryption/decryption is based on the difficulty of the discrete logarithm problem: it is straightforward to raise numbers to large powers, but it is much harder to do the inverse computation of the discrete logarithm. The ElGamal algorithm depends on certain parameters which affect the performance, speed and security of the algorithm. ElGamal encryption is one of many encryption schemes which utilize randomization in the encryption process. [9]
The ElGamal algorithm can be used like the RSA algorithm for public-key encryption because:
• RSA encryption depends on the difficulty of factoring large integers, while
• ElGamal encryption depends on the difficulty of computing discrete logs in a large prime modulus.

ElGamal is essentially an advanced version of the Diffie-Hellman key exchange protocol. A drawback of ElGamal is that its cipher text is twice as long as the plain text; an advantage is that it gives a different cipher text for the same plain text each time. For image data, the size of the cipher text is very large, and reshaping the encrypted data was not understood. ElGamal's encryption is very simple because it is a multiplication of the message and the shared symmetric key.

4. CONCLUSION
Security plays a very important and powerful role in the field of networking, the Internet and various communication systems. In this paper we compared various symmetric algorithms (DES and AES) and asymmetric algorithms (RSA and ElGamal). Based on this review we conclude that among the symmetric algorithms AES is better, and among the asymmetric algorithms ElGamal is better, at providing security.

REFERENCES
[1] Gambhir, Ankit. "RSA Algorithm or DES Algorithm?" Journal of Engineering Computers & Applied Sciences 3.4 (2014).
[2] Bhardwaj, CR S. "Modification of Des Algorithm." International Journal of Innovative Research and Development 1.9 (2012).
[3] Thakur, Jawahar, and Nagesh Kumar. "DES, AES and Blowfish: Symmetric Key Cryptography Algorithms Simulation Based Performance Analysis." International Journal of Emerging Technology and Advanced Engineering 1.2 (2011).
[4] Annapoorna Shetty, Shravya Shetty and Krithika. "A Review on Asymmetric Cryptography – RSA and ElGamal Algorithm." International Journal of Innovative Research in Computer and Communication Engineering (2014).
[5] Allen, Bryce. Implementing Several Attacks on Plain ElGamal Encryption. ProQuest, 2008.
[6] Lin, Franck. "Cryptography's Past, Present, and Future Role in Society."
[7] Singh, Sombir, Sunil K. Maakar, and Sudesh Kumar. "A Performance Analysis of DES and RSA Cryptography." International Journal of Emerging Trends & Technology in Computer Science (IJETTCS), ISSN: 2278-6856.
[8] Padmavathi, B., and S. Ranjitha Kumari. "A Survey on Performance Analysis of DES, AES and RSA Algorithm along with LSB Substitution Technique." International Journal of Science and Research 2.4 (2013).
[9] Singh, Rashmi, and Shiv Kumar. "Elgamal's Algorithm in Cryptography." International Journal of Scientific & Engineering Research 3 (2012).
[10] Menezes, Alfred J., Paul C. Van Oorschot, and Scott A. Vanstone. Handbook of Applied Cryptography. CRC Press, 1996.
[11] Li, Xiaofei, Xuanjing Shen, and Haipeng Chen. "ElGamal Digital Signature Algorithm of Adding a Random Number." Journal of Networks 6.5 (2011): 774-782.
[12] Sison, Ariel M., et al. "Implementation of Improved DES Algorithm in Securing Smart Card Data." Computer Applications for Software Engineering, Disaster Recovery, and Business Continuity. Springer Berlin Heidelberg, 2012. 252-263.
[13] Mahajan, Prerna, and Abhishek Sachdeva. "A Study of Encryption Algorithms AES, DES and RSA for Security." Global Journal of Computer Science and Technology 13.15 (2013).
[14] William Stallings. Cryptography and Network Security: Principles and Practices. Prentice Hall, November 16, 2005.
[15] Elminaam, Diaa Salama Abd, Hatem Mohamed Abdual-Kader, and Mohiy Mohamed Hadhoud. "Evaluating the Performance of Symmetric Encryption Algorithms." IJ Network Security 10.3 (2010): 216-222.
A Novel Approach for Reducing Energy Consumption and Increasing Throughput in Wireless Sensor Network using Network Simulator 2

Jagdish Bassi
Computer Science & Engineering
Punjab Technical University
Jalandhar, India
rommy_btech@yahoo.co.in

Taranjit Aulakh
Computer Science & Engineering
Punjab Technical University
Jalandhar, India
taranaulakh@gmail.com
ABSTRACT
A wireless sensor network is a collection of sensor nodes placed at desired locations to monitor real-time quantities such as moisture, climate and stress. Wireless sensor networks (WSNs) must therefore operate with high efficiency in environment-monitoring applications, which are widely used for military purposes and for life-critical applications in the field of medicine. Sensor nodes run mainly on batteries, so the network's lifetime is limited chiefly by the battery's power budget. Hence, to obtain better battery consumption and an energy-efficient wireless sensor network, an anycast forwarding scheme is proposed and used in this paper. Each node in the network has multiple next-hop relaying nodes in a candidate set (forwarding set), which reduces both the delay and the consumption of battery power. An active node forwards the packet to the first node of the forwarding set that wakes up.

Keywords
Sensor, Energy, Delay, Node, MAC, DSR, Throughput

1. INTRODUCTION
Four major activities consume energy: the energy consumed by radios when they are on; the energy consumed while transmitting and receiving control packets; the energy required to keep the sensors in the on state; and the energy consumed during data transfer from source to destination. Actual sending and receiving of data is a rare event, so it accounts for only a small share of the total energy. Network sensing events, however, occur repeatedly, with continuous and independent energy consumption. We therefore propose extending the network's lifetime by controlling the energy expended to keep the radio on (for listening to the medium and for control packets) [1]: the radio consumes most of its energy while waiting for a packet to arrive. Sleep-wake scheduling is an effective mechanism to prolong the lifetime of such energy-constrained wireless sensor networks. Under sleep-wake scheduling, a transmitting node must wait for its next-hop relay node to wake up, which may cause substantial delay. This delay can be reduced by using DSR techniques and packet-forwarding schemes [2].

1.1 Wireless Sensor Network
A wireless ad-hoc network consists of mobile nodes communicating over wireless links, each with processing capability, multiple types of memory (program, data and flash), an RF transceiver, a power source, and various sensors and actuators. There are two types of wireless sensor networks:
1) Structured network
2) Unstructured network
In a structured network the deployment of the sensor nodes is planned, while in an unstructured network the nodes are deployed in an ad-hoc manner. Sensor nodes consume most of their energy while listening; this problem is generally addressed by duty cycling. Time synchronization is critical for diverse purposes, including coordinated actuation, sensor data fusion and power-saving duty cycling [3]. Under duty cycling, the sensor nodes periodically switch between sleeping and active states: they transmit and receive data in the active state and go completely inactive in the sleeping state in order to save energy. Synchronization between the operating cycles of different nodes is motivated by the fact that the radios of both machines must be on to transmit a packet from one machine to the other. Protocols using the synchronized approach include the T-MAC, S-MAC and RMAC routing protocols [4].

Current duty-cycling MAC-layer protocols for wireless sensor networks are either synchronized through explicit schedule exchanges or left unsynchronized, and both approaches have weaknesses. Protocols that schedule duty cycling and packet transmissions through periodic synchronization messages (S-MAC, T-MAC and DMAC) consume significant energy even at zero traffic. B-MAC wakes up the receiver using unsynchronized duty cycling and a long-preamble mechanism. The long-preamble mechanism has several problems. First, the latency accumulated along multihop routes can be uncontrolled because a long preamble is used on each hop. Second, the energy spent on preamble transmission, and on reception after the receiver has woken up, is wasted; this could be avoided if the sender knew the receiver's wake-up schedule and could choose the preamble length conservatively. Third, neighbor nodes other than the intended receiver overhear the preamble and remain awake until the last data packet is transmitted, which wastes energy [5]-[6].

395
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
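The energy saving that duty cycling provides can be illustrated with a small calculation. This is a minimal sketch; the power figures and the 5% duty cycle are illustrative assumptions, not measurements from the paper.

```python
def duty_cycle_energy(p_active_mw, p_sleep_mw, duty_cycle, seconds):
    """Energy (in millijoules) consumed by a node that keeps its radio
    active for a fraction `duty_cycle` of the time and sleeps otherwise."""
    active_time = duty_cycle * seconds
    sleep_time = (1.0 - duty_cycle) * seconds
    return p_active_mw * active_time + p_sleep_mw * sleep_time

# Illustrative radio figures: 60 mW while active, 0.03 mW while asleep.
always_on = duty_cycle_energy(60.0, 0.03, 1.0, 3600)   # radio never sleeps
cycled    = duty_cycle_energy(60.0, 0.03, 0.05, 3600)  # 5% duty cycle
```

With these assumed figures, one hour of duty cycling at 5% costs roughly 10.9 J against 216 J for an always-on radio, which is why idle listening dominates the energy budget.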

Fig 1: Basic architecture of wireless sensor network

DW-MAC is a new energy-efficient duty-cycle MAC protocol that increases power efficiency, reduces latency and achieves higher packet delivery ratios by integrating scheduling and access control to maintain a one-to-one mapping between a data active period and the subsequent sleep period [4]. The major elements characterizing the performance of a wireless sensor network, such as power consumption in different operating environments, the impact of weather conditions, and interference between neighboring nodes, also need to be studied and analyzed. Such analysis provides specific information, for example that the transmission range of mote sensor nodes decreases significantly in the presence of rain, fog and other climatic conditions [7].

1.2 The protocol used in Wireless Ad-hoc Network
Among the various sleep-wake protocols, synchronized sleep-wake scheduling protocols are proposed in [3], [8]-[11]. In these protocols, sensor nodes periodically or aperiodically exchange synchronization information with their neighboring nodes. However, such synchronization schemes cause additional communication overhead and delay, and consume a considerable amount of energy. On-demand sleep-wake scheduling protocols are proposed in [12], where nodes turn off most of their circuitry but keep a secondary low-powered receiver always on to respond to "wake-up" calls from neighboring nodes when packets need to be relayed. However, the additional receiver significantly increases the cost of the sensor motes. Hence, to save energy, in the above protocols each node wakes up independently of its neighbors. This introduces additional delay at each node along the path to the sink node, because each node must wait for its next-hop relaying node to wake up before it can transmit the packet. In situations like fire detection and tsunami alarms such delay is unacceptable; thus, to minimize the event-reporting delay for these delay-sensitive applications, an On Demand (Reactive) protocol is used. The Dynamic Source Routing protocol (DSR) is an on-demand, efficient routing protocol designed for multi-hop wireless sensor ad hoc networks. It organizes and configures itself without the help of any existing network infrastructure or administration. DSR uses two routing mechanisms, (a) Route Discovery and (b) Route Maintenance, which work together to allow sensor nodes to discover and maintain source routes to arbitrary destinations in the wireless ad hoc network. It is a loop-free technique because it does not require up-to-date routing information in the intermediate nodes through which packets are forwarded. It lets forwarding nodes cache the routing information for their own future use, and it limits routing overhead to the nodes that are reacting to changes in the routes currently in use. Nodes forward packets for each other to allow communication over multiple "hops" between nodes that are not directly within wireless transmission range of one another. DSR determines and maintains all routing information as intermediate nodes change and as nodes join or leave the network area. The resulting network topology may be quite rich and rapidly changing, because the number or sequence of intermediate hops to reach any destination may change. DSR also finds multiple routes to any destination node in the wireless ad hoc network. Each packet header carries an ordered list of the nodes through which the packet must pass. DSR can successfully discover and forward packets over unidirectional links, whereas other protocols operate only over bidirectional links.

2. PROPOSED ALGORITHM
When an on-demand synchronized sleep-wake schedule is used, the additional receivers required by the DSR routing protocol cause a considerable increase in the cost of the sensor motes. Since it is impractical for every sensor node to know the wake-sleep schedule of the other sensor nodes, additional delay accumulates along the path to the sink node, because each node must wait for its next-hop node to wake up before it can transmit the packet. This delay must be minimized for delay-sensitive applications, such as fire detection or tsunami alarms, where delay is unacceptable. In traditional packet-forwarding schemes, each sensor node has one designated next-hop relaying node in its neighborhood and must wait for that node to wake up whenever it needs to forward a packet. In the proposed scheme, each node instead has multiple next-hop relaying nodes in a candidate set (forwarding set); a sending node forwards a packet to the first node of the forwarding set that wakes up. The proposed anycast forwarding scheme thus reduces the event-reporting delay, minimizes power consumption, and maximizes the lifetime of the nodes.

Fig 2: Flowchart of anycast forwarding scheme
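The gain from anycast forwarding described above can be sketched in a few lines: with a single designated relay the sender waits for that node's next wake-up, while with a forwarding set it waits only until the earliest wake-up among the candidates. This is a minimal sketch; the wake-up times are arbitrary illustrative values, not output of the paper's simulation.

```python
def wait_single_relay(now, relay_wakeup):
    """Waiting time when the packet must go to one designated next hop."""
    return max(0.0, relay_wakeup - now)

def wait_anycast(now, forwarding_set_wakeups):
    """Waiting time when the packet may go to whichever candidate
    in the forwarding set wakes up first."""
    return min(max(0.0, t - now) for t in forwarding_set_wakeups)

# The sender holds a packet at t = 10 s; candidate relays wake at 25, 14 and 40 s.
single = wait_single_relay(10.0, 25.0)             # wait for the one designated relay
anycast = wait_anycast(10.0, [25.0, 14.0, 40.0])   # wait only for the earliest riser
```

Because the anycast wait is the minimum over the forwarding set, it can never exceed the single-relay wait, which is the intuition behind the reduced event-reporting delay.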

3. SIMULATION RESULTS
We simulated the network with the parameters listed below using Network Simulator 2 (NS2) and compared the results with the existing on-demand (reactive) routing protocol, DSR. These simulations confirm our main objective: the anycast forwarding technique shows a significant improvement over the existing reactive protocol for a wireless sensor network.

Table 1. Simulation parameters

Parameter                Value
Simulator                Network Simulator 2.34
Network Size             1500 m x 1500 m
No. of nodes             20
Simulation Time          30 s
MAC Type                 802.11
Bandwidth                2 Mb
Traffic Sources          CBR
Traffic Agents           UDP
Interface Queue Length   250
Packet Size              512 bytes
Routing Protocol         DSR
Antenna Type             Omni-directional
Initial Energy           850 J

The performance of the proposed anycast forwarding technique is analyzed by comparing it with the existing on-demand technique on the basis of the following parameters: a) end-to-end delay, b) throughput, c) energy.

3.1 End to End Delay Performance Comparison
End-to-end delay is measured from the time an event occurs at a node to the time the packet generated by that event is received at the sink node [8]. It is the average time taken by a data packet to arrive at the destination node after an event occurs in the network, and it includes the delay caused by queuing during data packet transmission and by the route discovery process.

Delay = Tr - Ts

where Tr is the arrival time and Ts is the send time. A lower end-to-end delay means better performance of the protocol in use. As shown in the figure, the proposed work achieves end-to-end delay comparable with the existing DSR.

Fig 3: Delay v/s pause time comparison

3.2 Throughput Performance Comparison
Throughput is the ratio of the number of packets received at the destination node to the number of packets transmitted at the source node; it must be higher for better performance of the wireless sensor network.

Throughput = Total Data Bits Received / Simulation Runtime

As shown in the figure, the proposed work achieves higher throughput with the anycast forwarding technique than with the existing DSR technique.

Fig 4: Throughput v/s pause time comparison

3.3 Energy Performance Comparison
Energy is consumed while sending a file or data, and the consumption depends on the size of the packets. Since it is impractical to replace the batteries of a large number of sensors deployed in the hostile environment of a wireless network, the network must be made energy efficient by keeping energy consumption as low as possible. Our proposed network consumes less energy than the existing DSR technique.
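The two metrics defined above, Delay = Tr - Ts and Throughput = total data bits received / simulation runtime, can be computed from trace records as follows. This is a minimal sketch: the sample records are invented for illustration, not taken from the NS2 traces of the paper.

```python
def average_delay(records):
    """Mean end-to-end delay over (send_time, recv_time, size_bytes) records."""
    return sum(tr - ts for ts, tr, _ in records) / len(records)

def throughput_bps(records, runtime_s):
    """Total data bits received divided by the simulation runtime."""
    total_bits = sum(size * 8 for _, _, size in records)
    return total_bits / runtime_s

# Three received packets, each as (Ts, Tr, payload bytes).
trace = [(0.0, 0.4, 512), (1.0, 1.6, 512), (2.0, 2.5, 512)]
delay = average_delay(trace)         # (0.4 + 0.6 + 0.5) / 3 = 0.5 s
rate = throughput_bps(trace, 30.0)   # 3 * 512 * 8 bits over a 30 s run
```

In practice these quantities would be extracted from the NS2 trace file by matching each packet's send and receive events, but the arithmetic is exactly the one shown.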

Fig 5: Energy versus Pause time Comparison

4. CONCLUSION
It can be concluded that the anycast forwarding technique gives better results than the existing DSR technique. Network parameters act as the performance markers: in the proposed work, end-to-end delay is lower, throughput is higher, and less energy is required than with the existing DSR technique. Hence, the results of the proposed protocol are better than, or comparable with, the existing DSR protocol.

5. ACKNOWLEDGMENTS
Finally, we are thankful to all the faculty members of BGIET (Sangrur), whose fruitful suggestions enhanced and enriched the presentation of this paper.

REFERENCES
[1] M. Zorzi and R. R. Rao, "Geographic Random Forwarding (GeRaF) for Ad Hoc and Sensor Networks: Energy and Latency Performance," IEEE Transactions on Mobile Computing, vol. 2, pp. 349-365, October 2003.
[2] J. Kim, X. Lin, N. B. Shroff, and P. Sinha, "Minimizing Delay and Maximizing Lifetime for Wireless Sensor Networks With Anycast," in Proc. IEEE INFOCOM, 2010.
[3] J. Elson, L. Girod, and D. Estrin, "Fine-grained network time synchronization using reference broadcasts," SIGOPS Oper. Syst. Rev., vol. 36, pp. 147-163, 2002.
[4] Y. Sun, S. Du, O. Gurewitz, and D. B. Johnson, "DW-MAC: A Low Latency, Energy Efficient Demand-Wakeup MAC Protocol for Wireless Sensor Networks," in Proc. MobiHoc, 2008.
[5] H.-X. Tan and M. C. Chan, "A2-MAC: An Adaptive, Anycast MAC Protocol for Wireless Sensor Networks," in Proc. IEEE Wireless Communications and Networking Conference (WCNC), 2010.
[6] S. Liu, K.-W. Fan, and P. Sinha, "CMAC: An Energy Efficient MAC Layer Protocol Using Convergent Packet Forwarding for Wireless Sensor Networks," The Ohio State University, 2007.
[7] G. Anastasi, A. Falchi, A. Passarella, M. Conti, and E. Gregori, "Performance Measurements of Motes Sensor Networks," in Proc. 7th ACM International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems, pp. 174-181, October 2004.
[8] Y.-C. Tseng, C.-S. Hsu, and T.-Y. Hsieh, "Power-Saving Protocols for IEEE 802.11-Based Multi-Hop Ad Hoc Networks," Computer Networks, vol. 43, pp. 317-337, Oct. 2003.
[9] W. Ye, J. Heidemann, and D. Estrin, "Medium Access Control with Coordinated Adaptive Sleeping for Wireless Sensor Networks," IEEE/ACM Transactions on Networking, vol. 12, pp. 493-506, June 2004.
[10] T. van Dam and K. Langendoen, "An Adaptive Energy-Efficient MAC Protocol for Wireless Sensor Networks," in Proc. SenSys, pp. 171-180, November 2003.
[11] G. Lu, B. Krishnamachari, and C. S. Raghavendra, "An Adaptive Energy-Efficient and Low-Latency MAC for Data Gathering in Wireless Sensor Networks," in Proc. IPDPS, pp. 224-231, April 2004.
[12] E. Shih, S.-H. Cho, N. Ickes, R. Min, A. Sinha, A. Wang, and A. Chandrakasan, "Physical layer driven protocol and algorithm design for energy-efficient wireless sensor networks," in Proc. MobiCom, 2001.

Comparison & Analysis of Binarization Techniques for
Various Types of Text Images

Monica Goyal
GKU, Talwandi Sabo
monikagoyal84@gmail.com

Rachna Rajput
GKU, Talwandi Sabo
rachna12cse@gmail.com

Kanwaljeet Kaur
BGIET, Sangrur
kanwalshergill85@gmail.com

ABSTRACT
Binarization is the process that converts an image into black-and-white: a threshold value is defined, the colors above that value are converted into white, while the colors below it are converted into black. This is a very simple process in digital image processing when the document is black ink written on white paper. Document image binarization is an important step in the document image analysis and recognition pipeline, and the performance of a binarization technique directly affects the recognition analysis. The quality of the images, however, has a significant impact on OCR performance, since most historical archive document images are of poor quality due to aging, discolored cards and ink fading. In recent years this method has gained popularity over its competitors due to its simplicity, superior convergence characteristics and high solution quality. Two algorithms are presented that are suitable for high-speed scanning of document images. They are designed to operate on a portion of the image while the document is being scanned; thus they fit a pipeline architecture and lend themselves to real-time implementation. The first algorithm is based on adaptive thresholding and uses local edge information to switch between global thresholding and adaptive local thresholding determined from the statistics of a local image window. The second thresholding algorithm is based on tracking the foreground and background levels using clustering based on a variant of the K-means algorithm. The two approaches may be used independently or may be combined for improved performance.

Keywords: K-means, binarization, adaptive thresholding, pipeline.

1. INTRODUCTION
Binarization is the starting step of most document image analysis systems and refers to the conversion of a gray scale image to a binary image. Binarization is a key step in document image processing modules, since a good binarization sets the base for further document image analysis. Binarization usually distinguishes text areas from background areas, so it is used as a text-locating technique.

1.1 Images and Digital Images
An image is a single picture which represents something. It may be a picture of a person, of people or animals, of an outdoor scene, a microphotograph of an electronic component, or the result of medical imaging. A digital image differs from a photo in that the (x, y) and f(x, y) values are all discrete. Usually they take on only integer values, with x and y ranging from 1 to 256 each and the brightness values ranging from 0 (black) to 255 (white). A digital image can be considered as a large array of discrete dots, each of which has a brightness associated with it. These dots are called picture elements, or more simply pixels. The pixels surrounding a given pixel constitute its neighborhood. A neighborhood can be characterized by its shape in the same way as a matrix.

Types of Digital Images: We shall consider four basic types of images:
(a) Binary images.
(b) Grayscale.
(c) True color or RGB.
(d) Indexed.

Image Acquisition: Briefly, the means of getting a picture into a computer.

CCD camera: Such a camera has, in place of the usual film, an array of photosites; these are silicon electronic devices whose voltage output is proportional to the light falling on them. For a camera attached to a computer, the information from the photosites is output to a suitable storage medium. Generally this is done in hardware, which is much faster and more efficient than software using a frame-grabbing card. This allows a large number of images to be captured in a very short time, on the order of one ten-thousandth of a second each. The images can then be copied onto a permanent storage device at some later time. Digital still cameras use a range of devices, from floppy discs and CDs to various specialized cards and memory sticks, from which the information can later be downloaded to hard disk.

Fig.1.1: Images captured by digital camera.

Flatbed Scanner: This works on a principle similar to the CCD camera. Instead of the entire image being captured at once on a large array, a single row of photosites is moved across the image, capturing it row by row as it moves. Since this is a much slower process than taking a picture with a camera, it is quite reasonable to allow all capture and storage to be processed by suitable software.

Fig.1.2: Images taken by scanner.

1.2 Binarization Techniques
Document image binarization is an important and active area in the field of image processing and pattern recognition.
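The description above of a digital image as an array of discrete brightness values, where the pixels surrounding a given pixel form its neighborhood, can be made concrete with a small pure-Python sketch. The 4x4 sample image is invented for illustration.

```python
def neighborhood(image, x, y):
    """Return the 3x3 neighborhood of pixel (x, y) as a flat list of gray
    values, clipping at the image border (fewer values at edges/corners)."""
    h, w = len(image), len(image[0])
    return [image[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]

# A tiny 4x4 grayscale image; each value is a brightness from 0 (black) to 255 (white).
img = [[  0,  10,  20,  30],
       [ 40,  50,  60,  70],
       [ 80,  90, 100, 110],
       [120, 130, 140, 150]]

centre = neighborhood(img, 1, 1)  # nine values around an interior pixel
corner = neighborhood(img, 0, 0)  # only four values exist at a corner
```

Local binarization methods discussed later in the paper compute statistics (mean, standard deviation) over exactly this kind of window.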

It converts the gray scale image into a binary image by extracting the text and eliminating the background. Binarization plays an important role in document processing, since its performance quite critically determines the degree of success in subsequent character segmentation and recognition.

Block Diagram
In order to reduce storage requirements and to increase processing speed, it is often desirable to represent gray scale or color images as binary images by picking a threshold value. Binarization algorithms are classified into global and local methods. Fig 1.3 shows the block diagram of the binarization method.

Fig. 1.3: Block Diagram of Image Binarization Method.

In general, document binarization deals with two categories: the global method and the local method. Our main focus is to digitize document images by applying binarization techniques that suppress the background noise and retain the information without any distortion.

1.2.1 Global Method
In the global approach, a single threshold value is selected and applied to the entire image. It generally gives a good separation of foreground and background intensity, but in the presence of poor contrast or variable foreground-background intensity the method fails to binarize the image. The most widely used global method is the Otsu method.

1.2.2 Local Method
The problem with global thresholding is that changes in illumination across the scene may cause some parts to be brighter (in the light) and some parts darker (in the shadow) in ways that have nothing to do with the objects in the image. We can deal, at least in part, with such uneven illumination by determining the threshold locally. The Niblack and Sauvola techniques are used.

1.3 Importance of Binarization
Binarization reduces an image of up to 256 gray levels to two. Binarization techniques assist document processing, the handling of degraded images and fingerprint images, and segmentation in OCR, as binarization is mainly used for pre-processing an image. For example, it is used to enhance degraded document images so that they can be used in a fingerprint identification system. The quality of a binarization is measured objectively against several criteria.

2. PROBLEM DEFINITION
High-speed scanners used in production scanning of document images may process over one hundred pages per minute. The speed and performance requirements imposed on these systems dictate the use of dedicated hardware for image processing and require algorithms that are not only effective, but also efficient and that lend themselves to real-time implementation. Document images are typically captured in gray scale (eight bits per pixel) by a linear-array charge-coupled device and are converted to binary (one bit per pixel) output images. In most cases the documents consist of text or line graphics on a relatively uniform background; converting them to binary form is thus suitable for output and storage, because it significantly reduces file size and transfer bandwidth requirements without loss of important document information, with the goal of preserving the content and making the documents available in minimum time and at reduced cost. It will increase the throughput of a system and recognize the text appreciably. The recognition results can be improved by using binarization techniques, which distinguish text from background. The simplest way to binarize an image is to choose a threshold value and classify all pixels with values above this threshold as white, and all other pixels as black. The problem then is how to select the correct threshold.

3. LITERATURE SURVEY
NOBUYUKI OTSU proposed the global thresholding technique today named the Otsu method, a non-parametric and unsupervised method of automatic threshold selection for picture segmentation. The histogram of an image represents object and background as two peaks with a deep, sharp valley between them, and the threshold is selected at the bottom of this valley. In real pictures, however, it is difficult to detect the valley bottom precisely when the valley is flat and broad, imbued with noise, or when the peaks are of extremely unequal heights. To solve these problems, techniques such as (i) valley sharpening, (ii) the difference-histogram method, and (iii) methods applied directly to the histogram have been proposed. However, such methods involve unstable calculation and provide no criterion for the "goodness" of a threshold. The paper therefore proposes a method to select an optimal threshold from discriminant criteria, mainly by maximizing the discriminant measure: the total variance of levels is independent of the threshold k, while the other two quantities, based on the first-order statistics (class means) and second-order statistics (class variances), depend on it. The maximization is carried out by a sequential search using simple cumulative quantities; only the zeroth- and first-order cumulative moments of the gray-level histogram are used, and the range of k is fixed. The proposed method can also be used to analyze further important aspects, since the criterion serves as a measure of the separability of the classes in the original image and can provide lower and upper bounds. A straightforward extension to multi-thresholding problems is feasible by virtue of the criteria on which the method is based. The method leads to stable and automatic threshold selection based on the integration of the histogram, and as a result it is recommended as the most simple and standard method.

J. KITTLER AND J. ILLINGWORTH proposed a computationally efficient solution to the problem of minimum-error thresholding under the assumption that the object and background pixel gray-level values are normally distributed. The method is an alternative to that of Nakagawa and Rosenfeld, who model the two populations as normal distributions whose parameters are inferred from the gray-level histogram by fitting. That approach is computationally involved, as it requires optimization of a "goodness of fit" criterion function by hill climbing.
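The "simplest way" described above, choosing one threshold and classifying all pixels above it as white and the rest as black, looks like this in pure Python. This is a minimal sketch; the tiny sample image and the threshold of 128 are arbitrary illustrative choices.

```python
def binarize(image, threshold):
    """Global thresholding: pixels above the threshold become white (255),
    all other pixels become black (0)."""
    return [[255 if pixel > threshold else 0 for pixel in row]
            for row in image]

# A 2x3 grayscale fragment with values in 0..255.
gray = [[ 12, 200, 130],
        [255,  90, 128]]

binary = binarize(gray, 128)  # -> [[0, 255, 255], [255, 0, 0]]
```

Everything that follows in the paper is about how to pick that threshold well, globally (one value per image) or locally (one value per window).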

The method proposed in this paper is thus the derivation of a simpler technique for finding the optimal (Bayes minimum-error) threshold. Rather than minimizing the misclassification error directly, it minimizes a criterion reflecting the amount of overlap between the object and background populations: the smaller the overlap between the density functions, the smaller the classification error. The value of the threshold yielding the lowest value of the criterion therefore gives the best-fit model and minimizes the error. Minimizing the criterion function gives a threshold value for segmenting the object from the background. In the case of non-uniform illumination, where the Otsu method fails and produces salt-and-pepper noise, the method uses a variable thresholding technique: using adaptive windows, an optimal threshold is selected for each window, and bilinear interpolation is used to define each pixel's threshold and produce the binary image, so this is a local binarization method.

4. TECHNIQUES WORKED OUT
The goal of image binarization is to convert an image of up to 256 gray levels to a black-and-white image. Usually, binarization techniques are used for processing that leads to the clear extraction of useful information from the images, as these techniques differentiate between foreground and background. The simplest way to binarize an image is to choose a threshold value and classify all pixels with values above this threshold as white and all other pixels as black. The problem then is how to select the correct threshold. There are mainly two categories of methods: global thresholding, and local or adaptive thresholding. The process of binarization is completed by the following steps.

4.1 Preprocessing
Preprocessing is mainly used to smooth the image, since noise is present in it. Five categories of preprocessing filters are studied: mean filters, the median filter, the Wiener filter, the Total Variation filter and the Non-local Means filter. Most of these categories have a selection of variations in implementation.

4.1.2 Wiener Filter
The Wiener filter implemented in the spatial domain is also evaluated. The Wiener filter, known as the "minimum mean square error filter", is an adaptive linear filter applied to an image locally by taking the local image variance into account. When the variance in an image is large, the Wiener filter performs light local smoothing; when the variance is small, it performs stronger local smoothing. The filtered image is computed through

Ifilt(x, y) = n + (a² - b²)(Iorig(x, y) - n)/a²    (1)

where n and a² are the local mean and variance respectively, and b² is the estimate of the noise variance.

Fig.1.4: Wiener filter: (a) original image, (b) Wiener-filtered image.

4.2 Binarization Techniques

4.2.1 Global Thresholding
This method applies one threshold to the entire image, separating the pixels into two classes, foreground and background. This can be expressed as

Ib(x, y) = white if If(x, y) is above the threshold, and black otherwise,

where If(x, y) is a pixel of the input image and Ib(x, y) is the corresponding pixel of the binarized image; the image is thus separated into foreground and background [1].

Method:
Otsu Technique: Otsu is an often-used global thresholding method. It is based on treating the gray-level intensities present in the image as values to be clustered into two sets, one foreground (black) and one background (white). To carry this out, the algorithm minimizes the weighted sum of the within-class variances of the foreground and background pixels to establish an optimum threshold; this is equivalent to maximizing the between-class scatter. From this a scalar number, K, is returned, which is then used to binarize the image through the thresholding equation above.

4.2.2 Local Thresholding
Local thresholding methods calculate a threshold for each pixel based on the information contained in a neighborhood of the pixel, whereas a global method uses one threshold for the entire image. If a pixel (x, y) in the input image has a gray level higher than the threshold surface evaluated at (x, y), it is set to white, otherwise to black. These approaches are window-based, which means that the local threshold for a pixel is computed from the gray values of the pixels in a window centered at (x, y). Many researchers have proposed techniques to compute the local threshold from the minimum and maximum gray values in each window; others are based on the mean and the standard deviation, as follows:

T = m + k * s

where
• T is the threshold for the central pixel of a rectangular window which is shifted across the image,
• m is the mean of the gray values in the window,
• s is the standard deviation of the gray values in the window,
• k is a constant.

All the methods were implemented and their results assessed qualitatively (visual effects). The methods are explained with their algorithms below.

4.3 Global Method

4.3.1 Otsu Algorithm
Otsu's method is used to automatically perform histogram-shape-based image thresholding, i.e. the reduction of a gray-level image to a binary image. The algorithm assumes that the image to be thresholded contains two classes of pixels, i.e. has a bi-modal histogram (e.g. foreground and background), and then calculates the optimum threshold separating those two classes so that their combined spread (intra-class variance) is minimal.
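The Otsu technique described above, which operates directly on the gray-level histogram and maximizes the between-class variance (equivalently, minimizes the weighted within-class variance), can be sketched as follows. This is a minimal pure-Python sketch; the bimodal toy histogram is invented for illustration.

```python
def otsu_threshold(hist):
    """Return the threshold K maximizing the between-class variance.

    `hist` is a list of pixel counts per gray level; pixels with level <= K
    form one class (background) and the rest the other (foreground)."""
    total = sum(hist)                                # total pixel count
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_k, best_var = 0, -1.0
    w0 = cum = 0.0
    for k in range(len(hist) - 1):
        w0 += hist[k]                                # class-0 weight
        cum += k * hist[k]                           # class-0 intensity sum
        w1 = total - w0                              # class-1 weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum / w0                               # class means
        mu1 = (total_sum - cum) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var_between > best_var:
            best_var, best_k = var_between, k
    return best_k

# Bimodal toy histogram over 8 gray levels: a dark peak and a bright peak.
hist = [10, 30, 10, 0, 0, 8, 25, 7]
k = otsu_threshold(hist)  # lands in the valley between the two peaks
```

Because it only needs the zeroth- and first-order cumulative moments of the histogram, the whole search is a single pass over the gray levels, which is why the method is fast once the histogram is computed.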

4.3.1 Otsu Algorithm
Otsu's method is used to automatically perform histogram shape-based image thresholding, i.e. the reduction of a gray level image to a binary image. The algorithm assumes that the image to be thresholded contains two classes of pixels forming a bi-modal histogram (e.g. foreground and background); it then calculates the optimum threshold separating those two classes so that their combined spread (intra-class variance) is minimal. The extension of the original method to multi-level thresholding is referred to as the Multi Otsu method. Otsu's method is named after Nobuyuki Otsu.

Otsu Threshold Method:
• Based on a very simple idea: find the threshold that minimizes the weighted within-class variance.
• This turns out to be the same as maximizing the between-class variance.
• Operates directly on the gray level histogram [e.g. 256 numbers, p(i)], so it is fast (once the histogram is computed).
Otsu Assumptions:
• The histogram (and the image) are bimodal.
• No use of spatial coherence, nor any other notion of object structure.
• Assumes stationary statistics, but can be modified to be locally adaptive.
• Assumes uniform illumination, so the bimodal brightness behavior arises from object appearance differences only.
Algorithm:
1. Compute the histogram and the probabilities p(i) of each intensity level.
2. Set up the initial class probabilities ωi(0) and class means μi(0).
3. Step through all possible thresholds t = 1, …, maximum intensity:
(a) update ωi and μi;
(b) compute the between-class variance σb²(t).
4. The desired threshold corresponds to the maximum of σb²(t).
5. In other words, Otsu's thresholding method iterates through all the possible threshold values, calculating a measure of spread for the pixel levels on each side of the threshold, i.e. the pixels that fall either in the foreground or in the background. The aim is to find the threshold value where the sum of the foreground and background spreads is at its minimum.
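The steps above can be sketched as one vectorized pass over the 256-bin histogram (an illustrative version, not the paper's MATLAB implementation; the symbols ω, μ and σb² follow the standard Otsu derivation):

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold t that maximizes the between-class variance."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=256).astype(float)
    p = hist / hist.sum()                    # step 1: probabilities p(i)
    omega = np.cumsum(p)                     # class-0 probability w(t)
    mu = np.cumsum(p * np.arange(256))       # first moment up to t
    mu_t = mu[-1]                            # global mean
    denom = omega * (1 - omega)              # zero when one class is empty
    sigma_b2 = np.where(denom > 0,
                        (mu_t * omega - mu) ** 2 / np.where(denom > 0, denom, 1),
                        0.0)                 # between-class variance sigma_b^2(t)
    return int(np.argmax(sigma_b2))          # step 4: argmax over t
```

For a synthetic bi-modal image the returned t falls between the two modes, so `img > t` separates foreground from background.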
4.3.2 Iterative Method
In this method one step is repeated again and again until it reaches a convergence level, as the name "iterative" suggests.
Algorithm:
1. An initial threshold (T) is chosen; this can be done randomly or according to any other method desired.
2. The image is segmented into object and background pixels as described above, creating two sets:
(a) G1 = {f(m,n) : f(m,n) > T} (object pixels)
(b) G2 = {f(m,n) : f(m,n) ≤ T} (background pixels)
(Note: f(m,n) is the value of the pixel located in the mth column, nth row.)
3. The average of each set is computed:
(a) m1 = average value of G1
(b) m2 = average value of G2
4. A new threshold is created that is the average of m1 and m2: T′ = (m1 + m2)/2.
5. Go back to step two, now using the new threshold computed in step four; keep repeating until the new threshold matches the one before it (i.e. until convergence has been reached).
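A minimal sketch of this loop (illustrative, not the paper's code; taking the global mean as the initial threshold and a 0.5 gray-level convergence tolerance are assumptions):

```python
import numpy as np

def iterative_threshold(img, t0=None, eps=0.5):
    """Repeat T' = (mean(G1) + mean(G2)) / 2 until the threshold stops moving."""
    img = img.astype(float)
    t = img.mean() if t0 is None else float(t0)   # step 1: initial threshold
    while True:
        g1 = img[img > t]                         # object pixels
        g2 = img[img <= t]                        # background pixels
        m1 = g1.mean() if g1.size else t
        m2 = g2.mean() if g2.size else t
        t_new = (m1 + m2) / 2                     # step 4
        if abs(t_new - t) < eps:                  # step 5: convergence
            return t_new
        t = t_new
```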

4.4 Local Method

4.4.1 Niblack Method
Niblack's algorithm calculates a pixel-wise threshold by sliding a rectangular window over the gray level image. The computation of the threshold is based on the local mean m and the standard deviation s of all the pixels in the window.
Algorithm:
(a) A simple and efficient method for adaptive thresholding.
(b) The local threshold is set at T(i,j) = m(i,j) + w · s(i,j).
(c) The values for the local mean and standard deviation are calculated over a local M × N window.
(d) The parameters are the weight w and the window size.
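Rule (b) can be sketched with NumPy sliding windows (illustrative; the 15×15 window and the classical Niblack weight w = −0.2 are assumed defaults, since the section does not fix them):

```python
import numpy as np

def niblack_binarize(img, win=15, w=-0.2):
    """Threshold each pixel at T(i,j) = m(i,j) + w * s(i,j) over a win x win window."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    view = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    m = view.mean(axis=(-2, -1))     # local mean m(i,j)
    s = view.std(axis=(-2, -1))      # local standard deviation s(i,j)
    return img > m + w * s           # True marks foreground pixels
```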
4.4.2 Sauvola Method Algorithm
The Sauvola method, introduced by Jaakko Sauvola, is an efficient image binarization technique. The Sauvola method for local binarization does quite well, and it can be implemented with tiling for efficiency. The basic idea behind Sauvola is that if there is a lot of local contrast, the threshold should be chosen close to the mean value, whereas if there is very little contrast, the threshold should be chosen below the mean, by an amount proportional to the normalized local standard deviation. Sauvola is implemented efficiently by using "integral image" accumulators for the mean and the mean-squared pixel values. The latter requires 64-bit floating point arrays, which are expensive for large images; consequently, a tiled version is used. This gives results identical to the non-tiled method, but only requires the accumulator arrays to be in memory for each tile separately. For document image binarization, Sauvola proposed a method that first performs a rapid classification of the local contents of a page into background, pictures and text. Two different approaches are then applied to define a threshold for each pixel: a soft decision method (SDM) for background and pictures, and a specialized text binarization method (TBM) for textual and line drawing areas. The SDM includes noise filtering and signal tracking capabilities, while the TBM is used to separate text components from the background in bad conditions caused by uneven illumination or noise. Finally, the outcomes of these algorithms are combined. The new threshold formula is:

T = m · (1 − k · (1 − s/R))

where
• T is the threshold for the central pixel of a rectangular window which is shifted across the image,
• m is the mean,
• s is the standard deviation of the gray values in the window,
• k is a constant,
• R is the dynamic range of the standard deviation.

Algorithm:
Step 1: Create a window of size Sx × Sy and traverse it over the original X × Y image, with the window centered on the pixel (x, y).
Step 2: Compute the local mean m(x, y) and the local standard deviation s(x, y).
Step 3: Compute the threshold value by the formula T = m · (1 − k · (1 − s/R)), with the values k = −0.2 and R = 31.
Step 4: Repeat the above two steps for each local window until the whole image has been traversed.
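The Sauvola threshold rule can be sketched as follows (illustrative; the defaults k = 0.2 and R = 128 follow common practice for 8-bit images, while the experiments in this paper report k = −0.2 and R = 31, which can be passed in instead):

```python
import numpy as np

def sauvola_binarize(img, win=15, k=0.2, r=128.0):
    """Threshold each pixel at T = m * (1 - k * (1 - s/R)) over a win x win window."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    view = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    m = view.mean(axis=(-2, -1))          # local mean m(x, y)
    s = view.std(axis=(-2, -1))           # local standard deviation s(x, y)
    t = m * (1 - k * (1 - s / r))         # Sauvola threshold surface
    return img > t                        # True marks pixels above threshold
```

With positive k, low-contrast regions (small s) get a threshold pushed below the local mean, which is the behavior described in the text.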


4.4.3 Localotsu Method
It is the same as the Otsu method, with one difference: it is applied to parts of the image, and the threshold value is taken locally. We divide the image into small windows and threshold each window separately.
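This tiling idea can be sketched as follows (illustrative; the 64-pixel tile size is an assumption, and a global Otsu routine is repeated here so the block stays self-contained):

```python
import numpy as np

def otsu_threshold(img):
    """Global Otsu threshold: maximize the between-class variance."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)
    mu = np.cumsum(p * np.arange(256))
    denom = omega * (1 - omega)
    sigma_b2 = np.where(denom > 0,
                        (mu[-1] * omega - mu) ** 2 / np.where(denom > 0, denom, 1),
                        0.0)
    return int(np.argmax(sigma_b2))

def local_otsu_binarize(img, win=64):
    """Split the image into win x win tiles and apply Otsu inside each tile."""
    out = np.zeros(img.shape, dtype=bool)
    for y in range(0, img.shape[0], win):
        for x in range(0, img.shape[1], win):
            tile = img[y:y + win, x:x + win]
            out[y:y + win, x:x + win] = tile > otsu_threshold(tile)
    return out
```

Each tile gets its own threshold, so regions with different illumination are binarized independently.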
5. RESULTS ANALYSIS
All the algorithms described above are implemented and datasets are collected. To evaluate the performance of the algorithms, the clean images are compared with the binary images produced by the thresholding algorithms.

5.1 Results Working:
• Read an image.
• Convert RGB to grayscale.
• Apply filters.
• Apply techniques.

5.1.1 Read an Image
In this step we read the input image, i.e. the image we want to binarize. Here we use the imread() function of MATLAB, an inbuilt function.
[Fig.1.5: MATLAB window reading an image.]
[Fig.1.6: Original Image.]

5.1.2 RGB to Grayscale Image:
[Fig.1.7: Grayscale Image.]

5.1.3 Applying Filters:
[Fig.1.8: Filtered Image.]

5.1.4 Applying Techniques:
[Fig.1.9: Binarized Image, under the Otsu, Localotsu, Iterative, Niblack and Sauvola methods.]

5.2 Datasets Taken:
For testing we have taken 2 cases:
1. Scanned images
2. Digital images
5.2.1 For scanned images we have taken the following types of images:
A. Book covers
B. Certificates
C. Hand written papers
D. Newspaper
5.2.2 For digital images we have taken 2 types of images:
A. Boarding
B. Pamphlets

5.3 Results for scanned images:
5.3.1 Book Covers:
[Image panels: Original Image, Otsu Method, Localotsu Method, Iterative Method, Ni-black Method, Sauvola Method]
5.3.2 Certificates:
[Image panels: Original Image, Otsu Method, Localotsu Method, Iterative Method, Ni-black Method, Sauvola Method]
5.3.3 Newspaper:
[Image panels: Original Image, Otsu Method, Localotsu Method, Iterative Method, Ni-black Method, Sauvola Method]


5.4 Results for camera images:
5.4.1 Handwritten paper:
[Image panels: Original Image, Otsu Method, Localotsu Method, Iterative Method, Niblack Method, Sauvola Method]
5.4.2 A single image at different resolutions:
[Image panels: Original Image, Otsu Method, Localotsu Method, Iterative Method, Niblack Method, Sauvola Method, shown at 1280*960, 640*480 and 320*240]

Image Set \ Technique Used        | Otsu   | Localotsu | Iterative | Ni-black | Sauvola
Plain Text                        | Better | Best      | Bad       | Poor     | Bad
Newspaper Clip                    | Good   | Best      | Good      | Better   | Bad
Documents                         | Best   | Good      | Better    | Good     | Bad
Book Cover                        | Best   | Better    | Best      | Good     | Bad
Single image at different
resolution (1280*960)             | Best   | Good      | Better    | Good     | Bad

5.5 Discussion
This result table was created by our team on a visual basis. We take as a baseline that "the image which recognizes the maximum text from the original image is considered the best image", and further rankings are decided on sharpness of edges, clarity of text, and so on.

6. CONCLUSION & FUTURE SCOPE
It is evident that no algorithm works well for all types of document images. Sauvola gives good results for document images with dark spots and at low illumination. Niblack captures maximum noise along with the text detail. Otsu, one of the oldest binarization techniques, gives the best overall results for document images. Document images with non-uniform brightness require binarization methods with delicate local thresholds that must be determined according to various conditions. There is no effect of resolution on binarization.
We conclude with the advantages and limitations of these thresholding methods:
Advantages
• Simple to implement.
• Low computational complexity.
Limitations
• Only useful when objects have constant gray values.
• Uneven illumination requires compensation.
• Throws away spatial information.
• The global nature of histograms limits application to complex images.
• Quite often does not have spatial coherence.

Further research can focus on the challenges that emerge from the binarization of historical manuscripts, such as broken edges, and on the smoothing of these manuscripts so that they can be useful for further processing in OCR.

REFERENCES
[1] Benjamin Perret, Sébastien Lefèvre, Christophe Collet, and Éric Slezak, "Hyperconnections and Hierarchical Representations for Grayscale and Multiband Image Processing", IEEE, 2011.
[2] B. Gatos, I. Pratikakis and S.J. Perantonis, "Adaptive Degraded Document Image Binarization", Pattern Recognition, Vol. 39(3), PP: 317 - 327, 2006.
[3] B. Gatos, I. Pratikakis and S.J. Perantonis, "An adaptive binarization technique for low quality historical documents", IARP Workshop on Document Analysis


Systems, Lecture Notes in Computer Science (3163), PP: 102 - 113, 2004.
[4] B. Gatos, I. Pratikakis and S.J. Perantonis, "Efficient Binarization Of Historical And Degraded Document Images", IEEE Transactions on Image Processing, Vol. 7, PP: 447 - 454, 2008.
[5] B. Gatos, I. Pratikakis and S.J. Perantonis, "Improve Document Image Binarization by Using a Combination of Multiple Binarization Techniques and Adapted Edge Information", Proceedings of the 19th International Conference on Pattern Recognition, PP: 1 - 4, 2008.
[6] Bolan Su, et al., "Robust Document Image Binarization Technique for Degraded Document Images", IEEE Transactions on Image Processing, Vol. 22, No. 4, April 2013.
[7] Chang Moonsoo, Kang Sunmee, Rho Woosik, et al., "Improved Binarization Algorithm for Document Image by Histogram and Edge Detection", Proceedings, Vol. 2, PP: 636 - 639, 1995.
[8] Due Trier, Torfinn Taxt, "Evaluation of Binarization Methods for Document Images", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17(3), PP: 312 - 315, 1995.
[9] Faisal Shafait, Daniel Keysers, Thomas M. Breuel, "Efficient Implementation Of Local Adaptive Thresholding Techniques Using Integral Images", Proceedings of the 15th Document Recognition and Retrieval Conference, Proc. SPIE, Vol. 6815, 2008.
[10] Fung, C.C. and Chamchong, R., "A Review of Evaluation of Optimal Binarization Technique for Character Segmentation in Historical Manuscripts", 3rd International Conference on Knowledge Discovery and Data Mining, Phuket, IEEE, PP: 236 - 240, 2010.
[11] Jaap Oosterbroek, Marco A. Wiering, Michael H.F. Wilkinson, "Using Max-Trees with Alternative Connectivity Classes in Historical Document Processing", 2012.
[12] Jagroop Kaur, Rajiv Mahajan, "A Review of Degraded Document Image Binarization Techniques", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 3, Issue 5, 2014.
[13] J. Bernsen, "Dynamic Thresholding of Grey-Level Images", 8th International Conference on Pattern Recognition, Paris, France, ICPR, PP: 1251 - 1255, 1986.
[14] J. He, Q.D.M. Do, A.C. Downton, and J.H. Kim, "A Comparison of Binarization Methods for Historical Archive Documents", International Conference on Document Analysis and Recognition, PP: 538 - 542, 2005.
[15] Jiang Duan, Mengyang Zhang, Qing Li, "A Multi-stage Adaptive Binarization Scheme for Document Images", Proceedings of the Second International Joint Conference on Computational Sciences and Optimization (CSO), Sanya, Hainan, China, IEEE, Vol. 1, PP: 867 - 869, 2009.
[16] J. Kittler and J. Illingworth, "Minimum Error Thresholding", Pattern Recognition, Vol. 19(1), PP: 41 - 47, 1986.
[17] K. Ntirogiannis, B. Gatos and I. Pratikakis, "A Modified Adaptive Logical Level Binarization Technique For Historical Document Images", 10th International Conference on Document Analysis and Recognition (ICDAR), Barcelona, Spain, IEEE, PP: 1171 - 1175, 2009.
[18] Likforman-Sulem, L., Darbon, J., and Barney Smith, E. H., "Pre-processing of Degraded Printed Documents by Non-Local Means and Total Variation", In the proceedings of the 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, PP: 758 - 762, 2009.
[19] Manju Joseph, Jijina K.P, "Simple and Efficient Document Image Binarization Technique for Degraded Document Images", IJSR - International Journal of Scientific Research, Vol. 3, Issue 5, 2014.
[20] Naveed Bin Rais, M. Shehzad Hanif, Imtiaz A. Taj, "Adaptive Thresholding Technique for Document Image Analysis", IEEE International Multitopic Conference (INMIC), IEEE, PP: 61 - 66, 2005.
[21] Nobuyuki Otsu, "A threshold selection method from gray-level histograms", IEEE Trans. Sys., Man & Cybernetics, Vol. 9(1), PP: 62 - 66, 1979.
[22] Ntogas Nikolaos, Ventzas Dimitrios, "A Binarization Method for Historical Manuscripts", 12th WSEAS International Conference on Communications, Heraklion, Greece, PP: 23 - 25, 2008.
[23] S. P. Godse, Samadhan Nimbhore, Sujit Shitole, Dinesh Katke, Pradeep Kasar, "Recovery of badly degraded Document images using Binarization Technique", International Journal of Scientific and Research Publications, Vol. 4, Issue 5, 2014.
[24] Rachid Hedjam, Reza Farrahi Moghaddam and Mohamed Cheriet, "A spatially adaptive statistical method for the binarization of historical manuscripts and degraded document images", Preprint submitted to Elsevier, 2012.
[25] Rowayda A. Sadek, "An Improved MRI Segmentation for Atrophy Assessment", IJCSI International Journal of Computer Science Issues, Vol. 9, Issue 3, No 2, 2012.
[26] S. Kopf, T. Haenselmann, and W. Effelsberg, "Robust Character Recognition In Low-Resolution Images And Videos", Technical Report, Department for Mathematics and Computer Science, University of Mannheim, Reihe Informatik, TR-05-002, 2005.
[27] Sudipta Roy, Ayan Dey, Kingshuk Chatterjee, Samir K. Bandyopadhyay, "A New efficient Binarization Method for MRI of Brain Image", Signal & Image Processing: An International Journal (SIPIJ), Vol. 3, No. 6, 2012.
[28] Valizadeh, M., Komeili, M., Armanfard, N., Kabir, E., "A Contrast Independent Algorithm for Adaptive Binarization of Degraded Document Images", 14th International CSI Computer Conference, Art. No. 5349339, PP: 127 - 132, 2009.
[29] Valizadeh, M., Komeili, M., Armanfard, N., Kabir, E., "Degraded Document Image Binarization Based On Combination of Two Complementary Algorithms", International Conference on Advances in Computational Tools for Engineering Applications, Art. No. 5227898, PP: 595 - 599, 2009.
[30] IEEE Trans. PAMI, Vol. 19(5), PP: 540 - 544, 1997.
[31] You Yang, "OCR Oriented Binarization Method of Document Image", Image and Signal Processing, IEEE, Vol. 4, PP: 622 - 625, 2008.
[32] Yung-Hsiang Chiu, Kuo-Liang Chung, Wei-Ning Yang, Yong-Huai Huang, Chi-Huang Liao, "Parameter-free based two-stage method for binarizing degraded document images", Elsevier, 2012.


Page Ranking Algorithms for Web Mining: A Review

Charanjit Singh, Research Scholar, Guru Kashi University, Talwandi Sabo, Sehgal_cs@yahoo.com
Dr. S. K. Kautish, Professor & Dean Engineering, Guru Kashi University, Talwandi Sabo, dr.skautish@gmail.com

ABSTRACT
With the growth of the WWW, it is becoming very difficult for web search engines to provide relevant information to the users. Web mining, a data mining technique, is used to extract hidden information from web documents and services. Depending on the kind of information that is mined, web mining can be divided into three types: web content mining, web structure mining and web usage mining. The main application of web mining can be seen in search engines. In order to rank their search results, they use various page ranking algorithms that are based either on the content of the web pages or on the link structure of the WWW. In this paper, a survey of page ranking algorithms based on both the content and the link structure of web pages is presented, and a comparison of some important algorithms in the context of performance is carried out.

Keywords
WWW, Data mining, Web mining, Search engine, Page ranking.

1. INTRODUCTION
The World Wide Web is a popular segment of the Internet that contains billions of documents called web pages; these documents can contain text, images, audio, video and metadata. With the rapid growth of information sources on the web, it is becoming difficult to manage the information and satisfy the user needs. To retrieve the required information from the web, numerous web search engines are used. Some commonly used search engines are Google, MSN and Yahoo Search.

A web search engine is a tool used to enable document search in the web with respect to specified keywords; it returns a list of documents where the keywords were found. Every search engine performs various tasks based on its respective architecture to provide relevant information to the users. The basic components of a web search engine are: the (user) interface, parser, web crawler, database and ranking engine (see Fig. 1).

[Figure 1: Search Engine Architecture. The Web User issues a Query and receives Results through the Query Interface; the Query Processor consults the Index built by the Indexer from pages fetched by the Web Crawler from the Web.]

Web search engines work by sending out a spider or web crawler to visit and download the web pages of a website and retrieve the needed information from them. Using the information gathered by the crawler, the search engine determines what the site is about and indexes the information. Before presenting the pages to the user, the search engine uses ranking algorithms in order to sort the results to be displayed; that way the user gets the most important and useful results first.

In this review paper, a survey of various content based and link based page ranking algorithms has been done and a comparison is carried out. The paper is divided into different sections: section 1 introduces the concept of web search engines and explains their working; section 2 presents web mining concepts, categories and technologies; section 3 presents a detailed overview of some page ranking algorithms; section 4 compares these algorithms in the context of performance; finally, section 5 concludes this review and discusses some future directions.

2. WEB MINING
Web mining, an application of data mining techniques, is used to automatically discover and extract information from web data. Web mining data can be:
• Web Content data: text, images, records, etc.
• Web Structure data: hyperlinks, tags, etc.
• Web Usage data: HTTP logs, app server logs, etc.
Further, web mining can be divided into three categories [1], namely web content mining, web structure mining and web usage mining, as shown in Fig. 2.

2.1 Web Content Mining (WCM): WCM is the process of extracting useful information from the contents of web documents. Content data corresponds to the collection of facts a web page was designed to convey to the users. It may consist of text, images, audio, video, or structured records such as lists and tables. It can be applied on web pages themselves or on the result pages obtained from a search engine. Research activities in this field also involve using techniques from other disciplines, such as Information Retrieval (IR) and Natural Language Processing (NLP). Web content mining is further


divided into Web page content mining and Search result mining. Web page content mining is the traditional searching of web pages by content, while search result mining is a further search of the pages found in a previous search.

[Figure 2: Taxonomy of Web Mining. Web Mining divides into Web Content Mining (subdivided into Web Page Content Mining and Search Result Mining, each with an Agent Based and a Database Approach), Web Structure Mining, and Web Usage Mining (General access Pattern tracking and Customized Usage tracking).]

2.2 Web Structure Mining (WSM): Web Structure Mining is the process of discovering structure information from the web using graph theory. This type of mining can be performed either at the document level or at the hyperlink level, i.e. intra-page and inter-page. The structure of a typical web graph consists of web pages as nodes and hyperlinks as edges connecting related pages.

2.3 Web Usage Mining (WUM): Web usage mining is used to discover significant patterns from data generated by client-server transactions on one or more web localities. It can be further categorized into finding general access patterns or finding patterns matching specified parameters. Business intelligence, site improvement and modification, web personalization and ranking of pages are application areas of the mentioned categories of web mining. Various ranking algorithms are used by search engines in order to sort the results to be displayed and to provide relevant information that caters to the users' needs. Various ranking algorithms have been developed; a few of them are discussed in the next section: Page Rank, Weighted Page Rank, HITS and SimRank [3, 4].

3. PAGE RANKING ALGORITHMS
With the swift development of network techniques, huge information resources glut the whole web world, and the web search engine is increasingly becoming the leading information retrieval approach. Providing relevant information to the users is the primary goal of search engines. As a result, various page ranking algorithms are used to rank the query results of web pages in an effective and efficient fashion. Some algorithms rely only on the link structure of the documents, i.e. their popularity scores (web structure mining), whereas others look at the content of the documents (web content mining), while some use a combination of both, i.e. they use the links as well as the content of a document to assign it a rank value. Some commonly used page ranking algorithms are discussed as follows.

3.1 Page Rank Algorithm
The Page Rank algorithm was proposed by Sergey Brin and Larry Page [4, 5]. Page Rank was named after Larry Page, a cofounder of the Google search engine, and is used by the Google [6] web search engine to rank websites in its search engine results. Page Rank measures the importance of website pages by counting the number and quality of links to a page. The algorithm states that the Page Rank of a page is defined recursively and depends on the number and Page Rank metric of all pages that link to it (incoming links). If a page has some important incoming links, then its outgoing links to other pages also become important. A page that is linked to by many pages with high Page Rank receives a high rank itself. The Page Rank algorithm considers more than 25 billion web pages on the WWW to assign a rank score [6]. A simplified version [4] of Page Rank is defined in Eq. 1:

PR(u) = C · Σ_{v∈B(u)} PR(v)/N_v    (1)

Here u represents a web page, B(u) is the set of pages that point to u, PR(u) and PR(v) are the rank scores of pages u and v respectively, N_v denotes the number of outgoing links of page v, and C is a factor used for normalization. In Page Rank, the rank score of a page p is evenly divided among its outgoing links; the values assigned to the outgoing links of page p are in turn used to calculate the ranks of the pages to which page p points, as shown in Fig. 3.

[Figure 3: Distribution of page ranks]


Later the algorithm was modified, observing that not all users follow the direct links on the WWW. The modified version is given in Eq. 2:

PR(u) = (1 − d) + d · Σ_{v∈B(u)} PR(v)/N_v    (2)

Here d is a damping factor that is usually set to 0.85; it can be thought of as the probability of a user following the links, with (1 − d) as the page rank contribution from non-directly linked pages.
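Eq. 2 can be iterated to a fixed point. The sketch below is illustrative (the adjacency-dict representation, the toy three-page graph and the 50-iteration budget are assumptions, not data from the paper):

```python
def pagerank(links, d=0.85, iters=50):
    """Iterate PR(u) = (1-d) + d * sum(PR(v)/N_v for v in B(u))  (Eq. 2)."""
    pages = sorted({p for p in links} | {q for qs in links.values() for q in qs})
    pr = {p: 1.0 for p in pages}
    out_deg = {p: len(links.get(p, ())) for p in pages}
    for _ in range(iters):
        incoming = {p: 0.0 for p in pages}
        for v, outs in links.items():
            for u in outs:
                incoming[u] += pr[v] / out_deg[v]   # v donates PR(v)/N_v to u
        pr = {p: (1 - d) + d * incoming[p] for p in pages}
    return pr

# Toy graph: A and B both link to C, C links back to A.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
```

In this toy graph C, which has two incoming links, ends up with a higher score than A, and B (with no incoming links) receives only the baseline 1 − d.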
3.2 Weighted Page Rank Algorithm
This algorithm was proposed by Wenpu Xing and Ali Ghorbani [9] as an extension of the PageRank algorithm. The algorithm assigns rank values to pages according to their importance or popularity rather than dividing the rank evenly. The popularity is expressed as weight values for the incoming and outgoing links, denoted W^in(v,u) and W^out(v,u) respectively. W^in(v,u) is the weight of link (v,u), calculated from the number of incoming links of page u and the number of incoming links of all reference (outgoing linked) pages of page v:

W^in_(v,u) = I_u / Σ_{p∈R(v)} I_p    (3)

where I_u and I_p represent the number of incoming links of page u and page p, and R(v) is the reference page list of page v. W^out(v,u) is the weight of link (v,u), calculated from the number of outgoing links of page u and the number of outgoing links of all the reference pages of page v:

W^out_(v,u) = O_u / Σ_{p∈R(v)} O_p    (4)

Here O_u and O_p represent the number of outgoing links of page u and page p, respectively. The weighted Page Rank is then given by the formula:

WPR(u) = (1 − d) + d · Σ_{v∈B(u)} WPR(v) · W^in_(v,u) · W^out_(v,u)    (5)
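Eqs. 3-5 can be sketched as follows (illustrative, not the authors' implementation; the graph representation and iteration budget are assumptions):

```python
def weighted_pagerank(links, d=0.85, iters=50):
    """WPR(u) = (1-d) + d * sum(WPR(v) * Win(v,u) * Wout(v,u))  (Eq. 5)."""
    pages = sorted({p for p in links} | {q for qs in links.values() for q in qs})
    links = {p: list(links.get(p, [])) for p in pages}
    inlinks = {p: [v for v in pages if p in links[v]] for p in pages}  # B(p)
    I = {p: len(inlinks[p]) for p in pages}     # incoming-link counts
    O = {p: len(links[p]) for p in pages}       # outgoing-link counts

    def w_in(v, u):    # Eq. 3: u's share of incoming links among R(v)
        s = sum(I[p] for p in links[v])
        return I[u] / s if s else 0.0

    def w_out(v, u):   # Eq. 4: u's share of outgoing links among R(v)
        s = sum(O[p] for p in links[v])
        return O[u] / s if s else 0.0

    wpr = {p: 1.0 for p in pages}
    for _ in range(iters):
        wpr = {u: (1 - d) + d * sum(wpr[v] * w_in(v, u) * w_out(v, u)
                                    for v in inlinks[u])
               for u in pages}
    return wpr
```

Unlike plain PageRank, a popular target page receives a larger share of its referrer's score instead of an equal split.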
3.3 HITS
This algorithm was developed by Jon Kleinberg [7] and is called Hyperlink-Induced Topic Search (HITS) [8]. It distinguishes two forms of web pages, called hubs and authorities. Hubs are pages that act as resource lists; authorities are pages with important content. A fine hub page for a subject points to many authoritative pages on that context, and a good authority page is pointed to by many fine hub pages on the same subject. HITS assumes that if the author of page p provides a link to page q, then p confers some authority on page q. Kleinberg states that a page may be a good hub and a good authority at the same time. The HITS algorithm considers the WWW as a directed graph G(V,E), where V is a set of vertices representing pages and E is a set of edges that correspond to links. Fig. 4 shows hubs and authorities in the web.

[Figure 4: Hubs and Authorities, with hub pages linking to authority pages.]

3.3.1 The HITS algorithm works in two major steps:
i. Sampling step: In this step, a set of relevant pages for the given query is collected.
ii. Iterative step: This step finds hubs and authorities using the output of the sampling step. The scores of hubs and authorities are calculated as follows:

H_p = Σ_{q∈I(p)} A_q    (6)

A_p = Σ_{q∈B(p)} H_q    (7)

where H_q and A_q represent the hub score and authority score of a page, and I(p) and B(p) denote the sets of reference and referrer pages of page p. A page's authority weight is proportional to the sum of the hub weights of the pages that point to it, and its hub weight to the sum of the authority weights of the pages it points to.

3.3.2 Constraints with the HITS algorithm
The following are the constraints of the HITS algorithm:
i. Hubs and Authorities: It is not simple to distinguish between hubs and authorities, since many sites are hubs as well as authorities.
ii. Topic drift: Sometimes HITS may not produce the most relevant documents for the user's query because of equivalent weights.
iii. Automatically generated links: Some links are automatically generated and represent no human judgment, but HITS gives them equal importance.
iv. Efficiency: The performance of the HITS algorithm is not efficient in real time.

HITS was used in a prototype search engine called Clever for an IBM research project. Because of the above constraints, HITS could not be implemented in a real-time search engine.
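The iterative step (Eqs. 6-7) can be sketched as follows (illustrative; the per-step normalization and the toy graph are assumptions the text does not specify):

```python
import math

def hits(links, iters=50):
    """Alternate A(p) = sum H(q), q in B(p)  (Eq. 7) and H(p) = sum A(q), q in I(p)  (Eq. 6)."""
    pages = sorted({p for p in links} | {q for qs in links.values() for q in qs})
    links = {p: list(links.get(p, [])) for p in pages}
    back = {p: [v for v in pages if p in links[v]] for p in pages}    # B(p)
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iters):
        auth = {p: sum(hub[q] for q in back[p]) for p in pages}       # Eq. 7
        hub = {p: sum(auth[q] for q in links[p]) for p in pages}      # Eq. 6
        na = math.sqrt(sum(a * a for a in auth.values())) or 1.0      # keep the
        nh = math.sqrt(sum(h * h for h in hub.values())) or 1.0       # scores bounded
        auth = {p: a / na for p, a in auth.items()}
        hub = {p: h / nh for p, h in hub.items()}
    return hub, auth

# Toy graph: two "hub" pages both point at one "authority" page.
hub, auth = hits({"H1": ["A"], "H2": ["A"], "A": []})
```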
3.4 SimRank
SimRank [10] is a page ranking algorithm based on a similarity measure from the vector space model. SimRank is used in order to rank the query results of web pages in an effective and efficient manner. Normally, the traditional Page


Rank algorithm employs only the link relations among pages to compute the rank of each page, but the content of each page cannot be ignored completely. Actually, the accuracy of page scoring greatly depends on the content of the page. Therefore, the SimRank algorithm is used to provide the most relevant information to the users. To calculate the score of web pages in SimRank, a page in the vector space model is represented as a weight vector, in which each component weight is computed based on some variation of the TF (Term Frequency) or TF-IDF (Term Frequency-Inverse Document Frequency) scheme, as follows:

3.4.1 TF scheme: In the TF scheme, the weight of a term ti in page dj is the number of times that ti appears in document dj, denoted fij. The following normalization approach is applied [4]:

tf_ij = f_ij / max{f_1j, f_2j, …, f_|V|j}    (8)

where fij is the frequency count of term ti in page dj and |V| is the number of terms in the page. The disadvantage of this scheme is that it does not consider the case where a term appears in several pages, which limits its application.

3.4.2 TF-IDF scheme: The inverse document frequency (denoted idfi) of term ti is computed by [4]:

idf_i = log(N / df_i)    (9)

where N is the total number of pages in a web database and dfi is the number of pages in which term ti appears at least once. The term weight is then computed by:

W_ij = tf_ij × idf_i    (10)

Note that the TF-IDF scheme is based on the intuition that if a term appears in several pages, it is not important. The SimRank algorithm is based on this similarity measure for computing the rank of each page. The main content of a crawled page contains two parts: title and body. The SimRank algorithm works with two distinct weight values that are assigned to the title and the body of a page, respectively. The formula for calculating the SimRank is as follows [10]:

SimRank(p_j) = tconst × W_ij^title + bconst × W_ij^body    (11)

where pj can be denoted as (w1j, w2j, …, wmj), Wij is the term weight, and 'tconst' and 'bconst' are constants between 0.1 and 1.
[10] Li, C., Han, J., He, G., Jin, X., Sun, Y., Yu, Y., Wu, T.,
4. COMPARISON OF VARIOUS 2010. Fast Computation of SimRank for Static and
Dynamic Information Networks. Published in ACM, Print
ALGORITHMS ISBN No: 978-1-60558-9045-9, on 22-26 March 2010.
On the basis of literature analysis, a comparison of certain Web
Page Ranking Algorithms is shown in Table 1. The comparison
is performed on the basis of some vaults such as Mining
technique use, Methodology, Input parameters, Relevancy,
Working levels, Quality of results, Importance and Limitations.
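As an illustrative sketch of equations (9)-(11): the toy three-page corpus below and the constants tconst = 0.7, bconst = 0.3 are hypothetical assumptions, not data from the paper.

```python
import math

# Hypothetical toy corpus: each crawled page has a title and a body term list.
pages = {
    "d1": {"title": ["web", "mining"], "body": ["web", "data", "mining", "web"]},
    "d2": {"title": ["page", "rank"], "body": ["rank", "web", "graph"]},
    "d3": {"title": ["data"], "body": ["data", "storage"]},
}

def idf(term):
    """Equation (9): idf_i = log(N / df_i)."""
    N = len(pages)
    df = sum(1 for p in pages.values() if term in p["title"] + p["body"])
    return math.log(N / df)

def weight(term, field_terms):
    """Equation (10): W_ij = tf_ij * idf_i, with tf_ij the raw count in the field."""
    return field_terms.count(term) * idf(term)

def simrank_score(term, page, tconst=0.7, bconst=0.3):
    """Equation (11): tconst * W_title + bconst * W_body."""
    return tconst * weight(term, page["title"]) + bconst * weight(term, page["body"])

print(round(simrank_score("web", pages["d1"]), 4))  # → 0.5271
```

A term such as "data", which occurs in two of the three pages, gets a lower idf than "storage", which occurs in only one, matching the intuition stated above.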
409
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
Table 1. Comparison of Page Ranking Algorithms

Criterion              | PageRank                                                                                    | Weighted PageRank                                                                                   | SimRank                                                        | HITS
Mining technique used  | Web Structure Mining                                                                        | Web Structure Mining                                                                                | Web Content Mining                                             | Web Structure Mining, Web Content Mining
Description            | Computes scores at indexing time, not at query time; results are sorted by page importance. | Computes scores at indexing time with an unequal distribution of score; pages sorted by importance. | Computes scores at query time; results calculated dynamically. | Computes hub and authority scores of n highly relevant pages on the fly.
I/P parameters         | Backlinks                                                                                   | Backlinks, forward links                                                                            | Content                                                        | Backlinks, forward links, content
Working levels         | N*                                                                                          | 1                                                                                                   | 1                                                              | <N
Relevancy              | Less                                                                                        | Less (higher than PR)                                                                               | More                                                           | More
Importance             | More                                                                                        | More                                                                                                | Less                                                           | Less
Quality of results     | Medium                                                                                      | Higher than PR                                                                                      | Approx. equal to WPR                                           | Less than PR
Limitations            | Query independent                                                                           | Query independent                                                                                   | Importance of page links is totally ignored                    | Topic drift and efficiency problems
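As a companion to the comparison above, a minimal power-iteration sketch of the PageRank scheme of [4]; the three-page link graph and the damping factor 0.85 are illustrative assumptions, not data from the survey.

```python
# Hypothetical link graph: page -> list of pages it links to.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
d = 0.85          # damping factor commonly used with PageRank
N = len(links)
rank = {p: 1.0 / N for p in links}

# Power iteration: each page distributes its score over its forward links,
# so scores depend only on link structure (query independent, as in Table 1).
for _ in range(50):
    new = {}
    for p in links:
        incoming = sum(rank[q] / len(links[q]) for q in links if p in links[q])
        new[p] = (1 - d) / N + d * incoming
    rank = new

print(sorted(rank, key=rank.get, reverse=True))  # → ['C', 'A', 'B']
```

Page C ranks first because it receives links from both A and B, while B receives only half of A's score, illustrating why PageRank favors well-linked pages regardless of the query.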
Comparative Analysis of Various Data Mining Classification Algorithms

Yogesh Kumar (Bhai Gurdas College of Engg. & Technology, Sangrur, yksingla37@gmail.com)
Indu Bala (Bhai Gurdas College of Engg. & Technology, Sangrur, Indu.bala30@gmail.com)
ABSTRACT
Data mining is the process of digging through and analyzing various sets of data and then extracting their meaning. Classification is a data mining method used to predict the class of objects whose class label is not known. There are many classification mechanisms used in data mining, such as K-Nearest Neighbor (KNN), Bayesian networks, Cross-Validated Parameter Selection (CVPS), the Naive Bayes Multinomial Updateable (NBMU) algorithm, fuzzy logic, support vector machines, etc. This paper presents a comparison of four classification techniques: K-Nearest Neighbor, User Classifier, Cross-Validated Parameter Selection and Naive Bayes Multinomial Updateable. The goal of this research is to determine the best of these four techniques on a given data set and to provide a useful comparison result for further analysis or future development.

Keywords
KNN, CVPS, NBMU, User Classifier

1. INTRODUCTION
The data mining concept is growing very fast in popularity. It is a technology involving methods at the intersection of artificial intelligence, machine learning and database systems; the main goal of the data mining process is to extract information from a large data set into a form which is understandable for further use. Classification is a data mining technique based on machine learning [1]. Basically, it is used to classify each data item in a set into one of a predefined set of classes or groups. The classification technique makes use of mathematical techniques such as decision trees, linear programming, neural networks, etc. In classification, we build software that can learn how to classify data items into groups. This research conducts a comparison study between a number of available data mining software tools based on their ability to classify data correctly and accurately. The accuracy measure, which represents the percentage of correctly classified instances, is used for judging the performance of the selected tools and software. The rest of the paper is organized as follows: Section 2 summarizes related work on data mining and data classification. Section 3 summarizes the various data classification techniques used. Section 4 provides a general description of the tools and software under test and the dataset used. Section 5 reports experimental results and compares the results of the different algorithms. Finally, I close the paper with a summary and an outlook on future work.

2. LITERATURE SURVEY
Oliver, et al. (2012) proposed "Introduction to k Nearest Neighbour Classification and Condensed Nearest Neighbour Data Reduction". The k Nearest Neighbors (KNN) [2] algorithm uses a database in which the data points are separated into several classes to predict the classification of a new sample point. The process of choosing the classification of the new observation is known as the classification problem, and there are various ways to tackle it. Here we consider choosing the class of the new observation based on the classes of the observations in the database to which it is most "similar".

Thair, et al. (2009) suggested that classification is a data mining or machine learning technique used to predict group membership for data instances. He presents the basic classification mechanisms and several major kinds of classification techniques, including decision tree induction, Bayesian networks, the k-nearest neighbor classifier [3], case-based reasoning, genetic algorithms and fuzzy logic techniques. The goal of that survey is to provide a comprehensive review of different classification mechanisms in data mining.

Delveen, et al. (2013) proposed that the data mining concept is growing very fast in popularity; it is a technology involving a large number of methods at the intersection of machine learning, database systems and statistics, and the main goal of a data mining method is to extract information from a large data set into a form which is understandable for further use. Some data mining algorithms are used to give solutions to various classification problems in databases. A comparison among three classification algorithms is studied: the K-Nearest Neighbor classifier, decision tree [4] and Bayesian network algorithms. He demonstrates the strength and accuracy of each algorithm for classification in terms of performance efficiency and the time complexity required.

Sohil, et al. (2013) suggested that there are several methods of data mining, like classification, clustering, association rules, outlier analysis, etc., that are used for uncovering hidden patterns in the data, and that various algorithms for these techniques have been developed by various researchers. He examines and investigates various classification techniques, like decision trees, Naive Bayes, k-Nearest Neighbor [5], feed-forward neural networks and support vector machines, to identify the best-fit methods among them. All the above-mentioned algorithms were implemented using WEKA, which consists of a collection of machine learning algorithms for data mining tasks.

Abdullah, et al. proposed that huge amounts of data and information are available to everyone. Data can now be stored in
many different kinds of databases, besides being available on the Internet. With such an amount of data, there is a need for powerful methods of interpretation that exceed the human ability for decision making, in order to get the best tools for the classification tasks that help in decision making [6]. He has shown a comparative study between a number of freely available data mining and knowledge discovery tools, and has shown that the performance of the tools on the classification task is affected by the kind of dataset used and by the way the classification techniques were implemented within the toolkit.

Yogesh, et al. (2013) suggested that network security needs attention in order to provide a secure information medium, due to the increase in potential network attacks. In today's era, detection of the various security threats commonly referred to as intrusions [7] has become a very critical issue in the network. Highly secured data of large organizations is present over the network, so in order to protect that data from unauthorized users or attackers a strong security technique is required. An intrusion detection system plays a major role in providing security to computer networks and is a valuable tool for that purpose. The paper presents a comparative analysis of different feature selection mechanisms on the KDD dataset, with their performance evaluated in terms of computational time, detection rate and ROC.

3. TECHNIQUES USED
3.1. K-Nearest Neighbor Classifiers (KNN)
K-NN is a type of lazy learning where the function is only approximated locally and all computation is deferred until classification. As a lazy learning technique [8], instead of estimating the target function once for the whole instance space, it delays processing until classification. This algorithm is amongst the simplest of all machine learning algorithms: an object is classified by a majority vote of its neighbors, with the object being assigned to the most common class amongst its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its nearest neighbor. The neighbors are taken from a collection of objects for which the correct classification is known; no training step is required. The k-nearest neighbor algorithm is sensitive to the local structure of the data. K-NN is a nonparametric method in that no parameters are estimated. For example, to classify an unknown object:
 Compute the distance to the other training objects.
 Identify its k nearest neighbors.
 Use the classes of the nearest neighbors to determine the class label of the unknown record.
Algorithm of KNN: Consider k as the desired number of nearest neighbors and S := p1, ..., pn as the set of training samples of the form pi = (xi, ci), where xi is the d-dimensional feature vector of the point pi and ci is the class that pi belongs to. For each p' = (x', c'):
 Compute the distance d(x', xi) between p' and all pi belonging to S using the Euclidean distance formula: d(p, q) = sqrt(Σ_i (p_i − q_i)²)
 Sort all points pi according to the distance d(x', xi).
 Select the first k training samples from the sorted list; these are the k closest training samples to p'.
 Assign a classification to p' based on the majority vote among their classes.

3.2. Decision tree
Decision trees classify objects by sorting them based on feature values. Each node in a tree represents a feature of [9] an object to be classified, and each branch represents a value the node can take. Objects are classified starting at the root node and sorted based on their feature values. Decision trees offer many benefits as a data mining technology:
 Easy to follow when compacted.
 The ability to handle a variety of input data: numeric, text, etc.
 High performance with relatively small computational effort.
 Useful for various tasks, such as classification, clustering and feature selection.

3.3. Naive Bayes Multinomial Updateable Algorithm
The task of text classification can be approached from a Bayesian [10] learning perspective, which assumes that the word distributions in documents are generated by a specific parametric model whose parameters can be estimated from the training data. There is a debug option available for NaiveBayesMultinomial: if it is set to true, the classifier may output additional information to the console. NBMU is the incremental version of the naive Bayes multinomial algorithm and uses the Bayes rule as its core equation.

3.4. User Classifier
The User Classifier lets you classify interactively through visual means. You are presented with a graph of the data against two user-selected attributes [11], as well as a view of the decision tree. By creating polygons around data plotted on the scatter graph, you can create binary splits, and you can also allow another classifier [12] to take over at points in the decision tree should you see fit. There is a debug option available for UserClassifier: if it is set to true, the classifier may output extra information to the console.

3.5. CV Parameter Selection Algorithm
This is a class for performing parameter selection by cross-validation for any classifier. Various options are available for CVParameterSelection: CVParameters sets the scheme parameters which are to be chosen by cross-validation; classifier sets the base classifier to be used; debug, if set to true, makes the classifier output extra information to the console; numFolds sets the number of folds used for cross-validation; and seed sets the random number seed.

4. THE COMPARATIVE STUDY
The methodology of the study consists of collecting a set of data mining and knowledge discovery tools to be tested, specifying the data set to be used, and selecting a set of classification algorithms to test the tools' performance.

4.1. Tools Description
Weka 3.6 is a collection of machine learning algorithms for data mining tasks. Weka stands for Waikato Environment for Knowledge Analysis [13]. The algorithms can either be
applied directly to a dataset or called from Java code. Weka contains various tools for data pre-processing, classification, regression, association rules, clustering and visualization. The Weka GUI Chooser (class weka.gui.GUIChooser) provides a starting point for launching Weka's main GUI applications and supporting tools. The GUI Chooser consists of four buttons, one for each of the four major Weka applications, and four menus. The buttons can be used to start the applications, which are explained as follows:
 Explorer: an environment for exploring data with WEKA.
 Experimenter: an environment for performing experiments and conducting statistical tests between learning schemes.
 KnowledgeFlow: supports essentially the same functions as the Explorer, but with a drag-and-drop interface. It supports incremental learning.
 SimpleCLI: provides a simple command-line interface that allows direct execution of WEKA commands, for operating systems that do not provide their own command-line interface.

4.2. Data Set Description
To verify the efficiency of the KNN algorithm against the other classification algorithms, I have used the KDD dataset. This dataset contains 39 features, and each record is labeled as either normal or as exactly one specific attack type. Deviations from normal behavior, everything that is not normal, are considered attacks; records labeled as normal show normal behavior. The training dataset has 53.18% normal and 46.81% attack connections. KDD CUP 99 has been the most widely used dataset for network attacks. Each simulated attack falls in one of the following four categories:
 Denial-of-Service (DoS) attacks [7] have the goal of limiting or denying services provided to the user, computer or network. A common tactic is to severely overload the targeted system (e.g. apache, smurf, Neptune, ping of death, back, mailbomb, udpstorm, SYN flood, etc.).
 Probing or surveillance attacks have the goal of gaining knowledge of the existence or configuration of a computer system or network. Port scans or sweeps of a given IP address range typically fall in this category (e.g. saint, portsweep, mscan, nmap, etc.).
 User-to-Root (U2R) attacks have the goal of gaining root or super-user access on a particular computer or system on which the attacker previously had user-level access. These are attempts by a non-privileged user to gain administrative privileges (e.g. perl, xterm, etc.).
 A Remote-to-Local (R2L) attack is one in which a user sends packets to a machine over the internet to which that user does not have access, in order to expose the machine's vulnerabilities and exploit privileges which a local user would have on the computer.
The features of the data set are grouped into four categories:
 Basic features: can be derived from packet headers without inspecting the payload.
 Content features: domain knowledge is used to assess the payload of the original TCP packets. This includes features such as the number of failed login attempts.
 Time-based traffic features: designed to capture properties that mature over a 2-second temporal window, for example the number of connections to the same host over the 2-second interval.
 Host-based traffic features: utilize a historical window estimated over the number of connections (in this case 100) instead of time, and are therefore designed to assess attacks which span intervals longer than 2 seconds.
In order to test the classifiers, I randomly selected 4973 connection records as a training data set and 1000 connection records as a testing data set. Table 1 below shows the details of the connection records in both datasets. The KDD dataset contains symbolic as well as continuous features.

Table 1: details of connection records in the used dataset

Label         | Training set | Testing set
Normal        | 2645         | 269
Probe         | 114          | 114
DOS           | 2147         | 550
U2R           | 21           | 21
R2L           | 46           | 46
Total records | 4973         | 1000

5. EXPERIMENTS AND EVALUATIONS
5.1. Result evaluation parameters
1) The correctly and incorrectly classified instances show the percentage of test instances that were correctly and incorrectly classified. The percentage of correctly classified instances is often called accuracy or sample accuracy.
2) Root Mean Squared Error (RMSE): a quadratic scoring rule which measures the average magnitude of the error:

RMSE = sqrt(((p1 − a1)² + ... + (pn − an)²) / n)

3) Relative Absolute Error (RAE): the total absolute error, with normalization:

RAE = (|p1 − a1| + ... + |pn − an|) / (|ā − a1| + ... + |ā − an|)

4) Root Relative Squared Error (RRSE): takes the square root of the total squared error normalized by the total squared error of the default predictor. The root relative squared error Ei of an individual program i is evaluated by the equation:

Ei = sqrt( Σ_{j=1..n} (P_ij − T_j)² / Σ_{j=1..n} (T_j − T̄)² )

where P_ij is the value predicted by the individual program i for sample case j (out of n sample cases), T_j is the target value for sample case j, and T̄ is given by the formula:
T̄ = (1/n) Σ_{j=1..n} T_j

5) Mean Absolute Error (MAE): the mean absolute error is less sensitive to outliers than the mean squared error. These error rates are used for numeric prediction rather than classification:

MAE = (|p1 − a1| + ... + |pn − an|) / n

5.2. Result for the KNN algorithm
Here I have taken the KDD dataset defined above as the training and testing sets. By implementing the KNN algorithm on these sets using a console application, and using majority-vote classification among the classes of the k nearest objects, I obtained the percentage of correctly classified instances, the percentage of incorrectly classified instances, the mean absolute error, the root mean squared error, the root relative squared error and the relative absolute error. The results given by the KNN algorithm are shown in Tables 2, 3 and 4.

Table 2: output given by the KNN algorithm

Label  | Testing set | Output set
Normal | 269         | 10
Probe  | 114         | 220
DOS    | 550         | 550
U2R    | 21          | 0
R2L    | 46          | 220

Table 3: correctly classified instances given by the KNN algorithm

Attacks | Frequency
Normal  | 10
Probe   | 114
DOS     | 550
U2R     | 0
R2L     | 46

Table 4: results of the KNN algorithm

Parameters                            | Result
% of correctly classified instances   | 72.00
% of incorrectly classified instances | 28.00
Mean absolute error                   | 0.56
Root mean squared error               | 0.3302
Root relative squared error           | 75%
Relative absolute error               | 66%

5.3. Results of the different classification algorithms in Weka
Here I have taken the KDD dataset defined above as the training and testing sets in Weka. By implementing the different algorithms on these sets, I obtained the percentage of correctly classified instances, the percentage of incorrectly classified instances, the mean absolute error, the root mean squared error, the root relative squared error and the relative absolute error shown in Table 5 below.

Table 5: performance of the different algorithms in Weka

Parameter                             | UserClassifier | NBMU    | CVPS
% of correctly classified instances   | 52.33          | 57.89   | 52.33
% of incorrectly classified instances | 47.6641        | 42.1053 | 47.664
Mean absolute error                   | 0.2123         | 0.1687  | 0.2125
Root mean squared error               | 0.3265         | 0.4104  | 0.3265
Root relative squared error           | 100%           | 125.68% | 100%
Relative absolute error               | 99.92%         | 79.39%  | 100%

5.4. Comparison of the results obtained by the KNN, UserClassifier, CVPS and NBMU algorithms
Table 6 and Fig. 1 below enable us to analyze the results of the different algorithms with better perception.

Table 6: result analysis of the KNN, UserClassifier, NBMU and CVPS algorithms

Parameter                             | KNN    | UserClassifier | NBMU    | CVPS
% of correctly classified instances   | 72.00  | 52.33          | 57.89   | 52.33
% of incorrectly classified instances | 28.00  | 47.6641        | 42.1053 | 47.664
Mean absolute error                   | 0.56   | 0.2123         | 0.1687  | 0.2125
Root mean squared error               | 0.3302 | 0.3265         | 0.4104  | 0.3265
Root relative squared error           | 75%    | 100%           | 125.68% | 100%
Relative absolute error               | 66%    | 99.92%         | 79.39%  | 100%

Fig 1: chart for comparison of various classifiers
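A minimal sketch tying together the majority-vote k-NN procedure of Section 3.1 and the accuracy and MAE measures of Section 5.1; the tiny two-feature dataset below is a hypothetical stand-in, not taken from the KDD records used in the experiments.

```python
import math
from collections import Counter

# Hypothetical labeled training samples: (feature vector, class).
train = [((0.0, 0.0), "normal"), ((0.1, 0.2), "normal"),
         ((5.0, 5.1), "dos"), ((4.8, 5.3), "dos"), ((5.2, 4.9), "dos")]

def knn_predict(x, k=3):
    """Majority vote among the k training samples nearest to x (Euclidean)."""
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(c for _, c in nearest)
    return votes.most_common(1)[0][0]

test = [((0.2, 0.1), "normal"), ((5.1, 5.0), "dos"), ((4.9, 5.2), "dos")]
pred = [knn_predict(x) for x, _ in test]

# Accuracy = % of correctly classified instances (Section 5.1, item 1).
accuracy = 100.0 * sum(p == c for p, (_, c) in zip(pred, test)) / len(test)
# MAE over 0/1 classification errors: (|p1-a1| + ... + |pn-an|) / n.
mae = sum(p != c for p, (_, c) in zip(pred, test)) / len(test)
print(accuracy, mae)  # → 100.0 0.0
```

On this separable toy data the vote is unanimous; on the real KDD records the same procedure yields the 72% accuracy reported in Table 4.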
[Fig. 1: bar chart comparing KNN, UserClassifier, NBMU and CVPS on % of correctly classified instances, % of incorrectly classified instances, mean absolute error, root mean squared error, root relative squared error and relative absolute error (all in %).]

From the results of these experiments, the K-Nearest Neighbor algorithm proved to have the better results, finding 72% correctly classified instances in the KDD dataset, while the UserClassifier, with 52.33% correctly classified instances and a lower error rate than the CVPS algorithm (Table 6), was the second best algorithm.

6. CONCLUSION
In this work, I compare basic classification algorithms. The goal of this study is to provide a comprehensive review of four techniques in data mining: k-nearest neighbor, User Classifier, NBMU and CVPS. By comparing these four algorithms on the correctly classified instances, relative absolute error, relative squared error, mean absolute error, mean squared error and root mean squared error parameters, we came to a conclusion about which algorithm is most efficient to use. The performance of each algorithm was tested on a KDD data set. After executing each classification algorithm, I obtained the numbers of correctly and incorrectly classified instances, which give the accuracy of the algorithm. The other important factors (mean squared error, root mean squared error, relative absolute error, relative squared error and mean absolute error) describe the error rate of an algorithm. The overall evaluation shows that the k-nearest neighbor algorithm is far better than the User Classifier, NBMU and CVPS algorithms. In future studies, we can enhance the accuracy of the KNN algorithm to achieve better results than with the methodology discussed here.

7. ACKNOWLEDGMENT
I am highly grateful to Dr. Tanuja Srivastava, Director, Bhai Gurdas Institute of Engineering & Technology (BGIET), Sangrur, for providing the opportunity to carry out the present work. The constant guidance and encouragement received from Er. Amandeep Kaur, Head, Department of Computer Science & Engineering, BGIET, Sangrur, has been of great help in carrying out the present work and is acknowledged with reverential thanks. I would like to express a deep sense of gratitude and profuse thanks to Er. Yogesh Kumar, Asstt. Prof., Department of Computer Science & Engineering, BGIET, the supervisor of the research work. Without his wise counsel and able guidance, it would have been impossible to complete the research work in this manner. The help rendered by Er. Yogesh Kumar, AP, BGIET, with the literature, and by his associates with the experimentation, is greatly acknowledged. I express gratitude to the other faculty members of the Department of Computer Science & Engineering, BGIET, for their intellectual support throughout the course of this work. Finally, I am indebted to all who have contributed to this research work and to a friendly stay at BGIET.

REFERENCES
[1] A. L. S. Saabith, E. Sundararajan, and A. A. Bakar, "Comparative study on different classification techniques for breast cancer dataset," 2014.
[2] O. Sutton, "Introduction to k nearest neighbour classification and condensed nearest neighbour data reduction," 2012.
[3] T. N. Phyu, "Survey of classification techniques in data mining," in Proceedings of the International MultiConference of Engineers and Computer Scientists, vol. 1, 2009, pp. 18–20.
[4] D. L. A. AL-Nabi and S. S. Ahmed, "Survey on classification algorithms for data mining: (comparison and evaluation)," Computer Engineering and Intelligent Systems, vol. 4, no. 8, pp. 18–24, 2013.
[5] S. D. Pandya and P. V. Virparia, "Comparing the application of classification and association rule mining techniques of data mining in an Indian university to uncover hidden patterns," in Intelligent Systems and Signal Processing (ISSP), 2013 International Conference on. IEEE, 2013, pp. 361–364.
[6] A. H. Wahbeh, Q. A. Al-Radaideh, M. N. Al-Kabi, and E. M. Al-Shawakfa, "A comparison study between data mining tools over some classification methods," International Journal of Advanced Computer Science and Applications, Special Issue, pp. 18–26, 2011.
[7] K. Kumar, G. Kumar, and Y. Kumar, "Feature selection approach for intrusion detection system."
[8] W. Zheng and A. Tropsha, "Novel variable selection quantitative structure-property relationship approach based on the k-nearest-neighbor principle," Journal of Chemical Information and Computer Sciences, vol. 40, no. 1, pp. 185–194, 2000.
[9] D. J. Krusienski, E. W. Sellers, F. Cabestaing, S. Bayoudh, D. J. McFarland, T. M. Vaughan, and J. R. Wolpaw, "A comparison of classification techniques for the P300 speller," Journal of Neural Engineering, vol. 3, no. 4, p. 299, 2006.
[10] N. Sebe, Machine Learning in Computer Vision. Springer, 2005, vol. 29.
[11] J. Han and M. Kamber, Data Mining, Southeast Asia Edition: Concepts and Techniques. Morgan Kaufmann, 2006.
[12] P. Horton and K. Nakai, "Better prediction of protein cellular localization sites with the k nearest neighbors classifier," in ISMB, vol. 5, 1997, pp. 147–152.
[13] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, "The WEKA data mining software: an update," ACM SIGKDD Explorations Newsletter, vol. 11, no. 1, pp. 10–18, 2009.
Simulative Investigation on VOIP over WiMAX Communication Network

Ambica (BGIET, Sangrur, ambica.aggarwal14@gmail.com)
Avinash Jethi (BGIET, Sangrur, avinashjethi@gmail.com)

ABSTRACT
The objective of this research paper is to provide a brief overview of a simulative investigation of VoIP over a WiMAX communication network in a static environment. Mobile WiMAX (IEEE 802.16e) supports VoIP traffic; this paper considers different scenarios, QoS, the system throughput, the packet delay, the signaling overhead, various design choices and performance analysis. The paper reports on VoIP applications in IEEE 802.16e, with all traffic contracts made correctly by considering the overheads and the performance of voice in a mobile environment.

Keywords: Voice over Internet Protocol (VoIP), IEEE 802.16e, WiMAX

1. INTRODUCTION
WiMAX is the next generation of wireless broadband, based on the IEEE 802.16 standard. It provides broadband connectivity anywhere, anytime, for any device and on any network, and can connect to the internet with faster speed and wider coverage. WiMAX is most suitable for home users, individuals, small offices and home offices, etc. It erases the suburban and rural blackout areas that currently have no broadband Internet access.

The number of users and their distance from the base station play an important role in the network performance. Thus, in this paper, the simulative investigation includes the variation in the number of subscriber stations requesting VoIP traffic and in the distance between the subscriber station and the base station. The focus of the paper is on the support of VoIP over mobile WiMAX (802.16e). Delay-sensitive applications, such as VoIP, pose significant challenges to the design of wireless network infrastructures due to their stringent QoS requirements and to the typically hostile radio environments; the mobility of users is obviously an extra dimension that adds to the overall complexity, as it is more difficult to predict the radio channel conditions, and hence to allocate resources efficiently at high user speeds. VoIP applications are characterized by tight air-interface delay budgets (usually in the order of tens of ms) and a typical QoS target of less than 1%-2% packet loss for 95%-98% of active VoIP users. Another major challenge is posed by the amount of overhead required to support many simultaneous VoIP connections, which, unless controlled in an efficient and intelligent manner, may grow unacceptably large. In WiMAX, the overhead includes the downlink map (DL-MAP) and the uplink map (UL-MAP) overheads. DL-MAP is a Medium Access Control (MAC) message that defines the burst start times for a subscriber station on the downlink. UL-MAP carries a set of information that defines the entire access for a scheduling interval.

1.2 VOIP DESIGN CHOICES, RESOURCE ALLOCATION AND MODELING ASSUMPTIONS
The speech traffic is generated according to a two-state Adaptive Multi-Rate (AMR) speech codec. Depending on the air-interface delay budget, VoIP packet bundling can be considered at the base station, i.e., when creating MAC PDUs. This form of bundling offers the advantage of reducing overhead at the expense of some additional delay, given by N*20 ms, where N is the number of VoIP packets that are bundled into a single packet. Bundling offers the benefit that a single MAC header may be used for a bundled VoIP packet containing payload from multiple VoIP packets; however, the compressed header is still needed for each constituent packet. Packet bundling also allows the MAP overhead to be reduced, since fewer bursts may need to be scheduled for transmission over the air interface. Silence suppression is achieved by blanking out the eighth-rate speech frames, which in turn facilitates statistical multiplexing of VoIP users and also plays a significant role in reducing the interference levels. Each VoIP packet/bundle is mapped to a physical Orthogonal Frequency Division Multiple Access (OFDMA) burst and corresponds to a MAC Protocol Data Unit (PDU). This allows the Hybrid Automatic Repeat Request (HARQ) mechanism to operate on a per-burst basis, since the Cyclic Redundancy Check (CRC) is embedded in the MAC PDU. The MAC PDU overhead is 8 bytes, resulting from 6 bytes of MAC header overhead and 2 bytes of CRC for HARQ bursts. Robust Header Compression (ROHC) is employed to reduce the original 40 bytes of Real-time Transport Protocol/User Datagram Protocol/Internet Protocol (RTP/UDP/IP) overhead to only 3 bytes. Partially Used Sub-channelization (PUSC), based on distributed allocation of sub-carriers to sub-channels, is employed. To support the uplink traffic, control slots are allocated for ranging, channel quality indication and downlink HARQ acknowledgments as part of the uplink sub-frame. OFDMA symbols were considered sufficient to carry the uplink overhead in the context of an Extended Real-Time Polling Service (ertPS)-like VoIP service [1-2]. In the downlink sub-frame, control slots are allocated for overhead including the preamble (1 symbol), the Frame Control Header (4 OFDMA slots) and the MAP messages. MAP messages constitute the majority of the overhead and are used for allocating resources to VoIP packets (i.e. the number of sub-channels and symbols, and their respective allocation offsets). Combining the packet

416
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

bundling and silence suppression features may offer desirable benefits in terms of reducing the total overhead.

It is necessary to ensure that MAP messages are reliably received by a very large fraction of users, since VoIP packets cannot be successfully decoded without first decoding the MAP messages. MAP messages require the use of QPSK Rate 1/2 encoding; the reliability may be further improved through slot repetition factors of 2, 4 or 6; however, the higher the repetition factor, the larger the overhead grows. In order to avoid having to send MAP messages to all users with repetition factors of 4 or 6 at QPSK Rate 1/2, telescopic MAP usage is employed as a way of reducing MAP overhead. With the telescopic MAP approach, the DL/UL-MAP messages in the frame are broken up into one or more sub-MAPs (the overhead portion of the downlink sub-frame), and instead of transmitting the whole MAP at one low data rate (e.g., QPSK Rate 1/2 with a repetition factor of 4), only the compressed MAP is transmitted with repetition factors, while the sub-MAPs are transmitted at higher data rates (QPSK Rate 1/2 or higher). For this VoIP analysis, we assume the transmission of a single compressed DL/UL-MAP message that is sent using QPSK Rate 1/2 with a repetition factor of 4, followed by a sub-DL/UL-MAP message that is sent using QPSK Rate 1/2. The compressed DL/UL-MAP carries information elements for resource allocation to about a quarter of the users, while the sub-DL/UL-MAP is used to allocate resources to the rest of the users. MAP messages are constructed such that the messages may be reliably received by a large fraction of the mobile stations.

2. VOICE TRAFFIC MODEL OF USERS

An exponentially distributed on-off model is used to characterize traffic from a single voice source, where the mean on-time is α⁻¹ = 1 s and the mean off-time is β⁻¹ = 1.5 s. The extended real-time polling service (ertPS) is assumed to be used for the uplink scheduling, where the BS allocates a fixed amount of bandwidth at periodic intervals during the on state. The user informs the BS of its transition between the on state and the off state by either using a piggyback request field or sending a codeword over a channel quality indicator channel (CQICH). The codeword minimizes the signaling overheads and prevents delays in the scheduling of uplink requests and grants. Hence, for the sake of simplicity, this paper ignores the uplink request and grant procedure. Each user is assumed to generate VoIP traffic every p frames in the on state, where the size of a VoIP protocol data unit is fixed and the size is denoted by L. For example, the G.723.1 codec generates a 24-byte encoded voice frame every 20 ms. Assuming the size of a VoIP packet is fixed, a BS can calculate the number of packets to be transmitted in the uplink from the amount of requested bandwidth or the amount of allocated bandwidth. However, the use of a variable-rate codec such as the adaptive multi-rate (AMR) codec makes it difficult to count the number of packets from the bandwidth request. The case of variable-rate VoIP codecs can be studied in future works. This paper assumes that a VoIP packet has a fixed size. The VoIP traffic requested from N_v users is aggregated in the initial transmission queue at the BS. The operation of the initial transmission queue can be modeled as a two-state Markov-modulated Poisson process (MMPP) with the transition rate matrix, R, and Poisson arrival rate matrix, Λ.

3. PERFORMANCE ANALYSIS

No error in the transmission of the signaling message is assumed, because the signaling message is transmitted with a low MCS level. In the CC-based HARQ transmissions, the probability of successful decoding for a VoIP packet transmitted with MCS level i at the kth transmission is given by

P_i(k|γ) = (1 − PER_i(kγ)) ∏_{n=1}^{k−1} PER_i(nγ)

where PER_i(γ) is the associated PER of MCS level i at SNR γ. The probability of a VoIP packet being transmitted at the kth transmission after k − 1 failures is given by

Q_i(k|γ) = ∏_{n=1}^{k−1} PER_i(nγ)

Let α_i be defined as the probability of a packet modulated with MCS level i being successfully transmitted in N_max transmissions. The value of α_i can be expressed as the sum of the probabilities of successful decoding at the kth transmission. Thus,

α_i = ∑_{k=1}^{N_max} ∫ P_i(k|γ) f_γ(γ) / P_γ(i) dγ = (1/P_γ(i)) ∑_{k=1}^{N_max} ∫ P_i(k|γ) f_γ(γ) dγ ≤ 1

where the denominator, P_γ(i), is a normalization factor. Then, out of x_i packets initially transmitted with MCS level i, the total number of packets successfully transmitted in N_max transmissions is

u_i = α_i x_i.

4. QUALITY OF SERVICE (QoS) IN IEEE 802.16e

Originally, four different service types were supported in the 802.16 standard: UGS, rtPS, nrtPS and BE. The UGS (unsolicited grant service) is similar to the CBR (constant bit rate) service in ATM, which generates a fixed-size burst periodically. This service can be used to replace a T1/E1 wired line or a constant-rate service; it can also be used to support real-time applications such as VoIP or streaming applications. Even though the UGS is simple, it may not be the best choice for VoIP in that it can waste bandwidth during the off period of voice calls.

The rtPS (real-time polling service) is for a variable bit rate real-time service such as VoIP. Every polling interval, the BS polls a mobile, and the polled mobile transmits a BW request (bandwidth request) if it has data to transmit. The BS grants the data burst using UL-MAP-IE upon its reception. The nrtPS (non-real-time polling service) is very similar to the rtPS except that it allows contention-based polling.

The BE (Best Effort) service can be used for applications such as e-mail or FTP, in which there is no strict latency requirement. The allocation mechanism is contention based, using the ranging channel. Another service type, called ertPS (Extended rtPS), was introduced to support variable-rate real-time services such as VoIP and video streaming. It has an advantage over UGS and rtPS for VoIP applications because it carries lower overhead than UGS and rtPS.
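The CC-based HARQ expressions of Section 3 can be sanity-checked numerically at a single SNR value. The sketch below is ours: it fixes γ instead of integrating over the SNR density f_γ(γ), and the exponential PER(SNR) curve is an assumed placeholder, not the paper's link-level model.

```python
import math

# Evaluates the Section 3 HARQ probabilities at a single SNR value gamma;
# the averaging over the SNR density f_gamma is omitted, and the PER(SNR)
# curve below is an assumed stand-in, not the paper's link-level data.
def per(snr):
    return math.exp(-snr)          # monotonically decreasing PER vs. SNR

def p_success_at_k(k, gamma):
    """P_i(k|gamma): the first k-1 transmissions fail, the k-th succeeds.
    With Chase combining, the effective SNR after n transmissions is n*gamma."""
    fail_before = 1.0
    for n in range(1, k):
        fail_before *= per(n * gamma)
    return (1.0 - per(k * gamma)) * fail_before

def alpha(gamma, n_max):
    """Success probability within n_max transmissions at SNR gamma (<= 1)."""
    return sum(p_success_at_k(k, gamma) for k in range(1, n_max + 1))

# allowing up to 4 transmissions greatly improves the success probability
print(round(alpha(0.5, 1), 3), round(alpha(0.5, 4), 3))  # 0.393 0.993
```

For this PER model the sum telescopes to 1 − ∏_{n=1}^{N_max} PER(nγ), which confirms that α never exceeds 1, as stated in the text.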

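The exponentially distributed on-off source of Section 2 can likewise be checked by a short simulation. The mean on-time (1 s) and off-time (1.5 s) come from the text; the simulation loop and the seed are illustrative choices of ours.

```python
import random

# Simulates the exponentially distributed on-off voice source of Section 2.
# Mean on-time 1/alpha = 1 s and mean off-time 1/beta = 1.5 s are the values
# quoted in the text; the simulation itself (and the seed) is illustrative.
random.seed(42)

on_time, total_time, state_on = 0.0, 0.0, True
for _ in range(100_000):
    # draw the dwell time of the current state from the exponential law
    dwell = random.expovariate(1.0) if state_on else random.expovariate(1.0 / 1.5)
    if state_on:
        on_time += dwell
    total_time += dwell
    state_on = not state_on  # toggle between talk-spurt and silence

# long-run voice activity factor: (1/alpha) / (1/alpha + 1/beta) = 0.4
print(round(on_time / total_time, 2))
```

The printed activity factor approaches 0.4, i.e., the source is active 40% of the time, which is what makes silence suppression and statistical multiplexing worthwhile.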

5. PACKET DELAY ANALYSIS

The average packet delay, which is the sum of the average waiting time in the queue, W_q, and the average transmission time in the air, T_tr, can be expressed as follows:

D = W_q + T_tr.

The queue waiting time includes the waiting time in the initial transmission queue and the waiting time in the retransmission queue. However, the waiting time in the retransmission queue can be neglected because the BS schedules users first from the retransmission queue and then from the initial transmission queue. The average length of the initial transmission queue is given by

L_q = ∑_k k π(k)

where π(k) denotes the steady-state probability of k packets in the queue. The average waiting time in the initial transmission queue can be expressed in terms of Little's theorem. Thus,

W_q = L_q / λ_e

where λ_e, which is the effective packet arrival rate at the initial transmission queue during the frame, is equal to the average number of scheduled packets; that is,

λ_e = x̄.

When a packet is successfully transmitted at the kth transmission, the total transmission time is given by

TR(k) = (k − 1)RTT + T_f.

5.1 Packet delay analysis in a virtually dedicated resource allocation region

A. System Model

Although there may exist several VDRA regions in a system, we first analyze a single VDRA region case, because one VDRA region can be independent of the other regions. The DL packet transmission model consists of the following five components:

1) L buffers for storing the arriving packets for the corresponding L users;

2) a VDRA module that performs the operation mentioned in Section III at each frame;

3) N_ru resource units for packet transmissions;

4) L receivers, which decode the received packets, check whether an error occurs, and generate ACK/negative acknowledgment (NACK) signals for retransmission;

5) a delay element (DE), which stores ACK/NACK signals generated at the receiver and feeds them back to the transmitter.

This overall packet transmission model can be simplified as a model for a single user as shown in Fig. 5(b), where a packet is served with a probability of μ(L) in a frame. μ(L) is affected by the buffer empty probability of all the buffers in the VDRA region because of resource collisions.

The VDRA region has the following parameters:

1) M: the MCS level supported by the VDRA region;

2) S_ru: the size of the resource unit;

3) B_ru: the number of bits which can be transmitted in one resource unit. This value depends on M and S_ru;

4) N_ru: the number of resource units in the VDRA region;

5) L: the number of users accommodated in the VDRA region;

6) λ: the packet arrival probability in a frame time. We assume that packet arrivals follow a Bernoulli process with a packet arrival probability of λ in a frame time T;

7) α: the probability that the transmitted packet is negatively acknowledged (NACK) because of a transmission error;

8) D_r: the time duration (in frame times) in which the ACK/NACK signal for a transmitted packet is fed back to the BS;

9) U_l(F_N): the resource unit number generated by the PHP.

In addition, we will derive the following analytical results:

1) P_S(N_D, N_R|m): the probability that N_D and N_R users transmit packets using the predetermined and redirected resource units, respectively, when m buffers have pending packets to transmit;

2) μ_m: the probability that a packet is transmitted to a specific user, regardless of the use of a predetermined or redirected resource unit, where m buffers have pending packets;

3) μ(L): the probability that a packet is transmitted to a specific user in a frame when L users are accommodated in the VDRA region;

4) E[N_D]: the mean number of predetermined resource units used in the VDRA region;

5) E[N_R]: the mean number of redirected resource units used in the VDRA region;

6) φ_0(L): the probability that a buffer is empty when L users are accommodated in the VDRA region;

7) P: the state transition probability matrix for the number of packets in a buffer;

8) Π: the steady-state probability for the number of packets in a buffer;

9) P_d(D|L): the probability that the packet delay is D frame times when L users are accommodated in the VDRA region.

6. SIGNALLING SCHEMES OF BS

6.1 Conventional Allocation

In conventional mobile WiMAX systems, the characteristic that the BS broadcasts a signaling message for every frame generates a substantial overhead for every frame. Hence, the


size of the signaling overhead is directly proportional to the number of scheduled users.

6.2 Persistent Allocation

Persistent allocation is a technique used to reduce the signaling overhead for connections, e.g., VoIP services, that have a periodic traffic pattern and a relatively fixed payload size. At a high level, a persistent mapping scheme is contrasted with a conventional mapping scheme. In the persistent allocation scheme, the BS allocates a persistent resource to a user at frame t, and the allocated resource is valid in a periodic sequence of future frames, namely, frame t + p, frame t + 2p, and so on, without notification by a signaling message. However, if the optimized modulation and coding scheme (MCS) level at the current frame is different from the latest MCS level indicated by the BS, the BS may transmit a signaling message in order to adjust the attributes of a persistently allocated resource, because the MCS mismatch causes a link adaptation error.

7. THROUGHPUT ANALYSIS

A discrete-time MMPP can be equivalent to an MMPP in continuous time. The system state is defined as the number of packets in the initial transmission queue. The arrival and service process of the initial transmission queue is depicted. The average packet arrival rate at the initial transmission queue during the frame is expressed as follows:

ρ = s (∑_k k D_k) 1

where 1 is a column matrix of ones. The matrix s = [s_1 s_2] is obtained by solving sU = s and s_1 + s_2 = 1, where U, which is the phase transition probability matrix in the MMPP, is expressed as U = (Λ − R)⁻¹ Λ. Additionally, D_k is the diagonal probability matrix in which each diagonal element is the probability of k packets arriving at the BS during the frame,

(λ_i T_f)^k e^{−λ_i T_f} / k!  for i = 1, 2.

8. RESULTS

Table 1: System simulation assumptions and parameters

Network Topology: 19 cloverleaf cells with three sectors, with wrap-around enabled
Duplexing Scheme: TDD
Bandwidth: 10 MHz
FFT size: 1024
ISD (Inter-Site Distance): 500 m (ITU Ped. B, 3 km/h); 1500 m (ITU Veh. A, 60 km/h)
Propagation Formula: 128.1 + 37.6*log10(d) dB, d in km, for 2 GHz
Carrier Frequency: 2.5 GHz
Number of VoIP sources/users per sector: Variable, subject to capacity simulated
VoIP Capacity Criteria: 98% of users satisfied; users with less than 2% packet loss are declared satisfied
Reference VoIP Traffic Source: AMR 12.2 kbps; AMR 7.95 kbps
Transmission Method: MIMO, STC (2x2, DL)
Maximum number of HARQ transmissions (1st trans + retrans of same packet): 4
MAC Header: 48 bits
Total Packet Overhead: 88 bits

The following performance metrics are used for evaluating the VoIP users in this paper:

VoIP Packet Loss: it captures packet losses at the receiver due to residual post-HARQ errors (after the total maximum allowed number of transmissions) or due to the discarding of packets that have exhausted their delay budgets at the transmitter.

VoIP Packet Transmission Statistics: it captures the percentage of packet transmissions across successive retransmission attempts, along with the average number of transmissions per packet. These characteristics are driven by the H-ARQ mechanisms.

9. CONCLUSION

In this paper, we covered the performance analysis, the voice traffic model of users, QoS, and the packet delay analysis, which includes the different procedures set for the system simulation. We verified the analytical results by comparing them with simulation results.

REFERENCES

[1]. J. So, Journal of Communications and Networks, vol. 14, no. 5, October 2012.
[2]. M.-H. Fong, R. Novak, S. McBeath, and R. Srinivasan, "Improved VoIP capacity in mobile WiMAX systems using persistent resource allocation," IEEE Commun. Mag., pp. 50-57, Oct. 2008.
[3]. Y. I. Seo and D. K. Sung, 2011.
[4]. D. Calin, Bell Laboratories, Alcatel-Lucent, "VoIP over Realistic IEEE 802.16e System Scenarios: The Uplink Direction," 2011.
[5]. D. Calin, "VoIP over Realistic IEEE 802.16e System Scenarios: The Uplink Direction," in Proc. of IEEE Globecom 2012, pp. 3384-3388, Anaheim, December 2012.


[6]. D. Calin, Bell Laboratories, Alcatel-Lucent, "Design and Analysis of a VoIP Based IEEE 802.16e System," 2013.
[7]. S. Alshomrani, S. Qamar, S. Jan, I. Khan and I. A. Shah, "QoS of VoIP over WiMAX Access Networks," April 2012.
[8]. J. Henriques, V. Bernardo, P. Simões, M. Curado, Center for Informatics and Systems, "VoIP performance over Mobile WiMAX: An Urban Deployment Analysis," 2012.


Swarm Intelligence (SI)-Paradigm of Artificial Intelligence (AI)


Pooja
BGIET, Sangrur, Punjab, India
poojamarken16@gmail.com

Renu Nagpal
BGIET, Sangrur, Punjab, India
er.renunagpal@gmail.com

ABSTRACT

This paper gives an overview of the subfield of artificial intelligence called "swarm intelligence". It is so called because it is named after the inspiration taken from the working behavior of swarms. The term "swarm" denotes a group or aggregation of insects, animals or birds which work together so as to complete difficult tasks in an efficient manner, tasks that are not possible to complete as an individual entity. This working behavior of swarms has influenced researchers to solve their problems in robotics, telecommunications, computer science, networking and various other technical fields. Apart from this, algorithms have been proposed to solve various complex problems which resemble swarm intelligence in their working behavior. In this paper, a brief description of the working of these algorithms is given.

KEYWORDS

Swarm, swarm intelligence, stigmergy, pheromone, paradigm

1. INTRODUCTION

Problem solving which is inspired by the collective behavior of swarms is defined as swarm intelligence [1]. Thus, intelligent and autonomous systems are required that are able to solve complex tasks with self-organizing nodes that have no central control (i.e., a distributed approach).

Swarms are broadly categorized into:
1. Ants
2. Honey bees
3. Termites
4. Particles

1.1 ANT SWARMS

Ants as individuals are not smart or intelligent enough to do their daily tasks of finding a food source, finding the shortest path to that location once a source is found, dividing their work into short tasks, and assigning these tasks to multiple ants so as to complete a task as a whole. Foraging is one of the examples that can describe the behavior of ants and their working as a colony. In the foraging process:
a) Ants are free to move in any direction in search of their food location.
b) Once they have found a destination, they come back to their actual starting position, leaving behind a chemical substance which is volatile and attractive in nature and is called pheromone.
c) For coming back to the actual location, an ant may follow any direct or indirect path.
d) All other colony members will follow the same path where they found pheromone.
e) Now, regarding movement through the shortest path: the path on which more ants follow the same route in the same interval of time will be the shortest path.
f) For this shortest path, the concentration of the pheromone will obviously be higher, as more ants will follow the same route, leaving the chemical substance behind them.
g) As the concentration on that route increases, since more ants follow it, the path that is comparatively longer will soon disappear. This is because of the volatile nature of the pheromone and the lower concentration of the substance over that path.

1.2 HONEY BEE SWARMS

Honey bees are another swarm type that is very helpful in solving complex tasks through their efficient, effective and intelligent way of working. They have a tendency to do typical tasks by dividing them into smaller tasks. Their daily tasks involve foraging; storing, retrieving and distributing honey and pollen; communication; and, most notably, their ability to adapt to changes in the environment. Several algorithms have been designed which work in the same manner as the working behavior of honey bees. The concept of foraging in honey bees can be explained as follows.

In the hive there are two types of bees:
i) worker bees (scouts), and
ii) forager bees.

The tasks performed by the worker bees are maintenance and management activities like collecting and storing food, removing dead bees from the hive, keeping proper ventilation, and guarding the hive. The foraging process involves:
a) First of all, the scout bees are sent in various directions randomly so as to find a food source.
b) Scouts move from one flower patch to another so as to find a promising food source whose quality is rated above the pre-defined quality threshold, and deposit their nectar or pollen.
c) After finding this, the scout bees move to the dance floor to perform a kind of dance so as to indicate the type and quality of the food detected to other bees.
d) This kind of dance that scout bees perform on the dance floor for communication with other bees is known as the waggle dance.
e) This dance basically helps the other bees to know the direction of the patch, the distance of the patch, and the quality rating.
f) After getting this information, forager bees are sent to that patch.
g) The higher the quality of food at the patch, the more bees will gather at that patch.

Thus, in this way the bee colony is able to get good quality food effectively and efficiently.
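The positive-feedback loop of steps e)-g) in Section 1.1 can be illustrated with a tiny deterministic simulation. The two path lengths, the deposit rule, and the evaporation rate below are invented for this sketch and are not taken from any referenced algorithm.

```python
# Deterministic toy model of the pheromone feedback in steps e)-g): ants
# split between two paths in proportion to pheromone, the deposit is
# inversely proportional to path length, and pheromone evaporates each
# step. All constants are invented for this sketch.
length = {"short": 1.0, "long": 3.0}
pheromone = {"short": 1.0, "long": 1.0}   # both paths start out equal

for _ in range(200):
    total = sum(pheromone.values())
    for path in pheromone:
        share = pheromone[path] / total          # fraction of ants on this path
        pheromone[path] += share / length[path]  # shorter path reinforced faster
        pheromone[path] *= 0.99                  # volatility: pheromone evaporates

print(pheromone["short"] > pheromone["long"])  # prints True: the short path wins
```

Because the deposit on the short path is three times larger per ant, its pheromone share grows monotonically, while evaporation makes the long path fade, which is exactly the mechanism described in step g).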


1.3 PARTICLE SWARMS

In the particle swarm method, researchers were influenced by the working criteria of birds and their ability to find their food. The foraging process in a particle swarm can be described as follows:
a) It is supposed that a flock of birds is searching for food in some particular area.
b) There is only a single piece of food in that area.
c) Now, to get the exact location where the food is actually lying, each bird will fly in the direction in which the bird nearest to the food is flying.
d) An algorithm is designed in such a way that the locally and globally best positions of the birds are calculated, and the velocity is modified after each iteration.

In this way, bird swarms (particle swarms) are helpful in order to find solutions to complex problems.

1.4 TERMITE SWARMS

Termites are known for building hills using pebbles. The way they collect the pebbles, doing so effectively via a shortest-path method through the use of a pheromone substance, has had a major influence on researchers in designing algorithms based on this intelligence. The process is as described:
a) Each termite moves along the way where it finds the pheromone substance in order to collect pebbles.
b) If no pheromone is detected by the termite, then it will follow a random path so as to search for a pebble on its own.
c) On the way to search for a pebble, if it comes across any pebble, it will pick it up.
d) A termite can carry only one pebble at a time.
e) If, along its path, it finds any other pebble, then it will drop the pebble it is already carrying at that new location and infuse that pebble with pheromone so that other termites can detect it easily.
f) This pebble will act as the building site of the hill of pebbles. That means all other termites will get a location to drop their own pebbles, so as to build a hill there.

2. APPLICATIONS

There are many algorithms that can fit completely into the working of various areas of several technical fields. Thus, according to those several principles of working, there are applications in which swarm intelligence is useful. Some of the fields are as follows: robotics, data mining, communication networks, fuzzy systems, military applications, traffic patterns, and many more resembling fields that can find their solution with the help of swarm intelligence.

3. ALGORITHMS

An algorithm is a kind of procedure or formula that is followed step by step so as to calculate or process a particular problem. Various algorithms have been designed whose working is similar to the working behavior of the above given swarm types. A few of the algorithms are listed below:
a) PSO (Particle Swarm Optimization)
b) ACO (Ant Colony Optimization)
c) BFO (Bacterial Foraging Optimization)
d) PPSO (Perceptive Particle Swarm Optimization)

3.1 Particle Swarm Optimization

Particle Swarm Optimization (PSO) [1] was inspired by the behavior of birds that fly around a search space for the best location. Each particle either directly or indirectly communicates with the others for directions. Each particle moves through the multi-dimensional space, sampling an objective function at various positions. The best solution is extracted and plotted. Each particle is seeded with an initial velocity. The velocity of the particle is continuously updated so that it may move toward the best position it has experienced itself or the best position experienced by its neighbors in the swarm. The performance of a particle can be evaluated with the help of a fitness function. The technique might have the problem of adjusting the parameters, but it is easy to implement.

3.2 Ant Colony Optimization

As explained in Section 1.1 of this paper, ants communicate through the use of a chemical substance (pheromone) laid along the path towards the food, so that the ants of the swarm might follow the same path only. This whole process of communication via the use of a chemical substance is termed trail-laying and trail-following. The algorithm based on this concept uses a technique of positive feedback in which the concentration of the pheromone goes on increasing with the number of ants passing through the same route, and the path that has fewer ants or the least concentration of pheromone will disappear soon.

3.3 Bacterial Foraging Optimization

The Bacterial Foraging Optimization algorithm is a kind of evolutionary computation algorithm. It is based on the foraging behavior of Escherichia coli (E. coli) bacteria that reside in the human intestine. This method is used for locating, handling and ingesting food in the intestine. During its foraging phase, a bacterium can exhibit two different states: tumbling or swimming. The modification in the orientation of the bacterium is due to the tumbling action possessed by the bacterium, and the swimming action is responsible for the movement of the bacterium in the current direction. After a certain number of complete swims, the best half of the population undergoes reproduction, eliminating the rest of the population. In order to escape local optima, an elimination-dispersion event is carried out where some bacteria are liquidated at random with a very small probability, and the new replacements are initialized at random locations of the search space.

3.4 Perceptive Particle Swarm Optimization

Conventional particle swarm optimization relies on exchanging information through social interaction among individuals. However, for real-world problems involving control of physical agents (e.g., robot control), such detailed social interaction is not always possible. Recently, the perceptive particle swarm optimization (PPSO) algorithm was proposed to mimic the behaviors of social animals more closely through both social interaction and environmental interaction, for applications such as robot control. In this study, we investigate the PPSO algorithm on complex function optimization problems and its ability to cope with noisy environments.


4. CONCLUSION

In this paper, I have discussed several types of swarms that can influence the working behavior of researchers. Nature-inspired problem-solving techniques have been found to be an intelligent and efficient way of doing this. Apart from that, this paper gives a reference to a few of the commonly known algorithms of swarm intelligence. The working steps of these algorithms are mentioned, showing how ants as a collective team behave efficiently in order to process complex tasks that are not possible to carry out as an individual. Swarm intelligence has also been used for minimizing functions and for training neural networks efficiently. Swarm intelligence is used in various other areas such as digital circuits, data mining and telecommunications.

REFERENCES

[1] A. Jangra, A. Awasthi, V. Bhatia, "A Study on Swarm Artificial Intelligence," vol. 3, issue 8, August 2013.
[2] R. M. Thomas, "Survey of Bacterial Foraging Optimization Algorithm," International Journal of Science and Modern Engineering (IJISME), ISSN: 2319-6386, vol. 1, issue 4, March 2013.
[3] A. Jevtic, D. Andina, "Swarm Intelligence and Its Applications in Swarm Robotics," 6th WSEAS Int. Conference on Computational Intelligence, Man-Machine Systems and Cybernetics, Tenerife, Spain, December 14-16, 2007.
[4] G. Di Caro, F. Ducatelle and L. M. Gambardella, "Swarm Intelligence for Routing in Mobile Ad Hoc Networks," Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Galleria 2, CH-6928 Manno-Lugano, Switzerland.
[5] X. Hu, Y. Shi, R. Eberhart, "Recent Advances in Particle Swarm."
[6] I. Kassabalidis, M. A. El-Sharkawi, R. J. Marks II, P. Arabshahi, A. A. Gray, "Swarm Intelligence for Routing in Communication Networks," Dept. of Electrical Eng., Box 352500, University of Washington, Seattle, WA 98195, USA.
[7] P. Arabshahi, A. Gray, I. Kassabalidis, A. Das, "Adaptive Routing in Wireless Communication Networks using Swarm Intelligence."
[8] D. Karaboga, "An Idea Based on Honey Bee Swarm for Numerical Optimization," Technical Report TR06, October 2005.
[9] R. Muraleedharan and L. A. Osadciw, "Sensor Communication Network using Swarm Intelligence," Department of Electrical Engineering and Computer Science.
[10] R. Singh, D. K. Singh and L. Kumar, "Swarm intelligence based routing in Mobile Ad-Hoc networks," Department of Electronics and Communications Engineering, BRCM College of Engineering and Technology.


Hierarchical Nepali Base Phrase Chunking Using HMM With Error Pruning
Arindam Dey
Department of Computer Science, Assam University Silchar, India
4uarin@gmail.com

Abhijit Paul
Department of Computer Science, Assam University Silchar, India
abhijitpaul16@gmail.com

Bipul Syam Prukayastha
Department of Computer Science, Assam University Silchar, India
bipul_sh@hotmail.com

Abstract

Segmentation of a text into non-overlapping syntactic units (chunks) has become an essential component of many applications of natural language processing. This paper presents a Nepali base phrase chunker that groups syntactically correlated words at different levels using HMM. Rules are used to correct phrases incorrectly chunked by the HMM. For the identification of phrase boundaries, the IOB2 chunk specification is selected and used in this work. To test the performance of the system, a corpus was collected from Nepali news outlets and books. The training and testing datasets were prepared using the 10-fold cross-validation technique. Test results on the corpus showed an average accuracy of 85.31% before applying the rules for error correction and an average accuracy of 93.75% after applying the rules.

Keywords
Nepali Language Processing, Base Phrase Chunking, Partial Parsing

1. INTRODUCTION
Chunking is a natural language processing (NLP) task that focuses on dividing a text into syntactically correlated, non-overlapping and non-exhaustive groups of words, i.e., a word can only be a member of one chunk and not all words are in chunks (Tjong et al., 2000). Chunking is widely used as an intermediate step to parsing with the purpose of improving the performance of the parser. It also helps to identify non-overlapping phrases from a stream of data, which are further used for the development of different NLP applications such as information retrieval, information extraction, named entity recognition, question answering, text mining, text summarization, etc. These NLP tasks consist of recognizing some type of structure which represents linguistic elements of the analysis and their relations. In text chunking the main problem is to divide text into syntactically related, non-overlapping groups of words (chunks).

The main goal of chunking is to divide a text into segments which correspond to certain syntactic units such as noun phrases, verb phrases, prepositional phrases, etc. Abney (1991) introduced the concept of chunk as an intermediate step providing input to further full parsing stages. Thus, chunking can be seen as the basic task in full parsing. Although the detailed information from a full parse is lost, chunking is a valuable process in its own right when the entire grammatical structure produced by a full parse is not required. For example, various studies indicate that the information obtained by chunking or partial parsing is sufficient for information retrieval systems rather than full parsing (Yangarber and Grishman, 1998). Partial syntactic information can also help to solve many NLP tasks, such as text summarization, machine translation and spoken language understanding (Molina and Pla, 2002). For example, Kutlu (2010) stated that finding noun phrases and verb phrases is enough for information retrieval systems. Phrases that give us information about agents, times, places, objects, etc. are more significant than the complete configurational syntactic analysis of a sentence for question answering, information extraction, text mining and automatic summarization.

Chunkers do not necessarily assign every word in the sentence to a higher-level constituent, as full parsers do. They identify simple phrases but do not require that the sentence be represented by a single structure. By contrast, full parsers attempt to discover a single structure which incorporates every word in the sentence. Abney (1995) proposed to divide sentences into labeled, non-overlapping sequences of words based on superficial analysis and local information. In general, many NLP applications require syntactic analysis at various levels, including full parsing and chunking. The chunking level identifies all possible phrases, while full parsing analyzes the phrase structure of a sentence. The choice of syntactic analysis level depends on the speed or accuracy requirements of a specific application. The chunking level is more efficient and faster in terms of processing than full parsing (Thao et al., 2009).
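The chunking task described above, i.e. grouping POS-tagged words into non-overlapping phrases, can be illustrated with a toy pattern-based chunker. The POS patterns used below (NP -> JJ NNP, VP -> PRP NN VM, AdjP -> QTF JJ) follow the phrase examples given in Section 2.1, while the function itself is only an illustrative sketch, not the system described in this paper.

```python
# Toy non-overlapping chunker: greedily match POS-tag patterns
# (longest pattern first) against a POS-tagged sentence.
PATTERNS = [
    (("PRP", "NN", "VM"), "VP"),   # e.g. pronoun + noun + verb
    (("JJ", "NNP"), "NP"),         # e.g. adjective + proper noun
    (("QTF", "JJ"), "AdjP"),       # e.g. quantifier + adjective
]

def chunk(tagged):
    """tagged: list of (word, pos) pairs -> list of (chunk_label, words)."""
    chunks, i = [], 0
    while i < len(tagged):
        for pattern, label in PATTERNS:   # patterns listed longest first
            window = tuple(pos for _, pos in tagged[i:i + len(pattern)])
            if window == pattern:
                chunks.append((label, [w for w, _ in tagged[i:i + len(pattern)]]))
                i += len(pattern)
                break
        else:
            chunks.append(("O", [tagged[i][0]]))  # word outside any chunk
            i += 1
    return chunks

print(chunk([("poor", "JJ"), ("Subhadra", "NNP")]))  # [('NP', ['poor', 'Subhadra'])]
```

Because each matched pattern advances past all of its words, the resulting chunks are non-overlapping by construction, mirroring the definition of chunking given above.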

Chunkers can identify syntactic chunks at different levels of the parser, so a group of chunkers can build a complete parser (Abney, 1995). Most of the parsers developed for languages like English and German use chunkers as components. Brants (1999) used a cascade of Markov model chunkers for obtaining parsing results for the German NEGRA corpus. Today, there are many chunking systems developed for various languages, such as Turkish (Kutlu, 2010), Vietnamese (Thao et al., 2009), Chinese (Xu et al., 2006) and Urdu (Ali and Hussain, 2010). Although Nepali is the working language of Nepal and is also spoken in parts of India, with a population of about 2 million at present, it is still one of the less-resourced languages, with few linguistic tools available for Nepali text processing. This work is aimed at developing a Nepali base phrase chunker that generates base phrases. The remaining part of this paper is organized as follows. Section 2 presents the Nepali language with emphasis on its phrase structure. Nepali base phrase chunking along with error pruning is discussed in Section 3. In Section 4, we present experimental results. Conclusion and future works are highlighted in Section 5. References are provided at the end.

2. LINGUISTIC STRUCTURES OF NEPALI
Nepali or Nepalese (नेऩारी) is an Indo-Aryan language. It is the official language and de facto lingua franca of Nepal and is also spoken in Bhutan. Nepali has official language status in the formerly independent state of Sikkim and in West Bengal's Darjeeling district as well as in Assam. Nepali developed in proximity to a number of Indo-Aryan languages, most notably Pahari and Magahi, and shows Sanskrit influences. However, owing to Nepal's geographical area, the language has also been influenced by Tibeto-Burman. Nepali is mainly differentiated from Central Pahari, both in grammar and vocabulary, by Tibeto-Burman idioms owing to close contact with the respective language group. The Nepali language shares 40% lexical similarity with the Bengali language.

Historically, the language was first called the Khas language (Khas kurā), then Gorkhali or Gurkhali (language of the Gorkha Kingdom) before the term Nepali (Nepālī bhāṣā) was taken from Nepal Bhasa. Other names include Parbatiya ("mountain language", identified with the Parbatiya people of Nepal) and Lhotshammikha (the "southern language" of the Lhotshampa people of Bhutan).

Examples:
Nepali vowels: अ आ इ ई उ ऊ ए ऐ ओ औ अॊ अः
Nepali consonants: क ख ग घ ङ च छ ज झ ञ ट ठ ड ढ ण त थ द ध न ऩ प फ ब भ म य र ऱ ल ळ ऴ व श ज़ ॐ ड़ ढ़ फ़ क ष त्र स श्र
The above are the characters used for writing Nepali. The script is Devanagari.

2.1 Phrasal Categories
Phrases are syntactic structures that consist of one or more words but lack the subject-predicate organization of a clause. These phrases are composed either of only a head word or of other words or phrases combined with the head. The other words or phrases that are combined with the head in phrase construction can be specifiers, modifiers and complements. Yimam (2000) classified Amharic word classes into five types, i.e. nouns, verbs, adverbs, adjectives and prepositions. In line with this classification, Yimam (2000) and Amare (2010) classified the phrase structures of the Nepali language as: noun phrases, verb phrases, adjectival phrases, adverbial phrases and prepositional phrases.

Noun Phrase: A Nepali noun phrase (NP) is a phrase that has a noun as its head. In this phrase construction, the head of the phrase is always found at the end of the phrase. This type of phrase can be made from a single noun or a combination of a noun with other word classes, including the noun word class itself. Example: "बफचायी वुबद्रा" meaning "poor Subhadra".

NOUN PHRASE (NP -> JJ NNP)
         बफचायी   वुबद्रा
POS:     JJ       NNP
CHUNK:   NP (noun phrase)

Verb Phrase: A Nepali verb phrase (VP) is constructed with a verb as head, which is found at the end of the phrase, and other constituents such as complements, modifiers and specifiers. Not all verbs take the same category of complement. Based on this, verbs can be divided into two classes: transitive and intransitive. Transitive verbs take noun phrases as their complement and intransitive verbs do not. Example: "भ बात खान्छु" meaning "I eat rice".

VERB PHRASE (VP -> PRP NN VM)
         भ     बात    खान्छु
POS:     PRP   NN     VM
CHUNK:   VP (verb phrase)

Adjectival Phrase: A Nepali adjectival phrase (AdjP) is constructed with an adjective as head word and other constituents such as complements, modifiers and specifiers. The head word is placed at the end. Example: "फशुत ऩततऩयामणा" meaning "very loyal (to husband)".

ADJECTIVAL PHRASE (AdjP -> QTF JJ)
         फशुत    ऩततऩयामणा
POS:     QTF    JJ
CHUNK:   AdjP (adjective phrase)

Adverbial Phrases: Nepali adverbial phrases (AdvP) are made up of one adverb as head word and one or more other lexical categories, including adverbs themselves, as
modifiers. The head of the AdvP is placed at the end. Unlike other phrases, AdvPs do not take complements. Most of the time, the modifiers of AdvPs are PPs that always come before the adverb. Example: "एक एक गयी शे ये" meaning "He examined one by one".

ADVERBIAL PHRASE (AdvP -> QTC JJ PRP)
         एक एक गयी शे ये
POS:     QTC JJ PRP
CHUNK:   AdvP (adverb phrase)

2.2 Sentence Formation
The Nepali language follows the subject-object-verb grammatical pattern, unlike, for example, the English language, which has a subject-verb-object sequence of words (Yimam, 2000; Amare, 2010). For instance, the Nepali equivalent of the sentence "We never sang a song" is written as "शाभीरे कहशरेऩतन गीत गाएनाॉ".

Nepali sentences can be constructed from a simple or complex NP and a simple or complex VP. Simple sentences are constructed from a simple NP followed by a simple VP which contains only a single verb. The following examples show the various structures of simple sentences.

• पऩटय कहशरेऩतन झगडा गदै न — Peter never quarrels.
• एक ककरोको कतत ऩछ — How much does a kilo cost?

Complex sentences are sentences that contain at least one complex NP or complex VP, or both a complex NP and a complex VP. Complex NPs are phrases that contain at least one embedded sentence in the phrase construction. The embedded sentence can be a complement. The following examples show the various structures of complex Nepali sentences.

• स्लगगको फाटो छे ककन्छ — The path of heaven is blocked.
• उनको आॉवु ऩुतछने थथमो — Her tears would be wiped.

3. BASE PHRASE CHUNKING
3.1 Chunk Representation
The tags of chunks can be noun phrases, verb phrases, adjectival phrases, etc., in line with the construction rules of the language. There are many decisions to be made about where the boundaries of a group should lie and, as a consequence, there are many different "styles" of chunking. There are also different types of chunk tags and chunk boundary identifications. Nevertheless, in order to identify the boundaries of each chunk in sentences, the following boundary types are used (Ramshaw and Marcus, 1995): IOB1, IOB2, IOE1, IOE2, IO, "[", and "]". The first four formats are complete chunk representations which can identify the beginning and ending of phrases, while the last three are partial chunk representations. All boundary types use an "I" tag for words that are inside a phrase and an "O" tag for words that are outside a phrase. They differ in their treatment of chunk-initial and chunk-final words.

IOB1: the first word inside a phrase immediately following another phrase receives a B tag.
IOB2: all phrase-initial words receive a B tag.
IOE1: the final word inside a phrase immediately preceding another phrase of the same type receives an E tag.
IOE2: all phrase-final words receive an E tag.
IO: words inside a phrase receive an I tag, other words receive an O tag.
"[": all phrase-initial words receive a "[" tag, other words receive a "." tag.
"]": all phrase-final words receive a "]" tag, other words receive a "." tag.

In this work, we considered five different kinds of chunks, namely noun phrase (NP), verb phrase (VP), adjective phrase (AdjP), adverb phrase (AdvP) and sentence (S). To identify the chunks, it is necessary to find the positions where a chunk can end and a new chunk can begin. The part-of-speech (POS) tag assigned to every token is used to discover these positions. We used the IOB2 tag set to identify the boundaries of each chunk in sentences extracted from chunk-tagged text. Using the IOB2 tag set along with the chunk types considered, a total of 11 phrase tags were used in this work. These are: B-NP, I-NP, B-VP, I-VP, B-ADJP, I-ADJP, B-ADVP, I-ADVP, B-S, I-S and O. The following is an example of a chunk-tagged sentence.

CHUNK   एक       एक       गरी     हे रे
IOB1    B-ADVP   I-ADVP   I-VP    O
IOB2    I-ADVP   B-ADVP   I-VP    O
IOE1    I-ADVP   I-ADVP   I-VP    O
IOE2    I-ADVP   I-ADVP   I-VP    O
IO      I-ADVP   I-ADVP   I-VP    O
[       [        .        .       .
]       .        .        .       ]

Table 1: Chunk representation for the sentence "एक एक गयी शे ये".

3.2 Architecture Of The Chunker
To implement the chunker component, we used a hidden Markov model (HMM) enhanced by a set of rules to prune errors. The HMM part has two phases: the training phase and the testing phase. In the training phase, the system first accepts words with POS tags and chunk tags. Then, the HMM is trained with this training set. Likewise, in the test phase, the system accepts words with POS tags and outputs appropriate chunk tag sequences against each POS tag using the HMM model. Fig. 1 illustrates the workflow of the chunking process. In this work, chunking is treated as a
tagging problem. We use a POS-tagged sentence as input, from which we observe a sequence of POS tags, represented as T. We further hypothesize that the corresponding sequence of chunk tags has hidden Markovian properties. Thus, we used a hidden Markov model (HMM) with POS tags serving as states. The HMM model is trained with sequences of POS tags and chunk tags extracted from the training corpus. The HMM model is then used to predict the sequence of chunk tags C for a given sequence of POS tags T. This problem corresponds to finding C that maximizes the probability P(C|T), which is formulated as:

C' = argmax_C P(C | T)    (1)

where C' is the optimal chunk sequence. By applying Bayes' rule, Equation (1) yields:

C' = argmax_C P(T | C) * P(C)    (2)

which is in fact a decoding problem that is solved by making use of the Viterbi algorithm. The output of the decoder is the sequence of chunk tags which groups words based on syntactic correlations. The output chunk sequence is then analyzed to improve the result by applying linguistic rules derived from the grammar of Nepali. For a given word w, linguistic rules (of which sample rules are shown in Algorithm 1) were used to correct wrongly chunked words ("w-1" and "w+1" are used to mean the previous and the next word, respectively).

Word, POS tag and chunk sequences → HMM Model; word and POS tag sequences → Chunking Model → chunk tag sequence → Error pruning with rules

Fig. 1: Workflow of the chunking process.
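The decoding problem of Equation (2) can be sketched with a minimal Viterbi implementation over chunk-tag states. The two-tag model and all probabilities below are illustrative assumptions, not values from the trained HMM described in this paper.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable chunk-tag sequence for a POS observation sequence,
    maximizing P(T|C) * P(C) as in Equation (2), computed in log space."""
    # V[t][s] = (best log score of a path ending in state s at step t, path)
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), [s])
          for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prev = max(states,
                       key=lambda p: V[t - 1][p][0] + math.log(trans_p[p][s]))
            score = (V[t - 1][prev][0] + math.log(trans_p[prev][s])
                     + math.log(emit_p[s][obs[t]]))
            V[t][s] = (score, V[t - 1][prev][1] + [s])
    return max(V[-1].values(), key=lambda v: v[0])[1]

# Toy two-tag model; every probability below is made up for illustration.
states = ["B-NP", "I-NP"]
start = {"B-NP": 0.9, "I-NP": 0.1}
trans = {"B-NP": {"B-NP": 0.2, "I-NP": 0.8},
         "I-NP": {"B-NP": 0.5, "I-NP": 0.5}}
emit = {"B-NP": {"JJ": 0.7, "NN": 0.3},
        "I-NP": {"JJ": 0.2, "NN": 0.8}}
print(viterbi(["JJ", "NN"], states, start, trans, emit))  # ['B-NP', 'I-NP']
```

Working in log space avoids numerical underflow when the product of many small probabilities is maximized, which is the usual practice for Viterbi decoding.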

4. EXPERIMENT
4.1 The Corpus
The major source of the dataset we used for training and testing the system was a Nepali news corpus which is at present widely used for research on Nepali natural language processing. The corpus contains 1000 sentences where words are annotated with POS tags. Furthermore, we also collected additional text from a Nepali grammar book authored by Banu Oja and Shambhu Oja (2004). The sentences in the corpus are divided into a training data set and a testing data set using the 10-fold cross-validation technique.

1. If POS(w)=ADJ and POS(w+1)=NPREP,NUMCR, then the chunk tag for w is O;
2. If POS(w)=ADJ and POS(w-1)!=ADJ and POS(w+1)=AUX,V, then the chunk tag for w is B-VP;
3. If POS(w)=NPREP and POS(w+1)=N, then the chunk tag for w is B-NP;
4. If POS(w)=NUMCR and POS(w+1)=NPREP, then the chunk tag for w is O;
5. If POS(w)=N and POS(w+1)=VPREP and POS(w-1)=N,ADJ,PRON,NPREP, then the chunk tag for w is B-VP;
6. If POS(w)=ADJ and POS(w+1)=ADJ, then the chunk tag for w is B-ADJP.

Algorithm 1: Sample rules used to prune chunk errors.

4.2 Test Results
In 10-fold cross-validation, the original sample is randomly partitioned into 10 equal-size subsamples. Of the 10 subsamples, a single subsample is used as the validation data for testing the model, and the remaining 9 subsamples are used as training data. The cross-validation process is then repeated 10 times, with each of the 10 subsamples used exactly once as the validation data. Accordingly, we obtain 10 results from the folds, which can be averaged to produce a single estimate of the model's predictive potential. By taking the average of all ten results, the overall chunking accuracy of the system is presented in Table 2.

Chunking Model            Accuracy
HMM                       85.31%
HMM pruned with rules     93.75%

Table 2: Test results for the Nepali base phrase chunker.
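The 10-fold procedure described above can be sketched as follows; the corpus size of 1000 sentences matches Section 4.1, while the helper function itself is our own illustration.

```python
import random

def ten_fold_splits(n, seed=0):
    """Yield (train, test) index lists: each of 10 equal folds serves
    exactly once as validation data, the remaining 9 as training data."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)           # random partition
    folds = [idx[k::10] for k in range(10)]    # 10 (near-)equal folds
    for k in range(10):
        test = folds[k]
        train = [i for j in range(10) if j != k for i in folds[j]]
        yield train, test

splits = list(ten_fold_splits(1000))           # 1000 sentences, as in 4.1
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 10 900 100
```

Averaging the 10 per-fold accuracies then gives a single estimate, as reported in Table 2.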

5. CONCLUSION AND FUTURE WORKS
Nepali is one of the most morphologically complex and less-resourced languages. This complexity poses difficulties in the development of natural language processing applications for the language. Despite the efforts being undertaken to develop various Nepali NLP applications, only a few usable tools are publicly available at present. One of the main reasons frequently cited by researchers is the morphological complexity of the language. Nepali text parsing also suffers from this problem. However, not all Nepali natural language processing applications require full parsing. In this work, we tried to overcome this problem by employing a chunker. Chunking appears to be a more manageable problem than parsing because the chunker does not require deeper analysis of texts and is therefore less affected by the morphological complexity of the language. Thus, future work is recommended to be directed at improving the chunker and using this component to develop Nepali natural language processing applications that do not rely on deeper analysis of linguistic structures.

REFERENCES
[1] Abney, S. (1991). Parsing by chunks. In: Berwick, R., Abney, S. and Tenny, C. (eds.), Principle-Based Parsing. Kluwer Academic Publishers.
[2] Abney, S. (1995). Chunks and dependencies: Bringing processing evidence to bear on syntax. In: Computational Linguistics and the Foundations of Linguistic Theory. CSLI.
[3] Ali, W. and Hussain, S. (2010). A hybrid approach to Urdu verb phrase chunking. In: Proceedings of the 8th Workshop on Asian Language Resources (ALR-8), COLING-2010, Beijing, China.
[4] Brants, T. (1999). Cascaded Markov models. In: Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics (EACL-99), Bergen, Norway.
[5] Kutlu, M. (2010). Noun phrase chunker for Turkish using dependency parser. Doctoral dissertation, Bilkent University.
[6] Lewis, P., Simons, F. and Fennig, D. (2013). Ethnologue: Languages of the World, Seventeenth edition. Dallas, Texas: SIL International.
[7] Molina, A. and Pla, F. (2002). Shallow parsing using specialized HMMs. The Journal of Machine Learning Research, 2, pp. 595-613.
[8] Ramshaw, A. and Marcus, P. (1995). Text chunking using transformation-based learning. In: Proceedings of the Third ACL Workshop on Very Large Corpora, pp. 82-94.
[9] Thao, H., Thai, P., Minh, N. and Thuy, Q. (2009). Vietnamese noun phrase chunking based on conditional random fields. In: International Conference on Knowledge and Systems Engineering (KSE'09), pp. 172-178.
[10] Tjong, E. F., Sang, K. and Buchholz, S. (2000). Introduction to the CoNLL-2000 shared task: Chunking. In: Proceedings of the 2nd Workshop on Learning Language in Logic and the 4th Conference on Computational Natural Language Learning, Volume 7, pp. 127-132.
[11] Xu, F., Zong, C. and Zhao, J. (2006). A hybrid approach to Chinese base noun phrase chunking. In: Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, Sydney.
[12] Yangarber, R. and Grishman, R. (1998). NYU: Description of the Proteus/PET system as used for MUC-7. In: Proceedings of the Seventh Message Understanding Conference (MUC-7), Washington, DC.
[13] Dey, A. and Purkayastha, B. S. (2013). Named entity recognition using gazetteer method and n-gram technique for an inflectional language: A hybrid approach. International Journal of Computer Applications (IJCA), Vol. 84.
[14] Nayan, A., Rao, B. R. K., Singh, P., Sanyal, S. and Sanyal, R. (2008). Named entity recognition for Indian languages. In: Proceedings of the IJCNLP-08 Workshop on NER for South and South East Asian Languages, Hyderabad, India, pp. 97-104.
[15] Saha, S. K., Chatterji, S. and Dandapat, S. A hybrid approach for named entity recognition in Indian languages.
[16] Ekbal, A., Haque, R., Das, A., Poka, V. and Bandyopadhyay, S. (2008). Language independent named entity recognition in Indian languages. In: Proceedings of the IJCNLP-08 Workshop on NER for South and South East Asian Languages, Hyderabad, India, pp. 33-40.
[17] Srikanth, P. and Murthy, K. (2008). Named entity recognition for Telugu. In: Workshop on NER for South and South East Asian Languages, IJCNLP 2008.
[18] Gupta, P. K. and Arora, S. (2009). An approach for named entity recognition system for Hindi: An experimental study. In: Proceedings of ASCNT-2009, CDAC, Noida, India, pp. 103-108.
[19] Saha, S. K., Sarkar, S. and Mitra, P. (2008). A hybrid feature set based maximum entropy Hindi named entity recognition. In: Proceedings of the 3rd International Joint Conference on NLP, Hyderabad, India.
[20] Mustafa, S. H. and Al-Radaideh, Q. A. (2004). Using N-grams for Arabic text searching. Journal of the American Society for Information Science and Technology.
[21] Saha, S. K., Sarkar, S. and Mitra, P. Gazetteer preparation for named entity recognition in Indian languages. Available at: http://www.aclweb.org/anthology-new/I/I08/I08-7002.pdf
[22] Srivastava, S., Sanglikar, M. and Kothari, D. C. (2011). Named entity recognition system for Hindi language: A hybrid approach. International Journal of Computational Linguistics (IJCL), Volume 2, Issue 1.
[23] Li, W. and McCallum, A. (2003). Rapid development of Hindi named entity recognition using conditional random fields and feature induction. ACM Transactions on Computational Logic.


Cloud Computing: A Review on Security and Safety Measures

Sandeep Kapur, Research Scholar, Guru Kashi University, Talwandi Sabo, sandeep.kapur82@gmail.com
Dr. Sandeep Kautish, Dean Engineering, Guru Kashi University, Talwandi Sabo, dr.skautish@gmail.com

ABSTRACT
Cloud computing is an emerging paradigm of computing. It is internet-based computing, whereby shared resources, software and information are provided to computers and devices on demand. This powerful new computing approach is a byproduct of grid computing, distributed computing and parallel computing. The cloud has the advantage of reducing cost by sharing computing and storage resources, combined with an on-demand provisioning mechanism relying on a pay-per-use business model. The major issue hindering the growth in popularity of cloud computing is cloud security, and there are numerous cloud security issues. This research article provides an overview of the security challenges pertinent to cloud computing and points out considerations for safety measures organizations should take when outsourcing data and applications. It also explores safety solutions for providing a trustworthy cloud computing environment.

Keywords
Cloud computing, security, threat, safety measures, pay-per-use, byproduct.

1. INTRODUCTION
Cloud computing is not a totally new concept; it originated from earlier large-scale distributed computing technology. Cloud computing will be the third revolution in the IT industry, representing the development trend of the IT industry from hardware to software, software to services, and distributed service to centralized service. The core concept of cloud computing is reducing the processing burden on the user's terminal by constantly improving the handling ability of the "cloud". Cloud resources include hardware and systems software in remote datacenters, as well as services based upon these that are accessed through a simple Internet connection using a standard browser. However, there still exist many problems in cloud computing, and security risks have become the primary concern for people considering a shift to cloud computing.

1.1 Definition
The most widely accepted definition of cloud computing is the one given by NIST (National Institute of Standards and Technology, US): cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [8]. This cloud model is composed of five essential characteristics, three service models, and four deployment models.

In the commercial sector, Amazon.com was one of the first vendors to provide storage space, computing resources and business functionality following the cloud computing model. In 2006, they launched Elastic Compute Cloud (EC2), which allowed companies and individuals to rent computers to run their own enterprise applications and services. Salesforce.com, in 1999, pioneered the concept of delivering enterprise applications as cloud-based services to enterprises. The number of cloud providers is increasing at such a rate that Gartner listed cloud computing as number one in its top ten strategic technology areas for 2010 [5].

The rest of the paper is organized as follows. Section II outlines the benefits that cloud computing offers, its service models and its deployment models. Section III describes the information security principles, security threats and issues, and Section IV discusses the safety measures with respect to location, cost and security of data in the cloud. The last section presents a brief conclusion.

2. CLOUD COMPUTING
2.1 Benefits
The essential features of this latest paradigm include [9]:
• On-demand self-service: to enable consumers to use cloud provisions as and when required by business demands.
• Resource pooling: to allow dynamically assigned computing resources to serve multiple consumers through the use of virtualization technologies.
• Rapid elasticity and scaling: to allow cloud services, resources and infrastructures to be automatically provisioned as business requirements change.
• Measured provision: to provide a metering capability to determine on-demand usage for billing purposes.
• Effective management: to provide and facilitate easy monitoring, controlling and reporting.
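The measured-provision feature above amounts to metering resource consumption and charging for it on a pay-per-use basis. A minimal sketch of such metered billing follows; the resource names and unit rates are hypothetical, not taken from any provider's price list.

```python
# Hypothetical pay-per-use bill: sum of (metered quantity * unit rate).
RATES = {"cpu_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.09}

def monthly_bill(usage):
    """usage: metered quantities per resource, e.g. {'cpu_hours': 720}."""
    return round(sum(RATES[r] * q for r, q in usage.items()), 2)

print(monthly_bill({"cpu_hours": 720, "gb_stored": 50, "gb_transferred": 10}))  # 37.9
```

The consumer pays only for what the meter records, which is what distinguishes this model from fixed-capacity provisioning.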


2.2 Service Model
• Software-as-a-Service (SaaS): refers to prebuilt pieces of software or complete applications (e.g. an email system, human resource management, payroll processing) deployed over the internet. This is a "pay-per-use" model and was initially deployed for salesforce automation and Customer Relationship Management (CRM).
• Platform-as-a-Service (PaaS): provides a development environment as a service. Consumers can use the provider's equipment to develop their own programs and deliver them to users through the Internet and servers, e.g. application servers, portal servers and middleware. Consumers use these to build and deploy their own applications.
• Infrastructure-as-a-Service (IaaS): delivers a platform virtualization environment as a service. Rather than purchasing servers, software, data center space or network equipment, clients instead buy those resources as a fully outsourced service.
• Hardware-as-a-Service (HaaS): according to Nicholas Carr [7], "the idea of buying IT hardware or even an entire data center as a pay-per-use subscription service that scales up or down to meet your needs". As a result of rapid advances in hardware virtualization, IT automation, and usage metering and pricing, this model is advantageous to enterprise users, since they do not need to invest in building and managing data centers.

2.3 Deployment Model
• Public cloud: in public clouds, multiple customers share the computing resources provided by a single service provider; customers can quickly access these resources and pay only for the resources they use. Although the public cloud has compelling advantages, there exist hidden dangers regarding security, regulatory compliance and quality of service (QoS). The resources may be offered free (e.g. Facebook and YouTube provisions) or at a cost; consumers are charged only for the resources they use, following a pay-per-use model.
• Private cloud: in the private cloud, computing resources are used and controlled by an exclusive private enterprise. It is generally deployed in the enterprise's data center and managed by internal personnel or a service provider. The main advantage of this model is that the security, compliance and QoS are under the control of the enterprise [13]. In these clouds, data is much more secure than if it were held in a public cloud.
• Hybrid cloud: a third type is the hybrid cloud, a typical combination of public and private clouds. It enables the enterprise to run steady-state workloads in the private cloud and to ask the public cloud for intensive computing resources when a peak workload occurs, then return them if no longer needed. For mission-critical processes, this type of cloud infrastructure is much more effective because of the enhanced control and management by the enterprise itself.
• Community cloud: community clouds are similar to public clouds except that their access is limited to a specific community of cloud consumers. These are semiprivate clouds in that they are used by a defined group of tenants (consumers) with shared backgrounds and requirements. Several organizations jointly construct and share the same cloud infrastructure as well as policies, requirements, values, and concerns. The cloud community achieves a degree of economic scalability and democratic equilibrium. The cloud infrastructure could be hosted by a third-party vendor or within one of the organizations in the community.

Figure 1. Cloud Model

3. DATA SECURITY ISSUES
Security issues are treated as vital challenges of cloud computing [9]. The cloud is expected to offer capabilities including: a trusted encryption scheme to ensure a safe data-storage environment; stringent access control; and safe and stable backup of user data. However, the cloud allows users to obtain computing power which exceeds that of their own physical domain. This leads to several security problems.

3.1 Information Security Principles (CIA)
• Confidentiality means keeping personal information secret. It ensures that data residing in the cloud cannot be accessed by unauthorized users. Various encryption techniques can be used, with symmetric or asymmetric encryption algorithms. Encrypting data with passwords and biometric verification can be a simple solution for this. It all depends on what type of techniques the cloud service provider is using. For instance, in [15], MozyEnterprise uses encryption techniques to protect customer data whereas Amazon S3 does not. It also depends on customer awareness, as customers can encrypt their information prior to


uploading it. Also, the CSP should ensure proper deployment of encryption standards, such as the NIST standards in [2]; this helps to develop trust between the communicating parties.

• Integrity is the assurance that information is trustworthy and accurate. Data integrity in the cloud means preserving information so that it is not lost or modified by unauthorized users. There are two main approaches to providing integrity: the Message Authentication Code (MAC) and the Digital Signature (DS). A MAC is a cryptographic checksum on data that uses a session key to detect both accidental and intentional modification of the data, whereas the DS algorithm depends on a public-key structure (a public and private key pair). Since symmetric algorithms are much faster than asymmetric ones, we believe a MAC is the best solution for the integrity-checking mechanism. Studies show that PaaS and SaaS do not provide any integrity protection; in such cases, assuring the integrity of data is essential.

Figure 2. Information security principles (the CIA triad: Confidentiality, Integrity, Availability)

• Availability is a guarantee of ready access to information by authorized people. The most powerful technique is prevention, i.e., avoiding threats that affect the availability of the service or data. Threats targeting availability are very difficult to detect; they can be either network-based attacks, such as Distributed Denial of Service (DDoS) attacks, or CSP availability failures. For example, Amazon S3 suffered a two-and-a-half-hour outage in February 2008 and an eight-hour outage in July 2008.

3.2 Security Threats

• Spoofing identity: An ill practice in which a message is sent from an unknown source masquerading as a source known to the receiver. There are numerous types of spoofing, but the type most relevant to the cloud is the IP spoof.

• Tampering with data: The deliberate destruction or manipulation of data. Data can be tampered with whether it is stored physically or in communication mode.

• Repudiation: This threat occurs when proper measures and controls are not taken to track and log users' actions, allowing malicious manipulation. These attacks destroy data stored in log files, render it invalid, and change the authoring information of actions executed by a malicious user.

• Information disclosure: Attacks designed to acquire system-specific information about a web site, such as the software distribution, version numbers, and patch levels, or the location of backup and temporary files.

• Denial of service: DoS attacks occur when a system is flooded with traffic to the point that it is unable to process legitimate service requests, halting the availability and reliability of applications. Predicting and removing DoS attacks is very difficult; firewalls, bandwidth throttling, and resource throttling can be used to control them.

• Elevation of privilege: Occurs when a user obtains privileged access to portions of the application or data that are normally inaccessible to that user.

3.3 Security Issues

• Privileged user access: Information transmitted from the client through the Internet poses a certain degree of risk because of issues of data ownership; enterprises should get to know their providers and their regulations as much as possible before assigning even trivial applications. Confidential data may be illegally accessed when access control is not stringent, and unauthorized access may occur if the security mechanism is inadequate. Entities in the service chain can easily exploit such vulnerabilities to access users' data, and since data usually resides in the cloud for a long time, the risk of illegal access is higher.

• Regulatory compliance: Clients are accountable for the security of their solution, and they can choose between providers that allow themselves to be audited by third-party organizations that check security levels. Numerous regulations pertaining to the storage and use of data require regular reporting and audit trails; cloud


providers must enable their customers to comply appropriately with these regulations. Managing Compliance and Security for Cloud Computing provides insight on how a top-down view of all IT resources within a cloud-based location can deliver stronger management and enforcement of compliance policies. In addition to the requirements to which customers are subject, the data centers maintained by cloud providers may also be subject to compliance requirements.

• Loss of control: Depending on contracts, some clients might never know in what country or jurisdiction their data is located. When organizations port their data or services to the cloud, they are not aware of the location of their data and services, since the provider can host them anywhere within the cloud. This poses a serious concern from a user perspective: organizations as well as users lose control over their vital data and are not aware of any security mechanisms put in place by the provider.

• Data transfer across borders: A global company that wishes to take advantage of services hosted on cloud computing systems has to make clear which countries host its private data, and whose laws therefore govern that data. For example, a US company will want to know where the personal data of its employees and its business information will be located, so that it knows which specific laws apply to its private data. A German subsidiary may not oppose using cloud services provided in Argentina, but it will object to the transfer of its data to Turkey, Mexico, or the United States. Knowing where the cloud service provider will host the data is a prerequisite.

• Dynamic provisioning: It is not clear which party is responsible (statutorily or contractually) for ensuring that legal requirements for personal information are observed, or that appropriate data-handling standards are set and followed [5]. Neither is it yet clear to what extent cloud sub-contractors involved in processing can be properly identified, checked, and ascertained as trustworthy, particularly in a dynamic environment. It is also unclear what rights in the data will be acquired by data processors and their sub-contractors, and whether these are transferable to other third parties upon bankruptcy, takeover, or merger [6].

• Data sanitization: Sanitization is the removal of sensitive data from a storage device in various situations, such as when a storage device is removed from service or moved elsewhere to be stored [16]. It also applies to backup copies made for recovery and restoration of service, and to residual data remaining upon termination of service. In a cloud computing environment, data from one subscriber is physically commingled with the data of other subscribers, which can complicate matters. For example, with the proper skills and equipment, it is possible to recover data from failed drives that are not disposed of properly by service providers.

• Data segregation: Encrypted information from multiple companies may be stored on the same hard disk, so a mechanism to separate data should be deployed by the service provider.

• Recovery: Every provider should have a disaster recovery protocol to protect and secure user data.

• Investigative support: If a client suspects faulty activity by the provider, it may not have many legal ways to pursue an investigation. Cloud computing systems (including the applications and services hosted on them) have significant implications for the privacy of personal information as well as for the confidentiality of business and governmental information. Furthermore, the entire contents of a user's data originally stored on a local device may be shifted to a single cloud provider or even to many cloud providers. Whenever an individual, a business, a government agency, or another entity shares information in the cloud, privacy or confidentiality questions may arise.

• Long-term viability: Refers to the ability to retract a contract and all data if the current provider is bought out by another firm. Customers should be sure that the data they put into the cloud will never become invalid, even if the cloud computing provider goes broke or is acquired and swallowed up by a larger company.

• Audit: CSPs need to implement internal monitoring controls in addition to an external audit process. Transactions involving data that resides in the cloud need to be properly made and recorded in order to ensure data integrity, and the data owner needs to be able to trust that no untraceable action has taken place in the environment. However, provision of a full audit trail within the cloud, particularly in public cloud models, is still an unsolved issue.

• Lack of standardization of policy integration: The cloud is heterogeneous, which means that different cloud servers may have different mechanisms to ensure clients' data security; thus policy integration is a concern. If not addressed properly, security breaches may occur. The problem between Google, Amazon, and LoadStorm is one example.

• Unclear responsibility: It is sometimes not clear which CSP is responsible for the security of data, or who can use and modify user data. Users are also concerned about whether one party's data-processing rights can be transferred to another third party under any circumstances.

• Data loss: When organizations migrate their data to the cloud, they expect the same level of data integrity and safety as on their own premises, but in a multi-tenant environment unauthorized parties may try to gain access to sensitive data. Deletion or alteration of records without a backup


of the original content is an obvious example. Insufficient authentication, authorization, and accounting controls; inconsistent use of encryption and encryption keys; operational failures; political issues; and data center reliability are the biggest factors responsible, directly and indirectly, for data loss.

4. SAFETY MEASURES

• The prime requirement for a safety measure is to approach the right cloud service provider. Different vendors have different cloud IT security and data management. A cloud vendor should have standards and regulations along with analytical and technical skill and experience, so that there is little chance of the cloud vendor closing down.

• There should be a clear contract with the cloud vendor, so that if the vendor closes before the contract ends, the enterprise can claim.

• Cloud vendors should provide very good recovery facilities, so that if data are fragmented or lost due to certain issues, they can be recovered and continuity of data can be maintained.

• The enterprise must have a very good infrastructure that facilitates installation and configuration of hardware components such as firewalls, routers, servers, and proxy servers, and it should be able to defend against cyber attacks.

• Cloud users should have some mechanism to encrypt data before storing it on cloud premises for security purposes.

• For encryption of data, a strategy must be defined and elaborated.

• Applications should have built-in security mechanisms to avoid buffer overflows and attacks.

• A multi-layer security approach should be implemented to counter the threats.

• Against insider attacks, antivirus software and firewalls should be used properly.

• Ensure multi-tenant systems are well isolated.

• Cloud customers (organizations, businesses) should have a sound understanding of security processes and of the SLAs with the provider. This helps remove any discrepancies and creates a symbiotic relationship between provider and customer.

• There should be total analysis of data. An approach such as charting the flow of data can be used by managers so they know where the data is at all times, where it is being stored, and where it is being shared.

5. CONCLUSION

Cloud computing is an emerging style of computing with great promise; it is expected to change and revolutionize computing as we know it. Cloud technologies, if used appropriately, can help to reduce costs, reduce management responsibilities, and increase the agility and efficiency of organizations. However, one must be very careful to understand the security risks and challenges posed in utilizing this technology. There is no silver bullet to counter the threats to the distributed or cloud computing model, but it is possible with the right security strategy and multiple layers of security. Both the service providers and the clients must work together to ensure safety measures and the security of the cloud and of the data on clouds. Mutual understanding between service providers and users is extremely necessary for providing better cloud security. Cloud computing is an environment that makes supercomputing available in a very cost-effective way.

REFERENCES

[1] A. Verma and S. Kaushal, “Cloud Computing Security Issues and Challenges: A Survey”, Proceedings of Advances in Computing and Communications, Vol. 193, pp. 445-454, 2011. DOI: 10.1007/978-3-642-22726-4_46

[2] Cloud Security Alliance, “Top Threats to Cloud Computing”, v1.0, March 2010.

[3] Caroline Kvitka, Clouds Bring Agility to the Enterprise, [Online] Available at: http://www.oracle.com/technology/oramag/oracle/10-mar/o20interview.html

[4] Cloud Security Alliance, “Security Guidance for Critical Areas of Focus in Cloud Computing”, 2009.

[5] Dustin Amrhein & Scott Quint, Cloud Computing for the Enterprise: Part 1: Capturing the Cloud, DeveloperWorks, IBM, [Online] Available at: www.ibm.com/developerworks/websphere/techjournal/0904_amrhein/0904_amrhein.html

[6] Gellman, R., Privacy in the Clouds: Risks to Privacy and Confidentiality from Cloud Computing, World Privacy Forum, http://www.worldprivacyforum.org/pdf/WPF_Cloud_Privacy_Report.pdf, 2009.

[7] Nicholas Carr's Blog, Here comes HaaS, http://www.roughtype.com/archives/2006/03/here_comes_haas.php, 2006.

[8] Peter Mell and Tim Grance, The NIST Definition of Cloud Computing, version 15, National Institute of Standards and Technology (NIST), Information Technology Laboratory, www.csrc.nist.gov, 7 Oct 2009.

[9] Peter Mell, Timothy Grance, The NIST Definition of Cloud Computing (Draft), NIST, 2011.

[10] Peter Mell (2012), “What’s special about cloud security”, IEEE Computer Society, 1520-9202/12.

[11] Ramgovind, S., M. Eloff, et al. (2010), The Management of Security in Cloud Computing, IEEE.

[12] Stephen Shankland, Brace yourself for Cloud


Computing, CNET News, Oct 2009, http://news.cnet.com/8301-30685_3-10378782-264.html

[13] Tharam Dillon, Chen Wu, Elizabeth Chang, “Cloud Computing: Issues and Challenges”, 2010 24th IEEE International Conference on Advanced Information Networking and Applications.

[14] T. Mather, S. Kumaraswamy and S. Latif, “Cloud Security and Privacy”, O’Reilly, ISBN: 978-0-596-80276-9, 2009.

[15] V. Krishna Reddy (2011), “Security Architecture of Cloud Computing”, International Journal of Engineering Science and Technology (IJEST), ISSN: 0975-5462, Vol. 3, pp. 7149-7155, 2011.

[16] Wayne A. Jansen (2011), “Cloud Hooks: Security and Privacy Issues in Cloud Computing”, The 44th Hawaii International Conference on System Sciences, 2011.
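The MAC-based integrity check recommended in Section 3.1 can be illustrated with Python's standard hmac module. This is only a sketch of the idea, not part of the paper: the session key and the stored record are placeholder values, assumed to be shared between client and provider.

```python
import hmac
import hashlib

def make_tag(session_key: bytes, data: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the data before uploading it."""
    return hmac.new(session_key, data, hashlib.sha256).digest()

def verify(session_key: bytes, data: bytes, tag: bytes) -> bool:
    """Recompute the tag after download and compare in constant time."""
    return hmac.compare_digest(make_tag(session_key, data), tag)

key = b"shared-session-key"   # hypothetical key agreed between client and CSP
record = b"customer record v1"

tag = make_tag(key, record)
assert verify(key, record, tag)             # unmodified data passes
assert not verify(key, record + b"x", tag)  # any modification is detected
```

Because HMAC relies on a fast symmetric primitive, the same check scales to large objects, which is the speed argument the paper makes for preferring a MAC over a digital signature.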


A Survey on Multiprotocol Label Switching Virtual Private Network Techniques (MPLS VPN)

Gurwinder Singh, BGIET, Sangrur, er.gurwindertoor@gmail.com
Manuraj Moudgil, BGIET, Sangrur, manu.moudgil@gmail.com

ABSTRACT
MPLS is a technology used for fast packet forwarding within service provider networks. With L2VPNs, customer sites behave as if they are on the same LAN, while for L3VPNs the Customer Edge router creates a Layer 3 neighborship with the Provider Edge router. This paper focuses on three techniques: Virtual Private LAN Service (VPLS), Virtual Private Wire Service (VPWS), and Ethernet VPN & Provider Backbone Bridging-EVPN (EVPN & PBB-EVPN). It gives an overview of Multiprotocol Label Switching and of Layer 2 and Layer 3 MPLS VPN technologies.

Keywords:
MPLS, LDP, VRF, Layer 3 MPLS VPN, RD, RT, VPLS, VPWS, EVPN.

1. INTRODUCTION
1.1 MPLS

Multiprotocol Label Switching (MPLS) is a label switching technology that uses labels attached to packets to forward them through the network. MPLS labels are advertised between routers, creating a label-to-label mapping. It is the method of packet forwarding used in service provider network environments. Label distribution protocols are used to exchange labels between routers; they include the Label Distribution Protocol (LDP), the Resource Reservation Protocol (RSVP), and Multi-Protocol BGP (MP-BGP). LDP is the most widely used protocol for exchanging labels. LDP labels are only assigned to non-BGP routes in the Routing Information Base (RIB); MP-BGP is used to distribute the label bindings for BGP routes in the RIB, and RSVP is used to distribute the label bindings for Traffic Engineering (TE).

Fig 1.1 Label Headers [1]

With its ability to forward traffic on the basis of labels instead of the destination IP address, MPLS eliminates the use of the Border Gateway Protocol (BGP) in the core service provider routers, but its greatest advantage is its ability to create Virtual Private Networks. MPLS can create both Layer 2 and Layer 3 VPNs. Apart from VPNs, its other benefits include Traffic Engineering, optimal traffic flow, better IP-over-ATM integration, and the use of one unified network infrastructure. MPLS is one of the big things to happen to the network industry in the 21st century, and around 14 years after its first standard paper (IETF RFC 3031) it is still growing, with the BGP MPLS-based Ethernet VPN standard paper published in February 2015. MPLS is everywhere in networks: almost all service providers have their backbone network on MPLS, data centers are interconnected using L2 MPLS technologies, and enterprises use MPLS services to connect their offices at remote locations.

Fig 1.2 The AT&T global backbone network includes: MPLS-based services available to 163 countries over 3,800 service nodes, 290,000+ managed MPLS ports for customers, 38 Internet data centers across the globe, and 928,000 worldwide fiber route miles. [2]

1.2 Layer 3 MPLS VPN

MPLS Layer 3 VPN interconnects customer sites over a service provider core. It is a peer-to-peer VPN model: the customer creates a Layer 3 neighborship with the service provider. Labels are imposed on the customer's IP routes when they enter from the customer edge (CE) to the provider edge (PE) device, and labels are disposed of when the customer IP traffic is sent from the provider edge to the customer edge.

Terminology for MPLS L3 VPN is given below:

• Label - A 4-byte identifier, used by MPLS to make forwarding decisions.
• CE Router - Customer Edge Router, a non-MPLS client/site router connected to the MPLS network.


• P Router - Provider Router, an LSR in MPLS VPN terminology.
• PE Router - Provider Edge Router, an edge-LSR in MPLS VPN terminology.
• LSP - Label Switch Path, a series of LSRs that forward labeled packets to their destinations (unidirectional).
• Ingress PE Router - The edge-LSR an IP packet arrives at from a CE router before being labeled and forwarded to the egress PE router.
• Egress PE Router - The edge-LSR where the destination route is connected; it receives labeled packets and forwards IP packets.

- Virtual Routing and Forwarding (VRF) is a technology used with MPLS Layer 3 VPNs that allows multiple routing tables in a router. Every routing-table instance is specific to a customer, providing an isolated environment between different clients even if they have the same address space. Each VRF instance has a separate RIB, FIB, and LFIB table.

- A Route Distinguisher (RD) is used with a VRF to uniquely identify a route. It is a 64-bit value attached to a client's non-unique 32-bit address in order to produce a unique 96-bit VPNv4 address. VPN routes are forwarded across an MPLS VPN network by MP-BGP, which requires transported routes to be unique.

- A Route Target (RT) is a 64-bit extended BGP community attached to a VPNv4 route to indicate its VPN membership.

• Export RTs are attached to a route when it is converted into a VPNv4 route and are used to identify the VPN membership of routes.
• Import RTs are used to select VPNv4 routes for insertion into matching VRF tables.

Fig 1.3 - Route Propagation in L3 MPLS VPN [3]

1.3 Layer 2 MPLS VPN

L2VPNs (Layer 2 VPNs) provide a transparent end-to-end Layer 2 connection to an enterprise over an SP's (Service Provider) MPLS or IP core. Client sites behave as if they are connected via a switch. Traffic is forwarded from the CE switch or router to the PE switch in Layer 2 format; it is carried by MPLS over the service provider network and converted back to Layer 2 format at the receiving site.

- Unlike L3VPNs, where the SP takes part in the client routing, with L2VPNs the SP has no involvement in the client IP routing.

- Client Layer 2 traffic is tunneled through the IP/MPLS core network, such that the CE routers appear to be directly connected.

2. LITERATURE REVIEW

E. Rosen of Cisco Systems, A. Viswanathan of Force10 Networks, and R. Callon of Juniper Networks (2001) [4], in Internet Engineering Task Force (IETF) RFC 3031, specify the architecture of Multiprotocol Label Switching (MPLS). It is the first standard document on MPLS by the IETF MPLS Working Group.

L. Andersson et al. (2006) [5] of Cisco Systems describe a framework for Layer 2 Virtual Private Networks (L2VPNs). This framework is intended to aid in standardizing protocols and mechanisms to support interoperable L2VPNs, and it is also a standard document for Virtual Private Wire Service (VPWS) and Virtual Private LAN Service (VPLS).

L. Martini of Cisco Systems, N. El-Aawar of Level 3 Communications, T. Smith of Network Appliance, and G. Heron of Tellabs (2006) [6], in Pseudowire Setup and Maintenance Using the Label Distribution Protocol (LDP), describe how Layer 2 services such as Frame Relay, Asynchronous Transfer Mode, and Ethernet can be emulated over an MPLS backbone by encapsulating the Layer 2 protocol data units (PDUs) and transmitting them over "pseudowires". The document specifies a protocol for establishing and maintaining the pseudowires, using extensions to LDP.

L. Martini, Ed., E. Rosen of Cisco Systems, N. El-Aawar of Level 3 Communications, and G. Heron of Tellabs (2006) [7], in Encapsulation Methods for Transport of Ethernet over MPLS Networks, describe how an Ethernet pseudowire (PW) is used to carry Ethernet/802.3 protocol data units (PDUs) over an MPLS network.

K. Kompella, Ed., and Y. Rekhter, Ed., of Juniper Networks (2007) [8], in Virtual Private LAN Service (VPLS) Using BGP for Auto-Discovery and Signaling, describe a BGP auto-discovery and signaling method for VPLS. It specifies a mechanism for signaling a VPLS, and rules for forwarding VPLS frames across a packet-switched network.

M. Lasserre et al. (2007) [9] of Alcatel-Lucent, in Virtual Private LAN Service (VPLS) Using Label Distribution Protocol (LDP) Signaling (IETF RFC 4762), describe a VPLS solution using pseudowires, a service previously implemented over other tunneling technologies and known as Transparent LAN Services (TLS).


A VPLS creates an emulated LAN segment for a given set of users; i.e., it creates a Layer 2 broadcast domain that is fully capable of learning and forwarding on Ethernet MAC addresses and that is closed to a given set of users. Multiple VPLS services can be supported from a single Provider Edge (PE) node.

N. Bitar of Verizon, A. Sajassi of Cisco Systems, R. Aggarwal of Arktan, W. Henderickx of Alcatel-Lucent, Aldrin Isaac of Bloomberg, and J. Uttaro of AT&T (2014) [10] describe the requirements for Ethernet VPN (EVPN).

Grenville Armitage et al. (2000) [11], in MPLS: The Magic Behind the Myths, review the key differences between traditional IP routing and the emerging MPLS approach, and identify where MPLS adds value to IP networking.

3. Detail about different MPLS Techniques:

3.1 Virtual Private Wire Service (VPWS) / Any Transport over MPLS (AToM) - Layer 2 traffic can be transported over an MPLS backbone with the help of AToM/VPWS. AToM is Cisco's implementation of VPWS in MPLS networks. Layer 2 traffic is transparently carried across an MPLS backbone from one site to another, with both sites behaving as if they are directly connected. Two pseudowire technologies are used in VPWS: AToM, a pseudowire technology that targets MPLS networks, and L2TPv3, a pseudowire technology for native IP networks. Both AToM and L2TPv3 support the transport of ATM, HDLC, Frame Relay, and Ethernet traffic over an IP/MPLS network.

Fig 3.1 AToM Model [12]

3.2 Virtual Private LAN Service (VPLS) - VPLS uses a Layer 2 architecture to offer multipoint Ethernet VPNs that connect multiple sites over a Metropolitan Area Network (MAN) or Wide Area Network (WAN). VPLS is designed for applications that require multipoint access. VPLS emulates an Ethernet LAN: if a customer needs to connect his Ethernet segments from one site to another, the VPLS service can emulate an Ethernet switch that has ports leading to the different Ethernet sites. Such a port can be a physical port or a pseudowire. MAC address learning takes place dynamically when packets arrive on a VPLS PE router, similar to a traditional switch, and Layer 2 loop prevention is done using split-horizon forwarding. By default, Layer 2 control PDUs (VTP, STP, and CDP) are dropped at ingress VPLS PE routers; Layer 2 protocol tunneling, configured with "l2protocol-tunnel", allows VTP, STP, or CDP to be sent across a pseudowire. Enabling STP might be required in certain VPLS network designs to avoid downstream loops.

Fig 3.2 VPLS Reference Model [13]

3.3 Ethernet VPN & Provider Backbone Bridging-EVPN (EVPN & PBB-EVPN) - EVPN and PBB-EVPN are designed to address various data center and service provider requirements. EVPN is a next-generation solution for Ethernet multipoint connectivity services; it also gives the capability to manage routing over a Virtual Private Network, providing complete control and security. EVPN uses BGP for distributing clients' MAC addresses over the MPLS/IP network, and advertising each client MAC address as a BGP route adds the capability of BGP policy control over MAC addresses. The PBB-EVPN solution combines Ethernet PBB (IEEE 802.1ah) with EVPN, where PEs act as PBB Backbone Edge Bridges (BEBs). PEs receive IEEE 802.1q Ethernet frames from their attachment circuits; these frames are encapsulated in the PBB header and forwarded over the IP/MPLS core. On the egress side, the PBB header is removed and the original dot1q frame is delivered to the customer equipment.

Fig 3.3 PBB-EVPN Network [6]
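The dynamic MAC learning and split-horizon forwarding described for VPLS in Section 3.2 can be sketched in a few lines. This is only an illustrative toy model, not taken from any VPLS implementation; the port names and the simplified frame fields are invented for the example.

```python
# Toy model of a VPLS PE bridge: learn source MACs per port, flood unknown
# destinations, and never forward a frame from one pseudowire onto another
# (split horizon among pseudowire ports prevents Layer 2 loops).

class VplsBridge:
    def __init__(self, ports):
        self.ports = ports                  # port name -> True if pseudowire
        self.mac_table = {}                 # learned MAC -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port   # dynamic MAC learning
        if dst_mac in self.mac_table:
            out = self.mac_table[dst_mac]
            return [] if out == in_port else [out]
        # Unknown destination: flood, applying split horizon between PWs.
        return [p for p in self.ports
                if p != in_port
                and not (self.ports[in_port] and self.ports[p])]

pe = VplsBridge({"ac1": False, "pw-to-pe2": True, "pw-to-pe3": True})
# A frame from the local attachment circuit floods to both pseudowires:
assert pe.receive("ac1", "aa", "ff") == ["pw-to-pe2", "pw-to-pe3"]
# A frame arriving on one pseudowire is never flooded to the other one:
assert pe.receive("pw-to-pe2", "bb", "ff") == ["ac1"]
# Destination "bb" has now been learned behind pw-to-pe2:
assert pe.receive("ac1", "aa", "bb") == ["pw-to-pe2"]
```

Split horizon works here because every PE is assumed to have a direct pseudowire to every other PE (a full mesh), so a frame received from one PE never needs to be relayed to a third PE.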


4. CONCLUSION

MPLS is the mainstream technology used in service provider networks because of the bulk of features it offers to an ISP, including the ability to create both Layer 2 and Layer 3 VPNs. Data Center Interconnect and Ethernet-based WAN services are the key applications driving deployments of MPLS L2VPN today. EVPN and PBB-EVPN are next-generation L2VPN solutions that address multi-homing and forwarding-policy requirements.

REFERENCES

[1] Cisco Press, MPLS and Next Generation Networks: Foundations for NGN and Enterprise Virtualization, http://ptgmedia.pearsoncmg.com/images/chap3_9781587201202/elementLinks/md100302.gif, ISBN-10: 1-58720-120-8.

[2] AT&T (American Telephone & Telegraph), http://bpastudio.csudh.edu/fac/lpress/471/hout/attnetwork.png.

[3] Press, Cisco. "MPLS Fundamentals." (2007).

[4] Rosen, Eric, Arun Viswanathan, and Ross Callon. "Multiprotocol Label Switching Architecture." (2001).

[5] Andersson, Loa, and E. Rosen. Framework for Layer 2 Virtual Private Networks (L2VPNs). RFC 4664, September 2006.

[6] Martini, Luca. "Pseudowire Setup and Maintenance Using the Label Distribution Protocol (LDP)." (2006).

[7] Martini, Luca, et al. "Encapsulation Methods for Transport of Ethernet over MPLS Networks." RFC 4448, April 2006.

[8] Kompella, Kireeti, and Yakov Rekhter. "Virtual Private LAN Service (VPLS) Using BGP for Auto-Discovery and Signaling." (2007).

[9] Lasserre, Marc, and Vach Kompella. Virtual Private LAN Service (VPLS) Using Label Distribution Protocol (LDP) Signaling. RFC 4762, January 2007.

[10] Isaac, Aldrin, et al. "Requirements for Ethernet VPN (EVPN)." (2014).

[11] Armitage, Grenville. "MPLS: The Magic Behind the Myths [Multiprotocol Label Switching]." IEEE Communications Magazine 38.1 (2000): 124-131.

[12] Cisco Press, MPLS Configuration on Cisco IOS Software, http://flylib.com/books/2/686/1/html/2/images/1587051990/graphics/11fig01.gif.

[13] Press, Cisco. "MPLS Fundamentals." Page 438, (2007).

[14] Cisco, "ASR 9000 Series L2VPN and Ethernet Services Configuration Guide", http://www.cisco.com/c/dam/en/us/td/i/300001400000/360001370000/361000362000/361074.eps/_jcr_content/renditions/361074.jpg.

[15] Sajassi, Ali, et al. "BGP MPLS Based Ethernet VPN." (2011).

[16] Press, Cisco. "MPLS Fundamentals." (2007).

[17] Luo, Wei, et al. Layer 2 VPN Architectures. Pearson Education, 2004.

[18] Darukhanawalla, Nash, et al. Interconnecting Data Centers Using VPLS. Cisco Press, 2009.

[19] Zhang, Lixia, et al. "Resource ReSerVation Protocol (RSVP) -- Version 1 Functional Specification." (1997).
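Section 1.1 introduced the 4-byte MPLS label of Fig 1.1. Per RFC 3032, this shim header packs a 20-bit label value, 3 experimental (EXP) bits, a bottom-of-stack flag, and an 8-bit TTL. As a small sketch of that encoding (the field values below are arbitrary examples, not from the paper):

```python
import struct

def encode_label(label: int, exp: int = 0, bottom: bool = True, ttl: int = 64) -> bytes:
    """Pack one 4-byte MPLS shim header: 20-bit label, 3-bit EXP, S bit, 8-bit TTL."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and 0 <= ttl < 256
    word = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)  # network byte order, as on the wire

def decode_label(shim: bytes):
    """Unpack a shim header back into (label, exp, bottom_of_stack, ttl)."""
    (word,) = struct.unpack("!I", shim)
    return word >> 12, (word >> 9) & 0x7, bool((word >> 8) & 0x1), word & 0xFF

shim = encode_label(100, exp=5, bottom=True, ttl=255)
assert len(shim) == 4
assert decode_label(shim) == (100, 5, True, 255)
```

In an L3 MPLS VPN, two such headers are typically stacked: an outer transport label (S bit clear) that carries the packet across the P routers, and an inner VPN label (S bit set) that the egress PE uses to pick the right VRF.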


Comparison Analysis of TORA Reactive Routing Protocols on MANET based on the size of the network

Emanpreet Kaur, Bhai Gurdas College of Engg. & Tech., Sangrur, Punjab, sra_gagandeep@yahoo.co.in
Abhinash Singla, Assistant Professor (CSE), Bhai Gurdas College of Engg. & Tech., Sangrur, Punjab, India, abhinash11@gmail.com
Rupinder Kaur, Assistant Professor (CSE), Bhai Gurdas College of Engg. & Tech., Sangrur, Punjab, India, rupinder.walia84@gmail.com

ABSTRACT

MANET is a collection of mobile nodes that communicate with each other over relatively bandwidth-constrained wireless links. Network topology may change rapidly and erratically, and such changes can considerably affect packet routing in terms of network throughput, load and delay. In this paper we present a performance comparison of the TORA routing protocol on MANETs of varying network size, with increasing area and numbers of nodes. Performance is measured using the OPNET Modeler 14.0 simulator; the parameters taken for simulation are throughput, network load and delay. A conclusion on the performance of the TORA reactive protocol under varying network sizes is given at the end of this paper.

Keywords- MANET, OPNET, TORA, simulation.

1. INTRODUCTION AND RELATED WORK

In the last couple of years, the use of wireless networks has become more and more familiar. A Mobile Ad-hoc Wireless Network (MANET) is a collection of autonomous nodes that communicate with each other by forming a multi-hop network, maintaining connectivity in a decentralized manner [1]. Due to its self-organizing and rapid-deployment capability, MANET can be applied to different applications including battlefield communications, emergency relief scenarios, law enforcement, public meetings, virtual classrooms and other security-sensitive computing environments. There are 15 major issues and sub-issues involved in MANET, such as routing, multicasting/broadcasting, location service, clustering, mobility management, TCP/UDP, IP addressing, multiple access, radio interface, bandwidth management, power management, security, fault tolerance, QoS/multimedia, and standards/products. Currently, routing, power management, bandwidth management, radio interface, and security are hot topics in MANET research. A routing protocol is required whenever a source has to transmit and deliver packets to a destination. Many routing protocols have been proposed for mobile ad hoc networks [2].

A. Proactive or table-driven routing protocols: In proactive protocols, each node maintains an individual routing table containing routing information for every node in the network. Each node maintains consistent and current up-to-date routing information by sending control messages periodically between the nodes, which update their routing tables. The proactive routing protocols use link-state routing algorithms which frequently flood the link information about their neighbors. The drawback of proactive routing protocols is that all the nodes in the network must always maintain an updated table. Some of the existing proactive routing protocols are DSDV and OLSR.

B. Reactive or on-demand routing protocols: In reactive routing protocols, when a source wants to send packets to a destination, it invokes the route discovery mechanism to find a route to the destination. The route remains valid till the destination is reachable or until the route is no longer needed. Unlike table-driven protocols, all nodes need not maintain up-to-date routing information. Some of the most used on-demand routing protocols are DSR, TORA and AODV.

2. MANET ROUTING PROTOCOLS

There are several protocols proposed for wireless mobile ad-hoc networks. When we need to transfer data from source to destination, we need a dedicated path or route that is decided by the various routing protocols. In this paper, we have used the TORA routing protocol.

Temporally Ordered Routing Algorithm (TORA):

TORA is an adaptive and scalable routing algorithm based on the concept of link reversal. It finds multiple routes from source to destination in a highly dynamic mobile networking environment. An important design concept of TORA is that control messages are localized to a small set of nodes near a topological change. Nodes maintain routing information about their immediate one-hop neighbors. The protocol has three basic functions: route creation, route maintenance, and route erasure. Nodes use a "height" metric to establish a directed acyclic graph (DAG) rooted at the destination during the route creation and route maintenance phases. The link can be either an


upstream or downstream link based on the relative height metric of the adjacent nodes. TORA's metric contains five elements: the unique node ID, the logical time of a link failure, the unique ID of the node that defined the new reference level, a reflection indicator bit, and a propagation ordering parameter. Establishment of the DAG resembles the query/reply process discussed in Lightweight Mobile Routing (LMR). Route maintenance is necessary when any of the links in the DAG is broken. Fig. 1 shows the control flow for route maintenance in TORA. The main strength of the protocol is the way it handles link failures. TORA's reaction to link failures is optimistic: it reverses links to re-position the DAG in search of an alternate path. Effectively, each link-reversal sequence searches for alternative routes to the destination. This search mechanism generally requires a single pass of the distributed algorithm, since the routing tables are modified simultaneously during the outward phase of the search. Other routing algorithms such as LMR use a two-pass procedure, whereas both DSR and AODV use a three-pass procedure. TORA achieves its single-pass procedure with the assumption that all nodes have synchronized clocks (via GPS) to create a temporal order of topological change events. The "height" metric is dependent on the logical time of a link failure [2, 3, 8].

Fig 1. Flow Diagram of Route Maintenance in TORA

Advantages and Limitations:

The advantage of TORA is that multiple routes between the source and destination node are supported by this protocol. Therefore, failure or removal of any of the nodes is quickly resolved without source intervention by switching to an alternate route, which also relieves congestion. It does not require periodic updates; consequently, communication overhead and bandwidth utilization are minimized. It provides support for link-status sensing and neighbor discovery, reliable in-order control packet delivery, and security authentication. TORA also has some limitations: it depends on synchronized clocks among nodes in the ad hoc network, and it depends on intermediate lower layers for certain functionality, presuming that link-status sensing, neighbor discovery, in-order packet delivery and address resolution are all readily available. The solution is to run the Internet MANET Encapsulation Protocol at the layer immediately below TORA, which makes the overhead of this protocol difficult to separate from that imposed by the lower layer [3].

3. SIMULATION PARAMETERS

To analyse the performance of TORA, the OPNET 14.0 simulator is used. Scenarios are created with 30 and 10 mobile nodes. The simulation parameters used for the implementation of TORA are listed in Table 1.

Table 1: Simulation parameters

Number of Nodes: 30 and 10
Simulation Time: 300 sec
Simulation Area (30 and 10 nodes): 10 km * 10 km
Routing Protocol: TORA
Data Rate: 11 Mbps
Application Name: FTP (High load)
Buffer Size: 1024000
Simulator: OPNET Modeler 14.0

4. PERFORMANCE PARAMETERS

The following performance parameters are used to analyze the simulated results:

Throughput [4]: Throughput is the ratio of the total data that reaches the receiver from the sender to the time taken by the receiver to receive the last packet. Throughput is expressed in bytes or bits per second (byte/sec or bit/sec).

Delay: The packet end-to-end delay is the average time taken by a packet to pass through the network. It includes all the delays of the network, such as transmission delays caused by routing broadcasts and buffer queues. It covers the time from the generation of a packet at the sender to its delivery at the destination, and is expressed in seconds.

Load: Load represents the total load in bit/sec submitted to the wireless LAN layers by all higher layers in all WLAN nodes of the network. When there is more traffic coming on the network, and it is difficult for the network to handle all


this traffic, it is called the load. An efficient network can easily cope with large incoming traffic, and many techniques have been introduced to build better networks [5].

Media Access Delay: The time taken by a node to access the medium in order to transfer a data packet from the source node to the destination node is known as media access delay.
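The metric definitions above can be made concrete with a small worked example. The sketch below is illustrative only (the trace format and every number in it are invented; nothing comes from the OPNET simulation): it computes throughput, offered load, and average end-to-end delay from a list of packet records.

```python
# Illustrative metric calculations for a hypothetical packet trace.
# Each record: (send_time_s, recv_time_s, size_bits); recv_time is None if dropped.
trace = [
    (0.0, 0.012, 1024),
    (0.5, 0.514, 1024),
    (1.0, None, 1024),   # dropped packet: counts toward load, not throughput
    (1.5, 1.520, 2048),
]

sim_end = 2.0  # simulation length in seconds

delivered = [p for p in trace if p[1] is not None]

# Throughput: total bits successfully received per second of simulation.
throughput_bps = sum(size for _, _, size in delivered) / sim_end

# Network load: total bits submitted to the network per second.
load_bps = sum(size for _, _, size in trace) / sim_end

# End-to-end delay: mean of (receive time - send time) over delivered packets.
avg_delay_s = sum(r - s for s, r, _ in delivered) / len(delivered)

print(throughput_bps, load_bps, avg_delay_s)
```

Dropped packets count toward the submitted load but not toward throughput, which is why throughput can sit well below load on a congested network.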

5. RESULTS AND ANALYSIS

Throughput: Fig. a shows the throughput for each scenario. The maximum throughput for the TORA protocol is 7,500 bits/sec for 30 nodes and 1,500 bits/sec for 10 nodes after 300 sec.

Fig a: Throughput of TORA for 10 and 30 nodes

Load: Fig. b shows the increase in network load for TORA for 30 and 10 nodes respectively. From the figure it is observed that the network load starts increasing and then reaches its maximum value at just below 6,500 bits/sec for 30 nodes, whereas for 10 nodes it is somewhat below 1,500 bits/sec.

Fig b: Load of TORA for 10 and 30 nodes

Media Access Delay: Fig. c shows the media access delay of the TORA protocol; for 30 nodes it is highest, at 0.0020 sec.

Fig c: Media Access Delay for 10 and 30 nodes

Wireless LAN Delay: Fig. d shows the delay for 30 and 10 nodes. For 10 nodes it is seen only as dots, and for 30 nodes it is highest at 0.006 sec.
Fig d: LAN Delay for 10 and 30 nodes

6. CONCLUSION

In this paper, the performance of TORA is analysed using OPNET Modeler 14.0. As shown above, throughput is highest for 30 nodes, whereas media access delay is at its highest value for 10 nodes. When overall performance is compared, throughput is the main factor, because it is the actual rate of data received successfully by nodes in comparison to the claimed bandwidth. Over the past few years, new standards have been introduced to enhance the capabilities of ad hoc routing protocols. As a result, ad hoc networking has been receiving much attention from the wireless research community. With regard to overall performance, TORA performed well. The simulation study of this paper consisted of the routing protocol TORA deployed over MANET using FTP traffic, analysing its behaviour with respect to four parameters, i.e. delay, network load, throughput and media access delay.

REFERENCES

[1]. Md. Masud Parvez, Shohana Chowdhury, S.M. Bulbul Ahammed, A.K.M. Fazlul Haque, Mohammed Nadir Bin Ali, "Improved Comparative Analysis of Mobile Ad-hoc Network", Department of Electronics and Telecommunication Engineering.

[2]. Tamilarasan Santhamurthy, "A Quantitative Study and Comparison of AODV, OLSR and TORA Routing Protocols in MANET", Department of Information Technology, LITAM, Dullipala (village), Sattenpalli (Mandal), Guntur, Andhra Pradesh, 522412, India.


[3]. Pankaj Palta, Sonia Goyal, "Comparison of OLSR and TORA Routing Protocols Using OPNET Modeler", Punjabi University, Patiala.

[4]. Ashish Shrestha, Firat Tekiner, "On MANET Routing Protocols for Mobility and Scalability", School of Computing, Engineering and Physical Sciences, University of Central Lancashire, Preston, UK, ftekiner@uclan.ac.uk.

[5]. Gaganjeet Singh Aujla, Sandeep Singh Kang, "Simulation Based Comparative Analysis of TORA, OLSR and GRP for Email and Video Conferencing Applications over MANETs", Department of CSE, Chandigarh Engineering College, Punjab, India.

[6]. P. Kuppusamy, K. Thirunavukkarsu and B. Kalavathi, "A Study and Comparison of OLSR, AODV and TORA Routing Protocols in Ad hoc Networks", Proceedings of 3rd IEEE Conference on Electronics Computer Technology (ICECT 2011), 8-10 April 2011.


COMPARISON ANALYSIS OF ZONE ROUTING PROTOCOL BASED ON THE SIZE OF THE NETWORK

Rupinder Kaur, Assistant Professor (CSE), Bhai Gurdas College of Engg. & Tech., Sangrur, Punjab, India, rupinder.walia84@gmail.com
Abhinash Singla, Assistant Professor (CSE), Bhai Gurdas College of Engg. & Tech., Sangrur, Punjab, India, abhinash11@gmail.com
Emanpreet Kaur, Bhai Gurdas College of Engg. & Tech., Sangrur, Punjab, sra_gagandeep@yahoo.co.in

ABSTRACT

MANET is a combination of wireless mobile nodes that communicate with each other without any kind of centralized control, dedicated device or established infrastructure. MANET routing is therefore a critical task to perform in a dynamic network. Without any fixed infrastructure, wireless mobile nodes dynamically establish the network. A mobile ad hoc network (MANET) is characterized by multi-hop wireless connectivity consisting of independent nodes which move dynamically, changing the network connectivity, without the use of any pre-existing infrastructure. MANET offers such flexibility that the network can form anywhere, at any time, as long as two or more nodes are connected and communicate with each other, either directly when they are in radio range or via intermediate mobile nodes. Routing is a significant issue and challenge in ad hoc networks, and many routing protocols such as OLSR, AODV, DSDV, DSR, ZRP, TORA and LAR have been proposed so far to improve routing performance and reliability [9]. This research paper provides an overview of ZRP by presenting its functionality. The performance of ZRP (Zone Routing Protocol) is analyzed on the basis of various parameters using the OPNET 14.0 simulator.

Keywords: MANET, Routing Protocols, ZRP

1. INTRODUCTION

Ad-hoc networks are self-organizing wireless networks composed of mobile nodes and requiring no fixed infrastructure. The limitations on power consumption imposed by portable wireless radios result in a node transmission range that is typically small relative to the span of the network. To provide communication throughout the entire network, each node is also designed to serve as a relay. The result is a distributed multi-hop network with a time-varying topology. Because ad-hoc networks do not rely on existing infrastructure and are self-organizing, they can be rapidly deployed to provide robust communication in a variety of hostile environments. This makes ad-hoc networks very appropriate for providing tactical communication for military, law enforcement and emergency response efforts. Ad-hoc networks can also play a role in civilian forums such as electronic classrooms, convention centers and construction sites. With such a broad scope of applications, it is not difficult to envision ad-hoc networks operating over a wide range of coverage areas, node densities and node velocities. A mobile ad hoc network may consist of only two nodes, or of hundreds or thousands of nodes. The entire collection of nodes is interconnected in many different ways. As shown in Fig-1, there is more than one path from one node to another node. To forward a data packet from source to destination, every node on the path must be willing to participate in the process of delivering the data packet. A single file is split into a number of data packets, and these data packets are then transmitted through different paths. At the destination node, all these packets are combined in sequence to regenerate the original file.

Fig-1: Mobile Ad hoc Network

2. ROUTING IN MANET

Routing [3] is the process of transferring a packet from a source to its destination. In the routing process, a mobile node will search for a path or route to communicate with another node in the network. Protocols are the set of rules through which two or more devices communicate with each other. In MANET, routing tables are used for routing purposes. Routing tables contain the information of routes to all the mobile nodes. The routing protocols in MANET are broadly classified into three categories:


· Proactive or Table-Driven Routing Protocols
· Reactive or On-Demand Routing Protocols
· Hybrid Routing Protocols

2.1. Proactive or Table Driven Routing Protocols

In proactive protocols, each node maintains an individual routing table containing routing information for every node in the network. Each node maintains consistent and current up-to-date routing information by sending control messages periodically between the nodes, which update their routing tables. The proactive routing protocols use link-state routing algorithms which frequently flood the link information about their neighbors. The drawback of proactive routing protocols is that all the nodes in the network must always maintain an updated table. Some of the existing proactive routing protocols are DSDV and OLSR.

2.2. Reactive or On-demand Routing Protocols

In reactive or on-demand [1] routing protocols, routes are not predefined. For packet transmission, a source node invokes a route discovery phase to determine the route. The route discovery mechanism is based on a flooding algorithm, in which a node simply broadcasts the packet to all its neighbours, and intermediate nodes forward the packet to their neighbours. Reactive protocols include Dynamic Source Routing (DSR), Ad hoc On-Demand Distance Vector (AODV) and the Temporally Ordered Routing Algorithm (TORA).

2.3. Hybrid Routing Protocols

Hybrid protocols [4] are the combination of both, i.e. table-driven and on-demand protocols. These protocols take advantage of the best features of both of the above-mentioned classes. They exploit the hierarchical network architecture and allow the nodes to work together to form some sort of backbone, thus increasing scalability and reducing route discovery overhead. Nodes within a particular geographical area are said to be within the routing zone of the given node. For routing within this zone, a proactive, i.e. table-driven, approach is used. For nodes that are located outside this zone, a reactive, i.e. on-demand, approach is used. So in hybrid routing protocols the route is established with proactive routes, and reactive flooding is used for new mobile nodes [2]. In hybrid routing protocols, some of the characteristics of proactive and some of the characteristics of reactive protocols are combined, by maintaining intra-zone information proactively and inter-zone information reactively, to get a better solution for mobile ad hoc networks.

3. ZONE ROUTING PROTOCOL

The Zone Routing Protocol, or ZRP, was the first hybrid routing protocol, with both a proactive and a reactive routing component. ZRP was first introduced by Haas in 1997. ZRP is proposed to reduce the control overhead of proactive routing protocols and decrease the latency caused by route discovery in reactive routing protocols. ZRP defines a zone around each node consisting of its k-neighborhood (e.g. k=3): all nodes within k hops of a node belong to the routing zone of that node. ZRP is formed by two sub-protocols: a proactive routing protocol, the Intra-zone Routing Protocol (IARP), used inside routing zones, and a reactive routing protocol, the Inter-zone Routing Protocol (IERP), used between routing zones.
A route to a destination within the local zone can be established from the proactively cached routing table of the source by IARP; therefore, if the source and destination are in the same zone, the packet can be delivered immediately. Within the routing zone, routes are available immediately, but for destinations outside the zone ZRP employs a route discovery procedure. For each node, a separate routing zone is defined, and the routing zones of neighboring nodes overlap. Each routing zone has a radius ρ expressed in hops; the zone includes the nodes whose distance from the source node is at most ρ hops. In Fig-2, a routing zone of radius 2 hops for node A is shown. The routing zone includes all the nodes except node L, because it lies outside the routing zone of node A. The routing zone is not defined by physical distance; it is defined in hops. There are two types of nodes in a routing zone in ZRP:
· Peripheral Nodes
· Interior Nodes
The nodes whose minimum distance to the central node is exactly equal to the zone radius ρ are peripheral nodes, while the nodes whose minimum distance is less than the zone radius ρ are interior nodes. In Fig-2, the peripheral nodes are E, F, G, K, M and the interior nodes are B, C, D, H, I, J. Node L is outside the routing zone of node A.
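The zone construction described above can be sketched with a breadth-first search. The adjacency list below is hypothetical (Fig-2's exact topology is not given in the text); it is merely chosen so that, with ρ = 2, the classification matches the example: interior nodes B, C, D, H, I, J, peripheral nodes E, F, G, K, M, and L outside the zone.

```python
from collections import deque

def routing_zone(graph, center, radius):
    """BFS from `center`; return (interior, peripheral) node sets.

    Interior nodes lie at hop distance < radius, peripheral nodes at
    exactly `radius` hops; anything farther is outside the zone.
    """
    dist = {center: 0}
    queue = deque([center])
    while queue:
        node = queue.popleft()
        if dist[node] == radius:      # do not expand beyond the zone edge
            continue
        for nbr in graph.get(node, []):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    interior = {n for n, d in dist.items() if 0 < d < radius}
    peripheral = {n for n, d in dist.items() if d == radius}
    return interior, peripheral

# Hypothetical topology consistent with the Fig-2 description (rho = 2):
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("A", "H"), ("A", "I"), ("A", "J"),
         ("B", "E"), ("C", "F"), ("D", "G"), ("J", "K"), ("I", "M"), ("M", "L")]
graph = {}
for u, v in edges:
    graph.setdefault(u, []).append(v)
    graph.setdefault(v, []).append(u)

interior, peripheral = routing_zone(graph, "A", 2)
print(sorted(interior))    # interior nodes of A's zone
print(sorted(peripheral))  # peripheral nodes; L lies outside the zone
```

In a real ZRP implementation IARP would maintain `dist` incrementally from link-state updates rather than recomputing it, but the interior/peripheral split follows the same rule.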


Fig-2: Routing Zone of Node A with Radius ρ=2 hops

4. SIMULATION PARAMETERS

To analyse the performance of ZRP, the OPNET 14.0 simulator is used. Scenarios are created with 20 and 40 mobile nodes. The pause time and traffic load are kept constant under all the scenarios. The simulation parameters used for the implementation of ZRP are listed in Table 1.

Table 1: Simulation parameters

Simulator: OPNET 14.0
Protocol: ZRP
Simulation Time: 600 sec
Simulation Area: 800 m * 800 m
Data Rate: 11 Mbps
Number of Nodes: 40
Buffer Size: 1024000

4.1 Performance Metrics

The following performance metrics are used to analyze the simulated results:

(a) Throughput [2]: Throughput is the average rate of successful data packets received at the destination. It is the measure of how fast we can actually send packets through the network. It is measured in bits per second (bits/sec or bps) or data packets per second.

(b) Load [4]: Load in the wireless LAN is the number of packets sent to the network in excess of the capacity of the network. When the load is less than the capacity of the network, the delay in packets is minimal; the delay increases when the load reaches the network capacity.

(c) Delay [7]: The packet end-to-end delay refers to the time taken for a packet to be transmitted across the network from source to destination. In other words, it is the time a data packet is received by the destination minus the time the data packet was generated by the source. It is measured in seconds. Packets lost due to delay have a negative effect on received quality.

5. RESULTS AND ANALYSIS

1. Load: From Fig-3 it is observed that the load of ZRP is 50,000 bits per second with 20 nodes. A maximum load of 182,000 bits per second is observed with 40 nodes.

Fig-3: Load over 20 and 40 nodes in ZRP

2. Throughput: It is observed from Fig-4 that with 40 nodes the throughput of ZRP is about 3,550,000 bits per second at the start, and 580,000 bits per second with 20 nodes.


Fig-4: Throughput over 20 and 40 nodes in ZRP

3. Delay: From Fig-5, it is observed that the delay of ZRP is highest at 0.0057 sec with 40 nodes, and 0.0033 sec with 20 nodes.

Fig-5: Delay over 20 and 40 nodes in ZRP

6. CONCLUSION

The Zone Routing Protocol (ZRP) provides a flexible solution to the challenge of discovering and maintaining routes. The ZRP combines two radically different methods of routing into one protocol. Route discovery is based on a reactive route request / route reply scheme. This querying can be performed efficiently through the proactive maintenance of a local routing zone topology. In this paper, a performance analysis of the ZRP routing protocol for mobile ad-hoc networks is presented with 20 and 40 nodes. Performance of the protocol is evaluated with respect to three performance metrics: delay, load and throughput. It is observed from the results that when the simulation starts, no data is dropped till one minute; throughput is initially low and then increases till the end of the simulation. The simulation study of this paper consisted of the routing protocol ZRP deployed over MANET using a voice conferencing application, analysing its behavior. The results of ZRP are quite good.

REFERENCES

[1] Nadia Qasim, Fatin Said, Hamid Aghvami, "Mobile Ad Hoc Networks Simulations Using Routing Protocols for Performance Comparisons", Proc. of the World Congress on Engineering, vol. 1, WCE 2008, July 2008.
[2] Kuncha Sahadevaiah, Oruganti Bala Venkata Ramanaiah, "An Empirical Examination of Routing Protocols in Mobile Ad Hoc Networks", Proc. of International Journal of Communications, Network and System Sciences, June 2010.
[3] Hongbo Zhou, "A Survey on Routing Protocols in MANETs", Proc. of Michigan State University, MSU-CSE-03-08, March 2003.
[4] Kavita Panday, Abishek Swaroop, "A Comprehensive Performance Analysis of Proactive, Reactive and Hybrid MANETs Routing Protocols", Proc. of International Journal of Computer Science Issues, vol. 8, issue 6, no. 3, Nov 2011.
[5] Zygmunt J. Haas, "The Zone Routing Protocol (ZRP) for Ad hoc Networks", Internet Draft, July 2002.
[6] Parma Nand, Dr. S.C. Sharma, "Comparative Study and Performance Analysis of FSR, ZRP and AODV Routing Protocols for MANET", Proc. of International Journal of Computer Applications, 2011.
[7] Ashish K. Maurya, Dinesh Singh, "Simulation based Performance Comparison of AODV, FSR and ZRP Routing Protocols in MANET", Proc. of International Journal of Computer Applications, vol. 12, no. 2, November 2010.
[8] Zygmunt J. Haas, Marc R. Pearlman, and Prince Samar, "The Zone Routing Protocol (ZRP) for Ad Hoc Networks", draft-ietf-manet-zone-zrp-04.txt, July 2002.
[9] Hrituparna Paul, Priyanka Sarkar, "A Study and Comparison of OLSR, AODV and ZRP Routing Protocols in Ad Hoc Networks", IJRET: International Journal of Research in Engineering and Technology, eISSN: 2319-1163, pISSN: 2321-7308.


The Performance Analysis of LMCS Network Model Based on Propagation Environment Factors

Dr. Sarbjeet Kaur Dhillon, Malwa College, Bathinda, Headit.mgmt@gmail.com
Ms. Gurjit Kaur, Malwa College, Bathinda, Gurjeet907@gmail.com

ABSTRACT

Broadband wireless access systems such as Local Multipoint Communications Services (LMCS) aim to provide multimedia communication services to subscribers in fixed locations via millimeter-wave transmissions at 28 GHz. In LMCS, the total allocated frequency band is reused in each cell/sector through the use of highly directional antennas and polarization reuse in adjacent sectors. Some of the key issues in LMCS systems are coverage and co-channel interference. These problems have to be resolved before a successful deployment of such services. In this paper, we implement a technique that is known to combat co-channel interference: power control. The objective of this research is to analyze the system performance of an LMCS system using different scenarios. We will provide system designers with the appropriate power control command rate and power control step size. A computer simulation program was developed and used to determine the system performance of the LMCS network model. The investigated parameters include the propagation exponent, lognormal deviation, Rician K factor, and the correlation factor of the fading channel.

Keywords- LMCS, frequency, probability, Rician K factor.

1. INTRODUCTION

The increasing demand for multimedia-type, high-bit-rate services has motivated researchers to develop broadband wireless access technologies. The delivered services are broadcast TV, video on demand, high-speed internet access, etc. Broadband access systems will serve both residential and business customers in fixed networks. One of the promising broadband access technologies is local multipoint distribution services (LMDS), or local multipoint communication systems (LMCS), which has been introduced to deliver a wide variety of broadband services. This wireless technology is competing against wireline broadband systems such as fiber-to-the-home (FTTH), hybrid fiber coax (HFC), and Asymmetric Digital Subscriber Loop (ADSL) on copper wires [6].

Recently, the Local Multipoint Communication System (LMCS), or Local Multipoint Distribution System (LMDS), has been proposed in Canada and the United States for wireless access to broadband services. LMCS is a broadband wireless access technology that is intended to provide broadband services to fixed subscribers in small cells. LMCS systems are designed to have a cellular layout. They attempt to completely reuse the frequency band in each cell through the use of highly directional subscriber antennas and polarization reuse in adjacent cells, so that the interference from co-channel subscribers in adjacent cells can be significantly reduced.

1.1 DERIVATION OF LMCS

The acronym LMDS or LMCS is derived from the following:

L (local) denotes that the propagation characteristics of signals in this frequency range limit the potential coverage area of a single cell site; ongoing field trials conducted in metropolitan centers place the range of an LMDS transmitter at up to 5 miles. M (multipoint) indicates that signals are transmitted in a point-to-multipoint or broadcast method; the wireless return path, from subscriber to base station, is a point-to-point transmission. D (distribution) or C (communication) refers to the distribution of signals, which may consist of simultaneous voice, data, Internet and video traffic. S (service) implies the subscriber nature of the relationship between the operator and the customer; the service offered through an LMDS network is entirely dependent on the operator's choice of business.

1.2 ADVANTAGES OF LMCS OVER HFC & PONS

The advantages of LMCS over competing access technologies such as Hybrid Fiber Coax (HFC) and Passive Optical Networks (PONs) are as follows:

· Low entry and deployment cost.
· Ease and speed of deployment: deployment of cable and fiber systems is difficult in certain areas where installing in-ground infrastructure is undesirable. LMCS can provide similar access bandwidths and a two-way capability without trenching streets and yards.
· Faster realization of revenues as a result of rapid deployment.
· Quick response to a growing market.


1.3 APPLICATIONS AND SERVICE PERFORMANCE

1. Video on demand applications
2. Broadband Internet access
3. Interactive multimedia
4. Home office
5. Distance education
6. Voice and video telephony
7. Entertainment TV
8. Interactive video games
9. Home shopping

The required service performance for LMCS can be summarized as follows [7]:
Call set-up < 10 sec.
Isochronous cell loss rate < 10^-3, asynchronous cell loss rate < 10^-4.
Maximum delay < 50 ms.

1.4 FREQUENCY BAND AND SPECTRUM ALLOCATION

Regulatory agencies such as the U.S. Federal Communications Commission (FCC) are authorizing point-to-multipoint radio systems to operate over a block of spectrum and throughout a large geographical area. The FCC has proposed two separate licenses: one license for a bandwidth of 1150 MHz, which includes the spectrum from 27.5 to 28.35 GHz, 29.1 to 29.25 GHz and 31.075 to 31.225 GHz; this spectrum is referred to as Block A. The second license, referred to as Block B, includes the spectrum from 31 to 31.075 GHz and 31.225 to 31.3 GHz, a total of 150 MHz [5]. Industry Canada granted two blocks of 500 MHz in the 27.35 to 28.35 GHz range. Additional spectrum from 25.35 to 27.35 GHz has been designated for future LMCS use. The frequency bands in the US are shown in Fig. 1.1. Licensing and deployment in Europe now indicate that there will be systems in different frequency bands from 24 GHz up to 43.5 GHz. The frequency band 24.5-26.6 GHz, with sub-bands of 56 MHz, has been opened for point-to-multipoint applications in many European countries. These bands may then be used for LMDS.

1.5 CELL ARCHITECTURE

The LMCS system is designed to have a cellular architecture and attempts to reuse the total allocated frequency in each sector. This essentially means that the frequency will be reused 4 times in each cell. The transmitter site should be on top of a tall building or on a high pole overlooking the service area. A typical configuration is a four-sector cell site using 90-degree beamwidth antennas to provide service to the subscribers. Each of these sectorized antennas can support the full bandwidth of the allocated spectrum. The isolation between adjacent sectors can be maximized through the use of antenna polarization, as shown in Fig. 1.2. The use of repeaters to extend coverage was studied in [12] and provides an improvement of 6% in coverage with macro diversity.
The maximum cell size for the service area is related to the desired system availability obtained from the link budget. Cell size can vary due to the type of antenna, its height, and signal loss. Operation in the millimeter range imposes some restrictions: precipitation effects lead to severe attenuation and limit the reliable range of operation to 3-5 km, depending on the climatic zone and the frequency of operation.

Figure 1.2 LMCS Cell Layout
2 FACTORS AFFECTING SYSTEM
PERFORMANCE
The performance of a LMCS system is affected by many
factors such as propagation environment. Sectoring. Antenna
design and power control. These factors how each factor is
investigated in this work.

2.1 PROPAGATION IMPAIRMENTS


The propagation environment at millimeter wave frequencies
is one of the major challenges in delivering LMCS services to
fixed subscribers. At such high frequencies the signal is
attenuated by the obstacles in the radio path between the
subscriber and the hub, such as buildings, trees and vehicles.
Measurement study for LMCS at 28 GHz, in Ottawa shows
that foliage cause signal attenuation of more than 20 dB in
some locations. Rain and snow could cause more signal loss
[11]. This makes it necessary to provide a line-of-sight path
from the hub to subscriber for maintaining sufficient signal
strength. In [30] it was shown that even when line-of-sight
paths are available, excess loss due to rain attenuation must be
Figure 1.1 LMCS/LMDS band Allocation in USA

448
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

accounted for in the link budget. The requirement of line-of-sight for reliable communication at these frequencies presents a big challenge for system designers who try to maximize coverage at minimum cost.

2.2 EQUALIZATION
Intersymbol interference (ISI) caused by multipath in band-limited (frequency-selective), time-dispersive channels distorts the transmitted signal, causing bit errors at the receiver. ISI has been recognized as the major obstacle to high-speed data transmission over radio channels. Equalization compensates for intersymbol interference [18]. Decision feedback equalization (DFE) has been evaluated in [10] for various data rates based on the multipath spread present in the measured impulse response data from a residential area in Ottawa. The study showed that for data rates of 10 mega-symbols per second (Msps) using QPSK modulation, and with a narrow-beamwidth directional antenna at the subscriber, it is possible to avoid equalization techniques.

2.3 ERROR CONTROL CODING
Error control coding techniques rely on the systematic addition of redundant symbols to the transmitted information to facilitate two basic objectives at the receiver: error detection and error correction. The redundant bits lower the raw data rate through the channel; hence, the spectral efficiency is reduced. The study in [11] examines the Reed-Solomon (RS) code and convolutional code performance based on the temporal variation of the narrowband LMCS channel studies. The parameters for the RS codes are a code word length of n = 255 symbols, a packet length of 53 bytes (1 ATM cell) and an error correction capability of t = 0 to 10. The probability of symbol error as a function of link margin for QPSK modulation is shown in Fig. 1.3(a). Convolutional coding is very popular because of its simplicity in terms of hardware implementation.

2.4 INTERLEAVING
Interleaving is a form of time diversity that is employed to disperse bursts of errors in time. A sequence of data symbols is interleaved before transmission over a bursty channel. If errors occur during transmission, restoring the received sequence to its original ordering has the effect of spreading the errors over time. By spreading the data symbols over time, it is possible to use channel coding that protects the data symbols from corruption by the channel. Interleaving techniques can be divided into two categories:
1. Block interleaving
2. Convolutional interleaving
Interleaver performance depends on the memory required for data storage and on the delay in interleaving and de-interleaving, which should be kept as small as possible. The performance of both interleavers was evaluated in [11] using the following parameters with RS codes:
1. Interleaving length = 3000/t, where t is the error correcting capability (t = 2).
2. Data rate = 40 Mb/s.
The delay with the convolutional interleaver is 40 ms, and with the block interleaver it is 80 ms. Efficiency can be defined as the ratio of the length of the smallest burst of errors that can cause the error-correcting capability of the code to be exceeded to the number of memory elements used in the interleaver. An efficiency of 14.2% for the block interleaver and 64% for the convolutional interleaver has been shown in [11].

3.1 PROPAGATION IN A MOBILE COMMUNICATION ENVIRONMENT

In a mobile radio environment the propagation phenomena can be characterized using different propagation models. These models can be divided into two groups, namely theoretical and empirical models. Modeling the radio channel has been one of the most difficult parts of mobile communication system design. Thus the stochastic behavior of the mobile radio signal may be described by means of statistical distributions. Three distributions are closely related to mobile radio channel statistics: lognormal, Rayleigh and Rician. The lognormal distribution describes the envelope of the received signal power shadowed by obstructions such as buildings and hills. The Rayleigh distribution describes the envelope of the received signal resulting from multipath propagation. The Rician distribution considers the envelope of the received signal with multipath propagation plus a line-of-sight component. Propagation models are required to quantify two variables:
1. The average signal strength at any distance from the transmitter.
2. The signal variability, which characterizes the fading nature of the channel.

Figure 1.3(a) [11]
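As an illustrative sketch (not the paper's implementation), the block interleaving of Section 2.4 can be demonstrated with a small row/column permutation: symbols are written into a matrix by rows and read out by columns, so a burst of channel errors is dispersed across several codewords.

```python
# Toy block interleaver: write row-wise, read column-wise. A burst of
# three consecutive channel errors is spread so each row (one codeword)
# sees at most one error after de-interleaving.

def block_interleave(symbols, rows, cols):
    """Write row-wise into a rows x cols matrix, read column-wise."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    """Inverse permutation: write column-wise, read row-wise."""
    assert len(symbols) == rows * cols
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = symbols[i]
            i += 1
    return out

data = list(range(12))                       # 12 data symbols
tx = block_interleave(data, rows=3, cols=4)
rx = tx[:]
for i in (4, 5, 6):                          # burst of 3 channel errors
    rx[i] = -1
restored = block_deinterleave(rx, rows=3, cols=4)
print(restored)  # the three errors land in three different rows
```

The same dispersal is what lets a t = 2 error-correcting RS codeword survive a burst longer than 2 symbols.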

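As a back-of-envelope check (not from the paper), the failure probability of a t-error-correcting code of length n can be computed under the simplifying assumption of independent symbol errors; it is this independence that the interleaving of Section 2.4 tries to approximate on the time-correlated LMCS channel.

```python
# P(block error) for a t-error-correcting code of length n, assuming
# i.i.d. symbol errors with probability ps (an assumption, not the
# paper's correlated-channel model).
from math import comb

def block_error_prob(n, t, ps):
    """Probability that more than t of the n symbols are in error."""
    return sum(comb(n, i) * ps ** i * (1.0 - ps) ** (n - i)
               for i in range(t + 1, n + 1))

# RS code word length n = 255, as in the study cited from [11],
# for an illustrative 1% symbol error rate
for t in (0, 2, 5, 10):
    print(t, block_error_prob(255, t, 0.01))
```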
3.1.1 PROPAGATION PATH LOSS

Path loss can be defined as the difference between the transmitted power and the received power. An exact estimate of the path loss in mobile communications is not available. There are many path loss models, but we will discuss only the free-space path loss model. In the free-space model, the ratio between the received power Pr and the transmitted power Pt is given by the Friis free-space transmission formula [18]:

Pr / Pt = Gt Gr λ^2 / (4πd)^2 (3.1)

where d is the distance, Gt and Gr are the transmitter and receiver antenna gains respectively, and λ is the wavelength. Accordingly, the path loss (in decibels) is

L(dB) = -10 Log Gt - 10 Log Gr + 20 Log(4πd) - 20 Log(λ) (3.2)

Assuming unity antenna gains Gt and Gr, the path loss in dB is given by

L(dB) = 20 Log(4πd) - 20 Log(c/f) (3.3)

where f is the frequency in hertz, d is the distance in m and c is the speed of light (3 x 10^8 m/s).
A large-scale propagation model uses a close-in distance do as a known received-power reference point. The received power at any distance d (d > do) may then be related to the received power at distance do. For the free-space model the received power is given by

Pr(d) = Pr(do) (do/d)^2, d > do

The reference distance do must be chosen such that it lies in the far-field region (i.e. do > df), where

df = 2D^2 / λ

and D is the largest dimension of the antenna aperture.

3.1.2 RAYLEIGH FADING
It is well known that the envelope of the sum of two quadrature Gaussian noise signals obeys a Rayleigh distribution. The Rayleigh distribution has the following probability density function (pdf) [18]:

p(r) = (r/σ^2) exp(-r^2 / 2σ^2), r ≥ 0

where σ is the rms value of the received signal before envelope detection, σ^2 is the variance of the received signal, and r^2 is the instantaneous power. Rayleigh fading usually applies to scenarios where there is no LOS path between the transmitter and the receiver. The corresponding cumulative distribution function (CDF) is given by

P(R) = 1 - exp(-R^2 / 2σ^2)

The mean value of r is σ √(π/2) ≈ 1.2533 σ.

3.1.3 RICIAN FADING

Rayleigh fading holds only in the case where there is no LOS path. However, when there is a dominant signal component, such as a line-of-sight propagation path, the small-scale fading envelope is Rician. As the dominant signal component becomes weaker, the composite signal envelope approaches Rayleigh fading. The Rician distribution is given by

p(r) = (r/σ^2) exp(-(r^2 + A^2) / 2σ^2) I0(Ar/σ^2), r ≥ 0, A ≥ 0

where A denotes the amplitude of the dominant LOS signal component and I0 is the modified Bessel function of the first kind and zero order. The Rician distribution is often described in terms of the K factor, which is defined as the ratio between the dominant signal component power and the scattered power. It is given by

K = A^2 / 2σ^2

It is obvious that when A goes to zero (i.e. the dominant path signal component fades away), K goes to zero and the Rician distribution degenerates to the Rayleigh distribution.

4. SIMULATION MODEL
Observations from [8] and [30] indicate that strong signals are received only at locations where a LOS path was available between the transmitter and receiver. Both studies concluded that a LOS path is required to provide reliable LMCS service. Line-of-sight (LOS) system performance will be investigated through simulations. LMCS systems attempt to reuse the allocated frequency band in each cell by means of directional antennas and polarization reuse in adjacent sectors. Perfect orthogonal polarization is assumed, which means perfect isolation between horizontal and vertical polarization.

4.1 SYSTEM MODEL
The LMCS simulation model is designed to have a cellular layout with a frequency reuse factor of 4 (i.e. the total frequency band is reused in each of the 4 sectors). We assume that 9 cells on a square grid cover the service area; each hub consists of 4 sectors. Highly directional subscriber antennas and perfect orthogonal polarization are used to reduce the co-channel interference and to provide high coverage. As we can see from Figure 4.1, V and H refer to vertical and horizontal frequency polarization, respectively. Therefore, two of the sectors in each cell will have horizontal polarization and the other two sectors will have vertical polarization. Cells are assumed to be square and the base station of each cell is located at the center of the cell. The cell radius is 2√2 km, i.e. the distance between base stations is 4 km. This gives a total coverage area of 144 km^2. Subscribers will be uniformly distributed in each sector of the cell. For every subscriber signal received at the base station, there will be up to 17 interferers.
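As a numerical illustration of eq. (3.3) in Section 3.1.1 (free-space conditions, unity antenna gains; real LMCS links add rain and foliage losses on top of this):

```python
# Free-space path loss of eq. (3.3), L(dB) = 20 log(4*pi*d) - 20 log(c/f).
from math import log10, pi

C = 3e8  # speed of light, m/s

def free_space_path_loss_db(d_m, f_hz):
    """Path loss in dB at distance d_m (meters) and frequency f_hz (hertz)."""
    return 20.0 * log10(4.0 * pi * d_m) - 20.0 * log10(C / f_hz)

# 28 GHz link at the 2*sqrt(2) km cell radius assumed in the system model
print(round(free_space_path_loss_db(2.828e3, 28e9), 1))  # about 130.4 dB
```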

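The Rayleigh and Rician envelopes of Sections 3.1.2 and 3.1.3 can be sampled directly from their quadrature-Gaussian construction; this is an illustrative sketch (not the paper's code), with the LOS amplitude A set from the K factor via K = A^2 / (2σ^2), so K = 0 degenerates to Rayleigh.

```python
# Envelope samples: a LOS component A on the in-phase branch plus two
# quadrature Gaussians of standard deviation sigma.
import random
from math import sqrt

def rician_sample(K, sigma=1.0, rng=random):
    """One Rician envelope sample; K = 0 gives a Rayleigh sample."""
    A = sqrt(2.0 * K) * sigma
    x = rng.gauss(A, sigma)    # in-phase branch carries the LOS component
    y = rng.gauss(0.0, sigma)  # quadrature branch
    return sqrt(x * x + y * y)

rng = random.Random(1)
rayleigh = [rician_sample(0.0, rng=rng) for _ in range(50000)]
rician10 = [rician_sample(10.0, rng=rng) for _ in range(50000)]
# the Rayleigh sample mean should approach sigma * sqrt(pi/2) ~ 1.2533
print(sum(rayleigh) / len(rayleigh), sum(rician10) / len(rician10))
```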
4.1.2 PROPAGATION MODEL
A transmitted signal in a radio channel is subjected to propagation path loss (governed by the propagation exponent), shadowing and multipath fading. The effect of propagation environment parameters such as the Rician K factor, the propagation exponent and the shadowing standard deviation on system availability will be studied. Based on propagation studies in the LMCS project, simple but realistic assumptions are made.

5. SIMULATION ALGORITHM
The framework of the simulation program is given below:

Step 0: Set up system parameters
1. Set up system parameters such as cell radius, uplink/downlink channel bandwidth, and antenna gains and beamwidth.
2. Set up the environment parameters such as the propagation exponents for the desired subscriber and the interferer, the Rician K factor for intracell and intercell users, the lognormal shadowing mean and standard deviation values, and the correlation factor.
3. Set up the power control parameters such as the SINR outer loop threshold, the number of samples per location, the dynamic range, the PC step size and the number of cycles (i.e. the number of subscribers).
4. Set up the system threshold for the outage probability calculation for the subscriber of interest.

Step 1: Initialization
1. Randomly generate users with uniform distribution within each sector, one user per sector. This represents desired and interfering users on a given frequency channel, and assumes that this channel is occupied in all cells at all times.
2. Set an initial transmitted power for each user; all users start the simulation with PT = -20 dBW. In fact, it does not matter to which value we set the initial transmitted power, since we wait until the system reaches steady state before collecting data.
3. For each user, set up independent lognormally distributed shadowing. Each subscriber is assigned to the base station with the best SINR (i.e. macro diversity).

Step 2: Simulation over an observation period for a specific user
1. For each user, after a warm-up period, set up a Rician fading channel link between the fixed subscriber and the chosen base station. We generate samples of the faded signal that are correlated in time. Then the system is frozen (i.e. we have one snapshot per time). A snapshot represents an interval of time short enough that the channel's path loss can be considered constant.
2. Measure the received SINR at the base station, compare it with the preset outer loop threshold (th) and generate up/down power control commands. Note that all subscribers are synchronized together, that is, they perform the measurements and the power updates at the same time.
3. Update the transmitted power. If the updated value exceeds the upper bound (-10 dBW), set it to -10 dBW. On the other hand, if it goes below the lower bound (-50 dBW), then set the updated transmitter value to -50 dBW. In the case of a higher PC/snapshot rate, for example 10, we execute 10 power commands in one snapshot.
4. Collect a preset number of fading samples. In order to avoid the border effect, we collect data only for the subscriber in the central cell.
5. Calculate the outage probability for the desired user, defined as the fraction of received SINR samples for that user less than the threshold. The system threshold in our simulation is varied from 8 to 12 dB.

Step 3: Repeat the simulation cycle
1. Go to Step 1 unless the number of cycles (subscribers) exceeds the preset value, chosen as 1000 cycles.
2. Calculate the system availability, which is the fraction of desired subscriber positions having less than 1% outage.

6. ISSUES TO BE SIMULATED

Table 1
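The steps above can be sketched as a compact Monte-Carlo loop. This is an illustrative skeleton only, not the paper's code: it simplifies to a single interference-free link, omits co-channel interferers, shadowing and macro diversity, and replaces the correlated Rician fading by an AR(1) Gaussian process in dB; the link gain value is hypothetical.

```python
# Simplified sketch of the Section 5 algorithm: per-subscriber power
# control loop (Steps 1-2) repeated over subscriber positions (Step 3).
import random
from math import sqrt

PT_MIN, PT_MAX = -50.0, -10.0  # transmit power dynamic range, dBW
PC_STEP = 1.0                  # power control step size, dB
SINR_TARGET = 12.0             # outer loop threshold, dB
SYS_THRESHOLD = 8.0            # system (outage) threshold, dB
LINK_GAIN = 32.0               # hypothetical net gain from Pt to SINR, dB

def run_subscriber(rng, cycles=2000, warmup=500):
    """Power-controlled link for one subscriber position; returns outage."""
    pt, fade, outages = -20.0, 0.0, 0
    for i in range(cycles):
        # time-correlated fading sample (AR(1) stand-in for the Rician process)
        fade = 0.9 * fade + sqrt(1 - 0.9 ** 2) * rng.gauss(0.0, 2.0)
        sinr = pt + LINK_GAIN + fade
        # up/down power command, clipped to the dynamic range (Step 2.3)
        pt += PC_STEP if sinr < SINR_TARGET else -PC_STEP
        pt = min(PT_MAX, max(PT_MIN, pt))
        if i >= warmup and sinr < SYS_THRESHOLD:
            outages += 1
    return outages / (cycles - warmup)

# Step 3: availability = fraction of positions with less than 1% outage
rng = random.Random(7)
outage = [run_subscriber(rng) for _ in range(200)]
availability = sum(1 for o in outage if o < 0.01) / len(outage)
print(availability)
```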

7. SIMULATION RESULTS

Figure 7.1(a) The effect of power command rate and step size on system availability, with nd = 2, ni = 4, correlation factor = 0.9, Ki = 4, Kd = 10, dynamic range = 40 dBW.

8. CONCLUSION
The simulation results show that the propagation environment affects the LMCS/LMDS system availability. Transmitter power control and macro diversity techniques have been implemented in this study. The LMCS/LMDS system availability is affected by the setting of the power control parameters, such as the power control update rate, the step size, the transmitted dynamic range and the outer loop threshold. We also studied the effect of propagation parameters on system availability, such as the path loss exponent, the standard deviation of the lognormal shadowing, the Rician K factor and the correlation factor between fading samples.

REFERENCES
[1] S. Ariyavisitakul, "SIR-Based Power Control in a CDMA System," IEEE GLOBECOM, Orlando, Dec. 1992.
[2] S. Ariyavisitakul and L. F. Chang, "Signal and interference statistics of a CDMA system with power control," IEEE Transactions on Communications, Nov. 1993.
[3] Chung-Ju Chang, Jeh-Ho Lee, and Fang-Ching Ren, "Design of Power Control Mechanisms with PCM Realization for the Uplink of a DS-CDMA Cellular Mobile Radio System," August 1996.
[4] Michele Zorzi, "Power Control and Diversity in Mobile Radio Cellular Systems in the Presence of Ricean Fading and Log-Normal Shadowing," May 1996.
[5] Douglas A. Gray, "A Broadband Wireless Access System at 28 GHz," 1997 Wireless Communications Conference, Aug. 11-13, 1997.
[6] Scott Seidel and Hamilton W. Arnold, "28 GHz Local Multipoint Distribution Service (LMDS): Strengths and Challenges," Virginia Tech Symposium, June 1995.
[7] D. Falconer and G. Stamatelos, "LMCS system architecture and associated research issues," CITR internal document, Dec. 9, 1997.
[8] Scott Seidel and Hamilton W. Arnold, "Propagation Measurements at 28 GHz to investigate the performance of LMDS," IEEE GLOBECOM, Nov. 1995.
[9] Salina Q. Gong, "Outage Performance with Directional Antennas in Cellular Fixed Broadband Wireless Access Systems," M.Eng. thesis, Carleton University, 1998.
[10] Ranjiv S. Saini, "Equalization Requirements and Solutions for Fixed Broadband Wireless Access Systems," M.Eng. thesis, Carleton University, 1998.
[11] Nausheen Naz and D. D. Falconer, "Temporal Variation Characterization for Fixed Wireless at 29.5 GHz," IEEE VTC 2000, Tokyo, May 2000.
[12] J-P. DeCruyenaere, "Propagation Simulation for the Prediction of LMCS/LMDS Coverage," ANTEM'98, Ottawa, Aug. 1998.
[13] Peter B. Papazian et al., "Study of the Local Multipoint Distribution Service Radio Channel," IEEE Transactions on Broadcasting, Vol. 43, No. 2, June 1997.
[14] Vincentzio I. Roman, "Frequency Reuse and System Deployment in Local Multipoint Distribution Service," IEEE Personal Communications, Dec. 1999.
[15] Douglas A. Gray, "Optimal Hub Deployment for a 28 GHz LMDS System," Proc. 1997 Wireless Communications Conference, Aug. 1997.

Performance Evaluation of Delay Tolerant Network Routing Protocols

Vijay Kumar Samyal
Punjab Technical University, Jalandhar, Punjab, India
samyal.mimit@gmail.com

Sukvinder Singh Bamber
Department of CSE, Panjab University SSG Regional Centre, Hoshiarpur, Punjab, India

Nirmal Singh
Punjab Institute of Technology, Hoshiarpur, Punjab, India
ABSTRACT
The performance analysis of different Delay Tolerant Network (DTN) routing mechanisms plays a key role in understanding the design of DTNs. It gives the capacity to describe the behavior and execution of routing protocols, which helps one to choose the proper routing protocol for the application or the system under consideration. DTN routing protocols differ in the knowledge that they use in making routing decisions and in the number of replications they make. The performance of different DTN routing protocols (i.e., Direct Delivery, First Contact, Epidemic, Spray and Wait, Prophet and MaxProp) is compared under various mobility models: the Random Waypoint (RWP) model, the Map-Based Mobility (MBM) model, the Shortest Path Map-Based Movement (SPMBM) model and the Random Walk (RW) model. Among these protocols, the first four routing protocols do not require any knowledge about the network. The latter two protocols use some extra information to make decisions on forwarding.

General Terms
Performance, Experimentation

Keywords
Delay Tolerant Networks (DTNs), Routing Protocols, Mobility Models, Opportunistic Network Environment (ONE)

1. INTRODUCTION
A Delay Tolerant Network (DTN) is a sparse, dynamic wireless network where mobile nodes work in ad hoc mode and forward data opportunistically upon contacts [1]. Since the DTN is sparse and the nodes in the network are dynamic, the irregular connectivity makes it difficult to guarantee an end-to-end path between any node pair to transfer data, and long round-trip delays make it impossible to provide timely acknowledgements and retransmissions. Communication between nodes is only possible when they are within communication range of each other. When a node has a copy of a message, it stores and carries the message until it can forward it to a node in communication range that is more appropriate for the message delivery. Since DTNs allow people to communicate without network infrastructure, they are widely used in battlefields, wildlife tracking, vehicular communication, etc., where setting up network infrastructure is almost impossible and costly [2]. In recent years, with the proliferation of social network applications and mobile devices, people tend to share texts, photos and videos with others via mobile devices in DTNs.
Figure 1 shows a sample DTN. It depicts the network topology snapshots over three different time periods t1, t2 and t3 (t1 < t2 < t3). Node mobility leads to several pairs of nodes moving into communication range (e.g., nodes A and B cannot communicate at t1, but they come into communication range at t2) or moving out of communication range (e.g., nodes C and D are in communication range at t1 and t2, but they become unreachable at t3).
Therefore, a stable end-to-end path does not exist between any pair of nodes. The communications between a pair of nodes are often disrupted due to unstable connections. Besides, if a node wants to send a message to another node, it may suffer from large delay. This is because data transmission between any pair of nodes requires them to be in communication range. However, a DTN does not guarantee that two nodes are in communication range permanently; it may take a long time for two nodes to move into communication range. Thus the communication delay between two nodes is longer than in wired networks. For instance, if source node A needs to send a message to destination node E in the sample DTN (Figure 1), it can only deliver the data to node E at t3, when the two are in communication range.
In this study we have analyzed the performance of different DTN routing protocols (Direct Delivery, First Contact, Epidemic, Spray and Wait, Prophet and MaxProp) under different mobility models. These protocols were analyzed on three different metrics, namely Delivery Probability, Average Delivery Latency and Overhead Ratio. The detailed simulation setup and metrics are given in section 3. The remainder of the paper is organized as follows: section 2 briefly introduces the DTN routing protocols. Section 4 gives the details of the simulator and section 5 gives the simulation setup used to carry out the work. Section 6 discusses the results. Section 7 concludes the paper and lists the directions for future work.

2. ROUTING PROTOCOLS IN DTN
In a DTN, the main characteristic of packet delivery is large end-to-end path latency, and a DTN routing protocol has to cope with frequent disconnections. Numerous routing and forwarding techniques have been proposed over the past few years (refer to [3] and [4] for an overview). The majority of forwarding and routing techniques use an asynchronous message passing (also referred to as store-carry-forward) scheme.
T. Spyropoulos et al. [5] and A. Keränen et al. [6] implemented Direct Delivery and First Contact, single-copy DTN routing protocols: only one copy of each message exists in the network at each moment. In Direct Delivery (DD), the message is kept in the source until it comes in contact with the

Figure 1. A sample DTN at times t1, t2 and t3. A solid line represents connectivity between two nodes.
destination. In First Contact, the message is forwarded to the first node encountered and then deleted from the sender; this repeats until the message reaches the destination.
A. Vahdat and D. Becker [7] worked on Epidemic routing, an unlimited-copy, flooding-based routing protocol: it tries to send each message to all nodes in the network. In this router, when two nodes encounter each other, they exchange only the messages they do not have in their memory buffers. The overhead gets high due to heavy utilization of buffer space, but the delivery probability is good.
T. Spyropoulos et al. [9] presented Spray and Wait, an n-copy routing protocol with two phases: the spray phase and the wait phase. In the spray phase, when a new message is created at the source node, n copies of that message are initially spread by the source and possibly received by other nodes. In the wait phase, every node containing a copy of the message simply holds that particular message until the destination is encountered directly. There are two versions of Spray and Wait. In normal mode, a node gives one copy of the message to each encountered node that does not have the same copy. In Spray and Wait binary mode (SaWBinary), a node gives half of its copies to the first node it encounters, and that node in turn transmits half of its copies to the first node it encounters; this process continues until each node is left with one copy.
A. Lindgren et al. [8] proposed Prophet (Probabilistic routing protocol using history of encounters and transitivity), an unlimited-copy, flooding-based routing protocol. It estimates a probabilistic metric called delivery predictability, based on the probability of a node's contact with another node. The message is delivered to another node only if that node has a better probability of delivering it to the destination.
J. Burgess et al. [2] developed MaxProp, an unlimited-copy routing protocol. When nodes are in communication range, each transfers all the messages not held by the other node. The protocol puts a priority order on the queue of messages: messages that should be dropped and those that need to be transmitted are classified in this priority queue. The priority includes the ratio of successful path establishments to nodes and the number of acknowledgements. These methods increase the message delivery ratio.

3. PERFORMANCE METRICS
This section characterizes the measurements that are used in this study to examine and assess the performance of different DTN routing protocols. DTN routing protocols need to tolerate delays resulting from the tested environment, and the main requirement of such protocols is that the messages are reliably delivered. Hence, the performance metrics for evaluating DTN protocols are delivery probability and delivery latency. Overhead in the transmission of messages results in additional energy consumption; as the mobile nodes in DTNs are energy constrained, the overhead is considered another important metric. In this study, the performance of various DTN protocols is evaluated based on delivery ratio, average delivery latency and overhead ratio under different scenarios. Besides these metrics, the buffer utilization is observed and the impact of buffer size on performance is also examined. These metrics are defined as follows:
- Delivery probability: the ratio of the number of messages actually delivered to the destination to the number of messages sent by the sender.
- Average delivery latency: the average of the time taken by all messages to travel from source to destination.
- Overhead ratio: the ratio of the difference between the total number of relayed messages and the total number of delivered messages to the total number of delivered messages.

4. THE ONE SIMULATOR
The majority of researchers use simulators, which easily allow for a large number of reproducible environment conditions. Simulation plays an important role in analyzing the behavior of DTN routing protocols. There are various simulators available, such as NS-2 (Network Simulator, 2000), DTNSim (Delay Tolerant Network Simulator), OMNeT++, OPNET and The ONE. The ONE is preferred among these simulators because the NS-2 simulator lacks full DTN support (it only supports Epidemic routing), whereas DTNSim lacks movement models. OPNET and OMNeT++ are tailored to specific research needs and hence have fairly limited support for the available DTN routing protocols. The ONE simulator is a discrete event based, Java-based tool which provides DTN protocol simulation capabilities in a single framework. A detailed description of The ONE simulator is available in [10] and on the ONE simulator project page [11], where the source code is also available. The overview of the ONE simulator [11], with its elements and their interaction, is shown in Figure 2.
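The copy-halving of Spray and Wait binary mode described in Section 2 can be sketched as follows (an illustrative toy, not the simulator's implementation):

```python
# Binary Spray and Wait copy management: a node holding c > 1 copies
# hands floor(c/2) to an encountered node that has none; a node with
# c == 1 enters the wait phase and delivers only to the destination.

def binary_spray(copies_held):
    """Split copies on an encounter; returns (kept, given)."""
    if copies_held <= 1:
        return copies_held, 0      # wait phase: direct delivery only
    given = copies_held // 2
    return copies_held - given, given

# Source starts with n = 8 copies and meets fresh nodes repeatedly.
c = 8
spread = []
while c > 1:
    c, given = binary_spray(c)
    spread.append(given)
print(c, spread)  # 1 [4, 2, 1]
```

Each recipient repeats the same halving, so n copies reach the network after O(log n) encounters at the source.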

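The three metrics of Section 3 reduce to simple ratios over simulation report counts. A minimal sketch, with made-up example numbers:

```python
# Section 3 metrics from hypothetical simulation counts.

def delivery_probability(delivered, created):
    """Messages delivered to destinations / messages created."""
    return delivered / created

def average_delivery_latency(latencies):
    """Mean source-to-destination time over delivered messages."""
    return sum(latencies) / len(latencies)

def overhead_ratio(relayed, delivered):
    """(relayed - delivered) / delivered."""
    return (relayed - delivered) / delivered

# example numbers (made up for illustration)
print(delivery_probability(310, 400))            # 0.775
print(average_delivery_latency([120, 300, 90]))  # 170.0
print(overhead_ratio(5200, 310))
```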
Figure 2. Overview of the ONE Simulator Environment [11]

5. SIMULATION SETTING

Simulation scenarios are created by defining simulated nodes and their characteristics. The simulation parameters are set as mentioned in Table 1. The simulation is modeled as a network of mobile nodes positioned randomly within an area of 4500 x 3400 m2. The transmission range for each node is set to 250 m with a chosen transmission speed of 2 Mbps. The simulation length is 720 minutes. An event generator generates messages of size 500 KB, with one new message created at an interval of every 10-15 s.

Table 1. Simulation parameter setting

Parameters              Values
Number of Nodes         50
Transmit Range (m)      250
Transmit Speed (Mbps)   2
Node Speed (km/hr)      10-50
TTL of Message (min)    120
Buffer Size             infinite
Movement Model          Random Waypoint Model
Simulation Time (min)   720

6. RESULTS
The effects of variation in mobility models on the performance of the different routing protocols, namely Epidemic, Spray and Wait, Direct Delivery, First Contact, Prophet and MaxProp, are evaluated. The results of the performance metrics are presented in the form of graphs.

6.1 Effects of Mobility Models
DTN protocols rely on node mobility as a means of message delivery, and thus the performance mainly depends on the encounter pattern. When the network is sparse, node mobility is an important factor. Node mobility is characterized by the structure of mobility and the node speed. In this performance evaluation, the node speed is kept constant and only the structure of mobility is changed through standard mobility models. The performance of the different routing protocols is compared under various mobility models: the Random Waypoint (RWP) model, the Map-Based Mobility (MBM) model, the Shortest Path Map-Based Movement (SPMBM) model and the Random Walk (RW) model. Nodes move randomly to a random destination in the RWP model, whereas MBM constrains node movement to predefined paths and routes derived from real map data. SPMBM uses the same map-based data as MBM, but instead of moving randomly it calculates the shortest path from the source to a random destination using Dijkstra's shortest path algorithm and follows that path. The mobility models differ in the pair-wise inter-contact time due to their behavior. For simplicity it is assumed that all nodes follow a uniform mobility model. Figures 3 to 5 depict the effects of the mobility models on delivery ratio, average delivery latency and overhead ratio respectively.
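For concreteness, the Table 1 parameters can be expressed as a ONE simulator settings fragment. This is a sketch only: the key names follow the conventions of the ONE's default_settings.txt, and the unit conversions (speeds in m/s, transmit speed in bytes per second) are assumptions that should be verified against the simulator version used.

```
# Illustrative ONE settings fragment (key names and units assumed;
# verify against the default_settings.txt of the ONE version in use).
Scenario.name = dtn-protocol-comparison
Scenario.endTime = 43200                 # 720 min, in seconds
MovementModel.worldSize = 4500, 3400     # simulation area, m
Group.nrofHosts = 50
Group.movementModel = RandomWaypoint
Group.router = EpidemicRouter            # swap in SprayAndWaitRouter etc.
Group.bufferSize = 1000M                 # effectively infinite
Group.msgTtl = 120                       # minutes
Group.speed = 2.8, 13.9                  # 10-50 km/h, in m/s
btInterface.type = SimpleBroadcastInterface
btInterface.transmitRange = 250          # m
btInterface.transmitSpeed = 250k         # bytes/s, i.e. 2 Mbps
Events1.class = MessageEventGenerator
Events1.interval = 10, 15                # one new message every 10-15 s
Events1.size = 500k
```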

Figure 3. Delivery probability vs. mobility models

It is perceived from the results in Figure 3 that the mobility models have a substantial impact on the delivery ratio of the routing protocols. The protocols studied so far are not suitable for the Random Walk model, so the Random Walk model is not preferred for further discussion. MaxProp is the only protocol which performs equally well in all kinds of mobility models. Though Spray and Wait, Epidemic and Prophet perform equally well in the SPMBM and RWP models, they experience a slight degradation of 12% in delivery ratio in the MBM model.

Figure 4. Average delivery latency vs. mobility models

It is concluded from the results shown in the graph of Figure 4 that all routing protocols have the lowest delivery latency when the nodes follow the SPMBM model. This is due to the fact that in the SPMBM model the nodes use the shortest path to reach the destination. The results depict that, among all routing protocols, Spray and Wait and MaxProp have the least delivery latency when nodes follow the SPMBM and RWP models, due to their individual characteristics. It is inferred from the graph plotted in Figure 5 that only Spray and Wait has the least overhead among all the routing protocols. As mentioned earlier, this is due to the fact that Spray and Wait restricts the number of replications of a message, which does not vary with the structure of mobility.

Figure 5. Overhead ratio vs. mobility models

7. CONCLUSION
This paper has studied the impact of mobility models on different DTN routing protocols. It is inferred from the above results that Spray and Wait, Epidemic and Prophet perform equally well in the SPMBM and RWP models, but experience a slight degradation of 12% in delivery ratio in the MBM model. In the case of delivery latency, the SPMBM mobility model gives the lowest delivery latency for all routing protocols because the nodes use the shortest path to reach the destination. In the case of overhead, the Spray and Wait protocol has the least overhead among all the routing protocols. In future work we will investigate the effect of simulation time, transmission range and buffer size on different DTN routing protocols.

REFERENCES
[1] Fall, K., 2003, "A Delay-Tolerant Network Architecture for Challenged Internets," Proc. Applications, Technologies, Architectures and Protocols for Computer Communications (SIGCOMM), New York, USA, pp. 27-34.
[2] Burgess, J., Gallagher, B., Jensen, D., and Levine, B., 2006, "MaxProp: Routing for Vehicle-Based Disruption-Tolerant Networks," Proc. 25th IEEE International Conference on Computer Communications, Barcelona, Spain, pp. 1-11.
[3] Zhang, Z., 2006, "Routing in Intermittently Connected Mobile Ad Hoc Networks and Delay Tolerant Networks: Overview and Challenges," IEEE Communications Surveys and Tutorials, Vol. 8, No. 4, pp. 24-37.
[4] Pelusi, L., Passarella, A., and Conti, M., 2006, "Opportunistic Networking: Data Forwarding in Disconnected Mobile Ad Hoc Networks," IEEE Communications Magazine.
[5] Spyropoulos, T., Psounis, K., and Raghavendra, C. S., 2004, "Single-Copy Routing in Intermittently Connected Mobile Networks," Proc. 1st Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (IEEE SECON '04).
[6] Keränen, A., et al., 2009, "The ONE Simulator for DTN Protocol Evaluation," Proc. 2nd International Conference on Simulation Tools and Techniques (Simutools '09), Belgium.
[7] Vahdat, A., and Becker, D., 2000, "Epidemic Routing for Partially Connected Ad Hoc Networks," Technical Report CS-200006, Duke University.
[8] Lindgren, A., et al., 2003, "Probabilistic Routing in Intermittently Connected Networks," SIGMOBILE Mob. Comput. Commun. Rev., Vol. 7, No. 3, pp. 19-20.
[9] Spyropoulos, T., Psounis, K., and Raghavendra, C. S., 2005, "Spray and Wait: An Efficient Routing Scheme for Intermittently Connected Mobile Networks," Proc. ACM SIGCOMM Workshop on Delay-Tolerant Networking (WDTN '05), ACM, New York, NY, USA, pp. 252-259.
[10] Keränen, A., 2008, "Opportunistic Network Environment Simulator," Special Assignment Report, Helsinki University of Technology, Department of Communications and Networking, May 2008.
[11] TKK/COMNET, Project page of the ONE simulator, 2009. [Online]. Available: http://www.netlab.tkk.fi/tutkimus/dtn/theone


Road Traffic Control System in Cloud Computing: A Review

Kapil Kumar
Department of Computer Science and Engineering
Guru Nanak Dev University Regional Campus
Jalandhar, INDIA
er_kapilkumar@yahoo.com

Dr. Pankaj Deep Kaur
Department of Computer Science and Engineering
Guru Nanak Dev University Regional Campus
Jalandhar, INDIA
pankajdeepkaur@gmail.com

ABSTRACT
Road traffic on public roads around the world is a vital problem and is becoming a major concern for decision makers. Urban regions suffer a great deal of traffic congestion. Many overlapping concepts for road traffic control, from gas-lit signals to wireless sensor networks, have appeared in the past few years, but the existing methods for traffic management are not effective, and there is a need for powerful, scalable, high-performance computing. Cloud computing is becoming a good technology for providing powerful and scalable computing at low cost; the Sensor-Cloud has therefore become popular in recent years. This paper reviews these concepts and discusses their alignment with other classes of networks such as the Sensor-Cloud network. The key obstacles to the successful adoption of the Sensor-Cloud are identified, directions for the existing work are outlined, and conclusions are drawn.

Keywords
Cloud Computing, Sensor-Cloud Network, Gas Lit, Wireless Sensor Network, Virtualization

1. INTRODUCTION
Traffic congestion is a critical problem in many urban centers across the world. The existing methods are not accurate in terms of performance and cost for traffic management and control. Urban regions have an even greater problem of traffic congestion, particularly when a number of junctions are taken into consideration [1].

The traffic signal is typically controlled by a controller inside a cabinet mounted on a concrete pad. Some electro-mechanical controllers are still in use [2]. Traffic control will become a very significant topic in the future as the number of road users increases. There are several models for traffic simulation. Heavy traffic causes waiting and accidents, and due to heavy traffic emergency vehicles face difficulties.

Cloud computing permits systems and users to utilize three service models: Platform as a Service (PaaS), where the provider offers several environments to users for the development of applications, and users can develop applications according to their requirements; Infrastructure as a Service (IaaS), which provides virtualized computing resources over the Internet; and Software as a Service (SaaS), which provides software or applications on the Internet that customers use with no knowledge of their development or maintenance [3].

Therefore, the Sensor-Cloud has become popular in recent years, as it can offer an open and elastic platform for various monitoring and controlling applications.

Fig 1: Evolution of Road Traffic Control System


A gas-lit traffic control system is manually operated by a police officer. The use of this system is very difficult due to

457
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

bursting. Nowadays, wireless sensor networks are used for controlling the traffic. A wireless sensor network is constructed of nodes. Sensor nodes sense data, collect data from other nodes, process that data and then transmit the collected data to the base station. Nodes require much power or energy to transmit data, so the main concern is to save power and thereby increase the lifetime of the sensor network. Wireless sensor network applications are used in healthcare, the military, environment monitoring and manufacturing [4]. WSNs are limited in terms of memory, energy, computation, communication and scalability [5], so there is a demand for powerful and scalable high-performance computing for processing and storing the WSN data.

This demand has led to the Sensor-Cloud infrastructure, i.e. an integrated version of wireless sensor networks and cloud computing [6]. A large amount of sensor data can easily be gathered, accessed, processed, stored, shared and searched using the Sensor-Cloud [7]. Sensor-Cloud nodes will be placed at particular spots ahead of the junctions to detect the speed and siren sound of vehicles at a specific threshold. On this basis, two junctions will be able to minimize the traffic delay by inter-communication, allotting the right times for the red and green lights so that emergency vehicles can pass quickly [8].
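The sense-collect-transmit cycle of a sensor node described above can be sketched as a small simulation. This is an illustrative sketch only: the class name, the energy costs and the averaging aggregation are assumptions for the example, not details from any of the cited papers.

```python
# Minimal sketch of a wireless sensor node's duty cycle: sense data,
# collect readings forwarded by neighbours, and transmit an aggregate
# to the base station. Energy costs are assumed values; transmitting
# is modelled as far more expensive than sensing, which is why saving
# transmit power extends the lifetime of the network.

class SensorNode:
    SENSE_COST = 1      # energy units per local reading (assumed)
    TRANSMIT_COST = 10  # energy units per transmission (assumed)

    def __init__(self, node_id, battery=100):
        self.node_id = node_id
        self.battery = battery
        self.buffer = []

    def sense(self, reading):
        """Take a local reading if enough energy remains."""
        if self.battery >= self.SENSE_COST:
            self.battery -= self.SENSE_COST
            self.buffer.append(reading)

    def collect(self, other_readings):
        """Accept data forwarded by neighbouring nodes."""
        self.buffer.extend(other_readings)

    def transmit(self):
        """Aggregate buffered data and send it to the base station."""
        if not self.buffer or self.battery < self.TRANSMIT_COST:
            return None
        self.battery -= self.TRANSMIT_COST
        payload = sum(self.buffer) / len(self.buffer)  # simple averaging
        self.buffer.clear()
        return (self.node_id, payload)

node = SensorNode("n1")
node.sense(21.5)
node.collect([22.5])
print(node.transmit())  # -> ('n1', 22.0)
print(node.battery)     # -> 89
```

Aggregating before transmitting, as in the sketch, is one common way a node economises on the expensive radio operation.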

Table 1. Key attributes differentiating old, present and future road traffic control systems

Key Attribute                      | Old Traffic Control System          | Present Traffic Control System                  | Future Traffic Control System
Methods                            | Gas-lit system                      | Wireless Sensor Network                         | Sensor-Cloud Network
Power Management                   | Dependent on human power            | No defined capability                           | Not dependent on power sources
Scalability                        | No                                  | Limited networks                                | Multiple networks
Network Connectivity and Protocols | No                                  | Dynamic                                         | Static and dynamic
Scheduling                         | No                                  | Power management and sensor resource management | Supports job and service migration
Performance                        | Less                                | Limited                                         | High
Robustness                         | Limited; failed tasks are restarted | Fixed; broken tasks are restarted               | Projects are secure
Virtualization Support             | No                                  | Yes                                             | Yes
Elasticity                         | No                                  | Limited                                         | Unlimited

2. EXISTING TRAFFIC CONTROL SYSTEMS

2.1 Applied Research on Traffic Information Collection System
A distributed traffic information collection system is realized. In [9] the author discussed the essential problems in the implementation and the corresponding solution schemes. The project addresses working power, network topology planning and time synchronization among the nodes.

2.2 Platoon-Based Self-Scheduling for Real-Time Traffic Signal Control
Self-scheduling collects incoming vehicles into critical clusters. The author proposes [10] decision policies that also integrate look-ahead of upcoming vehicle platoons. The simulation results show that the gain of this approach is simple queue clearing. Through the formation of "green waves", vehicles run through the road network without halting, improving the overall traffic flows.

2.3 Intelligent Traffic Control Unit
The Intelligent Traffic Control Unit focuses on three areas: ambulances, priority vehicles and density control [11]. For ambulances, the radio frequency identification (RFID) concept is applied to turn the ambulance's track green; the outcomes clearly state that the highest priority is granted to the ambulance. Secondly, for priority vehicles, an infrared transmitter and receiver are used to turn the vehicle's track green. In the third part, IR emitters and photodiodes are used in line of sight to detect the density at the traffic signal.

2.4 Automatic Daytime Road Traffic Control and Monitoring System
This paper [12] suggests a method for accurately counting the number of vehicles on a route in daylight. The moving targets are extracted using a frame-differencing algorithm and the data from texture unit members. The algorithm performs well under hard road traffic conditions such as shadows, vegetation and big trucks. The most significant novelty of the proposed method is the shadow handling, using only the intensity of black-and-white images and top-hat transforms.

2.5 Design of Adaptive Road Traffic Control System
This system is based on UML. The author provides a technique for managing the traffic in a main road network using signals [13]. These signals are automatically controlled by sensors. To give vehicles good progression through the road network, these detectors coordinate the operation of the traffic signals in the entire area. The signal timing varies throughout the day while coordinating all the signals, removing the dependence on inflexible fixed signal-timing schemes.

2.6 Priority Based Traffic Lights Controller
The author presented vehicle sensing and active traffic signal time handling used in a priority-based traffic lights controller system [14]. The project is also designed to follow international standards for traffic light operations and control over multiple intersections. Both single and multiple intersections adapt dynamically to traffic conditions in these techniques.

Table 2. Existing technologies and their key features

Existing Technology                                                | Key Features
Priority Based Traffic Lights Controller                           | Used for robotic gate opening, VIP vehicles and GPS navigation systems.
Applied Research on Traffic Information Collection System          | Voltage applications: monitoring high-level AC voltage, vehicle tire pressure, and potential troubles in planes and ships before they navigate.
Intelligent Traffic Control Unit                                   | Uses a microcontroller to download recorded data, update delays, delete storage, etc.
Platoon-Based Self-Scheduling for Real-Time Traffic Signal Control | Manages the performance of composite operations in dynamic surroundings.
Design of Adaptive Road Traffic Control System                     | Uses UML, the standard visual modelling language, which offers an extended set of diagrams for modelling.
Automatic Daytime Road Traffic Control and Monitoring System       | A variety of measurements can be achieved with cameras; shadow identification plays a key role.

3. CONCLUSION
In this paper, the role of the Sensor-Cloud in the setting of the various existing methods of road traffic control has been discussed. Cloud computing is a development of these technologies and is announced not to substitute any existing technology; each technology has its own position, and dissimilar technologies need to be mixed together in IT environments. The Sensor-Cloud enables the sensor information to be categorised, stored and processed in such a way that it becomes cost-efficient, timely available and easily accessible. The Sensor-Cloud promises to supply added features and solutions to the problems that beset these technologies, and there are opportunities for applying the technology to more complex aspects of the traffic control system.

REFERENCES
[1] Cheonshik Kim, You-Sik Hong, "Traffic Signal Using Smart Agent System," American Journal of Applied Sciences, 2008: 1487-1493.
[2] Legon-Okponglo, "Design and Development of Micro-Controller Based Traffic System Using Image Processing Techniques," University of Ghana, published in ICAST, 2012 IEEE 4th International Conference.
[3] J. Srinivas, K. Venkata Subba Reddy, A. Moiz Qyser, "Cloud Computing Basics," International Journal of Advanced Research in Computer and Communication Engineering, Vol. 1, Issue 5, pp. 343-347, July 2012.
[4] G. Simon, G. Balogh, G. Pap et al., "Sensor Network-Based Countersniper System," in Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems (SenSys '04), pp. 1-12, Baltimore, Md, USA, November 2004.
[5] J. Yick, B. Mukherjee, and D. Ghosal, "Wireless Sensor Network Survey," Elsevier, 2008.


[6] Kian Tee Lan, 2010, "What's NExT? Sensor+Cloud?", in Proceedings of the 7th International Workshop on Data Management for Sensor Networks, ACM Digital Library, ISBN: 978-1-4503-0416-0, 2010.
[7] H. T. Dinh, C. Lee, D. Niyato, and P. Wang, "A Survey of Mobile Cloud Computing: Architecture, Applications, and Approaches," Wireless Communications and Mobile Computing, Wiley Online Library, 2011.
[8] http://telematicswire.net/c-dac-launches-witrac-indias-first-wireless-traffic-controller-system
[9] Xiaoquan Chen, Jihong Zhang, Shao Qian, Peng Xu, "Applied Research on Traffic Information Collection Based on Wireless Sensor Networks," Elsevier, 2012 International Conference on Future Electrical Power and Energy Systems.
[10] Xiao-Feng Xie, Gregory J. Barlow, Stephen F. Smith, and Zachary B. Rubinstein, "Platoon-Based Self-Scheduling for Real-Time Traffic Signal Control," IEEE International Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 2011.
[11] Searching Jaiswal, Tushar Agarwal, Akanksha Singh and Lakshita, "Intelligent Traffic Control Unit," International Journal of Electrical, Electronics and Computer Engineering, 2(2): 66-72 (2013), ISSN No. (Online): 2277-2626.
[12] P.F. Alcantarilla, M.A. Sotelo, L.M. Bergasa, "Automatic Daytime Road Traffic Control and Monitoring System," Intelligent Transportation Systems, 2008 (ITSC 2008), 11th International IEEE Conference, 12-15 Oct. 2008, pp. 944-949.
[13] K. Ranjini, A. Kanthimathi, Y. Yasmine, "Design of Adaptive Road Traffic Control System through Unified Modeling Language," International Journal of Computer Applications (0975-8887), Volume 14, No. 7, February 2011.
[14] Shruthi K R and Vinodha K, "Priority Based Traffic Lights Controller Using Wireless Sensor Networks," International Journal of Electronics, Signals, and Systems (IJESS), ISSN: 2231-5969, Vol. 1, Issue 4, 2012.


A Review on Scheduling Issues in Cloud Computing

Kapil Kumar, Abhinav Hans, Ashish Sharma, Navdeep Singh
CSE Department
Guru Nanak Dev University Regional Campus
Jalandhar, INDIA
er.kapilkumar@yahoo.com, abhinavhans@gmail.com, iamashish90@gmail.com, navvdeep.singh@gmail.com

ABSTRACT
Cloud computing has become very popular in recent years. Cloud computing deals with different kinds of virtualized resources, hence scheduling plays a significant part in cloud computing. Though a large amount of resources is available, they must be scheduled in such a manner that each job receives the resources it needs to complete. So scheduling algorithms are needed by a cloud to arrange resources for executing jobs. There are various algorithms available that can be used to schedule the resources for job execution. The dissimilar kinds of scheduling algorithms in cloud computing are therefore comprehensively covered in this review paper, and related issues and challenges are highlighted.

Keywords— Cloud Computing, Compromised-Time-Cost Scheduling Algorithm, Hybrid Energy Efficient Scheduling Algorithm, Optimized Resource Scheduling Algorithm, ANT Colony Algorithm, Energy Efficient Algorithm uses Migration, Improved Differential Evolution Algorithm, SHEFT Workflow Scheduling Algorithm, Cloud-DLS: Dynamic Level Scheduling Algorithm, Improved Cost Based Scheduling Algorithm.

I. INTRODUCTION
Cloud computing offers a distributed system over a network in which a program or any application works on many related computers at the same time. Cloud computing is a hosted service in which an end user can access the cloud-based applications through a browser or any mobile application [1]. The National Institute of Standards and Technology (NIST) defines cloud computing as a model for network access. A major advantage of cloud computing is that we can pay per use for any software.

Cloud computing, in other words, is a metaphor for the Internet [2]. A cloud system consists of three service models based on the resource focus [3], i.e. PaaS, IaaS and SaaS. Platform as a Service (PaaS) providers offer several environments to users for the development of applications; the users can develop applications according to their requirements. Infrastructure as a Service (IaaS) provides virtualized computing resources over the Internet, and Software as a Service (SaaS) provides software or applications on the Internet that customers use with no knowledge of their development or maintenance.

In cloud computing, a user may face hundreds of thousands of virtualized resources to utilize, and it is impossible for anyone to allocate the jobs manually. To allocate the resources to each job efficiently, scheduling plays an important role in cloud computing [4]. Executing a large number of simple tasks one by one in a cloud system increases the price, and the price is reduced if we have a modest number of composite tasks. When the number of users in the cloud increases, the scheduling becomes difficult. Therefore, there is a need for a better scheduling algorithm than the existing ones. Since the advent of cloud computing, different kinds of research have been going on, and scheduling strategies have been proposed to overcome the problems between users and resources [5].

Moreover, the rest of the paper is organized as follows: Section II describes the various comparison parameters used. Section III presents various scheduling algorithms. Based on the literature survey, various open issues are discussed in Section IV, and the paper is finally concluded in Section V.

II. COMPARISON PARAMETERS
Various parameters have been used in this section to compare the various scheduling algorithms:

A. Execution time
The time in which the program runs and each single instruction, such as an addition or multiplication, is carried out by the computer.

B. Response time
The sum of the service time and the wait time. Technically, response time is the time a system takes to react to a given input.

C. Make span
The difference between the start and the finish of a sequence of jobs.

D. Energy Consumption
The consumption of energy or power. It is also defined in some quarters as the use of energy as a raw material in the process of manufacturing utilities.

E. Throughput
How much data can be transferred from one location to another in a given amount of time.

F. Scalability
The ability to handle increasing demands and a growing amount of work.

G. Resource utilization
The use of a resource in such a way that it increases throughput; the resources used to perform a particular task.
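The timing metrics defined above can be computed directly from per-job timestamps. The sketch below is illustrative only; the job tuples are invented for the example.

```python
# Minimal sketch computing common scheduling metrics from per-job
# (submit, start, finish) timestamps. The job data is invented for
# illustration.
jobs = [
    # (submit_time, start_time, finish_time)
    (0, 0, 4),
    (1, 2, 7),
    (2, 5, 9),
]

# Makespan: difference between the start of the first job and the
# finish of the last job in the sequence.
makespan = max(f for _, _, f in jobs) - min(s for _, s, _ in jobs)

# Response time per job: wait time (start - submit) plus service time
# (finish - start), which simplifies to finish - submit.
response_times = [finish - submit for submit, _, finish in jobs]

# Throughput: jobs completed per unit of elapsed time.
throughput = len(jobs) / makespan

print(makespan)              # -> 9
print(response_times)        # -> [4, 6, 7]
print(round(throughput, 2))  # -> 0.33
```

A scheduler that is compared on these parameters is simply one that produces different (start, finish) assignments for the same submitted jobs.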


H. Load Balancing
Load balancing is the most straightforward method of scaling out an application server infrastructure. As application demand grows, new hosts can easily be added to the resource pool, and the load balancer will instantly start sending traffic to the new hosts.

I. Fault tolerance
Fault tolerance is defined as how to provide, by redundancy, a service that complies with the specification in spite of faults having occurred or occurring.

III. EXISTING SCHEDULING ALGORITHMS

A. Compromised-Time-Cost Scheduling Algorithm
A novel compromised-time-cost scheduling algorithm driven by user input is proposed in [6]. The work focuses on minimizing the cost under user-specified deadlines, and it provides a just-in-time graph of the time-cost relationship. Multiple concurrent instances of the dynamic cloud computing platform are used to change the schedule if the user wants.

B. Hybrid Energy Efficient Scheduling Algorithm
This algorithm is based on pre-power techniques and a least-load-first algorithm. The author proposed [7] a pre-power technique that is used to reduce the response time, using an idle threshold value. When the data centres are running in low-power mode, the least-load-first algorithm is used to balance the workloads.

C. Optimized Resource Scheduling Algorithm
The author proposed [8] optimal use of resources by using virtual machines, based on an Improved Genetic Algorithm (IGA). Compared to the traditional GA scheduling method, the speed of the IGA was almost twice as high and the utilization of resources was also larger. The IGA selects optimal VMs by introducing a dividend policy.

D. ANT Colony Algorithm
The author proposed a balanced ant colony algorithm [9] which uses a pseudo-random proportional rule to balance the entire system load while completing all the jobs at hand as soon as possible according to the environmental status. This algorithm balances the workload as well as minimizing the makespan.

E. Energy Efficient Algorithm uses Migration
The author proposed a hybrid energy-efficient scheduling algorithm [10] using dynamic migration. In this paper, powering down a busy node is judged not feasible using a threshold value. The algorithm uses the power-up command to wake sleeping nodes as well as idle nodes, and an expected spectrum set for the remaining capacity is used. Hence power efficiency is improved.

F. Improved Differential Evolution Algorithm (IDEA)
In [11] the author proposed a scheduling algorithm which optimizes task scheduling and resource allocation based on cost and time models. It is a multi-objective optimization approach. The cost model includes the processing and receiving costs, and the time model includes the receiving, processing and waiting times. The algorithm relies on the Taguchi method.

G. SHEFT Workflow Scheduling Algorithm
This paper [13] proposed the SHEFT (Scalable HEFT) scheduling algorithm, which helps in increasing and decreasing the number of resources at runtime. It allows resources to scale at runtime and outperforms other approaches in optimizing workflow execution time. It schedules a workflow in a cloud environment elastically, with optimized execution time for the workflow.

H. Cloud-DLS: Dynamic Level Scheduling Algorithm
A cognitive trust model is used in dynamic level scheduling (DLS), and hence a trusted dynamic level scheduling algorithm is introduced in [14]. This paper focuses on trustworthiness in cloud computing: because of the characteristics of cloud computing, obtaining trustworthiness in computing resources is difficult. Two kinds of trust, i.e. the direct trust degree and the recommendation trust degree, are combined to obtain trusted scheduling, extending the traditional DLS algorithm.

I. Improved Cost Based Scheduling Algorithm
In [15] the author proposed an improved cost-based scheduling algorithm. It measures computation performance and resource cost, and it also increases the computation/data-transfer ratio by grouping tasks.

IV. OPEN ISSUES
Based on the survey, a scheduling framework can be implemented using the different parameters. The efficiency of energy usage is a main issue that has drawn a lot of concern, but scheduling remains an unmatched issue in the management of application performance in a cloud environment. It must focus on cost, time, energy efficiency and load balancing of the data centres. Fairness in resource allotment plays a critical role in scheduling.

V. CONCLUSION
In this paper, we have surveyed several existing scheduling algorithms in cloud computing. Table I is shown for further comparison. A scheduling framework should be implemented to improve user satisfaction. We have used various parameters to make the comparison; the scheduling framework should consider the user input constraints, execution time, energy efficiency, performance issues, makespan and so on. We also acknowledge that disk space management is vital in virtualized environments. Hence, there is a need to carry out such a scheduling algorithm in cloud computing.
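The least-load-first policy mentioned for the hybrid energy-efficient approach, and the load-balancing goal shared by several of the surveyed algorithms, can be sketched as a simple greedy assignment. This is an illustrative sketch under assumed inputs (host count and task sizes are invented), not any of the authors' implementations.

```python
import heapq

# Greedy least-load-first sketch: each task is assigned to the host
# that currently carries the least load, which keeps the per-host
# loads balanced as tasks arrive.

def least_load_first(tasks, num_hosts):
    """Assign each task to the currently least-loaded host.

    Returns the final load carried by each host."""
    # Heap of (current_load, host_id) pairs; the least-loaded host
    # is always at the top.
    heap = [(0, h) for h in range(num_hosts)]
    heapq.heapify(heap)
    loads = [0] * num_hosts
    for task in tasks:
        load, host = heapq.heappop(heap)
        load += task
        loads[host] = load
        heapq.heappush(heap, (load, host))
    return loads

print(least_load_first([5, 3, 2, 4, 1], num_hosts=2))  # -> [9, 6]
```

The heap makes each assignment O(log n) in the number of hosts, which is why this greedy policy scales to large resource pools.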


TABLE I. COMPARISON BETWEEN EXISTING SCHEDULING ALGORITHMS' ISSUES
(Each algorithm is compared against the parameters of Section II: Execution Time, Response Time, Make Span, Energy Consumption, Throughput, Scalability, Resource Utilization, Load Balancing and Fault Tolerance.)

Algorithms compared: Compromised-Time-Cost Scheduling Algorithm; Hybrid Energy Efficient Scheduling Algorithm; Optimized Resource Scheduling Algorithm; ANT Colony Algorithm; Energy Efficient Algorithm uses Migration; Improved Differential Evolution Algorithm (IDEA); SHEFT Workflow Scheduling Algorithm; Dynamic Level Scheduling Algorithm; Improved Cost Based Scheduling Algorithm.

REFERENCES
[1] C. Germain-Renaud, O. Rana, "The Convergence of Clouds, Grids, and Autonomics," IEEE Internet Computing 13 (6) (2009) 9.
[2] http://en.wikipedia.org/wiki/Cloud_computing
[3] Krishan Kant Lavania, Yogita Sharma, Chandresh Bakliwal, "A Review on Cloud Computing Model," International Journal on Recent and Innovation Trends in Computing and Communication, ISSN 2321-8169, Volume 1, Issue 3, March 2013.
[4] Baomin Xu, Chunyan Zhao, Enzhao Hua, Bin Hu, "Job Scheduling Algorithm Based on Berger Model in Cloud Environment," Elsevier publications, March 2011.
[5] M. Gokilavani, S. Selvi, C. Udhayakumar, "A Survey on Resource Allocation and Task Scheduling Algorithms in Cloud Environment," International Journal of Engineering and Innovative Technology (IJEIT), Volume 3, Issue 4, October 2013.
[6] Ke Liu, Hai Jin, Jinjun Chen, Xiao Liu, Dong Yuan, Yun Yang, "A Compromised-Time-Cost Scheduling Algorithm in SwinDeW-C for Instance-Intensive Cost-Constrained Workflows on Cloud Computing Platform," International Journal of High Performance Computing Applications, Volume 24, Issue 4, November 2010, pp. 445-456.
[7] Jiandun Li, Junjie Peng, Zhou Lei, Wu Zhang, "An Energy Efficient Scheduling Approach Based on Private Clouds," Journal of Information & Computer Science, April 2011.
[8] H. Zhong, K. Tao, X. Zhang, "An Approach to Optimized Resource Scheduling Algorithm for Open-Source Cloud Systems," Fifth Annual ChinaGrid Conference (IEEE), pp. 124-129, 2010.
[9] Kapil Kumar, Abhinav Hans, Ashish Sharma, Navdeep Singh, "Towards the Various Cloud Computing Scheduling Concerns: A Review," International Conference on Innovative Applications of Computational Intelligence on Power, Energy and Controls with their Impact on Humanity (CIPECH14), 28-29 November 2014.
[10] Jiandun Li, Junjie Peng, Zhou Lei, Wu Zhang, "A Scheduling Algorithm for Private Clouds," Journal of Convergence Information Technology, Volume 6, Number 7, July 2011.
[11] Jinn-Tsong Tsai, Jia-Chen Fang, Jyh-Horng Chou, "Optimized Task Scheduling and Resource Allocation on Cloud Computing Environment Using Improved Differential Evolution Algorithm," Computers & Operations Research 40 (2013) 3045-3055.
[12] Deepak Poola, Kotagiri Ramamohanarao, and Rajkumar Buyya, "Fault-Tolerant Workflow Scheduling Using Spot Instances on Clouds," ICCS 2014, 14th International Conference on Computational Science, Volume 29, 2014, pp. 523-533.
[13] C. Lin, S. Lu, "Scheduling Scientific Workflow Elasticity for Cloud Computing," IEEE 4th International Conference on Cloud Computing, pp. 246-247, 2011.
[14] Wei Wang, Guosun Zeng, Daizhong Tang, Jing Yao, "Cloud-DLS: Dynamic Trusted Scheduling for Cloud Computing," Expert Systems with Applications, 2011.
[15] S. Selvarani, G.S. Sadhasivam, "Improved Cost Based Algorithm for Task Scheduling in Cloud Computing," Computational Intelligence and Computing Research, pp. 1-5, 2010.


Artificial Intelligence (AI): A Review

Preeti
Asstt. Prof. in Computer Sci.
Gobindgarh Public College, Alour (Khanna)
vermapreeti1234@gmail.com

Damandeep Kaur
Asstt. Prof. in Computer Sci.
Gobindgarh Public College, Alour (Khanna)
damandeepnabha@gmail.com

Ishu Gupta
Asstt. Prof. in Economics
Gobindgarh Public College, Alour (Khanna)
ishugupta36@gmail.com

ABSTRACT
This paper aims at presenting the perception of "Artificial Intelligence". Artificial intelligence is a branch of computer science that aims to create intelligent machines which can perform functions similar to human beings. It has become an essential part of the technology industry. Research associated with artificial intelligence is highly technical and specialized. The paper briefly explains the core parts of Artificial Intelligence (AI), such as knowledge, reasoning, problem solving, perception, learning and planning. It further describes the various applications of Artificial Intelligence, such as data mining, knowledge representation, the knowledge base (KB), the inference engine, soft computing, natural language processing, aviation and robotics. The main aim of this paper is to explore the recent applications of Artificial Intelligence (AI) and its impact on our lives, to provide an overview of the field, the areas where Artificial Intelligence is used, and the critical role of Artificial Intelligence.

Keywords
Artificial Intelligence.

1. INTRODUCTION
Artificial Intelligence (AI) is the capacity of a computer to perform functions like learning and decision making just like humans: for example, an expert system, a program for Computer Aided Design (CAD) or Computer Aided Manufacturing (CAM), or a program for the perception and recognition of shapes in computer vision systems. The term is commonly applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the capability to reason, discover meaning, generalize, or learn from past experience.

2. WHAT IS INTELLIGENCE?
The word intelligence derives from the Latin verb intelligere, to comprehend or perceive. Intelligence has been defined in many different ways, such as in terms of one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving. It can also be more generally described as the ability to perceive and/or retain knowledge or information and apply it to itself or other instances of knowledge or information, creating referable understanding models of any size, density, or complexity, due

has also been observed in non-human animals and in plants. Artificial intelligence (AI) is the replication of intelligence in machines.

Intelligence = Perceive + Analyze + React

According to some researchers, intelligence is "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—'catching on,' 'making sense' of things, or 'figuring out' what to do".

3. CORE PARTS OF ARTIFICIAL INTELLIGENCE
The core parts of artificial intelligence include programming computers for certain traits such as:

3.1 Knowledge
Knowledge engineering is a core part of Artificial Intelligence (AI) research. Machines can act and react like humans only if they have rich information concerning the world. Artificial intelligence must have access to objects, classes, properties and the relations between all of them to implement knowledge engineering.

3.2 Reasoning

3.3 Problem Solving
Problem solving is another core part of Artificial Intelligence (AI), which may be described as an orderly search through a range of possible procedures in order to reach some predefined aim or solution. Problem-solving methods divide into two parts: special purpose and general purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. On the other
to any conscious or subconscious imposed will or training to hand, a general-purpose method is applicable to a broad range
do so. Intelligence is most extensively studied in humans, but of problems. One general-purpose technique used in Artificial

464
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Intelligence (AI) is means-end analysis—a step-by-step, or incremental, reduction of the difference between the current state and the final goal.

3.4 Perception
Machine perception deals with the capability to use sensory inputs to work out the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and speech recognition. In perception the situation is examined by means of various sensory organs, real or artificial, and the scene is decomposed into separate objects in various spatial relationships. Analysis is complicated by the fact that an object may appear different depending on the angle from which it is viewed, the direction and intensity of illumination in the scene, and how much the object contrasts with the surrounding field. In recent times, artificial perception is sufficiently well advanced to allow optical sensors to identify individuals and autonomous vehicles to drive at modest speed on the open road.

3.5 Learning
Machine learning is another core part of Artificial Intelligence (AI). There are different forms of learning as applied to artificial intelligence. The simplest is the trial and error method. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures—known as rote learning—is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations.

3.6 Planning
The planning problem in Artificial Intelligence (AI) concerns the decision making performed by intelligent creatures like robots, humans, or computer programs when trying to achieve some goal. It involves choosing a sequence of actions that will transform the state of the world, step by step, so that it will satisfy the goal. The world is typically viewed as consisting of atomic facts, and actions make some facts true and some facts false.

4. APPLICATIONS OF ARTIFICIAL INTELLIGENCE (AI)
Artificial intelligence (AI) has been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, remote sensing, scientific discovery and toys. The various applications of Artificial Intelligence (AI) are:

4.1 Data Mining
Data Mining is the analysis step of the "Knowledge Discovery in Databases" (KDD) process. It is an interdisciplinary subfield of computer science: the computational process of discovering patterns in large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Beyond the raw analysis step, it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.

4.2 Knowledge Representation
Knowledge is the information about an area that can be used to solve problems in that particular area. To solve problems, a large extent of knowledge is needed, and this knowledge must be embodied in the computer. A representation scheme is the form of the knowledge that is used in an agent; it specifies the form of the knowledge. A knowledge base is the representation of all of the knowledge that is stored by an agent.

4.3 Knowledge Base (KB)
A Knowledge Base (KB) is a technology used to accumulate complex structured and unstructured information used by a computer system. The initial use of the term was in connection with expert systems, which were the first knowledge-based systems. A knowledge base makes its facts and rules available to the inference engine in a form that it can use. The facts may be in the form of background information built into the system or facts that are input by the user during a consultation. The rules include the production rules that apply to the domain of the expert system, and the heuristics or rules-of-thumb that are provided by the domain expert in order to make the system find solutions more capably by taking short cuts.

4.4 Inference Engine
An Inference Engine is a program which infers new facts from known facts using inference rules. It locates the appropriate knowledge in the knowledge base, and infers new knowledge by applying logical processing and problem-solving strategies. It is commonly found as part of a Prolog interpreter, an expert system or a knowledge-based system.

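The knowledge base and inference engine described above can be sketched in a few lines of Python. This is only an illustrative sketch under assumed representations (facts as strings, rules as hypothetical premises-to-conclusion pairs), not the implementation of any particular expert system; it derives new facts by forward chaining until no rule can fire.

```python
# Minimal forward-chaining inference engine (illustrative sketch).
# Facts are strings; each rule maps a set of premise facts to one conclusion.
def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                       # repeat until no new fact is inferred
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)  # rule fires: add the new fact
                changed = True
    return derived

# Hypothetical knowledge base for an expert-system consultation.
facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]
print(forward_chain(facts, rules))
```

Chaining to a fixed point is what lets the second (hypothetical) rule fire only after the first one has added its conclusion.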

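The planning view of Section 3.6, a world of atomic facts with actions that make some facts true and some false, corresponds to STRIPS-style planning. The sketch below is a minimal illustration with hypothetical actions, using breadth-first search over states represented as sets of facts; real planners use heuristics to scale, but the state-transformation idea is the same.

```python
from collections import deque

# STRIPS-style action: preconditions must hold; effects add/delete atomic facts.
# Minimal breadth-first planner over states represented as frozensets of facts.
def plan(initial, goal, actions):
    start = frozenset(initial)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:                      # all goal facts are true
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                   # action applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None                                # no action sequence reaches the goal

# Hypothetical actions: (name, preconditions, add list, delete list).
actions = [
    ("pick_up", frozenset({"on_table", "hand_empty"}),
     frozenset({"holding"}), frozenset({"on_table", "hand_empty"})),
    ("put_on_shelf", frozenset({"holding"}),
     frozenset({"on_shelf", "hand_empty"}), frozenset({"holding"})),
]
print(plan({"on_table", "hand_empty"}, frozenset({"on_shelf"}), actions))
```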
4.5 Soft Computing
Soft Computing provides rapid dissemination of important results in soft computing technologies: a synthesis of research in evolutionary algorithms and genetic programming, neural science and neural net systems, fuzzy set theory and fuzzy systems, and chaos theory and chaotic systems. It also promotes the integration of soft computing techniques and tools into both everyday and advanced applications.

4.6 Natural Language Processing (NLP)
Natural language processing (NLP) refers to computer systems that examine, attempt to understand, or produce one or more human languages, such as English, Japanese, Italian, or Russian. The input might be text, spoken language, or keyboard input. The task might be to translate to another language, to understand and represent the content of text, to build a database or generate summaries, or to maintain a dialogue with a user as part of an interface for database retrieval.

4.6.1 Goals
 To make it easy to define tokens and sentences.
 To provide a safe, compile-time checked definition of the syntax and grammar.
 To model real-world inheritance with C# class inheritance.

4.7 Aviation
The Air Operations Division (AOD) uses Artificial Intelligence (AI) for rule-based expert systems. The AOD uses artificial intelligence for surrogate operators for combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of the simulator data into symbolic summaries. The use of artificial intelligence in simulators is proving to be very useful for the AOD. Airplane simulators use artificial intelligence in order to process the data taken from simulated flights. Other than simulated flying, there is also simulated aircraft warfare. The computers are able to come up with the best success scenarios in these situations, and can also create strategies based on the placement, size, speed and strength of the forces and counter-forces. Pilots may be given assistance in the air during combat by computers. The artificially intelligent programs can sort the information and provide the pilot with the best possible maneuvers, not to mention ruling out certain maneuvers that would be impossible for a human being to perform. Multiple aircraft are needed to get good approximations for some calculations, so computer-simulated pilots are used to gather data. These computer-simulated pilots are also used to train future air traffic controllers.

4.8 Robotics
Robotics is one field within artificial intelligence. Conventional robotics employs Artificial Intelligence planning technologies to program robot behaviors, and works toward robots as technical devices that have to be developed and controlled by a human engineer. The autonomous robotics approach suggests that robots could develop and control themselves autonomously: such robots are able to adapt to both uncertain and imperfect information in constantly changing environments, and a simulated evolution process can develop adaptive robots.


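The NLP goals listed above concern defining tokens and sentences. The original list targets a C# grammar library, so the following Python sketch, with a hypothetical token pattern, is only an analogy of the same idea: a declarative token definition plus a naive sentence splitter.

```python
import re

# Toy NLP front end: split raw text into sentences, then into word tokens.
# TOKEN is a hypothetical pattern: words (optionally with an apostrophe)
# or a single punctuation character.
TOKEN = re.compile(r"[A-Za-z]+(?:'[A-Za-z]+)?|[.,!?]")

def sentences(text):
    # Naive sentence boundary: split after ., ! or ? followed by whitespace.
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def tokens(sentence):
    return TOKEN.findall(sentence)

text = "NLP systems examine text. They may translate it, or summarize it!"
for s in sentences(text):
    print(tokens(s))
```

Real NLP pipelines handle abbreviations, Unicode and ambiguity far more carefully; the point here is only how token and sentence definitions can be stated declaratively.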
5. TURING TEST
Turing suggested we should ask whether the machine can win a game called the "Imitation Game". The original Imitation Game that Turing described is a simple party game involving three players. Player A is a man, player B is a woman and player C can be of either sex. In the Imitation Game, player C is unable to see either player A or player B and can communicate with them only through written notes or any other form that does not give away any details about their gender. By asking questions of player A and player B, player C tries to determine which of the two is the man and which is the woman. Player A's role is to trick the interrogator into making the wrong decision, while player B attempts to assist the interrogator in making the right one.

6. ADVANTAGES OF ARTIFICIAL INTELLIGENCE (AI)
There are many advantages of Artificial Intelligence (AI), and these are:

6.1 Jobs
Due to Artificial Intelligence, computers are able to perform various jobs. Depending on the level and type of intelligence these machines receive in the future, it will obviously have an effect on the type of work they can do, and how well they can do it. As the level of AI increases, so will their capability to deal with difficult, complex and even risky tasks that are currently done by humans, a form of applied artificial intelligence.

6.2 Increase Our Technological Growth Rate
Artificial Intelligence (AI) will potentially help us 'open doors' into new and more advanced technological breakthroughs. For example, due to their capability to run millions and millions of computer modeling programs with high degrees of correctness, machines could essentially help us to find and understand new chemical elements and compounds. Basically, a very realistic advantage AI could offer is to act as a sort of catalyst for further technological and scientific discovery.

6.3 They Don't Stop
As they are equipment, there is no need for rest or sleep. They do not get ill or tired; they only need to be charged or refilled. The machines can do much more work than a man can do. All that is required is that they have some energy source.

6.4 No Risk of Damage
While exploring new undiscovered land or even planets, machines may get out of order, but there is no harm done, as they don't feel and don't have emotions. For humans, going on the same type of expeditions a machine undertakes may simply not be possible, or would expose them to high-risk situations.

6.5 Act as Helper
These machines can act as 24*7 aids to children with disabilities or the elderly; they could even act as a source for learning and teaching. They could even be part of security, alerting you to possible fires that threaten you, or reducing crime.

6.6 Their Function is Almost Limitless
These machines are able to do almost everything; essentially, their use pretty much doesn't have any limits. They make fewer faults, they are emotionless, they are well-organized, and they basically give us more free time to do as we please.

7. DISADVANTAGES OF ARTIFICIAL INTELLIGENCE (AI)

7.1 Over-reliance on Artificial Intelligence (AI)
If we over-rely on machines, we will become dependent on them, and this can prove harmful for the whole economy. It wouldn't be too smart on our part not to have some sort of backup plan for potential issues that could arise if the machines 'got real smart'.
7.2 Human Feel


As they are machines, they obviously can't provide you with that 'human touch and quality': the feeling of togetherness and emotional understanding. Machines lack the ability to understand situations and may act irrationally as a consequence.

7.3 Inferior

As machines will be able to perform almost every task better than us in practically all respects, they will take up many of our jobs, which will then result in masses of people who are jobless and as a result feel essentially useless. This could then lead to issues such as mental illness and obesity.

7.4 Misuse

If this kind of technology goes into the wrong hands, it can cause mass destruction: robot armies could be formed, or the machines could malfunction, be corrupted, or otherwise prove harmful for the world.

8. CONCLUSION

The paper aims at presenting the perception of "Artificial Intelligence", by which a computer can perform operations similar to those of human beings. The ability to learn from examples makes such systems all the more powerful, and they are well organized. The main aim of researchers is to create machines that surpass humans, yet there is still a lot to gain in the computer world. Artificial Intelligence in the future will try to make computers more stylish and sophisticated, so that human functions such as learning by doing, cognition and sensitivity will also be performed by these computers.



Data Optimization using Transformation approach in Privacy Preserving Data Mining

Rupinder Kaur
Student of YCOE
Punjabi University, Guru Kashi Campus, Talwandi Sabo
rupindercsgate@gmail.com

Meenakshi Bansal
Assistant Professor
YCOE, Punjabi University, Guru Kashi Campus, Talwandi Sabo
ermeenu10@gmail.com

ABSTRACT
Data Mining is a way to extract hidden knowledge from large databases. Data mining techniques extract hidden patterns from databases that may be used for effective decision making. But these techniques may pose a serious threat to privacy by disclosing sensitive information. So the trend is towards developing Privacy Preserving Data Mining (PPDM) techniques that are able to extract useful information without posing a threat to privacy. This paper presents the basic concepts in the context of PPDM and also a description of various Privacy Preserving Data Mining techniques. The main emphasis is on the transformation approach for PPDM, which results in less information loss as compared to other PPDM techniques.

1. INTRODUCTION
The significant developments in the field of data collection and data storage technologies have allowed transactional data to grow in data warehouses that reside in companies and public-sector organizations. As the data is growing day by day, there has to be some mechanism that can analyze such large volumes of data. Data mining is a way of extracting hidden predictive information from those data warehouses without revealing their sensitive information. Data mining is the process of extracting hidden information from the database, and it is emerging as one of the key features of many business organizations. The current trend in business collaboration is to share data and mined results to gain mutual benefit. The term data mining refers to the nontrivial extraction of valid, implicit, potentially useful and ultimately understandable information from large databases with the help of modern computing devices.

2. PRIVACY PRESERVING DATA MINING (PPDM)
Successful applications of data mining techniques have been demonstrated in many areas that benefit commercial, social and human activities. Along with the success of these techniques, they pose a threat to privacy: one can easily disclose others' sensitive information or knowledge by using these techniques. So, before releasing a database, sensitive information or knowledge must be hidden from unauthorized access. To solve the privacy problem, PPDM has become a hotspot in data mining and the database security field. Many organizations disclose their information or database for mutual benefit, to find some useful information for decision-making purposes and to improve their business schemes. But this database may contain some private data which the organization does not want to disclose. The issue of privacy plays an important role when several organizations share their data for mutual benefit but no one wants to disclose their private data. Therefore, before disclosing the database, sensitive patterns must be hidden, and to solve this issue PPDM techniques are helpful to enhance the security of the database. Privacy preserving data mining (PPDM) is the recent research area that deals with the problem of hiding sensitive information while analyzing data.

In recent years, the wide availability of personal data has made the problem of privacy preserving data mining an important one. Privacy preserving data mining considers the problem of running data mining algorithms on confidential data that is not supposed to be revealed even to the party running the algorithm. The problem has become more important in recent years because of the increasing ability to store personal data about users, and the increasing sophistication of data mining algorithms to leverage this information. So maintaining privacy is a challenging issue in data mining.

The basic form of the data in a table consists of the following four types of attributes [4]:
(i) Explicit Identifiers: a set of attributes containing information that identifies a record owner explicitly, such as name, SS number etc.
(ii) Quasi Identifiers: a set of attributes that could potentially identify a record owner when combined with publicly available data.
(iii) Sensitive Attributes: a set of attributes that contains sensitive person-specific information such as disease, salary etc.
(iv) Non-Sensitive Attributes: a set of attributes that creates no problem if revealed even to untrustworthy parties.

3. NEED FOR PPDM
Consider that, as a tea reseller, we purchase tea at a low price from two companies, Tata Tea Ltd. and Lipton Tea Ltd., while granting them access to our customer database. Now, suppose the Lipton Tea supplier misuses the database and mines association rules related to Tata Tea, saying that most of the customers who buy bread also buy Tata tea.


The Lipton tea supplier now runs a coupon scheme that offers some discount on bread with the purchase of Lipton Tea. So, sales of Tata Tea drop rapidly, the Tata Tea supplier can no longer offer tea to us at a low price as before, and Lipton monopolizes the tea market while also no longer offering tea to us at a low price. As a result, we may start losing business to our competitors. So, releasing a database with sensitive knowledge is bad for us. This scenario leads to the research of sensitive knowledge (or rule) hiding in databases.

For example, a hospital may release patients' diagnosis records so that researchers can study the characteristics of various diseases. The raw data, also called microdata, contains the identities (e.g. names) of individuals, which are not released to protect their privacy. However, there may exist other attributes that can be used, in combination with an external database, to recover the personal identities. Now we assume that the hospital publishes the data in Table 1, which does not explicitly indicate the names of patients.

Table 1: Patients' diagnosis records published by hospital

ID   Age   Sex   Zipcode   Disease
1    26    M     83661     Flu
2    24    M     83634     Heart Attack
3    31    M     83967     Stomach Cancer
4    39    F     83949     Flu

Table 2: Voter registration list

ID   Name   Age   Sex   Zipcode
1    Jim    26    M     83661
2    Jay    24    M     83634
3    Tom    31    M     83967
4    Lily   39    F     83949

4. PPDM TECHNIQUES
PPDM has become an important issue in data mining research. As a result, a whole new set of approaches was introduced to allow mining of data while at the same time prohibiting the leakage of any private and sensitive information. The majority of the existing approaches can be classified into two broad categories:
(i) methodologies that protect the sensitive data itself in the mining process, and
(ii) methodologies that protect the sensitive data mining results (i.e. extracted knowledge) that were produced by the application of the data mining.

The first category refers to methodologies that apply perturbation, sampling, generalization/suppression, transformation, etc. techniques to the original datasets in order to generate their sanitized counterparts that can be safely disclosed to untrustworthy parties. The goal of this category of approaches is to enable the data miner to get accurate data mining results when it is not provided with the real data. It also covers Secure Multiparty Computation methodologies that have been proposed to enable a number of data holders to collectively mine their data without having to reveal their datasets to each other.

The second category deals with techniques that prohibit the disclosure of sensitive knowledge patterns derived through the application of data mining algorithms, as well as techniques for downgrading the effectiveness of classifiers in classification tasks, such that they do not reveal sensitive knowledge.

PPDM tends to transform the original data so that the result of the data mining task does not defy privacy constraints. Following is the list of five dimensions on the basis of which different PPDM techniques can be classified:
(i) Data distribution
(ii) Data modification
(iii) Data mining algorithms
(iv) Data or rule hiding
(v) Privacy preservation

4.1 Data distribution
The first dimension refers to the distribution of data. Some of the privacy preserving approaches have been developed for centralized data and others refer to a distributed data scenario. Distributed dataset scenarios can also be classified as horizontal data distribution and vertical data distribution. Horizontal distribution refers to the cases where different database records reside in different places, while vertical data distribution refers to the cases where all the values for different attributes reside in different places.

4.2 Data modification
The second dimension refers to the modification scheme of the data. Data modification is mostly used in order to change the original values of a database that needs to be released to the public and also to ensure high protection of the privacy data. It is important that a data modification technique be consistent with the privacy policy adopted by an organization. Methods of data modification include:
 Perturbation, which is accomplished by the alteration of an attribute value with a new value (i.e., changing a 1-value to a 0-value, or adding noise). Moreover, the data transformation approach is more efficient than bit transformation.
 Blocking, which is the replacement of an existing attribute value with an aggregation or merging, i.e. the combination of several values.
 Swapping, which refers to interchanging values of individual records.
 Sampling, which refers to releasing the data for only a sample of a population.

4.3 Data mining algorithm
The third dimension refers to the data mining algorithm for which the data modification takes place. It includes the problem of hiding data from a combination of data mining algorithms. For the time being, various data mining algorithms have been considered in isolation of each other. Among them, the most important ideas have been developed for classification data mining algorithms, such as


decision tree inducers, clustering algorithms, rough sets, Bayesian networks and association rule mining algorithms.

4.4 Data or rule hiding
The fourth dimension refers to hiding the rule or the data. The complexity of hiding aggregated data in the form of rules is of course higher, and for this reason mostly heuristics have been developed. The aim is for the data miner to produce weaker inference rules that will not allow the inference of confidential values. This process is also known as "rule confusion".

4.5 Privacy preservation
The last dimension, which is the most important, refers to the privacy preservation technique used for the selective modification of the data. Selective modification is required in order to achieve higher utility for the modified data, given that the privacy of the data is not lost. The important techniques of privacy preserving data mining are:
1. The randomization method
2. The anonymization method
3. The encryption method

4.5.1 The randomization method
The randomization method is an important and popular method in current privacy preserving data mining techniques. It masks the values of the records by adding additional data to the original data.

4.5.2 The anonymization method
The anonymization method is aimed at making an individual record indistinguishable among a group of records by using generalization and suppression techniques. K-anonymity is the representative anonymization method. The motivating factor behind the k-anonymity approach is that many attributes in the data can often be considered quasi-identifiers, which can be used in conjunction with public records in order to uniquely identify the records.

4.5.3 The encryption method
The encryption method mainly resolves problems where people jointly conduct mining tasks based on the private inputs they provide. These private mining tasks could occur between mutually untrusted parties, or even between competitors. Therefore, protecting privacy becomes an important concern in the distributed data mining setting.

5. TRANSFORMATION APPROACH FOR PPDM
Transformation means changing from one form to another. Various transformation methods exist for numeric attributes. Data transformation methods are more efficient than bit transformation methods as they result in less information loss. The actual sensitive values are replaced with new values, which can exhibit the same general pattern as the actual data, but conceal the actual sensitive information (details) even if traced by a linking attack.

5.1 Transformation method for alphanumerical data type
The transformation of the alphanumeric nominal data type is based on the principle that encoding does not produce any information loss. Each nominal sensitive value is assigned a mapping value. The mapping values are alphabetic or alphanumeric and assigned in any random order. A typical mapping table for the attribute 'Disease' is shown in Table 3. In this table, actual values of the attribute 'Disease' are arranged neither in alphabetical order, nor in severity order, nor in any taxonomy order, but are assigned certain mapping values. With this kind of mapped values, the actual values cannot per se be guessed.

Table 3: Mapping Table

Actual Value      Mapping Value
Flu               Illness_1
Stomach Cancer    Illness_2
Bronchitis        Illness_3
H1N1              Illness_4

This mapping table is to be preserved on a highly secured trusted server along with the original table T and kept confidential. In a table there may be any number of sensitive nominal or categorical attributes. For each sensitive attribute, a separate mapping table is to be preserved [8].

5.2 Transformation method for numerical data type
Various transformation methods exist for numeric sensitive and quasi-identifier attributes. Poovammal [7] proposed a fuzzy based transformation to transform quasi-identifier numeric attributes. In 2010, Poovammal and Ponnavaikko developed a categorical based transformation method to transform numeric sensitive attributes [8]. Mukkamala and Ashok [6] compared fuzzy based transformation methods and proposed various fuzzy functions and mappings. Haung and Chen [2] proposed a distance and correlation preserving transformation called the FISIP transformation to transform private numeric data values. Jalla and Girjia [3] proposed a hybrid transformation method specifically for classification.

6. EVALUATION OF PRIVACY PRESERVING
For privacy preserving data mining algorithms, a list of evaluation parameters to be used for assessing their quality is given below:
 The performance, that is, the time needed by each proposed algorithm to hide a specified set of sensitive information.
 The data utility, that is, how usable the specific data sets remain after the application of the privacy preserving technique; this is equivalent to minimizing the information loss, or else the loss in data functionality.
 The level of uncertainty with which the sensitive information that has been hidden can still be predicted.
 The resistance accomplished by the privacy algorithms against various data mining techniques.

7. CONCLUSION
There is a need to develop transformation methods for Boolean attributes. Correlation preserving transformation results in less information loss as it maintains the general pattern of the actual values. Transformation based methods are also well suited for collaborative data mining, where multiple parties need to share their databases for efficient mining and accurate results.

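The nominal-value mapping described in Section 5.1 can be sketched in a few lines. This is only a minimal illustration, not the paper's tool: the attribute values, the code format, and the example records are all assumed for the sketch.

```python
import random

# Illustrative only: these disease values and the "D1..Dn" code format
# are assumed, not taken from the paper's (unshown) mapping table.
diseases = ["Cancer", "Flu", "Diabetes", "Asthma"]

codes = [f"D{i}" for i in range(1, len(diseases) + 1)]
random.shuffle(codes)                 # mapping order is deliberately random
mapping = dict(zip(diseases, codes))  # one distinct code per sensitive value

records = [("Alice", "Flu"), ("Bob", "Cancer")]
masked = [(name, mapping[disease]) for name, disease in records]
print(masked)  # sensitive values replaced; the general pattern is preserved
```

Because the mapping is one-to-one, counts and group structure over the masked attribute stay intact, which is what lets later mining still work on the transformed data.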
471
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
REFERENCES
[1] Dhanalakshmi, M., Sankari, S., (2014), "Privacy Preserving Data Mining Techniques", International Conference on Computing and Network Technologies, 3(2), pp: 6-11.
[2] Haung, J.W., Su, J.W., and Chen, M.S., (2011), "FISIP: A Distance and Correlation Preserving Transformation for Privacy Preserving Data Mining", IEEE Conference on Technologies and Applications of Artificial Intelligence, 1(4), pp: 101-106.
[3] Jalla, H.R., Girjia, P.N., (2014), "An Efficient Algorithm for Privacy Preserving Data Mining Using Hybrid Transformation", International Journal of Data Mining & Knowledge Management Process (IJDKP), 4(4), pp: 45-53.
[4] Malik, M.B., Ghazi, M.A., and Ali, R., (2012), "Privacy Preserving Data Mining Techniques: Current Scenario and Future Prospects", Third International Conference on Computer and Communication Technology, 9(2), pp: 26-32.
[5] Modi, C.N., Patel, D.R., and Rao, U.P., (2010), "Maintaining Privacy and Data Utility in Privacy Preserving Association Rule Mining", Second International Conference on Computing, Communication and Network Technologies, 2(1), pp: 1-6.
[6] Mukkamala, R., Ashok, V.G., (2011), "Fuzzy-based Methods for Privacy-Preserving Data Mining", Eighth International Conference on Information Technology, 8(2), pp: 348-353.
[7] Poovammal, E., Ponnavaikko, M., (2009), "An Improved Method for Privacy Preserving Data Mining", IEEE International Advanced Computing Conference (IACC), 5(2), pp: 1453-1458.
[8] Poovammal, E., Ponnavaikko, M., (2010), "APPT – A Privacy Preserving Transformation Tool for Micro Data Release", Proceedings of ACM-W Women in Computing Conference on Advances in Computing, 9(1), pp: 62-72.
[9] Vijayarani, S., Tamilarasi, A., (2011), "An Efficient Masking Technique for Sensitive Data Protection", IEEE International Conference on Recent Trends in Information Technology, 1(1), pp: 1245-1249.
[10] Wang, J., Zhao, Y., (2009), "A Survey on Privacy Preserving Data-Mining", First International Conference on Database Technology and Applications, 1(1), pp: 111-114.
Study on Design the Sensor for Control the Traffic Light Time as Dynamic for Efficient Traffic Control
Divjot Kaur
Computer Science and Engineering,
BBSBEC, Fatehgarh Sahib, PUNJAB, INDIA
divjot.kaur511@gmail.com
ABSTRACT
Safe driving is a major concern of societies all over the world. Many people are killed or seriously injured in accidents each year, and various investigations show that speeding is the main cause of road accidents, a problem that is growing rapidly. The average speed of vehicles is one of the main parameters that has been widely used, particularly in the design of road safety equipment and road works. Road safety equipment is also affected by other parameters, such as the type of road, the day, and the type of vehicle (heavy or light). In this paper, a survey has been conducted to study how efficiently road safety equipment controls traffic.

Keywords: Average Speed, Genetic Algorithm, Road Safety, Traffic Light System.

1. INTRODUCTION
Traffic congestion is a serious problem in many cities, and in rural areas as well, all around the world. It is a major challenge in the most populated cities: travelling between two places within the same city becomes difficult in heavy traffic. Due to traffic congestion, people lose many things, such as time and opportunities. Traffic congestion also directly impacts industrial areas: there is a loss of productivity from workers affected by congestion, and the delivery of products is delayed, so costs keep increasing.
To solve the congestion problem, new facilities and new infrastructure can be developed, but this soon creates difficulty again: the big disadvantage of building new roads is that the surroundings become even more congested. That is why the existing system should be improved instead of building new infrastructure, and most countries are working on their existing systems to resolve these problems. Managing transportation well improves both mobility and the safety of traffic flows, and congestion can be reduced by enhancing route guidance systems, public transport, traffic signals and incident management. From the analysis of the US Department of Transportation, it has been found that one major cause is recurring congestion, in which the same roads are used repeatedly; non-recurring congestion is caused by traffic incidents, special events, work zones, weather, etc. Recurring events reduce the capacity and reliability of the transportation system.
The main goal of traffic research is to optimize the flow of people and goods. As the number of road users increases while infrastructure resources remain limited, intelligent control of traffic will be a very serious issue in the future. There are several models for traffic simulation. The flow of traffic changes constantly, depending on the time of day, the week and the year, which is a further complication. Roadwork and accidents further influence complexity and performance. Our main concern is with the traffic control light system, explained below.

1.1 Traffic Light System
A traffic light system is used to minimize the traffic on the road. Traffic safety equipment, such as signaling devices positioned at intersections and road dividers, indicates when travellers may ride, drive or walk. Traffic lights commonly have three main colors: red means stop, green means go, and yellow means get ready to go. For pedestrians there are only two colors, red and green. A traffic light system has many benefits besides reducing the number of accidents. The government also makes rules to address the problem, such as fining all those who do not obey the traffic rules. Traffic control lights are placed at locations where the risk of accidents is high or where large jams form. However, increasing the number of traffic lights brings its own problems, described below:

1.1.1 Heavy traffic jams
With advances in technology, the number of vehicles on the roads has increased, which creates traffic jams. The jams usually occur at the main intersections in the morning, and before and after office hours, and all of these cost people time.

1.1.2 The road user waits
The traffic light is used as road safety equipment, but sometimes traffic lights make people waste time. At certain junctions the red light comes on even when there is no traffic, yet road users must wait until it turns green; if they run the red light, they have to pay a fine.

1.1.3 Emergency vehicles stuck in jams
Because of traffic jams, emergency vehicles such as police cars, ambulances and fire brigades get stuck at traffic lights while the other users are waiting for the green light. This is a very critical problem, as the emergency becomes even more complicated.

These are the main problems with traffic lights. There are various techniques to solve them, reviewed in the literature survey below.

2. LITERATURE REVIEW
Leena Singh et al. [1] note that real-time signal control is an important part of the urban traffic light system, providing effective control for complex traffic networks, which is a challenging problem. Their model uses a GA implemented in MATLAB. It optimizes the timing of traffic light signals in real time and provides the optimum green-time duration for all four phases depending on the traffic conditions. An "intelligent" intersection traffic control system was developed that takes real-time decisions to adjust the light durations.

Javier J. Sanchez et al. [2] studied a model which depends on Genetic Algorithms for optimization, on Cellular
Automata for simulation, and on a Beowulf cluster for parallel execution. To date, traffic network optimization has been approached by trial and error, a method that cannot ensure that the whole search space is covered. They propose a new, non-deterministic optimization method for this task, and in their work they optimize traffic light cycles.

Halim Ceylan et al. [3] optimize the traffic signal and traffic assignment problem using a GA. Signal timing is described by network cycles and by the offsets between the junctions. The objective function is the performance index (PI) of the network, and the GA uses the inverse of the PI as its fitness function. The results show that the GA is simpler than a heuristic algorithm; furthermore, tests conducted on a road network show that the performance index improves significantly.

Javier J. Sánchez-Medina et al. [4] describe positive experience with the optimization of traffic lights in cities using genetic algorithms (GAs) for optimization, cellular-automata-based microsimulators for traffic light times, and a multiple-instruction-multiple-data (MIMD) Beowulf cluster multicomputer of excellent price/performance ratio. A distinctive feature of this work is the large scale of the underlying grid. Using the supplied maps and statistics, they simulated present-day traffic behavior and additionally optimized the traffic signal times, yielding better results with regard to several predefined parameters.

Suhail M. Odeh et al. [5] describe an intelligent traffic light system that manages the congestion caused by high traffic flow. The authors take two main highways and four intersection areas for the experimental study. Data on the traffic flow and other parameters are collected by a video imaging system, and the captured images are used to detect and count vehicles. All the collected data and images are transferred to another system based on a GA, which uses rules to set the green-light interval time.

Martin Kelly et al. [6] identify the values of the parameters governing their simulations through the use of a genetic algorithm. First, they continue pursuing experiments with the current model in order to identify additional optima, then extend the experiments by comparing with other techniques and evaluating the model under different congestion situations. Second, another model is planned that tackles re-routing of emergency vehicles only; the two models are to be combined so that both regular and emergency vehicles are re-routed and traffic is globally optimized. Third, like most simulations of car traffic control, they use a square grid of routes to model the city.

Emad I Abdul Kareem et al. [7] note that traffic flow in urban areas is managed using traffic lights, but the light system makes vehicles wait for a long time whether or not there is traffic, because urban areas use a fixed traffic cycle. To improve the traffic light configuration, the current monitoring system needs to be improved. One option is to add a component to the traffic light system that can recognize three cases, empty, crowded and normal, and store them in memory. The study develops an intelligent vision traffic light monitoring system based on associative memory in order to demonstrate an improvement in traffic light configurations: an Intelligent Traffic Light Monitor System that uses memory to reduce the fuel wasted by unnecessary waiting at intersections, as well as the wasted time and lost lives of vehicle users.

K. T. K. Teo et al. [8] studied the optimization of traffic light systems to control traffic. Traffic light systems are designed to control the flow of traffic at intersections so that flow stays within control across the traffic network. When traffic flows are too high to be handled by the current traffic light systems, long queues build up at the intersections. A genetic algorithm is used to obtain the optimal solution for reducing queue length and controlling the traffic flow: it takes the traffic queue length as input and outputs the optimized duration of green time. The results of the genetic algorithm improve the handling of incoming traffic while the red light is on.

Shailendra Tahilyani et al. [9] studied the congestion problem of urban areas, which becomes critical as the number of vehicles increases; the non-expandable traffic infrastructure also causes congestion. New lane-based algorithms and techniques have been developed to solve this congestion problem and make traffic flow smoothly on roads, with genetic algorithms used as the optimization method. A new lane-bypass approach based on genetic algorithms is introduced to deal with traffic congestion on road networks, and the results are found satisfactory.

3. FUTURE WORK

3.1 Problem Formulation
• Current traffic light systems provide a fixed traffic control plan whose settings are based on traffic counts. The monitoring and control of city traffic lights is becoming a major problem.
• There are several conventional methods of traffic light control; however, they fail to deal effectively with complex and time-varying traffic conditions. There is a need to research new types of highly effective, practical traffic light controllers.
• In our work, we will propose a new traffic light control system. This system will manage the time duration of the RED and GREEN lights to decrease the traffic congestion at the lights. Our system will reduce delay, hijacking, etc.

3.2 Objective
1. To conduct a review of traffic controlling systems.
2. To study traffic congestion problems.
3. To propose an algorithm that makes the traffic light system dynamic.
4. To make the duration of the GREEN and RED lights variable according to the traffic volume.

4. CONCLUSION
The major cause of traffic congestion is an unbalanced traffic light management system. The average speed of vehicles is one of the main parameters that has been widely used, particularly in the design of road safety equipment and road works. Road safety equipment is also affected by other parameters, such as the type of road, the day, and the type of vehicle (heavy or light). In this paper, a survey has been conducted to study how efficiently the traffic light system controls traffic.
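Several of the surveyed approaches ([1], [8]) feed measured queue lengths into a genetic algorithm that returns optimized green-time durations per phase. The following is only a minimal sketch of that idea: the queue sizes, clearance rate, fitness weights and GA parameters are all hypothetical, and this is not any of the cited authors' actual implementation.

```python
import random

# Hypothetical inputs: vehicles queued per phase, and how many vehicles
# one second of green clears.
QUEUES = [12, 4, 9, 6]
CLEAR_RATE = 0.5

def fitness(chromo):
    # Reward vehicles cleared across all four phases, penalize long cycles.
    cleared = sum(min(q, g * CLEAR_RATE) for q, g in zip(QUEUES, chromo))
    return cleared - 0.05 * sum(chromo)

def crossover(a, b):
    cut = random.randint(1, 3)           # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chromo, rate=0.2):
    # Jitter some green times, keeping a 5 s minimum green.
    return [max(5, g + random.randint(-5, 5)) if random.random() < rate else g
            for g in chromo]

def optimize(pop_size=30, generations=50):
    # Each chromosome: four green durations (seconds), one per phase.
    pop = [[random.randint(5, 60) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]    # elitist selection: keep the best half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = optimize()
print(best)  # four green durations; heavier queues tend to get longer greens
```

In a dynamic system of the kind proposed in Section 3, this optimization would be re-run as fresh queue measurements arrive, so the green/red durations track the actual traffic volume rather than a fixed plan.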
REFERENCES
[1] Arora Himakshi, Singh Leena, Tripathi Sudhanshu, 2009, "Time Optimization for TSC (Traffic Signal Control) Using GA", IJRTE, Vol. 2, No. 2.
[2] Enrique Rubio, Manuel Galan, Javier J. Sanchez, 2004, "GA and Cellular Automata: A New Architecture for TL (Traffic Light) Cycles Optimization", 0-7803-8515-2/04/$20.00 IEEE.
[3] Ceylan Halim, Michael G.H. Bell, 2003, "TSTO (Traffic Signal Timing Optimisation) Based on GA Approach, Including Drivers' Routing", Elsevier Ltd.
[4] Enrique Rubio Royo, Javier J. Sanchez Medina, Manuel J. Galán Moreno, Moises Díaz Cabrera, 2009, "Traffic Signals in Traffic Circles: Simulation and Optimization Based Efficiency Study", EUROCAST, LNCS 5717, pp. 453–460, © Springer-Verlag Berlin Heidelberg.
[5] Suhail M. Odeh, 2013, "Management of an Intelligent (TLS) Traffic Light System by Using GA", Journal of Image and Graphics, Vol. 1, No. 2.
[6] Giovanna Di Marzo Serugendo, Martin Kelly, 2007, "A Decentralised Car Traffic Control System Simulation Using Local Message Propagation Optimised with a Genetic Algorithm", 1-4244-1396-6/07/$25.00 © IEEE.
[7] Aman Jantan, Emad I Abdul Kareem, 2011, "An Intelligent Traffic Light Monitor System (TLMS) Using an AAM (Adaptive Associative Memory)", International Journal of Information Processing and Management, Volume 2, Number 2.
[8] K. T. K. Teo, "Fuzzy Multiobjective Traffic Light Signal Optimization".
[9] Shailendra Tahilyani, Manuj Darbari and Praveen Kumar Shukla, "A New GA Based Lane-By-Pass Approach for Smooth TF on Roads", IJARAI, Vol. 1, No. 3, 2012.
Quality Aspects of Open Source Software
Amitpal Singh
Department of Computer Science & Engineering
Guru Nanak Dev University, Regional Campus
Gurdaspur, Punjab, India
apsohal@yahoo.com

Harjot Kaur
Department of Computer Science & Engineering
Guru Nanak Dev University, Regional Campus
Gurdaspur, Punjab, India
harjotkaursohal@rediffmail.com
ABSTRACT
This paper reviews the quality aspects of free and open source software systems. We start with a brief history, followed by an introductory definition of open source software development. We then discuss quality in terms of the functionality, reliability, usability, maintainability, portability and efficiency of open source software. Based on the arguments pertaining to the quality of open source software development, we advocate the adoption of open source software by the masses.

Keywords
F/OSS, OSSD.

1. INTRODUCTION
Free and open source software development is a technique of building, deploying and sustaining large software systems on a global basis, which makes it entirely different from traditional software engineering practices. It is the process of developing and managing software by a geographically distributed team, most of whom never have face-to-face interaction. Source code in human-readable form is available online for use, study, reuse, modification, enhancement and redistribution. First a community is formed, followed by software development and related artifacts. Communication among community members is openly accessible and publicly available over the Internet.

2. BRIEF HISTORY
It all started at the AT&T lab with UNIX, the first operating system not written in a hardware-dependent assembly language (it was written in C). This was considered the first step towards portability, and portability was the most important reason why people using different hardware developed an interest in it. Most of the research during the 1970s was focused on:

• writing programs that do one thing and do it well;
• writing programs that work together;
• writing programs that handle text streams.

With the work of Thompson and the presentation of papers at the Symposium on Operating System Principles in Yorktown Heights, NY, in October 1973, UNIX became popular and its installations grew many fold. For certain legal reasons, AT&T was prohibited from doing any business (such as software development) other than telephone or telegraph services.

To escape these legal obligations, UNIX was provided with:
• no advertising
• no support
• no bug fixes
• payment in advance

Without support and bug fixes, the growing community of UNIX users was forced to help itself, so users started to share ideas, information, programs, bug fixes and hardware fixes. Hence, a new concept of system design was born. The term "open source initiative" was coined in 1998. Since then, a new ideology that promises a lot in terms of economics, development environment and unrestricted user involvement has been evolving in a big way, thrust into the big picture by loosely-centralized and cooperative community contributions in the field of software engineering. It answers the increasing need for high-quality, reliable software in spite of the increasing complexity of applications [1].

3. WHAT IS OPEN SOURCE SOFTWARE
Free/Open Source Software (F/OSS) is a relatively new, alternative idea of software development in the area of software engineering. A piece of software is F/OSS when its distribution license fulfills the "four freedoms" of F/OSS [4]: the freedom to

• use it at will;
• copy and redistribute it;
• modify it, provided the source code is distributed along with the binary version of the software.

There are two major promoters of F/OSS:
I) The Free Software Foundation: responsible for the famous GNU Project.
II) The Open Source Initiative: keeps a repository of "Open Source"-compliant licenses.

The main idea behind the F/OSS development process is that a single person or a group of people have an idea for a software product, or want a particular piece of software to address a particular need. They discuss possible solutions with their friends and colleagues and create the code base, writing a first release of the software to satisfy their needs, their "personal itch" [3]. They put that software online along with its source code and inspire others to contribute to the project by sending either bug fixes or functional improvements, hence leading to the formation of a project community [2]. They may announce the project in places like mailing lists, newsgroups or online news services. At some point these contributions are incorporated into the source code of the product and the next test version of the software is released. After rigorous testing and new code submissions, the test version becomes the next stable release. Then new contributions are made, and this process of release, code contribution (bug fixing and/or functional
improvement), and code integration into the current code continues in a circular manner. The evolution of the product is directed by a single coordinator or a group of coordinators, who are responsible for deciding which pieces of new code will be incorporated into the current one. The coordinators are usually the initial creators of the software and in most cases have a strong impact on the evolution and success of the product. As the project grows, more and more people get attached, and the extensive feedback helps build a better understanding of each issue and of possible strategies to solve it. New information and resources are integrated into the research process; the solution grows and addresses the issue in ever better ways. Open source is a license applied to work that is freely available; considered from the development side, it is a philosophy. It is a different development method, but it is the way of coordination and communication that makes the real difference.

4. QUALITY OF OPEN SOURCE SOFTWARES
Quality means whether or not the product conforms to a set of standards posed by someone, either the manufacturer or the customer. According to the pioneers of the open source software movement, one of its most acclaimed advantages is its superior quality. But this suggestion is still an open issue, since there is little concrete evidence to establish whether the quality of every open source software product is better or worse than that of proprietary software products.

The general aim of this discussion is to look into the current status of F/OSS quality and to assess its performance in various aspects of quality. We will try to answer the various questions raised by the assertions concerning the quality of F/OSS.

A widely accepted basic quality model is the ISO 9126 quality model, developed in 1991 in order to combine several views of software quality. ISO 9126 is a hierarchical model consisting of six major attributes contributing to software quality:

• Functionality
• Reliability
• Usability
• Maintainability
• Portability
• Efficiency

Fig 1: ISO 9126 – Quality Characteristics of Software. Source: ESSI-SCOPE, Quality Characteristics and their application, www.cse.dcu.

Let us discuss the quality of open source software by taking each factor one by one.

4.1 Functionality: ISO 9126 defines functionality as "A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs" [5].

Products developed using the open source mechanism are a kind of "monopoly" in their application domain. Some well-known examples are "category killers" like the BIND (Berkeley Internet Name Domain) server, a critical and dominating Domain Name System implementation for the Internet's infrastructure. Another well-known application is the Apache web server, used globally by a huge number of web servers.

The latest data collected from netcraft.net [6] shows that Apache has about 39% market share, the highest for any web server. There is enough literature to show that the functionality offered satisfies the majority of web server administrators and web hosting companies. Other well-known and widely accepted open source applications are the various flavors of the Linux operating system, like Red Hat, Fedora, Ubuntu, etc.; graphical user interfaces like GNOME, KDE, Ximian, etc.; office suites like OpenOffice, KOffice, GNOME Office, Gimp, Kile, etc.; network applications like Apache, BIND, Sendmail, Mozilla and Samba; and programming languages like GCC, Perl, Python, PHP, etc. These products offer the same functionality as other, similar proprietary software.

Free open source software tries to be compliant with open standards, which means development in a democratic way where anyone is free to contribute and suggest things. Every possible effort is made to promote functionality and interoperability.

4.2 Reliability: ISO 9126 defines reliability as "A set of attributes that bear on the capability of software to maintain its performance level under stated conditions for a stated period of time" [5]. It is crucial to ensure reliability for applications like online shopping and banking that run round the clock over the Internet. Proponents of F/OSS claim that there is enough evidence to support that F/OSS is more reliable than proprietary software.

The most important factor that directly affects reliability is the frequency of bug discovery. Bug discovery, as captured by Eric Raymond's [3] postulate "Given enough eyeballs, all bugs are shallow", holds good in the case of F/OSS. Once a version of software is released, it is a matter of days, hours or even minutes before the first bug is reported and the official fix is announced [7].

One such example is the Teardrop attack described by Hissam, Plakosh & Weinstock [8]: a flaw in the Linux kernel's IP stack caused the system to crash when a special type of IP packet was received. The reason for selecting this attack is that it also affected another well-known closed source operating system. For Linux, the vulnerability was fixed within a few hours; the patch for the closed source operating system took a lot longer to be published, and successive patches were needed to resolve the problem.

Another fact that makes F/OSS more recoverable is that these patches are immediately put online and are included in the code stored in the CVS repository of the product.
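Defect density, one of the reliability indicators discussed in this section, is conventionally computed as defects per thousand lines of code (KLOC). A minimal sketch, with entirely hypothetical figures chosen only to illustrate the direction of the comparison reported by Mockus et al. [9]:

```python
# Defect density = reported defects per thousand lines of code (KLOC).
# The defect counts and code sizes below are hypothetical examples.
def defect_density(defects, loc):
    return defects / (loc / 1000.0)

open_source = defect_density(defects=120, loc=400_000)   # 0.3 defects/KLOC
proprietary = defect_density(defects=450, loc=500_000)   # 0.9 defects/KLOC
print(open_source < proprietary)  # lower pre-system-test density, per [9]
```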
F/OSS depends heavily on fast and effective communication between developers and users; open source software can only be a success when highly stable and secure communication channels are ensured.

One additional feature, which plays a critical role in software product quality, is defect density. Mockus et al. [9] found that the defect density of open source code before system test is much lower, which means a better development process is followed and fewer bugs are inserted into the code during development. In addition, due to the availability of the source code, testing and debugging conducted by peer reviewers is more efficient.

One important question raised is whether F/OSS is more secure than proprietary software. As the code is open, it is open not only to developers but also to potential attackers; so what is the benefit? According to Landwehr & Caloyannides [10], it is not only the source code that matters, but also the compiler that was used to make it executable: it is possible for the compiler to insert various malicious parts into the software for later use. Open source software lets users review the code and investigate the existence of such back doors and other kinds of flaws. It is not certain in all cases that F/OSS is more secure than proprietary software, but the openness of the code is definitely a positive aspect in improving the final product.

4.3 Usability: Usability may be defined as the ease with which a user can learn to operate a system. The ISO 9126 model defines usability as "A set of attributes that bear on the effort needed for use and on the individual assessment of such use by a stated or implied set of users" [5]. Many people within the F/OSS community realize that usability must be a core issue for F/OSS if it wants to attract a critical mass.

Michelle Levesque [11] identified the five most important flaws of open source software development that should be kept in mind to improve its usability. They are:

• User interface design
• Documentation
• Feature-centric development
• Programming for the self
• Religious blindness

According to another F/OSS pioneer, Brian Behlendorf [12], end-user applications are hard to write, not only because a programmer has to deal with a graphical, windowed environment which is constantly changing, nonstandard and buggy simply because of its complexity, but also because most programmers are not good graphical interface designers.

... from the same problems that commercial software does. The problems with usability have been understood for a long time, and considerable efforts have been made by the community to resolve them. The two major Linux desktop environments, GNOME and KDE, have identified the problem of usability and have launched "Usability Projects" to investigate and improve it [14].

In recent years the ease of operation of open source software has improved a lot. In my own experience, around the year 2001 the installation of Linux was a job for a professional only: one had to be well versed in the machine configuration. Now the installation is as easy as that of any proprietary operating system. So, over the passage of time, the community has realized the importance of usability and has succeeded in improving it with various design recommendations.

Various studies have been conducted to compare the usability of F/OSS and proprietary applications, for example Sun StarOffice with Microsoft Office and OpenOffice [16], F/OSS and proprietary operating system GUIs [15], and many more. The main result of these studies was that the usability of F/OSS and proprietary applications has been evaluated to be nearly equal.

4.4 Maintainability: Maintainability has to do with the ease of code modification. Maintenance is defined by the IEEE as "the process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment" [17]. F/OSS development makes software source code available over the Internet, thereby enabling developers from across the globe to contribute code, adding new functionality or improving the previous one, and submitting bug fixes to the present release. A part of these contributions is incorporated into the next release, and the loop of release, code submission/bug fixing, and incorporation of the submitted code into the current and new releases continues [2]. Hence F/OSS development involves frequent maintenance, both debugging existing functionality and adding new functionality to the system.

Different studies on various attributes affecting maintainability, such as testability, simplicity, readability and self-descriptiveness, have been performed. These studies have revealed a 50-50 ratio of good and bad maintainability. In one such study, the Maintainability Index [18] of the Apache, Mozilla Firefox, MySQL and FileZilla software was observed over fifty successive versions. The results showed that Mozilla Firefox has the highest maintainability index value and Apache the lowest. Studies also show that closed source software maintainability is no better than that of open source software.

4.5 Portability: Portability in high-level computer
For better or worse, good programmers are not necessarily
programming is the usability of the same software in different
good communicators and graphical user interfaces are a form
environments. The pre-requirement for portability is the
of non verbal communication [13]. Feller and Fitzgerald [4]
generalized abstraction between the application logic and system
states that the non-developers will focus not only on the
interfaces. When software with the same functionality is produced
availability of the source code, but mainly on the quality and
for several computing platforms, portability is the key issue for
the support of the product.
development cost reduction[19].
The community of F/OSS does not pay so much attention to
Portability was perhaps one of the very first requirement for
the usability issues that they themselves do not experience as
emergence of open source softwares. F/OSS systems are built to
they belong to the upper class of programming professionals
enhance the ability of softwares to be used on platforms with
[4]. The parties involved care for the software to meet their
different architectures. The availability of source code makes it
own demands of usability and do not take into consideration
possible for the developer to port an existing F/OSS application to
other users. They cannot imagine how a system looks like for
a different platform than the one it was originally designed for.
a novice user and how easy it is to operate it.
The most famous F/OSS, the Linux kernel, has been ported to
Usability problem arise due to the inherent difference
almost all CPU architectures like Alpha architecture, ARM
between the developer and users. F/OSS usability seems to be
architecture, IBM, Intel IA-64 Itanium, x86 architecture, MIPS
no better than that of the proprietary software and suffers
architecture, PowerPCs, Alphas, SPARCs and many more[20].

478
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

The creators of Linux have tried to make its design as clean as possible and have used loadable kernel modules in order to make it more portable. In Linux, practical interfaces and core code are architecture independent; wherever performance is critical, kernel features are tuned for each architecture [21]. The F/OSS world adheres to the most popular portability standards, i.e., the IEEE POSIX (Portable Operating System Interface) standard and ISO C. Using the IEEE POSIX standard, it is possible to compile source code and run it on different platforms.

4.6 Efficiency: ISO/IEC 9126 defines efficiency as "A set of attributes that bear on the relationship between the software's performance and the amount of resources used under stated conditions" [5].
The efficiency of open source software has not been investigated much, as only a few objective and scientific studies can be found. More academic studies are needed to establish the effectiveness and efficiency of open source software. Nevertheless, many major software development companies, research and development labs, large business organizations, financial concerns, the art and entertainment industry and so on rely on open source products for their operations, so one cannot doubt the efficiency of open source products. Their use in big firms is a direct indication of performance, efficiency and reliability.

5. CONCLUSION
We have explained the quality of open source software in terms of its constituent components, which are functionality, reliability, usability, maintainability, portability and efficiency. We have found that even though F/OSS follows a different software engineering methodology, these products have the same functionality as other, similar proprietary software. It is not certain in all cases that F/OSS is more secure than proprietary software, but the openness of the code is definitely a positive aspect in improving the final product. Taking the usability of F/OSS and proprietary applications into consideration, it is evaluated to be nearly equal. Studies on various maintainability attributes like testability, simplicity, readability and self-descriptiveness have revealed a 50-50 ratio of good and bad maintainability, which happens to be the same as that of proprietary software. Taking portability into account, it is possible to compile source code and run it on different platforms. One cannot doubt the efficiency of open source products, as big firms are successfully using them. Looking at the various quality aspects of open source software, it is very clear that the methodology delivers high-quality, secure, stable and economically viable software in a short period of time. We are of the view that it is very difficult to clearly demarcate the application areas of open source and proprietary software. There are innumerable application areas, and there is a correspondingly large number of software packages available for serving them. We always have an alternative to proprietary software, but whether or not to use OSS is clearly the user's choice.

6. REFERENCES
[1] Asundi, J. Software engineering lessons from open source projects. In 1st Workshop on Open Source Software, ICSE, 2001.
[2] Samoladas, I. and Stamelos, I. Assessing the Quality of an Open Source ERP/CRM System. 1st International Conference for Mathematics and Informatics for Industry, MATHIIND, Thessaloniki, Greece, April 2003.
[3] Raymond, E.S. 2002. The cathedral and the bazaar, http://www.tuxedo.org/~esr/writings/cathedral-bazaar/.
[4] Feller, J. and Fitzgerald, B. 2001. Understanding Open Source Software Development, Addison-Wesley.
[5] ISO/IEC 9126, http://en.wikipedia.org/wiki/ISO/IEC_9126.
[6] March 2014 Web Server Survey, http://news.netcraft.com/archives/category/web-server-survey
[7] Schmidt, D.C. and Porter, A. Leveraging Open-Source Communities To Improve the Quality & Performance of Open-Source Software. In Making Sense of the Bazaar, Proceedings of the 1st Workshop on Open Source Software Engineering at ICSE, 2001, Feller, J., Fitzgerald, B. & Van der Hoek, A. (Eds.), http://flosshub.org/sites/flosshub.org/files/schmidt.pdf
[8] Hissam, S.A., Plakosh, D. & Weinstock, C. Trust and Vulnerability in open source software. IEE Proceedings – Software, 2002, vol. 149, no. 1, pp. 47-51.
[9] Mockus, A., Fielding, R. and Herbsleb, J.D. Two case studies of open source software development: Apache and Mozilla. ACM Transactions on Software Engineering and Methodology, vol. 11, no. 3, 2002, pp. 309-346.
[10] Witten, B., Landwehr, C. and Caloyannides, M. Does Open Source Improve System Security? IEEE Software, vol. 18, no. 5, Sept. 2001, pp. 57-61.
[11] Levesque, M. Fundamental issues with open source software development. First Monday, vol. 9, April 2004.
[12] Behlendorf, B. Open Source as a Business Strategy. In Open Sources: Voices from the Open Source Revolution, Chris DiBona, Sam Ockman and Mark Stone (Eds.), O'Reilly and Associates, Jan. 1999, pp. 126-144.
[13] Bentley, J.E. 14 Steps to a Good GUI. Proceedings of the Twenty-Fourth Annual SAS Users Group International Conference, Miami Beach, Florida, April 1999.
[14] Smith, S., Mankoski, A., Fisherberg, N., Pederson and Benson, C. GNOME Usability Study Report, http://developer.gnome.org/projects/gup/ut1_report/report_main.html, July 2001.
[15] LeMay, R. Ubuntu 9.04 as slick as Windows 7, Mac OS X, http://news.cnet.com/ubuntu-9.04-as-slick-as-windows-7-mac-os-x, 2009.
[16] Beckett, G. and Muller, J. A Feature Comparison of MS Office, StarOffice and OpenOffice.org, http://www.opensourceacademy.org.uk/...comparison/feature-comparison.pdf, December 2005.
[17] IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990, https://standards.ieee.org/findstds/standard/610.12-1990.html.
[18] Ganpati, A., Kalia, A. and Singh, H. A Comparative Study of Maintainability Index of Open Source Software. International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 10, ISSN 2250-2459, October 2012.
[19] Software Portability, http://en.wikipedia.org/wiki/Software_portability.
[20] List of Linux supported architectures, http://en.wikipedia.org/wiki/List_of_Linux_supported_architectures
[21] Love, R. Linux Kernel Development ("Portability"), Pearson Education, 2nd Edition, ISBN 978-81-7758-910-8, 2009, p. 322.


A Study of Shannon and Renyi Entropy Based Approaches for Image Segmentation

Baljit Singh, BBSBEC, Fatehgarh Sahib, baljit.singh@bbsbec.ac.in
Parmeet Kaur, BBSBEC, Fatehgarh Sahib, pparmeetkaur@gmail.com

ABSTRACT
Image segmentation partitions an image into multiple segments based on properties of discontinuity and similarity. One of the simplest methods for image segmentation is thresholding, which is used to discriminate the foreground from the background of an image. The selection of a suitable threshold value for an image is a challenging task. The threshold value depends upon the randomness of the intensity distribution of the image, and entropy is a parameter that measures this randomness. In this paper, Shannon entropy based and non-Shannon (Renyi) entropy based approaches are used to select suitable threshold values. The threshold values obtained for 10 standard test images are then evaluated using Peak Signal to Noise Ratio (PSNR) and uniformity, and it is observed that the Renyi entropy based approach is a better measure than the Shannon entropy based approach.

Keywords
Image segmentation, Thresholding, Shannon Entropy, Histogram, Renyi Entropy.

1. INTRODUCTION
Segmentation is a process that subdivides an image into a number of homogeneous regions. Each homogeneous region is a constituent part or object in the entire scene. The objects in the scene need to be appropriately segmented and subsequently classified. The result of image segmentation is a set of segments that collectively cover the entire image. All the pixels in a region are similar with respect to some characteristic or computed property, such as color, intensity or texture. Segmentation algorithms are based on one of two basic properties of intensity values: discontinuity and similarity. The first category partitions an image based on abrupt changes in intensity, such as edges in an image. The second category is based on partitioning an image into regions that are similar according to predefined criteria [12].
There are various methods for image segmentation, based either on a threshold value or on entropy values.
(I) Image Segmentation Methods on the Basis of Threshold Selection: The technique used here is called the thresholding method. This method is based on a threshold value T which converts a grey scale image into a binary image. The threshold plays an important role in segmentation; it can be selected manually or automatically. A simple method is to choose the mean or median, the rationale being that if the object pixels are brighter than the background, then they should be brighter than the average. The advantage of obtaining a binary image first is that it reduces the complexity of the data, simplifies the process of recognition and classification, and also reduces the storage space. All grey level values below T are classified as black (0) and those above T as white (1). The segmentation accuracy can be maximized by an appropriately chosen threshold value. Among all the existing segmentation techniques, most use the Shannon entropy [1, 13]. Thresholding can be global, local, single-level or multilevel. Global techniques are further classified into point-dependent and region-dependent techniques. Point-dependent techniques such as the P-tile method, the mode method, the mean-histogram-dependent technique and Otsu's method are discussed in [2, 11]. Color image segmentation is also used for thresholding [15].
Various threshold selection techniques are:
(A) Basic Global Thresholding: This method is relatively simple, does not require much knowledge of the image, and is robust against noise. An initial threshold (T) is randomly chosen and the image is then segmented into object and background pixels.
(B) Histogram-based methods: These are very efficient compared to other image segmentation methods because they typically require only one pass through the pixels. In this technique, a histogram is computed from all of the pixels in the image, and the peaks and valleys in the histogram are used to locate the clusters in the image [6].
(C) Region growing methods: Region growing is a procedure that groups pixels or sub-regions into larger regions based on predefined criteria for growth. This iterative approach to segmentation examines the neighbouring pixels of initial seed points and determines whether the pixel neighbours should be added to the region [5, 17].
(D) Edge based segmentation: This is used to detect and locate the edges in the image. A standard test image is used to compare the results with the Laplacian of Gaussian edge detector operator [9].
The information theoretic approach based on the concept of entropy was introduced by Shannon [3]. The principle of entropy is to use uncertainty as a measure to describe the information contained in a source [14]. In information theory there are different types of non-Shannon entropy, such as Renyi, Havrda and Charvat, Vajda, Kapur and Tsallis entropy [16]. The author had evaluated the two-dimensional Renyi entropy obtained from the two-dimensional histogram, which was determined using the grey value of the pixels and the local average grey value of the pixels. The Renyi


Entropy is extended by the author, while still preserving overall functionality, to improve accuracy. The a priori modification allows the addition of texture information in an efficient way, which results in more accurate thresholding. Furthermore, the proposed mechanism allows the addition of other features after some normalization [7, 8].
Non-Shannon entropies have a higher dynamic range than Shannon entropy over a range of scattering conditions, and are therefore useful in estimating scatter density and regularity [10]. The main advantage of non-Shannon measures of entropy over Shannon entropy is that they have parameters (α in the case of Renyi) that can be used as adjustable values. These parameters play an important role as tuning parameters in the image processing chain for the same class of images [4, 12]. The Peak Signal to Noise Ratio (PSNR) is used to evaluate the thresholding because it measures the similarity between the original image and the binarized image; a higher PSNR indicates more similarity between the two images [13].

2. SHANNON AND RENYI ENTROPY
Entropy is a concept of information theory, used to measure the amount of information [1]. The principle of entropy is to use uncertainty as a measure to describe the information contained in a source [14]. Entropy thus characterizes our uncertainty about our source of information, and is defined in terms of the probabilistic behavior of the source. In accordance with this definition, a random event A that occurs with probability P(A) is said to contain

    I(A) = \log[1/P(A)] = -\log[P(A)]                                (1)

units of information. The amount I(A) is called the self-information of event A; it is inversely proportional to the probability of the event. The basic concept of entropy in information theory has to do with how much randomness there is in a signal or in a random event. This approach uses the Shannon entropy, originating from information theory, and treats the grey level histogram of the image as a probability distribution.
Shannon-based entropy provides an absolute limit on the best possible average length of lossless encoding or compression, either globally, for the whole data, or locally, to evaluate the entropy of probability density distributions around some points. It provides additional information about the importance of specific events. Here the probability distribution P is obtained as the number of pixels at each grey level divided by the total number of pixels.
The basic steps of the algorithm are reproduced here for the sake of convenience:

Step 1. Read the input image I.
Step 2. Compute the histogram h of the input image, where h(i) is the number of pixels with grey level i, and the size of the image (m×n).
Step 3. Calculate the average grey level of the image:

    k = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} I(i,j)}{m \times n}      (2)

Step 4. Let p_1, p_2, …, p_{k-1} be the probability distribution of the grey level image.
Step 5. Here p(i) = h(i)/(m \times n), P_{k-1} = \sum_{i=1}^{k-1} p_i and P_t = \sum_{j=1}^{t} p_j, where t is the threshold value and L is the maximum grey level value in the image.
Step 6. Segment the image into two parts, i.e. foreground and background.

Algorithm for Shannon based entropy:
S.1 Shannon entropy of the foreground region:

    S_F(t) = -\sum_{i=k}^{t} \frac{p_i}{P_t - P_{k-1}} \log_e\left(\frac{p_i}{P_t - P_{k-1}}\right)      (3)

S.2 Shannon entropy of the background region:

    S_B(t) = -\sum_{i=t+1}^{L} \frac{p_i}{1 - P_t} \log_e\left(\frac{p_i}{1 - P_t}\right)                (4)

S.3 Optimal threshold:

    T = \max_i [S_F(t) + S_B(t)],  where i = k, …, L                 (5)

S.4 Segment the image using T:

    S(i,j) = \begin{cases} 0 & \text{if } I(i,j) \le T \\ 255 & \text{if } I(i,j) > T \end{cases}        (6)

Algorithm for Renyi based entropy:
R.1 Renyi entropy of the foreground region:

    R_F(t) = \frac{1}{1-\alpha} \log_e \sum_{i=k}^{t} \left(\frac{p_i}{P_t - P_{k-1}}\right)^{\alpha}    (7)

R.2 Renyi entropy of the background region:

    R_B(t) = \frac{1}{1-\alpha} \log_e \sum_{i=t+1}^{L} \left(\frac{p_i}{1 - P_t}\right)^{\alpha}        (8)

where α ≠ 1 and α > 0.
R.3 Optimal threshold:

    T = \max_i [R_F(t) + R_B(t)],  where i = k, …, L                 (9)

R.4 Segment the image using T:

    S(i,j) = \begin{cases} 0 & \text{if } I(i,j) \le T \\ 255 & \text{if } I(i,j) > T \end{cases}        (10)


3. PERFORMANCE MEASURES
In order to avoid human interpretation, two objective measures, peak signal to noise ratio and uniformity [15], have been used for performance evaluation.
Peak signal to noise ratio: The PSNR measures the similarity between the original image and the binarized image; a higher PSNR indicates more similarity between the two images. It is most easily defined via the mean squared error (MSE), which for two m×n monochrome images f and g, where one image is considered a noisy approximation of the other, is defined as:

    MSE = \frac{1}{m \times n} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} [f(i,j) - g(i,j)]^2      (5)

The PSNR is then defined as:

    PSNR = 20 \log_{10} \frac{MAX_f}{\sqrt{MSE}}                                          (6)

where MAX_f is the maximum possible pixel value of the image; when the pixels are represented using 8 bits per sample, this is 255.
Uniformity measure: The uniformity measure is generally used to describe region homogeneity in an image. For a given threshold t, it is defined by

    U = 1 - 2c \cdot \frac{\sum_{j=0}^{c} \sum_{i \in R_j} (f_i - u_j)^2}{m \times n \times (f_{max} - f_{min})^2}      (7)

where c is the number of thresholds, R_j is segmented region j, f_i is the grey level of pixel i, u_j is the mean grey level of the pixels in segmented region j, m×n is the total number of pixels, f_max is the maximum grey level in the given image, and f_min is the minimum grey level in the given image.

4. EXPERIMENTAL RESULTS
The PSNR and the uniformity measure for ten standard images have been calculated and tabulated in Table 1. The optimal threshold values of the Shannon and Renyi based entropies for the 10 standard images are tabulated in Table 2. The original images are labeled as Fig 1(a)-10(a), the histograms of these images are shown in Fig 1(b)-10(b), Fig 1(c)-10(c) depict the segmentation using Shannon based entropy, and Fig 1(d)-10(d) the segmentation using Renyi based entropy.

Fig 1: (a) Original image (b) Histogram (c) Segmented image using Shannon Entropy (d) Segmented image using Renyi Entropy
Fig 2: (a) Original image (b) Histogram (c) Segmented image using Shannon Entropy (d) Segmented image using Renyi Entropy
Fig 3: (a) Original image (b) Histogram (c) Segmented image using Shannon Entropy (d) Segmented image using Renyi Entropy
Fig 4: (a) Original image (b) Histogram (c) Segmented image using Shannon Entropy (d) Segmented image using Renyi Entropy
Fig 5: (a) Original image (b) Histogram (c) Segmented image using Shannon Entropy (d) Segmented image using Renyi Entropy
Fig 6: (a) Original image (b) Histogram (c) Segmented image using Shannon Entropy (d) Segmented image using Renyi Entropy
Fig 7: (a) Original image (b) Histogram (c) Segmented image using Shannon Entropy (d) Segmented image using Renyi Entropy
Fig 8: (a) Original image (b) Histogram (c) Segmented image using Shannon Entropy (d) Segmented image using Renyi Entropy
Fig 9: (a) Original image (b) Histogram (c) Segmented image using Shannon Entropy (d) Segmented image using Renyi Entropy
Fig 10: (a) Original image (b) Histogram (c) Segmented image using Shannon Entropy (d) Segmented image using Renyi Entropy
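The two performance measures of Section 3 can be sketched in NumPy as follows; this is a minimal version under our own function names, assuming a single threshold (so c = 1 in Eq. (7)) and two regions R_0 and R_1.

```python
import numpy as np

def psnr(f, g, max_f=255.0):
    """Eqs. (5)-(6): PSNR between an original image f and a binarized image g."""
    mse = np.mean((f.astype(float) - g.astype(float)) ** 2)  # Eq. (5)
    return 20.0 * np.log10(max_f / np.sqrt(mse))             # Eq. (6)

def uniformity(image, T, c=1):
    """Eq. (7) for a single threshold T (c = 1): region homogeneity in [0, 1]."""
    f = image.astype(float)
    regions = [f[f <= T], f[f > T]]          # R_0 (background), R_1 (foreground)
    # Sum over regions of squared deviations from the region mean u_j.
    ssd = sum(((r - r.mean()) ** 2).sum() for r in regions if r.size)
    return 1.0 - 2.0 * c * ssd / (f.size * (f.max() - f.min()) ** 2)
```

For 8-bit images MAX_f = 255, matching the text; a uniformity value close to 1 indicates highly homogeneous segmented regions, as in Table 1.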

Table 1. Parameters to evaluate the entropy using Peak Signal to Noise Ratio and Uniformity

S.No  Test Image                      Shannon entropy based approach   Renyi entropy based approach
                                      PSNR      Uniformity             PSNR      Uniformity
1     Pattern (512×512)               9.2782    0.8160                 9.3286    0.8180
2     Man (1024×1024)                 8.4540    0.7966                 9.6147    0.8158
3     Walter Cronkite (256×256)       9.3572    0.7997                 9.6049    0.8063
4     Mandi (2014×3039)               11.3334   0.8758                 11.3548   0.8743
5     ChemicalPlant1 (256×256)        8.6486    0.7354                 9.0774    0.7499
6     Chemical Plant2 (256×256)       7.7286    0.6705                 8.6022    0.7245
7     Bridge (200×200)                7.8374    0.7157                 8.9884    0.7771
8     Croud (200×200)                 7.2072    0.6786                 8.2088    0.7431
9     Concordorthophoto (2215×2956)   7.7014    0.6916                 8.4069    0.7335
10    Lines (200×200)                 7.0590    0.6596                 8.8867    0.7732

Table 2. The optimal threshold value of Shannon (TSopt) and Renyi (TRopt) based entropy, at α = 0.7 for Renyi

S.No  Test Image                      TSopt   TRopt
1     Pattern (512×512)               160     158
2     Man (1024×1024)                 172     115
3     Walter Cronkite (256×256)       147     133
4     Mandi (2014×3039)               145     113
5     ChemicalPlant1 (256×256)        172     120
6     Chemical Plant2 (256×256)       174     141
7     Bridge (200×200)                185     150
8     Croud (200×200)                 191     171
9     Concordorthophoto (2215×2956)   168     139
10    Lines (200×200)                 186     152
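Table 2 fixes α = 0.7. The reason α works as a tuning parameter at all is that Renyi entropy generalizes Shannon entropy: applying L'Hôpital's rule to the form of Eqs. (7)-(8) (our derivation, with q_i denoting the normalized class probabilities) recovers the Shannon form as α → 1:

```latex
\lim_{\alpha \to 1} \frac{1}{1-\alpha}\,\ln\!\sum_i q_i^{\alpha}
  \;=\; \lim_{\alpha \to 1} \frac{\sum_i q_i^{\alpha}\,\ln q_i}{-\sum_i q_i^{\alpha}}
  \;=\; -\sum_i q_i \ln q_i .
```

Values of α below 1 therefore weight low-probability grey levels differently from the Shannon criterion, which is what allows the threshold to be tuned per image class.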

5. CONCLUSION
In this paper, an attempt is made to analyze the Shannon and non-Shannon (Renyi) measures of entropy for segmenting grey level images. The analysis has been carried out on 10 different types of standard images. This study shows that the results of non-Shannon (Renyi) based entropy are satisfactory. It is also observed that the main advantage of non-Shannon measures of entropy over Shannon entropy is that they have parameters (α in the case of Renyi) that can be used as adjustable values. These parameters can play an important role as tuning parameters in the image processing chain for the same class of images. The results of this study are quite promising.

REFERENCES
[1] Shannon, C.E. 1948. A Mathematical Theory of Communication. Bell System Technical Journal, vol.27, pp.379-423.
[2] Sahoo, P.K., Soltani, S. and Wong, A.K.C. 1988. A Survey of Thresholding Techniques. Computer Vision, Graphics, and Image Processing, vol.41, pp.233-260.
[3] Kapur, J. 1994. Measures of Information and their Applications. John Wiley and Sons Publishers, New Delhi, 1st Edition, pp.1-20.
[4] Sahoo, P.K., Wilkins, C. and Yeager, C. 1997. Threshold selection using Renyi entropy. Pattern Recognition, vol.30, no.1, pp.71-84.
[5] Kim, J.B., Park, H.S., Park, M.H. and Kim, H.J. 2001. A real-time based motion segmentation using adaptive thresholding and k-mean clustering. Springer Library, vol.2256, pp.213-224.
[6] Sahoo, P.K. and Arora, G. 2004. A thresholding method based on two-dimensional Renyi entropy. Pattern Recognition, pp.1149-1161.
[7] Sahoo, P.K. and Arora, G. 2006. Image thresholding using two-dimensional Tsallis-Havrda-Charvat entropy. Pattern Recognition Letters, vol.27, pp.520-528.
[8] Chang, C.I., Du, Y., Wang, J., Guo, S.M. and Thouin, P.D. 2006. Survey and comparative analysis of entropy and relative entropy thresholding techniques. IEE Proc.-Vis. Image Signal Process., vol.153, no.6, pp.837-850.
[9] Shareha, A., Rajeshwari, M. and Ramachandram, M. 2008. Textured Renyi Entropy for Image Thresholding. Fifth International Conference on Computer Graphics, Imaging and Visualization, pp.185-192.
[10] Singh, A.P. and Khehra, B.S. 2008. Edge Detection in Gray Level Images based on the Shannon Entropy. Journal of Computer Science, vol.4, no.3, pp.186-191.
[11] Singh, A.P. and Khehra, B.S. 2009. Shannon and Non-Shannon Measures of Entropy for Statistical Texture Feature Extraction in Digitized Mammograms. Proc. of World Congress on Engineering and Computer Science, vol.2, pp.1286-1291.
[12] Al-amri, S., Kalyankar, N.V. and Khamitkar, S.D. 2010. Image Segmentation by using Threshold Techniques. Journal of Computing, vol.2, no.5, pp.83-86.
[13] Khehra, B.S. and Singh, A.P. 2011. Digital Mammogram Segmentation using Non-Shannon Measures of Entropy. Proceedings of the World Congress on Engineering, vol.2.
[14] Mahmoudi, L. and Zaart, A.E. 2012. A Survey of Entropy Image Thresholding Techniques. 2nd International Conference on Advances in Computational Tools for Engineering Applications, pp.204-209.
[15] Farshid, P., Abdullah, S. and Sahran, S. 2013. Peak signal to noise ratio based on threshold method for image segmentation. Journal of Theoretical and Applied Information Technology, vol.57, no.2, pp.158-168.
[16] Pandey, V. and Gupta, V. 2014. MRI Image Segmentation Using Shannon and Non-Shannon Entropy Measures. International Journal of Application or Innovation in Engineering & Management, vol.3, no.7, pp.41-46.
[17] Mathur, S. and Gupta, M. 2014. An Analysis on Color Preservation Using Non-Shannon Entropy Measures for Gray and Color Images. Fourth International Conference on Advances in Computing and Communications, pp.109-112.


Swarm Intelligence and Flocking Behavior

Himani, M Tech Student, SLIET, Longowal, himanikapurnit@gmail.com
Ashish Girdhar, Assistant Professor, SLIET, Longowal, ashishgirdhar410@gmail.com

ABSTRACT
Swarm behavior suggests simple methodologies used by the agents of a swarm to solve complex problems which humans may find difficult. The basic reason behind this is the group behavior in these algorithms. The distributed control mechanism and simple interactive rules can manage the swarm efficiently and effectively. Flocking behavior does not involve central coordination. This paper aims at a review of the Swarm Intelligence algorithms developed so far and their association with the flocking model.

General Terms
Artificial Intelligence, Swarm Intelligence, Algorithms

Keywords
Swarm Intelligence, agents, ACO, ABC, Flocking, PSO.

1. INTRODUCTION
The concept of SI was originally introduced by Gerardo Beni and Jing Wang [1] in 1989 in the context of cellular robotic systems (Beni and Wang, 1993), while Bonabeau et al. [2] redefined it in 1999 as "any attempt to design algorithms or distributed problem-solving devices inspired by the collective behavior of social insect colonies and other animal societies". Most SI approaches are inspired by the collective behavior of natural species, such as ants foraging, birds flocking and bees gathering honey. Nevertheless, the term swarm could be extended to any constrained collection of interacting agents or individuals. This paper is organized as follows: Sections 2, 3 and 4 lay down the basic principles which come under SI and the associated algorithms. In Section 5 we discuss the flocking behavior in detail and how it is different from and/or similar to swarm behavior, and Section 5.1 discusses PSO, which is a mix of both swarm and flock. Finally, in Section 6 we conclude the paper.

2. SWARM INTELLIGENCE
Swarm Intelligence has been derived from the natural swarm behavior of animals, which can be defined as the collective behavior exhibited by animals of the same size aggregating together to solve a problem that is essential for their survival. SI can be defined as the emergent collective intelligence of groups of simple agents. Agents are analogous to the animals of the natural swarm. An agent can be a hardware device or a software program which operates in a distributed control mechanism. Agents solve problems by interacting with other agents or with their environment. Software agents are capable of taking simple decisions to solve a problem. SI has three fundamental and essential properties, namely decentralization, self-organisation and collective behaviour, which are necessary and sufficient to acquire SI behaviour.
According to Millonas [3], five principles of SI must be adhered to by the algorithms proposed under the SI approach. First is the proximity principle: the population should be able to carry out simple space and time computations. Second is the quality principle: the population should be able to respond to quality factors in the environment. Third is the principle of diverse response: the population should not commit its activities along excessively narrow channels. Fourth is the principle of stability: the population should not change its mode of behaviour every time the environment changes. Fifth is the principle of adaptability: the population must be able to change behaviour mode when it is worth the computational price. It is the last two principles that determine the performance of the SI algorithms. Table 1 lists some of the SI algorithms proposed so far.

Table 1: Entities and associated SI algorithms

Entities      SI Algorithm
Ant           Ant Colony Optimisation [4]
Particles     Particle Swarm Optimisation [6]
Bees          Artificial Bee Colony [5]
Fireflies     Firefly Algorithm [7]
Monkeys       Monkey Search [8]
Cockroaches   Roach Infestation Optimisation [9]
Frogs         Jumping Frogs Optimisation [10]

3. ANT COLONY OPTIMISATION
The natural behaviour of ants during the foraging procedure inspires the optimisation algorithm proposed by Colorni et al. [4]. While searching for food sources, ants behave intelligently to find the optimal path to a food source, which is practically achieved through the use of pheromone. The existence of pheromone shows the trace of an ant and provides heuristic information for other ants, which decide whether to follow this pheromone trace or not. If a new ant chooses to follow this pheromone trace, it reinforces the density of pheromone; otherwise the pheromone gradually evaporates and is finally exhausted. These decision strategies can be regarded as positive feedback and

486
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

negative feedback respectively. Higher pheromone density indicates a higher probability of being chosen. Therefore, more and more ants choose to follow the trace with high pheromone density, and together they construct the optimal path to the food source.
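The reinforce-or-evaporate dynamic described above is commonly written as a generic ant-system update: every edge first loses a fraction of its pheromone, then each ant deposits an amount inversely proportional to its tour length on the edges it used. A minimal sketch; the parameters rho and Q and the dictionary-of-edges representation are illustrative assumptions, not values from [4]:

```python
def update_pheromone(tau, ant_tours, rho=0.5, Q=1.0):
    """Generic ant-system pheromone update (illustrative parameters):
    every edge first loses a fraction rho (evaporation), then each ant
    deposits Q / tour_length on the edges of its tour (reinforcement)."""
    for edge in tau:                      # negative feedback: evaporation
        tau[edge] *= (1.0 - rho)
    for tour, length in ant_tours:        # positive feedback: reinforcement
        for edge in zip(tour, tour[1:]):
            tau[edge] = tau.get(edge, 0.0) + Q / length
    return tau
```

Shorter tours deposit more pheromone per edge, so heavily travelled short paths keep their trails refreshed while unused edges decay toward zero.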

The whole procedure of the ACO algorithm can be illustrated as follows. Initially, ants are placed randomly on the nodes of the graph. Each ant then probabilistically selects a connected, unvisited node as its next movement. The probability is influenced by two factors: the distance from the current node to the next node, and the pheromone on the associated edge. This movement is executed iteratively until all ants have traversed all the nodes on the graph, which is called one cycle. After each cycle, the pheromone deployment of the whole graph is updated. The principle is that whenever an ant moves through an edge, the pheromone on that edge is reinforced; otherwise, it evaporates and is eventually exhausted. After a certain number of cycles, the path with the highest pheromone density is found, which represents the optimal solution. The algorithm thus finds the optimal solution without requiring explicit knowledge of the problem.
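The probabilistic node selection in this procedure is conventionally the random-proportional rule: the chance of moving to node j grows with the pheromone on edge (i, j) and with the inverse of its distance. A minimal sketch; the exponents alpha and beta are the usual tuning knobs, not values specified in [4]:

```python
import random

def choose_next(current, unvisited, tau, dist, alpha=1.0, beta=2.0):
    """Pick the next node with probability proportional to
    pheromone^alpha * (1/distance)^beta (random-proportional rule)."""
    weights = [tau[(current, j)] ** alpha * (1.0 / dist[(current, j)]) ** beta
               for j in unvisited]
    r = random.uniform(0.0, sum(weights))
    cumulative = 0.0
    for j, w in zip(unvisited, weights):
        cumulative += w
        if r <= cumulative:
            return j
    return unvisited[-1]       # guard against floating-point round-off
```

With alpha = 0 the ants become pure greedy nearest-neighbour searchers; with beta = 0 they follow pheromone alone, which is why both terms are normally kept.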

4. ARTIFICIAL BEE COLONY


This approach was proposed by Karaboga [5] on the principles of foraging in natural bee hives. When foraging, different bees work collaboratively to explore and exploit food sources with rich nectar. In the proposed approach there are three types of bees: employed bees, onlookers and scouts. For every food source, i.e. candidate solution, there is an employed bee; food sources with more nectar correspond to better solutions to the problem.
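As the procedure below describes, onlooker bees choose food sources with probability proportional to their richness. That fitness-proportional ("roulette wheel") choice can be sketched as follows; the function name is illustrative, not from [5]:

```python
import random

def pick_food_source(fitnesses):
    """Return the index of a food source, chosen with probability
    fitness[i] / sum(fitnesses), so richer sources attract more onlookers."""
    r = random.uniform(0.0, sum(fitnesses))
    cumulative = 0.0
    for i, fitness in enumerate(fitnesses):
        cumulative += fitness
        if r <= cumulative:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off
```
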

The whole procedure of the ABC algorithm can be described as follows. Scout bees find the initial food sources by carrying out a random search of the search space. After that, employed bees are sent out to exploit the discovered food sources, each employed bee matched to one food source. During exploitation, each employed bee also carries out a neighbourhood search and tries to find a better food source nearby; if one is found, the employed bee abandons the previous food source and exploits the better one. When all employed bees have finished, they return to the hive and share their information on food sources with the onlooker bees waiting there through a waggle dance. The onlooker bees then probabilistically choose to follow certain employed bees and exploit the corresponding food sources; the probability is computed from the richness of the corresponding food source. Once an onlooker bee chooses to follow an employed bee, it becomes an employed bee and repeats the employed-bee procedure. After a certain number of iterations of exploration and exploitation, a food source may be exhausted and abandoned. In that case, the corresponding employed bee becomes a scout bee and randomly finds a new food source to replace the abandoned one. The whole procedure can be divided into four phases:


(1) initialization phase, (2) employed bee phase, (3) onlooker bee phase and (4) scout bee phase.

5. FLOCKING BEHAVIOUR
Flocking behaviour can be described as the behavior exhibited when a group of birds, called a flock, is foraging or in flight. Birds and fish adjust their physical movement to avoid predators, seek food and mates, optimize environmental parameters such as temperature, etc. Birds have poor eyesight and move in flocks in order to identify obstacles in their paths; similar algorithms can be applied to controlling air traffic in times of disaster management. Some fish, such as clown fish, also move in a flocking fashion called a school, which they use to identify whether some fish has gone missing on the journey. Humans, however, adjust not only physical movement but cognitive or experiential variables as well. We do not usually walk in step and turn in unison; rather, we tend to adjust our beliefs and attitudes to conform with those of our social peers. Essentially, the basic models of flocking behaviour are controlled by three simple rules:

1. Separation - avoid crowding neighbours (short range repulsion)
2. Alignment - steer towards the average heading of neighbours
3. Cohesion - steer towards the average position of neighbours (long range attraction)

With these three simple rules, the flock moves in an extremely realistic way, creating complex motion and interaction that would be extremely hard to create otherwise. The basic model has been extended in several different ways since it was proposed. In flocking simulations there is no central control; each bird behaves autonomously. In other words, each bird has to decide for itself which flockmates to consider as its environment. Usually the environment is defined as a circle (2D) or sphere (3D) with a certain radius.

5.1 PARTICLE SWARM OPTIMISATION
PSO was proposed by James Kennedy and Russell Eberhart in 1995 [6] and was inspired by bird flocking, fish schooling, and swarming theory. They used the term particles in place of points to incorporate the idea of a velocity associated with each particle. Therein each individual followed three simple rules: collision avoidance, velocity matching and flock centring. In addition to these rules, each particle keeps track of its historical best position (local best position) and the global best position of the swarm, and refers to those two positions whenever it moves to the next position. The advantage of an optimization method such as PSO is that it does not use the gradient of the problem being optimized, so it can be readily employed for a host of optimization problems. This is especially useful when the gradient is too laborious or even impossible to derive. This versatility comes at a price, however, as PSO does not always work well and may need tuning of its behavioural parameters to perform well on the problem at hand.

The procedure of the algorithm can be explained as follows. At the beginning, a number of particles are randomly placed in the search space. Each particle holds its position and velocity information in vector form. Whenever movement occurs, the particle first updates its velocity by referring to three factors: its current velocity, the local best position and the global best position. Different weightings of these factors indicate different optimization strategies. Subsequently, the particle updates its position following the updated velocity vector. The positions of the particles correspond to candidate solutions. The local and global best positions are updated after each movement, provided that the particle arrives at a better position. This procedure is conducted iteratively until the stopping criteria are met. The global best position is then the best solution found so far.

for each Particle i in [1 .. N]
    initialize xi, vi
    Pi = xi
end for
repeat:
    for each Particle i in [1 .. N]
        update vi using equation (5)
        check the velocity boundaries
        update xi using equation (6)
        if f(xi) ≤ f(Pi) then Pi = xi
        if f(Pi) < f(G) then G = Pi
    end for
until stopping criterion is met

Figure 1: Pseudo-code for PSO

The basic elements in PSO are position and velocity. In this context, each candidate solution is represented as one particle. Each particle has a position, or in other words a fitness, in the solution space. Each particle is then given a velocity, and it moves in the direction the velocity vector points, over the distance it specifies. The particles continue moving and updating their velocities until they reach a common goal: the optimum solution on which all the particles agree.

6. CONCLUSION
Nature has provided us with a technique for processing information that is at once elegant and versatile. SI and flocking behaviour are based on the swarming methodologies already present in nature. Based on swarming theory and the flocking of birds, the PSO algorithm is apparently simple to implement. In addition to the population size, i.e. the number of candidate solutions, topology also affects the performance of the algorithm. However, practitioners tend to keep these constant across problems, although there is some evidence to suggest that bigger populations are better for higher

dimensional problems, and that highly-connected topologies work better for unimodal problems while more sparsely-connected topologies are superior on multimodal problems. Dynamic problems are also challenging for PSO: they are modelled by fitness functions which change over time, so values stored in memory become obsolete.

To date, a fully comprehensive mathematical model of particle swarm optimization is still not available. There are several reasons for this. Firstly, the PSO is made up of a large number of interacting elements (the particles); although the nature of the elements and of their interactions is simple, understanding the dynamics of the whole is nontrivial. Secondly, the particles are provided with memory and the ability to decide when to update that memory. This means that from one iteration to the next, a particle may be attracted towards a new pi (the personal best position of particle i), a new pg (the global best position of the swarm), or both. Thirdly, the forces are stochastic, which prevents the use of the standard mathematical tools applied in the analysis of dynamical systems. Fourthly, the behaviour of the PSO depends crucially on the structure of the fitness function; since there are infinitely many fitness functions, it is extremely difficult to derive useful results that apply to all of them, even when complete information about the fitness function is available. Nonetheless, in the last few years a number of theoretical advances have been made. Social optimization occurs in the time frame of ordinary experience - in fact, it is ordinary experience. It can be concluded that this class of algorithms belongs ideologically to the philosophical school that allows wisdom to emerge rather than trying to impose it, that emulates nature rather than trying to control it, and that seeks to make things simpler rather than more complex.

REFERENCES
[1] Beni, G., Wang, J., 1993. Swarm Intelligence in Cellular Robotic Systems. In: Robots and Biological Systems: Towards a New Bionics?. Springer, Berlin Heidelberg.
[2] Bonabeau, E., Dorigo, M., Theraulaz, G., 1999. Swarm Intelligence: From Natural to Artificial Systems.
[3] Millonas, M. M., 1994. Swarms, phase transitions, and collective intelligence. In: C. G. Langton (Ed.), Artificial Life III. Addison Wesley.
[4] Colorni, A., Dorigo, M., Maniezzo, V., 1991. Distributed optimization by ant colonies. In: Proceedings of the First European Conference on Artificial Life.
[5] Karaboga, D., 2005. An idea based on honey bee swarm for numerical optimization.
[6] Eberhart, R., Kennedy, J., 1995. A new optimizer using particle swarm theory. In: Proceedings of the Sixth IEEE International Symposium on Micro Machine and Human Science (MHS'95).
[7] Yang, X.-S., Deb, S., 2009. Cuckoo search via Lévy flights. In: IEEE World Congress on Nature and Biologically Inspired Computing (NaBIC 2009).
[8] Mucherino, A., Seref, O., 2007. Monkey Search: A Novel Metaheuristic Search for Global Optimization. In: Data Mining, Systems Analysis and Optimization in Biomedicine. American Institute of Physics, New York.
[9] Havens, T.C., Spain, C.J., Salmon, N.G., Keller, J.M., 2008. Roach infestation optimization. In: Proceedings of the IEEE Swarm Intelligence Symposium.
[10] Garcia, F.J.M., Perez, J.A.M., 2008. Jumping frogs optimization: a new swarm method for discrete optimization.
[11] Gutjahr, W.J., 2006. Mathematical runtime analysis of ACO algorithms: survey on an emerging issue.
[12] Zhang, S., Lee, C.K.M., 2014. Swarm intelligence applied in green logistics: a literature review.


Internet Threats and Prevention – A Brief Review

Sheenam Bhola Sonamdeep Kaur Gulshan Kumar


Assistant Professor Assistant Professor Assistant Professor
SBSSTC, Ferozepur SBSSTC, Ferozepur SBSSTC, Ferozepur
Sheenambhola@gmail.com sonamkamboj7@gmail.com gulshanahuja@gmail.com

ABSTRACT
Network security is a branch of computer technology whose objective is to protect information and property from theft, corruption, or attack. Nowadays we depend on the internet for many things, such as online shopping, bank transactions and web surfing. However, the internet is not fully secure: its users are threatened by computer viruses, malicious software and many other attacks. This paper summarizes various computer threats and the mechanisms used to protect sensitive information on the internet.

Keywords: Internet Threats, security, prevention

1. INTRODUCTION
Ever since the invention of the computer, the security of computer systems has become an increasingly important focus for all organisations. The advent of the Internet and the Web has added a whole new dimension to security, which we call Internet security. Before the internet, unauthorised-access attacks targeted data on the local system and misused confidential details; the internet has changed the whole picture, since anyone can now access data from anywhere in the world. Dial-up access opened computers up to threats that did not require physical access to the computer system [3].

This paper discusses Internet security concerns and the security risks associated with the use of the Internet, including various threats and their prevention. Internet users have experienced, and will continue to experience, numerous schemes that strike directly at what matters most to them: their information, whose security is of the highest importance. Attacks on the internet increase day by day, and this paper considers the steps suggested for handling internet security issues [2].

2. THREATS
Internet threats are malicious software programs such as spyware, adware, trojan horses, bots, viruses and worms, which are set up on the system without our knowledge or authorization. These programs can use the Web to spread, conceal themselves, update themselves and transmit stolen data back to criminals or hackers; for example, a trojan may be used to download spyware, and a worm may be used to contaminate the system with a bot [1]. Technology has become an inescapable component of our lives. The Internet offers a vast amount of helpful information and makes communication easier and faster than ever, but it brings a number of threats along the way. The computer system is a great device for storing important information, and in certain cases the information is so important that losing it will damage its owner. Threats to computer systems can come in numerous ways, from humans or from natural calamities. For example, if somebody steals your account information from a confidential store, the threat is considered a human threat; if the computer is drenched in heavy rain, it is a natural-disaster threat.

The internet has become a key tool for business communication and information sharing. Everything we read, send, and receive on the internet carries a risk. The number of potential security dangers has grown at the same time that reliance on information technology has grown, making the need for a comprehensive security program ever more vital. Web threats pose a broad assortment of risks, including financial damage, identity theft, loss of private information, theft of network resources, damaged personal reputation, and erosion of consumer confidence in e-commerce and online banking. Specialists discover new security vulnerabilities almost every day [5]. Newly exposed vulnerabilities may be due to flaws in software design or to faults in software construction, and hackers can exploit them to gain access to network resources. Administrators have to spend a great deal of time and effort just staying informed about, and dealing with, newly discovered vulnerabilities; often the effect is that they cannot find the time to monitor systems and educate employees. Enforcement of security policy may be missing or may rely on the honour system. Failure to protect against the key threats to information and system resources can end in disaster [11].

System threats are anything that leads to loss or corruption of data, or to physical damage to the hardware


and communications. Knowing how to recognize security threats is the first step in defending the systems. Internet threats may be deliberate, unintentional, or caused by natural disasters [12].

Figure 1: Category of Web Threats

2.1 PHYSICAL THREATS
A potential source of an incident which may result in loss of, or physical harm to, the systems is known as a physical threat. There are three types of physical threat:
 Internal threats - fire, unstable power supply, humidity in the rooms housing the hardware, etc.
 External threats - floods, earthquakes, etc.
 Human threats - theft, destruction of the communications and hardware, disruption, unintentional or deliberate errors [4].

2.2 NON-PHYSICAL THREATS
A potential source of an incident which may cause:
 loss or corruption of system data
 interruption of business processes that rely on the systems
 loss of sensitive information
 illegitimate monitoring of activities on the systems

These threats are also known as logical threats. The following list covers the common types of non-physical threats:

Table 1: Summary of Internet threats and their Impact

MALWARE
Definition: A software program clandestinely placed on the computer system that performs unexpected and unauthorized, malicious activities.
Recent attacks: 14 Oct 2014, spike in malware attacks on aging ATMs [13]; 8 Oct 2014, malware attacks drain Russian ATMs [14].
Impact: Losses of about USD 1 million; millions of dollars.

VIRUS
Definition: A program which, like a real-life virus, can copy itself and spread rapidly. Viruses are designed to harm the computer system and display unexpected messages and images; they also destroy important files and slow down the system.
Recent attack: 15 Aug 2012, cyber attack on Saudi Aramco by the Shamoon virus [15].
Impact: Infected 30,000 of its Windows-based machines.

WORM
Definition: A self-contained program which spreads copies of itself to other systems through network links, email attachments and messages, acting as malware. Worms can block access to various web sites and can steal the licenses of applications installed on the computer system.
Recent attack: 14 Feb 2013, bizarre attack infects Linksys routers with self-replicating malware [16].
Impact: 1,000 devices hit by the worm.

TROJAN HORSE
Definition: A program which performs a malicious act but cannot replicate itself. It may arrive as a harmless-looking file or application with concealed malicious code. During its execution we may experience unwanted system trouble and may occasionally lose information from the system.
Recent attack: 25 Oct 2011, Japanese government hit by Chinese Trojan horse attack [17].
Impact: A cyber-attack mounted from a server in China apparently stole user IDs and passwords of Lower House members and their secretaries who use the chamber's computer network, giving the hackers access to e-mails and documents of the chamber's 480 lawmakers and other personnel for at least one month.

SPAM
Definition: Unsolicited messages transmitted by email or instant message, intended to make money for the sender.
Recent attack: Jan 2014, fridge sends spam emails as attack hits smart gadgets [18].
Impact: 100,000 devices used as part of the spam attack.

PHISHING
Definition: Attempts made by phone, email, message or fax to obtain our private information by stealing our identity. Most phishing attempts appear to serve a lawful purpose, but in fact they are planned for illegal activity.
Recent attack: April 2013, an AP journalist clicked on a spear-phishing email disguised as a Twitter email [19].
Impact: Erased $136.5 billion of value.

PHARMING
Definition: Hijacking legitimate website addresses to redirect us to a fake website that appears genuine. The spoofed website clandestinely gathers the private information we enter, which can be used for any number of illegal activities.
Recent attack: March 2014, criminals hack 300,000 home routers as part of mystery 'pharming' attack [20].
Impact: Compromised 300,000 consumer and small-office routers.

SPYWARE
Definition: Software installed on our system without our knowledge which observes, tracks and reports our electronic actions to the spyware originator. It is frequently installed through trojans or bundled with legitimate software chosen for downloading and installation.
Recent attack: 9 Jan 2011, 'spyware' incident spooks jihadi forum [21].
Impact: System compromised.

ADWARE
Definition: Software which delivers advertisements such as pop-ups and Web links without our permission. It is typically installed surreptitiously through trojans or with legitimate software we choose to download and install. It can display highly targeted advertisements based on data collected by spyware already on the system that tracks our Internet surfing.
Recent attack: Nov 2014, web attack: PUP/adware/fake application download [22].
Impact: Affected Windows.

BOTS / BOTNETS
Definition: Very small programs placed clandestinely on a system through a trojan. A botmaster may control numerous bots from a central position to carry out phishing or perform a denial-of-service attack that brings down a website so it cannot be accessed. They are normally used to distribute spam and phishing attacks.
Recent attack: 9 Dec 2009, Amazon EC2 cloud service hit by botnet, outage [23].
Impact: Infected client computers after hackers were able to compromise a site on EC2 and use it as their own C&C (command and control) operation.

RANSOMWARE
Definition: Software which encrypts files for the purpose of extortion. Files are held until the victim pays for a decryption key, with payment distributed through a third party.
Recent attack: 28 Jan 2015, "One million rooms for Marriott Hotels, book earlier; ransomware on the rise; hot hotels doing well" [24].
Impact: Ransomware works by infecting your hard drive, freezing your computer and demanding a ransom.


3. PREVENTIVE MEASURES
Below are some Internet security tips to keep your computer and your family safe from web threats:

Avoiding Malware
Ensure that your Internet security software is updated regularly and automatically, but don't take it for granted that it will protect you from attacks, and don't depend entirely on antivirus software: numerous threats require a multifaceted defence, like a full-blown security suite. The risks from "zero-day" attacks can be reduced by keeping programs updated. Be aware that PDFs, image files and Office documents sometimes hide nasty surprises, and be suspicious of program files and Web links from any unexpected or unauthorized source. Also watch for fake anti-malware packages that "identify" invented spyware and viruses [6].

Anti-Social Networks
Avoid shortened URLs from services like bit.ly, tr.im and tinyURL.com, which are very commonly used to disguise malicious Web sites, links to fake login windows, or malware; treat very short URLs with suspicion. You can set an option on TinyURL's page, in your own web browser, that does the same thing. "Web 2.0" sites are generally fun, but they are not secure, as they are subject to worm attacks such as spam and denial-of-service attacks. Be cautious when posting sensitive personal information on social network sites like Facebook and LinkedIn; such sites are getting worse, and you can't even imagine what damage the bad guys can do with your private data. So it is better to keep your birthday, your home address and your identity off these social networking sites [7].

Maintaining a Healthy System
Make use of Windows Update and related mechanisms for regular updating whenever possible; in short, keep your applications and system updated. A lot of existing malware reaches systems via Office documents, PDFs and so on, and there are an enormous number of malicious sites, so keeping Office, Adobe Reader and the operating system up to date is needed to keep the applications and system safe [9]. For day-to-day work and play, avoid using an administrative account: if an attacker or malware then gains access to your system, the damage is limited because the profile has no administrative privileges.

Protecting Your Passwords
Change your passwords frequently and use different passwords for your different accounts, so that an unauthorized user can never guess or hack them all; if one of your passwords leaks out, having different passwords for different accounts means the attacker cannot access everything you own. Always use strong passwords that combine uppercase and lowercase characters, special characters and numbers [10]. Avoid easily guessable passwords, and don't make silly mistakes like writing passwords down where they can be found easily.

(Don't Be) Burned on a Wire
From public hotspots, don't connect to Web sites that involve the transfer of sensitive information, such as online banking, and create a specific user profile without administrator rights for surfing from them. Use HTTPS when accessing Websites; wireless networks are inherently less protected [8]. Avoid file sharing and weak passwords for Internet usage.

Backups Not Crackups
Keep your private and essential information backed up "off-site", as professional system administrators do. Always keep your laptop with you and keep backing up, so that if anyone steals your data you won't have lost all the important information. Use system passwords so that unauthorized users can't access your systems [7].

4. CONCLUSION
Developing policies and structures to meet such threats is a big challenge for Internet security. Just as the US state of Iowa developed laboratories to simulate the investigation of Internet attacks, others could work the same way. The use of Internet facilities is becoming more and more familiar, and internet users should be shown how these facilities are to be used and how to protect their information from disclosure. In short, the following security issues should be taken into consideration:
 Keep security software up to date and always running, especially when you use a laptop in cafes, airports and other locations, as these are unprotected networks.
 Make use of Web reputation, a recent technology which can assess the reliability and safety of a Website before you visit it. Use this technology together with content-scanning technologies and active URL filtering.
 Install security patches and use the most up-to-date Web browser version whenever available. Consider a no-script plug-in for the web browser.
 Check with your Internet Service Provider what kind of protection is offered by their network.
 Enable the "Automatic Update" feature when using the Microsoft Windows operating system, and apply new updates as soon as they are available.
 Install, update and maintain firewalls and intrusion detection software that offer spyware/malware protection.
 Be careful of Web pages or links that require installation of software. Examine all programs downloaded from the Internet with an advanced security solution.
 Always read the End User License Agreement and cancel the installation if any other "programs" are going to be installed in addition to the desired one.
 Avoid giving out personal information in response to unsolicited requests for it.

Also, when you accept connections, ensure that people are who they say they are, and that you really want them as a connection. Finally, make sure that you are using a protected connection for your safekeeping.


REFERENCES
[1] The Basics of Web Threats, http://la.trendmicro.com/media/br/the-basic-of-web-threats-brochure-en.pdf
[2] Anthony Bisong and Syed (Shawon) M. Rahman, An Overview of the Security Concerns in Enterprise Cloud Computing, http://airccse.org/journal/nsa/0111jnsa03.pdf
[3] Randy Brown, Web Security Issues: How Has Research Addressed the Growing Number of Threats?, http://www.swdsi.org/swdsi08/paper/SWDSI%20Proceedings%20Paper%20S301.pdf
[4] Potential security threats to your computer systems, http://www.guru99.com/potential-security-threats-to-your-computer-systems.html
[5] David Harley BA CISSP FBCS CITP, Staying Safe on the Internet, http://www.eset.com/us/resources/white-papers/StaySafeOnTheInternet.pdf
[6] Saeed S. Basamh, Hani A. Qudaih, Jamaludin Bin Ibrahim, An Overview on Cyber Security Awareness in Muslim Countries, http://esjournals.org/journaloftechnology/archive/vol4no1/vol4no1_4.pdf
[7] Trends for 2014: The Challenge of Internet Privacy, http://www.welivesecurity.com/wp-content/uploads/2013/12/Trends-for-2014.pdf
[8] Emerging Cyber Threats Report 2014, https://www.gtisc.gatech.edu/pdf/Threats_Report_2014.pdf
[9] Mark Johnson, Overview of Cyber Security & Risk, http://www.int-comp.org/attachments/Overview-Cyber-Security-Risk.pdf
[10] Dennis Rand, CSIS Security Research and Intelligence, http://www.csis.dk/downloads/LinkedIn.pdf
[11] Top 10 Internet Threats, http://www.norman.com/home_and_small_office/security_center/internet_security_tips/internet_security_tips_top_10_internet_threats
[12] Lujo Bauer, Alessandro Acquisti, Nicolas Christin, Lorrie Cranor, Anupam Datta, Efforts to promote online privacy via research and education at Carnegie Mellon, http://p2.zdassets.com/hc/theme_assets/512007/200043500/CMU_Proposal_Updated.pdf
[13] http://krebsonsecurity.com/2014/10/spike-in-malware-attacks-on-aging-atms/
[14] http://www.bankinfosecurity.com/russian-malware-attacks-drain-atms-a-7412/op-1
[15] https://www.iiss.org/en/publications/survival/sections/2013-94b0/survival--global-politics-and-strategy-april-may-2013-b2cc/55-2-08-bronk-and-tikk-ringas-e272
[16] http://arstechnica.com/security/2014/02/bizarre-attack-infects-linksys-routers-with-self-replicating-malware/
[17] http://ajw.asahi.com/article/behind_news/social_affairs/AJ2011102515710
[18] http://www.bbc.com/news/technology-25780908
[19] http://blog.returnpath.com/blog/tori-funkhouser/top-7-phishing-scams-of-2013
[20] http://www.techworld.com/news/security/criminals-hack-300000-home-routers-as-part-of-mystery-pharming-attack-3505049/
[21] http://www.wired.com/2011/09/jihadi-spyware/
[22] http://www.symantec.com/security_response/attacksignatures/detail.jsp?asid=27222
[23] http://www.cnet.com/news/amazon-ec2-cloud-service-hit-by-botnet-outage/
[24] http://www.irishtimes.com/business/transport-and-tourism/one-million-rooms-for-marriott-hotels-book-earlier-ransomware-on-the-rise-hot-hotels-doing-well-1.208246.
http://www.norman.com/home_and_small_office/securit

494
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

A Comparative Performance Analysis of Mobile Ad hoc Network (MANET) Routing Protocols
Sheenam
PGGCG, Sector 42
Affiliated to PU, Chandigarh
sheenammadan@rediffmail.com

ABSTRACT
A mobile adhoc network (MANET) consists of a group of wireless mobile computers or nodes. A routing procedure is always needed in mobile adhoc networks to find a path so as to forward packets appropriately between the source and the destination. Adhoc On-Demand Distance Vector (AODV) is a reactive discovery algorithm in which a mobile device of a MANET connects to a gateway only when it is needed. A comparative analysis can be made using extensive simulation. In this paper an attempt has been made to compare prominent on-demand reactive routing protocols for MANETs, i.e. AODV (including AODV-UU) and the Dynamic Source Routing (DSR) protocol. The findings reveal that differences in the protocol mechanics lead to significant performance differentials when these protocols are used. The performance differentials are analyzed using varying simulation time. The simulations are carried out using the NS-2 network simulator.

KEYWORDS
MANETs, AODV, DSR, Routing, Simulators.

1. INTRODUCTION
A wireless network is a growing new technology that allows users to access services and information electronically, irrespective of their geographic position. Wireless cellular systems have been in use since the 1980s, and we have seen their evolution to first, second and third generation wireless systems. These systems work with the support of a centralized supporting structure such as an access point. Wireless users can be connected to the wireless system with the help of these access points when they roam from one place to the other. The adaptability of wireless systems is limited by the presence of this fixed supporting structure: the technology cannot work efficiently in places where there is no permanent infrastructure. Easy and fast deployment of wireless networks is expected of future generation wireless systems, but such fast deployment is not possible with the existing structure of present wireless systems. Recent advancements such as Bluetooth introduced a fresh type of wireless system, frequently known as mobile adhoc networks. Mobile adhoc, or "short-lived", networks operate in the absence of permanent infrastructure. ("Ad hoc" is a Latin phrase meaning "for this or for this only.") Mobile adhoc networks offer quick network deployment in conditions where it is not possible otherwise.

A mobile ad hoc network (MANET) is a type of wireless network that depends on mobile nodes and has no fixed infrastructure. Adhoc networks require no centralized administration or fixed network infrastructure such as base stations or access points, and can be quickly and inexpensively set up as needed. A mobile adhoc network is an autonomous system of mobile nodes connected by wireless links. Each node operates not only as an end system but also as a router to forward packets. The nodes are free to move about and organize themselves into a network, and they change position frequently. The main classes of routing protocols are proactive, reactive and hybrid.

A reactive, on-demand routing strategy is a popular routing category for wireless adhoc routing. It is a routing philosophy that provides a scalable solution to relatively large network topologies. The design follows the idea that each node tries to reduce routing overhead by sending routing packets only when a communication is requested.

2. DYNAMIC SOURCE ROUTING
Dynamic Source Routing (DSR) is a routing protocol for wireless mesh networks. It is similar to AODV in that it establishes a route on demand when a transmitting mobile node requests one. However, it uses source routing instead of relying on the routing table at each intermediate device. DSR's reactive approach eliminates the need to periodically flood the network with the table-update messages required in a table-driven approach. The intermediate nodes also utilize route-cache information efficiently to reduce the control overhead. The disadvantage of DSR is that the route maintenance mechanism does not locally repair a broken link, and the connection setup delay is higher than in table-driven protocols. Even though the protocol performs well in static and low-mobility environments, its performance degrades rapidly with increasing mobility.

3. ADHOC ON-DEMAND DISTANCE VECTOR ROUTING (AODV)
The Adhoc On-Demand Distance Vector (AODV) routing protocol was developed by C. Perkins and S. Das at the Nokia Research Center, the University of California, Santa Barbara, and the University of Cincinnati. AODV supports both unicast and multicast routing. AODV is an on-demand algorithm, meaning that it builds routes between nodes only as desired by source nodes. It maintains these routes as long as


they are needed by the sources; hence, it is considered a reactive routing protocol. AODV forms trees which connect multicast group members. The trees are composed of the group members and the nodes needed to connect the members. AODV utilizes routing tables to store routing information: one routing table for unicast routes and one for multicast routes. There are various AODV routing protocol implementations, such as Mad-hoc, AODV-UCSB, AODV-UU, Kernel-AODV, and AODV-UIUC.

4. SECURITY
Heterogeneity of nodes in MANETs increases the vulnerability of MANET nodes. The exposed nature of MANET links makes them open to inspection or targeted data capture. Also, unwanted interactions, or interactions with unwarranted entities, drain power and thus decrease survivability. Security therefore deals with the survivability of the network as a whole. MANETs are susceptible to attacks ranging from passive eavesdropping to active interfering.

Unlike wired networks, where an adversary must gain physical access to the network wires or pass through several lines of defense at firewalls and gateways, attacks on a wireless network are easy to launch, since they involve power-based distances and not spatial distance (physical territories or boundaries). The various security issues in MANETs for the different layers of the OSI model are discussed in Table 1.

TABLE 1: SECURITY ISSUES IN MANETS
1. Application layer: detecting and preventing viruses, worms, malicious code, and application abuses.
2. Transport layer: authenticating and securing end-to-end communications through data encryption.
3. Network layer: protecting the adhoc routing and forwarding protocols.
4. Link layer: protecting the wireless MAC protocol and providing link-layer security support.
5. Physical layer: preventing signal-jamming denial-of-service attacks.

The requirements at the application layer call for protection from eavesdropping and malignant code (viruses and worms) to maintain secrecy and integrity. Robust encryption is typically the first line of defence for any communications network. In the cases of both encryption and anti-virus or anti-worm techniques and technologies, these protections come at the cost of network performance and must be balanced against the relative threat level and the operational need for low latency and high bandwidth.

At the network layer, protecting routing data has been discussed and many solutions have been proposed. They mainly deal with strengthening the route discovery process or introducing methods for efficient selection of a route from among the many available routes to the same destination. Work on link- and physical-layer data protection concentrates on protecting the protocols of these layers by addressing the features that make them vulnerable.

5. COMPARISON OF PROACTIVE AND REACTIVE ROUTING PROTOCOLS
Table 2 briefly compares proactive routing protocols with reactive on-demand routing protocols.

Table 2: Comparison of Proactive and Reactive routing protocols
1. Proactive: routing information is propagated constantly and periodically, even when no topology change occurs. Reactive: no periodic updates; control information is not propagated unless there is a change in the topology.
2. Proactive: attempt to maintain consistent, up-to-date routing information from each node to every other node in the network. Reactive: a route is built only when required.
3. Proactive: first-packet latency is lower than with on-demand protocols. Reactive: first-packet latency is higher than with table-driven protocols, because a route needs to be built.
4. Proactive: a route to every other node in the adhoc network is always available. Reactive: not always available.
5. Proactive: incur substantial traffic and power consumption, which is generally scarce in mobile computers. Reactive: do not incur substantial traffic and power consumption compared to table-driven routing protocols.

6. NETWORK SIMULATOR NS-2
NS-2 is a discrete event simulator targeted at networking research. It provides substantial support for simulation of TCP, routing and multicast protocols over wired and wireless networks. It consists of two tools: the network simulator (ns), which contains all commonly used IP protocols, and the network animator (nam), which is used to visualize the simulations. NS-2 fully simulates a layered network from the physical radio transmission channel to high-level applications.

NS-2 is an object-oriented simulator written in C++ and OTcl. The simulator supports a class hierarchy in C++ and a similar class hierarchy within the OTcl interpreter, with a one-to-one correspondence between each class in the interpreted hierarchy and one in the compiled hierarchy. The reason for using two different programming languages is that OTcl is suitable for programs and configurations that demand frequent and fast change, while C++ is suitable for programs that have high demands on speed. NS-2 is highly extensible: it not only supports most commonly used IP protocols but also allows users to extend or implement their own protocols. It also provides powerful trace functionalities, which are very important in our project, since various information needs to be logged for analysis.
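As an illustration of how logged trace information can be analyzed, the sketch below computes a packet delivery ratio from trace lines. It assumes a simplified old-style NS-2 wireless trace layout (event flag, time, node, trace level, then packet fields); real NS-2 traces carry more fields and several configurable formats, so treat this as a sketch only:

```python
def packet_delivery_ratio(trace_lines):
    """Compute PDR = packets received / packets sent at the
    application (AGT) trace level, from simplified NS-2-style
    trace lines of the assumed form:
        'EVENT time node LEVEL pkt_id type ...'
    where EVENT is 's' (send) or 'r' (receive)."""
    sent = received = 0
    for line in trace_lines:
        fields = line.split()
        if len(fields) < 4 or fields[3] != "AGT":
            continue  # only count application-layer events
        if fields[0] == "s":
            sent += 1
        elif fields[0] == "r":
            received += 1
    return received / sent if sent else 0.0
```

Feeding two application-layer send events and one receive event, for instance, yields a delivery ratio of 0.5.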


When choosing a network simulator, we normally consider its accuracy. Unfortunately, there is no conclusion on which of the above simulators is the most accurate. David Cavin et al. conducted experiments to compare the accuracy of the simulators and found that the results are barely comparable; furthermore, they warn that no standalone simulation can fit all the needs of wireless developers.

7. CONCLUSION
In this paper, on-demand reactive routing protocols, namely Adhoc On-Demand Distance Vector routing (AODV), its variants AODV-UU, AODV-UCSB and AODV-UIUC, and Dynamic Source Routing (DSR), have been compared. The simulation of these protocols has been carried out using the NS-2 simulator. We can conclude that if the MANET has to be set up for a small amount of time, then AODV should be preferred due to its low initial packet loss, and DSR should not be preferred for such short-lived MANETs, because its initial packet loss is very high. If the MANET is to be used for a longer duration, then both protocols can be used, because after some time both deliver packets at the same ratio; even so, AODV has a better packet-receiving ratio than DSR. The two protocols have been compared using simulation; it would be interesting to note the behavior of these protocols on a real-life test bed.

REFERENCES
[1] Mobile Ad-hoc Networks (MANET), http://www.ietf.org/html.charters/manet-charter.html (1998-11-29).
[2] Krishna Gorantala, "Routing in Mobile Ad-hoc Networks", Umea University.
[3] "Performance Comparison Study of Routing Protocols for Mobile Grid Environment", IJCSNS International Journal of Computer Science and Network Security, Vol. 8, No. 2, pp. 82-88, February 2008.
[4] Laura Marie Feeney, "A Taxonomy for Routing Protocols in Mobile Ad Hoc Networks", Technical Report, Swedish Institute of Computer Science, Sweden, 1999.
[5] V. Davies, "Evaluating Mobility Models Within an Ad Hoc Network", Master's thesis, Colorado School of Mines, 2000.
[6] Anne Aaron, Jie Weng, "Performance Comparison of Ad-hoc Routing Protocols for Networks with Node Energy Constraints", available at http://ivms.stanford.edu
[7] Charles Perkins, Elizabeth Royer, Samir Das, Mahesh Marina, "Performance of Two On-demand Routing Protocols for Ad-hoc Networks", IEEE Personal Communications, February 2001, pp. 16-28.
[8] C. Perkins, E. Belding-Royer, S. Das, "Ad hoc On-Demand Distance Vector (AODV) Routing", RFC 3561, IETF Network Working Group, July 2003.
[9] C. E. Perkins and E. M. Royer, "Ad-Hoc On-Demand Distance Vector Routing", Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications (WMCSA), New Orleans, LA, 1999, pp. 90-100.
[10] Stefano Basagni, Marco Conti, Silvia Giordano, Ivan Stojmenovic, "Mobile Ad Hoc Networking", Wiley-IEEE Press, 2004.
[11] T. Clausen, P. Jacquet, L. Viennot, "Comparative Study of Routing Protocols for Mobile Ad Hoc Networks", The First Annual Mediterranean Ad Hoc Networking Workshop, September 2002.


Unified Modeling Language (UML) for Database Systems and Computer Applications
Jyoti Goyal
S.D College of Institutes
Barnala, India
jyotigoyal7009@live.in

ABSTRACT
Modeling is an essential part of large software projects, and of medium and small projects as well. This paper presents the concepts of database systems as well as an overview of the use of the Unified Modeling Language (UML) as a standard notation of real-world objects in developing an object-oriented design methodology for computer applications.

KEYWORDS
Android, Use Case Diagrams (UCD), Class Diagrams, Sequence Diagrams

I. INTRODUCTION
The first step in developing an object-oriented methodology for computer systems as well as database systems is the use of UML (Unified Modeling Language). UML is a tool for specifying software systems that includes standardized diagrams to define, illustrate, or model a software system's design and structure. UML diagrams include the use case diagram, the class diagram and the sequence diagram.

UML is considered an industry-standard modeling language with a rich graphical notation and a comprehensive set of diagrams and elements. This paper presents the layering of an object-oriented class model over a purely relational database. Among the modeling concepts that UML specifies how to describe are: class (of objects), object, association, responsibility, activity, interface, use case, package, sequence, collaboration, and state. An overview of database systems and the concepts of UML, use case diagrams and sequence diagrams are discussed in this paper.

II. ROLE OF DATABASE
A database is an ordered collection of data elements intended to meet the information needs of an organization and designed to be shared by multiple users. Database systems reduce data redundancy, integrate data, and enable information sharing among the various groups of users in the organization. The term is typically used to encapsulate the constructs of a data model, a database management system (DBMS), and the database itself. Data is stored within the data structures of the database; a DBMS is a suite of computer software providing the interface between users and databases. The interactions carried out by most databases can be categorized into the following four categories, explained below:

Data Definition: defining new data structures for a database, removing data structures from the database, modifying the structure of existing data.

Data Maintenance: inserting new data into existing data structures, updating data in existing data structures, deleting data from existing data structures.

Data Retrieval: querying existing data by end-users and extracting data for use by application programs.

Data Control: creating and monitoring users of the database, restricting access to data in the database and monitoring the performance of databases.

III. USE CASE DIAGRAMS
A use case is the specification of a set of actions performed by a system which yields an observable result that is, typically, of value for one or more actors or other stakeholders of the system. Use cases are part of the Object Management Group (OMG) Unified Modeling Language (UML) standard. This standard tells us what the parts of use case diagrams mean (stick figures, ovals and lines) and gives us the definition of a use case, but it does not tell us how to structure or write one; so we are left to read books or articles (like this one) to try to figure out the right way. A use case is a list of steps that specifies how a user interacts with the business or system, while keeping in mind the value that this interaction provides to the user.

The UML is a tool for specifying and visualizing software systems. It includes standardized diagram types that describe and visually map a computer application's or a database system's design and structure. The use of UML as a tool for defining the structure of a system is a very useful way to manage large, complex systems. Having a clearly


visible structure makes it easy to introduce new people to an existing project [4]. The UML is used to specify, visualize, modify, construct and document the artifacts of an object-oriented software-intensive system under development. It offers a standard way to visualize a system's architectural blueprints, including elements such as:

 Activities
 Actors
 Business Processes
 Database Schemas
 Logical Components
 Programming Language Statements

IV. UML (ONLINE GROCERY SYSTEM)

USE CASE DIAGRAM
Brief Description: This use case allows customers to search for their food items online. After selecting their items, the customers check out using any payment source (debit/credit card). The vendor checks whether or not the item is in stock. Refer to Fig. 1.

Actors:

1. Primary Actor: Customer

2. Secondary Actor: Vendor

Class Diagram: A class diagram includes the concept of objects, which inherit the properties of a class. In software engineering, a class diagram in the Unified Modeling Language is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It also shows how the different entities (people, things, and data) relate to each other, and it can be used to display both logical classes and implementation classes. It is the main building block of object-oriented modeling. Refer to Fig. 2.

Sequence Diagrams: A sequence diagram is an interaction diagram which shows how processes operate with one another and in what order; it is a construct of a Message Sequence Chart. A sequence diagram shows object interactions arranged in time sequence. It depicts the objects and classes involved in the scenario and the sequence of messages exchanged between the objects needed to carry out the functionality of the scenario. Sequence diagrams are typically associated with use case realizations in the Logical View of the system under development, and are sometimes called event diagrams or event scenarios. A sequence diagram shows a detailed flow for a specific use case, or even just part of one: it shows the calls between the different objects in their sequence and can show, at a detailed level, different calls to different objects. A sequence diagram has two dimensions: the vertical dimension shows the sequence of messages/calls in the time order in which they occur; the horizontal dimension shows the object instances to which the messages are sent. Figure 3 illustrates an example of a sequence diagram of the Online Grocery System.

Fig. 1 Use Case Diagram of Online Grocery Shopping

V. CONCLUSION
The UML is a tool for specifying software systems that includes standardized diagrams to define, illustrate and visually map or model a software system's design and structure. UML diagrams include the use case diagram, the class diagram and the sequence diagram. This paper


Fig 2 Class Diagram of Online Grocery Shopping

Fig 3 Sequence Diagram of Online Grocery Shopping

outlined the use of the Unified Modeling Language (UML) as a standard notation of real-world objects in developing an object-oriented design methodology for computer applications and database systems.

REFERENCES
[1] Roger S. Pressman, Software Engineering: A Practitioner's Approach, Seventh Edition (2010)
[2] Pankaj Jalote, Fred B. Schneider, An Integrated Approach to Software Engineering (2005)
[3] David Gustafson, Schaum's Outlines of Software Engineering (2002)
[4] Ghezzi Carlo, Jazayeri Mehdi, Mandrioli Dino, Fundamentals of Software Engineering (2002)
[5] Kogent, Software Engineering (2012)
[6] Bertrand Meyer, Mathai Joseph, Software Engineering Approaches for Offshore and Outsourced Development (2007)
[7] http://www.uml-diagrams.org/
[8] Eve Andersson, Philip Greenspun, Andrew Grumet, Software Engineering for Internet Applications (2006)


SECURING INFORMATION USING IMAGES: A REVIEW
Anjala, Assistant Professor, SBSSTC Ferozepur, anjalamca@gmail.com
Avi Grover, Research Scholar, DAV College, Jalandhar, avigrover33@gmail.com
Gulshan Kumar, Assistant Professor, SBSSTC Ferozepur, gulshanahuja@gmail.com

ABSTRACT
Nowadays, the security of data and information is very important. It is the need of the hour to transmit sensitive information over a public network in a secure way; however, the network also gives a malicious user a chance to gain access to our information. There exist many ways to transmit information securely, and securing information by embedding it in images is one of the prominent methods. In this paper, we cover the concepts of information security and various techniques to secure information using images, with their pros, cons and current trends.

Keywords: Cryptography, Steganography, image hiding, least-significant bit (LSB) method.

1. INTRODUCTION
In today's developing age, technologies have advanced so much that almost every person uses a laptop, desktop, mobile, notepad, etc. in daily life to transfer secret information across the World Wide Web. Some people wait for this type of secret information (hackers) and then change your information (crackers), so to guard your secret information against such unauthorized people you must protect your data through security techniques. This paper discusses cryptography and steganography techniques to protect secret data. Cryptography is a technique in which the original data is converted to an unreadable form, so that an outsider cannot understand the data being transferred over the network. Steganography hides the data inside a cover medium; the cover medium can be anything, such as an image, audio or video, so the data passes unnoticed over the network.

Steganography and cryptography differ in the way they are defeated: cryptography fails when the "enemy" is able to read the content of the cipher message, while steganography fails when the "enemy" detects that there is a secret message present in the steganographic medium. The disciplines that study techniques for deciphering cipher messages and detecting hidden messages are called cryptanalysis and steganalysis [12]. Steganalysis is "the process of detecting steganography, by looking at variances between bit patterns and unusually large file sizes" [3]. It is the art of discovering and rendering useless covert messages. The goal of steganalysis is to identify suspected information streams, determine whether or not they have hidden messages encoded into them, and, if possible, recover the hidden information. Cryptanalysis is the process by which encrypted messages can sometimes be broken, otherwise called code breaking, although modern cryptographic techniques are virtually unbreakable.

The aim of this paper is to describe a method for integrating cryptography and steganography through a medium such as an image. In this paper, the secret message is embedded within an image called the cover image. A cover image carrying embedded secret data is referred to as a stego-image.

2. IMAGE STEGANOGRAPHY
In this approach, each byte (pixel) of all three matrices (the R, G, B matrices of the payload) is encrypted using the S-DES algorithm, and an image comprised of the encrypted pixels is formed. The key used to encrypt each pixel is 10 bits long and is obtained from the pixels of a key image. The pixel values of the red, green and blue intensities of each pixel of the key image are combined to get a 24-bit value. The first ten bits are selected as the key to encrypt the red-intensity pixel of the payload image [6], the middle ten bits are the key to encrypt the green-intensity pixel of the payload, and the last ten bits are the key to encrypt the blue-intensity pixel of the payload image. The size of the key image must therefore be the same as that of the payload; if not, the key image is resized. Each pixel (24 bits) of the key image is split into three keys (10 bits each). This encrypted data is represented as an image which is hidden in another image, called the carrier image, using steganography [4, 5, 6].

Hiding information in a medium requires the following elements [2]:

i) The cover media (C) that will hold the hidden data.

ii) The secret message (M), which may be plain text, cipher text or any other type of data.

iii) The stego function (Fe) and its inverse (Fe^-1).

iv) An optional stego-key (K) or password, which may be used to hide and unhide the message.
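The 24-bit key-pixel split described above can be sketched in code. Note that three 10-bit fields cannot be disjoint inside 24 bits, so they must overlap; the exact bit ranges chosen below (bits 23-14, 17-8 and 9-0) are our own assumption, as the paper does not specify them:

```python
def pixel_to_sdes_keys(r: int, g: int, b: int):
    """Combine the R, G, B intensities (0-255 each) of a
    key-image pixel into a 24-bit value and derive three
    10-bit S-DES keys, one per colour plane of the payload.
    The bit ranges are assumed, since 3 x 10 > 24 forces
    the 'first', 'middle' and 'last' ten bits to overlap."""
    value = (r << 16) | (g << 8) | b   # 24-bit combined value
    first10 = (value >> 14) & 0x3FF    # key for the red plane
    middle10 = (value >> 8) & 0x3FF    # key for the green plane
    last10 = value & 0x3FF             # key for the blue plane
    return first10, middle10, last10
```

Each returned key is a 10-bit integer in the range 0-1023, ready to drive one S-DES encryption of the corresponding payload pixel.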


The stego function operates on the cover media and the message (to be hidden), along with a stego-key, to produce a stego media (S). Figure 1.1 depicts this operation: Fe maps the cover C, the message M and the key K to the stego media, and the inverse function Fe^-1 recovers M from it using the same key K.

Figure 1.1 The steganographic operation

3. MODERN TECHNIQUES OF STEGANOGRAPHY
The common modern technique of steganography exploits the properties of the medium itself to convey a message. The following media are candidates for digitally embedding messages:

(i) Plain text

(ii) Still imagery

(iii) Audio and video

(iv) IP datagrams

4. LSB METHOD
Least significant bit (LSB) insertion is the most widely used image steganography technique [10]. It embeds the message in the least significant bits of each pixel. LSB techniques may use a fixed least-significant-bit insertion scheme, in which the number of bits of data added to each pixel remains constant, or variable least-significant-bit insertion, in which the number of bits added to each pixel varies with the surrounding pixels to avoid degrading the image fidelity. In this paper we discuss the embedding of text into an image through variable-size least-significant-bit insertion. Today, computer and network technologies provide easy-to-use communication channels for steganography. Essentially, the information-hiding process in a steganographic system starts by identifying a cover medium's redundant bits.
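The simplest case mentioned above, fixed one-bit LSB insertion, can be sketched as follows. The pixel list stands in for the grayscale pixel values of a cover image; the function names are illustrative:

```python
def lsb_embed(pixels, data: bytes):
    """Hide each bit of `data` in the least significant bit of
    successive pixel values (fixed 1-bit insertion, MSB first)."""
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # overwrite only the LSB
    return stego

def lsb_extract(pixels, length: int) -> bytes:
    """Recover `length` bytes from the pixel LSBs."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for pixel in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (pixel & 1)
        out.append(byte)
    return bytes(out)
```

Because only the least significant bit of each pixel is rewritten, no pixel value changes by more than 1, which is why the degradation of image fidelity is visually negligible.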

Figure 1.2 Basic layout of an image steganographic system (the cover image and the secret message pass through the embedding algorithm with a public key, travel over the communication channel, and the secret message is recovered by the extraction algorithm)

5. IMAGE CRYPTOGRAPHY
In today's information age, information sharing and transfer have increased exponentially. Cryptography can be defined as the conversion [14] of data into a scrambled code that can be deciphered and sent across a public or private network. Cryptography uses two main styles or forms of encrypting data: symmetric and asymmetric. Symmetric encryption algorithms use the same key for encryption as they do for decryption.


Other names for this type of encryption are secret-key, shared-key, and private-key. Cryptography is the science of using mathematics to encrypt and decrypt data. Cryptography enables you to store sensitive information or transmit it across insecure networks (like the Internet) so that it cannot be read by anyone except the intended recipient. While cryptography is the science of securing data, cryptanalysis is the science of analyzing and breaking secure communication. A cryptographic algorithm, or cipher, is a mathematical function used in the encryption and decryption process. A cryptographic algorithm works in combination with a key (a word, number, or phrase) to encrypt the plaintext. The same plaintext encrypts to different ciphertext with different keys. The security of encrypted data is entirely dependent on two things: the strength of the cryptographic algorithm and the secrecy of the key. A cryptographic algorithm, plus all possible keys and all the protocols that make it work, comprise a cryptosystem. We use encryption to ensure that information is hidden from anyone for whom it is not intended, even those who can see the encrypted data. The process of reverting ciphertext to its original plaintext is called decryption.

6. EMBEDDING THE ENCRYPTED IMAGE IN THE CARRIER IMAGE

The LSB is a simple approach to embedding information in a cover image. The pixel values of the encrypted image are hidden in the LSBs of the pixels of the carrier image by merging them with the 2nd LSB of the carrier pixel. If the size of the encrypted image is m x n, then the size of the carrier image must be m x n x 8, as each encrypted byte requires 8 bytes (pixels) of the carrier image. So if the carrier image size is not eight times the size of the payload, it has to be resized [13]. In this procedure the LSB algorithm helps in securing the originality of the image. For example, when a secret message is hidden within a cover image, the resulting product is a stego-image.

A possible formula for the process may be represented as:

Cover medium + embedded message + stegokey = stego-medium
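The eight-pixels-per-byte relation described above can be sketched in a few lines of Python. This is an illustrative sketch of generic LSB embedding, not the authors' exact implementation (their 2nd-LSB merging step is omitted); pixel arrays are modelled as flat lists of 0-255 integers, and all function names are ours.

```python
# Illustrative LSB embedding sketch: each payload byte is spread across the
# least-significant bits of 8 carrier pixels (hence carrier size = 8x payload).

def embed_lsb(carrier, payload):
    """Hide each payload byte in the LSBs of 8 carrier pixels."""
    if len(carrier) < 8 * len(payload):   # the m x n x 8 size requirement
        raise ValueError("carrier must hold 8 pixels per payload byte")
    stego = list(carrier)
    for i, byte in enumerate(payload):
        for bit in range(8):              # most significant bit first
            b = (byte >> (7 - bit)) & 1
            j = 8 * i + bit
            stego[j] = (stego[j] & ~1) | b    # overwrite the pixel's LSB
    return stego

def extract_lsb(stego, n_bytes):
    """Recover n_bytes of payload from the pixel LSBs."""
    out = []
    for i in range(n_bytes):
        byte = 0
        for bit in range(8):
            byte = (byte << 1) | (stego[8 * i + bit] & 1)
        out.append(byte)
    return bytes(out)
```

Because only the lowest bit of each pixel changes, no pixel value moves by more than 1, which is why the stego-image is visually indistinguishable from the carrier.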

Figure 1.3 - Hidden image (the cover image is encrypted with a key image to produce the stego image, and decryption with the key image recovers the cover image).
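As a concrete illustration of the symmetric-key property described in Section 5 (one shared key both encrypts and decrypts, and the same plaintext encrypts to different ciphertext under different keys), here is a toy XOR stream cipher. The SHA-256-based keystream is our own illustrative construction, not the paper's method and not a secure cipher; a real system should use a vetted algorithm such as AES.

```python
# Toy symmetric cipher: XOR the data with a keystream derived from the key.
# For illustration only -- NOT cryptographically secure.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Expand the key into n pseudo-random bytes (counter-mode hashing)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def crypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying the SAME key twice restores the input."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
```

Running `crypt` once encrypts and running it again with the same key decrypts, while a different key yields completely different ciphertext, which is exactly the symmetric behaviour the text describes.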

7. CONCLUSION

In this paper, we have presented a system for the combination of cryptography and steganography with LSB, which could prove to be a highly secure method for data communication in the near future. Steganography transmits secrets through apparently innocuous covers in an effort to conceal the existence of a secret. If the embedded message is not advertised, casual users will not know it even exists and therefore will not attempt to remove the mark. However, advertising the fact that some hidden information exists is only an invitation to "crackers" as a challenge. The method used in this paper provides triple security, because it combines three methods: two for cryptography and one for steganography, with the LSB algorithm hiding the encrypted data. Finally, we conclude that the proposed techniques are effective for secret data communication.

REFERENCES

[1] S. Dumitrescu, W. X. Wu and N. Memon (2002) On steganalysis of random LSB embedding in continuous-tone images, Proc. International Conference on Image Processing, Rochester, NY, pp. 641-644.


[2] R. J. Anderson and F. A. P. Petitcolas (2001) On the limits of steganography, IEEE Journal on Selected Areas in Communications, 16(4), pp. 474-481.

[3] Km. Pooja, Arvind Kumar, "Steganography - A Data Hiding Technique," International Journal of Computer Applications, ISSN 0975-8887, Volume 9, No. 7, November 2010.

[4] A. Westfeld, "F5 - A Steganographic Algorithm: High Capacity Despite Better Steganalysis," LNCS, Vol. 2137, pp. 289-302, April 2001.

[5] C.-C. Chang, T. D. Kieu, and Y.-C. Chou, "A High Payload Steganographic Scheme Based on (7,4) Hamming Code for Digital Images," Proc. of the 2008 International Symposium on Electronic Commerce and Security, pp. 16-21, August 2008.

[6] Jiri Fridrich, Du Dui, "Secure Steganographic Method for Palette Images," 3rd Int. Workshop on Information Hiding, pp. 47-66, 1999.

[7] N. Provos, "Defending Against Statistical Steganography," Proc. 10th USENIX Security Symposium, 2005; N. Provos and P. Honeyman, "Hide and Seek: An Introduction to Steganography," IEEE Security & Privacy Journal, 2003.

[8] Y. Lee and L. Chen (2000) High capacity image steganographic model, IEE Proceedings on Vision, Image and Signal Processing, 147(3), pp. 288-294.

[9] R. Chandramouli, M. Kharrazi, N. Memon, "Image Steganography and Steganalysis: Concepts and Practice," International Workshop on Digital Watermarking, Seoul, October 2004.

[10] C. C. Lin and W. H. Tsai, "Secret Image Sharing with Steganography and Authentication," Journal of Systems and Software, 73(3):405-414, December 2004.

[11] Komal Patel, Sumit Utareja and Hitesh Gupta, "Information Hiding Using Least Significant Bit and Blowfish Algorithm," International Journal of Computer Applications, Volume 63, No. 13, February 2013.

[12] J. Fridrich, M. Long, "Steganalysis of LSB encoding in color images," Multimedia and Expo, vol. 3, pp. 1279-1282, July 2000.

[13] A. Ker, "Improved detection of LSB steganography in grayscale images," in Proc. Information Hiding Workshop, vol. 3200, Springer LNCS, pp. 97-115, 2004.

[14] William Stallings, Cryptography and Network Security: Principles and Practice, Third Edition, Pearson Education, Singapore, 2003.


Text document tokenization for word frequency count using RapidMiner
(Taking resume as an example)

Gaurav Gupta
Department of Computer Engineering
University College of Engineering, Punjabi University
Patiala (Punjab), India
gaurav_shakti@yahoo.com

Sumit Malhotra
Department of Computer Science and Engineering
Bhai Gurdas College of Engineering and Technology, PTU
Sangrur (Punjab), India
sumitmalhotra.mail@gmail.com

ABSTRACT

Text mining, sometimes referred to as text data mining, is roughly equivalent to text analytics, which refers to the process of deriving high-quality information from text. RapidMiner is a leading open-source system for data mining. It is available as a stand-alone application for data analysis and as a data mining engine for integration into other products. Tokenization is the process of breaking a stream of text up into words, phrases, symbols, or other meaningful elements called tokens. A word frequency counter counts how often each word is used in a document. Applying tokenization and a word frequency counter to a text document (a resume in this case) helps us find the occurrence of each word in the document, but there is no provision to look up the frequency of a particular word of the user's choice.

Keywords

RapidMiner, RapidMiner Text Processing, RapidMiner Transform Cases operator, RapidMiner Tokenize operator.

1. INTRODUCTION

1.1 Text Mining and Analysis

Vast amounts of new data and information are produced every day through economic, scholarly and social activities. This sea of data, anticipated to grow at a rate of 40% p.a., has significant potential economic and societal value. Organizations use text mining techniques to analyse customer and competitor data to improve competitiveness; the pharmaceutical industry mines patents and research articles to improve drug discovery; within academic research, mining and analytics of large datasets are delivering efficiencies and new knowledge in areas as diverse as biological science, particle physics and media and communications. As the volume of scholarly output grows, researchers are increasingly interested in using tools such as text mining to explore patterns and trends across large databases of content. Text mining transposes words and expressions into numerical values. Text analysis involves information retrieval, lexical analysis to study word frequency distributions, labelling/annotation, information extraction, and data mining methods including link and association analysis, visualization, and predictive analytics. The overall objective is, basically, to transform text into data for analysis.

1.2 RapidMiner

RapidMiner is a leading open-source framework for data mining. It is available as a stand-alone application for data analysis and as a data mining engine for integration into other products. RapidMiner uses a client/server model with the server offered as Software as a Service or on cloud infrastructures. RapidMiner provides data mining and machine learning procedures including: data loading and transformation (Extract, Transform and Load (ETL)), data preprocessing and visualization, predictive analytics and statistical modeling, evaluation, and deployment.

RapidMiner is written in the Java programming language. RapidMiner provides a GUI to design and execute analytical workflows. These workflows are called "Processes" in RapidMiner and they consist of multiple "Operators". Each operator performs a single task within the process, and the output of each operator forms the input of the next one. Alternatively, the engine can be called from other programs or used as an API. Individual functions can be called from the command line. RapidMiner provides learning schemes, models and algorithms from Weka, and R scripts can be used through extensions.

Features:

1. Open source.
2. Operating-system independent.
3. Transformation, data mining, evaluation, and visualization.
4. Powerful high-dimensional plotting facilities.
5. A multi-layered data view concept ensures efficient data handling.
6. Definition of re-usable building blocks.
7. The machine learning library WEKA is fully integrated.
8. Access to data sources such as Excel, Access, Oracle, IBM DB2, Microsoft SQL Server, Sybase and so forth.
9. Rapid prototyping of data mining processes by nesting operator chains and complex operator trees.
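The operator-chaining idea (each operator performs one task and its output feeds the next) can be illustrated with a small sketch. This is a conceptual analogy in Python, not RapidMiner's actual Java API; the two stand-in operators mirror the Transform Cases and Tokenize operators used later in this paper.

```python
# Conceptual sketch of a RapidMiner-style "Process": a chain of operators
# where each operator's output becomes the next operator's input.
import re

def run_process(operators, document):
    """Apply each operator in turn, feeding output to the next."""
    for op in operators:
        document = op(document)
    return document

def transform_cases(text):
    """Stand-in for the Transform Cases operator (lower case)."""
    return text.lower()

def tokenize(text):
    """Stand-in for the Tokenize operator in non-letter mode."""
    return [t for t in re.split(r"[^a-z]+", text) if t]

tokens = run_process([transform_cases, tokenize], "RapidMiner counts Words!")
# -> ['rapidminer', 'counts', 'words']
```

Swapping, removing, or nesting operators changes the pipeline without touching the other steps, which is the reusability the feature list above refers to.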

505
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

2. IMPLEMENTATION

2.1 Process Document from Files

Consider a resume in text format as shown in Figure 2.1.

Figure 2.1 - Resume in .txt format

To process the resume, select the Text Processing group of RapidMiner. This group contains operators to load and process non-structured textual data and to transform such data into structured forms for further analysis.

The Process Document from Files operator of this group generates word vectors from a text collection stored in multiple files. This operator uses one single TextObject as input for creating a term vector, so the resulting example set will consist of just one single example. In the text directories parameter, arbitrary directories can be specified. All files matching the given file ending will be loaded and assigned to the class value provided with the directory.

The whole process is shown in Figure 2.2, where the text file is selected from the directory and the parameters are defined as below:

• text directories: In this list arbitrary directories can be specified. All files matching the given file ending will be loaded and assigned to the class value provided with the directory. Range: list
• file pattern: A pattern for the files to be read. Usual wildcards like ? and * are supported. Range: string; default: '*'
• extract text only: If checked, structural information like XML or HTML tags will be ignored and discarded. Range: boolean; default: true
• use file extension as type: If checked, the type of the files will be determined by their extensions. Unknown extensions will be treated as text files. Range: boolean; default: true
• content type: The content type of the input texts. Range: txt, pdf, xml, html; default: txt
• encoding: The encoding used for reading or writing files. Range: SYSTEM, Big5, Big5-HKSCS, EUC-JP, EUC-KR, GB18030, IBM00858, IBM01140, IBM01141, IBM037, IBM1026, IBM1047, IBM273, IBM277, IBM278, IBM280, IBM284, IBM285, IBM297, IBM420, IBM424, IBM437, IBM500, UTF-32BE, UTF-32LE, UTF-8, windows-1250, windows-1251, x-windows-950, x-windows-50221, x-windows-874, x-windows-949, x-mswin-936, x-PCK, x-SJIS_0213, x-JISAutoDetect, x-Johab, x-MacArabic, x-MacCentralEurope, x-MacCroatian, x-MacCyrillic, x-MacDingbat, x-MacGreek, x-MacHebrew, x-MacIceland, x-MacRomania, x-MacSymbol, x-MacThai, x-MacTurkish, x-MacUkraine etc. Default: 'SYSTEM'

Figure 2.2 - Process Document.

Double-click the Process Document from Files operator to add the components Transform Cases and Tokenize, as shown below in Figure 2.3.

Figure 2.3 - Adding components.

2.2 Transform Cases

This operator transforms all characters in a document to either lower case or upper case, respectively, as shown in Figure 2.4.


Figure 2.4 - Transform case operator.

In this case (and by default), lower case is chosen for the transform to parameter.

2.3 Tokenize

This operator splits the text of a document into a sequence of tokens. There are several options for specifying the splitting points. You may split on all non-letter characters, which is the default setting; this results in tokens consisting of single words, the most appropriate option before finally building the word vector. If you intend to build windows of tokens or something similar, you will probably want to split complete sentences; this is possible by setting the split mode to specify characters and entering all splitting characters. The third option lets you define regular expressions and is the most flexible for very special cases. Here, each non-letter character is used as a separator; as a result, each word in the text is represented by a single token.

The mode parameter selects the tokenization mode. Depending on the mode, split points are chosen differently. The available tokenization modes are:

• non letters: Every non-letter character is a split point; this is the default mode.
• specify characters: The incoming document will be split into tokens on each of the given characters. For example, enter '.' to split into sentences. Default: '.:'
• regular expression: A regular expression defines the splitting points.
• linguistic sentences: A sentence is a linguistic unit consisting of one or more words that are grammatically linked.
• linguistic tokens: related tokens.

In this case the mode is non letters. The modes are shown in Figure 2.5.

Figure 2.5 - Tokenize component modes.

On clicking the Run button, the result is displayed as shown in Figure 2.6, where the occurrence count of each keyword in the document is displayed in numeric format.

Figure 2.6 - Word frequency count.

As shown above, the frequency of occurrence of each word in the document is displayed. The result is displayed in sorted order by default.

3. CONCLUSION

Rather than reading the whole text document, the user can find the occurrences of a particular word of interest through tokenization. At present, to find the frequency of a particular word the user has to scroll through the result list. There should be a provision for the user to find the frequency of a particular keyword of interest by specifying the keyword. Moreover, there should be a provision to compare two text documents by comparing keyword frequency occurrences. In this case, the comparison of resumes could be used to find the better candidate.
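The pipeline described in this paper (lower-casing, non-letter tokenization, word-frequency counting), plus the keyword lookup and two-document comparison the conclusion calls for, can be sketched outside RapidMiner in a few lines. This is our illustrative Python equivalent, not a RapidMiner extension, and all function names are hypothetical.

```python
# Sketch of Transform Cases + Tokenize + word frequency counting in plain
# Python, with a per-keyword lookup and a two-resume comparison.
import re
from collections import Counter

def word_frequencies(text):
    """Lower-case the text, split on non-letters, count each token."""
    tokens = re.split(r"[^a-z]+", text.lower())
    return Counter(t for t in tokens if t)

def keyword_count(text, keyword):
    """Frequency of one user-specified keyword (no scrolling needed)."""
    return word_frequencies(text)[keyword.lower()]

def compare_resumes(resume_a, resume_b, keywords):
    """Per-keyword counts for two documents, e.g. to rank candidates."""
    fa, fb = word_frequencies(resume_a), word_frequencies(resume_b)
    return {k.lower(): (fa[k.lower()], fb[k.lower()]) for k in keywords}
```

For example, `keyword_count(resume_text, "SQL")` returns the count for that one keyword, and `compare_resumes(r1, r2, ["java", "sql"])` pairs the counts from two resumes side by side.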


REFERENCES

[1] Text mining, http://en.wikipedia.org/wiki/Text_mining.

[2] RapidMiner, http://en.wikipedia.org/wiki/RapidMiner.

[3] RapidMiner Studio, http://rapidminer.com/products/rapidminer-studio/.

[4] To find frequency of the words using RapidMiner (2012). Retrieved June 22, 2012, from http://gunjanaaggarwal.blogspot.in/2012/07/words-frequency-text-analytics.html.

[5] Value and benefits of text mining, http://www.jisc.ac.uk/reports/value-and-benefits-of-text-mining.

[6] Tanu Verma, Renu, Deepti Gaur, "Tokenization and Filtering Process in RapidMiner," International Journal of Applied Information Systems (IJAIS), ISSN: 2249-0868, Volume 7, No. 2, April 2014.

[7] Jordan Shterev, "Demo: Using RapidMiner for Text Mining," Digital Presentation and Preservation of Cultural and Scientific Heritage, issue III/2013, pages 254-256.

[8] Tipawan Silwattananusarn and Kulthida Tuamsuk, "Data Mining and Its Applications for Knowledge Management: A Literature Review from 2007 to 2012," International Journal of Data Mining & Knowledge Management Process (IJDKP), Vol. 2, No. 5, September 2012.

Mechanical Engineering

Comparative High-Temperature Corrosion Behavior of D-Gun Spray Coatings on ASTM-SA213, T11 Steel in Molten Salt Environment
Ankur Goyal
Student, Department of Mechanical Engg, Bhai Gurdas Institute of Engg. & Tech, Sangrur, Punjab, India.
gylankur@gmail.com

Rajbir Singh
Assistant Professor, Department of Mechanical Engg, Bhai Gurdas Institute of Engg. & Tech, Sangrur, Punjab, India.
ghuman008@gmail.com

Gurmail Singh
Assistant Professor, Department of Mechanical Engg, RIMT - Maharaj Aggersen Engineering College, Mandi Gobindgarh, Punjab, India.
gurmail.malhi@gmail.com

ABSTRACT

Alloys and metals get corroded when exposed to high temperature in air or in the actual boiler environment of a thermal plant. In the present study, a Ni-20Cr coating was deposited by the Detonation Gun process on ASTM-SA213, T-11 boiler steel. Coating characterization was done using SEM/EDS and XRD analysis. Cyclic corrosion studies were carried out in a molten salt environment at 900°C for 50 cycles. Each cycle consisted of 1 hour of heating in a silicon carbide tube furnace followed by 20 minutes of cooling in air. The kinetics of weight gain or loss were measured after each cycle and the samples visually examined; the surfaces of the exposed samples were characterized by SEM/EDS and XRD analysis. The results obtained showed the better performance of the Ni-20Cr coated T-11 boiler steel compared with the uncoated T-11 boiler steel.

Keywords: D-Gun coatings, Ni-20Cr, Corrosion.

1. INTRODUCTION

Attaining high temperatures is very important for the development of civilization in almost every country. Materials in high-technology areas need to operate in harsh conditions of corrosive environment, temperature, pressure, etc. The invention and discovery of high-temperature alloys and materials are looked to with an expectation of increasing the lifetime of boilers, gas turbines, etc. by giving improved strength, efficiency and corrosion-resistant properties. Gas turbines in aircraft, fossil-fuelled power plants, refineries and heating elements for high-temperature furnaces are some examples where corrosion restricts their use and reduces their life, severely reducing efficiency. The corrosion that occurs at high temperature is called hot corrosion, or sometimes dry corrosion. Hot corrosion is the accelerated oxidation of a material at elevated temperature induced by a thin film of fused salt deposit [1].

The annual direct loss of natural resources, i.e. metals, due to environmental degradation is substantial. In the USA, the loss due to corrosion is around 4% of GDP. In India, corrosion losses are around Rs. 1 lakh crore per annum. Around 80% of the unscheduled shutdowns and breakdowns in industries are due to corrosion and process fouling.

Metallic components in coal-fired boilers are exposed to severe corrosive atmospheres and high temperatures. Selection of material and its preparation are very important for the efficient functioning of the system components. Alloys used at high temperature should possess good mechanical properties along with erosion-corrosion resistance. Therefore a composite system of a base material providing the necessary mechanical properties with a protective surface layer, different in structure or chemical composition, can be an optimum choice.

This paper is concerned with high-temperature oxidation, hot corrosion and its mechanism, and the related salt chemistry. The main focus is on hot corrosion of boiler steels, its preventive measures, and coating techniques.

Hot corrosion became a topic of important and popular interest in the late 1960s, as military gas turbine engines suffered severe corrosion during the Viet Nam conflict while operating over seawater. Metallographic inspection of failed parts often showed sulfides of nickel and chromium, so the mechanism was initially called "sulfidation" [2].

An increasing demand for more electricity, reduced plant emissions and greater efficiency is forcing power plants to increase the steam temperature and pressure of boilers. Ultra-supercritical steam conditions greater than 31 MPa and 600°C have been adopted, and a thermal efficiency of a pulverised coal-fired boiler of up to 45% has been obtained. Superheater and reheater materials will therefore be required which have high creep rupture strength and high corrosion resistance at temperatures of about 900°C and above [3, 4]. Superalloys can be used to meet these stringent material targets [5, 6], but they are unable to meet both the high-temperature strength and the high-temperature corrosion resistance requirements simultaneously [7, 8]. Protective coatings can be used on superalloys to meet the latter requirement. Coatings can add value to products of up to 10 times the cost of the coating [9]. Even if the material withstands high temperature without a coating, the coating enhances the lifetime of the material.

1.1 Oxidation

Oxidation is a type of corrosion involving the reaction between a metal and air or oxygen at high temperature in the absence of water or an aqueous phase. It is also called dry corrosion. The rate of oxidation of a metal at high temperature depends on the nature of the oxide layer that forms on the surface of the metal [10].

Most metals are thermodynamically unstable in air and react with oxygen to form an oxide. As this oxide usually develops as a layer or scale on the surface of the metal or alloy, it can give protection by acting as a barrier that separates the metal


from the gas. The establishment of an oxide scale on an alloy occurs by a nucleation and growth process [11].

1.2 Hot Corrosion

Hot corrosion may be defined as accelerated corrosion resulting from the presence of salt contaminants such as Na2SO4 and NaCl, which combine to form molten deposits that damage the protective surface oxides [12]. Corrosion is the deterioration or destruction of metals and alloys in the presence of an environment by chemical or electrochemical action. Corrosion is an irreversible interfacial reaction of a material (metal, ceramic or polymer) with its environment which results in its consumption or dissolution into the material of a component of the environment. Often, but not necessarily, corrosion results in effects detrimental to the usage of the material considered.

1.3 High-Temperature Corrosion

Hot corrosion is a high-temperature analogue of aqueous atmospheric corrosion. A thin film deposit of fused salt on an alloy surface in a hot oxidizing gas causes accelerated corrosion kinetics. Recognition of the problem and a search toward a mechanistic understanding and engineering abatement were initiated in response to the severe corrosion attack on military gas turbines during the Viet Nam conflict. Initially, researchers were misled by the observation of corrosion-product sulphides beneath a fused film of sodium sulphate to denote the problem and mechanism as "sulfidation". Later, studies by Bornstein and Decresente [13, 14] and Goebel and Pettit [15, 16] demonstrated that the principal corrosive environmental component was not a vapor species, but rather the contact of the fused salt with the surface. This fused salt (sodium sulphate) exhibited an acid-base character which at the time was quite uncertain and undefined. The electrolytic nature of the fused salt film and its similarity to atmospheric corrosion led to the more proper naming of the problem as "hot corrosion".

High-temperature corrosion is chemical deterioration of a material (typically a metal) as a result of heating. This non-galvanic form of corrosion can occur when a metal is subjected to a hot atmosphere containing oxygen, sulphur or other compounds capable of oxidizing (or assisting the oxidation of) the material concerned. For example, materials used in aerospace, power generation and even in car engines have to resist sustained periods at high temperature, in which they may be exposed to an atmosphere containing potentially highly corrosive products of combustion. Another popular name for this type of corrosion is hot corrosion, which is also extensively used in the scientific community. Hot corrosion is a serious problem in power generation equipment, in gas turbines for ships and aircraft, and in other energy conversion and chemical process systems. The severity of hot corrosion in combustion processes can vary substantially and is significantly affected by the type of fuel used, its purity and the quality of the air required to support the combustion. The hot-corrosion attack involves more severe degradation of the alloy. The hot-corrosion process depends on parameters such as alloy composition, deposit composition, and temperature.

2. COATINGS

2.1 Coating Techniques

If a material is added or deposited onto the surface of another material (or the same material), it is known as a coating. Coatings are frequently applied to the surface of materials to serve one or more of the following purposes:

• To protect the surface from an environment that may produce corrosion or other deteriorative reactions.
• To improve the surface's appearance.

There are many coating deposition techniques available, and choosing the best process depends on the functional requirements (size, shape and metallurgy of the substrate), the adaptability of the coating material to the technique intended, the level of adhesion required, and the availability and cost of the equipment.

2.2 Thermal Spraying Technology

Thermal spraying is a process of depositing a superior material layer over a base metal or substrate, either to improve surface characteristics such as corrosion resistance, wear resistance and surface fatigue, or to obtain the desired dimension, size, surface appearance, etc. In thermal spraying the feedstock material, in the form of a wire or powder of metallic or non-metallic material, is melted or softened by flame or electricity and propelled onto the prepared workpiece to form a coating. Thermal spray coating processes are not only capable of applying coatings with excellent wear-resistant properties, but the range of materials capable of being sprayed is so wide that the applications for thermally sprayed wear-resistant coatings are unlimited [9].

2.3 Corrosion Protection

Plating, painting, coating and the application of enamel are the most common anti-corrosion treatments. They work by providing a barrier of corrosion-resistant material between the damaging environment and the structural material. Platings usually fail only in small sections, and if the plating is more noble than the substrate (for example, chromium on steel), a galvanic couple will cause any exposed area to corrode much more rapidly than an unplated surface would. Painted coatings are relatively easy to apply and have fast drying times, although temperature and humidity may cause drying times to vary. Corrosion can be prevented through using multiple products and techniques, including painting, coating, sacrificial anodes, cathodic protection (electroplating), and natural products of corrosion itself.

2.4 Hot-Corrosion Protection

In general, thermal spray coating is used to define a group of processes that deposit finely divided metallic or non-metallic materials onto a prepared substrate to form a coating. Thermal spraying is a technique capable of solving problems like wear, corrosion and thermal stability by depositing a thin layer on the surface of a substrate. The coating applied on a prepared substrate may be in powder, rod or wire form. The thermal spray gun uses a plasma arc, combustible gases or an electric arc to generate the heat necessary to melt


the coating material. The material changes to a plastic or molten state when heated and is accelerated by process gases. The acceleration of the molten material produces a confined stream of particles that the process gases transfer to the substrate. The particles strike the substrate with a large impact and form thin platelets that bond to the substrate and to each other. The particles build up and cool into a lamellar structure that forms a coating.

2.5 Thermal Spray Coating Process

Thermal spraying is a group of processes used to coat the surfaces of substrates, wherein the coating material is heated and projected with high velocity onto the surface of the substrate [15]. The coating material is heated to a molten state and accelerated by a compressed gas stream to the prepared surface of the base material. The molten particles strike the substrate, where they splat, spread, solidify and adhere to the irregularities of the prepared substrate and build up a new surface [16]. There are many thermal spray coating techniques, namely flame spraying, plasma spraying, detonation gun, high velocity oxy-fuel (HVOF) and arc spraying.

In addition, multi-phase coatings, such as cermets, can have enhanced pullout due to the different sizes and densities of the as-sprayed powders. Consequently, if care is not taken when preparing a thermal spray coating for analysis, pullout can cause erroneous porosity, volume %, and even chemistry measurements. Several papers have dealt with metallographic preparation and routine analysis of thermal spray coatings [17].

3. MATERIAL AND METHODS

3.1 Selection of Substrate Material

The candidate material for the study was selected after consultation with CHEEMA BOILERS LIMITED (INDIA). Boiler steel T-11 (ASTM-SA213) has been selected as the substrate material for the present study. This boiler steel is used as a boiler tube material in some of the power plants in Northern India. The material for the study was made available by CHEEMA BOILERS LIMITED (INDIA).

3.2 Preparation of Material

Specimens with dimensions of approximately 20 mm x 15 mm x 5 mm were cut from the alloy tubes. The specimens were prepared manually, and care was taken to avoid any structural changes in the specimens. The specimens were grit blasted with Al2O3 (grit 60) before the deposition of the coating.

3.3 Deposition of Coatings

The available coating material Ni-20Cr, in powder form, was used as the coating material in the study. The detonation gun technique was selected for deposition of the coating. The coating was applied by SVX POWDER M SURFACE ENGG. PVT LTD., GREATER NOIDA (INDIA).

3.5 Hot Corrosion Studies

Hot corrosion studies were conducted at 900°C in the laboratory using a silicon carbide tube furnace, calibrated to a variation of ±5°C. The uncoated samples were subjected to mirror polishing, whereas coated samples were subjected to wheel cloth polishing for 5 min. Thereafter, the samples were heated in an oven up to 250°C, and a salt mixture of Na2SO4-60% V2O5 dissolved in distilled water was coated on the warm polished samples with the help of a camel hair brush. The amount of the salt coating varied in the range 3.0-5.0 mg/cm2. The coated samples were then dried at 110°C for 3-4 h in the oven to remove the moisture, and weighed. Hot corrosion studies were carried out for 50 cycles. Each cycle consisted of 1 h of heating at 900°C followed by 20 min of cooling at room temperature. The weight of the samples was measured at the end of each cycle, and spalled scale was also taken into consideration. The corroded samples were analysed by XRD, SEM/EDAX and EPMA.

4. EXPERIMENTAL SETUP AND PROCEDURE FOR HIGH-TEMPERATURE HOT CORROSION STUDIES

The high-temperature oxidation and hot corrosion study was conducted at a temperature of 900°C using a silicon carbide tube furnace in the laboratory. Firstly, the furnace was calibrated using a platinum-rhodium thermocouple and a temperature indicator with a variation of ±3°C. The 900°C heating zone in the tube was located with the help of the thermocouple. The uncoated steel specimens were cleaned with acetone before the study. Alumina boats were used to place the samples in the furnace for the corrosion studies.

The boats were preheated at a constant temperature of 500°C for 6-7 hours. This was done to ensure that their weight would remain constant during the high-temperature corrosion study. For the experiment, each sample was kept in a boat and the weight of the sample was measured. Then the sample was inserted into the heating zone of the furnace at 900°C. The holding time in the furnace was 1 hour, after which the boat with the sample was weighed with the help of an electronic weight balance of HEICO make (Type: class II). This constituted one cycle of the study. The study was carried out for 50 cycles. The scale that formed and fell into the boat was also taken into account for the weight change measurement.

4.1 High-Temperature Corrosion Studies (Molten Salt Bath Environment)

Cyclic corrosion studies were performed in molten salt of Na2SO4-60 wt.% V2O5 for 50 cycles. Each cycle consisted of 1 hour of heating at the given temperature, i.e. 900°C, in a silicon carbide tube furnace, followed by 20 min of cooling at room temperature. The cyclic conditions were chosen to create a more aggressive corrosion attack situation [40, 27]. Na2SO4-60% V2O5 has been selected for the study due to the
fact that the vanadium and sodium are common impurities in
3.4 Coating Formulation low-grade fuels.
Detonation-gun process was used to apply coating on the super The mixture of Na2SO4 and V2O5 in the ratio of 40:60 (40%
alloys at SVX powder M Surface Engineering Pvt. Ltd, Greater Na2SO4 and 60% V2O5 by weight) constitutes a eutectic with
Noida (India). Standard spray parameters were used for a low melting point of 5500C and provides a very aggressive
depositing the Ni-20Cr coating. All the process parameters environment for hot corrosion to occur and the corrosion
were kept constant throughout the coating process. The increases with the increase in the temperature and V2O5
thickness of the coatings was measured from Electronic content in the mixture [18]. The studies were performed for
magnetic Induction Thickness Gauge. The average coating uncoated as well as coated specimens for the purpose of
thickness of the coatings ranges from 250 to 350 micron. comparison. Na2SO4-60%V2O5 mixture prepared in distilled
water was applied uniformly on the warm specimens with the

511
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

help of a hair brush. The amount of the salt coating was kept in the range of 3.0-5.0 mg/cm2. The salt-coated specimens as well as the alumina boats were then kept in the oven for 3-4 h at 150°C. They were then weighed again before exposure to the hot corrosion tests in the silicon carbide tube furnace. The salt mixture was applied on the surface only once, at the beginning of the test, and was not replenished during the test. During the hot corrosion runs, the weight of the boats and specimens was measured together at the end of each cycle with the help of an electronic balance with a sensitivity of 1 mg. The spalled scale was also included in the weight-change measurements to determine the total rate of corrosion. The surface of the corroded specimens was visually observed to record colour, spalling and peeling of the scale during cyclic corrosion. Efforts were made to formulate the kinetics of corrosion. XRD, SEM and EDS techniques were used to analyse the corrosion products.

4.2 Characterization of Coatings
4.2.1 Analysis of Corrosion Products
All the specimens subjected to hot corrosion at 900°C for 50 cycles in the molten salt environment were analysed for the characterization of corrosion products. The surface and cross-section of the corroded specimens were analysed using various analytical techniques such as visual observations, weight change studies, evaluation of corrosion rate in terms of wall thickness lost, X-ray diffraction (XRD) analysis, SEM/EDS analysis, etc.

4.2.2 XRD Analysis of the Ni-Cr Coated T-11 Boiler Steel Substrate
The XRD analysis of the uncoated and coated specimens was carried out with an X-ray diffractometer (goniometer) at Thapar University, Patiala, to identify the various phases formed on the surface. The specimens were scanned at a speed of 2°/min over the 2θ range of 10° to 110°, and the intensities were recorded.
The XRD pattern shown in Fig. 1 for the D-gun sprayed Ni-Cr coating on T-11 boiler steel in the as-sprayed condition, before exposure to the molten salt environment, indicates the formation of CrNi (chromium nickel) and NiC (nickel carbide).

[XRD pattern, Ni-Cr coating as-sprayed: peaks indexed as Chromium Nickel (CrNi), Nickel Carbide (NiC), Carbon (C) and Silicon (Si); intensity (counts) versus position, 20-100 °2θ, Cu radiation]

Figure 1. XRD diffraction pattern for Ni-20Cr coated T-11 boiler steel before hot corrosion

Table 1: Major and minor phases identified by XRD analysis of the Ni-Cr coated T-11 boiler steel substrate.

Ref. Code    | Score | Compound Name    | Displacement [°2Th.] | Scale Factor | Chemical Formula
01-071-7594  | 39    | Chromium Nickel  | 0.435                | 0.848        | CrNi
00-026-1082  | 16    | Carbon           | 0.196                | 0.875        | C
01-089-9054  | 13    | Silicon          | 0.090                | 0.899        | Si
00-014-0020  | 29    | Nickel Carbide   | -0.193               | 0.434        | NiC

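Peak positions in patterns like these are matched to reference cards through Bragg's law, nλ = 2d sin θ: each peak position in °2θ maps to a lattice spacing d. A minimal sketch of the conversion (the peak positions used here are illustrative, not taken from the measured patterns; the Cu Kα wavelength is the standard tabulated value):

```python
import math

CU_KALPHA = 1.5406  # Cu K-alpha wavelength in angstroms (standard value)

def d_spacing(two_theta_deg, wavelength=CU_KALPHA, n=1):
    """Bragg's law n*lambda = 2*d*sin(theta); returns d in angstroms."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))

# Illustrative 2-theta positions (deg), not the measured ones
for two_theta in (30.0, 44.5, 64.0):
    print(f"2theta = {two_theta:5.1f} deg  ->  d = {d_spacing(two_theta):.3f} A")
```

Each computed d value would then be compared against the d-spacings listed on the PDF reference cards (the "Ref. Code" entries in the tables) to assign a phase.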

4.2.3 X-Ray Diffraction (XRD) Analysis of Un-Coated Corroded Substrate
The X-ray diffraction pattern of the un-coated specimen subjected to corrosion testing in the molten salt environment of Na2SO4-60%V2O5 is shown in Fig. 2. The oxide scale of the uncoated steel is found to consist mainly of Cr-O at higher intensity, with a higher amount of Fe and O in the scale, along with Cr2O3 and Mg2SiO4. Some small peaks of SiO4 and CrP4 (chromium phosphide) are also observed.

[XRD pattern, uncoated T-11 after hot corrosion: peaks indexed as Cr-O, Mg2SiO4, CrP4 and Cr2O3; intensity (counts) versus position, 20-100 °2θ, Cu radiation]

Figure 2. XRD diffraction pattern of un-coated T-11 boiler steel after hot corrosion

Table 2: Major and minor phases identified by XRD analysis of the un-coated hot corroded T-11 boiler steel specimens after 50 cycles.

Ref. Code    | Score | Compound Name       | Displacement [°2Th.] | Scale Factor | Chemical Formula
00-006-0532  | 28    | Chromium Oxide      | -0.115               | 0.315        | Cr-O
01-074-1684  | 7     | Magnesium Silicate  | -0.368               | 0.186        | Mg2SiO4
01-071-0547  | 15    | Chromium Phosphide  | 0.175                | 1.096        | CrP4
01-070-3765  | 19    | Chromium Oxide      | -0.266               | 0.452        | Cr2O3

4.2.4 X-Ray Diffraction (XRD) Analysis of Ni-20%Cr Corroded Substrate
The XRD pattern of the Ni-20%Cr coated specimen depicted in Fig. 3 shows that both the feedstock and the coating have Ni as the most prominent phase, with a small intensity of chromium. A very low intensity Fe3O4 peak is identified in the XRD pattern of the coating. An enhanced oxide scale of Fe and NiO was observed, along with SiC at the highest intensity, which serves to improve the hardness of the coating. However, no intermetallic phase was revealed.


[XRD pattern, Ni-Cr coated T-11 after hot corrosion: peaks indexed as Fe, Mo2C, SiC and NiO; intensity (counts) versus position, 20-100 °2θ, Cu radiation]

Figure 3. XRD diffraction pattern of Ni-Cr coated T-11 boiler steel after hot corrosion

Table 3: Major and minor phases identified by XRD analysis of the Ni-Cr coated hot corroded T-11 boiler steel specimens after 50 cycles.

Ref. Code    | Score | Compound Name      | Displacement [°2Th.] | Scale Factor | Chemical Formula
01-071-4650  | 26    | Iron               | -0.200               | 0.822        | Fe
01-071-4292  | 20    | Phosphorus         | -0.453               | 0.195        | P
00-015-0457  | 23    | Molybdenum Carbide | -0.197               | 0.073        | Mo2C
01-080-0017  | 13    | Carbon             | -0.370               | 0.027        | C
01-089-1975  | 6     | Silicon Carbide    | 0.137                | 0.014        | SiC
00-022-1189  | 63    | Nickel Oxide       | 0.009                | 0.926        | NiO

5.5 Weight Change and Thickness Loss Data for Corrosion Studies in Molten Salt Environment at 900°C for 50 Cycles
Thermogravimetric data for coated and un-coated T-11 boiler steel subjected to cyclic testing in the molten salt environment in a silicon carbide tube furnace at Rayat Bahra Institute of Engineering & Technology is presented in Fig. 5.4 in the form of graphs of weight gain/loss per unit area (mg/cm2) versus time, expressed in number of cycles. It can be observed from the graph that uncoated and Ni-20Cr coated T-11 boiler steel have shown


Figure 5.4(a): Weight change/area vs number of cycles, comparison between un-coated and coated specimens

Figure 5.4(b): Bar graph showing the overall weight change/area after the corrosion test in the molten salt environment

Weight loss data calculation:

Relative weight loss = (Total weight loss of un-coated substrate - weight loss of coated substrate) / (weight loss of un-coated substrate)
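The weight-change record described above also lends itself to a parabolic-rate check: if scale growth obeys (ΔW/A)² = k_p·t, a least-squares line of (ΔW/A)² against exposure time gives k_p, and the per-area losses feed the relative-loss formula. A sketch with synthetic data (the k_p and loss values below are assumed for illustration, not the measured ones):

```python
# Parabolic-rate check: (dW/A)^2 = kp * t, fitted through the origin.
cycles = list(range(1, 51))                 # 1 h heating per cycle -> t in hours
kp_true = 4.0e-4                            # mg^2 cm^-4 h^-1 (assumed value)
gain_per_area = [(kp_true * t) ** 0.5 for t in cycles]   # synthetic dW/A, mg/cm^2

# Least-squares slope of y = (dW/A)^2 versus t, forced through the origin
y = [g * g for g in gain_per_area]
kp_fit = sum(t * yi for t, yi in zip(cycles, y)) / sum(t * t for t in cycles)
print(f"fitted kp = {kp_fit:.2e} mg^2 cm^-4 h^-1")

# Relative weight loss per the formula above (illustrative loss values, mg/cm^2)
loss_uncoated, loss_coated = 12.0, 3.0
reduction = (loss_uncoated - loss_coated) / loss_uncoated * 100
print(f"coating reduced corrosion loss by {reduction:.0f}%")
```

A near-linear (ΔW/A)² vs t plot would support parabolic oxidation kinetics; strong deviations would point to spallation-dominated behaviour of the kind reported for the uncoated steel.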

6. CONCLUSIONS
The high temperature corrosion behaviour of coated and uncoated T-11 boiler steel has been investigated in the molten salt environment at 900°C for 50 hours (50 cycles). The following conclusions are made:
1. Under the employed spraying conditions, the Ni-20Cr coating has been successfully deposited on ASTM-A-213 T-11 by the detonation-gun spraying technique.
2. The D-gun sprayed Ni-20Cr coating may be recommended as a suitable process-coating combination for the said environment.
3. The un-coated T-11 boiler steel suffered accelerated oxidation during exposure at 900°C in air as well as in the molten salt environment in comparison with its coated


substrate. It has also been observed that the molten salt corrosion loss for the Ni-Cr coated steel was significantly lower than for the un-coated steel.
4. The uncoated T-11 boiler steel and the Ni-20Cr coated T-11 boiler steel underwent intense spalling and peeling of scale and enormous weight gain during corrosion attack in the molten salt environment at 900°C after 50 cycles.
5. In the case of un-coated T-11 boiler steel, Cr-O and Mg2SiO4 were identified as the main phases by XRD and EDS.
6. In the Ni-Cr coated steel, NiO, Fe3O4 and SiC were identified as the major phases in the aggressive environment, as revealed by XRD and EDS.
7. Corrosion resistance for coated and uncoated T-11 boiler steel follows the sequence: Ni-20Cr coated > Uncoated T11.
8. The coating was found to have significant resistance to oxide scale spallation during cyclic exposures. Moreover, the coating was found to have retained continuous contact with the substrate steel during these thermal cycles, which indicates that the coating has good adhesion strength.

REFERENCES
[1] Nelson, H.W., Krause, H.H., Unger, E.W., Putnam, A.A., Slander, C.J., Miller, P.D., Hummel, J.D. and Landry, B.A. (1959) "A Review of Available Information on Corrosion and Deposits in Coal and Oil Fired Boilers and Gas Turbines," Report of ASME Research Pub., Pergamon Press and ASME, New York, pp. 1-197.
[2] Stringer, J. (1987) "High temperature corrosion of super alloys," Material Science Technology, Vol. 73, pp. 482-93.
[3] Wright, I.G. (1987) "High Temperature Corrosion," in Metals Handbook, Vol. 13, 9th ed., Metals Park: ASM, pp. 97-103.
[4] Stringer, J. (1987) "High temperature corrosion of super alloys," Material Science Technology, Vol. 73, pp. 482-93.
[5] Otsuka, N. and Rapp, R.A. (1990) "Hot corrosion of pre-oxidised Ni by a thin fused Na2SO4 film at 900°C," J. Electrochemical Society, Vol. 137(1), pp. 46-52.
[6] Natesan, K. (1993) "Applications of Coatings in Coal-Fired Energy Systems," Surface and Coatings Technology, Vol. 56, pp. 185-197.
[7] Hocking, M.G. (1993) "Coatings resistant to erosive/corrosive and severe environments," Surface and Coatings Technology, Vol. 62, pp. 460-466.
[8] Simms, N.J., Oakey, J.E., Stephenson, D.J., Smith, P.J. and Nicholls, J.R. (1995) "Erosion-corrosion modelling of gas turbine materials for coal fired combined cycle power generation," Wear, Vol. 186-187, pp. 247-255.
[9] Heath, G.R., Heimgartner, P., Irons, G. and Miller, R. (1997) "An Assessment of Thermal Spray Coating Technologies for High Temperature Corrosion Protection," Materials Science Forum, Vol. 251-254, pp. 809-816.
[10] Stott, F.H. (1998) "The Role of Oxidation in the Wear of Alloys," Tribology International, Vol. 31, pp. 61-71.
[11] Khanna, A.S. and Jha, S.K. (1998) "Degradation of Materials under Hot Corrosion Conditions," Trans. Indian Inst. Met., Vol. 51, No. 5, pp. 279-290.
[12] Khalid, F.A., Hussain, N. and Shahid, K.A. (1999) "Microstructure and morphology of high temperature oxidation in superalloys," Materials Science and Engineering, Vol. A265, pp. 87-94.
[13] Chen, K.C., He, J.L., Chen, C.C., Leyland, A. and Matthews, A. (2001) "Cyclic Oxidation Resistance of Ni-Al Alloy Coatings Deposited on Steel by a Cathodic Arc Plasma Process," Surface and Coatings Technology, Vol. 135, pp. 158-165.
[14] Trompetter, W.J., Markwitz, A. and Hyland, M. (2002) "Role of oxides in high velocity thermal spray coatings," Nuclear Instruments and Methods in Physics Research B, Vol. 190, pp. 518-523.
[15] Eliaz, N., Shemesh, G. and Latanision, R.M. (2002) "Hot Corrosion in Gas Turbine Components," Engineering Failure Analysis, Vol. 9, pp. 31-43.
[16] Eliaz, N., Shemesh, G. and Latanision, R.M. (2002) "Hot Corrosion in Gas Turbine Components," Engineering Failure Analysis, Vol. 9, pp. 31-43.
[17] Rapp, R.A. (2002) "Hot corrosion of materials: a fluxing mechanism," Corrosion Science, Vol. 44, pp. 209-221.
[18] Uusitalo, M.A., Vuoristo, P.M.J. and Mantyla, T.A. (2002) "High temperature corrosion of coatings and boiler steels in reducing chlorine-containing atmosphere," Surface and Coatings Technology, Vol. 161, pp. 275-285.
[19] Yamada, K., Tomono, Y., Morimoto, J., Sasaki, Y. and Ohmori, A. (2002) "Hot corrosion behavior of boiler tube materials in refuse incineration environment," Vol. 65, pp. 533-540.
[20] Srikanth, S., Ravikumar, B., Das, S.K., Gopalkrishna, K., Nandakumar, K. and Vijayan, P. (2003) "Analysis of Failures in Boiler Tubes due to Fireside Corrosion in a Waste Heat Recovery Boiler," Engineering Failure Analysis, Vol. 10, pp. 59-66.


ANALYSIS AND OPTIMIZATION OF VOID SPACES IN


SINGLE PLY RAW MATERIAL USING FINITE ELEMENT
METHOD & FUSED DEPOSITION MODELING
Harmeet Singh JPS Oberoi Rajmeet Singh
M-tech Student Professor & Dean (R&D) Assistant Professor
Deptt of Mechanical Engineering Deptt of Mechanical Engineering Deptt of Mechanical Engineering
BBSBEC, Fatehgarh Sahib BBSBEC, Fatehgarh Sahib BBSBEC, Fatehgarh Sahib
waliamangwal@gmail.com jpsoberoi@gmail.com rajmeet.singh@bbsbec.ac.in

1. ABSTRACT
Fused Deposition Modelling (FDM) technology is based on decomposition of 3-D computer models into thin cross-sectional layers, followed by physically forming the layers and stacking them up layer by layer. FDM provides the freedom to add material in the desired area, and we are also able to create hollow regions in certain portions of a layer. In this way a low-weight, good-strength single ply raw material with a hollow cross-section is produced. Void spaces were created in single ply raw material, and FEM analysis was applied to select the material. Results of the FEM analysis show that ABS material performs better in compression than Nylon101 and Nylon6/10, so ABS material was selected for manufacturing of the specimens. ABS specimens were manufactured with the help of FDM. Compressive tests of the specimens at 8000 N show that two small square structures give optimum results for ABS material.

KEYWORDS
STL - Standard Triangulation Language, FEM - Finite Element Method, ABS - Acrylonitrile Butadiene Styrene, FDM - Fused Deposition Modelling

2. INTRODUCTION
In modern times, low-weight, high-strength materials are in high demand with the development of the automobile and aerospace sectors. The term rapid prototyping refers to a class of technologies that can automatically construct physical models from computer aided design data and allow designers to quickly create tangible prototypes of their designs rather than just two-dimensional pictures. Fused deposition modelling is based on decomposition of 3-D computer models into thin cross-sectional layers, followed by physically forming the layers and stacking them up layer by layer. RP provides the freedom to add material in the desired area, and we are also able to hollow certain portions of a layer. In this way a low-weight, good-strength single ply raw material with a hollow cross-section is produced.

3. PROBLEM STATEMENT
The main requirement is to create void spaces in single ply raw material to decrease its weight. FEM analysis is applied to compare Nylon101, Nylon6/10 and ABS materials. CAD drawings of the void spaces are created and then converted into STL files as input to the FDM machine. ABS specimens were manufactured with the help of FDM. Compressive tests were done on the specimens to find the compressive strength.

4. METHODOLOGY

Figure 1: Flow chart of methodology

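The material-screening rule used in the FEM phase of the methodology reduces to: apply the same 8000 N load to each candidate and keep the material with the smallest displacement (strength being taken as inversely proportional to displacement). A sketch using the reported no-void displacement values; the nominal-stress calculation is illustrative, assuming the load acts on the 25 mm x 5 mm face:

```python
# Material screening: under the same compressive load, pick the material
# with the minimum FEM displacement (reported no-void results, mm).
load_n = 8000.0
displacement_mm = {"Nylon 101": 0.166134, "Nylon 6/10": 0.2196229, "ABS": 0.161}

best = min(displacement_mm, key=displacement_mm.get)
print(f"selected material: {best} ({displacement_mm[best]} mm at {load_n:.0f} N)")

# Nominal compressive stress on the 25 mm x 5 mm loaded face (illustrative)
area_mm2 = 25.0 * 5.0
print(f"nominal stress = {load_n / area_mm2:.0f} MPa")  # N/mm^2 is MPa
```

The same one-liner selection applies unchanged to the two-small-square void results, where ABS again gives the smallest displacement.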

5. MATERIAL SELECTION BY FEM ANALYSIS
Three-dimensional drawings of the single ply raw material with void spaces were created in CAD software and analysed with FEM software. Displacement is the criterion for deciding strength: strength is inversely proportional to displacement, and therefore the compressive displacement is considered for selection of the material and the shape of the void space. The load applied on the specimen during the FEM analysis was 8000 N; a fixture is applied at the bottom surface and the load is applied at the top surface.

5.1 Compressive Strength of Various Materials (Without Void Space)

Table 1: Result for without-void structure
S No | Material    | Displacement (compressive test), mm
1    | Nylon 101   | 0.166134
2    | Nylon 6/10  | 0.2196229
3    | ABS         | 0.161

[Bar graph of displacement (compressive test) for Nylon101 (0.166134 mm), Nylon6/10 (0.2196229 mm) and ABS (0.161 mm)]
Figure 2: Result for without-void structure

5.2 Compressive Strength of Various Materials (Two Small Square Void Space)

Table 2: Result for two-small-square void structure
S No | Material    | Displacement (compressive test), mm
1    | Nylon 101   | 0.216401
2    | Nylon 6/10  | 0.0958644
3    | ABS         | 0.0928

[Bar graph of displacement (compressive test) for Nylon101 (0.216401 mm), Nylon6/10 (0.0958644 mm) and ABS (0.0928 mm)]
Figure 3: Result for two-small-square void structure

The results of the FEM analysis show that the ABS material has 0.161 mm displacement in the without-void structure and 0.0928 mm displacement in the two-small-square void structure, which is less displacement than Nylon101 and Nylon6/10. The ABS material therefore has better compressive strength, so ABS was selected for manufacturing of the specimens by fused deposition modelling.

6. MANUFACTURING OF SPECIMENS BY FUSED DEPOSITION MODELLING
Fused deposition modelling is used to manufacture the specimens of ABS material with the different void shape structures. The size of the specimen is 25 mm x 5 mm x 25 mm. The SolidWorks drawing in STL format is given as input to the FDM machine. The chamber temperature is maintained in the range of 75°C to 79°C and the nozzle temperature in the range of 310°C to 320°C. The nozzle of the machine spreads a minute layer of base material on the table; SR30XL soluble material is used as the base material, and its temperature is maintained in the range of 295°C to 300°C. The processor of the machine supplies model material through the nozzle to those places shown solid in the drawing, and supplies base material to those portions shown void, to support the structure. P430XL ABS model (IVR) material is used as the model material; its temperature is maintained at 305°C to 310°C. The process of


model and base material deposition continues until the product is completed, with a layer resolution of 0.2540 mm.

[Photograph]
Figure 4: Fused deposition modelling machine

For removing base material from the void spaces, a solution of the following chemicals in water is maintained at 50°C in vibrating mode:
A. Tetrasodium N,N-bis(carboxymethyl)-L-glutamate and citric acid
B. Sodium percarbonate

The temperature of the solution is maintained at 50°C with the help of an electrical heating element, and turbulence is created in the solution by vibration. The solution is alkaline in nature, and the specimen is dipped in it for 20 to 24 hours to remove the base material. After removal of the base material, the specimens were dried in the atmosphere.

[Photograph]
Figure 5: Solid specimen

[Photograph]
Figure 6: Two small square void specimen

7. COMPRESSION TEST
The compressive test is done at a test speed of 2 mm/min. The load applied on the specimen is 8000 N.

Table 3: Result of compressive test
Sr No. | Part Name                   | Result of compressive test (displacement in mm)
1      | Solid                       | 0.204
2      | Two small square void shape | 0.083

8. RESULT
The two-small-square void structure shows 0.083 mm displacement in the compressive test, so it is the better alternative under compressive load.

REFERENCES
[1]. Awate, S. and Kore, S. (2014) "Finite Element Based Analysis of the Effect of Internal Voids on the Strength and Stress Distribution of Component - Review," Journal of Engineering Research and Applications, Vol. 4, pp. 272-273.
[2]. Mireles, J. and Espalin, D. (2012) "Fused deposition modeling of metals," W.M. Keck Center for 3D Innovation, The University of Texas at El Paso, El Paso, TX.
[3]. Novacova and Kuric (2012) "Basic and advanced materials for Fused Deposition Modeling rapid prototyping technology," Manuf. and Ind. Eng., 11(1), ISSN 1338-6549, Faculty of Manuf. Tech.
[4]. Danas, K. and Aravas, N. (2012) "Numerical modeling of elasto-plastic porous materials with void shape effects at finite deformations," Composites, pp. 1-16.
[5]. Huang, L., Zhao, G. and Wang, Z. (2011) "Shape Quality Improvement of Three-Dimensional Hexahedral Meshes," Journal of Information & Computational Science, Vol. 8, pp. 4007-4014.
[6]. Kavcic, Babic, Osterman, Podobnik and Poberaj (2011) "Rapid prototyping system with sub-micrometer resolution for microfluidic applications," Springer-Verlag.
[7]. Benini, Mancini, Minutolo, Longhi and Montanari (2011) "A Modular Framework for Fast Prototyping of Cooperative Unmanned Aerial Vehicles," Journal of Intelligent and Robotic Systems.
[8]. Bagsik, A. and Schoppner, V. (2011) "Mechanical Properties of Fused Deposition Modeling Parts Manufacturing," ANTEC, Boston, pp. 1-5.
[9]. Haiou Zhang, Xinhong Xiong and Guilan Wang (2009) "Metal direct prototyping by using hybrid plasma deposition and milling," Journal of Materials Processing Technology, pp. 124-130.
[10]. Singh, R. (2010) "Three dimensional printing for casting applications: A state of art review and future perspective," Advanced Materials Research, Vol. 83-86, pp. 342-349.
[11]. Singh, J.P. and Singh, R. (2009) "Comparison of rapid casting solutions for lead and brass alloys using three dimensional printing," Proc. of the Institution of Mechanical Engineers, Part C, Journal of Mechanical Sciences, Vol. 223, No. 9, pp. 2117-2123.
[12]. Kanzaki, Bassoli, Iuliano, L. and Violante, M.G. (2007) "Engineering plastics and metals have been extensively replaced by polypropylene," Rapid Prototyping Journal, Vol. 13, No. 3, pp. 148-155.


[13]. Singh, R. (2010) "Future potential of rapid prototyping and manufacturing around the world," Rapid Prototyping Journal, Vol. 1, No. 1, pp. 4-10.
[14]. Eyer, D. and Drstvensek, K. (2010) "Technologies review for mass customization using rapid prototyping," Assembly Automation, Vol. 30, No. 1, pp. 39-46.
[15]. Gatto and Harris, Russel Anthony (2010) "Nondestructive analysis of external and internal structures in 3DP," Rapid Prototyping Journal, Vol. 17, No. 2, pp. 128-137.
[16]. Simon Li and Li Chen (2010) "Pattern-based reasoning for rapid redesign: a proactive approach," Res. Eng. Design, Vol. 21, pp. 25-42.
[17]. Haihua Wu, Dichen Li, Xiaojie Chen, Bo Sun and Xu (2010) "Rapid casting of turbine blades with abnormal film cooling holes using integral ceramic casting molds," Journal of Advanced Manufacturing Technology, Vol. 50, pp. 13-19.
[18]. Yagnik, D. (2010) "Fused Deposition Modeling - A Rapid Prototyping Technique for Product Cycle Time Reduction Cost Effectively in Aerospace Applications," Journal of Mechanical and Civil Engineering, pp. 62-68.
[19]. Xiaoyong Tian and Dichen Li (2010) "Rapid manufacture of net-shape SiC components," Journal of Advanced Manufacturing Technology, Vol. 46, pp. 579-587.
[20]. Wang, G., Li, H., Guan, Y. and Zhao, G. (2004) "A rapid design and manufacturing system for product development application," Rapid Prototyping Journal, Vol. 5, pp. 169-178.
[21]. Yongnian, Y., Shengjie, L. and Xiaohong, W. (2009) "Rapid Prototyping and Manufacturing Technology: Principle, Representative Technics, Applications, and Development Trends," Tsinghua Science and Technology, Vol. 14, pp. 1-12.
[22]. Bourell, D.L., Leu, M.C. and Rosen, D.W. (2009) "Roadmap for additive manufacturing: identifying the future of freeform processing," University of Texas at Austin.
[23]. Rochus, P., Kruth, J.P. and Carrus, R. (2007) "New applications of rapid prototyping and rapid manufacturing (RP/RM) technologies for space instrumentation," Acta Astronautica, Vol. 61, pp. 352-359.
[24]. Lee, C.S., Kim, S.G., Kim, H.J. and Ahn, S.H. (2007) "Measurement of anisotropic compressive strength of rapid prototyping parts," Journal of Materials Processing Technology, Vol. 187, pp. 627-630.
[25]. Palmer, J.A., Summers, J.L., Davis, D.W., Gallegos, P.L., Chavez, B.D., Yang, P., Medina, F. and Wicker, R.B. (2005) "Realizing 3-D Interconnected Direct Write Electronics within Smart Stereolithography Structures," IMECE2005-79360, Proceedings of 2005 ASME International Mechanical Engineering Congress and Exposition, ASME, Orlando, FL.
[26]. Khan, Z.A., Lee, B.H. and Abdullah, J. (2005) "Optimization of rapid prototyping parameters for production of flexible ABS object," Journal of Materials Processing Technology, pp. 54-61.
[27]. Jun Xie, Shuhuai Huang and Zhengcheng Duan (2005) "Positional correction algorithm of a laser galvanometric scanning system used in rapid prototyping manufacturing," Journal of Advanced Manufacturing Technology, Vol. 26, pp. 1348-1352.
[28]. Liu, Ming Leu, Richards and Schmitt (2004) "Dimensional accuracy and surface roughness of rapid freeze prototyping ice patterns and investment casting metal parts," Journal of Advanced Manufacturing Technology, Vol. 24, pp. 485-495.
[29]. Shin, Yang, Choi, Lee and Whang (2003) "A new rapid manufacturing process for multi-face high-speed machining," Journal of Advanced Manufacturing Technology, Vol. 22, pp. 68-74.
[30]. Kulkarni, P. and Dutta, D. (1999) "Deposition Strategies and Resulting Part Stiffnesses in Fused Deposition Modeling," ASME Journal of Manufacturing Science and Engineering, pp. 93-103.
[31]. Masood, S.H. (1996) "Intelligent Rapid Prototyping with Fused Deposition Modeling," Rapid Prototyping Journal, 2(1), pp. 24-33.
[32]. Agarwal, M.K., Jamalabad, V.R., Langrana, N.A., Safari, A., Whalen, P.J. and Danforth, S.C. (1996) "Structural Quality of Parts Processed by Fused Deposition," Rapid Prototyping Journal, pp. 4-19.


Analysis of the Enablers for Selection of Reverse


Logistics Service Provider: An Interpretive Structural
Modeling (ISM) Approach
Arvind Jayant Uttam Kumar
Department of Mechanical Engineering Department of Mechanical Engineering
Sant Longowal Institute of Engineering & Sant Longowal Institute of Engineering &
Technology, Longowal, Sangrur, Punjab – 148106, Technology, Longowal, Sangrur, Punjab – 148106,
INDIA INDIA
arvindjayant@gmail.com

ABSTRACT
Activities of reverse logistics are extensively practiced by lead acid battery manufacturing industries. One of the important problems experienced by company management in the battery manufacturing industries is the irregular supply of spent batteries/spent lead from the end users in the supply chain management of battery production. In the competitive business environment, the industry is involved in reuse, recycling, and remanufacturing functions using a third party logistics provider, which has an impact on the total performance of the firm. In the development of the reverse logistics concept and practice, the selection of providers for the specific function of reverse logistics support becomes more important. After a comprehensive literature survey, it was concluded that multiple dimensions and attributes must be used in the evaluation of a 3PRLP. The attributes play an important role in selecting a third party reverse logistics provider (3PRLP). Interpretive structural modeling (ISM) methodology is applied in the present work, which can be used for identifying and summarizing relationships among specific attributes for selecting the best third party reverse logistics provider among the 'n' 3PRLPs. The study has used three different research phases: identification of enablers from the literature, interviews with various department managers, and a survey in industry. A model of these enablers has been developed based upon experts' opinions. A clear understanding of these enablers will help organizations to prioritize better and manage their resources in an efficient and effective way.

Keywords
Interpretive structural modeling (ISM); Reverse Logistics Service; Reverse Logistics; MCDM

1. INTRODUCTION
With rapid business growth in globalization, some industries with relatively limited resources have to outsource some business functions or operations, or purchase raw materials or components/subcomponents from other small and medium enterprises, to establish an interrelated supply network. Consequently, if they would like to execute green programs to advance their environmental performance, they not only monitor their own operations but also coordinate other partners in their supply networks, including reverse logistics activities, material suppliers, manufacturers, distributors, and users. Supply chain managers thus ensure traditional performance criteria as well as environmental criteria, known as green supply chain management. The outsourcing of reverse logistics activities to a third party has now become a common practice. Reverse logistics encompasses the logistics activities all the way from used products no longer required by the user to products again usable in the market. It is the process of planning, implementing, and controlling the efficient, cost effective flow of raw materials, in-process inventory, finished goods and related information from the point of consumption to the point of origin for the purpose of recapturing value or proper disposal (Stock, 1998). The notion most intuitively related with such reverse activities involves the physical transportation of used products from the end user back to the producer. Reverse distribution activities involve the removal of defective and environmentally hazardous products from the hands of customers; this also includes products that have reached the end of their usable life. It is a process whereby companies can become more environmentally efficient through reusing and reducing the amount of materials used. The commonly known drivers for logistics outsourcing are the basic requirement of organizations to concentrate on core expertise, cost revision, design of supply chain partnerships, re-engineering of the company, success of firms using contract logistics, globalization forces, service level improvement, and efficient operations strategies. One of the most important reasons for outsourcing logistics activities is the capability of logistics providers to support their clients with expertise and exposure that would otherwise be difficult to acquire or costly to have in-house. The most common outsourced activities are warehousing operations, outbound logistics transportation, customs brokerage, and inbound logistics transportation (Jayant et

521
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

al., 2012).Section 2 consists of literature review of transportation, customs brokerage, and warehousing
Reverse Logistics Provider. Enablersfor selection of RSP operations. In the view of growing developments of
in the battery industry have been identified and described logistics outsourcing, many 3PL service providers are
in section 3. Step wise elaborated procedure of now offering a variety of services at very attractive rates.
interpretive structural modeling in section 4. Section 5 is These services mainly involve supply chain to supply
Conclusions and then limitations of the study and scope chain business relationships, where not only the user is a
of the future work have been discussed in subsequent critical stakeholder but also their customers who are
sections. directly affected by the quality of service of the
provider.Fifteen years ago, “Logistics” had not yet been
2. LITERATURE REVIEW much explored. However today, with the development of
A thorough literature review is presented in this paper to information technology and increased customer demand,
find out the important evaluation criteria that need to be the enterprises have to handle lots of thorny tasks to take
considered in reverse logistics outsourcing partner care of the service problems. Therefore “Logistics” are
evaluation. Important issues such as the getting considerable attentions from the enterprises.
tools/techniques/methodology presently being used for
the evaluation of a 3PL service provider and specific 3. ENABLERS FOR SELECTION
problems faced in the evaluation process has been OFREVERSE LOGISTICS SERVICE
discussed. PROVIDER
Supply chain management recognizes the importance of, and
The company chosen for this study is a battery
focuses effort on, achieving tight integration between the
manufacturing industry located in the northern part of
various links of the chain. To be efficient, a supply chain must
India. The main scope of this study is to evaluate logistics
exploit modern productivity techniques and approaches, for service providers for hiring their service to collect &
example JIT purchasing, economic batch sizes, strategic supply the End-of-Life (EOL) Lead-acid batteries to the
inventory, reverse logistics, third party logistics, etc. Logistic company door step for reclaiming the lead from
management is termed as the detailed process of planning, automotive batteries. In the forward supply chain, the
implementing, and controlling the efficient, cost effective flow major raw materials such as virgin lead, plastic, and
and storage of materials and products, and related information sulphuric acid are procured from different suppliers for
within a supply chain to satisfy demand (CLM, 2004), and new battery production which is used in two wheelers,
logistics is recognized as the key enabler that allows a company four wheelers, and for other industrial applications. Once
to increase and maintain its competitive advantage and ensures the battery is produced in different plants it has to be
maximum customer satisfaction (Drucker, 1962). Reverse distributed through distributors, wholesalers, retailers and
logistics is the process of moving goods from their typical final then customers. After its end of life, the automobile
destination to another point, for the purpose of capturing value owner leaves the used battery at the automobile service
otherwise unavailable, or for the proper disposal of the products station (initial collection point), where it is replaced by a
(Dowlatshahi, 2000). Reverse logistics is practiced in many new one. The used batteries collected at the collection
industries, and its effective use can help a company to compete points should be quickly transshipped to centralized
in all streams of advantages. Many situations exist for the return center where returned products are inspected for
product to be placed in a reverse flow, such as commercial quality failure, sorted for potential repair or recycling.
returns, warranty returns, end- of-use returns, reusable container After inspection, the useless batteries (not able to recycle)
returns, and others (Du and Evans, 2008). According to Andel are disposed off and reusable batteries are transported to
(1997), effective reverse logistics is believed to result in several disassembly/recycling plants where the batteries are
direct benefits, including improved customer satisfaction, crushed and separated into different components (lead,
decreased resource investment levels, and reductions in storage plastic, acid etc.). Except lead the remaining components
and distribution costs (Autry et al., 2000). Many manufacturers
are sold to the third party for some other applications.
Finally the recycled lead is transported to the battery
and retailers recognize the importance and consider the
manufacturing plants where this secondary lead is used
outsourcing of reverse logistics (Du and Evans, 2008). 3PRLP
along with the virgin lead for new battery production. A
selection and evaluation is one of the most critical activities that
series of interviews and discussion sessions held in the
commits significant resources and impacts the total performance
plant with company management, battery retailers, state
of the firm. The attributes involved in the selection and
pollution control boards officials during this project and
evaluation process may vary depending on the type of product following problem areas are identified for improvement in
considered, and these attributes are often in conflict with one closed loop supply chain of the lead acid batteries.
another.

Langley et al. 2003, the most common outsourced


activities are inbound transportation, outbound

522
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
- Uncertainty is involved in the supply of spent batteries to the recycling company, and the company is unable to forecast the collection quantity of EOL products.
- Illegal lead smelting units are present in the state, carrying out unauthorized battery collection and lead recycling operations in the business environment.
- The company does not have any well-structured model of reverse logistics practice.
- Existing facilities of the battery closed loop supply chain are underutilized.

The enablers identified from the literature review, expert opinion and survey are:
i. Customer service (CS)
ii. Customer satisfaction (CSF)
iii. Customer queries and Complaint (CQC)
iv. Information technology (IT)
v. On Time Delivery (OTD)
vi. New technologies (NTE)
vii. Market shares (MS)
viii. Return on investment (ROI)
ix. Recapturing values (REV)

The problem here is to develop a model of the identified enablers for the selection of a Reverse Logistics Service Provider with the help of the Interpretive Structural Modeling (ISM) technique.

4. INTERPRETIVE STRUCTURAL MODELING (ISM) APPLICATION
Interpretive Structural Modeling (ISM) is a methodology used to identify relationships among specific elements, which define a problem or issue. ISM is an interactive learning process in which a set of dissimilar and directly related elements is structured into a comprehensive systematic model. The model so formed portrays the structure of a complex issue or problem, a system or a field of study, in a carefully designed pattern employing graphics as well as words. The basic idea of ISM is to use experts' practical experience and knowledge to decompose a complicated system into several sub-systems (elements) and construct a multilevel structural model; it was first developed in the 1970s (Ravi, V., & Shankar, R.). ISM is interpretive, as the judgment of the group selected for the study decides whether and how the variables are related. Some of the important characteristics of ISM are:
1. The methodology is interpretive, as the judgment of the group decides whether and how the different elements are related.
2. It is structural, too: on the basis of relationships, an overall structure is extracted from the complex set of variables.
3. It is a modeling technique, as the specific relationships and overall structure are portrayed in a digraph model.

ISM generally has the following steps:
Step 1. Variables (criteria) considered for the system under consideration are listed.
Step 2. From the variables identified in Step 1, a contextual relationship is established among the variables in order to identify which pairs of variables should be examined.
Step 3. A structural self-interaction matrix (SSIM) is developed for the variables, which indicates pairwise relationships among the variables of the system under consideration.
Step 4. A reachability matrix is developed from the SSIM, and the matrix is checked for transitivity. The transitivity of the contextual relation is a basic assumption made in ISM. It states that if a variable A is related to B and B is related to C, then A is necessarily related to C.
Step 5. The reachability matrix obtained in Step 4 is partitioned into different levels.
Step 6. Based on the relationships given in the reachability matrix, a directed graph is drawn and the transitive links are removed.
Step 7. The resultant digraph is converted into an ISM by replacing variable nodes with statements.
Step 8. The ISM model developed in Step 7 is reviewed to check for conceptual inconsistency, and necessary modifications are made. The above steps are shown in Figure 1.
[Figure 1: flow diagram of the ISM procedure. Step 1: list the enablers (from literature review and expert opinion). Step 2: establish the contextual relationship (Xij) between attributes (i, j). Step 3: develop the structural self-interaction matrix (SSIM). Step 4: develop the reachability matrix. Step 5: partition the reachability matrix into different levels and develop its conical form. Step 6: develop the digraph and remove transitivity. Step 7: replace attribute nodes with relationship statements. Step 8: check for conceptual inconsistency; if none, represent the relationship statements in the model for the enablers.]

Figure 1. Flow diagram for preparing the ISM model
4.1 Structural self-interaction matrix
To analyze the enablers for the selection of a Reverse Logistics Service Provider in the battery industry, nine enablers were considered. These enablers were taken from the literature and included after discussion with industrial experts. These experts were senior managers in manufacturing and procurement departments and environmental experts in industry. As discussed in section 3, experts from industry and academia were consulted during the brainstorming session to identify the nature of the contextual relationships among the enablers. In developing the SSIM, the following four symbols have been used to denote the direction of the relationship between two enablers i and j:
V - Enabler i will lead to enabler j;
A - Enabler j will lead to enabler i;
X - Enablers i and j will lead to each other;
O - Enablers i and j are unrelated.
Table 2. Structural Self-Interaction Matrix (SSIM)

Enablers B9 B8 B7 B6 B5 B4 B3 B2 B1
B1 V V O O X A X A X
B2 O X O A X O V X
B3 V O O A V O X
B4 A V O X V X
B5 X O O O X
B6 O O A X
B7 X V X
B8 A X
B9 X
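Before filling the SSIM it is worth noting the effort involved: for N enablers the upper triangle holds N(N-1)/2 entries, i.e. 9 x 8 / 2 = 36 pairwise judgments here. A minimal sketch of how such a matrix can be stored and validated (the three-enabler values below are illustrative only, not the paper's data):

```python
# Store the upper-triangular SSIM as {(i, j): symbol} with i < j.
# For N enablers, exactly N*(N-1)//2 pairwise judgments are needed.

def count_pairwise_questions(n: int) -> int:
    """Number of SSIM cells to fill for n enablers."""
    return n * (n - 1) // 2

def validate_ssim(ssim: dict, n: int) -> bool:
    """Check that every upper-triangle pair carries one V/A/X/O symbol."""
    expected = {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)}
    return set(ssim) == expected and all(s in "VAXO" for s in ssim.values())

# Toy 3-enabler SSIM (illustrative values only):
ssim3 = {(1, 2): "V", (1, 3): "O", (2, 3): "X"}

print(count_pairwise_questions(9))   # 36 judgments for the nine enablers
print(validate_ssim(ssim3, 3))       # True
```

For the nine enablers of this study the check confirms the 36 judgments implied by the formula quoted in the text.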
Based on the contextual relationships, the SSIM has been developed (Table 2). Enabler 1 leads to enabler 8, so the symbol 'V' has been given in cell (1,8); enabler 6 leads to enabler 2, so the symbol 'A' has been given in cell (2,6); enablers 5 and 9 lead to each other, so the symbol 'X' has been given in cell (5,9); enablers 3 and 7 do not lead to each other, so the symbol 'O' has been given in cell (3,7); and so on. The number of pairwise comparison questions addressed for developing the SSIM is N(N-1)/2, where N is the number of enablers.

4.2 Initial reachability matrix
Table 3. Initial Reachability Matrix

Enablers B1 B2 B3 B4 B5 B6 B7 B8 B9
B1 1 0 1 0 1 0 0 1 1
B2 1 1 1 0 1 0 0 1 0
B3 1 0 1 0 1 0 0 0 0
B4 1 0 0 1 1 1 0 1 0
B5 1 1 0 0 1 0 0 0 1
B6 0 1 1 1 0 1 0 0 0
B7 0 0 0 0 0 1 1 1 1
B8 0 1 0 0 0 0 0 1 0
B9 0 0 1 1 1 0 1 1 1

In this step, the reachability matrix is developed from the SSIM. The SSIM is converted into an initial reachability matrix by transforming the information of each cell of the SSIM into binary digits (i.e. ones or zeros), substituting V, A, X and O by 1 or 0 according to the following rules:

- If the (i, j) value in the SSIM is V, the (i, j) value in the reachability matrix will be 1 and the (j, i) value will be 0; for V(1,8) in the SSIM, '1' has been given in cell (1,8) and '0' in cell (8,1) of the initial reachability matrix.
- If the (i, j) value in the SSIM is A, the (i, j) value in the reachability matrix will be 0 and the (j, i) value will be 1; for A(2,6) in the SSIM, '0' has been given in cell (2,6) and '1' in cell (6,2) of the initial reachability matrix.
- If the (i, j) value in the SSIM is X, the (i, j) value in the reachability matrix will be 1 and the (j, i) value will also be 1; for X(5,9) in the SSIM, '1' has been given in cell (5,9) and '1' in cell (9,5) of the initial reachability matrix.
- If the (i, j) value in the SSIM is O, the (i, j) value in the reachability matrix will be 0 and the (j, i) value will also be 0; for O(3,7) in the SSIM, '0' has been given in cell (3,7) and '0' in cell (7,3) of the initial reachability matrix.

The initial reachability matrix for the enablers for the selection of the RLSP is given in Table 3.

4.3 Final reachability matrix with driving and dependence power
The final reachability matrix has been obtained by adding transitivity as explained in Step 4 earlier; transitivity is a basic assumption made in ISM. The final reachability matrix, with the driving power and the dependence power of each enabler, is shown in Table 4.
Table 4. Final Reachability Matrix with driving and dependence power

Enablers B1 B2 B3 B4 B5 B6 B7 B8 B9 Driving Power
B1 1 1* 1 1* 1 0 1* 1 1 8
B2 1 1 1 0 1 0 0 1 1* 6
B3 1 1* 1 0 1 0 0 1* 1* 6
B4 1 1* 1* 1 1 1 0 1 1* 8
B5 1 1 1* 1* 1 0 1* 1* 1 8
B6 1* 1 1 1 1* 1 0 1* 0 7
B7 1* 1* 1* 1* 1* 1 1 1 1 9
B8 1* 1 1* 0 1* 0 0 1 0 5
B9 1* 1* 1 1 1 1* 1 1 1 9
Dependence Power 9 9 9 6 9 4 4 9 7 66/66

* denotes a value added after applying transitivity
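The substitution rules of section 4.2 and the transitivity step behind Table 4 can be sketched in code. This is an illustrative implementation on a toy four-enabler SSIM rather than the paper's matrices; the V/A/X/O rules and the Warshall-style closure follow the procedure described in the text:

```python
def ssim_to_initial(ssim, n):
    """Apply the V/A/X/O substitution rules to build the initial
    reachability matrix (1-based pairs, 0-based matrix)."""
    m = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # self-reachability
    for (i, j), s in ssim.items():
        a, b = i - 1, j - 1
        if s == "V":                      # i leads to j
            m[a][b], m[b][a] = 1, 0
        elif s == "A":                    # j leads to i
            m[a][b], m[b][a] = 0, 1
        elif s == "X":                    # i and j lead to each other
            m[a][b], m[b][a] = 1, 1
        else:                             # O: unrelated
            m[a][b], m[b][a] = 0, 0
    return m

def transitive_closure(m):
    """Warshall closure: if i reaches k and k reaches j, then i reaches j."""
    n = len(m)
    r = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            if r[i][k]:
                for j in range(n):
                    if r[k][j]:
                        r[i][j] = 1
    return r

# Toy example with 4 enablers: 1 leads to 2, 2 leads to 3, 3 and 4 are mutual.
ssim = {(1, 2): "V", (1, 3): "O", (1, 4): "O",
        (2, 3): "V", (2, 4): "O", (3, 4): "X"}
final = transitive_closure(ssim_to_initial(ssim, 4))
driving = [sum(row) for row in final]            # row sums
dependence = [sum(col) for col in zip(*final)]   # column sums
print(driving)  # [4, 3, 2, 2]: enabler 1 reaches 2, 3 and 4 via transitivity
```

Entries turned on by the closure but absent from the initial matrix correspond to the starred values of Table 4.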
4.4 Partitioning of levels
The reachability and antecedent sets (Warfield, J.W.) for each enabler have been determined from the final reachability matrix. The reachability set for an enabler consists of the enabler itself and the other enablers which it influences. The antecedent set consists of the enabler itself and the other enablers which may influence it. The reachability, antecedent and intersection sets were found for all the enablers, as shown in Table 5. We have identified three levels in our study.
Table 5. First Iteration to Find Levels

S. No Reachability Set Antecedent Set Intersection Set Levels
1 1,2,3,4,5,7,8,9 1,2,3,4,5,6,7,8,9 1,2,3,4,5,7,8,9 I
2 1,2,3,5,8,9 1,2,3,4,5,6,7,8,9 1,2,3,5,8,9 I
3 1,2,3,5,8,9 1,2,3,4,5,6,7,8,9 1,2,3,5,8,9 I
4 1,2,3,4,5,6,7,8,9 1,4,5,6,7,9 1,4,5,6,9
5 1,2,3,4,5,7,8,9 1,2,3,4,5,6,7,8,9 1,2,3,4,5,7,8,9 I
6 1,2,3,4,5,6,8 4,6,7,9 4,6
7 2,3,4,5,6,7,8,9 1,5,7,9 5,7,9
8 1,2,3,5,8 1,2,3,4,5,6,7,8,9 1,2,3,5,8 I
9 1,2,4,5,6,7,8,9 1,2,3,4,5,7,9 1,2,4,5,7,9
Table 6. Second Iteration to Find Levels

S. No Reachability Set Antecedent Set Intersection Set Levels
4 4,6 4,6,7 4,6 II
6 4,6 4,6,7 4,6 II
7 4,6,7 7 7
9 4,6,7 4,7 4,7
Table 7. Third Iteration to Find Levels

S. No Reachability Set Antecedent Set Intersection Set Levels
7 7 7 7 III
9 7 7 7 III
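The iterations shown in Tables 5-7 follow a standard rule: an enabler whose reachability set equals its intersection (reachability and antecedent) set is assigned to the current level and removed, and the process repeats on the remaining enablers. A minimal sketch on a toy closed reachability matrix (not the paper's data):

```python
def partition_levels(reach):
    """Assign ISM levels: an element whose reachability set equals its
    intersection set goes into the current level, is removed, and the
    process repeats until no elements remain."""
    n = len(reach)
    remaining = set(range(n))
    levels = []
    while remaining:
        level = set()
        for i in remaining:
            reach_set = {j for j in remaining if reach[i][j]}
            antecedent = {j for j in remaining if reach[j][i]}
            if reach_set == reach_set & antecedent:
                level.add(i)
        if not level:  # safety guard; cannot occur for a reflexive closed matrix
            break
        levels.append(sorted(level))
        remaining -= level
    return levels

# Toy transitively closed matrix (0-indexed): element 0 is reached by all,
# element 3 reaches all.
reach = [
    [1, 0, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],
]
print(partition_levels(reach))  # [[0], [1, 2], [3]], top level first
```

Applied to Table 4's matrix, three iterations emerge, mirroring the three levels reported in Tables 5-7.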
4.5 Model formulation
From the final reachability matrix in Table 4, the structural model is generated, as given in Figure 2. Removing the transitivities as described in the ISM methodology, the digraph is converted into the ISM model shown in Figure 2.
[Figure 2: ISM-based model. Level I (top): Return on Investment (B8), Customer Satisfaction (B2), Customer queries & Complaint (B3), On time Delivery (B5), Customer Service (B1). Level II: Information Technology (B4), New Technology (B6). Level III (bottom): Recapturing Values (B9), Market Shares (B7).]

Figure 2. ISM-based model
4.6 Enabler Classification (MICMAC Analysis)
Variables are classified into four clusters (Mandal, A., & Deshmukh, S.G.), named autonomous variables, dependent variables, linkage variables and independent variables. The MICMAC principle is based on the multiplication properties of matrices. The purpose of MICMAC analysis is to analyze the driving power and dependence power of the enablers.
1. Autonomous variables (the first cluster) have weak driving power and weak dependence. These variables can be disconnected from the system. In our study, no enablers lie in this range.
2. The second cluster is named dependent variables. They have weak driving power and strong dependence power. In our study, no enablers lie in this range.
3. The third cluster, named linkage variables, has strong driving power and strong dependence power.
In our study, seven enablers lie in this region: Customer service (CS) (B1), Customer satisfaction (CSF) (B2), Customer queries and Complaint (CQC) (B3), Information technology (IT) (B4), On Time Delivery (OTD) (B5), Return on investment (ROI) (B8), and Recapturing values (REV) (B9).
4. The fourth cluster, named independent variables, has strong driving power and weak dependence power. In our study, two enablers lie in this region: new technologies (NTE) (B6) and Market shares (MS) (B7).
[Figure 3: plot of driving power (vertical axis) versus dependence power (horizontal axis) for the enablers: B7 and B9 at driving power 9; B4, B1 and B5 at 8; B6 at 7; B2 and B3 at 6; B8 at 5. Quadrants: (i) autonomous, (ii) dependent, (iii) linkage, (iv) independent.]

Figure 3. Cluster of Enablers
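The quadrant assignment of the MICMAC analysis can be sketched as follows. The driving and dependence powers are those of Table 4; the midpoint threshold of 4.5 on the 1-9 scale is an assumption chosen to be consistent with the clusters reported in the text:

```python
def micmac_cluster(driving, dependence, n=9):
    """Classify an enabler into a MICMAC quadrant using the scale
    midpoint (n/2) as the threshold; the 4.5 cut-off for n = 9 is an
    assumption consistent with the clusters reported in the paper."""
    mid = n / 2
    strong_drive = driving > mid
    strong_dep = dependence > mid
    if strong_drive and strong_dep:
        return "linkage"
    if strong_drive:
        return "independent"
    if strong_dep:
        return "dependent"
    return "autonomous"

# Driving and dependence powers from Table 4 (B1..B9):
powers = {
    "B1": (8, 9), "B2": (6, 9), "B3": (6, 9), "B4": (8, 6), "B5": (8, 9),
    "B6": (7, 4), "B7": (9, 4), "B8": (5, 9), "B9": (9, 7),
}
clusters = {b: micmac_cluster(d, p) for b, (d, p) in powers.items()}
print(clusters)  # B6 and B7 come out independent; the other seven are linkage
```

With this threshold the classification reproduces the text's result: seven linkage enablers and two independent ones, with the autonomous and dependent quadrants empty.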
The graph between dependence power and driving power for the enablers for the selection of an RLSP in the battery industry is given in Figure 3. The aim of this study is to analyze the driving power and the dependence power of the enablers (Jharkharia, S., & Shankar, R.). Without analyzing the enablers, one cannot initiate the implementation of the RLSP in battery industries. This analysis makes RLSP adoption easier by addressing these enablers.

5. CONCLUSION
The environmental consciousness of customers and the day-by-day increase of environmental image in the market have pushed SMEs indirectly to think about cleaner production by means of RLSP implementation. Nine enablers to implement an RLSP in the battery industry have been identified. Interpretive Structural Modeling (ISM) methodology has been used for finding the contextual relationships among the various enablers to implement an RLSP in the battery industry. A model has been developed from the ISM technique with the help of expert opinion.

Customer service (CS) (B1), Customer satisfaction (CSF) (B2), Customer queries and Complaint (CQC) (B3), Information technology (IT) (B4), On Time Delivery (OTD) (B5), Return on investment (ROI) (B8) and Recapturing values (REV) (B9) have been identified as linkage variables. Two enablers, new technologies (NTE) (B6) and Market shares (MS) (B7), have been identified as driver variables. No enabler has been identified as an autonomous or a dependent variable. Return on Investment (B8), Customer Satisfaction (B2), Customer queries & Complaint (B3), On time Delivery (B5) and Customer Service (B1) have been identified as top level enablers, and Recapturing Values (B9) and Market Shares (B7) are bottom level enablers. Information Technology (B4) and New Technology (B6) are at the 2nd level of the ISM model. The enablers Market shares (B7) and Recapturing values (B9) are in the independent cluster, which has more driving power and less
dependence power; that is why they lie in this region. The market shares enabler has a driving power of 9 and a dependence power of 4; therefore, it is the most important enabler in the selection of a Reverse Logistics Service Provider. The industry can implement Reverse Logistics Service by implementing these selected enablers and adding them into the company for a successful RLS.

6. DISCUSSION AND FUTURE SCOPE
In this study, we have developed a model of enablers to implement an RLSP in the Indian battery industry based upon experts' opinions. The model may be tested in a real-world situation to check that the enablers are complete and that their relationships exist as in the literature. The results of the model may vary in a real-world setting. The enablers may be incomplete or their relationships may be different from the derived model. The ISM framework has been developed with nine enablers for the implementation of RLS only in the battery industry; more enablers have not been considered or categorized. Structural Equation Modeling (SEM) may be used to test the validity of this hypothetical model. We have developed the ISM model for the implementation of RLS in the battery industry. The ISM model for the implementation of RLS in other types of industry, like manufacturing and the auto industry, may be developed. Industry-wise comparisons may be made by using different methodologies such as the Analytical Hierarchy Process (AHP) and the Analytical Network Process (ANP).

REFERENCES
[1] Autry, C.W., Daugherty, P.J., Richey, R.G., 2000. The challenge of reverse logistics in catalog retailing. International Journal of Physical Distribution & Logistics 31 (1), 26-37.
[2] CLM, 2004. Definition of Logistics. Council of Logistics Management. Available at: www.cscmp.org.
[3] Dowlatshahi, S., 2000. Developing a theory of reverse logistics. Interfaces 30 (3), 143-155.
[4] Drucker, P.F., 1962. The economy's dark continent. Fortune, 103-104.
[5] Du, F., Evans, G.W., 2008. A bi-objective reverse logistics network analysis for post-sale service. International Journal of Computers and Operations Research 34, 1-18.
[6] Jayant, P. Gupta, S. K. Garg, 2012. Closed Loop Supply Chain for Spent Batteries: A Simulation Study. International Journal of Advanced Manufacturing Systems, Vol. 3, No. 1, pp. 89-99.
[7] Jharkharia, S., & Shankar, R. (2005). IT enablement of supply chains: understanding the enablers. Journal of Enterprise Information Management, 18(1), 11-27. http://dx.doi.org/10.1108/17410390610658432
[8] Kent, J.L. and Flint, D.J., 1997. "Perspectives on the evolution of logistics thought," Journal of Business Logistics, Vol. 18, No. 2, pp. 15-29.
[9] Langley, C.J., Allen, G.R., Tyndall, G.R., 2003. Third-party logistics study 2003: results and findings of the eighth annual study. Lawshe, H., 1975. "A quantitative approach to content validity," Personnel Psychology, Vol. 28, No. 4, pp. 563-575.
[10] Mandal, A., & Deshmukh, S.G. (1994). Vendor selection using interpretive structural modelling (ISM). International Journal of Operations and Production Management, 14(6), 52-59. doi: 10.1108/01443579410062086.
[11] Ravi, V., & Shankar, R. (2005). Analysis of interactions among the barriers of reverse logistics. Technological Forecasting and Social Change, 72(8), 1011-1029.
[12] Stock, G.N., Greis, N.P., Kasarda, J.D., 1998. Logistics, strategy and structure: a conceptual framework. International Journal of Operations and Production Management 18 (1), 37-52.
[13] Warfield, J.W. (1974). Developing interconnected matrices in structural modeling. IEEE Transactions on Systems, Man, and Cybernetics, 4(1), 51-81.
Sensitization behavior of GTAW austenitic stainless steel joints

Subodh Kumar
Sant Longowal Institute of Engineering & Technology (Deemed University), Longowal, Sangrur (Punjab), India-148106
skbamola@gmail.com

A. S. Shahi
Sant Longowal Institute of Engineering & Technology (Deemed University), Longowal, Sangrur (Punjab), India-148106
ashahisliet@yahoo.co.in
ABSTRACT
The present work has been carried out to study the sensitization behavior of an AISI 304L austenitic stainless steel weld fabricated using GTAW (gas tungsten arc welding). This weld was subjected to post weld thermal aging (PWTA) treatments lying in the sensitization range, viz. 700 ˚C for 30 minutes, 500 minutes and 1000 minutes, to study the influence of carbide precipitation on its metallurgical and corrosion properties. Microstructural studies of these weldments showed that all welds were essentially austenitic, with the presence of a small amount of δ-ferrite. The microstructure of the welds was dendritic, with the δ-ferrite phase placed in interdendritic regions. The weld metal exhibits a largely vermicular morphology of δ-ferrite, and when it was subjected to different PWTA treatments, carbide precipitation occurred along the δ-γ interface, the extent of which increases as the aging time increases. The heat affected zones (HAZ) of the welds, besides undergoing excessive grain coarsening during welding, played a significant role in contributing towards the overall sensitization of these joints. The microhardness of the weldments (weld metal and HAZ) decreases as the aging time increases, because the matrix becomes depleted in the solution strengtheners C and Cr, which contribute towards carbide precipitation. Corrosion studies were conducted by measuring the degree of sensitization (DOS) of the weldments. It was found that the overall DOS of the joints increases as the post weld thermal aging time increases.

Keywords
AISI 304L SS; GTAW; sensitization; δ-ferrite; DOS.

1. INTRODUCTION
Stainless steels (SS) are widely used in a variety of industries and environments due to their good mechanical and corrosion properties [1]. Austenitic stainless steels (ASS) are a group of steels that contain nominally 18-25 wt.% chromium and 8-20 wt.% nickel. This group of stainless steels exhibits an attractive combination of high strength, good ductility, excellent corrosion resistance and reasonable weldability. These properties make austenitic stainless steels attractive candidate materials for use in a wide range of industries such as the nuclear, petrochemical, chemical, biomedical, dairy and food industries [1, 2].

Welding is one of the most widely used processes to fabricate austenitic stainless steel structures [3, 4], whereas intergranular corrosion due to sensitization is one of the most common problems encountered in austenitic stainless steel weldments during welding as well as in service conditions. Sensitization is a well-known phenomenon that occurs during welding: when these steels are subjected to a temperature range of 550 ˚C to 850 ˚C, chromium reacts with carbon to form chromium carbides, which precipitate along the grain boundaries, thus giving rise to adjacent regions that are depleted in chromium [5-7]. This sensitization phenomenon that occurs during welding becomes a cause of concern when these joints are further subjected to temperatures below 500 ˚C, as usually encountered in nuclear applications, where it is observed that the pre-existing carbide nuclei that nucleate during welding tend to grow during long exposure times [8, 9], which consequently affects the corrosion properties and hence the service performance of the joints.

Since sensitization is a problem associated with welding as well as with post weld service conditions in the welded joints of austenitic stainless steels, the aim of the present investigation was to study the sensitization behavior of gas tungsten arc welded 304L austenitic stainless steel joints.

2. MATERIALS AND EXPERIMENTAL DETAILS
The base material used in the present study was in the form of AISI 304L austenitic stainless steel plates with dimensions of 200 mm x 100 mm x 6 mm, which were cut from a rolled sheet. ER 308L austenitic stainless steel solid electrodes of 1.6 mm and 2.4 mm diameters were selected as the filler metal to fill the single V-groove butt joint by the GTAW process. The chemical compositions of the base and filler metals are presented in Table 1.

Before welding, the plates were cleaned mechanically and chemically in order to remove any source of contamination like rust, dust, oil, etc. One root pass and two main weld passes were carried out to fill the single V-groove, with the experimental conditions mentioned in Table 2. An interpass temperature of around 150 ˚C was maintained for the second and third passes. No preheat or post heat treatment was carried out on the welded samples. Welded joints were visually inspected (during and after welding) for their quality, and it was ensured that all weld beads possessed good geometrical consistency and were free from visible defects like surface porosity, blow holes, etc. Industrial argon gas at 10 l/min was used for shielding the weld pool during welding.
Table 1: Chemical composition of the base and filler material (wt.%)

Alloy element C Si Mn P S Cr Mo Ni Ti V Fe
Base (304L SS) 0.025 0.446 1.386 0.028 0.014 18.238 0.296 9.196 0.006 0.061 Balance
Filler (ER 308L SS) 0.028 0.421 1.420 0.021 0.012 19.151 0.256 10.02 0.003 0.032 Balance
Table 2: Experimental welding conditions used in the present work


Type of pass   Welding current (A)   Welding voltage (V)   Average welding speed (mm/s)   Heat input per unit weld length per pass (kJ/mm)
Root pass      90                    10                    1.48                           0.43
Middle pass    140                   14                    1.32                           1.04
Cover pass     140                   14                    1.27                           1.08

In order to study the sensitization (carbide precipitation) behavior of these joints, three different post weld thermal aging (PWTA) treatments, viz. 700°C for 30 minutes, 700°C for 500 minutes and 700°C for 1000 minutes, were used in the present work.

The cross-sections of the test specimens were mounted, mechanically ground to 3000 mesh on SiC papers and finally polished on cloth using a suspension of alumina powder. Electrolytic etching was used for revealing the microstructures of the different zones of the weldments; a solution of 10 g of oxalic acid in 100 ml of distilled water was used as the electrolyte, with a cell voltage of 6 V and an etching time of 1 min. The microstructure of the different zones of the weldments, i.e. weld metal (WM), heat affected zone (HAZ) and fusion zone (FZ), was investigated by optical microscopy. A microhardness tester equipped with a Vickers pyramid indenter was used for microhardness measurements along the longitudinal centerline of the welds; a 500 g load was applied on the indenter for 20 s. A ferritescope (M30-Fischer) was used for non-destructive evaluation of the ferrite content in the different regions of the weldments.

The double loop electrochemical potentiokinetic reactivation (DLEPR) test was used to assess the sensitization behavior of the welded joint in accordance with the test conditions used in previously reported studies [10-12]. As shown schematically in Fig. 1, the test samples were cut out from the welded plate in such a way that the cross-sectional area of each specimen (taken as 15 x 6 mm2 = 90 mm2) was exposed directly to the test solution in the DLEPR test. Before the electrochemical test, all samples were mounted in epoxy resin and then prepared metallographically using emery paper up to 1000 grit. A solution of 0.5 M H2SO4 + 0.01 M KSCN was used for conducting the DLEPR test. A standard cell was used for the DLEPR tests, with a saturated calomel reference electrode (SCE), a graphite counter electrode and the specimen as working electrode. The entire testing, including determination of the polarization curves, was carried out with a potentiostat (Make: Gamry Instruments, Model: Reference 600) controlled by its dedicated software. All DLEPR testing of the welded joints was carried out at room temperature; the electrochemical potential was varied from the open circuit potential to 300 mV (SCE) at a scan rate of 100 mV/min and then back to the open circuit potential at the same scan rate. The ratio of the reactivation current to the activation current, multiplied by 100, was taken as the measure of the degree of sensitization (DOS). The reported values are an average of three tests for each sample.

Fig. 1: Schematic illustration showing the cross-section of a welded joint and the various zones formed in this joint. (The rectangular box at the centre shows the composite zone selected from the joint for the DLEPR test.)

3. RESULTS AND DISCUSSION
Ferrite studies were carried out to check the susceptibility of the weld metal to hot fissuring, and it was found that the weld metal contains 5.1 to 5.8 % ferrite, which shows that this weld was not prone to hot cracking. Micrographs of the weld metal in the as-welded and different post weld thermal aging conditions are shown in Fig. 2. The microstructure of the weld metal is dendritic, with the δ-ferrite phase located in the interdendritic regions; the weld metal possesses a vermicular ferrite morphology. Fig. 2 shows that carbide precipitation takes place along the δ-γ interface [13] and that this precipitation increases as the post weld thermal aging time increases. Fig. 3 shows the HAZ microstructures in the as-welded and different post weld thermal aging conditions. From Fig. 3, it is observed that the HAZ experiences grain coarsening during welding, which may be attributed to the cooling rate experienced by the weld. This grain coarsening further affects the carbide precipitation behavior of the HAZ of the welded joint. From Fig. 3, it can also be seen that as the post weld thermal aging time increases, the carbide precipitation in the HAZ also increases.
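The heat input column of Table 2 can be reproduced from the current, voltage and travel speed if an arc efficiency of about 0.7 is assumed (the paper does not state the efficiency it used; 0.7 is a value commonly quoted for gas tungsten arc welding, so treat it as an assumption). A sketch:

```python
# Hedged check of Table 2: heat input per unit weld length,
# HI = (eta * V * I) / v, with an ASSUMED arc efficiency eta = 0.7.

def heat_input_kj_per_mm(current_a, voltage_v, speed_mm_s, efficiency=0.7):
    """Heat input per unit weld length in kJ/mm."""
    return efficiency * voltage_v * current_a / speed_mm_s / 1000.0

# pass name: (current A, voltage V, average welding speed mm/s), from Table 2
passes = {"Root pass":   (90, 10, 1.48),
          "Middle pass": (140, 14, 1.32),
          "Cover pass":  (140, 14, 1.27)}

for name, (i, v, s) in passes.items():
    print(f"{name}: {heat_input_kj_per_mm(i, v, s):.2f} kJ/mm")
```

With eta = 0.7 the computed values round to 0.43, 1.04 and 1.08 kJ/mm, matching the last column of Table 2.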


Fig. 2: Microstructure of the weld metal (at 100X): (a) as-welded and (b) PWTA (700°C/1000 minutes) conditions. (Vermicular ferrite is marked in both panels.)

Fig. 3: Microstructure of the HAZ (at 100X): (a) as-welded and (b) PWTA (700°C/1000 minutes) conditions. (Panels are labelled FB, WM, HAZ and GBC.)

The different zones of the weldments (as-welded and post weld thermally aged) were evaluated for their microhardness, and the results (each an average of five microhardness readings for the respective zone of each joint) are shown in Table 3. It was observed that, among all the conditions, the weld zones possess a relatively higher average microhardness than the respective HAZs, which may be attributed to the microstructural variations that occur in these zones during welding. It can also be seen that the average microhardness of the weld zones and the HAZs (for all the post weld thermally aged joints) shows a decreasing trend, because the matrix becomes depleted in the solution strengtheners C and Cr, which are consumed by intergranular carbide precipitation [14], and this precipitation increases with increasing exposure time. Further, between the as-welded condition and post weld thermal aging at 700°C for 1000 minutes, the average microhardness decreased from 221.52 to 201.23 VHN in the weld zone and from 211.18 to 185.67 VHN in the HAZ, which may be attributed to the higher carbide precipitation.

Table 3: Vickers microhardness (HV0.5) of the WM, HAZ and base metal under different conditions

                      As-welded    Post weld thermal aging treatment at 700°C
S. No.   Zone         condition    30 minutes   500 minutes   1000 minutes
1.       WM           221.52       219.13       210.74        201.23
2.       HAZ          211.18       209.54       195.25        185.67
3.       Base         225.33       223.12       208.41        198.74

Fig. 3: DLEPR curves of the welded joint under various conditions.

Fig. 4: DLEPR curves of the base metal under various conditions.
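The DOS values extracted from such DLEPR curves follow the definition given in Section 2: the ratio of the peak reactivation current to the peak activation current, multiplied by 100, averaged over three tests. A minimal sketch (the current values below are invented placeholders, not measurements from the paper):

```python
# Sketch of the DOS evaluation used with DLEPR curves:
# DOS = (Ir / Ia) * 100, reported as the mean of three repeat tests.

def degree_of_sensitization(i_reactivation, i_activation):
    """DOS in percent from peak reactivation and activation currents."""
    return 100.0 * i_reactivation / i_activation

def mean_dos(tests):
    """Average DOS over repeated (Ir, Ia) pairs, as reported in the paper."""
    values = [degree_of_sensitization(ir, ia) for ir, ia in tests]
    return sum(values) / len(values)

# three HYPOTHETICAL (Ir, Ia) pairs, e.g. in mA/cm^2
tests = [(0.45, 2.0), (0.40, 2.0), (0.50, 2.0)]
print(f"DOS = {mean_dos(tests):.1f} %")
```

The same ratio applied to the measured curves yields the values tabulated in Table 4; a higher reactivation peak relative to the activation peak indicates a more sensitized (Cr-depleted) microstructure.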


Table 4: DLEPR results (DOS values of the base metal and welded joint under different conditions)

                         Degree of sensitization (DOS = (Ir/Ia) x 100)
                         As-welded    Post weld thermal aging treatment at 700°C
S. No.   Sample          condition    30 minutes   500 minutes   1000 minutes
1.       Welded joint    0.31         2.23         12.21         22.61
2.       Base            0.078        0.53         7.78          16.11

The DLEPR technique was used for evaluating the DOS of the welded joints and the base metal under the different post weld thermal aging conditions, and the DLEPR curves are shown in Figs. 3 and 4 respectively. Table 4 shows the DLEPR results; from this table it can be seen that in the as-welded condition the welded joint possesses a DOS value of 0.31, and when this joint was subjected to the post weld thermal aging treatments, the DOS value showed an increasing trend of varying degree. The maximum DOS (22.61) was observed for the welded joint subjected to 700°C for 1000 minutes. This significant DOS variation is attributable to the mechanism of carbide precipitation in the weld metal and HAZ of these welded joints: as the post weld thermal aging time increases, the carbide precipitation (which occurs along the δ-γ interfaces in the weld metal and along the grain boundaries in the HAZ) also increases [15-16].

4. CONCLUSIONS
1. The weld metal matrix was austenitic with a small amount of δ-ferrite, and the morphology of the δ-ferrite was vermicular. When this weld metal was subjected to different PWTA treatments, carbide precipitation occurred along the δ-γ interface, and the extent of this precipitation increased as the aging time increased.
2. The HAZ of the welded joint experiences grain coarsening during welding. When the welded joint was subjected to the various PWTA treatments, carbide precipitation occurred along the grain boundaries of the HAZ, and the amount of this carbide precipitation increased as the aging time increased.
3. Microhardness evaluation of the welded joint showed that after the PWTA treatments the microhardness values of the base metal, weld metal and HAZ show a decreasing trend, which may be because the intense carbide precipitation removes chromium and carbon from solid solution.
4. The DLEPR curves of the welded joint showed that as the post weld thermal aging time increases, the DOS value of the welded joint also increases.

5. REFERENCES
[1] Khatak, H.S. and Raj, B., 2002. Corrosion of Austenitic Stainless Steels. Narosa Publishing House, India.
[2] Sedrics, A.J., 1996. Corrosion of Stainless Steels. Second ed., John Wiley & Sons, New York.
[3] Liu, W., Wang, R.J., Han, J.L., Xu, X.Y., Li, Q., 2010. Microstructure and mechanical performance of resistance spot welded cold-rolled high strength austenitic stainless steel. Journal of Materials Processing Technology, 210, 1956-1961.
[4] Zumelzu, E., Sepulveda, J., Ibarra, M., 1999. Influence of microstructure on the mechanical behavior of welded 316L SS joints. Journal of Materials Processing Technology, 94, 36-40.
[5] Ozyurek, D., 2008. An effect of weld current and weld atmosphere on the resistance spot weldability of 304L austenitic stainless steel. Materials and Design, 29, 597-603.
[6] Bruemmer, S.M. and Charlot, L.A., 1986. Development of grain boundary chromium depletion in type 304 and 316 stainless steels. Scripta Metallurgica, 20, 1019-1024.
[7] Parvathavarthini, N., Dayal, R.K., Gnanamoorthy, J.B., 1994. Influence of prior deformation on the sensitization of AISI Type 316LN stainless steel. Journal of Nuclear Materials, 208, 251-258.
[8] Povich, M.J., 1978. Low temperature sensitization of type 304 stainless steel. Corrosion, 34, 60-65.
[9] Kain, V., Chandra, K., Adhe, K.N., De, P.K., 2004. Effect of cold work on low-temperature sensitization behaviour of austenitic stainless steels. Journal of Nuclear Materials, 334, 115-132.
[10] Lopez, N., Cid, M., Puiggali, M., Azkarate, I., Pelayo, A., 1997. Application of double loop electrochemical potentiodynamic reactivation test to austenitic and duplex stainless steels. Materials Science and Engineering A, 229, 123-128.
[11] Cihal, V. and Stefec, R., 2001. On the development of the electrochemical potentiokinetic method. Electrochimica Acta, 46, 3867-3877.
[12] Gracia, C., Tiedra de, M.P., Blanco, Y., Martin, O., Martin, F., 2008. Intergranular corrosion of welded joints of austenitic stainless steels studied by using an electrochemical minicell. Corrosion Science, 50, 2390-2397.
[13] Dadfar, M., Fathi, M.H., Karimzadeh, F., Dadfar, M.R., Saatchi, A., 2007. Effect of TIG welding on corrosion behaviour of 316L stainless steel. Materials Letters, 61, 2343-2346.
[14] Yae Kina, A., Souza, V.M., Tavares, S.S.M., Pardal, J.M., Souza, J.A., 2008. Microstructure and intergranular corrosion resistance evaluation of AISI 304 steel for high temperature service. Materials Characterization, 59, 651-655.
[15] Cui, Y. and Lundin, C.D., 2005. Evaluation of initial corrosion location in E316 austenitic stainless steel weld metals. Materials Letters, 59, 1542-1546.
[16] Cui, Y. and Lundin, C.D., 2007. Austenite-preferential corrosion attack in 316 austenitic stainless steel weld metals. Materials and Design, 28, 324-32.


Grinding Fluid Applications using Simulated Coolant Nozzles and their Effect on Surface Properties in a Grinding Process

Mandeep Singh, GZS PTU CAMPUS, Bathinda (smaghmandeep@gmail.com)
Jaskarn Singh, UCOE, Punjabi University, Patiala
Yadwinder Pal Sharma, GZS PTU CAMPUS, Bathinda (er.yad2007@gmail.com)

ABSTRACT
Grinding is one of the most widely adopted processes for finishing flat surfaces. Because of the high heat generated by the process, effective cooling is of great concern to minimize detrimental effects on the workpiece and to reduce wheel wear. This requires an effective cooling method and the selection of a suitable nozzle that can deliver coolant at high velocity to the cutting zone. Effective cooling reduces the chances of workpiece damage, saves valuable resources and yields enhanced surface and dimensional quality, which is helpful for precision manufacturing. The purpose of the present study is to review the effect of various process parameters on the properties and surface quality in the grinding process.

Keywords: grinding, cooling, workpiece, nozzle, cutting zone, precision manufacturing.

1. INTRODUCTION
Due to the different material removal mechanisms of rubbing, ploughing and 'size effects' during grinding, specific energy consumption and temperature generation are much higher than in other conventional metal cutting processes. The higher temperatures generated in grinding adversely affect the surface properties, burn the surface, induce residual stresses and cause distortion, which leads to surface and sub-surface micro cracks. This may also make it difficult to control dimensions. Therefore, in the majority of cases, the application of grinding fluid becomes necessary. Moreover, coolant can wash away the chips, provide lubrication and improve overall grinding performance. For effective cooling, however, not only the correct grinding fluid but also the selection of a proper nozzle, its orientation and the nozzle tip distance from the grinding zone are important factors. A thin air layer generated by the high rotational speed of the wheel prevents the grinding fluid from reaching the grinding zone. Attempts are often made to break this layer using a scraper board, but this is not a very effective method, as there always exists a small gap between the uneven wheel surface and the scraper board, which trims down the scraper board's effectiveness.

Several works have been reported that calculate temperature generation [1, 2], the variation of the heat transfer coefficient [3, 4], the effect of temperatures [5] and the need for coolant [6, 7] in grinding processes. The effects of the boundary layer have been investigated in detail by several researchers [8, 9]. Webster et al. [10, 11] presented new nozzle designs that give long coherent jets, thus maximizing the application of fluid into the grinding zone. The influence of nozzle position, jet velocity and distance from the grinding zone was studied. Methods were described for modeling and optimization of grinding processes [12, 13]. The grind hardening process, which utilizes the heat dissipated in the grinding area to induce metallurgical transformation on the surface of the ground workpiece, was analyzed [14]. An improved thermal model was developed to accurately predict the position of the burn boundary [15].

2. GRINDING PROCESS AND GRINDING FLUID APPLICATIONS
Tawakoli et al. [16] reviewed the dry grinding process, as it is one of the most favorable processes from an economical as well as an ecological point of view. The research aimed to reduce heat generation by special conditioning using a single-point diamond dressing tool based on a new, innovative concept. Oliveira et al. [17] analyzed relevant industrial demands for grinding research, starting with an analysis of the main trends in more efficient engines and the changes in their components that affect grinding performance. Case studies were used to show how research centers and industries are collaborating. Kiyak et al. [18] examined and compared dry and cutting fluid (wet) application in grinding, as the process is practiced to obtain the best possible surface quality and dimensional accuracy of ground machine parts. Their study also examined material removal rates for dry and wet grinding processes. Cakir et al. [19] analyzed how friction-generated heat leads to shorter tool life, higher surface roughness and lower dimensional accuracy of the work material. The effects of workpiece material, cutting tool and machining process type were determined in detail. Jin et al. [20] investigated the variation of the convection heat transfer coefficient (CHTC) of the process fluids within the grinding zone by using hydrodynamic and thermal modeling. They


determined the fluid film thickness as a function of grinding wheel speed, porosity, grain size, fluid type, flow rate and nozzle size. The CHTC values were compared for a wide range of grinding regimes, including high efficiency deep grinding (HEDG), creep feed and finish grinding. Alves et al. [21] investigated an alternative to recycle cutting fluids by varying the plunge velocity in plunge cylindrical grinding of ABNT D6 steel, rationalizing the application of two cutting fluids and using a superabrasive CBN (cubic boron nitride) grinding wheel with vitrified binder; the output parameters evaluated were tangential cutting force, acoustic emission, roughness, roundness, tool wear, residual stress and surface integrity, using scanning electron microscopy (SEM) to examine the test specimens. Gviniashvili et al. [22] developed a model for the flow rate between a rotating grinding wheel and a workpiece. They found that the useful flow that passes through the contact zone is a function of the spindle power for fluid acceleration, the wheel speed and the delivery-nozzle jet velocity. Irani et al. [23] reviewed some of the common as well as some of the more obscure cutting fluid systems that have been employed in recent years, with an emphasis on creep-feed applications. These cutting fluids remove or limit the amount of energy transferred to the workpiece through debris flushing, lubrication and the cooling effects of the liquid. Hryniewicz et al. [24] proposed models of fluid flow in grinding with nonporous wheels. A smooth wheel was employed instead of a rough grinding wheel to simplify the analysis. Fluid flow was investigated for the laminar and turbulent regimes using the classical Reynolds equation of lubrication and a modified Reynolds equation for turbulent flows, respectively. It was found that the classical Reynolds equation reliably predicts the hydrodynamic pressure if the Reynolds number Re (based on the minimum gap size) is lower than about 300. Experimental results for 300 < Re < 1500 agree with the proposed turbulent flow model, which suggests that the flow in this range of Re was turbulent and that the fluid inertia is negligible.

3. EXPERIMENTAL ANALYSIS
Ebbrell et al. [25] studied the effects on grinding of the boundary layer of air that is entrained around a rotating grinding wheel. Their investigation aimed to show, through experiment and modeling, the effects of the boundary layer on cutting fluid application and how it can be used to aid delivery by increasing the flow rate beneath the wheel. Results from three experiments with different quantities of cutting fluid passing through the grinding zone were presented. Morgan et al. [26] addressed the quality of fluid required for grinding and the method of application using a computational fluid dynamics approach. Results from this research suggested that the supply flow rate needs to be 4 times the achievable 'useful' flow rate. Improved system design allowed the 'actual' useful flow rate to approach the 'achievable' useful flow rate. The achievable flow rate depends on wheel porosity and wheel speed, whereas the actual useful flow rate depends on nozzle position, design, flow rate and velocity. Li et al. [27] presented a theoretical model for the flow of grinding fluid through the grinding zone, as the boundary layer of air around the grinding wheel keeps most of the grinding fluid away from the grinding zone. Hence, they found that the conventional flood-delivery method of applying grinding fluid was not believed to fully penetrate this boundary layer; flood grinding, which typically delivers large volumes of fluid, was ineffective, especially under high speed grinding conditions. Kopac et al. [28] dealt with the contemporary aspects of grinding with regard to enhanced productivity and precision demands. High-performance grinding is essential to achieve high dimensional accuracy and surface integrity of ground components at optimum cost efficiency. Cameron et al. [29] found that, when grinding, wheel loading can clog the pores of the grinding wheel and accelerate thermal damage to the workpiece. To help reduce wheel loading, separate cleaning-jet systems can be applied to the grinding process using a high-speed coolant stream directed towards the wheel surface. They examined the influence of the speed, flow rate and orientation of the cleaning jet. For the experimental conditions used in their work, the results show that cleaning-jet orientation does not appear to have a significant effect; however, a threshold for the speed and flow rate of the cleaning jet was observed. Monici et al. [30] analyzed how the type and amount of cutting fluid used directly affect some of the main output variables of the grinding process, such as tangential cutting force, specific grinding energy, acoustic emission, diametrical wear, roughness and residual stress, together with scanning electron microscopy. To analyze the influence of these variables, they developed an optimized fluid application methodology to reduce the amount of fluid used in the grinding process and improve its performance in comparison with the conventional fluid application method. The results revealed that, in every situation, the optimized application of cutting fluid significantly improved the efficiency of the process, particularly the combined use of neat oil and a CBN grinding wheel.

4. SIMULATION WITH DIFFERENT NOZZLES AND GRINDING WHEEL
Brinksmeier et al. [31] described different methods for modeling and optimization of grinding processes. First, the quantities characterizing process and product quality were measured. Afterwards, different model types, e.g. physical-empirical basic grinding models as well as empirical process models based on neural networks, fuzzy set theory and standard multiple regression methods, were discussed for off-line process conceptualization and optimization using a genetic algorithm. The methods presented were integrated into an existing grinding information system, which was part of a three-control-loop system for quality assurance. Sakakura et al. [32] analyzed a characteristic feature of the grinding process: it is performed by a large number of abrasive grains with irregular shapes and random distribution, which enables accurate and high quality machining but complicates analysis of the process. Several computer simulations using the Monte Carlo method


were carried out. A simulation program based on the elastic behaviour model of a grain, previously investigated by the authors, was developed taking this background into account. The program focuses on the generation process of the workpiece surface and simulates the interaction of the grains with the workpiece, including the elastic and plastic deformation and the removal of workpiece material. Brinksmeier et al. [33] gave an overview of the current state of the art in modeling and simulation of grinding processes. Physical process models (analytical and numerical models) and empirical process models (regression analysis, artificial neural net models) as well as heuristic process models (rule based models) were taken into account and outlined with respect to their achievements. The models were characterized by process parameters such as grinding force, grinding temperature, etc. as well as work results including surface topography and surface integrity. Nguyen et al. [34] conducted research in two parts describing the kinematic simulation of the grinding process. The first part of their research was concerned with the generation of the grinding wheel surface. A numerical procedure for effectively generating the grinding wheel topography was suggested, based on the transformation of a random field. Nguyen et al. [35] described the kinematic simulation of the grinding process in the second part of the two-part series. The complex wheel-workpiece interaction was taken into consideration in the generation of the workpiece surface. An algorithm was proposed to identify the active abrasive grains and their attack angles from the wheel topography. Based on the critical values of the attack angle, each abrasive grain was determined either to cut, plough or rub the workpiece. Sinot et al. [36] identified the parameters that are influential in maintaining a clean wheel, as the grinding of certain materials, such as ductile materials that are hard to grind, implies particular working conditions. They proposed a cleaning criterion to estimate the efficiency of the cleaning process. Using an experimental setup, the significance of the influence of the nozzle position, the flow rate and pressure, the boundary layer of air around the rotating wheel and the particle rate contained in the fluid was assessed. Mandeep et al. [37], using computational fluid simulation, found that the air layer rotating with the grinding wheel prevents the cutting fluid from reaching the cutting zone, but when the fluid jet velocity becomes equal to or higher than the grinding wheel surface speed, the air layer can be successfully broken and the cutting fluid can reach the cutting zone, enabling effective cooling. Mandeep et al. [38] adapted a computational fluid simulation (using ANSYS CFX) approach to find the flow behaviour of air around the rotating grinding wheel when a scraper board is placed, as in surface grinding, because the high heat generation of the process makes effective cooling a great concern for minimizing the detrimental effects on the workpiece.

Figure 1: Pressure distribution around a scraper board placed at the centre level of the wheel [39].

An effective cooling method, with optimum nozzle orientation and cutting fluid pressure, can reduce the chances of workpiece damage, save valuable resources and obtain enhanced surface quality. Having found the peak pressure, the pressure drop region and the swirl direction, the nozzle location can be determined so that the flow coming out of the nozzle is sent to the exact cutting region.

Figure 2: Velocity vectors when the scraper is at centre level [39].

Anirban et al. [39] used a computational fluid simulation (using the ANSYS CFX module) approach to study six different nozzle shapes and cross sections and the grinding fluid flow behaviour through them. On the basis of the simulation study, the two best nozzle shapes, spline followed by convergent-divergent, were identified for obtaining the peak fluid velocity. The results were experimentally validated using all six nozzles from the simulation study. The results showed different level settings of the three factors for optimising the three response parameters. Multi-response signal-to-noise (MRSN) analysis was used to optimise the three responses together. The spline-shaped nozzle gave the best results for the three responses, followed by the convergent-divergent nozzle, which also validated the simulation study results.
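The air-layer criterion reported by Mandeep et al. [37], that the fluid jet breaks through the rotating air layer when its velocity is at least the wheel peripheral speed, is straightforward to check for a given setup. A sketch with illustrative numbers (the wheel size, spindle speed, flow rate and orifice diameter below are assumed for demonstration, not taken from the papers):

```python
# Hedged sketch of the jet-velocity rule of Mandeep et al. [37]:
# the jet penetrates the air layer when v_jet >= wheel peripheral speed.
import math

def wheel_surface_speed(diameter_m, rpm):
    """Peripheral speed (m/s) of a wheel of given diameter at a spindle speed."""
    return math.pi * diameter_m * rpm / 60.0

def nozzle_exit_velocity(flow_rate_l_min, orifice_diameter_mm):
    """Mean jet velocity (m/s) for a round orifice, v = Q / A."""
    q = flow_rate_l_min / 1000.0 / 60.0                      # m^3/s
    a = math.pi * (orifice_diameter_mm / 1000.0) ** 2 / 4.0  # m^2
    return q / a

# illustrative setup: 200 mm wheel at 2000 rpm, 20 l/min through a 4 mm orifice
v_wheel = wheel_surface_speed(0.200, 2000)
v_jet = nozzle_exit_velocity(20.0, 4.0)
print(f"wheel: {v_wheel:.1f} m/s, jet: {v_jet:.1f} m/s, "
      f"air layer broken: {v_jet >= v_wheel}")
```

For these assumed numbers the jet velocity (about 27 m/s) exceeds the peripheral speed (about 21 m/s), so the criterion is met; reducing the flow rate or enlarging the orifice lowers the jet velocity and can leave the air layer unbroken.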


Figure 3: Six different kinds of nozzles used for simulation and experimentation (dimensions not to scale) [40].

The workpiece speed and nozzle tip distance were the key factors affecting the surface roughness, dimensional stability and microhardness of the machined parts. Further, wheel speed and nozzle inclination angle did not affect any of the responses. Mandeep et al. [40] analysed that, due to the high heat generation of the cylindrical grinding process, effective cooling is of great concern to minimize the detrimental effects caused to the workpiece. Hence, effective cooling through the use of a proper nozzle can reduce the chances of workpiece damage and enhance surface quality. In this paper, the air flow behaviour around the rotating wheel and workpiece was simulated with a computational fluid approach in ANSYS CFX using different wheel and workpiece rotational speeds to calculate the maximum air velocity of the rotating air layer.

Figure 4: Velocity contour around the grinding wheel and workpiece in a cylindrical grinding process (wheel revolution: 1,921 rpm; workpiece: 245 rpm) [41].

The simulated results of the flow behaviour through six different kinds of nozzles were experimentally validated to study the effect of these different types of nozzles, as well as other process parameters, on the response parameter of surface roughness of the ground workpiece. Using the Taguchi design of experiments technique, experiments were planned and the process capability was statistically analyzed. The optimum solution was found to be coherent with the simulated results.

5. OPTIMIZATION OF NOZZLE ORIENTATION AND NOZZLE TIP DISTANCE
Baines-Jones et al. [41] revealed that a principal function of the fluid is to improve lubrication and subsequently reduce the risk of thermal damage and improve process performance. Grinding fluid is delivered at a particular flow rate and pressure. Correct nozzle design and positioning are critical elements of the delivery system, and their research presents initial work on nozzle design. Initial work in this area had identified flow coherency as a measure that allows for improved nozzle design. Their findings on the importance of other delivery system factors, including nozzle position, nozzle angle, nozzle type and jet velocity, were presented. Morgan et al. [42] proposed that it is important to the efficiency of the process and the performance of the operation that the fluid is delivered in a manner that ensures the desired jet velocity and adequate coverage of the contact zone. The nozzle geometry influences the fluid velocity and flow pattern on exit from the nozzle orifice. Recommendations were given to guide a user to the optimal design of nozzles that ensure adequate fluid supply to the contact zone. Webster [43] configured a nozzle assembly and method to apply coherent jets of coolant in a tangential direction to the grinding wheel in a grinding process, at a desired temperature, pressure and flow rate, to minimize thermal damage in the part being ground. Pilkington et al. [44] disclosed a method of determining the position of a coolant nozzle relative to a


rotating grinding wheel removing material from a work-piece, and an apparatus for practicing the method. The method includes the step of disposing a coolant nozzle having a base and a distal end for adjustable movement relative to the grinding wheel and the work-piece.

6. EFFECT OF GRINDING ON DIMENSIONAL CONTROL, SURFACE ROUGHNESS AND HARDNESS

Brinksmeier and Solter [45] reported that during machining the work-piece surface layer is plastically deformed and acts as a source of residual stress for the entire cross section of the work-piece, and as a consequence shape deviations occur. A new method to predict the shape deviation of machined work-pieces with complex geometry was proposed. It combines experimental results of machining work-pieces with simple geometries with finite element simulations. This was achieved by making use of the known source stresses in simple parts for which the approach was validated. Salonitis and Chryssolouris [46] analyzed the grind-hardening process, which utilizes the heat dissipated in the grinding area to induce a metallurgical transformation on the surface of the ground work-piece. In the case of grind-hardening of thin work-pieces or cylindrical work-pieces of small diameter, the quenching has to be assisted by the application of coolant fluid, so they investigated the utilization of the coolant fluid for the grind-hardening of small-diameter cylindrical parts. The rapid heating of the work-piece and the short austenitising time were taken into consideration both for the estimation of the hardness profile and of the hardness penetration depth (HPD). A finite element analysis (FEA) model was developed for this specific case and its predictions were verified experimentally. Rowe et al. [47] developed an improved thermal model which would accurately predict the position of the burn boundary. The main advance, compared to previous methods of thermal modeling, was the partitioning of the heat flux between the grinding wheel and the work-piece. This allows more realistic values of heat flux to be employed in the model. Expressions for upper and lower bound solutions were developed which predict the critical specific energy for the onset of burn. Chryssolouris et al. [48] reported that in grind hardening, the heat dissipated in the cutting area during grinding is used for the heat treatment of the work-piece. The hardness penetration depth was calculated for a given set of process parameters and compared with experimental data from a cylindrical dry grind-hardening process. Their model shows that the flow rate through the contact zone between the wheel and the work surface depends on wheel porosity and wheel speed, as well as on nozzle position, design, and fluid jet velocity. Furthermore, the model was tested on a surface grinding machine in order to correlate experiment and theory. Shaw and Vyas [49] reviewed the nature of two well-known forms of metallurgical damage of ground surfaces that involve untempered and overtempered martensite transformations, and the special characteristics that pertain in grinding, where the time at temperature before quenching is unusually short. Both of these forms of metallurgical damage involve martensitic transformations.

7. CONCLUSION

A lot of work has been done on the process of grinding and grinding fluid applications. Temperature rise in grinding is an important consideration, because it can adversely affect the surface properties and cause residual stresses in the work-piece. Furthermore, temperature gradients in the work-piece cause distortions by differential thermal expansion and contraction, which cause surface and sub-surface micro-cracks and also make it difficult to control dimensional accuracy. If the temperature is very high, the surface may burn, producing a bluish color on steels, which indicates oxidation. A burn may not be objectionable in itself. However, the surface layers may undergo metallurgical transformations, with martensite formation in high-carbon steels from re-austenization followed by rapid cooling. This effect is known as metallurgical burn, which is also a serious problem with nickel-base alloys. High temperatures in grinding may also lead to thermal cracking of the surface of the work-piece, known as heat cracks. Cracks are usually perpendicular to the grinding direction; however, under severe grinding conditions, parallel cracks may also develop. Temperature changes and gradients within the work-piece are mainly responsible for residual stress in grinding. A principal function of the fluid is to improve lubrication and thereby reduce the risk of thermal damage and improve process performance. The use of proper grinding fluids can effectively control these adverse effects. Not only is the proper selection of a grinding fluid important, but the proper selection of the nozzle and of the method of application of the fluid is also of great concern.

Because of the high rotational speed of the grinding wheel, an air layer surrounding the rotating wheel forms a thin film which prevents the cutting fluid from reaching the cutting region. This layer is often broken using a scraper board, but the position of the scraper board is of great importance. Furthermore, the cross section of the nozzle decides the peak fluid velocity achieved after exit from the nozzle, and the distance from the exit at which peak velocity is achieved decides the proper location of the nozzle for effective cooling.

REFERENCES

[1] Guo, C. and Malkin, S. 2000 'Energy partition and cooling during grinding', J. Manuf. Processes, 2, pp. 151-157.
[2] Malkin, S. and Guo, C. 2007 'Thermal analysis of grinding', CIRP Annals - Manufacturing Technology, 56, pp. 760-782.
[3] Jin, T. and Stephenson, D.J. 2008 'A study of the convection heat transfer coefficients of grinding fluids', CIRP Annals - Manufacturing Technology, 57, pp. 367-370.
[4] Jin, T., Stephenson, D.J. and Rowe, W.B. 'Estimation of the convection heat transfer coefficient of coolant within the grinding zone', Proc IMechE, Part B: J. Engineering Manufacture, 217(3), pp. 397-407.
[5] Cakir, O., Yardimeden, A., Ozben, T. and Kilickap, E. 2007 'Selection of cutting fluids in machining processes', J. Achiev. Mater. Manuf. Engg, 25(2), pp. 99-102.
[6] Kiyak, M. and Cakir, O. 2010 'Study of surface quality in dry and wet external cylindrical grinding', Int. J. Compu Mater Sci Surf Engg, 3, pp. 12-23.
[7] Choi, H.Z., Lee, S.W. and Kim, D.J. 2001 'Optimization of cooling effect in the grinding with mist type coolant', American Society for Precision Engineering Proceedings, Crystal City, Virginia.
[8] Li, C.H., Liu, G.Y., Hou, Y.L., Ding, Y.C. and Lu, B.H. 2009 'Modeling and experimental investigation of useful flow-rate in flood delivery grinding', Proc. Chinese Control and Decision Conference, pp. 5467-5471.
[9] Davies, T.P. and Jackson, R.G. 1981 'Air flow around grinding wheels', J. Precision Engg, 3, pp. 225-228.
[10] Webster, J.A., Cui, C., Mindek Jr., R.B. and Lindsay, R. 1995 'Grinding fluid application system design', CIRP Annals - Manufacturing Technology, 44, pp. 333-338.
[11] Webster, J.A. 2006 'Coherent jet nozzles for grinding applications', US Patent Application Publication, Publication number: US2006/7086930 B2, Saint-Gobain Abrasives, Inc.
[12] Brinksmeier, E., Tonshoff, H.K., Czenkusch, C. and Heinzel, C. 1998 'Modelling and optimization of grinding processes', J. Intelligent Manufacturing.
[13] Brinksmeier, E., Aurich, J.C., Govekar, E., Heinzel, C., Hoffmeister, H.W., Klocke, F., Peters, J., Rentsch, R., Stephenson, D.J., Uhlmann, E., Weinert, K. and Wittmann, M. 2006 'Advances in modeling and simulation of grinding processes', CIRP Annals - Manufacturing Technology, 55, pp. 667-696.
[14] Salonitis, K. and Chryssolouris, G. 2007 'Cooling in grind-hardening operations', Int. J. Adv. Manuf. Technol, 33, pp. 285-297.
[15] Rowe, W.B., Pettit, J.A., Boyle, A. and Moruzzi, J.L. 1988 'Avoidance of thermal damage in grinding and prediction of the damage threshold', CIRP Annals - Manufacturing Technology, 37(1), pp. 327-330.
[16] Tawakoli, T., Rasifard, A. and Rabiey, M. 2007 'High-efficiency internal cylindrical grinding with a new kinematic', International Journal of Machine Tools and Manufacture, 47(5), pp. 729-733.
[17] Oliveira, J.F.G., Silva, E.J., Guo, C. and Hashimoto, F. 2009 'Industrial challenges in grinding', CIRP Annals - Manufacturing Technology, 58(2), pp. 663-680.
[18] Kiyak, M. and Cakir, O. 2010 'Study of surface quality in dry and wet external cylindrical grinding', International Journal of Computational Materials Science and Surface Engineering, 3(1), pp. 12-23.
[19] Cakir, O., Yardimeden, A., Ozben, T. and Kilickap, E. 2007 'Selection of cutting fluids in machining processes', Journal of Achievements in Materials and Manufacturing Engineering, 25(2), pp. 99-102.
[20] Jin, T. and Stephenson, D.J. 2008 'A study of the convection heat transfer coefficients of grinding fluids', CIRP Annals - Manufacturing Technology, 57(1), pp. 367-370.
[21] Alves, M.C.S., Bianchi, E.C. and de Aguiar, P.R. 2008 'Grinding of hardened steels using optimized cooling', Ingeniare. Revista chilena de ingeniería, 16(1), pp. 195-202.
[22] Gviniashvili, V.K., Woolley, N.H. and Rowe, W.B. 2004 'Useful coolant flowrate in grinding', International Journal of Machine Tools and Manufacture, 44(6), pp. 629-636.
[23] Irani, R.A., Bauer, R.J. and Warkentin, A. 2005 'A review of cutting fluid application in the grinding process', International Journal of Machine Tools and Manufacture, 45(15), pp. 1696-1705.
[24] Hryniewicz, P., Szeri, A.Z. and Jahanmir, S. 2001 'Application of lubrication theory to fluid flow in grinding: Part I - Flow between smooth surfaces', Journal of Engineering for Industry, Transactions of the ASME, 123, pp. 94-100.
[25] Ebbrell, S., Woolley, N.H., Tridimas, Y.D., Allanson, D.R. and Rowe, W.B. 2000 'The effects of cutting fluid application methods on the grinding process', International Journal of Machine Tools and Manufacture, 40(2), pp. 209-223.
[26] Morgan, M.N., Jackson, A.R., Wu, H., Baines-Jones, V., Batako, A. and Rowe, W.B. 2008 'Optimization of fluid application in grinding', CIRP Annals - Manufacturing Technology, 57(1), pp. 363-366.
[27] Li, C.H., Liu, G.Y., Hou, Y.L., Ding, Y.C. and Lu, B.H. 2009 'Modeling and experimental investigation of useful flow-rate in flood delivery grinding', Proc. Chinese Control and Decision Conference, pp. 5467-5471.
[28] Kopac, J. and Krajnik, P. 2006 'High-performance grinding - A review', Journal of Materials Processing Technology, 175(1-3), pp. 278-284.
[29] Cameron, A., Bauer, R. and Warkentin, A. 2010 'An investigation of the effects of wheel cleaning parameters in creep feed grinding', International Journal of Machine Tools & Manufacture, 50, pp. 126-130.
[30] Monici, R.D., Bianchi, E.C., Catai, R.E. and de Aguiar, P.R. 2006 'Analysis of the different forms of application and types of cutting fluid used in plunge cylindrical grinding using conventional and superabrasive CBN grinding wheels', International Journal of Machine Tools and Manufacture, 46(2), pp. 122-131.
[31] Brinksmeier, E., Tonshoff, H.K., Czenkusch, C. and Heinzel, C. 1998 'Modeling and optimization of grinding processes', Journal of Intelligent Manufacturing, 9(4).
[32] Sakakura, M., Tsukamoto, S., Fujiwara, T. and Inasaki, I. 2006 'Visual simulation of grinding process', Intelligent Production Machines and Systems, 2nd I*PROMS Virtual International Conference, pp. 107-112.
[33] Brinksmeier, E., Aurich, J.C., Govekar, E., Heinzel, C., Hoffmeister, H.W., Klocke, F., Peters, J.,
Rentsch, R., Stephenson, D.J., Uhlmann, E., Weinert, K. and Wittmann, M. 2006 'Advances in modeling and simulation of grinding processes', CIRP Annals - Manufacturing Technology, 55(2), pp. 667-696.
[34] Nguyen, T.A. and Butler, D.L. 2005 'Simulation of precision grinding process, part 1: generation of the grinding wheel surface', International Journal of Machine Tools and Manufacture, 45(11), pp. 1321-1328.
[35] Nguyen, T.A. and Butler, D.L. 2005 'Simulation of surface grinding process, part 2: interaction of abrasive grain with the work-piece', International Journal of Machine Tools and Manufacture, 45(11), pp. 1329-1336.
[36] Sinot, O., Chevrier, P. and Padilla, P. 2006 'Experimental simulation of the efficiency of high speed grinding wheel cleaning', International Journal of Machine Tools & Manufacture, 46, pp. 170-175.
[37] Singh, M., Bhattacharya, A., Batish, A. and Singla, V.K. 'Computational fluid simulation of coolant flow behaviour through different nozzles for effective cooling in surface grinding', 2nd National Conference on Precision Metrology, SLIET Longowal, p. 18.
[38] Singh, M., Bhattacharya, A., Batish, A. and Singla, V.K. 2010 'Nozzle flow behaviour for effective cooling through computational fluid simulation in surface grinding using scraper board', International Conference on Frontiers in Mechanical Engineering, NIT Surathkal, pp. 148-157.
[39] Bhattacharya, A., Batish, A. and Singh, M. 2012 'Experimental studies to validate simulated results for nozzle effectiveness using multi-response optimisation during cylindrical grinding process', Int. J. Materials Engineering Innovation, 3(3/4).
[40] Singh, M., Sharma, Y. and Singh, J. 2014 'Experimental validation of simulated results through different nozzles for effective cooling in cylindrical grinding process to enhance workpiece surface quality', International Conference on Advancements and Futuristic Trends in Mechanical and Materials Engineering, pp. 195-202.
[41] Baines-Jones, V.A., Morgan, M.N., Allanson, D.R. and Batako, A.D.L. 'Grinding fluid delivery system design - nozzle optimization', European Research Council Grant No. GR/S82350/01.
[42] Morgan, M.N. and Baines-Jones, V. 2009 'On the coherent length of fluid nozzles in grinding', Key Engineering Materials - Progress in Abrasive and Grinding Technology, 404, pp. 61-67.
[43] Webster, J.A. (Storrs, CT, US) 2006 'Coherent jet nozzles for grinding applications', US Patent Application Publication, Publication number: US2006/7086930 B2, Saint-Gobain Abrasives, Inc.
[44] Pilkington, M.I. 2009 'Coolant nozzle positioning for machining work-pieces', US Patent number: US2009/7568968 B2, Rolls-Royce Corporation.
[45] Brinksmeier, E. and Solter, J. 2009 'Prediction of shape deviations in machining', CIRP Annals - Manufacturing Technology, 58, pp. 507-510.
[46] Salonitis, K. and Chryssolouris, G. 2007 'Cooling in grind-hardening operations', International Journal of Advanced Manufacturing Technology, 33, pp. 285-297.
[47] Rowe, W.B., Pettit, J.A., Boyle, A. and Moruzzi, J.L. 1988 'Avoidance of thermal damage in grinding and prediction of the damage threshold', CIRP Annals - Manufacturing Technology, 37(1), pp. 327-330.
[48] Chryssolouris, G., Tsirbas, K. and Salonitis, K. 2005 'An analytical, numerical and experimental approach to grind hardening', Journal of Manufacturing Processes, 7, pp. 1-9.
[49] Shaw, M.C. and Vyas, A. 1994 'Heat-affected zones in grinding steel', CIRP Annals - Manufacturing Technology, 43(1), pp. 279-282.
Optimization of Machining Parameters for Surface Roughness in Boring Operation using RSM

Gaurav Bansal, BGIET, Sangrur, gaurav_bansal87@yahoo.co.in
Jasmeet Singh, BGIET, Sangrur, jasmeet_singh23@rediffmail.com
ABSTRACT:
Machining is a manufacturing process in which unwanted material is removed from the work piece to obtain the desired shape and dimensions. Machining operations have been the core of the manufacturing industry since the industrial revolution. A machining process involves many process parameters which directly or indirectly influence the surface roughness of the product. A reasonably good surface finish is desired to improve the fatigue strength, corrosion resistance and aesthetic appeal of the product. Surface roughness varies with several parameters, of which feed, speed and depth of cut are important ones. Precise knowledge of these optimum parameters would facilitate good surface quality. Extensive study has been conducted in the past to optimize the process parameters in machining processes so as to obtain the best product. The current investigation is made on the boring process, and Response Surface Methodology is applied to the most effective process parameters, i.e. feed, cutting speed and depth of cut. The main effects (independent parameters), surface plots and contour plots of the variables have been considered separately to build the best subset of the model. Each boring parameter is considered at two levels. After performing the experiments, the roughness value is measured using a surface roughness measuring instrument. To analyze the data set, MINITAB-16 software has been used to reduce manipulation and help arrive at a proper improvement plan for the manufacturing process and techniques. The results of the analysis show that cutting speed and depth of cut have a significant contribution to the surface roughness, while feed rate has a less significant contribution.

Keywords: Optimization, Boring parameters, Surface roughness, Response surface methodology.

1. INTRODUCTION
Machining operations have been the core of the manufacturing industry since the industrial revolution. In the present manufacturing scenario, quality is a challenging aspect which must be looked upon; every manufacturing or production unit should be concerned about the quality of its products. Response surface methodology (RSM) is a collection of mathematical and statistical techniques useful for the modeling and analysis of problems. The objective is to optimize a response (output variable) which is influenced by several independent variables (input variables).

1.1 Boring Process
In machining, boring is the process of enlarging a hole that has already been drilled (or cast), by means of a single-point cutting tool (or of a boring head containing several such tools), for example as in boring a cannon barrel. Boring is used to achieve greater accuracy of the diameter of a hole, and can be used to cut a tapered hole. Boring can be viewed as the internal-diameter counterpart to turning, which cuts external diameters.

Fig. 1 Boring Process

There are various types of boring. The boring bar may be supported on both ends (which only works if the existing hole is a through hole), or it may be supported at one end; line boring implies the former. Back boring is the process of reaching through an existing hole and then boring on the "back" side of the work piece (relative to the machine headstock). Because of the limitations on tooling design imposed by the fact that the work piece mostly surrounds the tool, boring is inherently somewhat more challenging than turning, in terms of decreased tool-holding rigidity and increased clearance-angle requirements (limiting the amount of support that can be given to the cutting edge). These are the reasons why boring is viewed as an area of machining practice in its own right, separate from turning, with its own tips, tricks,
challenges, and body of expertise, despite the fact that they are in some ways identical.
2. LITERATURE REVIEW
Extensive study has been conducted in the past to optimize the process parameters in any machining process so as to obtain the best product. Traditionally, the selection of cutting conditions for metal cutting is left to the machine operator. In such cases the experience of the operator plays a major role, but even for a skilled operator it is very difficult to attain the optimum values each time. The main machining parameters in metal boring operations are cutting speed, feed rate and depth of cut. The setting of these parameters determines the quality characteristics of machined parts. Upendra Kumar Yadav et al. (2012) [1] discussed the effect of machining parameters, viz. cutting speed, feed rate and depth of cut, on surface roughness in CNC turning by the Taguchi method. An L27 orthogonal array, analysis of variance and the signal-to-noise ratio were used in that study. It was concluded that feed rate is the most significant factor affecting surface roughness, followed by depth of cut. Basim A. Khidhir et al. (2011) [2] investigated the effect of cutting speed, feed and depth of cut on surface roughness when machining nickel-based Hastelloy 276. It was found that a good surface finish is obtained with higher cutting speed, minimum feed rate and lower depth of cut.

3. EXPERIMENTAL PROCEDURE
The study has been performed on AISI 4130 steel alloy bars having dimensions of 40 mm diameter and 40 mm length, on a CNC boring machine, using a carbide tool of 0.6 mm nose radius. Further, the work has been channelized through the following adopted procedure:
• Check and prepare the CNC boring machine (Lokesh TL 250) for performing the machining operation.
• Cut the work piece by power saw and perform an initial boring operation on a simple lathe to get the desired dimensions of the work pieces.
• Perform straight boring operations on the specimens in various cutting environments involving various combinations of process control parameters such as spindle speed, feed and depth of cut. These experiments are pre-designed with RSM using MINITAB software and executed as per the matrix provided by the RSM technique.
• After this, surface roughness is found using a roughness measuring instrument. The surface roughness of each work-piece is measured at four different points; the average of these four values is taken as the final surface roughness value of that work-piece.

3.1 Parameters & Their Limits
Experimentation has been done by considering the following levels of the process variables.

Table 1 Process parameter levels

Process Variable | Unit   | Level 1 | Level 2
Cutting Speed    | m/min  | 100     | 200
Feed Rate        | mm/rev | 0.08    | 0.18
Depth of cut     | mm     | 1       | 2

Now, the RSM matrix is made using MINITAB software. Twenty observations were generated, the experiments were performed accordingly, and the surface roughness value was determined for each run. Table 2 shows the resultant matrix and surface roughness.

Table 2 Experiment Results

Sr No. | Cutting Speed | Feed rate | Depth of cut | Surface roughness (µm)
1      | 150           | 0.13      | 1.5          | 1.44
2      | 150           | 0.13      | 1.5          | 1.36
3      | 65.91         | 0.13      | 1.5          | 1.73
4      | 200           | 0.18      | 1            | 2.22
5      | 200           | 0.08      | 1            | 1.29
6      | 150           | 0.13      | 0.65         | 0.65
7      | 150           | 0.13      | 1.5          | 1.47
8      | 200           | 0.18      | 2            | 2.36
9      | 150           | 0.13      | 1.5          | 1.31
10     | 200           | 0.08      | 2            | 1.26
11     | 150           | 0.13      | 2.34         | 1.49
12     | 150           | 0.21      | 1.5          | 1.46
13     | 100           | 0.18      | 1            | 0.81
14     | 150           | 0.13      | 1.5          | 1.30
15     | 100           | 0.08      | 2            | 2.51
16     | 150           | 0.13      | 1.5          | 1.47
17     | 100           | 0.08      | 1            | 0.72
18     | 100           | 0.18      | 2            | 2.36
19     | 150           | 0.045     | 1.5          | 1.41
20     | 234.08        | 0.13      | 1.5          | 1.87

4. RESULTS & DISCUSSION
Now, various sets of analysis are made to validate the results.
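The 20 runs of Table 2 follow the layout of a rotatable central composite design (CCD): 8 factorial points at the Level 1/Level 2 settings, 6 axial points, and 6 centre replicates. As a rough illustration of how such a matrix is generated (a plain-Python sketch, not the MINITAB procedure the authors used; the centre and half-range values below are read off Table 1):

```python
import numpy as np
from itertools import product

# Rotatable central composite design (CCD) for 3 factors:
# 8 cube points, 6 axial points at alpha = (2^3)^(1/4) ~ 1.682,
# and 6 centre replicates -> 20 runs, as in Table 2.
center = np.array([150.0, 0.13, 1.5])     # cutting speed, feed rate, depth of cut
half_range = np.array([50.0, 0.05, 0.5])  # Level 2 minus centre, per factor
alpha = (2.0 ** 3) ** 0.25                # rotatability criterion for 3 factors

cube = np.array(list(product((-1.0, 1.0), repeat=3)))   # coded +-1 corners
axial = np.array([s * alpha * np.eye(3)[i]              # coded +-alpha axis points
                  for i in range(3) for s in (-1.0, 1.0)])
centre_runs = np.zeros((6, 3))                          # coded centre replicates

coded = np.vstack((cube, axial, centre_runs))  # 8 + 6 + 6 = 20 coded runs
runs = center + coded * half_range             # convert back to engineering units
print(runs.round(3))
```

With these settings the axial runs land at about 65.91/234.09 m/min, 0.046/0.214 mm/rev and 0.659/2.341 mm, which matches the extreme rows of Table 2 up to rounding.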
Fig. 2 Residual analysis of surface roughness (Ra): normal probability plot of residuals, histogram of residuals, residuals versus fitted values, and residuals versus observation order
Table 3 Estimated regression coefficients for Ra

Term                          | Coef     | SE Coef | T      | P
Constant                      | 1.38339  | 0.1028  | 13.452 | 0.000
Cutting Speed                 | 0.12084  | 0.1148  | 1.053  | 0.031
Feed rate                     | 0.25519  | 0.1148  | 2.224  | 0.050
Depth of cut                  | 0.59793  | 0.1148  | 5.211  | 0.000
Cutting Speed * Cutting Speed | 0.56162  | 0.1879  | 2.989  | 0.014
Feed rate * Feed rate         | 0.20012  | 0.1879  | 1.065  | 0.012
Depth of cut * Depth of cut   | -0.16471 | 0.1879  | -0.877 | 0.401
Cutting Speed * Feed rate     | 0.73640  | 0.2522  | 2.920  | 0.015
Cutting Speed * Depth of cut  | -1.14009 | 0.2522  | -4.521 | 0.001
Feed rate * Depth of cut      | -0.02636 | 0.2522  | -0.105 | 0.919

S = 0.252156, PRESS = 4.14474
R-Sq = 87.99%, R-Sq (pred) = 91.74%, R-Sq (adj) = 89.19%

Table 4 ANOVA table to check RSM statistics for Ra

Source         | DOF | Seq SS  | Adj SS  | Adj MS  | F     | P
Regression     | 9   | 4.66008 | 4.66008 | 0.51779 | 8.14  | 0.001
Linear         | 3   | 2.11117 | 2.11117 | 0.70372 | 11.07 | 0.002
Square         | 3   | 0.70612 | 0.70612 | 0.23537 | 3.70  | 0.050
Interaction    | 3   | 1.84279 | 1.84279 | 0.61426 | 9.66  | 0.003
Residual Error | 10  | 0.63582 | 0.63582 | 0.06358 | -     | -
Lack of fit    | 5   | 0.52105 | 0.52105 | 0.10421 | 4.54  | 0.061
Pure Error     | 5   | 0.11478 | 0.11478 | 0.02296 | -     | -
Total          | 19  | 5.29591 | -       | -       | -     | -

Fig. 2 shows the residual distribution diagrams for surface roughness. The residuals generally fall on a straight line, implying that the errors are distributed normally, and all the values are within the 95% confidence interval, so these values should yield better results in future prediction. The residuals show no obvious pattern or unusual structure, so it can be concluded that the residual analysis does not indicate any model inadequacy. The purpose of the analysis of variance is to investigate which design parameters significantly affect the surface roughness.
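The coefficients in Table 3 describe a full second-order model, Ra = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj). The following is a minimal sketch of the same fit, done with ordinary least squares in NumPy on the Table 2 data. It is not the MINITAB computation, and MINITAB reports the coefficients in coded units, so the individual coefficient values will differ even though the fitted surface is equivalent:

```python
import numpy as np

# Table 2 runs: [cutting speed (m/min), feed rate (mm/rev), depth of cut (mm), Ra (um)]
data = np.array([
    [150.00, 0.130, 1.50, 1.44], [150.00, 0.130, 1.50, 1.36],
    [ 65.91, 0.130, 1.50, 1.73], [200.00, 0.180, 1.00, 2.22],
    [200.00, 0.080, 1.00, 1.29], [150.00, 0.130, 0.65, 0.65],
    [150.00, 0.130, 1.50, 1.47], [200.00, 0.180, 2.00, 2.36],
    [150.00, 0.130, 1.50, 1.31], [200.00, 0.080, 2.00, 1.26],
    [150.00, 0.130, 2.34, 1.49], [150.00, 0.210, 1.50, 1.46],
    [100.00, 0.180, 1.00, 0.81], [150.00, 0.130, 1.50, 1.30],
    [100.00, 0.080, 2.00, 2.51], [150.00, 0.130, 1.50, 1.47],
    [100.00, 0.080, 1.00, 0.72], [100.00, 0.180, 2.00, 2.36],
    [150.00, 0.045, 1.50, 1.41], [234.08, 0.130, 1.50, 1.87],
])
X, y = data[:, :3], data[:, 3]

def quad_terms(v, f, d):
    # constant, linear, square and interaction terms of the second-order model
    return [1.0, v, f, d, v * v, f * f, d * d, v * f, v * d, f * d]

A = np.array([quad_terms(*row) for row in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least-squares fit
resid = y - A @ coef
r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(f"R-Sq = {100 * r2:.2f} %")
```

The printed R-Sq can be compared against the 87.99% reported in Table 3 as a sanity check of the transcribed data.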
4.1 Graphical Inferences for Ra
The plots are developed with the help of the software package MINITAB 16. The purpose of a main effects plot is to obtain a general idea of which main effects may be important. Fig. 3 shows the main effects plot for Ra.
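Each point of a main effects plot such as Fig. 3 is simply the mean response over all runs performed at one level of one factor. A small sketch of that calculation on the Table 2 data (an illustration, not the MINITAB output):

```python
import numpy as np

# Table 2 runs: [cutting speed (m/min), feed rate (mm/rev), depth of cut (mm), Ra (um)]
data = np.array([
    [150.00, 0.130, 1.50, 1.44], [150.00, 0.130, 1.50, 1.36],
    [ 65.91, 0.130, 1.50, 1.73], [200.00, 0.180, 1.00, 2.22],
    [200.00, 0.080, 1.00, 1.29], [150.00, 0.130, 0.65, 0.65],
    [150.00, 0.130, 1.50, 1.47], [200.00, 0.180, 2.00, 2.36],
    [150.00, 0.130, 1.50, 1.31], [200.00, 0.080, 2.00, 1.26],
    [150.00, 0.130, 2.34, 1.49], [150.00, 0.210, 1.50, 1.46],
    [100.00, 0.180, 1.00, 0.81], [150.00, 0.130, 1.50, 1.30],
    [100.00, 0.080, 2.00, 2.51], [150.00, 0.130, 1.50, 1.47],
    [100.00, 0.080, 1.00, 0.72], [100.00, 0.180, 2.00, 2.36],
    [150.00, 0.045, 1.50, 1.41], [234.08, 0.130, 1.50, 1.87],
])

factor_names = ("Cutting Speed", "Feed rate", "Depth of Cut")
for col, name in enumerate(factor_names):
    levels = np.unique(data[:, col])
    # mean Ra over all runs at each level: these are the points of the plot
    means = [data[data[:, col] == lv, 3].mean() for lv in levels]
    print(name, [(float(lv), round(float(m), 3)) for lv, m in zip(levels, means)])
```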
Fig. 3 Effect of boring parameters on Ra (main effects plots, data means, for cutting speed, feed rate and depth of cut)
Fig. 4 Surface plot of Ra vs Cutting speed & Depth of cut (hold value: feed rate 0.13 mm/rev)
Fig. 4-a Contour plot of Ra vs Depth of cut & Cutting speed (hold value: feed rate 0.13 mm/rev)

Fig. 5 Surface plot of Ra vs Cutting speed & Feed rate (hold value: depth of cut 1.5 mm)

Fig. 5-a Contour plot of Ra vs Feed rate & Cutting speed (hold value: depth of cut 1.5 mm)

Fig. 6 Surface plot of Ra vs Feed rate & Depth of cut (hold value: cutting speed 150 m/min)

Fig. 6-a Contour plot of Ra vs Depth of cut & Feed rate (hold value: cutting speed 150 m/min)

To understand how the response changes in a given direction by adjusting the design variables, response surface graphs are used. Figs. 4, 5 and 6 show the surface plots of Ra vs cutting speed & depth of cut, cutting speed & feed rate, and feed rate & depth of cut, respectively. In every graph, each combination of design variables generates a value of surface roughness. Next, contour plots are developed. Contour plots are basically orthographic views of the 3-D surface plots and consist of colored regions of the input variables bearing different values of the output response. In Fig. 4-a, for example, the dark green region reflects the combinations of cutting speed and depth of cut for which the surface roughness may reach 3.0 µm or more, the extreme light green region reflects the combinations for which it may vary between 1.5 and 2.0 µm, and the light blue region shows the combinations for which it varies between 1.0 and 1.5 µm.

4.2 Result
At last, the lower, upper and target values of the response were fed into the Response Optimizer tool of Minitab. After analysing the experimental data, the software provided a solution at 97.2% desirability: at a cutting speed of 234.08 m/min, a feed of 0.08 mm/rev and a depth of cut of 2.34 mm, a minimum Ra of 0.89 µm can be achieved.

5. CONCLUSION
The current study investigated the effect of machining parameters on the surface roughness. The following conclusions are drawn from the study:
1. Surface roughness could be effectively predicted by using spindle speed, feed rate, and depth of cut as the input variables.
2. Surface roughness is mainly affected by cutting speed, followed by depth of cut and feed rate. The parameters taken in the experiments are optimized to obtain the minimum surface roughness possible. The optimum settings of the cutting parameters are:
   i) Cutting Speed = 234.08 m/min
   ii) Feed rate = 0.08 mm/rev
   iii) Depth of cut = 2.34 mm
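Minitab's Response Optimizer works by maximizing a desirability function over the factor space; for a smaller-is-better response with target T and upper bound U, the individual desirability is d = (U - yhat)/(U - T) for T <= yhat <= U. A crude numerical analogue of the search step is sketched below (an assumption for illustration, not the authors' tool): fit the quadratic model to the Table 2 data and grid-search the experimental box for the smallest predicted Ra. The optimum found this way need not coincide exactly with the 97.2%-desirability solution reported above.

```python
import numpy as np
from itertools import product

# Table 2 runs: [cutting speed (m/min), feed rate (mm/rev), depth of cut (mm), Ra (um)]
data = np.array([
    [150.00, 0.130, 1.50, 1.44], [150.00, 0.130, 1.50, 1.36],
    [ 65.91, 0.130, 1.50, 1.73], [200.00, 0.180, 1.00, 2.22],
    [200.00, 0.080, 1.00, 1.29], [150.00, 0.130, 0.65, 0.65],
    [150.00, 0.130, 1.50, 1.47], [200.00, 0.180, 2.00, 2.36],
    [150.00, 0.130, 1.50, 1.31], [200.00, 0.080, 2.00, 1.26],
    [150.00, 0.130, 2.34, 1.49], [150.00, 0.210, 1.50, 1.46],
    [100.00, 0.180, 1.00, 0.81], [150.00, 0.130, 1.50, 1.30],
    [100.00, 0.080, 2.00, 2.51], [150.00, 0.130, 1.50, 1.47],
    [100.00, 0.080, 1.00, 0.72], [100.00, 0.180, 2.00, 2.36],
    [150.00, 0.045, 1.50, 1.41], [234.08, 0.130, 1.50, 1.87],
])
X, y = data[:, :3], data[:, 3]

def quad_terms(v, f, d):
    # terms of the full second-order response surface model
    return np.array([1.0, v, f, d, v * v, f * f, d * d, v * f, v * d, f * d])

coef, *_ = np.linalg.lstsq(np.array([quad_terms(*r) for r in X]), y, rcond=None)

# brute-force search of the experimental box for the minimum predicted Ra
grid = product(np.linspace(65.91, 234.08, 30),
               np.linspace(0.045, 0.214, 30),
               np.linspace(0.65, 2.34, 30))
best = min((float(quad_terms(v, f, d) @ coef), v, f, d) for v, f, d in grid)
ra, v, f, d = best
print(f"predicted min Ra = {ra:.2f} um at v = {v:.1f} m/min, "
      f"f = {f:.3f} mm/rev, d = {d:.2f} mm")
```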
REFERENCES
[1]. Upinder Kumar Yadav, Deepak Narang and Pankaj Sharma Attri, "Experimental Investigation and Optimization of Machining Parameters for Surface Roughness in CNC Turning by Taguchi Method", International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, Vol. 2, Issue 4, July-August 2012, pp. 2060-206.
[2]. Basim A. Khidhir and Bashir Mohamed, "Analyzing the effect of cutting parameters on surface roughness and tool wear when machining nickel based Hastelloy-276", IOP Publishing (2011).
[3]. Harsimran Singh Sodhi, Dhiraj Prakash Dhiman, Ramesh Kumar Gupta and Raminder Singh Bhatia, "Investigation of Cutting Parameters for Surface Roughness of Mild Steel in Boring Process Using Taguchi Method", International Journal of Applied Engineering Research, ISSN 0973-4562, Vol. 7, No. 11 (2012), Research India Publications; http://www.ripublication.com/ijaer.htm.
[4]. H. R. Ghan and S. D. Ambekar, "Optimization of cutting parameter for Surface Roughness, Material Removal Rate and Machining Time of Aluminium LM-26 Alloy", International Journal of Engineering Science and Innovative Technology (IJESIT), Volume 3, Issue 2, March 2014.
[5]. Mihir Patel and Vivek Deshpande, "Application of Taguchi Approach for Optimization of Roughness for Boring Operation of E 250 B0 Standard IS: 2062 on CNC TC", IJEDR, Volume 2, Issue 2, ISSN: 2321-9939.
[6]. Rakesh K. Patel and H. R. Prajapati, "Parametric analysis of surface roughness (SR) and material removal rate (MRR) of harden steel on CNC turning using ANOVA analysis: a review", International Journal of Engineering Science and Technology (IJEST).
[7]. Nuran Bradley, "The Response Surface Methodology".
ELASTIC - PLASTIC & CREEP PHENOMENON IN SOLIDS

Gaurav Verma Kulwinder Singh


Assistant Prof. in Mathematics Assistant Prof.In Mathematics
Gobindgarh Public College, Alour Punjab Agricultural University, Ludhiana
gkdon85@gmail.com kulwinder85pau@gmail.com

ABSTRACT
Advanced research in mechanics deals with the elastic, plastic and creep transitions which occur under stress and strain relations. The phenomenon of elastic-plastic and creep transition was described successfully by B. R. Seth in 1962. The elastic-plastic and creep transition problems in solids are helpful for calculating the stresses at initial yielding and in the fully plastic state. In the current literature, the elastic-plastic and creep phenomenon has been elaborated using the Seth transition theory, which helps the designer to make safe and economical mechanical parts of machines.

Keywords
Elastic, Plastic, Creep, Transition, Deformation.

1. INTRODUCTION
Solid mechanics is the branch of continuum mechanics that deals with the behavior of solid materials, especially their motion and deformation under the influence of forces, thermal effects and other agents. It is therefore regarded as the science of forces and motions, with its various civil and mechanical engineering applications. These applications have been rendering valuable service to mankind since the very beginning of our civilization, and they have been achieved through a proper understanding of the principles of mechanics with certain postulates and assumptions based on experiments. Solid mechanics is thus necessary not only for the basic understanding of mechanical phenomena but also to advance engineering techniques in most areas of technology. With various developments, including knowledge of new materials, it is necessary to consider more thoroughly the creative application of scientific principles to design or develop machines, apparatus or manufacturing processes under scientific conditions, so that both safety and economy hold good. In order to make a mechanical body safe, a designer must have knowledge of the limiting conditions of stress at which temporary and permanent deformation start to develop, so that the danger of yielding or fracture is eliminated.

Deformation is a physical phenomenon that causes a change in the shape, size or position of a body part as a result of compression, deflection or extension [1]. A deformation may occur due to body forces, internal pressure, external loads or temperature changes in the body. Deformations which are recovered after the removal of the body forces are called elastic deformations; in this case, the body completely recovers its original configuration. On the other hand, a deformation that remains even after the body forces have been removed is called plastic deformation. It is a type of irreversible deformation that occurs in material bodies after the stresses have attained a certain threshold value known as the elastic limit or yield stress. If the state of deformation remains constant throughout the whole material body, the deformation is called homogeneous.

Fig.: Classification of deformation into elastic (recoverable) and plastic (permanent).

2. THEORY OF ELASTICITY
In physics, elasticity (from Greek ἐλαστός, "ductile") is the tendency of solid materials to return to their original shape after being deformed. Solid objects will deform when forces are applied to them. If the material is elastic, the object will return to its initial shape and size when these forces are removed. The physical reasons for elastic behavior can be quite different for different materials. In metals, the atomic lattice changes size and shape when forces are applied (energy is added to the system); when the forces are removed, the lattice goes back to its original lower-energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied. The theory of elasticity has been continuously developed by various investigators for anisotropic and isotropic bodies since 1950. This theory is the solid foundation for scientists in the design of engineering structures because of its increasing application to engineering problems. The concept of stress and strain is elaborated by I. S. Sokolnikoff in the book "The Mathematical Theory of Elasticity". A load applied to a solid structure originates internal forces within the body, called stress, which cause deformation; the deformation of the material is called strain [6]. The state of stress at any point of the medium is completely characterized by the specification of nine quantities called the components of the stress tensor. In the same manner, there are components of the strain tensor which characterize pure deformation. Stress and strain move in parallel with each other and are related under various conditions.
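The linear relation between the stress and strain tensors of a homogeneous isotropic body (the generalized Hooke's law, T_ij = λ δ_ij Θ + 2μ e_ij) can be sketched numerically. This is a minimal illustration; the Lamé constants below are assumed values for demonstration, not material data from the paper.

```python
# Generalized Hooke's law for a homogeneous isotropic body:
#   T_ij = lam * delta_ij * Theta + 2 * mu * e_ij,   Theta = e_11 + e_22 + e_33
# Lame's constants below are illustrative, not taken from the paper.
LAM, MU = 1.0e5, 0.8e5  # MPa (assumed)

def stress_from_strain(e):
    """Return the stress tensor T (3x3 list) for a strain tensor e (3x3 list)."""
    theta = e[0][0] + e[1][1] + e[2][2]  # first strain invariant
    return [[LAM * theta * (1 if i == j else 0) + 2 * MU * e[i][j]
             for j in range(3)] for i in range(3)]

# Pure uniform dilatation: e_ij = 0.001 * delta_ij
e = [[0.001 if i == j else 0.0 for j in range(3)] for i in range(3)]
T = stress_from_strain(e)
print(T[0][0])  # lam*3e-3 + 2*mu*1e-3 = about 460.0 MPa
print(T[0][1])  # shear components stay zero for pure dilatation
```

Because the material is isotropic, a pure dilatation produces equal normal stresses and no shear, which the computed tensor confirms.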
The stress-strain relations are concerned with the mathematical characterization of elastic solids. It was Robert Hooke who gave the first rough law of proportionality between forces and displacements: he said that the extension is proportional to the force. This implies that strain is directly proportional to stress, that is T = Ee, where E is the constant of proportionality called the modulus of elasticity. A natural generalization of Hooke's law immediately suggests that at each point of the medium the strain components e_ij are linear functions of the stress components T_ij; these linear relations involve the elastic coefficients of the material. It should be noted that if the elastic properties of a body are the same in all directions about any given point, the body is called isotropic, whereas an anisotropic body is the opposite, having different elastic properties in different directions at a given point. Materials with three mutually perpendicular directions of elastic symmetry are called orthotropic. Hooke's law for a homogeneous isotropic body can be written in the form

    T_ij = λ δ_ij Θ + 2μ e_ij,

where λ and μ are Lamé's constants and Θ = e_11 + e_22 + e_33 is the first strain invariant. It is clear from this equation that if we have the strain measures we can easily find the stress components, and vice versa. These relations hold good up to a certain limit, after which the elastic body behaves like a plastic body. For finite deformation, the strain components are given by Seth's generalized strain measure

    e_ii = (1/n) [1 − (1 − 2 e^A_ii)^(n/2)],   i = 1, 2, 3,

where e^A_ii are the principal Almansi (finite) strain components and n is the strain-measure exponent. Using the generalized form of Hooke's law for a homogeneous isotropic body, the stress components again follow as

    T_ij = λ δ_ij Θ + 2μ e_ij.

3. THEORY OF PLASTICITY
In physics and materials science, plasticity describes the deformation of a material undergoing non-reversible changes of shape in response to applied forces. For example, a solid piece of metal being bent or pounded into a new shape displays plasticity, as permanent changes occur within the material itself. Designers are likewise interested in the theory of plasticity: it is helpful in understanding deformation behavior so as to avoid excessive deflection or distortion in machine parts. The scientific study of the concept of plasticity started in 1864. The theory of plasticity, like the theory of elasticity, deals with the calculation of stress and strain of a plastically deformed body, and with mathematical techniques for calculating non-uniform distributions of stress and strain in a fully plastic body. Here the strain is regarded as macroscopically uniform, but the plastic distortion is confined to narrow bands which extend through the crystal [5]. Plastic distortion is observed in most metals and can vary widely due to different physical structures. Design engineers are interested in knowing the distortion behavior and plastic flow of different materials, as this is useful for measuring and avoiding excessive distortion in the body parts of machines. If the stress exceeds a critical value, as mentioned above, the material will undergo plastic, or irreversible, deformation. This critical stress can be tensile or compressive. The Tresca and the von Mises criteria are commonly used to determine whether a material has yielded; however, these criteria have proved inadequate for a large range of materials, and several other yield criteria are in widespread use.

From the viewpoint of design, therefore, plasticity is concerned with predicting the safe limits for use of a material under combined stresses, i.e., the maximum load which can be applied to a body without causing:
◦ Excessive yielding
◦ Flow
◦ Fracture
So plasticity is concerned with understanding the mechanism of plastic deformation of metals.

4. THEORY OF CREEP
In materials science, creep (sometimes called cold flow) is the tendency of a solid material to move slowly or deform permanently under the influence of mechanical stresses. It can occur as a result of long-term exposure to levels of stress that are still below the yield strength of the material. Creep is more severe in materials that are subjected to heat for long periods, and it generally increases as they near their melting point. Creep is thus the gradual increase of the plastic strain in a material with time at constant load. Particularly at elevated temperatures, some materials are susceptible to this phenomenon, and even under the constant load mentioned, strains can increase continually until fracture. This form of fracture is particularly relevant to nuclear reactors, rocket motors, furnaces, turbine blades etc. The concept of creep in solids has been treated by various authors, including Folke K. G. Odquist, Finnie and Heller. Creep effects under mechanical stress are observed in most solid materials. The creep behavior of a material can be divided into three stages: the deformation in the primary stage takes place early in the loading; it then reaches a steady state, called the secondary creep stage, followed by the tertiary stage, in which the strain rate accelerates and fracture soon occurs, as shown in the figure [9].

While studying creep effects, we generally neglect elastic deformations as compared with plastic or creep deformations. This assumption is very useful in the simplification of many problem cases, and we take into account plastic deformation with strain hardening, viscous flow under constant stress,
deterioration with time and total deformation [7]. This combination of assumptions gives rise to a "theory of total creep deformation" which is particularly suited to describing the behavior of materials in the secondary stage of creep, taking account of primary creep with its total amount as a correction. A body which obeys the law of plastic deformation with strain hardening, viscous flow under constant stress and deterioration with time can be characterized as rigid non-linear viscoplastic-deteriorating.

Fig 1: Three stages of creep deformation

5. RESEARCH METHODOLOGY
The transition approach to deformation has been successfully applied by various researchers to a large number of problems in elastic, plastic and creep situations. The classical macroscopic treatment of problems in (i) plasticity, (ii) creep and (iii) relaxation has to assume semi-empirical yield conditions like those of Tresca and von Mises, and creep strain laws like those of Norton, Odquist and others. This is a direct consequence of using linear strain measures, which neglect the non-linear transition region through which the yield occurs, and of the fact that creep and relaxation strains are never linear [3]. The classical elastic-plastic model therefore does not take into consideration the non-linear part through which the transition takes place. In the current literature, the main emphasis is on the study of elastic-plastic and creep transitions and on solving various problems of solids by considering the non-linear part through which the transition takes place. The transition theory of B. R. Seth is helpful for dealing with these types of problems. In Seth's theory, the main consideration is that the elastic-plastic transition is treated as an asymptotic phenomenon at the transition points of the deformation, and the transition states are non-linear and irreversible in nature. Therefore, in elasticity-plasticity the non-linear terms are very important. When adopting Seth's theory, the research methodology for a problem is as follows:

(i) In any problem of elastic-plastic transition, first of all the displacement components of the solid are defined, that is, the displacement components u, v and w are taken.

(ii) Then the generalized strain measure of the elastic-plastic transition problem is taken, and using the generalized Hooke's law the corresponding stress components are obtained.

(iii) By solving the equations of equilibrium, we get non-linear differential equations from which the turning points can be obtained. The turning points are ±∞ and −1, where ±∞ corresponds to elastic-plastic transitions and −1 corresponds to creep transitions.

(iv) The asymptotic solution at these points gives the displacements and the stresses, and no semi-empirical yield condition is necessary.

This methodology is useful for finding the solution through the principal stress and is helpful in finding the initial yielding. At the fully plastic state, the displacement components are obtained.

6. NATURE OF ELASTIC-PLASTIC & CREEP DEFORMATION
In the classical concept, the elastic and plastic states of a body are two different regions, that is, the material body is separated by a yield surface on the basis of symmetry and physical conditions [11]. In the elastic region the theory of linear elasticity works, and in the plastic region the von Mises equations are used with a yield and boundary condition. In 1868, Tresca considered that there exists a mid-zone area between the elastic and plastic states of a solid, which is against the classical theory of elastic-plastic transitions. The classical model tells only about the linear behavior of the elastic-plastic transition; it does not tell about the non-linear behavior of the transition, as shown in the figure [12]. Many authors have not recognized this intermediate state as a state separate from the elastic and plastic ones. It was B. R. Seth who first explained the non-linear nature of the transition state. Therefore, B. R. Seth has elaborated the concept of this intermediate region.
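The semi-empirical creep strain laws named in Section 5 (Norton, Odquist) can be sketched numerically for the secondary (steady-state) stage. The sketch below assumes Norton's power law with purely illustrative constants; none of the values come from the paper.

```python
# Secondary-stage creep under Norton's power law:
#   strain_rate = A * sigma**n   (constant stress, constant temperature)
# A, n and sigma below are illustrative assumptions, not material data.
A, n = 1.0e-15, 5.0   # material constants (assumed)
sigma = 100.0         # applied stress, MPa (assumed)

def creep_strain(t_hours, A=A, n=n, sigma=sigma):
    """Accumulated secondary-stage creep strain after t_hours at constant stress."""
    return A * sigma**n * t_hours

print(creep_strain(1000.0))  # about 0.01 accumulated strain after 1000 h
```

Because the strain rate is constant in the secondary stage, the accumulated strain grows linearly in time; the primary and tertiary stages shown in Fig 1 are not captured by this law.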
He has named this region the "transition region", and has developed a transition theory of elastic-plastic and creep transitions. Seth [5] has defined the concept of a generalized strain measure which, when applied to the governing differential equation of the medium, eliminates ad hoc assumptions like incompressibility, strain laws, yield conditions etc., and these constitutive equations give elastic-plastic and creep results through some transition functions.

7. SCOPE OF STUDY
The scope of this study is to provide guidelines to designers for making products. It deals with developing new ideas to meet the changing demands of products in machinery around the world, so that an innovative, collaborative spirit between people can be developed. This can lead to the production of the industry's broadest range of engineering materials [10]. The motive of the study is to find conditions under which the life of material parts in machines becomes longer: reduced wear on mating parts, better mechanical damping (less noise), faster operation of equipment, less power needed to run equipment, chemical and corrosion resistance etc. By studying the nature of transitions in various material bodies, one can determine the applications and mechanical requirements of the body.

8. CONCLUSION
From the study of elastic-plastic and creep transitions, we can say that various elastic-plastic and creep problems can be solved under different conditions. These problems are non-linear in nature and are treated according to B. R. Seth's transition theory. When solving them, there is no need for semi-empirical laws and ad hoc assumptions like creep strain laws, yield conditions etc. These problems are also of great importance due to the higher cost and scarcity of materials for the design of machines. Knowledge of the conditions under which a body becomes fully plastic or gets fractured is likewise useful for making safer designs of machines. Therefore, in order to enhance the life of various mechanical tools, the study of elastic-plastic transition problems has paramount value.

9. ACKNOWLEDGMENTS
The author wishes to express his sincere thanks to his Shree Guru Maharaj Ji for support and encouragement throughout his whole life.

REFERENCES
[1] Seth, B. R., "Generalized Strain and Transition Concepts for Elastic-Plastic Deformation, Creep and Relaxation", Proc. XIth Int. Congress of Appl. Mech., Munich, pp. 383-389, 1964.
[2] Borah, B. N., "Thermo Elastic-Plastic Transition", Contemporary Mathematics, vol. 379, pp. 93-111, 2005.
[3] Seth, B. R., "Transition Theory of Elastic-Plastic Deformation, Creep and Relaxation", Nature, vol. 195, no. 4844, pp. 896-897, 1962.
[4] Seth, B. R., "Generalized strain measure with applications to physical problems", Rep. 248.
[5] Hill, R., "The Mathematical Theory of Plasticity", Clarendon Press, Oxford, 1950.
[6] Sokolnikoff, I. S., "The Mathematical Theory of Elasticity", McGraw-Hill, 1946.
[7] Odquist, F. K. G., "The Mathematical Theory of Creep and Creep Rupture", Clarendon Press, Oxford, 1974.
[8] Pankaj, T., "Deformation in a thin rotating disc having variable thickness and edge load with inclusion at the elastic-plastic transitional stresses", vol. 12, no. 1, 2012.
[9] Creep.pdf, January 25, 2009.
[10] PropAppGuide_scope and apps.pdf, Quadrantplastics.com.
[11] Purushothama, C. M., "Elastic Plastic Transition", ZAMM, 45 (1965), Heft 6, Seite 401-408.
[12] Seth, B. R., "Elastic Plastic Transition in Shells & Tubes under Pressure", ZAMM, vol. 43, p. 345, 1963.
Finite Element Analysis of a Muff Coupling using CAE Tool

Rajeev Kumar, Assistant Professor, Mech. Engg. Dept., LPU, Phagwara. rajeev.14584@lpu.co.in
Mayur Randive, Research Scholar, Mech. Engg. Dept., LPU, Phagwara. randive.mayur08@gmail.com
Gurpreet Dhaul, Research Scholar, Mech. Engg. Dept., LPU, Phagwara. gurpreetdhaul@gmail.com
ABSTRACT
The performance of a muff coupling relies heavily on successful power transmission from one shaft to another, which strongly affects various parameters and the reliability of the muff coupling. Advances taking place in servo applications have increased the need for a predictive tool for simulating the stress analysis and displacement analysis under operating conditions. Hyper-Mesh is used as a pre-processor, and a stress analysis and displacement analysis of a muff coupling has been done. This paper depicts the validation of the design of a muff coupling using finite element analysis. The analysis has been carried out with actual design considerations and loading conditions. A coupled stress and displacement linear static analysis has been carried out for a given torque to determine the maximum deflection, the stress distribution and its location in the muff coupling. OptiStruct and RADIOSS are used as the solvers in this problem. It has been observed that the stresses and displacements in the key, shaft 1 and shaft 2 are within safe limits and the structure can withstand the given torque. Hyper-Mesh and HyperView are used for post-processing in this problem.

General Terms
Finite Element Analysis.

Keywords
Displacement Analysis, Muff Coupling, Pre-processor, Post-processor, Solver, CAE Tool, Stress Analysis.

1. INTRODUCTION
Couplings have historically been imprecise, inexpensive, and often home-made components for simple shaft-to-shaft connections. In the past, many people would not consider using a rigid coupling in any servo application. However, smaller-sized rigid couplings, especially those made of aluminum, cast iron, steel etc., are increasingly being used in motion control applications due to their high torque capacity, stiffness, and zero backlash. Rigid couplings are torsionally stiff couplings with virtually zero windup under torque loads. If any misalignment is present in the system, the forces will cause the shafts, bearings or coupling to fail prematurely. Rigid couplings cannot be run at extremely high rpm, because they cannot compensate for the thermal changes in the shafts caused by high-speed use. However, in situations where misalignment can be tightly controlled, rigid couplings offer excellent performance characteristics in servo applications. A sometimes overlooked advantage of rigid couplings is that they can be used to establish shaft alignment in misaligned systems. To establish shaft alignment, the motor and component mounts need to be loosened to ensure there is free movement. Then connect the shafts with the rigid coupling which, if precisely made, will align the shafts. Lastly, center the components on any remaining free play and tighten the mounts.

1.1 Purpose of Shaft Coupling
A shaft coupling is a device used to connect two shafts together at their ends for the purpose of transmitting power. Couplings do not normally allow disconnection of shafts during operation; however, there are torque-limiting couplings which can slip or disconnect when some torque limit is exceeded.

The primary purpose of couplings is to join two pieces of rotating equipment while permitting some degree of misalignment or end movement or both. By careful selection, installation and maintenance of couplings, substantial savings can be made in reduced maintenance costs and downtime.

Shaft couplings are used in machinery for several purposes, the most common of which are the following:
 To provide for the connection of shafts of units that are manufactured separately, such as a motor and generator, and to provide for disconnection for repairs or alterations.
 To provide for misalignment of the shafts or to introduce mechanical flexibility.
 To reduce the transmission of shock loads from one shaft to another.
 To introduce protection against overloads.

2. RESEARCH REVIEW
There is a vast amount of literature related to finite element analysis, and many publications indicate the successful implementation of FEA on various components.

Adeyeri et al. studied the procedural steps involved in the design of a coupling and the development of a software package using Java as a tool for the design and drafting of the coupling [1]. Pedersen studied how numerical finite element (FE) analysis can improve the prediction of stress concentration in the keyway; using shape optimization and a simple super-elliptical shape, it is shown that the fatigue life of a keyway can be greatly improved, with up to a 50 per cent reduction in the maximum stress level, and the design changes are simple and therefore practical to realize with only two active design parameters [2].

Madan et al. studied fatigue cracks in a coupling sleeve, all of them initiated at the spline root region [4]. By improving the topology of the geometry we can improve the quality
of the mesh and get better analysis results; the improvement procedures provided there apply both to elements interior to the mesh and to elements connected to the boundary [3].

3. PROBLEM DEFINITION
The finite element analysis of a muff coupling finds widespread application in various industries; here it has been carried out using CAE tools. The design of the muff coupling has been done by taking data from various references and books. The muff coupling is designed as a dividing structure; therefore, it is easy to assemble, disassemble and remove from the shaft. Precisely manufactured rigid couplings with honed bores are increasingly used in motion control applications [5] where components are properly aligned.

3.1 CAD Model
The CAD model has been prepared in SOLID EDGE, the meshing tool used is Hyper-Mesh, and the solver used is RADIOSS linear. The results are to be validated by comparison with the experimental results. The 2-D drawing of the muff coupling is shown in Fig 1.

Figure 1: 2-D drawing of muff coupling

3.2 Geometry Preparation
After the CAD data has been imported, an Edit/Surface/Edge Match is performed on the geometry in order to prepare the surfaces for meshing. This involves the removal of features and changing the shape of a part in order to simplify the geometry. Certain details of the shape, such as small holes or blends, may simply not be necessary for the analysis and are removed; when these details are removed, the analysis can run more efficiently. Changing the geometry to match the desired shape can also allow a mesh to be created more quickly.

3.3 Meshing
Meshing is done to reduce the degrees of freedom from infinite to finite. The geometric surfaces of all the components of the muff coupling are meshed using 2-D mixed elements; mixed mode is commonly preferred due to its better mesh transition pattern. Based upon the analysis and hardware configuration, an element edge length of 2 mm is used. For better representation of the hole geometry and smooth mesh flow lines, holes are modeled with an even number of elements. After all surfaces have been meshed, the next step involves equivalencing all nodes.

Hyper-Mesh has tools for determining the causes of mesh failure, such as self-intersections, free edges, problems with element normals, or duplicate elements. The 3-D tetra mesher, available with Hyper-Mesh, is used to create the solid tetra mesh. Quad elements are split into trias and converted to four-noded tetras. The overall analysis approach is to create a refined 3-D tetra mesh from the imported CAD geometry for the muff coupling. After the meshing of the model is complete, a quality check is performed, because result quality is directly proportional to element quality. Quality parameters like skew, aspect ratio, collapse, included angles, Jacobian, stretch etc. are measures of how far a given element deviates from the ideal shape.

RBE2 elements are used to distribute the forces and moments equally among all the connected nodes irrespective of where the forces and moments are applied. RBE3 elements are used to transmit torque to a body.

Figure 2: RBE3 and RBE2 elements

3.4 Material, Component, and Property Collectors
In the material collector we assign the material to the components created in the component collector. There are four components of the muff coupling. While creating the material collector we specify the MAT1 card image for an isotropic material. During meshing we put different surfaces into different component collectors; for example, the muff is in the muff component collector, the shaft is in the shaft component collector, etc.

After creating the material collector we assign properties by creating property collectors. The card image used is PSOLID. As we create the properties, we assign them to the respective component collectors. To cross-check whether the properties are assigned or not, we can check the property tables in the utility tab.

3.5 Loads and Boundary Conditions
Here we specify the applied forces, loads, moments, torque, pressure, velocity etc., i.e., the boundary conditions. Applying the load is a very crucial consideration in the analysis. In the muff coupling, we applied a torque on the shaft by creating an RBE3 element, then creating the independent and dependent nodes, and then applying the moment on the RBE3 dependent node, as shown in the figure below.

Figure 3: Constraint applied on shaft
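The RBE-style load application described above can be illustrated conceptually: a torque about the shaft axis is statically equivalent to equal tangential forces at the nodes connected by the rigid spider. The sketch below is not HyperMesh or RADIOSS API code; the shaft radius and node count are assumptions, while the torque matches the paper's load case.

```python
import math

# Conceptual sketch of spreading a torque over N nodes on a shaft
# circumference (what an RBE2/RBE3 spider does in spirit).
T = 1.1e6   # applied torque, N-mm (the paper's load case)
R = 25.0    # assumed shaft radius, mm
N = 8       # assumed number of connected circumferential nodes

F_tangential = T / (R * N)  # equal tangential force per node, N

# Each nodal force acts perpendicular to the radius at that node.
nodes = []
for k in range(N):
    a = 2 * math.pi * k / N
    fx, fy = -F_tangential * math.sin(a), F_tangential * math.cos(a)
    nodes.append((a, fx, fy))

# The resultant moment of these forces about the axis recovers T:
M = sum(R * math.cos(a) * fy - R * math.sin(a) * fx for a, fx, fy in nodes)
print(round(M))  # 1100000
```

Each node carries R·F of moment, so N·R·F sums back to the applied torque exactly; this is why the dependent node of the spider can carry the full moment while the independent nodes see only forces.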
4. ANALYSIS
The model created in the earlier steps is now taken up for solution: the computer program reads the data, generates the FE model, calculates the matrix entries, solves the matrix equation and writes the data out for interpretation. The finite element model was developed to predict the stress distribution and displacement distribution in the key, shaft 1 and shaft 2 instead of costly and time-consuming experimental trial and error. This task is CPU intensive and is often called processing; most of the time very little interaction from the user is required. The analysis of the muff coupling produces different types of data files containing different types of information.

After the program has evaluated the results, we have to examine and interpret them, which in our case has been done in HyperView. HyperView gives information and results such as displacement contours in the x, y and z directions, resultant displacement contours, and von Mises stress contours.

5. RESULTS AND DISCUSSIONS
Fig 4 shows the displacement distribution in shaft 1 of the muff coupling. It is observed from the displacement contours that the displacement decreases towards the end of the shaft; the maximum displacement observed is 0.0081 mm at the upper surface of shaft 1. Fig 5 shows the displacement distribution in shaft 2; the maximum displacement observed is 0.0804 mm at the lower surface of the shaft. Fig 6 shows the displacement distribution in the key; the maximum displacement observed is 0.03622 mm.

With the same finite element model, the stresses in the key, shaft 1 and shaft 2 are also predicted. Fig 7 shows the stress contours of shaft 1; the maximum stress observed is 29.44 MPa, decreasing towards the end of the shaft. Fig 8 shows the stress contours of shaft 2; the maximum stress observed is 30.31 MPa, also decreasing towards the end of the shaft. Fig 9 shows the stress contours of the key; the maximum stress observed is 28.67 MPa. The maximum stress overall is therefore observed in shaft 2, at 30.31 MPa.

To validate the analysis, the results have been compared with the available experimental and standard results. As the experimental results were available, as shown in Table 1, for the various components of the muff coupling under a torque of 11×10^5 N-mm, the same FE analysis has been carried out for a torque of 11×10^5 N-mm.

Figure 4: Displacement contours in shaft 1 (torque applied)
Figure 5: Displacement contours in shaft 2 (constraint applied)
Figure 6: Displacement contours in key
Figure 7: Stress contours in shaft 1 (torque applied)
Figure 8: Stress contours in shaft 2 (constraint applied)

The maximum deflection observed in shaft 2 is 0.08048 mm for the torque of 11×10^5 N-mm. The stress and displacement comparison for the key also validates the FE analysis of the muff coupling. The results of the comparison are depicted in tabular form in Table 2.
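The von Mises stress contours reported by HyperView follow from the stress components at each element. A minimal sketch of the formula is given below; the function name and inputs are illustrative, not HyperView's API.

```python
import math

def von_mises(sx, sy, sz, txy=0.0, tyz=0.0, tzx=0.0):
    """Von Mises equivalent stress from the six stress components (MPa)."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

print(von_mises(100.0, 0.0, 0.0))          # uniaxial tension -> 100.0
print(von_mises(0.0, 0.0, 0.0, txy=10.0))  # pure shear -> sqrt(3)*10, about 17.32
```

The uniaxial case reproduces the applied stress, while pure shear scales by √3, which is why von Mises contours and maximum-shear-stress values in the tables are related but not identical quantities.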
Table 1: Experimental results

Parameters                      | Experimental results
Torque applied                  | 1100000 N-mm
Deflection (shaft 2)            | 0.023 mm
Deflection (key)                | 0.01 mm
Maximum shear stress (shaft 1)  | 25.422 MPa
Maximum shear stress (shaft 2)  | 25.422 MPa
Maximum shear stress (key)      | 22.79 MPa

Figure 9: Stress contours in key

Table 2: Comparison of experimental and FEA results

Parameters                      | Experimental results | FEA results   | Variation
Torque applied                  | 1100000 N-mm         | 1100000 N-mm  | Nil
Deflection (shaft 2)            | 0.023 mm             | 0.08048 mm    | 2.85 mm
Deflection (key)                | 0.01 mm              | 0.03622 mm    | 2.6 mm
Maximum shear stress (shaft 1)  | 25.422 MPa           | 29.44 MPa     | 15.8 %
Maximum shear stress (shaft 2)  | 25.422 MPa           | 30.31 MPa     | 19.22 %
Maximum shear stress (key)      | 22.79 MPa            | 28.67 MPa     | 25.8 %
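The stress variations in Table 2 correspond to the relative difference between the FEA and experimental values; a quick check using the table's own numbers:

```python
def variation_pct(experimental, fea):
    """Percent variation of the FEA value relative to the experimental value."""
    return (fea - experimental) / experimental * 100.0

# Maximum shear stress rows of Table 2 (MPa):
v1 = variation_pct(25.422, 29.44)  # shaft 1: about 15.81 (Table 2: 15.8 %)
v2 = variation_pct(25.422, 30.31)  # shaft 2: about 19.23 (Table 2: 19.22 %)
v3 = variation_pct(22.79, 28.67)   # key:     about 25.80 (Table 2: 25.8 %)
print(round(v1, 1), round(v2, 1), round(v3, 1))
```

The recomputed values agree with the tabulated variations to within rounding.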
6. CONCLUSIONS
From the results obtained with HyperMesh it can be concluded that, with the applied torque of 1100000 N-mm on shaft 1, the maximum displacement occurs on shaft 2 and is 0.0804 mm. The maximum shear stress under the same loading condition is higher in shaft 2 (30.31 MPa) than in shaft 1 (29.44 MPa) and then in the key (28.67 MPa), which approaches the maximum allowable stress of the given material (plain carbon steel). The variations between the FEA and experimental results are 15.8%, 19.22% and 25.8% for the stresses, and 2.85 for shaft 2 and 2.6 for the key in deflection.

7. REFERENCES
[1] Adeyeri Michael Kanisuru, Adeyemi Michael Bolaji, Ajayi Olumuyiwa Bamidele, Abadariki Samson Olaniran, "Computer Aided Design of Coupling", International Journal of Engineering (IJE), Vol. 5, Issue 5, November 2011.
[2] Niels L. Pedersen, "Optimization of keyway design", 2nd International Conference on Engineering Optimization, Lisbon, Portugal, September 2010.
[3] S. A. Canann, S. N. Muthukrishnan, R. K. Phillips, "Topological improvement procedures for quadrilateral finite element meshes", Engineering with Computers, Volume 14, Issue 2, 1998.
[4] Madan, M., Sujata, M., Raghavendra, K. and Bhaumik, "Failure analysis of an aeroengine component", National Aeronautical Laboratory, Bangalore, India, 2011.
[5] S. B. Jaiswal, M. D. Pasarkar, "Failure Analysis of Flange Coupling in Industry", International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, Volume 2, Issue 5, May 2012.
[6] Mahmood M. Shokrieh, Davood Rezaei, "Analysis and optimization of a composite leaf spring", Composite Structures, Elsevier, Volume 60, Issue 3, pp. 317-325.
[7] Kenneth A. Williams, "Hub Design, Finite Element Analysis", College of Engineering and Mineral Resources at West Virginia University, 2006.
[8] Cerlinca Delia Aurora, Alaci Stelian, Rusu Ovidiu Toader, Irimescu Luminita, Ciornei Florina Carmen, "FEA of stress concentrator effect from a rotating disk with a keyway", Fascicle of Management and Technological Engineering, Volume IX (XIX), 2010, NR2.




EVALUATION OF SUPPLY CHAIN COLLABORATION: AN AHP BASED APPROACH

Veepan Kumar, Assistant Professor (MED), SLIET Longowal, Punjab, kumarveepan958@gmail.com
Ravi Kant, Assistant Professor (MED), SVNIT Surat, Gujarat, ravi.kant@med.svnit.ac.in
Arvind Jayant, Associate Professor (MED), SLIET Longowal, Punjab, arvindjayant@gmail.com
Rakesh Malviya, Research Scholar (MED), SVNIT Surat, Gujarat, rakeshmalviya.2007@gmail.com

ABSTRACT

Supply chain collaboration (SCC) is the driving force to establish a collaborative relationship based on a truly shared goal. The need for SC collaboration is to improve the sales and/or profits of the organization, to take market share away from competitors, to reduce the organization's SC costs, to eliminate or reduce investments in physical assets, to transfer costs and risks to other parties in the SC, and to create a more flexible and responsive supply chain. The objective of this research is to evaluate SC collaboration using the Analytic Hierarchy Process (AHP) by understanding the enablers for effective SC collaboration in manufacturing organizations, to build awareness of the critical SC Collaboration Enablers (SCCEs), and to present an approach to make SC collaboration effective by understanding the dynamics between the various SCCEs. The findings of the present research work reveal that three enablers of supply chain collaboration were statistically significant to organization performance. The empirical results demonstrate that top management support, common objectives and goals, communication, SC strategic planning, advance technology, training advancement and organisation compatibility are the seven main influential factors in the success of an SC collaboration project. This study used subjective judgment, and any bias of the person judging the SCCEs might influence the final result. Here, 20 SCCEs have been used to identify and rank the major SCCEs in relation to the success of SC collaboration in the organization. The results offer insights to supply chain collaboration practitioners and policy makers for computing the importance weights of SCCEs, which helps to identify and rank the important SCCEs for their needs and to reveal the direct and indirect effects of each SCCE for achieving effective SC collaboration in the organization by using the AHP approach.

KEY WORDS: Analytic Hierarchy Process (AHP), Supply Chain Collaboration Enablers (SCCEs).

1. INTRODUCTION

Supply chain collaboration is the driving force to establish a collaborative relationship based on a truly shared goal between the partner organizations. The need for SC collaboration is to improve the sales and/or profits of the organization, to take market share away from competitors, to reduce the organization's supply chain costs, to eliminate or reduce investments in physical assets, to transfer costs and risks to other parties in the supply chain, and to create a more flexible and responsive supply chain in a competitive business environment (Hansen and Nohria, 2004). SC collaboration has become a new imperative strategy for organisations to create competitive advantage (Horvath, 2001; Spekman et al., 1998). A closer relationship enables the participating organisations to achieve cost reductions and revenue enhancements as well as flexibility in dealing with supply and demand uncertainties (Bowersox, 1990; Lee et al., 1997). Hewlett-Packard (HP), for instance, initiated collaboration with one of its major resellers (Callioni and Billington, 2001). These collaborative efforts, which focused on co-managed inventory by considering different levels of demand uncertainty, enabled both parties to improve fill rate, increase inventory turnover, and enhance sales. Similarly, Wal-Mart collaborated in demand planning and replenishment with its major suppliers to increase inventory turns, reduce inventory costs, reduce storage and handling costs, and improve retail sales (Parks, 2001).

The AHP approach helps the organisation to alleviate inconsistencies in decision-making problems. This study applies fuzzy linguistic preference relations to construct a pairwise comparison matrix. AHP is an easy and practical way to provide a mechanism for improving consistency in SC collaboration implementation. Twenty SCCEs have been chosen on the basis of a literature review and the opinions of experts from industry and academia. The main objective of this paper is to measure the success/failure possibility of implementing supply chain collaboration using the AHP approach.

2. LITERATURE REVIEW OF SC COLLABORATION

The organizations are aware of the importance of all the SCCEs but fall short in practicing them. Many authors have researched and written directly on these SCCEs. The various SC literatures have been reviewed to develop a framework for effective SC collaboration implementation.

De Toni et al. (1994) discussed co-operation with suppliers, which may help organizations to improve time, cost and quality performance in product flow management and design/product development. There are two areas, namely information technology and warehouse and transport technology, in which technological advances are having a significant impact on the opportunities for SC collaboration


improvement. Innovativeness and information sharing are major factors for SC collaboration (Lopez and Poole, 1998; Fearne and Hughes, 1999). Akintoye et al. (2000) discussed that collaboration has been recognized as a significant process that holds the value creation opportunity in the SC, and also studied SC collaboration and management among the top UK construction industry contractors. Simatupang and Sridharan (2004) discussed a benchmarking study on SC collaboration between retailers and suppliers, which incorporates collaborative practices in information sharing, decision synchronization, and incentive alignment; an empirical study was carried out to benchmark the profile of collaborative practices and operational performance. Simatupang and Sridharan (2005) proposed an instrument to measure the extent of collaboration in a SC consisting of two members, suppliers and retailers. The proposed model for collaboration incorporates collaborative practices in information sharing, decision synchronization and incentive alignment. A collaboration index is introduced to measure the level of collaborative practices; the collaboration index was positively associated with operational performance. Sheu et al. (2006) identified the necessary SC architecture for supplier-retailer collaboration and demonstrated how it influences SC performance. A comprehensive supplier-retailer relationship model is developed with five specific research positions: the supplier-retailer business relationship (interdependence, intensity, trust) affects long-term orientation; the supplier-retailer business relationship affects SC architecture (information sharing, inventory system, information technology capabilities, coordination structure); long-term orientation affects SC architecture; SC architecture affects the level of supplier-retailer collaboration; and supplier-retailer collaboration enhances supplier-retailer performance. Overall, with the exception of duration, all variables are found to be critical to supplier-retailer collaboration. Matopoulos et al. (2007) analysed the concept of SC collaboration and provided an overall framework that can be used as a conceptual landmark for further empirical research. The concept is explored in the context of the agri-food industry and its particularities are identified. The SC collaboration concept is of significant importance for the agri-food industry; however, some constraints arise due to the nature of the industry's products and the specific structure of the sector. Lorentz (2008) investigated the level of SC collaboration in an uncertain cross-border context, and whether it improves SC performance. The moderating role of export experience and intensity on the collaboration-performance relationship is also investigated. It seems that experience in cross-border SC operations does not guarantee success in SC management; however, those organizations with large export volumes, implying frequency and leveraged resources in operations, seemed to be better able to collaborate for successful outcomes. Simatupang and Sridharan (2008) clarified the architecture of SC collaboration and proposed a design for SC collaboration (DfC), which enables participating members to create and develop key elements of the proposed architecture. The paper offers a concept for designing the five elements of the architecture of SC collaboration, namely the collaborative performance system, decision synchronization, information sharing, incentive alignment, and innovative SC processes. A case study was carried out to illustrate the applicability of the framework. Fawcett et al. (2010) addressed how organizations mitigate existing forces to achieve the collaboration-enabled SC. Seven key theories were used to provide insight into the theoretical framework for the creation of the collaboration-enabled SC: contingency theory, the resource-based view of the firm, the relational view of the firm, force field theory, constituency-based theory, social dilemma theory, and resource-advantage theory. The findings reveal that developing a collaboration-enabled business model is very difficult. Anbanandam et al. (2011) proposed a methodology to measure the extent of collaboration between apparel retailers and manufacturers in the apparel retail industry in India; the model for measuring collaboration considers variables like top management commitment, information sharing, trust among SC partners, long-term relationships, and risk and reward sharing, and also contributes to the literature by introducing an index for measuring the extent of SC collaboration. Joshi and Kant (2012) presented an approach for effective SC collaboration by understanding the dynamics between the various SCCEs that lead to effective SC collaboration. The research presents a hierarchy-based model and the mutual relationships among the SCCEs using interpretive structural modeling. The research shows that there exists a group of enablers having a high driving power and low dependence, requiring maximum attention and of strategic importance, while another group consists of those variables which have high dependence and are the resultant actions.

3. METHODOLOGY

Supply Chain Collaboration Using AHP

Step 1. Establish a pairwise comparison matrix for priority weighting of the attributes.
The attributes considered in SC collaboration implementation are shown in Table 1 below.

Table 1: List of enablers

SCCE No.   Enabler Name
SCCE1      Top management support
SCCE2      Common objectives and goals
SCCE3      Strategic planning
SCCE4      Communication
SCCE5      Training advancement
SCCE6      Advance technology
SCCE7      Information sharing
SCCE8      Trust and openness
SCCE9      Organizational compatibility
SCCE10     Cooperation
SCCE11     Benefit sharing
SCCE12     Decision synchronization
SCCE13     Motivation and rewards
SCCE14     Reliability
SCCE15     Mutual help and support
SCCE16     Lead time
SCCE17     Flexibility
SCCE18     Power sharing
SCCE19     Innovativeness
SCCE20     Customer oriented vision

Step 2. Normalize the pairwise comparison matrix and aggregate the priority weight for the attributes.
The normalized value r_ij is calculated as

r_ij = a_ij / Σ_{i=1}^{n} a_ij,   for all i, j = 1, 2, ..., n.

Meanwhile, the aggregated priority weight of attribute i is

W_i = (1/n) Σ_{j=1}^{n} r_ij,   for all i = 1, 2, ..., n,

where W_i denotes the priority weight of attribute i and n represents the number of attributes.
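Steps 1-2 (column-wise normalization followed by row averaging) can be sketched as follows. The 3×3 matrix below is invented purely for illustration; the paper's actual 20×20 comparison matrix is given in its appendix (Table 3):

```python
# Illustrative sketch of AHP Steps 1-2: normalize each column of the pairwise
# comparison matrix (r_ij = a_ij / sum_i a_ij), then average each row to get
# the priority weights (W_i = (1/n) * sum_j r_ij).
def priority_weights(a):
    n = len(a)
    col_sums = [sum(a[i][j] for i in range(n)) for j in range(n)]
    r = [[a[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(r[i][j] for j in range(n)) / n for i in range(n)]

# Invented 3-attribute example using the 1-9 preference scale of Table 2
a = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = priority_weights(a)
print([round(x, 3) for x in w])  # the weights sum to 1
```

The same routine applied to the paper's 20×20 matrix would yield the attribute weights that are later combined with the outcome weights in Step 6.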


Table 2: Degree of preference between two attributes

Preference                                          Preference number to be assigned
Equally important/preferred                         1
Weakly more important/preferred                     3
Strongly more important/preferred                   5
Very strongly more important/preferred              7
Absolutely more important/preferred                 9
Intermediate values used to represent compromise    2, 4, 6, 8

Step 3. Derive the eigenvector and the maximum eigenvalue.
The eigenvector represents the relative importance among the elements. The maximum eigenvalue (λmax) can be used to determine the strength of consistency among the comparisons.

Step 4. Derive the consistency index and consistency ratio.
If matrix A is a consistent matrix, the maximum eigenvalue of A should equal its number of orders. Therefore, the consistency index CI = (λmax − n)/(n − 1) and the consistency ratio CR = CI/RI, where RI is the random index, can be used to assess the degree of consistency. If the consistency index is < 0.1, there is a satisfactory level of consistency. In addition, if the consistency ratio is < 0.1, the evaluation matrix is acceptable. In this case, CI is 0.09568.

Step 5. Establish a pairwise comparison matrix for weighting the alternatives with respect to the attributes.
Yusuff et al. (2001) noted that the priority weights for the alternatives are measured to show the preference of the alternatives with respect to the attributes. Restated, a stronger alternative preference indicates that the alternative in question is more likely to be successful. Five options, Extremely good (5), Good (3), Fair (1), Weak (1/3) and Poor (1/5), are provided to illustrate the chance of success given different alternatives. A larger rating of an alternative indicates a higher chance of success.

Step 6. Priority weight for prediction.
The prediction weight is computed by multiplying the priority weights of the attributes by the evaluation ratings of the alternatives. The prediction weight C_k is obtained as

C_k = Σ_{i=1}^{n} W_i K_i,

where W_i denotes the aggregated weight of attribute i, and K_i represents the priority weight of possible outcome A_k with respect to attribute i.

The consistency ratio (CR) for a comparison is calculated to determine the acceptance of the attribute priority weights. It is given by
Consistency ratio (CR) = Consistency index / Random index.

Problem solving using AHP for SC collaboration implementation:

Step 1. Establish the pairwise comparison matrix for priority weighting of the attributes (see Table 3 in the appendix).
Step 2. Normalize the pairwise comparison matrix and aggregate the priority weight for the attributes (see Table 4 in the appendix).
Step 3. Derive the eigenvector and the maximum eigenvalue. The maximum eigenvalue is 21.818.
Step 4. Derive the consistency index and consistency ratio:
CI = (λmax − n)/(n − 1) = (21.818 − 20)/19 = 0.09568; hence the consistency is acceptable,
where CI is the consistency index, λmax is the maximum eigenvalue, and n is the number of variables.
Consistency ratio (CR) = CI/RI.
For more than eight variables, the random index (RI) is computed using the empirical formula
RI(n) = −0.021n² + 0.1183n − 0.001,
where n is the order of the matrix, i.e. the number of variables considered in this SC collaboration implementation.
Step 5. Establish a pairwise comparison matrix for weighting the alternatives with respect to each attribute.

Table 5(a): Paired comparison matrix for the possible outcomes in attribute SCCE2

           Success   Failure
Success    1         3
Failure    0.33      1
TOTAL      1.33      4

Table 5(b): Normalized matrix of priority weights for the possible outcomes in attribute SCCE2

           Success   Failure
Success    0.75188   0.75
Failure    0.24812   0.25

A summary of the possible outcomes with respect to each attribute is given in Table 6 in the appendix.

Step 6. Priority weight for prediction (see Table 7 in the appendix).
The prediction weight for success of the SC collaboration implementation = 0.635.
Similarly, the prediction weight for failure of the SC collaboration implementation = 0.365.
Table 8 (see appendix) illustrates the rank of the enablers of SC collaboration according to priority weight.

4. DISCUSSIONS

1. The ranks and priority weights are obtained.
2. The pairwise comparisons of the priority weights for the possible outcomes are done according to the twenty attributes.
3. The chances of successful and failed SCC implementation produced by AHP are 0.635/0.365.
4. The AHP method performs complicated mathematical operations to obtain indicators, for example the eigenvector, maximum eigenvalue, consistency index and consistency ratio, to ensure the consistency of a preference matrix.
5. All the enablers are ranked according to their priority weights.
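The consistency check of Step 4 and the SCCE2 example of Table 5 can be reproduced numerically. A sketch using the paper's reported values (λmax = 21.818, n = 20), with the exact fraction 1/3 in place of the rounded 0.33:

```python
# Step 4: consistency index for the 20x20 attribute matrix (values from the paper)
lam_max, n = 21.818, 20
CI = (lam_max - n) / (n - 1)   # (21.818 - 20) / 19
print(round(CI, 5))             # 0.09568 < 0.1, so the matrix is acceptable

# Table 5(a): pairwise comparison of the outcomes (success vs failure) for SCCE2
a = [[1.0,   3.0],
     [1 / 3, 1.0]]
col_sums = [a[0][0] + a[1][0], a[0][1] + a[1][1]]
# Table 5(b): column-wise normalization
r = [[a[i][j] / col_sums[j] for j in range(2)] for i in range(2)]
# Row averages give the outcome priority weights K_i for SCCE2
k_success = (r[0][0] + r[0][1]) / 2   # ~0.75
k_failure = (r[1][0] + r[1][1]) / 2   # ~0.25
```

Aggregating such outcome weights over all twenty attributes with C_k = Σ W_i K_i yields the reported prediction weights of 0.635 (success) and 0.365 (failure).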


5. CONCLUSION

In the present work, the AHP approach has been used in the SC collaboration implementation to obtain the possibility of success/failure of implementing SC collaboration and to obtain the prediction weights for the success and failure of the attributes; the consistency index was then obtained. The conventional AHP method uses a reciprocal multiplicative preference relation with an interval scale [1/9, 9] to establish a pairwise comparison matrix based on a set of n(n−1)/2 preference ratios. The principal eigenvector, maximum eigenvalue, consistency index and consistency ratio are then calculated for assessing the consistency of a preference relation matrix. Consequently, paired comparison of the alternatives with respect to each attribute can be used to obtain the overall ranking of the feasible alternatives. Future studies will focus on generalized analytic hierarchy process problems in linguistic terms without exporting the reciprocal additive transitivity property to reciprocal multiplicative decision models. The empirical results demonstrate that top management support, common objectives and goals, communication, SC strategic planning, advance technology, training advancement and organization compatibility are the seven main influential factors in the success of the SC collaboration project. Here, 20 SCCEs have been used to identify and rank the important SCCEs and to reveal the direct and indirect effects of each SCCE for achieving effective SC collaboration in the organization by using the AHP approach.

REFERENCES
1. Chang, T.H. and Wang, T.C. 2009 'Measuring the success possibility of implementing advanced manufacturing technology by utilizing the consistent fuzzy preference relations', Expert Systems with Applications, Vol. 36, No. 3, pp. 4313–4320.
2. Bowersox, D.J. 1990 'The strategic benefits of logistics alliances', Harvard Business Review, Vol. 68, No. 4, pp. 36–43.
3. Lee, H.L., Padmanabhan, V. and Whang, S. 1997 'The bullwhip effect in supply chains', Sloan Management Review, Vol. 38, No. 3, pp. 93–102.
4. Spekman, R.E., Kamauff, J.W. and Myhr, N. 1998 'An empirical investigation into supply chain management', International Journal of Physical Distribution and Logistics Management, Vol. 28, No. 8, pp. 630–650.
5. Horvath, L. 2001 'Collaboration: key to value creation in supply chain management', Supply Chain Management: An International Journal, Vol. 6, No. 5, pp. 205–207.
6. Hansen, M.T. and Nohria, N. 2004 'How to build collaborative advantage', Sloan Management Review.
7. Parks, L. 2001 'Wal-Mart gets on board early with collaborative planning', Drug Store News, Vol. 23, No. 2, p. 14.
8. De Toni, A., Nassimbeni, G. and Tonchia, S. 1994 'New trends in the supply management', Logistics Information Management, Vol. 7, No. 4, pp. 41–50.
9. Fearne, A. and Hughes, D. 1999 'Success factors in the fresh produce supply chain: insights from the UK', Supply Chain Management, Vol. 4, No. 3, pp. 120–128.
10. Akintoye, A., McIntosh, G. and Fitzgerald, E. 2000 'A survey of supply chain collaboration and management in the UK construction industry', European Journal of Purchasing and Supply Management, Vol. 6, pp. 159–168.
11. Simatupang, T. and Sridharan, R. 2005 'The collaborative supply chain', International Journal of Logistics Management, Vol. 13, No. 1, pp. 15–30.
12. Simatupang, T. and Sridharan, R. 2004 'A benchmarking scheme for supply chain collaboration', Benchmarking: An International Journal, Vol. 11, pp. 9–30.
13. Matopoulos, A., Vlachopoulou, M., Manthou, V. and Manos, B. 2007 'A conceptual framework for supply chain collaboration: empirical evidence from the agri-food industry', Supply Chain Management: An International Journal, Vol. 12, No. 3, pp. 177–186.
14. Lorentz, H. 2008 'Collaboration in Finnish-Russian supply chains: effects on performance and the role of experience', Baltic Journal of Management, Vol. 3, No. 3, pp. 246–265.
15. Simatupang, T. and Sridharan, R. 2008 'Design for supply chain collaboration', Business Process Management Journal, Vol. 14, No. 3, pp. 401–418.
16. Fawcett, S., Magnan, G. and Fawcett, M. 2010 'Mitigating resisting forces to achieve the collaboration-enabled supply chain', Benchmarking: An International Journal, Vol. 17, No. 2, pp. 269–293.
17. Anbanandam, R., Banwet, D. and Shankar, R. 2011 'Evaluation of supply chain collaboration: a case of apparel retail industry in India', International Journal of Productivity and Performance Management, Vol. 60, No. 2, pp. 82–98.
18. Joshi, K. and Kant, R. 2012 'Structuring the underlying relations among the enablers of supply chain collaboration', International Journal of Collaborative Enterprise, Vol. 3, No. 1, pp. 38–59.
19. Sheu, C., Yen, H.R. and Chae, B. 2006 'Determinants of supplier-retailer collaboration: evidence from an international study', International Journal of Operations and Production Management, Vol. 26, No. 1, pp. 24–29.
20. Herrera-Viedma, E., Herrera, F., Chiclana, F. and Luque, M. 2004 'Some issues on consistency of fuzzy preference relations', European Journal of Operational Research, Vol. 154, pp. 98–109.


Impact of Pinch Strengths on Healthy and Non-Healthy Workers in Manufacturing Unit

Ahsan Moazzam, Department of ME, Sant Longowal Institute of Engineering and Technology, Sangrur, Punjab, ahsansliet10@gmail.com
Manoj Kumar, Department of ME, Sant Longowal Institute of Engineering and Technology, Sangrur, Punjab, manojsliet5@gmail.com

ABSTRACT

The study was conducted on 90 workers, all men. This paper focuses on studying the impact of pinch strengths on healthy and non-healthy workers in manufacturing industries. This is done by appropriate health surveillance using correlation analysis and the difference-between-proportions test on different types of pinch strength. Fisher's test is performed to check the significance of potential carpal tunnel syndrome (CTS) sufferers in the data.

Keywords
Pinch strength, CTS, health surveillance, Fisher's test, healthy workers, non-healthy workers

1. INTRODUCTION

1.1 Repetitive Strain Injury
Repetitive Strain Injury (RSI) is a generic term often used to describe work-related musculoskeletal disorders. RSI is an umbrella term for a number of specific musculoskeletal conditions, e.g. CTS, as well as 'diffuse RSI', which is more difficult to define. These conditions are often occupational in origin. RSI is the more commonly known term for a set of disorders called Work Related Upper Limb Disorders (WRULDs). The highest percentage of work injuries resulting from repetitive motion occurs in the manufacturing sector, where assembly-line jobs are common. RSIs must be treated at an early stage, or a permanent disability could result, causing losses in terms of compensation, productivity and the number of working hours/days. Signs and symptoms vary depending on the type of job and which part of the body is affected. Initially, symptoms may only occur while the individual is doing the repetitive task; they slowly go away when the person rests. RSI is caused by continuous repetitive and forceful work, hand or arm movements such as hammering, pushing, pulling, lifting or reaching, too fast or extreme workloads, long hours, lack of variety or breaks, awkward grips or positions, imperfectly designed equipment and/or poor working environments.

1.2 Carpal Tunnel Syndrome
Carpus is a word derived from the Greek word karpos, which means "wrist". The wrist is surrounded by a band of fibrous tissue that normally functions as a support for the joint. The tight space between this fibrous band and the wrist bone is called the carpal tunnel. The median nerve passes through the carpal tunnel to receive sensations from the thumb, index, and middle fingers of the hand. Any condition that causes swelling or a change in position of the tissue within the carpal tunnel can squeeze and irritate the median nerve. Irritation of the median nerve in this manner causes tingling and numbness of the thumb, index, and middle fingers, a condition known as "carpal tunnel syndrome". Risk factors for CTS can be classified into three broad categories: occupational, personal, and psychosocial. Occupational risk factors are factors associated with the interaction of the worker with the physical work environment. These factors have been the focus of many studies relating to CTS. The primary occupational risk factors for CTS include repetition, forceful hand exertions, deviated wrist postures, lack of rest and recovery, and vibration.

1.3 Development of CTS
Bending the wrist or moving the fingers brings muscles and tendons into action. For example, when a person bends a finger, the tendon moves about two inches. The tendons of the hand are encased in sheaths, or sleeves, through which the tendons slide. The inner wall of the sheaths contains cells that produce a slippery fluid to lubricate the tendons. Lubrication is essential for the normal and smooth functioning of the tendons. With repetitive or excessive movement of the hand, the lubrication system may malfunction. It may not produce enough fluid, or it may produce a fluid with poor lubricating qualities. Failure of the lubricating system creates friction between the tendon and its sheath, causing inflammation and swelling of the tendon area. In turn, the swelling squeezes the median nerve in the wrist, or carpal tunnel. Repeated episodes of inflammation cause fibrous tissue to form. The fibrous tissue thickens the tendon sheath and hinders tendon movement. A common factor in developing carpal tunnel symptoms is increased hand use or activity. Persons with diabetes or other metabolic disorders that directly affect the body's nerves and make them more susceptible to compression are also at high risk. It is estimated that three of every 10,000 workers lose time from work because of CTS; half of these workers miss more than 10 days of work.

1.4 Symptoms of Carpal Tunnel Syndrome
The typical symptoms of CTS are tingling of the thumb and of the index, middle, and ring fingers, and night pain. The pain awakens the patient but is often relieved by shaking, hanging, or massaging the hand. Pain may involve not only the hand but also the arm and the shoulder. Numbness and loss of manual dexterity occur in more advanced cases. Weakness of the hand also occurs, causing difficulty with pinch and grasp. The victim may drop objects or be unable to use keys or count change with the affected hand. The skin may dry because of reduced sweating.

2. EXPERIMENTATION
As many industries rely on repetitive and forceful work for the successful completion of a task, it is impossible for any industry to wholly eliminate such repetitive and forceful work from its work processes. This work does, however, require the adoption of awkward postures. Awkward


postures in this work increase the likelihood that workers are exposed to physical strain, which in turn creates risk for workers in the form of potential CTS symptoms. Therefore, the present study focuses on the identification of risk factors such as hand pain, wrist pain, numbness, tingling, weakness, difficulty in grasping, age, BMI, cycle time and many more among workers. The present study sample consists of 90 manual workers in the manufacturing industry. A health questionnaire form was designed according to the information required, such as age, height, weight, duration of job and levels of potential symptoms, to study the prevalence of potential CTS symptoms amongst manufacturing workers. Standardized health surveillance guidelines were also used by experts from industry and the medical profession to authenticate the design considered in the present study. Job categorization is done according to the level of repetition (per sec), force involved (kg) and BMI (kg/m²) of the workers. The participants ranged in age from 24 to 60 years with a mean of 47.9 (SD 9.15) years (Table 1).

The mean body mass index (BMI) of the participants of this study was 24.6 kg/m² (SD 3.72), and it ranged from 17.2 kg/m² to 37.9 kg/m². The workers had been performing the work for a mean of 24.6 years (SD 8.2).

Table 1. Baseline characteristics of workers

Factor of concern                          Statistics
Number of workers                          90
Age (years)                                47.9 ± 9.15
Weight (kg)                                67.5 ± 9.29
Height (feet)                              5.625 ± 0.219
BMI (kg/m²)                                23.1 ± 3.72
Employment time at present site (years)    24.6 ± 8.12

2.1 Analysis of Pinch Strength of Healthy and Non-Healthy Workers Using Correlation Analysis
Correlation is often used as a descriptive tool in non-experimental research. Two measures are correlated if they have something in common. The intensity of the correlation is expressed by a number called the coefficient of correlation, which is usually denoted by the letter r.

The correlation coefficient 't' test is used to check the statistical significance of the correlation, which can be calculated from the following equation (Gupta S. P., 2001). To test the hypothesis

H0: ρ = 0
H1: ρ ≠ 0

the appropriate test statistic is

t = r √(N − 2) / √(1 − r²),

which follows the t distribution with N − 2 degrees of freedom, where N is the number of observations in a single variable and r is the correlation coefficient:

r = Σx·y / √(Σx²·Σy²)

Repetitive and pinch grip exertions are common in many occupational activities and are impossible to eliminate from the world of work. The literature shows that pinch grip strength is affected by various factors such as age, sex, stature, body weight, wrist posture, and elbow posture of the workers, but there is no study which shows the relationship between the probability of CTS symptoms and pinch grip strength. The present study therefore examines the relationship of pinch grip strength with outcomes, i.e. carpal tunnel syndrome symptoms, so that a healthier work environment can be planned. Accordingly, the goal of the current study was to check the relationship between the pinch grip strengths of healthy and non-healthy workers. These healthy and non-healthy workers have been divided according to potential CTS severity symptoms. The health surveillance data of the manufacturing industry workers are divided into two groups, i.e. healthy and non-healthy workers, and the mean pinch strengths of both groups are shown in Table 2. To study the correlation between healthy and non-healthy workers, a hypothesis is assumed: "the healthy workers have greater pinch strengths than the non-healthy workers".

Table 2. Data of pinch strength of healthy and non-healthy workers

Workers                    Tip pinch   Key pinch   Palmar pinch
Healthy workers (X)        13.88       17.47       18.26
Non-healthy workers (Y)    13.59       17.12       17.10

The values of ΣX², ΣY² and ΣX·Y are calculated from Table 2 using the above equation to get the correlation coefficient (r). The corresponding values of ΣX², ΣY² and ΣX·Y are shown in Table 3.

Table 3. Calculated corresponding values of the dependent and independent variables

X          Y          X²           Y²           XY
13.88      13.59      192.65       184.68       188.62
17.47      17.12      305.20       293.09       299.08
18.26      17.10      333.42       292.41       312.24
ΣX=49.61   ΣY=47.81   ΣX²=831.28   ΣY²=770.19   ΣXY=799.96

Correlation coefficient (r) = Σx·y / √(Σx²·Σy²) = 799.96 / √(831.28 × 770.19) = 0.999

The significance test 't' value is obtained by putting the correlation coefficient (r) in the above test statistic:

t = r √(N − 2) / √(1 − r²) = 0.999 √(3 − 2) / √(1 − (0.999)²) = 22.30
if Ho: ρ = 0 is true.
Standard value of significance test„t‟ for degree of freedom 1,
Therefore we would reject the null hypothesis if │to│> t α/2, n-2 at 5% level is equal to 6.314. Since calculated value of t
(22.30) is more than standard value (6.314) which is more
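The arithmetic of Table 3 and the t test can be checked with a few lines of code. This is an illustrative sketch only: it reproduces the raw-sum formula used in the text, r = ∑XY / √(∑X²·∑Y²), not Pearson's deviation-based coefficient, and because it keeps full precision for r the resulting t value comes out somewhat larger than the hand-rounded 22.30 (the conclusion, t > 6.314, is unchanged).

```python
from math import sqrt

# Mean pinch strengths (kg) of healthy (X) and non-healthy (Y) workers, Table 2
X = [13.88, 17.47, 18.26]
Y = [13.59, 17.12, 17.10]

sum_x2 = sum(x * x for x in X)             # ~831.28
sum_y2 = sum(y * y for y in Y)             # ~770.19
sum_xy = sum(x * y for x, y in zip(X, Y))  # ~799.96

# Correlation coefficient as defined in the text: r = sum(XY) / sqrt(sum(X^2) * sum(Y^2))
r = sum_xy / sqrt(sum_x2 * sum_y2)

# Significance test: t = r * sqrt(n - 2) / sqrt(1 - r^2), with n - 2 = 1 degree of freedom
n = len(X)
t = r * sqrt(n - 2) / sqrt(1 - r * r)

print(r, t)
```

Comparing t against the tabulated value 6.314 (1 degree of freedom, 5% level) gives the same decision as the hand calculation above.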

561
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

2.2 Analysis of pinch strength using the test of difference between proportions

In the locomotive manufacturing industry, manual activities need repetitive and forceful work. Owing to this repetitive and forceful work, the gripping strength of the worker decreases because of wrist pain, hand pain, numbness and difficulty in grasping, which in fact is a sign of the development of potential CTS symptoms. In the present case study it has been observed that repetitive and forceful activities like thread making, glass fitting, nut tightening, dial making and repairing involve the use of different pinches, namely the key, tip and palmar pinch. To find out whether there is a significant difference between the pinch strengths (tip, key and palmar) of healthy and non-healthy workers, the test of the difference between proportions is used. This test is applied when two samples are drawn from the same or different populations and we wish to find out whether the difference between the two proportions is significant or not. Pinch strength is a major symptom for deciding the probability of CTS. The pinch strengths of the workers were measured in the neutral position (tip pinch, key pinch and palmar pinch) and divided into two groups, i.e. 0–7 kg and >7 kg.

To test whether the data reveal a significant difference between the two ranges, so far as the proportion of CTS sufferers is concerned, the following hypothesis is assumed: "There is a significant difference between the proportions of CTS sufferers in the two groups, i.e. (i) the 0–7 kg range and (ii) the >7 kg range."

Every hypothesis test requires the analyst to state a null hypothesis and an alternative hypothesis. Table 4 shows three sets of hypotheses; each makes a statement about the difference between two population proportions, P1 and P2.

Table 4. Three sets of hypotheses for difference-between-proportions tests

Set    Null hypothesis    Alternative hypothesis    Number of tails
1      P1 − P2 = 0        P1 − P2 ≠ 0               2
2      P1 − P2 ≥ 0        P1 − P2 < 0               1
3      P1 − P2 ≤ 0        P1 − P2 > 0               1

The standard error of the difference between proportions is given by

S.E.(p1 − p2) = √( p · q · {(1/n1) + (1/n2)} )

and the pooled estimate of the actual proportion in the population is given by

p = (x1 + x2) / (n1 + n2)

where
p1 = proportion of successes in the first sample,
p2 = proportion of successes in the second sample,
n1 = first sample size,
n2 = second sample size,
q = 1 − p,
x1 = number of occurrences in the first sample,
x2 = number of occurrences in the second sample.

The test procedure is appropriate when the following conditions are met:
i.   The sampling method for each population is simple random sampling.
ii.  The samples are independent.
iii. Each sample includes at least 10 successes and 10 failures. (Some texts say that 5 successes and 5 failures are enough.)

If the observed difference (p1 − p2) exceeds 1.96 times its standard error, the difference between the samples is significant at the 5% level and the hypothesis is accepted; if it exceeds 2.58 times the standard error, it is significant even at the 1% level.

2.3 Tip pinch

The tip pinch, also called the pinch grip, is a grasp in which the tip of the thumb is pressed against the tip of another finger. In this study the pinch between digit 2, i.e. the tip of the index finger, and the thumb is taken. To find the relationship between tip pinch and CTS, the raw data have been classified into the two categories, (i) 0–7 kg and (ii) >7 kg, together with the number of potential CTS sufferers (Table 6).

n1 = 68, p1 = x1/n1 = 30/68 = 0.441
n2 = 22, p2 = x2/n2 = 4/22 = 0.182
p = (x1 + x2)/(n1 + n2) = (30 + 4)/(68 + 22) = 34/90 = 0.378
q = 1 − p = 1 − 0.378 = 0.622
S.E.(p1 − p2) = √(0.378 × 0.622 × {(1/68) + (1/22)}) = 0.119
(p1 − p2) = 0.441 − 0.182 = 0.259
Difference/S.E. = 0.259/0.119 = 2.18

Since the difference exceeds 1.96 S.E., it is significant at the 5% level and the hypothesis is accepted.

2.4 Key pinch

The key pinch, also called the lateral pinch, is a grasp in which the thumb is opposed to the middle phalanx of the index finger. To find the relationship between key pinch and CTS, the raw data have been classified into the two categories, (i) 0–7 kg and (ii) >7 kg, together with the number of potential CTS sufferers (Table 7).

n1 = 28, p1 = x1/n1 = 16/28 = 0.571
n2 = 62, p2 = x2/n2 = 18/62 = 0.290
p = (x1 + x2)/(n1 + n2) = (16 + 18)/(28 + 62) = 34/90 = 0.378
q = 1 − p = 1 − 0.378 = 0.622
S.E.(p1 − p2) = √(0.378 × 0.622 × {(1/28) + (1/62)}) = 0.110
(p1 − p2) = 0.571 − 0.290 = 0.281
Difference/S.E. = 0.281/0.110 = 2.55

Since the difference exceeds 1.96 S.E., it is significant at the 5% level and the hypothesis is accepted.
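The proportion comparisons in Sections 2.3 through 2.5 all follow the same recipe, which can be sketched in a few lines. The function name `proportion_z` is mine, not the paper's; the counts shown are the tip-pinch data of Table 6.

```python
from math import sqrt

def proportion_z(x1, n1, x2, n2):
    """Difference-between-proportions test.

    Returns (p1 - p2, pooled standard error, difference / S.E.).
    """
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)            # pooled proportion
    q = 1 - p
    se = sqrt(p * q * (1 / n1 + 1 / n2))  # standard error of the difference
    return p1 - p2, se, (p1 - p2) / se

# Tip pinch: 30 CTS sufferers of 68 Group 1 (0-7 kg) workers
# versus 4 of 22 Group 2 (>7 kg) workers
diff, se, ratio = proportion_z(30, 68, 4, 22)
print(round(diff, 3), round(se, 3), round(ratio, 2))
# ratio > 1.96 -> significant at the 5% level; > 2.58 -> significant at the 1% level
```

Running the same function on the key-pinch (16/28 vs 18/62) and palmar-pinch (14/23 vs 20/67) counts reproduces the remaining two comparisons.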


2.5 Palmar pinch

The palmar pinch is a pinch between the pad of the thumb and the pads of the index and middle fingers. To find the relationship between palmar pinch and CTS, the raw data have been classified into the two categories, (i) 0–7 kg and (ii) >7 kg, together with the number of potential CTS sufferers (Table 8).

n1 = 23, p1 = x1/n1 = 14/23 = 0.609
n2 = 67, p2 = x2/n2 = 20/67 = 0.299
p = (x1 + x2)/(n1 + n2) = (14 + 20)/(23 + 67) = 34/90 = 0.378
q = 1 − p = 1 − 0.378 = 0.622
S.E.(p1 − p2) = √(0.378 × 0.622 × {(1/23) + (1/67)}) = 0.117
(p1 − p2) = 0.609 − 0.299 = 0.310
Difference/S.E. = 0.310/0.117 = 2.65

Since the difference exceeds 2.58 S.E., it is significant even at the 1% level and the hypothesis is accepted.

In all three cases, i.e. tip pinch, key pinch and palmar pinch, the hypothesis is accepted. Hence the data reveal a significant difference between the two levels, 0–7 kg and >7 kg, so far as the proportion of CTS sufferers is concerned. This indicates that workers with pinch strength up to 7 kg, i.e. workers with lesser pinch strength, are more prone to potential CTS symptoms, which may be due to gripping strength reduced by median nerve problems, i.e. median nerve compression, swelling, tenosynovitis, epicondylitis ("tennis elbow") and Dupuytren's contracture, arising from repetitive and forceful work.

2.6 Fisher's exact test

Fisher's exact test is used to check statistical significance in a (2 × 2) contingency table. In the present study, Fisher's test has been used to check the significance of the number of potential CTS sufferers in the collected data, comparing the probability of CTS between the two levels of pinch strength, 0–7 kg and >7 kg. For this, the notations a, b, c and d are assigned to the cells of the table and n to the total number of assembly-line workers in each operation. The test is performed on categorical data by classifying the situation in two different ways, as shown in Table 5.

Table 5. A (2 × 2) contingency table set-up used for Fisher's exact test

Description                           Level 1    Level 2    Total
Symptom present (test positive)       a          b          a + b
Symptom not present (test negative)   c          d          c + d
Totals                                a + c      b + d      a + b + c + d = n

The probability value p is computed from the hypergeometric distribution and is expressed as

p = (a + b)! (c + d)! (a + c)! (b + d)! / (n! a! b! c! d!)

The test is appropriate where the number of observations obtained for analysis is small (sample size ≤ 30).

2.7 Impact of pinch strength on the probability of CTS symptoms using Fisher's exact test

It is observed that pinch strength is a major symptom for deciding the probability of CTS. The pinch strengths of the workers were measured in the neutral position (tip pinch, key pinch and palmar pinch) and divided into two groups, i.e. 0–7 kg (Group 1) and >7 kg (Group 2), together with the probability of having CTS. To test whether the probability of having CTS is higher in Group 1 than in Group 2, the hypothesis taken is that the probability of having CTS is higher among Group 2 workers than among Group 1 workers. All the data have been categorized, operation-wise, according to group and probability of having CTS, as shown in Table 6 for the tip pinch, Table 7 for the key pinch and Table 8 for the palmar pinch.

Table 6. Survey-based CTS data for the tip pinch

Tip pinch                 Group 1 (0–7 kg)    Group 2 (>7 kg)    Total
Potential CTS sufferers   30                  4                  34
No CTS                    38                  18                 56
Total                     68                  22                 90

Table 7. Survey-based CTS data for the key pinch

Key pinch                 Group 1 (0–7 kg)    Group 2 (>7 kg)    Total
Potential CTS sufferers   16                  18                 34
No CTS                    12                  44                 56
Total                     28                  62                 90

Table 8. Survey-based CTS data for the palmar pinch

Palmar pinch              Group 1 (0–7 kg)    Group 2 (>7 kg)    Total
Potential CTS sufferers   14                  20                 34
No CTS                    9                   47                 56
Total                     23                  67                 90

The significant values of potential CTS sufferers amongst Group 1 (pinch strength 0–7 kg) and Group 2 (pinch strength >7 kg) workers in the various operations of the assembly line have been evaluated and are shown in Table 9.
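The hypergeometric formula above extends to a two-sided test by summing the probabilities of all tables with the same margins that are no more likely than the observed one. A minimal standard-library sketch, applied here to the tip-pinch counts of Table 6 (the function is my own illustration, not the paper's code; small differences from the reported p-values can arise from how ties and tails are handled):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the (2 x 2) table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c  # fixed margins

    def prob(x):
        # hypergeometric probability of a table with x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))  # smallest admissible top-left cell
    hi = min(row1, col1)            # largest admissible top-left cell
    # sum over all tables as extreme as (no more likely than) the observed one
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Tip pinch (Table 6): CTS sufferers 30 vs 4, no CTS 38 vs 18
p = fisher_exact_2x2(30, 4, 38, 18)
print(round(p, 4))
```

Feeding in the counts of Tables 7 and 8 in the same way gives the key-pinch and palmar-pinch p-values.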

Table 9. Significant values of potential CTS sufferers amongst Group 1 (pinch strength 0–7 kg) and Group 2 (pinch strength >7 kg) workers through Fisher's exact test

Pinch strength            Tip pinch      Key pinch      Palmar pinch
Group 1
  CTS sufferers           30             16             14
  %                       44.1           57.1           60.9
Group 2
  CTS sufferers           4              18             20
  %                       18.2           29.0           29.9
p-value                   0.0421         0.0108         0.0123
Significance              Significant    Significant    Significant
                          (P < 0.05)     (P < 0.05)     (P < 0.05)

P-values are calculated through Fisher's exact test to find the significant values of potential CTS sufferers. A parameter is significant if 0.01 < P < 0.05, highly significant if P < 0.01, and not significant if P > 0.05. All the calculated p-values are less than the standard value (0.05), whereas the p-value of trimming shows high significance (<0.01). As the data are significant, the hypothesis is rejected. Hence the probability of being a CTS sufferer is higher in Group 1, i.e. among workers having lesser pinch strength, which may be due to gripping strength reduced by median nerve problems, i.e. median nerve compression, swelling and other problems caused by repetitive and forceful work.

3. CONCLUSIONS
In the present work the effect of repetitive hand movements and forceful work on the human body has been studied in terms of potential CTS symptoms. Fisher's exact test, the difference-between-proportions test, correlation analysis and electromyogram signal analysis were used to achieve the objectives. The following conclusions have been drawn from this work:

i.   Correlation analysis shows that healthy workers have more pinch strength than non-healthy workers.
ii.  The difference-between-proportions test, applied to all three types of pinch strength (tip, key and palmar pinch), reveals that workers with low pinch strength have a higher probability of potential CTS symptoms than workers with high pinch strength.
iii. Fisher's exact test shows the same result, which authenticates the results discussed above.

REFERENCES
[1] Ajimotokan H. A., "The effects of coupling repetitive motion tasks with a manually-stressed work environment", Researcher, 1(2), 2009, 37-40.
[2] Rainoldi A., Gazzoni M., Casale R., "Surface EMG signal alterations in carpal tunnel syndrome: a pilot study", Eur J Appl Physiol, 103, 2008, 233-242.
[3] Finneran Aoife, O'Sullivan Leonard, "Force, posture and repetition induced discomfort as a mediator in self-paced cycle time", International Journal of Industrial Ergonomics, 40, 2010, 257-266.
[4] Basmajian J. V., De Luca C. J., Muscles Alive: Their Functions Revealed by Electromyography, Williams & Wilkins, Baltimore, 1985.
[5] Barnhart Scott, Demers Paul A., Miller Mary, Longstreth W. T., Rosenstock Linda, "Carpal tunnel syndrome among ski manufacturing workers", Scand J Work Environ Health, 17, 1991, 46-52.
[6] Berlin C., Ortengren R., Lamkull D., Hanson Lars, "Corporate-internal vs. national standard – a comparison study of two ergonomics evaluation procedures used in automotive manufacturing", International Journal of Industrial Ergonomics, 39, 2009, 940-946.
[7] Bonfiglioli Roberta, Mattioli Stefano, Fiorentini Cristiana, Graziosi Francesca, Curti Stefania, Violante
[8] Geere Jo, Chester Rachel, Kale Swati, Jerosch-Herold Christina, "Power grip, pinch grip, manual muscle testing or thenar atrophy – which should be assessed as a motor outcome after carpal tunnel decompression? A systematic review", BMC Musculoskeletal Disorders, 8, 2007, 114-118.
[9] Geere Jo, Chester Rachel, Kale Swati, Jerosch-Herold Christina, "Power grip, pinch grip, manual muscle testing or thenar atrophy – which should be assessed as a motor outcome after carpal tunnel decompression? A systematic review", BMC Musculoskeletal Disorders, 8, 2007, 114.
[10] Giersiepen Klaus, Spallek Michael, "Carpal tunnel syndrome as an occupational disease", Dtsch Arztebl Int, 108(14), 2011, 238-242.
[11] Eleftheriou Andreas, Rachiotis George, Hadjichristodoulou Christos, "Cumulative keyboard strokes: a possible risk factor for carpal tunnel syndrome", J Occup Med Toxicol, 7(1), 2012, 16.
[12] Dong H., Loomer P., Barr A., LaRoche C., Young E. D., Rempel D., "The effect of tool handle shape on hand muscle load and pinch force in a simulated dental scaling task", Applied Ergonomics, 38, 2007, 525-531.
[13] Gauthier François, Gélinas Dominique, Marcotte Pierre, "Vibration of portable orbital sanders and its impact on the development of work-related musculoskeletal disorders in the furniture industry", Computers & Industrial Engineering, 62, 2012, 762-769.
[14] Jantree C., Bunterngchit Y., Tapechum S., Vijitpornk V., "An experimental investigation on occupation factors affecting carpal tunnel syndrome in manufacturing industry works", AIJSTPME, 3(1), 2010, 47-53.


Evaluation of Total Productive Maintenance Towards Manufacturing Performance: A Review

Jagvir Singh, Deptt. of Industrial Engg., Guru Nanak Dev Engg. College, Ludhiana, Punjab (Jagvirpannu85@gmail.com)
Harmeet Singh, A.P., Deptt. of Mech. Engg., Guru Nanak Dev Engg. College, Ludhiana, Punjab (harmeetpawar@gmail.com)
Gopal K. Dixit, Associate Prof. in Mech. Engg., Bhai Gurdas Inst. of Engg. & Tech., Ludhiana, Punjab (gopalkdixit@yahoo.co.in)

ABSTRACT
The importance of maintenance has been emphasized especially in the manufacturing environment. The failure of equipment or machines to produce products on time as required reflects inefficiency in operations and, thus, failure to deliver the products to the customers. In order to achieve world-class performance, more and more companies are replacing their reactive, fire-fighting maintenance strategies with proactive strategies like preventive and predictive maintenance and aggressive strategies like total productive maintenance (TPM). While these newer maintenance strategies require increased commitments to training, resources and integration, they also promise to improve performance. Companies which seek to improve competitiveness must infuse quality and improvement measures into all aspects of their operations. This principle has led to a complete overhaul of maintenance practices in manufacturing plants. Maintenance managers view the consistent production of quality goods as greatly dependent on the quality of operations rendered by the necessary machinery. The TPM approach helps increase the uptime of equipment, reduces machinery set-up time, enhances quality and lowers costs; through this approach, maintenance becomes an integral part of the team. The purpose of this study is to examine the relationship between Total Productive Maintenance (TPM) practices, namely management commitment, information system focus, autonomous maintenance, 5S activities, employee training, employee involvement and planned maintenance, and performance improvements. A total of 45 manufacturing industries in the Northern region of India were selected for the questionnaire survey; however, only 15 questionnaires were returned and usable for analysis. The correlation and multiple regression analysis findings indicated that the TPM practices, except employee involvement, have a significant impact on performance improvements in terms of cost reduction, quality improvement, delivery compliance and flexibility enhancement.

Keywords: Total Productive Maintenance (TPM), Autonomous Maintenance.

1. INTRODUCTION
Huang et al. (2003) [1] highlight that in today's era of global recession and competition it is a basic business requirement to supply quality products at competitive prices by reducing manufacturing expenses, which is only possible by improving manufacturing performance. This increased global competition is forcing companies to improve and optimize their productivity in order to remain competitive. Vashisth and Kumar (2011) [2] described that the output of the production system is the products or services delivered, while the input consists of various resources, like the labour, materials, tools, plant and equipment, used for producing the products or services. The desired production output is achieved through high availability, which is influenced by equipment reliability and maintainability. The reliability of equipment decreases with time, which has brought the maintenance function into focus for improving the production system's performance. Cooke (2000) [3] and Meulen et al. (2008) [4] showed that equipment maintenance and system reliability are important factors that affect an organization's ability to provide quality and timely services to customers and to stay ahead of the competition. Manufacturing firms are realizing that there is a critical need for proper maintenance of production facilities and systems to make the best use of them. The maintenance function is therefore vital for the sustainable performance of any manufacturing plant. Ahuja et al. (2006) [5] pointed out that TPM is a proven manufacturing strategy that has been successfully employed globally for the last three decades for achieving the organizational objective of core competence in a dynamic environment.

1.1 Total Productive Maintenance (TPM)
Brah and Chong (2004) [6] addressed that Total Productive Maintenance is a Japanese approach to maximizing the effectiveness of the facilities used within businesses. It addresses not only maintenance but all aspects of the operation and installation of those facilities. It involves the whole organization, and when implemented effectively it benefits all sections of the business through improved efficiency and better overall performance. In addition, according to Davis (1995) [7], there are three components of TPM:
1. Total approach: an all-embracing philosophy which deals with all aspects of the facilities employed within all areas of an operating company and the people who operate, set up and maintain them.
2. Productive action: a very pro-active approach to the condition and operation of facilities, aimed at constantly improving productivity and overall business performance.
3. Maintenance: a very practical methodology for maintaining and improving the effectiveness of facilities and the overall integrity of production operations.

Nakajima (1989) [8] pointed out that TPM is an important move for companies seeking world-class manufacturing status. TPM can be considered a comprehensive maintenance strategy. TPM focuses on a total system of maintenance prevention, preventive maintenance and maintainability improvement. The aim of TPM activities is to reinforce corporate structures by eliminating all losses through the attainment of zero defects, zero failures and zero

accidents. Of these, the attainment of zero failures is of the greatest significance, because failures directly lead to defective products and a lower equipment operation ratio, which in turn becomes a major factor for accidents.

1.2 TPM Approaches
There are two main approaches found in the TPM literature, the Western approach and the Japanese approach. The Japanese approach is promoted by the Japanese Institute of Plant Maintenance and described by Nakajima (1984), whereas the Western approach is described by Willmott (1994), Wireman (1991) and Hartmann (1992). The Western approach is closely tied to the Japanese approach. Willmott (1994), keeping the Japanese approach in view, offered his own definition that is based on teamwork but does not necessarily require total employee participation, i.e. emphasis is laid on the use of teams to achieve specific operational targets. He further states that the concept of the TPM process is that all the assets on which production depends are always kept in optimum condition and available for maximum output. Hartmann (1992) presents a definition similar to Willmott's and states that Total Productive Maintenance permanently improves the overall effectiveness of equipment with the active involvement of its operators. The Japanese approach emphasizes the role of teamwork, small-group activities and the participation of all employees in the TPM process to achieve equipment improvement objectives; hence it is more people- and process-focused. The Western approach focuses on the equipment, with the understanding that operator involvement and participation in the TPM effort are required; hence it is focused on equipment improvement objectives. Ames (2003) finds that the Japanese are just as focused directly on the results as the Western approach is, and suggests that although there is very little real difference between the approaches, the Western definition emphasizes results as a marketing, or selling, tool to gain the interest of Western managers. Similarly, the Japanese Institute of Plant Maintenance also advocates company-wide application of TPM rather than an equipment focus.

2. LITERATURE
Nakajima (1989) [8] has defined TPM as an innovative approach to maintenance that optimizes equipment effectiveness, eliminates breakdowns, and promotes autonomous maintenance by operators through day-to-day activities involving the total workforce.

McKone and Weiss (1995) [9] identify significant gaps between industry practice and academic research and emphasize the need to bridge these gaps by providing guidelines for implementing TPM activities. As the goal of the TPM program is to markedly increase productivity without losing product quality, which is the major concern of business organizations, numerous companies of Bangladesh are trying to adopt TPM.

Neely et al. (1995) [10] suggested that performance measurement can be examined at three different levels: (1) the individual performance measures, (2) the performance measurement of the system, and (3) the relationship between the performance measurement system and its environment. They also highlighted three concepts of performance measurement: (1) classification of performance measures as per their financial and non-financial perspectives, (2) positioning of the performance measures in the strategic context, and (3) support of the organizational infrastructure, like resource allocation, work structuring and information systems, amongst others. Maintenance performance measurement aligns the strategic objectives within the hierarchical levels of the whole organization, allowing visibility of the company's goals and objectives from the top management level to the middle management at the tactical level and throughout the organization.

Leblanc (1995) [11] has emphasized various initiatives, like predicting cost savings, integration of cross-functional teams and effective identification of equipment root causes/problems, for reaping significant benefits from TPM implementation programs. It has been reported that careful, efficient planning and preparation are keys to successful organization-wide implementation of TPM.

Maier et al. (1998) [12] have investigated the impact of TPM initiatives on the production system and presented the benefits accrued through holistic implementation of TPM, based on data gained from the research project "World Class Manufacturing". They emphasized various factors, like subjective measures (program flexibility, delivery speed, on-time delivery, volume flexibility, quality and average unit costs) and objective measures (cost efficiency, quality performance, fast delivery, on-time delivery, inventory turnover and flexibility), for assessing the contributions of TPM initiatives to plant performance. The analysis confirmed the significant impact of TPM implementation on the effectiveness of the manufacturing system. They concluded that TPM is not the only factor determining a plant's performance and recommend that there is an emerging need to investigate the inter-relations of TPM with other approaches of continuous improvement, leading to a better explanation of manufacturing performance achievements.

McKone et al. (1999) [13] have proposed a theoretical framework testing the impact of contextual issues affecting the maintenance system performance of firms through systematic TPM implementation. The study brings out clearly that TQM and TPM programs are closely related. The study also identifies critical dimensions of TPM and their impact on manufacturing performance and demonstrates a strong relationship between TPM and the contextual factors. The research provides a better understanding of the relationships among TPM, JIT, TQM and EI for supporting the successful implementation of TPM.

McKone (2001) [14] investigated the relationship between Total Productive Maintenance (TPM) and manufacturing performance (MP) through Structural Equation Modeling (SEM) and found that TPM has a positive and significant relationship with low cost (as measured by higher inventory


turns), high levels of quality (as measured by higher levels of conformance to specifications), and strong delivery performance (as measured by a higher percentage of on-time deliveries and by faster speeds of delivery).

Kutucuoglu et al. (2001) [15] have stated that equipment is a major contributor to the performance and profitability of manufacturing systems. They classified maintenance performance measures into five categories: equipment-related performance, task-related performance, cost-related performance, immediate customer impact-related performance, and learning and growth-related performance. The study is aimed at investigating the role of performance measurement systems (PMS) in maintenance, with particular reference to developing a new PMS using the quality function deployment (QFD) technique. The framework substantially contributes to the area of maintenance management by incorporating key features of a successful PMS, namely goal deployment, a cross-functional structure and a balanced view of the system.

Ireland and Dale (2001) [16] demonstrated that the companies they studied implemented TPM because of the business difficulties they faced. In all three companies senior management had supported TPM and set up suitable organizational structures to facilitate its implementation. The companies had followed Nakajima's seven steps of autonomous maintenance, although different TPM pillars had been adopted, the common ones being improvements, education and training, safety, and quality maintenance. The main differences in TPM implementation related to the use of the ABC machine classification system and the role of facilitators.

Mora (2002) [17] states that implementing Total Productive Maintenance is not a difficult task; however, it requires some customized training in order to succeed. The results of implementing an effective program, in terms of increased plant efficiency and productivity, are outstanding.

McBride (2004) [18] suggested maintenance and reliability as a core business strategy and as key to a successful TPM implementation. Without the support of top management, TPM implementation will fail. Implementing TPM using the 12 steps will lead to "zero breakdowns" and "zero defects".

Ming-Hong (2004) [19] suggests that, to be successful, support is required not only from top management but also from the head of each department. The other key factor is that each employee must feel that they also benefit from this activity. This will improve their performance, which will be reflected in their monthly bonus; this will motivate the employees, which in turn will lead to better progress. The design of the activity should be kept as simple as possible.

Eti et al. (2004) [20] have explored the ways in which Nigerian manufacturing industries have implemented TPM as a strategy and culture for improving performance, and suggested self-auditing and benchmarking against world-class industries with similar product lines as desirable prerequisites before TPM implementation. They further reported that Nigerian industry needs a culture of dealing more effectively with rapid change in order to inculcate a competitive outlook in its manufacturing environments.

Kennedy (2005) [21] suggested that it should be acknowledged that a TPM implementation is not a short-term fix program. It is a continuous journey based on changing the work area, then the equipment, so as to achieve a clean, neat, safe workplace through a "PULL" as opposed to a "PUSH" culture. Significant improvement can be evident within six months; however, full implementation can take many years to allow for the full benefits of the new culture created by TPM.

Seth and Tripathi (2005) [22] have investigated the strategic implications of TQM and TPM in an Indian manufacturing set-up. They examined the relationship between the factors influencing implementation of TQM and TPM initiatives and business performance for the following three approaches in the Indian context: TQM alone; TPM alone; and both TQM and TPM together, and have also extracted significant factors for the above three approaches. The research identifies critical significant factors, like leadership, process management and strategic planning, equipment management, and focus on customer satisfaction, for the effective adoption of TQM and TPM programs in the Indian manufacturing environment.

Campbell and James (2006) [23] highlighted that TPM is a manufacturing-led initiative that emphasizes the importance of (i) people with a "can do" and continual-improvement attitude and (ii) production and maintenance personnel working together in unison. TPM combines the best features of productive and preventive maintenance (PM) procedures with innovative management strategies and encourages total employee involvement.

Parida and Kumar (2006) [24] viewed maintenance performance measurement as the multidisciplinary process of measuring and justifying the value created by maintenance investment, and of taking care of the overall business requirements. A performance measurement system is defined as the set of metrics used to quantify the efficiency and effectiveness of actions.

Thun (2006) [25] has described the dynamic implications of TPM by working out the inter-relations between the various pillars of TPM to analyze its fundamental structures, and has identified the most appropriate strategy for implementation of TPM considering the interplay of the different pillars of this maintenance approach. The research focuses on analyzing the reasons behind successful TPM implementation and identifies inter-relations between the pillars of TPM. The research has been conducted to analyze the fundamental structures and to identify a strategy for successful implementation of TPM.

567
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Pessan (2007) [26] has proposed a multi-skill project scheduling problem model for maintenance activities in the organization. Preventive maintenance activities are usually planned in advance: production is stopped and all maintenance activities should be processed as fast as possible in order to restart production. Moreover, these activities are handled by human resources, require specific skills and are subject to precedence constraints. The main difference from the Multi-Skill Project Scheduling Problem is that some activities may be subject to disjunctive constraints due to material constraints of the production channel. He described how these constraints can be used to improve the usual Multi-Skill Project Scheduling Problem resolution methods.

Ramayah (2007) [27] has emphasized the importance of maintenance in the manufacturing environment. The failure of equipment or machines to produce products on time as required reflects inefficiency in operations and thus failure to deliver the products to the customers. The objective of TPM is to create active participation of all employees in maintenance and production functions, including the operators who operate the machines and equipment. The results suggest important aspects of autonomous maintenance and planned maintenance activities that contributed to the improvement in quality and cost.

Lazim (2008) [28] suggested that the importance of maintenance has been emphasized especially in the manufacturing environment. The failure of equipment or machines to produce products on time as required reflects inefficiency in operations and thus failure to deliver the products to the customers. The objective of TPM is to create active participation of all employees in maintenance and production functions, including the operators who operate the machines and equipment. This paper discusses part of a preliminary study finding focusing on two main TPM practices, namely autonomous maintenance and planned maintenance, in a Malaysian SME.

Khanna (2008) [29] has described that a large percentage of the total cost of doing business is due to maintenance-related activities in the organization. One approach to improving the performance of maintenance activities is to implement and develop Total Productive Maintenance (TPM). TPM methods and techniques have been successfully implemented in Japan over the past three decades, and more recently in India. Inherent within the TPM concept are aspects of enhancing overall equipment effectiveness. The research shares some of the experiences of TPM implementation in Mayur Uniquoters, India and the achievements made while adopting and implementing TPM. It also identifies some of the difficulties faced during implementation, relates them to the concept of TPM and proposes some solutions to eliminate them.

Paropate et al. (2011) [30] highlighted that a fundamental component of world-class manufacturing is total productive maintenance (TPM), which has been recognized as one of the significant operation strategies to regain the production losses due to equipment inefficiency. TPM is a methodology that aims to increase the availability of existing equipment, hence reducing the need for further capital investment. The aim of the paper is to study the implementation of the TPM program in an Indian automobile manufacturing industry. Through a case study of implementing TPM in an automobile industry, the practical aspects within and beyond basic TPM theory, difficulties in the adoption of TPM and problems encountered during implementation are discussed.

Wakjira and Singh (2012) [31] have focused upon the significant contributions of TPM implementation success factors, such as top management leadership and involvement, traditional maintenance practices and holistic TPM implementation initiatives, towards effecting improvements in manufacturing performance in the Ethiopian industry. The study establishes that focused TPM implementation over a reasonable time period can strategically contribute towards the realization of significant manufacturing performance enhancements. The study highlights the strong potential of TPM implementation initiatives in effecting organizational performance improvements. The achievements of Ethiopian manufacturing organizations through proactive TPM initiatives have been evaluated and critical TPM success factors identified for enhancing the effectiveness of TPM implementation programs in the Ethiopian context.

Sharma et al. (2012) [32] demonstrated that to improve productivity it is essential to improve the performance of the manufacturing systems. Such a system consists of various resources, such as labour, materials, tools, plant and equipment, used for production. The desired production output is achieved through high equipment availability, which is influenced by equipment reliability and maintainability. The maintenance function is therefore vital for the sustainable performance of any manufacturing plant, since a proper maintenance plan improves equipment availability and reliability. The paper describes Total Productive Maintenance as a strategy to improve manufacturing performance. Further, 5S as the base of Total Productive Maintenance (TPM) and overall equipment effectiveness (OEE) as a measure of effectiveness have also been discussed.

Badli (2012) [33] highlighted that Total Productive Maintenance (TPM) is a systematic approach to understanding the equipment's function, the equipment's relationship to product quality and the likely causes of failure of critical equipment conditions. Introducing TPM requires strategic planning, and few studies have been made in the field of maintenance within the context of Malaysian Small and Medium Enterprises (SMEs), especially automotive SMEs. Technologically, the automotive industry is among the most important and strategic industries in the Malaysian manufacturing sector and must be supported by efficient and effective equipment management. This paper discusses the state of TPM implementation in Malaysian automotive SMEs and


investigation of the Critical Success Factors (CSFs) associated with implementing TPM. A survey through questionnaires has been applied in this study to determine the level of TPM practices in the automotive industry. The paper systematically categorized TPM knowledge and understanding and the critical success factors (CSFs) in TPM implementation.

3. CONCEPTUAL FRAMEWORK
The conceptual framework shown in Figure 1 has been developed on the basis of the literature review and the research problem. This model focuses on the relationship and influence of TPM practices on performance improvements.

Fig. 1: Conceptual Framework

In this study, there are two kinds of variables: independent variables and dependent variables. The independent variables include management commitment, information system focus, employee involvement, autonomous maintenance, 5S activities, employee training and planned maintenance, while the dependent variable of this study is manufacturing performance improvement, which includes cost, quality, delivery and flexibility. It is hypothesized that there is a significant association between the above-said independent and dependent variables.

4. QUESTIONNAIRE DEVELOPMENT
In order to carry out the research, a survey of various medium and large manufacturing organizations of India that have successfully implemented TPM has been carried out through a specially designed questionnaire to ascertain the status of various TPM implementation factors and performance improvements. A detailed questionnaire, called the 'TPM Questionnaire', has been designed to seek information on the status of various components and issues of TPM practices in the Indian manufacturing industry.

Fig. 2: Research Methodology employed for the Study (flowchart: Detailed Literature Review → Problem Identification and Preparation of Research Plan → Industry Database Creation / Questionnaire Generation → Questionnaire Pre-Testing and Validation → Questionnaire Administration → Reminders, Phone Calls and Interviews → Data Collection, Analysis of Data and Analysis of Results → Evaluating the Relationships between TPM Input Factors and Output Factors)

5. QUESTIONNAIRE VALIDATION AND PRE-TESTING
For effectively conducting the survey, the questionnaire was prepared through an extensive literature review and validated through peer review by academicians and practitioners from industry. To ensure the relevance and effectiveness of the questions to the manufacturing industry, the questionnaire was pre-tested on a representative sample of industry. The suggestions received from peers, senior executives from the industries and academicians have been incorporated to make the questionnaire more relevant to its purpose, so that it may bring out key outcomes.

6. SAMPLING AND DATA COLLECTION
The extensive survey has been conducted for organizations in the country that have successfully implemented TPM. The 'TPM Questionnaire' was mailed to organizations chosen at random from the directory of the CII (Confederation of Indian Industry) that have implemented TPM successfully, and they were subsequently contacted through personal visits and regular follow-ups. A total of 15 responses


out of 45 selected industries have been received from the manufacturing organizations, a 33 percent response rate.

7. STRUCTURE OF THE QUESTIONNAIRE
The TPM questionnaire design is simple and easy to understand and does not require much of the participants' time to answer. The questionnaire consists of six sections, each collecting a specific type of information.

8. ANALYSIS OF SURVEY
In the present study, seven key TPM practices, namely Management Commitment (I1), Information System Focus (I2), Autonomous Maintenance (I3), 5S Activities (I4), Employee Training (I5), Employee Involvement (I6) and Planned Maintenance (I7), and four performance improvement parameters, namely Cost Reduction (O1), Quality Improvements (O2), Delivery Compliance (O3) and Flexibility Enhancements (O4), have been identified as significant for analyzing the impact of TPM practices towards achieving performance improvements.

9. RESULTS
Pearson's correlations and t-test results depict that management commitment significantly contributes towards the realization of various performance improvement parameters.

The results show that management commitment issues (I1) are significantly correlated (r = 0.80, p < 0.05) with cost reduction (O1).

This means that top management can structure proper strategic planning of the company's direction to achieve TPM goals aligned with business objectives. Moreover, given the impact of TPM on the cost and flexibility of the productivity output, management may consider investing in the programme, for example by enhancing personnel skills and providing a suitable working environment, training and compensation to employees.

Management commitment issues (I1) have shown insignificant correlation (r = -0.28 and r = 0.41) with quality (O2) and delivery (O3).

From the results it has been found that management commitment (I1) significantly influences (r = 0.55; p < 0.05) flexibility in processes (O4).

A positive attitude of top management correlates with continual attention being paid to improvement opportunities and provides employees with support for their encouraging behavior. This in turn strongly affects flexibility, in terms of an increase in the variety of tasks/jobs performed by the workers and a reduction in the new product development cycle. A well-motivated workforce is able to work with specific strategies to effectively improve the manufacturing lead time, set-up time and equipment flexibility.

Information system focus (I2) has not shown any significance with performance improvements.

This is because the employees are not accustomed to using the information system as an effective tool. This may be because traditional management still has some impact on the situation in the plant operations.

The results highlight that Autonomous Maintenance (I3) has not been contributing towards cost reduction (O1), delivery compliance (O3) and flexibility enhancements (O4) in the industry.

A significant correlation (r = 0.70; p < 0.05) has been exhibited between autonomous maintenance (I3) issues and quality improvements (O2).

The objective of autonomous maintenance is to minimize machine breakdowns and to avoid deterioration, failures and stoppages through the operators' involvement in maintaining the machines; giving operators freedom of decision making in their work gives them authority and a feeling of ownership, which leads to better quality (O2).

The results highlight that adequate 5S activities (I4) effectively contribute towards cost reduction, O1 (r = 0.55; p < 0.05), quality improvement, O2 (r = 0.32; p < 0.05), and delivery compliance, O3 (r = 0.27; p < 0.05).

5S, which is the pre-step of TPM, is a systematic approach providing for the contribution of all personnel to the cleaning regime of the company. The clean and steady environment targeted by 5S has a positive impact on cost reduction, quality improvement and delivery compliance.

As a result of 5S activities, a clean work environment is formed, work efficiency increases and a substructure is established for TPM. The personnel gain collective work skills through the team activities and become more sensitive in terms of improvement. With the aid of successful practices and training, many factors causing work accidents have been removed. Through the positive results obtained from the application, the motivation of the personnel towards the joint targets has been positively affected.

The results show a significant correlation (r = 0.45; p < 0.05) between employee training (I5) and quality improvements (O2).

Employee training directly enhances the human capital of the firm and directly leads to performance improvements by raising the general level of skills. As employees become more highly motivated and more highly skilled, their task performance improves and quality is directly enhanced.

Employee involvement (I6) has exhibited a significant linkage with quality, O2 (r = 0.41; p < 0.05), and flexibility, O4 (r = 0.47; p < 0.05).
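The correlation-plus-significance procedure reported above can be sketched as follows. This is a minimal illustration, not the study's actual computation: the scores and the names `tpm_input` and `performance_output` are made up for the example.

```python
import numpy as np

def pearson_r_and_t(x, y):
    """Pearson correlation r and its t statistic with n - 2 degrees of freedom."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]          # sample Pearson correlation
    n = len(x)
    t = r * np.sqrt((n - 2) / (1 - r**2))  # t statistic for H0: rho = 0
    return r, t

# Illustrative survey-style scores for one input factor and one output parameter
tpm_input = [3.1, 4.0, 2.5, 4.4, 3.8, 2.9, 4.2, 3.5]
performance_output = [2.8, 3.9, 2.2, 4.5, 3.6, 2.7, 4.1, 3.3]

r, t = pearson_r_and_t(tpm_input, performance_output)
# Significance at p < 0.05 follows from comparing |t| with the critical
# t value for n - 2 degrees of freedom (2.447 for n = 8).
```

A correlation is then reported as significant when |t| exceeds the two-tailed critical value for the sample's degrees of freedom.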


This is in line with the objectives of TPM, which require total commitment and participation from employees to raise awareness about performance improvement. Therefore, the manufacturing companies have to create such an environment of involvement, which will give more positive results because of the employees' higher satisfaction and quality of life at work.

Planned maintenance (I7) as an input to the TPM implementation has shown a significant correlation (r = 0.72; p < 0.05) with cost reduction (O1).

Planned maintenance influences cost reduction the most by focusing on cost-related issues and facilitating work through self-managed project teams and problem-solving groups: stabilizing production systems, effecting maintenance prevention improvements on the production system, enhancing human resource capabilities and improving the reliability of manufacturing systems. Effective planned maintenance programs can strategically contribute towards productivity improvements by improving basic equipment conditions, reducing unplanned downtimes and setup times, and minimizing troubles related to equipment upkeep and operations.

The results reveal that planned maintenance (I7) issues significantly affect (r = 0.63 and r = 0.38) the quality (O2) and delivery (O3) performance of the industry.

Planned maintenance is the basic or common practice in Total Productive Maintenance. The main idea in TPM is the direct involvement of operators in the maintenance process. TPM improves plant performance by breaking down the traditional barriers between maintenance and production, fostering improvement by looking at multiple perspectives on equipment operation and maintenance, increasing the technical skills of operators, including maintenance in daily tasks as well as long-term maintenance plans, and allowing information sharing among different departments, which leads to reduction in cost, increase in quality and better delivery services.

In order to investigate the critical success factors for achieving results through holistic TPM implementation, the significant correlations obtained through Pearson's correlation and the t-test are validated through Multiple Regression Analysis, as depicted in Table 1. The notations used in the table are: β = regression coefficient (beta coefficient), R = multiple correlation coefficient. The significant factors with their (β) significance level, multiple correlation coefficient (R) and F values for each performance parameter are indicated in Table 1. The results imply that the various TPM success factors depicted in the table contribute significantly to the respective manufacturing performance parameters reported.

Table 1: Multiple Regression Analysis between Input Factors and Output Parameters

Regression analysis results reveal that:

The multiple correlation coefficient (R) for cost reduction (O1) and the independent variables is 0.874 and the variance (R²) is 0.7656, meaning that 76.56% of the variance in the cost reduction (O1) parameter can be predicted from the management commitment (I1), 5S activities (I4), employee involvement (I6) and planned maintenance (I7) issues combined. The results indicate that management commitment is very significant (at p < 0.01) and its β coefficient (2.6464) is the highest among the independent variables chosen for regression analysis. Thus management commitment issues play a major role in achieving cost reduction.

The multiple correlation coefficient (R) for quality improvements (O2) and the independent variables is 0.773 and R² = 0.5982. This reveals that 59.82% of the variance in this output parameter can be predicted from the autonomous maintenance (I3), 5S activities (I4), employee training (I5) and planned maintenance (I7) issues combined. The results show that autonomous maintenance (p < 0.05) is the major contributor (β = 0.502) to the quality improvements.

Delivery compliance (O3) and the independent variables show an association with a regression coefficient of 0.384. The results show that 14.77% of the variance in O3 can be predicted from the 5S activities (I4) and planned maintenance (I7) input parameters. The 5S activities issues have the maximum impact on increasing the effectiveness of delivery compliance.

The multiple correlation coefficient (R) for flexibility enhancement (O4) and the input variables is 0.575 and the variance (R²) is 0.3316, meaning that 33.16% of the variance in flexibility enhancement (O4) can be predicted from the management commitment (I1) and employee involvement (I6) issues combined. The results indicate that management commitment is very significant (at p < 0.05) and its β coefficient (0.5873) is the highest among the independent variables chosen for regression analysis. Thus management commitment issues play a major role in achieving flexibility enhancements.
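The regression quantities used in this kind of analysis (β coefficients, R² as the share of variance predicted, and the multiple correlation coefficient R) can be computed with an ordinary least-squares fit. The sketch below uses invented data, not the survey responses; the array names and factor pairing are illustrative only.

```python
import numpy as np

# Illustrative data: each row is one responding firm; the two columns stand
# for two input factors (e.g. management commitment and 5S activities).
X = np.array([[3.1, 2.0], [4.0, 3.5], [2.5, 2.2],
              [4.4, 3.9], [3.8, 3.0], [2.9, 2.6]])
# y is one output parameter (e.g. cost reduction) for the same firms.
y = np.array([2.9, 4.1, 2.4, 4.6, 3.7, 2.8])

A = np.column_stack([np.ones(len(X)), X])      # prepend an intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # [intercept, beta_1, beta_2]

y_hat = A @ beta
ss_res = np.sum((y - y_hat) ** 2)              # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)           # total sum of squares
r_squared = 1 - ss_res / ss_tot                # share of variance predicted
multiple_R = np.sqrt(r_squared)                # multiple correlation coefficient
```

With real survey data, the β values would then be compared across input factors, as done above, to identify the dominant predictor of each output parameter.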


The results of the multiple correlation analysis corroborate those obtained through the t-test analysis and thus validate them.

10. CONCLUSION
It is concluded that total productive maintenance (TPM) practices influence the performance of manufacturing companies. The improvement can be seen in terms of cost, quality, delivery and flexibility, which have shown better results after implementing TPM programmes in plant operations. For example, there are fewer defects during processing, a reduction in late deliveries, increased product quality and a decrease in manpower costs.

REFERENCES

[1] Huang, S.H., Dismukes, J.P., Shi, J., Su, Q., Razzak, M.A., Bodhale, R. and Robinson, D.E. 2003. Implementing TPM in plant maintenance: some organizational barriers. International Journal of Quality and Reliability Management, vol. 17, no. 9, pp. 1003-1016.

[2] Vashisth, D.S. and Kumar, R. 2011. Analysis of a redundant system with common cause failures. International Journal of Engineering Science and Technology, vol. 3, no. 12, pp. 8247-8254.

[3] Cooke, F.L. 2000. Implementing TPM in plant maintenance: some organizational barriers. International Journal of Quality & Reliability Management, vol. 17, no. 9, pp. 1003-1016.

[4] Meulen, P., Petraitis, M. and Pannese, P. 2008. Design for maintenance. IEEE Conference on Advanced Semiconductor Manufacturing, pp. 278-281.

[5] Ahuja, I.P.S., Khamba, J.S. and Choudhary, R. 2006. Improved organizational behavior through strategic total productive maintenance implementation. ASME International Mechanical Engineering Congress and Exposition (IMECE), pp. 1-8.

[6] Brah, S.A. and Chong, W.K. 2004. Relationship between total productive maintenance and performance. International Journal of Production Research, vol. 42, no. 12, pp. 2383-2401.

[7] Davis, R.K. 1995. Productivity Improvements through TPM: The Philosophy and Application of Total Productive Maintenance.

[8] Nakajima, S. 1989. TPM Development Program: Implementing Total Productive Maintenance. Productivity Press, New York.

[9] McKone, K. and Weiss, E. 1995. Total productive maintenance: bridging the gap between practice and research. Darden School Working Paper, University of Virginia.

[10] Neely, A.D., Gregory, M.J. and Platts, K.W. 1995. Performance measurement system design: a literature review and research agenda. International Journal of Operations and Production Management, vol. 15, no. 4, pp. 80-116.

[11] Leblanc, G. 1995. Tapping the true potential of TPM: are you maximizing the value of your plant's program? Plant Engineering, vol. 49, no. 10, pp. 143-148.

[12] Maier, F.H., Milling, P.M. and Hasenpusch, J. 1998. Total Productive Maintenance: An International Analysis of Implementation and Performance. http://iswww.bwl.unimannheim.de/lehrstuhl/publikationen/TPM.pdf

[13] McKone, K.E., Schonberger, R.G. and Cua, K.O. 1999. Total productive maintenance: a contextual view. Journal of Operations Management, vol. 17, no. 2, pp. 123-144.

[14] McKone, K.E., Schroeder, R.G. and Cua, K.O. 2001. The impact of total productive maintenance practices on manufacturing performance. Journal of Operations Management, vol. 19, pp. 39-58.

[15] Kutucuoglu, K.Y., Hamali, J., Irani, Z. and Sharp, J.M. 2001. A framework for managing maintenance using performance measurement systems. International Journal of Operations and Production Management, vol. 21, no. 1/2, pp. 173-194.

[16] Ireland, F. and Dale, B.G. 2001. A study of total productive maintenance implementation. Journal of Quality in Maintenance Engineering, vol. 7, no. 3, pp. 183-192.

[17] Mora, E. 2002. The Right Ingredients for a Successful TPM or Lean Implementation.

[18] McBride, D. 2004. Implementing TPM: Total Productive Maintenance, Lean Manufacturing Consulting and Training. EMS Consulting Group.

[19] Ming-Hong, L. 2004. Factors Affecting the Implementation of Total Productive Maintenance System.

[20] Eti, M.C., Ogaji, S.O.T. and Probert, S.D. 2004. Implementing total productive maintenance in Nigerian manufacturing industries. Applied Energy, vol. 79, no. 4, pp. 385-401.

[21] Kennedy, R. 2005. Examining the Process of RCM and TPM. The Center for TPM (Australasia), The Plant Maintenance Resource Center, pp. 9-13.

[22] Seth, D. and Tripathi, D. 2005. Relationship between TQM and TPM implementation factors and business performance of manufacturing industry in Indian context. The International Journal of Quality and Reliability Management, vol. 22, no. 2/3, pp. 256-277.

[23] Campbell, J.D. and Reyes-Picknell, J. 2006. Strategies for Excellence in Maintenance Management (2nd ed.). Productivity Press.


[24] Parida, A. and Kumar, U. 2006. Maintenance performance measurement (MPM): issues and challenges. Journal of Quality in Maintenance Engineering, vol. 12, no. 3, pp. 239-251.

[25] Thun, J.H. 2006. Maintaining preventive maintenance and maintenance prevention: analyzing the dynamic implications of Total Productive Maintenance. System Dynamics Review, vol. 22, no. 2, pp. 163-179.

[26] Pessan, C. 2007. Multi-skill project scheduling problem and total productive maintenance. European Journal of Operational Research, vol. 78, no. 2, pp. 146-161.

[27] Ramayah, T. 2007. Total productive maintenance and performance: a Malaysian SME experience. International Review of Business Research Papers, vol. 4, no. 4, pp. 237-250.

[28] Lazim, H.M., Ramayah, T. and Ahmad, N. 2010. Total productive maintenance and performance: a Malaysian SME experience. International Review of Business Research Papers, vol. 4, no. 4, pp. 237-250.

[29] Khanna, V.K. 2008. Total productive maintenance experience: case study. International Journal of Productivity and Quality Management, vol. 3, no. 6, pp. 12-32.

[30] Paropate, R.V., Jachak, S.R. and Hatwalne, P.A. 2011. Implementing approach of total productive maintenance in Indian industries & theoretical aspect: an overview. International Journal of Advanced Engineering Sciences and Technologies, vol. 6, no. 2, pp. 270-276.

[31] Wakjira, M.W. and Singh, A.P. 2012. Total productive maintenance: a case study in manufacturing industry. Global Journal of Researches in Engineering, vol. 12, iss. 1.

[32] Sharma, A.K., Shudhanshu and Bhardwaj, A. 2012. Manufacturing performance and evolution of TPM. International Journal of Engineering Science and Technology, vol. 4, no. 3, pp. 854-866.

[33] Badli, S.M.Y. 2012. Total productive maintenance: a study of Malaysian automotive SMEs. Proceedings of the World Congress on Engineering 2012, vol. III, WCE 2012, July 4-6, 2012, London, U.K.


EVALUATION OF TECHNOLOGICAL INNOVATION CAPABILITIES OF SMEs

Taranjit Singh
Dept. of Industrial Engg.
Guru Nanak Dev Engg. College, Ludhiana,
Punjab, India
Taranvirk22@hotmail.com

Gopal K. Dixit
Bhai Gurdas Institute of Engg. & Tech., Sangrur,
Punjab, India
gopalkdixit@yahoo.co.in
Abstract: In today's competitive business environment, global competition forces companies to perpetually seek ways of improving their products and services. The need of the hour is to deliver high-quality products through continuous improvements in product features, bring new products to the market faster, make product changes faster and more manageable, improve forecasting accuracy of product demands, reduce costs, improve employee training, skills and education levels, improve information systems and networks, and achieve greater flexibility of manufacturing functions. Thus organizations are left with no choice but to upgrade their existing systems, products and technologies for their survival. The present study presents a detailed analysis of the various factors and issues hindering the technology development initiatives of the cycle parts industry in the region and their relative significance in affecting performance output.

Keywords: Technology Upgradation, Organizational Culture, Resource Support, Govt. Support, Alliances, Performance Output

1.1 General
The dynamic engine of a nation's economic growth is driven by small and medium enterprises, whose activities are the spur to aggregate economic and social benefits. As SMEs face pressure from increased competition, shortening product life cycles and growing product complexity, many are finding that they need to change the way they develop new technologies, products and services. Firms should develop innovative capabilities for their survival and growth in the era of global competition. Technological advancement achieved through technological innovations enables firms to acquire or improve competitiveness through market orientation or quality improvement of products, among others (Nanda and Singh, 2009) [1].

1.2 Technology Upgradation in SMEs
In the present global business context, it has been observed that existing production techniques and processes in SMEs are dragging them into becoming uncompetitive. Therefore, SMEs need to harness technology appropriate to them to enhance their national and international competitiveness (Nanda and Singh, 2008) [2]. SMEs are facing many challenges in the age of market and trade globalization. They therefore need to strengthen their technological base to make themselves competitive, so as to create a niche for themselves. The opportunities are immense if they can upgrade their capabilities to catch up with modern techniques of management, production and marketing (Dixit and Nanda, 2011) [3]. It is high time that the industries wake up and gear up for R&D initiatives to develop cutting-edge technologies for sustained competitive advantage in the global marketplace. Technology upgradation has become mandatory for economic development, industrial growth, enhanced corporate image, more flexible responses, strategic self-reliance and the sustained competitiveness of an enterprise. Thus technology upgradation efforts must be placed within the context of market opportunities, customer needs and strategic direction, thereby leading to an improved product and technology portfolio. Without continuous technology upgradation, no enterprise can ever remain competitive, and the basis of technology creation and upgradation is research and development (Choi, 1989) [4].

1.3 Technology Development in India
India has experienced a transformation from a regime of regulated economic development to a competitive regime since the liberalization of 1991. The major reason for this turnover is rapid technological development and unprecedented obsolescence rates. Although the change in the technological pattern of Indian industry is remarkable, India is still trailing behind other developing countries in terms of growth and absorption of new technology, especially its Asian neighbors such as China and Taiwan. Organizations in India depend on external sources for their technological requirements and often look to organizations from developed countries or other bigger organizations for the acquisition of technology. This dependence on other organizations is due to the limited 'know-how' and 'know-why' competencies of organizations. The situation is even grimmer in smaller organizations. Organizations in India, especially SMEs, should develop their technological and learning capabilities

574
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
so that they become more innovative and less dependent on other organizations, if they have to compete at the global level.

LITERATURE REVIEW

2.1 Organizational Culture
In the implementation of innovation, firms have to create an organizational culture that fosters innovation by ensuring employee skills, providing incentives and removing obstacles (Aderemi, 2009) [5]. A competent workforce, education, training, reward schemes and availability of scientific facilities are important factors for technology upgradation of any industry. Innovative organizations believe that the bottom-line difference between success and failure is finding, developing and nurturing these elements (Dixit and Nanda, 2012) [3].

2.2 Resource Support
To support technology upgradation programs, an organization must provide resources in the form of adequate physical infrastructure and sufficient financial support. Innovative organizations have better systems and technology in place than less innovative organizations (Ghorbani and Bagheri, 2008) [6]. Advanced equipment and resources (Sheel, 2002) [7], an innovation budget (Huang, 2008) [8] and availability of adequate finance have been viewed as critical elements for the development of SMEs (Abor and Quartey, 2010) [9].

2.3 Policy Environment
Government can create the right economic, fiscal and regulatory framework within which innovation can flourish (Hyland and Beckett, 2005) [10]. Government assistance can basically be divided into two sub-groups: financial and technical. Financial assistance includes various forms of investment incentives and soft policy loans; it includes contributions in capital accounts and interests, financing at concession rates, guarantee concessions, and tax incentives. Technical assistance consists of human resource training, export promotion initiatives, and quality and technology programs (Zeng et al., 2010) [11].

2.4 Alliance with External Organizations
Organizations need a culture that supports collaboration and a systematic approach for managing innovation. Knowledge is generated not only by individuals and organizations, but also by their complex pattern of interaction. Through collaborations, a company can improve its exploration and exploitation capabilities and consequently improve its innovative capacity (Faems et al., 2005) [12]. Strong ties can offer steady flows of new ideas, technological innovations, and operational support (Nanda and Singh, 2009) [1].
2.7 Conceptual Model
The conceptual model of the study is shown in Figure 2.1. Four input constructs feed the technology development program:
• Organizational Culture: manpower development, learning environment, training to employees, traits for innovation, intrinsic-extrinsic motivation, risk-taking and strategic direction.
• Resource Support: physical environment, modernization programs, capital support.
• Policy Environment: policy environment, financial support to R&D.
• Alliance with External Organizations: industrial partnerships, university-industry partnerships, other partnerships.
The output of the technology development program is assessed through three constructs:
• Level of Technology: implementation of technology in use, role of old technology in impairing performance, external sources for technology needs, in-house R&D for developing new technology products and processes, technology developed through in-house research, and use of 'Risky Research' and 'Imitation for Creation' as strategic approaches.
• Strategic Structure and Output of Research Function: collecting customer requirements through a separate marketing department, a well defined R&D policy for technology development, new processes and products developed through in-house research efforts, and organizational structure for the R&D function.
• Response to Market Needs: increase in product features to respond to needs of the marketplace, product quality and attributes as compared to competitors, markets served by the industrial units, and improvement in the product-mix offered by the industrial units.

Figure 2.1: Conceptual Model
ANALYSIS OF SURVEY

3.1 Industrial Units Surveyed
The small scale cycle parts industry in Ludhiana (Punjab, India) has been included in the survey. A total of 95 cycle parts units were selected from the list of registered units provided by the office of the District Industrial Centre, Ludhiana. A total of 46 units responded to the questionnaire.

CONCLUSIONS

4.1 Results and Major Findings
The results have been derived from the descriptive and empirical analysis of data collected through the questionnaire based survey. The main findings are presented as follows:

General Areas
• The most significant (PPS>70) factors deteriorating performance of the cycle parts sector include absence of large scale manufacturing industry in the region, lack of technological dynamism, and shortage and high cost of electricity to run production operations smoothly.
• Increase in competition because of globalization and liberalization, scarcity of funds for development projects, shortage of multi-skilled workforce and high price of raw material are also significant (PPS 65-70).

Organizational Culture Issues
• The education level in the majority (63 percent) of the units is between fair and good. However, one fourth (24 percent) of the units have considered the poor education level of employees to be a serious concern.
• Most of the units (83 percent) do not provide any formal training to employees. Only a few (4 percent) organizations provide formal training to employees just after induction into the organization.
• The majority of organizations (92 percent) either give a fixed monetary reward, an increment in salary or a share in the profits made on account of innovation. Thus reward schemes have been largely based on extrinsic motivation tools only.
• In the majority of units the role of top management has been supportive in situations of project failures. However, a few organizations (4 percent) take strict action against members of the project team or discourage employees from undertaking projects for innovation if failure occurs.
• Industrial units lack availability of technical and scientific staff (R&D personnel). The majority of units do not have R&D personnel in the required numbers. Only around one tenth (11%) of the units have this manpower in adequate strength to undertake technology upgradation initiatives.
• The level of encouragement to employees by senior management for undertaking R&D work is moderate (PPS=57.60). There is very little to reasonable pressure on employees to put efforts into technology development.
• The cycle parts sector in the region is not performing well in availability of multi-skilled workforce. Only about one fifth of the units have such workforce in desired numbers.

Resource Support Issues
• Industry, in general, has lacked adequate financial support for technology upgradation. One third of the units face acute shortage of funds; 59 percent of units have only little to reasonable support for their development projects.
• State of the art production machinery and equipment is not available in the majority (63 percent) of the industrial units. Less than one tenth (9 percent) of the units have the latest production facilities.
• Cycle parts units in the region do not have the latest software for drafting, designing and modeling; 79 percent of the units use such software only to a very small extent.
• The cycle parts sector lacks availability of dedicated laboratories with facilities for experimentation and analysis. The majority of units (74 percent) do not have these facilities.
• As far as earmarking of funds for research and development activities is concerned, the performance of industry is not satisfactory. 59 percent of units do not clearly allocate funds for research activities; 26 percent club these funds with other developmental activities.
• Investments in the research function by the local industry do not compare with global standards by any means. 63 percent of units do not spend even 0.5% of annual turnover on R&D; another 20 percent spend between 0.5-2.5% of annual turnover on development initiatives.
• Absence of modernization and renovation programs is another aspect preventing development in the cycle parts sector. Only one fifth (19%) of the units regularly implement modernization and renovation measures.
Policy Environment Issues
• The majority of units (66 percent) do not receive any financial help from the government, which is discouraging.
• Government has to make improvements (PPS=57.60) in its policies to ensure availability of raw materials at appropriate prices. The majority of units have considered raw material prices to be high and significant in impairing their performance.
• Availability and cost of electric power in the region has been considered a major problem. Nearly two thirds (62 percent) of the units consider this factor as most significant in restricting growth and competitiveness of industrial units in the region.
• The proprietors and senior executives of the industrial units are of the opinion that government can suitably reward entrepreneurs for their achievements in the field of technological innovations, support the cycle parts industry by organizing seminars/workshops on advanced and upcoming technologies, provide labs for testing and analysis, and fund employee training programs.

Alliance Issues
• Organizations have shown an unreasonably low rating (PPS=34.24 only) in interaction with external agencies (other industries, academic institutes and research institutes). Most of the units have never dealt with these organizations.
• Industry has shown a poor rating (PPS=49.45) in deriving support from government service institutes. Only a very few (4 percent) industrial units have sought active support from these government subsidiaries.
• The industry has obtained an extremely poor rating in alliance with academic institutions. Most of the organizations (89 percent) have not experienced any affirmative results through industry-institute interactions.
• The cycle parts sector in the region lacks good R&D infrastructure. The majority of units (72 percent) are of the opinion that institutional infrastructure can be helpful in development initiatives of industry to a large extent.
• The majority of units (73 percent) have been dependent on large scale manufacturing industry in the country for process technology needs.

Concerning Output Performance Parameters
• The response of industry in utilizing the 'Risky Research' strategy has been low. Nearly half of the industrial units (48 percent) are not using this strategy at all; another one third use it only to a small extent.
• A relatively low rating has been shown for the 'Imitation for Creation' strategy. More than half (54%) of the units have never practiced this strategy; 39 percent have used it occasionally for technology development.
• Nearly half of the industrial units utilize their research efforts for solving maintenance related problems. Only 18 percent of the units tend to use the research function for developing new products; another one fourth utilize it for developing improved processes.
• Only 30 percent of the industrial units collect information on customer requirements in a structured manner. Of these, 18 percent of units have a separate marketing department to perform this function, and in the remaining 12 percent a team of senior executives performs this job.
• The cycle parts sector is aware of the benefits of in-house technology development programs (PPS=46.20), but a lot needs to be done to convert this awareness into reality. At present, not even one tenth (6 percent) of the units employ the latest technology to produce products; 59 percent of units employ old technology in their products.
• The performance of industry has not been very encouraging as far as increasing the product mix and adding new features to products is concerned. Only 3 percent of units have increased product features considerably in the last few years.
• There are only a few units (6 percent) which follow and practice a well defined R&D policy. 40 percent of units do not have a defined R&D policy, 44 percent have just started formulating their research policy, and the remaining 10 percent have almost decided their R&D policy.

5.2 Conclusions and Recommendations
The main conclusions to be drawn are as follows:
i. Absence of large scale manufacturing industry in the region and use of old process technology to manufacture products are the prime factors affecting performance of small units. A large scale manufacturing sector, if present, provides resources in the form of finance and expertise, as well as operational support and international opportunities.
ii. High cost of electricity with restricted and unreliable supply is also affecting industrial performance.
iii. Becoming an innovative organization requires an organizational culture that constantly guides employees to strive for innovation and a climate that is conducive to creativity. The small scale sector
in the region has to particularly focus on appropriate reward systems and training of employees to build an organizational culture conducive to process improvements and product innovations.
iv. The performance of industry is worst in the resource support component. A restriction of resources limits innovative ability because employees are more occupied with finding additional resources than with actually developing new products. Resources are important not only for functional support, but also because having an adequate level of resources for a project influences employees' perception that the project is valuable and worthy of organizational support.
v. Government has several schemes for industry, but the benefits of the same have not reached the industrial sector because of lack of awareness among proprietors regarding the schemes and the ineffectiveness of government subsidiaries in reaching small units and extending support. Government should also ensure good quality and reliable physical infrastructure at reasonable prices.
vi. Small industry has not been interacting much with external organizations for technology upgradation. Industrial units should enter into interactive learning networks with other firms, customers and suppliers, government laboratories, universities and R&D organizations.

REFERENCES

[1] Nanda, T. and Singh, T.P. (2009), 'Determinants of creativity and innovation in the workplace: a comprehensive review', International Journal of Technology, Policy and Management, Vol. 9, No. 1, pp. 84-106.
[2] Nanda, T. and Singh, T.P. (2008), 'A Comprehensive Strategy for Technology Generation through Effective Industry-Institute Bonding', The Indian Journal of Technical Education, Vol. 31, No. 2, pp. 1-6.
[3] Dixit, G.K. and Nanda, T. (2011), 'Evaluation of Technology Development Initiatives in Small Scale Industry in India', IJITM, pp. 120-134.
[4] Choi, H.S. (1989), 'From Imitation to Creation', Technological Forecasting and Social Change, Vol. 36, pp. 209-215.
[5] Aderemi, H.O., Hassan, O.M., Siyanbola, W.O. and Taiwo, K. (2009), 'Managing Science and Technology Occupations of Women in Nigeria', Journal of Technology Management & Innovation, Vol. 4, No. 3, pp. 34-45.
[6] Ghorbani, A.A. and Bagheri, E. (2008), 'The state of the art in critical infrastructure protection: a framework for convergence', International Journal of Critical Infrastructures, Vol. 4, No. 3, pp. 215-244.
[7] Sheel, C. (2002), 'Knowledge Clusters of Technological Innovation', Journal of Knowledge Management, Vol. 6, No. 4, pp. 356-367.
[8] Huang, S.C. (2008), 'Efficient Industrial Technology Policy, High Government Industrial R&D Expenditure: Does one require the other?', International Journal of Technology, Policy and Management, Vol. 8, No. 3, pp. 211-236.
[9] Abor, J. and Quartey, P. (2010), 'Issues in SME Development in Ghana and South Africa', International Research Journal of Finance and Economics, Iss. 39, pp. 218-228.
[11] Zeng, S.X., Xie, X.M. and Tam, C.M. (2010), 'Relationship between cooperation networks and innovation performance of SMEs', Technovation, Vol. 30, Iss. 3, pp. 181-194.
[12] Faems, D., Van Looy, B. and Debackere, K. (2005), 'Interorganizational collaboration and innovation: Toward a portfolio approach', Journal of Product Innovation Management, Vol. 22, No. 3, pp. 238-250.
TECHNO-ECONOMIC ASPECTS IN MICROMACHINING OF H11 HOT DIE STEEL MOULD USING EDM - A CASE STUDY

Shalinder Chopra
Assistant Professor, Chandigarh University, Gharuan (Mohali)
svmachopra@gmail.com

Aprinder Singh Sandhu
Assistant Professor, GNDEC, Ludhiana
apsandhu106@gmail.com
ABSTRACT
To achieve a competitive edge in manufacturing in this era of stiff competition, most manufacturing companies are focusing on the selection of a process for shaping a component by considering various technical and economic factors like type of material, shape, process, process parameters and cost of processing. Companies are using techno-economic analysis to estimate the optimal technology and economic processes to carry out their tasks in an economical and efficient way. Product miniaturization has opened up a whole new vista of possibilities in the manufacturing industry. Various micromachining processes like CNC, EDM, LBM and ECM are being used to produce micro grooves/holes, each process having its relative merits and demerits in terms of metal removal rate, wear rate, rejection, cost of process and wastage of material, as stated by different researchers. The present work has been carried out to identify various techno-economic aspects in micromachining of an H11 Hot Die Steel mould using EDM in a small scale industry. The experimental work has been carried out using a Taguchi L-18 Orthogonal Array design and verified by ANOVA. The optimized EDM process parameters for MRR and ROC to cut micro grooves/holes have been worked out. It is estimated that with the induction of this micromachining process with optimized process parameters, the company has enhanced its productivity and profitability compared to previous processing by CNC machines.

Keywords
EDM, MRR, Radial Over Cut, machining of H11 Hot Die Steel, Mould and Die.

1. INTRODUCTION
Miniaturization is very promising for medical and biotechnology applications requiring micro-systems to interact with molecules, proteins, cells and tissues. For instance, with miniature devices, cancer agents like PSA and CA can be detected at early stages; miniaturizing such devices will reduce the cost of cancer monitoring and save many lives. Miniature equipment like probes, stents and fibre optic cameras is used in surgeries and implants to avoid tissue damage, and its market is growing rapidly. With miniaturization, products with more functionality are achieved: a cell phone not only provides mobile communication, but also has internet, GPS, camera, audio/video player, video games etc. Digital data storage (hard-disc drives) gets smaller and stores more data. Micromachining is capable of fabricating three dimensional micro features on a variety of engineering materials like ceramics, metals, composites and polymers. Egashira et al. (2010) carried out EDM of submicron holes using two types of electrodes: tungsten electrodes (1µm or less) made by combining wire electro discharge grinding and ECM, along with silicon electrodes (< 0.15µm) originally intended as probes for scanning probe microscopes. Hole drilling was done using a relaxation-type pulse generator at an open-circuit voltage of 20V or less, with only the machine's stray capacitance. Holes of < 1µm diameter and > 1µm depth were drilled successfully using tungsten tools. Iosub et al. (2010) revealed the influence of EDM parameters on MRR, TWR and surface quality of SiC/Al (aluminium matrix composite) reinforced with 7% SiC and 3.5% graphite. 27 brass tools of 3.97 mm diameter, with different Pulse-ON times, Pulse-OFF times and peak currents, were used to machine the hybrid composite using a mathematical model developed by full factorial design and regression analysis. Good surface quality of the composite was obtained easily by controlling EDM parameters. Biermann and Heilmann (2011) demonstrated that, given the downsizing of components and the industrial relevance of bored holes with small diameters and high length-to-diameter ratios, a combination of laser pre-drilling and single-lip deep hole drilling could shorten the process chain in machining components having non-planar surfaces, and could also reduce tool wear in machining case-hardened materials. In this research, the combination of these two processes was realized for the first time. Garn et al. (2011) investigated the vibration effect on micro-EDM. Micro-EDM boring was divided into three parts, namely start-up, major boring and work piece breakthrough of the tool. Investigations revealed a delayed start-up of the process on the work surface for micro-EDM, but this effect could be reduced by introducing vibration on the work piece; its cause was analyzed by single discharge analysis, which also served as a means for investigating the effect of vibration frequency. Zhang et al. (2011) demonstrated, for online fabrication of micro tools, a micromachining system based on electrochemical dissolution
of material, which consisted of mechanical movement of equipment, an ultra-short pulse power supply, a circulation system of electrolyte, and a Hall current sensor for detecting process status. In micro-ECM, the micro tool and micro-structures of the work could be sequentially machined by changing machining conditions. Applying a tungsten tool of 8 μm diameter and ultra-short voltage pulses, a micro-cross with a 30 μm wide groove, which had sharp edges, was obtained. Jiang et al. (2012) evacuated debris formed during the erosion process, which limited the achievable aspect ratio. To address the problem of debris accumulation, a pulse generator capable of shutting off harmful pulses and of applying high discharge energy pulses was developed. A series of experiments was conducted; experimental results revealed improved small-hole drilling efficiency and increased aspect ratio. Karthikeyan et al. (2012) revealed the behavior of micro Electric Discharge Milling in terms of shape, form and surface quality of the channel; rotation and traverse of the tool were significant, as they influenced flow of molten metal, flushing and redeposition of debris. The tool rotation effect not only disturbed the plasma but also influenced the final shape and form of the channel. Using SEM images of the micro channel at different instants and conditions of machining, the physical nature of the process was understood and results presented.

Table 1: Specifications of Equipment
Work Table: 400mm x 300mm
X Travel: 250mm
Y Travel: 150mm
Z Travel: 300mm
Electrode Pipe Dia range: 0.3mm to 3.0mm
Max. Job Height: 150mm
Max. Drilling Depth: 100mm
Maximum Work Piece Weight: 400Kg
Maximum Electrode Weight: 35Kg
Dielectric Capacity: 260 Litres

For micromachining of H-11 hot die steel by EDM, a cylindrical copper tool electrode (Fig. 2) is used.
2. EXPERIMENTATION
Experimental work has been performed on a Sparkonix EDM, Model S-35, under the supervision of an EDM operator, at M/s S.G. Engineering Works, Chandigarh. For the experimental investigation in micro-EDM, the following techniques and equipment are used:
1. Parametric optimization (Taguchi method, ANOVA)
2. Experimental investigation (EDM equipment)
3. Work material (H11 Hot Die Steel)
4. Cylindrical tool electrode (copper, 500µm diameter)

Spark Gap Voltage (SGV), Peak/Supply Current (SC), Pulse-ON time (TON) and Pulse-OFF time (TOFF) are the parameters considered for analyzing the EDM performance criteria, i.e. Metal Removal Rate (MRR) and accuracy (minimum Radial Over Cut, ROC) of the micro hole.

Fig. 2: Copper Tool Electrode

Table 2: Tool Electrode Specifications
Material Used: Copper
Electrical resistivity: 0.0167 Ωmm²/m
Purity: 99.8%
Melting point: 1083°C
Density: 8.9 kg/dm³
Height: 13.7mm
Diameter: 500µm

Two response variables, MRR and ROC (accuracy), are considered for the present study. Each time material is removed from the work, a micro hole is generated due to thermal damage. MRR is found with the help of the hole volume, which is calculated using the average hole diameter from SEMs; a stop watch is used for measuring the machining time. ROC is determined from the difference between the diameters of the machined hole and the micro tool:

MRR (mm³/min) = (π/4) × (Davg)² × h / machining time (minutes)
ROC (µm) = (Davg − davg) / 2
where h = work piece height, Davg = average diameter of the machined hole and davg = average diameter of the micro tool.
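As a quick sanity check of the two response formulas, the short sketch below evaluates them for one hypothetical reading (the 520 µm hole diameter, 1.2 mm depth and 4.5 min machining time are illustrative values, not measurements from this study):

```python
import math

def mrr_mm3_per_min(d_avg_mm, h_mm, t_min):
    # MRR = (pi/4) * Davg^2 * h / machining time
    return math.pi / 4 * d_avg_mm ** 2 * h_mm / t_min

def roc_um(hole_dia_um, tool_dia_um):
    # ROC = (Davg - davg) / 2, i.e. half the diametral overcut
    return (hole_dia_um - tool_dia_um) / 2

# Hypothetical reading: 520 um hole from the 500 um tool, 1.2 mm deep, 4.5 min
print(round(mrr_mm3_per_min(0.52, 1.2, 4.5), 4))  # -> 0.0566
print(roc_um(520, 500))                           # -> 10.0
```

Note that both diameters enter ROC in the same unit (µm here), while MRR is computed with the diameter and depth in mm.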
Fig. 1: Sparkonix EDM equipment, Model S-35, used for experimentation

Experiments on micromachining of H-11 hot die steel are conducted on the EDM equipment (Sparkonix Model S-35) using a copper tool electrode of 500µm diameter. Tool alignment is done with the help of a dial indicator to avoid any error due
to tool displacement during machining. The tip of the indicator is touched at various points on the cylindrical tool circumference while the tool is rotated, which helps in aligning the tool vertically straight. The tool is kept perpendicular to the work surface for proper machining.

3. RESULTS AND DISCUSSION
Observations are made during the conduct of 18 experiments on the EDM according to the L18 orthogonal array, with different sets of input parameters. The machining time in minutes for each experiment is observed with a stop watch.

Fig. 3: SEMs of holes made by EDM micromachining of H11 Hot Die Steel

The highest MRR (2.14) and S/N ratio (6.6083) are observed at 40V SGV, 2A current, 6μs TON and 2μs TOFF. The plots are prepared with Minitab, with the following rule to select the optimum parameters from the graphs: for both S/N ratio and mean, always the highest point in each graph.

Fig. 4 shows that the S/N ratio for MRR (larger is better) increases sharply with increase in TON. So for MRR, the optimum values are SGV=40V, SC=2A, TON=6μs and TOFF=2μs.

Fig. 4: Main Effect Plot for S/N Ratio (MRR)

Fig. 5 shows the main effect plot of means; here MRR is almost constant from 1A to 3A SC. MRR increases from 2μs to 6μs TON, showing that the longer the arc remains, the more material is removed. MRR also increases with increasing SGV and with decreasing TOFF. These two graphs show that TON and TOFF are the most predominant factors for MRR.

Fig. 5: Main Effect Plot for Means (MRR)

Fig. 6 shows the interaction plot of means for the different input parameters for MRR. This graph has been used to explain the effects of two input parameters at a time on MRR.

Fig. 6: Interaction Graph for MRR

Next, the results obtained for ROC are analyzed. After the conduct of all 18 experiments, the values of ROC, S/N ratios and means are calculated for the different sets of input parameters. The result analysis includes tables and graphs for S/N ratios and means, interaction graphs for the different input parameters, and ANOVA. The minimum ROC (2.75) and S/N ratio (-8.7867) are observed at 30V SGV, 2A current, 2μs TON and 2μs TOFF. The plots were prepared with Minitab, with the following rule to select the optimum parameters from the graphs: for S/N ratio, the highest point in each graph; for the mean, the lowest point. The S/N ratio for ROC (smaller is better) is plotted to select the optimum parameters; this value decreases sharply with increase in TON. So the optimum values are SGV=30V, SC=2A, TON=2µs and TOFF=2µs.

Fig. 7 shows the S/N ratio for radial overcut at the different levels of the input parameters. The value decreases with TON, and the S/N ratio decreases from 30V to 40V SGV. So the optimum values are SGV=30V, SC=2A, TON=2µs and TOFF=2µs.
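The quoted S/N ratios follow the standard Taguchi definitions (as implemented in Minitab): larger-is-better is −10·log10((1/n)·Σ 1/yᵢ²) and smaller-is-better is −10·log10((1/n)·Σ yᵢ²). For a single response value these reduce to ±20·log10(y), which reproduces the best-run figures reported above:

```python
import math

def sn_larger_is_better(ys):
    # Taguchi S/N ratio for a response to be maximized (e.g. MRR)
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

def sn_smaller_is_better(ys):
    # Taguchi S/N ratio for a response to be minimized (e.g. ROC)
    return -10 * math.log10(sum(y ** 2 for y in ys) / len(ys))

print(round(sn_larger_is_better([2.14]), 4))   # -> 6.6083 (best MRR run)
print(round(sn_smaller_is_better([2.75]), 4))  # -> -8.7867 (best ROC run)
```

With one replicate per run, as here, each of the 18 rows contributes a single y to its S/N value; the main effect plots then average these S/N values over the runs sharing each factor level.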
Fig. 7: Main Effect Plot for S/N Ratio (ROC) (signal-to-noise: smaller is better)

Fig. 8 is the main effect plot for means. ROC decreases from 1 A to 2 A SC, and then increases from 2 A to 3 A. ROC decreases from 6 µs to 2 µs TON, showing that the less time the arc remains, the less ROC is produced, leading to accuracy and productivity. ROC also decreases with decreasing SGV and TOFF. So from these graphs it is seen that TON and TOFF are the most predominant factors.

Fig. 8: Main Effect Plot for Mean (ROC)

Fig. 9 shows the interaction plot graph for means of the different input parameters for ROC. This graph explains the effects of two input parameters at a time on ROC.

Fig. 9: Interaction Graph for ROC (interaction plot for radial overcut, µm)

4. CONCLUSIONS
The following conclusions have been drawn:

 MRR increases with TON and SGV; ROC also increases with TON and SGV.
 The most dominant factors affecting MRR and ROC are TON and TOFF, as analyzed by the Taguchi method.
 Optimum micromachining conditions for MRR are TON=6 µs, TOFF=2 µs, SC=3 A and SGV=40 V.
 Optimum micromachining conditions for ROC are TON=2 µs, TOFF=2 µs, SC=2 A and SGV=30 V.
 Total operating cost excluding initial set-up cost is approx. Rs. 18,45,500/- for CNC and approx. Rs. 1,03,550 for EDM, a saving of Rs. 17,41,950/- to the company.
 If other fixed costs such as transformer cost, AC cost and CAD/CAM software cost are also excluded, the total operating cost is approx. Rs. 1,45,500 for CNC and approx. Rs. 1,03,550 for EDM, still a saving of Rs. 41,950/- to the company. This means CNC costs more than EDM in either case.

Thus it is suggested to use micro-EDM for a better machined circular micro hole as compared to the micro hole produced by CNC, leading to better economic conditions for industry with the suggested optimized parameters.

5. REFERENCES
[1] Iosub, A., Axinte, E. and Negoescu, F. May 2010. A Study about Micro-Drilling by Electrical Discharge Method of an Al/SiC Hybrid Composite. International Journal of Academic Research, Vol. 2, No. 3.
[2] Biermann, D. and Heilmann, M. 2011. Analysis of the Laser Drilling Process for the Combination with a Single-Lip Deep Hole Drilling Process with Small Diameters. Physics Procedia 12, 308–316.
[3] Karthikeyan, G., Garg, A. K., Ramkumar, J. and Dhamodaran, S. 2012. A microscopic investigation of machining behavior in µED-milling process. Journal of Manufacturing Processes.
[4] Jiang, Yi., Wansheng, Z. and Xuecheng, Xi. 2012. A study on pulse control for small-hole electrical discharge machining. pp. 1463–1471.
[5] Egashira, K., Morita, Y. and Hattori, Y. 2010. Electrical discharge machining of submicron holes using ultrasmall-diameter electrodes. Precision Engineering 34, 139–144.
[6] Garn, R., Schubert, A. and Zeidler, H. 2011. Analysis of the effect of vibrations on the micro-EDM process at the work piece surface. Precision Engineering 35, 364–368.
[7] Maity, K. P. and Singh, R. K. 2012. An optimisation of micro-EDM operation for fabrication of micro-hole. Int J Adv Manuf Technol 61:1221–1229.
[8] Zhang, Z., Wang, Y., Chen, F. and Mao, W. 2011. A Micromachining System Based on Electrochemical Dissolution of Material. Russian Journal of Electrochemistry, Vol. 47, No. 7, pp. 819–824.


OPTIMIZATION OF SURFACE ROUGHNESS IN CNC TURNING OF ALUMINIUM USING ANOVA TECHNIQUE

Karamjit Singh, Gurpreet Singh Bhangu, Supinder Singh Gill
Assistant Professor, BGIET, Sangrur
karamjit.nagri@bgiet.ac.in, gurpreet_bhangu2001@yahoo.com, supi_gill@yahoo.co.in

ABSTRACT
In modern manufacturing processes, tight tolerances and a good surface finish are required to obtain quality products. Surface roughness therefore plays an important role in many areas, and the factors affecting it are of great importance in the evaluation of machining accuracy. Turning is one of the most widely used manufacturing processes. The present research work studies the effect of cutting parameters such as spindle speed, feed rate, and depth of cut on the surface roughness of aluminium work pieces. The experiments have been designed according to Yates order and the analysis has been performed using the ANOVA technique. Additionally, a mathematical model has been developed using the method of regression. Two levels of four machining parameters have been used. The results from the present research work reveal that as the values of nose radius and spindle speed increase, the surface roughness decreases. Nose radius and spindle speed are significant factors in improving the surface finish of aluminium.

Keywords: CNC Turning, Machining, Surface roughness, ANOVA

1. INTRODUCTION
The challenge of modern machining industries is mainly focused on the achievement of high quality in terms of work piece dimensional accuracy, surface finish, high production rate, less wear on the cutting tools, economy of machining in terms of cost saving, and increased performance of the product with reduced environmental impact. Surface roughness plays an important role in many areas and is a factor of great importance in the evaluation of machining accuracy. We therefore perform an experiment aimed at producing minimum surface roughness [1]. For this experiment the cutting parameters spindle speed, feed, depth of cut and nose radius are used. The values of these parameters are varied to find out the effect of each on surface roughness. The experiment is performed for aluminium and brass, and the effects of the various parameters on the surfaces of both are compared.

1.1 Turning process
Turning is a very important machining process in which a single point cutting tool removes unwanted material from the surface of a rotating cylindrical work piece. The cutting tool is fed linearly in a direction parallel to the axis of rotation. Turning is carried out on a lathe that provides the power to turn the work piece at a given rotational speed and to feed the cutting tool at a specified rate and depth of cut. Therefore three cutting parameters, namely cutting speed, feed and depth of cut, need to be determined in a turning operation. The turning operations are accomplished using a cutting tool; the high forces and temperature during machining create a harsh environment for the cutting tool. Tool life is therefore important in evaluating cutting performance. The purpose of a turning operation is to produce parts with low surface roughness, and surface roughness is another important factor in evaluating cutting performance. Proper selection of cutting parameters and tool can produce longer tool life and lower surface roughness.

1.2 Tooling for Turning Operation
The tooling required for turning is typically a sharp single-point cutting tool that is either a single piece of metal or a long rectangular tool shank with a sharp insert attached to the end. These inserts can vary in size and shape, but are typically square, triangular, or diamond shaped pieces [3]. These cutting tools are inserted into the turret or a tool holder and fed into the rotating work piece to cut away material. All cutting tools used in turning can be found in a variety of materials, which determine the tool's properties and the work piece materials for which it is best suited. These properties include the tool's hardness, toughness, and resistance to wear. The most common tool materials include the following:

 High-speed steel (HSS)
 Carbide
 Carbon steel
 Cobalt high speed steel

The material of the tool is chosen based upon a number of factors, including the material of the work piece, cost, and tool life. Tool life is an important characteristic considered when selecting a tool, as it greatly affects the manufacturing costs. A short tool life will not only require additional tools to be purchased, but will also require time to change the tool each time it becomes too worn. We have used tool bits with nose radii of 0.8 and 0.4 mm. These are called DCMT tool bits because they are made by a British company named Die Cast Machine Tool Ltd (DCMT).

1.3 Materials for Work piece
In turning, the raw form of the material is a piece of stock from which the work pieces are cut. This stock is available in a variety of shapes such as solid cylindrical bars and hollow tubes. Custom extrusions or existing parts such as castings or forgings are also sometimes used. Turning can be performed on a variety of materials, including most metals and plastics. Common materials used in turning include the following:

 Aluminum


 Brass
 Nickel
 Steel
 Titanium
 Zinc

When selecting a material, several factors must be considered, including the cost, strength, resistance to wear, and machinability [7]. The machinability of a material is difficult to quantify, but a machinable material can be said to possess the following characteristics:

 Results in a good surface finish
 Promotes long tool life
 Requires low force and power to turn
 Provides easy collection of chips

We perform our experiment using aluminium and brass.

1.4 Possible Defects
Most defects in turning are inaccuracies in a feature's dimensions or surface roughness. There are several possible causes for these defects, including the following:

1.4.1 Incorrect cutting parameters
If the cutting parameters such as the feed rate, spindle speed, or depth of cut are too high, the surface of the work piece will be rougher than desired and may contain scratch marks or even burn marks. Also, a large depth of cut may result in vibration of the tool and cause inaccuracies in the cut.

1.4.2 Dull cutting tool
As a tool is used, its sharp edge wears down and becomes dull. A dull tool is less capable of making precision cuts.

1.4.3 Unsecured work piece
If the work piece is not securely clamped in the fixture, the friction of turning may cause it to shift and alter the desired cuts.

1.5 Aluminum
Aluminum is a silvery white and ductile member of the boron group of chemical elements. It has the symbol Al and its atomic number is 13. It is not soluble in water under normal circumstances. Aluminum is the most abundant metal in the Earth's crust, and the third most abundant element therein, after oxygen and silicon. It makes up about 8% by weight of the Earth's solid surface. Aluminum is too reactive chemically to occur in nature as a free metal. Instead, it is found combined in over 270 different minerals [4]. The chief source of aluminum is bauxite ore. Aluminum is remarkable for its low density and for its ability to resist corrosion due to the phenomenon of passivation. Structural components made from aluminum and its alloys are vital to the aerospace industry and are very important in other areas of transportation and building. Its reactive nature makes it useful as a catalyst or additive in chemical mixtures, including ammonium nitrate explosives, to enhance blast power.

2. CUTTING PARAMETERS
2.1 Spindle speed
The spindle speed is the rotational frequency of the spindle of the machine, measured in revolutions per minute (RPM). The preferred speed is determined by working backward from the desired surface speed (m/min) and incorporating the diameter (of the work piece or cutter). Excessive spindle speed will cause premature tool wear and breakages, and can cause tool chatter, all of which can lead to potentially dangerous conditions. Using the correct spindle speed for the material and tools will greatly affect tool life and the quality of the surface finish. For a given machining operation, the cutting speed will remain constant in most situations, so the spindle speed will also remain constant. Facing operations on a lathe, however, involve the machining of a constantly changing diameter. Ideally this means changing the spindle speed as the cut advances across the face of the work piece; this was hard to do in practice and was often ignored unless the work demanded it. The introduction of CNC controlled lathes has solved this awkward problem with a feature called Constant Surface Speed (CSS). By means of the machine's software and variable speed electric motors, the lathe can increase the RPM of the spindle as the cutter gets closer to the center of the part.

2.2 Feed Rate
Feed rate is the velocity at which the cutter is fed, that is, advanced against the work piece. It is expressed in units of distance per revolution for turning and boring (millimeters per revolution). It can be expressed thus for milling also, but it is often expressed in units of distance per time for milling (millimeters per minute), with considerations of how many teeth (or flutes) the cutter has then determining what that means for each tooth. Feed rate is dependent on:

 The surface finish desired
 The power available at the spindle
 The rigidity of the machine and tooling setup (ability to withstand vibration or chatter)
 The strength of the work piece
 The characteristics of the material being cut; chip flow depends on material type and feed rate. The ideal chip shape is small and breaks free early, carrying heat away from the tool and work.

2.3 Depth of Cut
Depth of cut is the advancement of the tool in the direction perpendicular to the axis of the work piece, measured in mm. It plays an important role in the surface finish of the work piece during turning. A high depth of cut is used for rough cuts, and a very small value is used for the final cut. Depth of cut should be chosen according to the need and the power available; a very high depth of cut may produce vibration or chatter.

Depth of Cut = (D1 - D2)/2

Where

D1 = Original diameter of stock in mm

D2 = Diameter obtained after turning
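The relations in Sections 2.1 and 2.3 can be sketched numerically. The RPM formula N = 1000·v/(π·D), linking surface speed in m/min to diameter in mm, is the standard relation implied by the text rather than stated in it:

```python
import math

def spindle_rpm(surface_speed_m_min, diameter_mm):
    """Spindle speed (RPM) worked backward from the desired surface speed
    and the diameter, as described in Section 2.1."""
    return 1000.0 * surface_speed_m_min / (math.pi * diameter_mm)

def depth_of_cut(d1_mm, d2_mm):
    """Depth of Cut = (D1 - D2)/2 from Section 2.3."""
    return (d1_mm - d2_mm) / 2.0

# Under Constant Surface Speed, the RPM rises as the cutter nears the centre,
# since spindle_rpm grows as the diameter shrinks at a fixed surface speed.
```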


3. RESPONSE PARAMETER
3.1 Roughness
Roughness is a measure of the texture of a surface. It is quantified by the vertical deviations of a real surface from its ideal form. If these deviations are large, the surface is rough; if they are small, the surface is smooth. Roughness is typically considered to be the high frequency, short wavelength component of a measured surface. Roughness plays an important role in determining how a real object will interact with its environment. Rough surfaces usually wear more quickly and have higher friction coefficients than smooth surfaces. Roughness is often a good predictor of the performance of a mechanical component, since irregularities in the surface may form nucleation sites for cracks or corrosion. Although roughness is usually undesirable, it is difficult and expensive to control in manufacturing. Decreasing the roughness of a surface will usually increase its manufacturing costs exponentially. This often results in a trade-off between the manufacturing cost of a component and its performance in application.

4. DESIGN PROCEDURE
Two levels are used for every parameter: a highest value (given a +ve sign) and a lowest value (given a -ve sign). The number of trials performed is given by (No. of levels)^(No. of parameters - 1), that is 2^(4-1) = 8.

Table 1: Cutting Parameters

Symbol | Cutting parameter | Level 1 | Level 2
A | Spindle speed | 900 | 1500
B | Feed | 0.02 | 0.06
C | Depth of cut | 0.1 | 0.3
R | Nose radius | 0.4 | 0.8

Units:
Spindle speed: RPM (revolutions per minute)
Feed: mm per revolution
Depth of cut: mm
Nose radius: mm

4.1 Design matrix (using Yates order)
The design matrix below was followed during turning of aluminium. In this matrix a +ve value denotes the highest value of the parameter and a -ve value denotes the lowest value.

Table 1.2: Design matrix

Sr. No. | Spindle speed (s) [1] | Feed (f) [2] | Depth of cut (d) [3] | Nose radius (r) [4 = 1*2*3]
1 | + | + | + | +
2 | - | + | + | -
3 | + | - | + | -
4 | - | - | + | +
5 | + | + | - | -
6 | - | + | - | +
7 | + | - | - | +
8 | - | - | - | -

Table 1.3: Values of Surface Roughness of Aluminium

Sr. No. | Y1 | Y2 | Y=(Y1+Y2)/2 | Y3
1 | 1.365 | 1.594 | 1.479 | 1.307
2 | 1.234 | 1.359 | 1.296 | 1.148
3 | 0.713 | 0.854 | 0.784 | 0.753
4 | 1.067 | 1.217 | 1.492 | 1.917
5 | 1.159 | 0.904 | 1.032 | 1.117
6 | 1.748 | 1.811 | 1.779 | 1.717
7 | 1.312 | 1.162 | 1.235 | 1.295
8 | 1.404 | 1.572 | 1.488 | 1.292

4.2 Evaluation of coefficients
Regression coefficients for spindle speed (S), feed rate (F), depth of cut (D), nose radius (R) and surface roughness of the selected model are calculated as:

bj = Σ (Xji Yi)/N , j = 0, 1, …, k

Where,

Xji = Value of a factor or interaction in coded form

Yi = Average value of the response parameter

N = Number of observations
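The Yates-order half-fraction of Table 1.2, with the nose-radius column aliased to the three-factor interaction (r = s·f·d), can be generated programmatically. A sketch, with the loop nesting chosen so the run order matches the table:

```python
# 2^(4-1) fractional factorial: three base columns s, f, d in coded (+1/-1)
# form, with the fourth factor generated as r = s * f * d (column 4 = 1*2*3).
runs = []
for d in (+1, -1):          # depth of cut varies slowest
    for f in (+1, -1):      # then feed
        for s in (+1, -1):  # spindle speed varies fastest
            runs.append((s, f, d, s * f * d))
```

Each of the eight rows reproduces one trial of Table 1.2, half the sixteen runs a full 2^4 design would need.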


K = Number of coefficients

Table 1.4: Calculated Values of Coefficients

Sr. No. | Coefficient | Value
1 | b0 | 1.324
2 | b1 | -0.191
3 | b2 | 0.074
4 | b3 | -0.061
5 | b4 | 0.173
6 | b12 | 0.112
7 | b13 | 0.059
8 | b14 | 0.052
9 | b23 | 0.052
10 | b24 | 0.060
11 | b34 | 0.049

Model developed with the above coefficients:

YSR = 1.324 - (0.191)S + (0.074)F - (0.061)D + (0.173)R + (0.112)SF + (0.059)SD + (0.052)SR + (0.052)FD + (0.060)FR + (0.049)DR

Table 1.5: Finding Variance of Response

Sr. No. | Y1 | Y2 | Ym=(Y1+Y2)/2 | ∆Y=Y2-Ym | ∆Y²
1 | 1.365 | 1.594 | 1.479 | 0.115 | 0.013
2 | 1.234 | 1.359 | 1.296 | 0.063 | 0.004
3 | 0.713 | 0.854 | 0.784 | 0.071 | 0.005
4 | 1.067 | 1.217 | 1.492 | 0.042 | 0.002
5 | 1.159 | 0.904 | 1.032 | -0.127 | 0.016
6 | 1.748 | 1.811 | 1.779 | 0.032 | 0.001
7 | 1.312 | 1.162 | 1.235 | -0.075 | 0.006
8 | 1.404 | 1.572 | 1.488 | -0.084 | 0.007
Sum = 0.054

4.3 Variance of Response:

S²y = [2Σ(∆Y²)]/N = (2 × 0.0542)/8 = 0.01355

Where,

S²y = Variance of the optimization parameter

Ym = Arithmetic mean of repetitions

Y2 = Value of response in a repetition trial

N = Number of observations

The 't' values of the coefficients:

Sr. No. | Coefficient | Due to | 't' value | Remarks
1 | b0 | Combined effect of all parameters | 32.135 | Significant
2 | b1 | Main effect of spindle speed | 4.635 | Significant
3 | b2 | Main effect of feed rate | 1.796 | Insignificant
4 | b3 | Main effect of depth of cut | 1.48 | Insignificant
5 | b4 | Main effect of nose radius | 4.199 | Significant
6 | b12 | Interaction effect of S & F | 2.718 | Significant
7 | b13 | Interaction effect of S & D | 1.432 | Insignificant
8 | b14 | Interaction effect of S & R | 1.262 | Insignificant
9 | b23 | Interaction effect of F & D | 1.262 | Insignificant
10 | b24 | Interaction effect of F & R | 1.456 | Insignificant
11 | b34 | Interaction effect of D & R | 1.189 | Insignificant

4.4 Checking the significance of coefficients of the model
The statistical significance of the coefficients can be tested using the 't' test. The level of significance of a particular parameter can be assessed by the magnitude of the 't' value associated with it: the higher the value of 't', the more significant the coefficient. The 't' value for a coefficient of the model is calculated as:

t = (bj)/sbj ,
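The coefficient formula bj = Σ(Xji·Yi)/N and the 't' check of Section 4.4 can be reproduced from the tabulated data. A sketch using the spindle-speed column; small differences from the paper's printed values are rounding:

```python
import math

# Coded spindle-speed column from Table 1.2 and mean responses Y from Table 1.3.
S = [+1, -1, +1, -1, +1, -1, +1, -1]
Y = [1.479, 1.296, 0.784, 1.492, 1.032, 1.779, 1.235, 1.488]
N = len(Y)

b0 = sum(Y) / N                             # combined effect of all parameters
b1 = sum(s * y for s, y in zip(S, Y)) / N   # main effect of spindle speed

s2y = 0.01355                 # variance of response from Section 4.3
sbj = math.sqrt(s2y / N)      # standard deviation of a coefficient
t_b1 = abs(b1) / sbj          # 't' statistic for b1
```

This reproduces b0 ≈ 1.324, b1 ≈ -0.191 and t ≈ 4.635 from Table 1.4 and the significance table.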


sbj = √[(S²y)/N]

Where,

bj = Absolute value of the coefficient

sbj = Standard deviation of the coefficient

The coefficients b0, b1, b4 and b12 are significant, as their 't' values are greater than the standard value, so these coefficients are used to produce the final model:

YSR = 1.324 - 0.191(S) + 0.173(R) + 0.112(SF)

Table 1.6: Finding Variance of Adequacy

Sr. No. | Yp | Y3 | ∆(Yp-Y3) | [∆(Yp-Y3)]²
1 | 1.418 | 1.307 | 0.111 | 0.012
2 | 1.232 | 1.148 | 0.084 | 0.007
3 | 0.848 | 0.753 | 0.095 | 0.009
4 | 1.800 | 1.917 | -0.117 | 0.014
5 | 1.072 | 1.117 | -0.045 | 0.002
6 | 1.576 | 1.717 | -0.141 | 0.019
7 | 1.194 | 1.295 | -0.101 | 0.010
8 | 1.454 | 1.292 | 0.162 | 0.026
Sum = 0.099

4.5 Variance of Adequacy:

S²ad = [Σ[∆(Yp-Y3)]²]/f

Where,

S²ad = Variance of adequacy

Y3 = Measured/observed response

Yp = Estimated/predicted value of response

f = N - (K+1) (degrees of freedom)

K = Number of independently controllable variables

With f = 8 - (4+1) = 3:

S²ad = (0.099)/3 = 0.033

Table 1.7: Analysis of Variance of Surface Roughness

Degrees of Freedom (f) | N | Variance of Adequacy (S²ad) | Variance of Response (S²y) | F-Ratio of Model (Fm = S²ad/S²y) | F-Ratio Table (Ft at 3,8) | Adequacy of Model (Fm < Ft?)
3 | 8 | 0.033 | 0.01355 | 2.4354 | 4.07 | Yes

5. RESULTS AND DISCUSSIONS
From the investigations it is clear that the surface roughness of the aluminium alloy varies with the cutting parameters used at each run, so it is possible to obtain an improvement in surface roughness by varying the selected cutting parameters.

Effect of Spindle Speed on Surface Roughness
Spindle speed has an impact on surface roughness. We observed its effect by varying its value at two levels: as the value of spindle speed increases, the surface roughness decreases.

Effect of Feed on Surface Roughness
Feed has an impact on surface roughness. We observed its effect by varying its value at two levels: as the value of feed increases, the surface roughness increases.

Effect of Nose Radius on Surface Roughness
Nose radius has an impact on surface roughness. We observed its effect by varying its value at two levels: as the value of nose radius increases, the surface roughness decreases.

Graph between Surface Roughness and Tool Nose Radius

Graph between Surface Roughness and Spindle Speed
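The reduced model and the adequacy test of Sections 4.4 and 4.5 can be cross-checked numerically. A sketch, with the coded (S, F, R) levels taken from Table 1.2 and the tabulated squared deviations from Table 1.6:

```python
# Coded (S, F, R) levels per run, taken from Table 1.2 (D drops out of the model).
runs = [(+1, +1, +1), (-1, +1, -1), (+1, -1, -1), (-1, -1, +1),
        (+1, +1, -1), (-1, +1, +1), (+1, -1, +1), (-1, -1, -1)]

def y_sr(s, f, r):
    """Final model: YSR = 1.324 - 0.191 S + 0.173 R + 0.112 SF."""
    return 1.324 - 0.191 * s + 0.173 * r + 0.112 * s * f

Yp = [y_sr(*run) for run in runs]   # predicted roughness, column Yp of Table 1.6

# Adequacy: variance of adequacy over variance of response, against the F-table.
sq_dev = [0.012, 0.007, 0.009, 0.014, 0.002, 0.019, 0.010, 0.026]
f_dof = 8 - (4 + 1)                 # f = N - (K+1) = 3
s2_ad = sum(sq_dev) / f_dof         # 0.099/3 = 0.033
Fm = s2_ad / 0.01355                # F-ratio of the model
adequate = Fm < 4.07                # tabulated F at (3, 8): model is adequate
```

The computed Yp values agree with Table 1.6 to within rounding, and Fm ≈ 2.44 < 4.07 confirms the adequacy verdict.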


Graph between Surface Roughness and Feed

REFERENCES
[1] Abdulla Shariff, "Handbook of Properties of Engineering Materials and Design Data for Machine Elements", Dhanpat Rai & Sons.
[2] Anil Gupta, Hari Singh, Aman Aggarwal, "Taguchi-fuzzy multi output optimization (MOO) in high speed CNC turning of AISI P-20 tool steel", Expert Systems with Applications 38 (2011), pp. 6822–6828.
[3] ASM Metals Handbook Volume 18 - Friction, Lubrication, and Wear Technology, ASM International Handbook Committee, 1992.
[4] B.L. Juneja, "Fundamentals of Metal Cutting and Machine Tools", New Age International Publishers.
[5] B.S. Raghuwanshi, "A Course in Workshop Technology", Dhanpat Rai & Co.
[6] Chen Lu, "Study on prediction of surface quality in machining process", Journal of Materials Processing Technology 205 (2008), pp. 439–450.
[7] Durmus Karayel, "Prediction and control of surface roughness in CNC lathe using artificial neural network", Journal of Materials Processing Technology 209 (2009), pp. 3125–3137.
[8] E. Daniel Kirby, Joseph C. Chen, "Development of a fuzzy-nets-based surface roughness prediction system in turning operations", Computers & Industrial Engineering 53 (2007), pp. 30–42.
[9] G.K. Narula, K.S. Narula, V.K. Gupta, "Material Science", Tata McGraw-Hill.
[10] Gaurav Bartarya, S.K. Choudhary, "State of the art in hard turning", International Journal of Machine Tools and Manufacture, Volume 53, 2012, pp. 1–14.
[11] Ilhan Asilturk, Harun Akkus, "Determining the effect of cutting parameters on surface roughness in hard turning using the Taguchi method", Measurement 44 (2011), pp. 1697–1704.
[12] Ilhan Asilturk, Mehmet Cunkas, "Modeling and prediction of surface roughness in turning operations using artificial neural network and multiple regression method", Expert Systems with Applications 38 (2011), pp. 5826–5832.
[13] Ilhan Asilturk, Suleyman Neseli, "Multi response optimisation of CNC turning parameters via Taguchi method-based response surface analysis", Measurement 45 (2012), pp. 785–794.
[14] M.N. Islam, Brian Boswell, "An Investigation of Surface Finish in Dry Turning", Proceedings of the World Congress on Engineering 2011, Vol. I, WCE 2011, July 6–8, 2011, London, U.K.
[15] Muammer Nalbant, Hasan Gokkaya, Ihsan Toktas, Gokhan Sur, "The experimental investigation of the effects of uncoated, PVD- and CVD-coated cemented carbide inserts and cutting parameters on surface roughness in CNC turning and its prediction using artificial neural networks", Robotics and Computer-Integrated Manufacturing 25 (2009), pp. 211–223.
[16] R.K. Jain, "Production Technology", Khanna Publishers.
[17] Suleyman Neseli, Suleyman Yaldiz, Erol Turkes, "Optimization of tool geometry parameters for turning operations based on the response surface methodology", Measurement 44 (2011), pp. 580–587.
[18] www.asminternational.org
[19] www.atlasmetals.com.au


Effects of Smart Grid Utilization, Performance, Environmental & Security Issues: A Review

Parminder Pal Singh
Assistant Professor
Deptt of Mechanical Engg.
Bhai Gurdas Institute of Engg. & Tech, Sangrur
pps11may76@gmaill.com

Gagandeep Kaur
Assistant Professor
Deptt of Electrical & Instrumentation Engg.
Thapar University, Patiala
gagandeep@thapar.edu

ABSTRACT
The purpose of this paper is to explain the utilization of the smart grids deployed in the world these days. The paper throws light on the optimized use of electrical resources, focusing on the working of smart grids already installed in some developed countries that are benefiting from them. A smart grid results in the saving and optimal use of power and also contributes to reducing the cost of the valuable electrical supply. But during the study of the entire process of working, installation and utilization at the user end, certain important factors/effects have been observed that require this system to be reconsidered from an environmental and security point of view. Though the smart grid is smart enough to incorporate the observations related to the environment, the radio frequency used to control the entire operation of smart meters and the grid hampers the life cycle of human beings, as it causes some incurable diseases in old people. So it is recommended to use the existing telephone optical cables to control the smart operation. This will reduce the installation cost and save the environment. Developing countries can deploy such smart meters with the already existing fibre-optic cables serving the telephone network.

General Terms
Smart Grid, Radio Waves

Keywords
Distribution Automation (DA), Personal Energy Management (PEM), Advanced Metering Infrastructure (AMI).

1. INTRODUCTION
A smart grid is a power grid equipped with advanced technologies dedicated for the purpose: they capture information about the suppliers and consumers in a highly automated way so that efficiency and reliability can be improved. The production and distribution of electricity become more sustainable and economical as we rely on natural resources, e.g. solar and wind energy [1]. A two-way communication system is formulated to develop continuous connectivity between the consumer and the electric power suppliers (see Figure 1). A radio frequency based system is established for this purpose. With such a highly automated system, even a home which consumes less energy than the electricity produced by the solar installation in the household unit can sell the surplus to another consumer who is consuming more. The two-way communication system takes care of consumption, utilization and cost procurement [2]. The smart grid with smart meters running on a software controlled system hence proves to be more effective in terms of both utility and cost.

This system consists of three technical components: Distribution Automation (DA), Personal Energy Management (PEM) and Advanced Metering Infrastructure (AMI).

The first, the Distribution Automation (DA) system, provides tools for the distribution of power in the network and also maintains its security. It is an economical operation. It guarantees power quality as well as increasing the working efficiency of the power system. The distribution automation system provides solutions to improve power grid monitoring, control, failure management and power balance. By making use of real-time monitoring and intelligent control, it improves the reliability of the system [3]. The speed of the network can be enhanced to a higher level by making use of DA. The devices of Distribution Automation are robust and reliable, and they offer high computing power, due to which they act as a source of planning data. The analysis of the status of devices such as switches, capacitor banks, voltage regulators and transformers in real time is possible in the smart grid due to the presence of DA in the system. These advantages of DA improve fault location and isolation. DA also increases energy efficiency through better capacitor and voltage control and improved asset management [4].

Secondly, Personal Energy Management (PEM) is a critical component of the smart grid. It directly involves the energy consumers in monitoring and controlling energy use. It provides tools to control the peak load uniformly. PEM supports new sources of generation, e.g. solar and wind energy, and provides new methods for consumers to make smart use of electricity in the cheapest way. Thus personal energy management is the future of energy efficiency. PEM motivates the use of the Home Area Network so that the consumer gets directly engaged with the energy management process. The smart grid with its smart meters makes use of ZigBee communication, a low power wireless communication technology that provides a standards based approach for home automation. The integration of smart grid, smart meters and ZigBee communication provides a variety of personal energy management features, including:

1. Consumers can be notified of the cost of units in peak hours and of time-of-use rates with the help of an in-home display. These in-home displays also make the consumer aware of the real energy usage.
2. Commercial and agricultural appliances are controlled through load reduction programs, providing a reliable power system.
3. Programmable thermostats are used for control of air conditioners and heating systems during peak periods [5].


Figure 1. Structure of Smart Grid

Finally, Advanced Metering Infrastructure (AMI) is an architecture for two-way communication between a smart meter with an IP address and the head end systems. It involves the intelligent usage of electricity because the customers are aware of the pricing rates of electricity. This system also indicates when the demand is high or low, and the customers can use their electric appliances accordingly. AMI involves many features to manage the peak loads, such as:

1. Demand Metering, a billing method in which the customer is charged for the normal energy usage plus an additional charge for the peak usage.
2. Time of Use Metering, again a billing method in which the utility varies the price of electricity during different periods of a 24 hour day depending on the OFF-peak and ON-peak hours [6].

2. SMART METERS
The most important requirement for the smart grid is the intelligent smart meter (see Figure 2). It is installed on the consumer side. Smart metering is possible without the smart grid, but the smart grid is built on smart metering. A smart meter is usually an electrical meter that records consumption of electric energy in intervals of an hour or less and communicates that information at least daily back to the utility for monitoring and billing purposes. Smart meters enable two-way communication between the meter and the central system. The smart meter will not only keep track of how much electricity we are using but is also able to control and regulate the usage of electricity [7].

In this paper three relevant case studies are analyzed to develop a more sophisticated model of the smart system. In the first case study, in 2009 Pacific Gas and Electric (PG&E) invested US$ 2.2 billion in the installation of 10 million smart meters in Bakersfield, California, USA. Consumers claimed that individual average bills jumped from about US$ 200 a month to about US$ 500 to US$ 600 a month after they received a smart meter, an increment of three times in the bills of consumers. These observations show that care must be taken in the design of an economical and effective metering system [8].

In the second case study we found an organization in Germany, namely Yello Strom. This organization established the Yello Sparzahler, an online, user-friendly smart electricity meter that uses existing market standards. Using these meters and the Yello website, customers can get information about their energy consumption data and power bill. It provides the smart meter service around the consumer through a broadband connection, with the advantage of making data available faster to the consumer without having to build another network [8].

Lastly, in the third case study, the State Grid Corporation of China (SGCC) began running a project in Shanghai, China in 2010. This project includes nine sub-projects of the power system, namely Smart Substation (SS), Distribution Automation (DA), Fault Restoration Management System (FRMS), Power Quality Monitoring (PQM), Customer Energy Usage Collection System (CEUCS), Energy Storage System (ESS), Renewable Energy Integration (REI), Smart Building/Home (SB/SH) and EV Charging/Discharging Station Operation.

The whole project spreads over a 4,000-square-metre area named the Magic Box, which explains smart grid technology and provides the opportunity for people to experience smart grid technologies. It is an excellent example of smart substations realizing smart monitoring, digital power metering, fault recording, a relay protection network and automatic trip, based on the GOOSE protocol. Visitors can see the smart grid control centre technology, smart grid transmission, IT platforms and visualization displays. They analyze how the smart communities and smart homes of the future will improve people's lives, facilitate the interaction of the customer with the power grid, increase convenience levels and improve the economy of electricity consumption. The two-way interactive display shows the energy saving benefits of optimized energy usage decisions, such as reduced CO2 emissions and energy bill reduction. This "Magic Box", which helps consumers engage with smart grid technologies and understand their valuable role in enabling modern living, has received over 2.6 million visitors. From this we conclude that the Chinese government shows the model set up so that the people, the consumers, become aware
590
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
about the smart grid and its benefits for the consumer itself.
[8]
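The two billing methods described earlier (demand metering and time-of-use metering) can be sketched over the hourly readings a smart meter records. This is a minimal illustration with assumed rates, peak window and readings, not any utility's actual tariff.

```python
# Minimal sketch of the two billing methods: all rates, the peak window
# and the readings below are illustrative assumptions, not a real tariff.

PEAK_HOURS = range(17, 22)   # assume 5 pm-10 pm is on-peak
OFF_PEAK_RATE = 0.10         # $/kWh, assumed
ON_PEAK_RATE = 0.25          # $/kWh, assumed

def demand_bill(total_kwh, peak_kw, energy_rate=0.12, demand_charge=5.0):
    """Demand metering: normal energy usage plus a charge on peak demand."""
    return round(total_kwh * energy_rate + peak_kw * demand_charge, 2)

def time_of_use_bill(hourly_kwh):
    """Time-of-use metering: bill 24 hourly readings at time-varying rates."""
    total = 0.0
    for hour, kwh in enumerate(hourly_kwh):
        rate = ON_PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE
        total += kwh * rate
    return round(total, 2)

# One day of readings: 1 kWh per off-peak hour, 3 kWh per evening-peak hour.
readings = [1.0] * 24
for h in PEAK_HOURS:
    readings[h] = 3.0

print(demand_bill(total_kwh=500, peak_kw=4))  # 500*0.12 + 4*5.0 = 80.0
print(time_of_use_bill(readings))             # 19*0.10 + 5*3.0*0.25 = 5.65
```

Because the meter reports consumption per interval rather than one monthly total, the utility can price each interval separately, which is what makes time-of-use billing possible.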
Fig 2: Block diagram of smart meter
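Smart meters exchange their data over radio links. The relation between a radio wave's frequency f and its wavelength λ is λ = c/f, which a short sketch (plain Python, speed of light rounded to 3×10^8 m/s) can verify for the commonly quoted radio-band limits:

```python
# Wavelength from frequency: wavelength = c / f.
C = 3.0e8  # speed of light in m/s (rounded)

def wavelength_m(freq_hz):
    """Return the wavelength in metres for a frequency in hertz."""
    return C / freq_hz

print(wavelength_m(300e9))  # 300 GHz -> 0.001 m (1 millimetre)
print(wavelength_m(3e3))    # 3 kHz   -> 100000.0 m (100 kilometres)
```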
3. RADIO FREQUENCY WAVES

Radio frequency waves are used for the communication between smart meters, the smart grid and the consumer. These waves have frequencies from 300 GHz down to 3 kHz, with corresponding wavelengths from 1 millimetre to 100 kilometres, and they travel at the speed of light. Various wireless applications are possible only because of radio waves, such as fixed and mobile radio communication, broadcasting, radar and other navigation systems, satellite communication, computer networks and innumerable other applications [9].

4. ENVIRONMENTAL AND HEALTH PROBLEMS
Although radio waves have many advantages, there are also many environmental and health problems related to them, elaborated in the following section [10].
1. Studies show that cows near cell towers have increased cancers, lower milk production, agitation, immune system disorders, more mastitis, miscarriages and birth defects in offspring. Birds with nests near antennas display lower reproductive rates, and chicks are born with birth defects. In simulations of colony collapse disorder, bees disappeared entirely when transmitting cell phones were placed next to their hive.
2. Trees also endure die-back near towers. Whole forests near broadcast antennas in Europe have suffered.
3. Fifteen studies report effects among people living 50 to 1,500 feet from a cell tower, including cancers, immune system effects, fertility problems, heart arrhythmias, miscarriages, sleeplessness, dizziness, concentration difficulties, memory loss, headaches, skin rashes, lowered libido, fatigue and malaise.
4. Some people have implanted medical devices such as deep-brain stimulators for Parkinson's, pacemakers and insulin pumps, or use in-home hospital equipment. The radio frequency interference inherent to smart grids can cause such equipment to malfunction or even stop [10].

5. ECONOMIC PROBLEMS
1. Although called "secure", smart grids can be penetrated through both wired and wireless networks. In August 2009, hackers stole 179,000 Toronto Hydro customers' names, addresses and billing information from their e-billing accounts.
2. Ross Anderson and Shailendra Fuloria at Cambridge University's Computer Laboratory note that hostile government agencies or terrorist organizations could bring whole countries to their knees by interrupting electrical generation. More so than with traditional grids, they stress, smart grids create a new strategic vulnerability, the cyber equivalent of a nuclear attack. Smart grids are also easy to sabotage with simple jamming devices [10].
3. People are complaining of ceiling fans turning on in the middle of the night, speeds spontaneously changing, paddles reversing direction, and circuit boards burning up. A few meters have exploded, and fires have occurred in many smart meters. In New Zealand, firefighters reported 422 fires involving smart meters in 2010 [10, 11].
4. If you work the evening shift and cook dinner at midnight on an electric chulha, your rate could be highest when everyone else's is lowest. Then there are mandatory shut-offs for people who do not pay their utility bills, after which the unfortunate customer has to buy a prepaid, wirelessly enabled electric meter, like a prepaid phone card. Such a system was enacted in South Africa in the 1990s.
5. According to the Wall Street Journal, Palmisano said that a $10 billion investment is needed to start smart grids, which would create 239,000 new jobs, but jobs would also be lost, such as those of hundreds of thousands of meter readers.
6. Federal appropriations for smart grids in 2010 were $11 billion, but some financial analysts say it will take over $900 billion over the next two decades to upgrade high-tension lines, meters, central control facilities and substations. In addition, they say, truly digitizing grids will cost hundreds of billions more, into 2030, because every utility's computer network will need to be upgraded, new renewable-energy sources will need to plug into new access points, and recharging stations and power lines will need to be built.
7. If solar panels are installed as renewable-energy sources, such energy cannot be sold back to the grid without very expensive equipment [11, 12].

6. CONCLUSION

The smart grid can make the power system more efficient and flexible. It maintains load balancing in a country. It monitors consumer consumption and can warn and advise the consumer about the present rate per unit, which varies as consumption varies, so that consumers can manage their bills; overloading can also be taken care of. The smart grid also makes possible the use of renewable energy sources
like solar and wind energy by integrating them with the existing system. However, with the use of the smart grid, many human-health problems have been observed to affect life in developed countries like the U.S.A., where smart grids are already widely used, due to the radio frequency waves on which smart meters work.

REFERENCES
[1] Bowman, M., Debray, S. K., and Peterson, L. L. 1993. Reasoning about naming systems.
[2] Ding, W. and Marchionini, G. 1997. A Study on Video Browsing Strategies. Technical Report. University of Maryland at College Park.
[3] Fröhlich, B. and Plate, J. 2000. The cubic mouse: a new device for three-dimensional input. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
[4] Tavel, P. 2007. Modeling and Simulation Design. AK Peters Ltd.
[5] Sannella, M. J. 1994. Constraint Satisfaction and Debugging for Interactive User Interfaces. Doctoral Thesis. UMI Order No. GAX95-09398, University of Washington.
[6] Forman, G. 2003. An extensive empirical study of feature selection metrics for text classification. J. Mach. Learn. Res. 3 (Mar. 2003), 1289-1305.
[7] Brown, L. D., Hua, H., and Gao, C. 2003. A widget framework for augmented interaction in SCAPE.
[8] Yu, Y. T. and Lau, M. F. 2005. A comparison of MC/DC, MUMCUT and several other coverage criteria for logical decisions. Journal of Systems and Software, in press.
[9] Spector, A. Z. 1989. Achieving application requirements. In Distributed Systems, S. Mullender
RECENT ADVANCES IN FRICTION STIR WELDING FOR FABRICATION OF COMPOSITE MATERIALS
Gurmeet Singh Cheema, Head & Professor, Department of Mechanical Engineering, BGIET, Sangrur, gcheemamand@gmail.com
Prem Sagar, Research Scholar, PTU, Jalandhar, jasujaprem@gmail.com
Vikash, Research Scholar, PTU, Jalandhar, Vikasjangra281@gmail.com
ABSTRACT

Friction stir welding (FSW) is a solid-state joining process, and different composites have been fabricated via friction stir welding. This article reviews the latest developments in friction stir welding processes for the fabrication of different alloy materials with different reinforcement particles. In addition, the microstructure and mechanical properties of the reinforced composites are discussed. Finally, it is concluded that, owing to the versatile nature of FSW, there is still plenty of scope to fabricate other useful materials.

Keywords
FSW, alloy materials, microstructure, mechanical properties.

1. INTRODUCTION
The friction stir welding technique was initiated by TWI, England, especially for materials which were hard to weld by fusion welding. FSW is widely used in the aerospace industry for the welding of materials like aluminum, magnesium and titanium. A schematic diagram of the basic principle of the FSW process is given in Fig. 1. FSW uses a non-consumable tool with a threaded or plain surface, made of hard material. During the welding process an axial force is applied on the tool to pin it into the workpiece material. The key benefits of FSW are summarized in Table 1.

2. Literature Review
Cioffi et al. [1] welded 8 mm thick 2024 Al alloy to a 17% SiC/2124 Al composite to find the effect of lateral offset on the strength and fracture location of the butt joint, using a friction stir welding process with an unthreaded WC-Co tool. They concluded that a diagonal setup is an efficient way to find appropriate welding parameters for a range of lateral offsets, and that the offset influences the fracture location and resistance of the dissimilar friction stir welds. Moreover, a lateral offset of around 1.5 mm into the alloy and a slightly deeper plunge could exceed the resistance of the traditional centered setup by improving the bonding quality in the bottom section of the alloy-composite boundary. Lateral offset could be used as a parameter to improve the resistance not only of dissimilar MMC-alloy joints but also of friction stir welds of other materials, because it changes the amount of deformation that the joint line goes through.

Liu et al. [2] studied the influence of welding parameters on the microstructure and mechanical properties of friction stir welded AC4A + 30 vol.% SiC. Defect-free joints were obtained at welding speeds of 25-150 mm/min and a rotational speed of 200 rpm. A decrease in weld strength was reported at high welding speed. The tensile strength of the joints was tested along with their fracture locations. They also found that low welding speed causes a large heat-affected zone, which makes the material softer.

Sirahbizu et al. [3] investigated friction stir butt welding of Al + 12%Si/10 wt% TiC in situ composites. They concluded that tool rotational speed and tool type are the most influential welding variables for developing sound FSW joints of Al + 12%Si/10 wt% TiC in situ composites. Accordingly, under the same experimental conditions, a tool rotational speed of 710 rpm and a tool shoulder diameter of 20 mm were the preferable welding parameters for better UTS, percentage elongation and micro-hardness of the butt joints. The developed multiple regression equations satisfactorily predicted the influence of the input variables. The observed errors in the test cases were 0.52-10.41% for UTS, 7.9-9.89% for percentage elongation and 6.99-10.12% for micro-hardness. The optimality test results also exhibited 0.07-2.98% error. It was concluded that the adopted optimization technique is adequate for the current experimental conditions.

L. Dumpala and D. Lokanadham [4] elaborated the effect of friction stir welding on the size and distribution of reinforcement particles in a composite. They also provided the idea of converting a conventional milling machine to a CNC-operated milling machine, and determined an approximate analytical technique for the calculation of welding flow in three dimensions, based on the viscous flow of an incompressible fluid induced by a solid rotating disk. The computed velocity fields for the welding of an aluminum alloy, a steel and a titanium alloy are compared with those obtained from a well-tested and comprehensive numerical model. They also presented an improved non-dimensional correlation to estimate the peak temperature, and an analytical method to estimate torque. The proposed correlation for the peak temperature is tested against experimental data for different weld pitches for three aluminum alloys, and the computed torque values are tested against corresponding measurements for various tool rotational speeds. The hardness in the TMAZ has also been correlated with the chemical composition of aluminum alloys. Rice husk ash nanoparticles at 2%, 4% and 8% weight fraction were used for the fabrication of A356.2 alloy composites, and a scanning electron microscope equipped with an energy dispersive X-ray analyzer was used for microstructural characterization and to confirm the presence of silicon particles in the composites. Finally, they concluded that as the percentage of RHA particles increases, the density of the composites decreases and a slight increase in hardness is observed.

Jeon et al. [5] prepared a lap joint of metal matrix composites, i.e. graphite and aluminum, via the friction stir spot welding technique. Graphite grade 230u and 3 mm Aluminium 5052-H32 alloy sheet were used. Prior to FSSW, the graphite reinforcement was prepared in the form of a graphite/water colloid (25 wt.% graphite concentration) and applied on the surface of the upper sheet, because the fine-size reinforcement, i.e. the graphite, can be safely stirred into the matrix while the risk that the graphite is swept and dispersed into the
atmosphere by a rapidly rotating tool is minimized. They considered input parameters such as rotation speed, tool depth, shoulder diameter, pin diameter and pin length to study their effect on micro-hardness and shear tests. Finally, Raman spectra and SEM tests were conducted to confirm the successful distribution of particulates. Al1050-H14 joints were also fabricated to draw a comparison.

R. L. Suvarna and A. Kumar [6] suggested the optimum conditions for the fabrication of a Cu-Al2O3 composite via FSW. The Taguchi technique was applied to optimize the FSW input parameters (volume percentage of reinforcement particles, tool tilt angle and concave angle of the shoulder) with respect to output parameters such as microstructural properties, ultimate tensile strength, yield strength, percentage elongation, hardness and impact toughness. Scanning electron microscopy was used to study the microstructure, fracture morphology and nature of fracture. From the microstructure and fracture features they concluded that grain refinement occurs and that the tensile strength of the composite increases with increasing volume fraction of the particulates. The findings outline that a volume fraction of 12%, a tilt angle of 2° and a concave angle of 4° give the optimum condition for UTS, YS, impact toughness and micro-hardness of the composite.

Kalaiselvan et al. [7] divided the microstructure of friction stir welded AA6061/B4C AMC into four zones: (i) parent composite, (ii) heat-affected zone, (iii) thermomechanically affected zone and (iv) weld zone. It was difficult to differentiate the HAZ from the parent composite. The TMAZ showed a parallel band-like distribution of B4C particles and elongated grains. The weld zone was characterized by a homogeneous distribution of B4C particles. The hardness of the weld zone was higher than that of the parent composite, and the tensile strength of the welded joint was comparable to the strength of the parent composite under the experimental conditions. However, FSW reduced the ductility of the joints, and the fracture mode changed from ductile to brittle subsequent to FSW.

D. R. Ni et al. [8] welded NiTip/6061Al composites by FSP using a novel multi-hole particle presetting mode, which could effectively prevent the agglomeration and loss of particles. The NiTip were homogeneously distributed in the Al matrix without discernible interfacial products. The composite exhibited a phase transformation behavior similar to that of the as-received NiTip. The composite reinforced by the small NiTip showed higher strength than that reinforced by the large NiTip. The aging treatment provided a comparable strengthening effect on the composite. The strengths of both the aging-treated and T6-treated composites reinforced by small particles were higher than those of the as-received T651 BM. SEM fractographs showed that the bonding between the NiTip and the Al matrix was good, without interfacial debonding, under both the as-FSP and T6-treatment conditions.

C. Devanathan and A. S. Babu [9] investigated the effect of process parameters, using ANOVA and the S/N method, on friction stir welding of LM 25 Al alloy with 5% SiC particulate using a TiAlN-coated tool. They considered parameters such as tool rotation speed, traverse speed and axial force to investigate their effect on tensile strength. The findings suggested that no tool wear was observed, and that axial force contributes 35%, traverse speed 25% and spindle speed 12% to increasing the final tensile strength of the composite material.

Prater [10] applied a Taguchi L27 array to characterize the variation of tool wear in FSW of Al359/SiC/20%, 0.635 cm in thickness, considering different levels of input parameters such as rotation speed, traverse speed and length of weld. Tools were mounted on an optics bench and close-up images of the probe were imported into imaging software. Wear of the probe was quantified by comparison with pre-weld images, and the percent tool loss was calculated from the degradation in cross-sectional area. A 1 cm square grid fixed behind the tool was used to convert area measurements from pixels to square centimeters. Tool wear in FSW of MMCs was observed to be circumferentially symmetric. A multiple regression model (MRM) was constructed to estimate the volume loss the tool will experience, and the expression derived from the regression analysis was strongly correlated with experimental data. As with any empirically based predictive model, a more definitive assessment can be obtained by testing the model on a validation set comprised of cases separate from those in the original data set; the predicted and observed wear values for the validation set are closely aligned. Based on the multivariate regression analysis, tool wear in FSW of MMCs is directly proportional to rotation rate and distance welded but inversely proportional to traverse speed.

Zhang et al. [11] worked to increase the mechanical and microstructural properties of friction stir welded AA2024-T3 sheets by using backing plates made of pure copper and carbon steel. The findings suggest that these plates provided high temperature at the nugget zone while simultaneously providing a cooling effect in the thermomechanically affected zone and heat-affected zone. They also concluded that grains under the composite backing plates are completely refined compared with those under monolithic plates. A good correlation was established between the microstructure test findings and the mechanical testing observations.

D. W. B. L. Xiao and D. R. N. Z. Y. Ma [12] reported welding tool wear for 6061Al and reinforced Al2O3/6061. The findings suggested that serious welding tool wear occurs, and that both the reinforcement volume fraction and the welding tool affect the aluminum matrix composite. They further concluded that high-reinforcement composites require tool materials with high hardness and toughness; with a low reinforcement volume fraction, however, wear of the tools was significantly reduced, and sound joints could be achieved at high welding speed for the AMCs when hard materials such as Ferro-Titanit alloy, cermet and WC/Co were used as welding tools.

Thangarasu et al. [13] synthesized AA6082/TiC AMCs using FSP and analyzed the effect of TiC particles and their volume fraction on microstructure, mechanical properties and sliding wear behaviour. They concluded that the volume fraction of TiC particles influenced the area of the composite: the area of the FSP zone was observed to be 65 mm2 at 0 vol.% and 34 mm2 at 24 vol.%. They further concluded that both microhardness and UTS increase when the volume fraction of TiC particles is increased. The microhardness was found to be 62 HV at 0 vol.% and 149 HV at 24 vol.%, and the UTS was estimated to be 222 MPa at 0 vol.% and 382 MPa at 24 vol.%. TiC particles also influenced the morphology of the fracture surface. The increased content of TiC particles increased the stiffness and wear resistance of the matrix and reduced the formation of voids along with the wear rate. The wear rate was found to be 693×10-5 mm3/m at 0 vol.% and 303×10-5 mm3/m at 24 vol.%. The increased volume fraction of TiC particles altered the wear mode from adhesion to abrasive.
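Prater's reported proportionality (tool wear rising with rotation rate and distance welded, falling with traverse speed) can be sketched as a least-squares multiple regression. The data below are synthetic values invented for illustration, not measurements from the cited study.

```python
# Least-squares fit of tool volume loss against FSW process parameters,
# mirroring the trend reported by Prater [10]: wear rises with rotation
# rate and distance welded and falls with traverse speed.
# All numbers here are synthetic illustrations, not measured data.
import numpy as np

# columns: rotation rate (rpm), distance welded (cm), traverse speed (cm/min)
X = np.array([
    [1000, 30, 10],
    [1200, 30, 10],
    [1000, 60, 10],
    [1000, 30, 20],
    [1500, 90, 15],
], dtype=float)

# synthetic wear generated from an assumed linear law:
# wear = 0.002*rpm + 0.05*distance - 0.1*speed
y = 0.002 * X[:, 0] + 0.05 * X[:, 1] - 0.1 * X[:, 2]

# add an intercept column and solve by linear least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print(coef.round(4))  # recovers roughly [0, 0.002, 0.05, -0.1]
```

On real data the coefficients would come from measured wear values, and the signs of the fit would confirm (or refute) the reported proportionalities; a held-out validation set, as Prater uses, guards against overfitting.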
Sathiskumar et al. [14] used a variety of ceramic particles, such as SiC, TiC, B4C, WC and Al2O3, for the fabrication of copper surface composites using FSP. Empirical relationships incorporating the FSP parameters were developed to predict properties of the copper surface composites such as the area of the surface composite, microhardness and wear rate. FSP process parameters such as tool rotational speed, traverse speed and groove width influenced the area of the surface composite, and all of these process parameters along with the type of ceramic particle influenced its microhardness and wear rate. Higher tool rotational speed, lower traverse speed and minimum groove width yielded a larger surface composite area. Higher tool rotational speed and lower traverse speed produced a fine distribution of ceramic particles in the surface composite, while the groove width and the type of ceramic particle did not influence the particle distribution significantly. Lower tool rotational speed, higher traverse speed, maximum groove width and B4C ceramic particles resulted in higher microhardness and a lower wear rate of the surface composite.

Thangarasu et al. [15] produced AA6082/TiC aluminium matrix composites by friction stir processing using a high-carbon high-chromium tool with a threaded profile. They studied the effect of increasing the traverse speed from 40 to 80 mm/min on the microstructure and mechanical properties of the composite. Their results outline that traverse speed influences the area of the surface composite inversely, and grain size, hardness and TiC dispersion directly.

Salehi et al. [16] investigated the effect of SiC particles on a functionally graded 6061 aluminum plate using different tools with pin lengths of 6 mm and 3.2 mm. Microstructural observations indicated a proper distribution of SiC nanoparticles in the Al 6061 matrix. The composition of the FG sample changed from 18 to 0 wt% SiC along the five layers, of which layers I, III and V showed a constant concentration of SiC nanoparticles. The highest microhardness value, 160 HV, was achieved in the FG sample, which is 3.2 times higher than that of the base metal.

Sabbaghian et al. [17] studied the effect of TiC nanoparticles on the mechanical properties and microstructure of a pure Cu matrix composite fabricated via friction stir processing, carried out at a traverse speed of 50 mm/min and a rotational speed of 1000 rpm. Optical and scanning electron microscope observations revealed that FSP produced a fine-grained microstructure with a homogeneous distribution of particles on the surface. Their work shows that increasing the TiC reinforcement resulted in higher hardness and wear resistance in the stir zone.

Khodabakhshi et al. [18] used different volume fractions (3%, 5% and 6%) of TiO2 nanoparticles with an average size of 30 nm for friction stir processing of an Al-Mg alloy. Grooves 4 mm deep and 1.2 mm wide were machined in commercial AA5052-H32 sheets of 5 mm thickness. Fine grains of 2-3 µm were reported in the metal matrix composite after adding TiO2, compared with 60 µm previously. They then annealed the nanocomposites at 300-500 °C in air for 1-5 h. Annealing increased the ductility, ultimate strength and percentage elongation and reduced the yield strength without affecting the tensile strength; however, they noted that annealing beyond 500 °C gave abnormal results. Finally, the mechanical property results were correlated with microstructure analysis by SEM and TEM.

Table 1. Key benefits of FSW

Metallurgical benefits                        Environmental benefits
Solid phase process.                          Consumable materials saving.
Low distortion of work piece.                 Eliminates grinding wastes.
No loss of alloying elements.                 No surface cleaning required.
Good dimensional stability & repeatability.
Absence of cracking.
Fine microstructure.
Fig 1: Basic principle of FSW
3. CONCLUSION

The following conclusions are drawn:
1) Increasing the amount of reinforcement particles increases the tensile strength but simultaneously makes the joint more brittle.
2) Almost all of the fabricated composites led to defect-free welds.
3) Tool geometry and tool rotation influenced the mechanical and microstructural characteristics.
4) Tools made of tool steel exhibit serious wear.
5) Elastic modulus and specific modulus can also be taken into account.

REFERENCES
[1] F. Cioffi, J. Ibáñez, R. Fernández, and G. González-Doncel, "The effect of lateral off-set on the tensile strength and fracture of dissimilar friction stir welds, 2024 Al alloy and 17% SiC/2124 Al composite," J. Mater., vol. 65, pp. 438–446, 2015.
[2] H. Liu, Y. Hu, Y. Zhao, and H. Fujii, "Microstructure and mechanical properties of friction stir welded AC4A + 30 vol.% SiCp composite," J. Mater., vol. 65, pp. 395–400, 2015.
[3] B. Sirahbizu, D. Venkateswarlu, M. M. Mahapatra, P. K. Jha, and N. R. Mandal, "On friction stir butt welding of Al + 12Si/10 wt% TiC in situ composite," Mater. Des., vol. 54, pp. 1019–1027, 2014.
[4] L. Dumpala and D. Lokanadham, "Low cost friction stir welding of aluminium nanocomposite - a review," Procedia Mater. Sci., vol. 6, pp. 1761–1769, 2014.
[5] C. Jeon, Y. Jeong, S. Hong, T. Hasan, H. N. Tien, S. Hur, and Y. Kwon, "Mechanical properties of graphite/aluminum metal matrix composite joints by friction stir spot welding," vol. 28, no. 2, pp. 499–504, 2014.
[6] R. L. Suvarna and A. Kumar, "Influence of Al2O3 particles on the microstructure and mechanical properties of copper surface composites fabricated by friction stir processing," Def. Technol., pp. 1–9, 2014.
[7] K. Kalaiselvan, I. Dinaharan, and N. Murugan, "Characterization of friction stir welded boron carbide particulate reinforced AA6061 aluminum alloy stir cast composite," J. Mater., vol. 55, pp. 176–182, 2014.
[8] D. R. Ni, J. J. Wang, Z. N. Zhou, and Z. Y. Ma, "Fabrication and mechanical properties of bulk NiTip/Al composites prepared by friction stir processing," vol. 586, pp. 368–374, 2014.
[9] C. Devanathan and A. S. Babu, "Friction stir welding of metal matrix composite using coated tool," vol. 6, pp. 1470–1475, 2014.
[10] T. Prater, "Friction stir welding of metal matrix composites for use in aerospace structures," Acta Astronaut., vol. 93, pp. 366–373, 2014.
[11] Z. H. Zhang, W. Y. Li, Y. Feng, J. L. Li, and Y. J. Chao, "Improving mechanical properties of friction stir welded AA2024-T3 joints by using a composite backplate," vol. 598, pp. 312–318, 2014.
[12] D. W. B. L. Xiao and D. R. N. Z. Y. Ma, "Friction stir welding of discontinuously reinforced aluminum matrix composites: a review," vol. 27, no. 5, pp. 816–824, 2014.
[13] A. Thangarasu, N. Murugan, I. Dinaharan, and S. J. Vijay, "Synthesis and characterization of titanium carbide particulate reinforced AA6082 aluminium alloy composites via friction stir processing," Arch. Civ. Mech. Eng., pp. 1–11, 2014.
[14] R. Sathiskumar, N. Murugan, I. Dinaharan, and S. J. Vijay, "Prediction of mechanical and wear properties of copper surface composites fabricated using friction stir processing," J. Mater., vol. 55, pp. 224–234, 2014.
[15] A. Thangarasu, N. Murugan, and I. Dinaharan, "Influence of transverse speed on microstructural and mechanical properties of AA6082-TiC surface composite fabricated by friction stir processing," vol. 5, pp. 2115–2121, 2014.
[16] M. Salehi, H. Farnoush, and J. Aghazadeh, "Fabrication and characterization of functionally graded Al–SiC nanocomposite by using a novel multistep friction stir processing," J. Mater., vol. 63, pp. 419–426, 2014.
[17] M. Sabbaghian, M. Shamanian, H. R. Akramifard, and M. Esmailzadeh, "Effect of friction stir processing on the microstructure and mechanical properties of Cu–TiC composite," Ceram. Int., vol. 40, no. 8, pp. 12969–12976, 2014.
[18] F. Khodabakhshi, A. Simchi, A. H. Kokabi, A. P. Gerlich, and M. Nosko, "Effects of post-annealing on the microstructure and mechanical properties of friction stir processed Al–Mg–TiO2 nanocomposites," J. Mater., vol. 63, pp. 30–41, 2014.
RECENT DEVELOPMENT IN ALUMINIUM ALLOYS FOR THE ADVANCE COMPOSITE MATERIAL IN INDUSTRY

Mohan Singh, Research Scholar (Ph.D.), Punjab Technical University, Jalandhar, Punjab, India, msingh1976@yahoo.com
Balwinder Singh Sidhu, Department of Mechanical Engineering, PTU Campus, GZSCET, Bhatinda, Punjab, India, drbwssidhu07@gmail.com
ABSTRACT
Modern trends in the research and development of new aluminium alloys are characterized in the present work. Although conventional wrought and cast Al-based alloys show good specific strength compared to steels or Ti-based alloys, there is still potential for significant improvement of their performance through the application of new alloying elements, mainly transition metals, and uncommon processing routes. In this way, qualitatively new materials with ultra-high strength and excellent thermal stability can be developed. This paper reviews recent developments in aluminium alloys to improve formability and surface quality in both 5000 and 6000 series alloys, and the bake-hardening response of 6000 series alloys. A new approach to improving the thermal stability of Al-based alloys is alloying with transition metals (TM) having low diffusion coefficients in solid aluminium. The slow diffusivity of these elements retards structural transformations at elevated temperatures, such as grain growth and coarsening of intermetallic phases, which are the reasons for strength and hardness reduction.

Keywords: Aluminium alloy, High-strength, Thermal stability, Composite tool, Microstructure, Scanning Electron Microscope

1. INTRODUCTION

A Metal Matrix Composite (MMC) is an engineered combination of a metal (matrix) and hard particles (reinforcement) designed for tailored properties. MMCs have very light weight, high strength and stiffness, and exhibit greater resistance to corrosion, oxidation, and wear. Fatigue resistance is an especially important property of Al-MMCs, essential for automotive applications. These properties are not achievable with lightweight monolithic titanium, magnesium, and aluminium alloys. Particulate metal matrix composites have nearly isotropic properties compared to long-fibre reinforced composites, but the mechanical behaviour of the composite depends on the matrix composition, the size and weight fraction of the reinforcement, and the method used to manufacture the composite. The distribution of the reinforcement particles in the matrix alloy is influenced by several factors, such as the rheological behaviour of the matrix melt, the particle incorporation method, and the interaction of the particles and the matrix before, during, and after mixing. Non-homogeneous particle distribution is one of the greatest problems in the casting of metal matrix composites. Nai and Gupta reported that the average coefficient of thermal expansion of the high-SiCp end was reduced compared to that of the low-SiCp end. Hashim et al. reported that the distribution of the reinforcement material in the matrix must be uniform and that the wettability or bonding between these substances should be optimized. Aluminium-silicon carbide metal matrix composites have low density and light weight, high-temperature strength, hardness and stiffness, and high fatigue strength and wear resistance in comparison to monolithic materials. Aluminium alloys with discontinuous ceramic reinforcement are rapidly replacing conventional materials in various automotive and aerospace industries. Amongst the various processing routes, stir casting is one of the most promising liquid metallurgy techniques used to fabricate composites: the process is simple, flexible, and applicable to large-quantity production, and liquid metallurgy is the most economical of all the available techniques for producing MMCs. Aluminium alloy-based composites containing 10 wt% alumina (size range: 150-225 mm) were prepared by the liquid metallurgy technique using the vortex method. ZnO whiskers at 25 vol% reinforcing Al-matrix composites were fabricated by a squeeze casting process, and quartz-silicon dioxide particulate reinforced LM6 alloy matrix composites were fabricated by a carbon dioxide sand moulding process. Various researchers have used the conventional stir casting technique for producing MMCs, but applied research is still needed for successful utilization of the process for manufacturing MMCs.

Recently, development effort to apply wrought aluminium has become more active than applying aluminium castings. Forged wheels have been used where the loading conditions are more extreme and where higher mechanical properties are required. Wrought aluminium is also finding applications in heat shields, bumper reinforcements, air bag housings, pneumatic systems, sumps, seat frames, and side impact panels, to mention but a few. Aluminium alloys have also found extensive application in heat exchangers. Until 1970, automotive radiators and heaters were constructed from copper and brass using soldered joints. The oil crisis in 1974 triggered a redesign to lighter-weight structures and heralded the use of aluminium. The market share of aluminium has grown steadily over the last 25 years, and it is now the material of choice in the automotive heat exchanger industry. Modern, high-performance automobiles have many individual heat exchangers, e.g. for engine and transmission cooling, charge air coolers (CACs), and climate control. One obvious and significant difference between aluminium and steel is the outstanding bare-metal corrosion resistance of the 5xxx and 6xxx aluminium materials. Increasingly large amounts of steel are supplied zinc-coated to achieve acceptable paint durability; this is not necessary for aluminium. However, the aluminium coil or sheet can be supplied with a range of pre-treatment and primer layers which can improve formability, surface quality


and may eliminate the need for E-coating. There is a wide range of aluminium materials and surface qualities that can be chosen, and growing design and process experience is enabling the aluminium industry to help the customer specify the right material for the application.

In this study, stir casting is accepted as a particularly promising route that can currently be practised commercially. Its advantages lie in its simplicity, flexibility, and applicability to large-quantity production. It is also attractive because, in principle, it allows a conventional metal processing route to be used and hence minimizes the final cost of the product. This liquid metallurgy technique is the most economical of all the available routes for metal matrix composite production and allows very large components to be fabricated. The cost of preparing composite materials using a casting method is about one-third to one-half that of competitive methods, and for high-volume production it is projected that the cost will fall to one-tenth. In general, the solidification synthesis of metal matrix composites involves producing a melt of the selected matrix material followed by the introduction of a reinforcement material into the melt. To obtain a suitable dispersion, the stir casting method is used, and the solidification of the melt containing suspended SiC particles is carried out under selected conditions to obtain the desired distribution.

From the past review it is found that a number of research works on the wear behaviour of MMCs have been published, but only a few works relating to the influence of weight fraction on mechanical properties such as tensile strength, hardness, impact strength, and percentage elongation have been reported. In this study, different weight fractions of silicon carbide particulates are added to an aluminium matrix to fabricate Al/SiC metal matrix composites. Different samples have been fabricated by melt-stir casting, and their microstructure, hardness, tensile strength, and impact strength are studied. The influences of the reinforced particulate size (220 mesh, 300 mesh, 400 mesh) and weight fraction (5%, 10%, 15%, 20%) on mechanical properties such as proportionality limit (MPa), tensile strength at the upper yield point (MPa), tensile strength at the lower yield point (MPa), ultimate tensile strength (MPa), breaking strength (MPa), % elongation, % reduction in area, hardness (HRB), density (g/cc), and impact strength (N·m) are investigated.

2. FABRICATION OF Al/SiC METAL MATRIX COMPOSITES

Silicon carbide (SiC) reinforcement particles of average particle size 220 mesh, 300 mesh, and 400 mesh respectively are used for casting of Al/SiC-MMCs by the melt-stir technique. Table 1 presents the chemical composition of the commercially available Al matrix used for manufacturing the MMC. Round bars of different dimensions were cast with 5 vol%, 10 vol%, 15 vol%, and 20 vol% of reinforcement particles of size 220 mesh, 300 mesh, and 400 mesh respectively.

Experiments were carried out to study the effect of settling of the reinforced particulates on the solidification microstructure and mechanical properties of the cast MMC. In the present study, commercially available aluminium (AA6063) is used as the matrix, reinforced with silicon carbide (SiC) particulates. Melting was carried out in a clay-graphite crucible placed inside a resistance furnace; an induction resistance furnace with a temperature regulator cum indicator is used for melting of the Al/SiC-MMCs. Figure 1 (a) and (b) show the induction resistance furnace and the temperature regulator cum indicator, respectively, and the designed and developed stirring setup is shown in Figure 1 (c). The aluminium alloy (Al 6063) was first preheated at 450°C for 2 hours before melting, and the SiC particulates were preheated at 1100°C for 1 hour 30 minutes to improve their wetting properties by removing the absorbed hydroxide and other gases. The furnace temperature was first raised above the melting temperature, to 750°C, to melt the matrix completely, and was then cooled down to just below the melting temperature to keep the slurry in a semi-solid state. At this stage the preheated SiC particles were added and mixed mechanically. The composite slurry was then reheated to a fully liquid state and mechanical mixing was carried out for 20 min at an average stirring speed of 200 rpm. In the final stage of mixing, the furnace temperature was controlled within 760 ± 10°C and the temperature was maintained at 740°C. Moulds (size 40 mm diameter × 170 mm long) made of IS-1079 steel sheet of 3.15 mm thickness were preheated to 350°C for 2 hours before pouring the molten Al/SiC-MMC. Figure 2 (a) shows the prepared permanent mould made of steel sheet used for casting the 40 mm diameter × 170 mm long bar, Figure 2 (b) shows manual mixing again just before pouring, and Figure 2 (c) shows the pouring of the mixture of molten Al and SiC particles. Fabrication of the composite was then completed by gravity casting. A similar process was followed for preparing the specimens of the other mesh sizes and weight fractions.

3. RESULTS AND DISCUSSION

Various experiments were conducted on the fabricated MMC samples by varying the weight fraction of SiC (10%, 15%, 20%) and the size of the SiC particles (220 mesh, 300 mesh, 400 mesh) to analyze the casting performance characteristics of the Al/SiC-MMCs.

4. MICROSTRUCTURE

Metallographic samples were sectioned from the cylindrical cast bars. A 0.5% HF solution was used to etch the samples wherever required. To see the difference in the distribution of SiC particles in the aluminium matrix, the microstructures of the samples were examined on an inverted metallurgical microscope (Make: Nikon, range X50 to X1500). Figure 3 shows micrographs of the Al/SiC-MMC samples for the different sizes (220 mesh, 300 mesh, 400 mesh) and weight fractions (5%, 10%, 15%, 20%) of SiC particles. The optical micrographs showed a reasonably uniform distribution of SiC particles, and in the Al matrix the SiC particles are clearly labelled.
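For reference, the thermal schedule of Section 2 can be collected into a single table-like structure. The sketch below is illustrative only: the step names and the helper function are mine, while the temperatures, times, and stirring speed restate the paper's stated parameters.

```python
# Stir-casting schedule for Al6063/SiC as described in Section 2.
# The step names and this representation are illustrative, not from the paper.
SCHEDULE = [
    ("preheat matrix (Al 6063)",      {"temp_c": 450,  "hours": 2.0}),
    ("preheat SiC particulates",      {"temp_c": 1100, "hours": 1.5}),
    ("melt matrix completely",        {"temp_c": 750}),
    ("cool to semi-solid, add SiC",   {}),
    ("reheat and stir",               {"rpm": 200, "minutes": 20}),
    ("final mixing",                  {"temp_c": 760, "tol_c": 10}),
    ("preheat mould (IS-1079 steel)", {"temp_c": 350, "hours": 2.0}),
]

def within_final_mix_band(temp_c, target=760, tol=10):
    """True if a furnace reading lies inside the 760 +/- 10 degC band."""
    return abs(temp_c - target) <= tol

print(within_final_mix_band(765))  # True
print(within_final_mix_band(748))  # False
```

Encoding the schedule as data makes it easy to log or validate each furnace reading against the intended band during a run.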


Figure 1. Micrograph of Al/SiC-MMC samples for different sizes and weight fractions of SiC particles

Figure 2. Micrograph of Al/SiC-MMC samples for different sizes and weight fractions of SiC particles

Figure 3. Micrograph of Al/SiC-MMC samples for different sizes and weight fractions of SiC particles

5. TENSILE STRENGTH

During selection of the alloys or temper for the different parts, an optimum of gauge and strength has to be the goal. Higher strength is demanded to reduce the gauge of the materials used and to tolerate higher operating pressures. The tensile test was carried out at room temperature on a Universal Testing Machine (Model UTN-20, Sr. No. 4/79/239, maximum capacity 2000 kg, make Blue Star Ltd.). Figure 4 shows the standard dimensions of the specimen for the tensile test. Test specimens of standard dimensions, as shown in Figure 5, were prepared from Al/SiC-MMCs for the different sizes (220 mesh, 300 mesh, 400 mesh) and weight fractions (10%, 15%, 20%) of SiC particles.

Figure 4. Specimens for Tensile Test

The procedure of the tensile test is shown in Figure 4; eight specimens are shown after the test. Graphs were plotted between tensile force (kgf) and extension (mm) for twelve specimens, with the tensile force on the vertical axis and the extension on the horizontal axis. Each specimen passes through the clearly defined stages, i.e. the limit of proportionality, the upper yield value, the lower yield value, the ultimate stress value, and finally the fracture strength value.

6. FORMABILITY

High formability is required since the space under the hood for the heat exchanger is minimized. In this way, the weight of the heat exchanger is reduced and its performance is improved. This means that existing materials have to be improved or new solutions have to be found to fulfil this demand. Heat-exchanger designers therefore have to come up with drastic solutions to optimize the heat exchanger capacity for a limited space. This puts a large demand on the forming characteristics of brazing sheet, and a simple tensile test is no longer a guarantee of predicting the forming behaviour of the material.

7. BRAZEABILITY

The term brazeability has not been well defined in scientific terms. However, it is generally considered a measure of how well the clad layer flows during brazing to form a joint without causing erosion of the underlying core material. The main factors influencing brazeability are the surface condition of the aluminium alloy (oxide thickness and type, and the presence of residual rolling oil), the atmosphere within the brazing furnace, the temper of the brazing sheet, and the alloying elements in the clad and core material.

8. HARDNESS

The Rockwell hardness test was done on a Rockwell hardness tester (Model RAB, Sr. No. SN 4144, make SEU Pvt. Ltd.) as shown in Figure 3 (a). Twelve samples of Al/SiC-MMCs for the different sizes and weight fractions of SiC particles were prepared. Figure 3 (b) and (c) show samples after the test and the hardness value on the dial. The Rockwell hardness values on the HRB scale were taken for all samples and are shown in graphs.
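The force-extension curves of Section 5 convert to engineering stress and strain in the usual way. A minimal sketch follows; the specimen diameter, gauge length, and load used in the example are placeholders, not the paper's actual gauge geometry or measured data.

```python
import math

G = 9.81  # m/s^2, to convert kgf to newtons (approximate)

def engineering_stress_mpa(force_kgf, diameter_mm):
    """Engineering stress (MPa) from tensile force in kgf on a round specimen."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return force_kgf * G / area_mm2  # N / mm^2 == MPa

def engineering_strain(extension_mm, gauge_length_mm):
    """Engineering strain from crosshead extension."""
    return extension_mm / gauge_length_mm

# Example with assumed values: 1000 kgf on a 12.5 mm diameter gauge section.
print(round(engineering_stress_mpa(1000, 12.5), 1))  # 79.9 MPa
print(engineering_strain(1.0, 50.0))                 # 0.02
```

The same conversion applied at the marked points of a recorded curve yields the proportionality limit, upper/lower yield points, ultimate tensile strength, and breaking strength reported in the result graphs.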


9. DENSITY

The densities of the twelve samples of Al/SiC-MMCs for the different sizes and weight fractions of SiC particles were measured using the Archimedean principle. Standard blocks of 15 × 15 × 10 mm were made as sample pieces, and a setup was designed and fabricated for the density measurement. A steel rod of 2 mm diameter is bent into a U shape; one end of the rod is brazed to a rectangular steel sheet of 3 mm thickness and the other end is free. The whole setup is placed on the pan of an electronic weighing machine having a least count (LC) of 0.1 mg. Distilled water was filled up to a mark in a standard 100 ml beaker, which was placed on a wooden slab so as to be free of the electronic balance. Sample pieces were freely suspended with a piece of thread from the upper end of the steel rod. First the weight of the sample in air (w1) is measured; then the same sample is immersed in distilled water and its weight (w2) is recorded. The density (in g/cc, taking the density of water as 1 g/cc) was calculated using the equation Density = w1/(w1 - w2). Results are shown in graphs.

10. IMPACT STRENGTH

The impact test was carried out on an Izod impact testing machine and the results were recorded in a table. According to the size and weight fraction of SiC particles, twelve Al/SiC-MMC specimens were prepared with a square cross-section of size 10 × 10 × 75 mm with a single V-notch, as shown in Figure 4. The V-notch angle is 45° with a depth of 2 mm. Figure 4 shows the specimens of Al/SiC-MMCs after the Izod test.

11. CORROSION

The standard test for corrosion is the saltwater acetic acid test (SWAAT), which is aimed at reproducing lifetime performance. The different automotive heat exchangers (radiator, charge air cooler, evaporator, oil cooler) are subjected to different corrosion environments. This means that for every application the right alloy has to be selected to obtain maximum corrosion resistance, and that every application has to be specifically tested under conditions that simulate real-life exposure.

12. RESULT GRAPHS

The effect of the size and weight fraction of the SiC particles of the Al/SiC-MMCs on mechanical properties such as proportionality limit (MPa), tensile strength at the upper yield point (MPa), tensile strength at the lower yield point (MPa), ultimate tensile strength (MPa), breaking strength (MPa), % elongation, % reduction in area, hardness (HRB), density (g/cc), and impact strength (N·m) is presented in the graphs of Figures 5 to 14. In these graphs the properties are plotted on the vertical axes and the wt.% of SiC on the horizontal axes.

13. CONCLUSION

The experimental study reveals the following conclusions: (a) Microstructure: Optical micrographs showed a reasonably uniform distribution of SiC particles, in good agreement with earlier work. Homogeneous dispersion of SiC particles in the Al matrix shows an increasing trend in the samples prepared by the stir casting technique. (b) Tensile strength: From the result graphs, the proportionality limit (MPa), tensile strength at the upper yield point (MPa), tensile strength at the lower yield point (MPa), ultimate tensile strength (MPa), and breaking strength (MPa) increase with increasing reinforced particulate size (220 mesh, 300 mesh, 400 mesh) and weight fraction (5%, 10%, 15%, 20%) of SiC particles, while % elongation and % reduction in area decrease with increasing particulate size and weight fraction. (c) Hardness (HRB) and density (g/cc) increase with increasing reinforced particulate size and weight fraction of SiC particles; a maximum hardness of HRB 83 and a maximum density of 2.852 g/cc were obtained at 20% weight fraction of 220 mesh SiC particles. (d) Impact strength (N·m) decreases with increasing reinforced particulate size and increases with increasing weight fraction of SiC particles; a maximum impact strength of 37.01 N·m was obtained at 20% weight fraction of 400 mesh SiC particles.

REFERENCES

[1] Hashim, J., Looney, L. and Hashmi, M.S.J., "Particle Distribution in Metal Matrix Composites, Part I," Journal of Materials Processing Technology, 123: 251-257, 2002.
[2] Nather, S., Brabazon, D. and Looney, L., "Simulation of the Stir Casting Process," Journal of Materials Processing Technology, 143-144: 567-571, 2003.
[3] Nai, S.M.L. and Gupta, M., "Synthesis and Characterization of Free Standing, Bulk Al/SiCp Functionally Gradient Materials: Effects of Different Stirrer Geometries," Materials Research Bulletin, 38: 1573-1589, 2003.
[4] Hashim, J., Looney, L. and Hashmi, M.S.J., "Metal Matrix Composites: Production by the Stir Casting Method," Journal of Materials Processing Technology, 92-93: 1-7, 1999.
[5] Manna, A. and Bhattacharyya, B., "Study on Different Tooling Systems during Turning for Effective Machining of Al/SiC-MMC," The Institution of Engineers (India) Journal-Production, 83: 46-50, 2003.
[6] Allison, J.E. and Cole, G.S., "Metal Matrix Composites in the Automotive Industry: Opportunities and Challenges," Journal of Mechanical Science, 19-24, 1993.
[7] Surappa, M.K., "Microstructure Evolution during Solidification of DRMMCs: State of the Art," Journal of Materials Processing Technology, 630, 1997.
[9] Lloyd, D.J., Lagace, H., Mcleod, A. and Morris, P.L., "Microstructural Aspects of Aluminium Silicon Carbide Particulate Composites Produced by a Casting Method," Materials Science and Engineering, A107: 73-79, 1989.
[10] Guo, Z., Xiong, J., Yang, M. and Li, W., "Microstructure and Properties of Tetrapod-like ZnO Whiskers Reinforced Al Matrix Composite," Journal of Alloys and Compounds, 461: 342-345, 2008.
[11] Sulaiman, S., Sayuti, M. and Samin, R., "Mechanical Properties of the As-Cast Quartz Particulate Reinforced LM6 Alloy Matrix Composites," Journal of Materials Processing Technology, 201: 731-735, 2008.
[12] Zhou, W. and Xu, Z.M., "Casting of SiC Reinforced Metal Matrix Composites," Journal of Materials Processing Technology, 63: 358-363, 1997.
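The two-weighing Archimedean calculation of Section 9 is easy to mechanize, and a textbook rule-of-mixtures estimate gives a quick cross-check on measured densities. In the sketch below the sample weights are invented for illustration, and the Al and SiC densities are nominal handbook values, not quantities from this paper.

```python
def archimedean_density(w1_g, w2_g, rho_water=1.000):
    """Density (g/cc) from weight in air (w1) and weight in water (w2).

    Implements the paper's relation Density = w1 / (w1 - w2), i.e. the
    specific gravity scaled by the density of water (~1 g/cc).
    """
    if w2_g >= w1_g:
        raise ValueError("submerged weight must be below the weight in air")
    return rho_water * w1_g / (w1_g - w2_g)

def rule_of_mixtures_density(wf_sic, rho_al=2.70, rho_sic=3.21):
    """Inverse rule-of-mixtures estimate for an Al/SiC composite (g/cc)."""
    return 1.0 / (wf_sic / rho_sic + (1.0 - wf_sic) / rho_al)

# Illustrative reading: 6.20 g in air, 4.02 g submerged,
# compared with the estimate for a 20 wt% SiC composite.
print(round(archimedean_density(6.20, 4.02), 3))   # 2.844
print(round(rule_of_mixtures_density(0.20), 3))    # 2.789
```

A measured value well away from the rule-of-mixtures band would flag porosity or particle settling in a cast sample.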


Figure 5. Proportionality limit (MPa) vs wt.% of SiC

Figure 6. Tensile strength upper yield point (MPa) vs wt.% of SiC

Figure 7. Tensile strength lower yield point (MPa) vs wt.% of SiC

Figure 8. Ultimate tensile strength (MPa) vs wt.% of SiC

Figure 9. Breaking strength (MPa) vs wt.% of SiC

Figure 10. % Elongation vs wt.% of SiC

Figure 11. % Reduction in area vs wt.% of SiC

Figure 12. Hardness (HRB) vs wt.% of SiC

Figure 13. Density (g/cc) vs wt.% of SiC

Figure 14. Impact strength (N·m) vs wt.% of SiC

Electrical Engineering

THD Reduction in DVR by BFO-Fuzzy Logic

Chirag Kalia
BGIET, Sangrur
Kalia_chirag@yahoo.com

Divesh Kumar
BGIET, Sangrur
Diveshthareja@yahoo.com

ABSTRACT

Modern sensitive, non-linear, and sophisticated loads affect power quality. The Dynamic Voltage Restorer (DVR) provides a fast, flexible, and efficient solution to improve power quality for such distribution networks. Active power, reactive power, voltage variation, flicker, harmonics, and the electrical behaviour of switching operations are the major sources affecting power quality. The intent of this paper is to demonstrate the improvements obtained with a DVR in a power system network using MATLAB/Simulink. An overview of the DVR, its functions, configurations, components, and control strategies is given, and simulation results are presented to illustrate the performance of the DVR in terms of Total Harmonic Distortion (THD). The results clearly show the performance of the DVR in improving the THD level.

Keywords
Dynamic Voltage Restorer (DVR), PI controller, Power Quality, Pulse Width Modulation (PWM), Total Harmonic Distortion (THD).

1. INTRODUCTION

Power quality, or simply the usability of electric power, is of vital concern to modern life. Both current and frequency rarely cause problems for end users, because electric current is dictated by the load and the utility, to maintain the stability of the grid, controls the frequency of AC power very tightly. Due to various pieces of equipment or abnormal conditions in the network, the quality of the power changes, making it less suitable for further application. Earlier, the prime focus for power system reliability was on the generation and transmission system, but nowadays the distribution system receives more attention, because 90% of average customer interruptions occur in the distribution network and cause huge financial losses. As a result, voltage quality represents the lone rogue element; indeed, statistics on so-called power quality problems show that over 95% of them are, in fact, voltage problems. These include voltage levels that are too high or too low, voltage sags (transient drops in voltage), and power interruptions (absence of voltage). The Dynamic Voltage Restorer is a device connected in series with the line: a power quality device that can protect sensitive loads against disturbances, i.e. voltage sags and swells related to remote system faults. The VSC must be controlled correctly to inject the required current (in shunt connection) or voltage (in series connection) into the system in order to compensate for a voltage dip, since a number of sensitive loads can shut down because of a dip or other disturbances. The speed of reaction of the device is an important factor for successful compensation. The combination of the above two devices gives a device known as the UPQC.

1.1 DYNAMIC VOLTAGE RESTORER (DVR)

This is a series-connected device that has the same structure as an SSSC, shown in Figure 1. The main purpose of this device is to protect sensitive loads from sags/swells and interruptions on the supply side. This is accomplished by rapid series voltage injection to compensate for the drop/rise in the supply voltage. Since this is a series device, it can also be used as a series active filter. Even though this device has the same structure as an SSSC, the operating principles of the two devices differ significantly. Another reason for its adoption is that the DVR costs less compared to a UPS: not only is the UPS costly, it also requires a high level of maintenance, because batteries leak and have to be replaced as often as every five years. Further, the DVR has a higher energy capacity and lower cost compared to the SMES device, and it is smaller in size and costs less compared to the DSTATCOM.

Fig. 1 Structure of Dynamic Voltage Restorer

1.2 EQUATIONS RELATED TO DVR

The load impedance ZTH depends on the fault level of the load bus. When the system voltage (VTH) drops, the DVR injects a series voltage VDVR through the injection transformer so that the desired load voltage magnitude VL can be maintained. The series injected voltage of the DVR can be written as

VDVR = VL + ZTH·IL - VTH

where
VL: the desired load voltage magnitude
ZTH: the load impedance
IL: the load current
VTH: the system voltage during the fault condition
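The injection relation above is straightforward to evaluate with complex phasors. A small sketch follows; the sag depth, source impedance, and load current used here are invented for illustration, not values from the simulated system.

```python
import cmath

def dvr_injection(v_load, z_th, i_load, v_th):
    """Series voltage the DVR must inject: V_DVR = V_L + Z_TH * I_L - V_TH."""
    return v_load + z_th * i_load - v_th

# Illustrative numbers: 1.0 pu desired load voltage, 30% sag on the supply.
v_l = cmath.rect(1.0, 0.0)    # reference phasor V_L at angle 0
v_th = cmath.rect(0.7, 0.0)   # sagged supply voltage (assumed)
z_th = complex(0.01, 0.05)    # assumed source impedance in pu
i_l = cmath.rect(1.0, -0.2)   # assumed lagging load current in pu

v_dvr = dvr_injection(v_l, z_th, i_l, v_th)
print(round(abs(v_dvr), 3))   # injection magnitude in pu (~0.323 here)
```

The magnitude and angle of `v_dvr` are what the controller must synthesize through the injection transformer during the sag.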


The load current IL is given by

IL = [(PL + j·QL) / VL]*

where PL and QL are the active and reactive power demanded by the load. When VL is considered as the reference, the equation can be rewritten as

VDVR∠α = VL∠0 + ZTH·IL∠(β - θ) - VTH∠δ

where α, β, and δ are the angles of VDVR, ZTH, and VTH respectively, and θ is the load power angle. The complex power injection of the DVR can be written as

SDVR = VDVR·IL*

Fig. 2 Equivalent Circuit Diagram of DVR

1.3 DVR SIMULINK MODEL

A model for simulation of the DVR with a PI controller and a fuzzy logic controller is shown in Fig. 3; it runs in discrete mode with a sample time Ts = 0.0001 s. In this model a three-phase star-grounded 50 Hz source is connected to a three-phase three-winding transformer with winding connections star-ground, delta, and delta for windings 1, 2, and 3 respectively. The winding 2 terminals of the transformer are connected to a three-phase series RLC branch through a transmission line (Line Feeder 1), and the winding 3 terminals are connected to another three-phase RLC branch (Line Feeder 2) having an inductance of 0.005 H and a resistance of 0.001 Ohm. The outputs of these RLC branches are directly connected to two different three-phase two-winding transformers. On the upper-side transformer, a three-phase fault with a fault resistance of 0.001 Ohm and a ground resistance of 0.001 Ohm has been applied. A three-phase breaker is also used to test the system in the case of unbalanced loading; its output is directly connected to the secondary windings of three linear two-winding transformers, each with a nominal power of 250e6 VA and a frequency of 50 Hz. Two further breakers select the disturbance: for sag, breaker 1 is open and breaker 2 is closed; for swell, breaker 2 is open and breaker 1 is closed. A control-selection constant chooses the controller: 1 = fuzzy control, 2 = BFO-fuzzy control, 3 = PI-controlled DVR.

Fig. 3 DVR Model

Fig. 4 DVR Model (control subsystem and BFO-fuzzy flowchart: a three-phase sequence analyzer feeds the PI, fuzzy, and BFO-fuzzy controllers, selected through a multiport switch, into a discrete PWM generator driving a universal bridge; the BFO flowchart defines the fuzzy logic rule set, initializes BFO parameters such as the number of bacteria, assigns a random position in the search space to each bacterium, evaluates the objective function, sorts the bacteria in descending order, updates positions by a step size with elimination and dispersal steps, and passes the tuned membership functions to the main fuzzy inference system)
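The BFO flow summarized in Fig. 4 can be sketched as a toy chemotaxis loop. The population size, chemotactic steps, and swim limit below follow Table 1, but the objective function, step size, and everything else are generic illustrations, not the paper's tuned THD-based controller.

```python
import random

def bfo_minimize(objective, dim, n_bacteria=6, n_chemo=6, swim_len=4,
                 step=0.1, seed=1):
    """Toy bacterial-foraging minimizer: tumble, then swim while cost improves."""
    rng = random.Random(seed)
    positions = [[rng.uniform(-1, 1) for _ in range(dim)]
                 for _ in range(n_bacteria)]
    best_pos, best_cost = None, float("inf")
    for _ in range(n_chemo):
        for pos in positions:
            cost = objective(pos)
            # Tumble: pick a random direction for this bacterium.
            direction = [rng.uniform(-1, 1) for _ in range(dim)]
            for _ in range(swim_len):  # swim while the move keeps paying off
                trial = [p + step * d for p, d in zip(pos, direction)]
                trial_cost = objective(trial)
                if trial_cost < cost:
                    pos[:], cost = trial, trial_cost
                else:
                    break
            if cost < best_cost:
                best_pos, best_cost = list(pos), cost
    return best_pos, best_cost

# Sphere function as a stand-in objective (the paper's objective is THD-based).
pos, cost = bfo_minimize(lambda x: sum(v * v for v in x), dim=2)
print(cost < 2.0)
```

In the paper's setup the position vector would encode the fuzzy membership-function parameters and the objective would score the resulting THD, with elimination-dispersal added on top of this chemotaxis loop.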

2. RESULTS

The main Simulink model is shown in the figure above. Three-phase faults are used along with two breakers; for introducing sag in the model, breaker 2 should be closed and breaker 1 should be open. The timing of the fault introduction can also be controlled, and in our experiment it has been taken as 0.1-0.3 s. The total harmonic distortion is measured by FFT analysis using the 'powergui' block, which is placed in the model to set the environment for the SimPowerSystems toolbox in Simulink; the measured THD for the uncompensated case is 1.46%. Initially the PI controller is selected for DVR control. The PI-compensated THD is 0.24%, which means the THD has been reduced by a good amount, but there is still scope for further reduction. For this purpose, fuzzy logic with 49 rules is used next. The output of the fuzzy logic controller, zoomed in the Simulink window to show the remaining distortions, is given in Fig. 5, and the total harmonic distortion measured as in the previous case comes out to be 0.19%, less than the distortion with the PI-controlled DVR. This shows that fuzzy logic control performs better than the PI-controlled DVR.

Fig. 4 Distortions in the PI compensated waveform
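THD figures like those above follow directly from an FFT of the waveform. The sketch below uses a synthetic 50 Hz signal with an injected 5% third harmonic, not the simulated DVR output, and assumes the record holds an integer number of fundamental cycles.

```python
import numpy as np

def thd_percent(signal, fs, f0):
    """THD (%) of `signal` sampled at `fs` Hz with fundamental `f0` Hz.

    Ratio of the RMS of the harmonic bins (2*f0, 3*f0, ...) to the
    fundamental bin of the real FFT.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    bin0 = int(round(f0 * len(signal) / fs))  # fundamental bin index
    harmonics = spectrum[2 * bin0::bin0]      # bins at 2*f0, 3*f0, ...
    return 100.0 * np.sqrt(np.sum(harmonics ** 2)) / spectrum[bin0]

fs, f0 = 10000, 50
t = np.arange(0, 0.2, 1 / fs)                 # 10 fundamental cycles
wave = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)
print(round(thd_percent(wave, fs, f0), 2))    # 5.0
```

The 'powergui' FFT tool performs the equivalent computation over the selected window of the simulated waveform.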
Fig. 3 Uncompensated and compensated output (three-phase voltage with sag in the interval 0.1-0.3 s)

Fig. 5 Distortions in the fuzzy logic compensated output

Fig. 6 THD in the case of the fuzzy-controlled DVR (FFT window: 16 of 35 cycles; Fundamental (50 Hz) = 0.9888, THD = 0.19%)


Table 1. Parameters used for BFO
Dimension of search space: 12
Number of bacteria: 6
Number of chemotactic steps: 6
Limit on the length of a swim: 4

Fig 7. Output of BFO fuzzy logic (voltage sag in the interval 0.1-0.3 s and the compensated output)

3. CONCLUSION
The conclusions drawn from the different aspects of the study are summarized in this section; the scope for further study in this area is also indicated. Nonlinear loads and disturbances due to faults produce harmonic currents that can propagate to other locations in the power system and eventually return to the source. Harmonic current propagation therefore produces harmonic voltages throughout the power system. Mitigation techniques have been proposed and implemented to maintain the harmonic voltages and currents within recommended levels by means of a custom power device, the DVR. A DVR with a PI controller and a fuzzy logic controller has been designed to mitigate the effects of power quality problems during a three-phase fault condition. Since there is always scope for improvement in minimizing distortions, the membership functions of the fuzzy logic controller are optimized by bacterial foraging optimization (BFO), and it has been recorded that the resulting total harmonic distortion is much lower than with the other two techniques. The investigation of DVR installation on a power distribution system, with the main focus on harmonic reduction and voltage regulation performance, has been successfully demonstrated in MATLAB/Simulink. It is found that BFO-fuzzy logic control is more effective than PI control and the plain fuzzy control technique in the operation of the DVR as a custom power device.

REFERENCES
[1] A. Venkata Rajesh, K. Narasimha Rao, "Power Quality Improvement using Repetitive Controlled Dynamic Voltage Restorer for various faults," IJERA, Vol. 2, Issue 1, Jan-Feb 2012, pp. 168-174.
[2] B. Rajani, P. Sangameswara Raju, "Comparison of PI, Fuzzy & Neuro-Fuzzy Controller Based Multi Converter Unified Power Quality Conditioner," IJEET, Vol. 4, Issue 2, March-April 2013, pp. 136-154.
[3] M. Sharanya, B. Basavaraja, M. Sasikala, "An Overview of Dynamic Voltage Restorer for Voltage Profile Improvement," International Journal of Engineering and Advanced Technology (IJEAT), ISSN: 2249-8958, Vol. 2, Issue 2, December 2012.
[4] S. Ezhilarasan, G. Balasubramanian, "Dynamic Voltage Restorer For Voltage Sag Mitigation Using PI With Fuzzy Logic Controller," IJERA, Vol. 3, Issue 1, January-February 2013, pp. 1090-1095.
[5] Seyedreza Aali and Daryoush Nazarpour, "Voltage Quality Improvement with Neural Network-Based Interline Dynamic Voltage Restorer," Journal of Electrical Engineering & Technology, Vol. 6, No. 6, pp. 769-775, 2011.
[6] Mohammad Kiani, Seyed Mohammad Ali Mohammadi, "A bacterial foraging optimization approach for tuning type-2 fuzzy logic controller," Turkish Journal of Electrical Engineering & Computer Sciences, Vol. 21, 2013, pp. 263-273.
[7] Rosli Omar, N. A. Rahim and Marizan Sulaiman, "Dynamic Voltage Restorer Application for Power Quality Improvement in Electrical Distribution System: An Overview," Australian Journal of Basic and Applied Sciences, 5(12): 379-396, 2011.
[8] Sushree Sangita Patnaik and Anup Kumar Panda, "Particle Swarm Optimization and Bacterial Foraging Optimization Techniques for Optimal Current Harmonic Mitigation by Employing Active Power Filter," Applied Computational Intelligence and Soft Computing, Hindawi Publishing Corporation, Volume 2012, Article ID 897127, 10 pages.
[9] Rosli Omar, N. A. Rahim and Marizan Sulaiman, "Dynamic Voltage Restorer Application for Power Quality Improvement in Electrical Distribution System: An Overview," Australian Journal of Basic and Applied Sciences, 5(12): 379-396, 2011.
[10] Sushree Sangita Patnaik and Anup Kumar Panda, "Particle Swarm Optimization and Bacterial Foraging Optimization Techniques for Optimal Current Harmonic Mitigation by Employing Active Power Filter," Applied Computational Intelligence and Soft Computing, Hindawi Publishing Corporation, Volume 2012.
[11] P. Anitha Rani, R. Sivakumar, "Improvement of Power Quality using DVR in Distribution Systems," International Journal of Innovative Research in Science, Engineering and Technology, Vol. 3, Special Issue 1, January 2014.
[12] Javed A. Dhantiya, Amin S. Kharadi, Ashraf M. Patel, "Analysis of Dynamic Voltage Restorer for Voltage Profile Improvement," Journal of Applied Engineering (JOAE), 2(5), Volume-II, Issue-V, May 2014.
[13] A. Suresh, V. Govindaraj, "Power Quality Improvement using Dual Voltage Source Converter Based DVR," International Journal of Engineering Trends and Technology (IJETT), Vol. 7, No. 4, Jan 2014.
[14] Syed Shahnawaz Husain, Jyoti Srivastava, "Enhancing Power Quality with improved Dynamic Voltage Restorer," International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, Vol. 3, Issue 6, June 2014.
[15] B. Lakshmana Nayak, V. Vijaya Kumar, "Single Phase Unified Power Quality Conditioner with Minimum VA requirement," International Journal of Advancements in Research & Technology, Vol. 3, Issue 1, January 2014.
[16] M. Gole, O. B. Nayak, T. S. Sidhu, and M. S. Sachdev, Introduction to PSCAD/EMTDC, Manitoba HVDC Research Centre, March 2000.
[17] E. Acha, V. G. Agelidis, O. Anaya-Lara, T. J. E. Miller, "Power Electronic Control in Electrical Systems," Newnes Power Engineering Series, 2002.
[18] N. Hingorani, "FACTS - flexible AC transmission systems," in Proc. IEE 5th Int. Conf. AC-DC Transmission, London, U.K., 1991, Conf. Pub. 345, pp. 1-7.
Ripple Control in Converter

Husanpreet Singh, Bhai Gurdas Institute of Engineering and Technology, Sangrur
Divesh Kumar, Bhai Gurdas Institute of Engineering and Technology, Sangrur
ABSTRACT
When a sinusoidal voltage is converted into dc, the output voltage waveform contains ripple, an unwanted ac component in the dc output. The ideal value of the ripple factor is zero; a zero ripple factor means a perfectly dc quantity. Undesirable effects of ripple include equipment heating, increased losses, and reduced equipment life. The ripple factor of a single-phase half-wave uncontrolled rectifier is 1.21. Because this value is high, a single-phase full-wave uncontrolled rectifier, with a ripple factor of 0.48, is proposed. In this paper, the ripple factor for the above-mentioned rectifiers with a resistive load is derived mathematically, and the waveforms are obtained using a computer program called the Alternative Transients Program (ATP). The ripple factor is reduced to about 40% of its 1-pulse value by using a 2-pulse rectifier instead of a 1-pulse rectifier. Theoretically, if the number of pulses is increased to infinity, the ripple factor reduces to zero, giving a perfect dc output. Subsequently, 6-pulse, 12-pulse, and 18-pulse rectifiers will be modeled and advanced studies will be carried out. This paper helps in understanding one of the power quality components, namely ripple in the dc output.

Keywords
AC to DC converter, Rectifier, Diode, Ripple Factor, ATP

1 INTRODUCTION
Power quality components can easily be found for a sinusoidal voltage and current of the same frequency. The switches of a rectifier are on for some part of the cycle and off for the other part; therefore, the output waveforms from power electronic devices like rectifiers are periodic but not sinusoidal.
To compare the performance of different types of rectifiers of the same class, an index called the ripple factor is investigated in detail. It is desirable for a rectifier that the voltage and current ripple in the output waveform be as low as possible to maintain a good quality of the output. The ripple factor is the ratio of the root mean square (rms) value of the ac component to the average value of the dc output. The ac component of the output waveform is obtained by subtracting the dc component from the output waveform. Thus, mathematically,
Ripple factor = (rms value of ac component) / (average value of dc component)
= {√(I² - I0²)} / I0   (1)
where I = rms value of the output current and I0 = average value of the output current.

1.1 EFFECTS OF RIPPLE
Ripple results in many unwanted effects in a dc system. Some of the known effects are explained below.
I²R Loss: For a perfectly dc current, the current is distributed uniformly across the cross-section of the conductor. When the current is alternating or has an ac component, the current tends to concentrate closer to the conductor surface; this effect is called the skin effect. The skin effect offers higher resistivity to the ripple current, resulting in higher surface temperature and higher conductor losses.
Stray Heating: Ripple current induces current in neighboring metal structures or piping, per Faraday's law of electromagnetic induction, and causes induction heating. This can be reduced by reducing the ripple current itself, or by placing a low-impedance shield such that the induced current does not overheat the shield conductor, owing to its lower resistance.
Instrumentation and Communication: Induced current in instrumentation cables causes noise; the signal-to-noise ratio should stay within an acceptable level. The induction effect can be reduced by using shielded cable.

1.2 CONTROLLING OF RIPPLE
The ripple in the dc output can be reduced by the following methods:
1. Increasing the pulse number of the rectifier: the higher the number of pulses, the lower the ripple magnitude.
2. Using an output filter: if a capacitor is used across the load and an inductor is used in series with the load, the load current will be smoother and the ripple will be lowered.
The ripple magnitude is thus controlled by the number of pulses. In a single-phase half-wave uncontrolled rectifier, the number of pulses is one; likewise, in a single-phase full-wave uncontrolled rectifier, the number of pulses is two. It is shown in this paper that the ripple factor is 1.21 for a 1-pulse rectifier and 0.48 for a 2-pulse rectifier. To model the rectifiers and to obtain the voltage and current waveforms, a computer program called the Alternative Transients Program (ATP) is used. Subsequently, 6-pulse, 12-pulse, and 18-pulse rectifiers will be modeled and advanced studies will be carried out.

1.3 SINGLE-PHASE HALF-WAVE UNCONTROLLED RECTIFIER
A single-phase half-wave uncontrolled rectifier feeding a resistive load is shown in Figure 1. The diode D conducts during the positive half cycle of the supply voltage Vs and stops conducting during the negative half cycle, forming a 1-pulse rectifier. The source current and the load current are represented by iS and i0 respectively, and V0 is the average dc output voltage.
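Equation (1) above can be applied directly to a sampled current waveform. The sketch below (plain Python; the function and the synthetic half-wave test signal are mine, not part of the ATP study) computes the ripple factor numerically from samples of one period:

```python
import math

def ripple_factor(samples):
    """Ripple factor = rms of ac component / average (dc) value,
    equivalently sqrt((I/I0)^2 - 1) as in equation (1)."""
    n = len(samples)
    i_avg = sum(samples) / n                            # average value I0
    i_rms = math.sqrt(sum(x * x for x in samples) / n)  # total rms value I
    return math.sqrt((i_rms / i_avg) ** 2 - 1.0)

# Half-wave rectified sine over one period:
# i = Im*sin(wt) for 0..pi, and 0 for pi..2*pi
im = 1.0
n = 100000
half_wave = [im * math.sin(2 * math.pi * k / n) if k < n // 2 else 0.0
             for k in range(n)]
print(round(ripple_factor(half_wave), 2))  # close to 1.21
```

The sampled average comes out near Im/π and the rms near Im/2, so the result matches the analytical value derived in the next section.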
Fig 1. Single-phase half-wave uncontrolled rectifier with R load

The instantaneous value of Vs is given by
Vs = Vm sin ωt   (2)
where Vm is the peak value of Vs. Similarly, the instantaneous value of V0 is given by
V0 = Vm sin ωt   {0 to π, 2π to 3π, etc.}   (3)
and the instantaneous value of the voltage across the diode, VD, is given by
VD = Vs - V0 = Vm sin ωt   {π to 2π, 3π to 4π, etc.}   (4)
The input voltage waveform is shown in Figure 2; the output voltage and current waveforms are shown in Figure 3.

Figure 2. Supply voltage waveform

Figure 3. Output voltage and current waveform

The average value I0 of the load current i0(ωt), which is periodic in 2π, is given by
I0 = (1/2π) ∫ i0(ωt) dωt   (5)
After solving the above equation we get
I0 = Vm/(πR) = Im/π   (6)
where Im is the peak value of i0(ωt). For the load current i0(ωt) periodic in 2π, the rms current is given by
I = √[(1/2π) ∫ i0²(ωt) dωt] = Vm/(2R) = Im/2   (7)
It should be noted that the corresponding rms value of the load current for sinusoidal operation is Im/√2. Thus the degree of distortion, the ripple factor, of a single-phase half-wave rectified current waveform can be calculated using equation (1):
Ripple factor = √[{(Im/2)/(Im/π)}² - 1]   (8)
= 1.21   (9)
If the output wave were perfectly dc, the ripple factor would be zero. A ripple factor of 1.21 is unacceptably high for many industrial applications.

1.4 SINGLE-PHASE FULL-WAVE UNCONTROLLED RECTIFIER
A single-phase full-wave uncontrolled rectifier feeding a resistive load is shown in Figure 4. Diodes D1 and D2 conduct during the positive half cycle of the supply voltage Vs, and diodes D3 and D4 conduct during the negative half cycle, forming a 2-pulse rectifier. The source current and the load current are represented by iS and i0 respectively, and V0 is the average output voltage. The output voltage and current are shown in Figure 5.

Figure 4. Single-phase full-wave uncontrolled rectifier with R load
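The two ripple-factor values used throughout this paper, 1.21 for the 1-pulse rectifier and 0.48 for the 2-pulse rectifier, follow directly from the ratios of rms to average current. A quick arithmetic check in plain Python (a sketch, not part of the original ATP study):

```python
import math

# Half-wave (equation 8): Irms = Im/2, I0 = Im/pi  (Im cancels)
rf_half = math.sqrt(((1 / 2) / (1 / math.pi)) ** 2 - 1)

# Full-wave (derived in section 1.4): Irms = Im/sqrt(2), I0 = 2*Im/pi
rf_full = math.sqrt(((1 / math.sqrt(2)) / (2 / math.pi)) ** 2 - 1)

print(round(rf_half, 2), round(rf_full, 2))  # 1.21 0.48
```

Since the peak current Im cancels in both ratios, the ripple factor of an ideal rectifier with resistive load depends only on the waveform shape, not on the load value.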
The instantaneous value of the load current is given by
i0(ωt) = [Vm sin ωt]/R   {0 to π, 2π to 3π, etc.}
+ [Vm sin(ωt - π)]/R   {π to 2π, 3π to 4π, etc.}   (10)
The average value I0 of the load current i0(ωt), which is periodic in 2π, is given by
I0 = (1/2π) ∫ i0(ωt) dωt   (11)
After solving the above equation we get
I0 = 2Vm/(πR) = 2Im/π
It can be seen that the average value of the output current in the case of the full-wave rectifier is twice that of the half-wave rectifier. The rms value of the load current is given by
I = √[(1/2π) ∫ i0²(ωt) dωt]   (12)
= Vm/(√2 R) = Im/√2   (13)

Fig 5. Full-wave rectifier output voltage and current

It should be noted that the corresponding rms value of the load current for the half-wave rectifier is Im/2. Thus the degree of distortion, the ripple factor, of a single-phase full-wave rectified current waveform can be calculated using equation (1):
Ripple factor = √[{(Im/√2)/(2Im/π)}² - 1]   (14)
= 0.48   (15)
A ripple factor of 0.48 is significantly better than that of the half-wave rectifier, whose ripple factor is 1.21.

RESULTS
The variation of the average load current, the rms load current, and the ripple factor of the load current are presented in tabular form.

2. CONCLUSION
The single-phase 2-pulse rectifier offered a better ripple factor than the single-phase 1-pulse rectifier; the ripple factor was improved to 0.48/1.21 ≈ 40% of its previous value. Theoretically, if the number of pulses is increased to infinity, the ripple factor reduces to zero, giving a perfect dc output. Subsequently, 6-pulse, 12-pulse, and 18-pulse rectifiers will be modeled and advanced studies will be carried out. This research will help seniors, graduate students, and design engineers to understand the modeling and working principle of ac to dc converters, i.e. rectifiers.

REFERENCES
[1] M. Mazaheri, V. Scaini, and W. E. Veerkamp, "Cause, Effects, and Mitigation of Ripple From Rectifiers," IEEE Trans. Industry Applications, vol. 39, no. 4, pp. 1187-1192, July/August 2003.
[2] D. A. Paice, Power Electronic Converter Harmonics: Multipulse Methods for Clean Power. New York: IEEE Press, 1996, Chap. 7.
[3] B. Singh, G. Bhuvaneswari, V. Garg, and S. Gairola, "Pulse Multiplication in AC-DC Converters for Harmonic Mitigation in Vector-Controlled Induction Motor Drives," IEEE Trans. Energy Conversion, vol. 21, no. 2, pp. 342-352, June 2006.
[4] M. Ramasubbamma, V. Madhusudhan, K. S. R. Anjaneyulu, and P. Sujatha, "Design Aspect and Analysis for Higher Order Harmonic Mitigation of Power Converter Feeding a Vector Controlled Induction Motor Drive," IEEE International Conference on Advances in Engineering, Science, and Management (ICAESM-2012), pp. 282-287, March 30-31, 2012.
A Full and Half Wave Rectifier with Full Control of the Conducting Angle

Jashandeep Singh, Research Scholar, B.G.I.E.T, Sangrur
Simerjeet Singh, Research Scholar, B.G.I.E.T, Sangrur
ABSTRACT
A new rectifier circuit capable of controlling the conducting angle in the full range from zero to π is presented. A two-cell ladder RC network is studied first to enable it as a phase shifter. Then a half-wave rectifier performing full control of the conducting angle is described. A full-wave rectifier is then described and its characteristics are extracted by simulation. The main advantage of the new circuit is its ability to control the conducting angle over the full range, making the rectifier applicable in electronic power supply circuits and in systems that need power control.

Index Terms
AC/DC converter, power supply, full-wave thyristor rectifier.

1. INTRODUCTION
The control of the conducting angle in a rectifier derives from its applications related to power control. In industrial applications and electronic power supplies one often needs a rectifier with a full scale of control of the conducting angle, from 0 to π. We present here a new circuit that has these characteristics. The idea is to use a two-cell RC ladder network as a phase shifter controlling the firing of the thyristor's gate. This is why we first address the phase shift circuit in the next section.

Fig. 1. Block scheme of full-wave controlled rectifier.

Fig. 2. Conducting angle.

2. RECTIFYING FULL-WAVE BY A THYRISTOR ELEMENT
A full-wave controlled rectifier with thyristors is shown in Fig. 1. The main voltage Es is rectified so that the thyristors are triggered by a control circuit, which in this case consists of an RC ladder followed by a diac. The control circuitry is not shown for convenience. The waveforms of the main voltage and of the load voltage, denoted Vout, are shown in Fig. 2. The conducting angle ΘA may be less than π/2 but in general may be controlled between 0 and π; the thyristor is off during the angle ΘA. The thyristor gate controlling voltage is derived from the main voltage, attenuated, phase shifted, and finally brought to the thyristor's gate via a diac performing as a threshold element and protecting from conduction in the inverse direction.

3. COMPLEX RC CIRCUIT
A single passive RC circuit produces a maximum phase shift of π/2, reached only at infinite frequency; at our frequency of 50 Hz, very large values of the RC elements are needed for angles nearer to π/2. To extend the possibilities of the controlling circuit we propose the use of the two-cell passive RC ladder circuit shown in Fig. 3. The voltage transfer function of this circuit is given by [1]
T(s) = Vout/E = 1 / [1 + s(τ1 + kτ2) + s²τ1τ2]   (1)
where τ1 = R1C1, τ2 = R2C2, and k = R1/R2. In the case R1 = R2 = R and C1 = C2 = C the poles of function (1) are real and coincident.
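In the equal-element case the transfer function above reduces to T(s) = 1/(1 + sRC)², which can be evaluated numerically to see how much attenuation and phase shift the two-cell ladder gives at 50 Hz. The sketch below is plain Python (the component values are examples from the range discussed in the paper, and inter-stage loading is neglected, as in the equal-element expression itself):

```python
import cmath
import math

def rc_two_cell(r_ohm, c_farad, f_hz=50.0):
    """Evaluate T = 1/(1 + j*w*R*C)^2, the equal-element form of the
    two-cell RC ladder transfer function (1).
    Returns (attenuation in dB, phase in degrees)."""
    w = 2 * math.pi * f_hz
    t = 1.0 / (1.0 + 1j * w * r_ohm * c_farad) ** 2
    atten_db = -20 * math.log10(abs(t))          # a(dB) = 20*log(1/A)
    phase_deg = math.degrees(cmath.phase(t))     # phi = -2*arctan(wRC)
    return atten_db, phase_deg

for r in (5e3, 10e3, 20e3):
    a, p = rc_two_cell(r, 1e-6)
    print(f"R = {r / 1e3:.0f} kOhm: attenuation {a:.1f} dB, phase {p:.1f} deg")
```

The phase exceeds 90 degrees in magnitude for the larger resistances, which is exactly the property that a single RC cell cannot provide and the reason the two-cell ladder is used.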
Fig. 3. Circuit of two RC sections

The phase characteristic of this function for the case when R1 = R2 = R and C1 = C2 = C is given by
φ(ω) = -2 arctan(ωRC)   (2)
The dependence of the phase angle on the resistance in the RC ladder was computed for three different values of the capacitance. By inspection of the results one may conclude, first, that resistances no larger than 20 kΩ are needed for the application conceived. As for the capacitances, C = 1 µF is practically satisfactory and, in fact, the most convenient value. We therefore came to the conclusion that one should fix the capacitance value and use variable resistances for control of the phase angle. Note that resistors with variable resistance are easier to implement.
The amplitude characteristic of the function (1) in the case when R1 = R2 = R and C1 = C2 = C is obtained as
A = |T(s)| at s = jω, = 1 / [1 + (ωRC)²]   (3)
or, in terms of decibels,
a(dB) = 20·log(1/A) = f(R, C).   (4)
Using SPICE we obtained this dependence with the capacitance fixed and the resistance in the role of a variable. From the amplitude characteristics in Fig. 4 one may conclude that even for the largest values of the resistances, if C = 1 µF, we get an attenuation of only about 20 dB, that is, about ten times. This means that one may expect voltages of the order of magnitude of no less than 25 V.

Fig. 4. Attenuation of circuit with two RC sections.

4. HALF-WAVE THYRISTOR RECTIFIERS WITH CONDUCTION ANGLE CONTROL
The implementation of the two-cell RC ladder in a thyristor rectifier is shown in Fig. 5 [3]. The thyristor is triggered by the gate current pulses shaped by the control circuit. The diac generates a voltage threshold that leads the gate to the conducting state. In the negative half cycle of the main voltage the diac and the thyristor are off.

Fig. 5. Half-wave rectifier using a two-cell RC ladder.

Fig. 6 shows the transient waveform of the output voltage after switching on the circuitry; the phase angle is in accordance with the value that, for the given circuit parameters, may be extracted from Fig. 4. For the case when R = 20 kΩ one may notice that the conducting angles are considerably lower than 90°. After exhaustive simulations, Fourier analysis of the waveforms obtained for a large set of values of the resistance in the phase delay circuit was performed.

Fig. 6. Waveforms of the output voltage of the circuit of Fig. 5 (for R = 10 kΩ, C = 1 µF, and R3 = 100 Ω).

Part of these results are shown in Fig. 7, where the DC component of the output voltage is given as a function of the resistance in the phase delay circuit, for two values of the load resistance.
Fig. 7. Direct component of the load voltage of the half-wave rectifier

The alternating component of the voltage is dominant for all values of the resistance in the phase delay circuit. Nevertheless, this dominance is less pronounced for large values of the resistance in the phase delay circuit.

5. FULL-WAVE CONTROLLED RECTIFIER WITH CONTROL OF THE PHASE ANGLE
In order to get a larger DC component in the full spectrum of the signal at the output, full-wave rectification is needed. The application of the circuit of Fig. 5 in a full-wave rectifier is depicted in Fig. 8. For simplification of the schematic we introduced a subcircuit denoted by D, as shown in Fig. 8(a); Fig. 8(b) represents a version of the full-wave rectifier. The full-wave rectifier circuit was simulated by the SPICE program in the same manner as the half-wave circuit. In Fig. 9 the output voltage waveforms are depicted; after a transient, a steady response is obtained, in this case with a conducting angle lower than π/2, and at the same time one can recognize that the firing moment is in accordance with the phase angle obtained from Fig. 4.
Finally, a large value of the resistance in the phase shift circuit was chosen: R = 19 kΩ. The simulation results for the output voltage are shown in Fig. 10; a relatively large conducting angle may be observed, and one may notice that after some transient a response is obtained with very short pulses, confirming the role of the control circuit.

Fig. 8. (a) Symbol for the filter-plus-diac subcircuit, and (b) full-wave voltage waveform (for R = 5 kΩ, C = 1 µF, and R3 = 100 Ω).

Fig. 9. Waveform of the output voltage (for R = 12 kΩ, C = 1 µF, and R3 = 100 Ω).

Fig. 10. Output voltage waveform (for R = 19 kΩ, C = 1 µF, and R3 = 100 Ω).

To get a full picture of the properties of the newly proposed circuit of Fig. 8, Fourier analysis of the output voltage waveforms was performed for a set of resistances R and for two values of the load resistance, i.e. R3 = 10 Ω and R3 = 100 Ω.
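For a resistive load, the DC component that this Fourier analysis extracts follows, in the ideal case, the standard textbook relation for a phase-controlled full-wave rectifier, Vdc = (Vm/π)(1 + cos α), where α is the firing angle and π - α the conducting angle. The plain-Python sketch below evaluates it (the 230 V mains level is an assumed value, not taken from the paper's SPICE runs):

```python
import math

def vdc_full_wave(vm_peak, alpha_rad):
    """Average (DC) output voltage of an ideal phase-controlled
    full-wave rectifier with resistive load:
    Vdc = (Vm/pi) * (1 + cos(alpha))."""
    return vm_peak / math.pi * (1.0 + math.cos(alpha_rad))

vm = 230 * math.sqrt(2)  # peak of an assumed 230 V rms mains
for alpha_deg in (0, 45, 90, 135):
    alpha = math.radians(alpha_deg)
    print(f"alpha = {alpha_deg:3d} deg -> Vdc = {vdc_full_wave(vm, alpha):6.1f} V")
```

At α = 0 this reduces to the uncontrolled full-wave value 2Vm/π, and Vdc falls monotonically to zero as α approaches π, which is the qualitative behavior the DC-component curves of the Fourier analysis should follow.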
The results obtained after the Fourier analysis are depicted in Fig. 12 and Fig. 13. The first one represents the DC component of the output voltage as a function of the resistance in the phase shift circuit. One may notice a floor of about 50 V that does not depend on the load resistance. On the other side, Fig. 13 depicts the dependence of the second harmonic on the resistance in the phase shift circuit. Again, as expected, for small conducting angles a large harmonic compared with the DC value is obtained.

Fig. 12. DC component of the output voltage of the full-wave rectifier

Fig. 13. Second harmonic of the voltage on the load

CONCLUSION
A new concept of control of the conducting angle of a thyristor rectifier was proposed and implemented in half-wave and full-wave rectification circuits. After a theoretical study of the circuit and after the creation of some design formulae, a thorough verification procedure was implemented based on SPICE simulation of the circuits. In particular, the properties of the circuits in the frequency domain were studied in order to produce information on the applicability of the circuit in different applications.

REFERENCES
[1] Dokić, B., "Power Electronics - Converters and Regulators", ETF Banja Luka, 2000, in Serbian.
[2] Banzaf, W., "Computer-Aided Circuit Analysis Using SPICE", Prentice Hall, Englewood Cliffs, N.J., 1989.
[3] Taylor, P. D., "Thyristor Design and Realization", John Wiley and Sons, Chichester, 1987.
Multi-objective Optimization Using Linear Membership Function

Er. Gurpreet Kaur, Department of Electrical Engg., Bhai Gurdas Institute of Engg. & Technology, Sangrur, India, ergurpreet88@gmail.com
Er. Divesh Kumar, Department of Electrical Engg., Bhai Gurdas Institute of Engg. & Technology, Sangrur, India, diveshthareja@yahoo.com
Er. Manminder Kaur, Department of Electrical Engg., Guru Kashi University, Talwandi Sabo, India, mkaurjassal@gmail.com
ABSTRACT
The Economic Dispatch (ED) optimization problem is one of the most important issues to be taken into consideration in power systems. The ED problem is to plan the power output of each committed generating unit in such a way that the operating cost is minimized while simultaneously matching the load demand, respecting the power operating limits, and maintaining stability. In this paper, the traditional economic dispatch problem has been modified to minimize generation cost and line flow. As the two sub-problems have conflicting objectives, fuzzy decision-making multi-objective optimization has been applied to get a single optimal solution from the conflicting objectives of generation cost and line flow. The approach has been tested on the IEEE 30-bus system. The results demonstrate the capability of the proposed approach to reduce line flow while maintaining economy in the load dispatch.

Keywords
Economic Load Dispatch, Fuzzy Decision Making, Multi-Objective Optimization, Line Flow.

1. INTRODUCTION
The aim of the power industry is to generate electrical energy at minimum cost while satisfying all the limits and constraints imposed on the generating units. Economic Load Dispatch (ELD) is one of the optimization problems in the power industry: ELD determines the optimal power solution that has minimum generation cost while meeting the load demand. With emerging technology, various techniques have been proposed by several researchers.
The economic load dispatch problem becomes a multi-objective optimization problem with the conflicting objectives of generation cost and line flow. Various optimization techniques have been described by many researchers to deal with multi-objective optimization problems, with varying degrees of success.
In this paper a fuzzy decision-making technique has been applied to solve the multi-objective optimization problem with the conflicting objectives of generation cost and line flow. The fuzzy decision-making technique is considered to be more effective than other techniques when conflicting objectives are included in the optimization problem. The proposed technique optimizes the said objective functions under a set of system constraints such as the real and reactive power balance equations and the generating capacity limits. The performance of the proposed technique is tested on the IEEE 30-bus system.

2. PROBLEM FORMULATION
The aim of the proposed multi-objective optimization problem is to determine the power generation levels which will minimize the generation cost and the line flow, subject to real and reactive power balance and generation capacity constraints. The problem formulation has been sub-divided into the following three parts:
1) Minimizing the fuel cost.
2) Minimization of line flow losses.
3) Applying the fuzzy decision-making technique using a linear membership function.

2.1 Economic Load Dispatch
As the main objective is to minimize the conflicting objectives of generation cost and line flow, subject to real and reactive power balance and generation capacity constraints, the objective function is taken equivalent to the total cost for supplying the load demand. It can be formulated as follows:
Minimize F = Σi (ai PGi² + bi PGi + ci)   (2.1)
where ai, bi, ci are the fuel cost coefficients and PGi is the real power output of the i-th generator.

2.1.1 Real power dispatch
The basic purpose of the real power dispatch problem is to schedule the outputs of the thermal generating units so as to meet the system load at minimum cost. Real power dispatch is defined as follows by using the load flow equations:
PGi - PDi = |Vi| Σj |Vj| (Gij cos δij + Bij sin δij),  i = 1, 2, 3, ..., n   (2.2)
where PDi is the load demand of real power at the i-th bus and PGi is the generation of real power at the i-th bus.
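For the lossless single-objective case, the quadratic cost model of equation (2.1) leads to the classical equal-incremental-cost dispatch, where every unconstrained unit runs at the same marginal cost λ. The sketch below (plain Python; the three-unit coefficients are hypothetical, not the IEEE 30-bus data used in the paper) solves it by bisection on λ:

```python
def fuel_cost(p, a, b, c):
    """Quadratic fuel cost of equation (2.1): a*P^2 + b*P + c."""
    return a * p * p + b * p + c

def economic_dispatch(units, demand):
    """Equal-incremental-cost dispatch (losses ignored), solved by
    bisection on the incremental cost lambda.
    units: list of (a, b, c, pmin, pmax); returns generation list (MW)."""
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        # dC/dP = 2aP + b = lambda  ->  P = (lambda - b) / (2a), clipped
        p = [min(max((lam - b) / (2 * a), pmin), pmax)
             for a, b, c, pmin, pmax in units]
        if sum(p) < demand:
            lo = lam
        else:
            hi = lam
    return p

# Hypothetical three-unit data: (a, b, c, Pmin, Pmax)
units = [(0.004, 8.0, 120.0, 50, 300),
         (0.006, 9.0, 150.0, 50, 200),
         (0.009, 7.0, 135.0, 50, 150)]
dispatch = economic_dispatch(units, 450)
total = sum(fuel_cost(p, a, b, c) for p, (a, b, c, *_) in zip(dispatch, units))
print([round(p, 1) for p in dispatch], round(total, 1))
```

Units that hit a limit are clipped to it, and the remaining units share the residual demand at a common λ, which is the textbook optimality condition this paper's cost objective inherits.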
2.1.2 Reactive power dispatch
Reactive power dispatch is treated as an optimization problem that reduces grid congestion by minimizing the active power losses. Reactive power dispatch requires solving the power flow problem and for this reason is usually known as the optimal reactive power dispatch problem or as an optimal power flow problem. The reactive power dispatch is used to solve the power flow equations; as a result, an improved voltage profile can be obtained. Reactive power dispatch is defined as follows by using the load flow equations:
QGi - QDi = |Vi| Σj |Vj| (Gij sin δij - Bij cos δij),  i = 1, 2, 3, ..., n   (2.3)
where QGi is the generation of reactive power at the i-th bus, QDi is the load demand of reactive power at the i-th bus, |Vi| is the voltage magnitude at the i-th bus, and δij is the voltage angle difference between the i-th and j-th buses.

2.1.3 Power balance constraints
The total power generation must be equal to the total demand plus the real power loss in the transmission lines:
Σi PGi = PD + PL   (2.4)

2.1.4 Generation capacity constraints
To stabilize the operation, the generator outputs, bus voltage magnitudes, and voltage angles are restricted by upper and lower limits. These upper and lower bounds are defined as follows:
PGi,min ≤ PGi ≤ PGi,max,  i = 1, 2, 3, ..., n   (2.5)
The voltage magnitude must satisfy the inequality
Vi,min ≤ Vi ≤ Vi,max,  i = 1, 2, 3, ..., n
where Vi,min is the minimum voltage of the unit and Vi,max is the maximum voltage of the unit. The power system equipment is designed to operate at fixed voltages with an allowable variation of (5-10)% of the rated value. The voltage angle must also satisfy the inequality
δi,min ≤ δi ≤ δi,max,  i = 1, 2, 3, ..., n   (2.6)
where δi,min is the minimum voltage angle of the unit and δi,max is the maximum voltage angle of the unit.

2.2 Line Flow
Whenever a network component is overloaded, line flow losses occur in the network. In a competitive market, line flow has its own importance because of the complexity involved; the overload may be due to overloading of a transmission line. Line flow is managed at the dispatch stage. In this paper, to reduce the line flow, we minimize the flow in branch 2, from bus 1 to bus 3.

2.2.1 Computation of line flow
Consider a line connecting buses I and M, with series admittance g + jb and shunt admittance jbsh. The real power injected from bus I into the line towards bus M is given by
P(I-M) = |VI|² g - |VI||VM| [g cos(δI - δM) + b sin(δI - δM)]   (2.7)
and the corresponding reactive power injection is
Q(I-M) = -|VI|² (b + bsh) - |VI||VM| [g sin(δI - δM) - b cos(δI - δM)]   (2.8)
where |VI| is the voltage at the I-th bus. Similarly, the power injected from bus M towards bus I is
P(M-I) = |VM|² g - |VI||VM| [g cos(δI - δM) - b sin(δI - δM)]   (2.9)
Q(M-I) = -|VM|² (b + bsh) + |VI||VM| [g sin(δI - δM) + b cos(δI - δM)]   (2.10)
The power loss in the (I-M) line is the sum of the power flows into the line from the two buses:
Ploss(I-M) = P(I-M) + P(M-I)   (2.11)

2.3 Fuzzy Decision Making Technique Using Linear Membership Function
Fuzzy multiple-objective linear programming formulates the objectives and the constraints as fuzzy sets, characterized by their individual linear membership functions. The decision set is defined as the intersection of all fuzzy sets and the set defined by the relevant hard constraints.

2.3.1 Linear Membership Function
Logical decision making can be defined by fuzzy sets using the operating conditions. The fuzzy sets are defined by equations called the membership functions. These functions represent the degree of membership in some fuzzy sets using values from 0 to 1. The membership value 0 indicates incompatibility with the set, while 1 means full compatibility; when neither is true, a value between 0 and 1 is taken. To find the membership functions of cost and line flow, the first step is to find the minimum and maximum values of cost and line flow. For an objective F (the fuel cost, and analogously the line flow):
μ(F) = 1,  if F ≤ Fmin
μ(F) = (Fmax - F) / (Fmax - Fmin),  if Fmin < F < Fmax   (2.12)
μ(F) = 0,  if F ≥ Fmax
where F is the value of the original fuel cost, which varies; Fmin is the value of the original fuel cost that is completely satisfactory; and Fmax is the value of the original fuel cost that is completely unsatisfactory.

2.4 Methodology
Step 1. Input the parameters of the system and the fuel cost coefficients, specify the lower and upper boundaries, and define the minimum fuel cost function.
Step 2. Get the power generation for the six generating units, the total fuel cost, and the total losses.

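The linear membership function of eq. (2.12) can be sketched in a few lines of Python; the function name and the sample cost values below are illustrative, not taken from the paper's data.

```python
def linear_membership(value, best, worst):
    """Linear membership of eq. (2.12): 1 at or below the completely
    satisfactory value, 0 at or above the completely unsatisfactory
    value, and linearly decreasing in between."""
    if value <= best:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - best)

# Illustrative: a fuel cost of 938.1 $/h on a range of 882.4 (best)
# to 999.6 $/h (worst) gets a membership a little above one half.
mu_cost = linear_membership(938.1, 882.4, 999.6)
```

The same function is reused unchanged for the line-flow objective, only with the flow's own minimum and maximum substituted for `best` and `worst`.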
616
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Step 3. To minimize the line flow, check whether any line is overloaded or not.
Step 4. If an overload exists, find the minimum value of the line flow by using the line-flow equations.
Step 5. As fuel cost and line flow are conflicting objectives, a single optimal solution cannot be obtained directly; hence, to obtain a compromise solution, the linear membership function is applied.
Step 6. The value of the membership function is obtained using equation (2.12) for both fuel cost and line flow, which coincide at one optimal point.

3. RESULT AND DISCUSSION
The fuzzy decision-making technique is tested on four different test cases for the six-unit system of the economic load dispatch and line-flow problem. Test case 1 considers only operating cost, without line flow and losses; test case 2 considers operating cost with losses; test case 3 considers operating cost and losses with minimization of line flow; and test case 4 applies the fuzzy decision-making technique to operating cost with line flow and losses. The cost coefficients, operating ranges and load data for 1 hour are taken with a power demand of 290 MW.

3.1 Test Case 1
Here the generation limits and generation cost coefficients of the six-unit system are taken. For this test case, line flow and losses are not considered; the operating cost and real power generation of the six-unit system are calculated. The condition used for this test case is

sum_{i=1}^{6} P_Gi = P_D

where P_Gi is the real power generation of the ith generator and P_D is the real power demand.

Table 1. Result of Test Case 1
Test Case      | Total Fuel Cost ($/hr) | Total Fuel Cost (Rs/hr)
Without losses | 882.4157               | 55591.83

3.2 Test Case 2
Here the generation cost coefficients and generation limits of the six-unit system are taken, and the operating cost with losses is calculated. The line-flow equations are used when losses are considered:

sum_{i=1}^{6} P_Gi = P_D + P_L

where P_Gi is the real power generation of the ith generator, P_D is the real power demand and P_L is the power losses.

Table 2. Result of Test Case 2
Test Case                 | Power Losses (MW) | Line Flow (MW) | Fuel Cost ($/h) | Fuel Cost (Rs/hr)
With losses and line flow | 6.13              | 51.88          | 907.27025       | 57158

3.3 Test Case 3
Here the generation cost coefficients and generation limits of the six-unit system are taken from the appendix, and the operating cost with losses is calculated. To minimize the line flow of branch 2, from bus 1 to bus 3, the load-flow equations are applied.

Table 3. Result of Test Case 3
Test Case                                        | Power Losses (MW) | Line Flow (MW) | Fuel Cost ($/h) | Fuel Cost (Rs/hr)
With minimization of line flow in branch 2 (1-3) | 4.3               | 27.72          | 999.64          | 62977.32

3.4 Test Case 4
In this test case, the operating cost including line flow and power losses of the six-unit system is calculated using the fuzzy decision-making technique with the linear membership function.

Table 4. Result of Test Case 4
Test Case                             | Power Losses (MW) | Line Flow (MW) | Fuel Cost ($/h) | Fuel Cost (Rs/hr)
When fuzzy decision making is applied | 4.83              | 35.79          | 938.12538       | 59101.56
Membership function value: 0.666 (fuel cost), 0.666 (line flow).
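Steps 3-6 amount to a max-min fuzzy compromise: compute the membership of each candidate dispatch for both objectives and keep the candidate whose worse membership is best. A minimal sketch, using the extreme cost and line-flow values that appear in Tables 1-3; the function names and the list of candidate (cost, flow) pairs are illustrative:

```python
def membership(value, best, worst):
    # linear membership function of eq. (2.12)
    return max(0.0, min(1.0, (worst - value) / (worst - best)))

# Extreme values taken from the test cases: fuel cost in $/h, line flow in MW.
COST_BEST, COST_WORST = 882.42, 999.64
FLOW_BEST, FLOW_WORST = 27.72, 51.88

def fuzzy_compromise(candidates):
    """Max-min rule: pick the candidate maximizing the smaller membership."""
    return max(candidates,
               key=lambda c: min(membership(c[0], COST_BEST, COST_WORST),
                                 membership(c[1], FLOW_BEST, FLOW_WORST)))

# (fuel cost, line flow) of three candidate dispatches
best = fuzzy_compromise([(907.27, 51.88), (999.64, 27.72), (938.13, 35.79)])
```

On these three candidates the compromise (938.13, 35.79) wins: the pure cost optimum and the pure flow optimum each drive one of the two memberships to zero.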

3.5 Comparison between different voltages
[Figure: voltage magnitude (0.980-1.120 p.u.) versus load bus number for the different test cases.]
Fig 1. Comparison of voltages for different test cases

Table 5. Power generation of six units for different cases
S.N. | Test Case | PG1 (MW) | PG2 (MW) | PG3 (MW) | PG4 (MW) | PG5 (MW) | PG6 (MW)
1    | Case 1    | 130.2    | 62.96    | 23.6     | 35       | 19.0     | 19.0
2    | Case 2    | 127.7    | 64.63    | 25.0     | 35       | 21.4     | 22.2
3    | Case 3    | 59.3     | 80       | 50       | 35       | 30       | 40
4    | Case 4    | 86.28    | 68.91    | 34.6     | 35       | 30       | 40

[Figure: power generation (MW) of each of the six generating units for the four cases: without losses, with losses, with minimization of line flow, and with fuzzy decision making.]
Fig 2. Graph for power generation of six units for different cases

Table 6. Performance parameters of IEEE 30 Bus System
Test Cases                                       | Power Losses (MW) | Line Flow (MW) | Fuel Cost ($/h) | Fuel Cost (Rs/hr)
With losses                                      | 6.13              | 51.88          | 907.27          | 57158
With minimization of line flow in branch 2 (1-3) | 4.3               | 27.72          | 999.64          | 62977.3
When fuzzy decision making is applied            | 4.83              | 35.79          | 938.12          | 59101.5
Linear membership function                       | -                 | -              | 0.666           | 0.666

4. CONCLUSION
In this paper, the fuzzy decision-making technique is applied to economic power generation for six generating units. The technique was employed to solve the ELD problem for four cases of the six-generating-unit system: without losses, with losses, with minimization of line flow, and with the fuzzy decision-making technique using a linear membership function. The results demonstrate the capability of the proposed fuzzy multi-objective technique to solve the combined economic load dispatch and line-flow problem.

REFERENCES
[1] Wood, A.J. and Wollenberg, B.F. 1984, "Power Generation Operation and Control", John Wiley and Sons.
[2] Farag, A., Baiyat, S.A. and Cheng, T.C. 1995, "Economic load dispatch multi-objective optimization procedure using linear programming techniques", IEEE Transactions on Power Systems, vol. 10, no. 2, ISSN 0885-8950.
[3] Behera, R., Panigrahi, P.B. and Pati, B.B. 2001, "Economic Load Dispatch Using Modified Genetic Algorithm".
[4] Niimura, T. and Nakashima, T. 2003, "Multi-objective trade-off analysis of deregulated electricity transactions", Electrical Power and Energy Systems, vol. 25, pp. 179-185.
[5] Abido, M.A. 2003, "A novel multi-objective evolutionary algorithm for environmental/economic power dispatch", Electric Power Systems Research, vol. 65, pp. 71-81.


[6] Bae, J., Song, K., Rin, J. and Lee, K.Y. 2005, "A particle swarm optimization for economic dispatch with non-smooth cost functions", IEEE Transactions on Power Systems, vol. 20, no. 1, pp. 34-42.
[7] Bae, J., Won, Y., Kim, H. and Shin, J. 2006, "An Improved Particle Swarm Optimization for Economic Dispatch with Valve-Point Effect", International Journal of Innovations in Energy Systems and Power, vol. 1, no. 1.
[8] Abedinia, O., Garmarodi, D., Rahbar, R., and Javidzadeh,
F.2012, “Multi-objective Environmental/Economic Dispatch
Using Interactive Artificial Bee Colony Algorithm,” J. Basic.
Appl. Sci. Res., vol.2, no.11, pp.11272-11281.
[9] Surekha, P. and Sumathi, S. 2012, “Solving Economic Load
Dispatch problems using Differential Evolution with
Opposition Based Learning”, Wseas Transactions on
Information Science and Applications, volume 9, issue 1.
[10] Soni, S.K and Bhuria, V. 2012, “Multi-objective Emission
constrained Economic Power Dispatch Using Differential
Evolution Algorithm”, International Journal of Engineering
and Innovative Technology (IJEIT) volume 2, issue 1, pp
120-125.
[11] Agrawal, S., Bakshi, T. and Majumdar, D. 2012, “Economic
Load Dispatch of Generating Units with Multiple Fuel
Options Using PSO”, International Journal of Control and
Automation vol. 5, no. 4, pp 79-92.
[12] Mathur, D. 2013, “A New Methodology for Solving
Different Economic Dispatch Problems,” International
Journal of Engineering Science and Innovative Technology,
vol.2, pp. 494-499.
[13] Pal, B.B. and Kumar, M. 2013, "A linear Fuzzy Goal Programming Method for Solving Optimal Power Generation and Dispatch Problem", International Journal of Advanced Computer Research, volume 3, number 1, issue 8, pp. 56-64.
[14] Ramyasri, N.and Reddy, G.S. 2013, “Fuzzified PSO for
Multiobjective Economic Load Dispatch Problem,”
International Journal of Research in Engineering and
Technology, volume 2 issue 8, pp. 157-162.
[15] Palaniyappan, S., and Ilayaranimangammal, I.2013, “An
Optimistic Solution Technique For Economic Load Dispatch
Problem Using Immune Inspired Algorithm” , International
Journal of Advanced Research in Electrical, Electronics and
Instrumentation Engineering volume 2, issue 12, pp. 6191-
6195.
[16] Rajangam, K., Arunachalam, V.P, Subramanian, R. 2013,
“Fuzzy Logic Controlled Genetic Algorithm to Solve The
Economic Load Dispatch For Thermal Power Station”,
European Scientific Journal , edition vol. 8, No.7, pp 172-
184.
[17] Kothari, D.P. and Dhillon, J.S. 2013 “Power System
Optimization”, PHI.
[18] Appendix 19, “Data Sheets for IEEE 30 Bus System” pp.
129-134.


Allocation of Multiple DGs and Capacitors in Distribution Networks by PSO Approach

Satish Kansal, Associate Professor, Electrical Engg. Department, BHSBIET Lehragaga, kansal.bhsb@gmail.com
Rakesh Kumar Bansal, M.Tech Student, Electrical Engg. Department, BGIET Sangrur, rkbansal.lehra@gmail.com
Divesh Kumar, Assistant Professor, Electrical Engg. Department, BGIET Sangrur, diveshthareja@yahoo.com

ABSTRACT
Distributed generation (DG) has been utilized in some electric power networks for power loss reduction, voltage improvement, postponement of system upgrades, environmental friendliness and increased reliability. This paper presents the application of the particle swarm optimization (PSO) technique to find the optimal placement of DGs and capacitors in radial distribution networks for active and reactive power compensation through reduction in real power losses. The improvement in voltage is also considered in this work. Analytical expressions based on the exact loss formula are used for real power loss reduction. The proposed technique is tested on standard 33-bus and 69-bus test systems.

General Terms
Application of a PSO-based technique for optimal allocation of DGs and capacitors in distribution networks.

Keywords
DG, capacitor, optimal size, optimal location, power loss.

1. INTRODUCTION
Recent developments in small generation technologies have drawn the attention of utilities to changes in the electric infrastructure for adapting distributed generation (DG) in distribution systems. Employment of DG technologies makes it more likely that the electricity supply system will depend on DG systems operated in a deregulated environment to achieve a variety of benefits. As DG systems generate power locally to fulfill customer demands, appropriate sizing and placement of DG can drastically reduce power losses in the system. DG inclusion also defers transmission and distribution upgrades, improves supply quality and reliability, and reduces greenhouse effects. Distributed generation is a topical area of research, and interest in this area has been growing rapidly worldwide.

Including DG in distribution systems requires in-depth analysis and planning tools. This process usually involves technical, economic, regulatory and possibly environmental challenges. Some of the factors that must be taken into account in the planning process of expanding a distribution system with DG are the number and capacity of DG units, the best locations and technology, the network connection, the capacity of the existing system and the protection schemes, among others.

Many optimization tools have been utilized to solve different DG problems. A methodology for evaluating the impact of DG units on power loss, reliability and the voltage profile of distribution networks was presented in [1]. The authors implied that on-line systems including DG units can achieve better reliability during interruption situations to keep customers supplied. The impact of capacitor placement on distribution system reliability was considered in [2] by defining two objective functions: the first is the sum of reliability cost and investment cost, and the second is the sum of reliability cost, cost of losses and investment cost. A genetic algorithm to find the optimal placement of DG in the compensated network, for restoring the system under the cold load pickup (CLPU) condition and conserving load diversity to reduce losses and improve voltage regulation, was discussed in [3]. In [4], [5], genetic algorithm (GA) based methods are also used to determine size and location. GA is suitable for multi-objective problems such as DG allocation and can give near-optimal results, but it is computationally demanding and slow in convergence. An improved Tabu search algorithm, with a mutation operator introduced to improve the local search ability and reduce computation time, was introduced in [6] to minimize the loss in large-scale distribution systems. The authors in [7] reconfigure the distribution network using an ant colony search algorithm to minimize the system losses. In [8], the authors analyzed the optimal DG location analytically for two continuous load distributions, uniformly distributed and uniformly increasing loads. The goal of their studies was to minimize line losses; they observed that the optimal location of DG is highly dependent on the load distribution along the feeder, and that significant loss reduction takes place when DG is located toward the end of a uniformly increasing load feeder and in the middle of a uniformly distributed load feeder. An analytical approach has been demonstrated in [9, 10] to find the optimal size and location of DG to minimize real power losses and enhance the voltage profile. In [11], an analytical method to determine the optimum location-size pair of a DG unit was proposed in order to minimize only the line losses of the power system. A fast analytical approach finds the optimal size of DG at optimal power factor to minimize the power loss; however, only type III DG has been exploited [12]. The authors in [13] find the optimal location based on an independent sensitivity test, to overcome the limitation of right-of-way or real estate cost, and a heuristic curve-fitted technique is applied to find the optimal size of DG at a predetermined power factor to minimize the power system loss.

For optimal placement of capacitors for loss reduction, the well-known 2/3 rule is presented in [14] for uniformly distributed loads. Many researchers have applied other techniques, such as GA [15], to improve the power quality in the presence of voltage and current harmonics; the objective function was to minimize the cost of power loss and capacitor banks, taking


the voltage limits and the number and size of capacitor banks as constraints. In [16], the authors proposed a fuzzy expert system based on voltage and power-loss reduction indices to minimize peak power loss by means of capacitor placement. A hybrid approach combining Tabu search with heuristic techniques was employed in [17] for capacitor placement to minimize an objective function in terms of cost and loss.

Most of the approaches presented so far model the optimal placement of either DG or capacitors alone. In this work, however, DG and capacitors are integrated into the distribution system together. The present work develops a comprehensive formulation, extending the analytical expressions presented in [10], to find the optimal sizes and locations of DGs and capacitors, compensating the active and reactive power by the proposed PSO technique.

The paper is organized as follows: Section 2 presents a brief summary of the location and sizing issues for reduction of line losses. The mathematical modeling to calculate the sizes of DGs and capacitors at optimal locations to minimize the system losses is presented in Section 3. Section 4 presents the problem formulation with assumptions and constraints. The proposed algorithm for optimal sizing of DGs and capacitors at optimal locations to achieve active and reactive power compensation is introduced in Section 5. Section 6 presents the numerical results of the proposed PSO technique, with interesting observations and discussion. Finally, the major contributions and conclusions are summarized in Section 7.

2. LOCATION AND SIZING ISSUES
For a particular bus, as the size of DG is increased, the losses are reduced to a minimum value and then increase beyond a certain size of DG (i.e., the optimal DG size) at that location. If the size of DG is increased further, the losses start to increase, and it is likely that they may overshoot the losses of the base case. Note also that the location of DG plays an important role in minimizing the losses. The size at most should be such that it is consumable within the distribution substation boundary: any attempt to install high-capacity DG with the purpose of exporting power beyond the substation (reverse flow of power through the distribution substation) will lead to very high losses [10]. In a distribution system, the load capacity (MW) plays an important role in selecting the size of DG. The reason for higher losses with high-capacity DG can be explained by the fact that the distribution system was initially designed such that power flows from the sending end (source substation) to the loads, with conductor sizes gradually decreasing from the substation to the consumer point. Thus, without reinforcement of the system, the use of high-capacity DG will lead to excessive power flow through small-sized conductors and hence result in higher losses.

3. MATHEMATICAL MODELING
3.1 Optimal Sizing of DGs and Capacitors
In this section the total power loss is formulated based on the real power loss in the system, given by (1). This formula is popularly referred to as the "exact loss" formula [18]:

P_L = sum_i sum_j [ alpha_ij (P_i P_j + Q_i Q_j) + beta_ij (Q_i P_j - P_i Q_j) ]        (1)

where

alpha_ij = (R_ij / (V_i V_j)) cos(delta_i - delta_j),   beta_ij = (R_ij / (V_i V_j)) sin(delta_i - delta_j),

R_ij + jX_ij are the ijth elements of the [Zbus] matrix, P_i = P_Gi - P_Di and Q_i = Q_Gi - Q_Di, P_Gi and Q_Gi are the power injections of generators at the ith bus, P_Di and Q_Di are the loads of the ith bus, and P_i and Q_i are the active and reactive power injections of the ith bus.

The total power loss against injected power is a parabolic function, and at minimum losses the rate of change of losses with respect to injected power becomes zero [10]:

dP_L/dP_i = 2 sum_j (alpha_ij P_j - beta_ij Q_j) = 0        (2)

It follows that

P_i = (1/alpha_ii) [ beta_ii Q_i - sum_{j != i} (alpha_ij P_j - beta_ij Q_j) ]        (3)

where P_i is the real power injection at node i, which is the difference between the real power generation and the real power demand at that node:

P_i = P_DGi - P_Di

where P_DGi is the real power injection from the DG placed at node i, and P_Di is the load demand at node i. By combining the above, we get

P_DGi = P_Di + (1/alpha_ii) [ beta_ii Q_i - sum_{j != i} (alpha_ij P_j - beta_ij Q_j) ]        (4)

Similarly, for the reactive power injection Q_i = Q_DGi - Q_Di of a capacitor placed at node i, setting

dP_L/dQ_i = 2 sum_j (alpha_ij Q_j + beta_ij P_j) = 0        (5)

gives

Q_DGi = Q_Di - (1/alpha_ii) [ beta_ii P_i + sum_{j != i} (alpha_ij Q_j + beta_ij P_j) ]        (6)

Equation (4) gives the size of DG and (6) gives the size of capacitor for each bus i for the loss to be minimum. Any size of DG or capacitor other than P_DGi and Q_DGi placed at bus i will lead to a higher loss.

3.2 Optimal Location of DG and Capacitor
The optimal location can be found by placing the DG and capacitor sizes obtained from (4) and (6) at each bus in turn and computing the resulting total loss due to the placement of DG and capacitor at the respective bus. The bus having the least power loss while satisfying the system constraints is the optimal location for the placement of DG and capacitor [10].


4. PROBLEM FORMULATION
4.1 Objective Function
The main objective is to compensate the active and reactive power, minimizing the total real power loss given in eq. (1), while meeting the following constraints.

4.2 Assumptions and Constraints
- For each bus, the power flow (power balance) equations must be satisfied:

P_Gi - P_Di - V_i sum_j V_j (G_ij cos theta_ij + B_ij sin theta_ij) = 0        ... (7)
Q_Gi - Q_Di - V_i sum_j V_j (G_ij sin theta_ij - B_ij cos theta_ij) = 0        ... (8)

- The DG and capacitor under study supply real power and reactive power only.
- The sizing and locations are considered at peak load only.
- The maximum number of DG and capacitor units is three, with a penetration of 100%.
- The voltage at every bus in the network should be within the acceptable range (utility's standard ANSI Std. C84.1-1989), i.e., within the permissible limit (±5%) [19]:

V_min ≤ V_i ≤ V_max        ... (9)

- The current in a feeder or conductor must be well within the maximum thermal capacity of the conductor:

I_i ≤ I_i^Rated        ... (10)

where I_i^Rated is the current permissible for branch i within the safe limit of temperature.

5. PROPOSED ALGORITHM
5.1 Particle swarm optimization technique
The particle swarm optimization (PSO) algorithm is one of the evolutionary computation (EC) techniques. PSO is a population-based and self-adaptive technique introduced originally by Kennedy and Eberhart in 1995 [20, 21]. This stochastic algorithm searches a multidimensional space for the optimal solution. The individuals are called particles and the population is called a swarm. Each particle in the swarm moves towards the optimal point with an adaptive velocity, and each particle is treated as a mass-less and volume-less point in an n-dimensional space. Mathematically, the position of particle m is represented by the n-dimensional vector

X_m = (x_m1, x_m2, ..., x_mn)        (11)

The velocity of this particle is also an n-dimensional vector,

V_m = (v_m1, v_m2, ..., v_mn)        (12)

The best position of each particle, related to the lowest value of the objective function (for a minimization objective), is

Pbest_m = (p_m1, p_m2, ..., p_mn)

and the global best position among all the particles is denoted as

Gbest = (g_1, g_2, ..., g_n)

During the iteration procedure, the velocity and position of the particles are updated. It should be noted that the sizes of DGs and capacitors vary between 0 and the sum of the loads; this is regarded as the position of a particle during the optimization process. Fig. 1 illustrates the flow chart of optimal placement of DGs and capacitors in the distribution system at optimal locations by the PSO technique, taking the constraints into consideration to minimize the power loss. The steps used in the proposed algorithm are given below.

Step 1 (Input system data and initialize): In this step, the distribution system configuration data and constraints, such as the maximum and minimum allowed voltages and the allowed range of DG and capacitor sizes, are specified. The population size of the swarm and the number of iterations are set. An initial population (array) of particles with random positions and velocities in the search-space dimensions (sizes of DGs and capacitors and their respective locations) is generated, and the vectors X and V are formed as in (11)-(12). The PSO weight factor is also set in this step.

Step 2 (Calculation of objective function): The objective function (1) is evaluated using the backward-sweep and forward-sweep method of distribution load flow [22].

Step 3 (Calculate Pbest): The objective function of each particle in the current iteration is compared with its value in the previous iteration, and the position giving the lower objective function is recorded as Pbest for the current iteration:

Pbest_m^{k+1} = X_m^{k+1}   if f(X_m^{k+1}) < f(Pbest_m^k),   else Pbest_m^k        (13)

where k is the iteration number and f is the objective function evaluated for the particle.

Step 4 (Calculate Gbest): In this step, the best objective function associated with the Pbests among all particles in the current iteration is compared with that in the previous iteration, and the lower value is chosen as the current overall Gbest:

Gbest^{k+1} = arg min over Pbest_m^{k+1} of f(Pbest_m^{k+1})        (14)
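Step 1 can be sketched as follows. The dictionary-based particle encoding and the function name are illustrative choices, and bus 1 is assumed to be the substation (so candidate buses start at 2), following the usual convention for radial test feeders:

```python
import random

def init_swarm(n_particles, n_dg, n_cap, n_bus, p_load_total, q_load_total):
    """Step 1: random particles encoding (DG buses, DG sizes,
    capacitor buses, capacitor sizes); sizes range from 0 up to the
    total system load, as stated in the text; velocities start at 0."""
    swarm = []
    for _ in range(n_particles):
        position = {
            "dg_bus":   [random.randint(2, n_bus) for _ in range(n_dg)],
            "dg_mw":    [random.uniform(0.0, p_load_total) for _ in range(n_dg)],
            "cap_bus":  [random.randint(2, n_bus) for _ in range(n_cap)],
            "cap_mvar": [random.uniform(0.0, q_load_total) for _ in range(n_cap)],
        }
        velocity = {k: [0.0] * len(v) for k, v in position.items()}
        swarm.append((position, velocity))
    return swarm
```

For the 33-bus system of Section 6, for example, `init_swarm(30, 2, 2, 33, 3.72, 2.3)` would seed a 30-particle swarm with two DGs and two capacitors per particle.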


Step 5 (Update velocity): After calculation of Pbest and Gbest, the velocity of each particle for the next iteration is modified using

V_m^{k+1} = w V_m^k + C1 * rand * (Pbest_m^k - X_m^k) + C2 * rand * (Gbest^k - X_m^k)        (15)

where V_m^k and V_m^{k+1} are the velocities of particle m at iterations k and k+1, w is the inertia weight factor, C1 and C2 are the acceleration coefficients, X_m^k is the position of particle m at iteration k, Pbest_m^k is the best position of particle m at iteration k, and Gbest^k is the best position among all particles at iteration k.

In the velocity-updating process, the inertia weight w and the acceleration coefficients C1 and C2 should be determined in advance. The acceleration coefficients take values in the range (1, 2) and represent the weighting of the stochastic acceleration terms that pull each particle towards the individual best position and the overall best position; rand denotes random functions generating separate random values in the range [0, 1]. The inertia weight factor w is defined as follows:

w = w_max - (w_max - w_min) * iter / iter_max        (16)

where w_max is the initial inertia weight factor, w_min is the final inertia weight factor, iter is the current iteration number and iter_max is the maximum iteration number.

Step 6 (Update position): The position of each particle at the next iteration (k+1) is modified, subject to constraints (7)-(10), as

X_m^{k+1} = X_m^k + V_m^{k+1}        (17)

Step 7 (Check convergence criterion): If |Pbest - Pbest_prev| is within the tolerance, or iter = iter_max, the program is terminated and the results are printed; otherwise, the program returns to Step 2.

From (15), one can see that the current flying velocity of a particle comprises three terms. The first term is the particle's previous velocity, revealing that a PSO system has memory. The second and third terms represent a cognition-only model and a social-only model, respectively; the cognition-only model treats individuals as isolated and reflects private thinking.

[Fig. 1 flow chart: receive the system data; run the distribution load flow for the base case; set the numbers n of DGs and m of capacitors; initialize the particles (k1, ..., kn, P1, ..., Pn, l1, ..., lm, Q1, ..., Qm), where ki and Pi are the bus number and size of the ith DG and li and Qi are the bus number and size of the ith capacitor; then iterate: calculate the fitness (loss in the network) of each particle by placing the DGs and capacitors at their respective buses, check the constraints, update Pbest by eq. (13) and Gbest by eq. (14), update the velocities by eq. (15) and the particles by eq. (17), until |Pbest - Pbest_prev| converges; output the result.]
Fig. 1: Flow chart of optimal placement of DGs and Capacitors using PSO Technique
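Steps 5-7 form the core PSO loop. The sketch below wires eqs. (15)-(17) together on a toy quadratic objective standing in for the load-flow loss evaluation; the parameter values (20 particles, C1 = C2 = 2.0, w from 0.9 down to 0.4) are common defaults, not the paper's settings:

```python
import random

def pso_minimize(f, lo, hi, dim, n_particles=20, iters=100,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Minimal PSO loop: the inertia weight decreases linearly per
    eq. (16); velocities follow eq. (15) and positions eq. (17),
    clamped to [lo, hi]; returns the global best position and value."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = min(range(n_particles), key=lambda m: pbest_f[m])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for k in range(iters):
        w = w_max - (w_max - w_min) * k / iters              # eq. (16)
        for m in range(n_particles):
            for d in range(dim):
                V[m][d] = (w * V[m][d]
                           + c1 * random.random() * (pbest[m][d] - X[m][d])
                           + c2 * random.random() * (gbest[d] - X[m][d]))  # eq. (15)
                X[m][d] = min(hi, max(lo, X[m][d] + V[m][d]))              # eq. (17)
            fx = f(X[m])
            if fx < pbest_f[m]:                              # eq. (13)
                pbest[m], pbest_f[m] = X[m][:], fx
                if fx < gbest_f:                             # eq. (14)
                    gbest, gbest_f = X[m][:], fx
    return gbest, gbest_f

# Toy stand-in for the network loss: minimum at x = (1, 1).
best, best_f = pso_minimize(lambda x: sum((xi - 1.0) ** 2 for xi in x),
                            lo=-5.0, hi=5.0, dim=2)
```

In the paper's setting, `f` would be the backward/forward-sweep load flow returning the loss of eq. (1), and the position vector would hold the DG and capacitor buses and sizes instead of plain reals.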


6. NUMERICAL RESULTS
6.1 Test systems
The proposed methodology is tested on two different test systems. The first is the 33-bus radial distribution system with a total load of 3.72 MW and 2.3 MVAr [23]; the second is the 69-bus radial distribution system with a total load of 3.80 MW and 2.69 MVAr [24], with Beaver conductors.

Based on the proposed methodology, a computer program has been developed in the MATLAB environment to run the load flow, calculate the distribution loss and identify the optimal sizes and locations of multiple DGs and capacitors by the PSO approach, using the least-loss method as described in [10].

6.1.1 33-Bus Test System
In this case, multiple DGs are considered with two capacitors. Table 1 shows the simulation results for the optimal sizes, locations and loss reduction of multiple DGs with two capacitors by the proposed approach.

Table 1. Multiple DGs and Capacitors (33-bus system)
Cases                | Bus No.           | DG (MW)            | Capacitor (MVAr)  | Loss (kW) | Loss Reduction (%)
Base case            | -                 | -                  | -                 | 211.0     | 0.00
Single DG & Two Cap. | 6 / 12 / 30       | 2.49 / - / -       | - / 0.44 / 1.03   | 50.4      | 76.11
Two DG & Two Cap.    | 13 / 30 / 12      | 0.83 / 1.11 / -    | - / 1.04 / 0.44   | 28.5      | 86.49
Three DG & Two Cap.  | 14 / 24 / 30 / 12 | 0.75 / 1.07 / 1.03 / - | - / - / 1.04 / 0.44 | 14.9 | 92.94

The maximum loss reduction of 92.94% is achieved by installing three DGs with two capacitors, and the minimum, 76.11%, with a single DG and two capacitors. It is observed that the more DG and capacitor units are installed, the greater the loss reduction becomes.

6.1.2 69-Bus Test System
Similar to the 33-bus system, Table 2 shows the optimal sizes, locations and percentage reduction in line losses for multiple DGs with multiple capacitors.

Table 2. Multiple DGs with Three Capacitors (69-bus system)
Cases                   | Bus No. | DG (MW)      | Capacitor (MVAr) | Loss (kW) | Loss Reduction (%)
Base case               | -       | -            | -                | 225.0     | 0.00
Single DG               | 61      | 1.81         | -                | 83.37     | 62.95
Single DG & Single Cap. | 61      | 1.81         | 1.29             | 23.17     | 89.70
One DG & Two Cap.       | 61 / 69 | 1.81 / -     | 1.22 / 0.43      | 20.14     | 91.05
Two DG & Single Cap.    | 17 / 61 | 0.52 / 1.72  | - / 1.29         | 12.78     | 94.32

For the optimal placement of a single DG and of a single DG with a capacitor, the results given in Table 2 show loss reductions of 62.95% and 89.70% respectively. As the number of DG and capacitor units is increased, the loss reduction becomes more effective. Among the cases, two DGs and a single capacitor at the optimal locations yield the maximum loss reduction of 94.32%, while a single DG with a single capacitor at optimal locations obtains the minimum loss reduction of only 89.70% for the 69-bus test system.

6.2 Voltage Profile
Tables 3 to 5 indicate the minimum and maximum voltages before and after the placement of a single DG and a single capacitor, a single DG and two capacitors, and two DGs with a single capacitor, for the 33-bus and 69-bus test systems respectively.

Table 3. Voltage profile before and after with single DG and Single Capacitor
System | Min (before) | Max (before) | Min (after) | Max (after)
33 bus | 0.9038@18    | 1.0000@1     | 0.9526@18   | 1.0004@1
69 bus | 0.9092@65    | 1.0000@1     | 0.9723@27   | 1.0000@1-3,28,36

Table 4. Voltage profile before and after with single DG and Two Capacitors
System | Min (before) | Max (before) | Min (after) | Max (after)
33 bus | 0.9038@18    | 1.0000@1     | 0.9806@18   | 1.021@30
69 bus | 0.9092@65    | 1.0000@1     | 0.9751@27   | 1.0000@1-3,28,36


Table 5. Voltage profile before and after with Two DGs and Single Capacitor
System | Min (before) | Max (before) | Min (after) | Max (after)
33 bus | 0.9038@18    | 1.0000@1     | 0.9783@18   | 1.0003@1
69 bus | 0.9092@65    | 1.0000@1     | 0.9919@69   | 1.0000@1-3,28,36

It is seen that in all the cases the voltage profile improves as the number of DG and capacitor units installed in the system increases, while all the current and voltage constraints are satisfied.

7. CONCLUSION
This paper has proposed the application of the particle swarm optimization technique for finding the optimal sizes of DGs and capacitors at optimal locations, for active and reactive power compensation, to minimize the losses in primary distribution systems. The placement of capacitors in combination with DGs not only reduces the losses to a great extent but also improves the voltage profile of the system, and it provides a more economical solution for loss reduction. In the age of the integrated grid, the placement and analysis of DGs and capacitors give guidance for optimal operation of the power system.

REFERENCES
[1] Borges, C.L.T. and Falcao, D.M.: "Impact of distributed generation allocation and sizing on reliability, losses and voltage profile", Proc. IEEE Power Tech Conf., Bologna, Italy, 2003, vol. 2, pp. 1-5.
[2] Etemadi, A.H. and Fotuhi-Firuzabad, M.: "Distribution system reliability enhancement using optimal capacitor placement", IET Generation, Transmission & Distribution, 2008, vol. 2, pp. 621-631.
[3] Kumar, V., Kumar, R., Gupta, I. and Gupta, H.O.: "DG integrated approach for service restoration under cold load pickup", IEEE Trans. Power Del., 2010, 25, (1), pp. 398-406.
[4] Kim, K.H., Lee, Y.J., Rhee, S.B. and Lee, S.K.: "Dispersed generator placement using fuzzy-GA in distribution systems", Proc. IEEE Power Engineering Society Summer Meeting, USA, July 2002, vol. 2, pp. 1148-1153.
[5] Kim, J.O., Nam, S.W., Park, S.K. and Singh, C.: "Dispersed generation planning using improved Hereford Ranch algorithm", Elect. Power Syst. Res., October 1998, 47, (1), pp. 47-55.
[6] Zhang, D., Fu, Z. and Zhang, L.: "An improved TS algorithm for loss-minimum reconfiguration in large-scale distribution systems", Elect. Power Syst. Res., 2007, 77, (5-6), pp. 685-694.
[8] Griffin, T., Tomsovic, K., Secrest, D. and Law, A.: "Placement of dispersed generation systems for reduced losses", Proc. 33rd Annual Hawaii International Conference on System Sciences, Maui, HI, 2000, pp. 1-9.
[9] Wang, C. and Nehrir, M.H.: "Analytical approaches for optimal placement of DG sources in power systems", IEEE Trans. Power Syst., November 2004, 19, (4), pp. 2068-2076.
[10] Acharya, N., Mahat, P. and Mithulananthan, N.: "An analytical approach for DG allocation in primary distribution network", Elect. Power & Energy Syst., December 2006, 28, (10), pp. 669-678.
[11] Gozel, T. and Hocaoglu, M.H.: "An analytical method for the sizing and siting of distributed generators in radial systems", Elect. Power Syst. Res., 2009, vol. 79, pp. 912-918.
[12] Hung, D.Q., Mithulananthan, N. and Bansal, R.C.: "Analytical Expressions for DG Allocation in Primary Distribution Networks", IEEE Trans. Energy Conversion, 2010, vol. 25, (3), pp. 814-820.
[13] Abu-Mouti, F.S. and El-Hawary, M.E.: "Heuristic Curve-Fitted Technique for Distributed Generation Optimization in Radial Distribution Feeder Systems", IET Generation, Transmission & Distribution, 2011, vol. 5, no. 2, pp. 172-180.
[14] Schmill, J.V.: "Optimum size and location of shunt capacitors on distribution feeders", IEEE Trans. Power App. Syst., Sept. 1965, vol. PAS-84, (9), pp. 825-832.
[15] Masoum, M.A.S., Ladjevardi, M., Jafarian, A. and Fuchs, E.F.: "Optimal placement, replacement and sizing of capacitor banks in distorted distribution networks by genetic algorithms", IEEE Trans. Power Del., Oct. 2004, vol. 19, (4), pp. 1794-1801.
[16] Ng, H.N., Salama, M.M.A. and Chikhani, A.Y.: "Capacitor allocation by approximate reasoning: fuzzy capacitor placement", IEEE Trans. Power Del., Jan. 2000, vol. 15, (1), pp. 393-398.
[17] Gallego, R.A., Monticelli, A.J. and Romero, R.: "Optimal capacitor placement in radial distribution networks", IEEE Trans. Power Del., Nov. 2001, vol. 16, (4), pp. 630-637.
[18] Elgerd, I.O.: Electric Energy System Theory: An Introduction, McGraw-Hill, 1971.
[19] Willis, H.L.: Power Distribution Planning Reference Book, Marcel Dekker, New York, 2004.
[20] Kennedy, J. and Eberhart, R.: "Particle Swarm Optimizer", Proc. IEEE International Conference on Neural Networks, Perth, Australia, IEEE Service Centre, Piscataway, NJ, IV, 1995, pp. 1942-1948.
[21] Eberhart, R.C. and Shi, Y.: "Comparing inertia weights and constriction factors in particle swarm optimization", Proc. Congress on Evolutionary Computation, San Diego, California, IEEE Service Center, Piscataway, NJ, 2000, pp. 84-88.
[22] Haque, M.H.: "Efficient load flow method for distribution systems with radial or mesh configuration", IEE Proc. Generation, Transmission and Distribution, vol. 143, issue 1, pp. 33-38.
[23] Kashem, M.A., Ganapathy, V., Jasmon, G.B. and Buhari, M.I. 2000: "A novel method for loss minimization in distribution networks", Int. Conf. on Electric Utility
[7] Su, C.T., Chang, C.F., and Chiou, J.P.: „Distribution Deregulation and Restructuring and Power Technology
network reconfiguration for loss reduction by ant colony London.
search algorithm‟, Elect. Power Syst. Res., 2005, 75, (2- [24] Baran M, Wu FF.1989. Optimal capacitor placement on
3), pp.190-199. radial distribution systems. IEEE Transaction on Power
Delivery.
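The particle swarm optimization applied above can be illustrated with a minimal sketch. The fitness function in the paper is the total real power loss obtained from a distribution load flow, which is outside the scope of this sketch, so a simple sphere function stands in as the objective; the swarm size, inertia weight and acceleration coefficients below are generic textbook choices, not values from the paper.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: minimize x^2 + y^2 (a load-flow loss evaluation would
# replace this in the actual DG/capacitor sizing problem).
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

With these settings the swarm converges close to the origin, the known minimum of the stand-in objective.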
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
A Review of Renewable Energy Supply and Energy Efficiency Technologies

Ramandeep Kaur (Deptt. of Electrical Engineering, BGIET, Sangrur, ramansandhu548@gmail.com), Divesh Kumar (Deptt. of Electrical Engineering, BGIET, Sangrur, diveshthareja@yahoo.com), Ramandeep Kaur (Deptt. of Electrical Engineering, BBSBEC, Fatehgarh Sahib, raman16deep89@gmail.com)
ABSTRACT
Solar energy is abundant and offers significant potential for near-term (2020) and long-term (2050) climate change mitigation. There is a wide variety of solar technologies of varying maturities that can, in most regions of the world, contribute to a suite of energy services. Even though solar energy generation still represents only a small fraction of total energy consumption, markets for solar technologies are growing rapidly. Much of the desirability of solar technology lies in its inherently smaller environmental burden and the opportunity it offers for positive social impacts. The cost of solar technologies has been reduced significantly over the past 30 years, and technical advances and supportive public policies continue to offer the potential for additional cost reductions. Potential deployment scenarios range widely, from a marginal role for direct solar energy in 2050 to one of the major sources of energy supply. The actual deployment achieved will depend on the degree of continued innovation, cost reductions and supportive public policies.

1. INTRODUCTION
The aim of this paper is to provide a synopsis of the state of the art and possible future scenarios of the full realization of direct solar energy's potential for mitigating climate change. It establishes the resource base, describes the many and varied technologies, appraises current market development, outlines some methods for integrating solar into other energy systems, addresses its environmental and social impacts, and finally evaluates the prospects for future deployment.

Some of the solar energy absorbed by the Earth appears later in the form of wind, wave, ocean thermal, hydropower and excess biomass energies. The scope of this paper, however, does not include these indirect forms; rather, it deals with the direct use of solar energy. Solar energy is an abundant energy resource. Indeed, in just one hour, the solar energy intercepted by the Earth exceeds the world's energy consumption for the entire year. Solar energy's potential to mitigate climate change is equally impressive. Except for the modest amount of carbon dioxide (CO2) emissions produced in the manufacture of conversion devices, the direct use of solar energy produces very little greenhouse gas, and it has the potential to displace large quantities of non-renewable fuels. Solar energy conversion is manifest in a family of technologies having a broad range of energy service applications: lighting, comfort heating, hot water for buildings and industry, high-temperature solar heat for electric power and industry, photovoltaic conversion for electrical power, and production of solar fuels, for example, hydrogen or synthesis gas (syngas). This paper will further detail these technologies. Several solar technologies, such as domestic hot water heating and pool heating, are already competitive and used in locales where they offer the least-cost option. In jurisdictions where governments have taken steps to actively support solar energy, very large solar electricity (both PV and CSP) installations, approaching 100 MW of power, have been realized, in addition to large numbers of rooftop PV installations. Other applications, such as solar fuels, require additional R&D before achieving significant levels of adoption.

In pursuing any of the solar technologies, there is the need to deal with the variability and the cyclic nature of the Sun. One option is to store excess collected energy until it is needed. This is particularly effective for handling the lack of sunshine at night. For example, a 0.1-m thick slab of concrete in the floor of a home will store much of the solar energy absorbed during the day and release it to the room at night. When totalled over a long period of time such as one year, or over a large geographical area such as a continent, solar energy can offer greater service. The use of both these concepts of time and space, together with energy storage, has enabled designers to produce more effective solar systems. But much more work is needed to capture the full value of solar energy's contribution.

2. TECHNOLOGY AND APPLICATIONS
This section discusses technical issues for a range of solar technologies, organized under the following categories: passive solar and daylighting, active heating and cooling, PV electricity generation, CSP electricity generation and solar fuel production. Each subsection also describes applications of these technologies.

2.1 PASSIVE SOLAR AND DAYLIGHTING TECHNOLOGIES
Passive solar energy technologies absorb solar energy, store and distribute it in a natural manner, without using mechanical elements. The term 'passive solar building' is a qualitative term describing a building that makes significant use of solar gain to reduce heating energy consumption based on the natural energy flows of radiation, conduction and convection. The term 'passive building' is often employed to emphasize the use of passive energy flows in both heating and cooling, including redistribution of absorbed direct solar gains and night cooling.

Daylighting technologies are primarily passive, including windows, skylights, and shading and reflecting devices. A worldwide trend, particularly in technologically advanced regions, is toward an increased mix of passive and active systems, such as a forced-air system that redistributes passive solar gains in a solar house, or automatically controlled shades that optimize daylight utilization in an office building. The basic elements of passive solar design are windows, conservatories and other glazed spaces (for solar gain and daylighting), thermal mass, protection elements, and reflectors. Passive technologies are integrated with the building and may include the following components:
a) windows with high solar transmittance and a high thermal resistance,
b) building-integrated thermal storage,
c) daylighting technologies and advanced solar control systems.

In most climates, unless effective solar gain control is employed, there may be a need to cool the space during the summer. However, the need for mechanical cooling may often be eliminated by designing for passive cooling. Passive cooling techniques are based on the use of heat and solar protection techniques, heat storage in thermal mass, and heat dissipation techniques. The specific contribution of passive solar and energy conservation techniques depends strongly on the climate.

2.2 ACTIVE SOLAR HEATING AND COOLING
Active solar heating and cooling technologies use the Sun and mechanical elements to provide either heating or cooling; various technologies are discussed here, as well as thermal storage.

2.2.1 SOLAR HEATING
In a solar heating system, the solar collector transforms solar irradiance into heat and uses a carrier fluid (e.g., water, air) to transfer that heat to a well-insulated storage tank, where it can be used when needed. The two most important factors in choosing the correct type of collector are:
1) the service to be provided by the solar collector, and
2) the related desired range of temperature of the heat-carrier fluid.
An uncovered absorber, also known as an unglazed collector, is likely to be limited to low-temperature heat production.

A solar collector can incorporate many different materials and be manufactured using a variety of techniques. Its design is influenced by the system in which it will operate and by the climatic conditions of the installation location.

Flat-plate collectors are the most widely used solar thermal collectors for residential solar water- and space-heating systems. They are also used in air-heating systems. A typical flat-plate collector consists of an absorber, a header and riser tube arrangement, a transparent cover, a frame and insulation (Figure 2.1). For low-temperature applications, such as the heating of swimming pools, only a single plate is used as an absorber (Figure 2.2).

Figure 2.1: Schematic diagram of thermal solar collectors: glazed flat-plate.
Figure 2.2: Schematic diagram of thermal solar collectors: unglazed tube-on-sheet and serpentine plastic pipe.
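The thermal output of a glazed flat-plate collector of the kind sketched in Figure 2.1 is commonly estimated with the Hottel-Whillier-Bliss relation. This relation is not stated in the text above, and all parameter values below are standard illustrative assumptions rather than figures from this paper.

```python
def flat_plate_useful_gain(area, g, tau_alpha, u_l, t_in, t_amb, f_r):
    """Hottel-Whillier-Bliss useful gain of a glazed flat-plate collector:
    Qu = A * FR * [G * (tau*alpha) - UL * (T_in - T_amb)]."""
    q_u = area * f_r * (g * tau_alpha - u_l * (t_in - t_amb))
    eta = q_u / (g * area)  # fraction of incident solar power delivered as heat
    return q_u, eta

# Assumed values: 2 m2 collector, 800 W/m2 irradiance, transmittance-absorptance
# product 0.8, loss coefficient 5 W/m2K, inlet 30 K above ambient, FR = 0.9.
q_u, eta = flat_plate_useful_gain(area=2.0, g=800.0, tau_alpha=0.8,
                                  u_l=5.0, t_in=50.0, t_amb=20.0, f_r=0.9)
```

Under these assumptions the collector delivers 882 W of heat, roughly 55% of the incident solar power, illustrating why collector choice is tied to the desired fluid temperature: raising the inlet temperature increases the loss term and lowers the useful gain.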
Evacuated-tube collectors are usually made of parallel rows of transparent glass tubes, in which the absorbers are enclosed, connected to a header pipe (Figure 2.3). To reduce heat loss within the frame by convection, the air is pumped out of the collector tubes to generate a vacuum.

Figure 2.3: Schematic diagram of thermal solar collectors: evacuated-tube collectors.

Solar water-heating systems used to produce hot water can be classified as passive or active solar water heaters. There are also active solar cooling systems, which transform the hot water produced by solar energy into cold water.

2.2.2 SOLAR COOLING
Solar cooling can be broadly categorized into solar electric refrigeration, solar thermal refrigeration, and solar thermal air conditioning. In the first category, solar electric compression refrigeration uses PV panels to power a conventional refrigeration machine. In the second category, the refrigeration effect can be produced through solar thermal gain; solar mechanical compression refrigeration, solar absorption refrigeration, and solar adsorption refrigeration are the three common options. In the third category, the conditioned air can be directly provided through solar thermal gain by means of desiccant cooling. Both solid and liquid sorbents are available, such as silica gel and lithium chloride, respectively.

Solar electrical air-conditioning is powered by PV panels. Solar thermal air-conditioning consists of solar heat powering an absorption chiller, and it can be used in buildings. Closed heat-driven cooling systems using these cycles have been known for many years and are usually used for large capacities of 100 kW and greater. The physical principle used in most systems is based on the sorption phenomenon. Two technologies are established to produce thermally driven low- and medium-temperature refrigeration: absorption and adsorption. Open cooling cycle systems are mainly of interest for the air conditioning of buildings; they can use solid or liquid sorption.

2.2.3 THERMAL STORAGE
Thermal storage within thermal solar systems is a key component to ensure reliability and efficiency. Four main types of thermal energy storage technologies can be distinguished: sensible, latent, sorption and thermo-chemical heat storage.

Sensible heat storage systems use the heat capacity of a material. The vast majority of systems on the market use water for heat storage. Water heat storage covers a broad range of capacities, from several hundred litres to tens of thousands of cubic metres.

Latent heat storage systems store thermal energy during the phase change, either melting or evaporation, of a material. Depending on the temperature range, this type of storage is more compact than heat storage in water. Melting processes have energy densities of the order of 100 kWh/m3 (360 MJ/m3), compared to 25 kWh/m3 (90 MJ/m3) for sensible heat storage. Most current latent heat storage technologies for low temperatures store heat in building structures to improve thermal performance, or in cold storage systems. For medium-temperature storage, the storage materials are nitrate salts. Pilot storage units in the 100-kW range currently operate using solar-produced steam.

Sorption heat storage systems store heat in materials using water vapour taken up by a sorption material. The material can either be a solid (adsorption) or a liquid (absorption). These technologies are still largely in the development phase, but some are on the market. In principle, sorption heat storage densities can be more than four times higher than sensible heat storage in water.

Thermo-chemical heat storage systems store heat in an endothermic chemical reaction. Some chemicals store heat 20 times more densely than water (at a ΔT ≈ 100°C), but more typically the storage densities are 8 to 10 times higher. Few thermo-chemical storage systems have been demonstrated. The materials currently being studied are salts that can exist in anhydrous and hydrated form. Thermo-chemical systems can compactly store low- and medium-temperature heat.

2.3 PHOTOVOLTAIC ELECTRICITY GENERATION
Photovoltaic (PV) solar technologies generate electricity by exploiting the photovoltaic effect. Light shining on a semiconductor such as silicon (Si) generates electron-hole pairs that are separated spatially by an internal electric field created by introducing special impurities into the semiconductor on either side of an interface known as a p-n junction.
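The storage densities quoted in Section 2.2.3 translate directly into tank sizes. In the sketch below, the 10-kWh overnight load and the roughly 21.5 K water temperature swing are assumed illustrative figures, chosen to be consistent with the 25 kWh/m3 (90 MJ/m3) sensible density quoted in the text.

```python
def storage_volume_m3(energy_kwh, density_kwh_per_m3):
    """Volume needed to hold a given amount of heat at a given energy density."""
    return energy_kwh / density_kwh_per_m3

LOAD_KWH = 10.0  # assumed overnight heat demand

# Energy densities quoted in Section 2.2.3.
v_water = storage_volume_m3(LOAD_KWH, 25.0)    # sensible storage in water, m3
v_latent = storage_volume_m3(LOAD_KWH, 100.0)  # latent (melting) storage, m3

# Cross-check of the water figure from Q = m * c * dT:
# 0.4 m3 of water (~400 kg, c = 4186 J/kgK) swung through ~21.5 K holds about
q_kwh = 400 * 4186 * 21.5 / 3.6e6  # ~10 kWh
```

The comparison reproduces the factor-of-four compactness advantage the text claims for latent over sensible storage: 0.1 m3 of phase-change material versus 0.4 m3 of water for the same load.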
This creates negative charges on one side of the interface and positive charges on the other side (Figure 2.4). This resulting charge separation creates a voltage. When the two sides of the illuminated cell are connected to a load, current flows from one side of the device via the load to the other side of the cell. The conversion efficiency of a solar cell is defined as the ratio of the output power of the solar cell per unit area (W/cm2) to the incident solar irradiance. The maximum potential efficiency of a solar cell depends on the absorber material properties and device design. One technique for increasing solar cell efficiency is the multi-junction approach, which stacks specially selected absorber materials, each collecting solar photons of different wavelengths so that more of the solar spectrum is captured.

Figure 2.4: Generic schematic cross-section illustrating the operation of an illuminated solar cell.

PHOTOVOLTAIC APPLICATIONS
Photovoltaic applications include PV power systems classified into two major types: those not connected to the traditional power grid (i.e., off-grid applications) and those that are connected (i.e., grid-connected applications). In addition, there is a much smaller, but stable, market segment for consumer applications.

Off-grid PV systems have a significant opportunity for economic application in the un-electrified areas of developing countries. Figure 2.5 shows the ratio of various off-grid and grid-connected systems in the Photovoltaic Power Systems (PVPS) Programme countries. Of the total capacity installed in these countries during 2009, only about 1.2% was installed in off-grid systems, which now make up 4.2% of the cumulative installed PV capacity of the IEA PVPS countries.

Off-grid centralized PV mini-grid systems have become a reliable alternative for village electrification over the last few years. In a PV mini-grid system, energy allocation is possible. For a village located in an isolated area and with houses not separated by too great a distance, the power may flow in the mini-grid without considerable losses. Centralized systems for local power supply have different technical advantages concerning electrical performance, reduction of storage needs, availability of energy, and dynamic behaviour. Centralized PV mini-grid systems could be the least-cost option for a given level of service, and they may have a diesel generator set as an optional balancing system or operate as a hybrid PV-wind-diesel system. These kinds of systems are relevant for reducing and avoiding diesel generator use in remote areas.

Figure 2.5: Historical trends in cumulative installed PV power of off-grid and grid-connected systems in the OECD countries. Vertical axis is in peak megawatts.

CONCLUSIONS
Potential deployment scenarios range widely, from a marginal role for direct solar energy in 2050 to one of the major sources of energy supply. Although direct solar energy provided only a very small fraction of global energy supply in 2011, it has the largest technical potential of all energy sources and, in concert with technical improvements and resulting cost reductions, could see dramatically expanded use in the decades to come. Achieving continued cost reductions is the central challenge that will influence the future deployment of solar energy. Cost reduction, meanwhile, can only be achieved if the solar technologies move down their learning curves, which depends in part on the level of solar energy deployment. In addition, continuous R&D efforts are required to ensure that the slopes of the learning curves do not flatten before solar is widely cost-competitive with other energy sources. The true costs of and potential for deploying solar energy are still unknown, because the main deployment scenarios that exist today often consider only a single solar technology: PV. In addition, scenarios often do not account for the co-benefits of a renewable/sustainable energy supply. At the same time, as with some other forms of RE, issues of variable production profiles and energy market integration, as well as the possible need for new transmission infrastructure, will influence the magnitude, type and cost of solar energy deployment.
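As a worked example of the conversion-efficiency definition given in Section 2.3, the following sketch uses assumed illustrative numbers (a 1 m2 module delivering 200 W under the common 1000 W/m2 rating irradiance); these figures are not from the text.

```python
def conversion_efficiency(p_out_w, area_m2, irradiance_w_m2):
    """Solar cell efficiency: electrical output power over incident solar power."""
    return p_out_w / (area_m2 * irradiance_w_m2)

# 200 W out of the 1000 W falling on 1 m2 gives a 20% conversion efficiency.
eta = conversion_efficiency(p_out_w=200.0, area_m2=1.0, irradiance_w_m2=1000.0)
```

The same ratio applies per unit cell area, which is why the text states the definition in W/cm2: the area cancels between output power density and irradiance.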
Design and Optimization of DGS based T-Stub Microstrip Patch Antenna for Wireless Applications

Lalit Dhiman (Research Scholar, lalitdhiman1111@gmail.com) and Simerpreet Singh (EE Department, BGIET, Sangrur, s.simerpreet@gmail.com)
ABSTRACT
A compact microstrip patch antenna with a wide operational bandwidth is presented. The proposed design consists of a ring-shaped rectangular patch antenna with U-slots cut in the ground. The antenna is fed by a microstrip line. The performance of the rectangular patch antenna is discussed and analyzed by modifying the width and length of the patch dimensions given in the previous work [1]. The proposed antenna has good bandwidth, gain and return loss in the frequency band between 3.5 and 5 GHz. At the resonant frequency of 2 GHz the antenna has a bandwidth of 10% and a return loss of up to -44 dB, which are good compared with the reference results. The proposed antenna has been analyzed using IE3D, and simulated results are presented in terms of bandwidth, gain and return loss at different frequencies.

Keywords: Microstrip line, DGS U-shape, ring patch antenna, slot cutting.

1. INTRODUCTION
In today's world of wireless communication, developments in the wireless communication industry continue to drive the requirement for small, compatible and affordable microstrip patch antennas. A patch antenna is a narrowband, wide-beam antenna fabricated by etching the antenna element pattern in a metal trace bonded to an insulating dielectric substrate, such as a printed circuit board, with a continuous metal layer bonded to the opposite side of the substrate to form a ground plane. Common microstrip antenna shapes are square, rectangular, circular and elliptical, but any continuous shape is possible. Some patch antennas do not use a dielectric substrate and are instead made of a metal patch mounted above a ground plane using dielectric spacers; the resulting structure is less rugged but has a wider bandwidth. Because patch antennas have a very low profile, are mechanically rugged and can be shaped to conform to the curving skin of a vehicle, they are often mounted on the exterior of aircraft and spacecraft or incorporated into mobile radio communications devices. Microstrip antennas are the best choice for wireless devices because of characteristics like low profile, low weight, ease of fabrication and low cost, and because it is common practice to combine several radios into one wireless device and use a single antenna. Microstrip antennas suffer, however, from disadvantages such as low bandwidth and gain. For obtaining multiband and wideband characteristics, different techniques have been used, such as cutting slots in the patch, fractal geometry and defected ground structures (DGS). In order to increase bandwidth, a DGS has been used here. A DGS may be realized by cutting a shape, simple or complex, from the ground plane. When a DGS is applied to an antenna, the equivalent inductance due to the DGS increases, and this causes a high effective dielectric constant; hence the bandwidth is reduced. It is to be noted that, within a particular area of the ground, different DGS shapes can produce different resonant frequencies and different bandwidths. In this paper two radiating U-slots have been cut out of the ground plane. Hence new resonances, along with effective current paths, are generated in the ground plane, and as a result wideband characteristics are obtained.

2. LITERATURE SURVEY
A wideband antenna [1] was obtained for mobile communication; the radiation patch and ground of the proposed antenna were considerably modified. With two modified U-slots in the ground plane and a modified ring-shaped radiation patch using a T-stub, a wide frequency band can be achieved. By the introduction of variable slots in the patches of the antenna [2], there is less chance of mutual coupling between adjacent elements, and it was shown that an enhanced bandwidth was achieved. The incorporation of U-slots in the patch [3] can provide a wider bandwidth than a conventional patch antenna by placing a variable capacitor and an inductor at the antenna input. A frequency-tunable microstrip antenna [4] was presented by adding a U-slot on the patch. This antenna has a planar compact structure, so it can be incorporated into wireless terminals easily; it had good impedance matching at the resonant frequency, and the measured return loss reached -43 dB in the operating frequency band. A double L-slot microstrip antenna for WiMAX and WLAN applications was proposed in [5]. The coplanar-waveguide-fed design comprises two rectangular patch elements, each embedded with two L-slots. That design results in a reduction in size and weight and further allows easy integration in hand-held devices. The parametric study of the considered design showed that the radiation pattern, return loss, voltage standing wave ratio and gain were optimized within the band of operation. A novel tri-frequency monopole antenna [6] for multiband operation was proposed; for achieving bandwidth enhancement, a defected ground structure was used, with a rectangular patch with dual inverted-L-shaped strips. The design found its application in WiMAX and WLAN. A single-patch beam-steering antenna with a U-slot [7] was designed and fabricated; simulated results proved that the proposed antenna was able to steer the maximum beam direction in the y-z plane. The structure of the antenna proposed in [8] was based on a two-layer stacked ECMSA. The radiation patch of the ECMSA was loaded with gaps and stubs to disturb the surface electric current for the sake of exciting multiple modes; the impedance bandwidth increased up to 36%. The antenna proposed in [9] was a U-shaped square patch combined with two parasitic tuning stubs, fed by a coplanar waveguide (CPW). The dimensions of the antenna parameters were introduced, and their effects on the frequency characteristics were investigated through a parametric study. Simulations and results indicate that the antenna achieved an ultra-wideband impedance bandwidth (S11 < -10 dB) as high as 129%. The radiation patterns of the antenna were measured and presented. A gain range of 1.6 to 3 dB against frequency had been
obtained. These characteristics make the antenna suitable for UWB applications. A conventional L-probe-fed microstrip patch antenna [10] was modified so that it may be more easily fabricated with its own good features preserved, instead of bending a probe into the L-shape. The measured results showed that the proposed antenna has a relatively large bandwidth, like a conventional L-probe design. Although the positions of the feed and the patch were inverted, this was found to have little effect on the proposed antenna. A microstrip patch antenna using a defected ground structure (DGS) to suppress higher-order harmonics was presented [11]. An H-shaped defect on the ground plane, with only one or more unit lattices, was utilized and yielded band-stop characteristics. Compared with a conventional microstrip patch antenna without the DGS unit cell, the radiated power of the DGS patch antenna at harmonic frequencies was decreased. The main objective of the book [12] is to introduce, in a unified manner, the fundamental principles of antenna theory and to apply them to the analysis, design and measurement of antennas; because there are so many methods of analysis and design of antenna structures, applications are made to some of the most basic and practical configurations, such as dipole antennas, fractal geometry antennas, microstrip patch antennas, horn antennas and reflector antennas.

3. ANTENNA DESIGN
In this paper a rectangular-shaped ring patch antenna is proposed. Return loss, bandwidth, gain and directivity are obtained after simulating the reference antenna [1]. The slot antenna presented in this paper has good bandwidth and return loss compared to [1]. The design of the conventional antenna is shown in Fig 1(a). The antenna has a 20 mm (y-axis) x 8 mm (x-axis) rectangular patch. The dielectric material selected for this design has a dielectric constant of 4 and a substrate height of 1 mm. According to the reference paper [1], 20x8 mm2 is a very compatible patch size (Fig 1). The antenna has been designed using the transmission line model, which is the most accurate method. The procedure [12] to obtain the dimensions of the patch and ground, which is followed and modified in the reference design [1], is as follows:

Step 1: Determination of the width (W). The width of the microstrip patch antenna is given by
W = (c / 2f0) * sqrt(2 / (εr + 1))    (1)

Step 2: Calculation of the effective dielectric constant εeff, which is given by
εeff = (εr + 1)/2 + ((εr - 1)/2) * (1 + 12h/W)^(-1/2)    (2)

Step 3: Calculation of the length extension ∆L, which is given by
∆L = 0.412h * ((εeff + 0.3)(W/h + 0.264)) / ((εeff - 0.258)(W/h + 0.8))    (3)

Step 4: Calculation of the length of the patch,
L = Leff - 2∆L    (4)
where the effective length of the patch is Leff = c / (2f0 * sqrt(εeff)).

Step 5: Calculation of the ground dimensions.

In this paper the dimensions of the patch are the same as in the reference paper [1], and work has been carried out to modify the patch structure to obtain improved results with respect to return loss and bandwidth at the same frequency.

4. PARAMETRIC ANALYSIS

4.1 Basic Design with Microstrip Line
The geometry of the basic design antenna is a rectangular-shaped design of compatible size. It is a conventional design simulated in the IE3D software. It has a rectangular patch and an FR-4 substrate, and it is fed by a microstrip line for IMT/broadband applications, as depicted in Fig 1. The dimensions of the designed antenna with microstrip line feed, the substrate height and the loss tangent are all mentioned in Table 1. These dimensions are optimized in order to obtain the best results. This antenna has very small dimensions.

Table 1: Dimensions of the reference rectangular patch antenna
Subject                Dimensions
Ground size            34×20 mm2
Patch size             24×8 mm2
Loss tangent (tan δ)   0.02
Feed line size         5×2 mm2
Substrate used         FR4
Thickness              1 mm
Feeding technique      Microstrip line feed

It is observed from Fig 1, shown below, that the conventional antenna follows rectangular characteristics.

Fig 1: Conventional geometry of antenna

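The four design steps above can be evaluated directly. The following is only an illustrative sketch: the input values (fr = 4.7 GHz, εr = 4.4, h = 1 mm) are taken from the text, and the computed width and length are the textbook starting point of the transmission line model, not the optimized dimensions reported in Table 1.

```python
import math

C = 3e8  # velocity of light (m/s)

def patch_dimensions(fr_hz, eps_r, h_m):
    """Transmission-line-model estimate of a rectangular patch, steps (1)-(4)."""
    # Step 1: patch width
    w = C / (2 * fr_hz) * math.sqrt(2 / (eps_r + 1))
    # Step 2: effective dielectric constant
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / w) ** -0.5
    # Step 3: fringing-field length extension
    dl = 0.412 * h_m * ((eps_eff + 0.3) * (w / h_m + 0.264)) / \
         ((eps_eff - 0.258) * (w / h_m + 0.8))
    # Step 4: patch length
    l = C / (2 * fr_hz * math.sqrt(eps_eff)) - 2 * dl
    return w, l, eps_eff

# Illustrative inputs from the text: fr = 4.7 GHz, eps_r = 4.4, h = 1 mm
w, l, eps_eff = patch_dimensions(4.7e9, 4.4, 1e-3)
print(round(w * 1e3, 1), round(l * 1e3, 1))  # 19.4 15.0 (mm)
```

For these inputs the sketch gives a starting width of about 19.4 mm and length of about 15.0 mm, which the designs above then modify by slotting and optimization.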
632
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

4.2 Reference Design of Antenna

In this design, a rectangular ring-shaped patch antenna is taken and a T-stub has been applied to the rectangular ring patch [1]. Return loss, bandwidth, gain and directivity are analyzed after simulating the proposed antenna in the IE3D environment. Since the microstrip patch antenna has various disadvantages, different techniques such as reduction in patch size, DGS and slot cutting on the patch have been applied to overcome them. In this design, T-stub geometry has been applied on the rectangular ring patch, and DGS is applied on the ground plane by cutting two U-slots. Further variations in the patch dimensions are made and the results are compared. In this design a rectangular patch of 24 mm length is taken; by cutting slots, the rectangular ring-shaped patch antenna has been formed. The slot antenna presented here has good bandwidth and return loss as compared to the reference antenna. The design of the conventional antenna is shown in Fig 1. The antenna has rectangular patch dimensions of 20 mm along the y-axis and 8 mm along the x-axis. The dielectric material FR-4 is selected for this design, with dielectric constant 4.4 and substrate height of 1 mm. As per the patch dimensions, 20 × 8 mm2 is a very compact patch size, as shown in Fig 2. The proposed antenna has been designed using the transmission line model, which is the most accurate method.

Fig 2: Reference Design of Antenna

Table 2 shows the dimensions of the antenna feed line along with the dimensions of the patch. These dimensions have been modified to obtain improved results in respect of return loss and bandwidth at the same frequency.

Table 2: Dimensions of Antenna Feed Line with Dimensions of Patch

    Variable                   Value
    Length of patch            24 mm
    Width of patch             8 mm
    Length of ground           34 mm
    Width of ground            20 mm
    Thickness of substrate     1 mm
    U-slots DGS                24 mm along y-axis, 9 mm along x-axis
    Substrate used             FR4-epoxy
    Feed point                 (0, 17, 0)

4.3 Effect of Adding Length in Patch Strip of Antenna

A microstrip patch antenna has been designed which operates at resonant frequencies of 3.5 GHz and 4 GHz, to get better bandwidth as well as better return loss as compared to the bandwidth and return loss found in the reference design. The proposed design has also been made in such a way that it can operate over a wide range of frequencies. In order to achieve a dual band antenna with a larger bandwidth as well as a better return loss, the ground has been etched with two U-slots and the patch has been designed in the shape of a T-stub in a ring-shaped patch. A square slot cavity has been filled with a strip 7 mm long and 2.2 mm wide. By doing this, the area above the stub strip increases by 7 mm; the T bends have been removed, and this antenna is fed by a microstrip line as shown in Fig 3. Further, the antenna parameters have been optimized to get the best possible result. In this design, no change in the DGS is made. The geometry of the proposed antenna is shown in Fig 3.

Fig 3: Stub Enlarged to 7 mm up to Microstrip Line Antenna

4.4 Effect of Decreasing Patch Strip Length of Wide Band Antenna

In this design, the dual band MSA has been designed by cutting a slot on the patch and hence decreasing the patch strip length. This causes the antenna to resonate at the 4.2 GHz frequency band. The design specifications of the antenna are given in Table 3. Results are analyzed by changing different parameters such as slot length and width.

Table 3: Parameters of Wideband MSA

    Variable                                                  Value
    Height                                                    1 mm
    T-stub rectangle enlarged to 3.5 mm from centre of T (a)  3.5 mm
    T-stub enlarged to 3.5 mm from strip line side (b)        3.5 mm

Fig 4: (a) T-Stub rectangular enlarged to 3.5 mm from centre of T Antenna; (b) T-Stub enlarged to 3.5 mm from strip line side Antenna

The proposed antenna can efficiently radiate at the two central frequencies with a larger bandwidth as well as a better return loss. In Design (a), the patch has been etched in the


shape of a T-stub in a ring-shaped patch, and the area below the Microstrip Line (MSL) has been cut by 3.5 mm. In Design (b), the patch has been etched in the shape of a T-stub in a ring-shaped patch and the area above the T-stub strip has been cut by 3.5 mm; above the T-stub, in place of the square slot cavity, the geometry looks like a U-slot, as shown in Fig 4(b). After that, the antenna parameters have been optimized to get the best possible result. In these two configurations, no change in the DGS shape has been made.

5. RESULTS AND DISCUSSION

In this paper the results of the proposed rectangular microstrip patch antenna designs are discussed. Results are analyzed for a rectangular ring microstrip patch antenna with a T-stub using DGS. Further, the effects of varying different patch parameters, such as strip width and length, are compared. All simulations were carried out in the IE3D electromagnetic simulation engine.

5.1 Conventional Rectangular Microstrip Patch Antenna

In this section the simulation results of the conventional microstrip patch antenna are discussed. The simulated design has no slotting on the patch and no defected ground. Fig 6 shows the S-parameter plot of the design.

Fig 6: S-parameter Plot of Conventional Antenna

It is observed from Fig 6 that the conventional MSA resonates at 4.7 GHz with -17.99 dB return loss and a bandwidth of 370 MHz.

Fig 7: VSWR of Conventional MSA

From Fig 7 it is observed that the conventional MSA has a VSWR less than two but greater than one at the resonance frequency.

It is observed from Fig 8 that the gain of the conventional antenna at its resonant frequency of 4.7 GHz is 4.33 dBi.

Fig 8: Gain of Conventional MSA

5.2 Rectangular Ring Patch MSA with T-Stub and U-Slots DGS

In this section the results of the rectangular ring patch MSA are discussed. This antenna has a T-stub in the ring patch and a double U-slot, as shown in Fig 2, and is designed using the procedure given in [5]. This design has the same geometry as given in [1]. Fig 9 shows the S-parameter plot of the corresponding antenna.

Fig 9: S-Parameter of Reference Design

As can be seen from the simulation results, the resonance frequency of this antenna has been reduced as compared to the previous conventional antenna design. The antenna resonates at 4.25 GHz with -17.44 dB return loss and a bandwidth of 860 MHz; the bandwidth is 60% improved as compared to the previous design. The results show a fluctuating waveform away from the resonating frequency.

From Fig 10 it is observed that the gain of the reference antenna at 4.25 GHz is 2.81 dBi. From the simulation it is seen that the gain of the antenna has reduced as compared to the 4.33 dBi gain of the previous conventional MSA.
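The VSWR observation and the bandwidth-improvement percentages quoted in these results can be cross-checked numerically, since VSWR follows from return loss via |Γ| = 10^(RL/20) and VSWR = (1 + |Γ|)/(1 − |Γ|). A minimal sketch, using only figures reported in this paper (see Table 4):

```python
def vswr_from_return_loss(rl_db):
    """VSWR corresponding to a return loss quoted as a negative dB value."""
    gamma = 10 ** (rl_db / 20)        # reflection-coefficient magnitude
    return (1 + gamma) / (1 - gamma)

def bw_improvement_pct(new_mhz, ref_mhz):
    """Percent bandwidth improvement of one design over a reference."""
    return 100 * (new_mhz - ref_mhz) / ref_mhz

# A -17.99 dB return loss implies a VSWR between 1 and 2, as seen in Fig 7
print(round(vswr_from_return_loss(-17.99), 2))   # 1.29

# 970 MHz final design vs the 860 MHz reference and 370 MHz conventional designs
print(round(bw_improvement_pct(970, 860), 1))    # 12.8, reported as "up to 12%"
print(round(bw_improvement_pct(970, 370), 1))    # 162.2, reported as "up to 160%"
```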


Fig 10: Gain of Reference MSA

5.3 Dual Band Ring Patch Antenna

To achieve improved results, the patch has been etched in the shape of a T-stub in a ring-shaped patch and the area above the stub strip increased by 7 mm: a square slot cavity has been filled with a strip 7 mm long and 2.2 mm wide. The T-joint has been removed, the ground has been etched with two U-slots, and the antenna is fed by a microstrip line as shown in Fig 3. The S-parameter plot is given in Fig 11.

Fig 11: S-Parameter of Dual Band Full Strip Length Antenna

Fig 11 shows the optimized S-parameter plot of this antenna. The simulated antenna has better results as compared to the previous simulation results. The antenna resonates at 3.5 GHz with -28.10 dB return loss and 850 MHz bandwidth, and at 4 GHz with -26.54 dB return loss and a bandwidth of 300 MHz. The results show that the bandwidth in this case has improved as compared to the conventional design, with multiple bands: this antenna resonates at dual bands.

Fig 12: Gain Plot of Dual Band Ring Patch Antenna (a) at 3.5 GHz (b) at 4 GHz

Fig 12 shows the gain of the dual band ring patch antenna. This antenna has a gain of 1.006 dBi at 3.5 GHz and 3.78 dBi at 4 GHz. It can be seen from the simulation results that the gain of this antenna has been reduced as compared to the previous conventional MSA, so a new idea was needed to increase the gain.

5.4 Decreasing Patch Strip Length of Wide Band Microstrip Patch Antenna

In this section a wide band MSA has been designed in which the patch has been etched in the shape of a T-stub in a ring-shaped patch, the area below the Microstrip Line (MSL) has been cut by 3.5 mm, and the antenna ground has been etched with two U-slots, as shown in Fig 4(a). This antenna has a resonating band from 3.63 GHz to 4.38 GHz.

Fig 13: S-Parameter of T-Stub Rectangular Enlarged to 3.5 mm from Centre of T (a)

Fig 13 shows the optimized S-parameter plot of the T-stub rectangle enlarged to 3.5 mm from the centre of the T, as shown in Fig 4(a). The antenna has a resonant band from 3.63 GHz to 4.38 GHz with -17.19 dB return loss and a bandwidth of 750 MHz. The bandwidth performance has decreased as compared to the previous design; the antenna now resonates in a band of 750 MHz.

Fig 14: Gain Plot of T-Stub Rectangular Enlarged to 3.5 mm from Centre of T (a) at 2 GHz

It is observed from Fig 14 that this antenna has a gain of 2.50 dBi at 2 GHz. In the design of Fig 4(b), the patch has been etched in the shape of a T-stub in a ring-shaped patch and the area above the T-stub strip has been cut by 3.5 mm, while the ground remains the same; above the T-stub, in place of the square slot cavity, the geometry looks like a U-slot, as shown in Fig 4(b). This antenna has a resonating band of 3 GHz to 5 GHz. The resulting S-parameter plot is shown in Fig 15.


Fig 15: S-Parameter of T-Stub Enlarged to 3.5 mm from Strip Line Side Antenna (b)

It is observed from Fig 15 that this antenna gives better results. The antenna resonates at 4.2 GHz with -44 dB return loss and a bandwidth of 970 MHz. The bandwidth has improved by up to 12% as compared to the reference design (860 MHz, Fig 9) and by up to 160% as compared to the first basic design (370 MHz, Fig 6). With its resonant frequency of 4.2 GHz, this antenna finds application in International Mobile Telecommunications (IMT) standards. The return loss is also minimized.

Fig 16: Gain Parameter of T-Stub Enlarged to 3.5 mm from Strip Line Side Antenna (b)

Fig 16 shows the gain of the T-stub enlarged to 3.5 mm from the strip line side. At a resonant frequency of 2 GHz, a gain of 2.507 dBi is obtained. The difference between the gains of all the antenna designs shows that the gain does not change much.

Fig 17: Comparison of Return Loss of all Designed Antennas

It is observed from Fig 17 that the return loss of all the simulated antennas is compared with that of the conventional and reference antennas. Fig 17 compares the bandwidth and return loss of all the newly designed antennas with the reference antenna, with the return loss curves distinguished by colour.

Table 4: Optimization of Parametric Analysis of Slotted Proposed Antenna

    Antenna Design            Resonant Frequency (GHz)   Return Loss (dB)   Gain (dBi)   Bandwidth (MHz)
    Conventional Design       4.7                        -17.99             4.33         370
    Reference                 4.25                       -17.44             2.68         860
    7 mm increase             3.5                        -26.54             1.006        850
                              4.4                        -28                3.78         300
    3.5 mm cut below MSL      3.6                        -16                2.40         750
                              4.2                        -17.50             2.77
    3.5 mm cut above T-stub   4.2                        -44.4              2.50         970

6. CONCLUSION

A novel compact microstrip ring patch antenna, fed by a microstrip line, is presented for microwave applications. The microstrip ring patch consists of a T-stub and a U-slot designed on the patch. The patch has a total size of 24 × 8 mm2. The measured return loss indicates that the antenna exhibits wideband characteristics. The bandwidth characteristics of the antenna with respect to its geometrical parameters are investigated. The proposed antenna shows an impedance bandwidth as high as 970 MHz.

REFERENCES
1. Kaushik Sumit, Dhillon S. and Marwah A., 2013. "Rectangular Microstrip Patch Antenna with U-shaped DGS Structure for Wireless Applications", IEEE International Conference CICN 2013.
2. Mizamohammadi Farnaz, Nourinia Javad and Ghobadi Changiz, 2012. "A Novel Dual-Wide Band Monopole-Like Microstrip Antenna with Controllable Frequency Response", IEEE Antennas and Wireless Propagation Letters, Vol. 11, 2012.
3. Ismail M.Y., Inam M., Zain A.F.M., Mughal M.A., 2010. "Phase and Bandwidth Enhancement of Reconfigurable Reflectarray Antenna with Slots Embedded Patch", IEEE Antennas and Wireless Propagation Letters, Vol. 7, 2010.
4. Yang Steven, Kishk Ahmad A. and Lee Kai Fong. "Frequency Reconfigurable U-slot Microstrip Patch Antenna", IEEE Antennas and Wireless Propagation Letters, Vol. 7, pp 127-129.
5. Nagarajan V., Chita R. Jothi. "Double L-Slot Microstrip Patch Antenna for WiMAX and WLAN Applications", Department of Electronics and Communication


Engineering, Adhiparasakhi Engineering College, Chennai, India.
6. Wen Leng, Aan-guo Wang, Xiao-tao Cai. "Novel Radiation Pattern Reconfigurable Antenna with Six Beam Choices", ScienceDirect Journals, April 2012, pp 123-128.
7. Liu Wen-Chung, Wu Chao-Ming and Dai Yang, 2011. "Design of Triple-Frequency Microstrip-Fed Monopole Antenna Using Defected Ground Structure", IEEE Transactions on Antennas and Propagation, Vol. 10, pp 2457-2463.
8. Jung Won Chang, Ha Sangjun. "Single Beam Steering Antenna With U-Slot", © 2011 Crown.
   Chen Wen-Ling, Wang Guang-Ming, and Zhang Chen-Xin, 2009. "Bandwidth Enhancement of a Microstrip-Line-Fed Printed Wide-Slot Antenna With a Fractal-Shaped Slot", IEEE Transactions on Antennas and Propagation, Vol. 59, No. 7, pp 2176-2179.
9. Golpour M., Koohestani M. "U-shaped Microstrip Patch Antenna with Parasitic Tuning Stubs for Ultra Wide Band Applications", IET Microwaves, Antennas & Propagation. Revised 15 Feb 2009.
10. Baik Seung-hun, Park Jongkuk, Na Hyung-gi. "Design of L-probe Microstrip Patch Antenna", IEEE Antennas and Wireless Propagation Letters, Vol. 3, 2004, pp 117-120.


Feasibility Study of Hybrid Power Generation using Renewable Energy Resources in Tribal Mountainous Region of Himachal Pradesh

Umesh C. Rathore, Ved Prakash Verma, Vikas Kashyap
ABVGIE&T Pragatinagar, Shimla (HP), India
rathore7umesh@gmail.com, ved202verma@gmail.com, vikas907@gmail.com

ABSTRACT
Electrical energy plays an important role in the development of a nation. In a country like India, more than 70% of the total population lives in villages and remote areas. Providing reliable and clean power to these inaccessible areas is a challenging task. This paper presents a feasibility study on the use of hybrid power generation using the various types of renewable energy resources available in the tribal region of Himachal Pradesh, which lies in the north-western part of the Himalayan mountain range, where living conditions are very harsh due to extreme cold and tough mountainous terrain. Based on the current electrical power scenario and the available data on renewable energy resources in the region, this paper explores the possibility of hybrid power generation and the use of a microgrid in maintaining the reliability of electrical power in this part of the world.

General Terms
Hybrid power generation; renewable energy resources.

Keywords
Microgrid; small hydro; wind energy; photovoltaic cell; storage batteries; induction generators.

1. INTRODUCTION
The present world energy scenario depicts a picture of concern due to the adverse effects on the environment caused by the production and consumption of energy across the globe. There is an imbalance of energy consumption around the world: energy consumption is high in most developed countries, while the developing countries need to consume more energy to ensure economic growth. The world is gradually marching towards a severe energy crisis, with the ever-increasing demand for energy overstepping its supply. To tackle the energy crisis and to meet the ever-increasing demand for energy in our day-to-day lives, we need to understand the importance of energy, minimize the use of oil, gas and other conventional energy sources, and take alternative steps to explore the use of non-conventional sources of energy along with efficient use of energy under energy management programs. With the rapid advancement in technology, the world is moving towards the harnessing of alternate sources of energy such as wind, micro/pico hydro, geothermal, tidal, biomass and ocean thermal energy. A renewable energy system [1] converts the energy found in sunlight, wind, falling water, sea waves, geothermal heat and biomass into a form we can use, such as heat or electricity. Most renewable energy comes either directly or indirectly from the sun and wind and can never be exhausted; therefore, these are called renewable sources of energy. These alternate energy resources can play a crucial role in providing electricity to the majority of the world population living in remote locations where electrical energy from conventional sources cannot be provided due to economic constraints. About 20% of global final energy consumption comes from renewable sources. The exploration and use of new renewable sources such as small hydro, modern biomass, wind, solar, geothermal and biofuels are growing very rapidly. The share of renewables in electricity generation is around 20%, with 16% of global electricity coming from hydro power and about 3% from new renewable energy sources. Wind power is growing at the rate of 30% annually, with a worldwide installed capacity of about 200 GW, and is widely used in Europe, America and Asia. Emphasis is also being given worldwide to photovoltaic electricity generation. Renewable energy conversion technologies are also suited to rural and remote areas, where energy is crucial for human development. All over the world, small PV-based solar cells and micro-hydro and wind based small power plants contribute a lot to the electrification of rural areas [2]. Climate change concerns, along with high oil prices and increasing government support, are driving renewable energy legislation, incentives and commercialization of these renewable energy sources.

Hydro energy is the most reliable and cost-effective renewable energy source. Small hydro power systems play a major role in meeting the power requirement of remote and isolated hilly areas in a de-centralized manner. Small hydro power plants such as pico and micro-hydro are environment friendly, have relatively short gestation periods and require small investment as compared to large hydro power plants, which require huge capital investment. An estimated potential of more than 2,00,000 MW of small hydro exists in the world. India has a history of more than 100 years in small hydro. An estimated potential of about 15,000 MW of small hydro exists in India, out of which nearly 6000 MW has actually been identified through more than 2000 sites across India; most of these sites are located in the Himalayan region, with the majority of the small hydro power plants located in the states of Himachal Pradesh, Uttarakhand and Jammu & Kashmir. However, due to the effect of global warming on the overall environment around the globe and the resulting erratic rainfall, the output of these small hydro power plants is not consistent. To maintain reliability and consistency in output power, the implementation of hybrid generation in such situations will be beneficial and cost effective. The best-suited alternative in such cases will be to harness the wind energy potential in such areas along with


small hydro, especially in remote locations which are not connected to the grid. Integration of wind energy and hydro power [3] is the best alternative for maintaining the reliability of power and utilizing the best possible transmission capacity. In this paper, the concept of distributed generation using available renewable energy sources such as wind energy, photovoltaics and biomass, along with small hydro, in the context of remote locations in the mountainous region has been explored and suggested to provide reliable electrical power to this region.

2. CONCEPT OF MICROGRID & DISTRIBUTED GENERATION
Microgrid-type technologies can play a crucial role in reliably providing electricity to the majority of the population living in remote areas [4]. Connecting a remote community to the conventional power grid is expensive and can take a long time, with much-escalated cost, due to the geographical conditions and economic constraints. Hybrid microgrids can provide dependable electricity by intelligently combining power from multiple local sources such as wind, micro-hydro, photovoltaics and biomass. Microgrids provide reliable and good-quality energy services to off-grid communities living in remote areas. The major constraint in microgrids is the energy storage devices, i.e., the batteries. Photovoltaic sources or solar panels, low-speed wind energy conversion systems and diesel generators can last for long periods, but the energy storage batteries fail much more quickly, which escalates cost due to their reduced lifetime. Therefore, in the design of microgrids, provision for diesel generators is also made, as it is sometimes cheaper to run diesel generators than to add enough solar panels and batteries to provide power round the clock, especially in emergency applications. However, the use of diesel generators adds pollution to the environment. A well-designed microgrid should need generators only for emergencies and long stretches of bad weather. Reducing battery costs and diesel consumption could lower the cost per kilowatt-hour. In developing countries like India, owing to subsidies, the high cost of microgrid power is an even bigger obstacle to the widespread deployment of microgrids in the remotest parts of the country. In cities, however, microgrids and the conventional grid will complement each other. Conventional grid installation is easier in accessible areas like cities with large consumer loads, where it makes more economic sense, and microgrids can easily be operated alongside the conventional grid: at times of peak demand, utilities can call on electricity stored in microgrid batteries or use their diesel generators to provide a boost of power.

Distributed generation (DG) employs small-scale technologies to produce electricity close to the end users of electric power. DG technologies often consist of modular and sometimes renewable-energy generators, and they offer a number of potential benefits. Distributed generators can provide lower-cost electricity and higher power reliability and security, with fewer environmental consequences, as compared to traditional power generators. A traditional electric power network consists of large-scale generating stations located far from load centers, while distributed generation employs numerous small plants and can provide power on site with little reliance on the distribution and transmission grid system [5]-[7]. DG capacities range from a fraction of a kilowatt to about 100 megawatts. The power generated at centralized power plants uses energy from conventional sources such as coal, oil, natural gas or nuclear fuels. Centralized power plants based on conventional sources have many disadvantages, such as transmission losses, effects on the environment and the production of nuclear waste. Many of these issues can be resolved through the use of distributed generation. By locating the source near or at the end-user location, the transmission line losses can be minimized. Distributed generation is often produced by small modular energy conversion units which are either stand-alone or integrated into the existing energy grid. Distributed resources improve the efficiency of the electric power system and can provide electric power meeting power quality standards, as compared to large centralized power systems, which suffer from many power quality problems. DG facilities offer potential advantages for improving the transmission of power: because they produce power locally for users, they aid the entire grid by reducing demand during peak times and by minimizing congestion of power on the network. Large centralized power plants emit significant amounts of carbon monoxide, sulphur oxides, particulate matter, hydrocarbons and nitrogen oxides, which result in acid rain, causing environmental pollution. On the other hand, distributed generation technologies substantially reduce emissions.

3. ELECTRICAL POWER SCENARIO IN HIMACHAL PRADESH
The State of Himachal Pradesh is situated in the northern part of India and lies in the North-West Himalayas at 30° 22' 40" N to 33° 12' 40" N latitude and 75° 45' 55" E to 79° 04' 20" E longitude. It is an almost wholly mountainous state, with altitudes ranging from 350 meters to 6,975 meters and climate conditions varying from semi-tropical to semi-arctic. Many of the highest peaks in the Himalayan mountain range lie in Himachal Pradesh. The State is bordered by Jammu & Kashmir on the north, Punjab on the west and south-west, Haryana on the south, Uttarakhand on the south-east and China on the east. The total geographical area of Himachal Pradesh is 55,673 sq. km and the total population as per the 2011 census is 68,56,509, with a population density of 123 persons per square kilometer. Annual average rainfall varies from 50 centimeters to 250 centimeters. As per rainfall, the state of Himachal Pradesh can be divided into three zones, namely the Outer Himalayas with average rainfall of 50-250 centimeters, the Inner Himalayas with average rainfall of 75-125 centimeters, and the Alpine Zone of snowbound regions with scanty rainfall.

As per preliminary hydrological, topographical and geological studies, an estimated potential of 20,000 MW of hydro power exists in the state. Apart from this, a large number of unidentified areas are still left in the river basins, which can contribute substantially to the power potential of Himachal Pradesh by way of micro/pico hydro power plants using the water of small streams. The state of Himachal Pradesh has the distinction of providing electricity to all its villages despite difficult geographical constraints. Micro/pico-hydro power plants are the main sources of electricity in these remote areas. However, with the availability of other renewable energy sources such as wind and solar energy, there is a possibility of using these sources along with micro-hydro in the form of distributed generation to maintain the reliability of electrical power in the region.

4. FEASIBILITY OF DISTRIBUTED GENERATION IN TRIBAL REGION
More than 90% of the state population lives in the lower hills of the outer Himalayas and part of the inner Himalayas, which comprise almost 50% of the total geographical area of the state, while in the high-altitude snow-capped and alpine zone region, which


comprises more than 50% of the state's total geographical area, only less than 10% of the total population of the state, mostly the tribal community, lives in tough environmental and geographical conditions. This region is also known as a cold desert, having snow-bound areas with minimal rainfall. Providing reliable electrical power in these areas is a most challenging task. Although efforts are on to tap the vast hydro potential available in abundance in the state by converting it into electricity, maintaining the reliability of electrical power in these remote tribal areas is still a challenging task. The interior districts of Himachal Pradesh, such as Lahaul & Spiti, Kinnaur and the Bharmour area of Chamba, possess large numbers of small water streams which are sufficient to produce up to 5 MW of electricity. The receding of glaciers, erratic rainfall in the last few years and extreme temperature changes are the signs of global warming which we are experiencing now. Hydro power depends solely on the amount of water available in the rivers and streams on which the power projects are installed, and the total water availability is directly linked to the annual precipitation in the catchment area in the form of snow and rainfall. The climate change of recent years, an effect of global warming, has led to erratic rainfall, with a direct impact on the functioning of various power plants, mostly small hydro power plants. Owing to the reduced rainfall, a study carried out by the authors revealed that most of the small hydro power plants running in the Himalayan region are operating under capacity due to the non-availability of the water required to operate the plants at their designed capacity, as shown in Figure 3.

Figure 1: Micro-Hydro Power Plant of 100 kW Capacity (courtesy HIMURJA)

Figure 2: Small Hydro Power Plant of 2 MW Capacity (courtesy H.P. State Electricity Board)

To verify the facts, the data of five small hydro power plants located in District Chamba, Himachal Pradesh, were collected during the months of April, September and October, 2009. Figure 3 shows the variation in generated power against the installed capacity as per the collected data: for each plant (Him Kailash, Ginni Global, AT Hydro, KCM and HIMURJA), the installed capacity and the minimum and maximum values of power output, in kW, are plotted.

Figure 3: Graphs showing the power output v/s installed capacity

The availability of water for electricity generation is found to be inconsistent. The situation is worse in winter, when the power requirement in this region is higher. Except in the summer months, the majority of the small hydro power plants were found to be producing power output less than their rated installed capacity. To overcome this problem, other renewable energy sources such as wind energy and photovoltaics, available in the tribal and remote regions, can be harnessed to provide electricity to these isolated regions when there is a scarcity of hydro power during the extreme winter, when water availability is very low. The following renewable energy resources can be utilized in hybrid generation systems suitable for these remote regions with adverse climatic conditions.

4.1 Wind Energy Potential in Mountainous Region
For a state like Himachal Pradesh, the wind energy potential can be utilized to meet the demand for electrical power in remote areas along with the other renewable energy sources available in the area. Preliminary data on wind availability are required to explore the feasibility of wind power in the region. The wind energy potential in this part of the world has been assessed using on-site wind speed data procured from the Indian Meteorological Department. The temporal-resolution wind speed data sheet from the Climate Research Unit has been used to produce a wind atlas for Himachal Pradesh, and the results show that the mountainous region can minimally support electricity generation [8]. Most of the tribal region, which comes under the high elevation zone and includes the Chamba, Lahaul Spiti, Kinnaur, Kullu and Shimla districts of Himachal Pradesh, has relatively higher wind speeds (> 2 m/s) during all seasons. The wind potential in these parts of Himachal Pradesh is observed to be suitable for small-scale wind applications such as low wind speed turbines, wind-micro hydro hybrids, agricultural water pumps, wind-solar hybrids and battery charging. With the technological advancement in small-scale wind energy conversion systems and cost-effective solutions, these renewable energy sources can meet the energy demand in remote regions. In the highly cold alpine zone of Chamba, Lahaul Spiti and Kinnaur, where the wind speeds are relatively higher, wind energy can also be useful in meeting the heating requirements or in water pumping applications in this region.

4.2 Wind-solar Hybrid
The wind profile of Himachal Pradesh shows low wind speeds in most parts of the year. This reduction in wind speeds can be compensated by hybridizing wind with available alternative
640
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

resources such as solar energy in the Himalayan region [8]. A study on the assessment of solar energy potential in Himachal Pradesh substantiates that it receives a monthly average global insolation (incoming solar radiation) of more than 4 kWh/m²/day except for the winter months of December and January. The high-altitude alpine zone (> 3500 m), comprising the entire tribal region, receives lower insolation but higher wind speeds, while the lower regions at lesser heights have higher insolation and lower wind speeds [9]. Under these conditions a wind-solar hybrid could be the better solution to ensure reliable electrical power in the region. This energy can also be used to charge the storage batteries. The schematic representation of the wind-solar hybrid power system with a storage battery [10] is shown in Figure 4 below.

Figure 4: Wind-Solar Hybrid Scheme (wind turbine and photovoltaic source feeding the consumer load through their AC-DC and DC-AC converters, with a storage battery on the DC link)

4.3 Micro/Pico-Hydro, Wind, Solar & Diesel Engine Hybrid
The majority of the micro hydro power plants operating in isolated mode in Himachal Pradesh are located in the tribal areas of the high-altitude alpine region, which suffers from scarcity of water in the extreme winter, resulting in lower power generation from these small hydro plants. To meet the power requirements under such conditions, a distributed generation system comprising various combinations of micro hydro, wind, solar and diesel engine, along with a suitable energy storage battery system and the requisite power control mechanism, could be another option. Wind-solar and wind-micro hydro hybrid systems have been proven to be the best hybrid combinations of renewable energy conversion systems with accurate control of output voltage and frequency [10]-[13]. The schematic representation of the hybrid scheme involving micro-hydro, wind, solar and diesel generator, which could be suitable for remote mountainous regions, is shown in Figure 5 below.

For the smooth functioning of a renewable energy conversion system, control of the generators along with overall control of the hybrid generation system for load frequency regulation is required; this involves power-electronics-based control circuits, with or without the use of a dump load [11]. A hybrid power generation system involves various types of renewable energy sources such as wind, small hydro, solar, diesel generators and biomass. Due to the variable inputs of these sources, or due to variable load, the output power, voltage and frequency have to be regulated for the load connected to the hybrid system. The control circuitry in hybrid power generation involves individual or centralized converters, switches for dump load control, a storage battery and various sensors. The sources are connected at the output sides of individual converters, and a storage battery is also connected whose current and voltage are monitored by the individual converters, which optimally control battery charging and the supply of power to the load. A dump load regulates the output power and also prevents the battery from overcharging. The role of the diesel generator is to supply the load when the inputs of the other sources are low and the battery charge is insufficient. The positive impact of the storage battery on the stability of a hybrid power generation system has been effectively demonstrated [14].

4.4 Use of Induction Generators
In renewable energy conversion systems, three-phase induction generators offer many advantages over conventional synchronous generators owing to their adaptability to variations in domestic load under isolated conditions and to variable input conditions such as variable water inputs and varying wind [15]. Dump-load-based control using power electronic devices helps achieve better control of induction generators so as to generate the required voltage and maintain the required frequency of the generated output. Apart from this, because the consumer electric load in the domestic sector is single-phase in nature, single-phase induction generators are preferred over three-phase induction generators for their simple construction and low cost per kW. There are different ways of producing single-phase output from induction generators; one topology for a single-phase induction generator is based on the AC tacho generator [16]. This single-phase, externally excited, low-frequency induction machine working on the AC tacho generator principle is very useful for household wind power generation at low wind speeds, and it can be used effectively in remote regions with a low wind speed profile. Generation is possible at any speed greater than zero and covers a wide range of speeds.
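The dump-load and diesel dispatch rule described above can be sketched as a toy power balance. This is a minimal illustration under assumed conditions; all names and numbers below are hypothetical, not taken from the paper:

```python
def dispatch(p_renewable, p_load, soc, soc_max=1.0, p_batt_max=2.0):
    """Toy power balance for an isolated hybrid system (all powers in kW).

    Surplus renewable power first charges the battery (limited by p_batt_max
    and by a full state of charge, soc_max); any remainder is burned in the
    dump load, so generation always equals load + battery + dump. A deficit
    is covered by the diesel generator. Returns (p_battery, p_dump,
    p_diesel), where p_battery > 0 means the battery is charging.
    """
    surplus = p_renewable - p_load
    if surplus >= 0:
        p_batt = 0.0 if soc >= soc_max else min(surplus, p_batt_max)
        return p_batt, surplus - p_batt, 0.0
    # Deficit: the diesel generator supplies what the renewables cannot.
    return 0.0, 0.0, -surplus

# Example: 6 kW of hydro + wind against a 4 kW load with a full battery:
# the 2 kW surplus goes entirely to the dump load.
batt, dump, diesel = dispatch(6.0, 4.0, soc=1.0)
```

With the battery below soc_max the same surplus would charge the battery first, and the dump load would absorb only what the battery cannot, mirroring the overcharge protection described in Section 4.3.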
Figure 5: Micro-Hydro, Wind, Solar and Diesel Generator Hybrid System (wind turbine, photovoltaic source, micro-hydro induction generator with dump load, and diesel engine generator, with battery control, feeding the consumer load)

4.5 Water Mills ('Ghraats')
Water mills are the most popular means in the entire Himalayan region of converting the kinetic energy of water into mechanical energy for flour mills. With the help of technological advancement, this low-cost water energy can be converted into a source of electrical energy using cost-effective technology such as induction generators with a simpler control mechanism. There are an estimated 500,000 watermills ('Ghraats') in the Indian mountain regions. These 'Ghraats' can lead to substantial power generation in the entire Himalayan region, benefitting the population residing in these areas.

4.6 Pine Needles as a Source of Biomass
Another resource available for power generation in Himachal Pradesh could be the use of pine needles from pine

forests, as huge losses occur every year from forest fires caused by the burning of these pine needles. Pine needles are more prone to fire than other wastes such as leaves from other trees, paddy waste and wood. They are highly susceptible to catching fire, and the whole forest finally ends up in ashes, resulting in loss of property. These pine needles can be used to generate electricity, and collecting them through community projects will also eliminate the risk of forest fire in the hilly terrain. The pine needles, along with other loose agro-waste, can be processed and solidified into high-density solid biomass briquettes. This offers an alternative for energy generation, coupled with many jobs for the local population in collecting the biomass from the forests, converting it into briquettes and then generating electricity. The block diagram of the biomass briquetting process using pine needles as raw material, and its use, is shown in Figure 6 below (pine needles are compressed under pressure and temperature in the briquetting process; the resulting briquettes serve as solid fuel for heating or, through a biomass gasifier, for electricity generation).

Figure 6: Biomass Briquetting Process Using Pine Needles

5. CONCLUSION
This paper has highlighted the need to harness every possible renewable energy source available, without causing adverse environmental impacts, to generate electrical power that meets the energy requirements of the remotest parts of the world and uplifts the living standard of people enduring adverse climatic conditions. It describes the various options for harnessing available renewable energy resources as distributed power generation in the tribal regions of the high-altitude alpine zone of Himachal Pradesh. The study can also help provide similar electricity-generation facilities across the entire Himalayan mountain range, which has similar geographical and climatic conditions.

6. ACKNOWLEDGEMENT
The authors sincerely thank the management of the Himachal Pradesh State Electricity Board, HIMURJA and the private power companies AT Hydro, Ginni Global and Him Kailash for permitting visits to their power plants to collect the data used in this paper.

REFERENCES
[1] C. Kathirvel and K. Porkumaran, "Technologies for tapping renewable energy: a survey," European Journal of Scientific Research, vol. 67, no. 1, 2011, pp. 112-118.
[2] D. K. Lal, B. B. Dash, and A. K. Akella, "Optimization of photo voltaic/wind/micro hydro/diesel hybrid power system in HOMER for the study area," International Journal on Electrical Engineering and Informatics, vol. 3, no. 3, 2011.
[3] Nasser, M. et al. 2008. Experimental results of a hybrid wind/hydro power system connected to isolated loads. In Proceedings of the IEEE 13th International Power Electronics and Motion Control Conference, 2008.
[4] Litifu, Z. et al. 2006. Planning of micro-grid power supply based on the weak wind and hydro power generation. In Proceedings of the IEEE Engineering Society General Meeting, 2006.
[5] Chi Ho Eric Cheung et al. 2009. Development of a renewable hybrid power generation system. In Proceedings of the IEEE Systems and Information Engineering Design Symposium, University of Virginia, Charlottesville, VA, USA, April 24, 2009.
[6] G. Deb, R. Paul, and S. Das, "Hybrid power generation system," International Journal of Computer and Electrical Engineering, vol. 4, no. 2, April 2012.
[7] Hasan, K., Fatima, K. and Mahmood, M. S. 2011. Feasibility of hybrid power generation over wind and solar standalone system. In Proceedings of the IEEE 5th International Power Engineering and Optimization Conference, Malaysia, 6-7 June 2011.
[8] Adhikari, S., Adhikary, S. and Umeno, M. 2003. PV energy potential in Nepal Himalayas: analytical study on seasonal and spatial variation of solar irradiance for PV. In Proceedings of the 3rd World Conference on Photovoltaic Energy Conversion, May 11-18, 2003, Osaka, Japan.
[9] Krishnadas, G. and Ramachandra, T. V. 2010. Scope for renewable energy in Himachal Pradesh, India – a study of solar and wind resource potential. In Proceedings of Lake-2010: Wetlands, Biodiversity and Climate Change, 22-24 December 2010.
[10] T. Hirose and H. Matsuo, "Standalone hybrid wind-solar generation system applying dump power control without dump load," IEEE Transactions on Industrial Electronics, vol. 59, no. 2, February 2012.
[11] Cozorici, F., Vadan, I., Munteanu, R. A., Cozorici, I. and Karaissas, P. 2011. Design and simulation of a small wind-hydro power plant. In Proceedings of the IEEE International Conference on Clean Electrical Power, 14-16 June 2011, Ischia, pp. 308-311.
[12] P. K. Goel, B. Singh, S. S. Murthy, and N. Kishore, "Isolated wind-hydro hybrid system using cage generators and battery storage," IEEE Transactions on Industrial Electronics, vol. 58, no. 4, April 2011.
[13] Aktarujjaman, M., Kashem, M. A., Negnevitsky, M. and Ledwich, G. 2005. Dynamics of a hydro-wind hybrid isolated power system. In Proceedings of the Australasian Universities Power Engineering Conference (AUPEC 2005), 25-28 September 2005, Hobart, Australia, pp. 231-236.
[14] A. K. Srivastava, A. A. Kumar, and N. N. Schulz, "Impact of distributed generations with energy storage devices on the electric grid," IEEE Systems Journal, vol. 6, no. 1, March 2012.
[15] Rathore, U. C. and Singh, S. 2014. Isolated 3-phase self-excited induction generator in pico-hydro power plant using water pump load in mountainous region of Himalayas. In Proceedings of the IEEE Global Humanitarian Technology Conference – South Asia Satellite (GHTC-SAS), September 26-27, 2014, Trivandrum, pp. 40-44.
[16] N. R. Kulkarni and Y. S. Apte, "A novel concept in design of single phase induction generator for remote area," Journal of Engineering Research and Studies, vol. 2, no. 1, January-March 2011.


Modeling & Simulation of Photovoltaic System to Optimize the Power Output Using Buck-Boost Converter

Shilpa Garg
Department of Electrical Engineering
Bhai Gurdas Institute of Engineering and Technology, Sangrur, Punjab, India
shipugarg@gmail.com

Divesh Kumar
Department of Electrical Engineering
Bhai Gurdas Institute of Engineering and Technology, Sangrur, Punjab, India
diveshthareja@yahoo.com

ABSTRACT
The recent upsurge in the demand for PV systems is due to the fact that they produce electric power without harming the environment, by directly converting solar radiation into electric power. However, the solar radiation never remains constant; it keeps varying throughout the day. The need of the hour is to deliver a constant voltage to the grid irrespective of variations in temperature and solar insolation. This paper proposes the design, modeling and simulation of a photovoltaic solar cell model considering the effects of solar irradiation and temperature changes. The PV array is modelled using basic circuit equations, and its voltage-current and power-voltage characteristics are simulated under different conditions. It is noticed that the output characteristics of the PV array are affected by environmental conditions and that the conversion efficiency is low. Therefore a maximum power point tracking (MPPT) technique is needed to track the peak power and maximize the produced energy. The maximum power point on the power-voltage graph is identified by an algorithm called the incremental conductance method, which identifies the suitable duty cycle ratio at which the buck-boost converter should operate to reach the maximum point. The Simulink models for the solar cell, buck-boost converter and MPPT algorithm are developed and simulated.

Keywords
Photovoltaic, Buck-Boost, Maximum Power Point, Direct Current

1. INTRODUCTION
Conventional sources of energy are rapidly depleting. Moreover, the cost of energy is rising, and therefore the photovoltaic system is a promising alternative: such systems are abundant, pollution-free, distributed throughout the earth, and recyclable. The hindrance factors are the high installation cost and low conversion efficiency. Our aim is therefore to increase the efficiency and power output of the system. It is also required that a constant voltage be supplied to the load irrespective of variations in solar irradiance and temperature. PV arrays consist of parallel and series combinations of PV cells that generate electrical power depending upon the atmospheric conditions (e.g. solar irradiation and temperature), so it is necessary to couple the PV array with a boost converter. Moreover, our system is designed in such a way that, with variation in load, the change in input voltage and power fed into the converter follows the open-circuit characteristics of the PV array. The system can be used to supply a constant stepped-up voltage to DC loads.
Solar energy has been harnessed by humans since ancient times using a variety of technologies. Solar radiation, along with secondary solar-powered resources such as wave and wind power, hydroelectricity and biomass, accounts for most of the available non-conventional energy on earth, yet only a small fraction of the available solar energy is used [13]. Solar-powered electrical generation relies on photovoltaic systems and heat engines, and solar energy's uses are limited only by human creativity. To harvest solar energy, the most common way is to use photovoltaic panels, which receive photon energy from the sun and convert it into electrical energy. Solar technologies are broadly classified as either passive or active depending on the way they capture, convert and distribute solar energy. Solar energy is abundantly available, which makes it possible to harvest and utilize it properly. A solar installation can be a standalone generating unit or a grid-connected generating unit depending on the availability of a grid nearby; thus it can be used to power rural areas where the availability of grids is very low. Another advantage of using solar energy is portable operation whenever and wherever necessary.
But there are two major problems with PV generation systems: the low conversion efficiency of solar energy into electrical power, and the non-linear characteristics of the PV array, which make the generated electrical power vary with temperature and solar irradiation [2], [3]. In general there is only one point on the P-V and V-I curves, called the maximum power point, at which the PV system operates with maximum efficiency and produces maximum output power. Since this point is not known in advance, calculation models or search algorithms known as maximum power point tracking techniques are used to find it. A common MPPT approach is the power feedback method, which measures the power of the PV array and uses it as the feedback variable. Based on power feedback there are three important tracking techniques used in PV systems: Perturbation and Observation (P&O), the hill climbing method, and the incremental conductance method.

2. BLOCK DIAGRAM OF PV SYSTEM
Fig.1 shows the PV system block diagram with the MPPT technique. It consists of a PV array, a buck-boost converter, an MPPT block and, finally, the load. Series and parallel combinations of solar cells constitute the PV array: series connection of solar cells boosts the array voltage, and parallel connection increases the current. In order to change the input resistance of the panel to match the load resistance (by varying the duty cycle), a DC-to-DC converter is required [4]. A buck-boost converter is used to obtain more practical use from the solar panel; the input of the buck-boost converter is connected to the PV array and the output is connected to the load. The MPPT block receives voltage and current


signals from the PV array. The output of the MPPT block is a series of pulses, which are given to the converter; the converter operates on these pulses to make the PV system work at the maximum power point (MPP).

Fig.1. Block Diagram

2.1. Equivalent model
The PV cell model is represented by the equivalent electrical circuit shown in Fig. 2. The model contains a current source, a diode, a series resistance Rs, a shunt resistance and the load. The current source produces the photocurrent, which depends on the radiation. The output current is the difference between the photocurrent and the diode current.

Fig.2. One Diode Model of PV Cell

The equations of the PV module give the current output of the PV module, the output voltage of the PV cell and the module photocurrent. The PV cell output voltage is a function of the photocurrent, which is mainly determined by the load current depending on the solar irradiation level during operation [5].

Changes of temperature and irradiation change the voltage and current outputs of the PV array. Fig. 3 shows the P-V and V-I characteristics of the PV array for different temperature and solar irradiance conditions.

Fig.3. P-V and V-I Characteristics of PV Array

3. BUCK-BOOST CONVERTER
The buck-boost converter is a type of DC-DC converter whose output voltage magnitude is either greater than or less than the input voltage magnitude. Two different topologies are called buck-boost converters; both can produce a range of output voltages, from an output voltage much larger (in absolute magnitude) than the input voltage down to almost zero. The output voltage is of the opposite polarity to the input. This is a switched-mode power supply with a circuit topology similar to those of the boost converter and the buck converter, and the output voltage is adjustable based on the duty cycle of the switching transistor. One possible drawback of this converter is that the switch does not have a terminal at ground, which complicates the driving circuitry. Neither drawback is of any consequence if the power supply is isolated from the load circuit (if, for example, the supply is a battery), because the supply and diode polarity can simply be reversed, and the switch can be on either the ground side or the supply side. The circuit diagram is shown in Fig. 4.

Fig.4. Buck-Boost Converter
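The PV-module equations referenced in Section 2.1 did not survive legibly in the source. As a hedged stand-in, the widely used single-diode model — an assumption here, not necessarily the authors' exact formulation — gives the module current implicitly as I = Iph − I0·(exp((V + I·Rs)/(n·Ns·Vt)) − 1) − (V + I·Rs)/Rsh, which can be solved numerically:

```python
import math

def pv_current(v, iph=8.0, i0=1e-9, rs=0.01, rsh=100.0, n=1.3, vt=0.0257, ns=36):
    """Solve the implicit single-diode equation for the module current at
    terminal voltage v. iph: photocurrent (A), i0: diode saturation current
    (A), rs/rsh: series/shunt resistance (ohm), n: ideality factor, vt:
    thermal voltage (V), ns: cells in series. All parameter values here are
    illustrative assumptions, not measured module data."""
    nvt = n * vt * ns
    i = iph  # initial guess; the fixed-point iteration contracts for typical values
    for _ in range(200):
        i = iph - i0 * (math.exp((v + i * rs) / nvt) - 1.0) - (v + i * rs) / rsh
    return i

# Sweeping the voltage reproduces the knee-shaped I-V curve and lets the
# maximum power point be located numerically.
points = [(k / 10.0, pv_current(k / 10.0)) for k in range(0, 280)]
v_mpp, i_mpp = max(points, key=lambda p: p[0] * p[1])
```

Temperature and irradiation enter through iph, i0 and vt, which is why the P-V and V-I curves of Fig. 3 shift with both.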

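Under continuous conduction with ideal components, the inverting buck-boost converter obeys the volt-second balance |Vout| = Vin·D/(1 − D), where D is the duty cycle. A quick sketch (an idealized assumption — losses, ripple and the polarity inversion are ignored here):

```python
def buck_boost_vout(vin, d):
    """Ideal inverting buck-boost output-voltage magnitude for duty cycle d
    (0 <= d < 1): |Vout| = Vin * D / (1 - D)."""
    return vin * d / (1.0 - d)

def duty_for(vin, vout):
    """Duty cycle that maps vin to an output magnitude vout under the ideal
    volt-second balance: D = Vout / (Vin + Vout)."""
    return vout / (vin + vout)

# Section 4 of the paper targets a constant 12 V output from a 5-20 V input:
d_low = duty_for(5.0, 12.0)    # ~0.71: boosting at the weakest input
d_high = duty_for(20.0, 12.0)  # ~0.38: bucking at the strongest input
```

A duty cycle above 0.5 steps the voltage up and one below 0.5 steps it down, which is why this single topology covers the whole 5-20 V input range.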

• While in the on-state, the input voltage source is directly connected to the inductor (L). This results in accumulating energy in L. In this stage, the capacitor supplies energy to the output load.
• While in the off-state, the inductor is connected to the output load and capacitor, so energy is transferred from L to C and R.

4. SIMULINK MODEL OF PV CELL WITH BUCK-BOOST CONVERTER
A DC-DC converter can be considered a DC transformer that provides a lossless transfer of energy between circuits at different voltage levels. When DC-DC conversion is needed, there is also a need for control, for higher efficiency and for minimum output ripple voltage; that is, controlled stepping up and stepping down of the voltage is needed. So switching frequency control, or PWM duty ratio control, is required. In the proposed buck-boost converter, input voltages from 5-20 V can produce a constant 12 V output, which can be used effectively in the design of solar home applications.

5. MPPT TECHNIQUE
The efficiency of a solar cell is very low. In order to increase the efficiency, methods must be undertaken to match the source and load properly. One such method is Maximum Power Point Tracking (MPPT), a technique used to obtain the maximum possible power from a varying source. In photovoltaic systems the I-V curve is non-linear, which makes it difficult to power a given load directly; tracking is therefore done by utilizing a converter whose duty cycle is varied by the MPPT algorithm. A few of the many algorithms are given in [3], [4], [5] and [8]. A buck-boost converter is used on the load side, and a solar panel is used to power this converter. Of the many methods used for maximum power point tracking, two are discussed below:

• Perturb and Observe method
• Incremental Conductance method

5.1. Perturb and Observe method
This method is the most common, and it utilizes very few sensors [5], [6]. The operating voltage is sampled, and the algorithm changes the operating voltage in the required direction while sampling dP/dV. If dP/dV is positive, the algorithm increases the voltage towards the MPP until dP/dV becomes negative. This iteration is continued until the algorithm finally reaches the MPP.
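The perturb-and-observe loop can be sketched in a few lines; the power curve, step size and starting point below are illustrative assumptions, not values from the paper:

```python
def po_step(v, p, prev_v, prev_p, dv=0.1):
    """One perturb-and-observe update: if the last perturbation increased
    the power, keep moving in the same direction; otherwise reverse.
    Returns the next operating voltage."""
    moving_up = v >= prev_v
    if p - prev_p >= 0:
        return v + (dv if moving_up else -dv)
    return v + (-dv if moving_up else dv)

def track(power_of, v0=10.0, steps=500, dv=0.1):
    """Run P&O against a power curve power_of(v); the operating point
    climbs toward the MPP and then oscillates around it."""
    prev_v, v = v0, v0 + dv
    prev_p = power_of(prev_v)
    for _ in range(steps):
        p = power_of(v)
        v, prev_v, prev_p = po_step(v, p, prev_v, prev_p, dv), v, p
    return v

# Toy concave power curve with its maximum at 17 V (hypothetical numbers):
v_final = track(lambda v: -(v - 17.0) ** 2 + 80.0)
```

The final voltage hovers within one perturbation step of 17 V, illustrating the oscillation around the MPP that the paper names as the main drawback of P&O.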

5.2. Incremental Conductance method
This method uses the PV array's incremental conductance dI/dV to compute the sign of dP/dV. When dI/dV is equal and opposite to the value of I/V (where dP/dV = 0), the algorithm knows that the maximum power point has been reached; it then terminates and returns the corresponding operating voltage for the MPP. These two are the most widely used methods for maximum power point tracking. The advantage of the P&O algorithm is that it is simple and easy to implement, especially on a low-cost microcontroller system. However, its main drawback is that it oscillates around the maximum power point, owing to the perturbing process used to find that point.

5.3. Flow chart for perturb & observe

Fig.5. Simulink Model of PV Cell

Fig.6. Simulink Model of PV Cell with Buck-Boost Converter
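The incremental-conductance test of Section 5.2 — compare dI/dV against −I/V — reduces to the sign of dI/dV + I/V, since dP/dV = I + V·dI/dV. A minimal sketch (the tolerance and the toy curve are assumptions for illustration):

```python
def incond_direction(v, i, dv, di, eps=1e-3):
    """Incremental-conductance decision: return +1 to raise the operating
    voltage, -1 to lower it, or 0 when the MPP is reached, using the sign
    of dI/dV + I/V (proportional to dP/dV for v > 0)."""
    if v == 0 or dv == 0:
        return 1  # degenerate sample: move away from zero volts
    s = di / dv + i / v
    if abs(s) < eps:
        return 0  # dP/dV ~ 0: at the maximum power point
    return 1 if s > 0 else -1

# Toy curves (hypothetical): power peaks at 17 V, current i_of = p_of / v.
p_of = lambda v: -(v - 17.0) ** 2 + 80.0
i_of = lambda v: p_of(v) / v
h = 1e-6  # small voltage increment used to estimate dI/dV
left = incond_direction(10.0, i_of(10.0), h, i_of(10.0 + h) - i_of(10.0))
```

Here `left` evaluates to +1 (raise the voltage); at 20 V the same test returns −1, and at 17 V it returns 0, so the tracker terminates at the MPP without the steady-state oscillation of P&O.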


5.4. Results

Fig.7. I-V Curve of PV Cell

Fig.8. P-V Curve of PV Cell

Fig.9. Output of PV Cell with Buck-Boost Converter

Fig.10. Final Output with MPPT

From Fig. 9: at T = 50 °C and S = 400 W/m², I = 10 A, V = 76.5 V and P = 712 W; at T = 50 °C and S = 200 W/m², I = 5.05 A, V = 86 V and P = 395 W. From Fig. 10: at T = 50 °C and S = 400 W/m², I = -4.7 A, V = -47 V and P = 470 W; at T = 50 °C and S = 200 W/m², I = -3.8 A, V = -38 V and P = 350 W.

6. CONCLUSION
The buck-boost converter is modelled using an equation-model approach rather than a circuit-model approach. By developing the model from equations, it can be modified or extended easily. To verify the developed equation model, a comparison with the existing circuit model was carried out; the experimental results show that the developed model is well suited to the existing one. Moreover, an MPPT control model (the P&O algorithm) was modelled and tested using several sets of experimental data. The results show that the overall model behaves like the real system. Further, the properties of the P&O algorithm, such as the effect of the step value used to perturb the duty cycle and the oscillation problem, are well simulated. In future, different MPPT methods will be evaluated, and the MPPT algorithm will be applied to other renewable energy systems, such as wind energy. The various waveforms were obtained using the plot mechanism in MATLAB.

REFERENCES
[1] M. G. Villalva, J. R. Gazoli, and E. Ruppert F., "Comprehensive approach to modeling and simulation of photovoltaic arrays," IEEE Transactions on Power Electronics, 2009, vol. 25, no. 5, pp. 1198-1208, ISSN 0885-8993.
[2] M. G. Villalva, J. R. Gazoli, and E. Ruppert F., "Modeling and circuit-based simulation of photovoltaic arrays," Brazilian Journal of Power Electronics, 2009, vol. 14, no. 1, pp. 35-45, ISSN 1414-8862.
[3] Mummadi Veerachary, "Control of TI-SEPIC converter for optimal utilization of PV power," IICPE, 2010, New Delhi.


[4] R. Sridhar, Dr. Jeevananathan, N. Thamizh Selvan, and Saikat Banerjee, "Modeling of PV array and performance enhancement by MPPT algorithm," International Journal of Computer Applications (0975-8887), vol. 7, no. 5, September 2010.
[5] Hairul Nissah Zainudin and Saad Mekhilef, "Comparison study of maximum power point tracker techniques for PV systems," Cairo University, Egypt, December 19-21, 2010, Paper ID 278.
[6] Katherine A. Kim and Philip T. Krein, "Photovoltaic converter module configurations for maximum power point operation," University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.
[7] Huan-Liang Tsai, Ci-Siang Tu, and Yi-Jie Su, "Development of generalized photovoltaic model using MATLAB/SIMULINK," Proceedings of the World Congress on Engineering and Computer Science 2008 (WCECS 2008), October 22-24, 2008, San Francisco, USA.
[8] Elgendy, M. A., Zahawi, B., and Atkinson, D. J., "Assessment of perturb and observe MPPT algorithm implementation techniques for PV pumping applications," IEEE Transactions on Sustainable Energy, 2012; 3(1): 21-33.
[9] Rashid, M. M., Muhida, R., Alam, A. H. M. Z., Ullah, H., and Kasemi, B., "Development of economical maximum power point tracking system for solar cell," Australian Journal of Basic and Applied Sciences, 2011; 5(5): 700-713.
[10] Wenhao, C. and Hui, R., "Research on grid-connected photovoltaic system based on improved algorithm," Przeglad Elektrotechniczny (Electrical Review), 2012; 3b: 22-25.
[11] Takun, P., Kaitwanidvilai, S., and Jettanasen, C., "Maximum power point tracking using fuzzy logic control for photovoltaic systems," Proceedings of the International MultiConference of Engineers and Computer Scientists, Hong Kong, 2011.
[12] Al Nabulsi, A. and Dhaouadi, R., "Fuzzy logic controller based perturb and observe maximum power point tracking," Proceedings of the International Conference on Renewable Energies and Power Quality, Spain, 2012.
[13] Putri, R. I. and Rifai, M., "Maximum power point tracking control for photovoltaic system using neural fuzzy," International Journal of Computer and Electrical Engineering, 2012; 4(1): 75-81.
[14] Sedaghati, F., Nahavandi, A., Badamchizadeh, M. A., Ghaemi, S., and Fallah, M. A., "PV maximum power-point tracking by using artificial neural network," Mathematical Problems in Engineering, 2012; vol. 2012, Article ID 506709: 1-10.
[15] Kalirasu, A. and Dash, S. S., "Modeling and simulation of closed loop controlled buck converter for solar installation," International Journal of Computer and Electrical Engineering, 2011; 3(2): 206-210.
[16] Pandiarajan, N., Ramaprabha, R., and Ranganath, M., "Application of circuit model for photovoltaic energy conversion system," International Journal of Photoenergy, 2012; vol. 2012, Article ID 410401: 1-14.
[17] Samosir, A. S., Sutikno, T., and Yatim, A. B. M., "Dynamic evolution control for fuel cell DC-DC converter," TELKOMNIKA, 2011; 9(1): 183-190.
[18] G. R. Walker, "Evaluating MPPT converter topologies using a MATLAB PV model," Journal of Electrical & Electronics Engineering, Australia, 2001; 21(1): 49-56.
[19] J. V. Gragger, A. Haumer, and M. Einhorn, "Averaged model of a buck converter for efficiency analysis," Engineering Letters, 2010; 18(1).
[20] J. A. Jiang, T. L. Huang, Y. T. Hsiao, and C. H. Chen, "Maximum power tracking for photovoltaic power systems," Tamkang Journal of Science and Engineering, 2005; 8(2): 147-153.


Modelling and Analysis of Grid Connected Renewable Energy Sources with Active Power Filter

Harmeet Singh
BGIET, Punjab (India)
harmeet21nov@gmail.com

Jasvir Singh
BGIET, Punjab (India)
jasvir.604@gmail.com

ABSTRACT
The integration and control of renewable energy in electric power systems is evaluated. A DC/AC converter is used to transfer the power generated from the RES to the grid. Also, a shunt APF feature is introduced to compensate current unbalance, load current harmonics, load reactive power demand and load neutral current. With such a control, the combination of the grid-interfacing inverter and the 3-phase 3-wire linear/non-linear unbalanced load at the point of common coupling (PCC) appears as a balanced linear load to the grid. The proposed system is modeled and simulated in the MATLAB Simulink environment. An extensive simulation study is carried out to validate the proposed control approach.

Keywords
Renewable Energy, Grid-interconnection, Active Power Filter, Power Quality, PLL, Hysteresis

1. INTRODUCTION
The interest in the development of renewable energy source generation systems, such as photovoltaic solar cells connected to the power grid, has increased in the last few years due to the diminishing of the world's conventional sources of energy. Also, the very large use of conventional fossil fuels, which are the primary source of energy, causes serious environmental pollution and problems. Therefore renewable energy offers a promising alternative source and has received great attention in research, because it appears to be one of the possible solutions to the environmental problem. One of the most promising applications of renewable energy is to supply power to remote communities where the main electrical grid is absent. Many renewable energy sources are available. Photovoltaic generation is becoming increasingly important as a renewable energy source since it exhibits great merits such as less maintenance and noise-free, pollution-free operation, along with the capability of the system to supply AC loads, relieving the grid demand and making it possible to send energy from the panels to the grid.
In this paper the integration and control of renewable energy in electric power systems is evaluated. A DC/AC converter is used to transfer the power generated from the RES to the grid. Also, a shunt APF feature is introduced to compensate current unbalance, load current harmonics, load reactive power demand and load neutral current. With such a control, the combination of the grid-interfacing inverter and the 3-phase 4-wire linear/non-linear unbalanced load at the point of common coupling (PCC) appears as a balanced linear load to the grid. The proposed system is modeled and simulated in the MATLAB Simulink environment. An extensive simulation study is carried out to validate the proposed control approach.

2. PROPOSED TOPOLOGY
The proposed topology consists of the control strategy of the interconnection between the photovoltaic panel and the grid connected inverter, as shown in figure (1).

Fig.1: Grid Connected PV System

The power injected from the PV panel passes into the grid through the dc/ac controller. The proposed system also operates as an active power filter which is capable of compensating the harmonic components generated by the non-linear load connected to the system.

3. SYSTEM MODEL
3.1 Photovoltaic Array
A photovoltaic array is formed by combinations of series and parallel connections of PV solar cells. A simple solar cell equivalent circuit model [4] is shown in figure (2). From the figure it is clear that Iph is the photo-generated current, i.e. the amount of solar radiation (photons) received at the surface of the solar cell, and Id is the diode current (i.e. the current through the p-n junction of the bulk material of the cell, which acts as a diode).


Fig 2: Equivalent circuit of photovoltaic cell (current source Iph in parallel with the diode and shunt resistance Rp, in series with Rs feeding the load RL)

IRp is the current through the shunt resistance Rp (i.e. the current lost to recombination of electron-hole pairs instead of reaching the load), Rs is the series resistance (i.e. the resistance between the bulk material and the contacts of the p-n junction) and RL is the load resistance. The photovoltaic current is found by applying Kirchhoff's current law at the node where Iph, the diode current Id, Rp and Rs meet, which gives:

Iph = Id + IRp + Ipv (1)

From the above equation we get the following expression for the photovoltaic current:

Ipv = Iph - Id - IRp (2)

Ipv = Iph - Io[exp((Vpv + Rs·Ipv)/(A·Vt)) - 1] - (Vpv + Rs·Ipv)/Rp (3)

where Iph and Io are the photovoltaic (PV) and diode saturation current, respectively, Ipv and Vpv are the photovoltaic current and photovoltaic voltage of the array, and Vt = Ns·k·T/q is the thermal voltage of the array. Ns is the number of cells connected in series, q is the electron charge (1.60217646 × 10-19 C), k is the Boltzmann constant (1.3806503 × 10-23 J/K), T (in Kelvin) is the temperature of the p-n junction, and A is the diode ideality factor. Equation (3) gives the relationship between the output parameters of the solar cell, i.e. Ipv and Vpv. The behaviour of these output parameters is non-linear, as shown in figure (3), where three important points are clearly visible: short circuit (0, Isc), MPP (Vmp, Imp) and open circuit (Voc, 0).

Fig 3: V-I characteristics of PV cell

The expression for the photo-generated current Iph is given in equation (4), where it depends linearly on the solar irradiance and is also affected by temperature:

Iph = (S/Sr)·[Iph,r + Ki·(T - Tr)] (4)

Where Iph,r is the photo-generated current at the reference condition (25 °C and 1000 W/m2), S is the irradiance or insolation, T is the cell temperature, Sr and Tr are the irradiance and cell temperature at the reference conditions, and Ki is the short-circuit current/temperature coefficient.

The expression for the diode saturation current Io, which depends on temperature, is given in equation (5):

Io = Io,r·(T/Tr)^3·exp[(q·Eg/(A·k))·(1/Tr - 1/T)] (5)

Where Io,r is the reference diode saturation current, q = 1.602 × 10-19 C is the electron charge, k = 1.380 × 10-23 J/K is the Boltzmann constant, and Eg = 1.12 eV is the band gap energy. The expression for the reference diode saturation current is given by the following equation:

Io,r = Isc,r / [exp(Voc,r/(A·Vt,r)) - 1] (6)

Where Voc,r is the reference open-circuit voltage, Vt,r is the nominal thermal voltage of the cell, and Isc,r is the short-circuit current at the nominal condition (25 °C and 1000 W/m2).

Cell combinations are used to enhance the performance or rating of the cell: cells connected in parallel increase the current, and cells connected in series provide greater output voltages. Hence a practical PV array consists of several PV modules (combinations of solar cells) connected in series (Ns) and parallel (Np). Therefore the equation of the PV array is written as (7), which is the modified form of equation (3):

Ipv = Np·Iph - Np·Io·[exp((Vpv + Rs·(Ns/Np)·Ipv)/(A·Vt)) - 1] - (Vpv + Rs·(Ns/Np)·Ipv)/((Ns/Np)·Rp) (7)

For simplification the shunt-resistance term has been neglected, and further solving equation (7) we get the following equations:

Ipv = Np·Iph - Np·Io·[exp((Vpv + Rs·(Ns/Np)·Ipv)/(A·Vt)) - 1] (8)

Vpv = A·Vt·ln[(Np·Iph - Ipv + Np·Io)/(Np·Io)] - Rs·(Ns/Np)·Ipv (9)
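Equations (4), (5) and the simplified array equation can be collected into a short numerical sketch. This is an illustrative implementation only: it further neglects the series and shunt resistances and the "-1" term, and every parameter value (Iph,r, Io,r, Ki, A, Ns, Np) is an assumed placeholder rather than data from this paper.

```python
import numpy as np

q = 1.602e-19   # electron charge [C]
k = 1.380e-23   # Boltzmann constant [J/K]

def pv_current(Vpv, S, T, Ns=36, Np=1, Iph_r=8.0, Io_r=1e-7,
               Ki=0.003, A=1.3, Sr=1000.0, Tr=298.15, Eg=1.12):
    """Array current from eqs. (4), (5) and the simplified eq. (8)."""
    Vt = Ns * k * T / q                          # thermal voltage of the array
    Iph = (S / Sr) * (Iph_r + Ki * (T - Tr))     # eq. (4): photo-generated current
    # eq. (5): saturation current scaled with temperature (Eg in eV, q converts to J)
    Io = Io_r * (T / Tr) ** 3 * np.exp((q * Eg / (A * k)) * (1.0 / Tr - 1.0 / T))
    # simplified eq. (8): Rs, Rp and the "-1" term neglected
    return Np * Iph - Np * Io * np.exp(Vpv / (A * Vt))

# Sweep the voltage at a fixed insolation, as in the I-V curves of Fig. 6
V = np.linspace(0.0, 20.0, 41)
I_curve = np.clip(pv_current(V, 1000.0, 298.15), 0.0, None)
```

Evaluating the function at several insolation levels reproduces the qualitative behaviour discussed around figures (6) and (7): the current scales almost linearly with S, while the exponential diode term dominates near open circuit.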

4. CONTROL STRATEGY
There are two control strategies, namely the hysteresis controller, which generates the appropriate pulses, and the inner control loop, as shown in figure (1). These control topologies are explained in this section.

Control Strategy of Grid Connected Inverter
This paper proposes a current control approach based on a hysteresis comparator [8], where the reference current is the control variable. This reference current (Iabc*) is calculated by multiplying the in-phase component of the reference current (Ic) by the unity grid voltage component (Uabc).

The Ic current is calculated from the dc-voltage control loop, as shown in figure (4), by comparing Vdc to Vdc*; the resulting error signal is given to the PI controller. This PI controller is responsible for maintaining a constant dc-link voltage at the input of the grid interface inverter.

Fig.4: Control strategy of grid interface inverter

Now, the reference currents (Iabc*) are compared with the actual grid currents (Iabc) in the current control loop, as shown in figure (4), and the resulting error signals are given to the hysteresis current controller to generate the switching pulses for the grid interface inverter.

5. SYSTEM MODELLING IN MATLAB/SIMULINK
5.1 Validating the PV Array
Figure (5) shows the masked simulink structure of the PV array based on equations (1)-(7) [9]. The model uses the solar irradiation (G), temperature (T), the number of cells connected in series (Ns) and parallel (Np), and the PV voltage as inputs, and generates the PV current as output, as shown in figure (5).

Fig.5: Masked simulink diagram of the solar PV panel

The M-file for the diode current has been developed using equation (5), and the photovoltaic current subsystem block is developed using equation (4). The diode current and photovoltaic current are connected according to equation (7) to generate the PV current.

The waveforms obtained by varying the solar insolation while keeping the temperature constant, which are fed into the PV array model, are plotted in figure (6).

Fig.6: I-V Characteristics with different irradiation levels (S = 400, 600, 800 and 1000 W/m2; current in ampere versus voltage in volt)

From figure (7) we observe that by increasing the solar radiation at constant temperature, the voltage and current output from the PV array also increase. Hence, by increasing the insolation we can obtain the required voltage level.
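The hysteresis comparator of the control strategy above can be sketched as a simple band-based switching rule. This is an illustrative single-phase sketch with an idealized L-only plant; the numeric values (band, Vdc, filter inductance, time step) are assumed for demonstration and are not parameters from this paper.

```python
import math

def hysteresis_switch(i_ref, i_actual, band, prev_state):
    """Two-level hysteresis comparator for one inverter leg.

    Returns True (upper switch on, +Vdc/2 applied) or False (-Vdc/2).
    The state changes only when the current error leaves the band,
    which is what bounds the tracking ripple.
    """
    err = i_ref - i_actual
    if err > band:
        return True      # current too low -> apply positive voltage
    if err < -band:
        return False     # current too high -> apply negative voltage
    return prev_state    # inside the band: hold the previous state

# Track a 50 Hz reference current through an idealized L-only filter
L, Vdc, dt = 10e-3, 700.0, 5e-6      # assumed values, for demonstration
i, state, max_err = 0.0, False, 0.0
for n in range(4000):                # 20 ms = one fundamental cycle
    i_ref = 10.0 * math.sin(2 * math.pi * 50 * n * dt)
    state = hysteresis_switch(i_ref, i, band=0.5, prev_state=state)
    v = Vdc / 2 if state else -Vdc / 2
    i += (v / L) * dt                # di/dt = v/L (grid EMF omitted)
    if n > 2000:                     # ignore the start-up transient
        max_err = max(max_err, abs(i_ref - i))
```

With the values above, the tracking error stays within roughly the hysteresis band plus one switching step, illustrating why the band width directly sets the current ripple.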


Fig.7: P-V Characteristics with different irradiation levels (S = 400, 600, 800 and 1000 W/m2; power in watt versus voltage in volt)

5.2 Simulation result of whole system
The PV array has been interfaced with the grid using a current controlled voltage source converter. This includes the PV module, inverter and control circuit [8]. The control methodology has been explained in the previous section; according to it, Vdc is calculated from the MPPT and Vdc* is given the calculated value, but in this paper Vdc* is set according to the phase-to-phase voltage given to the grid. In this way the controller generates the pulses for the inverter. The modeling and simulation of the whole system has been done in the MATLAB/SIMULINK environment.

Fig.8: Grid Connected PV System

Fig.9: Distribution of power (Psource, Pload and Pinv versus time for irradiation levels of 600, 800 and 1000 W/m2)

Fig.10: Simulation Results (Vout, Iabc, ILabc, Iinv and Vdc versus time)

The proposed topology is validated with the help of figure (9) and figure (10), in which the system response for different solar insolation levels is given. The simulation is run for 1.5 s and the insolation is varied every 0.5 s: it is set at 600 W/m2 initially from 0-0.5 s, then at 800 W/m2 from 0.5-1.0 s, and remains at 1000 W/m2 for the remaining interval. Figure (11) shows the non-linear load current of phase 'a', and figure (10) shows that the harmonics generated by the non-linear load are compensated perfectly through the active filter.

Fig.11: Non-Linear curve of phase 'a' (load current versus time)

i. First mode of operation – Grid and PV feed the load demand
Figure (12) shows the simulation results when the load demand is shared by both the grid and the photovoltaic system. The simulation time for this mode is 0-0.5 s; after the power fed from the PV, the remaining demand is fulfilled by the power grid. The phase opposition of the voltage and current waveforms of phase 'a' shows that the grid is feeding the load.
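The three operating modes reduce to a power balance at the point of common coupling. The sketch below states that balance with assumed example power levels, not values taken from the simulations.

```python
def grid_power(p_load, p_pv):
    """Active power drawn from the grid at the PCC, in watts.

    Positive: the grid feeds part of the load (mode i); zero: the PV
    alone covers the load (mode ii); negative: surplus PV power is
    absorbed by the grid (mode iii).
    """
    return p_load - p_pv

# Example values (assumed): the sign of the result identifies the mode
mode_i = grid_power(5000.0, 2000.0)    # PV below load -> grid supplies rest
mode_ii = grid_power(5000.0, 5000.0)   # PV matches load -> no grid power
mode_iii = grid_power(5000.0, 8000.0)  # PV above load -> surplus to grid
```

The phase relationship between grid voltage and current reported for each mode is simply the waveform-level signature of this sign.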


Fig.12: Power injected from the grid (Va, Ia and ILa of phase 'a' at an irradiation level of 600 W/m2)

ii. Second mode of operation – no need of grid power
Figure (13) shows the simulation results when the load demand is fulfilled by the photovoltaic system and there is no need of grid power. The simulation time for this mode is 0.5-1.0 s.

Fig.13: No need of grid power (Va, Ia and ILa of phase 'a' at an irradiation level of 800 W/m2)

iii. Third mode of operation – PV feeds power to load as well as grid
Figure (14) shows the simulation results when the load demand is fulfilled by the photovoltaic system and the remaining power is injected into the grid. The simulation time for this mode is 1-1.5 s. The in-phase voltage and current waveforms of phase 'a' show that the power is absorbed by the grid.

Fig.14: Power absorbed by the grid (Va, Ia and ILa of phase 'a' at an irradiation level of 1000 W/m2)

6. CONCLUSION
In this paper, a model of a grid connected photovoltaic system is implemented. The system contains a simple model of a photovoltaic array, a grid connected inverter and an active power filter. A DC voltage control loop with a current control loop is used for the DC-AC inverter control. The various values of voltage and current obtained have been plotted in the I-V curves of the PV array at different insolation levels, and the simulation results of the filtering and grid connected inverter verify the correctness of the proposed model.

REFERENCES
[1] F. Lasnier and T. G. Ang, Photovoltaic Engineering Handbook. New York: Adam Hilger, 1990.
[2] J. P. Pinto, R. Pregitzer, L. F. C. Monteiro, J. L. Afonso, "3-Phase 4-Wire Shunt Active Power Filter with Renewable Energy Interface", in Proc. Conf. IEEE Renewable Energy & Power Quality (ICREPQ'07), 2007.
[3] Wang Xuanyuan, Kazerani M., "A novel maximum power point tracking method for photovoltaic grid-connected inverters", Industrial Electronics Society, The 29th Annual Conference of the IEEE, vol. 3, Nov. 2003, pp. 2332-2337.
[4] M. G. Villalva, J. R. Gazoli, and E. R. Filho, "Comprehensive Approach to Modeling and Simulation of Photovoltaic Arrays", IEEE Transactions on Power Electronics, vol. 24, pp. 1198-1208, 2009.
[5] E. Matagne, R. Chenni, and R. El Bachtiri, "A photovoltaic cell model based on nominal data only", in Proc. Int. Conf. Power Eng., Energy Elect. Drives, POWERENG, 2007, pp. 562-565.
[6] Yusof, S. H. Sayuti, M. Abdul Latif, and M. Z. C. Wanik, "Modeling and simulation of maximum power point tracker for photovoltaic system", in Proc. Nat. Power Energy Conf. (PEC), 2004, pp. 88-93.
[7] Mukhtiar Singh, V. Khadkikar, A. Chandra, and R. K. Varma, "Grid Interconnection of Renewable Energy Sources at Distribution Level with Power Quality Improvement Features", IEEE Trans. Power Delivery.


[8] Mukhtiar Singh, V. Khadkikar, A. Chandra, and R. K. Varma, "Grid Interconnection of Renewable Energy Sources at Distribution Level with Power Quality Improvement Features", IEEE Trans. Power Delivery.
[9] R. Murali, P. Nagasekhara Reddy, B. Asha Kiran, "Power Quality Enhancement of Distributed Network fed with Renewable Energy Sources based on Interfacing Inverter", ISSN: 2277-3878, Volume-2, Issue-2, May 2013.
[10] G. Gnaneshwar Kumar, A. Ananda Kumar, "Renewable Energy Interconnection at Distribution Level to Improve Power Quality", ISSN: 2278-4721, Vol. 2, Issue 5 (February 2013), pp. 39-48.
[11] Y. Anil Kumar, G. Veeranjaneyulu, "Power Quality Improvement from Grid Connected Renewable Energy Sources at Distribution Level Using Fuzzy Logic Controller", Volume 9, Issue 5 Ver. I (Sep-Oct. 2014), pp. 36-43.
[12] N. VimalRadha Vignesh, R. VigneshRam, "Modelling of Photo Voltaic Array with Active Power Filter", Vol. 3, Issue 4, April 2014.


3D Finite Element Analysis for Core Losses in Transformer

Sapreet Kaur, Assistant Professor, UIET, Panjab University, Chandigarh, sarpreetdua@yahoo.co.in
Damanjeet Kaur, Assistant Professor, UIET, Panjab University, Chandigarh, djkb14@rediffmail.com

ABSTRACT
The phenomenal growth of power systems has put a burden on the transformer industry to supply consistent and cost-effective transformers. Any failure of a transformer or its components will impair the system performance and social setup. The reliability of a transformer is a major concern to users and manufacturers for ensuring trouble-free performance during service. The magnetic circuit is considered the most active component of the transformer. It consists of the iron core and carries the flux associated with the windings. It is important to understand the connection between the iron core and the leakage flux to get a better transformer design. One of the limitations of using a transformer is the expenditure caused by losses in the core. Solving this difficulty can save more energy. This paper presents a numerical technique for calculating the core losses and their reduction.

Keywords
Finite Element Method, Transformer, Core Losses.

1. INTRODUCTION
A transformer is a multifaceted three dimensional electromagnetic device, which is used extensively in electric power systems to transfer power by electromagnetic induction between circuits at the same frequency but with different values of voltage and current. Although the transformer is a static and most efficient device of the power system, there are still problems associated with transformers which affect their performance.
A transformer has mainly two types of losses, due to the electric current flowing in the windings and the alternating magnetic field in the magnetic core. The losses related to the windings are called the load losses, while the losses related to the core are called no-load losses or core losses [1]. Load losses vary according to the load on the transformer. These losses include heat losses and eddy currents in the primary and secondary windings of the transformer. Heat losses, or I2R losses, in the windings constitute a major part of the load losses. These losses can be minimized by using a material with low resistance per cross-sectional area. Copper is found to be the most suitable conductor material when designing a transformer, considering parameters like size, weight, cost and resistance. Engineers can also decrease the resistance of the conductor by increasing its cross-sectional area, but this increases the cost of the transformer. No-load losses are caused by the magnetizing current required to energize the core of the transformer. These losses are independent of the load on the transformer and are also called constant losses. Hysteresis losses and eddy current losses contribute about 99% of the no-load losses, while stray losses, dielectric losses and I2R losses are often neglected due to the small amount of no-load current [2].

Fig. 1: Classification of Transformer Losses (no-load losses: hysteresis and eddy current losses; load losses: I2R and leakage flux losses)

2. RECENT TRENDS IN REDUCTION OF TRANSFORMER CORE LOSSES
There has been a steady development of core steel materials in the last century, from non-oriented steels to grain-oriented steels. The reduction in transformer core losses in the last few decades is related to a significant increase in energy costs. One of the best ways to reduce the core losses is to use thinner grades of core steel, but the price of the thinner grades is comparatively higher. Despite these disadvantages, core materials with still lower thicknesses will be available and used in the future.
A newer class of material is the amorphous magnetic alloys, whose core loss is about 30% of that of cold rolled grain oriented (CRGO) steel materials because of their high resistivity and low thickness. The flux distribution in amorphous materials is more uniform than in CRGO materials. The assembly of the core is an automatic process, due to which amorphous core transformers are considered cost-effective with improved performance [3].

3. CORE LOSS EVALUATION METHOD
Core loss evaluation is a multi-disciplinary problem, requiring information on Electrical Engineering, Material Engineering and Physics. Several analytical and numerical techniques are used for the computation of electrostatic and magnetostatic fields in transformers. Analytical techniques include the double-Fourier method, the method of images, separation of variables, etc. But due to geometric and material complexities, numerical methods are used for the solution of electrostatic, electromagnetic, structural and thermal problems [4].
In order to improve the performance of transformer design, researchers have worked on the characteristics, types and
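The split of no-load loss into hysteresis and eddy-current parts described in the introduction is often estimated with a classical Steinmetz-type formula. The sketch below is illustrative only: the coefficients kh, ke and the exponent n are hypothetical placeholders that would normally be fitted to measured lamination data, not values taken from this paper.

```python
def no_load_loss(f, b_max, kh=0.02, ke=1e-4, n=1.6):
    """Per-kilogram no-load loss split into hysteresis and eddy terms.

    P = kh * f * b_max**n  +  ke * (f * b_max)**2
    kh, ke and n are assumed material constants (placeholders here).
    """
    p_hysteresis = kh * f * b_max ** n       # hysteresis loss term
    p_eddy = ke * (f * b_max) ** 2           # classical eddy-current term
    return p_hysteresis + p_eddy

# Higher flux density raises both terms; thinner laminations lower ke
p_at_1_5T = no_load_loss(50.0, 1.5)
p_at_1_7T = no_load_loss(50.0, 1.7)
```

Comparing the two evaluations shows why designers trade a larger core cross-section (lower b_max) or thinner steel grades against material cost.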


performance of magnetic core materials to reduce core losses [5]. G.W. Swift studied the variables which affect the performance of magnetic cores with respect to core loss and exciting current, and investigated the effect of frequency and the effect of series air gaps at the corners [6]. Z. Valkovic compared the characteristics of different cores and determined the building factor to study the influence of corner joint overlap length, the number of laminations per stagger layer, and the yoke cross-section form [7]. Loffler et al. showed the improvements of multistep-lap (MSL) jointed cores in comparison to single-step-lap (SSL) jointed cores to reduce power losses, and concluded that power losses reduce with a more homogeneous flux distribution [8]. Girgis et al. carried out an analytical study to determine the magnitude of core production attributes [9]. Albach et al. presented a practical method to predict the core losses in magnetic components for an arbitrary shape of the magnetizing current [10]. Dolinar et al. determined the magnetically nonlinear iron core model of a three phase three limb transformer and compared it with the classical saturated iron core model [11]. Stranges and Findlay described an apparatus that determined the iron losses due to rotational flux [12]. Researchers have carried out experimental studies on various samples of iron cores to test different stacking patterns of grain oriented silicon steel laminations and reduce the iron core losses of power transformers [13-15].

A modern method of analysis of transformer performance is finite element analysis (FEA). It is one of several numerical methods that can be used to handle complex geometries and is considered the best method in use today. The main numerical methods are:
1. Finite Difference Method (FDM)
2. Moments Method (MM)
3. Monte Carlo Method (MCM)
4. Boundary Element Method (BEM)
5. Finite Element Method (FEM)

FEM is a mathematical method for solving ordinary and elliptic partial differential equations. It can be used for objects with linear or nonlinear materials. FEM is useful to obtain an accurate characterization of the electromagnetic behavior of magnetic components such as transformers [16]. These days, the finite element method (FEM) is a very effective numerical tool for the simulation of structural components, material optimization, reliability enhancement, failure analysis, corrective action and verification of new designs under various loading conditions of transformers [17, 18].

3.1 FINITE ELEMENT METHOD ANALYSIS OF TRANSFORMER
Finite element methods used for solving transformer problems include three stages. In the first stage, the problem space is meshed into contiguous elements of suitable geometry, and appropriate values of the material parameters (conductivity, permeability and permittivity) are assigned to each element. In the second stage, the model is excited, so that the initial conditions are set up. Finally, the boundary conditions for the problem are specified. The finite element method has the advantage of geometrical flexibility: it is possible to include a greater density of elements in regions where fields and geometry vary rapidly [16].

The finite element method is a numerical method of analysis for the solution of problems described by partial differential equations. The considered field is represented by a group of finite elements. The space discretization is realized by triangles or tetrahedra, depending on whether the problem is 2D or 3D. Therefore, a continuous physical problem is transformed into a discrete problem of finite elements, and the solution of such a problem reduces to a set of algebraic equations. The solution of the 2D or 3D magnetostatic problem describing the transformer field thus reduces to the estimation of the magnetic flux density at each node of the triangles or tetrahedra of its 2D or 3D mesh, respectively [19].

3.2 3D FINITE ELEMENT METHOD BASED MODEL OF TRANSFORMER CORE
In the 2D FEM analysis, the magnetic field calculation is conducted with the use of the magnetic vector potential A. However, in the case of 3D problems, the use of the vector potential results in great complication, due to the great number of unknown parameters. Therefore, the use of the magnetic scalar potential Φm is preferred for the solution of the 3D magnetostatic problem. In most of the developed scalar potential formulations, the calculation of Φm is realized with the help of the following equation:

H = Hs − ∇Φm (1)

Figure 2 shows the outlook view of the 3D FEM model of the transformer active part, comprising the iron core and the high and low voltage windings of a single phase.

Fig. 2: 3D FEM Model of Transformer Core

In transformer analysis, because of the ferromagnetic material properties, the problems usually appear in nonlinear form. The magnetic permeability µ = B/H is not constant and is a function of the magnetic field in each mesh element.
Ampere's law states that:

∇ × H = J (2)

H: Magnetic field intensity
J: Total current density
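The three-stage workflow described above (meshing, excitation, boundary conditions) can be illustrated with a deliberately small one-dimensional analogue. This is an illustrative sketch, not the 3D formulation used in the paper: it solves -d/dx(nu dA/dx) = J on a unit interval with linear elements, where all values are assumed for demonstration.

```python
import numpy as np

def solve_1d_magnetostatic(n_el=20, length=1.0, nu=1.0, J=1.0):
    """Linear-element FEM for -d/dx(nu dA/dx) = J with A(0) = A(L) = 0."""
    n = n_el + 1
    h = length / n_el                                   # stage 1: uniform mesh
    K = np.zeros((n, n))                                # global stiffness matrix
    f = np.zeros(n)                                     # global load vector
    ke = (nu / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    fe = (J * h / 2.0) * np.array([1.0, 1.0])             # stage 2: excitation
    for e in range(n_el):                               # assemble element by element
        K[e:e + 2, e:e + 2] += ke
        f[e:e + 2] += fe
    K[0, :] = 0.0; K[0, 0] = 1.0; f[0] = 0.0            # stage 3: Dirichlet at x=0
    K[-1, :] = 0.0; K[-1, -1] = 1.0; f[-1] = 0.0        # Dirichlet at x=L
    return np.linalg.solve(K, f)                        # solve the algebraic system

A = solve_1d_magnetostatic()
# The exact solution is A(x) = J*x*(L-x)/(2*nu), with maximum 0.125 at midspan
```

The same mesh-assemble-constrain-solve cycle carries over to the 2D/3D triangular and tetrahedral discretizations discussed in the text, only with larger element matrices.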


In the 3D analysis of the transformer, a third order equation is used, by which the permeability of each part can be calculated as a function of B. Core attributes can be predicted using the third order equation model. The 3D FEM model shows that the core losses of a transformer can be estimated with high accuracy and the flux distribution in the core can be localized. Using the 3D FEM model we can also find the hot spots inside the core. The accuracy of the results increases with the number of mesh elements. The mesh generation of these models is as shown in Figure 4. These types of models show the transformer behavior at the design stage, so the required parameters can be considered before manufacturing, thereby reducing the design time and cost [20].
The permeability is a function of the magnetic field in each mesh element. The B-H curve of the ferromagnetic core, as shown in Figure 3, is a hysteresis loop. The hysteresis loop can be used for the calculation of the short circuit reactance or the radial and axial electromagnetic forces on the transformer coils, but for the calculation of the flux distribution and losses in the transformer core, the B-H loop is used. At nominal voltage of the primary winding, the values of B and H can be calculated from the following equations:

E(t) = V(t) − R·io = N·dØ/dt (3)

H = N·io/L (4)

io: no-load current
V(t): Terminal voltage under no-load conditions
E(t): EMF
Ø: Flux
R: Resistance of winding
N: Number of turns
L: Mean length
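At no load, the relations above fix the core operating point on the B-H loop. The small numeric sketch below neglects the winding drop R·io in eq. (3), so that the peak flux density follows from the standard EMF relation V = 4.44·f·N·Bmax·A; all numbers are illustrative, not data from this paper.

```python
def core_operating_point(v_rms, f, n_turns, area, i0, mean_length):
    """Peak flux density [T] and field intensity [A/m] at no load.

    Bmax from the EMF relation V = 4.44*f*N*Bmax*A (R*i0 neglected);
    H from eq. (4), H = N*i0/L. Example values below are assumed.
    """
    b_max = v_rms / (4.44 * f * n_turns * area)   # peak core flux density
    h_field = n_turns * i0 / mean_length          # Ampere's law around the core
    return b_max, h_field

B, H = core_operating_point(v_rms=230.0, f=50.0, n_turns=300,
                            area=0.003, i0=0.5, mean_length=0.6)
```

The resulting (H, Bmax) pair is the point on the measured B-H loop at which the FEM model evaluates the local permeability for each mesh element.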

Fig. 3: B-H Curve (Hysteresis Loop)

Fig. 4: 3D FEM Mesh Analysis of Transformer Core

4. CONCLUSION
The finite element method is a numerical technique for obtaining approximate solutions to boundary value problems. It is an especially important tool for solving electromagnetic problems because of its ability to model geometrically and compositionally complex structures. With the use of advanced computational tools, more efficient, reliable and compact transformers can be designed with minimum core losses as well as other losses occurring in the transformer.

REFERENCES
[1] T. Steinmetz, B. Cranganu-Cretu and J. Smajic, "Investigations of no-load and load losses in amorphous core dry-type transformers", 19th International Conference on Electrical Machines (ICEM), pp. 1-6, 6-8 Sept. 2010.
[2] R. M. Del Vecchio, B. Poulin, P. T. Feghali and D. M. Shah, Transformer Design Principles, CRC Press, 2010.
[3] S. V. Kulkarni and S. A. Khaparde, Transformer Engineering: Design and Practice, Marcel Dekker Inc., 2004.
[4] E. I. Amoiralis, M. A. Tsili and P. S. Georgilakis, "The state of the art in engineering methods for transformer design and optimization: A survey", Journal of Optoelectronics and Advanced Materials, vol. 10, no. 5, pp. 1149-1158, May 2008.
[5] H. Chang-Hung, L. Chun-Yao, C. Yeong-Hwa, L. Faa-Jeng, F. Chao-Ming, J.G. Lin, "Effect of magnetostriction on the core loss, noise and vibration of flux gate sensor composed of amorphous materials", IEEE Transactions on Magnetics, vol. 49, no. 7, pp. 3862-3865, July 2013.
[6] G.W. Swift, "Excitation current and power loss characteristics for mitered joint power transformer cores", IEEE Transactions on Magnetics, vol. 11, no. 1, pp. 61-64, Jan. 1975.
[7] Z. Valkovic, "Influence of transformer core design on power losses", IEEE Transactions on Magnetics, vol. 18, no. 2, pp. 801-804, Mar. 1982.
[8] F. Loffler, T. Booth, H. Pfutzner, C. Bengtsson and C. K. Gramm, "Relevance of step-lap joints for magnetic characteristics of transformer cores", IEE Proceedings


on Electric Power Applications, vol. 142, no. 6, pp. 371-378, Nov. 1995.
[9] R.S. Girgis, E.G. teNijenhuis, K. Gramm and J.E. Wrethag, "Experimental investigations on effect of core production attributes on transformer core loss performance", IEEE Transactions on Power Delivery, vol. 13, no. 2, pp. 526-531, Apr. 1998.
[10] M. Albach, T. Durbaum and A. Brockmeyer, "Calculating core losses in transformers for arbitrary magnetizing currents a comparison of different approaches", 27th Annual IEEE Conference on Power Electronics Specialists (PESC 96) Record, vol. 2, pp. 1463-1468, 23-27 Jun. 1996.
[11] M. Dolinar, D. Dolinar, G. Stumberger, B. Polajzer and J. Ritonja, "A three-phase core-type transformer iron core model with included magnetic cross saturation", IEEE Transactions on Magnetics, vol. 42, no. 10, pp. 2849-2851, Oct. 2006.
[12] N. Stranges and R.D. Findlay, "Measurement of rotational iron losses in electrical sheet", IEEE Transactions on Magnetics, vol. 36, no. 5, pp. 3457-3459, Sep. 2000.
[13] R. Findlay, R. Belmans and D. Mayo, "Influence of the stacking method on the iron losses in power transformer cores", IEEE Transactions on Magnetics, vol. 26, no. 5, pp. 1990-1992, Sep. 1990.
[14] P. Marketos and T. Meydan, "Novel transformer core design using consolidated stacks of electrical steel", IEEE International Magnetics Conference (INTERMAG 2006), pp. 127, 8-12 May 2006.
[15] E. Cazacu, L. Petrescu, "A simple and low-cost method for miniature power transformers' hysteresis losses evaluation", 8th International Symposium on Advanced Topics in Electrical Engineering (ATEE 2013), pp. 1-4, 23-25 May 2013.
[16] J.C. Olivares-Galvan, R. Escarela-Perez, F. de Leon, E. Campero-Littlewood, C.A. Cruz, "Separation of core losses in distribution transformers using experimental methods", Canadian Journal of Electrical and Computer Engineering, vol. 35, no. 1, pp. 33-39, 2010.
[17] N. Bianchi, Electrical Machine Analysis Using Finite Elements, CRC Press, 2009.
[18] M. A. Tsili, A. G. Kladas and P. S. Georgilakis, "Computer aided analysis and design of power transformers", Elsevier Journal Computers in Industry, vol. 59, no. 4, pp. 338-350, Apr. 2008.
[19] M.A. Tsili, A.G. Kladas, P.S. Georgilakis, A.T. Souflaris, C.P. Pitsilis, J.A. Bakopoulos, D.G. Paparigas, "Hybrid numerical techniques for power transformer modeling: a comparative analysis validated by measurements", IEEE Transactions on Magnetics, vol. 40, no. 2, pp. 842-845, March 2004.
[20] D. Phaengkieo, W. Somlak, S. Ruangsinchaiwanich, "Transformer design by finite element method with DOE algorithm", IEEE International Conference on Electrical Machines and Systems (ICEMS 2013), pp. 2219-2224, 26-29 Oct. 2013.


OVERVIEW OF POWER TRADING: MEANING, SCENARIO, ISSUES AND CHALLENGES
Pooja Dogra
Master of Technology
(Power Engineering)
BBSBEC, Fatehgarh Sahib
dogra_pooja@yahoo.co.in

ABSTRACT
Power trading inherently means a transaction where the price of power is negotiable and options exist about whom to trade with and for what quantum. A robust trading system is very important for free and fair competitive electricity market operation. The trading system should be capable of hedging the risk associated with price volatility and other unexpected changes. The operating behavior of a competitive power market is significantly affected by the trading arrangements, strategic bidding, and the market model and rules. These arrangements keep changing from time to time depending on the requirement for a transparent and non-discriminatory electricity market. In this paper, various terms related to power trading and the current scenario are discussed, and the important key issues and challenges in this field are critically analyzed. The basic idea of ABT and its importance is also discussed.

Keywords
Power trading, Open access, Power market, ABT.

1. INTRODUCTION
Power trading inherently means a transaction where the price of power is negotiable and options exist about whom to trade with and for what quantum. In India, power trading is in an evolving stage and the volumes of exchange are not huge. All ultimate consumers of electricity are largely served by their respective State Electricity Boards or their successor entities, Power Departments, private licensees etc., and their relationship is primarily that of captive customers versus monopoly suppliers. In India, the generators of electricity like Central Generating Stations (CGSs), Independent Power Producers (IPPs) and State Electricity Boards (SEBs) have all their capacities tied up. Each SEB has an allocated share in central sector/jointly owned projects and is expected to draw its share without much say about the price. In other words, the suppliers of electricity have little choice about whom to sell the power to, and the buyers have no choice about whom to purchase their power from.
The pricing has primarily been fixed/controlled by the Central and State Governments. However, this is now being done by the Regulatory Commissions at the Centre and also in the States wherever they are already functional. Power generation/transmission is highly capital intensive and the Fixed Charge component makes up a major part of tariff. India being a predominantly agrarian economy, power demand is seasonal and weather sensitive, and there exists a substantial difference in demand during different hours of the day, with variations between peak and off-peak hours. Further, the geographical spread of India is very large, and different parts of the country face different types of climate and different types of loads.
Power demand during the rainy season is low in the States of Karnataka and Andhra Pradesh and high in Delhi and Punjab. Whereas many of the States face high demand during evening peak hours, cities like Mumbai face high demand during office hours. The Eastern Region has a significant surplus round the clock, and even normally power-deficit states with very low agricultural loads, like Delhi, have surpluses at night. This situation indicates enough opportunities for trading of power. This would improve utilization of existing capacities and reduce the average cost of power to power utilities and consumers.
In view of high fixed charges, average tariff becomes sensitive to PLF. Trading of power from surplus State Utilities to deficit ones, through marginal investment in removing grid constraints, could help in deferring or reducing investment for additional generation capacity, in increasing PLF and in reducing the average cost of energy. Over and above this, scheduled exchange of power will increase and unscheduled exchange will reduce, bringing in grid discipline, a familiar problem.[1]

2. WHY TRADING
1. To develop a full-fledged, efficient and competitive market mechanism for trading in power and to facilitate the development of generation projects, including through private investment, both resulting in reliable, economic and quality power in the long term.
2. To develop a power market for optimal utilization of energy.
3. To promote power trading to optimally utilize the existing resources.
4. To catalyze development of power projects, particularly environment-friendly hydro projects.
5. To promote exchange of power with neighboring countries.

3. ROLE OF POWER TRADING
1. Trading creates a market based on enforceable contracts for buying and selling power.
2. It enables (a utility with) a smaller power system to become part of a large system, obviating the need for reserve capacity and affording increased reliability as well as utilization.
4. BENEFITS OF POWER TRADING
1. The seller gets to operate generation capacities at higher utilization (which would otherwise be backed down) and realizes efficiency and economic benefits.
2. The buyer gets to meet critical loads in a reliable manner, often substituting costlier sources of generation, and realizes reliability and economic benefits.

5. OPEN ACCESS AND TRADING
The Electricity Act, 2003, which came into force on 10th June, 2003, repeals the Indian Electricity Act, 1910; the Electricity (Supply) Act, 1948; and the Electricity Regulatory Commissions Act, 1998. In view of a variety of factors, the financial performance of the State Electricity Boards has deteriorated. The cross subsidies have reached unsustainable levels. A few States in the country have gone in for reforms which involve unbundling into separate Generation, Transmission and Distribution Companies. To address the ills of the sector, the new Act provides for, amongst others, newer concepts like Power Trading and Open Access.
Open Access on Transmission and Distribution, on payment of charges to the Utility, will enable a number of players to utilize these capacities and transmit power from generation to the load centre. This will mean utilization of existing infrastructure and easing of the power shortage. Trading, now a licensed and regulated activity, will also help in innovative pricing, which will lead to competition resulting in lowering of tariffs.

DEFINITION OF "OPEN ACCESS" IN THE ELECTRICITY ACT, 2003
"The non-discriminatory provision for the use of transmission lines or distribution system or associated facilities with such lines or system by any licensee or consumer or a person engaged in generation in accordance with the regulations specified by the Appropriate Commission." Figure 1 shows a broad picture of the open access mechanism [8].

Open access allows large users of power, typically having a connected load of 1 megawatt (MW) and above, to buy cheaper power from the open market. The idea is that customers should be able to choose among a large number of competing power companies, instead of being forced to buy electricity from their existing electric utility monopoly. It helps large consumers, particularly the sick textile, cement and steel industrial units, by ensuring regular supply of electricity at competitive rates, and boosts the business of power bourses.

On the basis of the location of the buying and selling entities, open access is categorized as:
1. Inter-State Open Access: When the buying and selling entities belong to different states. In this case CERC regulations are followed. It is further categorized as:
1. Short Term Open Access (STOA): open access allowed for a period of less than one month.
2. Medium Term Open Access (MTOA): open access allowed for a period of 3 months to 3 years.
3. Long Term Open Access (LTOA): open access allowed for a period of 12 years to 25 years.
2. Intra-State Open Access: When the buying and selling entities belong to the same state. In this case SERC regulations are followed. It is further categorized as STOA, MTOA and LTOA, the durations of which depend on the respective state open access regulations.
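The duration-based categorization above can be sketched as a small lookup. The boundaries below are exactly those stated in the text for inter-state (CERC) open access; the function itself is only an illustrative aid, and durations falling in the gaps between the stated ranges (e.g. 1-3 months) are flagged rather than guessed.

```python
# Sketch of the inter-state open access categorization described above.
# Boundaries follow the text: STOA < 1 month, MTOA 3 months to 3 years,
# LTOA 12 to 25 years. Durations outside these stated ranges are not
# assigned a category here.

def open_access_category(duration_months: float) -> str:
    """Return the open access category for a requested duration."""
    if duration_months < 1:
        return "STOA"                       # short term: less than 1 month
    if 3 <= duration_months <= 36:
        return "MTOA"                       # medium term: 3 months - 3 years
    if 144 <= duration_months <= 300:
        return "LTOA"                       # long term: 12 - 25 years
    return "uncategorized (between stated ranges)"

print(open_access_category(0.5))   # STOA
print(open_access_category(12))    # MTOA
print(open_access_category(180))   # LTOA
```

The same shape applies to intra-state access, except that the boundaries come from the respective state regulations rather than the fixed values used here.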
EMERGING INDUSTRY STRUCTURE
[Figure: GENCOs (generation) supply power, via traders and open access on the transmission network, to DISCOMs, which in turn supply power, via traders and open access on the distribution network, to customers.]
Fig 1 Emerging industry structure.
6. POWER MARKET
Wholesale transactions for electric power globally are through spot contracts, forward and future contracts, and long-term bilateral contracts.[2,6]

TYPES OF TRADING
Bilateral Agreements
Banking Agreements
Power Exchange
Other available resources.

6.1 BILATERAL AGREEMENT
An agreement in which each of the parties to the contract makes a promise or promises to the other party. Bilateral agreements are of two types:
Bilateral Import
Bilateral Export
These can be done on a real-time basis, day-ahead basis and firm basis.
Real time: agreements done on the same day, sometimes 1½ hours before the scheduling of power, as it takes only 1½ hours to get the power scheduled, depending on the availability of transmission capacity.
Day-ahead basis: agreements which are done one day in advance.
Firm basis: agreements for which open access has been applied for in advance; maximum tenure 3 months.

6.2 BANKING AGREEMENT
An agreement between two parties in which either party agrees to supply power to the other party for a specified tenure and, as per the agreement, the power is returned by the consuming party in the specified time period. These agreements are also of two types:
Day-ahead banking
Firm based

6.3 POWER EXCHANGE
A competitive wholesale spot trading arrangement that facilitates the selling and buying of electricity.
i. It is an organized market that facilitates trade in standardized hourly and multi-hourly contracts.
ii. It develops a marginal cost for its energy transactions, i.e. a price index.
iii. Power exchanges are an 'energy only market', since they do not take into account any technical aspects like transmission constraints or capacity payments.
iv. Bids on an exchange only contain quantities and prices for a particular period.
v. An exchange is absolutely neutral towards the market because its rules apply to both sides of the transaction.
vi. A power exchange is a voluntary market place.
vii. Competition in a power exchange's spot market occurs by generators, distributors, traders and large consumers submitting bids for buying and selling electricity.
viii. Each sale bid specifies the quantity and the minimum price at which the seller is willing to supply the energy.
ix. Conversely, each buy bid specifies the desired quantity and the maximum price at which the buyer is willing to buy the energy.
x. The power exchange matches supply and demand and publishes a market-clearing price.
xi. Power exchanges have trading rules, which cover the setting of prices, delivery, clearing, type of product, timing etc.
xii. The role of a power exchange is to facilitate the trade of short-term products.

7. ROLE OF POWER EXCHANGES
Power exchanges in India were conceptualized in 2005. There was a need for a marketplace where buyers and sellers could meet and trade power with genuine price discovery. The stepping stone for establishing such an exchange was laid in the Electricity Act, 2003, which introduced the concept of non-discriminatory open access through provisions for promoting competition in the market. With the stage set by the Electricity Act, the country's power markets have been witnessing significant innovation. This has been furthered by positive regulatory moves to create a vibrant market and supported by the efforts of market operators to bring out new products and solutions that benefit consumers, suppliers and the sector as a whole. Before the operationalization of power exchanges, the alternatives for purchasing short-term power included the unscheduled interchange (UI) market (where prices were volatile) and over-the-counter (OTC) trading mechanisms (which typically have high transaction costs and non-standardized contracts). Although the OTC mechanisms continue to serve an important function, consumers wanted a platform that allowed them to enter standardized contracts, took care of counterparty risks, and provided fixed acceptable future electricity price signals. The customer demand for such contracts led to the evolution of power exchanges.
Recognizing the fact that price signals from an organized market will promote competition and investments, the Central Electricity Regulatory Commission initiated the process of organizing the electricity market by issuing guidelines for setting up and operating power exchanges. In June 2008, the country's first power exchange, Indian Energy Exchange (IEX), commenced operations, followed by Power Exchange India Limited in October 2008. The power exchanges were designed to make electricity markets more transparent, efficient and competitive. The multi-buyer and multi-seller environment, along with access to transmission and distribution networks, increases the responsiveness of demand and supply to price signals. The overarching objective of achieving higher efficiency forms the basis of such platforms. Today, the power exchanges account for 30 per cent of the power transacted in the short-term market, thereby serving as a valuable link in bridging the power demand-supply gap.
The IEX is the leading energy trading platform with a 90 per cent market share. It started operations with a handful of
participants. Over the past five to six years, the number of participants on the exchange has increased to up to 2,600, comprising 27 states, five union territories and 500 generators. Of these, over 2,000 are industrial consumers. The IEX provides a platform for trading power in the day-ahead market (DAM).[3,7]

8. TRADING CHALLENGES IN THE INDIAN POWER SECTOR SCENARIO
Power trading has been a subject of discussion right from the very first day of its existence. Initially, in 2002, when trading was introduced, people in the power sector were quite skeptical about its survival in the near future. There have been a number of roadblocks in the path of power trading in the Indian power sector. Some of the major causes of the dilemma about the germination of power trading are the low creditworthiness of the distribution utilities, open access restrictions, rising fuel costs, low liquidity in the market etc.
• The sorry plight of the DISCOMs is known to everyone who is directly or indirectly related to the power sector. Huge debts of distribution utilities are creating a wider gap between the various development aspects of the power sector. IPPs and various other GENCOs are not ready to wait for the overdue payments from the DISCOMs. Moreover, traders do not have any option to disconnect the supply once the deal has been finalized. Supplying power without ensuring an adequate payment security mechanism is a violation of the trading license.
• The trading margin involves a thin percentage of around 1%, and with the current picture of cash flows of DISCOMs, many traders may exit the business of trading, i.e. working capital of traders > total capitalization.
• Limited implementation of Open Access: various states (Tamil Nadu, Orissa etc.) are restraining from Open Access.
• Imposition of high cross-subsidy charges is also coming as a hindrance in the road to Open Access implementation.
• Another problem is the large volume of unscheduled interchange arising in connection with ABT. Suppliers and distribution companies are nowadays using this as a commercial mechanism to use this power illegally in literal terms. Appropriate steps need to be taken to reduce UI margins below 1% of the total power generated (currently around 2-3%).
• Another roadblock is the reduction in the availability of power from the generating companies in terms of volume. IPPs and captive power producers have taken a back foot in this scenario.
• Although trading licenses have been issued to a number of power traders, in actual terms there are very few active power traders, thereby creating a situation of monopolistic markets.[5]

9. WHAT IS AVAILABILITY BASED TARIFF
The term Availability Tariff, particularly in the Indian context, stands for a rational tariff structure for power supply from generating stations on a contracted basis. Power plants have fixed and variable costs. The fixed cost elements are interest on loan, return on equity, depreciation, O&M expenses, insurance, taxes and interest on working capital. The variable cost comprises the fuel cost, i.e., coal and oil in the case of thermal plants and nuclear fuel in the case of nuclear plants. In the Availability Tariff mechanism, the fixed and variable cost components are treated separately. The payment of fixed cost to the generating company is linked to the availability of the plant, that is, its capability to deliver MWs on a day-by-day basis. The total amount payable to the generating company over a year towards the fixed cost depends on the average availability (MW delivering capability) of the plant over the year. In case the average actually achieved over the year is higher than the specified norm for plant availability, the generating company gets a higher payment. In case the average availability achieved is lower, the payment is also lower. Hence the name 'Availability Tariff'. This is the first component of Availability Tariff, and is termed the 'capacity charge'.
The second component of Availability Tariff is the 'energy charge', which comprises the variable cost (i.e., fuel cost) of the power plant for generating energy as per the given schedule for the day. It may specifically be noted that the energy charge (at the specified plant-specific rate) is not based on actual generation and plant output, but on scheduled generation. In case there are deviations from the schedule (e.g., if a power plant delivers 600 MW while it was scheduled to supply only 500 MW), the energy charge payment would still be for the scheduled generation (500 MW), and the excess generation (100 MW) would get paid for at a rate dependent on the system conditions prevailing at the time. If the grid has surplus power at the time and the frequency is above 50.0 cycles, the rate would be lower. If the excess generation takes place at a time of generation shortage in the system (in which condition the frequency would be below 50.0 cycles), the payment for extra generation would be at a higher rate.[3,4]

10. WHY WAS ABT NECESSARY
Prior to the introduction of Availability Tariff, the regional grids had been operating in a very undisciplined and haphazard manner. There were large deviations in frequency from the rated frequency of 50.0 cycles per second (Hz). Low frequency situations result when the total generation available in the grid is less than the total consumer load. These can be curtailed by enhancing generation and/or curtailing consumer load. High frequency is a result of insufficient backing down of generation when the total consumer load has fallen during off-peak hours. The earlier tariff mechanisms did not provide any incentive either for backing down generation during off-peak hours or for reducing consumer load / enhancing generation during peak-load hours. In fact, it was profitable to go on generating at a high level even when the consumer demand had come down. In other words, the earlier tariff mechanisms encouraged grid indiscipline. The Availability Tariff directly addresses these issues. Firstly, by giving incentives for enhancing the output capability of power plants, it enables more consumer load to be met during peak load hours. Secondly, backing down during off-peak hours no longer results in financial loss to generating stations, and the earlier incentive for not backing down is neutralized. Thirdly, the shares of beneficiaries in the Central generating stations acquire a meaning which was previously missing. The beneficiaries now have well-defined entitlements, and are able to draw power up to the specified limits at normal rates of the respective power plants. In case of over-drawal, they have to pay at a higher rate during peak load hours, which discourages
them from overdrawing further. This payment then goes to beneficiaries who received less energy than was scheduled, and acts as an incentive/compensation for them.[4]

11. HOW DOES IT BENEFIT
The mechanism has dramatically streamlined the operation of regional grids in India. Firstly, through the system and procedure in place, constituents' schedules get determined as per their shares in Central stations, and they clearly know the implications of deviating from these schedules. Any constituent which helps others by under-drawal from the regional grid in a deficit situation gets compensated at a good price for the quantum of energy under-drawn. Secondly, the grid parameters, i.e., frequency and voltage, have improved, and equipment damage has correspondingly reduced. During peak load hours, the frequency can be improved only by reducing drawals, and necessary incentives are provided in the mechanism for the same. The high frequency situation, on the other hand, is being checked by encouraging reduction in generation during off-peak hours. Thirdly, because of the clear separation between fixed and variable charges, generation according to merit order is encouraged and pithead stations do not normally have to back down. The overall generation cost accordingly comes down. Fourthly, a mechanism is established for harnessing captive and co-generation and for bilateral trading between the constituents. Lastly, Availability Tariff, by rewarding plant availability, enables more consumer load to be catered at any point of time.[4]

12. CONCLUSION
This paper gives an idea of trading in the power market. The important key issues and challenges in this field are also critically analyzed. Open access of power is an important reform in the power sector, but a transparent mechanism must be developed so that it will be able to mitigate the liability on distribution utilities. Power quality and demand side management issues under electricity trading are also a real challenge in the market. There is a strong need for power tariff revision.

REFERENCES
[1] Prabodh Bajpai and S. N. Singh, "Electricity Trading in Competitive Power Market: An Overview and Key Issues", International Conference on Power Systems, ICPS 2004, Kathmandu, Nepal (P110).
[2] S. K. Soonee, "Realizing a Collective Vision Through Non-Cooperation", Workshop on Electricity Market in India and Learning from Developed Markets, India Habitat Centre, Delhi, Mar 1-2, 2005.
[3] C. V. J. Verma, President, Council of Power Utilities, "Demand Side Management", The 11th Annual Asia Power & Energy Congress, 31 March - 4 April 2008, Grand Hyatt, Singapore.
[4] Bhanu Bhushan, "ABC of ABT: A Primer on Availability Tariff".
[5] Power Trading Corporation of India Limited, concept and FAQs (www.ptcindia.com).
[6] S. N. Singh, "Market Power", A Short Term Course on Electric Power System Operation and Management in Restructured Environment, IIT Kanpur, India, pp. A57-A70, July 21-25, 2003.
[7] Indian Energy Exchange website (www.iexindia.com).
[8] The Electricity Act, 2003, published in the Gazette of India. India: Universal Law Publications Company Pvt. Ltd.
Review Paper on Wireless Power Transmission Methods
Ramandip Singh
A.P., Electrical Engg. Deptt.
B.G.I.E.T Sangrur (Punjab)
ramansohi79@gmail.com
Er. Yadvinder Singh
Lect., Electrical Engg. Deptt.
Bhai Gurdas Polytechnic College, Sangrur
yadvinder87@gmail.com

ABSTRACT
Wireless power transmission is one of the emerging fields of engineering. The requirement for such technology is increasing with the growing applications of mobile devices, because these devices require frequent recharging, and a wired connection limits their mobility and adds inconvenient wiring. Since wireless power transmission combines many theories and is explained by many methods, this paper discusses the different methods used for wireless electricity; the history of wireless transmission systems is also discussed.

Keywords: Wireless Power Transmission (W.P.T.), Nikola Tesla.

1. INTRODUCTION
Wireless power transfer is a method of transferring electric energy from a power source to an electrical load without an artificial conductor. Wireless transmission is useful in cases in which connecting lines are inconvenient, dangerous or impossible. The problem of wireless transmission of energy differs from that of wireless telecommunication, such as radio. In radio, the proportion of energy received becomes critical only when it is too low to distinguish the signal from background noise. With wireless power, efficiency is the more significant parameter: a large part of the energy sent out by the generating unit must arrive at the receiver or receivers to make the system economical.
The most common form of wireless power transmission is carried out via direct induction, followed by resonant magnetic induction. Other methods under consideration include electromagnetic radiation in the form of microwaves or lasers [2] and electrical conduction through natural media [3].

2. BASIC DESIGN OF WIRELESS POWER SYSTEM
Figure 1 depicts a high-speed rectifier realizing high transfer efficiency relating to transmitter and receiver.

Figure 1. Wireless power system

William C. Brown, the pioneer in wireless power transmission technology, designed and developed a unit and demonstrated how power can be transferred through free space by microwaves. The concept of the Wireless Power Transmission System is explained with the functional block diagram shown in figure 2. On the transmission side, the microwave power source generates microwave power, and the output power is controlled by electronic control circuits. The waveguide ferrite circulator, which protects the microwave source from reflected power, is connected with the microwave power source through the coax-waveguide adaptor. The tuner matches the impedance between the transmitting antenna and the microwave source. The attenuated signals are then separated, based on the direction of signal propagation, by the directional coupler. The transmitting antenna radiates the power uniformly through free space to the rectenna.
On the receiving side, a rectenna receives the transmitted power and converts the microwave power into DC power. An impedance matching circuit and filter are provided to set the output impedance of the signal source equal to that of the rectifying circuit. The rectifying circuit, consisting of Schottky barrier diodes, converts the received microwave power into DC power. The primary components of Wireless Power Transmission are the microwave generator, the transmitting antenna and the receiving antenna.

Figure 2 Flow and components of Wireless power System

The key for the present world is to save energy and related resources in organizations as well as at home. Energy costs account for a huge portion of most companies' operating expenses, so monitoring, controlling and conserving a building's lighting, heating and cooling, and other energy-hungry systems can lead to substantial savings. Several energy management vendors report that customers have shrunk their power bills by at least 30 percent. Saving energy also means lowering your carbon footprint, which could help reduce carbon taxes and promote a green image, another plus for business.
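The transmitter-to-rectenna chain described in this section (DC source, microwave generator, waveguide/tuner, transmitting antenna, free-space link, rectenna) can be summarized as a product of stage efficiencies. A minimal sketch follows; the stage names mirror the block diagram, but the numeric values are illustrative assumptions, not measurements from the paper.

```python
# End-to-end DC-to-DC efficiency of a microwave power link as the
# product of its stage efficiencies. Values below are assumed for
# illustration only.

stages = {
    "dc_to_microwave_generator": 0.70,  # magnetron / solid-state source
    "waveguide_and_tuner":       0.95,  # circulator, adaptor, matching
    "beam_collection":           0.85,  # fraction of beam hitting rectenna
    "rectenna_rf_to_dc":         0.90,  # Schottky-diode rectification
}

efficiency = 1.0
for stage, eta in stages.items():
    efficiency *= eta  # each stage multiplies down the delivered power

print(f"End-to-end efficiency: {efficiency:.1%}")
```

Because the stages multiply, improving any single weak stage (here the source conversion) raises the whole chain proportionally, which is why rectenna efficiencies above 90% alone do not guarantee an efficient system.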
3. METHOS OF TRANSMISSION Power beaming by microwaves has the difficulty that for most
space applications the required aperture sizes are very large due
to the diffraction limited antenna directivity. For example, the
Some of the most common methods used for the wireless 1978 NASA Study of solar energy requires satellite a 1-km
transmission of electricity are diameter transmitting antenna, and a 10-km diameter receiving
rectenna for a microwave beam at 2.45 GHz. [7] These sizes
1. Electrostatic induction can be reduced to something shorter wavelengths, although
short wavelengths may have difficulties with atmospheric
2. Electromagnetic radiation absorption and beam blockage by rain or water droplets.
Because of the "thinned array curse," it is not possible, a narrow
3. Microwave method beam which make by combining the beams of several smaller
satellites.
4. Laser method
For earthbound applications a large area 10 km diameter
5. Transformer coupling receiving array allows large total power levels are used as
proposed in the low power density for human electromagnetic
6. Electrical conduction exposure safety. A person certainly distributed power density of
1 mW/cm2 over a 10-km, (4.8 km) induced on the surface.
Electrical Conduction and the current flow through the upper
air layers starting at a barometric pressure of approximately 130
3.1 ELECTROSTATIC INDUCTION mm Hg is possible by the process of the atmospheric ionization
An electrostatic induction or capacitive coupling, the passage by creating capacitively coupled plasma discharge. A global
of electric energy by a dielectric. In practice, an electric field system for "the transmission of electrical energy without wires"
gradient or differential capacitance between two or more called the World Wireless System, dependent upon the high
insulated blocks, plates, electrodes, or nodes, which are electrical conductivity of the plasma and the high electrical
elevated above a conductive ground plane. The electric field is conductivity of the earth was proposed already in 1904.
generated by feeding the sheets with a high potential, high-
frequency AC power supply. The capacitance between two In this way, illuminated electric lamps and electric motors can
terminals and a higher powered device form a voltage divider. be rotated at moderate distances. The transferred energy can be
found in greater distances. The electrical energy transmitted through electrostatic induction is used by a receiving device such as a wireless lamp [10] [11] [12]. Nikola Tesla demonstrated the illumination of wireless lamps by energy coupled into them through an alternating electric field.

3.2 ELECTROMAGNETIC RADIATION

Far-field techniques achieve longer ranges, often of several kilometers, wherein the distance is substantially greater than the diameter of the device(s). The main reason radio waves and optical devices reach greater distances is that far-field electromagnetic radiation can be shaped (with high-directivity antennas or a well-collimated laser beam) to match the reception area, so that almost all of the radiated power is delivered at long range. The maximum directivity of antennas is physically limited by diffraction.

3.3 MICROWAVE METHODS

Directional transmission using radio waves permits long-distance power transfer at the shorter wavelengths of electromagnetic radiation, typically in the microwave range. A rectenna is used to convert the microwave energy back into electricity; rectenna conversion efficiencies in excess of 95% have been realized. Power beaming using microwaves has been proposed for the transfer of energy from solar power satellites orbiting the earth, and the beaming of power to spacecraft leaving orbit has also been considered [2] [6].

3.4 LASER METHODS

Electromagnetic radiation in or near the visible spectrum (tens of microns (um) to tens of nm) can also transmit power: electric current is converted into a laser beam, which is then directed at a solar-cell receiver that converts it back into usable electrical energy [9]. This mechanism is usually called "power beaming" because the power is beamed at a receiver.

Laser "power beaming" technology has been studied primarily for military weapons and space applications, and is now being developed for commercial and consumer-electronics low-power applications. A wireless energy transfer system using lasers for consumer products has to satisfy laser safety requirements.

3.5 TRANSFORMER COUPLING

Energy is transferred between two coils through their magnetic fields, but in this method the distance between the two coils must be very small.

3.6 ELECTRICAL CONDUCTION

This is the disturbed-charge-of-ground-and-air method: the wireless transmission of alternating current through the earth, with an equivalent electrical displacement through the air above. It serves areas larger than those reachable by the resonant electrical induction methods, at power levels lower than those of the electromagnetic radiation methods. Electrical energy can be transmitted through the inhomogeneous earth with low loss because the net resistance between earth antipodes is less than 1 ohm [3]; the electrical conduction takes place predominantly through the oceans, metallic ore bodies and similar subsurface structures.
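The radiative methods of Sections 3.2-3.4 are all governed by the Friis free-space transmission equation, P_r = P_t G_t G_r (lambda / 4*pi*d)^2. A minimal Python sketch follows; the 2.45 GHz frequency, 30 dB gains, 1 kW power and 1 km distance are illustrative assumptions, not figures from this paper:

```python
import math

def friis_received_power(p_t_w, g_t_db, g_r_db, freq_hz, dist_m):
    """Friis free-space link budget: P_r = P_t * G_t * G_r * (lam / (4*pi*d))**2."""
    lam = 3.0e8 / freq_hz            # wavelength in metres
    g_t = 10 ** (g_t_db / 10)        # transmit antenna gain, dB -> linear
    g_r = 10 ** (g_r_db / 10)        # receive antenna gain, dB -> linear
    return p_t_w * g_t * g_r * (lam / (4 * math.pi * dist_m)) ** 2

# Hypothetical microwave link: 1 kW at 2.45 GHz over 1 km with 30 dB antennas
p_r = friis_received_power(1000.0, 30.0, 30.0, 2.45e9, 1000.0)   # ~0.095 W
```

Raising the antenna directivity (bounded by diffraction, as noted in Section 3.2) or shortening the wavelength is what pushes such a link toward useful received power; a rectenna would then rectify P_r at the efficiencies quoted in Section 3.3.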

664
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

The electric displacement takes place by electrostatic induction through the more dielectric regions, such as quartz deposits and other non-conductive minerals. Receivers draw currents through the earth, while an equivalent electric displacement occurs in the atmosphere.

This energy transfer process is suitable for the transmission of electric energy in industrial quantities and also for wireless broadband telecommunications. The Wardenclyffe Tower project was an early commercial venture for trans-Atlantic wireless telephony and proof-of-concept demonstrations of global wireless power transmission using this method. The plant was not completed due to insufficient funding.

Terrestrial transmission line with atmospheric return: single-wire-with-earth-return power transmission systems rely on a single insulated line, with the earth completing the circuit. In emergencies, high-voltage direct current power transmission systems can also operate in the 'single wire with earth return' mode. Removing the insulated wire altogether, and transmitting alternating current at high potential through the earth with an atmospheric return line, is the basis of this method for the wireless transmission of electrical energy.

The method depends on the passage of electric current through the earth and, for the return path, through the upper troposphere and the stratosphere. This flow is caused by electrostatic induction up to a height of about 3 miles.

4. HISTORY OF WIRELESS POWER TRANSMISSION

Nikola Tesla, who invented radio, is referred to as the "Father of Wireless". Tesla was the first to conceive the idea of Wireless Power Transmission, and demonstrated "the transmission of electrical energy without wires", which depends upon electrical conductivity, as early as 1891 [2].

• In 1893, Tesla demonstrated the illumination of vacuum bulbs without using wires for power transmission at the World Columbian Exposition in Chicago.

• The Wardenclyffe tower shown in Fig. 3 was designed and constructed by Tesla mainly for wireless transmission of electrical power rather than telegraphy [3].

Fig. 3. Wardenclyffe Tower, including the partially-complete cupola

• The world's first fuel-free airplane powered by microwave energy from the ground was reported in 1987 in Canada. This system is called SHARP (Stationary High-Altitude Relay Platform) [5].

• A physics research group led by Prof. Marin Soljacic at the Massachusetts Institute of Technology (MIT) demonstrated wireless powering of a 60 W light bulb with 40% efficiency at a 2 m (7 ft) distance, using two 60 cm-diameter coils, in 2007 [7].

• In 2008, Intel reproduced the MIT group's experiment by wirelessly powering a light bulb with 75% efficiency at a shorter distance.

• The MIT team thus experimentally demonstrated wireless power transfer, potentially useful for powering laptops and cell phones without any cords.

• Imagine a future in which wireless power transfer is feasible: cell phones, household robots, mp3 players, laptop computers and other portable electronics capable of charging themselves without ever being plugged in, freeing us from that final, ubiquitous power wire. Some of these devices might not even need their bulky batteries to operate.

5. CONCLUSION

This paper presented a short review of the techniques used for the wireless transmission of power; each has its own advantages and disadvantages. Hence the selection of a technology depends upon a number of parameters, such as the required power, distance, medium, application, complexity and cost.

REFERENCES

[1] A radio transmitter can produce waves having a power of several kilowatts or even megawatts, but this energy scatters in all directions. Only a small fraction, less than a millionth part, of the transmitted energy is received. However, this is sufficient to yield the intelligence.
[2] G. A. Landis, "Applications for Space Power by Laser Transmission," SPIE Optics, Electro-optics & Laser Conference, Los Angeles CA, 24-28 January 1994; Laser Power Beaming, SPIE Proceedings Vol. 2121, pp. 252-255.
[3] Corum, K. L. and J. F. Corum, "Nikola Tesla and the Diameter of the Earth: A Discussion of One of the Many Modes of Operation of the Wardenclyffe Tower," 1996.
[4] Dave Barman and Joshua Schwanecke (December 2009). "Understanding Wireless Power".
[5] Steinmetz, Charles Proteus (29 August 2008). Elementary Lectures on Electric Discharges, Waves, and Impulses, and Other Transients, 2nd Edition, McGraw-Hill Book Company, Inc., 1914. Google Books. Retrieved 4 June 2009.
[6] "Wireless charging, Adaptor die, Mar 5th 2009". The Economist. 7 November 2008. Retrieved 4 June 2009.
[7] Buley, Taylor (9 January 2009). "Wireless technologies are starting to power devices, 01.09.09, 06:25 pm EST". Forbes. Retrieved 4 June 2009.

[8] "Alternative Energy, From the unsustainable...to the unlimited". EETimes.com. 21 June 2010.
[9] Teck Chuan Beh, Takehiro Imura, Masaki Kato, Yoichi Hori, "Basic Study of Improving Efficiency of Wireless Power Transfer via Magnetic Resonance Coupling Based on Impedance Matching", Industrial Electronics (ISIE), 2010 IEEE.
[10] Teck Chuan Beh, Masaki Kato, Takehiro Imura, Yoichi Hori, "Wireless Power Transfer System via Magnetic Resonant Coupling at Restricted Frequency Range", http://mizugaki.iis.uokyo.ac.jp/paper_2010/papers/beh/D_bumon2010Beh.pdf
[11] Teck Chuan Beh, Masaki Kato, Takehiro Imura, Yoichi Hori, "Wireless Power Transfer System via Magnetic Resonant Coupling at Fixed Resonance Frequency - Power Transfer System Based on Impedance Matching", EVS-25, Shenzhen, China, Nov. 5-9, 2010.
[12] Takehiro Imura, Yoichi Hori, "Maximizing Air Gap and Efficiency of Magnetic Resonant Coupling for Wireless Power Transfer Using Equivalent Circuit and Neumann Formula", IEEE Transactions on Industrial Electronics, Oct. 2011.

Assessment of Bio-energy Potential for Distributed Power Generation from Crop Residue in Indian State Punjab

Ram Singh
Asstt. Professor, Deptt. of Electrical Engg.
BHSBIET Lehragaga (Sangrur)
singhsran08@gmail.com

ABSTRACT

In India, renewable energy sources are now developing at a fast speed. Energy generation using biomass resources in particular is one of the areas on which researchers and governments are concentrating. Agricultural residue/waste is the main source of biomass energy. A large amount of crop residue biomass is generated in India, and the amount of residue varies geographically. As Punjab is the agriculturally rich state of India, the information collected in this study concerns the assessment of the biomass potential in Punjab for power generation using local biomass power plants to meet the demand and supply requirements. A sample study is presented in this paper. By using the concept of power generation at the local (distributed) level, the economic and social status of society will be raised, as this will create employment for the local people.

Keywords: Distributed Generation, Bio-mass Power Generation, Cluster of Villages, Crop Residue

1. INTRODUCTION

Distributed generation (DG) is generally regarded as small generators, both in terms of power output and physical size, connected to the existing power distribution grid. The difference between distributed generation and power plants operating in the modern transmission system is at least partly semantic. Although both types of generators operate in an interconnected system, power plants on transmission systems are generally located far from the loads they serve and are operated by utilities, whereas distributed generators are typically located on-site, close to the loads they serve, and could be operated independently by a customer or independent power producer instead of a utility company. Customers are, however, limited in the amount of control they can exert on their own generators. Nevertheless, the technology offers several benefits to both consumers and producers of electrical power and has resulted in increased research on and usage of DG technologies.

Distributed Generation
Distributed Generation, also known as on-site generation, dispersed generation, embedded generation, decentralized generation, decentralized energy or distributed energy, generates electricity from many small energy sources. It is a fairly new concept in the economics literature on electricity markets, but the idea behind it is not new at all. Generally, the term Dispersed or Distributed Generation refers to any electric power production technology that is integrated within distribution systems. Distributed generators are connected to the medium or low voltage grid. They are not centrally planned and they are typically smaller than 30 MW. The exact definition of distributed generation (DG) varies somewhat between sources and capacities; however, it is generally and summarily defined as any source of electric power of limited capacity, directly connected to the power system distribution network, where it is consumed by the end users.

2. AGRI-RESIDUE POTENTIAL IN PUNJAB

Punjab, along with the neighboring state Haryana, is referred to as the "Grain Bowl of India". Agriculture is the major economic activity of the state, sustaining nearly 70-80% of the total population. Of the total geographical area, 84% is under agriculture use, 0.18% is cultivable wasteland and 8.94% is not available for cultivation within the state. The cropping intensity in the state averages 188%, which is considered one of the highest within the country. This is due to the significant availability of ground water for irrigation purposes in the state. It has been estimated that around 55-60% of the net sown area is irrigated in the state.

With only 1.6% of the geographical area of the country, Punjab produces approximately 22% of the wheat, 10% of the rice and 13% of the cotton of the total production of these crops in the country. The total food grain production in Punjab has increased significantly over the last few decades, especially in the post-Green Revolution period. In 1970-1971, production of food grains was 7.305 Mt, which increased to 32.818 Mt in 2010-2011, registering a fivefold increase. Wheat and rice played a major role in pushing up agricultural production. The production of rice, which was 6.506 Mt in 1990-1991, increased to 12.486 Mt in 2010-2011, an increase of 91%. Similarly, production of wheat rose from 12.159 Mt in 1990-1991 to 18.220 Mt in 2010-2011, an increase of 49% [3] [4].

There are two major agricultural seasons in the state: "Rabi" (winter crop) and "Kharif" (summer crop). The major crops grown during the Rabi season are wheat, arhar, mustard (sarson), sunflower, cotton, dry chillies and sesamum, while during the Kharif season paddy, bajra, jowar, maize, moong, groundnut and sugarcane are the important crops. Apart from these crops, there are various other crops such as berseem, vegetables, potatoes, tomatoes, green manure, etc., which are categorized as 'insignificant crops'. Here, it is important to mention that in the present work only major and minor crops have been considered. A crop is considered major if its crop area fraction was 10% or above of the total cultivated area. A crop is considered minor if it was not covered under the major crops and had a crop area fraction of 2.5% or above. Crops that qualify as neither major nor minor were considered 'insignificant crops'. Data for the

insignificant crops is not calculated in the present work due to the miniscule contribution of such crops to the total biomass production. The total biological residue generation was expressed in quintals per season at 10% moisture content, and the CRR (Crop to Residue Generation Ratio) was measured in terms of weight and averaged. The residues generated from the major crops consist of wheat straw, mustard stalk, cotton stalk, paddy straw and husk, maize husk and cob, and sugarcane leaves and trash [2].

3. TYPES OF AGRI-RESIDUE

Electrical power can be generated from agri-residue by setting up agri-residue based small power plants in rural areas; one such plant can satisfy the power needs of a cluster of villages. Agri-residues like rice straw, rice husk, soya bean stalks, mustard stalks, mustard husk, wheat straw, cotton stalks, cotton kernels, maize stalks, maize cobs, sunflower stalks, grass, sugarcane trash and bagasse can be fruitfully utilized in power generation. The different types of available agri-residue are shown in Figure 1.

[Figure 1: Different types of available agri-residue]

4. INTRODUCTION OF BIOMASS BASED POWER PLANT BY VIATON ENERGY PVT LTD.

In the year 2013, VIATON ENERGY PVT LTD launched a 10 MW biomass based power plant at Khokhar Khurd, District Mansa, in the state of Punjab. The plant started generating power in the year 2014.

The plant has done remarkable work and adopted a couple of schemes to support energy conservation. The plant consists of a nominal-capacity fluidized bed boiler with superheater outlet parameters. This boiler produces steam by burning the various biomass fuels available, and the steam thus produced runs a 10 MW turbo-alternator of the extraction-cum-condensing type, which produces power at 11 KV. The turbo-generator is a multistage, impulse type, extraction condensing machine with a single-reduction parallel-shaft gearbox. The alternator operates at 11 KV, A.C., 3-phase, 50 Hz, with a power factor of 0.8 (lagging).

The 11 KV power generated is stepped up to 66 KV with the help of a step-up transformer as per PSPCL guidelines and, with the help of the required synchronization and other equipment, is fed to the grid of M/s Punjab State Power Corporation Limited at Mansa grid substation, which is approx. 10 Km from the power plant location.

Various types of agri-residue are used in the plant, as given below:
1. Cotton Stalks
2. Mustard Husk
3. Maize Stalks & Cob
4. Paddy Straw
5. Rice Husk
6. Sugarcane Tops & Leaves

5. AGRI-RESIDUE BASED ELECTRIC POWER POTENTIAL AND CLUSTER OF VILLAGES

The State of Punjab, known all over the world as a rich agricultural state, contributes heavily to the national grain stocks. Its agriculture can also show the way for power generation. Agri-residue, the waste material left after separating grains from the crop, can be used to generate electricity. In this study, an effort has been made to explore the feasible electric power potential of the available agri-residue in a cluster of villages in district Sangrur, Tehsil Lehra, of Punjab State. An agricultural survey was carried out to cover both the Rabi and Kharif seasons (2013-2014).

5.1. Profile of Cluster

• Cluster of villages: Raidharana, Jhaloor, Bhutal Kalan, Kartarpura alias Changaliwala, Gaga, Ramgarh Sandhuan, Khandebad, Lehal Kalan A, Lehal Kalan B, Lehal Khurd

Table 1: Profile of Cluster

District & Tehsil          Sangrur, Lehra
Number of Villages         10
Nearest Towns              Lehragaga, Sunam
Nearest Railway Station    Lehragaga, 05 Km away
Nearest Airport            Chandigarh, 180 Km away
Main Crops                 Rabi: Wheat, Berseem, Sarson
                           Kharif: Paddy, Cotton, Maize, Sugarcane

5.2. Assessment of Agri-Residue in the Cluster of Villages

Data pertaining to the different locally available crops was collected from the local circle Patwaris (officials of the land and revenue department) of the said cluster. By using the grain production per acre and the residue to production ratio (R.P.R.), the total agri-residue per village and then the total agri-residue of the whole cluster was calculated. Table 2 shows the different

types of local crops, their agri-residue components, residue production ratios, grain production and agri-residue production.

Table 2

Sr. No.  Crop       Agri-Residue   Residue Production  Grain Production  Agri-Residue Production
                    Component      Ratio (RPR)*        (Quintal/acre)*   (Quintal/acre)
1.       Cotton     Stalks         4.2                 12                50.4
2.       Paddy      Straw          1.86                32                59.52
3.       Paddy      Husk           -                   -                 9
4.       Wheat      Straw          1.75                29                50.75
5.       Maize      Stalks & Cob   2.27                18                40.86
6.       Sugarcane  Tops & Leaves  0.3                 400               75**
7.       Sarson     Straw          2.7                 20                54
8.       Barseem    -              -                   -                 -

* Local survey & rich experience of farmers
** Approximated to 75 due to high moisture content

Paddy, wheat, maize and cotton are the major agri-residue contributors. The total agri-residue in the cluster was calculated to be 120154.8 tonnes for the year 2013 (Rabi and Kharif). The crop-wise total agri-residue production is presented in Table 3.

Table 3

Sr. No.  Crop       Agri-residue    Total agri-residue, crop-wise
                    Component       (Quintals, from cluster of 10 villages)
1.       Cotton     Stalks          13154.40
2.       Paddy      Straw           959105.30
3.       Paddy      Husk            145026.00
4.       Wheat      Straw           40834.00
5.       Maize      Stalk & Cobs    8826.30
6.       Sugarcane  Tops & Leaves   23100.00
7.       Sarson     Straw           11502.00
         TOTAL                      1201548.00

5.3 CALCULATIONS FOR AGRI-RESIDUE BASED POWER PLANTS

Calculation work was carried out for a 10 MW capacity agri-residue based (steam-boiler system) power plant, taking an annual load factor of 0.88. The fuel requirement per year for a 7.5 MW steam boiler plant was calculated to be 83950±10% MT. The total agri-residue production per year in the said cluster is 1,20,154.8 MT (12,01,548 quintals). By using approximately 78% of this agri-residue, the fuel demand of the 10 MW steam-boiler plant can be satisfied. The fuel used for this system is mainly cotton stalks, paddy straw and rice husk. The cluster of 10 villages gives cotton stalks equal to 1315.44 MT, paddy straw equal to 95910.53 MT and rice husk equal to 1450.26 MT. The total agri-residue fuel came out to be 98676.23 MT (cotton stalks, paddy straw and rice husk only). By using approximately 94% of the total cotton stalk, paddy straw and rice husk agri-residue per year in the said cluster, the fuel demand of the 10 MW biomass plant can be satisfied. Hence the agri-residue of the cluster of 10 villages can sufficiently meet the fuel requirements of the 10 MW plant.

6. RESULTS AND DISCUSSIONS

The study was initiated with the aim of investigating the agri-residue based electric power potential in clusters of villages, exploring the possibility of utilizing biomass for electrical power generation in Punjab state. For investigating the agri-residue based electric power potential in the cluster of villages, the total agri-residue (Rabi & Kharif) in the year 2013-14 was estimated, which came to 1,20,154.8 MT, covering all major local crops. Calculation work was carried out for a 10 MW power plant. The requirement of fuel for the 10 MW power plant came out to be 83950±10% MT. By using 78% of the total village cluster agri-residue, the fuel demand of the 10 MW (boiler system) plant can be satisfied. Power generation from the 10 MW plant is sufficient to meet the requirement of more than 20 villages, approximately. The benefits from agri-residue based power plants are multiple in terms of development, employment generation and environmental aspects.

7. CONCLUSION

• For small power plants, biomass offers a more attractive alternative energy system than conventional steam power plants.
• A cluster of 10 villages can fulfill the fuel requirements of a 10 MW agri-residue based electric power plant, which can serve the power needs of more than 20 villages, approximately.
• The annual financial benefit to the village cluster from the sale of agri-residue is approximately 1.7 times the amount of annual electrical energy consumed by the village cluster.
• Employment will be created for the youth of the rural and remote areas of the state.
• It helps and encourages distributed power generation in today's de-centralized energy market.
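The residue and fuel arithmetic of Sections 5.2-6 can be re-derived in a few lines. The following Python sketch uses only figures from Tables 2-3 and Section 5.3 (1 quintal = 0.1 metric tonne); it is an illustration, not part of the original study:

```python
# Re-derivation of the cluster totals from Tables 2-3 and Section 5.3.
# Table 2: residue-to-production ratio (RPR) and grain yield (quintal/acre)
rpr = {"cotton stalks": 4.2, "paddy straw": 1.86, "wheat straw": 1.75,
       "maize stalks & cob": 2.27, "sarson straw": 2.7}
grain_yield = {"cotton stalks": 12, "paddy straw": 32, "wheat straw": 29,
               "maize stalks & cob": 18, "sarson straw": 20}

# Residue per acre = grain yield x RPR (e.g. cotton: 12 x 4.2 = 50.4 q/acre)
residue_per_acre = {c: grain_yield[c] * rpr[c] for c in rpr}

# Table 3: total cluster residue in quintals; 1 quintal = 0.1 MT
cluster_residue_q = {"cotton stalks": 13154.40, "paddy straw": 959105.30,
                     "paddy husk": 145026.00, "wheat straw": 40834.00,
                     "maize stalk & cobs": 8826.30,
                     "sugarcane tops & leaves": 23100.00,
                     "sarson straw": 11502.00}
total_mt = sum(cluster_residue_q.values()) * 0.1   # 120154.8 MT

# Section 5.3: 10 MW plant at an annual load factor of 0.88
annual_energy_mwh = 10 * 8760 * 0.88               # 77088 MWh/year
fuel_demand_mt = 83950                             # +/-10%, from the paper
fraction_of_cluster_residue = fuel_demand_mt / total_mt
```

Running this reproduces the 1,20,154.8 MT cluster total and the roughly 77,000 MWh/year that a 10 MW plant delivers at a 0.88 load factor; the 83950 MT fuel demand is then about 70% of the cluster residue, approaching the 78% quoted in Section 6 at the +10% upper bound.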

REFERENCES

[1]. Abbasi Tasneem, Abbasi S.A. (2010), "Biomass energy and the environmental impacts associated with its production and utilization", Review Article, Renewable and Sustainable Energy Reviews, Volume 14, Issue 3, Pages 919-937, Elsevier.
[2]. Baruah, D.C. and Jain, A.K. (1998), "Distribution of Agricultural Crop Residues in India", Journal of Agriculture Engineering 35(1), pp. 7-12.
[3]. Chauhan Suresh (2012), "District wise agriculture biomass resource assessment for power generation: A case study from an Indian state, Punjab", Original Research Article, Biomass and Bioenergy, Volume 37, February 2012, Pages 205-212, Elsevier.
[4]. Chauhan Suresh (2010), "Biomass resources assessment for power generation: A case study from Haryana state, India", Original Research Article, Biomass and Bioenergy, Volume 34, Issue 9, Pages 1300-1308, Elsevier.
[5]. Rai, G.D. (2004), "Non-Conventional Energy Sources", Khanna Publishers, Delhi.
[6]. Tehsil office, Lehragaga (Sangrur), Punjab.
[7]. Viaton Energy Pvt. Ltd. (www.viaton.in)
[8]. www.wikipedia.com/Biomass power from crop residue.

Potential of Microbial Biomass for Heavy Metal Removal: A Review

Garima Mahajan
Department of Electrical Engineering
Bhai Gurdas Institute of Engineering & Technology, Sangrur
garima8mahajan@hotmail.com

Dhiraj Sud
Department of Chemistry
Sant Longowal Institute of Engineering and Technology, Longowal (Punjab)
suddhiraj@yahoo.com

ABSTRACT

The present review paper deals with an extensive literature survey exploring the potential of microbial biomass for the sequestering of toxic heavy metals from aquatic streams. Presently more than half a billion people are underprivileged and deprived of fresh water, clean air, soil and pure food. Contamination of aquatic streams due to the release of toxic metal ions is a stern issue demanding global concern. Toxic metals such as Cr (VI), Cd (II), Pb (II), Zn (II), Ni (II) and many more are being released into natural aquatic systems by various small and large scale industries such as tanneries, electroplating, galvanizing, pigment and dyes, metallurgical, paint, refining and metal processing industries. The utilization of various microorganisms such as yeast, algae and fungi helps in binding and extracting heavy metal ions such as Nickel, Cadmium, Lead and Chromium from natural as well as simulated wastewaters.

Keywords
Biosorption, Heavy metals, Microorganisms, Kinetic Studies

1. INTRODUCTION

Metals have played a critical role in the industrial development and technological advances of modern society. Most metals are not destroyed; indeed, they are accumulating at an accelerated pace due to the ever-growing industrialization. The major toxic metal ions hazardous to humans as well as other forms of life are Cr, Fe, Se, V, Cu, Co, Ni, Cd, Hg, As, Pb, Zn, etc. These heavy metals are of specific concern due to their toxicity, bio-accumulation tendency and persistency in nature [1,2,3]. In the past, several disasters became natural evidence of metal toxicity in aquatic streams, such as the Minamata tragedy due to methyl mercury contamination and the "Itai-Itai" disease caused by cadmium contamination of the Jintsu river of Japan [1,4].

Mounting health problems and the resulting concern have motivated researchers and environmentalists to find solutions that would be helpful in sequestering these toxic metals from aquatic streams. The real impetus for improvements came in the 1950s and 1960s as a result of the modification of legislation and the setting up of independent pollution prevention authorities at the global level. Conventional methodologies like chemical precipitation, ion exchangers, chemical oxidation/reduction, reverse osmosis, electrodialysis, ultrafiltration, etc. [5,6,7] were applied for the sequestering of heavy metals from aquatic streams. However, these conventional techniques have their own inherent limitations, such as low efficiency, sensitive operating conditions and the production of secondary sludge, and further the disposal is a costly affair [8]. Due to the problems mentioned above, research was intensified to look for alternative options to replace the costly and conventional methods. In the last few decades, attention has been diverted towards the utilization of microbial materials, which can be suitable candidates for metal binding. The literature survey reveals the various options that have been explored by researchers in the removal of heavy metals.

2. MICROBIAL BIOMASS FOR HEAVY METAL REMOVAL

High metal-sorbing substrates such as bacterial, fungal and algal biomasses have been explored in dead/inactive or in live form to remove heavy metals. Considerable potential was demonstrated by various researchers for these naturally occurring and abundantly available small creatures [9,10,11]. Many bacterial species (e.g. Bacillus, Pseudomonas, Streptomyces, Escherichia, Micrococcus etc.) were tested for metal uptake. Algae were considered very promising sorbing agents [12,13,14] because of their prominent sorption capability and their ready, copious availability in seas and oceans [15,16]. [17] investigated the removal of Ni from solutions by Pseudomonas alcaliphila; it was inferred that the addition of an excess amount of citrate encouraged the complex degradation as well as Ni removal. [18] evaluated the biosorption of Cr (VI) on dried vegetative cells and the spore-crystal mixture of Bacillus thuringiensis var. thuringiensis using the batch method, as a function of pH, initial metal ion concentration and temperature. The optimum pH observed for Chromium (VI) ions was 2.0. The Cr (VI) uptake of the B. thuringiensis spore-crystal mixture was 24.1%, whereas its vegetative cell metal uptake was found to be 18.0%. [19] found that the blue-green alga Spirulina sp. was capable of adsorbing one or more heavy metals including K, Mg, Ca, Fe, Sr, Co, Cu, Mn, Ni, V, Zn, As, Cd, Mo, Pb, Se and Al, in addition to the biosorption of Cr3+, Cd2+ and Cu2+ ions. [20] reported that the biomass of Enterobacter sp. J1, isolated from a local industrial wastewater treatment plant, was very effective for the sequestering of Pb, Cu and Cd ions. The species was able to take up over 50 mg of Pb per gram of dry cell, while having equilibrium adsorption capacities of 32.5 and 46.2 mg/g dry cell for Cu and Cd, respectively.

[21] isolated heavy metal resistant bacteria from the soil samples of an electroplating industry, and the bioaccumulation of Cr (VI) and Ni (II) by the isolates was also investigated. The optimum pH of 7 indicated the applicability of the isolated Micrococcus sp. for the removal of Cr (VI) and Ni

(II). [22] explored the use of powdered biomass of R. nigricans at an optimum pH of 2, a stirring speed of 120 rpm, a temperature of 45°C and a contact time of 30 minutes, and the results indicated high adsorption at low initial concentrations. [23] synthesized diethylenetriamine-bacterial cellulose (EABC) by amination with diethylenetriamine on bacterial cellulose (BC) and studied its adsorption properties for Cu (II) and Pb (II); optimization studies were also carried out. The pseudo-second-order rate model fitted well, and the adsorption isotherm was described by the Langmuir model. The ability of a biofilm of Escherichia coli supported on kaolin to remove Cr (VI), Cd (II), Fe (III) and Ni (II) from aqueous solutions was investigated in batch assays for the treatment of diluted aqueous solutions by [24]. The biosorption performance, in terms of uptake, followed the sequence Fe (III) > Cd (II) > Ni (II) > Cr (VI). Recent reports [23] studied heavy metal bioremediation by a multi-metal resistant endophytic bacterium L14 (EB L14), isolated from the cadmium hyperaccumulator Solanum nigrum L., which was characterized for its potential application in metal treatment. Within 24 h of incubation, EB L14 could specifically take up 75.78%, 80.48% and 21.25% of Cd (II), Pb (II) and Cu (II), respectively, at an initial concentration of 10 mg/L. Similar biosorption studies of heavy metals were carried out with whole mycelia of A. niger, R. oryzae and Mucor rouxii for the removal of Cd, Ni and Zn; the efficiency was found to be enhanced by treating the cell wall fractions with 4 M NaOH at 120°C.

3. CONCLUSION

In the light of the above discussion it can be concluded that microbial biomass has a great potential for sequestering heavy metal ions from aquatic streams under optimized experimental conditions. The literature further provides a platform for studies which may substitute chemical-intensive conventional methods and directs us towards greener technologies for environmental remediation.

REFERENCES

[1] Friberg, L. and Elinder, C. G. 1985. Encyclopedia of Occupational Health, Third edition, International Labor Organization, Geneva.
[2] Garg U. K, Kaur M. P, Garg V. K, Sud, D. 2007. Removal of hexavalent Cr from aqueous solutions by agricultural waste biomass. J. Hazard. Mater. 140: 60-68.
[3] Randall, J. M., Hautala, E., Waiss, A. C. Jr. 1974. Removal and recycling of heavy metal ions from mining and industrial waste streams with agricultural by-products. Proceedings of the Fourth Mineral Waste Utilization Symposium, Chicago.
[4] Kjellstrom T, Shiroishi K, Erwin, P. E. 1977. Urinary beta/sub 2/ microglobulin excretion among people exposed to cadmium in the general environment. Environ. Res. 13: 318-344.
[5] Gardea-Torresdey J. L, Gonzalez J. H, Tiemann K. J, Rodriguez O, Gamez, G. 1998. Phytofiltration of hazardous cadmium, chromium, lead, and zinc ions by biomass of Medicago sativa (alfalfa). J. Hazard. Mater. 57: 29-39.
[6] Patterson, J.W. 1985. Industrial wastewater treatment technology, second ed. Butterworth Publishers, Stoneham, MA.
[7] Zhang L, Zhao L, Yu Y, Chen, C. 1998. Removal of lead from aqueous solution by non-living Rhizopus nigricans.
[9] Volesky, B. and Holan, Z. R. 1995. Biosorption of heavy metals. Biotechnol. Progress. 11: 235-250.
[10] Gadd, G. M. and Rome, L. de. 1988. Biosorption of copper by fungal melanin. Appl. Microbiol. Biotechnol. 29: 610-617.
[11] Paknikar K. M, Paluitkar U. S, Puranik, P. R. 1993. Biosorption of metals from solution by mycelial waste of Penicillium chrysogenum. In: Torma, A.E., Apel, M.L., Brierley, C.L. (Eds.), Biohydrometallurgical Technologies, The Minerals, Metals and Materials Society, Warrendale.
[12] Vieira, R. H. S. F. and Volesky, B. 2000. Biosorption: a solution to pollution. Int. Microbiol. 3: 17-24.
[13] De Rome, L. and Gadd, G. M. 1987. Copper adsorption by Rhizopus arrhizus, Cladosporium resinae and Penicillium italicum. Appl. Microbiol. Biotechnol. 26: 84-90.
[14] Luef E, Prey T, Kubicek, C. P. 1991. Biosorption of Zinc by fungal mycelial wastes. Appl. Microbiol. Biotechnol. 34: 688-692.
[15] Brady, D. and Duncan, J. R. 1993. Bioaccumulation of metal cations by Saccharomyces cerevisiae. In Biohydrometallurgical Technologies; Torma, A. E., Apel, M. L., Brierley, C. L., Eds.; The Minerals, Metals & Materials Society: Warrendale, PA, Vol. 2, pp. 711-724.
[16] Puranik, P. R. and Paknikar, K. M. 1997. Biosorption of Lead and Zinc from Solutions using Streptoverticillium cinnamoneum Waste Biomass. J. Biotechnol. 55: 113-124.
[17] Qian J, Li D, Zhan G, Zhang L, Su W, Gao, P. 2012. Simultaneous degradation of Ni-citrate complexes and removal of nickel from solutions by Pseudomonas alcaliphila. Bioresour. Technol. 116: 66-73.
[18] Sahin I, Keskin S. Y, Keskin, C. S. 2013. Biosorption of cadmium, manganese, nickel, lead, and zinc ions by Aspergillus tamari. Desalination and Water Treatment. 1-6.
[19] Chojnacka K, Chojnacki A, Górecka, H. 2005. Biosorption of Cr3+, Cd2+ and Cu2+ ions by blue-green algae Spirulina sp.: kinetics, equilibrium and the mechanism of the process. Chemosphere 59: 75-84.
[20] Lu W. B, Shi J. J, Wang C. H, Chang, J. S. 2006. Biosorption of lead, copper and cadmium by an indigenous isolate Enterobacter sp. J1 possessing high heavy-metal resistance. Journal of Hazardous Materials. 134: 80-86.
[21] Congeevaram S, Dhanarani S, Park J, Dexilin M, Thamaraiselvi, K. 2007. Biosorption of chromium and nickel by heavy metal resistant fungal and bacterial isolates. J. Hazard. Mater. 146: 270-277.
[22] Bai, R. S. and Abraham, T. E. 2001. "Biosorption of Cr (VI) from aqueous solution by Rhizopus nigricans". Bioresource Technology. 79: 73-81.
[23] Guo H, Luoa S, Chena L, Xiaoa X, Xi Q, Wei W, Zeng G, Liu C, Wan Y, Chen J, He, Yejuan. 2010. Bioremediation of heavy metals by growing hyperaccumulator endophytic bacterium Bacillus sp. L14. Bioresource Technology. 101: 8599-8605.
[24] Sen M, and Dastidar Ghosh, M. 2011. Biosorption of Cr (VI) by resting cells of Fusarium solani. Iran. J. Environ. Health. Sci. Eng. 8: 153-158.
Water Res. 32: 1437-1444
[8] Ahluwalia, S. S. and Goyal, D. 2005. Microbial and plant
derived biomass for removal of heavy metals from waste
water. Bioresour. Technol. 98: 2243-2257

672
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

Efficiency Comparison of New and Rewound Induction Motors used in Rice Mill

Ramanpreet Singh                        Jasvir Singh
Research Scholar, E.E. Deptt.           Assistant Professor, E.E. Deptt.
Bhai Gurdas Inst. of Engg & Tech.       Bhai Gurdas Inst. of Engg & Tech.
Sangrur (Punjab)                        Sangrur (Punjab)

ABSTRACT
To reduce energy consumption in any industry, it is necessary to determine the efficiency of a rewound induction motor and compare it with the rated efficiency of that motor. If the efficiency of the rewound induction motor is found to be near the rated efficiency, there is no need for any change. If the efficiency of the rewound induction motor is low compared with the rated efficiency, it is better to replace that motor with a new one. This paper describes the analysis carried out on rewound induction motors to determine their efficiency, and the comparison made with new motors.

Keywords
Rewound induction motors, rice mill, efficiency, energy losses, payback period

Table 1: Parameters and Measuring Instruments Used

Parameters              Measuring Instruments
Voltage                 Power Analyzer
Current                 Clamp-On Transducer
Input Power             Power Analyzer
Speed                   Tachometer
Winding Temp.           Resistance Temperature Detector
Winding Resistance      Power Analyzer

1. INTRODUCTION
The major source of energy consumption in industry is electrical motors. About 70% of the energy in a rice mill is consumed by induction motors. It is common practice in industry to rewind an induction motor in case of a fault. Rewinding decreases the efficiency of the motor and increases the energy losses, and hence the energy consumption of the plant. Thus, it is often better to replace rewound induction motors with new ones.

2. PROBLEM DEFINITION
In the rice mill, rewound induction motors of different horsepower ratings are studied for the different types of losses, so as to determine the overall efficiency of each induction motor. These efficiencies are then compared with the efficiencies of new induction motors. It has been found that it is better to replace the rewound induction motors with new ones, as the payback period lies in the range of 1.5 years down to as little as 7 months.

3. METHOD
After several visits to the rice mill, a number of rewound induction motors were identified and the way the machines were working was examined. All the parameters of the motors (rated, measured and calculated) were then recorded with different measuring instruments. These parameters were used to determine the efficiency of the induction motors. The equipment used for the measurement of induction motor parameters is described in Table 1. Fig. 1 shows the procedure followed for analyzing the motor behaviour.

    Description of machines
              |
    Information about machines
              |
    Survey of machines for energy measures
              |
    Methods for energy conservation

Fig. 1: Process Flow Chart for Analyzing the Motor

4. RESULTS AND DISCUSSIONS
Energy can be saved by replacing the rewound induction motors with new ones so as to increase motor efficiency. Rated parameters of one of the induction motors are shown in Table 2.

Table 2: Rated Parameters

No. of Phases                           03
No. of Poles                            04
Rated Power (HP)                        7.5
Rated Voltage, Vrated (V)               415
Rated Current, Irated (A)               12
Full Load Rated Speed, Nrated (RPM)     1440
Supply frequency, f (Hz)                50


Measured parameters of one of the induction motors are shown in Table 3.

Table 3: Measured Parameters

No-load voltage (V)                         415
No-load current (A)                         8.5
No-load input power (W)                     524
Winding temp. of still motor, T1 (°C)       12
Resistance at room temp., R1 (Ω)            0.52
Winding temp. of no-load motor, T2 (°C)     38
Winding temp. of loaded motor, T3 (°C)      138
Supply frequency, f (Hz)                    50
Full-load voltage, Vfull-load (V)           415
Full-load current, Ifull-load (A)           20.5
Full-load input power, Pfull-load (W)       12500
No-load speed, N1 (RPM)                     1480
Full-load speed, N2 (RPM)                   1475

5. CALCULATED PARAMETERS
Synchronous speed, Ns = 120f/p = (120 × 50)/4 = 1500 RPM
No-load slip, Sno-load (%) = (Ns - N1)/Ns × 100 = 1.33
Full-load slip, Sfull-load (%) = (Ns - N2)/Ns × 100 = 1.67
Stator resistance of no-load motor, R2 = R1 × (235 + T2)/(235 + T1) = 0.52 × (235 + 38)/(235 + 12) = 0.57 Ω
Stator resistance of loaded motor, R3 = R1 × (235 + T3)/(235 + T2) = 0.52 × (235 + 138)/(235 + 38) = 0.71 Ω
Stator cu. loss at no load, Pst1 = (Ino-load)² × R1 = 43.35 W
Stator cu. loss at full load, Pst2 = (Ifull-load)² × R2 = 231.1 W
Iron losses, Pi = Pno-load - Pst1 = 480.65 W
Full-load rotor losses, Protor = S × P2 = 196.60 W (P2 is the air-gap power input to the rotor)
Stray losses = 1.5% of full-load input power = 187.5 W
Full-load output power, Poutput = Pfull-load - Pst2 - Pi - Pstray - Protor = 11585.51 W
Efficiency at full load, ηfull-load = (Poutput/Pfull-load) × 100 = 92.68%
Rated efficiency = 93.25%

The analysis done on a few rewound motors according to the formulae described above is shown below in the form of graphs, which compare the actual efficiency of a new motor (NM) and of rewound motors (RM1-RM5) with the rated efficiency.

[Bar chart: motor efficiency (η%) of NM and RM1-RM5, actual vs. rated]
Fig. 15HP Motor efficiency Analysis

[Bar chart: motor efficiency (η%) of NM and RM1-RM5, actual vs. rated]
Fig. 20HP Motor efficiency Analysis
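The loss-segregation chain of Section 5 can be condensed into a short script. The sketch below is an illustrative reimplementation (function and variable names are mine, not the paper's), fed with the measured values from Tables 2 and 3; because the paper rounds its intermediate values, the result differs slightly from the printed 92.68%.

```python
# Illustrative sketch of the Section 5 loss-segregation calculation.
# Function and variable names are assumptions, not from the paper.

def motor_efficiency(f, poles, n_fl, i_nl, p_nl, i_fl, p_fl, r1, t1, t2):
    """Estimate full-load efficiency (%) of an induction motor from test data."""
    ns = 120.0 * f / poles                 # synchronous speed (RPM)
    slip_fl = (ns - n_fl) / ns             # full-load slip (per unit)
    r2 = r1 * (235.0 + t2) / (235.0 + t1)  # stator resistance corrected to T2
    p_cu_nl = i_nl ** 2 * r1               # stator copper loss at no load (W)
    p_cu_fl = i_fl ** 2 * r2               # stator copper loss at full load (W)
    p_iron = p_nl - p_cu_nl                # iron (core) loss (W)
    p_airgap = p_fl - p_cu_fl - p_iron     # power crossing the air gap (W)
    p_rotor = slip_fl * p_airgap           # rotor copper loss, S x P2 (W)
    p_stray = 0.015 * p_fl                 # stray loss, 1.5% of input (W)
    p_out = p_fl - p_cu_fl - p_iron - p_rotor - p_stray
    return 100.0 * p_out / p_fl

# Values from Tables 2 and 3 (7.5 HP motor):
print(round(motor_efficiency(f=50, poles=4, n_fl=1475, i_nl=8.5, p_nl=524,
                             i_fl=20.5, p_fl=12500, r1=0.52, t1=12, t2=38), 2))
```

The same function can be reused for each rewound motor in the survey by substituting its own test readings.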


[Bar chart: motor efficiency (η%) of NM and RM1-RM5, actual vs. rated]
Fig. 50HP Motor efficiency Analysis

[Bar chart: motor efficiency (η%) of NM and RM1, actual vs. rated]
Fig. 100HP Motor efficiency Analysis

After analyzing the rewound motors, it is recommended to replace them with new motors. The analysis also shows that there is a relationship between motor horsepower and payback period: as the motor size increases, its payback period decreases. Hence, it is recommended to replace the large-horsepower motors rather than have them rewound.

6. CONCLUSION AND FUTURE SCOPE
The area with the most potential for energy conservation is the analysis of rewound motors. After the analysis, it was found that the efficiency of the new motor, at 92.68%, is better than that of the rewound motor, whose efficiency is 89.80%.

A capacitor bank of suitably determined size can also improve system efficiency, since the losses of the system are reduced; it therefore plays an important role in the conservation of energy. A future study on power factor analysis and improvement is thus suggested.

7. ACKNOWLEDGMENTS
I would like to express a deep sense of gratitude and thank profusely my dissertation supervisor, Er. Jasvir Singh, Assistant Professor, Dept. of Electrical Engineering, Bhai Gurdas Institute of Engineering & Technology, Sangrur, without whose wise counsel and able guidance it would have been impossible to complete the dissertation.

REFERENCES
[1]. Gilbert A. McCoy and John G. Douglass, "Energy Management for Motor Driven Systems".
[2]. Todd Litman, "Efficient Electric Motor Systems", The Fairmont Press, Inc., 1995.
[3]. Proprietary Method for Energy Conservation in Electric Induction Motors Saves Energy, Saves Money, available at: www.energyconservationindustries.com
[4]. Alaj.n Streicher, Hagler, Bailly & Company, "Energy Efficiency in the Sugar and Manufacturing Industries", March 1985.
[5]. Penrose H. W., "Financial Impact of Electrical Motor System Reliability Programs", All-Test Division, BJM Corp, 2003.
[6]. John S. H., John D. K., "Comparison of Induction Motor Field Efficiency Evaluation Method", IEEE Transactions on Industrial Applications, Vol. 34, No. 1, Jan/Feb 1998.
[7]. Bureau of Energy Efficiency (BEE), Energy Efficiency in Electrical Utilities: Electric Motors, available at: http://emt-india.com/BEE-Exam/GuideBooks/book3.pdf
[8]. Bureau of Energy Efficiency (BEE), Energy Performance Assessment for Equipment & Utility Systems: Electric Motors and Variable Speed Drives, available at: http://emt-india.com/BEE-Exam/GuideBooks/book4.pdf
[9]. Bureau of Energy Efficiency (BEE), General Aspect of Energy Management and Energy Audit: Financial Management, available at: http://emt-india.com/BEE-Exam/GuideBooks/book1.pdf
[10]. Ali Hasanbeigi, Lynn Price, "Industrial Energy Audit Guidebook: Guidelines for Conducting an Energy Audit in Industrial Facilities", 2010.

Civil Engineering

Need of Sustainable Development & Use of Demolished Aggregate for Highway Construction

Raj Kumar Maini           Amandeep Singh          Saajanbir Singh
PTU, Jalandhar            PTU, Jalandhar          PTU, Jalandhar
GNDEC, Ludhiana           GNDEC, Ludhiana         GNDEC, Ludhiana
rajmaini70@gmail.com      as.gndec@gmail.com      bhathal110@gmail.com

ABSTRACT
A huge amount of demolished waste is generated by the construction industry and is not put to re-use, except when disposed of as landfill in low-lying areas. Dumping of wastes on land is not only causing a shortage of space, but also environmental problems near cities. With the ban on mining of sand and aggregates in Punjab and other northern regions of India, the prices of sand and aggregate have gone so high that engineers are being persuaded to go for crushers. Recycling of demolished waste can offer not only a solution to the growing waste disposal problem, but will also help conserve natural resources to meet the increasing demand for aggregates in the construction industry for a long time to come, and give a sustainable environment.

This paper evaluates the behaviour and the environmental impact of a mixed recycled aggregate from non-selected construction and demolition waste (CDW) in field conditions. A process to obtain aggregate from demolition waste is developed and its basic properties are determined. These properties are compared with conventional and local aggregate. Such recycled aggregate was used to produce concrete of grade equivalent to M25 or for similar other uses. It is found that recycled aggregate from demolition waste can be gainfully used in making fresh pavements.

Keywords
Construction and demolition waste, recycled aggregates, saving on use of natural materials, technical requirements, sustainability.

1. INTRODUCTION
Any construction activity requires several materials such as concrete, steel, brick, stone, glass, clay, mud, wood, and so on. However, cement concrete remains the main construction material used in the construction industry. For its suitability and adaptability with respect to the changing environment, concrete must be such that it can conserve resources, protect the environment, economize and lead to proper utilization of energy. To achieve this, major emphasis must be laid on the use of wastes and by-products in cement and concrete used for new construction. The utilization of recycled aggregate is particularly promising, as 75 per cent of concrete is made of aggregates. The candidate aggregates include slag, power plant wastes, recycled concrete, mining and quarrying wastes, waste glass, incinerator residue, red mud, burnt clay, sawdust, combustor ash and foundry sand. Enormous quantities of demolished concrete are available at various construction sites and are now posing a serious disposal problem in urban areas. This concrete can easily be recycled as aggregate and used in concrete. Research & development activities have been taken up all over the world to prove its feasibility, economic viability and cost effectiveness.

The main reasons for the increasing volume of demolition concrete/masonry waste are: many old buildings, concrete pavements, bridges and other structures have exceeded their age and limit of use due to structural deterioration beyond repair and need to be demolished; structures, even those adequate for use, are demolished because they no longer serve present needs; new construction is undertaken for better economic growth; structures are turned into debris by natural disasters such as earthquakes, cyclones and floods; and building waste results from man-made disasters and war.

From the Indian scenario point of view, there is a severe shortage of infrastructural facilities like houses, hospitals and roads in India, and large quantities of construction materials are needed to create these facilities. The Planning Commission allocated approximately 50% of capital outlay for infrastructure development in the successive 10th and 11th five-year plans. Rapid infrastructural development, such as highways and airports, and the growing demand for housing have led to scarcity and a rise in the cost of construction materials. Most waste materials produced by demolished structures are disposed of by dumping them as landfill. Dumping of wastes on land is causing a shortage of dumping space in urban areas. Therefore, it is necessary to start recycling and re-use of demolition concrete waste to save the environment, cost and energy.

The Central Pollution Control Board has estimated the current quantum of solid waste generation in India to be of the order of 48 million tons per annum, of which waste from the construction industry alone accounts for more than 25%. Management of such a high quantum of waste puts enormous pressure on the solid waste management system.

In certain countries, recycling techniques have been in place since the late 1970s. For example, the reuse of concrete and masonry as a base course for roads is common practice in the Netherlands. Molenaar and van Nierkerk (2002) studied the performance of unbound base course materials made from recycled concrete and masonry rubble by measuring parameters such as gradation, composition, and others. Their study concluded that the degree of compaction is the most important factor pertaining to the mechanical characteristics of unbound base courses made of recycled materials. Significant research initiatives are currently under way to determine how technical characteristics, such as moisture content, the California Bearing Ratio (CBR), and degree of compaction, are affected when recycled construction and demolition waste (CDW) aggregate is included in pavement


layers. For example, Poon and Chan (2006) studied the use of recycled concrete aggregate and crushed clay brick in unbound sub-base materials in Hong Kong. The results of their research indicated that the use of 100% recycled concrete aggregate increased the optimum moisture content and decreased the maximum dry density of the sub-base materials compared with the use of aggregate composed of natural sub-base materials. Moreover, the replacement of recycled concrete aggregate by crushed clay brick further increased the optimum moisture content and decreased the maximum dry density. It was also found that the CBR values (for dry and soaked conditions) of sub-base materials composed of 100% recycled concrete aggregate were lower than those of natural sub-base materials. The CBR values decreased further when more recycled concrete aggregate was replaced by crushed clay brick.

2. MATERIAL AND PROPERTIES
2.1 Concrete Construction
Concrete is a heterogeneous material made with cement, aggregate (stone chips), sand and water. Of the total ingredients in the mix, about 50% is coarse aggregate. As these materials are not available in plenty and some of them are energy intensive, their use should be economized.

2.2 Properties of Recycled Concrete Aggregate
Particle Size Distribution
Top-quality natural aggregate can be obtained by primary, secondary and tertiary crushing, whereas the same quality can be obtained after primary and secondary crushing in the case of recycled aggregate. A single crushing process is also effective for recycled aggregate.

Particle shape analysis of recycled aggregate indicates a particle shape similar to that of natural aggregate obtained from crushed rock. The recycled aggregate generally meets all the standard requirements for aggregate used in concrete.

Water Absorption and Specific Gravity
In general, as the water absorption of recycled aggregates is higher, it is recommended to maintain saturated surface dry (SSD) conditions of the aggregate before the start of mixing operations. The specific gravity (saturated surface dry condition) of recycled concrete aggregate was found to range from 2.35 to 2.58, which is lower than that of natural aggregates.

Crushing and Impact Values
The recycled aggregate is relatively weaker than natural aggregate against mechanical actions.

Compressive Strength
The compressive strength of RAC is lower than that of conventional concrete made from similar mix proportions. The reduction in strength of RAC as compared to NAC is of the order of 2-14% and 7.5-16% for M-20 and M-25 concretes respectively. The amount of reduction in strength depends on parameters such as the grade of the demolished concrete, replacement ratio, w/c ratio, processing of the recycled aggregate, etc.

Freeze-Thaw Resistance
In the freeze-thaw resistance test (cube method), the loss of mass of concrete made with recycled aggregate was found to be sometimes above and sometimes below that of concrete made with natural aggregate.

Carbonation
There is an increase in the carbonation depth of RAC as compared to NAC, because the recycled aggregate is porous due to the mortar attached to the crushed stone aggregate.

3. OBSTACLES IN USE OF RCA & RAC
The acceptability of recycled aggregate for structural applications is impeded by the technical problems associated with it, such as weak interfacial transition zones between cement paste and aggregate, porosity and transverse cracks within demolished concrete, high levels of sulphate and chloride, impurities, cement remains, poor grading, and large variation in quality.

Although it is environmentally and economically beneficial to use RCA in construction, the current legislation and experience are not adequate to support and encourage recycling of construction and demolished waste in India. Lack of awareness, guidelines, specifications, standards and a database on the utilization of RCA in concrete, and a lack of confidence among engineers, researchers and user agencies, are major causes of the poor utilization of RCA in construction. These obstacles can be easily removed with the helping hand of government.

4. APPLICATION OF RECYCLED AGGREGATE
In general, the applications of recycled aggregate are as follows: many types of general bulk fill; bank protection; base or fill for drainage structures; road construction; noise barriers and embankments; construction of low-rise buildings; manufacturing of paving blocks and tiles; laying of flooring and approach lanes; sewerage structures; sub-base courses of pavement; and drainage layers in highways and retaining walls.

5. CONCLUSIONS
Recycling and reuse of building wastes have been found to be an appropriate solution to the problem of dumping hundreds of thousands of tons of debris, accompanied as it is by a shortage of natural aggregates. The use of recycled aggregates in concrete proves them to be valuable building materials in technical, environmental and economic respects.

The use of RCA in civil construction works will reduce environmental pollution and the cost of producing natural resources, as well as solving the problem of construction-waste management by putting this waste to use. Adding RCA to concrete resulted in increased water demand, reduced workability and reduced strength compared to the control sample. The results show a reduction in the strength of concrete with an increasing percentage replacement of RCA. Up to 20% RCA can therefore be utilized for economical and sustainable development of concrete. The use of RCA in concrete can save construction-waste disposal costs and produce a 'greener' concrete for sustainable construction.
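As a quick worked example of the strength-reduction figures quoted in the Compressive Strength paragraph above, the small helper below (my own illustration, not from the paper) applies the quoted ranges to estimate the expected RAC strength band for a given NAC mix.

```python
# Illustrative helper (not from the paper): apply the quoted RAC strength
# reduction ranges to a natural-aggregate concrete (NAC) strength.

REDUCTION = {"M20": (0.02, 0.14), "M25": (0.075, 0.16)}  # fractional loss vs NAC

def rac_strength_band(grade, nac_strength_mpa):
    """Return (min, max) expected RAC compressive strength in MPa."""
    lo, hi = REDUCTION[grade]
    return nac_strength_mpa * (1.0 - hi), nac_strength_mpa * (1.0 - lo)

low, high = rac_strength_band("M25", 25.0)
print(f"{low:.1f}-{high:.1f} MPa")  # 21.0-23.1 MPa for a 25 MPa NAC mix
```

The actual reduction within this band depends on the parameters listed above (grade of the demolished concrete, replacement ratio, w/c ratio and processing).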


REFERENCES
[1] Robin L. Schroeder, Autumn 1994. The Use of Recycled Materials in Highway Construction.
[2] Y. P. Gupta, 2009. Use of Recycled Aggregates in Concrete Construction.
[3] Rosario Herrandor, Pablo Perez, Laura Garach, Javier Ordanez, 2012. Use of Recycled Construction & Demolished Waste Aggregate.
[4] S. K. Singh, P. C. Sharma, 2013. Use of Recycled Aggregate in Concrete.
[5] Jorge de Brito, Nabajyoti Saikia. Use of Recycled Aggregate in Concrete.
[6] Rahal, K., 2007. Mechanical Properties of Concrete with Recycled Coarse Aggregate.
[7] R. J. Collins & S. K. Giesielski, 1993. Recycling & Use of Waste Materials & Byproducts in Highway Construction.
[8] Ramamurthy, K. & Gumaste, K. S., 1998. Properties of Recycled Aggregate.


Applications of Acoustical & Noise Control Materials & Techniques for Effective Building Performance - A Review

Jashandeep Kaur                  K.S. Bedi                        Manu Nagpal
Structural Engg., G.N.D.E.C,     Civil Engg. Deptt., G.N.D.E.C,   Geotechnical Engg., G.N.D.E.C,
Gill Road, Ludhiana.             Gill Road, Ludhiana.             Gill Road, Ludhiana.
jashandeepkaur18@gmail.com       kanwarjeet_b@yahoo.com           nagpal.manu@yahoo.com

ABSTRACT
The concept of acoustic and noise control is gaining due attention nowadays. The continued growth in urban population has led to high-density buildings close to airports, highways and cities. This has increased the exposure of the population to a wide variety of noise sources, thereby increasing the need to provide better sound insulation for buildings. A comprehensive list of currently available techniques and various acoustical porous materials and composites is discussed with regard to their effectiveness in building usage. A description of various acoustical characterizations, compositions and applications of materials is presented. To some extent, sustainability is also accounted for: a sustainable product can be created repeatedly without generating negative environmental effects, without causing waste products or pollution, and without compromising the wellbeing of workers or communities.

General Terms
Acoustics, Sound insulation.

Keywords
Aerogels, composites, Double leaf wall, Particle board.

1. INTRODUCTION
ACOUSTICS is a term sometimes used for the science of sound in general. It is more commonly used for a special branch of that science, architectural acoustics, which deals with the construction of enclosed areas so as to enhance the hearing of speech or music. Noise control, as the name suggests, covers the techniques used to minimize the effects of unwanted sound and thus optimize environmental conditions. Building acoustics involves both the control of noise within an enclosed space and the reduction of noise between rooms, or from either outside or inside a building. The acoustical performance of adaptable structures has long been of interest and is investigated worldwide. Sound absorbers in use include organic porous materials, for example wool fiber, ceramic, rubber, hemp and aerogel, and inorganic porous materials, for example melamine foam and polypropylene composites[1]. Various composites have also been developed, such as jute composites and composites of ground chicken quill and polypropylene[3]. Various techniques are widely accepted for effective sound insulation in the different components of buildings.

Double wall construction with a shallow gap, a lightweight double-leaf wall, is adopted as an effective sound absorber in the walls of buildings: the air-filled cavity or gap is intentionally left to absorb the sound transmitted by various noise sources. Likewise, various sound-absorbing materials like glass wool, cork and other waste materials are used for filling the cavity. Various types of panels, like GFPs (gas-filled panels) and particle boards such as cork board and corn cob particle board, are employed in the interiors of buildings[2]. Taking care of sustainability, recycled carpet waste, recycled rubber waste, vegetable matter, hemp and jute are also largely used.

Protection against excess noise in the workplace is a crucial public health problem. Machine noise can be controlled by acoustic enclosures which limit the power of outward sound, but this solution often proves to be insufficient, especially at low frequencies. Two main approaches can be envisaged to increase the reduction of the noise transmitted by enclosures. The first is wall treatment to absorb acoustic energy inside the enclosure; for low frequencies, however, this approach involves an excessive thickness of material. The second approach consists in acting directly on the walls to prevent noise transmission by reducing wall vibration. Generally, covering the walls with passive materials adds mass and increases damping. In 1999, the total energy consumption in Europe was 1780 million tons of oil equivalent, of which 35% was used in the residential and commercial sector. It became clear that reducing the heat losses of buildings, or in general their total energy consumption, can have a major impact on total greenhouse gas emissions in Europe. Traditional insulation materials were and are being used in thicker or multiple layers, which results in more complex building details, an adverse net-to-gross floor area and possibly heavier load-bearing constructions. Simultaneously, a second strategy won interest: it became clear that air as an insulator had reached its limit and that there was a need to research and develop high-performance thermal insulation materials and solutions. Gas-filled panels (GFPs) are one of these new, promising high-performance thermal insulating solutions for building applications.
effective sound insulation in different components of the


1. The Acoustic Nature of Materials
A building material is any one of various substances out of which buildings are constructed. They come in different forms and are also applied in various ways in building. Materials in building construction, for the purpose of this write-up, will be classified under the four major component parts of a building:
1. Walls
2. Floors
3. Ceilings
4. Roofs

When we choose the materials that will make up the structure of a building, we are making decisions that will affect the nature of sound within the building. It is possible to make improvements after the fact, but once the building has been built we have lost the ability to affect the acoustics in two important ways. The major problem with starting to build before designing the acoustics is that little or no consideration is given to the acoustic nature of the materials that make up the structure.

First we will take a look at some commonly used building materials and their acoustic properties, and then we will examine the ways these materials can be used for sound isolation and acoustic treatment. Sound isolation is the branch of acoustics that deals with keeping sound where you want it – in or out of the building, for instance, or keeping sounds in one room from invading another room [4]. Sound treatment, on the other hand, is the branch of acoustics concerned with perfecting the quality of the sound we hear, and using the proper combinations of materials and shapes to create pleasing, musically accurate sound.

1.1 Concrete, Stone & Other Building Materials
Masonry materials are great for sound isolation, especially when used in floors and walls where the masonry material is quite thick. A solid concrete wall 1 ft. thick will rarely cause clients to complain about sound isolation, for two reasons. One is the material's rigidity, meaning that it will not flex and create sound waves on the quiet side of the wall. The other is concrete's mass. Nothing stops sound waves quite like massive materials, and they are especially capable of stopping the critical low frequencies that are so hard to stop with less massive materials. Stone and brick are very similar to concrete in mass, and concrete masonry units, although they are lighter, can do a very good job when they are fully filled with concrete, instead of just filling the cells that contain the rebar.

Concrete slabs also do a good job of isolating sound between floors – something that is very difficult to do any other way – and again can contribute something to the equation in a multi-layer wall.

1.2 Wood
Wood's real beauty lies in its ability to reflect sound in a pleasing way, meaning that it is a useful material for sound treatment. Since wood resonates easily, it has a way of absorbing some of the sound energy as it vibrates, letting some of the sound pass through to the other side, and reflecting some of the sound back from whence it came. This genteel quality of wood is one reason it is widely used in the making of musical instruments, and wood has a major role to play as an interior finish material in good-sounding rooms.

1.3 Steel
Steel is a quite dense material, but because of its expense it is rarely used as a sound isolation material. Steel's density actually becomes a liability in structural uses, where its dense nature causes it to carry sound vibrations for long distances. If you strike an I-beam with a hammer and place your ear to the other end – let's say 24 ft. away – you'll see that the sound carries quite well through the steel. This type of sound transfer is called structure-borne vibration, where sound is carried through some material other than air for a time. The other main type of sound transfer is air-borne vibration.

Steel studs can actually transmit less structure-borne vibration than wood, even though steel is more prone to this problem, simply because flimsy steel studs have much less cross-sectional area to carry the vibrations between the two wall surfaces.

1.4 Roofing Materials
Asphalt shingles are fairly massive, as you know if you have hauled them up to a roof, but they are also thin. Installation with a large overlap, heavy felt, and even double-layer sheathing can help quite a bit. Ceramic and clay tiles are far more massive than wood shakes, and can do a reasonable job in residential applications. Metal roofing has mass but is thin, and requires that the underlying structure be fairly massive.

1.5 Glass and Other Transparent Materials
Glass is quite massive – about three times as massive as drywall. So in a sound wall with three 5/8" layers of drywall on one side, one layer of 5/8" glass may be inserted to create a window on that side, provided that it is properly sealed. A corresponding piece of glass would be required on the other side of the wall, at the appropriate thickness.

A relatively recent development is the invention of absorptive glass-like products that offer pretty good transparency while absorbing enough sound to reduce the harsh reflectivity usually associated with glass. These products are made from Plexiglas or thin transparent foils, perforated with tiny holes. Their use is mainly confined to professional sound studios.
1.6 Insulating materials
1.2 Wood, and wood products Insulating materials have little mass, so they have limited
Wood is much less dense than masonry, and provides much uses for sound isolation. However, fiberglass has good sound
less in the way of sound isolation for that reason. Wood absorption characteristics, and is very useful as a sound
products like MDF, on the other hand, are somewhat more treatment material for sound room interiors. Fiberglass and
massive, and are sometimes used in interior walls to add mass. rock wool, which has similar acoustic properties, absorb
OSB is less dense than MDF, but can be useful as well, as part sound by slowing the velocity of the air particles carrying the
of an integrated system. Plywood comes in varying densities, wave. Wood, on the other hand, absorbs sound best when in
the pressure zone of a sound wave. Sound waves are at

680
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
highest pressure when at lowest velocity, so care must be taken to place materials appropriately. Waves are generally at highest pressure at room boundaries, particularly at multiple boundaries like dihedral and trihedral corners.

1.7 Plastics and rubber
Plastics are sometimes used in the manufacture of low-cost acoustical devices, but have limited usefulness. Rubber, particularly neoprene rubber, is very good as a mechanical isolator: for floating glass and preventing the diaphragmatic vibrations of the glass from transmitting into the wall, for instance.

Mass-loaded vinyl can be used inside wall cavities to increase sound isolation, and is hung in a limp, as opposed to stretched, fashion.

1.8 Mechanical and plumbing materials
Metal and plastic pipes are often transmitters of structure-borne vibration, and can be isolated or deadened with rubber materials. Refrigerant lines are especially bad for transmitting high-pitched whining noises through buildings, so locate them carefully and de-couple them from the structure. Ductwork should be heavy sheet metal, lined with at least 1" of acoustic liner. Flex duct is virtually transparent acoustically, and should be avoided when crosstalk between ducts in attics and other mechanical spaces is a concern.

2. TECHNIQUES FOR SOUND ABSORPTION & SOUND PROOFING

2.1 Active control of sound transmission through a linked double-wall system into an acoustic cavity
Noise control inside a cavity is a typical problem with great application potential in both industry and civil engineering. To control interior noise, double-wall structures are widely used in mobile vehicles, in partition walls in buildings and in aircraft fuselage shells, thanks to their superior noise insulation performance over single-leaf structures.

In the past few years, numerous new experimental methods have been proposed to characterize the elastic and damping parameters of fibrous materials or open-cell, air-saturated polymer foams. These materials are widely used for sound absorption and insulation in buildings, inside aircraft fuselages, in machinery enclosures, etc. The influence of their elastic parameters (Young's or shear moduli, Poisson's ratios, loss factors, etc.) may be important when porous materials are bonded onto the vibrating structure.

In many practical situations, however, double-wall structures contain mechanical links connecting the two walls, which alter the energy transmission path. Apart from the acoustic transmission path through the air gap between the two walls, energy can also be transmitted through the links, forming a structural transmission path [5]. As a result, the inherent coupling between the panels and the cavities (the air gap and the enclosure) significantly increases the degree of complexity in terms of control.

Noise control of an acoustic enclosure covered by a double-wall structure with shallow gap
Control of sound transmission through a double-wall structure with an acoustic enclosure is an interesting topic of research that has received much attention for years. Generally, for a double-wall structure with a shallow gap, the strong mass-air-mass coupling deteriorates the sound behavior at low frequencies, which has led to the rapid development of active techniques for noise control [5]. Typical control strategies include acoustic control using microphones and loudspeakers, vibration control using PZT actuators and PVDF sensors, etc. In engineering practice, however, it is difficult to embed a complicated control circuit into a shallow gap.

As an alternative, passive control using Helmholtz resonators (HRs) has been adopted extensively due to its low cost and lack of need for external power. By properly arranging tuned HRs, the acoustical damping level inside the cavity between the double panels (the air gap) can be increased, and consequently noise attenuation is achieved. However, a problem arises naturally when considering the control action of HRs: only a very narrow bandwidth can be controlled using a single HR, and the design of an HR network to cover a broad frequency range is limited by space requirements. It is therefore desirable to come up with an effective design that balances these concerns. Based on the facts that a double-gap system is more effective for noise attenuation than a single one, and that a long T-shaped acoustic resonator array can facilitate the design, a plausible method is to design resonator-like cavities with a large aspect ratio around the shallow gap. Such a design might not only dampen more than one frequency (compared with a single HR), but also alter the modal coupling between the air gap and the designed cavities, changing the vibro-acoustic behavior between the air gap and the lower panel and consequently reducing the noise in the enclosure [8]. Discussion of the possibility of using resonator-like cavity designs for noise control is hence of great interest.

2.2 Aerogel insulation for building application
In 2005, buildings emitted 8.3 Gt of carbon dioxide per year, accounting for more than 30% of the greenhouse gas emissions in many developed countries [1]. Residential and commercial retrofit insulation has been found to be one of the most cost-effective actions for greenhouse gas abatement. Therefore, traditional insulation materials were, and are, being used in thicker or multiple layers, which results in more complex building details, an adverse net-to-gross floor area and possibly heavier load-bearing constructions.

At the same time, a second strategy has gained interest. It became clear that air as an insulator had reached its limit and that there was a need to research and develop high-performance thermal insulation materials and solutions.

Figure 1: Aerogel as a high-performance thermal insulation material for building application.
Although already discovered in the early 1930s, aerogels are – together with vacuum insulation panels – one of these new promising high-performance thermal insulation materials for possible building applications, but only limited commercial products are available.

2.3 Gas-filled panels for building applications
Gas-filled panels (GFPs) are one of these new promising high-performance thermal insulating solutions for building applications [2]. Gas-filled panels are experimental, and only limited commercial products are available so far. Most of the work carried out on GFPs in the available literature was performed by Griffith and co-workers at Lawrence Berkeley National Laboratory (LBNL), which is also reflected in the references of this review. In fact, some of those references cover refrigerator applications and do not actually cover GFPs as applied in the building envelope.

Gas-filled panels consist of a barrier envelope and a gas between reflective layers (a baffle). The gas can be air or a heavier gas, to decrease thermal advection and conduction. A low-emissivity barrier envelope is used to enclose the gas and to decrease the heat transfer due to radiation, while a low-emissivity baffle structure is included to decrease inner gas convection and radiation. As a result, both flexible and stiff GFPs are possible.

2.4 Impact sound insulation technique using corncob particleboard
Affordability, a low rate of CO2 emissions to the atmosphere, and small energy and water consumption are some parameters that have to be taken into consideration when a product is designed. Using green building materials, which must be renewable, local and abundant, is a solution that contributes to achieving these important goals. In this context, several authors have already proposed using different agricultural products such as bagasse, cereal straw, corn stalks, corn cob, cotton stalks, rice husks, rice straw, sunflower hulls and stalks, banana stalks, coconut coir, bamboo, durian peel and oil palm leaves, among others, for processing products such as particleboard, hardboard and fibreboard, focusing on their thermal insulation ability. Other authors have been studying the technical potential of using other types of residue, such as newspaper, honeycomb or polymeric wastes, in the processing of different building components. On the other hand, new alternative sustainable sound insulation building products have been at the centre of society's concerns. Sound insulation products processed with natural materials (e.g. cotton, cellulose, hemp, wool, clay, jute, sisal, kenaf, feather or coco) or processed with recycled materials (e.g. wood, canvas, foam, bottle, jeans, rubber, polyester, acrylic, fibre glass, carpet or cork) are some solutions already established. These green products, or eco-products, are intended to be a sustainable alternative to the traditional ones (e.g. glass or rock wool) [6]. The fact that natural fibres may be economic, lightweight and environmentally friendly justifies their study as an alternative to synthetic fibres in the acoustic context [9].

Among the above identified agricultural products, corn cob has an additional advantage in terms of possible application for alternative processed products, because it does not collide with the worldwide food stock and is generally considered agricultural waste.

Figure 2: Corncob particleboard of size 50 cm x 100 cm and thickness 3 cm.

Figure 3: Corncob particleboard of size 50 cm x 100 cm and thickness 3 cm.

2.5 Sound absorption characteristics of high-temperature sintering porous ceramic material
Sound absorption by porous media has long been of interest to the noise control industry, including areas such as car mufflers, air conditioner parts, pump chambers and elevated roads.

According to their substance, current acoustical materials can be roughly divided into three kinds: organic porous materials (OPMs), inorganic porous materials (IPMs) and metallic porous materials (MPMs) [7].

The organic porous materials (OPMs) cover a wide area. The use of a polymeric binder, instead of the concrete typically employed in present commercial products, allowed obtaining light sound-absorbing panels. In natural foam rubber, the addition of sodium bicarbonate acts as a blowing agent; it was found that natural foam rubber at the lowest foaming temperature of 140 °C shows good sound absorption. By means of suitable constituents and fabrication processes, the sound absorption of hemp concrete can be controlled and significantly enhanced. Besides the above materials, the utilization of industrial and agricultural waste
materials in the area of sound absorption has been drawing more and more attention; examples include recycled rubber, elastomeric waste residue (tyre and carpet shreds), industrial tea-leaf-fibre waste material and coir fibre.

The conventional inorganic sound-absorbing materials are glass wool and rock wool. The effect of circumferential edge constraint on the acoustical properties of glass fibre materials has been studied, with a special focus on the variation of transmission loss with frequency.

REFERENCES
[1] Baetens, Ruben, Bjørn Petter Jelle and Arild Gustavsen, 2011, "Aerogel insulation for building applications: A state-of-the-art review", Energy and Buildings.
[2] Baetens, Ruben, Bjørn Petter Jelle, Arild Gustavsen and Steinar Grynning, 2010, "Gas-filled panels for building applications: A state-of-the-art review", Energy and Buildings.
[3] Shah Huda and Yiqi Yang, 2010, "Composites from ground chicken quill and polypropylene".
[4] Z.S. Liu, H.P. Lee and C. Lu, 2006, "Passive and active interior noise control of box structures using the structural intensity method".
[5] Y.Y. Li, 2006, "Noise control of an acoustic enclosure covered by a double-wall structure with shallow gap".
[6] S. Fatima and A.R. Mohanty, 2011, "Acoustical and fire-retardant properties of jute composite materials".
[7] Duan Cuiyun, Cui Guang, Xu Xinbang and Liu Peisheng, 2012, "Sound absorption characteristics of a high-temperature sintering porous ceramic material".
[8] Y.Y. Li and L. Cheng, 2008, "Mechanisms of active control of sound transmission through a linked double-wall system into an acoustic cavity".
[9] Jorge Faustino, Luís Pereira, Salviano Soares and Daniel Cruz, 2012, "Impact sound insulation technique using corn cob particleboard".
Nanocement Additives - A Carbon Neutral Strength Enhancing Material

Jaskarn Singh, Asst. Prof., GNDEC, Ludhiana
Gurpyar Singh, Asst. Prof., BGIET, Sangrur
Parampreet Kaur, Asst. Prof., SBSSTC, Ferozepur
Akash Bhardwaj, Student of Civil Engg., SBSSTC, Ferozepur
ABSTRACT
Due to rapid growth in the construction industry, the use of cement is increasing tremendously for the development of advanced building materials, yet some problems related to the sustainability of cement and concrete production still exist in the cement industry. For example, most concrete with local high-volume fly ash/slag replacement cement struggles to attain the required early strength, and its ultimate strength is limited to the range of 60 MPa to 70 MPa at 28 days of age. Concrete above 98 MPa can only be produced with binary or ternary blended cement, and only together with silica fume; in that case a nanocement-type material plays an important role. In this study, palm oil fuel ash (POFA) and rice husk ash (RHA) are used to produce Nano Cement Additives (NCA) for carbon neutral cement (CNC). NCA can be produced by mechano-chemical activation using a wet milling and precipitation method. NCA performance is evaluated in OPC mortar by replacing the OPC at dosages of 0.5%, 1.0% and 1.5%, respectively.

KEYWORDS
Nano Cement, GPC, Superplasticizer, CNC, sol-gel formation, condensation, agglomeration.

1. INTRODUCTION
Since RHA and POFA are derived from biomass, they can be categorized as biogenic wastes, while slag is termed an industrial by-product. Rice husk ash (RHA) generated from rice husk processing industries, palm oil fuel ash (POFA) from palm oil milling industries and slag from steel milling industries are abundantly available and proven pozzolanic waste materials. The complex chemistry and physical structure of cement hydrates in concrete, however, mean that issues of fundamental science still need to be resolved. Research at the nanoscale has the potential to contribute to these debates and questions. Analysis at the nanoscale may provide further insight into the nature of hydrated cement phases and their interaction. Recent nano-research in cement and concrete has focused on the investigation of the structure of cement-based materials and their fracture mechanisms. With new advanced equipment it is possible to observe the structure at the atomic level and even measure the strength, hardness and other basic properties of the micro- and nano-scopic [1] phases of the materials.

2. METHODS TO PRODUCE NANO CEMENT ADDITIVES
Relatively small quantities, less than 1%, of nano-sized materials are sufficient to improve the performance of nano-composites. Yet the commercial success of nanomaterials depends on the ability to manufacture these materials in large quantities and at a reasonable cost relative to the overall effect of the nano product. The nanomaterials technologies that could lead to industrial outputs involve plasma arcing, flame pyrolysis, chemical vapour deposition, electrodeposition [2], sol-gel synthesis, mechanical attrition and the use of natural nanosystems. Among chemical technologies, sol-gel synthesis is one of the widely used "bottom-up" production methods for nano-sized materials such as nano-silica. The process involves the formation of a colloidal suspension (sol) and gelation of the sol to form a network in a continuous liquid phase (gel). Usually, tetramethoxysilane or tetraethoxysilane (TMOS/TEOS) [3] is applied as a precursor for synthesizing nanosilica. The sol-gel formation process can be simplified to a few stages:

1. Hydrolysis of the precursor.
2. Condensation and polymerization of monomers to form the particles.
3. Growth of particles.
4. Agglomeration of particles, followed by the formation of networks and, subsequently, a gel structure.
5. Drying (optional) to remove the solvents, and thermal treatment (optional) to remove the surface functional groups and obtain the desired crystal structure.
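The wet milling and precipitation route named in the abstract ultimately neutralizes sodium silicate with sulphuric acid to precipitate silica. As a rough back-of-the-envelope sketch (standard molar masses; the 100 g batch size is just an illustration, not a figure from the paper), the theoretical silica yield and acid demand can be estimated:

```python
# Approximate molar masses in g/mol (standard values).
M_NA2SIO3 = 122.06  # sodium silicate, Na2SiO3
M_H2SO4 = 98.08     # sulphuric acid
M_SIO2 = 60.08      # silica

def precipitation_estimate(mass_na2sio3_g: float) -> tuple[float, float]:
    """Theoretical SiO2 yield and H2SO4 demand (both in grams) for the
    reaction Na2SiO3 + H2SO4 -> SiO2 + Na2SO4 + H2O (1:1 mole ratio)."""
    moles = mass_na2sio3_g / M_NA2SIO3
    return moles * M_SIO2, moles * M_H2SO4

# Illustrative batch: 100 g of sodium silicate.
sio2_g, acid_g = precipitation_estimate(100.0)
```

For 100 g of Na2SiO3 this gives roughly 49 g of SiO2 and about 80 g of acid at exact stoichiometry; in practice the acid addition is governed by a pH-adjustment step rather than a stoichiometric dose.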
3. METHODOLOGY
Samples and materials: The PCE superplasticizer was a comb-type polymer, and SNS is a naphthalene-based superplasticizer supplied by a local admixture manufacturer. The molecular weights of the SNS and PCE superplasticizers were analyzed and characterized with gel permeation chromatography (GPC). The GPC analysis was conducted under the following conditions: columns: Shodex Asahipak GS series; flow rate: 1 ml/min; temperature: 40 °C; eluent: acetonitrile [4] and 50 mM NaNO3 in the ratio 3/7 (v/v); injection volume: 10 µl; detectors: refractive index and UV.

The nano silica colloidal solution (NCA-RHA) was prepared by a first wet milling process with ball milling (3.5 hrs), ultrasound milling (10 minutes) and high-shear wet milling (2 cycles). In the second process, silica was extracted using caustic soda to form sodium silicate, and nano silica was then precipitated using sulphuric acid: 100 g of RHA sample was dissolved in 800 ml of 3.0 N sodium hydroxide solution and refluxed in a 2.5 L Erlenmeyer flask for 10 hours, then filtered, and the residue was washed with 200 ml of boiling water. After cooling the filtrate to room temperature, the pH was adjusted to the range 7.5-8.5 with 5 N sulphuric acid:

Na2SiO3 (sodium silicate) + H2SO4 → SiO2 (silica) + Na2SO4 + H2O

The nano silica colloidal sample was examined with TEM and zeta potential tests, and the filtrate was vacuum dried at 50 °C for 2 hrs and then oven dried for 12 hrs to proceed to the XRD and SEM tests. The submicron slurry [5] of POFA (in short form, NCA-POFA) was prepared by the same method as the nano silica colloidal solution, but only achieved a submicron-size slurry due to the natural agglomeration of aluminosilica [6] in laminated layer form. A 100% replacement cement (CNC) sample was prepared by intergrinding 60% slag, 30% POFA and 10% RHA in the ball mill until the fineness was in the range from 650 to 750 m²/kg.

Then 1% NCA-RHA or 1% NCA-POFA can be blended into the CNC dry powder, or added during mixing of fresh mortar, SCC and non-steaming [7] spun pile concrete.

4. RESULTS
Fig. 1: SEM of nano-additives at different concentrations.

Fig. 2: The agglomeration results.

NCA-RHA consists of 88.9% (190.9 nm) and 11.1% (20.92 nm) particles; in the TEM test, however, the average spherical diameter is about 10 nm to 30 nm, agglomerating into clusters of diameter about 100 nm. This agglomeration is more obvious in the case of NCA-POFA: the freshly prepared NCA-POFA contains 82.3% (345.7 nm) and 17.7% (27.09 nm), but after a month of storage it starts to agglomerate into bigger particles in fractions of 74.3% (2.318 µm), 12.2% (355.5 nm) and 11.9% (30.32 nm).

5. CONCLUSION
- The nano colloidal NCA-RHA solution is successfully prepared from RHA as 11 nm-30 nm amorphous silica, without agglomeration, by the wet milling and precipitation method.
- The nano colloidal NCA-POFA solution in the initially prepared stage consists of nano-sized particles of 27 nm-300 nm amorphous alumina-silica; however, it is unstable and will agglomerate into submicron particles.
- The mortar strength indicates that, at a low dosage of 0.5%, NCA-RHA and NCA-POFA are highly reactive in pozzolanic activity and suitable as early-age strength enhancers for OPC cement. Dosages beyond 1% NCA improve both the early and the ultimate strength.
- However, NCA-POFA significantly affects the workability of mortar/concrete at dosages beyond 1%.
- The NCA is also suitable for producing the CNC cement, acting as an activator for the 100% slag and POFA.
- The mortar strength indicates that the CNC cement is highly reactive in pozzolanic activity, not only for high-volume replacement cement with slag and POFA concrete but also for sustainable high-strength cement and concrete of 100 MPa to 120 MPa.
- It fulfils the requirements of sustainable high-performance concrete technology. No steaming/autoclave energy is needed: the SCC attains high early and ultimate strength at temperatures of 27 °C to 35 °C (tropical climate) with a low W/B = 0.25, using the CNC cement with 90% and 100% OPC replacement by the local renewable materials RHA, POFA and slag.
- An optimum dosage of 1% NCA is good for non-steaming spun pile concrete, which achieved >40 MPa at 7 hrs; grade 80 can be delivered at 3 days instead of the common practice of 14 days.
- The CNC cement concrete mix design is comparable in strength and durability performance to the normal OPC mix, with high strength of 100 MPa and very low water depth penetration.

6. ACKNOWLEDGEMENT
The authors would like to thank the faculty of GNDEC, Ludhiana, and SBSSTC, Ferozepur, for their full support and encouragement of this research.

REFERENCES
[1] Mohamed, E.I., Mohd Warid Hussin and Mohammad Ismail, 2008, "High performance blended cement concrete in Malaysia", 8th International Symposium on Utilization of High-Strength and High-Performance Concrete, 27th-29th 2008, Tokyo, Japan, pp. 639-646.
[2] Ilham, Ade, 2004, "Mix design and properties of high performance concrete with compressive strength from 50MPa to 100MPa", PhD thesis, Universiti Kebangsaan Malaysia. Kartik, H.O., Russell, L.H. and Ross, S.M., 2003, "HVFA Concrete - An Industry Perspective", Concrete International, Vol. 25, No. 8, pp. 29-34.
[3] Lee, Y.L., Koh, H.B., Chee, K.W., Suraya, H.A., Suhaizad Sulaiman and Yung, T.H., 2007, "Micronised biomass silica and nano particles synthesis - recent development", Malaysian Construction Research Journal (MCRJ), Vol. 1, No. 7, pp. 28-37.
[4] Lai, F.C., 2009, "High volume quaternary blended cement for sustainable high performance concrete", 34th Conference on Our World in Concrete & Structures, 16th-18th August 2009, Singapore, pp. 175-180.
[5] Duxson, P., Fernandez, J., Provis, J.L., Lukey, G.C., Palomo, A. and Van Deventer, J.S.J., 2007, "Geopolymer technology: the current state of the art", J. Mater. Sci., Vol. 42, pp. 2917-2933.
[6] Ravindra, N.T. and Ghosh, S., 2009, "Effect of mix composition on compressive strength and microstructure of fly ash based geopolymer composites", ARPN Journal of Engineering and Applied Sciences, Vol. 4, No. 4, pp. 68-74.
[7] Nittaya, T. and Apinon, N., 2008, "Synthesis and characterization of nanosilica from rice husk ash prepared by precipitation method", CMU. J. Nat. Sci., Vol. 7, No. 1, pp. 59-65.
Transpiring Concrete Trends: Review

Prof. K. S. Bedi, Associate Prof., Department of Civil Engg., GNDEC, Ludhiana, Kanwarjeet_b@yahoo.com
Ivjot Singh, M.Tech Student, Department of Civil Engg., NIT Jalandhar, ivu.singh@gmail.com
Prof. Jaskarn Singh, Asst. Prof., Department of Civil Engg., GNDEC, Ludhiana, jaskarnsinghgne@gmail.com

ABSTRACT
With a thrust on infrastructure development, advancement in construction materials has a significant place. Recent research in concrete has emphasized the strength and durability of hardened concrete. The higher grades of concrete are now becoming more popular, with proven utility in the construction of important structures. The different emerging trends of concrete production, and the current research scenario in the area of Self Compacting Concrete, are discussed in this paper, along with recent developments in the field of concrete such as high performance concrete (HPC), lightweight aggregate concrete, compacted reinforced concrete (CRC), high volume fly ash concrete and green concrete using recycled materials.

Keywords: Concrete, Aggregate, Fly ash

1. INTRODUCTION
Concrete technology is a vast area in the field of research. As concrete is the mainly adopted construction material, the recent thrust on infrastructure development has increased cement production. Hence, innovative materials have been developed and new applications have emerged.

The basic ingredients of concrete are the same as before: cement, coarse aggregates, fine aggregates and water. But developments in material technology have introduced many trends in concrete, like Self Compacting Concrete (SCC), High Performance Concrete (HPC), Lightweight Aggregate Concrete, High Volume Fly Ash Concrete and Green Concrete. We shall discuss these trends briefly, one by one. Although different types of concrete have been developed for use in different applications, their common virtues are strength, durability, wide availability, fire resistance and comparatively low cost.

2. TRENDS IN CONCRETE

2.1 Self Compacting Concrete
SCC is a highly workable concrete which flows through congested reinforced areas effectively under its own weight, without any segregation, excessive bleeding, air-popping or any other material separation, and without any need for compaction. Being highly workable, SCC has different properties from conventional concrete. SCC has a slump value of 280 mm+ [1] at all levels of flowability, while maintaining the viscosity needed to keep the mortar in suspension and maintain a homogeneous mixture. Conventional concrete at a slump value of 280 mm+ does not have that much stability, unlike SCC.

In the 1980s, Japan faced a problem of a lack of skilled workers who could construct durable concrete structures. Professor Hajime Okamura (University of Tokyo, now Kochi University of Technology) found the use of SCC to be a solution to this problem. SCC technology in Japan was based on using conventional superplasticizers to create highly fluid concrete, along with viscosity-modifying agents (VMA), which increase plastic viscosity and thus prevent segregation.

SCC offers help in the following respects:

1. Reduced cost
• Improved productivity – SCC can increase the speed of construction, improve surface finishing and thus reduce repair costs, reduce maintenance costs on equipment, and provide a less time-consuming process.
• Reduced labor costs – SCC reduces labor demands and compensates for the lack of skilled workers to perform the rigorous work required for quality concrete construction.

2. Improved work environment and safety – SCC reduces the use of vibrators for concrete placement, thus minimizing vibration and noise exposure. It eliminates trip hazards caused by cords. It reduces fall hazards, as workers do not have to stand on forms to compact the concrete.

2.1.1 Properties of Self Compacting Concrete
There are a number of steps to design the optimum composition of SCC. First, we need to determine the water-cement ratio. Then the most important criteria are that the coarse aggregate volume should be 50% of the solid volume of the concrete without air, and that the fine aggregate volume should be 40% of the mortar volume, where particles finer than 0.09 mm are considered not as aggregate but as powder [2]. If the composition of the mixture obtained in this way is mathematically analyzed, it is found that this procedure leads to a concrete composition with a little "excess paste". It means that slightly more paste is
in the mixture than necessary to fill all the holes between the particles: this implies that around any particle a very thin "lubricating" layer exists, by virtue of which the friction between the particles in the fluid mixture is greatly reduced in comparison to conventional mixtures (Fig. 1).

Fig. 1. Excess paste layers around the aggregate particles.

Also the modulus of elasticity of SCC is smaller than that of normal concrete of the same strength [2]. This is because of the different stiffness of the aggregates used, so the value of the E modulus is considered to scatter.

2.1.2 Applications of Self Compacting Concrete
The advantages of using SCC are useful in the following respects:
- the substantial reduction of the noise level
- the absence of vibration
- the reduction of dust (quartzite!) in the air due to vibration
- the energy saving
- the omission of the expensive mechanical vibrators
- the reduction of wear to the formwork
- the possibility to produce elements with high architectural quality

Fig. 2. Modulus of elasticity of self-compacting concrete.

2.2 Light Weight Concrete
Concrete of substantially lower density than that made from gravel or crushed stone, usually made with lightweight aggregate or by injecting air or gas into the mortar, is called lightweight concrete. It can also be defined as a type of concrete which includes an expanding agent that increases the volume of the mixture while reducing the dead weight. It is lighter than conventional concrete, with a dry density of 300 kg/m3 up to 1840 kg/m3. Lightweight aggregates used in structural lightweight concrete are typically expanded shale, clay or slate materials that have been fired in a rotary kiln to develop a porous structure. Other products such as air-cooled blast furnace slag are also used. The primary use of lightweight concrete is to reduce the dead load of a concrete structure, which then allows the structural designer to reduce the size of columns, footings and other load-bearing elements. So the marginally higher cost of lightweight concrete is offset by the size reduction of structural elements, less reinforcing steel and a reduced volume of concrete, thus resulting in lower overall cost [3].

Besides these, lightweight concrete has the following advantages too:
1. Earthquake resistance: being lighter than ordinary concrete and brick, the lightness of the material increases resistance against earthquakes.
2. Insulation: superior thermal insulation properties compared to conventional brick and concrete, which reduces heating and cooling expenses. In buildings, lightweight concrete will also produce a higher fire-rated structure.
3. Workability: products made from lightweight concrete are light, making them easy to place using less skilled labor.
4. Savings in material: reduces the dead weight of filler walls in framed structures by more than 50% as compared to brickwork, resulting in substantial savings.
5. Less creep and shrinkage: the values of creep and shrinkage strain for these lightweight concretes are low compared to conventional concrete [4].

Note: the concrete cover to reinforcement when using lightweight aggregates should be adequate. Usually it is 25 mm more than for normal concrete, because of the increased permeability, and also because the concrete carbonates rapidly, by which the protection given to the steel by the alkaline lime is lost.

Formation of lightweight concrete: Portland cement and sand are mixed so that when the mix sets and hardens, a uniform cellular structure is formed. Thus it is a mixture of water, cement and finely crushed sand. Fine aluminium powder is mixed into the slurry, where it reacts with the calcium hydroxide present, producing hydrogen gas. This hydrogen gas, when contained in the slurry mix, gives the cellular structure and thus makes the concrete lighter than conventional concrete.

2.3 High Volume Fly Ash Concrete
In the modern scenario the cementing materials are being replaced by supplementary materials to a large percentage. Nearly 800 million tonnes of fly ash is available worldwide, which points to the potential use of fly ash. Because of its pozzolanic, self-cementitious nature it can be used as a mineral admixture in concrete. Fly ash has been used as a mineral admixture to reduce the heat of hydration where the specific application does not require early strength, as in the case of concrete roads, dams and other marine structures. Research investigations


have shown that about 50% of the cement can be replaced by fly ash.

The production of Portland cement is not only costly and energy intensive but also produces large amounts of carbon dioxide. With large quantities of fly ash available around the world at low cost, the use of HVFA seems to offer the best short-term solution to rising cement demand. Also, LEED points, which are points awarded on the basis of environmental performance, are available for any mixture that replaces up to 40% of the cement in concrete by fly ash [5].

The improvement in durability is the result of the reduction in calcium hydroxide, which is the most soluble of the hydration products, and of changes in the pore structure. This concrete is more crack-resistant than conventional PCC due to the decreased shrinkage.

Fly ash affects the plastic properties of concrete by improving workability, reducing water demand as well as segregation and bleeding, and lowering the heat of hydration. Fly ash increases strength, reduces permeability, reduces corrosion of reinforcing steel, increases sulphate resistance, and reduces alkali-aggregate reaction. Fly ash concrete reaches its maximum strength more slowly than concrete made with only Portland cement. The techniques for working with this type of concrete are standard for the industry and will not impact the budget of a job.

2.4 Triple Blend Concretes
With established knowledge of the improvement in durability of concretes containing good amounts of fly ash or blast-furnace slag (PPC/PSC), one of the main reasons preventing their large-scale use has been the slower gain of compressive strength at early age. One method for overcoming this slower strength gain is to add another, more rapidly reacting supplementary cementitious material like micro-silica. Thus the potential long-term durability and strength improvements may be obtained with minimal impact on early-age strength [6]. This provides an attractive option for specifiers looking to decrease the time before formwork may be removed, particularly the time before bridge decks can be opened to traffic. Laboratory work on triple blend concretes (or ternary mixes, as they are called) has already started. To allow the more widespread adoption of such mixes, it is necessary to build up a series of test data on their performance, to allow designers and specifiers to make informed choices with regard to the selection of raw materials and their proportions.

2.5 Green Concrete
Most people associate green concrete with concrete that is colored with pigment; the term is also used for concrete that has not yet hardened. In the context of this topic, however, green concrete is taken to mean environmentally friendly concrete: concrete that uses less energy in its production and produces less carbon dioxide than normal concrete.

How green is a product? The lower the embodied energy, the greener the product.

The philosophy of green concrete, in fact of any green product, is based on the principle of optimizing its embodied energy, i.e. on minimizing the energy spent to put it into its final functional form. It therefore depends on the practices related to every stage of its production.

Green concrete should follow the reduce, reuse and recycle technique, or any two of these processes, in concrete technology. The three major objectives behind the green concept in concrete are: first, to reduce greenhouse gas emissions (carbon dioxide emission from the cement industry, as manufacturing one ton of cement emits one ton of carbon dioxide); second, to reduce the use of natural resources such as limestone, shale, clay, natural river sand and natural rocks, which are consumed for the development of mankind and are not given back to the earth; and third, to use waste materials in concrete, which also frees the large areas of land used for the storage of waste materials that cause air, land and water pollution. These objectives behind green concrete will result in sustainable development without the destruction of natural resources.

For concrete, any practice which helps to reduce the energy associated with its production, be it energy-efficient cement production, use of blended cements, savings on transportation of ingredients, or direct use of recycled waste, contributes towards its greenness.

3. CONCLUSION
The major developments in concrete technology have been discussed briefly in this paper. SCC is the most widely adopted concrete in the construction era, and further research on its strength parameters is going on. In future research, concretes like bacterial concrete and nano-composites will also find a suitable place. Fly ash concrete has addressed the problem of carbon dioxide liberation, and the new kind of triple blend concrete gains more strength in the early ages of a building. Also, green concrete may be preferred over normal concrete to minimize the environmental impact.

REFERENCES
1. Technical Bulletin TB1500, "An Introduction to Self Compacting Concrete".
2. Joost Walraven, Delft University, The Netherlands, "Self Compacting Concrete: Challenge for Designer and Researcher".
3. "Advantages of Structural Lightweight Aggregate Concrete", Expanded Clay, Shale and Slate Institute, www.escsi.org
4. Leming, M.L., "Creep and Shrinkage of Lightweight Concrete", Department of Civil Engineering, North Carolina State University at Raleigh, North Carolina, April 16, 1990.
5. K. U. Muthu, M. S. Ramaiah Institute of Technology, India, "Emerging Trends in Concrete Technology and Structural Concrete".
6. Tiwari, A.K. and Illyas, Momin, "Improving early age strength of PSC with indigenous silica fume", The Indian Concrete Journal, October 2000, Vol. 74, No. 10, pp. 595-598.

FEASIBILITY OF FLYOVER ON UNSIGNALISED INTERSECTION

Abhishek Singla (Research Scholar), Gurpreet Singh, Bohar Singh, Dapinder Deep Singh
Department of Civil Engineering
Giani Zail Singh Campus, Bathinda; Shaheed Bhagat Singh State Technical Campus, Ferozepur

INTRODUCTION

The rapid urbanization and growth of private vehicle ownership have caused an increase in road traffic congestion and a degradation of level-of-service in most of the urban areas in India. As a result, there has been an increasing need for providing relief of urban traffic through the provision of adequate transport infrastructure. In recent years, efforts have been intensified for providing new roads and widening existing roads in many urban areas. However, because of frequent roadside encroachments, non-availability of land, etc., it has not been practically possible to provide new roads, or even an additional traffic lane, in all situations. As a result, there are a number of roads in urban areas where the widening has been done only partially, and these roads do not satisfy the standard lane dimensions. The growing mismatch between the road infrastructure and the vehicle population has led to traffic congestion, reduced level-of-service, an increase in road accidents and environmental pollution. The study stretch of NH-5 (formerly known as NH-95) caters to both urban and regional traffic.

For the present study, an unsignalised intersection lying on NH-5 at Moga town has been undertaken. In the absence of a bypass, the regional traffic, which carries heavy vehicles, especially freight, moves through this corridor. As there is a restriction on freight movement during the daytime, this stretch remains busy during the night as well.

The objective of the study is to understand the traffic operations on such roads in terms of road congestion and level-of-service characteristics, and thereby to explore the benefits of measures such as an increase in road width, construction of an elevated bypass, etc.

IDENTIFICATION OF PROBLEMS

The following problems generally occur on urban roads:
1. Congestion
2. Construction Problems
3. Utility Services
4. Road Surface at Intersection
5. Repair of Roads
6. Pollution


OBJECTIVES OF THE STUDY

1. To study the existing travel pattern on the study stretch.
2. To conduct the necessary traffic studies on the selected stretch of the road.
3. To analyze the data.
4. To provide a proposal for reducing the traffic congestion.

REVIEW OF LITERATURE

1. Traffic engineering is an important aspect for traffic engineers, which dictates the achievement of safe, efficient and convenient movement of persons and goods. Traffic engineering in India is a complex issue due to the different modes of traffic available, ranging from the slowest, in the form of human/animal-drawn carts, to the fastest of the four-wheelers.
2. Traffic engineering today is a much larger challenge due to the following reasons:
   - the need to have designs which are eco-friendly;
   - the need for faster development and sustainable designs for longevity, due to the ever-growing needs of the present-day economy.

PASSENGER CAR UNIT (PCU)

1. This is one of the fundamental measures of traffic on a road system. It indicates the volume of traffic using the road in a given interval of time.
2. Table 1 provides the guidelines for the values adopted for the different categories of transport modes available in the country for urban roads. The Indian Roads Congress (IRC) lays down these values. Refer to the MORT&H Handbook for Highway Engineers, 2002.
TABLE-1: PCU EQUIVALENTS, URBAN AREAS

Sr.  Vehicle Type                               Equivalent PCU Factor
No.                                             (% composition of vehicle type in traffic stream)
                                                5%       10%
Fast Vehicles
1.   Two-wheelers (Motor Cycles and Scooters)   0.50     0.75
2.   Passenger Cars, Pick-up Vans               1.00     1.00
3.   Auto-Rickshaws                             1.20     2.00


4.   Light Commercial Vehicles                  1.40     2.00
5.   Trucks or Buses                            2.20     3.70
6.   Agricultural Tractor-Trailers              4.00     5.00
Slow Vehicles
7.   Cycles                                     0.40     0.50
8.   Cycle-Rickshaws                            0.50     2.00
9.   Horse-Drawn Vehicles                       1.50     2.00
10.  Hand Carts                                 2.00     3.00
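Read together with the counts in Tables 2-5, the PCU factors above convert classified vehicle counts into a single flow figure. The following Python sketch illustrates the weighting; the factors are the 5%-composition values from Table 1, while the counts are hypothetical figures chosen only for illustration, not taken from the study.

```python
# PCU factors at 5% composition, copied from Table 1 (fast vehicles only).
PCU_FACTOR = {
    "two_wheeler": 0.50,
    "car": 1.00,
    "auto_rickshaw": 1.20,
    "lcv": 1.40,
    "truck_or_bus": 2.20,
    "tractor_trailer": 4.00,
}

# Hypothetical classified counts for one approach, in vehicles per hour.
counts = {
    "two_wheeler": 200,
    "car": 150,
    "auto_rickshaw": 50,
    "lcv": 30,
    "truck_or_bus": 40,
    "tractor_trailer": 10,
}

# The weighted sum expresses the mixed flow in passenger car units per hour.
pcu_per_hour = sum(PCU_FACTOR[v] * counts[v] for v in counts)
print(pcu_per_hour)  # 480.0 PCU/hr for these hypothetical counts
```

The "TOTAL PCU/hr" rows of Tables 2-5 are figures of this kind, assuming each classified movement count has been weighted by its PCU factor before summation.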

FUTURE TRAFFIC

1. Future traffic projections based on previous or present-day traffic volume studies are necessary for an estimate of the efficacy of the existing facilities and of the improvements required, if any.
2. In addition, the projection of traffic can lead to policy decisions with respect to the requirements of bypasses to city centres and increases in road capacities.
3. The formula applied for the calculation of the projected traffic volume for the next ten years is as under:

   A = P (1 + r) ^ (n + 10)

   where
   A = the projected traffic volume,
   P = the volume count of the available (current) year,
   r = the rate of growth of traffic per year, usually taken as 0.075,
   n = the number of years, usually taken as zero.

DATA COLLECTION AND ANALYSIS

METHODS AVAILABLE FOR TRAFFIC COUNTS
i.   Manual methods
ii.  Combination of manual and mechanical methods
iii. Automatic devices
iv.  Moving observer method
v.   Photographic methods
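The projection formula in the FUTURE TRAFFIC section above translates directly into code; the following is a minimal sketch (function and variable names are mine, not from the paper):

```python
def projected_volume(P, r=0.075, n=0):
    """Projected traffic volume A = P * (1 + r) ** (n + 10),
    where P is the present-year volume count, r the annual growth
    rate (0.075 in this study) and n the number of years (zero here)."""
    return P * (1 + r) ** (n + 10)

# Ten-year projection of a present peak-hour volume of 2543 PCU/hr:
print(round(projected_volume(2543)))  # -> 5241
```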


STUDY OF THE UNSIGNALISED INTERSECTIONS

1 THE BHUGHIPURA CHOWK, MOGA

1.1 TRAFFIC VOLUME STUDY (0830-1030 Hours): WEEK DAY. The traffic volume count for a whole working day was conducted. It was found that the peak volume in the morning existed during 0830-1030 hours. The figures so obtained are given in Table 2. For the traffic flow diagram, refer to Figure 1.

1.2 TRAFFIC VOLUME STUDY (1030-1230 Hours): WEEK DAY. The traffic volume count for a whole working day was conducted. It was found that the peak volume in the morning existed during 1030-1230 hours. The figures so obtained are given in Table 3. For the traffic flow diagram, refer to Figure 2.

1.3 TRAFFIC VOLUME STUDY (1230-1430 Hours): WEEK DAY. The traffic volume count for a whole working day was conducted. It was found that the peak volume in the afternoon existed during 1230-1430 hours. The figures so obtained are given in Table 4. For the traffic flow diagram, refer to Figure 3.

1.4 TRAFFIC VOLUME STUDY (1430-1630 Hours): WEEK DAY. The traffic volume count for a whole working day was conducted. It was found that the peak volume in the evening existed during 1430-1630 hours. The figures so obtained are given in Table 5. For the traffic flow diagram, refer to Figure 4.

1.5 PROJECTED FUTURE TRAFFIC VOLUME GROWTH. The future traffic growth for the intersection through the years 2011-2031 has been projected as per Table 6. It will be seen that the most affected approach, i.e. the LUDHIANA-MOGA approach, will cross a figure of 15,000 by the year 2037. The growth has been presumed at the existing rate of 7.5%. It is a fact that this kind of traffic volume is beyond the handling scope of a signalized intersection.


TABLE-2: TRAFFIC VOLUME STUDY AT THE BHUGIPURA CHOWK, MOGA

WEATHER: GOOD            DATE: 23-27 AUG 2011
ROAD SURFACE: GOOD       TIME: 08:30-10:30 hrs
PEAK HOUR: 09:00-10:00

TRAFFIC ENTERING INTERSECTION FROM (L = Left, S = Straight, R = Right)

Sr.  Vehicle Class    PCU     LUDHIANA         MOGA             BARNALA          ASR.-JAL. BYPASS
No.                   Factor  L    S    R      L    S    R      L    S    R      L    S    R
1    Cycle            0.5     77   114  64     85   101  68     74   92   69     108  68   72
2    Cycle Rickshaw   2.0     71   69   89     72   65   119    65   97   73     82   72   66
3    Tractor Trolley  4.0     62   157  120    161  190  147    95   106  97     109  104  120
4    Car              1.0     101  328  107    162  335  187    156  111  114    99   62   101
5    Bus              2.2     40   153  37     43   132  59     37   27   48     46   22   38
6    Truck            2.2     47   99   85     52   155  71     115  107  114    120  115  151
7    2-Wheeler        0.75    100  210  113    219  182  206    193  161  192    146  112  205
8    Auto (3-Wh)      1.2     61   97   91     85   114  92     87   79   87     91   85   82
9    Total PCU/hr             560  1227 706    879  1274 949    822  780  794    801  640  835
10   G. Total PCU/hr          2543             3102             2396             2276
11   % Left Turning           23.98            28.33            34.30            35.19
12   % Right Turning          27.76            30.59            33.13            36.68

TOTAL PEAK HOUR VOLUME AT INTERSECTION: 10317 PCU/hr


TABLE-3: TRAFFIC VOLUME STUDY AT THE BHUGIPURA CHOWK, MOGA

WEATHER: GOOD            DATE: 23-27 AUG 2011
ROAD SURFACE: GOOD       TIME: 10:30-12:30 hrs
PEAK HOUR: 11:00-12:00

TRAFFIC ENTERING INTERSECTION FROM (L = Left, S = Straight, R = Right)

Sr.  Vehicle Class    PCU     LUDHIANA         MOGA             BARNALA          ASR.-JAL. BYPASS
No.                   Factor  L    S    R      L    S    R      L    S    R      L    S    R
1    Cycle            0.5     66   102  61     84   93   64     63   69   61     93   65   64
2    Cycle Rickshaw   2.0     77   71   91     73   69   103    61   85   67     74   107  113
3    Tractor Trolley  4.0     82   138  102    123  145  104    103  72   84     88   64   107
4    Car              1.0     89   236  96     145  255  113    136  98   101    71   47   82
5    Bus              2.2     37   127  41     57   105  40     25   16   31     37   29   42
6    Truck            2.2     31   74   58     45   126  52     79   62   56     84   73   104
7    2-Wheeler        0.75    85   183  101    193  127  178    157  134  145    113  97   107
8    Auto (3-Wh)      1.2     82   103  86     81   92   88     83   72   79     89   81   77
9    Total PCU/hr             549  1034 636    801  1012 742    707  608  624    649  563  696
10   G. Total PCU/hr          2219             2556             1939             1908
11   % Left Turning           24.74            31.34            36.46            34.01
12   % Right Turning          28.66            29.02            32.18            36.47

TOTAL PEAK HOUR VOLUME AT INTERSECTION: 8622 PCU/hr


TABLE-4: TRAFFIC VOLUME STUDY AT THE BHUGIPURA CHOWK, MOGA

WEATHER: GOOD            DATE: 23-27 AUG 2011
ROAD SURFACE: GOOD       TIME: 12:30-14:30 hrs
PEAK HOUR: 12:45-13:45

TRAFFIC ENTERING INTERSECTION FROM (L = Left, S = Straight, R = Right)

Sr.  Vehicle Class    PCU     LUDHIANA         MOGA             BARNALA          ASR.-JAL. BYPASS
No.                   Factor  L    S    R      L    S    R      L    S    R      L    S    R
1    Cycle            0.5     68   109  70     69   106  69     76   87   69     115  73   65
2    Cycle Rickshaw   2.0     86   106  83     78   63   114    65   93   74     72   79   73
3    Tractor Trolley  4.0     88   99   103    144  164  89     109  92   67     38   77   71
4    Car              1.0     96   168  93     119  172  158    64   59   82     91   84   57
5    Bus              2.2     47   81   48     35   112  59     41   37   59     36   42   53
6    Truck            2.2     38   73   67     34   72   109    93   98   64     71   58   99
7    2-Wheeler        0.75    96   224  103    146  123  185    146  283  181    102  132  193
8    Auto (3-Wh)      1.2     81   101  85     89   127  87     82   76   76     67   71   84
9    Total PCU/hr             580  961  654    714  939  868    678  825  673    592  617  695
10   G. Total PCU/hr          2193             2521             2176             1899
11   % Left Turning           26.44            28.32            31.15            31.17
12   % Right Turning          29.73            34.43            30.92            36.33

TOTAL PEAK HOUR VOLUME AT INTERSECTION: 8789 PCU/hr


TABLE-5: TRAFFIC VOLUME STUDY AT THE BHUGIPURA CHOWK, MOGA

WEATHER: GOOD            DATE: 23-27 AUG 2011
ROAD SURFACE: GOOD       TIME: 14:30-16:30 hrs
PEAK HOUR: 15:30-16:30

TRAFFIC ENTERING INTERSECTION FROM (L = Left, S = Straight, R = Right)

Sr.  Vehicle Class    PCU     LUDHIANA         MOGA             BARNALA          ASR.-JAL. BYPASS
No.                   Factor  L    S    R      L    S    R      L    S    R      L    S    R
1    Cycle            0.5     70   105  67     84   96   66     66   71   69     99   66   69
2    Cycle Rickshaw   2.0     81   68   89     76   72   114    64   92   73     78   70   64
3    Tractor Trolley  4.0     75   126  97     131  151  119    107  79   89     98   95   111
4    Car              1.0     82   301  14     160  271  127    148  105  107    84   52   89
5    Bus              2.2     42   100  63     57   127  63     36   27   36     34   21   31
6    Truck            2.2     34   79   52     47   148  59     85   69   65     95   103  109
7    2-Wheeler        0.75    87   194  96     198  129  187    161  139  157    124  105  169
8    Auto (3-Wh)      1.2     75   85   92     87   109  96     89   76   87     99   87   85
9    Total PCU/hr             546  1058 570    841  1103 831    756  658  683    711  599  727
10   G. Total PCU/hr          2174             2775             2097             2037
11   % Left Turning           25.11            30.31            36.05            34.90
12   % Right Turning          26.21            29.95            32.57            35.68

TOTAL PEAK HOUR VOLUME AT INTERSECTION: 9083 PCU/hr


TABLE-6: PROJECTED FUTURE TRAFFIC AT THE BHUGHIPURA CHOWK (BASIS: MAXIMUM PCU)

Sr.  Approach          Time of  Clock timings   YEARS
No.                    day      (hours)         2011    2016    2021    2026    2031
1.   LUDHIANA          FN       0830-1030       2543    3561    5339    5375    10681
                       AN       1230-1430       2193    4471    4518    6360    9211
2.   MOGA              FN       0830-1030       3102    4342    6391    8996    13028
                       AN       1230-1430       2521    3540    5154    7311    10588
3.   BARNALA           FN       0830-1030       2396    3354    4936    6949    10063
                       AN       1230-1430       2176    3046    4483    6311    9140
4.   ASR-JAL BYPASS    FN       0830-1030       2276    3186    4689    6601    9559
                       AN       1230-1430       1899    2659    3912    5508    7976
5.   TOTAL FOR         FN       0830-1030       10317   14423   21255   29921   43331
     INTERSECTION      AN       1230-1430       8789    13716   18067   25490   36935

Note: Projection based on the formula A = P(1+r)^(n+10), where A = projected traffic, P = present traffic, r = rate of growth = 0.075, n = zero.
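Assuming the same compound growth rate r = 0.075 that underlies Table 6, the projection can also be inverted to estimate when an approach volume will cross a given threshold, such as the 10,000 PCU/hr flyover-warrant figure discussed in the conclusions. The loop below is an illustrative sketch, not part of the paper:

```python
def years_to_cross(present_volume, threshold, r=0.075):
    """Smallest whole number of years until present_volume * (1 + r)**t
    reaches the threshold, growing at compound rate r per year."""
    t, volume = 0, float(present_volume)
    while volume < threshold:
        volume *= 1 + r
        t += 1
    return t

# 2011 Ludhiana forenoon volume (Table 6) against a 10,000 PCU/hr warrant:
print(years_to_cross(2543, 10000))  # -> 19 years, i.e. around the year 2030
```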


FIGURE 1: TRAFFIC FLOW DIAGRAM (PCU/hr) AT THE BHUGHIPURA CHOWK, WEEK DAY: 0830-1030


FIGURE 2: TRAFFIC FLOW DIAGRAM (PCU/hr) AT THE BHUGHIPURA CHOWK, WEEK DAY: 1030-1230


FIGURE 3: TRAFFIC FLOW DIAGRAM (PCU/hr) AT THE BHUGHIPURA CHOWK, WEEK DAY: 1230-1430


FIGURE 4: TRAFFIC FLOW DIAGRAM (PCU/hr) AT THE BHUGHIPURA CHOWK, WEEK DAY: 1430-1630


CONCLUSIONS

It is a known fact that the user of any asset is the one most affected by infrastructure development. The irony is that little attention is paid to this aspect by the hierarchy of the decision-making body. The aspects of planning, money allocation and execution of any works need to be coordinated and far-sighted, so that the assets once created have a long life and can be cost-effective.

The traffic engineers involved in nation building, therefore, have a major role to play, both at the planning and at the execution stage. The planning needs to be as dynamic as possible, with the parameters of planning based on the wider scope of road users' requirements.

Without an input of the nature discussed here, money is pumped into projects which would otherwise have the far-reaching consequence of being cost-ineffective. There is, therefore, a need to look at the planning and execution aspects, so that the road users' requirements are met and they remain satisfied over at least the life of the asset created. It is only then that the asset so created can be termed cost-effective.

TRAFFIC VOLUMES OF THE APPROACHES AND OF THE UNSIGNALIZED INTERSECTION. The summary of the traffic volumes in terms of PCU/hr for the intersection under study has been considered. It is evident that the volumes on the two major approaches to the intersection on five days of the week have almost exceeded the laid-down volumes. The effect will worsen every year, keeping the growth pattern of the traffic in mind.

THE BHUGHIPURA CHOWK. During the peak hours, the traffic volumes on the two major approaches, i.e. Ludhiana and Moga, are in the range of 10,400. Flyovers need to be planned if the approach traffic volumes are beyond 10,000. The approach volume on the Asr-Jal bypass side is 2,200 and on the Barnala side about 2,400. These figures are for the year 2011. The traffic density is going to increase at the rate of 37% every year, if not more.

RECOMMENDATIONS
1. On the basis of the comparison of alternatives and the conclusions drawn above, it is evident that for providing a long-term solution to the traffic congestion, we should go for an elevated highway for the entire study stretch.
2. The Bhughipura junction should be improved by channelization and signalization, as the peak traffic volume from all legs is very close to the prescribed limit of warrant for a grade-separated intersection.

REFERENCES
1. Khanna S.K. and Justo C.E.G., "Highway Engineering", Nem Chand & Bros, 2009.


2. Kadiyali L.R., "Traffic Engineering and Transport Planning", Khanna Publishers, 2003.
3. M.O.R.T.&H., "Pocket Book for Highway Engineers", IRC, 2002.
4. IRC: 11-1962, "Recommended Practice for the Design and Layout of Cycle Tracks" (Second Reprint), The Indian Roads Congress.
5. IRC: 65-1976, "Recommended Practice for Traffic Rotaries" (Reprint September 2002), The Indian Roads Congress.
6. IRC: 70-1977, "Guidelines on Regulation and Control of Mixed Traffic in Urban Areas" (Reprinted November 2002), The Indian Roads Congress.
7. IRC: 92-1985, "Guidelines for the Design of Interchanges in Urban Areas", December 1985, The Indian Roads Congress.
8. IRC: 106-1990, "Guidelines for Capacity of Urban Roads in Plain Areas", November 1990, The Indian Roads Congress.
9. Chandra, Satish and Kumar, Upendra (2003), "Effect of Lane Width on Capacity under Mixed Traffic Conditions in India", Journal of Transportation Engineering, ASCE, Vol. 129, No. 2, pp. 155-160, March 1, 2003.
10. Farouki, O.T. and Nixon, W.J. (1976), "The Effect of Width of Suburban Roads on the Mean-free Speeds of Cars", Traffic Engineering and Control, pp. 518-519.
11. Gupta, Aman (1999), "Traffic Flow Analysis and Level-of-Service Evaluation of NH-1", ME (Highways) thesis, PEC, Chandigarh.
12. Hossain, M. and Iqbal, G.A. (1999), "Vehicular Headway Distribution and Free Speed Characteristics of Two-Lane Two-Way Highways of Bangladesh", Inst. Engg. (India), pp. 77-80.
13. Leong, H.J.W. (1968), "Distribution and Trend of Free Speeds on Two Lane Two Way Rural Highways in New South Wales", Proc., 4th ARRB Conf., Part 1, Australian Road Research Board, pp. 791-814.
14. Maitra, B., Sikdar, P.K. and Dhingra, S.L. (1999), "Modelling Congestion on Urban Roads and Assessing Level-of-Service", Journal of Transportation Engineering, ASCE, Vol. 125, No. 6, pp. 508-514, November/December 1999.
15. The Moga Web Site, e-Sampark.
16. The Moga Traffic Police Web Site.


Application of Geoinformatics in Automated Crop Inventory

Sandeep Kumar Singla, Research Scholar, Department of Civil Engineering, Indian Institute of Technology (IIT Roorkee), sandy.dce2014@iitr.ac.in
Dr. O. P. Dubey, Professor, Department of Civil Engineering, Indian Institute of Technology (IIT Roorkee), opdubey11@gmail.com
Dr. R. D. Garg, Associate Professor, Department of Civil Engineering, Indian Institute of Technology (IIT Roorkee), garg_fce@iitr.ac.in

ABSTRACT using sensors operating in various parts of the electromagnetic


spectrum. GPS tools are used to acquire particular
An attempt has been made in this study to review the role of measurement of an object’s position in terms of latitude,
geoinformatics to discriminate different crops at various longitude and altitude. These technologies endow with a cost
levels of classification, monitoring crop growth and prediction effective way to study the atmosphere, geosphere and
of the crop yield. The suitability of geoinformatics techniques biosphere interactions at global scale whereas at micro scale
suited to Indian conditions has also been assessed. also, space technology exhibits appropriate inputs for optimal
Development in applications of computers and information use of available natural resources.
technology has enhanced the capability of gathering huge and
mottled data as well as information, ranging from historical To keep an eye on the changes in the crop cover using remote
data, ground truth values and aerial photography to satellite sensing and modelling crop growth, crop yield and drafting
data. Thus remote sensing data and the information derived agricultural practices for optimal crop yield using GIS is a
from it, is attractive to agricultural management system in the good example of micro and macro-level applications.
India. It is concluded that, in addition to the remote sensing Relational Data Base Management System (RDBMS) is
technology, the use of many other techniques such as ground software that manages the data to produce relevant
observations, reviews, GIS and soil analysis is highly information in a very effectual manner. Outputs of geo-
appreciable. informatics provide a superb solution for the modeling and
monitoring of crop at a range of scales and thus support
Keywords
Remote Sensing, Crop Yield, Geoinformatics, GIS, GPS, RDBMS, Satellite Data, crop inventory, crop models.

1. INTRODUCTION
India accounts for only about 2.4 per cent of the world's geographical area, yet it supports about one seventh of the world's population. Food grain demand is steadily increasing due to the escalating population, improved socio-economic conditions and changing food habits. Prognostic studies indicate that over the coming 30 years food grain production will have to be doubled to meet the growing need. Food grain production depends upon a variety of earth factors, which may be grouped as above-surface, surface and below-surface parameters. For optimal and sustained food grain production, new methods, monitoring techniques and predictive crop growth and yield models are always welcome. Generally, the data required for monitoring, evaluation and modelling are unavailable or insufficient. There is therefore a pressing need to update and generate the required data and to automate crop inventory using the contemporary capabilities of geoinformatics.

Geoinformatics may be broadly defined as the combination of technology and science dealing with spatial information: its acquisition, qualification and classification, processing, storage and dissemination. Geoinformatics is an integrated tool to collect, process and generate information from spatial and non-spatial data. It is an appropriate blend of modules such as Remote Sensing (RS), the Global Positioning System (GPS), Geographical Information Systems (GIS) and Relational Database Management Systems (RDBMS).

Data collected by remote sensing systems and other means are processed, managed, analyzed and disseminated by a Geographical Information System (GIS). RS is the technology used to acquire information about an object, a process or a phenomenon without being in contact with it. RS is generally based on information collected from satellites or aircraft and supports the planning and management of agricultural resources.

It is an established fact that agricultural research has benefited from the conglomeration of technological advances largely developed for other industries. Industrialization brought mechanization and chemical fertilizers to agriculture. Technological development has offered efficient agricultural practices, including genetic engineering and automation, yielding more food per unit of natural resources. Information technology has improved the potential for integrating these technological and industrial advances into a sustainable agricultural production system. The computer was initially exploited in agricultural research for the adaptation of statistical formulae and complex models into digital form for simple and precision agriculture.

Currently, computers are being used for automation and to develop decision support systems (DSS) for agricultural production and protection research. Recently, geographic information systems and remote sensing technology have come to play a capable role in agricultural research, predominantly in crop yield prediction, in addition to crop suitability studies and site-specific resource allocation.

In this paper an attempt has been made to review the role of geoinformatics in discriminating different crops at various levels of classification, monitoring crop growth and predicting crop yield. The suitability of geoinformatics techniques for Indian conditions has also been assessed.

The planning and management of agriculture aim to optimize food production per unit of natural resources. Information captured on soil and water conservation and on the acreage of different crops using recent technologies such as remote sensing and GIS may lead to optimal agricultural production. Laboratory and farm level studies have clearly brought out the fact that the adoption of integrated land, water and crop management practices, together with integrated manure and pest management practices, has a positive impact on augmenting agricultural production.
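The integration of spatial and non-spatial data described in this introduction can be made concrete with a minimal sketch: a GIS-style join of an RDBMS attribute table with GPS field locations. Every identifier, coordinate and crop value below is invented for illustration and is not from the paper.

```python
# Illustrative GIS-style join of non-spatial RDBMS records with spatial
# GPS locations on a shared field identifier. All identifiers,
# coordinates and crop attributes are hypothetical.

# Non-spatial attribute table (RDBMS side): field id -> crop record
attributes = {
    "F1": {"crop": "wheat", "sown": "2014-11-05"},
    "F2": {"crop": "rice",  "sown": "2014-06-20"},
}

# Spatial table (GPS side): field id -> (latitude, longitude)
locations = {
    "F1": (29.97, 76.88),
    "F2": (30.33, 76.40),
}

def integrate(attributes, locations):
    """Join spatial and non-spatial records on the common field id."""
    layer = {}
    for fid, attrs in attributes.items():
        if fid in locations:          # keep only fields with a GPS fix
            layer[fid] = {**attrs, "lat_lon": locations[fid]}
    return layer

layer = integrate(attributes, locations)
```

A real GIS performs this join against geometries (polygons, rasters) rather than point coordinates, but the principle — attribute data keyed to locations — is the same.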
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
Information related to agricultural inputs, together with reliable data on the existing acreage and land under a range of crops, soil types and soil-related problems, water availability in irrigation systems, and the management of natural and other crop-related disasters, will facilitate the evolution of suitable strategies to sustain the pace of developments in agricultural research.

The kind of information needed to develop such a computer system includes (1) crop signature tables showing the upper and lower limits of crop color variation at various crop ages, (2) physical factors such as climate, soils and salt problems, (3) cultural factors such as the location of producing areas relative to markets, land costs, export crops and farming patterns, (4) a land use code, (5) base maps and (6) a crop calendar of the area.

To handle these complex problems, technology based on remote sensing provides a number of benefits over conventional methods. These include ease of use, multispectral data providing pertinent information, the potential to provide multi-temporal data giving a picture of long-term and seasonal changes, and the availability of descriptions with minimum distortion.

Developments in information technology have enhanced the capability of gathering vast and varied information, ranging from historical data and aerial photography to spaceborne data, ground truth values and other forms of ancillary data. In addition to remote sensing, many other techniques such as GPS, GIS, ground observations and soil analysis should be employed comprehensively in order to obtain a quality crop yield estimation system for India.

2. APPLICATION AREAS
Even a limited survey shows that geoinformatics can support the development and improvement of the quality of agricultural research. The applications and contributions of geoinformatics are most significant in the areas of crop yield estimation, spatial modeling, spatial sampling, classification, integrated surveys and web-based applications.

2.1 Crop Yield Estimation
Estimation of crop yield well before the harvest at regional and national scales is imperative because of the growing need for planning at the micro level and, predominantly, the demand for crop insurance (Anup et al. 2005). Crop yield estimation plays a significant role in economic development (Hayes and Decker, 1996). Currently it is done through extensive field surveys and crop cutting experiments. This enables decision makers and planners to predict the amount of crop import and export. In most developing countries, crop yield estimation is generally based on traditional methods of data collection relying on ground-based field surveys (Reynolds et al. 2000).

Conventional methods have been found to be expensive and time consuming, and are prone to large errors due to incomplete and inaccurate ground observations, leading to poor crop area estimation and crop yield assessment. In most developing countries the required data are generally available too late for appropriate decision making. Objective, consistent, and possibly cheaper and faster methods that can be used for monitoring crop growth and for early estimation of crop yield are therefore imperative. Data captured through remote sensing have the capacity and potential to exhibit spatial information at a global scale.

3. DIFFERENT APPROACHES
This section briefly portrays the different approaches and technologies used for crop inventory.

3.1 Aerial Photography
To obtain crop yield information, one must be able to recognize tone, pattern, texture and other image features. Crop yield information is used in conjunction with crop acreage statistics to obtain crop production. There are two distinct aspects of yield determination: (1) the forecast of yield based on characteristics of the plant or crop and on relationships established in prior years, and (2) estimates of the yield known from the actual weight of the harvested crop for the current year. After World War II, various researchers used the emerging concept of aerial photography for the optimized use of resources for agriculture and crop inventory.

Goodman (1959) used black and white photography for crop identification, developing techniques based primarily on ground appearance and the corresponding aerial photographic form of selected fields at nine intervals during the growing season.

Goodman (1964) used three sets of criteria that can be read or inferred from aerial photographs and that serve as indicators for crop identification: (1) farmstead features such as barns, granaries and silos; (2) crop associations; and (3) the uses that are made of particular crops.

Anson (1966) found that color infrared (CIR) photography has more impact than black and white or color photographs for the extraction of vegetative detail. It permits a ready distinction between soil and vegetation that is not always possible with black and white photography, where the background can have approximately the same tone as the plants, making photo interpretation more difficult.

Various researchers found that photographs or spectral values are an important information source for predicting crop yields, and work focused on the possibilities of forecasting yield from plant measurements. Houseman and Huddleston (1966) cite promising results for forecasting field crops such as cotton, maize, wheat and soybeans, and a typical example is given by Small (1967) for tree crops such as walnuts, oranges and filberts.

Johnson et al. (1969) identified crops based on image color and developed a computer land use mapping system. The authors indicate that the overall poor results were due to color variations caused by seasonal crop variations and film quality control.

In a photographic study by Roberts and Gialdini (1970), nine crops were grown in small plots to evaluate tone differences resulting from differences in the radiation reflected from the vegetation alone, not confounded by radiation reflected from the soils through varying degrees of canopy closure. It was shown that no single film-filter combination can be used to discriminate among all the crops.

Yost et al. (1970) attempted to extract land use information by making quantitative colorimetric measurements of additive and subtractive color infrared film. Additive color was found to be the best method for discrimination; separation positives and color infrared were second and third, respectively. The brightness or density of the image was not effective for identification.
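The crop signature tables mentioned earlier (upper and lower limits of crop response at various crop ages) amount to a simple range lookup, which can be sketched as follows; the crops, ages and reflectance limits below are invented for illustration:

```python
# Illustrative sketch of a "crop signature table": lower and upper limits
# of spectral response per crop at a given crop age, looked up to find
# candidate crops for an observation. All numbers are hypothetical.

SIGNATURES = {
    # crop: {age in weeks: (lower, upper) reflectance limits in one band}
    "wheat":  {4: (0.18, 0.30), 12: (0.35, 0.55)},
    "cotton": {4: (0.10, 0.17), 12: (0.25, 0.34)},
}

def identify(reflectance, age_weeks):
    """Return all crops whose signature range covers the observation."""
    matches = []
    for crop, table in SIGNATURES.items():
        lower, upper = table[age_weeks]
        if lower <= reflectance <= upper:
            matches.append(crop)
    return matches
```

Here `identify(0.28, 4)` returns `["wheat"]`; a real signature table would carry several spectral bands per crop age and handle overlapping ranges between crops.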
Coleman et al. (1974) used photography in an attempt to demonstrate a more cost-effective method of detecting and subsequently regulating cotton farming practices to aid in the control of the pink bollworm. Researchers have also shown that, for the purpose of crop identification, the greater the availability of temporal data, the higher the expected accuracy.

3.2 Multispectral Scanners
Multispectral scanners (MSS) have certain advantages and disadvantages when compared to photography. Landgrebe et al. (1967) showed the ability to differentiate wheat from other agricultural crops using multispectral data in a computer format with pattern recognition techniques. An important consideration in the task of species identification is the stage of growth of the crop.

Early work at LARS (Laboratory for Applications of Remote Sensing) at Purdue University showed the easy separability of MSS data into the broad surface-feature groupings of green vegetation, water and bare soil (Kristof, 1969). The identification and mapping of specific crop species require the acquisition of sufficient and accurate ground observations as training sets for computer-implemented analysis.

Hoffer and Goddrick (1971) demonstrated that the influence of geographic area is closely related to changes in crop maturity. They used MSS data over four flight lines extending across 100 miles of agricultural land in central Illinois and were able to separate wheat from other crops with over 90 percent accuracy on test fields.

3.3 Radar
The advantages and limitations of using either airborne or spaceborne radar for crop identification are discussed by Morain et al. (1970), who point out that many of the radar studies have concentrated on seasonal change between crops and that numerous variables must be considered in making even the simplest determinations.

Morain and Coiner (1970) and Schwarz and Caspall (1968), working with radar imagery, showed that major agricultural crops can be segregated, though not unambiguously identified, using simple two-dimensional plots of HH and HV films.

3.4 Satellite Data
Estimation models for crop area and yield have been studied for a long time, and numerous good models have been developed. However, these conventional models were developed primarily from the point of view of meteorology and biology, without regard to remote sensing, and thus cannot meet the requirements of today's society. The utilization of remote sensing data for agricultural development was investigated in the USA in 1971 under the Corn Blight Watch Experiment (CBWE). Remote sensing data have proven effective in predicting crop yield and provide representative and spatially exhaustive information for the development of crop growth monitoring models. Another experiment carried out using Landsat data was CITARS (Crop Identification Technology Assessment for Remote Sensing). It aimed at the identification of two major crops, corn and soybean, and tested the concept of signature extraction.

The Large Area Crop Inventory Experiment (LACIE), carried out during 1974-78, was a major international study conducted in the major wheat growing areas of the world. The models used in LACIE were statistical models in which yield is modeled as a function of air temperature and rainfall, and the project was reported as one of the first examples in which production was forecast through satellite remote sensing together with meteorological observations measured on the ground (Doraiswamy et al., 2003).

Today a large number of researchers and academicians are working on methodological expansion in this field of investigation. In India, a remarkable spurt in remote sensing activities started with the launch of the IRS (Indian Remote Sensing Satellite) 1A in 1988. India has since launched a variety of satellites devoted to particular areas of relevance, such as ResourceSat, CartoSat and OceanSat.

Various indices based on remote sensing have been employed to estimate the yield of several types of crops. For instance, the normalized difference vegetation index (NDVI) has been used to estimate the yield of rice (Rouse et al., 1974). However, yield estimation with remote sensing has limitations, mainly due to the indirect nature of the link between the NDVI and biomass, but also due to sensor spatial resolution and insufficient repeat coverage.

Tucker et al. (1980) used ground-based spectral radiometers to identify the relationship between NDVI and crop yield and demonstrated the high correlation between the two. Das et al. (1993) used greenness and transformed vegetation indices to predict wheat yield 85-110 days before harvest in India. These early studies led to crop yield estimation in several countries using satellite imagery.

Lennington and Sorensen (1984), McCloy et al. (1987) and Gallego et al. (1993) proposed models based primarily on remote sensing that meet the requirements for speed, quality and macro-management over large areas. These studies also showed that such models differ from the traditional ones.

Rudorff and Batista (1990) estimated the sugarcane crop in Brazil using remote sensing and an agrometeorological model based on Doorenbos and Kassam (1979), with a multiple regression approach used to integrate the vegetation index from Landsat and the yield from the agrometeorological model. These estimations explained 50, 54 and 69 percent of the yield variation in the three growing seasons analyzed. The authors also tested the accuracy of sugarcane yield estimations using only the RS data or only the agrometeorological model; the results were poorer compared to the combination of both (Rudorff and Batista, 1990).

Goyal (1990) carried out a study in Sultanpur district of Uttar Pradesh to assess crop area and yield estimation. The study demonstrated that satellite spectral data and the vegetation indices derived from them can be used for mapping, and that the conjunctive use of satellite-derived information and ground-based yield data enhances the estimation of crop yield. For this study, Landsat Thematic Mapper (TM) satellite data and crop yield data from crop cutting experiments were used. It was observed that NDVI, as compared to the RVI (Ratio Vegetation Index), exhibits a higher capability for classifying vegetation vigor and estimating crop yield. An effort was also made to quantify the effect of misclassification. An attempt was also made to investigate the worth of spectral data in forecasting crop yield and the correlation between wheat yield and the spectral
parameters acquired through hand-held spectral radiometers.

Singh and Goyal (1993), Singh and Goyal (2000) and Singh et al. (2002) carried out extensive studies on the estimation of wheat crop yield for Rohtak district in Haryana. They used data from the crop cutting experiments of 1995 and 1996 and multispectral IRS-1B (LISS II) data for 17 February 1996. Estimation of crop yield by means of the RVI and NDVI indices was examined; the efficiency of the resulting estimators, as compared to the usual estimator, worked out to be 1.42 and 1.28 respectively. The proposed model also confirmed that crop yield estimates at the district level may be obtained while reducing the number of crop cutting experiments to nearly two thirds, with no loss in precision and thus a saving in cost. The study also demonstrated that the synthetic estimator performs better than the direct estimator, and that the standardized error of both the direct and synthetic estimators at the tehsil level is within 5 per cent.

The NDVI from the National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer (NOAA-AVHRR), with a spatial resolution of 1000 m, exhibits a strong correlation with wheat yield in Italy (Benedetti and Rossini, 1993).

A large number of studies observed that, compared with low-temporal-resolution measurements, high-spatial-resolution sensors can more accurately forecast crop yield. Hamar et al. (1996) proposed a linear regression model to forecast corn and wheat yield at a regional scale, based purely on vegetation spectral indices obtained from Landsat (MSS) data.

Gupta (2002) developed an integrated methodology for the estimation of wheat crop yield utilizing survey data from crop cutting experiments as well as satellite spectral data in the form of NDVI. The study also demonstrated that remote sensing data associated with crop yield parameters from crop cutting experiments can greatly enhance the efficiency of small-area crop yield estimation methods.

Langley et al. (2001) and Nordberg and Evertson (2003) explored crop cover changes over large areas and showed that remote sensing technology offers a realistic and economical means of monitoring them. Remote sensing extends possible data collection from the present back over more than a few decades, with a potential capacity for systematic interpretation at a range of scales. For this reason, colossal efforts have been made by application specialists and researchers to delineate crop cover from local to global scales using remote sensing data. Jung et al. (2006) highlighted the different mapping approaches with their strengths and weaknesses.

Ahmad et al. (2003) developed a GIS-supported technique for the identification of different crops in Yamunanagar district of Haryana. The different factors responsible for crop growth were recognized, and a suitability index was identified and captured by means of a Spatial Analytic Hierarchy Process. The obtained index was also compared with the Composite Development Index.

Ferencz et al. (2004) presented two methods for estimating the yield of different crops in Hungary using Landsat (Thematic Mapper) remote sensing data. The requirement for pre-processing steps such as atmospheric, geometric, radiometric and scattering correction was also discussed. They used a new vegetation index, GYURI (General Yield Unified Reference Index), based on fitting a double Gaussian curve to the NOAA-AVHRR data during the period of crop growth. They also investigated another method using only NOAA-AVHRR county-level yield data, deriving the GYURI vegetation index for eight different crops over eight years. The proposed method is inexpensive and simple to use.

A new approach to crop yield estimation incorporating multi-resolution satellite data was proposed by Das (2004). The attempt was made to incorporate satellite spectral data and spatial sampling values for crop yield estimation, crop acreage estimation and crop yield forecasting. The proposed method used satellite data of coarse resolution, which is inexpensive and covers a large area compared with higher-resolution data. The study also demonstrated the higher efficiency of multiple frame sampling estimates compared to estimations using a single index. A supervised maximum likelihood approach was used for the classification of the satellite data. The study also discussed the noise arising during classification due to the presence of mixed pixels, and proposed a new classification model based on fuzzy classification developed by indicator kriging.

Anup et al. (2005) considered different parameters such as NDVI, surface temperature, soil moisture and rainfall data for crop yield assessment and prediction using a piecewise linear regression model with breakpoint. The crop production environment contains inherent sources of heterogeneity and non-linear behavior. A non-linear Quasi-Newton multivariate optimization approach was utilized, which reasonably minimizes errors and inconsistency in yield prediction. A function based on minimization of least-square loss was employed through iterative convergence with a predefined empirical equation, which provided tolerably low residual values with forecasts very close to the observed ones (R2 = 0.86 for the soybean crop and R2 = 0.78 for the corn crop). This study also proved that crop yield predicted from data obtained before harvest is of acceptable accuracy.

Ren et al. (2008) proposed a method of crop yield estimation using MODIS-NDVI data on a regional scale. With the intention of improving the quality of the obtained remote sensing data and the accuracy of yield estimation, a Savitzky-Golay filter was used to smooth the 10-day NDVI data. A stepwise regression method was employed to establish a linear relationship between the spatial accumulation of NDVI and the production of winter wheat. To validate the results, data from ground surveys were used and the errors were compared with the values from agro-climate models. The results showed that the relative errors of the yield predicted using MODIS-NDVI are between 4.62 and 5.40 percent, and that the calculated RMSE of 214.16 kg/ha was lower than the RMSE (233.35 kg/ha) of the agro-climate models.

Hu and Mo (2011) proposed a process-based model of crop growth, known as the VIP (Vegetation Interface Processes) model, to estimate crop yield using remote sensing data over the North China Plain. Statistical yield records and NDVI values from Terra-MODIS were used to obtain the spatial pattern of one of the key parameters, the maximum catalytic capacity for assimilation. The study showed that photosynthetic parameters acquired from remote sensing data are reliable for prediction of regional production using a process-based model.
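The index-based yield models reviewed in this section share a common skeleton: compute a vegetation index from red and near-infrared reflectance, accumulate it over the season, and regress observed yield on the accumulated index. The sketch below illustrates that skeleton with synthetic numbers; published models (e.g. Ren et al., 2008) additionally smooth the NDVI series and use stepwise variable selection.

```python
# Sketch of index-based yield estimation: NDVI and RVI from red/NIR
# reflectance, seasonal accumulation of NDVI, and an ordinary
# least-squares line of yield on accumulated NDVI. All reflectance
# values and yields are synthetic, for illustration only.

def ndvi(nir, red):
    """Normalized difference vegetation index (Rouse et al., 1974)."""
    return (nir - red) / (nir + red)

def rvi(nir, red):
    """Ratio vegetation index."""
    return nir / red

def season_sum_ndvi(composites):
    """Accumulate NDVI over the (nir, red) composites of one season."""
    return sum(ndvi(nir, red) for nir, red in composites)

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical districts: seasonal (nir, red) composites and yield (t/ha)
districts = {
    "A": ([(0.45, 0.20), (0.55, 0.14), (0.50, 0.18)], 3.1),
    "B": ([(0.40, 0.24), (0.48, 0.20), (0.44, 0.22)], 2.4),
    "C": ([(0.50, 0.15), (0.60, 0.10), (0.55, 0.13)], 3.6),
}

xs = [season_sum_ndvi(comps) for comps, _ in districts.values()]
ys = [y for _, y in districts.values()]
a, b = fit_line(xs, ys)

# Predict the yield of an unseen district from its accumulated NDVI
predicted = a + b * season_sum_ndvi([(0.47, 0.18), (0.57, 0.12), (0.52, 0.16)])
```

This mirrors the accumulated-NDVI regression idea only in outline: the studies above work on real 10-day composites, smooth the series before accumulation, and validate the fitted models against ground survey data.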
Mkhabela et al. (2011) proposed a new model to estimate crop yield using MODIS NDVI data. Regression and correlation analyses were carried out using 10-day composite NDVI data, with the running-average NDVI having the maximum correlation coefficient as the independent variable and crop yield as the dependent variable. The ability and robustness of the generated regression models to predict grain yield were tested by removing one year at a time and deriving new regression models, which were then used to predict the yield for the missing year. The results proved that MODIS-NDVI values can be used very effectively to predict crop yield. The authors conclude that accurate grain yield forecasts using the proposed regression models can be made one to two months before the harvest.

Chimnarong et al. (2012) discussed the use of remote sensing data as a reliable and efficient means of gathering the information required to map crop type acreage and condition. The study area comprised four provinces in northeastern Thailand: Khon Kaen, Chaiyaphum, Nong Bua Lam Phu and Mahasarakham. Landsat-5 TM digital data (December 2011) were evaluated for the potential utility of remote-sensing-derived NDVI for sugarcane production estimation. The sum of NDVI over individual sampling fields was correlated with the actual production (t/ha); NDVI, which describes the healthiness of the crop, is one of the factors of yield variability. The result showed a correlation of 0.75 between the summed NDVI image and sugarcane production. Other factors which influence variation are leaf color and cane age. The study suggested integrating remote sensing satellite data with other parameters, such as crop age and variety differences, to improve the accuracy.

Mishra et al. (2013) used a model based on energy balance, the Atmosphere Land Exchange Inverse (ALEXI) model, to deduce root zone soil moisture for North Alabama, USA. The obtained soil moisture estimates were then used in a crop simulation model, the Decision Support System for Agrotechnology Transfer (DSSAT). The study area contains a mixture of irrigated and rainfed cornfields. The results indicate that the model, forced with the ALEXI moisture estimates, generated yield simulations that compared favorably with observed yields. The results showed that the ALEXI model is able to detect the soil moisture signal from the mixed rainfed and irrigated corn fields, and that the signal was of adequate strength to produce good simulations of recorded yields over a 10-year period.

Wang et al. (2014) proposed a model to determine the optimal spectral index and the best time for predicting grain yield and grain protein content in wheat by fusion of multi-temporal and multi-sensor remote sensing data. Four field experiments were carried out at different locations, with different cultivars and nitrogen rates, in two growing seasons of winter wheat. The results illustrated that the NDVI estimated by fusion exhibits high consistency with the SPOT-5 NDVI, which confirmed the usefulness of the related algorithm. The use of RVI at the initial filling stage gave enhanced accuracy in wheat yield prediction. In addition, the spectral index accumulated from jointing to the initial filling stage gave higher prediction accuracy for protein content and grain yield than the spectral index at a single period. These results help provide a technical method for large-scale prediction of grain yield and protein content in wheat with remote sensing. This prediction model based on multi-temporal remote sensing data is suitable for largely clear-sky conditions during the main growth period of winter wheat.

Here it can be concluded that RS techniques have been extensively used in research for yield forecasting but have played a small role in understanding the causes of spatial yield variability. It has also been argued that RS might not be suitable in developing countries because of their stratified agricultural systems and very small farm sizes; this problem is hard to overcome in the near future because of the inability of RS to estimate yield in mixed agriculture. Nevertheless, the increased availability of high-spatial-resolution RS at a reasonable cost makes this technique a potentially interesting alternative for yield forecasting.

4. CONCLUSIONS
Based on the literature review, the advances in international research on crop monitoring and crop yield estimation can be separated into the following stages (MacDonald and Hall, 1980; Sun et al., 1996):

(1) Before the 1940s, qualitative analysis through comparison of meteorological conditions and crop yield was put forward.

(2) During 1950-70, statistics developed very quickly, and regression models between crop yield and weather conditions were used very effectively. With the advent of aerial photography and the rapid expansion and application of computers, researchers and application scientists put forward many crop simulation models.

(3) During 1970-90, with the launch of satellites, researchers began to use remote sensing techniques for the estimation of crop yield at a global scale, which lifted the yield estimation models to a higher level.

(4) During 1990-2000, researchers focused more on the combination of high-resolution satellite images, vegetation indices and statistical procedures to estimate global crop yield.

(5) The current stage of technological development involves the amalgamation of remote sensing, GIS and GPS (Rao and Rao, 1987; Tennakoon et al., 1992). Owing to the theoretical and scientific achievements in yield estimation using remote sensing in the present decade, researchers and application scientists frequently use multi-date high-resolution satellite and meteorological data with the support of GIS to estimate yield, and also employ coarse-resolution data as a sampling tool to improve precision.

Based on the literature review, it is concluded that crop yield and growth monitoring are both affected by many composite factors, such as natural disasters that occur suddenly and are beyond man's control, as well as habitat factors that can be controlled by humans. Although it is possible to acquire reliable and timely information about earth resources by means of remote sensing, it is not adequate for monitoring growth and estimating the yield of crops in the absence of other parameters (Sun et al. 1996). Thus, in addition to remote sensing, the use of many other techniques such as ground observations, surveys, GIS and soil analysis is highly desirable. Based on the intensive studies in the past, it is evident that research using geoinformatics to obtain agricultural statistics can be carried out in three phases. The overall flow diagram for the crop yield model using remote sensing is shown in Fig 1.

Phase 1 : Capture remote sensing data and field survey data
Phase 2 : Pre-processing and analysis of the collected data
Phase 3 : Design of the proposed model and obtaining of the results
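The three phases above can be sketched as a minimal pipeline; the function names and the toy yield/NDVI-ratio "model" are placeholders of ours, standing in for the real capture, pre-processing and modelling steps.

```python
# A minimal skeleton of the three-phase workflow listed above. The
# function names and the toy "model" (a mean yield/NDVI ratio) are
# illustrative placeholders, not the paper's actual procedures.

def phase1_capture():
    """Phase 1: capture remote sensing data and field survey data."""
    satellite = [0.62, 0.58, 0.65]   # e.g. per-plot seasonal NDVI
    survey = [3.0, 2.7, 3.2]         # e.g. crop-cutting yields (t/ha)
    return satellite, survey

def phase2_preprocess(values):
    """Phase 2: pre-process the collected data (here: clamp to the
    valid NDVI range as a stand-in for cleaning and smoothing)."""
    return [max(-1.0, min(1.0, v)) for v in values]

def phase3_model(ndvi_values, yields):
    """Phase 3: design the model and obtain results (here: scale each
    NDVI value by the overall yield/NDVI ratio)."""
    ratio = sum(yields) / sum(ndvi_values)
    return [ratio * v for v in ndvi_values]

satellite, survey = phase1_capture()
clean = phase2_preprocess(satellite)
estimates = phase3_model(clean, survey)
```

In practice each phase expands considerably: Phase 2 covers atmospheric, geometric and radiometric correction plus filtering, and Phase 3 covers the regression or process-based models reviewed above.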
Fig 1: Overall flow diagram for the crop yield model using remote sensing
5. ACKNOWLEDGMENTS
One of the authors, Sandeep Kumar Singla, is grateful to the Roorkee College of Engineering, Roorkee, and IIT Roorkee for their support in accomplishing this research work.

REFERENCES
[1] Ahmad, T., Singh, R. and Rai, A., (2003). Development of GIS based technique for identification of potential agro-forestry area. Project Report, IASRI, New Delhi.

[2] Anup, K.P., Chai, L., Ramesh, P.S. and Kafatos, M., (2005). Crop yield estimation model for Iowa using remote sensing and surface parameters. International Journal of Applied Earth Observation and Geoinformation, 8(1), 26-33.

[3] Das, S.K., (2004). Application of multiple frame sampling technique for crop surveys using remote sensing satellite data. Ph.D. thesis submitted to P.G. School, IARI, New Delhi.

[4] Goyal, R.C., (1990). Use of remote sensing in planning of agricultural surveys. Project Report, IASRI, New Delhi.

[5] Hayes, M.J. and Decker, W.L., (1996). Using NOAA AVHRR data to estimate maize production in the United States Corn Belt. International Journal of Remote Sensing, 17, 3189-3200.

[6] Ibrahim, A.E.I., (1992). Use of remote sensing data in a Markov chain model for crop yield forecasting. Project Report, IASRI, New Delhi.

[7] Gupta, N.K., (2002). Applications of spatial models in estimation of wheat production in Rohtak district of Haryana. Unpublished M.Sc. thesis submitted to P.G. School, IARI, New Delhi.

[8] Singh, R. and Goyal, R.C., (1993). Use of remote sensing technology in crop yield estimation surveys. Project Report, IASRI, New Delhi.

[9] Singh, R., Goyal, R.C., Pandey, L.M. and Shah, S.K., (2000). Use of remote sensing technology in crop yield estimation survey-II. Project Report, IASRI, New Delhi.

[10] Singh, R., Semwal, D.P., Rai, A. and Chhikara, R.S., (2002). Small area estimation of crop yield using remote sensing satellite data. International Journal of Remote Sensing, 23(1), 49-56.

[11] Chimnarong, V., Rethinam, S., Seechan, M. and Pliansinchai, U., (2012). In the Proceedings of the 33rd Asian Conference on Remote Sensing, Thailand.


[12] Rudorff, B.F.T. and Batista, G.T., (1990). Yield estimation of sugarcane based on agrometeorological-spectral models. Remote Sensing of Environment, 33, 183-192.
[13] Doorenbos, J. and Kassam, A.H., (1979). Yield Response to Water. FAO Irrigation and Drainage Paper 33, United Nations, Rome, p. 193.
[14] Reynolds, C.A., Yitayew, M., Slack, D.C., Hutchinson, C.F., Huete, A. and Petersen, M.S., (2000). Estimating crop yields and production by integrating the FAO Crop Specific Water Balance model with real-time satellite data and ground-based ancillary data. International Journal of Remote Sensing, 21(18), 3487-3508.
[15] Wang, L., Tian, Y., Yao, X., Zhu, Y. and Cao, W., (2014). Predicting grain yield and protein content in wheat by fusing multi-sensor and multi-temporal remote-sensing images. Field Crops Research, 164, 178-188.
[16] Mishra, V., Cruise, J.F., Mecikalski, J.R., Hain, C.R. and Anderson, M.C., (2013). A remote-sensing driven tool for estimating crop stress and yields. Remote Sensing, 5(7), 3331-3356.
[17] Mkhabela, M.S., Bullock, P., Raj, S., Wang, S. and Yang, Y., (2011). Crop yield forecasting on the Canadian Prairies using MODIS NDVI data. Agricultural and Forest Meteorology, 151(3), 385-393.
[18] Hu, S. and Mo, X., (2011). Interpreting spatial heterogeneity of crop yield with a process model and remote sensing. Ecological Modelling, 222(14), 2530-2541.
[19] Tucker, C.J., Holben, B.N., Elgin, J.H. and McMurtrey, J.E., (1980). Relationship of spectral data to grain yield variation. Photogrammetric Engineering and Remote Sensing, 46, 657-666.
[20] Ren, J., Chen, Z., Zhou, Q. and Tang, H., (2008). Regional yield estimation for winter wheat with MODIS-NDVI data in Shandong, China. International Journal of Applied Earth Observation and Geoinformation, 10(4), 403-413.
[21] Hamar, D., Ferencz, C., Lichtenberger, J., Tarcsai, G. and Ferencz-Árkos, I., (1996). Yield estimation for corn and wheat in the Hungarian Great Plain using Landsat MSS data. International Journal of Remote Sensing, 17(9), 1689-1699.
[22] Benedetti, R. and Rossini, P., (1993). On the use of NDVI profiles as a tool for agricultural statistics: the case study of wheat yield estimate and forecast in Emilia Romagna. Remote Sensing of Environment, 45(3), 311-326.
[23] Ferencz, Cs., Bognar, P., Lichtenberger, J., Hamar, D., Tarcsai, Gy., Timar, G., Molnar, G., Pasztor, Sz., Steinbach, P., Szekely, B., Ferencz, O. and Ferencz-Arkos, I., (2004). Crop yield estimation by satellite remote sensing. International Journal of Remote Sensing, 25(20), 4113-4149.
[24] Das, D., Mishra, K. and Kalra, N., (1993). Assessing growth and yield of wheat using remotely-sensed canopy temperature and spectral indices. International Journal of Remote Sensing, 14(17), 3081-3092.
[25] Doraiswamy, P.C., Moulin, S., Cook, P.W. and Stern, A., (2003). Crop yield assessment from remote sensing. Photogrammetric Engineering and Remote Sensing, 69(6), 665-674.
[26] Rouse, J.W., Haas, R.H., Schell, J.A. and Deering, D.W., (1974). Monitoring vegetation systems in the Great Plains with ERTS. Third NASA ERTS Symposium, NASA SP-351, United States, pp. 309-317.
[27] Goodman, M.S., (1959). A technique for the identification of farm crops on aerial photographs. Photogrammetric Engineering, 28, 984-990.
[28] Goodman, M.S., (1964). Criteria for the identification of types of farming on aerial photographs. Photogrammetric Engineering, 30, 131-137.
[29] Houseman, E.E. and Huddleston, H.F., (1966). Forecasting and estimating crop yields from plant measurements. Monthly Bulletin of Agricultural Economics and Statistics, 15(10).
[30] Anson, A., (1966). Color photography comparison. Photogrammetric Engineering, 32, 286-297.
[31] Small, R.P., (1967). Research report on tart cherry objective yield surveys. Statistical Reporting Service, USDA.
[32] Johnson, C.W., Bowden, L.W. and Pease, R.W., (1969). Studies in remote sensing of Southern California and related environment. University of California, Riverside, California, Status Report III, Technical Report V.
[33] Roberts, E.H. and Gialdini, M.J., (1970). Multispectral analysis for crop identification. A report to the USDA Statistical Reporting Service by the Forestry Remote Sensing Lab., University of California.
[34] Coleman, V.B., Johnson, C.W. and Lewis, L.N., (1974). Remote sensing in control of pink bollworm in cotton. California Agriculture, 28(9), 10-12.
[35] Landgrebe, D.A. and Staff, (1967). Automatic identification and classification of wheat by remote sensing. Purdue Agric. Experiment Station Res. Prog. Report, 279.
[36] Kristof, S.J., (1969). Preliminary multispectral studies of soil. Journal of Soil and Water Conservation, 26, 15-18.
[37] Hoffer, R.M. and Goodrick, F.E., (1971). Geographic considerations in automatic cover type identification. In Proceedings of the Indiana Academy of Science, 80, 230-244.
[38] Morain, S.A., (1970). Radar sensing in agriculture: an overview. Condensed from CRES Technical Report, 177-214.
[39] Morain, S.A., Wood, C. and Conte, D., (1970). Earth Observation Survey Program 90-day mission report, NASA/MSC mission 102, site 87. CRES Technical Memo 169-4, Centre for Research, University of Kansas, Lawrence, Kansas, 16.
[40] Schwarz, D.E. and Caspall, F., (1968). The use of radar in discrimination and identification of agricultural land use. In Proceedings of the 5th Symposium on Remote Sensing of Environment, University of Michigan, 233-247.
[41] Lennington, R.K. and Sorensen, C.T., (1984). A mixture model approach for estimating crop areas from Landsat data. Remote Sensing of Environment, 14, 197-206.
[42] McCloy, K.R., Smith, F.R. and Robinson, M.R., (1987). Monitoring rice areas using Landsat MSS data. International Journal of Remote Sensing, 8(5), 741-749.
[43] Gallego, F.J., Delince, J. and Rueda, C., (1993). Crop area estimates through remote sensing: stability of the regression correction. International Journal of Remote Sensing, 14(18), 3433-3445.
[44] Nordberg, M.L. and Evertson, J., (2003). Vegetation index differencing and linear regression for change detection in a Swedish mountain range using Landsat TM and ETM+ imagery. Land Degradation and Development, 16, 139-149.
[45] Langley, S.K., Cheshire, H.M. and Humes, K.S., (2001). A comparison of single date and multitemporal satellite image classifications in a semi-arid grassland. Journal of Arid Environments, 49(2), 401-411.
[46] Jung, M., Churkina, G., Henkel, K., Herold, M. and Churkina, G., (2006). Exploiting synergies of global land cover products for carbon cycle modeling. Remote Sensing of Environment, 101(4), 534-553.
[47] Sun, Jiulin, Chen, S., Qian, H. and Zhang, D., (1996). Series Monographs of the Study on Dynamic Monitoring and Yield Estimation of Crops with Remote Sensing in China. Science and Technology Press of China, Beijing, 238 p.
[48] MacDonald, R.B. and Hall, F.G., (1980). Global crop forecasting. Science, 208, 670-679.
[49] Rao, P.P.N. and Rao, V.R., (1987). Rice crop identification and area estimation using remotely sensed data from Indian cropping patterns. International Journal of Remote Sensing, 8(4), 639-650.
[50] Tennakoon, S.B., Murty, V.V.N. and Eiumnoh, A., (1992). Estimation of cropped area and grain yield of rice using remote sensing data. International Journal of Remote Sensing, 13(3), 427-439.

Basic & Applied Sciences

A Study on PRP’s (Protein Rich Pulses) by Irradiating Co-60 Gamma Ray Photons

Manoj Kumar Gupta, Department of Applied Sciences, BGIET, Sangrur, mkgupta.sliet@gmail.com
Gurinderjeet Singh, Department of Physics, SLIET, Longowal, Sangrur, gjs.sliet@gmail.com
Shilpa Rani, Department of Physics, SLIET, Longowal, Sangrur, shilpamittal28991@gmail.com
Amrit Singh, Department of Physics, SLIET, Longowal, Sangrur, amritsliet@gmail.com
ABSTRACT
The linear attenuation coefficient of some commonly available protein rich pulses (Cicer arietinum, Vigna radiata, Phaseolus vulgaris, Vigna mungo and Cajanus cajan) has been measured using the direct transmission method. It has been found that the pulse Vigna radiata (VR) has a higher linear attenuation coefficient than the other pulses.

KEYWORDS
Protein rich pulses, linear attenuation coefficients, direct transmission method.

1. INTRODUCTION
The attenuation coefficient is a measure of the number of primary photons which actually interacted while traversing a given amount of absorber. The wide range of applications of the linear attenuation coefficient in various fields includes optical fibres, radiation dosimetry, radiation biophysics, nuclear medicine, nuclear diagnostics, oceanography, etc. The gamma ray transmission method applies the Beer-Lambert law to the measurement of the linear attenuation coefficient of the samples.

Theoretical and experimental measurements of linear/mass attenuation coefficients for elements [1-4], compounds [5,6], alloys [7,8], biological materials [9] and building materials [10] have been carried out by various researchers. The theoretical values of mass attenuation coefficients and interaction cross-sections, form factors and scattering for elements/compounds of dosimetric and radiological interest, from Z=1 to 92 at various energies, have been tabulated by Chantler [11] and Hubbell and Seltzer [12]. In the present communication, an attempt has been made to determine the linear attenuation coefficient of protein rich pulses (chick peas, Phaseolus vulgaris, Vigna radiata, Cicer arietinum, Vigna mungo and Cajanus cajan) experimentally.

2. EXPERIMENTAL ARRANGEMENT
Self-supporting, regularly shaped samples of commonly available pulses have been prepared using the technique given by Habbani et al. [13] and Tirasoglu [14]. The list of pulses, their chemical composition and molecular formulae are shown in Table 1. The linear attenuation coefficient has been measured using linear transmission geometry [15] with incident 1332 keV gamma ray photons from Co-60.

The geometrical setup for the experimental measurements using direct transmission is shown in Figure 1. An Ortec 2x2 inch NaI scintillation detector coupled to an EG&G Ortec multichannel analyzer has been used in the present measurement. Three collimators of aperture 2 mm, 3 mm and 3 mm are used as the source, sample and detector collimators respectively. From the experimental arrangement, I and I0, the intensities of the transmitted gamma ray photons with and without the sample, have been obtained from the recorded spectrum. Using I, I0 and the mass per unit thickness in the Beer-Lambert law, the absorption coefficient of the samples has been calculated. The intensity of incident photons observed at the detector, in spectrum form for the sample CP, is shown in Figure 2. Each run for the observed spectrum was taken for 3600 s and repeated three times under similar conditions to reduce statistical error.

3. RESULT AND DISCUSSION
In this communication, we have measured the values of the linear attenuation coefficient of some easily available protein rich pulses (chick peas, Phaseolus vulgaris, Vigna radiata, Cicer arietinum, Vigna mungo and Cajanus cajan) using linear transmission geometry. The molecular formulae of all PRP’s have been obtained from Wikipedia. Experimental values have been compared with values calculated theoretically from the Chantler et al. data tables and are tabulated in Table 2. A variation of 1.75% has been found between the experimental results and the theoretical values, which shows good agreement. The experimental error in the present measurement is of the order of 0.15%.

Figure 1: Geometrical setup for experimental measurements
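The two numerical steps described above, the Beer-Lambert extraction of the attenuation coefficient and the comparison with theory, can be sketched as follows. The intensity counts and thickness below are illustrative placeholders (the measured spectra are not reproduced here); the coefficient pairs are the values quoted in Table 2:

```python
import math

def linear_attenuation(i0, i, thickness_cm):
    """Beer-Lambert law: I = I0 * exp(-mu * x), hence mu = ln(I0/I) / x."""
    return math.log(i0 / i) / thickness_cm

# Illustrative counts (not the measured spectra): without and with the sample
mu = linear_attenuation(i0=125_000, i=58_000, thickness_cm=0.25)
print(f"mu = {mu:.3f} per cm")

# Theoretical vs experimental coefficients quoted from Table 2
table2 = {
    "CP": (3.06, 3.01), "PV": (2.17, 2.13), "VR": (3.26, 3.20),
    "CA": (2.67, 2.62), "VM": (2.53, 2.49), "CC": (2.89, 2.84),
}
deviations = [100.0 * (t - e) / t for t, e in table2.values()]
mean_dev = sum(deviations) / len(deviations)
print(f"mean deviation from theory = {mean_dev:.2f}%")  # about 1.75%
```

With real data, i0 and i would be the net counts of the 1332 keV photopeak recorded without and with the sample in the beam.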


Table 1. Natural parameters of PRP’s (protein rich pulses)

S.N  PRP                      E* (kJ)  C# (g)  P$ (g)  Mol. formula
1    Chick Peas (CP)          1,619    57      22      C17H25N3O5S
2    Phaseolus Vulgaris (PV)  1,393    60      24      C20H18O4
3    Vigna Radiata (VR)       1,452    62.6    3.86    C21H20O10
4    Cicer Arietinum (CA)     686      27.4    8.86    C10H13N5O
5    Vigna Mungo (VM)         1,603    50      24      C20H20O5
6    Cajanus Cajan (CC)       569      23      7.2     C16H12O6
*E - Energy (kJ), #C - Carbohydrates (g), $P - Protein (g)

Table 2. Experimentally measured results for linear attenuation coefficients

S.No.  Sample code of PRP  Linear attenuation coefficient (mu)
                           Theoretical   Experimental
1      CP                  3.06          3.01
2      PV                  2.17          2.13
3      VR                  3.26          3.20
4      CA                  2.67          2.62
5      VM                  2.53          2.49
6      CC                  2.89          2.84

CONCLUSION
From the experimentally obtained results, we conclude that the value of the linear attenuation coefficient varies with the sample's natural parameters, such as energy, protein content and carbohydrate content. Also, Vigna radiata (VR) has a higher attenuation coefficient than the other pulses. Thus, a pulse with higher energy, carbohydrate and protein values also has a higher linear attenuation coefficient.

So we can say that there may be a possibility of replacing some metallic foils with such types of materials in research laboratories. In general, the use of these types of pulses provides better protection from diseases in the human body.

Figure 2: X-ray spectrum for Co-60

REFERENCES
[1] Colgate, S.A., Gamma-ray absorption measurements. Phys. Rev. 87, 592 (1952).
[2] Connar, A.L., Atwater, H.F., Plassmann, E.H., McCrary, J.H., Gamma-ray attenuation-coefficient measurements. Phys. Rev. A 1, (1970) 539-544.
[3] Davison, C.M., Evans, R.D., Measurements of gamma-ray attenuation coefficients. Phys. Rev. 81, (1951) 404-411.
[4] Teli, M.T., Nathuram, R., Mahajan, C.S., Single-experiment simultaneous measurement of elemental mass attenuation coefficients of hydrogen, carbon and oxygen for 0.123-1.33 MeV gamma rays. Radiation Measurements, 32, (2000) 329-333.
[5] Gowda, S., Krishnaveni, S., Yashoda, T., Umesh, T.K., Gowda, R., Photon mass attenuation coefficients, effective atomic numbers and electron densities of some thermoluminescent dosimetric compounds. Pramana Journal of Physics, 63(3), (2004) 529-541.
[6] Turgut, U., Simsek, O., Buyukkasap, E., Measurement of mass attenuation coefficients in some Cr, Co and Fe compounds around the absorption edge and the validity of the mixture rule. Pramana Journal of Physics, 69(2), (2007) 199-207.
[7] Han, I., Demir, L., Studies on effective atomic numbers, electron densities and mass attenuation coefficients in Au alloys. Journal of X-ray Science and Technology, 18, (2010) 39-46.
[8] Seven, S., Karahan, I.H., Bakkaloglu, O.F., The measurement of total mass attenuation coefficients of CoCuNi alloys. Journal of Quantitative Spectroscopy and Radiative Transfer, 83(2), (2004) 237-242.
[9] Singh, K., Singh, C., Singh, P.S., Mudhar, G.S., Effect of weight fraction of different constituent elements on the total mass attenuation coefficients of


biological materials. Pramana Journal of Physics, 59(1), (2002) 151-154.
[10] Mavi, B., Akkurt, I., Natural radioactivity and radiation hazards in some building materials used in Isparta, Turkey. Radiation Physics and Chemistry, 79, (2010) 933.
[11] Chantler, C.T., Olsen, K., Dragoset, R.A., Chang, J., Kishore, A.R., Kotochigova, S.A., Zucker, D.S., X-ray form factor, attenuation and scattering tables. National Institute of Standards and Technology. Available at: http://physics.nist.gov/ffast (2005).
[12] Hubbell, J.H., Seltzer, S.M., Tables of X-ray mass attenuation coefficients and mass energy-absorption coefficients from 1 keV to 20 MeV for elements Z = 1 to 92 and 48 additional substances of dosimetric interest. NISTIR 5632 (1995).
[13] Habbani, F., Eltahir, E., Ibrahim, M.A.S., Determination of elemental composition of air particulates and soil in Khartoum area. Tanz. J. Sci. 33, (2007) 57-66.
[14] Tirasoglu, E., Average fluorescence yield of M4,5 subshells for thorium and uranium. Eur. Phys. J. D 37, (2006) 177-180.
[15] Gupta, M.K., Sidhu, B.S., Mann, K.S., Dhaliwal, A.S., Kahlon, K.S., Advanced Two Media (ATM) method for measurement of linear attenuation coefficients. Annals of Nuclear Energy, 56, (2013) 251-254.


Drifting effect of electrons in multi-ion plasmas with nonextensive distribution of electrons

Sheenu Juneja, Parveen Bala
Department of Mathematics, Statistics & Physics,
Punjab Agricultural University, Ludhiana 141004
pravi2506@gmail.com

ABSTRACT
In the present research work, the effect of the drift velocity of electrons has been studied in plasma systems consisting of cold positive and negative ions and an electron beam. The electrons are assumed to obey a nonextensive velocity distribution. The standard reductive perturbation method is used to derive the dispersion relation, which comes out to be a polynomial of degree four in the phase velocity and corresponds to four ion-acoustic modes. The expression for the critical velocity is found to be a function of various parameters including the nonextensive parameter q. The nonextensivity and the electron beam parameters play a crucial role in the characterization of solitons. We have taken (Ar+, F-), (H+, H-) and (H+, O2-) plasma systems for our study.

General Terms
Nonlinear wave structures

Keywords
Multi-ion plasma, nonextensive distribution, reductive perturbation method.

1. INTRODUCTION
A soliton is a quantum of energy or quasiparticle that can propagate as a travelling wave in nonlinear systems and is neither preceded nor followed by another such disturbance. It does not obey the superposition principle and does not dissipate. A solitary wave exists when the effects of nonlinearity and dispersion are balanced [1]. Soliton waves exist in the sky as density waves in spiral galaxies, in plasmas, etc. An electron beam component is frequently observed in the regions of space where ion-acoustic waves exist. The observation of solitary waves in the auroral zone suggests that there are two classes of solitary waves: the first kind is associated with an electron beam and the other with an ion beam [2]. The electron beam plasma system also has considerable importance in the areas of magnetospheric and solar physics. High-speed electrons have a considerable influence on the excitation of various kinds of nonlinear waves in interplanetary space and the Earth's magnetosphere [3]. Yadav et al. [4] showed that above a critical beam velocity four ion-acoustic branches appear in an electron beam plasma system. Bala et al. [5] showed that above a critical beam velocity six ion-acoustic branches appear in a warm multicomponent plasma consisting of warm positive and negative ions along with nonthermal electrons and an electron beam. The present investigation, however, shows the existence of four modes above a critical electron beam velocity.

The (H+, H-), (H+, O2-) and (Ar+, F-) plasma compositions occur in the D-region of the ionosphere, where negative ions are found [6]. The investigation of electron beam plasmas has also received considerable attention because of the influence of such beams on the excitation of various kinds of nonlinear waves in interplanetary space and the Earth's magnetosphere [3]. In the recent past, this motivated researchers to study multicomponent plasmas with electrons in a variety of systems [4,5,7-11].

Particle distributions, which offer a considerable increase in the richness and variety of the wave motion that can exist in plasmas, are of two types: Maxwellian and non-Maxwellian. The Maxwellian distribution, also known as the extensive distribution, is valid universally for systems in equilibrium. In systems with long-range interactions, however, such as plasmas, non-equilibrium stationary states, i.e. nonextensive distributions, exist. Since the last decade there has been an increasing focus on a new statistical approach, the Tsallis distribution [12]. It has also been found that nonextensive effects on ion-acoustic waves are not apparent when the electron temperature is much greater than the ion temperature, but they are salient when the electron temperature is not much greater than the ion temperature. Compared with the electrons, the ions play a dominant role in the nonextensive effects [13]. In the limit q=1 the Tsallis distribution leads to Boltzmann-Gibbs statistics (BGS) and the Maxwell distribution function is obtained, where q is the measure of nonextensivity. For q<1, high-energy states are more probable than in the extensive case. On the other hand, for q>1, high-energy states are less probable than in the extensive case and there is a cut-off beyond which no state exists. The Tsallis q-distribution has been used with some success in a number of research works in plasma physics [14-22]. Regarding the organization of the paper, the basic equations related to our plasma model are given in Section 2, where the Korteweg-de Vries (KdV) equation is derived using the nonextensive distribution of electrons and the standard reductive perturbation method. Section 3 is devoted to the discussion of the numerical results, and finally the conclusions are made.

2. FORMULATION OF PROBLEM
We have considered a collisionless, unmagnetized, multi-ion plasma model containing cold positive and negative ions and an electron beam. Further, the electrons are assumed to obey the nonextensive distribution. The number density of the electron fluid associated with the nonextensivity of the electrons is given by

n_e = [1 + (q - 1)φ]^((q+1)/2(q-1))    (1)

Here q is the nonextensive parameter. The nonlinear behaviour of ion-acoustic waves in the present multispecies plasma model is governed by the following set of normalized continuity, momentum and Poisson equations:


n j  (n j v j ) (2)
We use (7) in equation (2) to (4) and compare various
 0 physical quantities in same order. The resulting equations are
t x
further used to eliminate other dependent parameters in
v j v j 1  (3) term . Then after a long algebraic but straightforward
 vj 
t x Q j x manipulations, we arrive at the following nonlinear equation
known as Korteweg-de Vries (KdV) equation as:
 2 n   (1)  (1)  3 (1)
 ne   b nb  1  z n2 (4)  A (1) B 0 (9)
x 2 B B    3
Where, j=1,2,b, 1 stands for positive ions, 2 stands for Where A=2Q/P is nonlinearity coefficient and B=2/P is
negative ions, b stands for electron beam. Here, Q1=-, dispersion coefficient. Also coefficient of
Q2=/z, Qb=eZ1 and
nb( 0) n(0) 4 Z12     z2  4 b (v0   ) dF (10)
B  (1   z ) /(1   b ),  b 
m
,   2( 0) ,  2 , P   
ne( 0)
n1 m1 B  (Z12 ) 2  (eZ1 (v0   ) 2 ) 2 d
me Z 1   z2 /    b / e Z1  3 2 (1   z3 )   b 3e (v0   ) 2
e  , z  2   Q     (11)
m1 Z1 B  B( 2 )3  (e (v0   ) )
2 3

Here are the densities and fluid (3  q )(q  1)



velocities of positive and negative ion species ad electron 4
beam respectively. are the equilibrium
densities of two ion components and beam respectively. In 3. DISCUSSION
equations (2) to (4), velocities  (potential), time The critical beam velocity is found numerically from
(t) and space coordinates (x) have been normalized with equations (8) and (10) with the condition that for
respect to the ion- acoustic speed in the mixture, F(λ)=1 [4]. The analytical expression for is given by:
3/ 2
, thermal potential , inverse of ion plasma   B  1/ 3
    z2 
1/ 3
 (12)
vcr 
2  b     
(1  q)B   e Z1     
frequency is the mixture , Debye length  

respectively. Ion densities are It is clear that critical beam velocity depends on electron beam
normalized with their corresponding equilibrium values, concentration negative ion concentration α, mass ratios η
whereas electron densities are normalized by . and respectively. It may be noted that in the Maxwellian
limit (q our impression for becomes same as that of
To study small but finite amplitude ion acoustic solitary Bala et al [5] in the limit β=0, where nonthermal electrons
waves in our multispecies plasma model, we construct here a were reported.
weakly nonlinear theory of ion-acoustic waves which leads to
scaling of the independent variables through the stretched co-
ordinates ξ and τ:

(5)
(6)

Where ε is small parameter measuring the weakness of the


dispersion and is the phase velocity of wave. Now to strike
balance between nonlinear and dispersive terms, we use
reductive perturbation technique where we expand all
dependent quantities in equations (2) to (4) around the
equilibrium values in power of ε in the following form:

nj  1  n (jr ) 


     (r )  (7)
 v j    k   r 1   v j 
 r
Fig 1: For the range q<0, 3D plot of critical velocity as a
   0  ( r )  function of beam density b and nonextensive parameter
     
q with =0.1, =0.476, e=1/1836 and z=Z1=Z2=1
Here k=0 for positive and negative ions and k= v0 for
electron beam, is the initial electron beam velocity. Using For plasma system, the 3D variation of critical
equations (5), (6) and (7) into Poisson’s equation (4), to the velocity as a function of nonextensive parameters q and
lowest order of ε, we get the following dispersion relation electron beam density in figures 1 (for q<0), 2 (for 0<q<1)
and 3 (for q>1) respectively. The value of decreases with
2(   z2 ) 2 b nonextensivity q for all three ranges. Further is also found
  q  1  F ( ) (8) to decrease with the electron beam density . This behavior
 2 Z1 B (v0   ) 2 eZ1 is opposite to that observed by Yadev et al [4] and Bala et al
The dispersion relation (8) is a four degree polynomial in [5]. Hence all ranges of q predict the similar behaviour as is
thereby giving four modes propagating with different phase clear from figures 1,2,3 respectively. A similar kind of
velocity. It may be further mentioned that in the limit q→1, behavior has been observed for the plasma systems H+O2-
our expression for phase velocity reduces to that of Yadev et (=32) and Ar+F- (=0.476) (not shown here).
al [4] and Bala et al [5] for .
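Three pieces of the derivation above lend themselves to a quick numerical check: the q→1 limit of the electron density (1), the four real roots of the dispersion relation F(λ)=1 above the critical beam velocity, and the sech²-profile solitary wave admitted by a KdV equation of the form (9). The sketch below uses a hedged reconstruction of F(λ) from equation (8); all parameter values (alpha, mu_b, v0, and the KdV coefficients Ac, Bc, u) are illustrative assumptions, not the paper's data:

```python
import math

# (1) Nonextensive electron density; as q -> 1 it tends to exp(phi).
def ne_q(phi, q):
    return (1.0 + (q - 1.0) * phi) ** ((q + 1.0) / (2.0 * (q - 1.0)))

print(ne_q(0.2, 1.000001), math.exp(0.2))  # nearly equal

# (8) F(lambda): an ion term resonant near lam = 0 plus a beam term resonant
# near lam = v0; F = 1 is effectively a quartic in lam, with four real roots
# once the beam velocity v0 exceeds the critical value.
def F(lam, q=0.3, alpha=0.1, z=1.0, Z1=1.0, B=1.0,
      mu_b=0.001, eta_e=1.0 / 1836.0, v0=6.0):
    ion = 2.0 * (1.0 + alpha * z * z) / (lam * lam * Z1 * B)
    beam = 2.0 * mu_b / (eta_e * Z1 * (v0 - lam) ** 2)
    return (ion + beam) / (q + 1.0)

def bisect(f, a, b, tol=1e-10):
    """Simple bisection; assumes f(a) and f(b) differ in sign."""
    fa = f(a)
    assert fa * f(b) < 0, "bracket must straddle a root"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

g = lambda lam: F(lam) - 1.0
brackets = [(-1.5, -1.3), (1.3, 1.5), (4.0, 4.5), (7.0, 9.0)]
roots = [bisect(g, a, b) for a, b in brackets]
print("four modes:", [round(r, 3) for r in roots])

# (9) A KdV equation phi_t + Ac*phi*phi_x + Bc*phi_xxx = 0 admits the solitary
# wave phi = (3u/Ac) * sech^2(sqrt(u/(4*Bc)) * (xi - u*tau)); check the
# residual at one point with finite differences (Ac, Bc, u illustrative).
Ac, Bc, u = 1.5, 0.8, 0.2

def phi(xi, tau):
    s = 1.0 / math.cosh(math.sqrt(u / (4.0 * Bc)) * (xi - u * tau))
    return (3.0 * u / Ac) * s * s

def d1(f, x, h=1e-3):
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d3(f, x, h=1e-2):
    return (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2.0 * h**3)

xi0, tau0 = 0.7, 0.3
residual = (d1(lambda t: phi(xi0, t), tau0)
            + Ac * phi(xi0, tau0) * d1(lambda x: phi(x, tau0), xi0)
            + Bc * d3(lambda x: phi(x, tau0), xi0))
print(f"KdV residual: {residual:.2e}")  # ~0 up to finite-difference error
```

With these assumed values, one root is negative (the backward-propagating mode), two lie below the beam velocity v0 and the fourth lies above it, matching the mode structure described in the text.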


Fig 2: For the range 0<q<1, 3D plot of critical velocity as a function of beam density μ_b and nonextensive parameter q with α=0.1, η=0.476, η_e=1/1836 and z=Z_1=Z_2=1

Fig 3: For the range q>1, 3D plot of critical velocity as a function of beam density μ_b and nonextensive parameter q with α=0.1, η=0.476, η_e=1/1836 and z=Z_1=Z_2=1

In order to investigate the effect of the mass of the negative ions on the critical velocity, we have plotted v_cr vs μ_b for three different mass ratios, i.e. η=0.476, η=1 and η=32. The other parameters are taken as α=0.1 and q=1.3 (>1). The critical beam velocity is found to decrease with an increase in the mass of the negative ions.

Fig 4: Variation of critical velocity with μ_b for three different mass ratios, i.e. η=0.476 (solid curve), η=1 (dotted curve) and η=32 (dashed curve), with α=0.1, η_e=1/1836 and z=Z_1=Z_2=1

Since the velocity of the soliton is M = λ + u, where u is a very small constant velocity, it follows from the Taylor expansion that the soliton existence condition u dF/dλ < 0 is similar to F(M) < 1.

Fig 5: Plot of the function F(λ) as a function of λ for α=0.8, η=0.476 and q=0.3, for two values of v_0 (dotted and dashed, the dashed one with v_0 = 1.9); the solid line corresponds to F(λ) = 1

The condition F(M) < 1 also corresponds to the absence of convective instability for the linear wave [4]. Hence F(M) = F(λ) = 1 corresponds to the dispersion curve of the linear wave. In figure 5, a plot of the function F(λ) as a function of λ is given, where F(λ) = 1 corresponds to the dispersion relation. The plotted F(λ) intersects the line F(λ) = 1 at four places, thereby indicating the existence of four real roots that correspond to four linear modes. Among them, one mode corresponds to a wave moving in the negative direction. Two of them correspond to slow modes with phase velocities smaller than the drift velocity, and the fourth corresponds to a faster mode with λ > v_0. From figure 5, we see that for the fast mode dF/dλ is negative, and the soliton corresponding to this mode moves with supersonic speed. Of the two slow modes, one moves with subsonic speed and the other with supersonic speed, depending on the sign of dF/dλ. However, for the dashed curve only two real roots are possible; the other two may be complex.

4. CONCLUSION
In the present investigation, the effect of the drift velocity of electrons in a multi-ion plasma system has been presented. The electrons are taken as nonextensive. The dispersion relation and the KdV equation have been derived using the standard reductive perturbation method. The critical velocity of the electron beam is found to be a function of the nonextensivity q, the beam concentration μ_b, the negative ion concentration α and the mass ratios η and η_e respectively. It is found that above a critical velocity of the electron beam, four soliton branches appear, corresponding to four different modes. Of these, three move with supersonic velocity while one moves with subsonic speed, depending on the sign of dF/dλ. Further, the critical velocity decreases with the nonextensive parameter q and the electron beam concentration μ_b.

ACKNOWLEDGMENTS
P.B. is thankful to the UGC for financial support via F. No. 42-1065/2013.

REFERENCES
[1] Dodd, R.K., Eilbeck, J.C., Gibbon, J.D. and Morris, H.C. 1984 Solitons and Nonlinear Wave Equations. Academic Press, New York.


[2] Das, G.C. 1979 Ion-acoustic solitons and shock waves in multicomponent plasmas. Phys. Plasmas 21, 257-265.
[3] Nejoh, Y. 1996 Large amplitude ion-acoustic waves in a plasma with a relativistic electron beam. Plasma Phys. 56, 67-76.
[4] Yadav, L.L., Tiwari, R.S. and Sharma, S.R. 1994 Ion-acoustic compressive and rarefactive solitons in an electron beam plasma system. Phys. Plasmas 1, 559-566.
[5] Bala, P., Gill, T.S. and Kaur, H. 2010 Localized nonlinear electrostatic structures in a multispecies plasma. J. Phys. Conference Series 208, 012076.
[6] Swider, W. 1988 Ionospheric Modeling, edited by J.N. Korenkov, Birkhauser, Basel, 403.
[7] Moslem, W.M. 1998 Propagation of ion acoustic waves in a warm multicomponent plasma with an electron beam. J. Plasma Phys. 61, 177-189.
[8] Bethomier, M., Pottelette, R., Malingre, M. and Khotyaintsev, Y. 2000 Electron-acoustic solitons in an electron beam plasma system. Phys. Plasmas 7, 2987-2994.
[9] El-Taibany, W.F. and Moslem, W.M. 2005 Higher order nonlinearity of electron-acoustic solitary waves with vortex-like electron distribution and electron beam. Phys. Plasmas 12, 032307.
[10] Esfandyari, A.R., Kourakis, I. and Shukla, P.K. 2008 Ion acoustic waves in a plasma consisting of adiabatic warm ions, nonisothermal electrons and a weakly relativistic electron beam: linear and higher order nonlinear effects. Phys. Plasmas 15, 022303.
[11] Singh, S.V., Lakhina, G.S., Bharuthram, R. and Pillay, S.R. 2011 Electrostatic solitary structures in presence of non-thermal electrons and a warm electron beam on the auroral field lines. Phys. Plasmas 18, 122306.
[12] Tsallis, C. 1988 Possible generalization of Boltzmann-Gibbs entropy. J. Stat. Phys. 52, 479.
[13] Zhipeng, L., Liyan, L. and Jiulin, D. 2009 A nonextensive approach for the instability of current-driven ion-acoustic waves in space plasmas. Phys. Plasmas 16, 072111.
[14] Rossignoli, R. and Canosa, N. 1999 Non-additive entropies and quantum statistics. Phys. Lett. A 281, 148-153.
[15] Abe, S., Martinez, S., Pennini, F. and Plastino, A. 2001 Nonextensive thermodynamic relations. Phys. Lett. A 281, 126-130.
[16] Wada, T. 2002 On the thermodynamic stability of Tsallis entropy. Phys. Lett. A 297, 334-337.
[17] Reynolds, A.M. and Veneziani, M. 2004 Rotational dynamics of turbulence and Tsallis statistics. Phys. Lett. A 327, 9-14.
[18] Sattin, F. 2005 Non-extensive entropy from incomplete knowledge of Shannon entropy. Phys. Scripta 71, 443-446.
[19] Wu, J. and Che, H. 2007 Fluctuation in nonextensive reaction-diffusion systems. Phys. Scripta 75, 722-725.
[20] Tribeche, M., Djebarni, L. and Amour, R. 2010 Ion acoustic solitary waves in a plasma with a q-nonextensive electron velocity distribution. Phys. Plasmas 17, 04211.
[21] Bains, A.S., Tribeche, M. and Gill, T.S. 2011 Modulational instability of ion acoustic waves in a plasma with q-nonextensive electron velocity distribution. Phys. Plasmas 18, 022108.
[22] Akhtar, N., El-Taibany, W.F. and Mahmood, S. 2013 Electrostatic double layers in a warm negative ion plasma with nonextensive electrons. Phys. Lett. A 377, 1282-1289.


Second Language Learner: Threads of Communication Skills in English Language
Chhavi Kapoor
Asstt. Professor in English
Bhai Gurdas Inst. of Engg. & Tech., Sangrur
chhavikap1989@gmail.com

ABSTRACT

Time in which we are living is a changing time. The scenarios of the market, thinking, and adjustments are altering. The influence of change that occurs creates miscommunication. Nothing is left behind, apart from loose threads of understanding, communication and thinking. The process through which communication occurs is not understood by the people in general. The problem is in teaching and language acquisition by the second language learner. The basic instruction to the language learner gives the outcome of the learning acquired by the learner. The learner needs to be conscious in the process of learning. This paper describes the need of learning for the second language learner in the present scenario. Even the limitations of the teacher are exposed in the learning process. The approach to language acquisition tries to discover the possibility and achievability of effective communication for the second language learner with the advent of language introduction in the past.

Key words:
Second Language Learner, Teacher's role, Communication Skills, Efficiency, Language, Grammar.

1. INTRODUCTION

The process of communication appears to be simple for the first language user, as unconsciously they are aware of the language. But for the second language user it is a difficult task. English as a second language is used as a formal instruction for those whose native language is not English. Such language learners have to consciously participate in the process of learning as well as practice more and more to bring out the efficiency required of a language user. Here, it doesn't mean that the first user is good at communication; rather they even fail, like the second language learner, in conveying their ideas, thoughts, and endeavors in social interactions as well as on the professional front. Effective communication includes four skills: Listening, Speaking, Reading and Writing. The globalised countries have marked a pressing need for viable English programs in those countries whose native language is not English. This has resulted in an increase in the demands of the learners for learning these skills. As a result, there is valuable expansion in the production of coaching centres, special language classes, English in schools at primary and secondary level, communication skills as a course in the engineering classroom, public speaking classes, IELTS classes, the CELTA course, TOEFL classes and many others. Such different learning genres have evoked the people to acquire the language and pay hefty amounts to these centers or so-called institutes.

2. STATUS OF THE LANGUAGE

Such is the status of this foreign language that with the passing years the worth of the language automatically marks its flag in the enlargement of a country, a community and particularly of an individual. The language of "Them" became "Our" language in a short span. Now the question here is: does "The English" still carry the status of "The English"? Surely English is the language of opportunities and success for life in India and other developing countries. As mentioned in "…A Critical Evaluation of ELT in India", noted by the Education Commission of India in 1966, "when a degree holder from India goes to any of the developed countries he is not treated at par with a degree holder of that country……". This makes us realize that although we have inherited it from the British, we still don't acquire this language as much as we have carried it in all these passing years. The worth of English came with Lord Macaulay in 1835, who was the first to accentuate English language teaching in India with his "Minutes of Education"; he entrenched the roots of English in India way back in 1835. Later English was highlighted by Sir Charles Wood, who in 1854 stated that English is a medium for admission to any established university for higher learning. With the spread of the language, the writers and the speakers accepted it. English became "The Status Symbol Tag" for many. What a value a foreign language gained! The status which Hindi, Tamil, French, Chinese and others didn't get in a few years, English impeccably got. The extensive practice and use of English for communication by the second language learner was supported by the setting up of structural approaches for teaching English in different parts of the country.

3. MINDSET OF USERS

The analysis of the whole scenario has led me to evaluate the status of this language in my country. English is not used or produced as done by The English. For the Indians it is a way for convenient communication. Now if people think that they are good at it, sadly they have a wrong perspective. Although the rules are quite clear to the second language learner, the efficiency in using it needs confidence and flair, which comes with language acquisition in the learning process of the second language learner. While having a debate with some relative over the importance of communication in English particularly, I heard my relative saying these words: "……he speaks English, he eats in English, he walks in English, he is now English…" Earlier also I came across such lines while reading about the dilemma of English language learners. Again, hearing such a statement from my relative about the one who returned from the foreign land after 2 years of his stay made me realize that the notions about the language have changed in the Indians, but the way they express themselves shows their weak communication skills. The statement she made needed to be concluded in just a single line: "…He is good at English…", but such a bold expression of English with Hindi, along with an emotional touch, showcased the real self of the Indian user's efficiency in this language, in my country. This is one example; there are many more to be seen in our surroundings. Here the importance of communication for an individual mechanically arises for effective growth of personality. Thus the right path for the second language learner would be in acknowledging communication as the self-motivated interactive process that involves the effective diffusion of facts, ideas, thoughts, feelings and values for appropriate drawing of colloquial and metaphoric phrases under the umbrella of communication. As Halliday (1978, p. 169) explains, communication is more than merely an exchange of words between parties; it is a "…sociological encounter" (Halliday, p. 139) and through exchange of meanings in the communication process, social reality is "created, maintained and modified" (Halliday, p. 169). Such is the importance of English in communication.

Figure 1

4. LEARNING TO COMMUNICATE

When dealing with the process of learning, the role of the learner as well as the teacher is integral in learning a language. The role of the teacher in the process of learning is quite important. Communication is an interactive process. The learner as well as the teacher has to focus on and understand the importance of the two communication agents involved in the communication process: sender (S) and receiver (R). Both communication agents exert a reciprocal influence on each other through inter-stimulation and response. For this the teacher has to know the individual needs of the students. And this can only be known in a better way when the learners perform a task in the class, and they accomplish the task to become linguistically diverse students. A task-based approach seems to be suitable for teaching and learning the four skills.

Figure 2

Here the knowledge of the four skills in language acquisition works to bring the language into a coherent, cohesive manner. This brings the overall success of a student in the learning process. Being able to communicate effectively and clearly in your own environment and comfort zone may seem tricky, but it increases exponentially in different phases of life. So the role of the teacher moves towards making an environment with:
o Emphasis on speaker's purpose.
o Analysis of the signals coming.
o Awareness of what has gone before.
o Positivity in attitude.
o Responsibility to the speech.
o Distractions-forefend mode.
o Materials evaluation.
o Non-verbal clues.

5. CONCLUSION

For the efficiency of the language, the teacher, the learner, the settings and the language strands must be interwoven. The realization will accept the change in the teaching style with address to the learner; the learner becomes motivated here and the resources explicate the worth of the language for the second language learner. The skills of listening, reading, speaking, and writing, along with the knowledge of vocabulary, morphology, spelling, pronunciation, syntax and understanding, associate to produce learning for authentic communication. The whole purpose is to make the second language learners able users of the language. If one is good at speaking and listening, is he good at English? If one person is good at reading and writing, is he good at English? So the answer is automatically clear: for communication we have to be good in all the four skills rather than just good at one, two or three. Assimilating all the knowledge and using it for communication, for expressing easy and even complex situations, resolves the purpose of learning English.

REFERENCES

[1] George, M., 2007. Classroom activities for building vocabulary. The Journal of English Language Teaching, India, vol. 45/1: 35-39.

[2] George, H.V., 1971. English for Asian learners: are we on the right road? English Language Teaching, XXV: 270-277.

[3] Ravi, P.V., 1998. The motivational problems with reference to teaching-learning English as a second language.

[4] S. Devika Malini, 2011. English language teaching in India: a critical evaluation of ELT in India.

[5] McCarthey, S.J., Garcia, G.E., Lopez-Velasquez, A.M., & Guo, S.H. (2004). Understanding contexts for English language learners. Research in the Teaching of English, 38(4): 351-394.

[6] National Council of Teachers of English: Report on English language learners, directed by Anne Ruggles Gere.

[7] M. Mojibur Rahman: ESP World, Issue 1 (27), Volume 9, 2010: Teaching Oral Communication Skills: A Task-based Approach.

[8] Halliday, M.A.K., 1978. Language as Social Semiotic. Edward Arnold, London.

[9] Corder, S.P., 1980. Second language acquisition research and the teaching of grammar. BAAL Newsletter 10.


An Experimental Investigation on Aluminium based Composite Material reinforced with Aluminium oxide, Magnesium and Rice Husk Ash Particles through Stir Casting Technique

Rajiv Bharti, SPEC, Lalru (Pb.), er.rajiv0009@gmail.com
Er. Sanjeev Kumar, BGIET, Sangrur (Pb.), sanjeevgoldy3@hotmail.com
Er. Jaskirat Singh, BGIET, Sangrur (Pb.), Jaskirat.sliet@gmail.com

ABSTRACT

Manufacturing of aluminium alloy based cast composite materials via stir casting is one of the prominent and economical routes for development and processing of metal matrix composite materials. In this experiment an Al based composite material is manufactured and different tests are performed to know the properties of the composite material; optical photomicrographs of the microstructure revealed that the dispersion of micron size particles was more uniform. The result reveals that stir casting could be an economical route for the production of composites. The composites thus produced were characterized for their mechanical properties such as hardness and tensile strength. This paper presents the stir casting process, its process parameters and the preparation of aluminium composites with aluminium oxide, magnesium and rice husk ash reinforcement in varying proportions. Tensile strength and Brinell hardness number increase with increasing percentage of aluminium oxide, magnesium and rice husk ash.

1. INTRODUCTION

Composite materials are engineering combinations of two or more materials in which properties are achieved by combinations of different constituents. Various types of engineering composites are found in industry, including polymer matrix, ceramic matrix and metal matrix composites. The field of science and technology demands the development of advanced materials, especially in transportation, aerospace and military engineering related areas. These areas demand light weight, high strength materials having good tribological properties. Such demand can only be met by development and processing of aluminium metal composite materials. The main challenge in the development and processing of engineering materials is to control the microstructure, mechanical properties and cost of the product through optimizing the chemical composition, processing method and heat treatment.

1.1 Classification of Composites

Composites are classified by the geometry of the reinforcement (particulate, flakes and fibres) or by the type of the matrix (metal, carbon and polymer).

Figure 1.1: Classification of Composite Materials with Metal Matrices

1.2 Natural Composites

Several natural materials can be grouped under natural composites, e.g. bones, wood, shells, pearlite (steel which is a mixture of α phase and Fe3C) etc.

Figure 1.2: Natural Composite (scallop shell)

1.3 Man-Made Composites

Man-made composites are produced by combining two or more materials in definite proportions under controlled conditions, e.g. mud mixed with straw to produce stronger mud mortar and bricks, plywood, chipboards, decorative laminates, Fibre Reinforced Plastic (FRP), carbon composites, concrete and RCC, reinforced glass etc.

Figure 1.3: Man-Made Composites (plywood, bricks)


1.4 Aluminium

Pure aluminium is taken as the base material; it is 98% pure, with the remaining 2% being other elements.

1.5 Aluminium Oxide

Aluminium oxide is a chemical compound of aluminium and oxygen with the chemical formula Al2O3. It is the most commonly occurring of several aluminium oxides. Also called alumina, it may also be called aloxide, aloxite or alundum depending on particular forms or applications. Al2O3 is significant in its use to produce aluminium metal, as an abrasive owing to its hardness, and as a refractory material owing to its high melting point.

1.6 Magnesium

Magnesium is a chemical element with symbol Mg and atomic number 12. It is a highly flammable metal, and it is easy to ignite when powdered or shaved into thin strips.

1.7 Rice Husk Ash

Burnt rice husk is known as rice husk ash (RHA). This RHA in turn contains around 85%-90% amorphous silica. RHA is used for high strength.

2. RESEARCH GAP

Different materials are used for increasing the mechanical properties of composite materials; for a composite material, the matrix and reinforcement are of prime importance. There are lots of gaps found in material selection and the tests performed. Rice husk ash is used in different ratios; it serves as a good insulator and the temperature is maintained. Different ratios of aluminium oxide and magnesium are used to maintain the properties of the composite material.

3. PROBLEM FORMULATION

There has been an increasing interest in composites containing low density and low cost reinforcements. The specimens produced will be tested and then subjected to varying temperatures (100°C to 850°C) for a period of 60-90 days.

Figure 3.1: Flow Chart Showing Steps Involved in Stir Casting
- Collection and preparation of the raw materials
- Placing raw materials in a graphite crucible under nitrogen gas into a furnace
- Heating the crucible above the liquidus temperature and allowing time for the charge to become completely liquid
- During cooling, stirring is started in the semi-solid condition and continued until a temperature where less than 30% of the metal is solidified is reached
- Pouring into the mould
- Withdrawal of the composite from the mould
- Desired fabricated MMC ingots

3.1 Work Plan for Experiment

Figure 3.2: Work Plan for Experiment
- Initial testing of the sample: the microstructure
- Sample preparation through stir casting
- Keep the samples under varying temperatures for a time period
- Obtain optical photomicrographs
- Draw comparison between initial and final condition
- Results and conclusion
- Scope for future work


4. EXPERIMENTAL PROCEDURE

4.1 Stir Casting

Stir casting is a liquid state method of composite material fabrication, in which a dispersed phase (ceramic particles, short fibres) is mixed with a molten matrix metal by means of mechanical stirring. The liquid composite material is then cast by conventional casting methods and may also be processed by conventional metal forming technologies. Stirring is necessary to help in promoting wettability, i.e. bonding between matrix and reinforcement. The stirring speed directly controls the flow pattern of the molten metal. Parallel flow will not promote good mixing of reinforcement with matrix; hence the flow pattern should be controlled turbulent flow, and a flow pattern from the inward to the outward direction is best. In this experiment a stirring speed of 300-400 rpm is kept. A faster solidifying rate increases the percentage of wettability. Aluminium melts at 650°C-800°C. Alumina particles are preheated at 900°C for 1 hr to 3 hrs to make their surface oxidized. The furnace temperature is kept just above 750°C to melt the aluminium alloy and then cooled down just below the liquidus to keep the slurry in a semi-solid state. Automatic stirring was carried out with the help of a radial drilling machine for about 10 minutes at a stirring rate of 300 rpm; at this stage alumina particles were added manually to the vortex.

4.2 Stir Casting is characterized by the following features

1. Motor with stirring system at 270 rpm
2. Heating furnace (500°C-1100°C)
3. Crucible
4. Stirring blade (45°-60°)
5. Plug

4.3 Rice Husk Ash

Rice husk is burnt at controlled temperatures below 700 degrees centigrade. The ash generated is amorphous in nature. The transformation of this amorphous state to the crystalline state takes place if the ash is exposed to temperatures above 850 degrees centigrade. When this RHA is spread as a coating over the molten metal in the ladle, it acts as a very good insulator; the temperature is maintained and does not drop quickly, hence reducing the breakdown time of the casting. Rice milling generates a byproduct known as husk.

Table 4.1: Composition of Material and Percentage

Type | Al (gm) (%) | Al2O3 (gm) (%) | Mg (gm) (%) | Rice Husk Ash (gm) (%)
Sample-1 | 1200 (72.9%) | 120 (7.29%) | 25 (1.519%) | 300 (18.23%)
Sample-2 | 1200 (55.8%) | 300 (13.95%) | 50 (2.3%) | 600 (27.9%)

Figure 4.1: Preheating the composite material

4.4 Preheating

Preheating involves heating the base metal, either in its entirety or just the region surrounding the joint, to a specific desired temperature, called the preheat temperature. The reinforcement was preheated at 500°C to 600°C for 30 to 50 minutes in order to remove moisture or any other gases present in the reinforcement. The preheating also promotes the wettability of the reinforcement with the matrix.

5. RESULT AND DISCUSSION

5.1 Optical Photomicrograph Analysis

A photomicrograph is opposed to a macrographic image, which is at a scale visible to the naked eye. It is a digital image taken through a microscope to show a magnified image of an item. The optical microscope remains the fundamental tool for phase identification. The optical microscope magnifies an image by sending a beam of light through the object. At a basic level, photomicroscopy may be performed simply by hooking up a regular camera to a microscope, thereby enabling the user to take photographs at reasonably high magnification.

Figure 5.1: Schematic diagram of optical photomicrograph

Figure 5.2: Specimen samples (Sample 1 and Sample 2)
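The bracketed weight percentages in Table 4.1 follow directly from the constituent masses: each mass divided by the total charge for that sample. A minimal cross-check, with the masses taken from the table (small differences against the printed values are rounding):

```python
# Cross-check of the weight percentages reported in Table 4.1.
# Each percentage is constituent mass / total charge mass * 100.
samples = {
    "Sample-1": {"Al": 1200, "Al2O3": 120, "Mg": 25, "RHA": 300},
    "Sample-2": {"Al": 1200, "Al2O3": 300, "Mg": 50, "RHA": 600},
}

for name, masses in samples.items():
    total = sum(masses.values())  # 1645 gm and 2150 gm respectively
    for constituent, mass in masses.items():
        wt_pct = 100.0 * mass / total
        print(f"{name}: {constituent:5s} {mass:4d} gm = {wt_pct:5.2f} wt%")
```

Sample-1 totals 1645 gm, so Al comes out at about 72.95 wt% against the table's 72.9%; the remaining entries agree with the table in the same way.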


The samples are cast according to the technical specifications. The stir casting process is most useful for composite materials. Composites are solids made from more than one material, and are designed so that the properties of the metal and the composite utilize the properties of the components. Optical photomicrographs show that microstructural changes are found between sample 1 and sample 2 at different levels. The incorporation of as-received reinforcement (Al2O3, Mg and RHA) in the Al alloy by liquid metal stir casting has resulted in agglomeration of particles.

Figure 5.3: Optical photomicrograph of Al based composite material of Sample 1 at 100 µm

Optical photomicrography is carried out at 100 µm on sample 1, showing the bonding structure between matrix (aluminium) and reinforcement (Al2O3, Mg, RHA) at small particle grain size. Black particles show Al2O3. The wt% of Al2O3 is 7.29 and of Mg is 1.519. Rice husk ash is used for strength purposes.

Figure 5.4: Optical photomicrograph of Al based composite material of Sample 2 at 100 µm

In sample 2 the wt% of Al2O3 is taken as 13.95, the wt% of Mg changes to 2.3 and that of RHA to 27.9, but the mass of Al alloy is kept constant.

Figure 5.5: Optical photomicrograph of Al based composite material of Sample 1 at 50 µm

At 50 µm the images change in scale, and in sample 1 the grain size and the alumina and Mg particles are visualized as small particles. Aluminium is used as a matrix in several fibre reinforced composites. Al2O3, an oxide of Al, is very hard and strong and can be dispersed in the matrix of Al by powder metallurgy to produce SAP (sintered aluminium product).

Figure 5.6: Optical photomicrograph of Al based composite material of Sample 2 at 50 µm

At 50 µm sample 2 clearly shows grain particles of the reinforced material. Al, when added to Mg in the range of 3 to 10% with small amounts of Zn and Mn, increases strength, hardness and castability.

Figure 5.7: Optical photomicrograph of Al based composite material of Sample 1 at 20 µm

At 20 µm the reinforced particles change their apparent size in the images. The density of aluminium is 2.7 g/cm3 whereas Mg has 1.74 g/cm3; Mg is much costlier than aluminium when compared for lightness.

Figure 5.8: Optical photomicrograph of Al based composite material of Sample 2 at 20 µm

At 20 µm in sample 2 the reinforced particles are clearly visualised; their properties are changed, and the strength and hardness of this material are more than sample 1, though the hardness is not the same at all points.

5.2 Tensile Strength Test

The tensile test is performed on a universal testing machine. A shaper machine was used on the specimens from sample-1 and sample-2 to generate the proper size and dimensions according to the UTM machine. Tensile strength measures the force required


to pull something such as a rope, wire, or a structural beam to the point where it breaks.

Table 5.2: Tensile Strength Test

S.No. | Breaking Load (KN) | Original Gauge Length (mm) | Change in length (mm) | Elongation (%)
Specimen 1 | 10.8 | 91.0 | 1.3 | 1.4
Specimen 2 | 29.6 | 83.5 | 1.1 | 1.3

Figure 5.9: Universal testing machine

The tensile strength of a material is the maximum amount of tensile stress that it can take before failure, for example breaking. The tensile test is used to assess the mechanical behaviour of the composites and the matrix alloy. If the reinforcement wt% increases, the UTS also increases. This may be due to dispersion motion, which may result in an increase in the tensile strength of the reinforced material.

Figure 5.10: Tensile Test Specimen-1 and Tensile Test Specimen-2

Diagram 5.11

Diagram 5.12

During the tensile test on specimen-1, the elongation is 1.4% at a breaking load of 10.8 KN. In specimen-2 an elongation of 1.3% is found at a breaking load of 29.6 KN. The tensile strength of specimen-2 is more than that of specimen-1, i.e. 29.6 KN. This may happen due to the different percentages of alloying elements in specimen-2, which result in increased tensile strength.

5.3 Brinell Hardness Testing Machine

There are many designs of commercially manufactured Brinell hardness testing machines. The dial indicating gage was the original method used in Brinell machines for measuring the indentation depth and for calculating and displaying the Brinell hardness number. The general principle of its operation is to mechanically measure the movement of the indenter through a multiplying lever system. The dial face is calibrated to indicate the Brinell number corresponding to the displacement of the indenter. Usually, the dial divisions have represented whole Brinell numbers, allowing an estimation of the hardness number to only ½ Brinell units.

Figure 5.11: Brinell hardness tester


Figure 5.12: Specimens for hardness (Specimen-1 and Specimen-2)

Table 5.3: Test Specimen No. 1 (steel ball indenter 5 mm Ø and load applied 250 kgf)

No. of Impression | Diameter of Ball, D (mm) | Dia of Indentation, d (mm) | B.H.N.
1 | 5 | 1.5 | 138
2 | 5 | 1.5 | 138
3 | 5 | 1.5 | 138
4 | 5 | 1.5 | 138
5 | 5 | 1.5 | 138
Average Hardness Number: 138

Table 5.4: Test Specimen No. 2 (steel ball indenter 5 mm Ø and load applied 250 kgf)

No. of Impression | Diameter of Ball, D (mm) | Dia of Indentation, d (mm) | B.H.N.
1 | 5 | 1.45 | 148
2 | 5 | 1.35 | 171
3 | 5 | 1.3 | 185
4 | 5 | 1.49 | 140
5 | 5 | 1.5 | 138
Average Hardness Number: 156.4

The hardness found in test specimen-2 is higher as compared to test specimen-1; its average hardness number is 156.4. The depth of indentation and the diameter of indentation are different at all the impressions, and the hardness number increases as the diameter of indentation decreases. Specimen-1 is a softer material, showing the same hardness number at different points; its average hardness number is 138. In specimen-1 the depths of indentation differ but the diameters of indentation are the same at the different impressions, so the hardness number is also the same. In the Brinell hardness test, specimen-2 from sample-2 was found to be harder than specimen-1 from sample-1.

REFERENCES

[1] Attia, A.N. (2001), "Surface metal matrix composites", Production Engineering and Mechanical Design Department, pp. 451-457.

[2] Hashim, J. (2001), "The production of cast metal matrix composite by modified stir casting method", pp. 9-20.

[3] Surappa, M.K. (2003), "Aluminum matrix composites: challenges and opportunities", Vol. 28, pp. 319-334.

[4] Das, S. (2004), "Development of aluminum alloy composite for engineering applications", pp. 325-334.

[5] Aqida, S.N., Ghalazi, M.I. and Hashim, J. (2004), "Effects of porosity on mechanical properties of metal matrix composite: an overview", pp. 17-32.

[6] Choudhury, S.K., Singh, A.K., Shivaramakrishnan, C.S.S. and Paanigrahi, S.C. (2004), "Preparation and thermo mechanical properties of stir cast Al-2Mg-11TiO2", Vol. 27, pp. 517-521.

[7] Rajan, T.P.D., Pillai, R.M., Pai, B.C., Satyanarayana, K.G. and Rohatgi, P.K. (2007), "Fabrication and characterization of Al–7Si–0.35Mg/fly ash metal matrix composites processed by different stir casting routes", Materials and Minerals Division.

[8] Kumar, S. (2008), "Production and characterization of aluminum–fly ash composite using stir casting method".

[9] Singla, M., Dwivedi, D., Deepak, I., Singh, L. and Chawla, V. (2009), "Development of aluminum based silicon carbide particulate metal matrix composite", Journal of Minerals & Materials Characterization & Engineering, Vol. 8, pp. 455-467.
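The elongation and hardness values in Tables 5.2-5.4 can be reproduced numerically. The sketch below assumes the standard relations, which the paper does not state explicitly: percentage elongation = (change in length / original gauge length) × 100, and the Brinell number BHN = 2P / (πD(D − √(D² − d²))), with load P in kgf and ball and indentation diameters D, d in mm.

```python
import math

def elongation_pct(change_mm: float, gauge_mm: float) -> float:
    """Percentage elongation from the Table 5.2 columns."""
    return 100.0 * change_mm / gauge_mm

def brinell_hardness(load_kgf: float, ball_mm: float, indent_mm: float) -> float:
    """Standard Brinell relation: BHN = 2P / (pi*D*(D - sqrt(D^2 - d^2)))."""
    return 2 * load_kgf / (
        math.pi * ball_mm * (ball_mm - math.sqrt(ball_mm**2 - indent_mm**2))
    )

# Table 5.2: specimen-1 and specimen-2
print(round(elongation_pct(1.3, 91.0), 1))  # 1.4
print(round(elongation_pct(1.1, 83.5), 1))  # 1.3

# Tables 5.3/5.4: 250 kgf load on a 5 mm steel ball
for d in (1.5, 1.45, 1.35, 1.3, 1.49):
    print(round(brinell_hardness(250, 5, d)))  # 138, 148, 171, 185, 140
```

The computed numbers land on the tabulated ones (138 for every 1.5 mm impression of specimen-1, and 148, 171, 185, 140 and 138 for specimen-2, averaging 156.4), which suggests the tables were produced with these same relations.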


Role of Youth and Media in Modern Communication System
Harpreet Kaur
SD College, Barnala, Punjab, India
gudluckhoney@gmail.com

ABSTRACT

Youth is always the big treasure of every country. Every country wants its youth to take the right path and give positive services to it. But nowadays we are living in two worlds: one is real and another is a virtual world. It is our responsibility to give a completely healthy environment to beginners. It will be possible if we discuss the problem of the misuse of communication media, which spoils our young generation with a free flow of illegal and unneeded information. With the free flow of information today, everybody is living under the pressure of information which is usually unrequired. Today a single person is able to shoot a video, click a photograph, write some text and circulate it in the world in a couple of seconds with the use of the internet or any other media. No certificate or registration is required to do so. There is no law to stop this type of flow, but education alone is such a weapon which can stop youth from surfing or uploading unauthentic information which they find authentic, as they are at an adolescent age.

Mass circulation of bad content can be stopped by mass education. We should make a campaign to educate young people towards good use of media and give the significance of media for their lives. Religion, education departments, family members, health departments and governments play a big role in stopping youth from illegal surfing. If youth can give millions of likes to a good moral on social networking sites, then it is not impossible to stop them from surfing bad and illegal content on sites or uploading bad content. If we think about saving the earth from pollution, then it is a very essential factor to save our virtual world from bad information.

Keywords
Communication media, social networking, media effects on youth

ROLE OF YOUTH AND MEDIA

Media play a vital role in the modern communication system. Growth of every area of life is dependent on communication media; we can find information from any

it is important exposure to media and to provide guidance on the use of all media, including television, radio, music, video games and the Internet (4).

The objectives of this statement are to explore the beneficial and harmful effects of media on mental and physical health. Every medium affects the growing age in both positive and negative ways. Television is the medium which captures our growing youth easily, and we do not move from our routine. According to Paediatr Child Health, television has the potential to generate both positive and negative effects, and many studies have looked at the impact of television on society, particularly on adolescents. An individual youth's developmental level is a critical factor in determining whether the medium will have positive or negative effects. Music videos may have a significant behavioural impact by desensitizing viewers to violence and making teenagers more likely to approve of premarital sex. Up to 75% of videos contain sexually explicit material, and more than half contain violence that is often committed against women (4). Women are frequently portrayed in a condescending manner that affects children's attitudes about sex roles.

Music videos may reinforce false stereotypes. A detailed analysis of music videos raised concerns about their effects on adolescents' normative expectations about conflict resolution, race and male-female relationships.

Music lyrics have become increasingly explicit, particularly with references to sex, drugs and violence. Research linking a cause-and-effect relationship between explicit lyrics and adverse behavioural effects is still in progress at this time. At the very least, parents should take an active role in monitoring the music their children are exposed to.

Not all television programs are bad, but the data showing the negative effects of exposure to violence, inappropriate sexuality and offensive language are convincing. Because media takes time away from exercise activities, people who watch a lot of television are less physically fit and more likely to eat high-fat and high-energy snack foods. Media makes a substantial contribution to obesity. According to Children, Adolescents, and Television, American Academy of
where, any time, in any mode about anything .Regional Pediatrics Committee on Communications the fat content
distance does not matter in new media communication of advertised products exceeds the current average
system. but every big power demands more Canadian diet and nutritional recommendations, and
responsibility towards planet . According to Paediatr most food advertising is for high calorie foods such as
Child Health the influence of the media on the fast foods, candy and presweetened cereals. Commercials
psychosocial development of children is profound. Thus, for healthy food make up only 4% of the food
729
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
advertisements shown during young people's viewing time(7). The number of hours of television viewing also corresponds with an increased relative risk of higher cholesterol levels. Television can also contribute to eating disorders in teenage girls, who may emulate the thin role models seen on television. Eating meals while watching television should be discouraged, because it may lead to less meaningful communication and, arguably, poorer eating habits. One more aspect of media is the amount of violence, which is on the rise. The average person sees 12,000 violent acts on television annually, including many depictions of murder and rape. More than 1000 studies attest to the effects of televised violence. Today, television has become a leading sex educator. Between 1976 and 1996, there was a 270% increase in sexual interactions during the family hour of 2000 hours to 2100 hours on television(4). Media expose children to adult sexual behaviors in ways that portray these actions as normal and risk-free, sending the message that because these behaviors are frequent, 'everybody does it'. Sex between unmarried partners is shown 24 times more often than sex between spouses, while sexually transmitted infections and unwanted pregnancy are rarely mentioned.

A detailed guide to responsible sexual content in media, in the form of films and music, can be found in other peer-reviewed publications. Some people believe that the media can influence sexual responsibility by promoting birth control, such as condom use. No current empirical evidence supports this concept; it is expected that the debate will continue. Another issue is advertisement. There is evidence that passive advertising, which glamorizes smoking, has increased over the past few years.

Media are not the only way that youth learn about tobacco and alcohol use; the concern is that the consequences of these behaviors are not accurately depicted in media. One-half of the G-rated animated feature films available on videocassette, as well as many music videos, show alcohol and tobacco use as normative behavior without conveying the long-term consequences of this use. The average person sees more than 20,000 commercials each year(9). Advertisements targeting adolescents are profoundly influential, particularly on cigarette use.

It is like uranium, which is a great source of energy, yet the whole world is now afraid of its misuse or of carelessness in uranium plants. The new media communication system is also like uranium, because it can be used by mischievous persons or handled carelessly. The convergence of communication media makes it easy to carry the world in your pocket, and it is also very easy to become part of breaking news within a night. Even a minor user of communication media is capable of reaching every kind and mode of information, for example text, pictures, graphics, moving visuals, etc. The flow of information from sender to receiver is fast, without any obstacle of rule or regulation.

The free flow of information is a mix of wanted and unwanted content, and people live under the pressure of information. Sarabjit Kanganiwal, citing an American estimate, notes that every person on our planet now receives roughly 800 megabytes of recorded information per year; like atmospheric pressure, we do not feel it. Kanganiwal also cites a research study done by the University of California, Berkeley, on the basis of the year 1999, finding that the store of recorded information had increased by about 43%, and that it now grows by around 30% every year. According to Professor Peter Lyman, who led the study, in the year 2002 about 5 exabytes of new information were produced in the form of paper, film, magnetic discs and optical storage devices alone. In other words, to store such information we would need 5 lakh libraries equal to the American Library of Congress, which contains 1 crore 90 lakh books and 5.6 crore other documents(1). This figure relates only to recorded formats; TV broadcasting and telephone conversations are not included in it. So, in other words, the growth rate of information is greater than the growth rate of human beings. We are totally controlled by information; without information we cannot move for a moment. Every resource we use nowadays is not useful without related information. Information has become a need of human beings, like air, water and food. All this information travels by media, so media are very important in the new age.

But the free flow of information affects youth. Youth are more aggressive towards everything, and they also learn to handle new media(5) techniques more easily than other age groups; so, with excess freedom to use these media in any manner, they do not follow cultural and civil values and laws, and they turn their energy in rebellious ways against traditions. According to the Rio de Janeiro report Young People & Media in the World Today, approximately one-third of the world's population is made up of 2 billion young people under 18(3). They make up half the population in the least developed nations and less than a quarter in the most industrialized ones. Their challenges range from basic survival to discrimination and exploitation. Moreover, there are myriad differences in cultures, traditions and values. Nevertheless, children and youth everywhere share some universal traits. They are fundamentally more optimistic, more open and curious than their adult counterparts. Increasingly, children are enjoying unprecedented freedoms in many countries. Arguably, the proliferation and globalization of media are among the key factors that have shaped and defined the current generation of young people. In many countries, youth have access to a greater number of multi-media choices than ever before: conventional, satellite and cable TV channels; radio stations; newspapers and magazines; the internet and computer and video games. In addition, many are exposed to the same programs, the same characters and the same marketed spin-off products. Today there is greater availability of foreign programming and media, and less official censorship and control in many parts of the world. Information, email and images flow around the world faster and more freely than ever. Indeed, mass media are making the world smaller, and culture and media are increasingly inextricable, especially for young people. In other words, young people are not much
educated to use media in an appropriate way. Recently Pakistan blocked YouTube, a social networking site. Pakistan's explanation behind blocking this particular site is that it has no ultimate technique for removing sensational matter from YouTube, nor any way to stop people from surfing unwise or illegal matter. Their religious and cultural values do not allow them to permit their children or young people to watch those types of videos. This is an authentic answer to why we have to worry about media and youth; but blocking the site is not a justified and proper solution to the problem, and it also looks unjust to youth and media.

Two other elements demand attention: democracy and media literacy. According to Andy Ruddock, debates about how media influence young people are often about the nature of democracy. He also quotes Henry Giroux: the matter of how youth use media, and how they are permitted to do so, is the very stuff of democracy(2).

So, as per law, new media are totally democratic: every person has a similar right to use and interact with the information of new media. Young people feel more democratic when they use media to give respect to the equality of human beings; the problem arises only when they use media to disturb the system of regulations and duties. We have to create criteria under which youth have permission to use media in relevant fields, and these criteria can be maintained by certain spheres of our society.

The first and basic sphere of communication is the family, because the family is the primary group of the communication system. The Rio de Janeiro report also describes parents first. According to it, under "Quality of Media for Children & Youth: Growing Concerns over Lack of Quality & Control", as media options for most children have grown in recent decades, so too have concerns about the quality of media aimed at children(3). Growing numbers of parents, educators, researchers and policy-makers around the world are alarmed about the lack of quality media for children and young people and the growing availability of low-quality entertainment featuring violence, sexual content, undesirable role models and lack of diversity. There are also serious questions about the short- and long-term effects of this material.

The report also adds a quote of Dr. Francis B. Nyamnjoh, University of Botswana, on the impressionable, unquestioning and imitative nature of children: how much the mass media influence children and young people is somewhat debatable, but sociologists and researchers in different regions have observed some of the following adverse effects: the growing influence of entertainment media on youth style and identity, and the decreasing role of traditional sources of influence such as family, school, community, religion, etc(3). Parents have to monitor and control their children's viewing habits.

Studies show that parents play an important role in their children's social learning, but if a parent's views are not discussed explicitly with children, the medium may teach and influence by default. Other media, such as magazines, radio, video games and the Internet, also have the potential to influence children's eating habits, exercise habits, buying habits and mental health. If children are allowed to be exposed to these media without adult supervision, the media may have the same deleterious effects.

Parents have to carry out their natural responsibility towards their children. They must instruct them in the meaningful use of media. We also welcome religion here, because every religion can make a list for its followers to help them avoid the misuse of communication media; but religions must not target the principles and feelings of other religions and castes.

Parents may feel outsmarted or overwhelmed by their children's computer and Internet abilities, or they may not appreciate that the 'new medium' is an essential component of the new literacy, something in which their children need to be fluent. These feelings of inadequacy or confusion should not prevent them from discovering the Internet's benefits. The dangers inherent in this relatively uncontrolled 'wired' world are many and varied, but often hidden. These dangers must be unmasked, and a wise parent will learn how to protect their children by immersing themselves in the medium and taking advice from the many resources aimed at protecting children while allowing them to reap the rich benefits in a safe environment.

The Internet has a significant potential to give youth educational information, and can be compared with a huge home library. However, the lack of editorial standards limits the Internet's credibility as a source of information. There are other concerns as well.

The amount of time spent watching television and sitting in front of computers can affect postural development(4). Excessive amounts of time at a computer can contribute to obesity, undeveloped social skills and a form of addictive behavior. Although rare, some persons with seizure disorders are more prone to attacks brought on by a flickering television or computer screen.

Another concern is that some people use the Internet to lure young people into relationships. There is also the potential for children to be exposed to pornographic material. Parents can use technology that blocks access to pornography and sex talk on the Internet, but they must be aware that this technology does not replace their supervision or guidance(6).

The second sphere is the educational group. High school programs promoting media awareness have been shown to be beneficial: they give students more understanding of how the media may affect them socially. In Canada, the Media Awareness Network has a number of resources that can be used by both professionals and the public to promote media literacy(4). Their resources are comprehensive, current and specifically applicable to Canadian culture(6).
Cooks is also in favor of this; the challenges of teaching media and cultural politics to young students who grappled with issues of race, class, gender and sexuality, both inside and outside the seminar room, have been recognized (e.g. Cooks, 2003)(2). To support this, it is necessary for institutes to motivate students towards the authentic use of media; they can teach this subject additionally or arrange workshops, seminars and conferences. On the other side, Divina Frau-Meigs (2008) argued that the reason why media education has never been so important is that media industries have never been so powerful and national governments have never been less enthusiastic about regulating them(2). We take this as a suggestion.

Last but not least is the governmental and organizational group, because every person works for his country and government. Government takes responsibility for educating its young generation in every new field. Governments can block those websites which contain only unwise matter and which are totally concerned with profit, not with information, education and tolerable entertainment. A government can also take hard steps against mischievous persons who play with the feelings of society and become the reason for somebody's sorrow.

Health departments can increase their limits on media, because health is very necessary for every person; the Rio de Janeiro report also mentions sexually transmitted diseases and growing health problems caused by media misuse. Besides this, many organizations like the UN make media laws. The United Nations Convention on the Rights of the Child (CRC), adopted in 1989 and ratified by all but two countries, clearly spells out the rights to which all children everywhere are entitled. According to the Rio de Janeiro report, it contains four basic principles to guide political decision-making affecting the child: 1) the best interests of the child should be a primary consideration in such decisions; 2) the opinions of children themselves should be heard; 3) child development, not only survival, should be ensured; 4) each child should be able to enjoy his or her rights, without discrimination(3). Several of the CRC's key articles deal with the media and children. Article 13 enshrines the right to freedom of expression: "this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of the child's choice." Article 17, together with Articles 12 and 13, should contribute not only to the development of well-informed citizens, but to young people's voices being heard more and more through the mass media(3). It sends a clear message that children should be both participants in and beneficiaries of the information revolution.

In other words, we should not ignore the rights of youth to find information from media; we can only educate them or regulate the media. We have to understand the world of youth and avoid presenting the issues in ways that are too serious, pedantic or patronizing. In addition, youth in countries with widespread poverty, corruption, political turmoil and/or disease also seek realistic, relevant and meaningful content to help them understand and cope with the hardships they face in their daily lives. Nor should we ignore the vital role of media. According to UNICEF's Voices of Youth website(3), "Many young people also appreciate media content that deals credibly with topics they may find difficult to discuss with parents or adults, such as personal relationships, sexuality, AIDS, drugs, self esteem, etc. They value factual information and advice provided by experts, as well as material prepared and presented by young people themselves. In focus groups InterMedia has conducted in different countries, youth say they believe only young journalists can really understand their problems."(3)

In a nutshell, media are very important; but if we see problems with media, then we have to improve matters by altering the use of media and by education among the young generation.

REFERENCES

[1] Sarabjit Kanganiwal, Patrakarita Samaj Evam Bazar, Unistar Books Pvt. Ltd., Chandigarh, 2006.
[2] Andy Ruddock, Youth and Media, SAGE Publications Ltd, Monash University, Australia, 2013.
[3] 4th World Summit on Media for Children and Adolescents, Rio de Janeiro, Brazil, April 2004.
[4] Paediatrics and Child Health, www.ncbi.nlm.nih.gov, May 2003.
[5] Russell Neuman, New Media, www.wikipedia.com, 2003.
[6] Sexuality, contraception, and the media, Committee on Public Education, 2003.
[7] www.aplnet.org/resources/adsummary.2003 pdf.
[8] www.cmpa.com/tvent/violence2003htm
[9] Depiction of alcohol, tobacco and other substances in G-rated animated feature films, Thompson KM, Yokota F, June 2001.
A Review Study on Presentation of Positive Integers as Sum of Squares

Ashwani Sikri
Department of Mathematics, S. D. College Barnala-148101
Email: ashwanisikri9@gmail.com

ABSTRACT
It can be easily observed that every positive integer can be represented as a sum of squares. In 1640, Fermat stated a theorem known as the "Theorem of Fermat": every prime of the form 4n+1 can be expressed as a sum of two squares. On December 25, 1640, Fermat communicated a proof of this theorem in a letter to Mersenne. However, the proof was first published by Euler in 1754, who in addition succeeded in showing that the representation is unique. Later it was proved that a positive integer n is representable as the sum of two squares if and only if each of its prime factors of the form 4k+3 occurs to an even power in the canonical form of n.

Diophantus conjectured that no number of the form 8λ+7, for non-negative integer λ, is the sum of three squares; this was verified by Descartes in 1638. Later Fermat stated that a number can be written as a sum of three square integers if and only if it is not of the form 4^m(8λ+7), where m and λ are non-negative integers. This was proved in a complicated manner by Legendre in 1798 and more clearly by Gauss in 1801.

In 1621, Bachet gave a conjecture that "every positive integer can be written as a sum of four squares, counting squares of zero", and he checked this for all integers up to 325. Fifteen years later, Fermat claimed that he had a proof, but he gave no details. A complete proof of the four square conjecture was published by Lagrange in 1772. The next year, Euler offered a much simpler demonstration of Lagrange's four squares theorem by stating a fundamental identity which allows one to express the product of two sums of four squares as such a sum, together with some other crucial results.

Keywords
Integers, Prime, Squares, Sum, Euler

INTRODUCTION
We begin our efforts to characterize which positive integers can be written as the sum of two squares, the sum of three squares and the sum of four squares by examining the first few positive integers: for instance, 1 = 1^2, 2 = 1^2 + 1^2, 3 = 1^2 + 1^2 + 1^2, 4 = 2^2, 5 = 1^2 + 2^2, 6 = 1^2 + 1^2 + 2^2 and 7 = 1^2 + 1^2 + 1^2 + 2^2. So we see that positive integers are represented as sums of four or fewer squares.

Sum of two squares
We begin with the problem of representing positive integers as sums of two
squares; for this we will first tackle the case when the positive integer is prime.

Theorem:- No prime p of the form 4k+3 can be written as a sum of two squares[1].
Proof:- Let p = 4k+3, so that
p ≡ 3 (mod 4) …… (1)
Suppose, if possible, that p is written as a sum of two squares, i.e. p = a^2 + b^2, where a, b are positive integers. Now for any integer a, a ≡ 0, 1, 2 or 3 (mod 4), so
a^2 ≡ 0 or 1 (mod 4) …… (2)
Similarly,
b^2 ≡ 0 or 1 (mod 4) …… (3)
From (2) and (3) we have a^2 + b^2 ≡ 0, 1 or 2 (mod 4), which contradicts (1). So our supposition is wrong. Hence p cannot be written as a sum of two squares.

Wilson's Theorem:- If p is a prime then (p-1)! ≡ -1 (mod p)[1].

Thue's Theorem[1]:- Let p be a prime and a be any integer such that gcd(a, p) = 1. Then the congruence ax ≡ y (mod p) has an integral solution (x0, y0), where 0 < |x0| < √p and 0 < |y0| < √p.

Theorem of Fermat[2]:- An odd prime p is expressible as a sum of two squares if and only if p ≡ 1 (mod 4).
Proof:- Let p be written as a sum of two squares, say
p = a^2 + b^2 … (1)
Claim: p ≡ 1 (mod 4).
First, p does not divide a. For suppose p | a. Then
p | a^2 …… (2)
Also, by (1),
p | (a^2 + b^2) …… (3)
So (2) and (3) imply p | b^2, which implies that p | b (because p is prime). But then p^2 | (a^2 + b^2), i.e. p^2 | p by (1), which is not possible. So p does not divide a, and in the same way p does not divide b.
Now gcd(b, p) = 1, so the congruence bx ≡ a (mod p) has a unique solution, say x0:
b·x0 ≡ a (mod p) …… (4)
Squaring (4) and reducing modulo p, the above equation by use of (1) becomes
b^2·x0^2 ≡ a^2 ≡ -b^2 (mod p), and cancelling b^2 (permissible since gcd(b, p) = 1) gives
x0^2 ≡ -1 (mod p),
i.e. the congruence x^2 ≡ -1 (mod p) has a solution, so -1 is a quadratic residue of p. By Euler's criterion, (-1)^((p-1)/2) ≡ 1 (mod p), which forces (p-1)/2 to be even, i.e. p ≡ 1 (mod 4).

Converse:- Let p ≡ 1 (mod 4), so that (p-1)/2 is a positive even integer. Since p is prime, by Wilson's theorem we have (p-1)! ≡ -1 (mod p), i.e.
1·2·3·…·(p-1) ≡ -1 (mod p)
Replacing each factor j with (p-1)/2 < j ≤ p-1 by j - p ≡ j (mod p) (congruence ≡ being a symmetric relation), the left side becomes
1·2·…·((p-1)/2)·(-(p-1)/2)·…·(-2)·(-1) = (-1)^((p-1)/2)·[1·2·…·((p-1)/2)]^2
Because (p-1)/2 is even, this gives
a^2 ≡ -1 (mod p), where a = 1·2·…·((p-1)/2) …… (5)
In particular -1 is a quadratic residue of p and gcd(a, p) = 1. So Thue's theorem implies that the congruence ax ≡ y (mod p) has a solution (x0, y0), where 0 < |x0| < √p and 0 < |y0| < √p.
Then, by use of (5),
y0^2 ≡ a^2·x0^2 ≡ -x0^2 (mod p)
⇒ x0^2 + y0^2 ≡ 0 (mod p)
⇒ x0^2 + y0^2 = kp …….. (6) for some positive integer k.
Now 0 < x0^2 + y0^2 < (√p)^2 + (√p)^2 = 2p, so k = 1. Put k = 1 in (6):
p = x0^2 + y0^2
⇒ p is a sum of two squares.

Corollary[2]: Any prime p of the form 4n+1 can be represented uniquely (aside from the order of the summands) as a sum of two squares.
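The Theorem of Fermat and its corollary can be illustrated numerically. Below is a minimal brute-force sketch (the code and the helper name `two_square` are ours, not the paper's): it finds the two-square representation when p ≡ 1 (mod 4) and reports none when p ≡ 3 (mod 4).

```python
def two_square(p):
    """Search for nonnegative integers (a, b) with a*a + b*b == p."""
    a = 0
    while a * a <= p:
        b2 = p - a * a
        b = int(b2 ** 0.5)  # integer square root, exact for small values
        if b * b == b2:
            return (a, b)
        a += 1
    return None

# Primes of the form 4n+1 have a representation; primes 4k+3 have none.
print(two_square(13))  # (2, 3): 13 = 2^2 + 3^2
print(two_square(7))   # None: 7 ≡ 3 (mod 4)
```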
Proof: Since p is a prime of the form 4n+1, it can be represented as a sum of two squares. Now we will establish the uniqueness assertion. Suppose that
p = a^2 + b^2 = c^2 + d^2 …… (1)
where a, b, c, d are all positive integers with (a,b) = 1 and (c,d) = 1.
Now
a^2d^2 - b^2c^2 = a^2d^2 + b^2d^2 - b^2d^2 - b^2c^2
= (a^2 + b^2)d^2 - b^2(d^2 + c^2)
= pd^2 - b^2p (by (1))
= p(d^2 - b^2)
≡ 0 (mod p) (because d^2 - b^2 is an integer)
So a^2d^2 - b^2c^2 ≡ 0 (mod p)
⇒ p | (a^2d^2 - b^2c^2)
⇒ p | (ad - bc)(ad + bc)
But p is prime
⇒ p | (ad - bc) or p | (ad + bc) …… (2)
From (1), a, b, c, d are all less than √p
⇒ 0 ≤ |ad - bc| < p and 0 < ad + bc < 2p
So (2) ⇒ ad - bc = 0 or ad + bc = p …… (3)
If ad + bc = p, then we would have ac = bd; for
p^2 = (a^2 + b^2)(c^2 + d^2) = (ad + bc)^2 + (ac - bd)^2 = p^2 + (ac - bd)^2
⇒ (ac - bd)^2 = 0
⇒ ac = bd
So (3) ⇒ either ad = bc or ac = bd …… (4)
Suppose, for instance, that ad = bc …… (5)
⇒ d = bc/a is an integer, so a | bc
⇒ a | c [because (a,b) = 1]
⇒ there exists a positive integer λ such that c = λa …… (6)
Put (6) in (5): ad = bλa ⇒ d = λb …… (7)
Now p = c^2 + d^2 by (1)
⇒ p = λ^2(a^2 + b^2) by (6), (7)
⇒ a^2 + b^2 = λ^2(a^2 + b^2) by (1); because a^2 + b^2 is not equal to zero,
λ = 1.
Put λ = 1 in (6), (7): c = a, d = b.
In the same way the condition ac = bd leads to a = d, b = c. So uniqueness is established and the stated conclusion is justified.

Lemma[3]: If positive integers α and β are each written as a sum of two squares, then αβ is also written as a sum of two squares.
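The lemma rests on the classical two-square (Brahmagupta–Fibonacci) identity, which is verified by direct expansion; the display below is supplied for clarity:

```latex
\begin{aligned}
(a^2+b^2)(c^2+d^2) &= (ac+bd)^2 + (ad-bc)^2,\\
\text{since}\quad (ac+bd)^2 + (ad-bc)^2
&= a^2c^2 + 2abcd + b^2d^2 + a^2d^2 - 2abcd + b^2c^2\\
&= a^2c^2 + a^2d^2 + b^2c^2 + b^2d^2 .
\end{aligned}
```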
Proof:- Let α = a^2 + b^2 and β = c^2 + d^2, where a, b, c, d are integers. Then
αβ = (a^2 + b^2)(c^2 + d^2) = (ac + bd)^2 + (ad - bc)^2 = u^2 + v^2,
where u = ac + bd and v = ad - bc are integers
⇒ αβ is a sum of two squares.

Theorem[4]:- A positive integer n is representable as the sum of two squares if and only if each of its prime factors of the form 4k+3 occurs to an even power in the prime factorization of n.
Proof:- Suppose n is written as a sum of two squares, i.e.
n = a^2 + b^2 …. (i)
where a and b are integers. Let p be a prime factor of n of the form 4k+3 which occurs in the prime factorization of n.
Claim:- The power of p in n is even.
Let d = gcd(a, b) …….. (ii)
and write a = dx, b = dy, where gcd(x, y) = 1 …….. (iii)
(i) and (iii) ⇒ n = d^2(x^2 + y^2) = d^2·m, where m = x^2 + y^2 …….. (iv)
Now either p does not divide x or p does not divide y [because otherwise, if p | x and p | y, then p | gcd(x, y) = 1, which is not possible as p is a prime].
Suppose p does not divide x ⇒ gcd(x, p) = 1 ⇒ x has an inverse x' modulo p, where x' is an integer …….. (v)
Now we will prove that p does not divide m. Suppose, if possible, that p | m
⇒ x^2 + y^2 ≡ 0 (mod p)
⇒ (x')^2·(x^2 + y^2) ≡ 0 (mod p) by (v)
⇒ (yx')^2 ≡ -1 (mod p)
⇒ the congruence t^2 ≡ -1 (mod p) has a solution
⇒ -1 is a quadratic residue mod p.
This is not possible [because p ≡ 3 (mod 4)].
⇒ Our supposition is wrong. Hence p does not divide m, and since p | n = d^2·m and p is prime, we get p | d^2, hence p | d.
Let p^λ be the highest power of p in the prime factorization of d, where λ is a positive integer.
⇒ p^(2λ) is the highest power of p in the prime factorization of d^2
⇒ p^(2λ) is the highest power of p in the prime factorization of n = d^2·m (because p does not divide m)
⇒ the power of p in the prime factorization of n is even.

Converse:- Let each prime factor of n of the form 4k+3 occur to an even power in the prime factorization of n. Let
n = p1^k1 · p2^k2 · … · pr^kr
be the prime factorization of n, where each pi of the form 4k+3 has ki even. Collecting the even powers, write n = N^2·m, where m is a product of 2's and of primes of the form 4k+1.
Since each prime of the form 4k+1 is a sum of two squares by the Two Squares Theorem of Fermat, and 2 = 1^2 + 1^2 is a sum of two squares, repeated use of the Lemma implies that
m is a sum of two squares ……….. (vi)
Also
N^2 = N^2 + 0^2 is a sum of two squares ……….. (vii)
(vi), (vii) and one more use of the Lemma imply that
n = N^2·m is a sum of two squares.

Examples[5]
(i) 135 = 3^3·5 is not written as a sum of two squares, as the power of the prime factor 3 (of the form 4k+3 for k = 0) in the prime factorization of 135 is not even.
(ii) 153 = 3^2·17 is written as a sum of two squares, as the power of the prime 3 (of the form 4k+3 for k = 0) in the prime factorization of 153 is even. Also 153 = 144 + 9 = 12^2 + 3^2, a sum of two squares.
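The characterization theorem and its examples can be checked mechanically for small n. The sketch below (function names are ours) compares the prime-factor criterion with a brute-force search:

```python
def is_sum_of_two_squares(n):
    """Brute force: does n = a^2 + b^2 for some integers a, b >= 0?"""
    a = 0
    while a * a <= n:
        b = int((n - a * a) ** 0.5)
        if a * a + b * b == n:
            return True
        a += 1
    return False

def criterion(n):
    """True iff every prime factor p ≡ 3 (mod 4) divides n to an even power."""
    p = 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if p % 4 == 3 and e % 2 == 1:
            return False
        p += 1
    return n % 4 != 3  # any leftover prime factor occurs to the first power

# The two tests agree on every n up to 500.
for n in range(1, 500):
    assert is_sum_of_two_squares(n) == criterion(n)
print(criterion(135), criterion(153))  # False True
```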
Sum of three squares
Theorem:[6]- No positive integer of the form 4^m(8λ+7) can be written as a sum of three squares, where m and λ are non-negative integers.
Proof:- Let
n = 4^m(8λ+7) ……….. (1)

Case I: m = 0
So (1) gives n = 8λ+7, i.e.
n ≡ 7 (mod 8) ……….…(2)
Suppose n is a sum of three squares; let
n = a^2 + b^2 + c^2 ………….(3)
where a, b, c are integers.
Now for any integer a, a ≡ 0, ±1, ±2, ±3 or 4 (mod 8), so
a^2 ≡ 0, 1 or 4 (mod 8) ………….(4)
In the same way,
b^2 ≡ 0, 1 or 4 (mod 8) ……….…(5)
c^2 ≡ 0, 1 or 4 (mod 8) ……….…(6)
(4), (5) and (6) imply
a^2 + b^2 + c^2 ≡ 0, 1, 2, 3, 4, 5 or 6 (mod 8),
which is not possible by (2) and (3). So our supposition is wrong
⇒ n is not written as a sum of three squares.

Case II: m ≥ 1
Suppose n is a sum of three squares; let
n = a^2 + b^2 + c^2, where a, b, c are integers
⇒ a^2 + b^2 + c^2 = 4^m(8λ+7) …………. (7)
⇒ a^2 + b^2 + c^2 is even, because 4^m(8λ+7) is a multiple of 4
⇒ either all of a, b, c are even, or two are odd and one is even.
Suppose a, b are odd and c is even. Let
a = 2r1 + 1, b = 2r2 + 1, c = 2s,
where r1, r2 and s are integers. Then
a^2 + b^2 + c^2 = 4(r1^2 + r1 + r2^2 + r2 + s^2) + 2,
which is not a multiple of 4, contradicting (7). So all of a, b, c are even. Let
a = 2a1, b = 2b1, c = 2c1, where a1, b1, c1 are integers.
Put these in (7):
a1^2 + b1^2 + c1^2 = 4^(m-1)(8λ+7) ………… (8)
If m - 1 ≥ 1, the right side of (8) is again a multiple of 4, and as above we can prove that
a2^2 + b2^2 + c2^2 = 4^(m-2)(8λ+7),
where a2, b2 and c2 are integers. Repeating the above process, after m steps in all we get
am^2 + bm^2 + cm^2 = 8λ + 7,
where am, bm and cm are integers. This implies that 8λ+7 is a sum of three squares, which is not possible by Case I. So our supposition is wrong. Hence 4^m(8λ+7) cannot be written as a sum of three squares.
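The exclusion of numbers of the form 4^m(8λ+7) can likewise be confirmed by brute force for small n; by Legendre's three-square theorem this form is in fact the only obstruction, which the loop below verifies up to 300 (a sketch; the function names are ours):

```python
def is_sum_of_three_squares(n):
    """Brute force: does n = a^2 + b^2 + c^2 for some integers a, b, c >= 0?"""
    a = 0
    while a * a <= n:
        b = a
        while a * a + b * b <= n:
            c2 = n - a * a - b * b
            c = int(c2 ** 0.5)
            if c * c == c2:
                return True
            b += 1
        a += 1
    return False

def excluded_form(n):
    """True iff n = 4^m * (8*k + 7) for some m, k >= 0."""
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7

# Representable as three squares exactly when not of the excluded form.
for n in range(1, 300):
    assert is_sum_of_three_squares(n) == (not excluded_form(n))
```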
Examples
1. 15 is of the form 8λ+7 for λ = 1, and 15 = 3^2 + 2^2 + 1^2 + 1^2 is not a sum of three squares.
2. 240 is of the form 4^m(8λ+7) for m = 2 and λ = 1, and 240 = 12^2 + 8^2 + 4^2 + 4^2 is not a sum of three squares.
3. 459 is not of the form 4^m(8λ+7) for any m and λ, and 459 = 13^2 + 13^2 + 11^2 is a sum of three squares.

Sum of four squares
Coming to the four squares problem, we consider as lemmas the fundamental identity discovered by Euler, which allows one to express the product of two sums of four squares as such a sum, as well as the crucial result that the congruence x^2 + y^2 + 1 ≡ 0 (mod p) is solvable for any odd prime p.

Lemma 1 (Fundamental Identity of Euler)[7]: If the positive integers m and n are each the sum of four squares, then mn is likewise so representable.

Lemma 2 (Euler):[7] If p is an odd prime then the congruence x^2 + y^2 + 1 ≡ 0 (mod p) has a solution (x0, y0), where 0 ≤ x0 ≤ (p-1)/2 and 0 ≤ y0 ≤ (p-1)/2.

Theorem:[7] Given an odd prime p, there exists a positive integer m < p such that mp is the sum of four squares.
Proof: For an odd prime p, Lemma 2 implies that there exist integers x0 and y0 with
0 ≤ x0 ≤ (p-1)/2 and 0 ≤ y0 ≤ (p-1)/2 ……….. (1)
such that
x0^2 + y0^2 + 1 ≡ 0 (mod p) ………….(2)
⇒ x0^2 + y0^2 + 1^2 + 0^2 = mp, where m is a positive integer …………..(3)
Now (1) and (2) imply that
mp = x0^2 + y0^2 + 1 ≤ 2((p-1)/2)^2 + 1 < p^2,
i.e. m < p …………(4)
So (3) and (4) imply that there exists a positive integer m < p such that mp is a sum of four squares.
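Lemma 1 refers to Euler's four-square identity, stated here in full for reference:

```latex
\begin{aligned}
(a^2+b^2+c^2+d^2)(A^2+B^2+C^2+D^2) &= r^2+s^2+t^2+u^2, \quad\text{where}\\
r = aA+bB+cC+dD, \qquad & s = aB-bA+cD-dC,\\
t = aC-bD-cA+dB, \qquad & u = aD+bC-cB-dA .
\end{aligned}
```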
So, (3) & (4) imply that there exists an integer m < p such that mp is a sum of four squares.

Theorem:[8] Any prime p can be written as the sum of four squares.
Proof: The theorem is certainly true for p = 2, since 2 = 1² + 1² + 0² + 0². Thus we consider the case for odd primes.
Now let p be an odd prime. So, the above theorem implies that there exists an integer m < p such that mp is the sum of four squares.
Let n be the smallest positive integer such that np is the sum of four squares; say
np = a² + b² + c² + d²  ………… (1)
where a, b, c, d are integers, and also n < p because n ≤ m < p.
Claim: n = 1.
We make a start by showing that n is an odd integer. For a proof by contradiction, assume that n is even. Then a, b, c, d are all even; or all are odd; or two are even and two are odd. In any event we may rearrange them so that
a ≡ b (mod 2) and c ≡ d (mod 2).
It follows that (a + b)/2, (a − b)/2, (c + d)/2, (c − d)/2 are all integers, and (1) implies that
(n/2)p = ((a + b)/2)² + ((a − b)/2)² + ((c + d)/2)² + ((c − d)/2)²
is a representation of (n/2)p as a sum of four squares for the positive integer n/2 < n. This violates the minimal nature of n, giving us our contradiction. Hence n is odd.
There still remains the problem of showing that n = 1. Assume not; then n, being an odd integer, is at least 3. So it is possible to choose integers A, B, C, D such that
A ≡ a, B ≡ b, C ≡ c, D ≡ d (mod n)  … (2)
and |A| < n/2, |B| < n/2, |C| < n/2, |D| < n/2.
Here, A, B, C, D are the absolute least residues of a, b, c, d respectively modulo n.
Then
A² + B² + C² + D² ≡ a² + b² + c² + d² ≡ 0 (mod n),
and so
A² + B² + C² + D² = kn  …………… (3)
for some non-negative integer k.
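The conclusion of this descent argument, that every prime is a sum of four squares, can be confirmed directly for small primes. The following Python sketch is an added illustration, not part of the original proof:

```python
from math import isqrt

def four_square_rep(n):
    # search for a <= b <= c <= d with a^2 + b^2 + c^2 + d^2 = n; None if no such tuple
    for a in range(isqrt(n) + 1):
        for b in range(a, isqrt(n - a * a) + 1):
            for c in range(b, isqrt(n - a * a - b * b) + 1):
                d2 = n - a * a - b * b - c * c
                d = isqrt(d2)
                if d >= c and d * d == d2:
                    return (a, b, c, d)
    return None

# primes below 100, found here by simple trial division
primes = [p for p in range(2, 100) if all(p % q for q in range(2, p))]
```

Every entry of `primes` yields a representation, e.g. 7 = 1² + 1² + 1² + 2².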


Because of the restrictions on the size of A, B, C, D we have
kn = A² + B² + C² + D² < 4(n/2)² = n².
We cannot have k = 0, since this would signify that A = B = C = D = 0 and, in consequence, that n divides each of the integers a, b, c, d by (2); this implies that n² divides each of the integers a², b², c² and d², which further implies that n² divides their sum, i.e. n² | np by (1), or n | p, which is impossible in light of the inequality 1 < n < p. The relation kn < n² also allows us to conclude that k < n.
In sum: 0 < k < n.
Now (1) & (3), together with the Fundamental Identity of Euler, give
(np)(kn) = (a² + b² + c² + d²)(A² + B² + C² + D²)
i.e.
kn²p = r² + s² + t² + u²  ………… (4)
where
r = aA + bB + cC + dD,
s = aB − bA − cD + dC,
t = aC + bD − cA − dB,
u = aD − bC + cB − dA.
Now
r = aA + bB + cC + dD ≡ A² + B² + C² + D² ≡ 0 (mod n), by use of (2) and (3),
s = aB − bA − cD + dC ≡ AB − BA − CD + DC ≡ 0 (mod n), by use of (2),
i.e. n | r, and in the same way n | s, n | t, n | u.
So r/n, s/n, t/n, u/n are all integers.
Now (4) ⇒ kp = (r/n)² + (s/n)² + (t/n)² + (u/n)²
⇒ kp is a sum of four squares.
Since 0 < k < n, we therefore arrive at a contradiction to the choice of n as the smallest positive integer for which np is the sum of four squares. With this contradiction we have n = 1.
Put n = 1 in (1):
p = a² + b² + c² + d²,
which implies p is a sum of four squares, and the proof is complete.

Lagrange’s four square theorem[1, 9-10]
Statement: Any positive integer n can be written as the sum of four squares, some of which may be zero.
Proof: Clearly, the integer 1 is written as a sum of four squares: 1 = 1² + 0² + 0² + 0².
Assume that n > 1 and let n = p₁p₂…pᵣ be the factorization of n into primes (not necessarily distinct). We know that each pᵢ is written as a sum of four squares. Now the Fundamental Identity of Euler permits us to express the product of any two primes as a sum of four squares.


Applying the Fundamental Identity of Euler r − 1 times, we obtain the result that n = p₁p₂…pᵣ is written as a sum of four squares.

Example
Write 391 as a sum of four squares.
Solution: We use the Fundamental Identity of Euler to write this.
Fundamental Identity of Euler: If m = a² + b² + c² + d² and n = x² + y² + z² + t², where a, b, c, d, x, y, z, t are integers, then
mn = (a² + b² + c² + d²)(x² + y² + z² + t²)
= (ax + by + cz + dt)² + (ay − bx − ct + dz)² + (az + bt − cx − dy)² + (at − bz + cy − dx)².
We know 391 = 17 · 23
= (4² + 1² + 0² + 0²)(3² + 3² + 2² + 1²)
= (4·3 + 1·3 + 0·2 + 0·1)² + (4·3 − 1·3 − 0·1 + 0·2)² + (4·2 + 1·1 − 0·3 − 0·3)² + (4·1 − 1·2 + 0·3 − 0·3)²
= 15² + 9² + 9² + 2²
= a sum of four squares.

REFERENCES
1. David M. Burton, 1999, Elementary Number Theory, 2nd Edition: Wm. C. Brown Company Publishers.
2. Niven, I. and H. Zuckerman, 1980, An Introduction to the Theory of Numbers, 4th Edition, New York: John Wiley and Sons.
3. Hardy, G. H. and E. M. Wright, An Introduction to the Theory of Numbers, Oxford, 1954.
4. K. Rosen, Elementary Number Theory and its Applications: Addison-Wesley Publishing Co., 1993.
5. Roberts, Joe, 1977, Elementary Number Theory, Cambridge, Mass.: MIT Press.
6. Starke, Harold, 1970, An Introduction to Number Theory, Chicago: Markham.
7. Stewart, B. M., 1964, Theory of Numbers, 2nd edition, New York: Macmillan.
8. Landau, E., 1952, Elementary Number Theory, Trans. Goodman, New York: Chelsea.
9. Burton, David, 1985, The History of Mathematics: An Introduction, Boston: Allyn and Bacon.
10. Uspensky, J. and M. A. Heaslet, 1939, Elementary Number Theory, New York: McGraw-Hill.
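The worked computation of 391 can be checked mechanically. The following Python sketch (an added illustration, not from the paper) encodes Euler's identity with the same sign convention as the worked example and reproduces 391 = 15² + 9² + 9² + 2²:

```python
def euler_product(m4, n4):
    # Euler's four-square identity, with the sign convention of the worked example:
    # (a^2+b^2+c^2+d^2)(x^2+y^2+z^2+t^2)
    #   = (ax+by+cz+dt)^2 + (ay-bx-ct+dz)^2 + (az+bt-cx-dy)^2 + (at-bz+cy-dx)^2
    a, b, c, d = m4
    x, y, z, t = n4
    return (a*x + b*y + c*z + d*t,
            a*y - b*x - c*t + d*z,
            a*z + b*t - c*x - d*y,
            a*t - b*z + c*y - d*x)

rep = euler_product((4, 1, 0, 0), (3, 3, 2, 1))  # 17 * 23 = 391
```

The identity holds for arbitrary integer quadruples, not only for this instance.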


Analysis of Material Degradation in Chlorine Environment of Power Plants

Harminder Singh
Guru Nanak Dev University, Regional Campus, Jalandhar, Punjab, India
harminder10@gmail.com

ABSTRACT
The chlorine-containing deposits in the power plant environment are very harmful in terms of degrading the plant component materials. The cyclic reaction started by chlorine gas shortens the life of the plant components. In this study, this cyclic reaction is studied in detail, which is required further to find preventive methods to slow down the degradation of the plant components, especially in waste incinerators.

Keywords
Boiler Tubes, Incinerator, Chlorine, Active oxidation, Hot corrosion

1. INTRODUCTION
The degradation and hence failure of the boiler, superheater and other material components of the power plant industry is still a serious issue. The main reason for this failure is the presence of corrosive elements in the flue gases generated during burning of the fuel in these plants. These flue gas elements are more aggressive in nature in energy producing plants like waste incinerators, where alkali and heavy metals become highly corrosive in the presence of chlorine. The high degradation of materials in the presence of chlorine still needs an acceptable solution, and thus extensive study and research is still required in this area. Another important point which should be considered during this study is that fuels and their elements cannot be changed for power plants which are based primarily on waste management, like incinerators [1-4].

2. HOT CORROSION
The material degradation at high temperature under the presence of a thin layer of molten salts under suitable atmosphere has been known since the year 1940. This phenomenon of corrosive failure under the presence of molten salts is known as hot corrosion. Hot corrosive failure of the components is more severe under the presence of chlorine-containing molten deposits, and was first observed in the failures of aircraft engines which mainly operated above chlorine-containing sea water [4-8]. During this phenomenon, the protective oxide layer usually formed on the metal/alloy surfaces is dissolved and a non-protective layer is formed, which results in accelerating the surface degradation [5,6]. The corrosive attack on boiler materials of thermal power plants and waste incinerators is also due to the hot corrosion phenomenon [9].

3. OXIDATION PRINCIPLE
It is a well known concept that metals and their alloys develop a self-healing layer of oxides of the metal/alloy elements during reaction with atmospheric oxygen. This surface layer acts as a barrier to the corrosive environment elements and prevents them from reacting with the metal/alloy elements. This results in safety of the metal/alloy surface below the oxide layer. However, there are molten deposits of the corrosive flue gas elements on the material surface in boiler environments. These deposits dissolve the surface oxide layers of the metals or alloys. Also, the presence of these deposits breaks the contact of the metal/alloys with the surrounding oxygen. The reducing atmosphere thus formed prevents the further formation of a protective oxide layer on the surface of the boiler materials [10]. In incinerators and gas turbines, the presence of salt deposits on the materials reduces the local oxygen pressure and results in destroying the protective oxide barriers on the surfaces of alloys [11,12]. As oxygen is a desirable component for the protective oxide layer, a reducing atmosphere around the substrate is more corrosive in nature. The destruction or failure of the surface oxide layer is the first step towards the start of the hot corrosion phenomenon on the metal/alloy surface. The presence of molten deposits, temperature variations and erosion caused by flue gas elements are some of the reasons for the failure of the surface layers. The dissolution of the surface layer by molten deposits is known as fluxing, which can be acidic or basic fluxing depending upon the surrounding environment. It is observed that acidic fluxing is more harmful than basic fluxing [5, 6, 13-16]. For low oxygen ion activity in the molten salts, it is observed that acidic fluxing takes place [16]. The study found that alloys having a high amount of Mo, V and W faced acidic fluxing. For incinerators, Hara et al. [17] reported the basic fluxing of the surface oxide layer of the Cr element, as:

Cr2O3 + 3 SO3 + 2 O2− = 2 CrO4²− + 3 SO2 (1)

4. CHLORINE BASED FAILURE MECHANISM
The presence of the chlorine element in the molten deposits on the surface of metals or alloys further increases the pace of the attack of corrosion elements. This attack is known as ‘Active-


Oxidation’, in which a combination of oxidation and chlorination reactions takes place on the material surface. During this mechanism chlorine gas is not consumed and just acts as a catalyst to speed up the oxidation of the material elements. This accelerated oxidation means the formation of a non-protective and porous surface oxide layer, and even a 0.1 % presence of chlorine can be very harmful, since chlorine is not consumed [9, 18-24]. Thus it is important to study the failure of the power plant components in the chlorine-containing environment. This mode of failure mechanism generally has the following steps [10]:
a) first, chlorine is generated from the surrounding environment and is present on the material surface; b) in the next step, chlorine goes inside the oxide layer to affect the metal/alloy elements; c) further, the chlorine reacts and forms chlorides; d) in the next stage, chlorides come in contact with the surrounding environment; e) oxides are formed from the chlorides by interaction with the available oxygen, and chlorine is again ready to start the cyclic reaction [9].

1. First step: chlorine produced on the surface
The presence of chloride-containing salts at temperatures higher than their fusion range, as in boilers and incinerators, is the first condition to start the degrading reaction. There are many different reactions through which chlorine is produced in the form of gas in the region close to the metal/alloy surface. It is not only chloride compounds: the presence of alkali and sulfur compounds contained in waste incinerators also activates the reaction. The sulfur compounds play a contributory role in promoting corrosion through their chlorine-liberating action, as shown in the following reactions:

4 NaCl + 2 H2O + O2 + 2 SO2 = 2 Na2SO4 + 4 HCl (2)

4 HCl + O2 = 2 Cl2 + 2 H2O (3)

2. Second step: generation of chlorides
After the generation of chlorine in the form of gas, it is able to enter the surface oxide layer of the materials and starts interacting with the base material elements. The entrance paths for the chlorine are expected to be cracks and pores in the surface layer. Near the metal surface the partial pressure of oxygen is low and that of chlorine is high, due to the presence of chloride deposits in the incinerator conditions. A chlorine pressure of 10^-10 to 10^-13 bar is enough for the formation of chlorides of the metal/alloy elements, and for these reactions it is found that the Gibbs free energy has a high value towards the negative side. Thus the thermodynamics is in favor of the formation of chlorides, in gaseous or volatile solid form. The general reaction for this interaction of elements with chlorine is:

Me(s) + x/2 Cl2(g) = MeClx(s, g) (4)

Because of the high negativity of the Gibbs free energy, the chloride of chromium has a high preference of formation in comparison to those of iron and nickel. The growth of chlorides between the oxide and the metal surface weakens the bonding of the oxide layer with the metal and thus decreases the protective ability of the oxide layer [9-12, 19].

3. Third step: chloride vaporization
In the next step, after the formation of the chlorides, these become vaporized because of the high enough vapor pressure, as:

MeClx(s) = MeClx(g) (5)

4. Last step: reaction of gaseous chlorides
The gaseous chlorides formed in the third step come out through the passages in the oxide layer and interact with the surrounding environment. The atmosphere outside the oxide scale is oxidizing and thus has a high partial pressure of oxygen. Due to this high value of oxygen pressure, the chlorides coming out in gaseous form interact with oxygen as:

2 MeCl2(g) + 3/2 O2(g) = Me2O3(s) + 2 Cl2(g) (6)

It is noticed that oxides of the metal/alloy elements are formed and the chlorine which was consumed in the second step is again released into the atmosphere. This chlorine is again ready to interact with the metal elements, and the cycle of reactions starts again.

Also, the oxide formed in this cycle of four steps is not formed through direct interaction between the metal/alloy elements and oxygen. This type of oxide formed on the surface is noticed to be porous and loosely bonded with the surface. Thus, this oxide layer is not able to provide a blockage to the corrosive and damaging environment elements.

For an alloy containing different elements, depending upon the vapor pressures of the chlorides and the partial pressures of oxide formation, the oxide layer consists of a layered structure. The chromium oxide forms at low oxygen pressure in comparison to iron and nickel. Thus the region of the layer close to the surface is rich in the oxide of chromium and the upper region is noticed to be rich in nickel oxide, and in between a mixture of the oxides of chromium, iron and nickel is observed. Thus, if during a study this type of layered oxide structure is noticed on the boiler or other components, then it can be concluded that ‘Active oxidation’ is the possible phenomenon in those environment conditions [11, 12, 19].

Thus, ways should be found out to stop or hinder this phenomenon by various available techniques. The study of these techniques is not in the scope of this study.

5. CONCLUSION
The presence of chloride deposits, such as NaCl, accelerates the corrosion in waste incinerators and marine based gas turbines. The NaCl acts as a catalyst accelerating the corrosion reaction. The mechanism of chlorine based active oxidation, especially under NaCl deposits, is the oxy-chloridation, chloridation and re-oxidation process. This mechanism is cyclic in nature and is harmful for the degradation of the materials used in these plants. Thus, a detailed study is still required to gain an in-depth view of this mechanism. Also, research should be focused to develop easy


and reliable methods to protect the precious materials from degradation at low cost.

REFERENCES
[1] Kuo, J-H., Tseng, H-H., Rao, P. S., and Wey, M-Y. 2008. The prospect and development of incinerators for municipal solid waste treatment and characteristics of their pollutants in Taiwan. Applied Thermal Engineering 28 (17-18), 2305-2314.
[2] Ganapathy, V. 1995. Recover heat from waste incineration. Hydrocarbon Processing (September 1995).
[3] Rademakers, P. 2008. Review on corrosion in waste incinerators, and possible effect of bromine. TNO report.
[4] Fordham, R.J., and Baxter, D. 2003. The impact of increasing demand for efficiency and reliability on the performance of waste-to-energy plants. Materials at High Temperature 20(1), 19-25.
[5] Sidhu, T. S., and Prakash, S. 2006. Hot corrosion and performance of nickel-based coatings. Current Science 90 (1), 41-47.
[6] Sidhu, T.S., and Agarwal, R. D. 2005. Hot corrosion of some superalloys and role of high-velocity oxy-fuel spray coatings-a review. Surface & Coatings Technology 198, 441-446.
[7] Rapp, R.A. 1986. Chemistry and electrochemistry of the hot corrosion of metals. Corrosion 42(10), 568-577.
[8] Rapp, R.A. 2002. Hot corrosion of materials: a fluxing mechanism? Corrosion Science 44, 209-221.
[9] Sorell, G. 1997. The role of chlorine in high temperature corrosion in waste-to-energy plants. Science and Technology Letters, Materials at High Temperatures, 137-150.
[10] Albina, D. O. 2005. Theory and experience on corrosion of waterwall and superheater tubes of waste-to-energy facilities. Thesis 2005, available online.
[11] Uusitalo, M. A., Vuoristo, P. M. J., and Mantyla, T. A. 2004. High temperature corrosion of coatings and boiler steels below chlorine containing salt deposits. Corrosion Science 46(6), 1311-1331.
[12] Uusitalo, M. A., Vuoristo, P. M. J., and Mantyla, T. A. 2003. High temperature corrosion of coatings and boiler steels in oxidizing chlorine containing atmosphere. Material Science and Engineering 346, 168-177.
[13] Eliaz, N., Shemesh, G., and Latanision, R. M. 2002. Hot corrosion in gas turbine components. Engineering Failure Analysis 9, 31-43.
[14] Goebel, J.A., and Pettit, F.S. 1970. Na2SO4-induced accelerated oxidation (hot corrosion) of nickel. Metall. Trans. 1, 1943-1954.
[15] Goebel, J.A., and Pettit, F.S. 1970. The influence of sulfides on the oxidation behavior of nickel-base alloys. Metall. Trans. 1, 3421-3429.
[16] Stringer, J. 1987. High-temperature corrosion of superalloys. Material Science Technology 3(7), 482-493.
[17] Hara, M., and Shinata, Y. 1992. Electrochemical studies on hot corrosion of Ni-Cr-Al alloys in molten Na2SO4-NaCl. Materials Transactions JIM 33(8), 758-765.
[18] Montgomery, M., and Larsen, O.H. 2001. Characterization of deposits and their influence on corrosion in waste incineration plants in Denmark. NACE International, Corrosion 2001, paper no. 01184.
[19] Zahs, A., Spiegel, M., and Grabke, H-J. 2000. Chloridation and oxidation of iron, chromium, nickel and their alloys in chloridizing and oxidizing atmospheres at 400-700°C. Corrosion Science 42, 1093-1122.
[20] Singh, H., Sidhu, T. S., and Kalsi, S.B.S. 2014. Behavior of cold sprayed superalloy in incinerator at 900°C. Surf. Eng. DOI: 10.1179/1743294414Y.0000000390.
[21] Singh, H., Sidhu, T. S., Kalsi, S.B.S., and Karthikeyan, J. 2014. Hot corrosion behavior of cold sprayed Ni-50Cr coating in an incinerator environment at 900°C. J. Therm. Spray Technol. DOI: 10.1007/s11666-014-0213-z.
[22] Singh, H., Sidhu, T. S., Karthikeyan, J., and Kalsi, S.B.S. 2015. Evaluation of characteristics and behavior of cold sprayed Ni-20Cr coating at elevated temperature in waste incinerator plant. Surf. Coat. Technol. 261, 375-384.
[23] Singh, H., and Sidhu, T. S. 2013. High temperature corrosion behavior of Ni-based superalloy Superni-75 in the real service environment of medical waste incinerator. Oxid. Met. 80 (5-6), 651-668.
[24] Singh, H., Sidhu, T. S., and Kalsi, S.B.S. 2014. Behavior of Ni-based superalloys in an actual waste incinerator plant under cyclic conditions for 1,000 h at 900°C. Corrosion. DOI: 10.5006/1163.
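As a closing sanity check on the reaction scheme in this paper, the elemental balance of reactions (2), (3) and (6) can be verified programmatically. The following Python sketch is an added illustration, not part of the paper; it treats "Me" as a generic metal symbol and uses the doubled, integer-coefficient form of reaction (6):

```python
from collections import Counter

def side(species):
    # species: list of (coefficient, {element: count}) pairs; returns total element counts
    total = Counter()
    for coeff, formula in species:
        for elem, n in formula.items():
            total[elem] += coeff * n
    return total

NaCl = {"Na": 1, "Cl": 1}; H2O = {"H": 2, "O": 1}; O2 = {"O": 2}
SO2 = {"S": 1, "O": 2}; Na2SO4 = {"Na": 2, "S": 1, "O": 4}
HCl = {"H": 1, "Cl": 1}; Cl2 = {"Cl": 2}
MeCl2 = {"Me": 1, "Cl": 2}; Me2O3 = {"Me": 2, "O": 3}

reactions = [
    # (2): 4 NaCl + 2 H2O + O2 + 2 SO2 -> 2 Na2SO4 + 4 HCl
    ([(4, NaCl), (2, H2O), (1, O2), (2, SO2)], [(2, Na2SO4), (4, HCl)]),
    # (3): 4 HCl + O2 -> 2 Cl2 + 2 H2O
    ([(4, HCl), (1, O2)], [(2, Cl2), (2, H2O)]),
    # (6), doubled to integer coefficients: 4 MeCl2 + 3 O2 -> 2 Me2O3 + 4 Cl2
    ([(4, MeCl2), (3, O2)], [(2, Me2O3), (4, Cl2)]),
]
balanced = all(side(lhs) == side(rhs) for lhs, rhs in reactions)
```

Each reaction conserves every element, confirming the cyclic release of chlorine described in the text.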


Multiple scattering effects of gamma ray in some titanium compounds

Lovedeep Singh & Pooja Rani, Sant Longowal Institute of Engineering and Technology, Longowal, 148106, Sangrur, Punjab (India); lovedeepgehal@gmail.com
Amrit Singh, Sant Longowal Institute of Engineering and Technology, Longowal, 148106, Sangrur, Punjab (India); amritsliet@gmail.com
Manoj Kumar Gupta, Department of Applied Sciences, BGIET, Sangrur, Punjab (India); mkgupta.sliet@gmail.com

ABSTRACT
Absorption buildup factors for some titanium compounds, namely titanium dioxide (TiO2), titanium carbide (TiC), titanium nitride (TiN) and titanium silicate (TiSi), have been calculated using the G.P. fitting technique up to a penetration depth of 40 mean free paths (mfp). The variation of the energy absorption buildup factor with incident photon energy for the selected compounds of titanium has been studied. It has been found that the maximum value of the energy absorption buildup factor shifts to slightly higher incident photon energy with the increase in the equivalent atomic number of the selected compounds of titanium.

Keywords
Titanium compounds, multiple scattering, buildup factor.

1. INTRODUCTION
Titanium is the ninth most abundant element in nature. Due to its hardness, brightness and strength, it is widely used in the aerospace, sports and medicine fields. Titanium compounds are also found useful in various fields. They find applications in paints, food and cosmetic coloring, crayons, UV protection, lubricants, wear resistant tools and many more. Due to the many applications of compounds of titanium, an attempt has been made to check their possibility in radiation shielding. Radiation physicists face a main problem of leakage of radiation due to Compton multiple scattering events. This multiple scattering is the main reason for violation of the Lambert-Beer law, i.e. I = I0·e^(-µx). In order to maintain this law, a correction factor B, called the buildup factor, is used. The buildup factor measures the degree up to which the Lambert-Beer law is violated. The intensity equation after modification becomes I = B·I0·e^(-µx), where B is the buildup factor. Note that the buildup factor is always equal to or greater than unity.

There are two types of buildup factors: the energy absorption buildup factor and the exposure buildup factor. In the former, the quantity of concern is the absorbed or deposited energy in the material and the detector response function is that of absorption in the material, whereas in the latter, the quantity of concern is the exposure and the detector response function is that of absorption in air.

Several different methods, such as the G.P. fitting method given by Harima et al. [1] and the invariant embedding method given by Shimizu and Hirayama [2] and Shimizu et al. [3], are available for computing buildup factors. American National Standard ANSI/ANS-6.4.3 [4] (American National Standard, 1991) used the G.P. fitting method and provided buildup factor data for 23 elements, one compound and two mixtures, viz. water and air, at suitable intervals up to a penetration depth of 40 mean free paths. M.J. Berger and J. Hubbell [5] provided for the first time a database of mass attenuation coefficients as well as cross-sections for about 100 elements, in the form of a software package named XCom, which is also capable of generating mass attenuation coefficients for compounds and mixtures. Y. Harima [6] has given a historical review and the current status of buildup factor calculations and applications for water, concrete and the elements Fe, Pb, Be, B, C, N, O, F, Na, Mg, Al, Si, P, S, Ar, K, Ca, Cu, Mo, Sn, La, Gd, W and U in the energy ranges 0.01 to 0.3 MeV and 0.5 to 10 MeV with penetration depth up to 40 mfp, using the codes ADJ-MOM, PALLAS and ASFIT. H. Hirayama & K. Shin [7] used the EGS4 Monte Carlo code to study multilayer gamma ray exposure buildup factors up to 40 mfp for water, iron and lead at energies 0.1, 0.3, 0.6, 1.0, 3.0, 6.0 and 10 MeV. G. S. Sidhu et al. [8] computed energy absorption buildup factors for some biological samples, viz. cholesterol, chlorophyll, hemoglobin, muscle, tissue, cell and bone, in the energy range of 0.015 to 15.0 MeV with penetration depth up to 40 mfp, using the G.P. fitting method. Shimizu et al. [9] compared the buildup factor values obtained by three different approaches (G.P. fitting, invariant embedding and the Monte Carlo method), and only small discrepancies were observed for low-Z elements up to 100 mean free paths. K. Trots et al. [10] proposed a support vector regression model for the estimation of gamma ray buildup factors for multi-layer shields of Al, Fe, Pb, water and concrete in the energy range of 5 to 10 MeV with penetration depth more than 10 mfp. P.S. Singh et al. [11] measured the variation of energy absorption buildup factors with incident photon energy and penetration depth for some commonly used solvents. T. Singh et al. [12] worked on the chemical composition dependence of exposure buildup factors for some polymers.

After going through the literature, it has been observed that, with the ever increasing use of gamma ray photons in medicine and bio-physics, there is a dire need of proper investigations concerning gamma ray multiple scattering effects on some titanium compounds.

2. COMPUTATIONAL WORK
The computation of the energy absorption buildup factor has been divided into three parts, as follows.

2.1. Computation of equivalent atomic numbers (Zeq)
The equivalent atomic number is a quantity similar to the atomic number of elements. It is the number given to a compound/mixture by giving proper weightage to the


Compton multiple scattering processes. However, the equivalent Z mostly possesses real (non-integer) values and it is an energy dependent parameter. As the buildup factor is a direct consequence of multiple scattering, the equivalent atomic number (Zeq) is used for the calculation of buildup factors. For the calculation of Zeq, the values of the Compton partial attenuation coefficient µcomp and the total attenuation coefficient µtotal were first obtained in cm²/g for the selected compounds of titanium at energy values from 0.015 to 15.0 MeV, using the WinXCom computer program (Gerward et al., 2001). The values of Zeq for the titanium compounds were calculated by the following formula:

Zeq = [Z1(log R2 - log R) + Z2(log R - log R1)] / (log R2 - log R1)

where Z1 and Z2 are the atomic numbers of the elements corresponding to the ratios R1 and R2 of µcomp to µtotal, and R (= µcomp/µtotal) is the ratio for the selected titanium compound at the particular energy value, which lies between the ratios R1 and R2 such that R1 < R < R2.

2.2. Computation of G.P. fitting parameters
The American National Standard (1991) has provided the energy absorption G.P. fitting parameters of 23 elements, one compound (H2O) and two mixtures (air and concrete) in the energy range of 0.015–15.0 MeV and up to a penetration depth of 40 mfp. The calculated values of Zeq for the selected titanium compounds were used to interpolate the G.P. fitting parameters (b, c, a, Xk and d) for the energy absorption buildup factor, in the energy range 0.015–15.0 MeV and for penetration depths of 1–40 mfp. The formula (Sidhu et al., 2000) used for the interpolation of the G.P. fitting parameters is as follows:

P = [P1(log Z2 - log Zeq) + P2(log Zeq - log Z1)] / (log Z2 - log Z1)

where Z1 and Z2 are the atomic numbers of the elements between which the equivalent atomic number Zeq of the selected titanium compound lies. Here P1 and P2 are the values of the G.P. fitting parameter corresponding to the atomic numbers Z1 and Z2 respectively at a given energy.

2.3. Computation of buildup factors
The computed G.P. fitting parameters (b, c, a, Xk and d) were then used to calculate the energy absorption buildup factors for the selected compounds of titanium at some standard incident photon energies in the range of 0.015–15.0 MeV and up to a penetration depth of 40 mfp, with the help of the G.P. fitting formula, as given by the following equations (Harima et al., 1986):

B(E, X) = 1 + (b - 1)(K^X - 1)/(K - 1)   for K ≠ 1

B(E, X) = 1 + (b - 1)X   for K = 1

where

K(E, x) = c·x^a + d·[tanh(x/Xk - 2) - tanh(-2)]/[1 - tanh(-2)]   for X ≤ 40 mfp

3. RESULTS AND DISCUSSION
The energy absorption buildup factors for the selected compounds of titanium are shown in Figs. 3.1 to 3.4. From these figures it is found that the energy absorption buildup factor for all the selected compounds of titanium in the energy region of 0.015–15.0 MeV, up to the penetration depth of 40 mean free paths, is always greater than one. This is due to the buildup of photons from scattering at the larger penetration depths of the titanium compounds. The dependence of the energy absorption buildup factor on incident photon energy and penetration depth is discussed in the following.

Figs. 3.1–3.4 show the variation of the energy absorption buildup factor with incident photon energy for all the selected compounds of titanium at 1, 5, 10, 20, 30 and 40 mean free paths respectively. It has been found that, in spite of their different equivalent atomic numbers (Zeq), the selected titanium compounds show almost similar patterns with the increase in incident photon energy. Initially, the energy absorption buildup factor increases with the increase in incident photon energy for all of the selected compounds of titanium.

[Figure: absorption buildup factor (log scale) versus photon energy (MeV) for TiC at 1, 5, 10, 20, 30 and 40 mfp]
Fig. 3.1: Variation of energy absorption buildup factor with incident photon energy in case of titanium carbide.

The dominance of absorption processes in the lower and higher energy regions (photo-electric absorption in the lower and pair production/triplet production in the higher photon energy region) is the reason for this variation of the energy absorption buildup factor with incident photon energy. In the initial stage, in the lower photon energy region, photo-electric absorption is the dominant photon interaction process, so the energy absorption buildup factor shows minimum values for these compounds of titanium.

With the increase in the incident photon energy, the Compton scattering process overtakes the photo-electric absorption process. This results in multiple Compton scattering events, which increase the absorption buildup factor. Since the scattering process decreases the energy of the incident photon, multiple scattering events result in the maximum values of the energy absorption buildup factors for these titanium compounds. With further increase in incident photon energy, almost identical values of the energy absorption buildup factors for the titanium compounds have been observed.
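The computational steps of Section 2 can be sketched in code. The following Python illustration is added here and is not from the paper; it implements the modified Lambert-Beer law from the Introduction, the logarithmic interpolation formula, and the G.P. fitting form of B(E, x). Any numeric parameter values used with these functions are illustrative and are not taken from the ANSI/ANS-6.4.3 tables:

```python
import math

def intensity(I0, mu, x, B=1.0):
    # modified Lambert-Beer law I = B * I0 * exp(-mu*x); B = 1 recovers the unmodified law
    return B * I0 * math.exp(-mu * x)

def interpolate_param(P1, P2, Z1, Z2, Zeq):
    # logarithmic interpolation of a G.P. parameter between bracketing elements Z1 < Zeq < Z2
    return (P1 * (math.log(Z2) - math.log(Zeq)) +
            P2 * (math.log(Zeq) - math.log(Z1))) / (math.log(Z2) - math.log(Z1))

def gp_buildup(b, c, a, Xk, d, x):
    # G.P. fitting form of the buildup factor B(E, x) at penetration depth x (in mfp)
    K = c * x**a + d * (math.tanh(x / Xk - 2.0) - math.tanh(-2.0)) / (1.0 - math.tanh(-2.0))
    if abs(K - 1.0) < 1e-12:
        return 1.0 + (b - 1.0) * x
    return 1.0 + (b - 1.0) * (K**x - 1.0) / (K - 1.0)
```

With K = 1 the formula reduces to the linear form 1 + (b - 1)x, and the resulting B is never below unity for b ≥ 1, consistent with the discussion above.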


[Figure: absorption buildup factor (log scale) versus photon energy (MeV) for TiN at 1, 5, 10, 20, 30 and 40 mfp]
Fig. 3.2: Variation of energy absorption buildup factor with incident photon energy in case of titanium nitride.

In the energy region from 3.0 to 15.0 MeV, one more absorption process, i.e. pair/triplet production, starts dominating, which not only decreases the absorption buildup factor values but also exhibits a significant variation to a small extent due to the chemical composition of the selected titanium compounds. It may be due to the fact that the dependence of the cross-section for this absorption process on the equivalent atomic number (Zeq) is not as significant as for the photo-electric absorption process.

[Figure: absorption buildup factor (log scale) versus photon energy (MeV) for TiO2 at 1, 5, 10, 20, 30 and 40 mfp]
Fig. 3.3: Variation of energy absorption buildup factor with incident photon energy in case of titanium dioxide.

[Figure: absorption buildup factor (log scale) versus photon energy (MeV) for TiSi2 at 1, 5, 10, 20, 30 and 40 mfp]
Fig. 3.4: Variation of energy absorption buildup factor with incident photon energy in case of titanium silicate.

4. CONCLUSION
From the above studies, it can be concluded that the degree of violation of the Lambert-Beer law, i.e. the value of the energy absorption buildup factor, is less in the energy regions where absorption processes are dominant over the scattering process and when the penetration depth of the material is least. It is also found that for the higher equivalent atomic number (Zeq) of the interacting material, the value of the energy absorption buildup factor is least.
The energy absorption buildup factor depends strongly on the nature of the material in the lower energy region, becomes almost independent in the intermediate energy region and shows a little dependence in the higher energy region.

5. ACKNOWLEDGMENTS
The authors are thankful to the staff of the Department of Physics, Sant Longowal Institute of Engineering and Technology, Longowal, Sangrur, Punjab, India for their cooperation in doing this work.

REFERENCES
[1] Harima, Y., Tanaka, S., Sakamoto, Y. & Hirayama, H. 1990. Development of new gamma-ray buildup factor and application to shielding calculations. Nucl. Sci. & Tech., 74-84.
[2] Shimizu, A. & Hirayama, H. 2003. Calculation of gamma-ray buildup factors up to depths of 100 mfp by the method of invariant embedding. Improved treatment of bremsstrahlung. Nucl. Sci. & Eng. 40, 192-200.
[3] Shimizu, A. 2002. Calculations of gamma-ray buildup factors up to depth of 100 mfp by the method of invariant embedding. Nucl. Sci. & Tech. 39, 477-486.
[4] American National Standard, ANSI/ANS-6.4.3 (1991).
[5] Berger, M.J., Hubbell, J.H., 1987. NBSIR 87-3597: Photon Cross Sections on a Personal Computer. National Institute of Standards and Technology, Gaithersburg, MD.

749
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

[6] Harima Y., 1993. An historical review and current status of buildup factor calculations and applications. Radiat. Phys. Chem. 41, 631-672.
[7] Hirayama H. & Shin K., 1998. Application of the EGS4 Monte Carlo code to a study of multilayer gamma-ray exposure buildup factors of up to 40 mfp. J. Nucl. Sci. Technol. 35, 816-829.
[8] Sidhu G.S., Singh P.S. & Mudahar G.S., 1999. Energy absorption buildup factor studies in biological samples. Radiat. Protect. Dosim. 86, 207-216.
[9] Shimizu A., Onda T. & Sakamoto Y., 2004. Calculation of Gamma-Ray Buildup Factors up to Depths of 100 mfp by the Method of Invariant Embedding, (III). Nucl. Sci. & Tech. 41, 413-424.
[10] Trots K., Smuc T. & Pevec D., 2007. Support vector regression model for the estimation of gamma ray buildup factors for multi-layer shields. Annal. Nucl. Energy 34, 939-952.
[11] Singh P.S., Singh T. & Kaur P., 2008. Variation of energy absorption buildup factors with incident photon energy and penetration depth for some commonly used solvents. Annal. Nucl. Energy 35, 1093-1097.
[12] Singh T., Kumar N. & Singh P.S., 2008. Chemical composition dependence of exposure buildup factors for some polymers. Annal. Nucl. Energy 36, 114-120.


Measurements of radon gas concentration in soil

Navpreet Kaur, Amrit Singh, Manpreet Kaur, A. S. Dhaliwal
Sant Longowal Institute of Engineering and Technology, Longowal, 148106, Sangrur, Punjab (India)
navpreetkarhwal@gmail.com, amritsliet@gamil.com, dhaliwalas@hotmail.com

ABSTRACT
Radon and its decay products are the main contributors to the total inhalation dose in the living environment. The concentration of radon in soil gas is about a thousand times higher than in the open environment, so it is necessary to measure it. Soil gas 222Rn (radon) concentrations were measured at 5 locations in Sant Longowal Institute of Engineering and Technology (SLIET), Longowal, using a RAD7 detector (Durridge Company Inc., USA). Measurements were carried out at sampling depths of 30, 40 and 50 cm. The average 222Rn concentrations varied from 298 Bqm-3 to 528 Bqm-3 at 30 cm and from 1390 Bqm-3 to 3327 Bqm-3 at 40 cm; at 50 cm they ranged from 9685 Bqm-3 to 52958 Bqm-3. Location point 5 has the maximum radon concentration at all three depths compared with the other locations. High radon levels in soil gas in any area increase the chance of lung cancer.

Keywords
Radon, RAD7, soil probe

1. INTRODUCTION
Radon is a radioactive, colorless, tasteless, toxic noble gas occurring naturally as an indirect decay product of uranium and thorium. It is a monatomic gas. Its most stable isotope, 222Rn, has a half life of 3.8 days. 222Rn decays through the radioactive progeny 218Po, 214Pb, 214Bi and 214Po until it reaches the final stable isotope 210Pb. Radon comes from the soil, building and decoration materials, outdoor air etc. The world average soil radon concentration is 7400 Bq/m3. People are not vigilant about radon because it is invisible and intangible. Nazaroff and Nero [1] observed that radon enters the body through breathing and that the short-lived radionuclides from its decay are deposited in the bronchial, lung and kidney tissue, where they release α-particles that produce internal radiation injuries. This can damage lung tissue and lead to lung cancer over the course of a lifetime. Kullab et al. [2] and the UNSCEAR 1988 report [3] indicate that thoron and its decay products contribute 55% of the total inhalation dose to the human population. Gupta et al. [4], Khatibeh et al. [5] and Patra et al. [6] described that radon and thoron come from the soil and building materials, because uranium and radium have been uniformly distributed in these materials since the origin of the earth. Indoor radon–thoron levels depend upon many factors such as the geological setting of the area, the nature of the soil, meteorological conditions, the living style of the dwellers and the type of building material used for house construction [7-9]. Przylibski and Zebrowski [10] and Przylibski et al. [11] have also measured radon concentrations in groundwater; the concentrations varied from 0.2 to 1645 Bq dm-3, and values exceeding 1000 Bq dm-3 constituted 3.9% of their results. A national residential radon survey was launched in April 2009 in Canada; it used alpha track detectors deployed for a minimum of three months (October–April) with the objective of testing about 18,000 homes over a two year period [12]. Chauhan and Kumar [13] noted that the radon gas concentration inside the soil is 10³–10⁴ times higher than that of the environment.

The aim of the present work is to find the radon concentration in soil gas at specific depths within the soil. To do this, the air must be removed from the soil and delivered to a RAD7 (radon monitoring system) of Durridge Company (USA) without dilution by outside air. The volume of gas removed depends on the technique used to extract it and on the porosity of the soil. In the present work we measured the dependence of the radon concentration on depth in the soil of Sant Longowal Institute of Engineering and Technology (SLIET) at different locations. High radon soil gas in a particular area increases the chance of lung cancer; keeping this in mind, the concentration of radon in soil gas was measured.

2. MATERIALS & METHODOLOGY
The present measurements of radon concentrations in soil gas were carried out using the RAD7 portable radon detector (Durridge Company Inc., USA). The experimental setup is shown in Fig 2. This system contains a solid-state ion-implanted planar silicon detector and a built-in pump with a flow rate of 1 L min-1. A desiccant (CaSO4) tube is used to absorb the moisture in the soil air, an infra-red HP8224OB alpha-numeric printer is placed on top of the RAD7, and nylon inlet filters (pore size 0.45 µm) block fine dust particles and radon daughters from entering the RAD7 chamber. The RAD7's internal sample cell is a 0.7 L conducting hemisphere with an average potential of 2200 V relative to the detector, which is placed at the center of the hemisphere. The detector operates at external relative humidity from 0% to 99% and internal humidity from 0% to 10%. The spectra are recorded in 200 channels, which the RAD7 groups into eight energy windows. A, B, C and D are the major windows, and E, F, G and H are the diagnostic windows. Window A covers the energy range from 5.40 to 6.40 MeV, showing the total counts from the 6.00 MeV particles of the 218Po decay. Window B covers the region 6.40 MeV to 7.40 MeV, showing the total counts of the 6.78 MeV particles from 216Po. Window C represents total counts of

751
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

the 7.69 MeV α particles from 214Po, while window D represents the total counts of the 8.78 MeV α particles from the decay of 212Po. In other words, windows A and B represent ''new'' 222Rn (radon) and 220Rn (thoron), while windows C and D represent ''old'' 222Rn and 220Rn, respectively. The RAD7 separates the radon and thoron signals by their daughters' unique alpha particle energies with little cross-interference (Durridge Company Inc., 2000).

In the present measurements the sniff mode of the system is used, which means that the RAD7 calculates radon concentrations from the data in window A only and thoron concentrations from the data in window B, while the data from windows C and D are ignored. In this mode, the built-in pump runs continuously. The soil gas probe used in our study was made of stainless steel with a length of 106 cm. The probe has an inner rod inside a hollow tube and a sampling outlet, and was inserted into the soil at depths of 30, 40 and 50 cm. After inserting the probe at the specified depth, the sampling outlet was connected to the inlet of the RAD7 via a small drying tube. Before each measurement the detector was purged for at least 5 to 10 min; after detecting high concentrations of radon, the purging time was much longer. In all the measurements the cycle time was 15 min and three cycles were performed. Thus the total duration of a single run at a specified depth was 20 min. The final result is an average of these measurement cycles. The experimental results are shown in Table 1, and the variation of the radon concentration with the location points is shown in Fig. 1.

3. RESULTS AND DISCUSSION
An average value of the radon concentration was calculated for each location point in Bqm-3. All the results are listed in Table 1, and Fig. 1 shows the average radon concentrations as a function of location point number. For location No. 1 at a depth of 50 cm below the ground surface, the radioactive level of 222Rn for the soil samples, as shown in Table 1, ranged from 9958±130 Bqm-3 to 9430±40 Bqm-3. For a depth of 40 cm the concentration varied from 1850±590 Bqm-3 to 123±93 Bqm-3, while at a depth of 30 cm the radon concentration varied from 614±58 Bqm-3 to 490±19 Bqm-3.

For location No. 2 at a depth of 50 cm, the radon concentration varied from 11608±310 Bqm-3 to 10300±240 Bqm-3. For a depth of 40 cm the concentration varied from 3480±110 Bqm-3 to 2249±330 Bqm-3, while at a depth of 30 cm the radon concentration varied from 610±87 Bqm-3 to 418±38 Bqm-3. For location No. 3 at a depth of 50 cm, the radon concentration varied from 27900±590 Bqm-3 to 25702±1300 Bqm-3. For a depth of 40 cm the concentration varied from 1428±110 Bqm-3 to 1290±2804 Bqm-3, while at a depth of 30 cm the radon concentration varied from 397±13 Bqm-3 to 319±90 Bqm-3.

For location No. 4 at a depth of 50 cm, the radon concentration varied from 14800±220 Bqm-3 to 11294±900 Bqm-3. For a depth of 40 cm the concentration varied from 2530±310 Bqm-3 to 2110±600 Bqm-3, while at a depth of 30 cm the radon concentration varied from 297±42 Bqm-3 to 218±90 Bqm-3. For location No. 5 at a depth of 50 cm, the radon concentration varied from 54328±900 Bqm-3 to 52108±100 Bqm-3. For a depth of 40 cm the concentration varied from 3469±30 Bqm-3 to 3158±30 Bqm-3, while at a depth of 30 cm the radon concentration varied from 407±30 Bqm-3 to 249±20 Bqm-3.

From the data in Table 1 one can see that in the majority of locations there is linearity between the radon concentration and the depth for the same location point, although the radon concentration in soil gas varies greatly with depth. Location point 5 has the maximum radon concentration at all three depths compared with the other locations. The average radon concentration is 22763±418 Bqm-3 at a depth of 50 cm, 2322±253 Bqm-3 at 40 cm and 412±36 Bqm-3 at 30 cm. The higher average radon concentration at greater depth may be due to the presence of a uranium prospect beneath the soil.

Fig. 1. Radon concentration as a function of the location point number: (a) at depth 30 cm, (b) at depth 40 cm, and (c) at depth 50 cm.
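The window logic described in the methodology can be sketched as follows. The bounds of windows A and B are those quoted in the text; treating windows C and D as bands around the quoted 7.69 MeV (214Po) and 8.78 MeV (212Po) peaks is an assumption made only for this illustration, not the instrument's actual firmware behaviour:

```python
# Illustrative sketch of the RAD7 energy-window logic described in the text.
# A and B bounds are from the paper; C and D bounds below are assumptions.
WINDOWS = {
    "A": (5.40, 6.40),  # "new" 222Rn: 6.00 MeV alphas from 218Po
    "B": (6.40, 7.40),  # "new" 220Rn: 6.78 MeV alphas from 216Po
    "C": (7.40, 8.20),  # "old" 222Rn: 7.69 MeV alphas from 214Po (assumed bounds)
    "D": (8.20, 9.00),  # "old" 220Rn: 8.78 MeV alphas from 212Po (assumed bounds)
}

def classify_alpha(energy_mev):
    """Return the window a given alpha energy (MeV) falls into, or None."""
    for name, (lo, hi) in WINDOWS.items():
        if lo <= energy_mev < hi:
            return name
    return None

def sniff_counts(events):
    """Sniff mode: only windows A (radon) and B (thoron) are counted;
    windows C and D are ignored, as the text explains."""
    radon = sum(1 for e in events if classify_alpha(e) == "A")
    thoron = sum(1 for e in events if classify_alpha(e) == "B")
    return radon, thoron
```

With B = 1... that is, with these bounds, the 6.00 MeV 218Po alphas land in window A and the 6.78 MeV 216Po alphas in window B, which is exactly the separation the sniff mode relies on.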


Table 1. Radon concentrations at different depths for the five location points in SLIET

                  Radon gas concentration in soil (Bqm-3)
Location point    30 cm       40 cm        50 cm
1                 394±35      1760±252     10852±418
2                 342±34      1390±254     9548±419
3                 474±36      2783±253     27253±516
4                 325±36      2348±253     13205±450
5                 524±37      3327±252     52958±556

Fig 2: Experimental setup of RAD7 for measuring soil radon gas at SLIET, Longowal (soil probe, desiccant (CaSO4) tube, connecting tubes and RAD7 detector).
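The depth-wise averages quoted in the discussion can be reproduced directly from Table 1 (a quick check, not part of the paper):

```python
from statistics import mean

# Radon concentrations (Bq/m^3) from Table 1, keyed by sampling depth (cm).
table1 = {
    30: [394, 342, 474, 325, 524],
    40: [1760, 1390, 2783, 2348, 3327],
    50: [10852, 9548, 27253, 13205, 52958],
}

averages = {depth: round(mean(values)) for depth, values in table1.items()}
# averages -> {30: 412, 40: 2322, 50: 22763}, matching the values
# reported in the discussion.
```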

ACKNOWLEDGMENTS
We thank Dr. A. S. Dhaliwal and Dr. M. M. Sinha from the Department of Physics, SLIET, Longowal, for providing the RAD7 detector for the radon concentration measurements, and for their full motivation and appreciation of our work.

REFERENCES
[1] Nazaroff, W.W., Nero Jr., A.V. 1988. Radon and Its Decay Products in Indoor Air. John Wiley & Sons, United States of America.
[2] Kullab, M.K., Al-Bataina, B.A., Ismail, A.M., Abumurad, K.M. 2001. Seasonal variation of radon-222 concentrations in specific locations in Jordan. Radiat. Meas. 34, 361–364.
[3] UNSCEAR, 1988. Sources, effects and risks of ionizing radiations. United Nations Scientific Committee on the Effects of Atomic Radiation, Report to the General Assembly. United Nations, New York.
[4] Gupta, M., Chauhan, R.P. 2011. Estimating radiation dose from building materials. Iran. J. Radiat. Res. 9, 187–194.
[5] Khatibeh, A.J.A.H., Ahmed, N., Matiullah, Kenawy, M.A. 1997. Natural radioactivity in marble stones, Jordan. Radiat. Meas. 28, 345–348.
[6] Patra, A.C., Sahoo, S.K., Tripathi, R.M., Puranik, V.D. 2013. Distribution of radionuclides in surface soils, Singhbhum Shear Zone, India and associated dose. Environ. Monit. Assess. 185, 7833–7843.
[7] Mehra, R., Bala, P. 2013. Estimation of annual effective dose due to radon level in indoor air and soil gas in Hamirpur district of Himachal Pradesh. J. Geochem. 142, 16-20.
[8] Plant, J.A., Saunders, A.D. 1996. The radioactivity earth. Radiat. Protect. Dosim. 68, 25–36.
[9] Singh, B., Singh, S., Bajwa, B.S., Singh, J., Kumar, A. 2011. Soil gas radon analysis in some areas of Northern Punjab, India. Environ. Monit. Assess. 174, 209–217.
[10] Przylibski, T.A., Zebrowski, A. 1996. Origin of radon in medical waters of Swieradów Zdrój. Nukleonika 41(4), 109-115.
[11] Przylibski, T.A. 2004. Concentration of 226Ra in rocks of the southern part of Lower Silesia (SW Poland). Journal of Environmental Radioactivity 75(2), 171-191.
[12] Health Canada, 2010. Cross Canada Residential Radon Survey. Retrieved September 27, 2010, from http://www.hc-sc.gc.ca/ewh-emt/radiation/radon/survey-sondage-eng.php.
[13] Chauhan, R.P., Kumar, A. 2013. Study of radon transport through concrete modified with silica fume. Radiat. Meas. 59, 59–65.


DETERMINATION OF CONDUCTIVITY OF HUMAN TEAR FILM AT 9.8 GHZ

Namita Bansal, A.S. Dhaliwal, K.S. Mann
Department of Physics
Sant Longowal Institute of Engineering and Technology, Longowal, Punjab, India
namitabansal.physics@gmail.com

ABSTRACT
There is increasing concern about the interaction of microwaves generated by sources such as cell phones and their base stations, televisions, radar and Bluetooth devices with the human body, and particularly with the eye, which is the organ most sensitive to microwave heating because of its poor heat-dissipating properties. The present research therefore determines the conductivity of the human tear film at 9.8 GHz, which in turn is expected to be helpful in determining the specific absorption rate of 9.8 GHz microwaves in this film. The apparatus consists of an X-band rectangular resonant cavity designed to resonate at 9.8 GHz in the H103 mode and a vector network analyzer. Because the very small volume of the tear film, of the order of one microlitre, is difficult to measure accurately, indirect measurements are made; in other words, the method used here does not require knowledge of the tear-film volume. It has been observed experimentally that the resonance frequency in the presence of pure water is almost the same as with the tear film; the conductivity of the film is therefore determined by directly measuring the loss tangent of the tear film using Slater's technique, together with the well-known calculated value of the real permittivity of pure water. Because of individual physiological differences, the average conductivity of the tear film of a small population of six subjects is reported. A survey of the literature shows that the electrical properties of the tear film are not available at any radio or microwave frequency; the results reported here are therefore the first of their kind.

Key Words: Tear Film, Conductivity, Resonant Cavity, Slater’s Technique
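The conversion from a measured loss tangent to a conductivity follows the standard relation σ = ω ε0 ε′ tan δ; the following sketch uses illustrative placeholder numbers, not the paper's measured values:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def conductivity(freq_hz, eps_real, loss_tangent):
    """Effective conductivity (S/m) from the standard relation
    sigma = omega * eps0 * eps' * tan(delta), with omega = 2*pi*f."""
    return 2 * math.pi * freq_hz * EPS0 * eps_real * loss_tangent

# Illustrative placeholder values for eps' and tan(delta) at 9.8 GHz
# (NOT the paper's measurements):
sigma_example = conductivity(9.8e9, 60.0, 0.3)
```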

REFERENCES
[1]. C. Gabriel, S. Gabriel and E. Corthout, Phys. Med. Biol. 41, 2231-2249 (1996).
[2]. S. Gabriel R. W. Lau and C. Gabriel, Phys. Med. Biol. 41, 2251-2269 (1996).
[3]. A. Peyman, S. Holden and C. Gabriel, Final Tech. Rep., MTHR Department of Health, UK, 2005.
[4]. J.C. Slater, Rev. Mod. Phys. 18, 441-511 (1946).
[5]. K. Hutcheon, M. deJong and F. Adams, J. Microw. Power Electromagn. Energy 27, 87-92 (1992).
[6]. U. Kaatze, J. Chem. Eng. Data 34, 371-374 (1989).


Biological Significance of Nitrogen Containing Heterocyclic Compounds - A Mini Review
Rajni Gupta
Department of Chemistry
S.D. College, Barnala-148101(Pb.)
guptarajni024@gmail.com

ABSTRACT
Heterocyclic compounds are found in abundance in nature and are of great significance to biological systems because of their unique structural features. They are found in a number of natural products like nucleic acids, vitamins, antibiotics, hormones etc. Nitrogen containing heterocyclic compounds are an important class of heterocyclic compounds that has made a significant contribution to medicinal chemistry. The type of compound depends upon the number of nitrogen atoms and their position: e.g. pyridine contains one nitrogen atom in the ring skeleton, while a ring with two nitrogen atoms is called a diazine (pyrimidine, pyrazine etc.). This review focuses on the importance of the pyrimidine class of compounds and their role as antibacterial, antifungal, anti-malarial, anticancer and other agents. The review also includes some marketed drugs having the pyrimidine ring nucleus and their applications.

Keywords- Heterocyclic compounds, Pyrimidine, anti-bacterial, antifungal, anti-malarial, anticancer.

INTRODUCTION
Pyrimidines are heterocyclic compounds similar to pyridine. Pyrimidine is a 6-membered cyclic compound that contains 4 carbon atoms and 2 nitrogen atoms, at positions 1 and 3 (1). Though pyrimidine itself is not very active, its derivatives are very important in medicinal chemistry.

Fig. 1 Pyrimidine

Heterocyclic rings are important components of hormones, vitamins, amino acids and synthesized drugs. Pyrrole, thiophene, piperidine, furan, pyridine, pyrrolidine, thiazole etc. are very important heterocyclic compounds used in synthesis (2). Within the family of heterocyclic compounds, nitrogen containing heterocycles find an important place in medicinal chemistry; azine, pyridine, diazine, pyrimidine and pyrazine are various examples of nitrogen containing heterocyclic compounds. Among all these compounds, pyrimidines are one of the most important categories, owing their importance to their pharmacological activities. Pyrimidine is one of the essential constituents of all the cells of living beings (3). Pyrimidines are important constituents of nucleic acids, as part of bases like thymine, uracil and cytosine (4).

Fig. 2 Cytosine

The pyrimidine skeleton is present in vitamins such as riboflavin, thiamine and folic acid (5).

Fig. 3 Riboflavin

Fig. 4 Thiamine


Fig. 5 Folic acid

The pyrimidine skeleton is present in many synthetic compounds, such as the barbituric acids and veronal, a derivative of barbituric acid, which are used as hypnotics (6).

Fig. 6 Barbituric acid

Fig. 7 Veronal

The presence of the pyrimidine ring in the bases of DNA and RNA is linked with their widespread biological activities. The literature survey indicated that compounds containing the pyrimidine nucleus exhibit pharmacological activities such as anti-microbial (7), anti-viral (8), anti-tumor (9), anti-malarial (10), anti-neoplastic (11), anti-HIV (12), cardiovascular (13), anti-fungal, anti-histaminic, anti-diabetic and herbicidal agents.

Most drugs with the pyrimidine nucleus fall into four categories, i.e. sulphonamides, anti-tumor agents, barbiturates and anti-microbial agents.

Polak A. and co-workers (14) synthesized Flucytosine, a fluorinated pyrimidine compound exhibiting antifungal activity towards infections caused by strains of Cryptococcus and Candida.

Fig. 8 Flucytosine

Sacchi et al (15) prepared compounds (9) and (10) that showed anti-inflammatory activities.

Fig. 9

Fig. 10

Cottom et al (16) synthesized (11) as inhibitors of adenosine kinase. Compound (11) was found to exhibit anti-inflammatory activity.

Fig. 11

Kompis I et al (17) synthesized Brodimoprim (12) and indicated that the compound has anti-bacterial activity.


Fig. 12 Brodimoprim

Molina et al (18) synthesized a number of pyrido(1,2-c)pyrimidine derivatives (13), (14) and (15) that showed effects on leucocytes.

Fig. 13

X = O, S

Fig. 14

Fig. 15

Bruno et al (19) synthesized a series of 2,5-cycloamino-5H-benzopyrano(4,3-d)pyrimidines (16) that showed anti-platelet activity.

Fig. 16

Gogia and co-workers (20) synthesized 1,3,4-thiadiazolo(3,2-a)pyrimidin-5-ones (17) that showed antifungal activity.

Fig. 17

Sondhi S.M. and co-workers (21) synthesized pyrimidine derivatives (18, 19) that were reported to exhibit analgesic and anti-inflammatory activities.

Fig. 18

Fig. 19

Rathod I.S. and co-workers (22) synthesized derivatives of thieno(2,3-d)pyrimidin-4(3H)-ones (20) and reported their analgesic activities.


R = -CH3, -NHPh; R1 = R2 = Ph, o-Anisyl

Fig. 20

Fig. 21

Gaby, Hamide and Gharab (23) prepared a series of pyrimidine-2-thiones (21), some of which displayed anti-cancer activities.

Some of the marketed pyrimidine and fused-pyrimidine drugs, along with their biological activities, are tabulated as follows:

Biological activity               Compound(s)
Anti-inflammatory and analgesic   Afloqualone, Epirizole, Celecoxib
Anti-cancer                       Nilotinib, Dasatinib, Bosutinib
Anti-bacterial                    Trimethoprim, Metiotrim, Tetroxoprim
Anti-fungal                       Flucytosine
Anti-viral                        Broxuridine, Idoxuridine
Respiratory tract infections      Brodimoprim
Urinary tract infections          Pipemidic acid
Hyperuricaemia disorders          Tisopurine

(The Structure column of the original table, showing the chemical structures of these drugs, is not reproduced here.)

CONCLUSION
Pyrimidine has a unique and characteristic role in our life, as it is present in all types of living cells. The heterocyclic skeleton exhibits vast medicinal and biological significance. The reviewed literature reveals a vast range of pharmacological activities exhibited by heterocyclic drugs having the pyrimidine moiety, and these days pharmaceutical companies are preparing pyrimidine based drugs; some of these marketed drugs are listed above. The biological significance of pyrimidine reflects its versatility, and it offers the medicinal chemist a continued interest in planning and developing new drugs, keeping heterocyclic chemistry an area of great interest.

REFERENCES
1. Gilchrist, T.L. Heterocyclic Chemistry. New York: Longman, 1997.
2. Devprakash, Udaykumar A.B. Journal of Pharmaceutical Research, 2011; 4(7):2436-2440.
3. Sasada T., Kobayashi F., Sakai N., Konakahara T. Organic Letters, 2009; 11:2161-2164.
4. Amir M., Javed S.A. and Kumar H. Indian J. Pharm. Sciences, 2007; 69(3):337-343.
5. Stanislaw O. Jord. J. Chem., 2009; 4:1-15.
6. Jain M.K., Sharnevas S.C. Organic Chem., 2008; 3:997-999.
7. Desai K., Patel R., Chickhalia K. J. Ind. Chem., 2006; 45:773-778.
8. Amr E.A., Nermien M.S., Abdulla M.M. Monatsh Chem., 2007; 138:699-707.
9. Wagner E., Al-Kadsi K., Zimecki M., Sawka-Dobrowolska W. Eur. J. Med. Chem., 2008; 43:2498-2504.
10. Gorlitzer K., Herbig S., Walter R.D. Pharmazie, 1997; 52:670-672.
11. Jean-Damien C., David B., Ronald K., Julian G., Pan L., Robert D. Vertex Pharmaceuticals Incorporated, USA, PCT Int. Appl., 2002; 22:608.
12. Fujiwara N., Nakajima T., Ueda Y., Fujita H.K., Awakami H. Bioorg. Med. Chem. Lett., 2007; 17:1736-1740.
13. Kurono M., Hayashi M., Miura K., Isogawa Y., Sawai K. Kokai Tokkyo Koho JP, 1987; 62:267-272; Chem. Abstr., 1988; 109:37832.
14. Polak A. and Scholer H.J. Chemotherapy, 1975; 21:113.
15. Sacchi A., Laneri S., Arena F., Luraschi E. and Rossi F. Eur. J. Med. Chem., 1997; 32:667.
16. Cottom H.B., Wasson D.B., Shih H.C., Pasquale G.D. and Carson D.A. J. Med. Chem., 1993; 36:3424.
17. Kompis I. and Wick A. Helv. Chim. Acta, 1997; 60:3025.
18. Molina P., Aller E., Lorengo A., Cremadis P.L., Rioja I. and Alcaraz M.J. J. Med. Chem., 2011; 44:1011.
19. Bruno O., Brullo C., Ranise A., Schenone S., Bondavalls S., Tognolini M. and Impicciatore M. Bioorg. Med. Chem. Lett., 2001; 11:1379.
20. Gogia, Prabin Chandera (Reg. Research Lab., Jorhat 785006, India). Heterocycles, 1991; 32(10).
21. Sondhi S.M., Verma R.P. Indian Drugs, 1999; 36(1):50.
22. Rathod I.S., Pillai A.S. and Shirsath V.S. Indian J. Heterocyclic Chem., 2000; 10:93.
23. El-Gaby M.S., Abdel-Hamide S.G. and Gharab M.M. Acta Pharm., 1999; 49(3):149; Chem. Abstr., 2000; 132:93278d.


Steady State Creep Behavior of a Functionally Graded Composite by Using an Analytical Method

Ashish Singla
Department of Mechanical Engineering, BGIET, Sangrur, Punjab
er.ashishsingla@hotmail.com

Manish Garg
Department of Physics, S.D. College, Barnala, Punjab
manishgarg189@gmail.com

V. K. Gupta
Department of Mechanical Engineering, UCoE, Punjabi University, Patiala, Punjab
guptavk_70@yahoo.co.in

ABSTRACT transitional stresses and axial strain in a compressible cylinder


The Steady state creep behaviour of a functionally graded subjected to internal pressure by using Seth’s transition
cylinder made of isotropic composite containing varying theory. The study indicates that the presence of
distribution of silicon carbide particles has been investigated compressibility having lesser value at the internal surface of
by a mathematical model. The creep behaviour of the FGM is the cylinder reduces the axial contraction and the stresses,
described by a Norton’s Power law. The effect of varying increases the pressure required for the initial yielding and
distribution of SiCP particles of creep stresses and creep rates decreases its value for fully plastic state. Chen et. al. (2007)
in the FGM cylinder has been analyzed and compared with a analyzed the creep behavior of thick walled cylinders made of
cylinder, having uniform distribution of reinforcement. The FGM and subjected to both internal and external pressures.
study reveals that the increasing particle content in the Derived the asymptotic solutions on the basis of a Taylor
cylinder, tangential and effective stresses increase near the expansion series and compared with the results of Finite
inner radius but decrease near the outer radius. The strain Element analysis (FEA) obtained by using ABAQUS
rates in FGM cylinder decreases with the increase in SiCP software. You et. al. (2007) analyzed steady state creep in
reinforcement. The magnitudes of tangential and radial strain thick-walled FGM cylinders subjected to internal pressure by
rates in FGM discs are significantly lower than in a uniform using Norton’s power law. The impact of radial variations of
composite disc. material parameters was investigated on stresses induced in
the cylinder. Sharma et. al. (2010) estimated creep stresses in
Keywords internally pressurized thick-walled rotating cylinder made of
Functionally Graded Material, Cylinder, Creep. isotropic and transversely isotropic materials by using Seth’s
transition theory. It is seen that a rotating circular cylinder
1. INTRODUCTION made of transversely isotropic material is on the safer side of
Functionally Graded Material (FGM) is characterized by a the design as compared to a rotating circular cylinder made of
continuous variation of volume fractions of the constituent isotropic material.
phases in either one (thickness) or several directions. FGMs In the light of above mentioned, it has been decided to
possess a number of advantages that make them attractive in investigate the effect of varying reinforcement (SiCp) gradient
potential applications, including a potential reduction of in- on the creep behavior of the FGM cylinder by using Norton
plane and transverse through-the-thickness stresses, an improved residual stress distribution, enhanced thermal properties, higher fracture toughness, and reduced stress intensity factors (Noda et al., 1998). FGMs have been developed as ultra-high-temperature-resistant materials for potential applications in aircraft, space vehicles and other structural components exposed to elevated temperature (Birman and Byrd, 2007). In most of these applications the cylinder is subjected to severe mechanical and thermal loads, causing significant creep and reducing its service life (Gupta and Pathak, 2001; Hagihara and Miyazaki, 2008; Tachibana and Iyoku, 2004).

Arya and Bhatnagar (1976) analyzed the creep of a thick-walled anisotropic cylinder subjected to combined internal and external pressures by considering elastic strains. The time-hardening law was used to obtain the fundamental equations of creep in an orthotropic cylinder, and the results obtained for the anisotropic cylinder were compared with those estimated for an isotropic cylinder. Mishra and Samanta (1981) investigated finite creep in orthotropic thick-walled cylindrical shells operating at high pressure and temperature; it was observed that the temperature variation has a significant effect on the strain as well as the strain rate, particularly when the molecular anisotropy of the material is taken into account. Shukla (1997) obtained the expressions for elastic-plastic transition in a compressible cylinder under internal pressure. The study carried out here is an attempt to evolve an understanding of the creep behavior, and of the effect of the reinforcement content, on the creep stresses and strain rates in the FGM cylinder.

2. DISTRIBUTION OF REINFORCEMENT

The distribution of SiCp in the FGM cylinder decreases linearly from the inner to the outer radius. The amount (vol %) of SiCp, V(r), at any radius r is given by (Singh and Gupta, 2011)

V(r) = V_{max} - \frac{r - a}{b - a}\,(V_{max} - V_{min})    (1)

where V_{max} and V_{min} are respectively the maximum and minimum SiCp contents, at the inner and the outer radii of the cylinder.

The average SiCp content in the cylinder can be expressed as

V_{avg} = \frac{\int_a^b 2\pi r l\, V(r)\, dr}{\pi (b^2 - a^2)\, l} = \frac{\int_a^b 2 r\, V(r)\, dr}{b^2 - a^2}    (2)
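As a quick numerical illustration of Eqs. (1) and (2), the short script below evaluates the linear SiCp profile and its cross-sectional average. It is only a sketch: the radii a = 10 mm and b = 20 mm are assumed from the radius range plotted in the figures, and the contents are the FGM (C2) values of Table 5.1; none of these numbers are part of Eqs. (1)-(2) themselves.

```python
# Spot-check of Eqs. (1)-(2): linear SiCp gradation and its volume average.
a, b = 10.0, 20.0           # inner and outer radius (mm), assumed
Vmax, Vmin = 25.0, 16.0     # SiCp content (vol %) at r = a and r = b, FGM(C2)

def V(r):
    """Eq. (1): SiCp content decreasing linearly from Vmax to Vmin."""
    return Vmax - (r - a) / (b - a) * (Vmax - Vmin)

# Eq. (2): Vavg = 2/(b^2 - a^2) * integral_a^b r*V(r) dr  (midpoint rule)
N = 100000
h = (b - a) / N
integral = sum((a + (i + 0.5)*h) * V(a + (i + 0.5)*h) * h for i in range(N))
Vavg = 2.0 * integral / (b**2 - a**2)
print(round(Vavg, 3))   # -> 20.0, the Vavg listed for FGM(C2) in Table 5.1
```

The recovered average of 20 vol % matches the Vavg column of Table 5.1, confirming that the C2 endpoint values are consistent with the linear grading law.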

762
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

where l is the length of the cylinder.

Substituting V(r) from Eq. (1) into Eq. (2) and integrating, we get

V_{min} = \frac{3 V_{avg} (1 - \lambda^2)(1 - \lambda) - V_{max} (1 - 3\lambda^2 + 2\lambda^3)}{2 - 3\lambda + \lambda^3}    (3)

where \lambda = a/b.

3. CREEP LAW AND PARAMETERS

The creep behavior of the FGM cylinder is described by Norton's power law,

\dot{\varepsilon}_e = B \sigma_e^{\,n}    (4)

where \dot{\varepsilon}_e is the effective strain rate, \sigma_e is the effective stress, and B and n are material parameters describing the creep performance of the cylinder. It is evident from the study of Singh and Ray (2001) that the values of the creep parameters B and n appearing in Norton's law depend on the content of reinforcement, which varies with the radial distance. Thus

B(r) = B_0 \left[\frac{V(r)}{V_{avg}}\right]^{\phi}    (5)

n(r) = n_0 \left[\frac{V(r)}{V_{avg}}\right]^{-\phi}    (6)

where B_0 and n_0 are respectively the reference values of the creep parameters B and n, and \phi is the grading index. The values of B_0, n_0 and \phi are respectively taken as 2.77 x 10^{-16}, 3.75 and 0.7, as reported in the study of Chen et al. (2007).

4. MATHEMATICAL FORMULATION

Consider an FGM thick-walled hollow cylinder with inner radius a and outer radius b, subjected to internal and external pressures p and q respectively. The cylinder is made of orthotropic material and is sufficiently long, and hence is assumed to be under the plane strain condition (i.e. axial strain rate \dot{\varepsilon}_z = 0).

The radial (\dot{\varepsilon}_r) and tangential (\dot{\varepsilon}_\theta) strain rates in the cylinder are given by

\dot{\varepsilon}_r = \frac{d\dot{u}_r}{dr}    (7)    and    \dot{\varepsilon}_\theta = \frac{\dot{u}_r}{r}    (8)

where \dot{u}_r = du/dt is the radial displacement rate and u is the radial displacement. Eqs. (7) and (8) may be combined to get the following compatibility equation:

\dot{\varepsilon}_r = \frac{d}{dr}\left(r \dot{\varepsilon}_\theta\right)    (9)

The cylinder is subjected to the following boundary conditions:

\sigma_r = -p  at  r = a    (10)
\sigma_r = -q  at  r = b    (11)

where the negative sign of \sigma_r implies the compressive nature of the radial stress.

By considering the equilibrium of forces acting on an element of the cylinder in the radial direction, we get

\frac{d\sigma_r}{dr} = \frac{\sigma_\theta - \sigma_r}{r}    (12)

The material of the cylinder is incompressible; therefore

\dot{\varepsilon}_r + \dot{\varepsilon}_\theta + \dot{\varepsilon}_z = 0    (13)

The constitutive equations under multiaxial creep in an orthotropic cylinder, when the principal axes are the axes of reference (Bhatnagar and Gupta [2]), are given by

\dot{\varepsilon}_r = \frac{\dot{\varepsilon}_e}{2\sigma_e}\left(2\sigma_r - \sigma_\theta - \sigma_z\right)    (14)

\dot{\varepsilon}_\theta = \frac{\dot{\varepsilon}_e}{2\sigma_e}\left(2\sigma_\theta - \sigma_z - \sigma_r\right)    (15)

\dot{\varepsilon}_z = \frac{\dot{\varepsilon}_e}{2\sigma_e}\left(2\sigma_z - \sigma_\theta - \sigma_r\right)    (16)

where \dot{\varepsilon}_e and \sigma_e are respectively the effective strain rate and the effective stress in the FGM cylinder. The effective stress, when the principal axes of isotropy are the axes of reference (Dieter [5]), is given by

\sigma_e = \frac{1}{\sqrt{2}}\left[(\sigma_\theta - \sigma_z)^2 + (\sigma_z - \sigma_r)^2 + (\sigma_r - \sigma_\theta)^2\right]^{1/2}    (17)

Under the plane strain condition (\dot{\varepsilon}_z = 0), one may get from Eqs. (7), (8) and (13)

\dot{u}_r = \frac{C}{r}    (18)

where C is a constant of integration. Using Eq. (18) in Eqs. (7) and (8), we get

\dot{\varepsilon}_r = -\frac{C}{r^2}    (19)    and    \dot{\varepsilon}_\theta = \frac{C}{r^2}    (20)

Under the plane strain condition, Eq. (16) becomes

\sigma_z = \frac{\sigma_r + \sigma_\theta}{2}    (21)

Substituting \sigma_z from Eq. (21) into Eq. (17), we get

\sigma_e = \frac{\sqrt{3}}{2}\left(\sigma_\theta - \sigma_r\right)    (22)

Substituting \dot{\varepsilon}_r and \sigma_z respectively from Eqs. (19) and (21) into Eq. (14), we obtain

\sigma_\theta - \sigma_r = 1.33\,\frac{\sigma_e}{\dot{\varepsilon}_e}\,\frac{C}{r^2}    (23)

Using Eqs. (4) and (22) in Eq. (23) and simplifying, one gets

\sigma_\theta - \sigma_r = \frac{I_1}{r^{2/n}}    (24)

where

I_1 = 1.33^{\frac{n+1}{2n}}\,\frac{C^{1/n}}{B^{1/n}}    (25)

Substituting Eq. (24) into Eq. (12) and integrating, we get

\sigma_r = X_1 - p    (26)

where

X_1 = \int_a^r \frac{I_1}{r^{(n+2)/n}}\, dr    (27)

Substituting Eq. (26) into Eq. (24), we obtain

\sigma_\theta = X_1 + \frac{I_1}{r^{2/n}} - p    (28)

To estimate the value of the constant C, needed for estimating I_1, the boundary conditions given in Eqs. (10) and (11) are used in Eq. (26), with X_1 (Eq. 27) integrated between the limits a to b, to get


\int_a^b \frac{I_1}{r^{(n+2)/n}}\, dr = p - q    (29)

Substituting the value of I_1 from Eq. (25) into Eq. (29) and simplifying, we obtain

C = \left[\frac{p - q}{1.15^{\frac{n+1}{n}}\, X_2}\right]^{n}    (30)

where

X_2 = \int_a^b \frac{dr}{r^{(n+2)/n}\, B^{1/n}}    (31)

Using Eqs. (21) and (22) in Eqs. (14) and (15), one obtains

\dot{\varepsilon}_\theta = -\dot{\varepsilon}_r = \frac{\sqrt{3}}{2}\,\dot{\varepsilon}_e    (32)

The analysis presented above yields the results for the isotropic FGM cylinder.

Figure 1: Validation of the present study against Chen et al. (2007): tangential stress (MPa) versus radius r (mm).

5. NUMERICAL SCHEME OF COMPUTATION

Following the procedure described in Section 4, the computation begins by estimating X_2 (Eq. 31) after substituting the values of the creep parameters B and n from Eqs. (5) and (6) respectively. The constant C is then obtained by substituting the value of X_2 into Eq. (30), and using this value in Eq. (25) the value of I_1 is obtained. Using I_1 in Eq. (27), the value of X_1 is obtained. With X_1 known, the stresses \sigma_r and \sigma_\theta are obtained from Eqs. (26) and (28) respectively. To estimate the distribution of the axial stress \sigma_z, the values of \sigma_r and \sigma_\theta are substituted into Eq. (21). Having obtained \sigma_r, \sigma_\theta and \sigma_z, the values of \sigma_e and \dot{\varepsilon}_e are calculated from Eqs. (22) and (4) respectively. Finally, the strain rates \dot{\varepsilon}_r and \dot{\varepsilon}_\theta are calculated from Eqs. (14) and (15) respectively.

The results have been obtained for three different composite cylinders, as described in Table 5.1.

Table 5.1: Details of different composite cylinders

Cylinder      | Vmax (vol %) | Vavg (vol %) | Vmin (vol %)
Non-FGM (C1)  | 20           | 20           | 20
FGM (C2)      | 25           | 20           | 16
FGM (C3)      | 30           | 20           | 12

6. RESULTS AND DISCUSSION

Before presenting the results, it is necessary to check the validity of the mathematical formulation carried out. To accomplish this task, the tangential stress was computed for a cylinder for which results are reported by Chen et al. (2007), with the values of B_0, n_0 and \phi respectively taken as 2.77 x 10^{-16}, 3.75 and 0.7, as in that study. A good agreement is observed in Fig. 1 between the tangential stress obtained in the present study and that reported by Chen et al. (2007).

6.1 Variation of Creep Parameters

Figure 2 shows the variation of the creep parameter B with radial distance in the FGM and Non-FGM cylinders. In the FGM cylinders C2 and C3 the value of B decreases with increasing radius, whereas it remains constant in the Non-FGM cylinder due to its constant SiCp content; the variation of the creep parameters B and n exhibits a crossover at a radius of around 15.8 mm. Figure 3 shows the variation of the creep parameter n with radial distance in the FGM and Non-FGM cylinders. In the FGM cylinders C2 and C3 the value of n increases with increasing radius, whereas it again remains constant in the Non-FGM cylinder due to its constant SiCp content. Figure 4 shows the variation of SiCp content in the FGM and Non-FGM cylinders: in the FGM cylinders C2 and C3 the SiCp content decreases from the inner to the outer radius, while in the Non-FGM cylinder C1 it remains uniform at 20% throughout. The SiCp content curves also exhibit a crossover at a radius of around 15.8 mm.

Figure 2: Variation of creep parameter B (x 10^{-16} MPa^{-n}/s) with radius r (mm) in the cylinders.
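As a concrete (and simplified) rendering of the computation scheme of Section 5, the sketch below solves Eq. (29) for the constant C and recovers the stress distribution of Eqs. (26)-(28) for the FGM (C2) cylinder. Several inputs are assumptions made only for illustration, as they are not stated in the text: the radii a = 10 mm and b = 20 mm, the pressures p = 100 MPa and q = 0, and the grading of Eqs. (5)-(6) taken as B proportional to (V/Vavg)^phi and n proportional to (V/Vavg)^(-phi), matching the trends described for Figures 2 and 3. Unlike the closed form of Eq. (30), the radially varying B(r) and n(r) are kept inside the integral and C is found by bisection.

```python
import math

# Assumed illustration values (not given in the text):
a, b = 10.0, 20.0        # inner / outer radius (mm)
p, q = 100.0, 0.0        # internal / external pressure (MPa)
B0, n0, phi = 2.77e-16, 3.75, 0.7        # creep constants, Chen et al. (2007)
Vmax, Vmin, Vavg = 25.0, 16.0, 20.0      # FGM(C2), Table 5.1

V = lambda r: Vmax - (r - a)/(b - a)*(Vmax - Vmin)   # Eq. (1)
B = lambda r: B0*(V(r)/Vavg)**phi                    # Eq. (5), assumed sign
n = lambda r: n0*(V(r)/Vavg)**(-phi)                 # Eq. (6), assumed sign

def I1(r, C):            # Eq. (25), using the local parameters B(r), n(r)
    nr = n(r)
    return 1.33**((nr + 1)/(2*nr)) * (C/B(r))**(1/nr)

def quad(f, lo, hi, N=2000):                         # midpoint-rule quadrature
    h = (hi - lo)/N
    return h*sum(f(lo + (i + 0.5)*h) for i in range(N))

def residual(C):         # Eq. (29): LHS integral minus (p - q)
    return quad(lambda r: I1(r, C)/r**((n(r) + 2)/n(r)), a, b) - (p - q)

# Solve Eq. (29) for C (the role played by Eq. (30)); residual is monotone
# in C, so bisect in log space over a wide bracket.
lo, hi = 1e-12, 1e12
for _ in range(120):
    mid = math.sqrt(lo*hi)
    lo, hi = (lo, mid) if residual(mid) > 0 else (mid, hi)
C = math.sqrt(lo*hi)

def sigma_r(r):          # Eqs. (26)-(27): sigma_r = X1 - p
    return -p + quad(lambda s: I1(s, C)/s**((n(s) + 2)/n(s)), a, r)

def sigma_t(r):          # Eq. (28): sigma_theta = sigma_r + I1/r^(2/n)
    return sigma_r(r) + I1(r, C)/r**(2/n(r))

# Boundary check, Eqs. (10)-(11): sigma_r(a) = -p and sigma_r(b) = -q
print(round(sigma_r(a), 3), round(sigma_r(b), 3))
```

The boundary check reproduces the imposed conditions, and sigma_t exceeds sigma_r everywhere (I1 > 0), consistent with the compressive radial and tensile tangential stresses discussed in Section 6.2.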


Figure 3: Variation of the stress exponent n with radius r (mm) in the cylinders.

Figure 4: Variation of SiCp content V (%) with radius r (mm) in the cylinders.

6.2 Distribution of Stresses and Strains

Figure 5 shows the variation of radial stress in the FGM and Non-FGM cylinders. The radial stress remains compressive over the entire cylinder, with its maximum (compressive) value at the inner radius and zero value at the outer radius, in line with the imposed boundary conditions given in Eqs. (10) and (11). It is also observed that the radial stress increases over the entire radius with increasing SiCp reinforcement gradient, and its value in the FGM cylinders is higher than in the Non-FGM cylinder. Figure 6 shows the variation of tangential stress in the FGM and Non-FGM cylinders. The tangential stress remains tensile throughout the FGM and Non-FGM cylinders. With increasing particle gradient in the FGM cylinder, the tangential stress increases near the inner radius but decreases towards the outer radius, compared with the distribution of tangential stress in the Non-FGM cylinder C1. At the inner radius the stress in FGM cylinder C2 is less than in FGM cylinder C3, whereas at the outer radius it is the reverse, i.e. the stress in FGM cylinder C2 is more than in FGM cylinder C3; the curves for these cylinders intersect each other between about 12.3 and 14.3 mm. Figure 7 shows the variation of effective stress in the FGM and Non-FGM cylinders. The effective stresses increase near the inner radius but decrease towards the outer radius, when compared with the composite Non-FGM cylinder C1 having a uniform distribution of SiCp reinforcement.

Figure 8 shows the variation of the radial and tangential strain rates in the FGM and Non-FGM cylinders; the strain rates decrease with increasing radius. The radial and tangential strain rates of the Non-FGM cylinder C1 decrease as the radius increases. Compared with the Non-FGM cylinder C1, the strain rates in FGM cylinder C2 are higher at the inner radius and lower at the outer radius, while in FGM cylinder C3 they are lower than in the Non-FGM cylinder C1 and decrease from the inner to the outer radius. Figure 9 shows the variation of the effective strain rate in the FGM and Non-FGM cylinders, which behaves similarly to the radial and tangential strain rates. The strain rates show a small decrease in the middle of the cylinder with the increase in SiCp reinforcement.

Figure 5: Variation of radial stress \sigma_r (MPa) with radius r (mm) in the cylinders.

Figure 6: Variation of tangential stress \sigma_\theta (MPa) with radius r (mm) in the cylinders.

7. CONCLUSIONS

The study carried out has led to the following conclusions:
B. The radial stress (compressive) in the composite cylinder decreases with the increase in gradient in the distribution of SiCp reinforcement.
C. In the presence of particle gradient in the FGM cylinder, the tangential and effective stresses

765
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

increase near the inner radius but decrease towards the outer radius, when compared with the cylinder having a uniform distribution of SiCp reinforcement.
D. The radial, tangential and effective strains in the composite cylinder decrease significantly with increasing particle content in the cylinder. The reduction observed near the inner radius is higher than that observed towards the outer radius.
The strain rates show a small decrease in the middle of the cylinder with the increase in SiCp reinforcement.

Figure 7: Variation of effective stress \sigma_e (MPa) with radius r (mm) in the cylinders.

Figure 8: Variation of the radial and tangential strain rates, \dot{\varepsilon}_\theta = -\dot{\varepsilon}_r (s^{-1}), with radius r (mm) in the cylinders.

Figure 9: Variation of effective strain rate \dot{\varepsilon}_e (s^{-1}) with radius r (mm) in the cylinders.

REFERENCES
[1] Arya, V.K. and Bhatnagar, N.S. (1976) Creep of thick-walled orthotropic cylinders subjected to combined internal and external pressures, Journal of Mechanical Engineering Science, Vol. 18, No. 1, pp. 1-5.
[2] Bhatnagar, N.S. and Gupta, S.K. (1969) Analysis of thick-walled orthotropic cylinder in the theory of creep, Journal of the Physical Society of Japan, Vol. 27, No. 6, pp. 1655-1662.
[3] Birman, V. and Byrd, L.W. (2007) Modeling and analysis of functionally graded materials and structures, Applied Mechanics Reviews, Vol. 60, pp. 195-216.
[4] Chen, J.J., Tu, S.T., Xuan, F.Z. and Wang, Z.D. (2007) Creep analysis for a functionally graded cylinder subjected to internal and external pressure, Journal of Strain Analysis for Engineering Design, Vol. 42, No. 2, pp. 69-77.
[5] Dieter, G.E. (1988) Mechanical Metallurgy, 3rd ed., McGraw-Hill, London.
[6] Gupta, S.K. and Pathak, S. (2001) Thermo creep transition in a thick-walled circular cylinder under internal pressure, Indian Journal of Pure and Applied Mathematics, Vol. 32, No. 2, pp. 237-253.
[7] Hagihara, S. and Miyazaki, N. (2008) Finite element analysis for creep failure of coolant pipe in light water reactor due to local heating under severe accident condition, Nuclear Engineering and Design, Vol. 238, No. 1, pp. 33-40.
[8] Mishra, J.C. and Samanta, S.C. (1981) Finite creep in thick-walled cylindrical shells at elevated temperature, Acta Mechanica, Vol. 41, Nos. 1-2, pp. 149-155.
[9] Noda, N., Nakai, S. and Tsuji, T. (1998) Thermal stresses in functionally graded materials of particle-reinforced composite, JSME International Journal Series, Vol. 41, No. 2, pp. 178-184.
[10] Shukla, R.K. (1997) Elastic-plastic transition in a compressible cylinder under internal pressure, Indian Journal of Pure and Applied Mathematics, Vol. 28, No. 2, pp. 277-288.
[11] Singh, S.B. and Ray, S. (2001) Steady-state creep behavior in an isotropic functionally graded material rotating disc of Al-SiC composite, Metallurgical and Materials Transactions, Vol. 32, No. 7, pp. 1679-1685.
[12] Singh, T. and Gupta, V.K. (2011) Effect of anisotropy on steady state creep in functionally graded cylinder, Composite Structures, Vol. 93, No. 2, pp. 747-758.
[13] Sharma, S., Sahni, M. and Kumar, R. (2010) Thermo creep transition of transversely isotropic thick-walled rotating cylinder under internal pressure, International Journal of Contemporary Mathematical Sciences, Vol. 5, No. 11, pp. 517-527.
[14] Tachibana, Y. and Iyoku, T. (2004) Structural design of high temperature metallic components, Nuclear Engineering and Design, Vol. 233, Nos. 1-3, pp. 261-272.
[15] You, L.H., Ou, H. and Zheng, Z.Y. (2007) Creep deformations and stresses in thick-walled cylindrical vessels of functionally graded materials subjected to internal pressure, Composite Structures, Vol. 78, No. 2, pp. 285-291.


GETTING ENERGY AND A CLEANER ENVIRONMENT WITH NANOTECHNOLOGY
Savita Sood
Department of Chemistry, S D College, Barnala
savita_sood2007@yahoo.com

ABSTRACT

Nanotechnology involves research and technology development at the 1 nm to 100 nm range. It creates and uses structures that have novel properties because of their small size. Chemists have been doing nanoscience for hundreds of years: stained-glass windows found in medieval churches contain different sized gold nanoparticles incorporated into the glass, the specific size of the particles creating orange, purple, red or greenish colours. Einstein calculated the size of a sugar molecule as 1 nm. Nanotechnology is, at heart, an interdisciplinary subject: chemists, physicists and doctors are working alongside engineers, biologists and computer scientists to determine the applications, direction and development of nanotechnology. This paper is a review of the use of nanotechnology in cleansing the environment. One of the major problems is the emission of carbon dioxide due to the burning of coal in power plants. By using nanocrystals composed of cadmium, selenium or indium in smoke stacks, the emission of harmful gases can be minimized. Mercury vapours emitted by coal-fired power plants can be rendered harmless using titanium oxide nanocrystals under UV light. Bio-friendly corporations have developed nanocatalysts which, when added to diesel fuel, cause it to burn more completely. We can use nanotechnology to produce energy with more effective solar cells, more efficient hydrogen production and more efficient batteries. Scientists have developed solar cells that use titanium oxide nanocrystals embedded in plastics; these cells can be used in devices such as laptops and mobile phones. Researchers are projecting that light bulbs made with quantum dots will turn almost one hundred per cent of the power from electricity into light, so very little energy will be wasted as heat. Hence, nanotechnology has revolutionised our lives.

Keywords

Nanocrystals, Quantum dots, Nanocatalysts

1 INTRODUCTION

Nanotechnology is the understanding and control of matter and processes at the nanoscale, typically, but not exclusively, below 100 nanometres in one or more dimensions, where the onset of size-dependent phenomena usually enables novel applications. Nanotechnology is cross-disciplinary in nature, drawing on medicine, chemistry, biology, physics and materials science. Nanotechnology promises to be the tool we need: designing and developing new material properties on the nanoscale enables new applications and solutions. We are in fact already seeing products such as energy-efficient LED lights, new nanomaterials for thermal insulation, low-friction nanolubricants and lightweight nanocomposites on the market. This is just the beginning. Mankind has a tendency to use energy on everything from heating our homes to driving to work. With shrinking supplies of fossil fuels, we can use nanotechnology to produce energy with cost-effective solar cells, more efficient hydrogen production and better batteries. Many nanomaterials have adsorbent properties that depend on size. Chemically modified nanomaterials have also attracted a lot of attention, especially nanoporous materials, due to their exceptionally high surface area; TiO2 functionalized with ethylenediamine, for example, has been tested for its ability to remove anionic metals from contaminated groundwater. Scientific and technical methods to mitigate environmental pollution rely on many approaches and vary for the cases of soil, water and air purification. The remediation approach chosen depends on the complexity and nature of the contaminated media and on the economic costs of the treatment, which play a pivotal role in the decision to employ a particular technology.

2 ENERGY EFFICIENCY

The International Energy Agency (IEA) estimates that energy savings corresponding to almost one fifth of the current worldwide energy consumption can be achieved by improved energy efficiency. Nanotechnology enables large energy and cost savings, especially in the building, transportation and manufacturing industries.

3 ENERGY PRODUCTION AND POWER TRANSMISSION

Nanotechnology is a key enabling technology both to exploit traditional energy sources in a more efficient, safe and environmentally friendly manner, and to tap into the full potential of sustainable energy sources such as biomass, wind, geothermal and solar power. It also offers solutions to reduce energy losses in power transmission,


and to manage complex power grids with dynamically changing loads and decentralised feed-in stations.

4 BRIGHT PERSPECTIVES FOR SOLAR ENERGY

The conversion efficiency of photovoltaic and photochemical solar cells is traditionally governed by a compromise: in order to absorb enough light, at least micrometre-thick layers are required, while charge carrier collection is more efficient the thinner the active layer is. Several types of nanomaterial that absorb light very efficiently are currently under development; they include quantum dots, plasmonically active metallic nanoparticles and nanowires. Charge carrier collection can be improved by designing nanostructures which exhibit short collection paths with reduced recombination losses. Consequently, less active material is needed and purity requirements can be relaxed. Graphene is a promising alternative to indium tin oxide, a scarce material commonly used to fabricate transparent electrodes in solar cells and LCD displays. Nanotechnology-enabled solar cells can thus be produced at a lower cost and in a more resource-efficient way. Since they can be made flexible, integrating them into buildings is possible [2].

5 TURNING WASTE HEAT INTO VALUABLE ELECTRICITY

Thermoelectric materials convert heat directly into electricity (and vice versa) and can thus recycle some of the energy contained in, for instance, hot exhaust streams. While low efficiency has traditionally limited the use of thermoelectrics to niche markets, recently developed nanostructured thermoelectrics, with much better performance than the bulk materials [2], mark the beginning of a new era. Progress has also been made towards inexpensive, large-scale production methods. Beyond transport and industrial production, interesting application areas include the transformation of low-grade solar thermal or geothermal energy, or the use of human body heat to power portable electronics.

Only about 5 or 10 percent of the power from an electric current running through an incandescent light bulb generates light; about 90 percent is spent generating heat. Researchers are projecting that light bulbs made with quantum dots (nanocrystals that emit visible light when exposed to ultraviolet light) will turn almost 100 percent of the power from electricity into light. With quantum dot bulbs [3], very little energy will be wasted as unneeded heat. Given that about 20 percent of the electric power consumed in the world is used to generate light, adopting light bulbs based on quantum dots [4] could cause a significant reduction in overall energy consumption worldwide.

6 USING NANOTECHNOLOGY TO ENERGIZE BATTERIES

Of all batteries currently in use, the lithium-ion type stores the most electrical power for its weight. The bad news is that (for now, at least) you can only use this type of battery in things like watches and laptop computers: devices that do not have sudden demands for a lot of power, as power tools do. No surprise, then, that researchers are using nanotechnology to improve lithium-ion batteries so they can be used in more devices. When a battery runs your radio or other gadget, the reaction of chemicals in the battery transfers electrons to the anode, the piece of metal that makes up the negative terminal of the battery. Those electrons become the electric current that powers your gadget. One way to improve batteries is to make the anode out of a material that maximizes its surface area. In a lithium-ion battery [1], the anode is currently made of carbon. Altair Nanotechnologies, Inc. is developing an improved lithium-ion battery that replaces the carbon anode with one made up of lithium titanate nanocrystals. An anode made up of these nanocrystals provides about 30 times greater surface area than one made of carbon. If the anode has greater surface area, electrons come out of the battery faster; this means that a higher electric current, and therefore more power, is available to run your gadget. If electrons can leave a battery faster, they can also go back in faster, trimming the time it takes to recharge the battery. Assume it takes an hour to recharge your battery-powered drill, and that you have to get a new battery after only 500 charges: Altair Nanotechnologies projects that you can recharge their nano-batteries in a few minutes, and that the batteries will last for several thousand recharges.

7 PRODUCING HYDROGEN WITH DESIGNER MOLECULES

Researchers are attempting to design molecules that absorb sunlight and produce hydrogen from water, just as chlorophyll produces oxygen from water. At Virginia Tech, researchers have noted that molecules containing atoms of ruthenium can absorb light and produce electrons, so they put an atom of ruthenium at each end of a designer molecule. They also put an atom of rhodium in the centre of the molecule to transfer electrons to water. They filled out the structure of their designer molecule with atoms of carbon, hydrogen, chlorine, and nitrogen, and came up with a molecule that produces hydrogen from sunlight and water, just as chlorophyll produces oxygen [1]. Although still in the laboratory stages, this research holds out the promise of using a clean process, similar to one in nature, to produce hydrogen for generating energy.
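The incandescent versus quantum-dot figures quoted in Section 5 imply a simple back-of-envelope estimate of the potential savings, sketched below. It idealises all of the world's lighting as incandescent bulbs at 5 % efficacy replaced by quantum-dot bulbs at 100 %, which are the round numbers used in the text.

```python
lighting_share = 0.20   # fraction of world electricity used for lighting
eta_old = 0.05          # fraction of an incandescent bulb's power emitted as light
eta_new = 1.00          # idealised quantum-dot bulb

# The same light output needs only eta_old/eta_new of the input power,
# so the electricity saved overall is:
saved = lighting_share * (1 - eta_old / eta_new)
print(f"{saved:.0%} of total electricity consumption")  # -> 19%
```

Even under these idealised assumptions, the estimate shows why near-lossless lighting is attractive: almost the entire 20 % lighting share of world electricity consumption would be recovered.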


8 ENERGY STORAGE AND CONVERSION

Many sustainable energy sources like wind and solar power deliver significant power only part of the time. Strategies to store energy are therefore needed. While the particularly stringent requirements posed by the transport sector are currently only met by fossil fuels, nanotechnology will make novel types of energy stores, including electrical stores such as batteries and chemical stores such as hydrogen, more competitive.

9 PUTTING PRESSURE ON HYDROGEN STORAGE

Scientists at the Lawrence Berkeley National Laboratory have recently developed a new, air-stable nanocomposite material for hydrogen storage, in which magnesium nanoparticles are embedded in a plastic matrix that protects the magnesium from oxidation. The nanocomposite rapidly absorbs and releases hydrogen at modest temperatures. Hydrogen has a very high energy density by weight, but its low energy density by volume turns its storage into a major challenge. Nanocomposite materials with an exceptional strength-to-weight ratio can be used to construct lightweight storage tanks with pressure ratings that exceed the performance of traditional materials. High-surface-area materials such as carbon aerogels, carbon nanofibres or graphene constitute another nanotechnology-based storage option. Current research focuses largely on chemical methods, where hydrogen reversibly reacts with a solid-state material such as magnesium. Reducing the dimensions of the storage medium to the nanoscale can alleviate traditional performance barriers of chemical stores, such as high release temperatures and slow charge/discharge rates [3].

Figure 1: Formation of MgH2

10 NANO-MATERIALS AND THEIR APPLICATIONS IN CLEANSING THE ENVIRONMENT

Titanium dioxide (TiO2) is one of the most popular materials used in various applications because of its semiconducting, photocatalytic, energy-converting, electronic and gas sensing properties. Titanium dioxide crystals are present in three different polymorphs in nature.


Figure 1. The crystal structures of A) rutile, B) anatase, C) brookite

Many researchers have focused on TiO2 nanoparticles and their application as a photocatalyst in water treatment. Nanoparticles that are activated by light, such as the large band-gap semiconductors titanium dioxide (TiO2) and zinc oxide (ZnO), are frequently studied for their ability to remove organic contaminants from various media. These nanoparticles have the advantages of being readily available, inexpensive and of low toxicity. The semiconducting property of TiO2 is central to the removal of different organic pollutants: excitation of the TiO2 semiconductor with a light energy greater than its band gap generates electron-hole pairs, which may be exploited in different reduction processes at the semiconductor/solution interface [6].

11 CONTROLLING AIR POLLUTION

Air pollution can be remediated using nanotechnology in several ways. One is through the use of nano-catalysts with increased surface area for gaseous reactions. Catalysts work by speeding up chemical reactions that transform harmful vapours from cars and industrial plants into harmless gases. Catalysts currently in use include a nanofiber catalyst made of manganese oxide that removes volatile organic compounds from industrial smokestacks [6]. Other methods are still in development. Another approach uses nanostructured membranes that have pores small enough to separate methane or carbon dioxide from exhaust (Zhu et al., 2008). John Zhu of the University of Queensland is researching carbon nanotubes (CNT) for trapping greenhouse gas emissions caused by coal mining and power generation. CNTs can trap gases up to a hundred times faster than other methods, allowing integration into large-scale industrial plants and power stations. This new technology both processes and separates large volumes of gas effectively, unlike conventional membranes that can only do one or the other effectively [5],[7].

12 KEEPING WATER CRYSTAL CLEAR WITH NANOTECHNOLOGY

Life needs water, but many of our lakes and streams have been contaminated by wastes from industrial plants; add to that the pesticides used in our gardens or by farms and you have a serious problem. While current laws have reduced the amount of contamination going into our waters, there are still lakes and streams that are significantly contaminated. Researchers are looking at ways that nanomaterials can help to clean up our water act [1].

13 GETTING RID OF TCE

One example is a joint venture: Rice University and the Georgia Institute of Technology are developing a better way to remove TCE (trichloroethylene) from water. TCE has been found at a majority of sites on the EPA's Superfund list, and it is pretty horrible stuff that can cause heart problems, nausea, vomiting and eye irritation. TCE is primarily used to degrease components during manufacturing operations, but it is also used in products such as spot removers for clothing (cleanliness comes at a price). Palladium acts as a catalyst to convert TCE to ethane, which is not horrible. But there is a problem: palladium is a rare metal, more expensive than gold. Scientists are in search of the most efficient way to use palladium to neutralize TCE; given the large number of sites that must be decontaminated, it is important to find a way to use as little of this expensive metal as possible. Both nanoparticles made of palladium and nanoparticles made of gold coated with a layer of palladium atoms are possible options. Coating gold nanoparticles with palladium atoms makes all the palladium atoms available to catalyze the TCE molecules, and seems to be the most cost-effective use of palladium. One way to get nanoparticles into the contaminated groundwater to do their work is to place a filter containing the nanoparticles in a pump that is used to circulate the contaminated water. As the water passes through the pump, the TCE is broken down.

14 NANOTECHNOLOGY FOR HAZARDOUS WASTE CLEAN-UP

Nanoscale materials can make a huge difference in the clean-up of hazardous waste. There are two reasons for the optimism: firstly, the size of nanomaterials lets them penetrate otherwise impossible-to-reach groundwater or soil; and secondly, their engineered coatings allow them to stay suspended in groundwater, a major asset in clean-ups. If practically feasible, nanomaterials could slash clean-up prices by avoiding the extraordinary costs and risks of hauling waste away for burning or burial. Most

770
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

nanoremediation research projects undertaken by the department of defence are focused on cleaning up groundwater contaminated by chlorinated solvents like trichloroethylene. Research shows that results have been promising in most demonstrations, with most of the contaminant being destroyed – a finding that has been replicated by researchers [8].

15 CONCLUSION

In view of a globally increasing energy demand, threatening climate changes due to continuously increasing carbon dioxide emission, as well as the foreseeable scarcity of fossil fuels, the development and provision of sustainable methods for power generation seems to be the most urgent challenge of mankind. Massive efforts at the political and economic level are required to fundamentally modernise the existing energy system. To enable the immediate practical implementation of nanotechnological innovations in a broad field like the energy sector, an interbranch and inter-disciplinary dialogue with all players involved will be required.

REFERENCES

[1]. Booker R., Boysen E., Nanotechnology (2008)
[2]. Pandey Bhawana and Fulekar M.H., Research Journal of Chemical Sciences, Vol. 2(2), 90-96, Feb. (2012)
[3]. Ramachandra M. S., Singh S., Nanoscience and Nanotechnology
[4]. Ramsdon J., Essentials of Nanotechnology
[5]. Rheil A., Schmid J., Aktionslinie Hessen-Nanotech series (Volume 9)
[6]. www.nano-connect.org
[7]. Pidgeon N., Nature Nanotechnology 4, 95-98 (2009)
[8]. Schummer J., Scientometrics, 2004
Optimal Real-time Dispatch for Integrated Energy Systems

Sirdeep Singh, Prabhpreet Kaur
Department of Applied Science, Bhai Gurdas Institute of Engineering and Technology, Sangrur
preetisethi.84@gmail.com
ABSTRACT

On-site cogeneration of heat and electricity, thermal and electrical storage, and curtailing/rescheduling demand options are often cost-effective for commercial and industrial sites. This collection of equipment and responsive consumption can be viewed as an integrated energy system (IES). The IES can best meet the site's cost or environmental objectives when controlled in a coordinated manner. However, continuously determining this optimal IES dispatch is beyond the expectations for operators of smaller systems. A new algorithm is proposed in this paper to approximately solve the real-time dispatch optimization problem for a generic IES containing an on-site cogeneration system subject to random outages, limited curtailment opportunities, an intermittent renewable electricity source, and thermal storage. An example demonstrates how this algorithm can be used in simulation to estimate the value of IES components.

I. INTRODUCTION

Integrated Energy Systems (IES) combine on-site power or distributed generation technologies with thermally activated technologies to provide cooling, heating, humidity control, energy storage and/or other process functions using thermal energy normally wasted in the production of electricity/power. IES produce electricity and byproduct thermal energy onsite, with the potential of converting 80 percent or more of the fuel into useable energy. Integrated Energy Systems have the potential to offer the nation the benefits of unprecedented energy efficiency gains, consumer choice and energy security. This study supports and guides IES projects by assessing technologies and markets where IES is positioned for growth. Furthermore, this effort will identify areas where technology needs improvement and where substantial barriers exist, and the potential market effects of overcoming these obstacles. As a result, this study sought to quantify the buildings market for IES, identify key market drivers and barriers, and explore potential areas for technology research and development that could improve the prospects for IES.

II. BASIC ADVANTAGES OF IES

1. Dramatically reduce fossil fuel use and air pollutant emissions
2. Improve the electric grid's power quality, efficiency, reliability and return on investment
3. Enhance energy security

IES Components: An IES for a site may consist of a large range of energy conversion and storage devices, as well as demand response options, giving the site control of both its supply of and demand for electrical and thermal energy. The economically optimal dispatch of any IES must be in response to current and forecasted energy prices, energy demand, and DER equipment availability. Dispatch must be within mechanical and regulatory constraints on the IES.

Typical electricity generation equipment found on-site includes natural gas-, propane- or biogas-fueled gas turbines, reciprocating engines, microturbines, and fuel cells. Heat recovery from these devices can be used for site steam or heating needs or for thermally-activated cooling. Often this use for the waste heat from electricity generation is what tips the scales in favor of on-site generation. Renewable electricity sources (photovoltaics and small-scale wind turbines) and thermal sources (solar thermal collectors and ground-source heat pumps) are also present. Thermally-activated cooling is achieved through absorption or adsorption chillers, which utilize a modified compression-chiller cycle to replace much of the electric energy input requirement with a thermal energy requirement. Desiccant dehumidifiers use heat to remove moisture from air before cooling it, which reduces the energy required to cool the air.

Electrical and thermal storage technologies can add value to an energy source by shifting its utilization from times of low value to times of high value. For example, the waste heat from a continuously running generator can be stored throughout the day and used during times of high thermal load. Similarly, low-priced electricity such as off-peak power or excess wind power can be stored for use during high-priced on-peak hours using a battery or other electrical storage device.

III. DEMAND RESPONSE

The high price of peak electricity has encouraged price responsiveness among some customers. Some customers may respond to price or control signals from their utility to reduce or reschedule electric loads, a practice known as demand response. Demand response opportunities can be characterized as 1) curtailable, such as non-essential lighting (e.g. hallways, parking garages), 2) re-schedulable, such as energy-intensive industrial processes, or 3) part-curtailable/part re-schedulable, such as cooling loads. In pilot programs where the hourly and daily volatility of prices is passed directly to consumers, rather than monthly averaging, demand response behavior increases. This holds promise for bringing demand elasticity to the electricity
market, a valuable step towards reducing peak capacity costs and mitigating the threat of market power abuse.

Figure 1 Load-Duration Curve and System Dispatch in Long-range Energy Alternatives Planning System (LEAP)

Electricity utilities incur both variable and fixed expenses; tariffs are typically designed to cover three kinds of costs:
• Fixed charges are invariant, Rs/month. These are infrastructure costs of supply and delivery required by the customer regardless of their energy consumption for that month.
• Volumetric charges are proportional to the amount of energy consumed. They are expressed in Rs/kWh and may vary by time of day within a month. Volumetric rates are intended to cover the variable costs of producing electricity, such as fuel and some maintenance, in addition to the fixed costs that generators recover in their volumetric sales of electricity.
• Demand charges are expressed in Rs/kW and levied on the maximum power consumption during a specified time range (such as the on-peak hours of the month), regardless of the duration or frequency of that level of power consumption. Demand charges are intended to collect the fixed costs of infrastructure shared with other customers by raising revenue in proportion to the amount of power required by the individual.

IV. THE OPTIMIZATION PROBLEM FOR INTEGRATED ENERGY SYSTEMS

Dispatch to a site's DER options must be made continuously and includes the set points of generators, the charging or discharging of storage, and DSM commands. Typical constraints on the system include
• Engineering constraints on equipment such as ramping rates and maximum and minimum operating levels
• Regulatory constraints on noise, operation hours, or overall DG system efficiency (i.e. utilization of waste heat)
• Magnitude, duration, and frequency constraints on DSM.
As with any set of decisions that affect a common objective, the dispatch decisions to all DER options can best meet site energy objectives if the decisions are coordinated. This introduces the concept of the integrated energy system (IES), a holistic view of all site energy options. Another common problem for a wider range of IES systems is making the best use of limited opportunities. Examples of limited opportunities include
• Profitable DG systems that are operationally constrained by regulatory efficiency constraints (where there is only limited use for waste heat), maximum run-time regulations, or limited fuel supply, and
• DSM measures that a site's occupants will only accommodate a limited number of times.
Optimally exploiting limited opportunities is challenging because it is dependent on uncertain future conditions, such as DG intermittency (generator outages or variation in renewable output), end-use demand, and energy pricing. The IES dispatch problem is to minimize, at each time step, the expected cost (or other site energy objective) of all energy consumption, given past system operation, present conditions, and forecasts of future conditions. This is done by simultaneously solving the unit commitment and set-point level problems for the current time step and all future time steps, conditional on future conditions.

V. AN ALGORITHM FOR OPTIMIZATION OF REAL-TIME IES DISPATCH

Because of the complexity of the IES dispatch optimization problem and the large number of time steps to be solved over (ideally time steps of several minutes over the course of a month or more), an exact solution to the problem, conditional on the statistical description of stochastic parameters, is infeasible. A feasible approach is to optimize the current dispatch and future dispatch strategy relative to a finite number of future scenarios. This section describes a simple IES dispatch optimization algorithm upon which more complicated, practical algorithms could be built. The algorithm considers a finite number of possible future scenarios as an approximation of the future. Scenarios are generated randomly; each scenario contains values for each stochastic parameter at each time step. Because of the similarity of days in a month, a relatively small number of scenarios can be used to represent the most probable future conditions. The dispatch problem, then, is to select a dispatch decision for the current time-step and a dispatch strategy for all future time steps, given historic load and dispatch information. This algorithm considers optimization over the course of a month for a site with a DG system comprised of one generator with heat recovery for heating and absorption cooling, a photovoltaic (PV) system, and limited curtailment options. A limited amount of thermal storage is considered by relaxing the synchronous constraint on thermal demand. Two dispatch decisions are considered: the set point of the generator in the CHP system and a curtailment command.
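The scenario-based approach described in Section V can be sketched in a few lines. The following Python fragment is an illustrative toy, not the authors' implementation: the capacities match the later example (500 kW generator, 200 kW PV), but the fuel cost, grid price, and curtailment penalty are assumed values, and a simple grid search stands in for a real optimizer. It picks the current generator set point and curtailment command that minimize cost averaged over a finite set of randomly generated scenarios, enforcing the paper's instantaneous electricity balance.

```python
import random

# Hypothetical parameters -- illustrative values, not from the paper.
GEN_CAP_KW = 500.0      # generator capacity (kW)
PV_CAP_KW = 200.0       # photovoltaic capacity (kW)
FUEL_COST = 0.06        # cost per kWh of on-site generation (assumed)
CURTAIL_COST = 0.30     # penalty per kWh of curtailed load (assumed)
GRID_PRICE = 0.12       # volumetric grid price per kWh (assumed)

def scenario_cost(set_point, curtail, load, insolation):
    """Cost of one time step under one scenario.

    Electricity balance: load is met instantaneously by grid purchase
    + on-site generation + PV output + curtailed load.
    """
    pv_out = PV_CAP_KW * insolation
    purchase = max(0.0, load - set_point - pv_out - curtail)
    return purchase * GRID_PRICE + set_point * FUEL_COST + curtail * CURTAIL_COST

def optimal_dispatch(scenarios, steps=11):
    """Grid-search the generator set point and curtailment command that
    minimize cost averaged over the finite set of scenarios."""
    best = None
    for i in range(steps):
        g = GEN_CAP_KW * i / (steps - 1)
        for c in (0.0, 50.0):  # limited curtailment opportunity (assumed sizes)
            avg = sum(scenario_cost(g, c, L, s) for L, s in scenarios) / len(scenarios)
            if best is None or avg < best[0]:
                best = (avg, g, c)
    return best

random.seed(1)
# Each scenario: (electric load in kW, solar insolation fraction).
scenarios = [(random.uniform(600, 1000), random.uniform(0.0, 0.8)) for _ in range(20)]
cost, g, c = optimal_dispatch(scenarios)
```

With the assumed prices, curtailment (0.30 per kWh) is never worth the avoided purchase (0.12 per kWh), so the search keeps the curtailment command at zero; a practical algorithm would repeat this optimization at every time step as new forecasts arrive.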
V.I The Modular Integrated Energy System

Figure 2 shows the concept of a modular system in which all components would be skid mounted to facilitate transporting them to the installation sites and to simplify connection to the site's utilities. The initial concept was to have the entire IES mounted on one skid; however, a single-skid design is too large to ship and maneuver. Therefore, the following two-skid design was developed:

Figure 2 Integrated Energy Systems

On-site generation is only allowed when the DG system is available, and must be less than or equal to the capacity of the system. Availability at each time step and for each scenario is a binary variable equal to zero if the generator is unavailable and one if it is. Electricity loads must be met instantaneously by the sum of electricity purchase and on-site generation (including PV generation):

SP(electric load at time t) = electricity purchased at time t + dispatched generation at time t + PV capacity × SP(solar insolation at time t) + electric load (kWh) offset by the absorption chiller + magnitude (kW) of curtailed load

where SP denotes a stochastic parameter.

V.II Example: DSM Value When Coordinated With Intermittent DG

Considering a load of 1 MW in the locality of a large hotel, nursing home, large office and large school, for each case and scenario, a building energy simulation for each month of the study year was performed using hour-long time steps. Simulation entails

1) Considering forecasts of each parameter value at each time step for each of the S stochastic scenarios
2) Determining the optimal dispatch for the current time step
3) Executing the dispatch and recording the resulting system performance data

Other IES components considered were 1) a DG system: a 500 kW reciprocating engine with heat recovery and a 500 kW (capacity for heat removal) absorption chiller, and 2) a 200 kW photovoltaic system. For incentives, the DG system was constrained to utilize 60% of input fuel energy in the form of electricity or useful thermal energy. The following cases were considered:
1) Curtailment only
2) DG and curtailment
3) PV and curtailment
4) DG, PV, and curtailment

Table 1: Financial Value of Heat Recovery to DG Systems in Various Load Areas

VI. CONCLUSION

The IES dispatch optimization problem is a multi-stage problem (hundreds of stages) with several stochastic parameters. Vertically-integrated utilities have tackled similar (and much more complicated) problems by developing heuristic approaches (often tailored to specific systems), thus making the problem tractable. This research takes a different approach, developing a general method for solving multi-stage problems with multiple stochastic parameters.

The system met the stated goal of achieving 70% overall thermal HHV efficiency. The overall efficiency of actual installations will depend on the individual customer's thermal energy needs and the temporal patterns of those needs. Although this example should not be considered exemplary of the buildings, it does illustrate how valuable IES components and systems can be in a location with significant demand charges. This example suggests the proposed algorithm's usefulness as a screening and design tool for a wide variety of sites considering IES projects. For sites where this integrated approach suggests significant cost savings over current control strategies, this algorithm would be useful in the actual real-time dispatch of IES systems.

REFERENCES

[1] Ryan Firestone, Michael Stadler, and Chris Marnay, "Integrated Energy System Dispatch Optimization", IEEE International Conference, 2006, pp. 357-362
[2] Quelhas, A.; Gil, E.; McCalley, J.D.; Ryan, S.M.; "A Multiperiod Generalized Network Flow Model of the
U.S. Integrated Energy System", IEEE Transactions on Power Systems, 2007, pp. 829-836
[3] Stanislav, P.; Bryan, K.; Tihomir, M.; "Smart Grids better with integrated energy system", Electrical Power & Energy Conference (EPEC), 2009 IEEE, pp. 1-8
[4] Zhengping Xi; Parkhideh, B.; Bhattacharya, S.; "Improving distribution system performance with integrated STATCOM and super capacitor energy storage system", Power Electronics Specialists Conference, 2008. PESC 2008. IEEE, pp. 1390-1395
[5] Perry Tsao; Senesky, M.; Sanders, S.R.; "An integrated flywheel energy storage system with homopolar inductor motor/generator and high-frequency drive", IEEE Transactions on Industry Applications, 2003, pp. 1710-1725
[6] Martyak, M.S.; Devgan, S.S.; "An optimization model for the application of an integrated cogeneration-thermal energy storage system", IEEE Proceedings of Southeastcon '91, vol. 1, pp. 558-562
[7] Valsalam, S.R.; Muralidharan, V.; Krishnan, N.; Sarkar, T.K.; Khincha, H.P.; "Implementation of energy management system for an integrated steel plant", Energy Management and Power Delivery, 1998. Proceedings of EMPD '98. 1998 International IEEE Conference, vol. 2, pp. 661-666
[8] Mandi, R.P.; Yaragatti, U.R.; "Solar PV-diesel hybrid energy system for rural applications", IEEE Conference on Industrial and Information Systems (ICIIS), 2010, pp. 602-607
[9] K. Qiu and A.C.S. Hayden, "A Natural-Gas-Fired Thermoelectric Power Generation System", Journal of Electronic Materials, 2009, Volume 38, Number 7, pp. 1315-1319
Engineering Fluorescence Lifetimes of II-VI Semiconductor Core/Shell Quantum Dots

Gurvir Kaur
Department of Physics, Centre of Advanced Study in Physics, Panjab University, Chandigarh

S.K. Tripathi
Sant Longowal Institute of Engineering & Technology, Longowal, Sangrur

mailtogurvir@yahoo.com
ABSTRACT
Quantum dots (QDs) are nanocrystals of semiconductor materials. As each material has a different bandgap, the composition acts as an additional degree of freedom by which the properties of the QDs can be tuned. The work is concerned with the tuning of fluorescence emission as well as the relaxation lifetimes with growth of a different shell over the core nanocrystal of the II-VI group. In relevance to bulk band alignment, the systems with a CdSe core and a CdS, ZnSe or ZnS shell ensure a Type-I heterostructure. However, some semiconductor material combinations (e.g., ZnSe/CdS, ZnTe/CdSe, CdTe/CdSe) allow the charge carrier localization regime in core/shell HNCs to be gradually tailored from type-I to type-II by varying the shell thickness and core diameter. The study involves variation of the shell thickness on the CdSe core and systematic investigation of the steady-state and transient optical properties of the materials. The results present the suppression of the deep trap emission by passivation of most of the vacancies and trap sites on the crystallite surface, which enhances the number of initially populated excitonic states, resulting in emission dominated by band-edge excitonic recombinations.

Keywords: Quantum dots, Core/shell structures, Time resolved fluorescence.
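As a companion to the time-resolved fluorescence keyword above, the sketch below shows the generic way a lifetime is recovered from decay data; it is an illustration with assumed numbers, not the measurement procedure of this work. For an ideal mono-exponential decay I(t) = I0 exp(-t/tau), a straight-line fit to log I(t) returns the lifetime as -1/slope; real core/shell QD decays are typically multi-exponential and are fitted with sums of exponentials instead.

```python
import numpy as np

# Synthetic mono-exponential decay with an assumed lifetime of 20 ns.
tau_true = 20.0                       # lifetime in ns (assumed for illustration)
t = np.linspace(0.0, 100.0, 200)      # time axis in ns
intensity = 1000.0 * np.exp(-t / tau_true)

# log I(t) = log I0 - t/tau, so a degree-1 least-squares fit gives tau = -1/slope.
slope, intercept = np.polyfit(t, np.log(intensity), 1)
tau_fit = -1.0 / slope                # recovered lifetime in ns
```

With noisy photon-counting data the same idea is applied via nonlinear least squares, and an amplitude-weighted average lifetime is reported when several decay components are present.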

REFERENCES
[1] Javier A, Magana D, Jennings T, Strouse GF, Appl. Phys. Lett. 83 (2003) 1423.
[2] Donega CD, Hickey SG, Wuister SF, Vanmaekelbergh D, Meijerink A, J. Phys. Chem. B 107 (2003) 489.
[3] Wuister SF, Donegá CM, Meijerink A, J. Phys. Chem. B 108 (2004) 17393.
Human Values and Ethics in the Modern Technology Driven Global Society

Dr. Sunita Rani
Asstt. Professor in English, Punjabi University College of Engg. & Management, Rampura Phul, Bathinda
sunita.23cs@gmail.com

Vandana Sharma
Asstt. Professor in English, Bhai Gurdas Inst. of Engg. & Tech., Sangrur
vandana.bgiet@gmail.com

Neetika
Asstt. Professor in Computer Science, Punjabi University College of Engg. & Management, Rampura Phul, Bathinda
ABSTRACT

Human values are a set of consistent behaviors and measures that guide human beings in doing what is right and acceptable by the society. Values and ethics may be treated as keys to solving many world problems. They operate at the level of individuals, of institutions, and of entire societies. Scientific and technological advancement has revolutionized every aspect of human life. New and expanding technological capabilities confront people with ethical dilemmas. It has been felt that for the welfare of society, technocrats and engineers must be morally committed and conscious about human values and ethics to grapple with the ethical dilemmas they confront. The present paper endeavours to explore the significance of human values and ethics in the modern technology driven global society.

Keywords
Human values, Ethics, Morals, Technology, Engineering, Ethical Conduct.

1. INTRODUCTION

The present paper is an attempt to explore the importance of human values in the modern technology driven global society. There is no denying the fact that the present global society is facing a lot of crises. Human society may not sustain itself without human values and ethical conduct. Nowadays there is a lot of concern about techno-genic maladies such as energy and natural resource depletion, environmental pollution, global warming etc., all of which are manmade and are threatening the very existence and survival of man on earth. On the other hand there is a rapidly growing danger because of nuclear proliferation, terrorism, large scale corruption, scams, breakdown of relationships, depression etc. The root cause of all the above mentioned maladies and threats to human happiness and peace is the lack of understanding of human values and ethical conduct. Hence, it is necessary to talk on the subject and bring awareness of human values and ethics into the modern society. Our actions must increasingly be based on an acknowledgment of global and universally accepted values, because it is human values which are to be treated as the keys to solving the global problems.

2. VALUES, MORALS AND ETHICS

2.1. Human Values
Human values can be defined as the eternal qualities that an individual must possess for a quality life and which do not change with changes in the society or situation. Human values embrace the entire range of values pertinent to the human condition, interest, behavior, and aspiration. Values can be defined as desirable, trans-situational goals, varying in importance, that serve as guiding principles in people's lives. As stated by Rokeach, "The value concept… [is] able to unify the apparently diverse interests of all the sciences concerned with human behavior."1 Human values are a set of consistent behaviors and measures that guide human beings in doing what is right and acceptable by the society. They attract dignity, respect and appropriateness among people. Values are the essence of our personality, and influence us in making decisions, trusting people, and arranging our time and energy in our social life. Values may be treated as keys to solving many world problems. They operate at the level of individuals, of institutions, and of entire societies. Values are a motivational construct. They refer to the desirable goals people strive to attain. They transcend specific actions and situations. They are abstract goals. The abstract nature of values distinguishes them from concepts like norms and attitudes, which usually refer to specific actions, objects, or situations. Values guide the selection or evaluation of actions, policies, people, and events and serve as standards or criteria. They are ordered by importance relative to one another. People's values form an ordered system of value priorities that characterize them as individuals. This hierarchical feature of values also distinguishes them from norms and attitudes.

According to Schwartz, values are "responses to three universal requirements with which all individuals and societies must cope: needs of individual biological organisms, requisites of coordinated social interaction and requirements for smooth functioning and survival of groups"2. Ten motivationally distinct, broad and basic values are derived from these three universal requirements of the human condition. Schwartz details the derivations of these ten basic values.3 Each of the ten basic values can be characterized by describing its central motivational goal:
• Self-Direction: Independent thought and action; choosing, creating, exploring.
• Stimulation: Excitement, novelty, and challenge in life.
• Hedonism: Pleasure and sensuous gratification for oneself.
• Achievement: Personal success through demonstrating competence according to social standards.
• Power: Social status and prestige, control or dominance over people and resources.
• Security: Safety, harmony, and stability of society, of relationships, and of self.
• Conformity: Restraint of actions, inclinations, and impulses likely to upset or harm others and violate social expectations or norms.
• Tradition: Respect, commitment, and acceptance of the customs and ideas that traditional culture or religion provide the self.
• Benevolence: Preserving and enhancing the welfare of those with whom one is in frequent personal contact (the 'in-group').
• Universalism: Understanding, appreciation, tolerance, and protection for the welfare of all people and for nature.

Figure 1. Theoretical model of relations among ten motivational types of values

In addition to identifying ten motivationally distinct basic values, Schwartz's Values Theory explicates a structural aspect of values, namely, the dynamic relations among them. Actions in pursuit of any value have psychological, practical, and social consequences that may conflict or may be congruent with the pursuit of other values. The circular structure in Figure 1 portrays the total pattern of relations of conflict and congruity among values postulated by the theory. The circular arrangement of the values represents a motivational continuum. The closer any two values in either direction around the circle, the more similar their underlying motivations. The more distant any two values, the more antagonistic their underlying motivations.
Human values evolve because of the following factors:
• The impact of norms of the society on the fulfillment of the individual's needs or desires.
• Developed or modified by one's own awareness, choice, and judgment in fulfilling the needs.
• By the teachings and practice of Preceptors (Gurus) or Saviors or religious leaders.
• Fostered or modified by social leaders, rulers of kingdoms, and by law (government).

2.2. "Ethics" or "Morals"
We regularly use these two terms interchangeably to mean those habits or customs that are standards of good conduct or character. Ethics refers to a body of moral principles. To be ethical is to do the right thing; to consider the well-being of others as equal to your own; and to act in ways that aim to maximize the good. To be ethical is to be righteous, in the sense that our conduct and character are grounded on principle and a commitment to doing our duty regardless of narrow self-interest. To be moral is to be fair and considerate of others, particularly to show them the respect we ourselves demand, which acknowledges rights to life, liberty and property. Morals and ethics are self-imposed or self-regulated and voluntary when broadly interpreted. Engineering educator P. Aarne Vesilind defines ethics as "the study of systematic methodologies which, when guided by individual moral values, can be useful in making value-laden decisions".4 Work ethic is a set of values based on hard work and diligence. It is also a belief in the moral benefit of work and its ability to enhance character. A code of ethics prescribes how professionals are to pursue their common ideal so that each may do the best at a minimal cost to oneself and those they care about. An individual in his professional capacity has responsibility for the regular tasks he is assigned and for the outcomes of his actions and decisions. A professional has obligations to the employer, to customers, and to other professionals: colleagues with specific expectations of reciprocity. He is answerable and liable for his actions. He should have the capacity and moral strength to defend his actions and decisions.

2.3. The Ethics of Technology
It is a sub-field of Ethics and generally sub-divided into two areas:
• The ethics involved in the development of new technology – whether it is always, never, or contextually right or wrong to invent and implement a technological innovation.
• The ethical questions that are exacerbated by the ways in which technology extends or curtails the power of individuals – how standard ethical questions are changed by the new powers.

2.4. Engineering Ethics
It is defined as "(1) the study of the moral issues and decisions confronting individuals and organizations involved in engineering; and (2) the study of related questions about moral conduct, character, policies, and relationships of people and corporations involved in technological activity".5 Engineering ethics is a form of professional ethics, however, which requires reflection on the specific social role of engineers. "Engineering ethics is a type of professional ethics and as such must be distinguished from personal ethics and from the ethical obligations one may have as an occupant of other social roles. Engineering ethics is concerned with the question of what the standards in engineering ethics should be and how to apply these standards to particular situations."6 An ethical individual is inspired by a vision of excellence, and being ethical and adopting the moral point of view define the essence of a good and happy life. The scope of professional ethics envelopes diverse activities like
• Engineering as social experimentation
• Responsibility of technocrats and engineers for safety
• Role of engineers, managers, consultants etc.
• Rights of professionals and engineers
• Moral reasoning and ethical theories
• Responsibility to employers
• Global issues and concerns

3. NEED OF VALUES AND ETHICS FOR ENGINEERS

Natural sciences and engineering are important forces shaping the future of humanity. Technology has a pervasive and profound impact on the contemporary world. Technocrats and engineers have made possible the spectacular human triumphs once dreamed of only in myth and science fiction. The technological advances have significantly improved our
Technological advances have significantly improved our quality of life in ways so numerous that we cannot imagine the modern world without them. However, new challenges, opportunities and threats continue to confront humanity with the advent of newer technologies. Very often technological development is Janus-faced and morally ambiguous, because time is not devoted to research on the social, economic and medical impacts it may have on people's lives. New and expanding technological capabilities confront people with ethical dilemmas. Technocrats and engineers are expected to exhibit the highest standards of honesty and integrity. It has been felt that, for the welfare of society, technocrats and engineers must be morally committed and conscious of human values and ethics if they are to grapple with the ethical dilemmas they confront. The services they provide require honesty, impartiality, fairness and equity, and they must be dedicated to the protection of public health, safety and welfare. Engineers must perform under a standard of professional behavior that requires adherence to the highest principles of ethical conduct.

4. CAN VALUES AND ETHICS BE TAUGHT?

Keeping in mind this enriched view of ethics, one may wonder whether ethics can be taught. There can be no doubt that the answer is certainly yes! Each generation recognizes the need to prepare the next one for the responsibilities it must assume in protecting critical human values, in maintaining order and in reducing conflict. In the professions, especially in engineering, preparing aspiring professionals to assume the mantle of responsibility that is central to professional ethics is crucial. While the education of engineers focuses almost exclusively on developing the technical capacities of aspiring engineers for solving a host of technical problems facing society, it has not sufficiently advanced the moral character of those who call themselves "Engineers". Teaching values and engineering ethics in academic institutions is therefore undertaken largely through case studies, to create awareness interactively among engineering students of all disciplines. By studying value education and engineering ethics, students develop awareness of, and skill in assessing, the likely impact of their future decisions on moral and ethical grounds. Ethical standards in engineering are influenced by many factors:
1. Engineering as experimentation for the good of mankind is a notable factor involving far-reaching consequences.
2. Ethical dilemmas make engineering decisions relatively difficult to make.
3. The risk and safety of citizens, as a social responsibility, is a prime concern of the engineer.
4. Technological advancement can be very demanding on engineering skill in the global context.
5. Moral values and responsible conduct play a crucial role in decision making.

The study of engineering ethics within an engineering program helps students prepare for their professional lives. A specific advantage for engineering students who learn about ethics is that they develop clarity in their understanding of, and thought about, ethical issues and the practice in which they arise. The study of ethics helps students to develop widely applicable skills in communication, reasoning and reflection. These skills enhance students' abilities and help them engage with other aspects of the engineering program, such as group work and work placements. And while the education of engineers focuses almost exclusively on developing the technical capacities of aspiring engineers for solving a host of technical problems facing society, it has not sufficiently advanced the moral character of those who call themselves "Engineers".

5. CONCLUSION

Human values and ethics occupy a significant position in society. Values are a cognitive structure that describes individuals' ideals of life, their preferences, priorities, principles and behaviour. Human value is a theory about "what things in the world are good, desirable, and important" [7]. Today's human society is deeply engrossed in materialism, and human values are withering very fast. Scientific and technological advancement has revolutionized every aspect of human life and has contributed much towards making a better and more efficient world, but the craze for materialism has been quite detrimental to the traditional social and moral system. Loss of moral integrity has always been responsible for the destruction of civilizations in the past. Consequently, human values play a vital role in both the integrity and the longevity of any human society. Today, efforts are required to impart value education that makes young technocrats aware of their social responsibilities and tells them that a person is first a human being and only then a scientist, engineer, technocrat or professional. It is evident that human values and engineering ethics are a roadmap for the behavior of technocrats and engineers, pointing out the values and traditions of the profession in leading humanity to make crucial choices and confront the challenges necessary for a better and more meaningful life. In the reports commissioned by the Club of Rome there is a concern for developing a "new world consciousness..., a new ethic in the use of material resources, a new attitude towards nature, based on harmony rather than on conquest ... a sense of identification with future generations" [8] to avoid the global catastrophe that unrestrained economic growth could cause. The academic engineering community has the difficult and responsible task of ensuring that future practitioners of the profession are educated and equipped with the skills of confronting ethical problems and examining standards of conduct with critical thinking, and with the competence and ability that are illustrated and taught in engineering classes. It is evident that, at the end of the day, it is human values and ethics which will save mankind.

REFERENCES
[1] Rokeach, M. 1973. The Nature of Human Values. New York: Free Press. p. 3.
[2] Schwartz, S. H. "Are There Universal Aspects in the Content and Structure of Values?" Journal of Social Issues. April 1994. Volume 50. Issue 4. pp. 19-45.
[3] Schwartz, S. H. "Basic Human Values: Their Content and Structure Across Countries." French Journal of Sociology, 2006. Volume 47. Issue 4. pp. 21-55.
[4] Vesilind, P. Aarne. 1988. "Rules, Ethics and Morals in Engineering Education." Engineering Education, February. p. 290.
[5] Martin, Mike W. and Roland Schinzinger. 1996. Ethics in Engineering. New York: McGraw-Hill. p. 23.
[6] Harris, Charles E., Jr., Michael S. Pritchard, and Michael Rabins. 1995. Engineering Ethics: Concepts and Cases. Belmont, CA: Wadsworth. p. 14.
[7] Sinha, S.C. 1990. Anmol's Dictionary of Philosophy. New Delhi: Anmol Publications. p. 196.
[8] Mesarovic, M.D. and Pestel, E. 1974. Mankind at the Turning Point. New York: E.P. Dutton; quoted from Fromm. 1988. p. 148.


RFID: A Boom for Libraries


Arvind Mittal Amit Mittal Uma Sharma
Senior Librarian Assistant Librarian Assistant Librarian
Bhai Gurdas Group of Main Library Bhai Gurdas Institute of
Institutions, Sangrur Punjabi University, Patiala Engg. & Tech. Sangrur
arvindmittal14@gmail.com amitmittal96@gmail.com umasharma762@gmail.com

ABSTRACT
As we know, libraries are always ready to face new challenges in providing the best services to their patrons. As technology develops, libraries too adopt new technologies and convert themselves into modern libraries. Among the new technologies, Radio Frequency Identification (RFID) is the one most widely used by libraries. The present paper provides a brief introduction to RFID and its applications in libraries. An RFID-based Library Management System provides timely and easy services to library users. The RFID system enhances the efficiency of the circulation counter with a self check-out/check-in system, fast acquisition and inventory control, and easy stock verification of the library material. The system also provides proper security against theft.

Keywords
RFID, Library Security, Library services

1. INTRODUCTION
Technologies now change in the blink of an eye, and our lifestyle has changed with them: everyone wants quick and easy access to everything. In library science, the technology that has changed the pattern of libraries is RFID (Radio Frequency Identification). RFID has speeded up the working environment of the library: easy location of any book on the stacks, multiple books issued at the same time, and easy and fast stock verification of the library material. For these tasks an automatic data-capture technique is applied that relies on radio-frequency electromagnetic fields; this technology is known as Radio-Frequency Identification, or RFID. It is also an antitheft tool.

2. HISTORY OF RFID
This technology is not very old. It was first used in 1940 to identify airplanes during the Second World War; after that it remained unused and undeveloped for some time. After thirty years the technology was again used for military purposes. Now it is used in many different places and is regularly updated. In 1990, standardization for the interoperability of RFID tools began, and in 1999 MIT (Massachusetts Institute of Technology) established its Auto-ID Center for standard automatic identification. In 2004 the MIT Auto-ID work turned into the worldwide EPC (Electronic Product Code) standard. Since 2005 this technology has been used in almost every field, such as offices, shopping malls and banks, and is extensively used in libraries.

3. RFID SYSTEM
RFID is a combination of two parts: one is a microchip and the other is an antenna. The microchip holds the EPC and other related information, while the antenna assists in transmitting the data on the chip. Together these constitute an RFID tag. The antenna transmits the information on the chip to an RFID reader, which converts the radio waves returned from the RFID tag into a usable form of data. The RFID system consists of the following components, which jointly make the system effective.

3.1. RFID Tags
This is a silicon chip encapsulated in glass or plastic; it is a flexible, paper-thin smart label, approximately 2x2 inches in size, that is applied directly on the books. Each tag contains a microchip and an antenna. This assembly is generally covered with a protective overlay. It can be applied to any product, animal or person for the purpose of identification using radio waves. Some tags have enough power that they can be read from many meters away.

Figure 1: RFID Tag

3.2. RFID Readers
Generally there are two types of RFID readers: passive and active. A passive reader supplies energy to a tag that has no energy source of its own; such tags use backscatter technology to forward information to the reader. An active reader, on the other hand, receives energy transmitted from an active RFID tag that has its own built-in power. Different types of RFID readers are used in the libraries.
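To make the tag-reader relationship described above concrete, here is a minimal, hypothetical Python sketch. The field names, EPC strings and catalogue contents are illustrative placeholders, not details from the paper: a tag is modelled as a record carrying an EPC plus item data, and "reading" a tag becomes a lookup in the library's database.

```python
from dataclasses import dataclass

@dataclass
class RFIDTag:
    """Data a library tag's microchip might carry (hypothetical fields)."""
    epc: str           # Electronic Product Code stored on the chip
    accession_no: str  # library accession number of the book
    issued: bool       # circulation status flag

# A toy stand-in for the system database, keyed by EPC (placeholder values).
catalogue = {
    "EPC-0001": RFIDTag("EPC-0001", "ACC-1001", issued=False),
    "EPC-0002": RFIDTag("EPC-0002", "ACC-1002", issued=True),
}

def read_tag(epc: str) -> RFIDTag:
    """A reader converts the tag's radio reply into usable data: here, a lookup."""
    return catalogue[epc]

print(read_tag("EPC-0002").issued)  # → True
```

In a real installation the reader hardware would return raw EPC bytes and the lookup would go through the integrated library software, but the data flow is the same.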


Figure 2: RFID Gate

The RFID gate works to control theft in the libraries. At the exit gate, a sensor receives the information of the exiting item and sends it to the server; the server then checks whether the item has been properly issued and sends this information back to the gate. If the item is issued, the gate allows it to exit; if the item is not issued, the gate gives an alarm indicating that the particular item has not been issued.

Figure 3: RFID Handheld Reader

The RFID handheld scanner is used to identify a particular item on the shelf or to find the proper place of any item on the shelf. We can also find a particular item among a bulk of items even if it is in the wrong place, and we can use the scanner for the annual stock inventory.

3.3. Server System
The server is the main part of the RFID system. It is the communication gateway among the various components. It receives information from one or more of the readers and exchanges information with the system database. Its software includes SIP/SIP2 (Standard Interchange Protocol), APIs (Application Programming Interfaces) and NCIP or SLNP, which are necessary to interface it with the integrated library software.

4. Implementation of RFID in Libraries
Radio Frequency Identification (RFID) systems use readers and cards or tags. Data is stored on an electronic data-carrying device. The power supply to the data-carrying device and the data exchange between the data-carrying device and the reader are achieved without the use of galvanic contacts, using instead magnetic or electromagnetic fields. The underlying technical procedure is drawn from the fields of radio and radar engineering. Communication between transponder and reader is realized through the same electromagnetic field that is used for supply: the reader transmits an interrogation signal, and the transponder answers by communicating the data stocked in its EEPROM memory. This is the communication method used for most transponders. The LC circuit equivalent to the transponder's antenna resonates at the frequency f0; the generator, at the reader's signal, injects power into its antenna, and the power, and therefore the field created by the coil, is at this frequency. RFID systems use passive and active transponders. Active transponders are used for long-range applications, but because a battery is included their cost is much higher. Passive transponders do not contain their own source of supply; the energy necessary for functioning is drawn from the electromagnetic field of the antenna of a reader placed in close vicinity, with no electrical contact between transponder and reader. In order to obtain this energy from the electromagnetic field, the transponder uses an antenna equivalent to a resonance circuit consisting of a coil and a capacitor, a circuit tuned to the frequency of the electromagnetic field issued by the reader. Magnetic field lines issued by the reader antenna cross the coil turns, inducing an electric signal that is rectified and internally stabilized on the chip and then used for the entire transponder supply. When the transponder is in the magnetic field created by the reader, energy absorption by the resonant circuit takes place at that frequency. This absorption, although rather small, is nevertheless perfectly detectable, as a voltage drop at the terminals of the inductive coil (the reader's antenna) or as an increase of the power through it. What really happens is a drop of the equivalent load resistance at the terminals of the amplifier which supplies the reader's antenna. In order to send data from transponder to reader, it suffices that the LC circuit be detuned in accordance with the data to be sent. Thus, the signal at the input of the reader's antenna undergoes a mild amplitude modulation; this amplitude-modulated (AM) signal can then be separated and demodulated to retrieve the data. In conclusion, RFID is a technology that enables wireless data capture and transaction processing at a very attractive cost. There are two main areas of application, defined broadly as proximity (short range) and vicinity (long range). Long-range or vicinity applications can generally be described as track-and-trace applications, but the technology provides additional functionality and benefits for product authentication. Short-range or proximity applications are typically access-control applications.

4.1. Check in, check out and sorting
The RFID system allows self check-in and check-out of reading material in the library. In this system there is a self-issue facility for books, where any user can issue his/her books by placing the ID card and the books on the station; many books can even be issued at a single time.

Figure 4: Self Issue system

In the self-returning system, drop boxes are installed by the library at different locations across the whole campus. Users can drop issued books in any drop box at any time, and at the same time the book or books are returned from the user's account.

Figure 5: Drop Box for Self Return
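The tuned-circuit coupling described in Section 4 can be illustrated with a short worked computation: for a coil of inductance L and a capacitor C, the resonant frequency is f0 = 1/(2π√(LC)). The component values below are illustrative placeholders chosen to land near the 13.56 MHz HF band commonly used by library RFID; they are not values from the paper.

```python
import math

def resonant_frequency(L_henry: float, C_farad: float) -> float:
    """Resonant frequency f0 = 1 / (2*pi*sqrt(L*C)) of the tag's LC antenna circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# Illustrative values: a 2.5 uH coil with a 55 pF capacitor.
L = 2.5e-6   # henries
C = 55e-12   # farads
f0 = resonant_frequency(L, C)
print(f"f0 = {f0 / 1e6:.2f} MHz")  # → f0 = 13.57 MHz
```

Detuning this circuit slightly, as the text describes, shifts the absorption the reader sees, which is exactly how the tag signals its data back.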


The self-sorting system allows the implementation of self-sorting machines that sort every returned book according to its subject and category.

Figure 6: Self Sorting System

5. Advantages of RFID in Libraries
Basically, RFID is mainly useful for theft security, inventory management, user autonomy, automated material handling, charging/discharging transactions, etc. The main advantages of the RFID system are as follows:

• The RFID system speeds up the circulation system of the library by installing self check-in/check-out counters as well as drop boxes at different locations in the campus.
• Wrongly placed material on the stacks is found quickly.
• Stock inventory in the libraries is easy and fast.
• The time of users as well as library staff is saved through the self check-in/check-out counters and drop boxes.
• It replaces older library technologies like barcodes and EM security strips.
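The check-out, self-return and exit-gate behaviour described in this paper amounts to maintaining and querying an issued/not-issued status per item. The following sketch shows that decision logic; all names and the dictionary "database" are hypothetical stand-ins for the real server and its SIP2/NCIP traffic.

```python
# Minimal sketch of the self check-out and exit-gate decision logic.
# The data model below is a placeholder, not a real SIP2/NCIP client.
books = {"B1": {"issued_to": None}, "B2": {"issued_to": None}}

def self_checkout(patron_id: str, book_ids: list) -> None:
    """Patron places the ID card and several books on the reader; all issue at once."""
    for bid in book_ids:
        books[bid]["issued_to"] = patron_id

def gate_allows_exit(book_id: str) -> bool:
    """Exit gate asks the server whether the item was properly issued;
    an un-issued item would instead trigger the alarm."""
    return books[book_id]["issued_to"] is not None

self_checkout("P42", ["B1", "B2"])   # multiple books issued in one transaction
print(gate_allows_exit("B1"))        # → True: gate lets the item out
books["B2"]["issued_to"] = None      # simulate a drop-box self return
print(gate_allows_exit("B2"))        # → False: alarm would sound
```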


Toolkit for Fast Neutron Removal Cross-Section


Kulwinder Singh Mann Manmohan Singh Heer Asha Rani
Department of Physics, D.A.V. Department of Physics, Kanya Department of Applied Sciences,
College, Bathinda-151001, Punjab, Maha Vidyalaya, Jalandhar- Ferozpur College of Engineering
India 144001, India and Technology,
manmohan.heer@yahoo.com Ferozpur-142052, India
ksmann6268@gmail.com
ashasachdeva78@gmail.com

ABSTRACT
For the calculation of the macroscopic effective mass removal cross-section (ΣR/ρ, in cm2/g) and the macroscopic effective removal cross-section (ΣR, in cm-1) of mixtures and composite materials, the computer programs MERCSF-N and NXcom are available. A toolkit named WinNC-toolkit has been designed by reengineering these programs, with the objective of providing the missing values of the parameter for nine elements and improving the user interface. The estimation of the parameters was done using a bi-quadratic polynomial fitting interpolation method. The program is designed in MS-Excel 2007 and has some extended capabilities over the previous programs: it simplifies the input procedure, provides tabulated and graphical results, and calculates the relaxation length λ (cm). In addition, the present work reports calculated values of ΣR/ρ for some compounds of those nine elements (Tc, Pm, Po, At, Rn, Fr, Ra, Ac and Pa) for the first time. The toolkit results for effective removal cross-sections show excellent agreement with manually calculated values of the same parameter. The WinNC-toolkit is useful for investigating the neutron-shielding behaviour of engineering materials used in nuclear reactors.

Keywords
WinNC-toolkit, Effective removal cross-section, Fast neutron attenuation parameters, Fast-neutron shielding.

1. INTRODUCTION
Shielding material for a nuclear reactor requires hydrogenous materials, heavy-metal elements and other neutron absorbers. An approximate method for calculating the attenuation of fast neutrons (2-12 MeV) is the macroscopic effective removal cross-section concept. Theoretical calculations of the fast neutron removal cross-sections [ΣR (cm-1)] of various shielding materials (compounds, composites and mixtures) have attained great importance, as values calculated using the additivity formula [1, 2] have been found in good agreement with experimental measurements [3-8]. For calculations of the removal cross-sections, El-Khayatt et al. [9, 10] developed two computer programs for fast neutrons, MERCSF-N and NXcom. In these programs, the required physical data for the calculations had been collected from the literature [11-13]. However, working with these programs is not user friendly, as they require the preparation of an input data file with a specific format, which is time consuming and error prone. Moreover, the mass removal cross-section values of nine elements (Tc, Pm, Po, At, Rn, Fr, Ra, Ac and Pa) have not been included in the database files of these programs. Recently, a computer program ParShield [14] was developed using the same equations and the same database files as MERCSF-N and NXcom; the only difference between them is that ParShield was written in C++, while MERCSF-N and NXcom were written in FORTRAN 90. Because the mass removal cross-section values of the nine elements are missing from the database files (zero values were assigned to these nine elements), these programs are incapable of calculating the removal cross-section of compounds which contain one or more of these elements. The present study reports a new program, reengineered with the aim of transforming NXcom to the Windows platform, modernizing its user interface, updating the database file of ΣR/ρ values and simplifying the input procedure of the data file. The present work is an extension of our previously published study on gamma-ray interaction parameters, the GRIC-toolkit [15]. The WinNC-toolkit is capable of computing different interaction parameters for fast neutrons.

2. THE WINNC-TOOLKIT
The WinNC-toolkit has been programmed in MS Excel 2007 to facilitate improved automation of the calculations and results. The database file required in the execution of the toolkit has been constructed in the form of a matrix containing several physical quantities, such as ΣR/ρ, atomic number, atomic mass and density, as its matrix elements. The input data of the sample material (composite material, mixture or alloy), namely its compound or elemental fractional composition (by wt.) and its density, are required as input. The toolkit opens in MS-Excel 2007 or above in a user-friendly window. It can calculate the value of the macroscopic effective mass removal cross-section (ΣR/ρ, in cm2/g), the macroscopic effective removal cross-section (ΣR, cm-1) and the removal relaxation length (λ, in cm) for any compound, mixture or composite material. In addition, the toolkit presents the output of the results in both tabulated and graphical forms. Fig. 1 shows the flow chart of the execution of the WinNC-toolkit program.

2.1 Values of macroscopic effective mass removal cross-section, ΣR/ρ (cm2/g)
The values of ΣR/ρ for the elements Tc, Pm, Po, At, Rn, Fr, Ra, Ac and Pa were missing (zero values were assigned) in the database file of the previous programs. In the present work, values of ΣR/ρ for the missing elements have been interpolated using the bi-quadratic polynomial fitting method with the LINEST function of Excel, and thereby updated in the database file of the toolkit. Thus the present toolkit contains a database file for a complete range of macroscopic cross-sections of all the elements up to uranium (U). Fig. 2 authenticates the use of the bi-quadratic polynomial fitting method for the interpolation of the ΣR/ρ (cm2/g) values of some elements (with Z = 5, 10, 15, 20 and 25) whose ΣR/ρ values are already available in the literature [10]. Fig. 3 (a-c) shows the interpolation of the ΣR/ρ values of the elements with Z = 43, 61, 84, 85, 86, 87, 88, 89 and 91.
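The interpolation step just described can be sketched outside Excel as well. The snippet below is a schematic stand-in: NumPy's least-squares polyfit plays the role of Excel's LINEST, and the y-values are a made-up smooth function of Z, not the published ΣR/ρ data. It shows how a low-order polynomial fitted to known (Z, value) points can be evaluated at a missing Z.

```python
import numpy as np

# Synthetic stand-in data: a smooth quadratic trend in atomic number Z.
# (Placeholder values; the real fit uses literature SigmaR/rho data.)
z_known = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
y_known = 0.08 - 2.0e-3 * z_known + 3.0e-5 * z_known**2   # hypothetical trend

coeffs = np.polyfit(z_known, y_known, deg=2)   # least-squares fit, like LINEST
y_at_12 = np.polyval(coeffs, 12.0)             # interpolate at a "missing" Z

# Because the synthetic data are exactly quadratic, the fit recovers them.
expected = 0.08 - 2.0e-3 * 12.0 + 3.0e-5 * 12.0**2
print(abs(y_at_12 - expected) < 1e-9)  # → True
```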


2.2 Calculation of macroscopic effective removal cross-section, ΣR (cm-1)
The macroscopic effective removal cross-section, ΣR, is defined as the probability that a fast or fission-energy neutron undergoes a first collision, which removes it from the group of penetrating, uncollided neutrons [16]. The WinNC-toolkit calculates the value of ΣR (cm-1) for fast neutrons for any compound, mixture or composite material using the updated database file of ΣR/ρ and the additivity formula given by Eq. (1) [1, 2]:

    ΣR = Σi ρi (ΣR/ρ)i                  (1)

where ρi is the partial density (g cm-3) (the density of each element as it appears in the sample material), ρ refers to the material density (g cm-3) and (ΣR/ρ)i is the macroscopic effective mass removal cross-section of the ith constituent.

2.3 Calculation of removal relaxation length, λ (cm)
The average distance travelled by a neutron before an interaction is known as the Mean Free Path (mfp) for that interaction. The mfp is also termed the 'removal' relaxation length, λ (cm), defined by Eq. (2) [17]:

    λ = 1 / ΣR                          (2)

2.4 Calculation of ΣR/ρ (cm2/g) of a sample
The ΣR/ρ (cm2/g) of a sample is given by

    ΣR/ρ = ΣR / ρ                       (3)

where ρ is the sample density. For a sample whose mass density is unknown, by putting the value of the density for that sample as one (1 g cm-3), the toolkit returns the ΣR/ρ value in place of ΣR and provides the results.

Table 1. Calculated values of ΣR/ρ for some compounds.

Compound Formula    ΣR/ρ (cm2/g)
AcH2                0.0151
AcF3                0.0151
AcCl3               0.0148
Ac2S3               0.0130
Ac2O3               0.0128
Fr2S                0.0112
Fr3P                0.0109
PoH2                0.0159
PoCl2               0.0140
PoCl4               0.0163
PoBr4               0.0142
PoI2                0.0119
PoI4                0.0124
PoO2                0.0143
PmCl3               0.0177
Pm2O3               0.0163
PaCl4               0.0156
Tc(CN)2             0.0261
TcO2.2H2O           0.0389
TcO3                0.0233
Tc2O7               0.0242
Tc2S7               0.0217
TcS2                0.0200
Tc(MnO4)2           0.0264
TcCO3               0.0244
Tc3N2               0.0175
TcCl4               0.0210
Tc3(AsO4)2          0.0213
Tc3As2              0.0157

Table 2. Comparison of calculated values of effective removal cross-sections using the toolkit.

Concrete Samples      Density (g cm-3)   ΣR (cm-1), Manually [18]   ΣR (cm-1), WinNC-toolkit   ΣR (cm-1), Experimentally [18]
Ordinary-concrete     2.30               0.0930                     0.0937                     0.1083
Hematite-serpentine   2.50               0.0978                     0.0967                     0.1160
Ilmenite-limonite     2.90               0.0943                     0.0950                     0.1433
Basalt-magnetite      3.05               0.1108                     0.1102                     0.1270


Ilmenite              3.50               0.1111                     0.1121                     0.1625
Steel Scrap           4.00               0.1226                     0.1247                     0.1654
Steel-magnetite       5.11               0.1421                     0.1420                     0.1680

3. WORKING PROCEDURE OF WINNC-TOOLKIT
The input procedure for feeding the data (the information about the samples) is very simple and user friendly compared to the old programs like NXcom. The program structure is such that, after pressing the TAB key on the keyboard, the cursor moves to the box (cell) where the user can feed the data as instructed at each step. The user has to feed the information about the sample, such as its Name, Density (g cm-3) and Fractional Composition (either compound or elemental), and should save the files as WinNC-Toolkit_S1, WinNC-Toolkit_S2 and so on for the different samples. After the input data are complete, the toolkit automatically calculates (within a fraction of a second) the values of the removal cross-sections for the sample. The tabulated results can be seen by clicking on the tab 'OUTPUT' at the bottom of the window, as indicated in Fig. 4. The graphical results can be seen by clicking on the tab 'GRAPH_Ele.' if an elemental composition is provided, or on the tab 'GRAPH_Comp.' if a compound composition is provided. The procedure for using the WinNC-toolkit will be clearer with the following example and Fig. 4, in which the data for a cement sample are input using its compound chemical composition. Start the program and feed the information into the specified boxes; the control goes to the next box on pressing TAB; finally, click on ENTER COMPOSITION. Input: a new window opens which requires the compound chemical composition of the sample; replace the data already present in it. If the numerical total of the fractional composition ≠ 1, the top and bottom of the window show the warning "Please Recheck, The Composition is Incomplete", and the numerical value of the pending composition is also indicated. Output: click on the tab 'OUTPUT' and the result for the sample appears in tabular form; for the output of the results in graphical form, click on 'GRAPH_Comp.' to obtain results as shown in Fig. 4. If the user encounters any problem or difficulty, he/she should feel free to contact the authors.

4. RESULTS AND DISCUSSION
The WinNC-toolkit provides a Windows platform and a user-friendly interface for the investigation of the fast-neutron shielding behaviour of composite materials using their chemical composition (by wt.). The toolkit has been used to calculate, for the first time, ΣR/ρ values for some compounds which contain elements missing from the database files of the MERCSF-N and NXcom programs; these values are listed in Table 1. Thus the reengineered program WinNC-toolkit has the capability to investigate the fast-neutron attenuation parameters in mixtures and composite materials of the elements from 1H to 92U. Table 2 shows the excellent agreement of the WinNC-toolkit calculated values with the literature values of the effective removal cross-sections for seven concrete samples [18].

5. CONCLUSIONS
The WinNC-toolkit is a user-friendly computer program for the quick investigation of fast-neutron shielding parameters [such as the macroscopic effective removal cross-section ΣR (cm-1), the effective mass removal cross-section ΣR/ρ (cm2/g) and the removal relaxation length λ (cm)] for any composite material, using its chemical composition (by wt. fraction). Some specific conclusions of the present work are as follows:

• Using the bi-quadratic polynomial fitting interpolation method, values of ΣR/ρ for 43Tc, 61Pm, 84Po, 85At, 86Rn, 87Fr, 88Ra, 89Ac and 91Pa have been added to the updated database of the previous programs (MERCSF-N and NXcom).
• The toolkit, a reengineering of the NXcom program in MS-Excel 2007, improves the user interface and provides results for those composites which contain one or more of the following elements: Tc, Pm, Po, At, Rn, Fr, Ra, Ac and Pa.
• The output of the WinNC-toolkit can easily be transferred to a new MS-Excel file.
• The good agreement between the manually calculated literature values and the toolkit results for effective removal cross-sections confirms and authenticates the toolkit for use.
„GRAPH_Comp.‟ the obtained results as shown in Fig.4. If

Fig.1: Flow-Chart of the program
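The calculations behind Eqs. (1) and (2), a weighted sum over the constituents followed by a reciprocal, can be sketched in a few lines. The composition and the ΣR/ρ values below are illustrative placeholders, not entries from the toolkit's database.

```python
# Sketch of Eq. (1): Sigma_R = sum_i rho_i * (Sigma_R/rho)_i, with rho_i = w_i * rho,
# and Eq. (2): lambda = 1 / Sigma_R. All numbers below are illustrative.
def removal_cross_section(density: float, fractions: dict, mass_removal: dict) -> float:
    """Macroscopic effective removal cross-section (cm^-1) from weight fractions."""
    return sum(density * w * mass_removal[el] for el, w in fractions.items())

def relaxation_length(sigma_r: float) -> float:
    """Removal relaxation length lambda (cm) = 1 / Sigma_R."""
    return 1.0 / sigma_r

fractions = {"H": 0.10, "O": 0.90}      # weight fractions (must sum to 1)
mass_removal = {"H": 0.60, "O": 0.04}   # placeholder Sigma_R/rho values, cm^2/g
rho = 1.0                               # sample density, g/cm^3

sigma_r = removal_cross_section(rho, fractions, mass_removal)
print(round(sigma_r, 4), round(relaxation_length(sigma_r), 3))  # → 0.096 10.417
```

Setting rho to 1, as Section 2.4 notes, makes the returned value numerically equal to ΣR/ρ of the sample.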


Fig. 2

Fig. 3a, Fig. 3b, Fig. 3c


Fig. 3


5. ACKNOWLEDGMENTS

Our thanks to Dr. A.M. El-Khayatt for providing the NXcom program.

REFERENCES

[1] J. Wood, Computational Methods in Reactor Shielding, Pergamon Press, Inc., New York, USA, 1982.
[2] M.F. Kaplan, Concrete Radiation Shielding, John Wiley & Sons, New York, USA, 1989.
[3] A.M. El-Khayatt, A. El-Sayed Abdo, "MERCSF-N calculation program for fast neutron removal cross-sections in composite shields," Ann. Nucl. Energy, 36 (6), pp. 832-836, 2009.
[4] E. Yilmaz, H. Baltas, E. Kirisa, İ. Ustabas, U. Cevik, A.M. El-Khayatt, "Gamma ray and neutron shielding properties of some concrete materials," Ann. Nucl. Energy, 38 (10), pp. 2204-2212, 2011.
[5] G. Osman, A. Bozkurt, E. Kamc, T. Korkut, "Determination and calculation of gamma and neutron shielding characteristics of concretes containing different hematite proportions," Ann. Nucl. Energy, 38, pp. 2719-2723, 2011.
[6] M. Kurudirek, Y. Ozdemir, "Energy absorption and exposure buildup factors for some polymers and tissue substitute materials: photon energy, penetration depth and chemical composition dependence," J. Radiol. Prot., 31, pp. 117-128, 2011.
[7] A.M. El-Khayatt, I. Akkurt, "Photon interaction, energy absorption and neutron removal cross section of concrete including marble," Ann. Nucl. Energy, 60, pp. 8-14, 2013.
[8] B. Tellili, Y. Elmahroug, C. Souga, "Calculation of fast neutron removal cross sections for different lunar soils," Advances in Space Research, 53, pp. 348-352, 2014.
[9] A.M. El-Khayatt, "Calculation of fast neutron removal cross-sections for some compounds and materials," Ann. Nucl. Energy, 37, pp. 218-222, 2010.
[10] A.M. El-Khayatt, "NXcom - a program for calculating attenuation coefficients of fast neutrons and gamma-rays," Ann. Nucl. Energy, 38 (1), pp. 128-132, 2011.
[11] A.E. Profio, Radiation Shielding and Dosimetry, John Wiley & Sons, Inc., New York, 1979.
[12] A.B. Chilton, J.K. Shultis, R.E. Faw, Principles of Radiation Shielding, Prentice-Hall, Englewood Cliffs, NJ, 1984.
[13] M.E. Wieser, "Atomic weights of the elements 2005 (IUPAC technical report)," Pure Appl. Chem., 78 (11), pp. 2051-2066, 2006.
[14] Y. Elmahroug, B. Tellili, C. Souga, K. Manai, "ParShield: A computer program for calculating attenuation parameters of the gamma rays and the fast neutrons," Ann. Nucl. Energy, 76, pp. 94-99, 2015.
[15] K.S. Mann, A. Rani, M.S. Heer, "Shielding behaviors of some polymer and plastic materials for gamma-rays," Radiat. Phys. Chem., 106, pp. 247-254, 2015.
[16] E.P. Blizard, L.S. Abbott, Reactor Handbook, vol. III, Part B, Shielding, John Wiley & Sons, Inc., 1962.
[17] M.A. Ibrahim, "Attenuation of fission neutrons by some hydrogenous shield materials and the exponential dependence of the attenuated total neutron dose rate on the shield thickness," Appl. Radiat. Isot., 52, pp. 47-53, 2000.
[18] I.I. Bashter, "Calculation of radiation attenuation coefficients for shielding concretes," Ann. Nucl. Energy, 24 (17), pp. 1389-1401, 1997.


GREEN COMPUTING

Sarbjit Kaur
PGGCG, Sector 42, Affiliated to PU, Chandigarh
sarbjitkaur1981@gmail.com

Sonika
PGGCG, Sector 42, Affiliated to PU, Chandigarh
sonikahi2000@gmail.com

ABSTRACT

The goals of green computing are to reduce the use of hazardous materials, maximize energy efficiency during a product's lifetime, and promote the recyclability or biodegradability of defunct products and factory waste. Computers today are used not only in offices but also at homes. As the number of computers increases day by day, so does the amount of electricity they consume, which in turn increases the carbon content of the atmosphere. This problem has been recognized, and measures are being taken that help minimize the power usage of computers; this practice is called green computing. This paper takes a look at several green initiatives currently under way in the computer industry.

Keywords

Data centers, Virtual Machines, WEEE & RoHS, Recycling, Telecommuting

1. INTRODUCTION

Green computing refers to environmentally sustainable computing or IT. Basically, it is the study and practice of designing, manufacturing, using, and disposing of computer devices such as CPUs and servers, and associated peripheral devices such as display monitors, keyboards, printers, storage devices, and networking systems, efficiently and effectively with minimal or zero impact on the environment. Green IT includes the dimensions of environmental sustainability, the economics of energy efficiency, and the total cost of ownership, which includes the cost of disposal and recycling. It can be termed the study and practice of using computing resources efficiently.

Looking at the current global environmental situation, gas emissions are a major contributing factor to global warming, and governments and society at large therefore have an important new agenda: tackling environmental issues and adopting environmentally sound practices. An IT revolution is required, and greening our IT products, applications, services, and practices is both an economic and an environmental imperative, as well as our social responsibility. Therefore, a growing number of IT vendors and users are moving toward green IT and thereby assisting in building a green society and economy.

Adding to this, in 1992 the U.S. Environmental Protection Agency launched the Energy Star rating and labeling program, which is designed to promote and recognize energy efficiency in monitors, climate control equipment, and other technologies. This resulted in the widespread adoption of sleep mode among consumer electronics.

2. GREEN COMPUTING

Nowadays, in almost every existing or emerging sector, the machinery being used requires a large amount of power and money for its effective functioning. We now have advanced machines and equipment to accomplish our industrial tasks, and great gadgets with royal looks and features that make our lives more impressive and smooth.

2.1 PURPOSE OF GREEN COMPUTING

The goals of green computing are to reduce the use of hazardous materials, maximize energy efficiency during the equipment's lifetime, and promote the recyclability or biodegradability of dysfunctional products and factory waste. We therefore use green computing for the following advantages:

1. Using ENERGY STAR qualified products helps in energy conservation and qualifies the product as environment friendly.
2. Donating used computers and other peripherals can reduce the rate of e-waste creation. Moreover, it helps those who cannot afford to buy a computer.
3. Through proper disposal of computers and their accessories, it is possible to reduce environmental pollution.
4. Surge protectors offer the benefit of green computing by cutting off the power supply to peripheral devices when the computer is turned off or in sleep mode.
5. From another perspective, considering the benefits of computers, we save forests by promoting the computer revolution: more electronic data, less paperwork.
6. With the option of downloading directly from a website/URL, sales of data loaded on compact discs have dropped drastically, which ultimately reduces e-waste.


2.2 APPROACHES TOWARDS GREEN COMPUTING

In today's computerized world, where computers are involved in every sector, a large amount of computer data needs proper storage and management. Whether the data is audio, video, image, or text, it takes storage. Consider the example of social networking sites and search engines such as Facebook and Google, where huge amounts of data are added every day. Data centers come into the picture here (Figure 1 below), and such service providers, like Google and Amazon, need extraordinarily high energy and are a primary focus for proponents of green computing.

2.2.1 Data centers:

Data centers can potentially improve their energy and space efficiency through techniques such as storage consolidation and virtualization. Many organizations are starting to eliminate underutilized servers, which results in lower energy usage. The U.S. federal government set a minimum 10% reduction target for data center energy usage by 2011. With the aid of a self-styled ultra-efficient evaporative cooling technology, Google Inc. has been able to reduce its energy consumption to 50% of the industry average.

Figure 1: Data Centre

2.2.2 Virtual Machines (VM):

Computer virtualization refers to the abstraction of computer resources, such as the process of running two or more logical computer systems on one set of physical hardware. The virtualization concept originated with the IBM mainframe operating systems of the 1960s, but was commercialized for x86-compatible computers only in the 1990s. With virtualization, a system administrator can combine several physical systems into virtual machines on one single, powerfully configured system, unplugging the original hardware and thereby reducing power and cooling consumption (Figure 2 above). Many commercial companies and open-source projects now offer software packages to enable a transition to virtual computing. Major companies like Apple, Intel and AMD have built proprietary virtualization enhancements to the x86 instruction set into each of their CPU product lines in order to facilitate virtualized computing.

A virtual machine can be more easily controlled and inspected from outside than a physical one, and its configuration is more flexible. A virtual machine can easily be relocated from one physical machine to another as needed. For example, a salesperson going to a customer can copy a virtual machine with the demonstration software to a laptop, without the need to transport the actual physical computer. At the same time, an error inside a virtual machine does not harm the host system, so there is no risk of breaking down the operating system on that laptop.

Figure 2: Virtual Computing Architecture

2.2.3 Hardware Material Management:

WEEE & RoHS Directives implementation in India:

Several government agencies at the global level have adopted the Restriction of Hazardous Substances Directive (RoHS). The Government of India passed Rules in 2011, regulations that cover RoHS specifications in all electronic and electrical components and equipment. It also covers the equivalent of the European Union's WEEE e-waste regulations that guide the disposal of electronic and electrical equipment. It is applicable to all producers and distributors involved in the manufacture, sale, and processing of electronic and electrical equipment or components. The restrictions also affect waste collection centers, product dismantlers, and recyclers. Similarly, the waste-related provisions have been enforced from May 1, 2012.

Such agencies are closely linked with the Waste Electrical and Electronic Equipment Directive (WEEE), which sets collection, recycling, and recovery targets for electrical goods and is part of a legislative initiative that aims to reduce the huge amounts of toxic e-waste. Example: in 2001, they focused on lead-free manufacturing, introducing the Enhanced Ball Grid Array (EBGA) package for power-efficient VIA processors and the Heat Sink Ball Grid Array (HSBGA) package for their chipsets.

Advanced Configuration and Power Interface:


ACPI (Advanced Configuration and Power Interface) is an open industry specification co-developed by many electronic device and chipset manufacturers such as Hewlett-Packard, Intel, Microsoft, Phoenix, and Toshiba. ACPI establishes industry-standard interfaces enabling OS-directed configuration, power management, and thermal management of mobile, desktop, and server platforms. When first published in 1996, ACPI evolved an existing collection of power management BIOS code, Advanced Power Management (APM) application programming interfaces (APIs), PNPBIOS APIs, and Multiprocessor Specification (MPS) tables into a well-defined power management and configuration interface specification. The specification enables new power management technologies to evolve independently in operating systems and hardware while ensuring that they continue to work together for an eco-friendly environment. The latest specification version, 5.0a, is used for devices manufactured after 2013. Every change is integrated in the form of new versions that describe the structures and mechanisms necessary to design operating system-directed power management and make advanced configuration architectures possible.

2.2.4 Recycling:

Computer recycling refers to the recycling or reuse of a computer or electronic waste. This can include finding another use for the system (i.e., donating it to charity), or having the system dismantled in a manner that allows for the safe extraction of the constituent materials for reuse in other products. Additionally, parts from outdated systems may be salvaged and recycled through certain retail outlets and municipal or private recycling centers.

The recycling of old computers raises an important privacy issue. Old storage devices still hold private information, such as emails, passwords, and credit card numbers, which can be recovered simply by using software freely available on the Internet. Deletion of a file does not actually remove the file from the hard drive. Before recycling a computer, users should remove the hard drive, or hard drives if there is more than one, and physically destroy it or store it somewhere safe. There are authorized hardware recycling companies to whom the computer may be given for recycling, and they typically sign a non-disclosure agreement.

What can be recycled? Nearly everything. A few key examples:

• Computer monitors (CRTs) contain an average of 4 lbs. of lead, a lot of reusable glass, chromium and mercury. All of these elements can be extracted and reused. For example, our recycler takes the glass from old monitors and sends it to Samsung for use in flat screen monitors and TVs.
• CDs/DVRs contain gold, glass, plastic, nickel and other elements that are completely recoverable and reusable.
• Batteries — everything from the batteries that power your phone, laptop, and mouse can be recycled, whether single-use or rechargeable.
• Computers, printers, TVs, microwave ovens, power strips, lamps, and all other electronic items.

2.2.5 Telecommuting:

Teleconferencing and telepresence technologies are often implemented in green computing initiatives. The advantages are many: increased worker satisfaction, reduction of greenhouse gas emissions related to travel, and increased profit margins as a result of lower overhead costs for office space, heat, lighting, etc. The savings are significant; the average annual energy consumption for U.S. office buildings is over 23 kilowatt-hours per square foot, with heat, air conditioning and lighting accounting for 70% of all energy consumed. Other related initiatives, such as hotelling, reduce the square footage per employee as workers reserve space only when they need it. Many types of jobs, such as sales, consulting, and field service, integrate well with this technique.

3. BEST PRACTICES FOR GREEN COMPUTING

As IT users, we can contribute our own effort to protect the environment by operating IT equipment wisely.

Do not keep the computer monitor running overnight and on weekends. The life of a monitor is related to the number of hours it is in use. If a large amount of time is spent at the computer, reduce the brightness level; this ultimately improves monitor life by keeping the monitor cooler, and it is good for the user's health too.

Do not turn on the printer until you are ready to print; printers consume energy even while they are idle. Avoid unnecessary use of the printer, and always buy and use recycled-content paper.

Use "paperless" methods of communication such as email and fax-modems or SMS.

Request recycled/recyclable packaging from your computer vendor.

4. CONCLUSION

Green computing is not just about designing biodegradable packaging for products. The time has come to think about using computers efficiently, and about the non-renewable resources they consume. It opens a new window for new entrepreneurs to harvest e-waste material and scrap computers. Green computing strategies do not save money, as the set-up cost for eco-friendly electronics is more than what is saved; but we feel that even if the set-up cost is higher than the savings, we can still aim toward saving our mother earth using these green computing techniques.
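As a back-of-the-envelope check of the office-energy figures quoted in the telecommuting section above (roughly 23 kWh per square foot per year, with HVAC and lighting making up about 70% of it), the per-employee energy at stake can be computed in a few lines. The 150 sq ft workspace size is a hypothetical example value, not a figure from this paper:

```python
# Back-of-the-envelope arithmetic using the figures quoted in the
# telecommuting section: ~23 kWh/sq ft/year for U.S. office buildings,
# with HVAC and lighting accounting for ~70% of it.

KWH_PER_SQFT_YEAR = 23      # from the figure quoted above
HVAC_LIGHTING_SHARE = 0.70  # from the figure quoted above

def annual_office_kwh(sq_ft):
    """Estimated annual office energy use for a given floor area."""
    return KWH_PER_SQFT_YEAR * sq_ft

workspace = 150  # sq ft per employee (assumed example value)
total = annual_office_kwh(workspace)         # 3450 kWh/year
hvac_lighting = total * HVAC_LIGHTING_SHARE  # approximately 2415 kWh/year
```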


REFERENCES

[1] Maria Kazandjieva, Brandon Heller, Omprakash Gnawali, "Green Enterprise Computing Data: Assumptions and Realities".
[2] Er. Navdeep Kochhar, Er. Arun Garg, "Eco-Friendly Computing".
[3] Joseph Williams and Lewis Curtis, "Green: The New Computing Coat of Arms?".
[4] GJPB Willemse, "Green Computing", Department of the Premier, www.fs.gov.za.
[5] Er. Navdeep Kochhar, Er. Arun Garg, "Eco-Friendly Computing: Green Computing".
[6] Ernesta Jones, "New Computer Efficiency Requirements", U.S. EPA.
[7] "Green IT: Why Mid-Size Companies Are Investing Now", www.climatesaverscomputing.org.
[8] "Software or Hardware: The Future of Green Enterprise Computing", p. 185, 14 pages.
[9] "A Greener Approach to Computing", Volume 2, Issue 2, February 2012.
[10] Joseph Williams and Lewis Curtis, "Green: The New Computing Coat of Arms", IT Pro, January/February 2008, published by the IEEE Computer Society.



Recent Modified Fibers with Their Technological


Developments in Different Fields of Application - An
Overview
Nisha
Department of Textile Engineering, GZSPTU Campus Bathinda
aroranishatxtle@gmail.com

Amit Madahar
Department of Textile Engineering, GZSPTU Campus Bathinda
amitbtechtex@gmail.com

ABSTRACT

The applications of textile fibers in today's era are growing at a very fast rate. Textile fibers are no longer limited to yarn and fabric manufacturing applications. In fact, the future of polymer-based fiber technology for different applications depends largely on the future needs of our civilization. The technology of fibers varies according to the application area. The research findings in this paper give an idea about some recent polymer-based modified fibers and their technological developments in different fields of application. In this paper, the characteristics of recent fibers are outlined, along with various types of modified fibers, their applications, and their advantages.

Key-Words- Modified Fibers, Wound Dressing Fibers and Nano-electrospun Fibers

1. INTRODUCTION

At a molecular level, there are close similarities between the biological modification of a fiber with an enzyme and the biological activity of a modified fiber through inhibition or promotion of enzyme activity. At this chemical/biological interface of subject areas, interest often becomes interdisciplinary and new ideas may be spawned. It is also very evident that the scientific community is now turning to enzymes in an effort to make our world more renewable and sustainable. Although enzymes have been used in textile processing for many years, it is only in the last 20 years that growing interest has been given to using a variety of enzymes for textile and fiber applications. Natural fibers are readily available and easily produced owing to their remarkable molecular structure, which affords a bioactive matrix for the design of more biocompatible and intelligent materials. On the other hand, specific material properties including the modulus of elasticity, tensile strength, and hardness are largely fixed parameters for a natural fiber but have been more manageable within synthetic fiber design. The molecular conformation native to natural fibers is often key to interactions with blood and organ cells, proteins, and cell receptors, which are currently being studied for a better understanding to improve medical textiles. The native conformation or periodicity of structural components in native fibers such as collagen and cellulose offers unique and beneficial properties for biomedical applications. An extension of the bioactive conformation property in fibers to rationally designed fibers that would inhibit enzymes or trigger a cell receptor is a premise of current research.(1)

All the modified fibers have some significant role in their different application areas.(1) Here we study two classes of modified fibers: wound dressing modified fibers and nano-electrospun modified fibers.

2. Fibers used in wound dressing and wound healing:

Recently, the molecular modes of action have been investigated for skin substitutes, interactive biomaterials, and some traditional material designs, balancing the biochemical events of inflammation in the chronic wound to improve healing. Interactive wound dressings have activities including up-regulation of growth factors and cytokines and down-regulation of destructive proteolysis.(1) Carbohydrate-based wound dressings have received increased attention for their molecular interactive properties with chronic and burn wounds. Traditionally, carbohydrate-based wound dressings including cotton, xerogels, charcoal cloth, alginates, chitosan, and hydrogels have afforded properties such as absorbency, ease of application and removal, bacterial protection, fluid balance, occlusion, and elasticity. Recent efforts in our lab have been underway to design carbohydrate dressings that are interactive cotton dressings, as an approach to regulating destructive proteolysis in the non-healing wound. Elastase is a serine protease that has been associated with a variety of inflammatory diseases and has been implicated as a destructive protease that impedes wound healing. The presence of elevated levels of elastase in non-healing wounds has been associated with the degradation of important growth factors and fibronectin necessary for wound healing. Focus will be given to the design, preparation, and assessment of a type of cotton-based interactive wound dressing designed to intervene in the pathophysiology of the chronic wound through protease sequestration.(2)


3. Characteristics of wound dressing fibers and wound healing:

Through the ages, both vegetable and animal fibers have been applied to human wounds to stop bleeding, absorb exudate, alleviate pain and provide a protective barrier for the formation of new tissue. These fibers have some characteristics:

• Wound dressing fibers merged with natural ingredients help to cure the human body of wounds. The developments of wound dressing through the ages can be summarized as follows: early humankind employed many different materials from the natural surroundings, including resin-treated cloth, leaves, and wool-based materials, with a variety of substances including eggs and honey. Recent studies suggest that honey may promote wound healing through stimulation of inflammatory cytokines from monocytic cells. For example, the antibacterial activity of honey in the treatment of wounds has been established (3), and honey is now being reconsidered as a dressing when antibiotic-resistant strains prevent successful antibiotic therapy.

4. Fibers used in wound dressing are:

• Thin films (Bioclusive, Tegaderm, Opsite): semipermeable polyurethane membrane with acrylic adhesive.
• Sheet hydrogels (Clearsite, Nu-Gel, Vigilon, Geliperm, Duoderm gel, Intrasite gel, Geliperm granulate): solid, non-adhesive gel sheets that consist of a network of cross-linked, hydrophilic polymers that can absorb large amounts of water without dissolving or losing their integrity.
• Hydrocolloid (Duoderm, Comfeel): semipermeable polyurethane film in the form of solid wafers; contains hydroactive particles such as sodium carboxymethyl cellulose that swell with exudate or form a gel.

5. Ideal properties of a wound dressing as compared between cotton and alginate materials:

(Figure reference: Fiber Optic Dressing Monitors Wound Healing, by Gavin Corley)

• Absorbency
• Ease of application and removal
• Adherence
• Elasticity
• Bacterial barrier
• Gaseous exchange
• Comfort
• Hemostatic
• Conformability
• Non-antigenic and non-toxic
• Drug delivery
• Sterilizability
• Durability
• Water vapor transmissibility

6. Application: The design and preparation of interactive chronic wound dressings have become increasingly important as part of a solution to addressing the critical worldwide health crisis of the growing number of chronic wound patients. Some applications of the wound dressing fibers are:


(Figure reference: Fiber Optic Dressing Monitors Wound Healing, by Gavin Corley)

i. Used to help patients who suffer from chronic wounds due to the formation of decubitus bedsores, brought on in elderly nursing-home or spinal-cord-paralysis patients.
ii. Helpful for diabetes, which accounts for at least 60,000 patients annually who also suffer from foot ulcers.
iii. Recent efforts to develop wound dressings that do more than simply offer a moist wound environment for better healing have prompted most major wound dressing companies to develop research and approaches on interactive chronic wound dressings.

7. Electrospun nanofibers from biopolymers and their biomedical applications:

Electro-spinning originated from electro-spraying, where an electric charge is provided to a conducting liquid and produces a jet which splits into fine particles that resemble a spray, hence the name electro-spraying (Rayleigh 1882; Zeleny 1914). However, when a polymer is used in place of a low-molecular-weight substance in the electro-spraying process, the long-chain nature of polymers does not allow the splitting of the jet into particles. Instead, the jet undergoes instabilities and thins to form nanofibers. Therefore, one has to use polymers (natural or synthetic) to form nanofibers using the electro-spinning/electro-spraying technique.

Electrospun fibers are more exciting than most other nanofibers because their composition is highly diverse. Until recently, nanofibers have consisted largely of carbon nanotubes and other inorganic fibers. However, electrospun fibers can be fabricated from almost limitless materials, from synthetic to natural polymers.(4) Thus, the surface chemistry can be tailored to meet a large number of applications. The focus for electrospun products has so far included nonwovens for filtration, membranes for aerosol purification, thin coatings for defense and protection, and structures incorporated in composites. Biomedical applications include more efficient wound healing and drug delivery devices, biocompatible scaffolding for tissue regeneration, bioerodable implant structures, and others. The purpose of this paper is to review some of the natural and synthetic biopolymers that can be electrospun into micro- and nanofibers and to show their value for future medical applications. Due to the enormous amount of research published in recent years, only a brief review is presented here, with the scope of this article being to introduce the reader to this rapidly emerging field.(5)

(Figure reference: Macromolecular Materials and Engineering, Volume 295, Issue 10, pages 958-965, October 12, 2010)

The physical compatibility of biomaterials includes variables such as structural integrity, strength, deformation resistance, fatigue properties, and modulus of elasticity. For prosthetic devices, carefully engineered metallic or ceramic biomaterials have adequate mechanical properties, wear, and corrosion resistance. However, metals and ceramics do not match both the modulus and the resiliency of living bone. Bone is continually undergoing fracture and repair processes, whereas current synthetic materials do not have this property. Attempts to overcome this challenge have included making the surface or the entire material porous or biodegradable. Porous materials are used to encourage tissue growth into the prosthetic. Biodegradable materials are chosen so that the prosthetic will gradually disappear and be replaced by living tissue. To date, neither approach has been highly successful. Biopolymers are better suited for applications that require flexibility, elasticity, and shape-ability. Examples of biopolymer uses include wound dressings, drug release devices, soft tissue replacements, cardiovascular grafts, and sutures. Research in biodegradable polymers has increased dramatically over the past decade and good reviews are available in the literature. According to the above-mentioned priorities, a wide range of polymers has been used to electro-spin nanofibers. Natural polymers such as collagen, chitosan (Bhattarai et al 2005; Geng et al 2005), hyaluronic acid (Um et al 2004), and silk fibroin (Jin et al 2002, 2004) have been used to produce nanofibers that can form potential scaffolds for tissue engineering applications. More recently, nanofibers of protein (Li et al 2005; Woerdeman et al 2005) have been demonstrated to have promising use in tissue engineering. Among the synthetic polymers explored for the fabrication of nanofibers, poly(lactic acid) (PLA) (Yang et al 2004, 2005), polyurethane (PU) (Verreck et al 2003b; Riboldi et al 2005), poly(ε-caprolactone) (PCL) (Reneker et al 2002; Li et al 2003; Li et al 2005c), poly(lactic-co-glycolic acid) (PLGA) (Luu et al 2003; Kim et al 2004; Uematsu et al 2005), poly(ethylene-co-vinylacetate) (PEVA) (Kenawy 2002), and poly(l-lactide-co-ε-caprolactone) (PLLA-CL) (Mo et al 2004; Mo and Weber 2004) have been well studied.(6)

7.1 Fibers used in nanofiber electrospinning technology:

(Figure reference: Preparation of Fibers With Nanoscaled Morphologies: Electrospinning of Polymer Blends. M. Bognitzki, T. Frese, M. Steinhart, A. Greiner, J.H. Wendorff, Polym. Eng. Sci. 41 (6), 982-989 (2001))

• Electrospun PLGA fibers: these fibers are made up of poly(lactic acid), poly(glycolic acid), and poly(lactide-co-glycolide). Biocompatible and biodegradable, poly(lactic acid) (PLA), poly(glycolic acid) (PGA), and their copolymers are interesting materials for implants, sutures, and especially controlled drug release at high loading concentrations. The rate at which the drug is discharged into the biological system is determined by the degradation rate of the polymeric carrier. The degradation products, for instance lactic acid in the case of PLA decomposition, can easily be metabolized by the body. PGA fiber formation has so far been limited to melt extrusion and drawing, producing fibers in the micrometer range. Boland et … electrode distance. The fibers had an average diameter of 860-720 nm (±100), depending on the specific experimental conditions, and a somewhat rough surface.
• Nanoscale silk fibroin fibers: electro-spinning was used to produce silk fibroin fibers from Bombyx mori and Samia cynthia ricini by dissolving the fibroin in hexafluoroacetone (HFA).
• Cellulose-based nanofibers: cellulose-based materials are useful for wound dressings and, less importantly, for sutures and related applications. Natural cellulosic materials, such as cotton, decompose before they melt and cannot be melt-spun. Efforts have been made to regenerate cellulose from solution so as to form nanofibers. Frey successfully produced electrospun cellulose fibers from polar fluid/salt solutions. It is clear from this work that cellulose nanofibers could potentially be spun from inexpensive renewable resources or reclaimed cellulosic material. Another experimental route to the production of nanofibers based on cellulose consists of derivatizing cellulose and subsequently removing the substituents at the cellulosic hydroxyl groups.(7)

8. Application of these fibers:

With the help of electrostatic spinning technology, biopolymers can be formed into nanofibrous structures which have great potential for medical applications.

• Due to their small size, electrospun fibers provide a large surface-to-volume ratio and could be used for drug delivery, scaffolds for tissue engineering, or support for bone repair.
• Due to the relative ease of the electrospinning technique, a large number of different polymeric materials, including natural fibers, have already been explored or will be in the near future. Carefully tailored surface chemistries of these micro- and nanofibers will continue to expand their applications in the medical field, like wound closure and wound healing; they are also used for artificial lung transplantation.
• Many materials (natural and synthetic) have been explored as nanofibrous scaffolding materials for bone, cartilage, ligament, and skeletal muscle tissue engineering, including HA (Ramay and Zhang 2003), chitosan (Bhattarai et al 2005), PLGA (Uematsu et al 2005), carbon (Price et al 2003b) and aluminum nanofibers (Webster et al 2005). Although nanofibers have been studied as scaffolds for multiple tissue types, musculoskeletal tissue is probably the most well studied.
• Nanofibers have significant applications in the area of filtration, since their surface area is substantially greater and they have smaller micropores than melt blown (MB) webs. High porous structure with high
al. (5)could demonstrate that PGA. surface area makes them ideally suited for many
 PHB fibers: usually made from Poly(3- filtration applications. Nanofibers are ideally suited
hydroxybutyrate) and copolymers. PHB and for filtering subm icron particles from air or water.
copolymers have been investigated for tissue and
cartilage repair as it is compatible with various cell
lines. Due to the lowmechanical strength of PHB,
CONCLUSION
blends with other polymers have been
explored.PHB can be electrospun from 5% From the above study it is clear that all modified fibers
chloroform solution to fibers of less than1µm.These have great impact in the medical and as well as in other
fibers produced at a voltage of 10 kV and 7.5 cm application areas. All the modified fibers are one of the
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
body's key natural resources and a component of skin tissue that can benefit all stages of the wound healing process. When the modified fiber is made available to the wound bed, closure can occur. Wound deterioration, followed sometimes by procedures such as amputation, can thus be avoided. Therefore it is used as a natural wound dressing and has properties that artificial wound dressings do not have. All these fibers are resistant against bacteria, which is of vital importance in a wound dressing and in other applications. These fibers help to keep the wound sterile, because of their natural ability to fight infection and other diseases.

REFERENCES

1) www.springer.com, ISBN-13 978-1-4020-3794-8 (e-book)
2) Grimsley, J. K.; Singh, W. P.; Wild, F. R.; Giletto, A. A novel, enzyme-based method for the wound surface removal and decontamination of organophosphorus nerve agents. In: Edwards, J. V.; Vigo, T. L. (Eds.) Bioactive Fibers and Polymers. American Chemical Society, Washington, DC, 2001, 35–4
3) Cooper, R. A.; Molan, P. C.; Harding, K. G. Antibacterial activity of honey against strains of Staphylococcus aureus from infected wounds. J. R. Soc. Med. 1999, 92, 283–285.
4) (Huang et al 2001; Matthews et al 2002; Gersbach et al 2004; Shields et al 2004), gelatin (Zhang et al 2005)
5) Clark, R. A. F. Wound repair: Overview and general considerations. In: The Molecular and Cellular Biology of Wound Repair. Plenum Press, New York, 1996, pp. 3–35.
6) J. V. Edwards et al. (Eds.), Modified Fibers with Medical and Specialty Applications, 67–80. © 2006 Springer.
7) Int J Nanomedicine. 2006 March; 1(1): 15–30.
8) www.enfermagemcurativos.com
INNOVATIONS IN TEXTILE COMPOSITE DESIGNING AND THEIR APPLICATIONS

Rajeev Varshney, Department of Textile Engineering, GZSPTU Campus, Bathinda, rajeev_varshney2002@yahoo.co.in
Amit Madahar, Department of Textile Engineering, GZSPTU Campus, Bathinda, amitbtechtex@gmail.com
Abstract: Textile composites are materials in which the composition and internal structure are varied in a controlled manner so as to match their performance to the most demanding structural or non-structural roles. Textile reinforcement structures can be made of fibers, yarns or fabrics (woven, braided, knitted or nonwoven) and are generally flexible. The application of traditional textile technology to organize high-performance fibers for composite material applications has provided a route to combining highly tailored materials with enhanced processability. Many commercially produced composites use a polymer matrix material, often called a resin solution. There are many different polymeric materials available depending upon the starting raw ingredients. There are several broad categories, each with numerous variations; the most common are known as polyester, vinyl ester, epoxy, phenolic, polyimide, polyamide, polypropylene, PEEK, and others. This paper covers the innovations in textile structural composite designing and their areas of application.

Keywords: Braided, Non woven, Matrix, Polymer, Woven, Knitted

1. INTRODUCTION
Textile composites represent advanced materials which are reinforced with textile preforms for structural or load-bearing applications. Currently, textile structural composites are a part of a huge category of composite materials: textile composites. Textile composites can be defined as the combination of a resin system with a textile fiber, yarn or fabric system. They may be either flexible or rigid. Flexible textile composites include heavy-duty conveyor belts or inflatable life rafts. On the other hand, inflexible or rigid textile composites are found in a variety of products referred to as fiber reinforced plastic (FRP) systems. These textile composites have been in use since the 1950s, mostly in interior and exterior panels and parts for the automotive and aircraft industry, and represent a good alternative to metal and wood. Textile structural composites are mainly used as structural materials to resist the heavy loads that occur in the basic framework of buildings, bridges, vehicles, etc. They are made of a textile composite preform embedded in a resin, metal or ceramic matrix. The matrix system provides rigidity and holds the textile reinforcement material in a prescribed position and orientation in the composite. The composite preform is obtained from the assemblage of unrigidized fibrous material such as fibers, yarns or fabrics, and its structure can vary from a simple planar sheet to a complex 3D net shape.

2. Step wise procedure for manufacturing Textile Composites:
Growth and rapid developments in machinery and textile manufacturing techniques have advanced the science of textiles. There are usually four important levels in the manufacturing process of textile composites:
FIBER > YARN > FABRIC > COMPOSITE
The first step represents the choice of the fibers in the fabrication of textile structural composites. To resist high loads in structural applications, textile structural composite products should be made from high-modulus fibers, such as glass, graphite, aramid, ceramic or steel fibers.
The second step of the composites manufacturing process consists of grouping together the fibers (or filaments) in a linear assemblage to form a continuous strand having textile-like characteristics. The yarns may be composed of one or more continuous filaments, or even of discontinuous chopped fibers; finally, two or more single yarns can be twisted together to form ply or plied yarns.
The third step in the textile structural composites manufacturing process involves bonding and interlocking the yarns together to produce a flat sheet with a specific pattern. Fabric types are categorized by the orientation of the yarns used, and by the various construction methods used to hold the yarns together. The four basic fabric structure categories are woven, knitted, braided, and nonwoven.

3. Preforms in Composites – Manufacturing:
The reinforcement materials used during manufacturing of composites may be in the form of a thick woven cloth or of laminates which can be combined to obtain the required thickness. On the basis of the reinforcement material used, composites can be broadly categorized as:
(A) Laminated Composites
(B) 3-D Composites

3.1 Laminated Composites: In the case of laminar composites the layers of reinforcement material are stacked in a particular pattern in order to obtain the desired properties in the resulting composite material. These layers are termed plies or laminates.
The laminates may be composed of reinforcement materials including
• Woven
• Non woven
• Matt
• Braided
• Fiber reinforced
• Uni-directional fibers
The advantages of laminated composites include a relatively well-defined arrangement of the fibres in the final composite material, a higher fiber-to-volume ratio and higher strength, whereas their disadvantages include relatively poor through-thickness properties and problems of process-induced deformations. The unidirectional fibers are mostly used in the form of pre-preg for carbon/epoxy and in the form of non-crimp fabric in the case of glass/polyester.

3.2 3-D Composites: The textile preforms can be broadly classified as two- and three-dimensional on the basis of the degree of reinforcement in the thickness direction. The three-dimensional textile preforms are more attractive as they offer the benefit of near-net-shape manufacturing with improved damage tolerance. Three-dimensional textile preforms are basically fully integrated continuous-fiber assemblies with multi-axial in-plane and out-of-plane fiber orientations. Composites which are reinforced with three-dimensional preforms provide several distinct advantages which are not realized in traditional laminates. Firstly, due to the out-of-plane orientation of some fibers, the three-dimensional preforms provide enhanced stiffness and strength in the thickness direction. Secondly, the fully integrated nature of the fiber arrangement in three-dimensional preforms eliminates the interlaminar surfaces characteristic of laminated composites. The superior damage tolerance of three-dimensional textile composites based upon polymer, metal and ceramic matrices has been demonstrated in impact and fracture resistance. Thirdly, the technology of textile preforming provides the unique opportunity of near-net-shape design and manufacturing of composite components and, hence, minimizes the need for cutting and joining the parts. The potential of reducing manufacturing costs for special applications is high. The overall challenges and opportunities in three-dimensional textile structural composites are very fascinating. As shown in Fig. 1, three-dimensional preforms can be further classed by their manufacturing techniques: woven, non-woven, knitted, braided and stitched.

Fig. 1. 3-D textile preforms

The design of a composite structural component illustrates which fiber preform manufacturing technique should be employed.

3.2.1. Woven Preform
The most widely used textile manufacturing technique, weaving accounts for nearly 70% of the two-dimensional fabric produced. The weaving process is suited to the production of panels, and woven fabric textiles have been used for a number of years in two-dimensional laminated composites. However, these composites had poor impact resistance and delamination strength and, since typical two-dimensional weaves only possess fibers in the 0° (warp) and 90° (weft or filler) directions, reduced in-plane shear properties. To improve the impact and interlaminar properties, through-the-thickness reinforcement material was required. This reinforcement was achieved by angle-interlock weaves which use fibers to either weave together all fabric layers (through-the-thickness interlock) or weave together adjacent fabric layers (layer-to-layer interlock). Although this increased the through-the-thickness properties, these preforms still had poor in-plane shear resistance since the in-plane fibers are only in the warp and weft directions. Thus, in order to improve the in-plane shear characteristics, off-axis (bias) fibers were needed, and several multi-axial 3-D weaving techniques have been developed to introduce bias fiber layers. Fig. 2 shows some 3-D woven designs.

Fig. 2. 3-D woven designs.

3.2.2 Stitching/nonwoven preform: Stitching is the simplest technique of fabricating 3-D textile preforms. However, stitching also causes significant in-plane fiber damage that results in a degradation of the in-plane mechanical properties of the composite. On the other hand, by using a non-woven preform, through-the-thickness reinforcement can be introduced without causing significant in-plane fiber damage.

3.2.3 Knitted Preform: Knitting is an inherently fast process with which multi-axial fabrics can be produced on commercially available multi-axial warp knitting machines. Several three-dimensional knitted fabrics are also produced by using knitting needles to stitch together the in-plane fibers, which can be oriented at 0° and 90°.

3.2.4 Braiding: Braiding is quite suitable for the production of cylinders, rods, beams of various cross sections and more elaborate structures when coupled with the use of a mandrel. Track and column braiding processes
such as two-step, four-step and multi-step have all been successfully used to produce a variety of preforms. Produced by intertwining three or more yarn groups in a maypole-type fashion, traditional solid braiding has been limited to simple cross-sectional shapes. However, recent advances have allowed for the production of complex 3-D shapes.

4. Selection of Resin/Matrix Materials
The composites can be classified into two categories depending upon the type of resin used:

4.1 Thermoplastic composites
These are composite materials with a thermoplastic resin like polyester, HDPE, etc. However, they are less used as high-tech materials due to their higher viscosity, which causes problems during their penetration into the reinforcement. Their manufacturing requires equipment which can withstand high temperature and pressure conditions, and this increases the manufacturing cost. However, they offer some advantages such as non-toxicity and recyclability.

4.2 Thermosetting composites
These are the composites in which thermosetting polymers like epoxy, unsaturated polyester and vinyl ester are used as the resin. These are the most used type of composite materials in automotive, naval, aeronautical and aerospace applications. They are preferred over thermoplastic resins due to their lower viscosity, which helps them penetrate the reinforcement easily even at room temperature. The moulding equipment used is relatively cheaper, as there is no need to reach very high temperatures and pressures. However, the disadvantages are that they are toxic and non-recyclable, and that less penetration time is available once polymerization starts.

5. Applications and Advantages of Textile Structural Composites
The worth of textile composites originates from many advantages, such as speed and ease of manufacture of even complex components, consequent economy compared to other composite materials, and out-of-plane reinforcement that is not seen in traditional laminated composites. Further, textile composites do not lose the classically valued advantage that composite materials possess over their metal or traditional counterparts, in that textile composites have an inherent capacity for the material itself to be adapted to the mechanical needs of the design. This is to say that the strength and stiffness of the material can be oriented in needed directions, and no material weight is wasted in providing reinforcement in unnecessary directions. For a conventional laminated composite, this is accomplished by oriented stacking of layers of unidirectional resin-impregnated fibers, such that fibers are aligned with any preferred loading axes. A textile composite may also be so adapted by several methods, including unbalanced weaves. The woven fiber tows in a preferred direction may be larger (containing more constituent fibers per tow) than in other directions. Also, an extremely diverse set of woven or braided patterns may be employed, from a simple 2D plain weave to an eight-harness satin weave or a 3D orthogonal weave pattern, any of which may exhibit a useful bias in the orientation of material properties. The economy of textile composites arises mainly from the fact that manufacturing processes can be highly automated and rapidly accomplished on loom- and mandrel-type machinery. This can lead to easier and quicker manufacture of a finished product, though curing times may still represent a weak link in the potential speed of manufacture. Composite materials have been used for the past 30 years in many sectors such as aeronautics, space, sporting goods, marine, automotive, ground transportation and off-shore. Due to their high stiffness and strength at low density, high specific energy absorption behavior and excellent fatigue performance, these materials have emerged in such areas.

6. Conclusion
Thus, textile composites are composed of a matrix material which surrounds and supports the fiber reinforcement material, while the reinforcement material itself imparts its special characteristics to the matrix material in a composite structure. The unique combination of two material components leads to singular mechanical properties and superior performance characteristics not possible with either of the components alone. Additionally, composite materials are often overwhelmingly superior to other materials (e.g. metals) on a strength-to-weight or stiffness-to-weight basis. With this reassurance, the range of applications for composite materials appears to be limitless. With the advances in textile and polymer technology, textile-reinforced composites have attracted a substantial amount of interest. When coupled with a cost-effective manufacturing method such as resin-transfer molding, two- and 3-D textile preforms offer the potential of low-cost, mass-produced composite structures.
Food Technology
Effect of plasticizer on the properties of pellets made from agro-industrial wastes

Shumaila Jan, Kulsum Jan, C. S. Riar, D. C. Saxena
Department of Food Engineering and Technology, SLIET, Longowal-148106 (India)
kulsumnissar@gmail.com
ABSTRACT
Extruded pellets were made based on deoiled rice bran and paddy husk, using glycerol and cashew nut shell liquid as plasticizers. The effects of the incorporation level of glycerol (GL, 6 to 14%) and cashew nut shell liquid (CNSL, 6 to 14%) on the physical and functional characteristics of extruded pellets based on deoiled rice bran and paddy husk powders were studied. For the A3 (12% GL) sample, radial expansion (RE) of 1.08, bulk density (BD) of 0.892 g/cm3, water solubility index (WSI) of 13.000%, water binding capacity (WBC) of 4.785% and hardness (HD) of 498.253 N were observed. In the case of B3 (12% CNSL), radial expansion of 1.202, bulk density of 0.892 g/cm3, water solubility index of 15.037%, water binding capacity of 5.237 and hardness of 495.027 N were observed. Results indicated that GL and CNSL had a significant effect on the physical and functional properties. The results suggest that deoiled rice bran and paddy husk powder can be plasticized with glycerol and cashew nut shell liquid for the development of durable pellets using extrusion technology.

General Terms
Biodegradable pellets, extrusion, injection molding.

Keywords
Cashew nut shell liquid, Glycerol, physical and functional properties.

1. INTRODUCTION
Extrusion cooking, as a continuous cooking, mixing, and forming process, is a versatile, low-cost, and very efficient technology in food processing. During extrusion cooking, the raw materials undergo many chemical and structural transformations, such as starch gelatinization, protein denaturation, complex formation between amylose and lipids, and degradation reactions of vitamins, pigments, etc. [1]. Technologies like extrusion have found application in other sectors like packaging technology (pellets for film and molded product development) besides being used for the development of food products. The thermal energy generated by viscous dissipation during extrusion, combined with the shearing effect, quickly cooks the raw mixture so that the properties of the materials are modified due to physico-chemical changes of the biopolymers [2]. Polymers from renewable resources have attracted an increasing amount of attention over the last two decades, predominantly due to two major reasons: firstly environmental concerns, and secondly the realization that our petroleum resources are finite. Generally, polymers from renewable resources contain natural polymers such as starch, protein and cellulose. The demand for relatively inexpensive sources of protein, which can be used in value-added products, has been increasing in recent years [3]. As one of the important renewable and abundant resources, protein has attracted much attention in the field of packaging due to its biodegradability [4]. Deoiled rice bran (DOB) is a valuable source of inexpensive protein, containing about 12-20% protein [5], and is an underutilized agro-industrial by-product [6]. Proteins are highly complex polymers whose constituents are linked via substituted amide bonds. Due to the complexity of their composition and structure, proteins possess multiple functional properties, such as solubility, gelation, elasticity, emulsification, and cohesion-adhesion [7], and Damodaran [8] has reported that proteins have an ability to interact with neighbouring molecules and form strong, cohesive, viscoelastic sheets and composites that can withstand thermal and mechanical motions [9]. Manisha [10] developed extruded pellets with proteins isolated from deoiled rice bran for the development of biodegradable molded sheets and determined the effect of glycerol on the properties of the sheets.
Paddy husk is one of the major agricultural residues produced during rice processing, containing cellulose in amounts similar to wood [11]. Usually paddy husk has been a problem for rice farmers due to its resistance to decomposition in the ground, difficult digestion, inadequate final disposal (burning) and low nutritional value for animals [12,13]. According to Martiferrer [14], the lignin and hemicellulose contents of paddy husk are lower than those of wood. For this reason paddy husk can be processed at higher temperatures than wood. Therefore, the use of paddy husk in the manufacture of biodegradable pellets using extrusion technology is attracting much attention. Simone [12] successfully developed biodegradable thermoplastic composites by melt extrusion using paddy husk flour as filler and found that the density of the composites slightly increased with filler. The group of Yang [15,16,17] has published many studies dealing with paddy husk composites. They observed that tensile and impact strengths (notched and unnotched specimens) decreased with increasing filler loading while the elastic modulus increased. Han [18] determined the possibility of using lignocellulosic materials as reinforcing fillers in thermoplastic polymer composites, and found that paddy husk could be utilized as a biodegradable filler, at end-of-use, in polymeric materials to minimize environmental pollution.
In view of the vast availability of these two types of by-products and waste materials, extruded pellets were developed and characterized. Moreover, no research till date has been published on pellets developed from these two biodegradable agro-industrial wastes using GL and CNSL as
plasticizer. Hence, the main aim was to study the aspects related to the effect of the different types of plasticizer on the properties of the pellets and to establish correlations between these properties.

2. MATERIALS AND METHODS
2.1. Procurement of raw material
Deoiled rice bran (DOB) used for the present study, which contains a high level of protein, was kindly donated by M/s. AP Solvex Ltd., Dhuri (Punjab, India). Glycerol, used as a reference plasticizer in this study, was of analytical grade (M/s. Merck Specialities Pvt. Ltd., Mumbai, India). The other plasticizer used in this study was CNSL, which was kindly donated by M/s. Allen Petrochemicals Pvt. Ltd., Meerut (India). Paddy husk (PDH) was also provided by M/s. AP Solvex Ltd., Dhuri. Paddy husk and DOB were ground in a laboratory grinding mill (M/s. Philips India Limited, Kolkata, India) and passed through a screen (80 mesh) to obtain fine powder.

2.2 Extrusion process for the preparation of pellets
In the blend preparation, DOB, PDH, CNSL and GL were used. The different formulations used for the samples are tabulated in Table 1. The moisture was adjusted to 10%. All the ingredients were weighed and then mixed in a Hobart mixer (M/s. Continental Equipment India Pvt. Ltd, Delhi, India) for 20 min at 328 rpm using a sigmoid-shaped blade. This mixture was then passed through an 80 mesh sieve to eliminate the lumps formed due to the addition of moisture. After mixing, samples were stored in polyethylene bags at room temperature for 24 h.
Extrusion was performed using a co-rotating twin-screw extruder (M/s. Basic Technology Pvt. Ltd., Kolkata, India). Screw speed was set at 277 rpm and extrusion was done for the different samples. The specific mechanical energy (SME) of extrusion was calculated as
SME = (ω/ωr) × (τ/100) × (Zr/Q) (1)
where ω is the screw speed, ωr is the rated screw speed, τ is the percent torque, Zr is the rated power and Q is the feed rate. Screw speed was set at 274 rpm, feed rate at 107.5 g/min, barrel temperature at 110 °C and the rated power was 4755.75 hp.

Table 1. Formulations and specific mechanical energy of the pellets

Code | DOB (%) | PDH (%) | GL (%) | Water (%) | SME (kJ/kg)
A1 | 50 | 50 | 6 | 10 | 27.064
A2 | 50 | 50 | 10 | 10 | 36.085
A3 | 50 | 50 | 12 | 10 | 36.085
A4 | 50 | 50 | 14 | 10 | 45.106

Code | DOB (%) | PDH (%) | CNSL (%) | Water (%) | SME (kJ/kg)
B1 | 50 | 50 | 6 | 10 | 23.064
B2 | 50 | 50 | 10 | 10 | 31.085
B3 | 50 | 50 | 12 | 10 | 30.085
B4 | 50 | 50 | 14 | 10 | 38.995

2.3. Radial expansion
The ratio of the diameter of the extruded pellet to the diameter of the die was used to express the radial expansion of the extruded pellet [19]. The diameter of extruded pellets was determined as the mean of 10 random measurements made with a Vernier calliper. The radial expansion ratio was calculated using the following formula:
Radial expansion ratio = extrudate diameter / die diameter

2.4. Bulk density (BD)
Bulk density was calculated according to the method of Alvarez [20]. About 10 pieces of extrudates were used to determine the bulk density:
BD (g/cm3) = m / (π (d/2)² L) (2)
where m is the mass (g) of an extrudate with length L (cm) and diameter d (cm).

2.5. Static coefficient of friction (COF) and Angle of repose (AR)
The static coefficient of friction and the angle of repose were calculated according to the method of Maurice [21]. The static coefficient of friction (μ) is calculated from the following equation:
μ = tan α (3)
The angle of repose is derived from the following equation:
AR = tan⁻¹(2H/D) (4)
where H and D are the height and diameter of the heap, respectively.

2.6. Water binding capacity (WBC) and water solubility index (WSI)
WBC and WSI were estimated as per the method described by Anderson [22]:
WBC = weight of sediment / weight of dry solids
WSI = (weight of dissolved solids in supernatant / weight of dry solids) × 100

2.7. Hardness
Texture profile analysis (TPA) of all the extruded pellets was performed in triplicate using a texture analyzer (TA-X2Ti, Stable Micro Systems, UK). Hardness (N) of the samples was recorded by analysing the TPA graph using the Texture Exponent 32 software (Stable Micro Systems, UK). Hardness was determined by placing five pellets from each sample on the platform of the analyzer with an SMS-P/75 probe (75 mm diameter) at a crosshead speed of 2 mm/s with a target mode of 2 mm distance. The compression generates a curve of force over distance. The highest first peak value was recorded, as this value indicated the first rupture of the pellet at one point, and this value of force was taken as the measurement of hardness [23].

2.8. Statistical analysis
Statistical analysis was conducted using a commercial statistical package, Design-Expert version 6.0.10 (Stat-Ease Inc., Minneapolis, USA). The analyses of extruded samples were conducted in triplicate.

3. RESULTS AND DISCUSSION
The proximate analysis of DOB revealed 8.09% moisture, 14.69% protein, 0.69% fat, 8.75% fiber and 8.42% ash. The chemical composition of rice husk is similar to that of many common organic fibers; it contains 40-50% cellulose, 25-30% lignin, 15-20% ash and 8-15% moisture [24].
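As a rough numerical illustration, the property calculations of Eqs. (1)–(4) above can be collected into a short Python sketch. The cylinder-volume form of Eq. (2) and the conical-heap form of Eq. (4) are assumptions inferred from the variable definitions given (the printed equations did not reproduce cleanly), and all input values below are hypothetical rather than measured data from this study.

```python
import math

def specific_mechanical_energy(omega, omega_rated, torque_pct, rated_power_kw, feed_rate_kg_s):
    """Eq. (1): SME = (w/wr) * (tau/100) * (Zr/Q).
    With Zr in kW and Q in kg/s, the result is in kJ/kg."""
    return (omega / omega_rated) * (torque_pct / 100.0) * (rated_power_kw / feed_rate_kg_s)

def radial_expansion(extrudate_dia_mm, die_dia_mm):
    """Radial expansion ratio: extrudate diameter / die diameter."""
    return extrudate_dia_mm / die_dia_mm

def bulk_density(mass_g, length_cm, dia_cm):
    """Assumed form of Eq. (2): mass of a cylindrical pellet divided by
    its volume pi*(d/2)^2*L, giving g/cm^3."""
    return mass_g / (math.pi * (dia_cm / 2.0) ** 2 * length_cm)

def static_friction_coefficient(alpha_deg):
    """Eq. (3): mu = tan(alpha), alpha being the tilt angle at which sliding starts."""
    return math.tan(math.radians(alpha_deg))

def angle_of_repose(height_cm, base_dia_cm):
    """Assumed form of Eq. (4) for a conical heap: AR = atan(2H/D), in degrees."""
    return math.degrees(math.atan(2.0 * height_cm / base_dia_cm))

# Illustrative values (hypothetical, not the measured data of Tables 1-2):
print(radial_expansion(3.6, 3.0))               # -> 1.2
print(round(bulk_density(0.70, 1.0, 1.0), 3))   # -> 0.891 g/cm^3
print(round(angle_of_repose(1.0, 4.0), 1))      # -> 26.6 degrees
```

For example, a 0.70 g pellet of 1.0 cm length and 1.0 cm diameter gives a bulk density of about 0.89 g/cm³, of the same order as the values reported in the abstract.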
Table 2. Effect of plasticizer on functional properties of pellets

            A1            A2            A3            A4
WBC (%)     3.395±0.12    3.734±0.10    4.785±0.11    4.795±0.42
RE          1.200±0.05    1.09±0.02     1.08±0.27     1.010±0.53
WSI (%)     17.120±0.28   17.591±0.53   13.000±0.55   20.880±0.94

            B1            B2            B3            B4
WBC (%)     4.785±0.11    4.965±0.16    5.237±0.46    6.355±0.16
RE          1.390±0.25    1.284±0.54    1.202±0.48    1.170±0.57
WSI (%)     16.270±0.54   16.481±0.24   15.037±0.65   19.542±0.27

All values are mean ± SD. WBC: water binding capacity, WSI: water solubility index, RE: radial expansion.

3.1 Specific mechanical energy (SME)
As shown in Table 1, the SME increased with the concentration of plasticizer in the formulation in both cases. This is due to the increased consistency (viscosity) of the feed at the higher plasticizer concentrations. Similar effects on SME were observed by Andreas et al. [25], who reported the lowest SME at the lowest glycerol content.

3.2 Radial expansion (RE)
The expansion ranged from 1.200 to 1.010 for the (A) samples and from 1.390 to 1.170 for the (B) samples (Table 2). It was observed that expansion decreased with an increasing amount of plasticizer for both sample sets (A and B); this behaviour can be attributed to the increased binding of the particles by the plasticizers at elevated levels. Moreover, it can also be attributed to the effect of temperature and shear on the expansion of the pellets [26]. Another reason may be the high protein content in DOB and the high fiber content in paddy husk. Proteins affect the RE through their ability to influence water distribution in the matrix and through their macromolecular structure, which affects the extensional properties of the extruded melts [26]. Onwulata and co-workers [27] investigated the effects of whey protein concentrate and isolate on the extrusion of corn and rice starch and reported a reduction in expansion at higher concentrations of protein. This may be due to modification of the viscoelastic properties of the dough as a result of competition for the available water between the starch and protein fractions, leading to a delay in starch gelatinization and a lower expansion in the products.

3.3 Water solubility index (WSI)
In this experiment the WSI ranged from 17.120 to 20.880% for the (A) samples and from 16.270 to 19.542% for the (B) samples (Table 2). There was an inverse relation between the water binding capacity and the water solubility index of the pellets and pots. Williams [28] also showed that the water solubility index is inversely proportional to the water absorption index. It was also observed, in both cases, that there was a significant rise in the WSI with the increase of plasticizer up to 10%, which then reduced for samples A3 and B3; this behaviour can be attributed to the binding effect of the plasticizer at that particular percentage.

3.4 Water binding capacity (WBC)
The increase in the amount of plasticizers (from 6 to 14%) increased the water binding capacity from 3.395 to 4.795% in the case of the 'A' samples and from 4.785 to 6.355% for the B samples (Table 2). This behaviour may be attributed to the hygroscopic nature of the plasticizer. Another reason may be the presence of fiber and protein in PDH and DOB, respectively. Water absorption is a characteristic feature of fiber-supplemented powders, as reported by several researchers [29, 30]. Fibre may interact with water by means of polar and hydrophobic interactions, hydrogen bonding and enclosure; the results of these interactions vary with the flexibility of the fiber surface [31]. It may also be due to the hydrophilic and hygroscopic nature of the plasticizers, forming a large hydrodynamic plasticizer-water complex [32]. Moreover, the large number of hydroxyl groups present in the fibre structure allows water absorption and interaction through hydrogen bonding [33].

3.5 Bulk density (BD)
The density of the extrudates varied between 0.593 and 0.975 g/cm³ for (A), whereas for the B samples the observed results were in the range 0.589 to 0.735 g/cm³ (Table 3). There is an obvious increase in the bulk densities of the biodegradable pellets, which indicates that the extruded biodegradable pellets have good compaction and in turn form a dense product [34]. The good compaction may be due to the plasticizers used for binding. The highest values were observed for A4 (0.975 g/cm³) and B4 (0.735 g/cm³); the lowest values were observed for A1 (0.593 g/cm³) and B1 (0.589 g/cm³).

3.6 Static coefficient of friction (COF) and angle of repose (AR)
The static coefficient of friction on stainless steel for both sample sets decreased with the increase in processing temperature, as shown in Table 3. As the plasticizer increases, the surface of the pellets becomes smoother and in turn exerts less friction on the contact surface. Correa [35] reported similar results for rice grains exerting less friction on surfaces. The angle of repose (Table 3) for both sample sets did not show a significant change, but there was an increase in the values upon increasing the amount of plasticizer. This parameter is important in the proper design of hoppers: to maintain continuous flow of the pellets, the hopper wall angle must be larger than the pellets' angle of repose.

3.7 Hardness (HD)
The HD was found to be in the range 439.805 to 460.897 N for the A samples and 392.454 to 487.044 N for the B samples (Table 3). The increase in the plasticizer amount showed a significant effect on pellet hardness. This could be attributed to the lower expansion of the products leading to increased HD, as observed from the expansion values, due to the increased binding effects of the plasticizer and the high protein/fiber content in DOB and PDH, as proteins affect the RE through their ability to influence water distribution in the matrix and through their macromolecular structure and conformation, which affects the extensional properties of the extruded melts [26].
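The hopper-design rule stated above (the hopper wall angle must exceed the pellets' angle of repose for continuous flow) can be expressed as a simple check; the 5° safety margin below is an illustrative assumption, not a value from this study:

```python
# Check that a proposed hopper wall angle (from horizontal) clears the
# worst-case (largest) measured angle of repose; the 5-degree margin is
# an illustrative assumption, not a design value from this study.
def hopper_angle_ok(wall_angle_deg, repose_angles_deg, margin_deg=5.0):
    return wall_angle_deg >= max(repose_angles_deg) + margin_deg

# Measured AR values for the B formulations (Table 3).
ar_b = [33.690, 36.607, 37.117, 43.452]
print(hopper_angle_ok(50.0, ar_b))  # 50.0 >= 43.452 + 5 -> True
print(hopper_angle_ok(45.0, ar_b))  # 45.0 <  48.452    -> False
```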
Table 3. Effect of plasticizer on physical properties of pellets

              A1             A2             A3             A4
BD (g/cm³)    0.593±0.76     0.689±0.85     0.892±0.43     0.975±0.43
HD (N)        439.805±0.02   448.583±0.16   498.253±0.16   460.897±0.08
COF           0.613          0.597          0.522          0.454
AR            25.641         38.234         45.000         46.397

              B1             B2             B3             B4
BD (g/cm³)    0.589±0.85     0.591±0.43     0.697±0.43     0.735±0.54
HD (N)        392.454±0.14   448.592±0.05   495.027±0.15   487.044±0.14
COF           0.574          0.568          0.561          0.555
AR            33.690         36.607         37.117         43.452

All values are mean ± SD. BD: bulk density, HD: hardness, COF: coefficient of friction, AR: angle of repose.

4. CONCLUSION
The pellet responses, including RE, BD, WSI, WAI and hardness, were affected by both GL and CNSL. It was concluded that studies on the properties of the biodegradable pellets and the ingredients used for their production can be helpful in the design of equipment for further processing of these pellets (injection molding). The amount of plasticizer and the extrusion temperature showed significant effects on the functional and physical properties. The findings of this study demonstrate the feasibility of developing value-added products (pots) from a mixture of DOB and PDH in combination with GL and CNSL by extrusion processing.

ACKNOWLEDGMENT
This work was financially supported by the Council of Scientific and Industrial Research (CSIR), MHRD, Govt. of India.

REFERENCES
[1] Ilo, S., Tomschik, U., Berghofer, E. & Mundigler, N. 1996. The effect of extrusion operating conditions on the apparent viscosity and the properties of extrudates in twin-screw extrusion cooking of maize grits. Food Science and Technology, 29(7), 593-598.
[2] Jan, K., Riar, C.S. & Saxena, D.C. 2014. Mathematical modelling of thin layer drying kinetics of biodegradable pellets. J. Food Process Technol., 5, 370. doi:10.4172/2157-7110.1000370.
[3] Ali, R., Seyede, S.M. & Seyed, A.S. 2014. Dynamic rheological behavior of rice bran protein (RBP): effects of concentration and temperature. Journal of Cereal Science, 60, 514-519.
[4] Cuq, B., Gontard, N. & Guilbert, S. 1998. Proteins as agricultural polymers for packaging production. Cereal Chem., 75, 1-1.
[5] Hamada, J.S. 2000. Characterization and functional properties of rice bran protein modified by commercial exoproteases and endoproteases. J. Food Sci., 65, 305-310.
[6] Yoon, J.I., Shin, S.-A.J. & Kyung, B.S. 2011. Preparation and mechanical properties of rice bran protein composite films containing gelatin or red algae. Food Sci. Biotechnol., 20(3), 703-707.
[7] Zhang, W. & Xu, S.Y. 2003. Food Chemistry, 3rd edn. China Light Industry Press, Beijing, pp. 267-302.
[8] Damodaran, S. 1994. Protein functionality in food system. Marcel Dekker, New York, pp. 1-38.
[9] Phillips, M.C. 1981. Food Technol. (Chicago), 35, 50.
[10] Manisha, J., Srivastava, T. & Saxena, D.C. 2012. Extrusion processing of deoiled rice bran in the development of biodegradable molded sheets. Scholarly Journal of Agricultural Science, 2, 163-178.
[11] Yang, H.S., Kim, H.J., Son, J., Park, H.J., Lee, B.J. & Hwang, T.S. 2004. Rice-husk flour filled polypropylene composites: mechanical and morphological studies. Composite Structures, 63(3-4), 305-312.
[12] Simone, M.L.R., Evelise, F.S., Carlos, A.F. & Sonia, M.B.N. 2009. Studies on the properties of rice-husk-filled PP composites: effect of maleated PP. Materials Research, 12(3), 333-338.
[13] Piva, A.M., Steudner, S.H. & Wiebeck, H. 2004. Physico-mechanical properties of rice husk powder filled polypropylene composites with coupling agent study. In: Proceedings of the Fifth International Symposium on Natural Polymers and Composites, Sao Pedro/SP, Brazil.
[14] Marti-Ferrer, F., Vilaplana, F., Ribes-Greus, A., Benedito-Borras, A. & Sanz-Box, C. 2006. Flour rice husk as filler in block copolymer polypropylene: effect of different coupling agents. Journal of Applied Polymer Science, 99(4), 1823-1831.
[15] Yang, H.S., Kim, H.J., Park, H.J., Lee, B.J. & Hwang, T.S. 2006a. Water absorption behavior and mechanical properties of lignocellulosic filler-polyolefin bio-composites. Composite Structures, 72(4), 429-437.
[16] Yang, H.S., Kim, H.J., Park, H.J., Lee, B.J. & Hwang, T.S. 2007. Effect of compatibilizing agents on rice-husk flour reinforced polypropylene composites. Composite Structures, 77(1), 45-55.
[17] Yang, H.S., Wolcott, M.P., Kim, H.S., Kim, S. & Kim, H.J. 2006b. Properties of lignocellulosic material filled polypropylene bio-composites made with different manufacturing processes. Polymer Testing, 25(5), 668-676.
[18] Han-Seung, Y., Hyun-Joong, K., Jungil, S., Hee-Jun, P., Bum-Jae, L. & Taek-Sung, H. 2004. Rice-husk flour filled polypropylene composites: mechanical and morphological study. Composite Structures, 63, 305-312.
[19] Fan, J., Mitchell, J.R. & Blanchard, J.M.V. 1996. The effect of sugars on the extrusion of maize grits: I. The role of the glass transition in determining product density and shape. International Journal of Food Science and Technology, 31, 55-65.
[20] Alvarez-Martinez, L., Kondury, K.P. & Harper, J.M. 1988. A general model for expansion of extruded products. Journal of Food Science, 53, 609-615.
[21] Maurice, A.C. 1970. Experiments on the angles of repose of granular materials. Sedimentology, 14, 147-158.
[22] Anderson, R.A., Conway, H.F., Pfeifer, V.F. & Griffin, E.L. 1969. Gelatinization of corn grits by roll and extrusion cooking. Cereal Science Today, 14, 4-12.
[23] Stojceska, V., Ainsworth, P., Plunkett, A., Ibanoglu, E. & Ibanoglu, S. 2008. Cauliflower by-products as a new source of dietary fibre, antioxidants and proteins in cereal based ready-to-eat expanded snacks. Journal of Food Engineering, 87(4), 554-563.
[24] Hwang, C.L. & Chandra, S. 1997. The use of rice husk ash in concrete. In: Chandra, S. (ed.), Waste Materials Used in Concrete Manufacturing. Noyes Publications, USA, p. 198.
[25] Andreas, R., Marie, H.M., Joelle, B., Stephane, G. & Bruno, V. 1999. Rheological properties of gluten plasticized with glycerol: dependence on temperature, glycerol content and mixing conditions. Rheologica Acta, 38, 311-320.
[26] Moraru, C.I. & Kokini, J.L. 2003. Nucleation and expansion during extrusion and microwave heating of cereal foods. Comprehensive Reviews in Food Science and Food Safety, 2.
[27] Onwulata, C.I., Smith, P.W., Konstance, R.P. & Holsinger, V.H. 2001. Incorporation of whey products in extruded corn, potato or rice snacks. Food Research International, 34, 679-687.
[28] Williams, M.A., Horn, R.E. & Rugula, R.P. 1977. Extrusion: an in-depth look at a versatile process. I. J. Food Eng., 49(10), 87-89.
[29] Dreese, P.C. & Hoseney, R.C. 1982. Baking properties of bran fractions from brewer's spent grains. Cereal Chemistry, 59, 89-91.
[30] Haridas, R.P. & Hemamalini, R. 1991. Effect of incorporating wheat bran on rheological characteristics and bread making quality of flour. Journal of Food Science and Technology, 28, 92-97.
[31] Gamlath, S. & Ravindran, G. 2009. Extruded products with fenugreek (Trigonella foenum-graecum), chickpea and rice: physical properties, sensory acceptability and glycaemic index. Journal of Food Engineering, 90, 44-52.
[32] Krochta, J.M. 2002. Proteins as raw materials for films and coatings: definitions, current status, and opportunities. In: Protein-Based Films and Coatings, pp. 1-41.
[33] Sudha, M.L., Vetrimani, R. & Leelavathi, K. 2007. Influence of fibre from different cereals on the rheological characteristics of wheat flour dough and on biscuit quality. Food Chemistry, 100, 1365-1370.
[34] Igathinathane, C., Jaya, S.T., Sokhansanj, S., Bi, X., Lim, C.J., Melin, S. & Mohammad, E. 2010. Simple and inexpensive method of wood pellets macro-porosity measurement. Bioresource Technology, 101(16), 6528-6537.
[35] Correa, P.C., Schwanz, F., Silva, D.A., Jaren, C., Afonso, P.C. Jr. & Arana, I. 2007. Physical and mechanical properties in rice processing. Journal of Food Engineering, 79, 137-142.
Textural and Microstructural Properties of Extruded Snack prepared from rice flour, corn flour and deoiled rice bran by twin screw extrusion

Renu Sharma, Deptt. of Applied Sciences, BGIET, Sangrur, renu.sharma6286@gmail.com
Raj Kumar, Deptt. of Applied Sciences, PTU, Jalandhar, raj.scientia@gmail.com
Tanuja Srivastava, Deptt. of Food Technology, BGIET, Sangrur, tanusriva@yahoo.co.in
D.C. Saxena, Deptt. of Food Engg. & Tech., SLIET, Longowal, dcsaxena@yahoo.com
ABSTRACT
Rice flour, corn flour and deoiled rice bran blends were used to prepare ready-to-eat extrudates at barrel temperatures of 110°C, 120°C and 130°C and moisture contents of 14%, 15% and 16%. A three-level, two-factor central composite rotatable design was employed to investigate the effect of temperature and feed moisture content, and their interactions, on the mechanical hardness of the extruded product. It was found that an increase in feed moisture content leads to an increase in the hardness of the extrudates, while increasing temperature leads to a decrease in the hardness of the product. A numerical optimization technique was used to obtain compromise optimum processing conditions of barrel temperature (120°C) and moisture content (15%). Good agreement between the predicted (14.91 N) and actual (15.105 N) values of the response confirms the validity of the RSM model. The surface morphology of the extrudates, examined through scanning electron microscopy (SEM), showed a large number of sheared and flattened granules at the temperatures studied. More damage of the starch granules was observed at the higher temperature, i.e. at 130°C.

Keywords
RSM, barrel temperature, hardness, SEM, extruded product.

1. INTRODUCTION
Cereals and starch-based products provide a large proportion of the energy in human diets. Besides providing energy, starch also contributes to the texture as well as the structure of the food we consume. The textural and functional properties of the final product depend upon the gelatinization, molecular degradation and/or reassociation of the raw material [1]. The technology of extrusion cooking is no exception. Extrusion technology is growing day by day because of its versatility and economical production. It produces a variety of food products with attractive texture, size and shape [2]. It can be considered a continuous cooking, mixing and forming process in which the raw material undergoes many chemical and structural transformations. In the food industry, extrusion cooking plays a significant role and is considered an efficient technology for the production of breakfast cereals, baby foods, flat breads, snacks, meat and cheese analogues, modified starches, etc. [3, 4].

Food extrusion is a very complicated process in which small variations in the processing conditions affect the process variables as well as the quality of the final product [5]. The use of extruders has widened the scope of extrusion technology. Due to several advantages, twin screw extruders are used to a large extent for the production of starch-based food products as compared to single screw extruders. In spite of their high cost and complexity, twin screw extruders allow greater flexibility of operating conditions for the achievement of the desired time, temperature and shear history for the processed material [6]. The product coming out of the extruder depends upon several factors like type of raw material, moisture content of the raw feed, temperature of the barrel section, screw speed, type of extruder, screw configuration, etc. [7]. A lot of work has been reported on extrusion cooking of rice flour. It remains an attractive ingredient in the extrusion industry due to its ease of digestion, attractive white colour and hypoallergenicity [8]. It is the most important cereal product used as a staple food for many populations worldwide. The major product obtained during rice milling is 70% rice (endosperm); the remaining by-products consist of 20% rice husk, 8% rice bran and 2% rice germ [9-11].

During de-husking and milling of paddy, the brownish portion of the rice taken out in the form of fine grain is the rice bran, which is nearly 8% of milled rice [12, 13]. Rice bran is a highly nutritious material. It contains micronutrients like oryzanols, tocopherols, tocotrienols and phytosterols, which comprise vitamin E and exhibit significant antioxidant activity. Rice bran also contains 20% oil, 15% protein, 50% carbohydrates (mainly starch), and dietary fibers like pectin, beta-glucan and gum [14-17].

In earlier times, rice bran was used as either fertilizer or animal feed. But these days it is used for the extraction of oil, namely rice bran oil (RBO) [9, 18, 19]. Rice bran is rich in proteins, dietary fiber and bioactive compounds [20] that help in reducing the risk of coronary heart disease. It helps in lowering blood cholesterol [21, 22] and decreasing atherosclerosis [23], and it possesses a laxative effect [20]. The substitution of rice bran in food products will increase the nutritional value as well as provide health benefits to consumers. Deoiled cake, the residue of rice bran after extraction of oil, possesses high protein and vitamin contents. It can be utilized to prepare functional foods which keep humans healthy due to the low fat content. So, value-added edible food products can be obtained by utilizing defatted rice bran through extrusion cooking.

Although deoiled rice bran is highly nutritious and possesses several health benefits, it is still underutilized for the development of functional foods. Research work concerning the use of rice bran in the preparation of extruded products is very scanty. Jadhav et al. (2012) used deoiled rice bran for the development of biodegradable and medium water
resistant agriculture planting containers by the application of twin screw extruders [24]. Keeping in view the nutritional value as well as the health benefits of deoiled rice bran, an attempt has been made to prepare a ready-to-eat extruded product. The functional properties of the extruded product are highly dependent upon the extrusion conditions and the type of raw material. So, in the present research work, the effect of barrel temperature (110°C-130°C) and moisture content (14%-16%) on the textural properties (hardness) of the product during twin screw extrusion cooking of deoiled rice bran, and the surface morphology of the extrudates through scanning electron microscopy, have been studied.

2. MATERIALS AND METHODS

2.1 Materials
Ingredients for the production of the highly nutritious ready-to-eat snack food consisted of deoiled rice bran, corn flour and rice flour. The deoiled rice bran used for the present study was procured from M/s AP Solvex Ltd., Dhuri. Corn flour and rice flour were purchased from the local market, Sangrur, Punjab, India.

2.2 Experimental Design
Response surface methodology (RSM) was adopted for the experimental design [25]. The main advantage of RSM is the reduced number of experimental runs needed to provide sufficient information for a statistically acceptable result. A three-level, two-factor central composite rotatable design was employed. Table 1 shows the independent variables and their levels, which were chosen by taking trials of samples; the ranges giving good expansion were taken. The experimental design comprises 13 experiments. The independent variables chosen for study were barrel temperature and moisture content of the raw material, while the response variable was hardness. The three levels of the process variables were coded as -1, 0, 1 [25] and the experimental design in terms of actual levels is given in Table 2.

Table 1. Values of independent variables at three levels of the CCRD design

Independent variable        Coded    Levels in uncoded form
Feed moisture (%)           X1       14, 15, 16
Barrel temperature (°C)     X2       110, 120, 130

Table 2. Central composite rotatable experimental design in actual levels for extruded snacks

S. No.   Moisture Content (%)   Barrel Temperature (°C)   Hardness (N)
1.       15.00                  134.14                    14.76
2.       14.00                  130.00                    11.57
3.       15.00                  120.00                    14.91
4.       16.41                  120.00                    18.71
5.       15.00                  105.86                    16.02
6.       15.00                  120.00                    14.91
7.       14.00                  110.00                    12.78
8.       16.00                  110.00                    17.56
9.       15.00                  120.00                    14.91
10.      15.00                  120.00                    14.91
11.      16.00                  130.00                    19.27
12.      15.00                  120.00                    14.91
13.      13.59                  120.00                    11.89

2.3 Preparation of Sample
For the preparation of the value-added extruded product, the powdered ingredients rice flour, deoiled rice bran and corn flour were mixed in a 70:20:10 ratio. The feed composition and a screw speed of 300 rpm were kept constant for all the experimental runs. The moisture content of all the flours was determined before extrusion using the hot air oven method [26]. The required moisture was adjusted by sprinkling distilled water onto the dry ingredients. All the ingredients were weighed and then mixed in a food processor with mixer attachments for 20 min. The mixture was then passed through a 2 mm sieve to reduce the lumps formed due to the addition of moisture. After mixing, samples were stored in polyethylene bags at room temperature for 24 h [27].

2.4 Extruder and Extrusion Cooking
A co-rotating twin-screw extruder (G.L. Extrusion Systems Pvt. Ltd., Delhi), having a barrel with two electric band heaters and two water cooling jackets and receiving the raw feed from a variable speed feeder, was used. The main drive is provided with a 7.5 HP motor (400 V, 3 ph, 50 cycles) and a temperature sensor was fitted on the front die plate. The output shaft of the worm reduction gear was provided with a torque limiter coupling. The standard screw configuration for processing cereals and flour-based products was used. The die plate is fixed by a screw nut tightened with the special wrench provided, and the automatic cutting knife is fixed on a rotating shaft. The extruder barrel temperature was maintained at 110°C, 120°C and 130°C respectively and the moisture was adjusted to 14%, 15% and 16% by adding the required amount of water to all the flours. The twin screw extruder was kept running for a suitable period of time to stabilize the set temperatures; the samples were then poured into the feed hopper and the feed rate was adjusted to 4 kg/h for easy, non-choking operation. A die diameter of 4 mm was selected as recommended by the manufacturer for such a product. The product was collected at the die end and packed in pre-numbered zip-lock packets for proper storage.

2.5 Evaluation of Textural Characteristics of Extrudates

2.5.1 Hardness
Mechanical properties of the extrudates were determined by a crushing method using a TA-XT2 texture analyzer (Stable Micro Systems Ltd., Godalming, UK) equipped with a 500 kg load cell. An extrudate 40 mm long was compressed with an SMS P/75 probe (75 mm diameter) at a crosshead speed of 5 mm/s to 3 mm (90% of the diameter of the extrudate). The compression generates a curve of force over distance. The highest first peak value was recorded, as this value indicates the first rupture of the snack at one point, and this force was taken as the measurement for hardness [27].
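The first-peak reading described above can be sketched as a small routine over the recorded force trace (the scan values below are synthetic illustrations, not data from this study):

```python
# Extract the first local force peak (first rupture) from a force-distance
# trace, as used for the hardness reading; the trace values are synthetic.
def first_peak_force(forces):
    for i in range(1, len(forces) - 1):
        # A point higher than its predecessor and not lower than its successor
        # marks the first local maximum, i.e. the first rupture event.
        if forces[i - 1] < forces[i] >= forces[i + 1]:
            return forces[i]
    return max(forces)  # no interior peak: fall back to the overall maximum

trace = [0.0, 2.1, 6.4, 11.2, 15.1, 9.8, 12.3, 14.0, 8.5]
print(first_peak_force(trace))  # first rupture at 15.1
```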
2.6 Statistical Analysis of Responses
The response (hardness) for the different experimental combinations was related to the coded variables (xi, i = 1, 2) by a second degree polynomial equation:

Y = β0 + β1x1 + β2x2 + β11x1² + β22x2² + β12x1x2 + ε

where x1 and x2 are the coded values of the moisture content of the feed (%) and the temperature of the barrel (°C). The coefficients of the polynomial are represented by β0 (constant); β1, β2 (linear effects); β12 (interaction effect); β11, β22 (quadratic effects); and ε (random error). Multiple regression analysis was used for data modeling and, for each response, the statistical significance of the terms was examined by analysis of variance. Design Expert 6.0 (version 6.0, Stat-Ease Inc., USA) was used for statistical analysis of the data. To check the adequacy of the regression model, R², adjusted R², adequate precision and the F-test were used [25]. In order to generate three-dimensional plots for the regression model, statistical calculations were made using the regression coefficients.

2.7 Microstructural Analysis

2.7.1 Scanning Electron Microscope (SEM)
A scanning electron microscope (Jeol JSM-7500, Jeol Ltd, Tokyo, Japan) was used to view the extrudate in three dimensions and to determine the shape and surface of the extrudate. The sample was mounted on an SEM stub using double-sided adhesive tape and was coated with platinum. An accelerating potential of 5 kV was used during micrography.

3. RESULT AND DISCUSSION
Variation of the response (hardness) of the extrudates with the independent variables (moisture content and barrel temperature) was analyzed. A complete second order quadratic model was employed to fit the data and the adequacy of the model was tested to decide the variation of the responses with the independent variables. The analysis of variance tables were generated and the effects and regression coefficients of the individual linear, quadratic and interaction terms were determined. The significance of all terms in the polynomial was judged statistically by computing the F-value at a probability (p) of 0.01 or 0.05. The regression coefficients were then used to make statistical calculations to generate contour maps from the regression model. Optimization of parameters was done by partially differentiating the model with respect to each parameter, equating to zero and simultaneously solving the resulting functions. Design Expert 6.0 (version 6.0, Stat-Ease Inc., USA) was used for optimization of the selected parameters.

3.1 Effect of Process Variables on Mechanical Hardness of Extrudates
The textural properties of extrudates are very important and are closely related to the acceptance of the product. Hardness and crispness of ready-to-eat extrudates are found to be associated with expansion and cell structure. The instrumental method for the determination of hardness is by measuring the force (Newton) required to break the extrudates. In our study, the hardness of the extrudates varied between 11.57 and 19.27 N (Table 2). Table 3 shows the coefficients of the model and other statistical attributes of hardness. The regression model fitted to the experimental results of hardness (Table 3) shows that the model F-value of 54.78 was significant, whereas the lack-of-fit value of 0.55 was not significant. The chance of a model F-value this large occurring due to noise was only 0.01%. The fit of the model was also expressed by the coefficient of determination R², which was found to be 0.9751, indicating that 97.51% of the variability of the response could be explained by the model, whereas the adjusted R² was 0.9573. The adequate precision of 23.774 is greater than 4, suggesting that the model may be used to navigate the design space. The quadratic model for hardness (H) in terms of coded levels of the variables was developed as follows:

H = 14.91 + 2.77 X1 - 0.16 X2 + 0.18 X1² + 0.23 X2² + 0.73 X1X2    Eq. (1)

Table 3. Analysis of variance for hardness

Source        Coefficient   Sum of squares   Mean square   DF   F value   Prob > F
Model         14.91         64.07            12.81         5    54.78     <0.0001***
X1            2.77          61.21            61.21         1    261.65    <0.0001***
X2            -0.16         0.21             0.21          1    0.88      0.3800
X1²           0.18          0.23             0.23          1    0.99      0.3531
X2²           0.23          0.36             0.36          1    1.54      0.2552
X1X2          0.73          2.13             2.13          1    9.11      0.0194**
Lack of fit   0.55
R² = 0.9751; adjusted R² = 0.9573; adequate precision = 23.774
**Significant at P<0.05; ***significant at P<0.001; DF: degrees of freedom.

From the analysis of Equation (1), it was found that the linear term feed moisture content (X1) has a significant positive effect, while barrel temperature (X2) has a negative effect, on the hardness of the rice flour, corn flour and rice bran extrudates. This means the hardness of the extruded product increases with increasing moisture content of the raw material, while it decreases with increasing barrel temperature. For the linear term moisture content (X1), the F value is 261.65 and the P value is less than 0.0001 (P<0.01), indicating that the term is significant. The negative effect of barrel temperature on hardness may be due to reduced expansion. The F-value for the interaction term of feed moisture and barrel temperature (X1X2) was 9.11 with a p value of 0.0194 (P<0.05), indicating that the term is significant. The response surface plot (Fig. 1) shows that hardness increased with increasing feed moisture (X1) and decreased with increasing barrel temperature (X2). As temperature has a negative effect on the hardness of the extrudates, a crispier texture was obtained with increasing temperature. This result is in agreement with [28-31]. The results are also similar to the earlier findings of Tanuja et al. (2014) [32]. The increase or decrease of hardness with moisture content and barrel temperature is shown in Fig. 2 and Fig. 3.
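The fitted model can be reproduced from the run data in Table 2 by ordinary least squares on the coded variables. The sketch below assumes numpy, codes moisture as (MC - 15)/1 and temperature as (T - 120)/10, and places the CCRD axial runs at ±1.414; note the X2² coefficient comes out positive (+0.23), in line with the ANOVA table:

```python
import numpy as np

# Coded CCRD levels and hardness responses taken from Table 2
# (x1 = (moisture - 15)/1, x2 = (temperature - 120)/10, axial runs at +/-1.414).
runs = [  # (x1, x2, hardness in N)
    (0.0, 1.414, 14.76), (-1.0, 1.0, 11.57), (0.0, 0.0, 14.91),
    (1.414, 0.0, 18.71), (0.0, -1.414, 16.02), (0.0, 0.0, 14.91),
    (-1.0, -1.0, 12.78), (1.0, -1.0, 17.56), (0.0, 0.0, 14.91),
    (0.0, 0.0, 14.91), (1.0, 1.0, 19.27), (0.0, 0.0, 14.91),
    (-1.414, 0.0, 11.89),
]
x1, x2, y = (np.array(c) for c in zip(*runs))

# Design matrix for Y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coeffs, 2))  # ~ [14.91  2.77 -0.16  0.18  0.23  0.73]
```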
3.3 Verification of results


The suitability of the model developed for predicting the
optimum response values was tested using the recommended
optimum conditions of the variables and was also used to
validate experimental and predicted value of the responses. The
optimized processing conditions for extrusion as well as results
of predicted and actual values of textural properties of
extrudates are shown in Table 4.

Table 4. Optimum Processing Conditions for extrusion.


Fig.1. Response surface plot of hardness as a function of
moisture content Optimum Processing Conditions Values of Response

19.27 .

Moisture Barrel Hardness


17.345
0
Content (%) Temperature ( C)
Hardness

15.42

13.495
Coded Actual Coded Actual Predicted Actual

11.57
Value Value
14.00 14.50 15.00 15.50 16.00

Moisture Content

0 15 0 120 14.91 15.105


Fig. 2 : Effect Moisture content upon hardness of
extrudates

Fig. 3. Effect of barrel temperature upon hardness of extrudates.

3.2 Optimization
A numerical multi-response optimization technique was applied to determine the optimum combination of feed moisture content and barrel temperature for the production of extrudates containing rice bran, rice flour and corn flour. The aim was a product with a maximum sensory acceptability score, so as to gain market acceptability, and minimum hardness; these responses were therefore optimized while the other parameters were kept within range. Under these criteria, the uncoded optimum operating conditions for the development of the rice bran, rice flour and corn flour extrudate were a barrel temperature of 120°C and a feed moisture of 15%. The hardness predicted by the software for these optimum process conditions was 14.91 N. Table 4 shows the different constraint conditions used for optimization.

3.4 Microstructural Analysis
Extruded samples had porous, open-celled structures. Fig. 4 shows the damage to starch molecules which takes place during the extrusion process. The extruded products show a large number of flattened and sheared granules. Damage to granules started at 110°C and was highest for extrusion at 130°C, followed by 120°C. These results are in agreement with those of Bhattacharyya et al. [33]. The damage to starch molecules increases with increasing temperature.

Fig. 4. SEM pictures of the extruded product of rice, corn and deoiled rice bran at 110°C, 120°C and 130°C.
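The numerical optimization described in Section 3.2 amounts to searching the experimental region for the best combination of responses. The sketch below minimizes hardness alone, predicted by a second-order response surface in coded moisture and temperature; the quadratic coefficients are invented placeholders (the paper does not report them), so this demonstrates only the procedure, not the authors' fitted model.

```python
import itertools

# Hypothetical second-order model in coded variables x1 (moisture), x2 (temperature).
# Coefficients are illustrative placeholders, NOT the paper's fitted values; their
# signs follow the paper's trends (moisture raises hardness, temperature lowers it).
def hardness(x1, x2):
    return 14.91 + 1.2 * x1 - 0.9 * x2 + 0.4 * x1 * x1 + 0.3 * x2 * x2 - 0.2 * x1 * x2

# Coded levels -1..+1 map to 14-16 % moisture and 110-130 degC barrel temperature.
def decode(x1, x2):
    return 15.0 + x1 * 1.0, 120.0 + x2 * 10.0

# Grid search over the experimental region for minimum predicted hardness.
grid = [i / 20.0 for i in range(-20, 21)]   # coded steps of 0.05
best = min(itertools.product(grid, grid), key=lambda p: hardness(*p))
moisture, temp = decode(*best)
print(f"minimum predicted hardness {hardness(*best):.2f} N "
      f"at {moisture:.2f} % moisture, {temp:.1f} degC")
```

Note that minimizing hardness alone drives the optimum to a corner of the region; the authors' multi-response approach, which also maximizes sensory acceptability, yielded the interior optimum of 15% feed moisture and 120°C.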


4. CONCLUSION

The process of extrusion cooking not only utilizes a rice milling by-product but also adds value to a commercialized product with health benefits. The application of response surface methodology helps to interpret the relationship between the effects of barrel temperature and moisture content of the raw material on the mechanical properties of the product. Increasing feed moisture content leads to an increase in the hardness of the extrudates, while increasing temperature leads to a decrease in the hardness of the product. Microstructural studies reveal that the surface morphology of the extrudates was also affected: the organized structure of starch molecules was modified to flattened and sheared granules during the extrusion process.

REFERENCES

[1] Guha, M. and Ali, S.Z. 2006. Extrusion cooking of rice: Effect of amylose content and barrel temperature on product profile. Journal of Food Processing and Preservation, vol.30, pp.706-716.

[2] Smith, A.C. and Singh, N. 1996. New applications of extrusion technology. Indian Food Ind., vol.15, pp.14-23.

[3] Anderson, R.A., Conway, H.F., Pfeifer, V.E. and Griffin, E.L. 1969. Gelatinization of corn grits by roll and extrusion cooking. Cereal Science Today, vol.14, pp.4-12.

[4] Meuser, F. and Van Lengerich, B. 1992. System analytical model for the extrusion of starches. In J.L. Kokini, C. Ho and M.V. Karwe (Eds.), Food Extrusion Science and Technology. New York: Marcel Dekker, pp.619-630.

[5] Desrumaux, J.A., Bouvier, J.M. and Burri, J. 1999. Effect of free fatty acids addition on corn grits extrusion cooking. Cereal Chemistry, vol.76, pp.699-704.

[6] Gogoi, B.K., Oswalt, A.J. and Choudhury, G.S. 1996. Reverse screw element(s) and feed composition effects during twin-screw extrusion of rice flour and fish muscle blends. J. Food Sci., vol.61, pp.590-595.

[7] Ding, Q.B., Ainsworth, P., Tucker, G., Marson, H. 2005. The effect of extrusion conditions on the physicochemical properties and sensory characteristics of rice based expanded snacks. J. Food Eng., vol.66, pp.283-289.

[8] Kadan, R.S., Bryant, R.J. and Pepperman, A.B. 2003. Functional properties of extruded rice flours. Cereal Chemistry, vol.68, pp.1669-1672.

[9] Van Hoed, V., Depaemelaere, G., Villa Ayala, J., Santiwattana, P., Verhe, R., De, et al. 2006. Influence of chemical refining on the major and minor components of rice bran oil. JAOCS, vol.83, pp.315-321.

[10] De Deckere, E.A.M., Korver, O. 1996. Minor constituents of rice bran oil as functional foods. Nutr Rev., vol.54, pp.S120-S126.

[11] Anonymous. Sea Handbook-2009. 9th ed. The Solvent Extractors' Association of India: India, pp.885-891.

[12] Hernandez, N., Rodriguez-Alegría, M.E., Gonzalez, F., Lopez-Munguia, A. 2000. Enzymatic treatment of rice bran to improve processing. JAOCS, vol.77, pp.177-180.

[13] Takeshita, Y., Iwata, F. 1988. Recent technical advances in rice bran oil processing (II. About refining process). Transactions of the Kokushikan Univ., Faculty of Engineering, vol.21, pp.118-124.

[14] Wells, J.H. 1993. Utilization of rice bran and oil in human diets. Louisiana Agriculture, vol.36, pp.4-8.

[15] Jiang, Y., Wang, T. 2005. Phytosterols in cereal by-products. JAOCS, vol.82, pp.439-444.

[16] Piironen, V., Lindsay, D.G., Miettinen, T.A., Toivo, J., Lampi, A.M. 2000. Plant sterols: biosynthesis, biological function and their importance to human nutrition. J Sci Food Agric, vol.80, pp.939-966.

[17] Hu, W., Wells, J.H., Tai-Sun, S., Godber, J.S. 1996. Comparison of isopropanol and hexane for extraction of vitamin E and oryzanols from stabilized rice bran. JAOCS, vol.73, pp.1653-1656.

[18] Barber, S., Camacho, J., Cerni, R., Tortosa, E., Primo, E. 1974. Process for the stabilization of rice bran. I. Basic research studies. In Proceedings of Rice By-products Utilization (International Conference, Valencia, Spain), pp.49-62.

[19] Hammond, N. 1994. Functional and nutritional characteristics of rice bran extracts. Cereal Foods World, vol.39, pp.752-754.

[20] Saunders, R.M. 1990. The properties of rice bran as a food stuff. Cereal Foods World, vol.35, pp.632-635.

[21] Chotimarkorn, C. and Silalai. 2007. Oxidative stability of fried dough from rice flour containing rice bran powder during storage. LWT, vol.41, pp.561-568.

[22] Kahlon, T.S., Chow, F.I. and Sayre, R.N. 1994. Cholesterol lowering properties of rice bran. CFW, vol.39, pp.99-103.

[23] Saunders, R.M. 1985. Rice bran: composition and potential food uses. Food Rev Int., vol.1, pp.465-495.

[24] Jadhav, M., Srivastava, T., Saxena, D.C. 2012. Extrusion process of deoiled rice bran in the development of biodegradable molded sheets. Scholarly Journal of Agricultural Science, vol.2, pp.163-178.

[25] Montgomery, D.C. 2001. Design and Analysis of Experiments. New York: Wiley, pp.416-419.

[26] Ranganna, S. 1995. Handbook of Analysis and Quality Control for Fruit and Vegetable Products. Tata McGraw Hill Publishing Company Limited, New Delhi, India.


[27] Stojceska, V., Ainsworth, P., Plunkett, A., Ibanoglu, E., Ibanoglu, S. 2008. Cauliflower by-products as a new source of dietary fiber, antioxidants and proteins in cereal based ready-to-eat expanded snacks. Journal of Food Engineering, vol.87, pp.554-563.

[28] Bhattacharya, S. and Choudhary, G.S. 1994. Twin screw extrusion of rice flour: Effect of extruder length-to-diameter ratio and barrel temperature on the extrusion parameters and product characteristics. J. Food Pro. Preserve., vol.18, pp.389-394.

[29] Ryu, G.H. and Walker, C.E. 1995. The effects of extrusion conditions on the physical properties of wheat flour extrudates. Starch – Stärke, vol.47, pp.33-36.

[30] Duizer, L.M., Winger, R.J. 2006. Instrumental measures of bite force associated with crisp products. Journal of Texture Studies, vol.37, pp.1-15.

[31] Ding, Q.B., Ainsworth, P., Plunkett, A., Tucker, G., Marson, H. 2006. The effect of extrusion conditions on the functional and physical properties of wheat based expanded snacks. J. Food Eng., vol.73, pp.142-148.

[32] Srivastava, T., Saxena, D.C., Sharma, R. 2014. Effects of twin screw parameters upon the mechanical hardness of ready-to-eat extrudates enriched with de-oiled rice bran. Journal of Food Engineering & Environment Safety, vol.13, pp.299-308.

[33] Bhattacharyya, P., Ghosh, U., Gangopadhyay, H., Raychaudhuri, U. 2006. Physico-chemical characteristics of extruded snacks prepared from rice (Oryza sativa L), corn (Zea mays L) and taro [Colocasia esculenta (L) Schott] by twin screw extrusion. Journal of Scientific & Industrial Research, vol.65, pp.165-168.


Extruded Products Analogous to Meat


Raj Kumar, Deptt. of Applied Sciences, PTU, Jalandhar, raj.scientia@gmail.com
Renu Sharma, Deptt. of Applied Sciences, BGIET, Sangrur, renu.sharma6286@gmail.com
Tanuja Srivastava, Deptt. of Food Technology, BGIET, Sangrur, tanusriva@yahoo.co.in
D. C. Saxena, Deptt. of Food Engg. & Tech., SLIET, Longowal, dcsaxena@yahoo.com

ABSTRACT
Consumption of meat and plant based meat analogues are a growing market segment. In Europe, soy steaks, cutlets and other meat analogues have proved very successful in the market owing to their fibrous structure and nutritional value, similar to that of meat products. Such products are in demand with vegetarians but are also highly recommended by dieticians. Texturized proteins, a unique product made by extrusion, can be produced from a wide range of raw ingredient specifications, while controlling functional properties such as density, rate and time of rehydration, shape, product appearance and mouth feel. Formed meat analogs are blends of various protein sources such as isolates, glutens, albumin, extrusion-cooked vegetable proteins and others, which are blended with oils, flavors and binders before forming them into sheets, patties, strips or disks. Literature on extruded products analogous to meat has been reviewed.

Key words
Extrusion, protein texturization, rheology, meat analog, twin screw extruder.

1. INTRODUCTION
Meat analog is a major type of texturized plant protein (TPP) which is extensively used to imitate meat products [1]. It is possible to produce an extruded meat analog which has a remarkable similarity to meat in appearance, texture and mouth feel [2]. The introduction of meat analogues (also termed meat substitutes, meat surrogates and meat replacement foods) in Western markets is a relatively recent development, starting in the early 1960s [3,4]. A growing awareness in the population about health and sustainable foods has led to a rising interest in plant protein based meat alternatives in many European countries and worldwide. The new consumer group of "flexitarians", who reduce the meat consumption in their daily diet, is growing rapidly [5]. Extrusion cooking is the principal processing method used to fabricate meat-like texture and plexilamellar structure for soy protein products [6]. It also offers some advantages in terms of cost and compatibility for high volume production [7].

Extruded meat analog: It is possible to produce an extruded meat analog which has a remarkable similarity to meat in appearance, texture and mouth feel [2]. Two types of textured meat-like products are manufactured by employing the extrusion texturization process. Meat extenders are produced by employing single-screw extruders operating at high temperatures and pressures, followed by drying. After hot water hydration, this texturized vegetable protein (TVP) is used to extend or replace meat for use in pizza toppings, meat sausages and fabricated food formulations. The second type of product is the meat analog, which can be used in place of meat. Both these products exhibit fiber formation due to extrusion cooking of defatted soybean flour and consequent alignment when passing through the restriction or die [8]. Textured proteins of vegetable or animal origin can be processed into meat-like extruded compounds by two methods: dry or wet texturization. Dry expanded products are characterized by a spongy texture; they are usually dried and rehydrated for final use (targeted water absorption 2-3.5). Wet extruded products are processed near the final moisture content and therefore do not need to be expanded for more water absorption. They generally have a more fibrous, less expanded texture than dry extruded products. Meat analogues must be rehydrated with water or flavoring liquids. A spongy structure produces products with poor flavor retention and a lack of real texture [9].

Wet texturization to produce meat analog: Wet extrusion has made possible unconventional products such as texturized proteins. Some of these products have already been commercialized in Japan, China and the USA. Examples include an extruded crab analog made from Alaska pollack surimi with egg white and 1% starch, and texturized soybean foods such as 'fupi' [10]. The texturization process via extrusion can make products that imitate the texture, taste and appearance of meat or seafood with high nutritional value [11].

Creating a meat-like structure: Plant and animal based proteins need to unfold, cross-link and align themselves to form microscopic and macroscopic fibres. The high moisture cooking extrusion process, which is characterised by water levels up to 70 percent, provides the required process conditions. Co-rotating twin screw extruders equipped with long screws and specially adapted long cooling dies proved to be effective for processing the low viscosity mass into a protein strand with a distinctive fibrous structure (Figure 1) [12,13]. In a first step the ingredients, in particular food protein powders and


water, are continuously fed into a long extruder barrel. The co-rotating screws mix the ingredients thoroughly while the mass is steadily heated to temperatures of 130-180°C and is moved towards the die section. During the hydrothermal treatment, the proteins unfold and form new covalent intermolecular bonds. Once the mass enters the cooling die section, drag and shear flows align the proteins in the flow direction. The strong cooling in the long die section has several effects: the temperature gradient from the core of the strand to the die wall increases the shear flow, cools the mass to a core temperature below 100°C, and avoids product expansion caused by evaporation of superheated water. Along with the cooling, non-covalent hydrogen bonds, electrostatic interactions and van der Waals interactions develop. The viscosity rises and the mass solidifies to a strand with a meat-like structure [14,15,16].

Figure 1. Twin screw extruder with opened barrel. (Source of all figures: Fraunhofer IVV, Freising, Germany.)

The processing of meat analogues in such cooking extruders involves a multitude of machine and process parameters. Furthermore, the composition of the matrix, the variety of ingredients and the water content have a major effect on the final product. However, literature data about the individual impacts of each parameter and their interactions on fibre formation and the final product quality are scarce and are limited to soy, wheat and pea protein. High protein levels in the recipe and the use of proteins with an adequate cross-linking capability have been shown to be favourable [12,13,17,18]. The approach of mixing protein ingredients with further components such as starches and fibres was very effective for creating products with a meat-like bite and a juicy mouth-feel, as well as for avoiding dominant, disturbing flavours from single ingredients.

Changes in proteins during extrusion: Several changes in protein occur during extrusion; denaturation is undoubtedly the most important. Most enzymes lose activity within the extruder unless they are stable to heat and shear. Protein solubility in water or dilute salt solutions is decreased after extrusion. Although denaturation and loss of solubility are enhanced by increased barrel temperature, specific mechanical energy (SME) also appears to be important [19]. Even under the low temperatures (<100°C) of pasta extrusion, wheat protein solubility is reduced [20]. During extrusion, disulfide bonds are broken and may re-form. Electrostatic and hydrophobic interactions favour the formation of insoluble aggregates. The creation of new peptide bonds during extrusion is controversial. High molecular weight proteins can dissociate into smaller subunits. Exposure of enzyme-susceptible sites improves digestibility. Maillard reactions occur during extrusion, particularly at higher barrel temperatures and lower feed moistures. Free sugars may be produced during extrusion to react with lysine and other amino acids with free terminal amines. Starch and dietary fibre fragments as well as sucrose hydrolysis products are available for Maillard reactions. Lower pH favoured Maillard reactions, as measured by increased colour in a model system consisting of wheat starch, glucose and lysine [21].

Table 1. Protein changes during extrusion (source: Della Valle et al., 1994).

Functional changes: reduced solubility in water and diluted buffer; texturization.
Nutritional changes: reduced lysine; improved digestibility.

During extrusion, protein structures are disrupted and altered under high shear, pressure and temperature [22]. Protein solubility decreases and cross-linking reactions occur. Possibly, some covalent bonds form at high temperatures [23].
very effective for creating products with a meat-like

Table 2. Typical meat analog ingredients and their purpose (Egbert and Borders 2006).

Water (usage level 50 to 80%): ingredient distribution, emulsification, juiciness, cost.
Textured vegetable proteins, i.e. textured soy flour, textured soy concentrate, textured wheat gluten, and textured protein combinations such as soy and wheat (10 to 25%): water binding, texture/mouthfeel, appearance, protein fortification/nutrition, source of insoluble fiber.
Nontextured proteins, i.e. isolated soy proteins, functional soy concentrate, wheat gluten, egg whites, whey proteins (4 to 20%): water binding, emulsification, texture/mouthfeel, protein fortification/nutrition.
Flavors/spices (3 to 10%): flavor (savory, meaty, roasted, fatty, serumy), flavor enhancement (for example, salt), masking of cereal notes.
Fat/oil (0 to 15%): flavor, texture/mouthfeel, succulence, Maillard reaction/browning.
Binding agents, i.e. wheat gluten, egg whites, gums and hydrocolloids, enzymes, starches (1 to 5%): texture/"bite", water binding; may contribute to fiber content and can determine production processing conditions.
Coloring agents, i.e. caramel colors, malt extracts, beet powder, FD&C colors (0 to 0.5%): appearance/eye appeal, natural or artificial.
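Treating the usage levels in Table 2 as simple formulation bounds, a candidate recipe can be screened automatically. The sketch below is only illustrative: the range data come from Table 2, but the example recipe percentages are invented for demonstration and are not from the source.

```python
# Typical usage ranges (% of formulation) from Table 2 (Egbert and Borders 2006).
RANGES = {
    "water": (50, 80),
    "textured vegetable protein": (10, 25),
    "nontextured protein": (4, 20),
    "flavors/spices": (3, 10),
    "fat/oil": (0, 15),
    "binding agents": (1, 5),
    "coloring agents": (0, 0.5),
}

def check_recipe(recipe):
    """Return the ingredients whose level falls outside the typical Table 2 range."""
    out_of_range = []
    for name, level in recipe.items():
        lo, hi = RANGES[name]
        if not (lo <= level <= hi):
            out_of_range.append(name)
    return out_of_range

# Hypothetical example recipe (percentages invented for illustration).
recipe = {
    "water": 60,
    "textured vegetable protein": 20,
    "nontextured protein": 8,
    "flavors/spices": 5,
    "fat/oil": 6,
    "binding agents": 0.5,   # below the typical 1-5 % range
    "coloring agents": 0.5,
}
print(check_recipe(recipe))   # -> ['binding agents']
```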

Table 3. Major nonmeat protein sources.

β-conglycinin, glycinin (soybean): Koshiyama and Fukushima (1976), Mitsuda and others (1965), Thanh and Shibasaki (1977), Staswick and others (1984), Sun and others (2008).
Vicilin, legumin (legumes): Duranti and Gius (1997), Kang and others (2007), Plietz and others (1984).
Albumins, globulins, glutelins (oil seeds): Prakash and Narasinga Rao (1986), Marcone (1999).
Gluten, gliadins, glutenins (wheat, rye, and barley): Singh and MacRitchie (2001), Green and Cellier (2007).
Mycoprotein (Fusarium venenatum, a filamentous fungus): Rodger (2001), Denny and others (2008).

Raw materials

Protein dispersibility index (PDI): Textured soy products have been produced with raw materials ranging from 20 to 70 PDI.

Fat level: Raw materials containing 0.5 to 6.5% fat have been texturized. The higher range of fat (5.5%) allows mechanically extracted soybean meal to be texturized into meat extenders and meat analogs.

Fiber level: The presence of fiber in extruded soy protein inhibits or blocks the interaction or cross-linking of protein molecules necessary for good textural integrity.

Particle size: With regard to successful production, twin screw extruders can sometimes use raw material with a particle size range up to 8 mesh (2360 micron) without affecting the textural properties of the final product.

Adjustments in pH: Increasing the pH of vegetable protein before or during the extrusion process will aid in texturization of the protein. Extreme increases in pH will increase the solubility and decrease the textural integrity of the final product [39]. Lowering the pH has the opposite effect and will decrease protein solubility, making the protein more difficult to process [40]. Undesirable sour flavours in the texturized vegetable protein products may be evident if the pH is adjusted below 5.0. Modifying the pH to the alkaline side will increase the water absorption; this is generally done by using calcium hydroxide or the more widely used sodium hydroxide at about 0.1% or as required.

Calcium chloride: Calcium chloride (CaCl2) is very effective in increasing the textural integrity of extruded vegetable protein and also aids in smoothing its surface. With the addition of CaCl2 and small amounts of sulfur, soybean meal containing 7.0% fiber may be texturized, retorted for one hour at 110°C and still maintain a strong meat-like texture [41].

Color enhancers: When supplementing light colored meats with meat extenders made from textured vegetable proteins, it is desirable to bleach or lighten the color of the extruded meat extender. Bleaching agents such as hydrogen peroxide are often used for this purpose; dosing levels for hydrogen peroxide range from 0.25 to 0.5%. Pigments such as titanium dioxide are also used at levels between 0.5 and 0.75% to lighten color.

2. Conclusion
The literature surveyed shows that the processing conditions, the composition of the matrix, the variety of ingredients and the water content have a major effect on the final product, which can substitute for meat.

3. REFERENCES


[1] Sheard, P.R., Ledward, D.A. and Mitchell, J.R. 1984. Role of carbohydrates in soya extrusion. International Journal of Food Science & Technology, vol.19, pp.475-483.
[2] Smith, O.B. 1975. "Textured Vegetable Proteins", presented at the World Soybean Research Conference, University of Illinois.
[3] Sadler, M.J. 2004. Trends Food Sci. Technol., vol.15(5), pp.250-260.
[4] Davies, J., Lightowler, H. Nutrition and Food Science, vol.98(2), pp.90-94.
[5] http://blog.euromonitor.com/2011/08/
[6] Burgess, L.D. and Stanley, D.W. 1976. A possible mechanism for thermal texturization of soybean protein. J. Inst. Can. Sci. Technol. Aliment., vol.9(4), pp.228-229.
[7] Harper, J.M. 1981. Extrusion of Foods II. CRC Press, Florida, USA, pp.174.
[8] Book: Advances in Food Extrusion Technology, pp.78.
[9] Book: Extrusion Cooking, pp.123.
[10] Shen, Z.C. and Wang, Z.D. 1992. A novel extruder for soybean texturization. In: Food Extrusion Science and Technology (edited by J.L. Kokini, C.T. Ho and M.V. Karwe), pp.725-732. New York: Marcel Dekker, Inc.
[11] Cheftel, J.C., Kitagawa, M. and Queguiner, C. 1992. New protein texturization processes by extrusion cooking at high moisture levels. Food Reviews International, vol.8, pp.235-275.
[12] Cheftel, J.C., Kitagawa, M., Queguiner, C. 1992. Food Reviews International, vol.8(2), pp.235-275.
[13] Akdogan, H. 1999. Int. J. Food Sci. Techn., vol.34, pp.195-207.
[14] Prudêncio-Ferreira, S.H., Arêas, J.A.G. 1993. J. Food Sci., vol.58, pp.378-381.
[15] Liu, K., Hsieh, F.H. 2008. Journal of Agricultural and Food Chemistry, vol.56, pp.2681-2687.
[16] Lee, G., Huff, H.E., Hsieh, F. 2005. Transactions of the ASAE, vol.48(4), pp.1461-1469.
[17] Lin, S., Yeh, C.S., Lu, S. 2002. J. Food Sci., vol.67(3), pp.1066-1072.
[18] Wild, F., Zunabovic, M., Domig, K.J. Die Ernährung / Nutrition (in press).
[19] Della Valle, G., Quillien, L. and Gueguen, J. 1994. Relationships between processing conditions and starch and protein modifications during extrusion cooking of pea flour. Journal of Science Food Agriculture, vol.64, pp.509-517.
[20] Ummadi, P., Chenoweth, W.L. and Ng, P.K.W. 1995a. Changes in solubility and distribution of semolina proteins due to extrusion processing. Cereal Chemistry, vol.72, pp.564-567.
[21] Bates, L., Ames, J. and MacDougall, D.B. 1994. The use of a reaction cell to model the development and control of colour in extrusion cooked foods. Lebensm.-Wiss. u. Technol., vol.27, pp.375-379.
[22] Harper, J.M. 1989. Food extruders and their applications. In C. Mercier, P. Linko and J.M. Harper (Eds.), Extrusion Cooking (pp.1-16). St. Paul, Minnesota, USA: American Association of Cereal Chemists Inc.
[23] Areas, J.A.G. 1992. Extrusion of food proteins. Critical Review of Food Science Nutrition, vol.32, pp.365-392.
[24] Egbert, R. and Borders, C. 2006. Achieving success with meat analogs. Food Technol-Chicago, vol.60, pp.28-34.
[25] Koshiyama, I., Fukushima, D. 1976. Identification of the 7S globulin with β-conglycinin in soybean seeds. Phytochemistry, vol.15, pp.157-159.
[26] Mitsuda, H., Kusano, T., Hasegawa, K. 1965. Purification of the 11S component of soybean proteins. Agr Biol Chem Tokyo, vol.29, pp.7-12.
[27] Thanh, V.H., Shibasaki, K. 1977. Beta-conglycinin from soybean proteins. BBA-Protein Struct M, vol.490, pp.370-384.
[28] Staswick, P.E., Hermodson, M.A., Nielsen, N.C. 1984. Identification of the cysteines which link the acidic and basic components of the glycinin subunits. J Biol Chem, vol.259, pp.13431-13435.
[29] Sun, P., Li, D., Li, Z., Dong, B., Wang, F. 2008. Effects of glycinin on IgE-mediated increase of mast cell numbers and histamine release in the small intestine. J Nutr Biochem, vol.19, pp.627-633.
[30] Duranti, M., Gius, C. 1997. Legume seeds: protein content and nutritional value. Field Crop Res, vol.53, pp.31-45.
[31] Kang, I.H., Srivastava, P., Ozias-Akins, P., Gallo, M. 2007. Temporal and spatial expression of the major allergens in developing and germinating peanut seed. Plant Physiol, vol.144, pp.836-845.
[32] Plietz, P., Zirwer, D., Schlesier, B., Gast, K., Damaschun, G. 1984. Shape, symmetry, hydration and secondary structure of the legumin from Vicia faba in solution. BBA-Protein Struct M, vol.784, pp.140-146.
[33] Prakash, V., Narasinga Rao, M.S. 1986. Physicochemical properties of oilseed proteins. CRC Crit Rev Biochm Mol, vol.20, pp.265-363.
[34] Marcone, M.F. 1999. Biochemical and biophysical properties of plant storage proteins: a current understanding with emphasis on 11S seed globulins. Food Res Int, vol.32, pp.79-92.
[35] Singh, H., MacRitchie, F. 2001. Application of polymer science to properties of gluten. J Cereal Sci, vol.33, pp.231-243.
[36] Green, P.H.R., Cellier, C. 2007. Celiac disease. New Engl J Med, vol.357, pp.1731-1743.


[37] Rodger, G. 2001. Production and properties of mycoprotein as a meat alternative. Food Technol-Chicago, vol.55, pp.36-41.
[38] Denny, A., Aisbitt, B., Lunn, J. 2008. Mycoprotein and health. Nutr Bull, vol.33, pp.298-310.
[39] Simonsky, R.W. and Stanley, D.W. 1982. Can. Inst. Food Sci. Technol. J., vol.15, pp.294.
[40] DeGroot, A.P., Slump, P., Feron, V.J. and Van Beek, L. 1976. J. Nutr., vol.106, pp.1527.
[41] Joseph Kearns et al. 2013. Extrusion of Texturized Proteins. muyang.com.


Biotechnology & Genetic Engineering: Enhancement in Food Quality and Quantity

Joni Lal, Kulbhushan Rana
S.D. College, Barnala

ABSTRACT
Biotechnology and genetic engineering are now able to increase food quality and quantity by introducing modified recombinant genes into crop plants. The University of Hawaii and Cornell University developed two varieties of papaya resistant to papaya ringspot virus by transferring genes from the virus. Crops like cotton and potato have been successfully transformed to make a protein, the Cry protein, which kills harmful insects. The Indian Agricultural Research Institute, New Delhi, has also developed many vegetable crops that are rich in vitamins and minerals. Biotechnology and genetic engineering have developed GM crops like 'Golden Rice' and 'Flavr Savr' which are far better than natural varieties.

Keywords
1. Transformation
2. rDNA
3. Biofortification
4. Genetic Engineering
5. Agrobacterium

1. INTRODUCTION
Biotechnology in combination with genetic engineering is now able to increase food quality and quantity by introducing modified recombinant genes into crop plants. These genes are modified for their better expression. Researchers can select these genes from other species and transform them into the crop plants. In plants the DNA is generally inserted using Agrobacterium-mediated recombination or biolistics. In Agrobacterium-mediated recombination, a plasmid is constructed that contains T-DNA, which is responsible for insertion of the rDNA into the host plant. Modified genes can also be transformed through the electroporation method in plants and animals. The University of Hawaii and Cornell University developed two varieties of papaya resistant to papaya ringspot virus by transferring a gene from the virus. Farmers can also use modern biotechnology techniques to protect their crop plants by developing disease resistant, pest resistant and drought resistant varieties. Certain crops like cotton, corn and potato have been successfully transformed to make a protein, e.g. the Cry protein from Bacillus thuringiensis, which can kill certain harmful insects. Biotechnology is also now able to increase the nutritional value, flavor and texture of foods. Transgenic crops in development include soybeans with higher protein content, potato with more starch and improved amino acid content, and rice with improved β-carotene content (Golden Rice), a precursor of vitamin A, to help prevent night blindness in people who have low β-carotene content in their diets. Biofortification also comes from biotechnology; in it, breeding of crops with higher levels of vitamins, minerals, proteins and fats is done. Maize hybrids which had twice the amount of the amino acids lysine and tryptophan, as compared to existing maize hybrids, were developed in 2000. The Indian Agricultural Research Institute (IARI), New Delhi, has also developed many vegetable crops that are rich in minerals and vitamins, e.g. vitamin A enriched carrots, pumpkin and spinach; vitamin C enriched bathua, tomato and mustard; and calcium and iron enriched spinach and bathua. Artificial seeds can also be produced with the help of biotechnology; in India artificial or synthetic seed production is being done for sandalwood and mulberry at BARC (Bhabha Atomic Research Centre, Mumbai). Post-harvest losses can also be reduced with the help of biotechnology by delayed fruit ripening. In the 'Flavr Savr' transgenic tomato, expression of the gene producing the enzyme polygalacturonase, which promotes softening of fruits, has been blocked; non-availability of this enzyme prevents over-ripening and enhances the shelf life of the fruit.

2. DISEASE RESISTANT CROPS
Plants are mainly infected by certain pathogens: bacteria, viruses, fungi, nematodes etc. Very important crop plants like wheat and potato are infected by fungi. Fungi cause the diseases 'Loose Smut of Wheat' and 'Black Smut of Wheat' in wheat, and 'Early Blight of Potato' and 'Late Blight of Potato'. Due to these fungal infections there is great loss to crop plants and food production, so biotechnology with genetic engineering is able to introduce genes which can protect these plants from great destruction. Genes encoding the enzymes chitinase and glucanase are selected from other plants or bacteria and transformed into the crop plants; these enzymes can easily destroy the chitin-containing cell wall of fungi. In virus-infected plants, genes encoding the viral coat protein are introduced to protect the plants and develop virus-resistant varieties. Introducing such modified genes into the crop plants will increase crop productivity.

3. PEST RESISTANT CROPS
Insect pests are a serious agricultural problem leading to yield losses and decreased food production. Researchers have used genetic engineering to take the bacterial genes needed to produce Bt toxins and introduce them into plants. If plants produce Bt toxin on their own, they can defend themselves against harmful insects, so farmers no longer have to use chemical pesticides to control these insects. This strategy is eco-friendly: it protects the plants from harmful insects and also keeps the environment safe from harmful chemicals.

4. STRESS RESISTANT CROPS
The transcription factors DREB1 and DREB2 are important in the ABA-independent drought tolerance pathway that induces the expression of stress response genes. Genes for these transcription factors increase the tolerance of transgenic Arabidopsis plants to drought, high salinity and cold. Studies show that these DREB genes are also reported in important food crops such as rice, potato, barley, maize and wheat, which means that this is a conserved, universal stress defense mechanism in plants. So DREB genes are important targets for crop

818
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

improvement for drought through genetic engineering and for the enhancement of food production.

5. IMPROVEMENT OF SHELF LIFE
Through genetic engineering, the shelf life of fruits can be improved by delaying fruit ripening. This makes long-distance transport of fruits such as tomato easier, and slow ripening also improves the flavor. Most of the work on delayed fruit ripening has been done on tomatoes. Ripening is normally promoted by increased respiration and rapid ethylene production, and fruits are softened by the activity of enzymes such as polygalacturonase and pectin methyl esterase. With the application of genetic engineering, the activity of these enzymes can be blocked.

6. BIOFORTIFICATION
Biofortification is a process through which the nutritional value of food can be increased, either by selective breeding or by genetic engineering. It is a rapidly expanding technology for dealing with micronutrient deficiency in the world; the WHO has estimated that biofortification could help cure the 2 billion people suffering from iron-deficiency anemia. Golden Rice is an example of a GM crop developed for its nutritional value.


Effect of physical properties on flowability of commercial rice flour/powder for effective bulk handling

Shumaila Jan, Syed Insha Rafiq, D.C. Saxena
Department of Food Engineering and Technology
Sant Longowal Institute of Engineering and Technology, Longowal-148106 (India)
shumailanissar@gmail.com

ABSTRACT
This work evaluates the physical properties affecting the flowability of commercial rice flour. This flour/powder was selected because of the flow issues encountered by mills and industries during bulk handling. A number of powder physical properties, including moisture content, particle size distribution, bulk density, compressibility index, angle of repose and coefficient of friction, were measured. Powder flowability was measured in terms of cohesion index, caking strength and powder flow speed dependency. These properties are used in interpreting the flow behaviour of the commercial rice flour.

General Terms
Flowability, bulk handling, commercial rice flour.

Key words: Physical properties, cohesive index, caking strength, powder flow speed dependence.

1. INTRODUCTION
Food powders are among the most widely used materials, both in industry and in households all around the world, and they are considered among the most difficult materials to characterise. Much research regarding the handling and storage characteristics of bulk solids has been conducted over the years. Powders are usually surrounded by air (or another fluid), and it is the combination of solids plus fluid that determines the bulk properties of the powder; the amount of fluid can vary. Powders are the least predictable of all materials in relation to flowability because of the large number of factors that can change their rheological properties: they are blends of solids, liquids and gases (usually air), and their flow properties, or rheology, may be affected by perhaps 100 or more factors. Broadly, powders are either cohesive or non-cohesive. A free-flowing powder is non-cohesive: its particles are separate and, when unconfined, can move individually, and generally speaking the interparticle forces are negligible. As long as a powder is free flowing, the major obstruction to flow is internal friction.

The most troublesome industrial powder problems are obtaining reliable and consistent flow out of hoppers and feeders without excessive spillage and dust generation. These problems are usually associated with the flow pattern inside the silo and hoppers. The worst-case scenario is no flow, which can occur when the powder forms a cohesive arch across the opening with sufficient strength within the arch to be self-supporting. Mass flow is the ideal flow pattern, in which all the powder is in motion and moving downwards towards the opening. Funnel flow occurs when powder starts moving out through a central "funnel" that forms within the material, after which the powder against the walls collapses and moves through the funnel; this process continues until the silo empties or until another no-flow scenario occurs with the development of a stable rathole [1]. Furthermore, these flow properties are in turn governed by the material's physical properties, for instance particle size and shape, granular surface structure, particle density and packing properties, and by external factors such as water content, temperature and the presence of other materials or ingredients [2].

The physical properties of granular solids play a significant role in their resulting flow and storage behaviour and are thus essential for designing appropriate, efficient and economic bulk solids handling and storage equipment and structures [3]. These properties determine the flow behavior of the flour and are therefore helpful in its proper handling during the various stages of processing, conveying and storage. Bulk density, kernel density and porosity affect the structures and loads and are used in sizing grain hoppers and storage facilities. The angle of repose is important in the design of storage and transporting structures. The static coefficient of friction of the grain against various surfaces is used to determine the angle at which the grains or their flour must be positioned in order to achieve consistent flow of material. It is also important in the design of conveyors, because friction is necessary to hold the grains or flour to the conveying surface without slipping or sliding backward. Frictional property data are useful in hopper design, since the inclination angle of the hopper walls should be greater than the angle of repose to ensure continuous (gravity) flow; these parameters are thus also necessary in the design of conveying, transporting and storing structures. The Hausner ratio and Carr's index are two widely used measurements of the flowability of bulk solids and are commonly referred to as the compressibility index. The present study was conducted on commercial rice flour, which was assessed for particle size distribution, bulk density, tapped density, compressibility index, angle of repose, static coefficient of friction, cohesion index, caking strength and powder flow speed dependency.

2. MATERIALS AND METHODS

2.1. Physical properties

2.1.1. Procurement of sample: The rice flour sample was obtained from a local flour mill, Longowal, Sangrur (Punjab).

2.1.2. Particle size analysis
Particle size analysis was done using a Laser Diffraction Particle Size Analyzer (Model SALD-2300, M/s. SHIMADZU Corporation, Kyoto, Japan). Measuring method


used was laser diffraction and laser scattering intensity pattern, with a measuring range of 0.017 µm to 2500 µm. The light source was a semiconductor laser with a wavelength of 680 nm and an output of 3 mW. The operating temperature was 10°C to 30°C, with an operating humidity of 20% to 80% (non-condensing). A suspension was prepared from 0.5 g of powder and 1 ml of ethanol (refractive index: 1.36) by continuous stirring on a magnetic stirrer for 3-5 min. The prepared sample was filled into the cuvette and readings were taken.

2.1.3. Moisture content
Moisture content (wet basis) was measured by weighing 5 g of sample before and after drying in an oven at 105°C for 3 hours. Each test was carried out in triplicate.

2.1.4. Bulk density and loose density of rice flour
The loose bulk density of rice flour was determined by carefully filling a standard 100 ml graduated cylinder of Vankel design (M/s. Standard Instrument Corporation, Patiala, India) with the sample. Initially, the material was filled up to the 25 ml mark of the cylinder and weighed. The cylinder was then tapped on a flat surface about 10 times to allow the material to settle and was filled further with material. The material was weighed again, and the loose and tapped densities were calculated from the volume of the cylinder and the weight of the material [4]:

ρl = Wl / Vl   (1)

ρt = Wt / Vt   (2)

The loose and tapped densities were used to calculate Carr's index (CI) (Eq. 3) and the Hausner ratio (HR) (Eq. 4). The Hausner ratio, expressed as a decimal, is defined as the ratio of a material's tapped bulk density to its loose bulk density [5, 6]:

CI = 100 × [(ρt − ρl) / ρt]   (3)

HR = ρt / ρl   (4)

where ρl and ρt are the bulk densities (kg m-3), Wl and Wt are the masses (kg), and Vl and Vt are the volumes (m3) of flour in the loose and tapped fill conditions, respectively.

2.1.5. Angle of repose
The angle of repose is the angle formed between the horizontal base of the bench surface and the edge of a cone-like pile of granules. A cylinder was placed on a plane surface and filled with rice flour; tapping during filling was done to obtain uniform packing and to minimize any wall effect. The tube was slowly raised above the floor so that the whole material could slide and form a natural slope. The height of the heap above the floor and the diameter of the heap at its base were measured, and the angle of repose (φ) was calculated as:

φ (°) = tan-1 (2h / D)

where φ = angle of repose (°), h = height of the pile (mm), and D = diameter of the pile (mm).

2.1.6. Coefficient of friction
The coefficient of static friction (μ) was determined against three structural materials, namely glass, galvanized steel sheet and plywood. A plastic cylinder of 30 mm diameter and 35 mm height was placed on an adjustable tilting flat plate faced with the test surface and filled with the sample. The cylinder was raised slightly so as not to touch the surface. The structural surface with the cylinder resting on it was inclined gradually until the cylinder just started to slide down, and the angle of tilt (α) was read from a graduated scale [7, 8, 9]. The coefficient of friction was calculated as:

μ = tan α

2.1.7. Flow property measurement by powder flow analyzer

2.1.7.1. Cohesion index
The cohesion index (CI), defined as the ratio of the cohesion coefficient to the sample weight, was measured using a Stable Micro Systems TA.XT2i Plus texture analyzer. The cohesion coefficient (g.mm) is the work required to lift the blade up through the powder column during the decompression phase at a speed of 50 mm s-1, determined by integrating the negative areas under the force-displacement curve:

CI (mm) = Cohesion coefficient (g.mm) / Sample weight (g)

2.1.7.2. Caking test
The rotor moves to a force of 5 g at 20 mm/s. This step levels off the powder and allows the rate at which the column height reduces during the caking process to be recorded. Once the target force is reached, data are recorded as the rotor moves down through the powder column at 20 mm/s until it reaches a force of 500 g. The rotor then moves upwards at 10 mm/s, subjecting the powder column to minimum displacement. This is repeated for five compactions. At the end, the rotor slices through the compacted cake, recording the hardness of the cake, i.e. the force required to get the compacted powder flowing freely. Finally, the rotor moves back to the top. The data analysis uses the column height at the start of each compaction cycle, the distance at which the final 500 g force is reached (the cake height) for each cycle, and the mean force and work (g.mm) required to slice through the caked area. Cake height ratios (ratios to the initial column height) and the cake strength (both as the mean force and as the work required, i.e. the area under the curve) are calculated.

2.1.7.3. Powder Flow Speed Dependence (PFSD) test
The rotor moves down through the powder column at 10 mm/s. Data are recorded to measure the resistance of the powder to being pushed at a controlled flow rate, i.e. the interparticle friction of the powder, which gives the flow stability index. At the bottom of the powder column, the rotor slices through the powder to avoid hard compaction. The rotor then moves up through the powder at 20 mm/s twice, followed by two cycles at 50 mm/s, two cycles at 100 mm/s, and two cycles at 10 mm/s. The analysis is performed on both the positive (force vs distance) and negative areas, and the average of the two areas for the compaction (as the rotor moves down through the powder column) is taken at each flow rate. These are recorded as the compaction coefficients at 10, 20, 50 and


100 mm/s. The compaction coefficient for the final two cycles at 10 mm/s is averaged and ratioed against that from the initial two cycles at 10 mm/s to assess whether the powder has broken down during the testing; this ratio is the PFSD value. A value close to 1 means the powder has not changed at all during testing; a value >1 means the product has changed during testing (giving a higher compaction coefficient), and a value <1 means it has changed to give a lower compaction coefficient.

3. RESULTS AND DISCUSSION

3.1. Particle size analysis
The particle size analysis of the commercial rice flour is shown in Fig. 1 and the data are summarized in Table 1. The particles showed a unimodal granule size distribution. The results show that the commercial rice flour had a wide size distribution, with a median granule diameter of 102.429 µm. It is generally considered that powders with particle sizes larger than 200 µm are free flowing, while fine powders are subject to cohesion and flow with more difficulty. Our results indicate a contribution of physical factors mainly due to the median particle size: an increase in flow difficulty and cohesiveness in conjunction with a reduction in particle size is observed for the selected commercial rice flour. This effect may arise because reducing the particle size tends to increase cohesive behaviour: the particle surface area per unit mass increases, favouring a greater number of contact points for interparticulate bonding and additional interactions, and resulting in more cohesive and less free-flowing powders [10, 11, 12]. Similar results have been obtained for wheat flour [13].

Fig. 1: Particle size analysis of commercial rice flour

TABLE 1. Physical properties and flow characteristics

Properties                       Values
Moisture content (%)             11.6
Particle size (µm)               102.429
Loose bulk density (g cm-3)      0.644
Tapped bulk density (g cm-3)     0.762
Angle of repose (°)              66.571
Carr's index                     15.480
Hausner ratio                    1.183
Flow stability index             0.987
Cake strength (g.mm)             322.097
Cohesion index                   14.18

3.2. Moisture content
The commercial rice flour had 11.6% moisture content.

3.3. Bulk density and loose density of rice flour
The loose and tapped densities of the commercial rice flour were measured and the results are presented in Table 1. Particle size of rice flour is directly related to bulk density. The commercial rice flour had comparatively low loose bulk (0.644 g cm-3) and tapped (0.762 g cm-3) densities. CI is a measure of powder bridge strength and stability, and the Hausner ratio (HR) is a measure of interparticulate friction. The Hausner ratio and Carr's compressibility index (CI, %) were calculated using Eqs. 3 and 4, respectively; the CI (15.480) and HR (1.183) are shown in Table 1 and are in accordance with the density measurements. The Hausner ratio (1.183) and Carr's index (15.48) indicate fair flowability on the scale described in Table 2 [14].

TABLE 2. Scale of flowability

Carr's index   Angle of repose (°)   Hausner ratio   Flow characteristics
<=10           25-30                 1.00-1.11       EXCELLENT
11-15          31-35                 1.12-1.18       GOOD
16-20          36-40                 1.19-1.25       FAIR
21-25          41-45                 1.26-1.34       PASSABLE
26-31          46-55                 1.35-1.45       POOR
32-37          56-65                 1.46-1.59       VERY POOR
>38            >66                   >1.60           VERY, VERY POOR

3.4. Angle of repose and coefficient of friction
The angle of repose for the commercial rice flour was found to be 66.571°. This parameter is important in the proper design of hoppers: to maintain continuous flow, the inclination of the hopper walls must be larger than the angle of repose of the rice flour. The angle of repose is increased by cohesion. The average values of the static coefficient of friction of the commercial rice flour against plywood, glass and galvanized steel sheet were 0.587, 0.555 and 0.682, respectively, as shown in Table 3. With higher friction coefficients, more stable open bridged structures can develop within the body of the powder, creating a lower overall packing density.

TABLE 3: Coefficient of friction of commercial rice flour

Plywood   Glass   Galvanized steel sheet
0.587     0.555   0.682

3.5. Cohesion index
Force vs distance curves for the commercial rice flour are shown in Fig. 2. From these recorded curves, the cohesion index (the ratio of the cohesion coefficient to the sample weight) is calculated by integrating the negative areas under the curve using the MACRO software supplied with the texture analyzer. The results are presented in Table 1. Each measurement is an average of 6 cycles in each mode, as designed by the standard operating procedure itself. As observed, the commercial rice flour exhibited cohesive behaviour (Fig. 2). This property can be attributed to the small particle size of the rice flour, as described by [15] for commercial infant formula powders. On comparing the


obtained values for the rice flour with the standard scale of cohesion, it was again predicted that the commercial rice flour had a cohesive behaviour.

Fig 2. Cohesive index of the commercial rice flour

3.6. Caking test
A typical caking curve is shown in Fig. 3; the cake strength and mean cake strength were calculated using the Texture Exponent software. The as-received commercial rice flour showed an average cake height ratio, which can be attributed to the fact that powders with larger particles are not as susceptible to caking as powders with smaller particles [15]. Results are shown in Table 1.

Fig. 3: Caking Strength of the commercial rice flour

3.7. Powder Flow Speed Dependence (PFSD) test
Powder flow speed dependence (PFSD) was analyzed to quantify the dependence of flow characteristics on flow rate. It also measures flow stability, i.e. how the powder breaks down during testing. A typical force-displacement profile for the PFSD test is shown in Fig. 4. The flow stability index (FSI) was 0.987 (Table 1). A flow stability index below 1 is associated with granulation effects during the flowability test [13], whereas a flow stability index close to 1 is associated with no granulation effects during the flowability test.

Fig. 4: Powder flow speed dependency of the commercial rice flour

4. CONCLUSION
An increase in flow difficulty and cohesiveness in conjunction with a reduction in particle size was observed for the selected commercial rice flour. The static coefficient of friction for the commercial rice flour was highest on galvanized steel sheet, followed by plywood and glass. The cake height ratio and compaction coefficient of the as-received commercial rice flour indicated the poor compactability of these powders. Powder flow analysis is an effective tool for characterizing the flow properties of commercial rice flour and for elucidating the effect of powder granule size and its distribution on flow behaviour and subsequent compaction processing. To reach a better understanding of the contribution of the physical properties of rice flour to its flowability and cohesive properties, it would be interesting to evaluate the surface composition of the rice flour particles: the biochemical surface composition can be supposed to be more relevant to flowability and cohesion than the bulk composition of the flour.

REFERENCES
[1] Fitzpatrick, J.J., Barringer, S.A., Iqbal, T. 2004. Flow property measurement of food powders and sensitivity of Jenike's hopper design methodology to the measured values. Journal of Food Engineering, 61, 399-405.
[2] Abu-hardan, M., Hill, S.E. 2010. Handling properties of cereal materials in the presence of moisture and oil. Powder Technology, 198, 16-24.
[3] Ganesan, V., Rosentrater, K.A., Muthukumarappan, K. 2008. Flowability and handling characteristics of bulk solids and powders: a review with implications for DDGS. Biosystems Engineering, 101, 425-435.
[4] Tumuluru, J.S., Tabil, L.G., Song, Y., Iroba, K.L., Meda, V. 2014. Grinding energy and physical properties of chopped and hammer-milled barley, wheat, oat, and canola straws. Biomass and Bioenergy, 60, 58-67.
[5] Grey, R.O., Beddow, J.K. 1969. On the Hausner ratio and its relationship to some properties of metal powders. Powder Technology, 2(6), 323-326.
[6] Garcia, R.A., Flores, R.A., Mazenko, C.E. 2007. Factors contributing to the poor bulk behavior of meat and bone meal and methods for improving these behaviors. Bioresource Technology, 98(15), 2852-2858.
[7] Dutta, S.K., Nema, V.K., Bhardwaj, R.K. 1988. Physical properties of gram. Journal of Agricultural Engineering Research, 39, 259-268.
[8] Fraser, B.M., Verma, S.S., Muir, W.E. 1978. Some physical properties of faba beans. Journal of Agricultural Engineering Research, 23, 53-57.
[9] Shepherd, H., Bhardwaj, R.K. 1986. Moisture-dependent physical properties of pigeon pea. Journal of Agricultural Engineering Research, 35, 227-234.
[10] Fitzpatrick, J.J., Barringer, S.A., Iqbal, T. 2004. Flow property measurement of food powders and sensitivity of Jenike's hopper design methodology to the measured values. Journal of Food Engineering, 61, 399-405.
[11] Katikaneni, P.R., Upadrasha, S.M., Rowlings, C.E., Neau, S.H., Hileman, G.A. 1995. Consolidation of ethylcellulose: effect of particle size, press speed, and lubricants. International Journal of Pharmaceutics, 117, 13-21.
[12] Teunou, E., Fitzpatrick, J.J. 1999. Effect of relative humidity and temperature on food powder flowability. Journal of Food Engineering, 42, 109-116.


[13] Landillon, V., Cassan, D., Morel, M.H., Cuq, B. 2008. Flowability, cohesive, and granulation properties of wheat powders. Journal of Food Engineering, 86, 178-193.
[14] Igathinathane, C., Jaya, S.T., Sokhansanj, S., Bi, X., Lim, C.J., Melin, S., Mohammad, E. 2010. Simple and inexpensive method of wood pellets macro-porosity measurement. Bioresource Technology, 101(16), 6528-6537.
[15] Benkovic, M., Bauman, I. 2009. Flow properties of commercial infant formula powders. World Academy of Science, Engineering and Technology, 30.
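As a quick numerical check on the measurements reported above, the relations of Sections 2.1.4-2.1.6 (Eqs. 1-4, the angle-of-repose formula and μ = tan α) and the Table 2 flowability scale can be sketched in a few lines of Python. This is an illustrative aside, not part of the paper's method: only the two density values and the scale bounds are taken from the study, and the handling of values that fall between the printed scale bins is an assumption.

```python
import math

def carr_index(rho_loose, rho_tapped):
    # Eq. 3: CI = 100 * (rho_t - rho_l) / rho_t
    return 100.0 * (rho_tapped - rho_loose) / rho_tapped

def hausner_ratio(rho_loose, rho_tapped):
    # Eq. 4: HR = rho_t / rho_l
    return rho_tapped / rho_loose

def angle_of_repose(height_mm, diameter_mm):
    # phi = tan^-1(2h / D), returned in degrees
    return math.degrees(math.atan(2.0 * height_mm / diameter_mm))

def static_friction_coefficient(tilt_deg):
    # mu = tan(alpha), where alpha is the tilt angle at which sliding starts
    return math.tan(math.radians(tilt_deg))

# Carr-index bins as upper bounds, following Table 2 of this paper
CARR_SCALE = [(10, "excellent"), (15, "good"), (20, "fair"), (25, "passable"),
              (31, "poor"), (37, "very poor"), (float("inf"), "very, very poor")]

def classify_carr(ci):
    # return the first category whose upper bound is not exceeded
    for upper, label in CARR_SCALE:
        if ci <= upper:
            return label

# Loose and tapped densities reported for this rice flour (Table 1)
ci = carr_index(0.644, 0.762)      # ~15.5; the paper reports 15.480
hr = hausner_ratio(0.644, 0.762)   # ~1.183
flow = classify_carr(ci)           # "fair", matching the paper's assessment
```

With the Table 1 densities this reproduces the reported Carr index (15.48), Hausner ratio (1.183) and the "fair" flowability rating.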


Extraction of starch from differently treated horse chestnut slices

Syed Insha Rafiq, Shumaila Jan, Sukhcharn Singh, D.C. Saxena
Department of Food Engineering and Technology
Sant Longowal Institute of Engineering and Technology, Longowal-148106 (India)
syedinsha12@gmail.com

ABSTRACT
Starch was extracted from dehydrated horse chestnut slices dried at temperatures of 50, 60, 70, 80 and 90°C, and from rehydrated chips (dried at 50 and 60°C) rehydrated at 25 and 40°C. Optimization was done on the basis of starch yield; the highest yield was found in the sample dried at 50°C. The physicochemical properties of the optimized starch were determined. Color values indicate that the starch was light in color, with an L value of 96.4. The starch had a neutral pH with zero carboxyl content. The bulk density was 0.85 g/ml and the sediment value was 36 ml. Light transmittance showed a decreasing trend over a storage period of 120 h. Syneresis and freeze-thaw values increased from 0 to 3.24% and from 0 to 20.21%, respectively, with storage period.

General Terms
Horse chestnut starch, centrifuge, spectrophotometer, grinder

Keywords
Horse chestnut starch, bulk density, sediment value, paste clarity, color, syneresis

1. INTRODUCTION
Indian horse chestnut or Himalayan chestnut (Aesculus indica), locally known as Han dun, is a large deciduous tree found in moist and shady ravines of Jammu and Kashmir, Himachal Pradesh and Uttar Pradesh [1, 2], yielding a large number of seeds, which ripen in October [3]. The seeds are about 3.5 cm in diameter, with a hard shining black rind outside and lime-white cotyledons inside [4]. The seeds have edible uses. In Himachal Pradesh, seeds whose bitterness has been reduced by keeping them in running water are dried and ground into a flour called Tattwakhar, used for making Halwa (porridge) and also mixed with wheat flour to make chapattis [5]. During famine times, the seeds have been used as food by various tribes of North and North-East India [6]. The seeds constitute about 50.5% moisture, 5.85% sugars, 0.39% protein and 1.93% ash [4], and about 38.3% starch on a dry weight basis [7]. Owing to this satisfactory starch content, horse chestnut seeds can serve as an inexpensive non-conventional source of starch.

The utilization of starch as an additive in various industrial applications has stimulated the development of various extraction methodologies giving high purity and well defined physical and functional properties. Important advances have been made during the last decades in the development of methods for starch extraction. However, the successful characterization of starch mainly depends upon the purity of the isolated starch. A good representative starch sample must contain >96% (w/w) pure starch and be devoid of other plant components, such as fiber (soluble and insoluble), protein, and lipids.

The objective of the present study was to standardize the extraction of starch from dried and rehydrated horse chestnut slices.

2. MATERIALS AND METHODS
2.1 Materials
Indian horse chestnut seeds were harvested from trees located in rural areas of Anantnag, Jammu and Kashmir, India, during October 2013. The seeds were washed with water, oven dried at 60°C to remove excess moisture and stored at 5°C until further use. All the reagents and chemicals used in the study were of analytical grade and were obtained from M/s. Loba Chemie Pvt. Ltd, Mumbai (India).

2.2. Sample preparation
Seeds were peeled manually and the kernels were cut into slices of 4 ± 0.4 mm. The slices were dipped into water containing 0.5% KMS and 1.0% citric acid for 30 min [8]. The slices were then dried at temperatures of 50, 60, 70, 80 and 90°C, and the samples dried at 50 and 60°C were rehydrated at 25 and 40°C. All the prepared slices were then taken for starch extraction.

2.3. Starch isolation
Starch was extracted from the dried and rehydrated horse chestnut slices using the alkaline steeping method [9]. The slices were steeped in 0.25% NaOH solution (w/v) at a ratio of 1:3 and stored at 4°C for 2 h. The steeped chips were ground along with the alkali in a laboratory grinder and filtered through a 100 mesh sieve, kept for settlement, and washed 2-3 times with distilled water. The slurry was again filtered through a 300 mesh sieve and then centrifuged at 3000 rpm for 15 min. The aqueous phase obtained on centrifugation was discarded and the upper non-white layer was scraped off. The white starch layer was re-suspended in distilled water and centrifuged 2-3 times. The starch was then collected and dried in a hot air oven at 40°C.

2.4 Physicochemical properties of starch
2.4.1 Yield and composition
Optimization was done on the basis of yield, and the sample with the highest yield was taken for further analysis. Moisture content, crude protein, fat, and ash content of the purified starch were determined [10].


2.4.2 Color
Color measurements of HCN starch were carried out in triplicate using a Color Flex spectrocolorimeter (Hunter Lab Colorimeter D-25, Hunter Associates Laboratory, Ruston, USA), and results were expressed in terms of L*, a* and b* values. The L* value indicates the lightness/darkness of the sample, with 0-100 representing dark to light. The a* value gives the degree of redness/greenness of the sample, with a higher positive a* value indicating more red. The b* value indicates the degree of yellowness/blueness of the sample, with a higher positive b* value indicating more yellow. The functions chroma [C* = (a*^2 + b*^2)^(1/2)] and hue angle [h° = tan^-1(b*/a*)] were also calculated.

2.4.3 pH of starch suspension
The pH of each starch suspension was determined using a digital pH meter (Hanna, USA). Starch samples for pH measurements were prepared by suspending 1 g of starch in 25 mL of water at 25°C and agitating for 5-10 min [11].

2.4.4 Carboxyl content
The carboxyl content of HCN starch was determined by the standard method [12].

2.4.5 Bulk density
Bulk density of HCN starch was measured by the method of Wani et al. (2013). The flour sample was gently transferred into a previously weighed 10 mL graduated cylinder. The sample was then packed by gently tapping the cylinder on a laboratory bench several times until no further diminution of the sample level was observed after it was filled up to the mark. The weight of the filled cylinder was taken, and the bulk density was calculated as the weight of sample per unit volume of sample (g/mL). The measurements were made in triplicate [13].

2.4.6 Paste clarity
The clarity (% transmittance at 650 nm) of starch paste was determined with slight modification to the method described by Sandhu and Singh (2007). A 1% aqueous suspension of starch adjusted to pH 7.0 was heated in a boiling water bath for 30 min with intermittent shaking. The suspension was then cooled to 25°C, and the light transmittance was read at 650 nm against a water blank [14].

2.4.7 Sediment volume
Sediment volume of HCN starch was determined with slight modification to the method of Tessler (1978). Starch (1 g, dry basis) was weighed into a beaker and 95 mL of distilled water was added. The pH of the starch slurry was then adjusted to 7.0 using 5% NaOH or 5% HCl, following which the slurry was cooked in a boiling water bath for 15 min. The mixture was then stirred thoroughly and transferred to a 100 mL graduated cylinder, and the volume was made up to 100 mL with distilled water. The cylinder was then sealed and kept at room temperature for 24 h for settlement of the starch granules. The volume of the sediment consisting of starch granules was then measured as the sediment volume [15].

2.4.8 Light transmittance
Transmittance of HCN starch was measured according to the method of Craig et al. (1989). A 1% aqueous suspension of starch sample was heated in a boiling water bath for 1 h with constant stirring. The suspension was then cooled for 1 h at 30°C. The samples were stored at 4°C in a refrigerator, and transmittance was determined after storage periods of 0, 24, 48, 72, 96 and 120 h at 640 nm against a water blank with a UV spectrophotometer (Shimadzu UV-1601, Japan) [16].

2.4.9 Syneresis
Syneresis of HCN starch was determined by the modified method of Wani et al. (2010). Starch suspensions (6%, w/w db) were heated at 90°C for 30 min in a water bath (SWB-10L-1, Taiwan) with constant stirring at 75 rpm. The starch samples were stored for 0, 24, 48, 72, 96 and 120 h at 4°C, in separate tubes for each day. Syneresis was measured as the percentage of water released after centrifugation at 3000 x g for 10 min (5810R, Eppendorf, Hamburg, Germany) [17].

2.4.10 Freeze thaw stability
Freeze thaw stability of HCN starch was determined by the method of Hoover and Ratnayake (2002). Aqueous starch slurry (6%, w/v db) was heated in a water bath (SWB-10L-1, Taiwan) at 90°C for 30 min. The gels were subjected to cold storage at 4°C for 16 h and then frozen at -16°C. To measure freeze thaw stability, the gels frozen at -16°C for 24 h were thawed at 25°C for 6 h and then refrozen at -16°C. Five freeze-thaw cycles were performed. The tubes were centrifuged at 1000 x g for 20 min at 10°C, and the released water was measured as freeze thaw stability [18].

2.4.11 Statistical analysis
The data reported in the tables are averages of triplicate observations and were subjected to statistical analysis using the Statistica software package version 7 (StatSoft Inc., OK, USA).

3. RESULTS AND DISCUSSION
3.1 Physicochemical properties
3.1.1 Proximate composition
The highest starch yield was obtained from chips dried at 50°C, as shown in Table 1. The proximate analysis of HCN starch revealed 10.97 ± 0.23% moisture, 0.31 ± 0.14% protein, 0% fat and 0.29 ± 0.9% ash. The physicochemical properties of native starch are shown in Table 2.

Table 1. Yield of starch extracted from various treated HCN slices

Sample | Starch Yield (%)
Sample dehydrated at 50°C (D1) | 28±1.4
Sample dehydrated at 60°C (D2) | 20±1.32
Sample dehydrated at 70°C (D3) | 15±1.54
Sample dehydrated at 80°C (D4) | 10±1.37
Sample dehydrated at 90°C (D5) | 5±0.64
D1 sample rehydrated at 25°C (25D1) | 9.24±1.2
D1 sample rehydrated at 40°C (40D1) | 8.09±1.03
D2 sample rehydrated at 25°C (25D2) | 8.49±0.95
D2 sample rehydrated at 40°C (40D2) | 6.25±0.87

The values are means ± standard deviation of three replicates.

3.1.2 Color measurement
As shown in Table 2, a high degree of whiteness was observed for HCN starch, with an L* value of 96.4.
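The chroma and hue-angle functions defined in section 2.4.2 are straightforward to compute from measured a* and b* values; a minimal sketch in Python (the sample a* and b* readings below are hypothetical, chosen only to illustrate the arithmetic):

```python
import math

def chroma(a_star, b_star):
    """Chroma C* = (a*^2 + b*^2)^(1/2)."""
    return math.hypot(a_star, b_star)

def hue_angle(a_star, b_star):
    """Hue angle h = arctan(b*/a*), returned in degrees."""
    return math.degrees(math.atan2(b_star, a_star))

# Hypothetical CIELAB readings, for illustration only
a, b = 9.56, 95.76
print(round(chroma(a, b), 2), round(hue_angle(a, b), 1))
```

Using `atan2` rather than a bare arctangent keeps the hue angle in the correct quadrant when a* is negative.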


Boudries et al. (2009) concluded that L* values greater than 90 indicate a satisfactory whiteness for starch purity [19]. The chroma (C*) and hue angle (h°) values of the starch sample were 96.23 and 84.30, respectively.

Tab. 2. Physicochemical properties of HCN starch

Parameters | HCN starch
Color values
L* | 96.4±0.12
a* | 96.2±0.09
b* | 96.0±0.08
C* | 96.23±0.03
h° | 84.30±0.05
Yield (%) | 28±1.02
Amylose (%) | 26.10±0.51
pH | 7±0.11
Carboxyl content (%) | 0
Bulk density (g/mL) | 0.85±0.025
Sediment value (mL) | 36±0.34
Paste clarity (A) | 0.832±0.14

The values are means ± standard deviation of three replicates.

3.1.3 pH of starch suspension
The pH of the HCN starch suspension is shown in Table 2. The starch suspension has a neutral pH, and any modification of the starch results in an increase or decrease in pH value. Decreased pH values due to the breakdown of starch molecules have been reported [11]; the breakdown presumably induces COOH formation, which could be responsible for the increased acidity of modified starch.

3.1.4 Carboxyl content
The carboxyl content of native starch is listed in Table 2. The carboxyl content value is zero, consistent with the neutral pH of the starch. Any modification involving chemical treatment results in an increase in carboxyl content. Increases in carboxyl content after oxidative treatment have been discussed by different researchers [20, 21]. Acids such as formic, pyruvic, acetic and glucuronic acids are said to be responsible for the elevation in carboxyl content [22].

3.1.5 Bulk density
Bulk density (BD) is a reflection of the load a sample can carry if allowed to rest directly on another. The bulk density of native starch, as presented in Table 2, was 0.85 g/mL. Bulk density is generally affected by the particle size and density of the powder; the smaller the particle size, the greater the resistance to flow of the powder. A high bulk density suggests the suitability of a material for use in food preparations. In contrast, a low BD would be an advantage in the formulation of complementary foods [23].

3.1.6 Paste clarity
Paste clarity of the starch suspension is presented in Table 2. The paste clarity of the suspension arises from variation in water penetration and absorption by the starch granules, which ultimately leads to differences in swelling of the starch, resulting in more or less transmittance of light [16]. Bhandari and Singhal (2002) observed in amaranth starch that starch paste clarity is directly affected by the degree of swelling of the starch [24].

3.1.7 Sediment value
The sediment volume of HCN starch is 36 mL, as presented in Table 2. HCN starch has the highest sediment value, and modification of any starch sample decreases the sediment value. The decreased values in modified starch are due to the disruption of granules, resulting in decreased swelling and low volume makeup. Studies on acetylated and cross-linked rice starch reported reduced sedimentation volume, attributed to decreased interaction between starch molecules caused by the acetyl groups and to inhibited swelling caused by cross-linking [25]. The decreased sediment volume may also be due to large starch granules, which caused a decrease in bond strength upon heating [26].

3.2.1 Light transmittance
Light transmittance, which provides insight into a starch paste when light passes through it, can be used to indicate the clarity of the paste and depends on the level of swollen and non-swollen granule remnants. Transmittance is the fraction of incident light that passes through a sample at a specified wavelength. The percent transmittance (%T) was measured as a function of wavelength for various starch pastes, and it was observed that more opaque pastes gave a lower %T [27]. The effect of storage period on the percent transmittance of the investigated starches is presented in Table 3. The percent transmittance was reduced with increasing storage period of the paste. A similar time-dependent reduction in transmittance has been reported for banana starch [28].

3.2.2 Syneresis
As presented in Table 3, the syneresis of the pastes increased with storage period, and HCN starch paste displayed the highest syneresis (0.00-3.24%) during 120 h of storage. Perera and Hoover (1999) [29] reported that the increase in percentage syneresis during storage is due to the interaction between leached amylose and amylopectin chains, which leads to the development of junction zones and the release of water. Amylose aggregation and crystallization have been reported to be completed within the first few hours of storage, while amylopectin aggregation and crystallization occur during later stages [30, 31].

3.2.3 Freeze-thaw stability
Freeze-thaw stability of gelatinized starch pastes is presented in Table 3. The lowest freeze-thaw stability (20.21% syneresis) was observed for HCN starch at the fifth thaw, after five freeze-thaw cycles. Freeze-thaw stability of native starch was in the range of 0-20.21%. This may be attributed to degradation of the molecular structure of starch and formation of smaller fragments which retain more water. The amount of water separated from the gels increased with storage time. Baker and Rayas-Duarte (1998) reported this behavior for corn starches, mentioning low freeze-thaw gel stability for corn and amaranth starches [32]. Likewise, Bello-Perez et al. (1999) found low gel stability in the same processes for banana starch [33]. Low gel stability under these conditions suggests the starch is not convenient for use in food systems involving refrigeration or freezing processes.
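Both syneresis (section 3.2.2) and freeze-thaw stability (section 3.2.3) are reported as the percentage of water released from a stored gel after centrifugation; a minimal sketch of that calculation (the function name and the sample masses are hypothetical, for illustration only):

```python
def syneresis_percent(water_released_g, gel_mass_g):
    """Syneresis (%) = mass of water separated after centrifugation
    divided by the initial gel mass, times 100."""
    if gel_mass_g <= 0:
        raise ValueError("gel mass must be positive")
    return 100.0 * water_released_g / gel_mass_g

# e.g. 0.32 g of water released from a 10 g gel after 120 h of storage
print(syneresis_percent(0.32, 10.0))  # about 3.2 %
```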


Tab. 3. Light transmittance, syneresis and freeze thaw stability of HCN starch

Parameter | HCN starch
Light transmittance (%)
0 h | 5.8±1.7
24 h | 3.2±1.2
48 h | 2.1±0.9
72 h | 1.5±1.3
96 h | 1.2±1.0
120 h | 0.9±1.5
Syneresis (%)
0 h | 0
24 h | 0.24±0.5
48 h | 0.67±1.2
72 h | 2.1±0.6
96 h | 2.43±0.9
120 h | 3.24±1.4
Freeze thaw stability (% syneresis)
0 thaw | 0
1 thaw | 2.04±1.3
2 thaw | 10.28±0.9
3 thaw | 15.66±0.7
4 thaw | 18.61±1.1
5 thaw | 20.21±0.8

The values are means ± standard deviation of three replicates.

CONCLUSION
Horse chestnut seeds, which have a high starch content, presently go to waste but can be utilized for the production of starch. The isolated starch showed ash, protein and lipid contents of 0.29%, 0.31% and 0%, respectively, with a yield of 28%. HCN starch displayed high light transmittance and low syneresis, so it can be used in food products that need refrigeration. The analysis of the physicochemical properties of starch provides valuable information associated with its functional properties. There is thus a need for detailed investigation of horse chestnut to gather more information on the various properties of its starch.

ACKNOWLEDGMENTS
The authors are grateful to the Maulana Azad National Fellowship (MANF) for providing financial assistance in the form of a Junior Research Fellowship.

REFERENCES
[1] Singh, B. (2006). Simple process for obtaining beta aescin from Indian horse chestnut. United States patent application pub. no. US 2006/0030697.
[2] Zhang, Z., Li, S., & Lian, X. Y. (2010). An overview of genus Aesculus L.: ethnobotany, phytochemistry, and pharmacological activities. Pharmaceutical Crops, 1, 24-51.
[3] Wani, I. A., Jabeen, M., Geelani, H., Masoodi, F. A., Saba, I., & Muzaffar, S. (2014). Effect of gamma irradiation on physicochemical properties of Indian horse chestnut (Aesculus indica C.) starch. Food Hydrocolloids, 35, 253-263.
[4] Parmar, C., & Kaushal, M. K. (1982). Aesculus indica. In Wild Fruits; Kalyani Publishers, New Delhi, India, pp. 6-9.
[5] Rajasekaran, A., & Singh, J. (2009). Ethnobotany of Indian horse chestnut (Aesculus indica) in Mandi district, Himachal Pradesh. Indian Journal of Traditional Knowledge, 8(2), 285-286.
[6] Singh, G., & Kachroo, P. (1976). Forest flora of Srinagar. cf http://www.ibiblio.org/pfaf/cgi-bin/arr_html. Aesculus indica.
[7] Singh, B., Katoch, M., Ram, R., & Aijaz, Z. (2003). A new antiviral agent from Indian horse chestnut (Aesculus indica). European Patent Specification, International publication no. WO 2003/079795.
[8] Singh, G. D., Sharma, R., Bawa, A. S., & Saxena, D. C. (2008). Drying and rehydration characteristics of water chestnut (Trapa natans) as a function of drying air temperature. Journal of Food Engineering, 87, 213-221.
[9] Sun, Q., Han, Z., Wang, L., & Xiong, L. (2014). Physicochemical differences between sorghum starch and sorghum flour modified by heat-moisture treatment. Food Chemistry, 145, 756-764.
[10] AOAC (1995). Official Methods of Analysis. Washington, D.C., USA: Association of Official Analytical Chemists.
[11] Sokhey, A. S., & Chinnaswamy, R. (1993). Physicochemical properties of irradiation modified starch extrudates. Food Structure, 11, 361-371.
[12] Chattopadhyay, S., Singhal, R. S., & Kulkarni, P. R. (1997). Optimization of conditions of synthesis of oxidized starch from corn and amaranth for use in film-forming applications. Carbohydrate Polymers, 34, 203-212.
[13] Wani, I. A., Sogi, D. S., Wani, A. A., & Gill, B. S. (2013). Physico-chemical and functional properties of flours from Indian kidney bean (Phaseolus vulgaris L.) cultivars. Food Science and Technology, 53, 278-284.
[14] Sandhu, K. S., & Singh, N. (2007). Some properties of corn starches II: Physicochemical, gelatinization, retrogradation, pasting and gel textural properties. Food Chemistry, 101, 1499-1507.
[15] Tessler, M. M. (1978). Process for preparing cross-linked starches. US Patent 4098997.
[16] Craig, S. A. S., Maningat, C. G., Seib, P. A., & Hoseney, R. C. (1989). Starch paste clarity. Cereal Chemistry, 66(3), 173-182.
[17] Wani, A. I., Sogi, S. D., Wani, A. A., Gill, S. B., & Shivhare, S. U. (2010). Physicochemical properties of starches from Indian kidney beans. International Journal of Food Science & Technology, 45, 2176-2185.
[18] Hoover, R., & Ratnayake, W. S. (2002). Starch characteristics of black bean, chick pea, lentil, navy bean and pinto bean cultivars grown in Canada. Food Chemistry, 78, 489-498.
[19] Boudries, N., Belhaneche, N., Nadjemi, B., Deroanne, C., Mathlouthi, M., Roger, B., & Sindic, M. (2009). Physicochemical and functional properties of starches from sorghum cultivated in the Sahara of Algeria. Carbohydrate Polymers, 78(3), 475-480.
[20] Wang, Y. J., & Wang, L. (2003). Physicochemical properties of common and waxy corn starches oxidized


by different levels of sodium hypochlorite. Carbohydrate Polymers, 52, 207-217.
[21] Zhou, M. X., Robards, K., Glennie-Holmes, H., & Helliwell, S. (1998). Structure and pasting properties of oat starch. Cereal Chemistry, 75, 273-281.
[22] Ghali, Y., Ibrahim, N., Gabr, S., & Aziz, H. (1979). Modification of corn starch and fine flour by acid and gamma irradiation. Part 1. Chemical investigation of the modified product. Starch/Stärke, 31, 325-332.
[23] Akpata, M. I., & Akubor, P. I. (1999). Chemical composition and selected functional properties of sweet orange (Citrus sinensis) seed flour. Plant Foods for Human Nutrition, 54, 353-362.
[24] Bhandari, P. N., & Singhal, R. S. (2002). Effect of succinylation on the corn and amaranth starch pastes. Carbohydrate Polymers, 48, 233-240.
[25] Raina, C. S., Singh, S., Bawa, A. S., & Saxena, D. C. (2006). Some characteristics of acetylated, cross-linked and dual modified Indian rice starches. European Food Research Technology, 223, 561-570.
[26] Moorthy, S. N. (2002). Physicochemical and functional properties of tropical tuber starches: a review. Starch/Stärke, 54, 559-592.
[27] Craig, S. A. S., Maningat, C. C., Seib, P. A., & Hoseney, R. C. (1989). Starch paste clarity. Cereal Chemistry, 66, 173-182.
[28] Bello-Perez, L. A., Romero-Manilla, R., & Paredes-Lopez, O. (2000). Preparation and properties of physically modified banana starch prepared by alcoholic alkaline treatment. Starch, 52, 154-159.
[29] Perera, C., & Hoover, R. (1999). Influence of hydroxypropylation on retrogradation properties of native, defatted and heat-moisture treated potato starches. Food Chemistry, 64, 361-375.
[30] Miles, M. J., Morris, V. J., Orford, R. D., & Ring, S. G. (1985a). Recent observations on starch retrogradation. In R. D. Hill & L. Munck (Eds.), New Approaches to Research on Cereal Carbohydrates (pp. 109-115). Amsterdam: Elsevier.
[31] Miles, M. J., Morris, V. J., Orford, R. D., & Ring, S. G. (1985b). The roles of amylose and amylopectin in the gelation and retrogradation of starch. Carbohydrate Research, 135, 271-281.
[32] Baker, L. A., & Rayas-Duarte, O. (1998). Freeze-thaw stability of amaranth starch and the effects of salts and sugars. Cereal Chemistry, 75, 301-307.
[33] Bello-Perez, L. A., Agama-Acevedo, E., Sanchez-Hernandez, L., & Paredes-Lopez, O. (1999). Isolation and partial characterization of banana starches. Journal of Agricultural and Food Chemistry, 47, 854-857.


INDUSTRIAL EFFLUENTS AND HUMAN HEALTH


Anchla Rupal
Dept. of Chemistry, S.D. College, Barnala
e-mail: anchlarupal@gmai.com

ABSTRACT
Punjab is one of the most prosperous states of India and a pioneer of the green revolution. Despite being one of the richest states of the country in terms of per capita income, the health indices of the state are not the best. Due to rapid industrialization and urbanization in the state, a huge amount of waste water is generated, and untreated water is released, leading to pollution of the drains. The water resources are thus getting polluted; most of the effluent is generated by different kinds of industries, municipal sewerage, urban storm water, etc.

Keywords
Industrialization, revolution, resources

INTRODUCTION
The present study was conducted to find the effect of effluent water pollution on human health among the people living near the Budha Nallah. Samples of water, vegetables and milk were collected to ascertain the effect of pollution. Physical and chemical tests were carried out, along with testing for contamination by heavy metals and pesticides. It was found that there is a significant occurrence of gastrointestinal problems, water-borne diseases, and skin and eye problems among the people residing in the reference area. The study also revealed mercury, cadmium, chromium and copper levels to be much higher in water samples and in green vegetables grown in fields irrigated with water from the drains. The water samples were analyzed for heavy metals and pesticides. The major reason for contamination may be industrial units, including hosiery units involved in dyeing, which discharge their effluent into the drain. Mercury levels were found to be above permissible limits in most of the ground water samples. Mercury affects the central nervous system, causing tremors, insomnia, memory loss and headache. It also causes gastrointestinal symptoms like nausea, vomiting, diarrhea and burning of the eyes. Sources of mercury include broken thermometers and BP apparatus, electrical switches, fluorescent bulbs, batteries and paints. The city of Ludhiana, through which the Budha Nallah passes, has about 400 electroplating units, and the waste from these units could be the cause of mercury contamination. Besides mercury, heavy metals like cadmium, chromium, copper and lead were found. Cadmium compounds are used in nickel-cadmium batteries, and used batteries are dumped together as waste, hence causing contamination. The other noted effect of heavy metal contamination is on aquatic life, which increases human intake of heavy metals and hence leads to various diseases. The study also observed high pesticide levels in the ground and tap water; higher levels of pesticides enter the human food chain through vegetables grown in the soil. Health problems caused by pesticides include gastrointestinal effects like nausea, vomiting and loss of appetite. Chronic exposure to pesticides leads to carcinogenic diseases, reproductive toxicity, and lung and kidney diseases, as the study revealed during a survey carried out on a sample population. It was also found that the most common source of drinking water is hand pump water, and that most of the people use polluted Nallah water for irrigation and for growing vegetables. The study thus reveals a significant effect of effluent disposal on water quality and human health among the people living in close proximity to the drain; human health is the major concern, and metal industries and electroplating units are the incriminating sources of heavy metal pollution.

DISCUSSION
The potential adverse impact of chemicals and heavy metals, as well as pesticides found in much higher concentrations, leads to various health problems; significant pesticide residue levels have been found even in milk samples. Certain steps must therefore be taken to improve the quality of life:
• There should be regular monitoring of drinking water quality for physical and chemical parameters, heavy metals and pesticides.
• There is a need for regular monitoring of the water quality of drains by industry and municipal bodies for pollution.
• The health department should conduct a continued survey to identify acute and chronic effects due to heavy metals and pesticides.
• Biomedical waste management must be carried out to prevent contamination of drinking water.


• The agriculture department should also undertake regular monitoring of pesticides and heavy metals in food grains, vegetables, fruits and milk.
• Finally, sources of water pollution from industry must be identified so that timely improvement of water quality can be carried out, and industries should treat their effluent before discharging it into water bodies.

Agriculture Sciences

Determination of attenuation coefficient and water content of Broccoli leaves using beta particles
Komal Kirandeep Parveen Bala Amandeep Sharma
Department of Mathematics, Statistics and Physics,
Punjab Agricultural University Ludhiana, India-141004
adsphy@gmail.com

ABSTRACT
Water content in the leaves of broccoli is determined from their attenuation of beta particles from 204Tl. The mass attenuation coefficient is obtained as the slope of leaf thickness versus the logarithm of relative transmission intensity. As the water content in the leaves varies, these parameters also vary: the transmission intensity decreases with increasing amount of water in the plant leaves. Beta attenuation is a fast, reliable and non-destructive method that provides continuous monitoring of plant water status.

Keywords
Mass attenuation coefficient, Water content of leaves, Beta particle transmitted intensity, Non-destructive inspection

1. INTRODUCTION
The mass attenuation coefficient of a material is a measure of its capability to absorb or scatter radiation passing through it. Attenuation studies in matter have helped solve a variety of problems in the physical sciences, bio-sciences, agricultural science and medical physics [1]. Beta particle attenuation yields fundamental information on material composition, such as thickness and water content. Nathu Ram et al. [2], Thontadarya [3] and Batra et al. [4] have reported measurements of mass attenuation coefficients through different materials as a function of energy, atomic number of the absorber and experimental geometry. Mahajan [5] measured attenuation coefficients for some elements and found them to be in agreement with an empirical relation.

Water content of leaves, and of vegetation on a larger scale, is an important variable in physiological plant activities. It maintains their vitality and photosynthetic efficiency and is hence an important production-limiting factor. The actual water content of leaves and plants varies with plant type and environmental conditions. When leaves dry up they mainly lose water, so water content is also a strong indicator of vegetation stress. Mederski [6] introduced a beta radiation gauge for measuring relative water content in leaves of the soybean plant. Jarvis and Slatyer [7] and Obrigewitsch et al. [8] attempted to calibrate the beta gauge for determining leaf water status using cotton leaf as absorber. For calibration purposes, these methods require a fully turgid leaf in addition to completely dry and fresh leaves. The beta-gauging technique makes the method a little awkward due to the loss of organic matter. In the present work, however, the technique is modified by estimating the absolute water content through fresh leaves using a Geiger Muller counter.

Some spectroscopic methods have also been used to determine water content in leaves. Xiangwei et al. [9] used parameters such as leaf reflectance, correlation coefficients and a spectral index for determining crop water content. Ullah et al. [10] estimated leaf water content from mid to thermal infrared (2.5–14.0 μm) spectra, based on continuous wavelet analysis. Yi et al. [11] used hyperspectral indices to estimate water content in cotton leaves; their study examined the relation between water content and hyperspectral reflectance, aiming to identify an index for remote water-content estimation and to correlate field spectral measurements with leaf water content. Recently, Giovanni et al. [12] detected crop water status in mature olive groves using vegetation spectral measurements and regression. However, this technique requires the availability of full spectra with high resolution, which can only be obtained with handheld spectro-radiometers or hyperspectral remote sensors.

A more reliable beta attenuation technique avoids the sophisticated instruments used in spectroscopic methods and measures the water content in a convenient way. Although the beta-gauging technique has been explored in the past for several applications, the direct use of the measured mass attenuation coefficient has not been utilized successfully for applied work. To the best of our knowledge there is no study of this type, plotting moisture content versus transmitted intensity, for broccoli leaves using beta particles from 204Tl.

2. THEORY
Water constitutes the major portion of a plant leaf; thus changes in the water content of plant leaves are reflected in changes in the absorber thickness. The attenuation of beta radiation through a leaf depends upon the mass per unit area of the leaf. The intensity of transmitted beta radiation through a plant leaf is given as

I = I0 exp(-µt)    (1)

where I0 is the intensity of the unattenuated beta radiation, and t and µ are the thickness (mass per unit area) and mass attenuation coefficient, respectively, of the leaf (organic matter and water). From this equation the mass attenuation coefficient (µ) can be calculated by knowing the other quantities. Rewriting equation (1) for a fresh leaf,

ln(If/I0) = -µf tf    (2)

where tf, If and µf are the corresponding quantities for a fresh leaf, and for a completely dry leaf

ln(Id/I0) = -µd td    (3)

where td, Id and µd are the mass per unit area, intensity and mass attenuation coefficient, respectively, of the completely dry leaf (organic matter). Since leaves contain organic matter and water, we can write

tw = tf - td    (4)

where tw is the mass of water per unit leaf area. Using equations (2), (3) and (4), we get

tw = [ln(Id/I0) - n ln(If/I0)] / µd    (5)

where n = µd/µf is the ratio of the mass attenuation coefficient of completely dry leaves to that of fresh leaves. Thus, using the experimental values of µd, µf and Id/I0, and measuring If/I0 for


a fresh leaf, equation (5) provides the absolute water content in the plant leaf. For direct weighing measurements of a leaf, the percentage water content is given by the following formula [8]:

% water content = [(tf - td)/tf] x 100    (6)

3. EXPERIMENTAL
The experimental arrangement used in the present investigations is shown in Fig. 1. A radioactive 204Tl source (end point energy 0.766 MeV) has been used as the source of beta particles, and a Geiger Muller counter is used for the intensity measurements. The plant leaves of broccoli that act as the attenuating medium are held between two rectangular, equal-sized aluminum sheets having matching 2.5 cm diameter holes. The distance between the beta source and the GM window is kept at 4.3 cm, with the leaf sample placed almost at the middle of this gap. The source-absorber-detector geometry is centrally aligned and is kept the same throughout the experiment, resulting in non-varying scattering and air absorption effects. A Geiger Muller tube operating at the middle of the beta plateau, at 450 V, is used for measuring the beta intensity. The output of the tube is amplified and fed to the discriminating and scaling units of a counter.

Fig 1: Experimental set-up for measurements

Leaves of broccoli are taken from the fields of Punjab Agricultural University, Ludhiana. Leaves to be investigated are washed with water and then blotted for a few minutes between layers of blotting paper. Circles of radius 3.1 cm are cut from the leaves, and the thickness of each circular leaf is determined by weighing with an electrical balance having an accuracy of 10^-4 g. Transmission studies of beta particles are made through these fresh leaf circles. The observation time chosen for each absorber thickness is 200 s, and the statistical error in the observed counts remains below 2%. The stability of the apparatus was checked by keeping it on for some time and also by taking three or more readings for the selected plant leaf.

The transmission studies of beta particles are first made through one fresh leaf circle. Another leaf circle is then placed exactly above this circle and the transmitted intensity noted. Likewise, leaf circles are placed one above the other and their transmission studied; counts are noted for increasing numbers of leaf circles as long as they show sensitivity to the beta particles and the apparatus. In the second part of this work, for variation in the water content of leaves, one circular leaf is selected for investigation. The circular leaf is kept under an IR lamp for one minute; this evaporates some of the leaf water and reduces its thickness (mg/cm2). Transmission of beta particles is measured through this leaf and the counts are noted. The same leaf sample is again exposed to one minute of IR radiation, which causes further drying and reduction in thickness, and the transmitted intensity is measured again for this dried leaf. This process is repeated as long as the transmission intensity remains sensitive to the change.

4. RESULTS AND DISCUSSION
The data for broccoli leaf thickness and the logarithm of relative transmission intensity are shown in Table 1. Column 2 of the table contains the thickness of the leaf; this thickness (mass per unit area) includes the water content and organic matter of the selected leaf. Column 3 shows the logarithmic values of relative transmission through leaves of different thickness.

Table 1: Thickness versus counts of Broccoli with 204Tl

Sr. No. | Thickness (mg/cm2) | -ln(I/I0)
1. | 28.62 | 0.713
2. | 66.60 | 1.433
3. | 100.80 | 2.060
4. | 138.92 | 3.005
5. | 174.58 | 3.730
6. | 209.63 | 4.598

The plot of leaf thickness versus logarithm of relative transmission intensity is shown in Fig. 2. The best-fitted regression line is linear, and its equation is shown in the plot.

Fig. 2: The relative transmission (logarithmic) as a function of leaf thickness

The plot shows that the relative transmission intensity decreases linearly with increasing leaf thickness. The slope of the fitted line gives the mass attenuation coefficient (as per equation 1); the determined value of µ is 0.0215 cm2/mg. A small value of the attenuation coefficient indicates that the material in question is relatively transparent, while larger values indicate greater degrees of opacity. Table 2 shows variation in water content, with

833
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8
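The mass attenuation coefficient and the percentage water content of equation 6 can both be reproduced from the tabulated numbers. A minimal sketch, assuming the attenuation law of equation 1, I = I0 exp(-µt); the least-squares fit is our own illustration, and the data pairs are copied from Table 1 and the first row of Table 2:

```python
# Recover mu from Table 1 by a least-squares fit of -ln(I/I0) = mu*t + c,
# then evaluate equation 6 on the first row of Table 2.
data = [(28.62, 0.713), (66.60, 1.433), (100.80, 2.060),
        (138.92, 3.005), (174.58, 3.730), (209.63, 4.598)]  # (t in mg/cm^2, -ln(I/I0))

n = len(data)
sx = sum(t for t, _ in data)
sy = sum(y for _, y in data)
sxx = sum(t * t for t, _ in data)
sxy = sum(t * y for t, y in data)
mu = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope = mass attenuation coefficient
print(round(mu, 4))  # -> 0.0215, matching the paper's 0.0215 cm^2/mg

def percent_water(t_w, t):
    # Equation 6: absolute moisture content over total leaf thickness, in per cent
    return 100.0 * t_w / t

print(round(percent_water(9.87, 34.47), 2))  # -> 28.63, Table 2 row 1, column 3
```

The same two-line fit, applied to the intensity-versus-water-content data of Fig. 3, would give the negative slopes discussed in the text.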
Table 2: Data for broccoli leaf and 204Tl beta source. The errors in count rate indicate statistical uncertainties only

Thickness of leaf (mg/cm2)   Absolute moisture content tw (mg/cm2)   Moisture content (%) by beta attenuation   Moisture content (%) by direct weighing   Transmitted intensity
34.47   9.87   28.63   28.22   5027±71
32.38   7.71   23.81   23.59   5228±72
30.88   6.27   20.33   19.88   5652±75
29.66   4.93   16.62   16.58   5844±76
28.50   3.81   13.36   13.19   6205±79
27.51   2.83   10.29   10.06   6429±80
26.52   1.99   6.37    6.71    6673±82
25.63   0.98   3.82    3.47    6879±83

Column 2 of this table shows the decreasing thickness of the water content (calculated by equation 5), also known as the absolute moisture content. The percentage moisture content of the leaf obtained by the beta attenuation method and by direct weighing (calculated by equation 6) is shown in columns 3 and 4 respectively. The close agreement between the values measured by the two methods supports the validity of the beta attenuation technique. Column 5 shows that the transmitted intensity, for the different leaf thicknesses, increases as the water content of the leaf decreases, as expected. The errors quoted (less than 2%) in the count rate indicate statistical uncertainties only. The slope of the fitted lines (Fig. 3) is negative because an increase in water content causes more absorption of beta particles and hence a decrease in transmitted intensity. The best-fit straight lines, serving as calibration curves, provide an alternative way to measure moisture content. We believe that the present experimental findings will be quite useful to other investigators in improving the design of field instruments for agricultural use.

Fig 3: Variation of transmitted intensity as a function of water content and thickness of broccoli leaf

5. CONCLUSION
The mass attenuation coefficient values are useful for quantitative evaluation of the interaction of radiation with plant leaves. Measuring leaf water content can build knowledge of the soil moisture status, aiding effective irrigation water management. The beta attenuation method avoids the sophistication of spectroscopic techniques; moreover, it is fast, non-destructive and easy to handle, and hence can be used on planted leaves. Investigations based on beta transmission methods are rarely available for the selected vegetable leaves grown under the seasonal conditions of this region, so this work lays a foundation for sustainable development and modern agricultural technology. The present experiment should also be simulated with a suitable Monte Carlo code, both for a better understanding of the present work and to prototype the method for field practice.

REFERENCES
[1] Nilsson, B., Brahme, A. 2014. Interaction of ionizing radiation with matter. Reference Module in Biomedical Sciences, from Comprehensive Biomedical Physics, 9, 1-36.
[2] Nathu, R., Sundra Rao, I.S. and Mehta, M.K. 1968. Mass absorption coefficients and range of beta particles in Be, Al, Cu, Ag and Pb. Pramana, 18, 121-126.
[3] Thontadarya, S.R. 1984. Effect of geometry on mass attenuation coefficient of beta particles. Appl. Radiat. Isot., 35, 981-982.
[4] Batra, R.K., Singh, B. and Singh, K. 1992. Determination of water content of plant leaves by beta attenuation. Appl. Radiat. Isot., 43, 1235-1239.
[5] Mahajan, C.S. 2012. Mass attenuation coefficients of beta particles in elements. Science Research Reporter, 2, 135-141.
[6] Mederski, H.J. 1961. Determination of internal water status of plants by beta ray gauging. Soil Sci., 92, 143-146.
[7] Jarvis, P.S., Slatyer, R.O. 1966. Calibration of beta gauge for determining leaf water status. Science, 153, 78-79.
[8] Obrigewitsch, R.P., Rolston, D.E., Neilson, D.R., Nakayama, F.S. 1975. Estimating relative leaf water content with a simple beta gauge calibration. Agron. J., 67, 729-732.
[9] Xiangwei, C., Wenting, H. and Min, L. 2011. Spectroscopic determination of leaf water content using linear regression and artificial neural network. African J. Biot., 11, 2518-2527.
[10] Ullah, S., Skidmore, A.K., Mohammad, N., Schlerf, M. 2012. An accurate retrieval of leaf water content from mid to thermal infrared spectra using continuous wavelet analysis. Sci. Total Environ., 431, 145-152.
[11] Yi, Q., Bao, A., Wang, Q., Zhao, J. 2013. Estimation of leaf water content in cotton by hyperspectral indices. Comput. & Electr. Agri., 90, 144-151.
[12] Giovanni, R., Mario, M., Giuseppe, C., Giuseppe, P. 2014. Detecting crop water status in mature olive groves using vegetation spectral measurements. Biosys. Engin., 128, 52-68.
The Study of Cloud Computing Challenges in Agriculture with Special Reference to Sangli District (MS)

Dalvi Teja Satej
Research Scholar, Department of Electronics, JJT University, Jhunjhunu, Rajasthan, India.
tejudal@gmail.com

Kumbhar S.R.
Willingdon College, Sangli.
srkumbhar@yahoo.co.in

ABSTRACT
In the present paper the concepts of cloud computing and its challenges in agriculture are highlighted. Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software and information are provided to computers and other devices as a utility over a network. Cloud computing gives us the facility to store our data remotely and use it on demand; it is, in essence, an on-demand resource-sharing technology. Agriculture is a mainstay of the Indian economy in several ways: food supply, a main source of employment, and the earning of foreign exchange through the export of agricultural commodities such as grapes, sugar, turmeric and other food grains. There is therefore a need to improve both the productivity and the quality of such products, which would help raise the economic status of the people of Sangli district; that is why cloud computing technology should be used for better results. Knowledge of ICT and its use in development is still lacking. Challenges such as availability, connectivity, literacy and ICT awareness confront the use of cloud computing in agriculture in Sangli district (MS).

Keywords
Cloud Computing, Krushi Mitra Cloud, Challenges in cloud computing.

1. INTRODUCTION

Overview of cloud computing
Cloud computing, often referred to simply as "the cloud," is the delivery of on-demand computing resources, everything from applications to data centers, over the internet on a pay-for-use basis. Cloud computing is not application oriented but service oriented: it offers on-demand, virtualized resources as measurable and billable utilities. The main idea behind cloud computing is to provide information to the user as and when required. John McCarthy [1] discussed such a facility in the computing field long ago. The cloud computing facility is provided [4] by the service provider depending on certain conditions and the requirements of the user. The government is providing many agricultural and development facilities using the cloud, prepared on various geographical, economic and meteorological bases. Bohem et al. describe the different types of clouds [2]. The threats in cloud computing are given by Ashktorab and Taghizadeh [3].

For the deployment of the information, many strategies are used. A cloud can be deployed using any of the strategies mentioned below.

1. Public Cloud can be accessed by any subscriber with an internet connection, the cloud space being available for use in the public interest.

2. Private Cloud is established for a specific group or organization and limits access to just that group; the information too is handled only for that group.

3. Community Cloud is shared among two or more organizations that have similar cloud requirements.

4. Hybrid Cloud is essentially a combination of at least two clouds, where the clouds included are a mixture of public, private or community.

There are three types of cloud providers [2] that a user can subscribe to:
 Software as a Service (SaaS)
 Platform as a Service (PaaS)
 Infrastructure as a Service (IaaS)

1. Software as a Service (SaaS)
SaaS is a hosted set of software that the user does not own but pays for per element of utilization, or on some other consumption basis. Here the user does not have to do any development or programming and does not have to purchase any software, though some configuration of the software may be needed.

2. Platform as a Service (PaaS)
This provides the hardware and the software together for an application, with certain services for common functions.

3. Infrastructure as a Service (IaaS)
IaaS is the delivery of hardware (server, storage, network) and associated software (operating systems, virtualization technology, file system) as a service.

The business model of cloud computing [4-5] is depicted in Fig. 1. According to the layered architecture of cloud computing, it is entirely possible that a PaaS provider runs its cloud on top of an IaaS provider's cloud. However, in current practice, IaaS and PaaS providers are often parts of the same organization.
This is why PaaS and IaaS providers are often called the infrastructure providers or cloud providers.

End User
| Interface (Web)
Service Provider
| Utility computing
Infrastructure Provider

Fig. 1 Business model of cloud computing

Depending on the type of capability provided, clouds serve various models such as the service, business or agricultural model. Fig. 2 shows the four basic layers in which resources are managed.

Application (SaaS, end user)
Platform (PaaS)
Infrastructure (IaaS)
Hardware

Fig. 2 Resource management layers

The four-layer resource management has specific functions at each level. The application layer at the top relates to the end user, with software as the main functional part. Below it is the platform layer, which contains the software framework and storage (the PaaS layer). The infrastructure layer holds the computation and storage blocks. At the bottom, the hardware layer consists of the processors, memory, disks and the bandwidth for the service. The four layers work together to form the resource management stack.

Characteristics of Cloud Computing
The characteristics of cloud computing are important from the cloud point of view. The various characteristics are:
A. On-demand self-service
B. Independency
C. Elasticity of workload
D. Disaster recovery
E. Security

2. CHALLENGES OF CLOUD COMPUTING IN AGRICULTURE
India is principally an agricultural country. The agriculture sector accounts for about 18.0% of GDP and employs 52% of the total workforce. In this paper Sangli district is taken as the research area.

The Indian climate is divided into six major climatic subtypes, and Maharashtra state has a tropical monsoon climate. One part of Sangli district is situated in the river basins of the Krishna and the Warana, while another part is famine-prone. Sangli district can be broadly divided into three agro-climatic zones, as under:
A. Western part of Shirala tahsil
B. Tahsils of Shirala (East), Walwa and Miraj (West)
C. Tahsils of Khanapur, Atpadi, Kavthe Mahankal, Jat, Miraj (East) and Tasgaon (East)

Agriculture plays a vital role from the economic point of view. Agricultural growth depends on different kinds of cycles: seasonal, climatic, topographic, etc. Farmers of Sangli district often have to face problems due to such climatic changes.

In other countries, such as China and Japan, Fujitsu has designed clouds for agricultural development, but from the economic and technological points of view those countries are developed. India is a developing country, and the economic share of Sangli district in India's economy through agriculture is considerable; the district is famous for turmeric, grapes, etc. So, to improve productivity, the farmers of Sangli district should know all the basic information regarding market conditions, market demands and so on.

Considering the basic requirements of the farmers of Sangli district, the Krushi Mitra Cloud has been designed. It will help farmers get better results: the Krushi Mitra Cloud is beneficial to farmers for better productivity and to the government for economic development.

Cloud computing is a good solution for providing all these facilities to the farmer, as it has various benefits:
 Data and applications are accessible from any connected computer.
 No data is lost if your computer breaks, as the data is in the cloud.
 The service can dynamically scale to the usage needs of your organization.
 Location independence.
 Cost saving.

For all these benefits of cloud computing, there are nevertheless some challenges; they are given below.

The Krushi Mitra Cloud is specially designed for Sangli district according to its climate, soil patterns, etc. In this cloud, soil testing is done through soil sensors, which give results on the type of soil, total moisture in the soil, humidity of the soil and rainfall; using this data, experts give suggestions about which crop will give more benefit or maximum productivity from that soil. There are some more facilities given by the Krushi Mitra Cloud; despite these facilities, there are some challenges regarding the implementation of the cloud in the agricultural sector of Sangli district.
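The sensor-to-suggestion workflow described above can be sketched as a minimal rule-based program. This is purely an illustration: the record fields, thresholds and crop choices are our own assumptions, not part of the Krushi Mitra Cloud design.

```python
# Illustrative sketch only: field names, thresholds and crop names are
# assumed for demonstration and are not taken from the Krushi Mitra design.
from dataclasses import dataclass

@dataclass
class SoilReading:
    soil_type: str       # reported by the soil sensor, e.g. "black", "sandy"
    moisture_pct: float  # total moisture in the soil
    rainfall_mm: float   # recent rainfall at the field

def suggest_crop(r: SoilReading) -> str:
    # Toy decision rules standing in for the expert's recommendation step.
    if r.soil_type == "black" and r.moisture_pct > 30:
        return "sugarcane"
    if r.rainfall_mm < 400:
        return "jowar"  # a drought-tolerant choice for the famine-prone belt
    return "turmeric"

print(suggest_crop(SoilReading("black", 35.0, 600.0)))  # -> sugarcane
```

In a real deployment the rule function would be replaced by the experts' advice served from the cloud, with readings uploaded from the field sensors.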
 Computer availability
 Connectivity
 Computer literacy
 Internet awareness

1. Computer availability
Most of Sangli district is rural, and the financial condition of the people is poor, so buying a computer system is not affordable for many respondents. The electricity supply is also not available round the clock; load shedding applies for almost half the day.

2. Connectivity
Due to poor networks, internet access is not widely available to the villagers. The villagers cannot afford internet connections because of their economic condition, and in the remote mountain areas and the vast dry lands connectivity is simply not provided. Attempts are now being made to provide internet through mobile services, but this is still not enough to reach the common poor people.

3. Computer literacy
Sangli district has a high literacy ratio, but people in the rural areas of the district remain unaware of computers because so few computers are available there. ICT is little used in the district.

4. Internet awareness
Most people do not know about internet services, and the cost and speed of the services are also not up to the desired level.

These are the main barriers to implementing cloud computing in the agriculture sector of Sangli district. To remove these barriers the government has to take the initiative, because the Krushi Mitra Cloud provides various facilities that let farmers profit from their farms, and that profit is directly proportional to the economic development of India. The question, then, is what efforts the Government of India or the Ministry of Agriculture should make:
 The Government of India should provide computer systems at an affordable price.
 Training centres should be started to remove computer illiteracy.
 To implement cloud computing there should be better network coverage, and the government has to provide better connectivity at a low price.
 Once people get network connectivity easily, they will automatically become aware of internet use.

3. CONCLUSION
Cloud computing is a great technology with various benefits. Implementing this type of technology in India is necessary because, with stand-alone ICT, the respondent has to invest more money than with cloud computing; from the investment point of view, the respondent only has to buy a computer system and an internet connection. What is expected from the Government of India is the basic requirement of 24x7 power supply and strong network connectivity. The government should provide special schemes so that these things can be accessed easily, just as we access electricity and telephone connections, with minimum charges for rural areas, because the success of cloud computing in Sangli district depends entirely on the rural areas. Awareness must also be created among these people about adopting this kind of technology in their daily practices for better results.

REFERENCES
[1] Andrei, T., & Jain, R. (2009). Cloud computing challenges and related security issues. …
[2] Gao, J., Bai, X., & Tsai, W. (2011). Cloud testing: issues, challenges, needs and practice. Software Engineering: An International Journal, 1(1), 9-23.
[3] Ashktorab, V., & Taghizadeh, S. R. (2012). Security threats and countermeasures in cloud computing. International Journal of Application of Innovation in Engineering & Management (IJAIEM), 1(2), 234-245.
[4] Dalvi, T. S., and Kumbhar, S. R. (2014). Need of KRUSHI MITRA Cloud Model in Agriculture with Special Reference to Sangli District (MS). IJRCSIT, 3(1), 28-31.
[5] Application of Cloud Computing in Agricultural Development of Rural India, 4(6), 922-926.
BALER TECHNOLOGY FOR THE PADDY RESIDUE MANAGEMENT – NEED OF THE HOUR

Er. Ankit Sharma
Assistant Prof. in Agri. Engg., PAU, Krishi Vigyan Kendra, Mansa
ankitsharma@pau.edu

Dr. Bharat Singh Bhattu
Incharge-cum-Associate Prof. (Animal Science), PAU, Krishi Vigyan Kendra, Mansa
bharatsingh@pau.edu

ABSTRACT
Given the problems associated with the burning of paddy residues, sincere efforts are needed to find ways and means to efficiently utilize the huge amount of surplus rice residues produced in the state, for maintaining soil, human and animal health and for increasing farmers' profits. To address this problem, a straw baler was introduced by Punjab Agricultural University, Ludhiana. The baler has been used in Mansa district of Punjab for the last three years for collecting and baling paddy straw in combine-harvested fields. With growing awareness of the technology among farmers, the area covered by the baler machine is continuously increasing, which benefits the farming community in terms of revenue and avoids the serious problem of straw burning.

KEYWORDS
Baler, paddy straw management, biomass plant, income, pollution.

1. INTRODUCTION
In Punjab, the production of paddy straw alone contributes more than 22 million tonnes from the 2.82 million hectares of cultivated area. Around 91 per cent of the area under paddy is harvested by combines (Gajri et al 2002). Paddy straw is considered poor dry fodder for animals due to its high silica content; therefore, farmers burn it without bothering about the damage to the ecology. Burning of straw causes environmental pollution leading to many diseases. Burning also produces CO2, which creates a greenhouse effect during the short span of 15-20 days, and the greenhouse effect disturbs the natural climate of the planet. In addition, burning decreases the efficiency of some herbicides used for controlling weeds in the wheat crop (Singh et al, 2012). Sincere efforts are needed to find ways and means to efficiently utilize the huge amount of surplus paddy residues produced. The paddy straw baler machine, which can be used for making bales of loose paddy straw, may be a possible solution for paddy residue management.

2. BALER TECHNOLOGY
The straw baler is used for collecting and baling straw in the combine-harvested field. Before baling, a stubble shaver is first operated to cut the stubble at base level. The baler can form bales of varying length from 40 to 110 cm; the height and width of the bales are generally fixed at 36 cm and 46 cm respectively. The weight of a bale varies from 20 to 30 kg depending on the moisture content of the straw and the length of the bale. The capacity of the baler is 0.30-0.35 ha/h. Krishi Vigyan Kendra (KVK) Mansa has promoted the use of the commercially available paddy straw baler in the district with the help of Punjab Agricultural University (PAU), Ludhiana. Management of paddy straw using the baler was practised in 40 villages of Mansa district during the year 2013-14, covering a total area of more than 550 ha. The bales were prepared and sold at Rs 120 per quintal to the biomass plant situated very near KVK Mansa. The area was covered by a total of 05 balers, including one baler of PAU, Ludhiana. The total quantity of paddy straw, including bales and loose straw, received by the biomass plant from the farming community of various districts was 12000 tonnes during the year 2013-14. Instead of selling the paddy straw to biomass plants, it can also be utilized in the following industries for additional income:

 Paper industries use it as raw material for paper production.
 Dairy/cattle owners use it for fodder and for urea-treatment of straw.
 Packaging industry, mushroom industry, straw-board manufacturers, natural manure (vermi-compost) and many more.
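The throughput and price figures quoted in this paper support a quick back-of-the-envelope estimate. The mid-range values used below (0.325 ha/h, 25 kg per bale) are our own assumptions taken from the stated ranges:

```python
# Back-of-the-envelope checks using the figures quoted in this paper.
# Mid-range values (0.325 ha/h, 25 kg per bale) are our own assumptions.
area_ha = 550              # area baled in Mansa district in 2013-14
capacity_ha_per_h = 0.325  # baler capacity, mid-range of 0.30-0.35 ha/h
balers = 5                 # total balers operating in the district

hours_per_baler = area_ha / capacity_ha_per_h / balers
print(round(hours_per_baler))  # -> 338 machine-hours per baler for the season

price_per_quintal = 120  # Rs per quintal (100 kg) paid by the biomass plant
bale_kg = 25             # mid-range of the stated 20-30 kg bale weight
revenue_per_bale = price_per_quintal * bale_kg / 100
print(revenue_per_bale)  # -> 30.0, i.e. about Rs 30 per bale sold
```

Estimates like these help size how many balers a block needs for a given paddy area within the post-harvest window.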
3. ADVANTAGES OF USING PADDY STRAW BALER
The straw baler technology offers the following advantages:
 It avoids the serious problems associated with the burning of paddy residues.
 It allows efficient utilization of the huge amount of surplus rice residues produced in the state.
 It maintains soil, human and animal health.
 It increases the farmer's profit to the tune of Rs. 2400/acre.
 It cleans the field for sowing of the next crop.
 It generates employment, as various people are involved in collecting and transporting the straw from the field to the desired location.

Due to the interventions of KVK Mansa and the demand for paddy straw at the biomass plant, farmers have started collecting paddy straw for sale to biomass plants. The total area where straw was collected by the paddy straw baler has grown from nil to more than 550 ha, and the technology has spread to 40 villages. Farmers have even started collecting loose straw for sale.

REFERENCES
[1] Gajri, P.R., Ghuman, B.S. and Singh, S. 2002. Tillage and residue management practices in rice-wheat system of Indo-Gangetic plains: diagnostic survey. Technical report of NATP at PAU, Ludhiana by ICAR, pp. 1-12.
[2] Singh, Avtar, Kaushal, Meenakshi and Singh, Harmeet 2012. Improvement in productivity of wheat with crop residue and nitrogen: a review. Asian Journal of Biological and Life Sciences 1(3): 139-46.
A Study on Constraints of Broiler Farming Entrepreneurship in Mansa District of Punjab

Dr. Bharat Singh Bhattu
Incharge-cum-Associate Prof. (Animal Science), PAU, Krishi Vigyan Kendra, Mansa
bharatsingh@pau.edu

Er. Ankit Sharma
Assistant Prof. in Agri. Engg., PAU, Krishi Vigyan Kendra, Mansa

Dr. Gurdeep Singh
Asstt. Professor, Extension Education, PAU, Krishi Vigyan Kendra, Mansa

ABSTRACT
A survey was designed to study the present management status, the impact of farm size on the adoption of recommended practices, and the major constraints faced by broiler farmers in adopting improved broiler farming technologies in Mansa district of Punjab. For this survey, a total of 25 broiler farmers were selected for data collection. Analysis of the data revealed that 40.00 per cent of the farmers complained about non-availability of quality day-old chicks. Overall, 60.00 per cent of the farmers pointed to the high cost and poor quality of inputs, including quality day-old chicks, construction material, feed, medicines, equipment and machinery. A total of 80.00 per cent of the farmers also complained about non-availability of loans, including the rigid procedure for their supply, and 24.00 per cent complained about the high cost of electricity. A total of 48.00 per cent of the farmers faced the problem of an oligopsony marketing structure for the purchase of quality day-old chicks, feed and medicines and for the sale of broilers, including high transportation costs, whereas 72.00 per cent showed a lack of knowledge of scientific broiler farming, including construction of sheds, winter and summer management, feeding and watering. A total of 44.00 per cent of the farmers criticized the non-remunerative prices of broilers, 24.00 per cent faced the problem of non-availability and high cost of labour, and 64.00 per cent faced problems with the incidence of diseases, including the lack of disease investigation and monitoring facilities at the proper time.

KEYWORDS
Broiler farming, Adoption, Feeding, Production, Management, Constraint.

1. INTRODUCTION
Punjab is predominantly dependent on an agricultural economy, and the size of the average land holding is very small. Moreover, agricultural production has reached a plateau, and there is not much scope for further improvement unless soil fertility and water resources, which are costly inputs, are increased. At present, income from arable farming alone is hardly sufficient to maintain the livelihood of farmers and their families. Therefore, adoption of mixed farming is the solution for increasing agricultural production and raising the economic status of farmers. Mixed farming ensures judicious utilization of resources for agriculture through suitable animal science enterprises, viz. layer farming, broiler farming, dairy farming, pig farming, rabbit farming, sheep and goat farming, kennel/dog farming, etc.

The main aim of combining an animal science enterprise with crop farming is to improve employment opportunities and income potential. The major thrust is to increase the production of quality milk and milk products, meat, wool and pups along with crop production through the transfer of improved animal husbandry technologies. Results for animal science enterprises combined with crop production show that a mixed farming system increases productivity, enhances per capita income and provides employment throughout the year for small and medium farmers in particular and large farmers in general. Therefore, an animal science enterprise combined with arable farming has immense potential to address burning problems like unemployment, nutritional security and the socio-economic upliftment of the people.

However, setting up these enterprises and securing loans for them is often not a planned option. Prospective borrowers set up these enterprises as a means of livelihood without having the required background or training. Even educated youths and rural and/or semi-urban dwellers do not possess adequate knowledge of preparing a viable plan/project for these enterprises. Yet bankers extend financial assistance to these untrained persons under compulsion, so as to meet financial targets. The entire process, though not very scientific, has come to stay, and proposals carry some shortcomings, which include:
 Higher investment in fixed assets;
 Little or no training;
 Higher borrowings;
 Non-profitability of enterprises; and
 No backward or forward linkage.

These lacunae lead to a high rate of failures. In view of these facts, and to provide avenues for self-employment and income generation, there is an urgent need to sensitize farmers/farm women, educated youths and rural and/or semi-urban dwellers to establish scientific broiler farming units as a subsidiary occupation among the animal science enterprises. In Punjab, broiler farming has emerged as the fastest growing segment of agriculture, registering phenomenal growth in production.
Hardly any survey work has been carried out to study the constraints of broiler farming in Punjab. Accordingly, identification of the reasons for non-adoption of recommended technologies is essential to formulate adequate measures to circumvent the crisis befalling the broiler industry. Therefore, the present survey was designed to study the present management status in the adoption of scientific methods of broiler farming in Punjab, with the following specific objectives:
1. To study the existing management practices adopted by broiler farmers.
2. To study the major constraints faced by broiler farmers in the adoption of broiler technologies.

2. AREA OF THE STUDY
To undertake this work, all blocks of Mansa district were selected. From each block, two or three villages were selected where broiler farming was highly concentrated, and 05 broiler farmers were selected at random per block. Accordingly, a total of 25 broiler farmers were selected for the collection of data by the survey method, as detailed below.

Table-1: Selection of broiler farmers for collection of data in Mansa district of Punjab

District   Block        Selected villages                   No. of farmers selected
Mansa      Mansa        Khiala Kalan, Tamkot, Aliser        05
           Sardulgarh   Ahloopur, Fattamaluka               05
           Bhikhi       Kotra Kalan, Kishangarh Pharmahi    05
           Budhladha    Boha, Bareta, Fafrebhaike           05
           Jhunir       Bajewala, Dasomdia                  05
Total                                                       25

3. COLLECTION OF DATA
By reviewing the literature and through discussions with university experts and extension personnel, a questionnaire was prepared. The responses of the broiler farmers were collected on a two-point response category, viz. "agree" and "disagree". Data collection commenced at the beginning of July, 2014 and was carried through to the end of December, 2014. The frequencies of each response/constraint were worked out and expressed as percentages.

4. IDENTIFICATION, DESCRIPTION AND ANALYSIS OF EXISTING BROILER FARMING SYSTEMS
The data presented in Table-2 show the poultry units of district Mansa: two are layer units whereas 90 are broiler units, and about 20 more broiler units are under construction. There is no hatchery in district Mansa. Generally, the farmers purchase their day-old chicks from Gurgaon, Jind, Jalandhar, Hisar and Malerkotla at Rs. 22-28 per chick. They purchase broiler feed from Rajpura, Khanna, Dhuri, Lehragaga, Patiala, Jind, Bathinda, Hisar and Dabwali at Rs. 2700-2800 per quintal for starter feed and Rs. 2500-2600 per quintal for finisher feed.

A gap was observed in the adoption of recommended practices regarding the quality of day-old chicks, balanced feeding, deworming and health care. Table-2 also reveals that farmers lack accurate knowledge of preparing domestic rations; it was observed that broiler farmers do not prepare their own domestic feed. Farmers also face difficulty in the timely detection of diseases due to lack of knowledge and of disease diagnostic laboratories. The major reasons for low productivity are the poor genetic potential of chicks, a high feed conversion ratio, poor management due to lack of knowledge of recommended broiler management practices, and infection of birds by a number of diseases under harsh climatic conditions.

Table-2: Identification, description and analysis of existing broiler farming systems


Sr. No. | Particulars | Existing situation
1. | Total poultry units | 92
2. | Total layer units | 02
3. | Total broiler units | 90
4. | Broiler units started after obtaining training from KVK, Mansa | 60
5. | Total broiler units under construction | 20
6. | No. of hatcheries in District Mansa | Nil
7. | Source of availability of day-old chicks | Gurgaon, Jind, Jalandhar, Hisar, Malerkotla
8. | Existing rate of day-old chicks | Generally @ Rs. 22-28/chick
9. | Source of availability of broiler feed | Rajpura, Khanna, Dhuri, Lehragagga, Patiala, Jind, Bathinda, Hisar, Dabawali
10. | Existing rate of broiler feed | Generally @ Rs. 2700-2800/quintal for starters and @ Rs. 2500-2600/quintal for finishers
11. | Preparation of balanced feed at domestic level | Lack of accurate knowledge to prepare domestic feed
12. | Use of vitamins, mineral mixtures, antibiotics, probiotics (growth promoters) for better productivity and health maintenance of the birds | Mainly used by medium and large broiler farmers
13. | Reasons for spread of various diseases | Winter/summer stress conditions and poor management
14. | Deworming | Lack of knowledge
15. | Correct practice of disposal of manure/excreta produced | Proper FYM pits are not available; generally sold to owners of brick kilns or to farmers as manure @ Rs. 100/quintal
16. | Major reasons for low productivity | Poor genetic potential of chicks and poor management due to lack of knowledge of recommended broiler management practices
17. | Age at the time of sale of broilers | 35-44 days
18. | Weight at the time of sale of broilers | 1.5-3.0 kg
19. | FCR at the time of sale of broilers | 1.5:1 to 1.8:1 (feed in kg : live weight in kg)
20. | Sale rate of broilers | Generally @ Rs. 70-120/kg live weight
21. | Mortality | 04-08%

5. GENERAL EXISTING MANAGEMENT PRACTICES ADOPTED BY VARIOUS CATEGORIES OF BROILER FARMERS

Overall, 60 per cent of the farmers attended a broiler entrepreneurial development training programme before or after starting their broiler farms, whereas 20 per cent of the farmers took loans from various financial institutions for the establishment of broiler farms. The orientation of broiler sheds should be East to West lengthwise, and 70.00 per cent of the broiler farmers constructed their broiler houses in the correct direction. All the farmers have adopted the deep-litter system of housing and the all-in all-out system (a single batch at a time). Only 4.0 per cent of the farmers had their feed and/or water samples analysed. As per the recommendations, the height of the roof of a broiler house should be a minimum of 10 feet, the height of the side walls should not be more than 2 feet, and the rest of the space should be covered with wire netting, which offers less resistance to air movement; the farmers have not adopted these recommendations due to lack of accurate knowledge and higher initial costs. Most of the farmers know very well that sprinklers on the roof help in reducing the temperature during hot and dry weather, even when it reaches 100°F. Fogging is an effective method for reducing the temperature in the house, especially when relative humidity is low. Shade is the simplest and relatively inexpensive tool for combating heat, and shade trees scattered around the sheds help to keep them cool during the season, but adoption of these recommendations is also very rare. Only 16 per cent of the farmers follow a vaccination schedule in broilers to control diseases (Marek's at day 1, RDV F1 at day 5-7, IBD vaccine at day 14, RDV La Sota at day 21 and IBD vaccine booster at day 28 of age). Most of the farmers provide the proper floor space, feeder space and waterer space per bird as per the recommendations (450 cm², 3 cm and 1.5 cm up to 18 days of age, and 1000 cm², 6-7 cm and 3 cm, respectively, from 19 to 42 days of age). The broiler farmers do not know the maximum levels of certain ingredients in a safe water supply for broilers (total dissolved solids = 1000 ppm, total alkalinity = 400 ppm, pH = 8.0, nitrates = 45 ppm, sulphates = 250 ppm, sodium chloride for growers = 500 ppm). They use fresh water but are not aware of the relation between water temperature and feed consumption (water temperature should be 65°F to 70°F; generally, at 70°F chickens consume two litres of water for every kilogram of feed consumed). Most of the farmers lack accurate knowledge about proper ventilation, litter management, light management, feed management and water management.
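The feed and water figures quoted above and in Table-2 (an FCR of 1.5:1 to 1.8:1, and roughly two litres of water per kilogram of feed at about 70°F) can be combined into a quick back-of-envelope estimate. The target sale weight below is an assumed example value within the 1.5-3.0 kg range reported in Table-2, not a figure from the survey:

```python
# Back-of-envelope check of the feed and water figures reported in
# Table-2 and Section 5. Values are illustrative, not survey data.

def feed_required(target_weight_kg: float, fcr: float) -> float:
    """Feed (kg) needed to raise a broiler to the target live weight at a given FCR."""
    return target_weight_kg * fcr

def water_required(feed_kg: float, litres_per_kg_feed: float = 2.0) -> float:
    """Approximate drinking water (litres) for the feed consumed (~2 L/kg at 70°F)."""
    return feed_kg * litres_per_kg_feed

weight = 2.0                 # assumed kg live weight at sale (Table-2: 1.5-3.0 kg)
for fcr in (1.5, 1.8):       # FCR range reported in Table-2
    feed = feed_required(weight, fcr)
    print(fcr, feed, water_required(feed))
```

At the lower FCR a 2 kg bird needs about 3 kg of feed and 6 litres of water; at the higher FCR, about 3.6 kg of feed and 7.2 litres, which illustrates why a high feed conversion ratio is listed among the reasons for low productivity.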
6. CONSTRAINTS ENCOUNTERED BY THE VARIOUS CATEGORIES OF BROILER FARMERS IN ADOPTION OF BROILER TECHNOLOGIES

The broiler farmers were asked to express their responses on the constraints faced by them in the adoption of broiler farming technologies. Their responses are presented in Table-3.

Table 3: Constraints encountered by the broiler farmers in adoption of recommended technologies (in per cent)

Sr. No. | Constraints | Existing situation
1. | Non-availability of quality day-old chicks | 40 (10)
2. | High cost and poor quality of inputs, including costs of day-old chicks, constructional material, feed, medicines, equipment, machinery etc. | 60 (15)
3. | Non-availability of loans, including rigid procedure for supply of loans | 80 (20)
4. | High cost of electricity | 24 (06)
5. | Oligopsony marketing structure for purchase of quality day-old chicks, feed and medicines and for sale of broilers, including high cost of transportation | 48 (12)
6. | Lack of knowledge of scientific broiler farming, including construction of shed, winter and summer management, feeding and watering | 72 (18)
7. | Incidence of diseases, including lack of disease investigation and monitoring facilities | 64 (16)
8. | Non-remunerative prices of broilers | 44 (11)
9. | Lack of availability and higher costs of labour | 24 (06)
Figures in parentheses indicate the number of broiler farms.

The data presented in Table-3 show that 40.00 per cent of the farmers complained about non-availability of quality day-old chicks. Overall, 60.00 per cent of the farmers expressed concern regarding the high cost and poor quality of inputs, including costs of quality day-old chicks, constructional material, feed, medicines, equipment and machinery. In total, 80.00 per cent of the farmers complained about non-availability of loans, including the rigid procedure for supply of loans, and 24.00 per cent complained about the high cost of electricity. Some 48.00 per cent of the farmers faced the problem of an oligopsony marketing structure for the purchase of quality day-old chicks, feed and medicines and the sale of broilers, including the high cost of transportation, whereas 72.00 per cent of the farmers reported lack of knowledge of scientific broiler farming, including construction of the shed, winter and summer management, feeding and watering. Further, 44.00 per cent of the farmers criticized the non-remunerative prices of broilers, 24.00 per cent faced the problem of non-availability and higher cost of labour, and 64.00 per cent faced the problem of incidence of diseases, including lack of disease investigation and monitoring facilities at the proper time.

7. RESULTS AND DISCUSSION

An in-depth analysis of the practices followed under the feeding aspect revealed that all the respondents fed purchased, unanalyzed rations. Use of minerals, vitamins, antibiotics, pro-biotics, enzymes etc. for broiler feeding on a regular basis was not common practice. High cost and poor quality of inputs, the oligopsony marketing structure, non-remunerative prices of broilers including high cost of transportation, lack of knowledge of scientific broiler farming, incidence of diseases including lack of disease investigation and monitoring facilities, high cost of electricity, non-availability of loans including the rigid procedure for supply of loans, and higher costs of labour are the major constraints of broiler farming.

On the basis of the findings it could be concluded that the respondents scored highest in response to lack of accurate knowledge of the different recommended management practices and of the feeding and health aspects. Similar results were also reported by Balamurugan and Manoharan (2014), Shaikh and Zala (2011) and Varinder Pal Singh et al. (2010), who observed that maximum constraints were found in management, feeding and health care practices.

8. CONCLUSION

There was a difference in adoption levels between different categories of respondents with regard to procurement of day-old chicks, feeding and management practices of broiler rearing. The findings of the study revealed that non-availability of quality day-old chicks, high cost of ration, rigid procedure for supply of loans, high cost and poor quality of construction materials, oligopsony marketing structure, incidence of diseases, and lack of knowledge of scientific housing/feeding/lighting including temperature maintenance were the main constraints expressed by broiler farmers. To increase the adoption of broiler technologies, emphasis should be given to overcoming all these constraints.


9. SUGGESTIONS

For the upliftment of broiler production and to sustain the present pace of growth, certain steps are required, such as licensing of hatcheries, so that the desired pressure is maintained for the production and supply of quality chicks to the producers. Strengthening and establishment of disease diagnostic and feed testing laboratories are immediately required, as producers incur heavy losses for want of immediate diagnosis of diseases. Some kind of legislation is also required so that feed manufacturers are obliged to put proper labels on feed bags with respect to composition. Timely information about outbreaks of diseases must be monitored and supplied to producers. Efforts may be made to remove intermediaries by creating cooperative societies at the level of both producers and consumers.

Moreover, to adopt broiler farming on a commercial scale, economic feasibility and up-to-date management knowledge are a pre-requisite. Therefore, apart from PAU/GADVASU/KVKs, other departments and financial agencies must provide practical training and guidance on the scientific management of broilers and frame different schemes for the benefit of users according to the continuously changing scenario of feed and meat prices.

REFERENCES

[1] Balamurugan, V. and Manoharan, M. (2014). Cost and benefit of investment in integrated broiler farming - A case study. International Journal of Current Research and Academic Review, Vol. 2(4), pp. 114-123.
[2] Shaikh and Zala, Y.C. (2011). Production Performance and Economic Appraisal of Broiler Farms in Anand District of Gujarat. Agricultural Economics Research Review, Vol. 23, pp. 311-323.
[3] Varinder Pal Singh, et al. (2010). Broiler Production in Punjab - An Economic Analysis. Agricultural Economics Research Review, Vol. 23, pp. 315-324.

Business Administration

Constraints and Opportunities Faced by Women Entrepreneurs in Developing Countries - with Special Context to India

Kamaljit Singh, Assistant Professor, Bhai Gurdas Institute of Engg. & Tech, Sangrur
Deepak Goyal, Student MBA, Bhai Gurdas Institute of Engg. & Tech, Sangrur

ABSTRACT

Woman constitutes the family, which leads to society and nation. Social and economic development of women is necessary for the overall economic development of any society or country. Entrepreneurship is a state of mind which every woman has in her, but it has not been capitalized in India in the way it should be. Due to the changed environment, people are now more comfortable accepting leading roles for women in our society, though there are some exceptions. Our increasing dependency on the service sector has created many entrepreneurial opportunities, especially for women, where they can excel in their skills while maintaining balance in their life. The purpose of this empirical study is to find out the various motivating and de-motivating internal and external factors of women entrepreneurship. It is an attempt to quantify some non-parametric factors so as to rank them. It also suggests ways of eliminating and reducing the hurdles to women entrepreneurship development in the Indian context.

1. INTRODUCTION

Entrepreneurship refers to the act of setting up a new business or reviving an existing business so as to take advantage of new opportunities. Thus, entrepreneurs shape the economy by creating new wealth and new jobs and by inventing new products and services. However, a closer study reveals that it is not about making money, having the greatest ideas, knowing the best sales pitch, or applying the best marketing strategy. It is in reality an attitude to create something new and an activity which creates value in the entire social eco-system. It is the psychic make-up of a person: a state of mind which develops naturally, based on one's surroundings and experiences, and which makes one think about life and career in a given way.

Women have achieved immense development in their state of mind. With the increase in dependency on the service sector, many entrepreneurial opportunities, especially for women, have been created where they can excel in their skills while maintaining balance in their life. Accordingly, during the last two decades, increasing numbers of Indian women have entered the field of entrepreneurship, and they are gradually changing the face of business today, both literally and figuratively. Still, they have not capitalized their potential in India the way it should be.

The first part of this paper deals with the reasons to boost women entrepreneurship and the reasons that propel women to take up this profession. It also depicts the factors hindering women entrepreneurship and the likely measures for removing the obstacles that affect it. The second part reviews various research studies on women entrepreneurship, along with their impact on various economies. The third part presents the objectives and research methodology. The fourth part concentrates on analysis of the data collected through questionnaires to establish the motivating and de-motivating internal and external factors of women entrepreneurship; an attempt has been made to rank these factors by the severity of their impact. The last part of the study includes suggested measures for eliminating and reducing the hurdles to women entrepreneurship development in the Indian context.

Reasons for Boosting Women Entrepreneurship

The role of women entrepreneurs in the process of economic development has been recognized since the nineties in various parts of the world. Today, in the world of business, women entrepreneurship has become an essential movement in many countries and has been accepted in all areas of working. A United Nations report has also concluded that economic development is closely related to the advancement of women: in nations where women have advanced, economic growth has usually been steady, whereas in countries where women have been restricted, the economy has been stagnant. The data on the correlation between the Gender-related Development Index and GDP per capita reinforce this fact.


TABLE NO. 1: GENDER-RELATED DEVELOPMENT INDEX AND ITS COMPONENTS

Rank | Country | Gender-related development index | As % of HDI | GDP per capita (US$)
1 | Australia | 0.966 | 98.9 | 34923
2 | Norway | 0.961 | 99.6 | 53433
3 | Iceland | 0.959 | 99.0 | 44613
4 | Canada | 0.959 | 99.2 | 35812
5 | Sweden | 0.956 | 99.3 | 36712
6 | France | 0.956 | 99.4 | 33674
7 | Netherland | 0.954 | 98.9 | 38694
8 | Finland | 0.954 | 99.5 | 34526
9 | Spain | 0.949 | 99.4 | 31560
10 | Ireland | 0.948 | 98.2 | 44613
14 | India | 0.594 | 97.1 | 4102

As shown in the above table, the Gender-related Development Index is significantly correlated with GDP per capita; the correlation coefficient comes to 0.857371. Therefore, it can be treated as one parameter indicating the economic condition and growth of a country.

TABLE NO. 2: COUNTRY BUSINESS ASSOCIATIONS

Country | Business Association
Russia | Novgorod Women's Parliament; Perm Business Women's Club; St. Petersburg Institute for International Entrepreneurship Development
US | Business and Professional Women; National Association of Women Business Owners (NAWBO)
Nepal | Women Entrepreneurs Association of Nepal (WEAN)
Malawi | National Association of Business Women (NABW)

Reasons for women opting for entrepreneurship

Self-determination, expectation of recognition, self-esteem and career goals are the key drivers for women taking up entrepreneurship (Moore & Buttner, 1997). Sometimes, women choose such a career path to discover their inner potential and calibre and to achieve self-satisfaction. It can also provide a means to make the best use of their leisure hours. However, dismal economic conditions arising out of unemployment in the family or divorce can also compel women into entrepreneurial activities.
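The correlation coefficient quoted for Table No. 1 can be reproduced directly from the table's two columns. A minimal check in pure Python, with the values transcribed from Table No. 1 (the paper does not state how its figure was computed, so treat this as an independent sketch of a Pearson correlation):

```python
# Pearson correlation between GDI and GDP per capita,
# values transcribed from Table No. 1.
from math import sqrt

gdi = [0.966, 0.961, 0.959, 0.959, 0.956, 0.956, 0.954, 0.954, 0.949, 0.948, 0.594]
gdp = [34923, 53433, 44613, 35812, 36712, 33674, 38694, 34526, 31560, 44613, 4102]

n = len(gdi)
mean_x, mean_y = sum(gdi) / n, sum(gdp) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(gdi, gdp))
sd_x = sqrt(sum((x - mean_x) ** 2 for x in gdi))
sd_y = sqrt(sum((y - mean_y) ** 2 for y in gdp))
r = cov / (sd_x * sd_y)
print(round(r, 6))  # close to the 0.857371 quoted in the text
```

The strong positive value is driven largely by India's position as a low-GDI, low-GDP outlier relative to the clustered high-income countries.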
Business associations and Women Entrepreneurship

Structural associations and groups of people also promote women entrepreneurship. As a vital link to economic decision-making processes, business associations have made their members' visions and priorities a part of the national political and economic agenda. Around the globe many more organizations are contributing in similar ways. Table 2 depicts some of the business associations of some countries. These associations undertake a wide variety of activities encompassing credit, business skill training, technical and technology training, employment creation, marketing services, legal assistance, psychological counselling and some social welfare trade programs. They have also played a positive and vital role in promoting international trade for women entrepreneurs.

Obstacles for women entrepreneurship

The entrepreneurial process is the same for men and women. Successful men and women entrepreneurs undergo similar motivations and thus achieve success in largely the same way under similar challenges, and they are found to have access to funds from the same sources; under the same conditions, both men and women can be successful entrepreneurs (Cohoon et al., 2010). However, in practice most upcoming women entrepreneurs face problems of different dimensions and magnitudes than those faced by their male counterparts, and these problems generally prevent them from realizing their potential as entrepreneurs. The major hurdles that women face in starting and running a company generally come from financing and from balancing of life, the latter caused by a lack of family support for the women.

2. LITERATURE REVIEW

Hackler, Harpel and Mayer (2008) performed a study on the relationship between elements of human capital and self-employment among women. The study showed that self-employed women differ on most human capital variables as compared to salary- and wage-earning women. It also revealed that the educational attainment level increases faster for self-employed women than for other working women. The percentage of occupancy of managerial jobs is found to be comparatively higher in the case of self-employed

women as compared to other working women. The study also shed light on the similarities and dissimilarities between the situations of self-employed men and self-employed women: the two differ little in education, experience and preparedness. The analysis is based on data from the Current Population Survey (CPS) Annual Social and Economic Supplement (ASEC) from 1994 to 2006.

Jalbert (2000) performed a study to explore the role of women entrepreneurs in a global economy and examined how women's business associations can strengthen women's position in business and international trade. The analysis was performed on the basis of facts and data collected through field work (surveys, focus groups and interviews) and through examination of the existing published research. The study showed that women business owners are making significant contributions to global economic health, national competitiveness and community commerce by bringing many assets to the global market. As per the analysis, women entrepreneurs have demonstrated the ability to build and maintain long-term relationships and networks, to communicate effectively, to organize efficiently, to be fiscally conservative, to be aware of the needs of their environment and to promote sensitivity to cultural differences. Researchers contend that women business owners possess certain specific characteristics that promote their creativity and generate new ideas and ways of doing things. These characteristics include focus, a high energy level, personal motivations, a self-employed father, social adroitness, interpersonal skills etc. Table No. 3 depicts the various internal and external factors that affect the development of women entrepreneurship in various countries.

Table No. 3: SNAPSHOT OF KEY FACTORS

S.N. | Country | Factors
1. | United States | access to information; access to networks
2. | Korea | financing; the effort to balance work and family
3. | Indonesia | exporting their product overseas; increasing the volume of production
4. | Vietnam | formal education, ownership of property, and social mobility; business experience
5. | Bangladesh | inadequate financing; balancing time between the enterprise and the family
6. | Uganda | lack of training and advisory services; processes and difficulties in accessing loans
7. | Rwanda | restricted mobility; security
8. | Morocco | lack of operational and managerial skills; cultural constraints
9. | Kenya | lack of technical skills; confidence; strong individual involvement
10. | Mauritius | the hassle of getting permits; the lack of market

There is a worldwide pool of economically active persons, known as the Women's Indicators and Statistical Data Base (WISTAT), from which one can extrapolate the general number of women entrepreneurs. WISTAT titles the category "employers and own-account workers"; the category describes those who are economically independent and who could be entrepreneurs. The number of women per 100 men in each region is represented for the three decades spanning 1970 to 1990. The study revealed that the gap between men and women business owners has narrowed significantly: in 1970 women numbered 26 for each 100 men, but by 1990 women numbered 40 for each 100 self-employed men.

Tambunan (2009) made a study on recent developments of women entrepreneurs in Asian developing countries, focusing mainly on women entrepreneurs in small and medium enterprises (SMEs), based on data analysis and a review of recent key literature. The study found that in Asian developing countries SMEs are of overwhelming importance, accounting on average for more than 95% of all firms in all sectors per country. It also depicted the fact that the representation of women entrepreneurs in this region is relatively low due to factors like low level of education, lack of capital and cultural or religious constraints. However, the study revealed that most women entrepreneurs in SMEs are in the category of forced entrepreneurs seeking better family incomes.

3. OBJECTIVES

• To identify the reasons for women involving themselves in entrepreneurial activities
• To identify the factors hindering women entrepreneurship
• To determine the possible success factors for women in such entrepreneurial activities
• To evaluate people's opinions about women entrepreneurship

4. METHODOLOGY

The research is based on secondary and primary data and is exploratory and descriptive in nature. The secondary data were collected from a review of past research and other reports. The factors identified were classified into three categories: factors responsible for hindrance, reasons for starting the business, and reasons for success in women entrepreneurship. These factors, with their sub-classifications, were rated on a Likert scale of 1 to 5, where 1 denotes least important and 5 denotes most important. The factors were then further analyzed through the Chi-square test to check for differences between the opinions collected from different sets of people. PSW 18 was used for calculation purposes.

The data were collected from female students and faculty members only, so as to have common areas of concern in both sample groups. All three groups of factors were analyzed from the viewpoints of marital status and occupation. Results show that, on the basis of marital status, we find major differences of opinion at the 5 per cent significance level: "need for money" and "other" are the factors on which the two sets of people have different opinions. However, among the hindrance reasons we could not find any significant difference.
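As a concrete illustration of the Chi-square comparison described in the methodology, the sketch below computes the test statistic for a hypothetical 2×2 table of Likert-style responses. The counts are invented for demonstration and are not the survey's data; the study itself used PSW 18 for these calculations:

```python
# Chi-square test of independence, sketched in pure Python.
# Hypothetical 2x2 table: rows = married / unmarried respondents,
# columns = rated "need for money" important / not important.
observed = [[30, 10],
            [15, 25]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Expected count under the independence hypothesis
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

# Critical value for df = (2-1)*(2-1) = 1 at the 5% significance level
CRITICAL_5PCT_DF1 = 3.841
print(round(chi2, 3), chi2 > CRITICAL_5PCT_DF1)
```

If the statistic exceeds the critical value, the opinions of the two groups differ significantly at the 5 per cent level, which is the kind of conclusion reported below for marital status and occupation.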


When it comes to success factors, "marketing skills" and "preservation" also show a significant difference. (Refer to the hypothesis test summary for marital status in the annexure.)

On the basis of occupation the differences are greater, because the two sets also differ in generation. Students and faculty members have significant differences on various issues in almost every set of factors. Among the hindrance factors they differ significantly on raising capital, information and advice, skills and expertise, gender discrimination, and others. In the second category, i.e. reasons for starting a business, the factors with significant differences are passion, need for money, becoming independent, self-satisfaction, and others. In the category of success factors, the differences are on issues like quality of product and service, uniqueness of design and services, management skills, and marketing skills and preservation.

5. Results

The demographic profile of the respondents is presented in Table 4.

Table 4: Demographic Profile of the Sample

Demographic | Frequency | Percentage

Age:
Under 25 years | 7 | 7%
25-34 years | 45 | 45%
35-44 years | 13 | 13%
45+ years | 35 | 35%

Education qualification:
Secondary education | 8 | 8%
Graduation | 32 | 32%
Post-graduation | 55 | 55%
Formation/training course | 5 | 5%

Who advised you to undertake the present line of activity?
Parents | 19 | 19%
Husband | 43 | 43%
Self-decision making | 26 | 26%
Friends and others | 12 | 12%

What made you start your own business?
Profit/making money | 4 | 4%
Did not want to work for others | 20 | 20%
Want for control & freedom | 8 | 8%
To make my own decisions | 12 | 12%
Social status | 14 | 14%
Self-achievement | 16 | 16%
Threat of losing my job | 26 | 26%

How do you find the attitude of the family members as an entrepreneur?
Helpful | 78 | 78%
Not helpful | 10 | 10%
Flexible | 7 | 7%
Sympathetic | 5 | 5%

Did you benefit from external support to set up your business?
Financial support | 70 | 70%
Legal advice | 8 | 8%
Technological support | 12 | 12%
Networking | 10 | 10%

When you started your business, what were the main problems you faced?
No obstacle | 33 | 33%
Believing in your abilities | 16 | 16%
Lack of information | 8 | 8%
Access to business support | 18 | 18%
Entrepreneurial skills | 12 | 12%
Combining family & work life | 10 | 10%
Gender discrimination | 3 | 3%

When you face any problem in business, whom will you consult?
Own (self) | 23 | 23%
Advice from husband & family | 50 | 50%
Advice from others | 27 | 27%

How is your self-confidence and self-esteem after becoming an entrepreneur?
Very much increased | 56 | 56%
Increased | 38 | 38%
Not increased | 6 | 6%

How important are the following success factors for women to get ahead?
(Rated on a scale of 1 to 5: 1 = not relevant, 2 = relevant, 3 = neither important nor relevant, 4 = important, 5 = very important)
• Optimising entrepreneurial spirit and skills
• Successfully managing others
• Successfully managing myself
• Having recognized expertise in a specific area
• Gaining intercultural and language skills
Total responses: 1: 2% | 2: 4% | 3: 12% | 4: 52% | 5: 30%

From the above information we find that 52% of the respondents agree that these factors are important for a woman to get ahead and 30% consider them very important; 12% said they are neither important nor relevant, and only 6% rated them as not relevant or merely relevant.

6. FINDINGS

• The elimination of obstacles to women entrepreneurship requires a major change in the traditional attitudes and mind-sets of people in society, rather than being limited to the creation of opportunities for women. Hence, it is imperative to design programmes that address attitudinal change, training and supportive services.
• The basic requirement in the development of women entrepreneurship is to make women aware of their existence, their unique identity and their contribution towards the economic growth and development of the country.
• The basic instinct of entrepreneurship should be instilled in the minds of women from their childhood. This could be achieved by carefully designing a curriculum that imparts basic knowledge along with its practical implications regarding the management (financial, legal etc.) of an enterprise.
• Adopting a structured skill-training package can pave the way for the development of women entrepreneurship. Such programmes can train, motivate and assist upcoming women entrepreneurs in achieving their ultimate goals.
• As a special concern, computer-illiterate women can be trained in information technology to take advantage of new technology and automation.
• Established and successful women entrepreneurs can act as advisors for upcoming women entrepreneurs. Initiatives taken by these well-established entrepreneurs to interact with upcoming women entrepreneurs can prove beneficial in boosting their morale and confidence, and may result in more active involvement of women entrepreneurs in their enterprises.

Such factors may vary from place to place and business to business, but women entrepreneurship is necessary for the growth of any economy, whether it is large or small.

REFERENCES

[1] Ayadurai, Selvamalar (2005). An Insight into the "Constraints" Faced by Women Entrepreneurs in a War-Torn Area: Case Study of the Northeast of Sri Lanka. Presented at the 2005 50th World Conference of ICSB, Washington D.C.
[2] Bowen, Donald D. & Hisrich, Robert D. (1986). The Female Entrepreneur: A Career Development Perspective. Academy of Management Review, Vol. 11, No. 2, pp. 393-407.
[3] Cohoon, J. McGrath, Wadhwa, Vivek & Mitchell, Lesa (2010). The Anatomy of an Entrepreneur: Are Successful Women Entrepreneurs Different From Men? Kauffman, The Foundation of Entrepreneurship.
[4] Greene, Patricia G., Hart, Myra M., Brush, Candida G., & Carter, Nancy M. (2003). Women Entrepreneurs: Moving Front and Center: An Overview of Research and Theory. White paper at United States Association for Small Business and Entrepreneurship.
[5] Hackler, Darrene; Harpel, Ellen and Mayer, Heike (2008). "Human Capital and Women's Business Ownership". Arlington, Office of Advocacy, U.S. Small Business Administration, August 2006, VA 22201 [74], No. 323.
[6] Handbook on Women-owned SMEs, Challenges and Opportunities in Policies and Programmes. International Organization for Knowledge Economy and Enterprise Development.
[7] http://www.nfwbo.org/Research/8-21-2001/8-21-(2001).htm
[8] Jalbert, Susanne E. (2000). Women Entrepreneurs in the Global Economy. http://research.brown.edu/pdf/1100924770.pdf
[9] Lall, Madhurima & Sahai, Shikha (2008). Women in Family Business. Presented at the First Asian Invitational Conference on Family Business at Indian School of Business, Hyderabad.
[10] Mathew, Viju (2010). "Women entrepreneurship in Middle East: Understanding barriers and use of ICT for
 Infrastructure set up plays a vital role for any enterprise. entrepreneurship development”, Springer Science +
Government can set some priorities for women Business Media, LLC 2010
entrepreneurs for allocation of industrial plots, sheds and [12] Moore, D. P. &Buttner, E. H. (1997). Women
other amenities. entrepreneurs: Moving beyond New Generation of
 Even in today’s era of modernization the women Women Entrepreneurs Achieving Business Success.
entrepreneurs depend on males of their family for
marketing activities. This is simply because they lack the [13] Orhan M. & Scott D. (2001), Why women enter into
skill and confidence for undertaking such activities. entrepreneurship: an explanatory model.
Women development corporations should come forward [14] Women in Management Review, 16(5): 232-243.
to help the women entrepreneurs in arranging frequent [15] Singh, Surinder Pal, (2008), An Insight Into The
exhibitions and setting up marketing outlets to provide Emergence Of Women-owned Businesses As An
space for the display of products or advertisement about Economic Force In India, presented at Special
services made by women. Conference of the Strategic Management Society,
December 12-14, 2008, Indian School of Business,
7. CONCLUSION Hyderabad.
[16] Tambunan, Tulus, (2009), Women entrepreneurship in
The study tried to find out the difference among various set of Asian developing countries: Their development and main
people of the crucial factors which are concerned with the constraints, Journal of Development and Agricultural
women entrepreneurial opportunities at large. Issues have Economics Vol. 1(2), Page No. 027-040.the glass ceiling.
been identified through various review of literature. It should Thousand Oaks, CA: Sage.
be cross checked with the real entrepreneurs. These factors


ANNEXURE

Questionnaire

1. What is your age category?


Under 25 Year [ ] 25 - 34 Year [ ]
35 – 44 Year [ ] 45+ Year [ ]

2. What is your educational background?


Secondary Education [ ] Graduation [ ]
Post-Graduation [ ] Formation/Training Course [ ]

3. What made you start your own business?


Profit making money [ ] Did not want to work for others [ ]
Want for control & freedom [ ] To make my own decision [ ]
Social status [ ] Self-achievement [ ]
Threat of losing my job [ ]

4. Who advised you to undertake the present line of activity?


Parents [ ] Husband [ ]
Self-decision Making [ ] Friends And Others [ ]

5. How do you find the attitude of your family members towards you as an entrepreneur?


Helpful [ ] Not helpful [ ]
Flexible [ ] Sympathetic [ ]

6. Did you benefit from external support to set up your business?


Financial Support [ ] Legal Advice [ ]
Technological support [ ] Networking [ ]

7. When you started your business, what were the main problems you faced?
No Obstacle [ ] A question of self-confidence (believing in your abilities) [ ]
Lack of information [ ] Awareness/Access to business support [ ]
Entrepreneurial skills [ ] Combining family and work life [ ]
Gender discrimination [ ]

8. When you face any problem in business, whom do you consult?
Own [ ] Advice from Husband & family [ ]
Advice from others [ ]

9. How is your self-confidence and self-esteem after becoming an Entrepreneur?


Very much increased [ ] Increased [ ]
Not Increased [ ]

10. How important are the following success factors for women to get ahead?

(1 = Not relevant, 2 = Relevant, 3 = Neither important nor relevant, 4 = Important, 5 = Very important)

• Optimising entrepreneurial spirit and skills
• Successfully managing others
• Successfully managing myself
• Having recognized expertise in a specific area
• Gaining intercultural and language skills
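As an aside (not part of the original study), the Total row reported for this question in Section 5 (Not relevant 2%, Relevant 4%, Neither 12%, Important 52%, Very important 30%) can be condensed into a single weighted mean on the 1-5 scale; a minimal illustrative sketch in Python, with a helper name of my own choosing:

```python
# Weighted mean of the 5-point ratings from the paper's Total row.
# The percentages come from Section 5; the helper itself is illustrative.

def weighted_mean(percent_by_score):
    """percent_by_score maps a 1-5 score to the share of respondents (%)."""
    total = sum(percent_by_score.values())
    return sum(score * pct for score, pct in percent_by_score.items()) / total

ratings = {1: 2, 2: 4, 3: 12, 4: 52, 5: 30}  # Total row of the results table
print(weighted_mean(ratings))  # 4.04, i.e. close to "Important" on the scale
```

A mean of about 4.04 matches the paper's reading that most respondents rate these factors as important or very important.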


A Descriptive Study of the Marketing Mix Strategies of Milkfood Ltd.
Kamaljit Singh, Assistant Professor, BGIET, Sangrur
Ramandeep Kaur, Student MBA, BGIET, Sangrur

ABSTRACT

The Marketing Mix model (also known as the 4 P's) can be used by marketers as a tool to assist in implementing the marketing strategy. Marketing managers use this method to attempt to generate the optimal response in the target market by blending four variables in an optimal way. It is important to understand that the Marketing Mix principles are controllable variables. The Marketing Mix can be adjusted on a frequent basis to meet the changing needs of the target group and the other dynamics of the marketing environment. The function of the Marketing Mix is to help develop a package (mix) that will not only satisfy the needs of the customers within the target markets, but simultaneously maximize the performance of the organization.

1. INTRODUCTION

Marketing your business is about how you position it to satisfy your market's needs. There are four critical elements in marketing your products and business, the four P's of marketing:

1. Product: the right product to satisfy the needs of your target customer.
2. Price: the right product offered at the right price.
3. Place: the right product at the right price, available in the right place to be bought by customers.
4. Promotion: informing potential customers of the availability of the product, its price and its place.

Each of the four P's is a variable you control in creating the marketing mix that will attract customers to your business. Your marketing mix should be something you pay careful attention to, because the success of your business depends on it. As a business manager, you determine how to use these variables to achieve your profit potential. This publication introduces the four P's of marketing and includes worksheets that will help you determine the most effective marketing mix for your business.

PRODUCT

"Product" refers to the goods and services you offer to your customers. Apart from the physical product itself, there are elements associated with your product that customers may be attracted to, such as the way it is packaged. Other product attributes include quality, features, options, services, warranties, and brand name. Thus, you might think of what you offer as a bundle of goods and services. Your product's appearance, function, and support make up what the customer is actually buying. Successful managers pay close attention to the needs their product bundles address for customers.

PRICE

"Price" refers to how much you charge for your product or service. Determining your product's price can be tricky and even frightening. Many small business owners feel they must absolutely have the lowest price around, so they begin their business by creating an impression of bargain pricing. However, this may be a signal of low quality and not part of the image you want to portray. Your pricing approach should reflect the appropriate positioning of your product in the market and result in a price that covers your cost per item and includes a profit margin. The result should be neither greedy nor timid: the former will price you out of the market, while pricing too low will make it impossible to grow. As a manager, you can follow a number of alternative pricing strategies.

PLACE

"Place" refers to the distribution channels used to get your product to your customers. What your product is will greatly influence how you distribute it. If, for example, you own a small retail store or offer a service to your local community, then you are at the end of the distribution chain, and so you will be supplying directly to the customer. Businesses that create or assemble a product will have two options: selling directly to consumers or selling to a vendor.

PROMOTION

"Promotion" refers to the advertising and selling part of marketing. It is how you let people know what you have for sale. The purpose of promotion is to get people to understand what your product is, what they can use it for, and why they should want it. You want the customers who are looking for a product to know that your product satisfies their needs.

2. REVIEW OF LITERATURE

The marketing mix originates from the single P (price) of microeconomic theory (Chong, 2003). McCarthy (1964) offered the "marketing mix", often referred to as the "4Ps", as a means of translating marketing planning into practice (Bennett, 1997). The marketing mix is not a scientific theory, but merely a conceptual framework that identifies the principal decisions managers make in configuring their offerings to suit consumers' needs. The tools can be used to develop both long-term strategies and short-term tactical programmes (Palmer, 2004). The idea of the marketing mix is the same as that of mixing a cake: a baker will alter the proportions of ingredients in a cake depending on the


type of cake he wishes to bake. The proportions in the marketing mix can be altered in the same way and differ from product to product (Hodder Education, n.d.). The marketing mix management paradigm has dominated marketing thought, research and practice (Grönroos, 1994), and has served "as a creator of differentiation" (Van Waterschoot, n.d.) since it was introduced in the 1940s. Kent (1986) refers to the 4Ps of the marketing mix as "the holy quadruple…of the marketing faith…written in tablets of stone". The marketing mix has been extremely influential in informing the development of both marketing theory and practice (Möller, 2006).

The 4Ps delimit four distinct, well-defined and independent management processes. Despite the consistent effort by many physical businesses to deal with the 4Ps in an integrated manner, the drafting, but mainly the implementation, of the P policies remains largely the task of various departments and persons within the organisation. Even more significant is the fact that the customer typically experiences the individual effects of each of the 4Ps on diverse occasions, at diverse times and in diverse places, even where companies take great pains to fully integrate their marketing activities internally (Constantinides, 2002; Wang, Wang and Yao, 2005). However, a study by Rafiq and Ahmed (1995) suggested that there is a high degree of dissatisfaction with the 4Ps framework; overall, their results provide fairly strong support for the view that Booms and Bitner's (1981) 7P framework should replace McCarthy's 4Ps framework as the generic marketing mix. The development of the marketing mix has received considerable academic and industry attention. Numerous modifications to the 4Ps framework have been proposed; the most concerted criticism has come from the services marketing area (Rafiq and Ahmed, 1995). Introductory marketing texts suggest that all parts of the marketing mix (4Ps) are equally important, since a deficiency in any one can mean failure (Kellerman, Gordon and Hekmat, 1995). A number of studies of industrial marketers and purchasers indicated that the marketing mix components differ significantly in importance (Jackson, Burdick and Keith, 1985). Two surveys focused on the determination of key marketing policies and procedures common to successful manufacturing firms (Jackson, Burdick and Keith, 1985). Thus, it appears from these studies that business executives do not really view the 4 Ps as equally important, but consider the price and product components to be the most important (Kellerman, Gordon and Hekmat, 1995). The concept of the 4Ps has also been criticised as a production-oriented, rather than customer-oriented, definition of marketing (Popovic, 2006); it is referred to as a marketing management perspective. Lauterborn (1990) claims that each of these variables should also be seen from a consumer's perspective.

3. OBJECTIVES OF STUDY

The research relates to the Marketing Mix of Milkfood products as a key to the revival of the Milk Plant.
1. To study the marketing mix strategies of Milkfood.
2. To study the various marketing mix elements that influence customer behaviour.

4. RESEARCH METHODOLOGY

I conducted research on "4 P's of Marketing: Key to the Revival of the Milk Plant" for Milkfood products. The main purpose of the research is to study the 4 P's of marketing of Milkfood products among rural and urban people of various age groups.

RESEARCH DESIGN

Data Source: Primary & Secondary
Research Approach: Observations, Survey
Research Instrument: Questionnaire
Research Type: Descriptive
Sampling Unit: Respondents
Sample Size: 50 people
Contact Method: Personal
Sample Type: Simple Random Sampling

DATA COLLECTION SOURCE

The data collection process was completed by gathering data from primary and secondary sources. Direct information is advisable as it gives correct information. The data collection was carried out by directly interviewing rural and urban people.

APPROACH FOR COLLECTION OF DATA

1. Survey: A survey of 4 weeks was carried out in the village to collect information. A questionnaire was prepared and used as an instrument to gather information from the people.

2. Observation: While interviewing, the respondent was observed, and all additional information that did not directly form part of the questionnaire was jotted down.

Some of the questions of the research are listed below:
a) Do people like "Milkfood" products?
b) Which brand do people prefer the most?
c) From which outlet do they prefer to buy?
d) Are they satisfied with its prices?
e) What changes do they want in Milkfood products?
f) Are Milkfood products easily available in the society?

I took a sample size of 50 rural and urban people, divided into 4 age groups, viz.
a) Under 25 Years
b) 25-34 Years
c) 35-44 Years
d) 45+ Years

A sample of rural and urban people was taken from each age group. The samples were chosen on a random basis and included people from all lifestyles.

Result

Demographic Profile of the Sample

Demographic                 Frequency    Percentage
AGE
Under 25 Years              12           24%
25-34 Years                 12           24%
35-44 Years                 12           24%


45+ Years                   14           28%
GENDER
Male                        36           72%
Female                      14           28%
What is your occupation?
Farming                     9            18%
Shopkeeper                  6            12%
Student                     12           24%
Unemployed                  13           26%
Homemaker                   10           20%
How did you come to know about this brand?
Glow Sign                   0            0%
Posters                     17           34%
Human Source                17           34%
Painting's on dealers' shop 7            14%
Magazines                   9            18%
Have you ever purchased Milkfood products?
Daily                       1            2%
Frequently                  9            18%
Occasionally                18           36%
Never                       22           44%
Do you like Milkfood products?
Very much                   35           70%
Not much                    1            2%
Average                     9            18%
Like                        5            10%
Dislike                     0            0%
If you like Milkfood products, then from which outlet do you prefer to buy?
Shop                        13           26%
Milk bar                    22           44%
Society                     15           30%
What do you think about its prices?
Affordable                  37           74%
Expensive                   8            16%
Very expensive              3            6%
Not affordable              2            4%
Are you satisfied with the quality of the product?
Excellent                   31           62%
Good                        16           32%
Average                     3            6%
Satisfactory                1            2%
Not Satisfactory            0            0%
What changes do you want in Milkfood products?
Price                       10           20%
Quantity                    14           28%
Quality                     16           32%
Flavour                     5            10%
Availability                5            10%
No Change                   0            0%
Why do you take Milkfood products?
Gain energy                 30           60%
Advised by doctor           2            4%
For taste                   7            14%
As a substitute of home products  11     22%
Are Milkfood products easily available in the society?
Yes                         39           78%
No                          11           22%
Why do you buy Milkfood products?
Buy when you don't have home-made products      11    22%
These are cheaper than the home-made products   39    78%

FINDINGS

After analyzing the primary data collected on the demand for and preferences of Milkfood products in the rural and urban area regarding the 4 P's of marketing, I have come across some findings, i.e. what the choice of people is, what their preferences are, whether they are satisfied with the quality, and what changes they want in Milkfood's products. These findings can be summed up as follows:
1) People of all age groups had different choices of outlets from which they prefer to buy Milkfood products, with the majority choosing milk bars, which are liked by children and teenagers. Others like to buy from the societies.
2) The majority of people are satisfied with the prices, but adult citizens say they want the prices to be reduced.
3) About the changes they want in Milkfood products, the majority said they want the quantity to be changed, followed by a change in the price. They were satisfied with the quality of Milkfood products.
4) Rural and urban people prefer buying Milkfood products because these are cheaper than home-made products, rather than buying only when they have run out of home-made products.
5) Lastly, I found that advertisements and the distribution system play an important role in shaping consumer behaviour towards a particular product. A slight improvement in the advertising strategy of Milkfood will surely help to boost its sales and increase awareness among all classes and age groups of rural and urban areas.

5. CONCLUSION

Working at the Milkfood Milk Plant, Bahadurgarh gave me an opportunity to apply the skills and knowledge I had gained previously. The majority of people are satisfied with the prices, but adult citizens say they want the prices to be reduced. Rural and urban people prefer buying Milkfood products because these are cheaper than home-made products, rather than buying only when they have run out of home-made products. About the changes they want in Milkfood products, the majority said they want the quantity to be changed, followed by a change in the price. They were satisfied with the quality of Milkfood products.

REFERENCES

[1] Czinkota and Ronkainen (2002), Marketing Mix, Thomson South-Western.
[2] De Mooij (2003), Global Marketing and Advertising: Understanding Global Paradoxes, Sage.

853
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

[3] Keegan and Green (2005), Global Marketing, Prentice Hall.
[4] Kotler and Armstrong (2006), Principles of Marketing, Prentice Hall.
[5] Prime et Usunier (2004), Marketing mix: Développement des marchés et management interculturel, Vuibert.
[6] Usunier (2000), Marketing across Cultures, Prentice Hall.
[7] Periodicals: MOCI, HBR, JIBS, JM, JMR.
[8] www.milkfoodltd.com
[9] www.indiaagronet.com/indiaagronet/Dairy.htm
[10] www.hindu.com/thehindu/2008/10/11/stories/0611000c.html
[11] Steve Crabtree (2004), "Getting Personal in the Workplace: Are Negative Relationships Squelching Productivity in Your Company?", Gallup Management Journal, June 10, 2004.
[12] http://www.siescoms.edu/images/pdf/reserch/working_papers/marketing mix.pdf
[13] http://en.wikipedia.org/wiki/Marketing mix
[14] Associated Chambers of Commerce and Industry of India, 2005.
[15] Boyd, Walker & Larréché (1995), Advertising, Personal Selling and Sales Promotion, who define PR (p. 352) as non-paid, non-personal stimulation of demand for a product, service or business unit by planting significant news about it, or a favorable presentation of it, in the media.
[16] Chong, K. W. (2003), The Role of Pricing in Relationship Marketing: A Study of the Marketing Mix Strategy of Milkfood Pvt. Ltd., Industrial Marketing Management, 26(1), pp. 1-13.
[17] Christo Boshoff (2002), "Service Advertising", Journal of Service Research, Vol. 4, pp. 290-298.
[18] Dhar, Ravi, Stephen M. Nowlis and Steven J. Sherman (1999), "Comparison Effects on Preference Construction", Journal of Consumer Research, 26 (December), pp. 293-306.

QUESTIONNAIRE

I am doing my main research paper on "Product, Price, Place and Promotion of Marketing" at Milkfood Ltd. I request you to provide the required information for the completion of my study. I promise that the information will be used exclusively for academic purposes.

Name: ________________________________

Your age: (a) Under 25 years [ ]  (b) 25-34 [ ]  (c) 35-44 [ ]  (d) 45+ years [ ]

Sex: (a) Male [ ]  (b) Female [ ]

[1] What is your occupation?
(a) Farming [ ]  (b) Shopkeeper [ ]  (c) Unemployed [ ]  (d) Homemaker [ ]  (e) Student [ ]

[2] How did you come to know about this brand?
(a) Human source [ ]  (b) Glow sign [ ]  (c) Posters [ ]  (d) Painting's on dealers' shop [ ]  (e) Magazine [ ]

[3] Have you ever purchased Milkfood products?
(a) Daily [ ]  (b) Frequently [ ]  (c) Occasionally [ ]  (d) Never [ ]

[4] Do you like Milkfood products?
(a) Very much [ ]  (b) Not much [ ]  (c) Average [ ]  (d) Dislike [ ]

[5] If you like Milkfood products, then from which outlet do you prefer to buy?
(a) Shop [ ]  (b) Milk bar [ ]  (c) Society [ ]

[6] Are you satisfied with its prices?
(a) Affordable [ ]  (b) Expensive [ ]  (c) Very expensive [ ]  (d) Not affordable [ ]

[7] Are you satisfied with the quality of the products?
(a) Excellent [ ]  (b) Good [ ]  (c) Satisfactory [ ]  (d) Not satisfactory [ ]

[8] What changes do you want in Milkfood products?
(a) Price [ ]  (b) Quantity [ ]  (c) Quality [ ]  (d) Availability [ ]  (e) Flavour [ ]  (f) No change [ ]

[9] Why do you take Milkfood products?
(a) Gain energy [ ]  (b) Advised by doctor [ ]  (c) For taste [ ]  (d) As a substitute of home products [ ]

[10] Are Milkfood products easily available in the society?
(a) Yes [ ]  (b) No [ ]

[11] Why do you buy Milkfood products?
(a) Buy when you don't have home-made products [ ]
(b) These are cheaper than the home-made products [ ]
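The percentage columns in the Results tables above are simple frequency shares of the 50-respondent sample. As an aside (not part of the paper), the computation can be sketched in Python; the helper name is my own:

```python
# Convert raw response counts into the percentage column shown in the
# demographic/result tables (sample size n = 50 in the study).

def percentage_table(counts):
    """counts: mapping of answer option -> number of respondents."""
    n = sum(counts.values())
    return {option: round(100 * c / n) for option, c in counts.items()}

# "Do you like Milkfood products?" row counts from the paper
likes = {"Very much": 35, "Not much": 1, "Average": 9, "Like": 5, "Dislike": 0}
print(percentage_table(likes))
# {'Very much': 70, 'Not much': 2, 'Average': 18, 'Like': 10, 'Dislike': 0}
```

The output reproduces the 70% / 2% / 18% / 10% / 0% row reported in the results table.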


COHORTS IN MARKETING: A REVIEW PAPER


Amandeep Singh, Guru Kashi University, Talwandi Sabo (Bathinda), Punjab, amandeep.garai@gmail.com
Gurbir Singh, Guru Kashi University, Talwandi Sabo (Bathinda), Punjab, gurbir_ubs@yahoo.co.in

ABSTRACT

Marketers often segment consumers on factors such as age, gender, income, stage of life, and geography. Another innovative approach is to group consumers into cohorts. Cohorts are groups of individuals who are born and travel through life together and experience similar external events during their late adolescent/early adulthood years. These events influence people to form values, attitudes, and preferences that remain with them for their lifetime. This article reviews and assesses the current literature on American cohorts. The value and validity of cohorts as a segmentation technique are discussed, as well as areas for future research.

Keywords
Marketing, Cohorts, Theory.

1. INTRODUCTION

Originally used to describe a military unit in ancient Rome, the word cohort retains some of its original meaning by describing a group of people that shares a common statistical or demographic trait. The word is widely used in academia in a variety of disciplines that study groups of people based on shared characteristics, such as economics, health sciences and sociology. Cohort in marketing, however, is commonly accepted to mean "age-based marketing". When it comes to marketing, the term "cohort" refers to specific experiences, events or other factors shared by a group of consumers. These cohorts are used to identify and target segments of the market that, although they may fit into other models, are more effectively grouped and treated as one.

2. COHORTS VS DEMOGRAPHICS

Cohorts are often confused with the general demographics that are typically used to segment the marketplace. Demographic groupings such as income level or age are not considered marketing cohorts. Cohorts are the things that separate specific groups of people even within their demographic groupings. For example, men between the ages of 55 and 70 are a demographic grouping. Men between the ages of 55 and 70 who share the experience of having served in the Navy on board aircraft carriers are a cohort. Cohorts are far more specific than standard demographic groupings and as such are a valuable tool for precisely targeted marketing campaigns and niche businesses.

3. COHORTS IN MARKETING

Similar to demographics, cohorts use quantifiable descriptors to identify target markets. Demographics, however, are the quantifiable descriptors themselves, such as the aforementioned age and household income. Cohorts in marketing use age descriptors as demographic markers to make inferences about the attitudes and behavior of people in the same age group based on common experiences. The basic premise is that people are profoundly influenced by seismic experiences and events remembered from late adolescence and early adulthood, their coming-of-age years. These "defining moments" tend to inform people's attitudes, preferences and shopping behavior for the remainder of their lives.

3.1 Generational Cohorts

Charles Schewe, Geoffrey Meredith and Janice Karlovich identified seven generational cohorts in their year 2000 book titled "Defining Markets, Defining Moments." These include the Great Depression Cohort, with people born between 1912 and 1921; the World War II Cohort, born between 1922 and 1927; the Post-WWII Cohort, born between 1928 and 1945; Baby Boomers I, born between 1946 and 1955; Baby Boomers II, born between 1956 and 1965; Generation X, born between 1966 and 1976; and N-Gens, born between 1977 and 1987. Generational cohorts are not the same as "generations," which typically span 20 to 25 years; cohorts vary in the number of years covered. For marketing purposes, the relevance lies in the years when the cohorts came of age.

3.2 Defining Moments

Cohorts are shaped by significant defining moments that affect their attitudes and behavior for the remainder of their lives. For instance, the Great Depression Cohort came of age during the Great Depression, 1930-1939. As a defining moment, the Great Depression, defined by financial insecurity, had a profound effect on this group in terms of frugal spending, saving as a priority and a high premium on financial security. Each cohort group has its defining moments that translate into hot buttons in terms of attitudes, tastes and preferences.

3.3 Improved Target Marketing

Understanding and using cohort analysis can help you identify target markets with greater precision to win new customers. It requires a bit of extra effort, but small-business owners typically increase their chances of coming up with winners when they aim for smaller target markets, such as those associated with a specific cohort.


3.4 Events

Special events are a natural cohort, and marketing teams are quick to seize upon them for promotional gain. For example, people attending a concert of some note will often feel as though they were present for something that was meaningful in their lives. This cohort creates an opportunity to promote and sell items that commemorate the event and focus on the concert as a defining moment.

3.5 Experiences

Shared experiences can account for valuable marketing cohorts that bank on the consumer's desire to remain a part of the past. The Great Depression was, for all its trauma and hardship, an experience that shaped many people, in particular their outlook on spending and saving. People who have lived through the lean times have a better understanding that they may return at any time, and they live their lives in a state of semi-preparedness as a result. Marketing professionals may use such an experience as a cohort to identify segments of the population who may be more receptive to financial products like bank accounts, gold shares or life insurance.

3.6 Research on Age Cohorts

Existing literature on the topic of market segmentation, and more specifically age cohorts, shows a variety of articles on the topic and on the history of market segmentation beginning from the 1970s. This research dates back to 1974, when the Journal of Marketing published a section on selecting the best segmentation correlate. That article reported a study on the significance of certain segmentation variables in consumer behavior, regarding weak associations with the actual marketed product or service. A conclusion of this study was that a segmentation correlate used in one market may not be effective in another. The findings indicated that, in order to achieve the most significant segmentation correlate for a product or service, various variables must be tested in each individual market to determine the most successful correlate. It was also discovered that, in determining the most effective market correlate, frequency of use is an important factor (Hirsch and Peters, 1974). In 1987 research, the marketer was encouraged to choose a market segmentation variable and adapt the product or service to fit the criteria of that specific market correlate, rather than adapting a segmentation correlate to the product or service, the current method of choosing segmentation variables. This illustrates a difference in the market segmentation theories that have developed since the 1980s. In

research provided an understanding of market segmentation that has developed over time and helped to contribute to modern segmentation concepts and theories. Continuing on the timeline of segmentation concepts, research from the new millennium showed signs of development into what marketing segmentation is today. In a 2000 study, it was found that the individual product or service needs of consumers are all different, but that rather than the heterogeneous market segmentation approach used in the past, a homogeneous approach is needed to satisfy more consumers and develop better marketing strategies (Blois and Dibb, 2000). In 2004, Schewe and Meredith focused on the topic of market segmentation by generational cohorts, and determined that generational cohorts were a new concept in marketing segmentation at this point in history. The authors discussed the values and beliefs that motivate different age groups, such as consumers from Generation X and the Baby Boomer Generation. This research took into consideration the experiences, values, beliefs, attitudes and preferences of specific generational cohorts. It expressed that consumers of the same generation go through the same external factors and events, and that these factors helped to shape their attitudes toward spending. The authors noted the presence of more "tech-savvy" consumers than ever before, creating a need for more personal and well-guided advertising toward the ideal consumers of the product or service (2004). Targeting a specific generation by researching the celebrities, movie stars, and athletes considered to be the heroes of their childhood gives marketers an effective use of nostalgia. Appealing to nostalgic feelings and memories gives consumers the feeling that the product is being specifically directed toward them, allowing them to connect to the product or service on a personal level. Market segmentation by generational cohort is a more personal and well-adapted method of connecting to individual consumers' attitudes and values. Understanding different generations of consumers gives marketers a way of researching buying habits and will aid in forecasting future product trends that may apply to future promotional strategies (Schewe and Meredith, 2004). In another example of research on age cohorts, Bennett, Dees and Sagas review the differences between Generation X and Generation Y consumers. This work describes the difference in media preferences of action sports consumers, like attending traditional sports games versus watching action sports on television or playing sport video games. The authors describe the difficulty in appealing effectively to youth markets over marketing history (2006). As the current youth consumers are Generation Y, marketers work harder and more effectively to appeal to these consumers, because achieving brand loyalty earlier in a consumer's life will
the 1980s, four factors were discussed that were considered create a future of loyal customers. Appealing to Generation Y
helpful in evaluating the desirability of a segmentation earlier will also be beneficial because this generation is three
variable; these factors include measurability, accessibility, times larger than its predecessor,Generation X.
substantiality, and action-ability. Furthermore, consideration Characteristics researched on Generation Y shows that they
was also needed to determine the specific type of segment are more brand conscious than previous generations and have
categories desired, identified as geographic, demographic, been exposed to a wider range of media and advertising. For
psychological, and behavioral variables. Overall the theory example television, movies, video games, magazines, and the
behind this marketing segmentation concept stated that “once internet are more exposed to this generation than Generation
a segment has been identified which fulfils the requirements X consumers, due to the rapid technological advances
of measurability, accessibility, substantiality and action- established during the childhood of Generation Y (Bennett et
ability it is possible to develop a product or service to meet al., 2006). Research has lead to the conclusion that
the needs of the segment” (Drayton and Tynan, 1987). This Generation Y consumers are less inclined to watch or attend

856
Proceedings of 3rd International Conference on Advancements in Engineering & Technology (ICAET-2015)
ISBN: 978-81-924893-0-8

traditional sporting events like NBA basketball and MLB baseball than Generation X. Generation Y prefers watching action sports or playing action sport video games, like skateboarding and BMX (Bennett et al., 2006). Further research highlights the differences between people born in Generation X and those born in Generation Y in terms of characteristics that are present in the workplace. Generation Y are believed to be more optimistic and entrepreneurial than Generation X, who appear to be more pessimistic and mistrustful people. One of the most evident characteristics that apply to Generation Y in the workplace is that they are the most rewarded and recognized of any generation in terms of childhood accomplishments (Galagan, 2006). This characteristic leads them to feel more entitled when they enter the workforce for the first time. Curbing this sense of self-entitlement in Generation Y employees is the most difficult challenge for managers of a different generation, who believe they must work hard for what they earn (Galagan, 2006). Characteristics of Generation X and Generation Y in the workplace can also transfer over into the consumption habits of each group. Research has also been conducted on the tendency that marketers have to divide baby boomers into two segments: older baby boomers born between 1946 and 1955, and younger baby boomers born between 1956 and 1965. The results of this work showed that even though the whole group is large in number, there are more similarities than differences between the younger and older baby boomers; therefore marketers should use caution in dividing age cohorts into further segments. Further conclusions state that generational segmentation is a good starting point, but marketers should consider other demographic and/or psychographic methods in segmenting markets for the most effective results (Reisenwitz and Iyer, 2007).

4. COHORT ANALYSIS
A cohort is a group of users who were acquired from a mobile marketing initiative over a certain period of time, for example, one week. Cohort analysis tools simply follow this group week over week and monitor their return. This type of analysis offers marketers a tremendous level of insight into users' lifetime value, improves marketing funnel conversions and helps optimize campaigns directly to revenue generated.

4.1 Benefits of cohort analysis
One popular way to look at cohorts is to monitor the rate of return for the first week and set a target, such as 25% return on ad spend. Over time, you monitor your return and improve your marketing, which should enable you to beat your target. By looking at different cohorts, the true performance of your marketing becomes clear. When you make a change, it is represented in that week's data. You don't have the performance from a previous week masking the true performance.
Over a longer period of time, such as 12 weeks, the lifetime value of users acquired becomes very easy to understand. You simply add up the 12-week return of each cohort and compare. Some weekly cohorts will be stronger than others, but an average 12-week trend should emerge once you have collected enough data. Cohorts are also good for spotting weekly, monthly or seasonal trends. In most mobile retail cohorts, you see a sharp drop in purchases seven days before a major holiday.

4.2 How does it work
First, marketers need to have a conversion tracking solution in place to capture all the data they need to do cohort analysis. The data required to track differs by vertical. Retailers may want to measure registrations, purchase data, repeat purchases and total revenue per purchase. Travel marketers often measure hotel bookings, airline reservations and car rentals. For other verticals, marketers should measure the data that is most relevant to their business.
Once conversion tracking is up and running, you can access the raw data you need. The more data you have, the more options you have to segment cohorts. Create cohorts based on the week in which newly acquired mobile app users downloaded your app. Then measure all purchases that cohort makes over time. The final product of a cohort analysis is an easy-to-read table or chart showing the breakdown of total marketing spend compared to data tracked and lifetime value for each cohort.

5. RELIABILITY OF COHORTS IN MARKETING
Why Age Cohorts are a Reliable Segmentation Target
Segmenting a market by age cohorts is very effective and helps to narrow down the most ideal target market for a product or service. The external factors and events that were experienced by consumers in a specific generation impact the interests and consumption habits of those specific members of the generation. Understanding the differences in consumption between members of Generation X, Generation Y, and the Baby Boomer Generation allows advertisers to effectively advertise products to those generations. The three generations focused on in the research are completely different in terms of how members of each generation consume, behave, and spend. The range of differences found in each generation shows the need for different marketing and advertising strategies specific to each generation. Schewe and Meredith stated that "when such similarities exist, marketers can offer the same (or very similar) products, distribution and/or communications programs to a large number of potential customers who are more likely to respond in the way desired. Efficiency in marketing is realized and marketers and consumers benefit" (2004). The main benefit of generational cohort segmentation is the potential for specifically-tailored advertising and overall effective promotion toward the ideal target market. Furthermore, Schewe and Meredith state that events and experiences within a specific generation's history transform the values, beliefs, and attitudes of its members. These values, beliefs, and attitudes shape the consumption habits and patterns of each generation of consumers. The development of consumers' interests, attitudes, values, and beliefs does not fully occur until the consumer generation is in young adulthood. Marketers who understand generational cohorts can appeal to the defining moments or events that are of importance to certain generations. Appealing to defining events in a consumer's life can influence emotional feelings such as nostalgia, happiness, or youthfulness. Marketers can


take advantage of these feelings because the consumer would be more likely to buy a product that invokes these attitudes and emotions. Modern consumers look for personal connections that appeal to their emotions, beliefs, values, and attitudes. They also look for products that appeal to their lifestyle and help to promote their ideal self. These relationships that consumers are looking to establish with brands and products are what marketers try to capture in their advertisements. Obtaining long-term relationships with consumers will provide companies with brand loyalty and a strong future. For example, members of Generation Y tend to be more tech-savvy, music-oriented, and fashion conscious. They also tend to spend more money than any other generation of consumers. Identifying appropriate products, using members of the same generation in advertising, and appealing to this generation's sense of fashion and love of music will make for very appropriate and effective advertisement. The Baby Boomer generation is on the verge of retirement age and tends to gravitate toward products that bring a sense of nostalgia and a reminder of their earlier years. Even though this generation is the wealthiest of them all, they are beginning to save money and live frugally. Advertisements that are simple and act as reminders to Baby Boomers that retirement is around the corner will potentially appeal to this audience. Generation X consumers are mid-way through their careers and tend to have more family-oriented and casual lifestyles. They look for balance and perspective in their lives and make time for leisure activities (Foley and LeFevre, 2001). Advertisements that appeal to the sense of family, more specifically the 'ideal family' lifestyle that members of Generation X value, are more likely to be successful with this cohort. Since they are well-defined within their careers, an established and balanced lifestyle at their current age is very important. Products that are geared towards these beliefs and sell family values will appeal most to Generation X. Market segmentation by generational cohort helps marketers to narrow down potential target audiences and find the consumers who would be genuinely interested in the product or service. Understanding this type of segmentation helps advertisers determine what values and beliefs different consumers associate with their product. Knowing the generation of consumers that is most likely to purchase the product helps companies to advertise effectively. Developing brand loyalty for the company starts with correctly marketing the product or service. Therefore consumers who find the product worth their while will develop strong relationships with that company into the future. Generational segmentation is one of the more basic segmentation variables, and has helped to develop reliable target markets today.

6. DRAWBACKS
For all their marketing value, cohorts do have significant drawbacks. They are often too precise for most companies to use as a general marketing tool and can only be employed in cases where extreme drilling down of marketing methods and approaches is warranted. If your company handles the litigation for a specific type of illness contracted in a specific place and time, cohort marketing is the way to go. If you are advertising a general legal practice, there are better ways to reach a broader audience. The costs involved with collecting, then organizing, the very specific data required to segment cohorts are also often prohibitive for many companies.

7. CONCLUSION
Research on this topic has confirmed the benefits that this type of market segmentation contributes to determining target markets and marketing overall. The personal connection between product and consumer is of crucial importance in this age of consumerism. Companies who effectively appeal to ideal consumers will establish brand loyalty and develop trust within their customer base for years to come. In a society of conspicuous and comparative consumption, advertisers have a better understanding of which generations tend to spend more or consume more than others. Marketers have to have thorough knowledge of generational cohort segmentation in order for the concept to work effectively. Research, development, and experimentation are crucial in the early stages of developing a suitable segmentation variable. Generational cohort segmentation may not be the best-suited method of segmentation for a specific market. A marketing department needs to be able to determine the relevancy and accuracy of the potential segmentation variables to the specific market. In order for generational cohort segmentation to be effective, consumers need to feel that the product is 'meant' for them and that the company understands them as a person. If consumers do not feel a sense of connection to the product through its advertising, the marketers have failed to accurately portray the product or have failed in their cohort analysis (Hisrich and Peters, 1974). Therefore, in order to correctly develop and understand generational segmentation, marketers and advertisers need to understand market segmentation as a whole. Being able to compare and contrast segmentation variables is the first step to accurately portraying an ideal target audience. Further research on market segmentation is also needed in order to develop standard methods and accurate experiments that will help advertisers determine which segmentation variables are suitable, and which are not, for different types of products and services. If marketers do thorough and efficient research on what type of consumer would be interested in their product, they are destined for a successful marketing campaign that will help define a new age of innovative advertising. Market segmentation is a key factor in the advertising and promotional process, and determining which type of market segmentation to choose is crucial. Generational cohort segmentation is only one of many segmentation variables, but it provides many benefits that, if researched correctly and applied accurately, can be very effective. The study of consumer behavior, which is practiced in the development of market segmentation, is a driving force in the advertising world. Thorough knowledge of consumer behavior will allow marketers to apply the most effective segmentation variables in a world of complex consumerism.

REFERENCES
[1] Bennett, G., Sagas, M., and Dees, W. (2006). Media preferences of action sports consumers: differences between generation X and Y. Sport Marketing Quarterly, 15(1), 40-49. Retrieved from Business Source Premier database.


[2] Blois, K., and Dibb, S. (2000). Market segmentation (p. 380). Oxford University Press. Retrieved from Business Source Premier database.

[3] Foley, E., and LeFevre, A. (2001). Understanding generation X. Zagnoli, McEvoy, Foley LLC. Retrieved from http://www.voirdirebase.com/pdfs/gen_x.pdf.

[4] Galagan, P. (2006). Engaging generation Y. T+D, 60(8), 27-30. Retrieved from Business Source Premier database.

[5] Meredith, G. E., and Schewe, C. D. Defining Markets, Defining Moments: America's 7 Generational Cohorts, Their Shared Experiences, and Why Businesses Should Care.

[6] Hisrich, R., and Peters, M. (1974). Selecting the superior segmentation correlate. Journal of Marketing, 38(3), 60-63. Retrieved from Business Source Premier database.

[7] Reisenwitz, T., and Iyer, R. (2007). A comparison of younger and older baby boomers: investigating the viability of cohort segmentation. Journal of Consumer Marketing, 24(4), 202-213. Retrieved from Business Source Premier database.

[8] Reisenwitz, T., and Iyer, R. (2009). Differences in generation X and generation Y: implications for the organization and marketers. Marketing Management Journal, 19(2), 91-103. Retrieved from Business Source Premier database.

[9] Rowe, M. (2008). Generation revelations. Restaurant Hospitality, 92(1), 26-30. Retrieved from Business Source Premier database.

[10] Schewe, C., and Meredith, G. (2004). Segmenting global markets by generational cohorts: determining motivations by age. Journal of Consumer Behaviour, 4(1), 51-63. Retrieved from Business Source Premier database.

[11] Solomon, M. R. (2010). Consumer Behavior: Buying, Having, and Being (9th ed.). Prentice Hall.

[12] Tonks, D. (2009). Validity and the design of market segments. Journal of Marketing Management, 25(3/4), 341-356. Retrieved from Business Source Premier database.

[13] Tynan, A., and Drayton, J. (1987). Market segmentation. Journal of Marketing Management, 2(3), 301-335. Retrieved from Business Source Premier database.

[14] Welch, D., and Kiley, D. (2009). The incredible shrinking boomer economy. BusinessWeek, (41), 26-30. Retrieved from Business Source Premier database.


A Study of Corporate Social Responsibility in India

Sandeep Kaur
Bhai Gurdas Institute of Engineering and Technology, Sangrur
sandeepbgiet@gmail.com

Seema Jain
Bhai Gurdas Institute of Engineering and Technology, Sangrur
seema.cics@gmail.com

ABSTRACT
Socially responsible human resource management (SRHRM), defined as corporate social responsibility (CSR) directed at employees, underpins the successful implementation of CSR. While its relationship with employee social behavior has been conceptualized and has received some empirical support, its effect on employee work behaviors has not been explored. In this article we develop and test a meso-mediated moderation model that explains the underlying mechanisms through which CSR affects employee task performance and extra-role helping behavior. The results of multilevel analysis show that organization-level CSR is an indirect predictor of individual task performance and extra-role helping behavior through the mediation of individual-level organizational identification. In addition, the mediation model is moderated by employee-level perceived organizational support, and the relationship between organizational identification and extra-role helping behavior is moderated by organization-level cooperative norms. These findings provide important insights into why and when CSR influences employee work behavior.
Keywords: corporate social responsibility; society; organizational development.

1. INTRODUCTION
Corporate social responsibility (CSR, also called corporate conscience, corporate citizenship or sustainable responsible business) is a form of corporate self-regulation integrated into a business model. CSR policy functions as a self-regulatory mechanism whereby a business monitors and ensures its active compliance with the spirit of the law, ethical standards and international norms. With some models, a firm's implementation of CSR goes beyond compliance and engages in "actions that appear to further some social good, beyond the interests of the firm and that which is required by law." CSR aims to embrace responsibility for corporate actions and to encourage a positive impact on the environment and stakeholders, including consumers, employees, investors, communities, and others.
The term "corporate social responsibility" became popular in the 1960s and has remained a term used indiscriminately by many to cover legal and moral responsibility more narrowly construed. Proponents argue that corporations increase long-term profits by operating with a CSR perspective, while critics argue that CSR distracts from business' economic role. A 2000 study compared existing econometric studies of the relationship between social and financial performance, concluding that the contradictory results of previous studies reporting positive, negative, and neutral financial impact were due to flawed empirical analysis, and claimed that when the study is properly specified, CSR has a neutral impact on financial outcomes. Critics have questioned the "lofty" and sometimes "unrealistic expectations" in CSR, or argued that CSR is merely window-dressing, or an attempt to pre-empt the role of governments as a watchdog over powerful multinational corporations.

1.1 Consumer Perspectives
Most consumers agree that while achieving business targets, companies should do CSR at the same time. Most consumers believe companies doing charity will receive a positive response. Somerville also found that consumers are loyal and willing to spend more on retailers that support charity. Consumers also believe that retailers selling local products will gain loyalty. Smith shares the belief that marketing local products will gain consumer trust. However, environmental efforts are receiving negative views given the belief that this would affect customer service.

1.2 Approaches
A more common approach to CSR is corporate philanthropy; this includes monetary donations and aid given to nonprofit organizations and communities. Donations are made in areas such as the arts, education, housing, health, social welfare and the environment, among others, but exclude political contributions and commercial event sponsorship.
Another approach to CSR is to incorporate the CSR strategy directly into operations, for instance through procurement of Fair Trade tea and coffee.
Creating Shared Value, or CSV, is based on the idea that corporate success and social welfare are interdependent. A business needs a healthy, educated workforce, sustainable resources and adept government to compete effectively. For society to thrive, profitable and competitive businesses must be developed and supported to create income, wealth, tax revenues and philanthropy.


2. OBJECTIVE OF THE STUDY
• The aim of this research is to examine how CSR can contribute to building organizational-level social capital.
• The study focuses on investigating the strategic role that CSR activities play in yielding better organizational performance/profitability through development of intangible organizational resources (social capital/reputational capital).
• To study the issues and challenges faced by CSR in India.
• To make suggestions for accelerating CSR initiatives.

3. A LITERATURE REVIEW
In recent years the business strategy field has experienced the renaissance of corporate social responsibility (CSR) as a major topic of interest. The concept has not surfaced for the first time. CSR had already attracted considerable interest in the 1960s and 70s, spawning a broad range of scholarly contributions (Cheit, 1964; Heald, 1970; Ackermann & Bauer, 1976; Carroll, 1979), and a veritable industry of social auditors and consultants. However, the topic all but vanished from most managers' minds in the 1980s (Dierkes & Antal, 1986; Vogel, 1986). Having blossomed in the 1970s, CSR all but vanished and only re-emerged in recent years. CSR resurfaced forcefully over the past ten years in response to mounting public concern about globalization. Firms find themselves held responsible for human rights abuses by their suppliers in developing countries; interest groups demand corporate governance to be transparent and accountable; rioters from Seattle to Genoa protest violently against the cost of free trade and other perceived negative consequences of globalization.

3.1 Individual Level: CSR as Moral Choices of Managers
At the individual level, CSR has been constructed by Ackermann (1975) as managerial discretion. According to this view managerial actions are not fully defined by corporate policies and procedures. So although managers are constrained by their work environment, they nonetheless have to weigh the moral consequences of the choices they make. "The purpose of stakeholder management was to devise a framework to manage strategically the myriad groups that influenced, directly and indirectly, the ability of a firm to achieve its objectives." (Freeman & Velamuri, 2006) The aim of stakeholder management is thus to analyze how a company can serve its customers and be lucrative while also serving its other stakeholders such as suppliers, employees, and communities. Recently the stakeholder perspective has dominated the reinterpretation of CSR, pushing the question of the legitimacy of corporate power as well as the moral dimension of managerial decisions more into the background.

3.2 Organizational Level: CSR as Stakeholder Management
With Freeman's (1984) seminal book, the focus moved from legitimacy and morals towards a new theory of the firm. Social considerations are thus no longer outside an organization but are part of its purpose of being. CSR thus becomes a question of stakeholder identification, involvement, and communication (Mitchell, Agle, & Wood, 1997; Morsing & Beckmann, 2006; Morsing & Schultz, 2006).

3.3 Global Level: CSR as Sustainable Development
The latest literature tradition to have impacted our understanding of corporate social responsibility is that of sustainable development. It was the Brundtland Commission (1987) that for the first time systematically emphasized the link between poverty, environmental degradation, and economic development. Its definition of sustainable development, as meeting the needs of the present without compromising the ability of future generations to meet theirs, extends the responsibility of firms both inter- and intra-generationally. Thus firms are expected to also consider traditionally unrepresented stakeholders such as the environment as well as future generations. Although many CSR authors have taken up the notion of a "triple bottom line" (Elkington, 1997), there remain important tensions between the CSR and the sustainable development debate (i.e. Dyllick & Hockerts, 2002).

4. RESEARCH METHODOLOGY
The research paper is an attempt at exploratory research, based on secondary data sourced from journals, magazines, articles and media reports. Looking into the requirements of the objectives of the study, the research design employed for the study is of descriptive type. Keeping in view the set objectives, this research design was adopted to have greater accuracy and in-depth analysis of the research study. Available secondary data


was extensively used for the study. The investigator procures the required data through the secondary survey method. Different news articles, books and the Web were used, which were enumerated and recorded.

the business out of the water. Some of the drivers pushing business towards CSR include:
5.1 The Shrinking Role of Government
