
Comparative Analysis of AODV and DSR

Protocols for Varying Number of Nodes


Meena Rao
Asst Prof., Dept. of ECE,
MSIT, GGSIP University
New Delhi
meenarao@msit.in

Abstract - Mobile Ad Hoc Networks (MANETs) are a collection of network nodes that work independently. MANETs are used in diverse applications like conferences, e-classrooms, military applications etc. Since the devices in MANETs act as host as well as router and are also used in critical applications, it becomes necessary that they are supported by efficient routing protocols. Efficient routing protocols also result in good Quality of Service (QoS) parameters. In this paper, two popular reactive routing protocols, the Ad hoc On Demand Distance Vector (AODV) protocol and the Dynamic Source Routing (DSR) protocol, are analyzed for low and high node densities in MANETs. Their QoS parameters are evaluated and the obtained results are analyzed and compared. Simulation is done in MATLAB.
Keywords: MANETs; AODV; DSR; QoS

I. INTRODUCTION

MANETs are self-configuring network of mobile


nodes without any pre-established or fixed
architecture [1]. Here, network nodes act as routers
and relay each other's packets. Nodes can either
communicate through single hop or multihop path
[2]. In single hop MANETs several nodes are
connected. However, only those nodes that are in
communication range of each other can send and
receive packets from one another. As applications
of MANETs diversified it became necessary that all
nodes could communicate with one another. Hence,
multihop MANETs came into use wherein two
nodes can communicate via intermediate nodes. For
proper communication between various nodes in
MANET and for proper utilization of resources, it
is required that MANETs have efficient routing
protocols. MANET routing protocols can be
classified as proactive, reactive and hybrid [3]. In
proactive routing protocols, route information is
maintained in the form of routing tables [4]. In
a constantly changing environment like a MANET, constantly maintaining route information is not feasible. Also, saving information in the form of
routing tables results in more bandwidth
consumption which is not desirable in a resource
constrained environment. Reactive or on-demand
routing protocols are more popular in MANETs.

Neeta Singh
Asst Prof., School of ICT,
Gautam Buddha University
Greater Noida, U.P.
neeta@gbu.ac.in

These routing protocols establish routes as and


when necessary. As routes are established on demand, bandwidth as well as other resources are conserved. Ad hoc On Demand Distance Vector
(AODV) and Dynamic Source Routing (DSR) are
the two popular reactive routing protocols.
Hybrid routing protocols are a combination of
proactive as well as reactive routing protocols. All
the nodes that are within certain radius of each
other use a table driven approach. For nodes
outside this radius, routes are maintained on
demand. However, for a constantly changing system it has been observed that reactive routing protocols provide the best possible results.
In this paper, AODV and DSR protocols have been
studied and their results have been evaluated for
different values of node densities. The results of
both AODV and DSR protocols are compared in
terms of their QoS parameters. The paper is organized as follows: Section I is the introduction; Section II explains the AODV and DSR protocols in detail; Section III describes the simulation setup and implementation details; Section IV analyzes the obtained results; Section V concludes the paper.
II. AODV AND DSR PROTOCOLS
A. AODV Protocol
AODV is a reactive routing protocol wherein routes
are created as and when necessary [5]. Here, when
a node has to send data packets to a destination it
broadcasts route request (RREQ) packets.
Complete route information is obtained in terms of
route reply (RREP) packets. During transmission of
packets from destination to the source, all
intermediate nodes are updated with the current
routing information and current node positions.
Apart from RREQ and RREP, AODV transmits route error (RERR) messages in the network. A RERR is transmitted when the link between the source and the destination breaks or when an intermediate node fails.
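As a rough illustration of the on-demand discovery described above, the sketch below floods RREQ packets over a static neighbour graph and returns the first route found, which plays the role of the RREP in AODV. It is a simplified Python sketch with hypothetical helper names, not the MATLAB simulation code used in this paper.

# Illustrative sketch (not the authors' MATLAB code): a much simplified
# AODV-style route discovery over a static neighbour graph.
from collections import deque

def find_route(neighbours, source, destination):
    """Flood RREQ packets hop by hop; the first RREQ to reach the
    destination defines the route that the RREP would confirm."""
    # Each queued entry carries the path the RREQ has travelled so far,
    # which mimics the reverse route set up by AODV.
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path              # RREP follows this path back to the source
        for nxt in neighbours.get(node, []):
            if nxt not in visited:   # intermediate nodes rebroadcast a RREQ only once
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                      # no route found: a RERR would be reported

if __name__ == "__main__":
    topology = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
    print(find_route(topology, "A", "D"))   # ['A', 'B', 'C', 'D']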
B. DSR Protocol
DSR protocol uses the technique of route caching.
In DSR, sender node knows the complete route to
the destination as it is saved in the route cache.

DSR consists of route discovery as well as route


maintenance. Source node initially searches its
route cache to find a route to the destination. Route
cache saves multiple routes to the same destination.
In case a route is not available in the route cache, then the route discovery process is initiated.
During the route discovery process each node
receives a RREQ and rebroadcasts it until either the
destination node or route to the destination is
found. The RREP is routed back to the initial or source node along the reverse path. Similar to AODV,
DSR too sends a RERR packet whenever it finds an
intermediate link unusable.
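The route-cache behaviour described above can be sketched as follows; the class and method names are hypothetical, and the snippet only illustrates cache lookup, storage of multiple routes and invalidation on a RERR, not the authors' implementation.

# Illustrative sketch of DSR-style route caching (hypothetical helper names).
# Routes are stored as full node lists, mirroring DSR source routing.
class RouteCache:
    def __init__(self):
        self.routes = {}                      # destination -> list of cached routes

    def add(self, destination, route):
        self.routes.setdefault(destination, []).append(route)

    def lookup(self, destination):
        """Return the shortest cached route, or None to trigger route discovery."""
        candidates = self.routes.get(destination, [])
        return min(candidates, key=len) if candidates else None

    def handle_rerr(self, broken_link):
        """Drop every cached route that uses the broken link reported by a RERR."""
        a, b = broken_link
        for dest, routes in self.routes.items():
            self.routes[dest] = [r for r in routes
                                 if not any(r[i:i + 2] == [a, b] for i in range(len(r) - 1))]

cache = RouteCache()
cache.add("D", ["S", "A", "B", "D"])
cache.add("D", ["S", "C", "D"])
print(cache.lookup("D"))        # ['S', 'C', 'D']
cache.handle_rerr(("C", "D"))
print(cache.lookup("D"))        # ['S', 'A', 'B', 'D']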

III. IMPLEMENTATION

TABLE I. Network Specifications

Simulation Parameter     Value
Field Size               100 m x 100 m
Number of Nodes          10, 100
Number of Packets        4000
Number of Rounds         6000
Speed of the Nodes       20 m/sec
Protocols                AODV, DSR

Fig. 1 shows a rectangular field area of size 100 m x 100 m with a destination that is initially placed in the centre. At the start of the data transmission all nodes, including the destination node, move randomly. Systems with 10 and 100 nodes have been considered, with a node mobility of 20 m/sec. All the nodes are randomly placed in the field area and the initial energy of a node is 0.5 J. The total number of packets to be transmitted is 4000 and the number of transmission rounds is 6000. The system with 10 nodes is referred to as a sparse MANET and the system with 100 nodes is referred to as a dense MANET. Distances between all the nodes are calculated using the distance vector calculation of [6].
The average distance between the transmitting device and the destination, D_bs, is calculated following [6]: the expected distance over the field (Eq. (1)) evaluates to

D_bs = 0.765 x (one dimension of the field) / 2        (2)

The calculated average energy of a node after a particular round is given by Eq. (3), the energy model of [6], where R_max is the maximum number of rounds and E_t is the total energy.
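For example, with the 100 m x 100 m field used here, Eq. (2) gives D_bs = 0.765 x 100 / 2, i.e. roughly 38 m between an average node position and the initially centrally placed destination.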

IV. SIMULATION RESULTS


Simulations were performed in MATLAB and the QoS parameters were obtained in terms of throughput, end to end delay and packet delivery fraction.
A. Throughput
It is defined as the total number of data packets
received by the destination over the total simulation
time. Fig. 2 shows throughput obtained for sparse
MANETs. As seen from Fig. 2, throughput is
maximum for AODV protocol, followed by DSR
protocol. It is observed that after 2000 rounds, the throughput is 3965 bits transmitted for the DSR protocol. In case of the AODV protocol, the throughput is 7459 bits transmitted after 2000 rounds. Fig. 3 represents the throughput obtained for dense MANETs. In this case too, the throughput obtained is higher for the AODV protocol, its value being 75700 bits after 2000 rounds. In case of the DSR protocol, the throughput obtained is 41930 bits after 2000 rounds.

Fig. 1. MANET Simulation Setup

Fig. 2. Throughput (Nodes 10)

Table I lists the simulation parameters, which include the field size, number of nodes, number of packets, number of rounds, speed of the nodes and the protocols used.
C. End to End delay


End to end delay is a measure of the time taken by the data packets to reach the destination. Fig. 6 and Fig. 7 show the end to end delay obtained for sparse and dense MANETs respectively. For the sparse MANET with the AODV protocol, the delay after 2000 rounds is 0.02642 secs, after 4000 rounds it is 0.05284 secs and 0.07927 secs after 6000 rounds. For the DSR protocol, the end to end delay after 2000 rounds is 0.0477 secs, 0.09539 secs after 4000 rounds and 0.1431 secs after 6000 rounds.

Fig. 3. Throughput (Nodes 100)

B. PDF
Packet Delivery Fraction (PDF) is defined as the ratio of the total number of data packets received by the destination to the total number of data packets transmitted:

PDF = (packets received) / (packets transmitted)        (4)
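For illustration, the following sketch computes the three QoS metrics used in this paper from a hypothetical per-packet log (packet size in bits, send time, receive time or None if dropped); the data layout is assumed for the example and this is not the paper's MATLAB code.

# Hedged sketch: computing throughput, PDF (Eq. (4)) and average end-to-end
# delay from a hypothetical per-packet log.
def qos_metrics(packets, simulation_time):
    received = [p for p in packets if p["received_time"] is not None]
    throughput = sum(p["bits"] for p in received) / simulation_time   # bits per unit time
    pdf = len(received) / len(packets) if packets else 0.0            # Eq. (4)
    delays = [p["received_time"] - p["sent_time"] for p in received]
    avg_delay = sum(delays) / len(delays) if delays else 0.0
    return throughput, pdf, avg_delay

log = [
    {"bits": 512, "sent_time": 0.0, "received_time": 0.03},
    {"bits": 512, "sent_time": 0.1, "received_time": None},   # dropped packet
    {"bits": 512, "sent_time": 0.2, "received_time": 0.26},
]
print(qos_metrics(log, simulation_time=1.0))   # (1024.0, 0.666..., 0.045)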
Fig. 6. End to End Delay (Nodes 10)

Fig. 4 and Fig. 5 show the PDF obtained for sparse MANETs and dense MANETs respectively. For sparse MANETs, the PDF obtained after 2000 rounds for the DSR protocol is 0.1048. In case of the AODV protocol, the PDF is 0.1892 from 2000 rounds onwards. Similarly, for dense MANETs too, the results obtained are better in case of the AODV protocol.

Fig. 7. End to End Delay (Nodes 100)

V. CONCLUSION
Fig. 4. PDF (Nodes 10)

In this paper AODV as well as DSR protocols have


been simulated for different values of node
densities. QoS parameters have been obtained in
terms of throughput, packet delivery fraction as
well as end to end delay. From the observed results
it can be concluded that the AODV protocol performs better than the DSR protocol for MANETs with different node densities.
REFERENCES

Fig. 5. PDF (Nodes 100)

[1] Chlamtac I., Conti M., Liu J. J.-N., "Mobile Ad Hoc Networking: Imperatives and Challenges," Ad Hoc Networks, Elsevier, pp. 13-64, 2003.
[2] Chen L., Heinzelman W. B., "A Survey of Routing Protocols that Support QoS in Mobile Ad Hoc Networks," IEEE Network, Vol. 1, Issue 6, pp. 30-38, 2007.
[3] Royer E. M. and Perkins C. E., "An Implementation Study of the AODV Routing Protocol," IEEE Wireless Communications and Networking Conference, Vol. 3, pp. 1003-1008, 2000.
[4] Goyal P., "Simulation Study of Comparative Performance of AODV, OLSR, FSR and LAR Routing Protocols in MANET in Large Scale Scenarios," IEEE International Conference on Information and Communication Technologies, pp. 282-286, 2012.
[5] Lee S.-J. and Gerla M., "AODV-BR: Backup Routing in Ad Hoc Networks," IEEE Wireless Communications and Networking Conference (WCNC 2000), Vol. 3, pp. 1311-1316, 2000.
[6] Hossain A., "On the Impact of Energy Dissipation Model on Characteristic Distance in Wireless Networks," International Conference on Industrial and Communication Applications, pp. 1-3, 2011.

Speaker Recognition using MATLAB


Sandeep Singh
Asst Prof., MSIT, GGSIP
University
New Delhi

Payal Mittal
Student, MSIT, GGSIP
University
New Delhi

Abstract- Real time speaker recognition is needed for various voice controlled applications. Background noise influences the overall efficiency of a speaker recognition system and is still considered one of the most challenging issues in Speaker Recognition Systems (SRS). In this paper the MFCC feature is used along with the VQLBG (Vector Quantisation - Linde, Buzo, and Gray) algorithm for designing an SRS. The MFCC features are extracted from the input speech and then vector quantization of the extracted MFCC features is done using the VQLBG algorithm. Speaker identification is done by comparing the features of a newly recorded voice with the database under a specific threshold using the Euclidean distance approach. The entire processing is done using the MATLAB tool. The experimental results show that the proposed method gives good performance for a limited speaker database.
Keywords- Mel frequency cepstral coefficients, MATLAB, Vector Quantization, LBG Algorithm

Priti Nisha Shekhar


Student, MSIT, GGSIP
University
New Delhi

Divyanshi Rathi
Student, MSIT, GGSIP
University
New Delhi

Fig. 1 Speaker recognition system.

I. INTRODUCTION
Speaker recognition is the process of recognizing
the speaker from the database based on the
characteristics in the speech wave. Most of the
speaker recognition systems contain two phases. In
the first phase feature extraction is done. The
unique features from the voice signal are extracted
which are used later for identifying the speaker. The second phase is feature matching, in which we compare the extracted voice features with the database of known speakers. The overall efficiency of
the system depends on how efficiently the features
of the voice are extracted and the procedures used
to compare the real time voice sample features with
the database. A general block diagram of speaker
recognition system is shown in Fig. 1 [1]. It is clear from the diagram that speaker recognition is a 1:N match, where one unknown speaker's extracted features are matched against all the templates in the reference model to find the closest match. The speaker feature with maximum similarity is selected [2].
Research and development on speaker recognition methods and techniques has been undertaken for more than five decades and it is still an active area.

Vector quantization (VQ) is a lossy data


compression method based on the principle of
block coding. It is a fixed-to-fixed length
algorithm. In 1980, Linde, Buzo, and Gray (LBG)
proposed a VQ design algorithm based on a
training sequence. In 1985, Soong et al. used the
LBG algorithm for generating speaker-based vector
quantization (VQ) codebooks for speaker
recognition. VQ is often used for computational
speed-up techniques. It also provides competitive accuracy when combined with background model adaptation. The results give very good performance in terms of the memory space and computation required. We have used the VQ approach in our work as it is easy to implement and gives accurate results [3].
The remainder of this paper is organized as
follows. In section II, MFCC used for feature
extraction from the input voice is presented in
detail. Section III gives the details about the vector
quantization method used for feature matching.
Section IV gives the experimental results and
finally in section V conclusions are drawn.

II. FEATURE EXTRACTION


Speaker recognition is the process of automatically
recognizing who is speaking on the basis of
individual information included in speech waves.
Speaker recognition system consists of two
important phases. The first phase is the training phase, in which a database is created that acts as a reference for the second phase, testing. The testing phase consists of recognizing a particular speaker.

The mel scale has approximately linear frequency spacing below 1000 Hz and logarithmic spacing above 1000 Hz. The approximation of mel from frequency can be expressed as

mel(f) = 2595 * log10(1 + f/700)        (1)

where f denotes the real frequency and mel(f) denotes the perceived frequency. The block diagram showing the computation of MFCC is shown in Fig. 2.
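A quick numerical check of Eq. (1), assuming the common base-10 form of the logarithm:

import math

def hz_to_mel(f_hz):
    # Eq. (1): mel(f) = 2595 * log10(1 + f / 700)
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

print(round(hz_to_mel(1000), 1))   # about 1000 mel: roughly linear up to 1 kHz
print(round(hz_to_mel(8000), 1))   # about 2840 mel: strongly compressed above 1 kHz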

Speaker recognition systems contain three main modules [4]:
(1) Acoustic processing
(2) Feature extraction
(3) Feature matching
These processes are explained in detail in subsequent sections.
Fig.2 MFCC Extraction

A. Acoustic Processing
Acoustic processing is the sequence of processes that receives the analog signal from a speaker and converts it into a digital signal for digital processing. Human speech frequency usually lies between 300 Hz and 8 kHz [5]. Therefore a 16 kHz sampling rate can be chosen for recording, which is twice the highest frequency of the original signal and follows the Nyquist rule of sampling [6]. The start and end detection of an isolated signal is a straightforward process which detects abrupt changes in the signal through a given energy threshold. The result of acoustic processing is a discrete time voice signal which contains meaningful information. The signal is then fed into a spectral analyser for feature extraction.
B. Feature Extraction
Feature Extraction module provides the acoustic
feature vectors used to characterize the spectral
properties of the time varying speech signal such
that its output eases the work of the recognition stage.
MFCC Extraction
Mel frequency cepstral coefficients (MFCC) are probably the best known and most widely used features for both speech and speaker recognition [7]. A mel is a unit of measure based on the human ear's perceived frequency, with the mel scale described by Eq. (1) above.

In the first stage the speech signal is divided into frames of length 20 to 40 ms with an overlap of 50% to 75%. In the second stage each frame is windowed with some window function to minimize the discontinuities of the signal by tapering the beginning and end of each frame to zero. In the time domain, windowing is a point-wise multiplication of the framed signal and the window function. A good window function has a narrow main lobe and low side lobe levels in its transfer function. In our work the Hamming window is used to perform the windowing. In the third stage the DFT block converts each frame from the time domain to the frequency domain [8],[9].
In the next stage mel frequency warping is done to transform the real frequency scale to the human perceived frequency scale, called the mel-frequency scale. The new scale is spaced linearly below 1000 Hz and logarithmically above 1000 Hz. The mel frequency warping is normally realized by triangular filter banks, with the center frequencies of the filters evenly spaced on the mel-frequency axis (Fig. 3).
The warped axis is implemented according to Equation (1) so as to mimic the human ear's perception. The output of the i-th filter is given by

Y(i) = sum over j of S(j) * Omega_i(j)

where S(j) is the N-point magnitude spectrum (j = 1:N) and Omega_i(j) is the sampled magnitude response of an M-channel filter bank (i = 1:M). In the fifth stage the log of the filter bank outputs is computed, and finally the DCT (Discrete Cosine Transform) of the log filter bank outputs is computed to obtain the MFCCs, where N is the number of points used to compute the standard DFT. Fig. 4 shows a screen shot of the GUI developed using MATLAB showing the input speech and the MFCC plots.
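The stages described above (framing, Hamming windowing, magnitude spectrum, triangular mel filter bank, log and DCT) can be summarised in the following minimal numpy sketch; the frame length, hop, FFT size and filter count are illustrative defaults, not the exact settings used in this work.

import numpy as np

def mfcc_frames(signal, fs, frame_ms=25, hop_ms=10, n_filters=20, n_coeffs=13):
    """Minimal MFCC sketch: framing, Hamming window, magnitude spectrum,
    triangular mel filter bank, log, and DCT-II."""
    frame_len = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    window = np.hamming(frame_len)
    n_fft = 512

    # Triangular filters with centres evenly spaced on the mel axis (Eq. (1)).
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_points = np.linspace(mel(0), mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * inv_mel(mel_points) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge

    # DCT-II basis used to turn log filter-bank outputs into cepstral coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi / n_filters * (n + 0.5)[None, :] * np.arange(n_coeffs)[:, None])

    coeffs = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame, n_fft))          # S(j)
        energies = np.log(fbank @ spectrum + 1e-10)           # log filter-bank outputs
        coeffs.append(dct @ energies)
    return np.array(coeffs)

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    print(mfcc_frames(np.sin(2 * np.pi * 440 * t), fs).shape)   # (number of frames, 13)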

Fig.3 Triangular Filter Bank

III. FEATURE MATCHING


VQ is a process of mapping vectors from a large
vector space to a finite number of regions in that
space [10],[12]. Each region is called a cluster and
can be represented by its center called a codeword.
The collection of all code words is called a
codebook. Fig. 5 shows a conceptual diagram to illustrate this recognition process, where only two speakers and two dimensions of the acoustic space are shown [15], [17]. The circles are the acoustic vectors from speaker 1 whereas the triangles refer to speaker 2.
In the training phase, a speaker-specific VQ
codebook is generated for each known speaker by
clustering his training acoustic vectors. The
resultant codewords which are called centroids are
shown in Fig. 5 by black circles and black triangles
for speaker 1 and 2, respectively.
The distance from a vector to the closest codeword
of a codebook is called a VQ-distortion. In the
recognition phase, an input utterance of an
unknown voice is vector-quantized using each
trained codebook and the total VQ distortion is
computed [11]. A sequence of feature vectors {x1, x2, ..., xn} for the unknown speaker is extracted. Then each feature vector of the input is compared with all the codebooks, and the codebook with the least average distance is chosen as the best match. The Euclidean distance between two points P = (p1, p2, ..., pn) and Q = (q1, q2, ..., qn) is given by

d(P, Q) = sqrt( (p1 - q1)^2 + (p2 - q2)^2 + ... + (pn - qn)^2 )

The speaker corresponding to the VQ codebook


with the smallest total distortion is identified as the speaker of the input speech. One speaker can be discriminated from another based on the location of the centroids (adapted from Soong et al., 1987) [14]. Vector quantization codebook formation is shown in the diagram below.
Fig.4 GUI waveforms showing input speech and MFCC

4. Centroid Update: update the codeword in each


cell using the centroid of the training vectors
assigned to that cell.
5. Iteration 1: repeat steps 3 and 4 until the average distance falls below a preset threshold.
6. Iteration 2: repeat steps 2, 3 and 4 until a codebook of size M is designed.

Fig.5 Conceptual diagram showing vector quantization


codebook formation.

A. Clustering the Training Vectors


After the enrolment session, the acoustic vectors
extracted from input speech of each speaker
provide a set of training vectors for that speaker.
Then a speaker-specific VQ codebook is built for each speaker using those training vectors. The LBG algorithm [Linde, Buzo and Gray, 1980] is used for clustering the set of L training vectors into a set of M codebook vectors. The LBG algorithm designs an M-vector codebook in stages. It starts by designing a 1-vector codebook, then uses a splitting technique on the codewords to initialize the search for a 2-vector codebook, and continues the splitting process until the desired M-vector codebook is obtained.
The algorithm is implemented by the following
recursive procedure [12], [16]:
1. Design a 1-vector codebook; this is the centroid
of the entire set of training vectors (hence, no
iteration is required here).
2. Double the size of the codebook by splitting each codeword yn of the current codebook according to the rule

yn+ = yn (1 + e),    yn- = yn (1 - e)

where n varies from 1 to the current size of the codebook, and e is a splitting parameter (we choose e = 0.01).
3. Nearest-Neighbor Search: for each training
vector, find the codeword in the current codebook
that is closest (in terms of similarity measurement),
and assign that vector to the
corresponding cell (associated with the closest
codeword).

Fig.6 Flow diagram of the LBG algorithm (Adapted from


Rabiner and Juang, 1993)

Fig. 6 [17] gives the steps of the LBG algorithm. "Cluster vectors" represents the nearest-neighbor search procedure, which assigns each training vector to the cluster associated with the closest codeword. "Find centroids" is the centroid update procedure. "Compute D (distortion)" sums the distances of all training vectors in the nearest-neighbor search so as to determine whether the procedure has converged.
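A compact numpy sketch of the LBG training loop and the VQ-distortion matching described above is given below; the toy random data and codebook size are illustrative (the codebook size is assumed to be a power of two), and this is not the authors' MATLAB implementation.

import numpy as np

def lbg_codebook(vectors, size_m, eps=0.01, tol=1e-3):
    """LBG sketch: start from a 1-vector codebook, split with (1 +/- eps),
    then alternate nearest-neighbour search and centroid update.
    size_m is assumed to be a power of two."""
    codebook = vectors.mean(axis=0, keepdims=True)                          # step 1
    while len(codebook) < size_m:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # step 2
        prev = np.inf
        while True:
            d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            nearest = d.argmin(axis=1)                        # step 3: cluster vectors
            distortion = d[np.arange(len(vectors)), nearest].mean()
            for k in range(len(codebook)):                    # step 4: find centroids
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
            if prev - distortion < tol:                       # step 5: check convergence
                break
            prev = distortion
    return codebook

def vq_distortion(vectors, codebook):
    """Average distance from each test vector to its closest codeword."""
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Recognition sketch: pick the speaker whose codebook gives the least distortion.
rng = np.random.default_rng(0)
train = {"spk1": rng.normal(0, 1, (200, 13)), "spk2": rng.normal(3, 1, (200, 13))}
books = {s: lbg_codebook(v, 8) for s, v in train.items()}
test = rng.normal(3, 1, (50, 13))
print(min(books, key=lambda s: vq_distortion(test, books[s])))   # spk2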
IV. CONCLUSIONS
The recognition rate obtained in this work using MFCC and LBG-VQ is good. The VQ distortion between the resultant codebook and the MFCCs of an unknown speaker is used for the speaker recognition. MFCCs are used because they mimic the human ear's response to sound signals. Analysis of the recognition accuracy against codebook size shows that the performance of the proposed system increases with an increase in the number of centroids. However, VQ has certain limitations and its efficiency deteriorates when the database size is large; hence HMM or neural network techniques can be used to improve the performance and increase the accuracy.

REFERENCES

[1] Ch. Srinivasa Kumar, P. Mallikarjuna Rao, "Design of an Automatic Speaker Recognition System using MFCC, Vector Quantization and LBG Algorithm," International Journal on Computer Science and Engineering, Vol. 3, No. 8, 2011, pp. 2942-2954.
[2] Amruta Anantrao Malode, Shashikant Sahare, "Advanced Speaker Recognition," International Journal of Advances in Engineering & Technology, Vol. 4, Issue 1, 2012, pp. 443-455.
[3] A. Srinivasan, "Speaker Identification and Verification using Vector Quantization and Mel Frequency Cepstral Coefficients," Research Journal of Applied Sciences, Engineering and Technology, 4(1): 33-40, 2012.
[4] Vibha Tiwari, "MFCC and its Applications in Speaker Recognition," International Journal on Emerging Technologies, 1(1): 19-22, 2010.
[5] Md. Rashidul Hasan, Mustafa Jamil, Md. Golam Rabbani, Md. Saifur Rahman, "Speaker Identification using Mel Frequency Cepstral Coefficients," 3rd International Conference on Electrical & Computer Engineering (ICECE 2004), 28-30 December 2004, Dhaka, Bangladesh.
[6] Fu Zhonghua, Zhao Rongchun, "An Overview of Modeling Technology of Speaker Recognition," IEEE Proceedings of the International Conference on Neural Networks and Signal Processing, Vol. 2, pp. 887-891, Dec. 2003.
[7] S. Furui, "Speaker-independent Isolated Word Recognition using Dynamic Features of Speech Spectrum," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-34, pp. 52-59, Feb. 1986.
[8] S. Furui, "Cepstral Analysis Technique for Automatic Speaker Verification," IEEE Trans. Acoust., Speech, Signal Process., vol. 29(2), pp. 254-272, Apr. 1981.
[9] D. A. Reynolds, "Experimental Evaluation of Features for Robust Speaker Identification," IEEE Trans. Speech Audio Process., vol. 2(4), pp. 639-643, Oct. 1994.
[10] B. Yegnanarayana, K. Sharat Reddy, S. P. Kishore, "Source and System Features for Speaker Recognition using AANN Models," in Proc. Int. Conf. Acoust., Speech, Signal Process., Utah, USA, Apr. 2001.
[11] C. S. Gupta, "Significance of Source Features for Speaker Recognition," Master's thesis, Indian Institute of Technology Madras, Dept. of Computer Science and Engg., Chennai, India, 2003.
[12] Y. Linde, A. Buzo, R. M. Gray, "An Algorithm for Vector Quantizer Design," IEEE Trans. Communications, vol. COM-28(1), pp. 84-96, Jan. 1980.
[13] R. Gray, "Vector Quantization," IEEE Acoust., Speech, Signal Process. Mag., vol. 1, pp. 4-29, Apr. 1984.
[14] F. K. Soong, A. E. Rosenberg, L. R. Rabiner, B. H. Juang, "A Vector Quantization Approach to Speaker Recognition," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., vol. 10, Detroit, Michigan, Apr. 1985, pp. 387-390.
[15] T. Matsui, S. Furui, "Comparison of Text-independent Speaker Recognition Methods using VQ-distortion and Discrete/Continuous HMMs," IEEE Trans. Speech Audio Process., vol. 2(3), pp. 456-459, July 1994.
[16] DSP Mini-Project: An Automatic Speaker Recognition System, http://www.ifp.uiuc.edu/~minhdo/teaching/speaker_recognition
[17] Voice Recognition using DSP, http://azharpaperpresentation.blogspot.in/2010/04/voice-recognition-using-dsp.html

COCOMO Model Using Neuro-Fuzzy


Logic Technique
Anupama Kaushik
Asst. Prof. Dept of IT
MSIT, GGSIP University
New Delhi
anupama@msit.in

Indu
Student, Dept of IT
MSIT, GGSIP University
New Delhi
panchithakur8@gmail.com

Abstract- Software project planning is one of the most important activities in software projects. Nowadays the rate of project failures is increasing, and the main reason behind this is the imprecision of software estimation in software project planning. Software cost estimation is the process of predicting the effort required to develop a software system. In the field of software project management, effort estimation is an important area of research and a challenging task. Though a number of estimation models exist for predicting effort, newer models are still being proposed, and research to obtain more accurate estimation models is still going on. In this paper we present the most common and widely used effort estimation techniques using neuro-fuzzy logic. A neuro-fuzzy Constructive Cost Model (COCOMO) is proposed for software cost estimation. This model carries some of the desirable features of a neuro-fuzzy approach, such as learning ability and good interpretability, while maintaining the merits of the COCOMO model.
Keywords: software project planning, project failures, software cost estimation, effort, COCOMO, neuro-fuzzy

I. INTRODUCTION
The process of approximating the probable cost of a product, program, or project is called cost estimation. Software cost estimation starts at the proposal stage and continues throughout the lifetime of a project.
Accurate cost estimation is important for the following reasons: it can help to classify and prioritize development projects with respect to an overall business plan; it helps to determine what resources to commit to the project and how well these resources will be used; it helps to assess the impact of changes and support replanning; projects are easier to manage and control when resources are better matched to real needs; and customers expect actual development costs to be in line with estimated costs.
Inaccurate estimation is a real problem in the software production world and needs to be solved, despite the pessimistic statistics reported.

Vinay Gautam
Student, Dept of IT
MSIT, GGSIP University
New Delhi
gautamv1532@gmail.com

Predicting software output metrics is a challenging task, as the relationships between software output metrics and contributing factors exhibit strong, complex, nonlinear characteristics, and software metrics measurements are often imprecise and uncertain. There are several approaches to solve this problem, but there is no single software development estimation technique suited for all situations. In this paper we have taken into consideration the features of the cost estimation problem and various techniques, and have proposed a neuro-fuzzy COCOMO model [1].
Existing cost estimation methods are basically of two types: algorithmic and non-algorithmic.
Non-algorithmic methods: In the 1990s non-algorithmic models were introduced and proposed for project cost estimation. Software researchers have turned their attention to new approaches that are based on soft computing, such as artificial neural networks, fuzzy logic models and genetic algorithms. Non-algorithmic methods do not use a formula to calculate the software cost estimate. For performing accurate estimation, we need to use both types of cost estimation methods. The better the requirements are known, the better their performance will be.
Algorithmic methods are designed to provide mathematical equations which are used to perform software estimation. Research and historical data of cost measurement are the basis for these mathematical equations. These methods use inputs such as Source Lines of Code (SLOC), number of functions to perform, and other cost drivers such as language, design methodology, skill levels, risk assessments, etc. A few examples of algorithmic models are the COCOMO models [7],[8], the Putnam model [9], and function points based models.
The aim of this paper is to present a possible way of combining fuzzy logic and neural networks for achieving higher accuracy. Combining the fuzzy logic concept with a neural network results in a neuro-fuzzy system having the advantages of both

techniques. The neural network research started in


the 1940s, and the fuzzy logic research started in
the 1960s, but the Neuro-fuzzy research area is
relatively new [2].
II. BACKGROUND AND RELATED WORK
Modeling accuracy largely affects estimation accuracy. Good models for software cost estimation are hard to find, and such models are critical for building and planning in software engineering. In recent years many software estimation models have been developed [1],[3],[4],[5]. A fuzzy logic model for development time estimation was proposed by López Martín et al. [3]. An enhanced fuzzy logic model for software development effort estimation, which had capabilities similar to the previous fuzzy logic model in addition to improvements in empirical accuracy in terms of MMRE, was described by Su et al. [4]. Also, Abbas Heiat [5] used artificial neural network techniques such as RBF (Radial Basis Function) and MLP (Multi-Layer Perceptron) networks for estimating software development effort. Furthermore, a novel neuro-fuzzy Constructive Cost Model (COCOMO) was developed by Xishi Huang et al. [1] for software cost estimation, which uses the desirable features of a neuro-fuzzy approach, such as good interpretability and learning ability, within the COCOMO model. As the neuro-fuzzy COCOMO model is based on the standard COCOMO model, the fuzzy logic model and neural network models, we briefly review these techniques.
III. FRAMEWORK OF COCOMO MODELS
(A) COCOMO model:
The COCOMO cost and schedule estimation model was originally published by Boehm [7]. It is one of the most popular parametric cost estimation models of the 1980s. However, difficulties were experienced in estimating the costs of software developed according to new lifecycle processes and capabilities with COCOMO 81 and its 1987 Ada update. Thus, the COCOMO II [8] research effort was started in 1994 at the University of Southern California. This research effort was undertaken to address issues of non-sequential and rapid development process models, reuse driven approaches, reengineering, object oriented approaches, etc. The COCOMO model is a hierarchy of software cost estimation models, namely:
(a) Basic COCOMO model
This model computes software development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of source lines of code (SLOC).
The basic COCOMO equations take the form:

Effort Applied:    E = a * (SLOC)^b   [man-months]        (1)
Development Time:  D = c * (E)^d      [months]            (2)
People Required:   P = E / D          [count]             (3)

where SLOC is the estimated number of delivered lines of code (expressed in thousands) for the project. The coefficients a, b, c and d depend upon the three modes of development of projects.
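As a worked illustration of Eqs. (1)-(3), the sketch below uses the commonly published coefficient sets for the three development modes from Boehm's original model; it is an illustrative sketch, not a calibrated estimation tool.

# Hedged sketch of the basic COCOMO equations (1)-(3), using the commonly
# published coefficient sets for the three development modes (Boehm, 1981).
MODES = {                    # (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(ksloc, mode="organic"):
    a, b, c, d = MODES[mode]
    effort = a * ksloc ** b          # E, person-months
    duration = c * effort ** d       # D, months
    people = effort / duration       # P
    return effort, duration, people

e, d, p = basic_cocomo(32, "organic")
print(round(e, 1), round(d, 1), round(p, 1))   # roughly 91 PM, 14 months, about 7 people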
(b) Intermediate COCOMO model
The estimated effort in person-months (PM) for the intermediate COCOMO is given as:

Effort = a * (SLOC)^b * EAF        (4)

where EAF is the effort adjustment factor obtained as the product of the cost driver effort multipliers, the coefficient a is known as the productivity coefficient and the coefficient b is the scale factor. They are based on the different development modes of the project.
(c) Detailed COCOMO model
Detailed COCOMO incorporates all characteristics of the intermediate version with an assessment of the cost drivers' impact on each step (analysis, design, etc.) of the software engineering process. In detailed COCOMO, the effort is calculated as a function of program size and a set of cost drivers given according to each phase of the software life cycle. The four phases of detailed COCOMO are: plan and requirements; system design; detailed design; and module code and test.
(B) COCOMO II model:
The COCOMO II model is a regression based software cost estimation model and is thought to be the most cited, best known and most plausible of all traditional cost prediction models. COCOMO II comprises the following models [8]:
(a) Application Composition Model: According to this model, systems are created from reusable components, scripting or database programming. Prototyping efforts are involved for resolving potential high risk issues such as user interfaces, software/system interaction, performance, or technology maturity. Software size estimates are based on application points / object points, and a simple size/productivity formula is used to estimate the effort required.
(b) Early Design Model: It uses a small set of new cost drivers and new estimating equations. It uses Unadjusted Function Points (UFP) as the measure of size.
(c) Post Architecture Model: After the system architecture has been designed, a more accurate software size estimation can be made. It covers the development and maintenance of the software product. It uses function points and LOC as measures of size. This model describes 17 cost drivers, rated from very low to extra high.
Table I and Table II list the COCOMO II cost drivers and scale factors.

TABLE I: COCOMO II cost drivers with given range.

Code   Cost driver                  Range
RELY   Reliability required         0.82-1.26
DATA   Database size                0.90-1.28
CPLX   Product complexity           0.73-1.74
RUSE   Required reusability         0.95-1.25
DOCU   Documentation                0.81-1.23
TIME   Execution time constraint    1.00-1.63
STOR   Main storage constraint      1.00-1.46
PVOL   Platform volatility          0.87-1.30
ACAP   Analyst capability           1.42-0.72
PCAP   Programmer capability        1.34-0.76
PCON   Personnel continuity         1.29-0.81
AEXP   Analyst experience           1.22-0.81
PEXP   Programmer experience        1.19-0.85
LTEX   Language & tool experience   1.20-0.84
TOOL   Use of software tools        1.17-0.78
SITE   Multi-site development       1.22-0.80
SCED   Schedule                     1.43-1.00

TABLE II: COCOMO II scale factors.

Scale factor   Name                           Range
PREC           Precedentedness                0.00-6.20
FLEX           Development flexibility        0.00-5.07
RESL           Architecture/risk resolution   0.00-7.07
TEAM           Team cohesion                  0.00-5.48
PMAT           Process maturity               0.00-7.80

The COCOMO II Post Architecture Model is defined as:

PM = A * Size^E * (product of the EM_i)        (5)
with E = B + 0.01 * (sum of the SF_j)

where A and B are baseline calibration constants, Size refers to the size of the software project measured in terms of thousands of Source Lines of Code (kSLOC), the SF_j are the scale factors, and the EM_i are the effort multipliers.
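A minimal sketch of Eq. (5) is shown below; the calibration constants A and B and the nominal driver values are commonly quoted COCOMO II values, used here only for illustration.

import math

def cocomo2_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    """Eq. (5) sketch: PM = A * Size^(B + 0.01 * sum(SF)) * product(EM).
    A and B are placeholder calibration constants."""
    exponent = B + 0.01 * sum(scale_factors)
    em_product = math.prod(effort_multipliers)
    return A * ksloc ** exponent * em_product

sf = [3.72, 3.04, 4.24, 3.29, 4.68]    # five scale factors at nominal-like ratings
em = [1.0] * 17                        # all 17 effort multipliers at nominal
print(round(cocomo2_effort(100, sf, em), 1))   # about 465 person-months for 100 kSLOC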

IV. PROPOSED WORK

Significant effort has been put into the research of developing software estimation models using neural networks [10],[11]. Neural networks are based on the principle of learning from examples, with no prior information being specified. Neural networks are characterized by three entities: the neurons, the interconnection structure and the learning algorithm. For developing software models using neural networks, multilayer feed-forward networks are used. The development of such a neural network model starts with an appropriate layout of neurons and connections between the network nodes. For this we need to define the number of layers of neurons, the number of neurons within each layer, and the manner in which they are linked. The activation functions of the nodes and the specific training algorithm to be used are also determined. After building the network, the model is trained by providing a set of historical project data inputs together with the correspondingly known actual project effort values. The training algorithm then iterates and the weights (parameters) are automatically adjusted until they converge. Once this is done, new inputs can be presented to the neural network to predict the corresponding project effort. In general, large data sets are needed to accurately train neural networks.
A fuzzy model is used when the conventional approach fails to analyse the system, or when the available data is uncertain, inaccurate or vague. The fuzzy model uses the fuzzy logic concepts introduced by Lotfi A. Zadeh [12]. The three main components of a fuzzy model are: the fuzzification process, inference from fuzzy rules, and the defuzzification process. In the fuzzification process an objective term is transformed into a fuzzy concept: the confidence factor, or membership value (MV), is determined by applying membership functions to the actual values of the variables, and input and output are expressed in linguistic terms. In inferencing, the conditions of the rules are evaluated and the confidence factors of the conditions are propagated to the conclusions of the rules. The translation of the fuzzy output into objective terms is called defuzzification. Popular fuzzy logic systems can be categorised into three types: pure fuzzy logic systems, Takagi and Sugeno's fuzzy system, and fuzzy logic systems with fuzzification and defuzzification [12].
The basic idea behind the neuro-fuzzy system is to hybridize neural networks and fuzzy logic. A neuro-fuzzy system is based on a fuzzy system which is trained by a learning algorithm derived from neural network theory. The (heuristic) learning procedure operates on local information and causes only local modifications in the underlying fuzzy system. A neuro-fuzzy system can be viewed as a 3-layer feedforward neural network. The first layer represents the input variables, the middle (hidden) layer represents the fuzzy rules and the third layer represents the output variables. Fuzzy sets are encoded as (fuzzy) connection weights [1],[2].
Significant effort has been put into the research of developing software estimation models using neuro-fuzzy networks [1],[2]. X. Huang et al. in 2007 [1] proposed a model whose inputs are the software size and the ratings of 22 cost drivers, comprising 5 scale factors (SFRi) and 17 effort multipliers (EMi). The output is the software development effort estimation [1]. Cost drivers can have ratings in continuous numerical values or in linguistic terms such as low, nominal and high.

Sub-model NFi is used for the translation of the qualitative rating of a cost driver into a quantitative multiplier value and to calibrate these relations using industry project data [1]. Note that not all six rating levels are valid for all cost drivers. For each NFi the Adaptive Neuro-Fuzzy Inference System (ANFIS) is adopted [13].

Fig. 2. Structure of the neuro-fuzzy subsystem NFi

Input and output: There is one input and one corresponding output for each NFi. The input of NFi is the adjusted rating value of the i-th cost driver Ci, and its output is the corresponding multiplier value (the scale-factor value for i = 1, 2, ..., 5 and the effort-multiplier value for the remaining 17 cost drivers). The structure of the subsystem NFi is shown in Fig. 2 above; it is functionally equivalent to a Takagi and Sugeno [14] type of fuzzy system with the following rule:

Fig. 1

In this neuro-fuzzy model there are two major components:
- Sub-model NFi: there are twenty-two sub-models NFi; the rating value of a cost driver is the input and the corresponding multiplier value is the output.
- COCOMO model: the software size and the outputs of the NFi are the inputs of the COCOMO model, and the software effort estimation is the output.

In this neuro-fuzzy model there are 22 cost drivers. Each cost driver represents a factor that contributes to the development effort, such as product complexity or application domain experience. To evaluate the contribution, six qualitative rating levels are used. In linguistic terms, these six rating levels are: very low (VL), low (L), nominal (N), high (H), very high (VH) and extra high (XH). A corresponding value is related to the rating level of every cost driver; this value is called the multiplier value and is the quantitative value used in the COCOMO model.

Fuzzy rule s: If CRi is Ais, then the output is Cis, where s = 1, 2, ..., 6, the fuzzy set Ais corresponds to a rating level ranging from Very Low to Extra High for the i-th cost driver, and Cis is the corresponding parameter value of the s-th rating level. Now let us study each layer of the structure shown in Fig. 2.
Layer 1: The membership value for each rule is calculated in this layer. The activation function of a node in this layer is defined as the corresponding membership function [13], mu_Ais(CRi), where CRi is the adjusted rating value of the i-th cost driver, Ais is the fuzzy set associated with the s-th rating level (such as Low or High), and mu_Ais(CRi) is the membership function of Ais evaluated at CRi.
Layer 2: The firing strength of each rule is calculated in this layer. The inputs of each node are the membership values in the premise of the fuzzy rule, and the output is the product of all input membership values, called the firing strength of the corresponding fuzzy rule. Because there is only one condition in the premise of each fuzzy rule of NFi, the firing strength is simply the membership value obtained from Layer 1:

w_s = mu_Ais(CRi)

Layer 3: This layer normalizes the firing strength of each fuzzy rule. The output of the s-th node is called the normalized firing strength and is defined as:

wbar_s = w_s / (w_1 + w_2 + ... + w_6)

Layer 4: The reasoning result of each rule is calculated in this layer as

f_s = wbar_s * C_is        (13)

Layer 5: All the reasoning results of the fuzzy rules obtained from Layer 4 are summed up in this layer. In summary, the overall output of the neuro-fuzzy subsystem NFi is:

MV_i = sum over s of (wbar_s * C_is)
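The five layers of one NFi subsystem can be sketched as follows; Gaussian membership functions and the particular rating-level parameters are assumptions made only for this example (the membership-function shape and the calibrated parameters would come from the training described next), so this is an illustrative sketch, not the calibrated model of [1].

import numpy as np

def nf_subsystem(cr, centers, sigmas, consequents):
    """Sketch of one NFi: Layers 1-5 for a single input rating value `cr`.
    Gaussian membership functions are an assumed choice; `consequents`
    holds the rule parameters C_is for the six rating levels."""
    mu = np.exp(-0.5 * ((cr - centers) / sigmas) ** 2)   # Layer 1: membership values
    w = mu                                               # Layer 2: firing strengths (single premise)
    w_bar = w / w.sum()                                  # Layer 3: normalized firing strengths
    f = w_bar * consequents                              # Layer 4: per-rule reasoning results
    return f.sum()                                       # Layer 5: multiplier value MV_i

# Six rating levels VL, L, N, H, VH, XH mapped to rating values 0..5 and
# illustrative multiplier parameters for a decreasing cost driver.
centers = np.arange(6.0)
sigmas = np.full(6, 0.5)
consequents = np.array([1.42, 1.19, 1.00, 0.85, 0.71, 0.60])
print(round(nf_subsystem(2.4, centers, sigmas, consequents), 3))   # about 0.94, between the N and H values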

Learning algorithm: For the neuro-fuzzy subsystem NFi, we use the following formula to calculate the numerical multiplier value of cost driver i:

MV_i = NFi(CRi; CDi),   i = 1, 2, ..., 22        (6)

where CDi = [Ci1, Ci2, ..., Ci6] is the parameter vector calibrated by the learning process.
For the COCOMO II post architecture model, we predict the software development effort by:

Effort = f(MV_1, MV_2, ..., MV_22, Size)        (7)

obtained by inserting the multiplier values into Eq. (5). From Eqs. (6) and (7), we can rewrite our neuro-fuzzy model as:

Effort = f(X; CD)        (8)

where X = [CR_1, CR_2, ..., CR_22, Size] and CD = [CD_1, CD_2, ..., CD_22].
Now let us derive the learning algorithm. Given NN project data points (X_n, Effort_n), n = 1, 2, ..., NN, the learning problem for the parameters CD can be formulated as the optimization problem of minimizing a weighted sum E of the squared errors between the effort predicted by the model for Project n and the actual effort Effort_n (Eq. (9)), subject to monotonic constraints on the parameters (Eqs. (10) and (11)): for the set of increasing cost drivers, whose higher rating values correspond to higher development effort, the parameters Cis must not decrease with the rating level s; for the set of decreasing cost drivers, whose higher rating values correspond to lower development effort, the parameters Cis must not increase with s. Here X_n contains the software size and the cost driver ratings for Project n, w_n is the weight of Project n, and Effort_n is the actual effort for Project n. The parameters are then updated iteratively by a gradient-type learning rule (Eqs. (12), (14)-(16)), where eta > 0 is the learning rate.

V. CONCLUSION

Software project planning is one of the most important activities in software projects. Nowadays the rate of project failures is increasing, and the main reason behind this is the imprecision of software estimation in software project planning. Accurate software development cost estimation is very important for the budgeting, project planning and control, and tradeoff and risk analysis of effective project management. This paper first reviewed the major software cost estimation techniques, including model-based techniques, neural network models and fuzzy logic models; each model has its own advantages and limitations. We have then proposed a neuro-fuzzy logic model and applied it to the COCOMO II model. There are many reasons responsible for inaccurate software cost estimation. Soft computing provides various promising techniques such as fuzzy logic, artificial neural networks and more. The neuro-fuzzy approach has been successfully used in this area but is still new to the field. When we combine the neuro-fuzzy technique with the standard COCOMO model we can take advantage of the desirable features of the neuro-fuzzy approach, such as its learning ability and good interpretability, while maintaining the aspects of the COCOMO model. Finally, the neuro-fuzzy technique allows the integration of numerical data and expert knowledge and can be a powerful tool when tackling important problems in software engineering such as cost and quality prediction.
REFERENCES
[1] X. Huang, D. Ho, J. Ren, L. F. Capretz, "Improving the COCOMO model using a neuro-fuzzy approach," Applied Soft Computing, Vol. 7, Issue 1, 2007, pp. 29-40.
[2] J. Jantzen, "Neurofuzzy Modelling," Report no. 98-H-874, 1998.
[3] C. L. Martin, J. L. Pasquier, M. C. Yanez, T. A. Gutierrez, "Software Development Effort Estimation Using Fuzzy Logic: A Case Study," IEEE Proceedings of the Sixth Mexican International Conference on Computer Science (ENC'05), 2005, pp. 113-120.
[4] M. T. Su, T. C. Ling, K. K. Phang, C. S. Liew, P. Y. Man, "Enhanced Software Development Effort and Cost Estimation Using Fuzzy Logic Model," Malaysian Journal of Computer Science, Vol. 20, No. 2, 2007, pp. 199-207.
[5] A. Heiat, "Comparison of artificial neural network and regression models for estimating software development effort," Information and Software Technology, Vol. 44, Issue 15, 2002, pp. 911-922.
[6] B. W. Boehm, Software Engineering Economics, Prentice Hall, 1981.
[7] B. W. Boehm, et al., Software Cost Estimation with COCOMO II, Prentice Hall, 2000.
[8] L. H. Putnam, "A general empirical solution to the macro software sizing and estimating problem," IEEE Transactions on Software Engineering, 1978, Vol. 2, pp. 345-361.
[9] A. R. Gray, S. G. MacDonell, "A comparison of techniques for developing predictive models of software metrics," Information and Software Technology, 39 (1997), pp. 425-437.
[10] A. Idri, T. M. Khoshgoftaar, A. Abran, "Can neural networks be easily interpreted in software cost estimation?," in: Proceedings of the IEEE International Conference on Fuzzy Systems, 2002, pp. 1162-1167.
[11] L. A. Zadeh, "Fuzzy Logic, Neural Networks and Soft Computing," Communications of the ACM, 37(3): 77-84, 1994.
[12] http://fuzzy.cs.uni-magdeburg.de/nfdef.html
[13] J.-S. R. Jang, "ANFIS: adaptive-network-based fuzzy inference system," IEEE Trans. Syst. Man Cybernetics, 23 (1993), pp. 665-685.
[14] T. Takagi and M. Sugeno, "Fuzzy identification of systems and its application to modeling and control," IEEE Trans. Syst., Man, Cybern., Vol. 15, pp. 116-132, 1985.

Crystal Model of the Earth and BIS Lattice


Vibrations at the Critical Temperatures
Sobinder Singh
Asst Prof., Dept of Applied Sciences
MSIT, GGSIP University
New Delhi
sobinder_singh@yahoo.co.in

Abstract - In this paper the core of the earth is assumed to hold the properties of a unique crystal. Breakdown of the integrated system leads to vibrations of the BIS lattices, and the ordered crystal structure is disturbed. In the present paper, ordered crystal models having nonsingular heat capacities at the critical temperatures are considered. A new parameter vector q is found to describe the spin correlations and fluctuation characteristics. The conservation of the scalar q indicates that there is simple harmonic motion of q, and the quantum of this motion is called the block-spin phonon, like the phonons in a crystal, resulting in a nonsingular heat capacity near the critical point. The harmonic motion shows that there are hierarchies and symmetries of fluctuations, while the soft mode may lead to interactions of block-spin phonons with different frequencies. We have considered the Ising model as the main protocol of our studies.
Keywords: Ising Correlation, BIS Effect, Phonon, Heat Capacity, BIS Lattice Vibration

I INTRODUCTION
The critical phenomena are characterized by the
free energy singularity and the long-range
correlations of spins. The former indicates that an
old phase, a disordered state, disappears; the latter
shows that this is just the property of a new phase,
an ordered state. As a new state, it is considered a normal ferromagnet that should have its normal, non-singular free energy and heat capacity. It is incredible that a ferromagnetic Ising model has no normal thermodynamic quantities to which the spins contribute. So far only one theory, the spin-wave theory, can explain these normal quantities. Dyson drew a physical picture for a 3-dimensional model [1]. On the microscopic level, as he showed, the spin waves arise from interference effects in the lowest partial-wave collisions, resulting in a rotation of the spins of the scattered

Tuhin Dutta
Dept of Physics
Ramjas College North Campus
University of Delhi
New Delhi
dr.tuhindutta@gmail.com

atoms. The cumulative effects of many such


rotating collisions are such that inhomogeneous
spin states propagate like waves rather than
diffusively. Dyson presented that his theory was
suitable to the low temperature. Vaks and his
colleagues tried to apply the theory to a Heisenberg
model to study the spin waves and correlation
function [2]. They found that the damping of the
spin waves and the damping increased with the
increasing temperature in the range T Tc .All of
these tell us that the spin-wave picture cannot
illustrate the critical phenomena, especially the
critical properties of Ising model, since there is no
spin-spin collision caused by the moving atoms
with spins. The authors of the reference [3]
constructed an inhomogeneous planar square lattice
Ising model with finite size, and discussed its
abnormal specific heat at the critical temperature
under a boundary condition. We think that,
however, there is great deference between such
model and conventional Ising model. The rest of
the model is still infinite if a finite part is picked up
from an infinite Ising model. In addition, we notice
that the authors work is prior to the Wilsons
renormalization group theory, and the imposed
boundary condition makes the work be considered
as a particular result rather than general. Some
coherent spin states similar to linear harmonic
oscillators were introduced by Radcliffe [4], and the
relevant physical properties were discussed for a
Heisenberg model. He supposed that such
discussion might benefit to the understanding of the
correlation of spins. His idea may inspire us to
investigate the underlying form of the spins
correlation function. In this paper we try to show
by our theory the normal properties of the new
phase for Ising models at Tc . In section 2 we find a

spin parameter vector q describing the block-spin correlations and obtain a conservation equation for the scalar q, revealing that the block-spin correlations exist in the form of simple harmonic waves. As a result we get the quantum of the wave motion, the block-spin phonon. In section 3 we obtain nonsingular heat capacities for the new phase, discuss the correlation functions with some symmetric properties, and interpret the hierarchies in the fluctuations. We also consider the occurrence of soft modes. Section 4 is the conclusion.
This article is a revision of reference [5]; it is not only a continuation of reference [6] but also a basis for reference [7].
II THEORY

A. SPIN PARAMETER VECTOR

The calculated results of the critical points for the 2-dimensional models and 3-dimensional models show that our theory is suitable for the investigation of the critical behaviors of Ising models [6]. In particular, the fractal analysis used in this theory reveals the detailed structures in the fluctuations, which have never been discussed in other theories. The concepts of the block spin and the self-similar transformation mentioned in this article are all attributed to this theory. According to this theory, a system will never arrive at its critical point Kc = J/(kB Tc), because the self-similar transformation forbids the fractional side n*. Eq. (11) of reference [6] indicates that the free energy of the system depends on the magnitude of the logarithm on the right side of the equation; the smaller the value of the logarithm, the lower the free energy, because the natural logarithm is a monotonic function. In order to minimize the free energy the system always tries to attain the critical point, such that the two terms in the logarithm should be close to each other, which leads the system to adjust the block side from time to time. Such adjustment never stops, which is just the cause of the endless fluctuation. Let us consider two limiting cases. First, for a finite number r, on the r-th hierarchy the vector summation of block spins is always zero although the spins are correlated with each other. Meanwhile, a system appears ordered only on the infinite hierarchy, namely r -> infinity, where the system is just an isolated block spin after infinite iterations of the transformations. Denote the system magnetization by M1. In the second case, on a finite hierarchy all block spins are parallel to each other; consequently the system is ordered. Denote the relevant magnetization by M2. Clearly, the magnitude of M1 is smaller than the magnitude of M2. So M1 is connected with Tc, and M2 with a temperature T lower than Tc. The fluctuations are
not only the deviation in the block sides, but also the deviation in the block-spin states. An obvious disadvantage of the conventional spin parameter S is that it is always one-dimensional, such that it cannot match the spin correlations, which have the same dimensions as those of the lattice system. In order to investigate the nature of the fluctuations further we should find a new spin parameter of d dimensions, besides S, to describe the correlations. Solving the Gaussian model in reference [6], we introduce a parameter vector q in the Fourier transform; a new lattice spin is S_i = (1/sqrt(N)) * sum over q of S_q exp(i q . r_i). The vector q is originally a reciprocal lattice vector for the new lattice but the


one for a block spin system. The so-called new
lattice spin is actually the lattice spin on the i th
hierarchy, which is just the block spin on the

(i 1) th hierarchy. The magnitude of q determines


the new lattice spin magnitude being consistent
with the magnitude of a certain block spin, its
changes in both direction and magnitude relate to
the changes of the new lattice spins in both
direction and magnitude. Because there is a
mapping relationship between a new lattice spin

and a block spin, if the vector q can serve as an


appropriate parameter to demonstrate the block
spin state for a certain block-spin system instead of
the new lattice spin system, it should have specific
features that there is an one-to-one correspondence

between the length of q and the magnitude S of a

block spin, each block spin has its own q , and all

of block spins with q are correlated. The vector


change in direction is connected with the block spin
change in direction. The traversal time of the

component of q , q x , or q y , or q z , in either its


own positive-direction state or negative-direction
state is identical, since every block spin has the
same probability in both the spin-up state and the
spin-down state in the thermodynamic equilibrium
on any finite hierarchy to keep the spin vector

summation zero. In a word, the q will be a ddimensional periodically varying parameter rather
than random.
B. CONSERVATION EQUATION OF SCALAR q

On the one hand the free energy is singular at the critical point Kc = J/(kB Tc); on the other hand the fluctuations go on around the critical point, such that the magnitude of the block spin is not the minimum related to the critical point, and q does not vanish at Tc, since only q = 0 links to Kc. Therefore, the algebraic expression of equation (10) of [6] changes into K(q) = 1/S_tr^2, with S_tr not equal to its minimum value S_tr,min, for the triangular lattice spin system, where

K(q) = K * sum over the neighbour vectors r_ij of exp(i q . r_ij),

r_ij is a vector from the i-th lattice site to the j-th lattice site, and K = J/(kB T). Generally, for a 2-dimensional system, when the integer side n varies about the fractional side n* the value of q certainly changes around q = 0. Since the magnitude of q is very small we can expand the function K(q) about q = 0 as a power series and keep its quadratic term in q. We then get

q_x^2 + q_y^2 = [4/(n - 1)^2] * [1 - D_min/D]        (1)

where the fractal dimension D_min is determined by the fractional side n* and the fractal dimension D is determined by the integer side n. Equation (1) signifies that there is a circle of radius q and a rotating vector q, whose initial point and terminal point are at the circle center and on the circumference, respectively. When the vector rotates its direction changes, and the rotation is the unique way of motion for it. Clearly, if the side is constant the magnitude of q is certain too, and vice versa. The conservation of the scalar q corresponds to the constant block spin value. The following trigonometric functions are solutions of Eq. (1):

q_x = q cos[omega (t -/+ r/v) + phi]
q_y = q sin[omega (t -/+ r/v) + phi]        (2)

where the sign - or + represents the forward wave or the backward wave (the system can be in either the one state or the other state); v is the wave velocity (the system is a uniform medium); r = (x^2 + y^2)^(1/2), with x and y the position coordinates of the symmetric center of a block; omega is the angular frequency; and phi is the initial phase angle of the block at the site r = 0. In order to illustrate explicitly the correspondence between q and the block spin state, consider a particular situation: let the spin direction be parallel to the y-axis, and let q_y > 0 refer to the spin-up state and q_y < 0 to the spin-down state. A block spin travels in each state for the same time, half a period. The traversal time of a spin at the state q_y = 0 is omitted. Let the component q_y be independent, and let q_x follow after it by equation (2). Clearly, without q_x there is no harmonic motion of q, although q_x is not independent.

Let us investigate the correlation of two block spins; they may be either adjacent or far apart. Denote their parameters by q_1 and q_2; the vectors rotate anticlockwise. At the moment t_1 their components along the y-axis are positive, q_1y > 0 and q_2y > 0, and the spins are parallel up; at t_2 (t_2 > t_1) the components q_1y and q_2y have opposite signs and the spins are anti-parallel; at t_3 (t_3 > t_2), q_1y < 0 and q_2y < 0, parallel down; at t_4 (t_4 > t_3) the signs are opposite again and the spins are anti-parallel, with the spin orientations just opposite to the orientations at t_2; at t_5 (t_5 > t_4), q_1y > 0 and q_2y > 0, parallel up, but q_1 and q_2 are not the same as those at t_1; at t_6 (t_6 > t_5) both vectors return to the states at t_1, indicating that the two block spins come back to their states at t_1. The time difference Delta t = t_6 - t_1 is just a vibration period.

It is easy to prove that the locus of q in a 3-dimensional block spin system is a sphere of radius q. We can in particular think of the spin-up state as q_z > 0 and the spin-down state as q_z < 0. A set of trigonometric functions will depict the behavior of the rotating vector q. Generally, we can set up a correspondence between the state of q and the state of a block spin by the following method: for a 2-dimensional system, a line passing through the origin of the coordinate system divides the circumference into two identical parts; the points on one semi-circumference and the relevant q_x and q_y correspond to the spin-up state; the points on
another semi-circumference and the relevant
and

qx

q y to the spin-down state. For a 3-

dimensional system, a plane passing through the


origin divides the sphere into two equal parts, the
points on one hemisphere and the relevant q x ,

q y , and q z are related to the spin-up state, the rest

points and the relevant components of q to the


spin-down state. In the light of this arrange we will
not deliberately review the corresponding relation
between their states in the next discussion.
Equation (2) indicates that the statistical average

values of the components of q keep zero while the


vector in the simple harmonic oscillation, which are
the inevitable results for their symmetric motions.

function cos ka 1 (ka) 2 / 2 , and get a dispersion


rela-tion linking up the frequency and k
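A one-line check, added here for clarity and not written out in the original, shows that the parametrization of Eq. (2) conserves the scalar q while its components oscillate:

q_x^2 + q_y^2 = q^2\cos^2[\omega(t \mp r/v)+\varphi] + q^2\sin^2[\omega(t \mp r/v)+\varphi] = q^2 ,

so the period-averaged q_x and q_y vanish while |q|, and with it the block-spin magnitude, stays constant, which is exactly the conservation statement above.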

III. BLOCK-SPIN PHONON

Note that there is no parameter describing a wave state between two nearest neighbors in equation (2), which means that we in fact regard the system as a continuum. It is well known that a simple harmonic wave is an elastic wave, and an elastic wave in a continuum has the specific properties of an acoustic wave satisfying the long-wavelength limit [8]. In a crystal the same harmonic motion of the lattice can be described by several waveforms for different purposes. Similarly, let us consider the harmonic motion of q from another angle of view. For the cube ordered reducible block spin system, denote the position coordinates of the symmetric center of a reducible block by (x, y, z); every such block has its own center. All of these centers make up a 3-dimensional lattice system with lattice constant a (fixed by the block side length n). For the elastic vibration of q at the site (x_p, y_p, z_p) of the p-th block there is an effective elastic force f_p, a restoring force that may be driven by the fluctuation-dissipation mechanism. By Hooke's law the force f(x_p), the component of f_p along the x-axis, caused by the displacements of the adjacent q_x(x_{p+1}, t) and q_x(x_{p-1}, t) relative to q_x(x_p, t), is given by

f(x_p) = C\,[\,q_x(x_{p+1},t) - q_x(x_p,t) + q_x(x_{p-1},t) - q_x(x_p,t)\,] = M\,\frac{d^2 q_x(x_p,t)}{dt^2}     (3)

where C is a proportionality constant, M an effective mass, and all displacements have the time dependence \exp(-i\omega t). Since the harmonic motion leads to the dynamic equation

\frac{d^2 q_x(x_p,t)}{dt^2} = -\omega^2 q_x(x_p,t),

Eq. (3) becomes a difference equation in the displacements of q_x and has the travelling wave solution

q_x(x_{p\pm 1},t) = q\,\exp(ipka)\,\exp(\pm ika)\,\exp(-i\omega t)     (4)

where k is the magnitude of the wave-vector k and x_p = pa. Combining equation (3) with (4) and using the above dynamic equation, we get \omega^2 = (2C/M)(1 - \cos ka). If the long-wavelength limit ka \ll 1 holds, we can expand the function \cos ka \approx 1 - (ka)^2/2 and obtain a dispersion relation linking the frequency \omega and k:

\omega^2 = (C/M)\,k^2 a^2     (5)

where v = \omega/k is the wave velocity. The same holds for the motion along either the y-axis or the z-axis. The procedure above is completely analogous to the treatment of lattice waves in a crystal [8], so it is easy to quantize the wave motion of q; the quantum of this motion is called the block-spin phonon, like the phonon of the lattice wave. For brevity we do not write out the process here.

For the cube lattice system, the interaction between nearest-neighbor sub-blocks is along the directions normal to the sub-block side (the long side). Infinitely many sub-blocks construct a 2-dimensional system because the symmetric centers of these sub-blocks lie in the same plane. There are many such planes parallel to one another; on each of them there is a sub-block spin system, and there is no interaction between the systems since the directions of the sub-block interactions are parallel to the planes. It is easy to prove that in every such system there are also simple harmonic waves of q similar to those given by equations (2) and (3), and there are also sub-block spin phonons in each system, like the block-spin phonons in the ordered reducible block-spin system. For brevity we also call the sub-block spin phonon a block-spin phonon.

IV. DISCUSSION

A. NONSINGULAR HEAT CAPACITIES

Consider a sub-block spin system of the cube lattice: the symmetric centers of the sub-blocks form a 2-dimensional lattice system of area G. The vibration number of q per dimensionality from k to k + dk is given by 2\pi G k\,dk; using the identity \omega = vk it can equivalently be expressed as 2\pi G (\omega/v^2)\,d\omega for \omega to \omega + d\omega. The phonons obey the Planck distribution [8]. We get the total average phonon energy E for every such 2-dimensional system:

E = \frac{4\pi G}{v^2}\left(\frac{k_B T}{\hbar}\right)^2 k_B T \int_0^{x_D} \frac{x^2}{e^x - 1}\,dx     (6)

where x = \hbar\omega/(k_B T), \hbar is the Planck constant, x_D = \hbar\omega_D/(k_B T), and \omega_D is the Debye frequency. Suppose that the total number of such 2-dimensional systems equals N_G; the heat capacity of the sub-block systems is then expressed as

C_V = N_G\left(\frac{\partial E}{\partial T}\right)_G = \frac{4\pi N_G G k_B}{v^2}\left(\frac{k_B T}{\hbar}\right)^2 \int_0^{x_D} \frac{x^3 e^x}{(e^x - 1)^2}\,dx     (7)

If the condition T_c \ll \hbar\omega_D/k_B holds, the integral in the above equation has a finite value; therefore Eq. (7) states that C_V obeys a T^2 law, which is shared by all 2-dimensional systems including the triangle lattice system. For the same reason, for the ordered reducible block (four sub-blocks form an ordered reducible block, see [6]) spin system of the cube lattice the heat capacity behaves according to a T^3 law:

C_V = \frac{12\pi V_s k_B}{v^3}\left(\frac{k_B T}{\hbar}\right)^3 \int_0^{x_D} \frac{x^4 e^x}{(e^x - 1)^2}\,dx     (8)

where the integral value is finite if T_c \ll \hbar\omega_D/k_B. The law of Eq. (8) applies to all 3-dimensional systems. The cube lattice system includes two types of interactions, those of the sub-block spins and those of the ordered reducible block spins. In reference [7], the heat capacities contributed by the spin phonons coming from the lattice spins in the sub-blocks and in the blocks are also considered, and the heat capacity finally represented by Eq. (6) of that reference describes more correctly the huge heat capacity of a ferromagnet at the critical temperature.
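As a quick numerical illustration, added here and not part of the original derivation, the two Debye-type integrals in Eqs. (7) and (8) can be checked to converge when the upper limit x_D is large (the condition T_c much smaller than \hbar\omega_D/k_B), which is what makes the T^2 and T^3 laws nonsingular. A short Python sketch, assuming NumPy and SciPy are available:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import zeta

    def heat_capacity_integrand(x, n):
        # x^n * e^x / (e^x - 1)^2, rewritten with e^(-x) to avoid overflow at large x
        return x**n * np.exp(-x) / (1.0 - np.exp(-x))**2

    I3, _ = quad(heat_capacity_integrand, 0, np.inf, args=(3,))  # integral in Eq. (7)
    I4, _ = quad(heat_capacity_integrand, 0, np.inf, args=(4,))  # integral in Eq. (8)

    print(I3, 6 * zeta(3))        # both ~7.2123
    print(I4, 4 * np.pi**4 / 15)  # both ~25.976

Both values are finite, so for T well below \hbar\omega_D/k_B the temperature dependence of C_V comes entirely from the T^2 (or T^3) prefactor.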
B. SYMMETRIES AND HIERARCHIES OF FLUCTUATIONS

There are a few papers studying the correlation functions at T_c for Ising models. G. Delfino and G. Mussardo studied the spin-spin correlation function in a 2-dimensional Ising model in a magnetic field at T_c; they used the scattering method usually applied in spin-wave theory and considered the electron charge action, which is more or less apart from the original phase-transition topic [9]. Craig A. Tracy and Barry M. McCoy presented the spin-spin correlation by considering the neutron scattering effect and making use of a phenomenological formula, yet the fine structure of the fluctuations remained a riddle to us [10]. The heart of the problem is that if we do not have a good spin parameter to describe the nature of the fluctuations, we will not be able to find a function of specific form to portray the critical behavior, and that function is precisely the correlation function. In terms of Prigogine's theory of self-organization [11], the new phase at T_c is in essence self-organization in thermodynamic equilibrium, as a consequence of the space-time order of spins. It is well known that where there is a conservation law there are symmetries. The conservation of q reveals the space-time symmetric properties of the fluctuations at T_c. For example, in the triangle lattice the trigonometric functions in equation (2) may be regarded as a kind of spin correlation functions at T_c. Although every block spin changes its state constantly, all of the changes show a certain harmonization. The terminal point of q circles about the center q = 0, obeying the symmetric properties of a rotation group and leading directly to the space-time order of spin states: time translation invariance, in that a definite spin state will periodically reappear at the same position, and space translation invariance, in that the same state will simultaneously turn up at regularly arranged sites. Though different side lengths correspond to circles of different radii, these concentric circles share the same symmetries rather than chaos. Since Eqs. (2) and (3) hold on any finite hierarchy, the fluctuations possess hierarchies. In this sense the fluctuations themselves are the very characteristics of the new phase.
C. SOFT MODES

Soft modes exist in vibrating systems, originating from anharmonic forces, and therefore they will be comprised in the fluctuations of Ising models at T_c. When we made the Fourier expansion of K(q) about q = 0 we neglected the term in q^4 that is next to the term in q^2, since the cosine function is even. Counting the term in q^4, an effective potential energy is included,

U(q_i) = C q_i^2 - f q_i^4     (9)

where i = x, y, z, C and f are positive constants, the term in q_i^2 leads to the harmonic force linked to harmonic vibration, and the term in q_i^4 to the softening of the vibration at large amplitudes. As in the lattice wave, where the anharmonic term is involved in thermal expansion [8], the term in q_i^4 may result in a nonzero average value of the components of q, with \langle q_x \rangle \neq 0 or \langle q_y \rangle \neq 0, or both at the same time, on a finite hierarchy. This means that the vector q has a preferential direction, such that the lattice spin system can become ordered on a finite hierarchy rather than only on the infinite hierarchy. In addition, when the system adjusts the block side in order to approach the critical point further, new blocks will appear and old blocks will be decomposed, with a corresponding modulation of the lattice constant (the spacing between adjacent block symmetric centers). Deductively, there exist interactions of block-spin phonons with different frequencies, like the phonon interactions in a crystal. We note that in the lattice-wave model the soft-mode effect is too weak to affect the existence of the phonon model and of the Debye heat capacity, which is in agreement with experiment; the same reasoning may apply to the block-spin phonons. In a word, soft modes may be a characteristic of the fluctuations.
V. CONCLUSION

The vector q is a good spin parameter; its features show that the spin correlations at T_c behave as simple harmonic motion, and the quantum of that motion results in a nonsingular heat capacity. The BIS-fluctuations possess symmetries and hierarchies, which are just the characteristics of the ordered phase. There are soft modes in the fluctuations, leading to interactions between block-spin phonons of different frequencies.
REFERENCES
[1] Arthur E. Ferdinand and Michael E. Fisher, 1969, Bounded and inhomogeneous Ising models. I. Specific-heat anomaly of a finite lattice. Phys. Rev. 185, 832-846.
[2] C. Kittel, 1996, Introduction to Solid State Physics, 7th Edition, John Wiley & Sons, New York.
[3] F. J. Dyson, 1956, General theory of spin-wave interactions. Phys. Rev. 102, 1217-1230.
[4] G. Delfino and G. Mussardo, 1995, The spin-spin correlation function in the two-dimensional Ising model in a magnetic field at T = Tc. Nucl. Phys. B455, 724-758.
[5] Craig A. Tracy and Barry M. McCoy, 1973, Neutron scattering and the correlation functions of the Ising model near Tc. Phys. Rev. Lett. 31, 1500-1504.
[6] J. M. Radcliffe, 1971, Some properties of coherent spin states. J. Phys. A: Gen. Phys. 4, 313-323.
[7] P. G. Glansdorff and I. Prigogine, 1977, Self-Organization in Nonequilibrium Systems. Wiley, New York.
[8] V. G. Vaks, A. L. Larkin, and S. A. Pikin, 1968, Spin waves and correlation functions in a ferromagnetic. Soviet Physics JETP 26, March, 647.
[9] You-Gang Feng, 2011, arXiv:1111.2233.
[10] You-Gang Feng, 2014, Secondary phase transition of Ising model. Amer. J. Mod. Phys. 3(4), 178-183.
[11] You-Gang Feng, 2014, Self-similar transformations of lattice-Ising models at critical temperatures. Amer. J. Mod. Phys. 3(4), 184-194.
DIGITAL STEGANOGRAPHY
Surender Bhanwala
Asst Prof., Dept of IT
MSIT, GGSIP University
New Delhi

Nikhil Sharma
Student, Dept of IT
MSIT, GGSIP University
New Delhi

Kirti Lakra
Student, Dept of IT
MSIT, GGSIP University
New Delhi

Anshul Dabas
Student, Dept of IT
MSIT, GGSIP University
New Delhi

Yagnish Dahiya
Student, Dept of IT
MSIT, GGSIP University
New Delhi

Abstract - Steganography is the science of invisible communication. The purpose of steganography is to maintain secret communication between two parties, i.e. it is concerned with hiding information without raising any suspicion about the existence of such information. The secret information can be concealed in content such as image, audio, or video. There are different types of steganography techniques, each with its own strengths and weaknesses. In this paper, we review different security and data hiding techniques, i.e. text steganography and image steganography, using techniques such as LSB, ISB, MLSB etc.
Keywords - Steganography, Cryptography, LSB, BPCP, PVD, DCT, PSNR.

I. INTRODUCTION
In today's information technology era, the rise of the internet is one of the most important facets of information technology and communication, and because of this the security of data and information has become a growing concern. Great measures should therefore be taken to protect data and information.
Cryptography - Cryptography is a method of storing and transmitting data in a particular form so that only those for whom it is intended can read and process it. The modern field of cryptography can be divided into two branches: symmetric key cryptography and public key cryptography. Symmetric key cryptography provides encryption of data at the sender and the receiver side, where both share the same key; it is implemented via block ciphers or stream ciphers. This form of cryptography has the disadvantage that it involves a key management process for secure networking. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret. To solve this issue,
public key cryptography came into existence. In public key cryptography, encryption is done with a public key (which is available to all) and the secret key, often referred to as the private key, is used to perform the decryption. The pairing of public and private keys ensures secure communication. This technology can also be used to implement digital signature schemes.
Plainly visible encrypted messages, no matter how unbreakable, will arouse interest and may in themselves be incriminating in countries where encryption is illegal. Thus, whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing the fact that a secret message is being sent as well as concealing the contents of the message. The primary objective of steganography is to avoid drawing attention to the transmission of hidden information. If suspicion is raised, this objective is defeated, because an observer who notices any change in the sent message will try to discover the hidden information inside it.
II. STEGANOGRAPHY
Steganography is a Greek word which means concealed writing: the word steganos means covered and graphia means writing. Thus, steganography is not only the art of hiding data but also of hiding the fact of transmission of secret data. Steganography hides the secret data in another file in such a way that only the recipient knows of the existence of the message. In ancient times, data was protected by hiding it under the wax of writing tablets, in the stomachs of rabbits, or on the scalps of slaves. Today, however, most people transmit data in the form of text, images, video and audio over digital media. In order to transmit confidential data safely, multimedia objects like audio, video and images are used as cover sources to hide the data.
Cryptography is the practice and study of techniques for secure communication in the presence of third parties.

Fig 1. Cryptographic Technique

Types of Steganography
1. Text Steganography: It consists of hiding information inside text files. In this method, the secret data is hidden behind every nth letter of every word of the text message. A number of methods are available for hiding data in text files: i) Format Based Method; ii) Random and Statistical Method; iii) Linguistics Method.
2. Image Steganography: Hiding the data by taking the cover object as an image is referred to as image steganography. In image steganography, pixel intensities are used to hide the data. In digital steganography, images are a widely used cover source because of the large number of bits present in the digital representation of an image.
3. Audio Steganography: It involves hiding data in audio files. This method hides the data in WAV, AU and MP3 sound files. There are different methods of audio steganography: i) Low Bit Encoding; ii) Phase Coding; iii) Spread Spectrum.
4. Video Steganography: It is a technique of hiding any kind of files or data in a digital video format. In this case video (a combination of pictures) is used as the carrier for hiding the data. Generally the discrete cosine transform (DCT) alters values (e.g., 8.667 to 9), which is used to hide the data in each of the images in the video in a way that is unnoticeable to the human eye. H.264, MP4, MPEG and AVI are formats used by video steganography.
5. Network or Protocol Steganography: It involves hiding the information by taking a network protocol such as TCP, UDP, ICMP, IP etc. as the cover object. In the OSI layer network model there exist covert channels where steganography can be used.

Text Steganography
Text steganography can be achieved by altering the text formatting, or by altering certain characteristics of textual elements (e.g., characters). The goal in the design of coding methods is to develop alterations that are reliably decodable (even in the presence of noise) yet largely indiscernible to the reader. These criteria, reliable decoding and minimum visible change, are somewhat conflicting; herein lies the challenge in designing document marking techniques. The document format file is a computer file describing the document content and page layout (or formatting), using standard format description languages such as PostScript, TeX, troff, etc. It is from this format file that the image - what the reader sees - is generated. The three coding techniques described below illustrate different approaches rather than an exhaustive list of document marking techniques. The techniques can be used either separately or jointly. Each technique enjoys certain advantages or applicability, as we discuss below.
Line-Shift Coding
This is a method of altering a document by
vertically shifting the locations of text lines to
encode the document uniquely. This encoding may
be applied either to the format file or to the bitmap
of a page image. The embedded codeword may be
extracted from the format file or bitmap. In certain
cases this decoding can be accomplished without
need of the original image, since the original is
known to have uniform line spacing between
adjacent lines within a paragraph.
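As an illustration of the line-shift idea, the following toy Python sketch (our own, not the coding scheme of any cited work) encodes one bit per selected text line by moving its baseline up or down by one pixel, and decodes by comparing the spacing above and below each marked line; it assumes the baselines are y-coordinates that increase down the page and that the un-encoded spacing is uniform:

    def lineshift_encode(baselines, bits, delta=1):
        # Shift every second text line down (bit 1) or up (bit 0) by `delta` pixels.
        out = list(baselines)
        for i, bit in enumerate(bits):
            line = 2 * i + 1                 # mark alternate lines, keep neighbours as anchors
            out[line] += delta if bit else -delta
        return out

    def lineshift_decode(baselines, n_bits):
        bits = []
        for i in range(n_bits):
            line = 2 * i + 1
            gap_above = baselines[line] - baselines[line - 1]
            gap_below = baselines[line + 1] - baselines[line]
            bits.append(1 if gap_above > gap_below else 0)   # pushed down => larger gap above
        return bits

Because the unmarked neighbouring lines keep their original uniform spacing, decoding needs no copy of the original document, which is the property noted above.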
Word-Shift Coding
This is a method of altering a document by
horizontally shifting the locations of words within
text lines to encode the document uniquely. This
encoding can be applied to either the format file or
to the bitmap of a page image. Decoding may be
performed from the format file or bitmap. The
method is applicable only to documents with
variable spacing between adjacent words. Variable
spacing in text documents is commonly used to

distribute white space when justifying text.


Because of this variable spacing, decoding requires
the original image - or more specifically, the
spacing between words in the un-encoded
document.
Feature Coding
This is a coding method that is applied either to a
format file or to a bitmap image of a document.
The image is examined for chosen text features,
and those features are altered, or not altered,
depending on the codeword. Decoding requires the
original image, or more specifically, a specification
of the change in pixels at a feature. There are many
possible choices of text features; here, we choose to
alter upward, vertical endlines - that is the tops of
letters, b, d, h, etc. These endlines are altered by
extending or shortening their lengths by one (or
more) pixels, but otherwise not changing the
endline feature. There is another form of text steganography, defined by Chapman et al. as a method of using written natural language to conceal a secret
message.
Image Steganography
Hiding information inside images is a popular technique nowadays. An image with a secret message inside can easily be spread over the World Wide Web or in newsgroups. The use of steganography in newsgroups has been researched by German steganographic expert Niels Provos, who created a scanning cluster which detects the presence of hidden messages inside images posted on the net. However, after checking one million images, no hidden messages were found, so the practical use of steganography still seems to be limited. To hide a message inside an image without changing its visible properties, the cover source can be altered in noisy areas with many colour variations, so that less attention will be drawn to the modifications. The most common methods to make these alterations involve the use of the least significant bit (LSB), masking, filtering and transformations on the cover image. These techniques can be used with varying degrees of success on different types of image files.
Least Significant Bits
A simple approach for embedding information in a cover image is using the Least Significant Bits (LSB). The simplest steganography techniques embed the bits of the message directly into the least significant bit plane of the cover image in a deterministic sequence. Modulating the least significant bit does not result in a human-perceptible difference because the amplitude of the change is small. To hide a secret message inside an image, a proper cover image is needed. Because this method uses bits of each pixel in the image, it is necessary to use a lossless compression format; otherwise the hidden information will get lost in the transformations of a lossy compression algorithm. When using a 24-bit color image, a bit of each of the red, green and blue color components can be used, so a total of 3 bits can be stored in each pixel. For example, the following grid can be considered as 3 pixels of a 24-bit color image, using 9 bytes of memory:
(00100111 11101001 11001000)
(00100111 11001000 11101001)
(11001000 00100111 11101001)
When a secret byte with the binary value 10000001 is inserted, the following grid results:
(00100111 11101000 11001000)
(00100110 11001000 11101000)
(11001000 00100111 11101001)
In this case, only three bits needed to be changed to insert the byte successfully. On average, only half of the bits in an image will need to be modified to hide a secret message using the maximal cover size. The resulting changes made to the least significant bits are too small to be recognized by the human visual system (HVS), so the message is effectively hidden. As can be seen, the least significant bit of the third color byte remains unchanged; it can be used for checking the correctness of the 8 bits embedded in these 3 pixels. In other words, it could be used as a parity bit.
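To make the embedding just described concrete, here is a minimal Python sketch (an illustration under our own assumptions, not code from any cited work); it assumes NumPy and a losslessly stored cover image already loaded as a uint8 pixel array:

    import numpy as np

    def embed_lsb(cover: np.ndarray, secret: bytes) -> np.ndarray:
        # Write the secret bits, MSB first, into the least significant bit of each cover byte.
        flat = cover.flatten()
        bits = np.unpackbits(np.frombuffer(secret, dtype=np.uint8))
        if bits.size > flat.size:
            raise ValueError("secret too large for this cover image")
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # clear the LSB, then set it
        return flat.reshape(cover.shape)

    def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
        bits = stego.flatten()[:n_bytes * 8] & 1
        return np.packbits(bits).tobytes()

    # The 3-pixel grid above, with the byte 10000001 hidden in its first eight LSBs.
    cover = np.array([[0b00100111, 0b11101001, 0b11001000],
                      [0b00100111, 0b11001000, 0b11101001],
                      [0b11001000, 0b00100111, 0b11101001]], dtype=np.uint8)
    stego = embed_lsb(cover, bytes([0b10000001]))
    assert extract_lsb(stego, 1) == bytes([0b10000001])

Since flatten() returns a copy, the original cover array itself is left unmodified.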
Masking and filtering
Masking and filtering techniques, usually restricted to 24-bit or grayscale images, take a different approach to hiding a message. These methods are effectively similar to paper watermarks, creating markings in an image. This can be achieved, for example, by modifying the luminance of parts of the image. While masking does change the visible properties of an image, it can be done in such a way that the human eye will not notice the anomalies. Since masking uses visible aspects of the image, it is more robust than LSB modification with respect to compression, cropping and different kinds of image processing. The information is hidden not at the noise level but inside the visible part of the image, which makes it more suitable than LSB modification when a lossy compression algorithm like JPEG is being used.
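A minimal sketch of the masking idea (illustrative only, with made-up region and strength values, and not a method taken from the paper): the mark is a small luminance increase over a region of a grayscale cover, detected by comparing mean brightness rather than individual bits, which is why it tolerates compression and mild processing better than LSB changes:

    import numpy as np

    def apply_mask(cover: np.ndarray, region: tuple, strength: float = 0.03) -> np.ndarray:
        # Raise the luminance of a rectangular region by a small fraction (here 3%).
        r0, r1, c0, c1 = region
        marked = cover.astype(np.float32)
        marked[r0:r1, c0:c1] *= (1.0 + strength)
        return np.clip(marked, 0, 255).astype(np.uint8)

    def detect_mask(stego: np.ndarray, original: np.ndarray, region: tuple) -> bool:
        r0, r1, c0, c1 = region
        return stego[r0:r1, c0:c1].mean() > original[r0:r1, c0:c1].mean() + 1.0

    gray = np.full((64, 64), 120, dtype=np.uint8)        # stand-in grayscale cover
    stego = apply_mask(gray, (16, 32, 16, 32))
    print(detect_mask(stego, gray, (16, 32, 16, 32)))    # True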
III. APPLICATION OF STEGANOGRAPHY
i) Confidential Communication and Secret Data Storing
ii) Protection of Data Alteration
iii) Access Control System for Digital Content Distribution
iv) E-Commerce
v) Media
vi) Database Systems
vii) Digital Watermarking
IV. STEGANOGRAPHY REVIEW
Several surveys have already been done in this area of knowledge. Some of the studies are discussed above, and below we present our views together with the research of other authors.
The research papers we have studied survey recent achievements of LSB-based image steganography. In these surveys the authors discuss improvements that enhance the steganographic results, such as high robustness, high embedding capacity and undetectability of the hidden information. Along with the surveys, two new techniques are also proposed: the first is used to embed data or secret messages into the cover image, and in the second a secret grayscale image is embedded into another grayscale image. These techniques use a four-state table that produces pseudo-random numbers, which are used for embedding the secret information. The two methods have greater security because the secret information is hidden at randomly selected locations among the LSBs of the image, with the help of the pseudo-random numbers generated by the table.
Another reviewed work is an image-securing method using steganography, in which multiple RGB images are embedded into a single RGB image using a DWT-based steganographic technique. The cover image is divided into the three colour spaces, i.e. red, green and blue, and these colour spaces are utilized to hide the secret information. Experimental results obtained using this system show good robustness.
We have also gone through papers that utilize both steganographic and cryptographic techniques in order to gain an extra layer of security for the hidden data. One proposed security scheme uses steganography along with cryptography to provide better security for the embedded data: the data is first encrypted and then embedded into the cover image using a steganographic method. The proposed algorithm transforms any kind of message into text with the help of manipulation tables, then applies the Hill cipher to it, and finally hides the data in the red, blue and green pixels of the cover image.
Ishwarjot Singh and J.P. Raina, in the International Journal of Computer Trends and Technology, proposed a very innovative system that combines steganography and cryptography into one system.

There are no separate computations for steganography and cryptography, hence this system needs fewer computations than existing methods while maintaining higher security levels. The core of this system is the LSB matching technique and Boolean functions in stream ciphers. Grayscale images are utilized for the steganography, and Boolean functions are applied for the cryptographic purpose and to control the pseudo-random increment and decrement of LSBs. Experimental results show that this system is much safer against steganalysis attacks.
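To make the encrypt-then-embed ordering discussed in this review concrete, here is a minimal Python sketch of our own (not the scheme of any reviewed paper): the secret is first enciphered with a keyed XOR keystream, standing in for a real stream cipher, and the ciphertext is then hidden with the embed_lsb/extract_lsb helpers sketched earlier:

    import hashlib

    def keystream(key: bytes, n: int) -> bytes:
        # Toy keystream from iterated SHA-256; a placeholder for a proper stream cipher.
        out, block = b"", key
        while len(out) < n:
            block = hashlib.sha256(block).digest()
            out += block
        return out[:n]

    def encrypt_then_embed(cover, secret: bytes, key: bytes):
        cipher = bytes(s ^ k for s, k in zip(secret, keystream(key, len(secret))))
        return embed_lsb(cover, cipher)            # embed_lsb from the earlier sketch

    def extract_then_decrypt(stego, n_bytes: int, key: bytes) -> bytes:
        cipher = extract_lsb(stego, n_bytes)       # extract_lsb from the earlier sketch
        return bytes(c ^ k for c, k in zip(cipher, keystream(key, n_bytes)))

Even if the stego image is detected, the attacker still recovers only ciphertext, which is the extra layer of security these combined schemes aim for.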
In 2014, three further studies were reviewed. The first is a survey of various audio steganography techniques, which compares various data security techniques and then various steganography techniques. The second presents a different image steganography technique that takes two secret keys to randomize the bit-hiding process; the paper proposes that the use of two secret keys maintains a high data hiding capacity. The third focuses on the bitmap image format to implement the LSB steganography method; it also proposes the use of the AES algorithm to ensure two-layer security of the message.
Across the three consecutive years (2012-2014), the most preferred choice of the researchers is image steganography techniques. There are four main categories used in steganography: image, audio, video and protocol. Out of ten researchers, seven propose new techniques or methods in image steganography. Image files usually comply with the requirements for creating a stego image, but researchers are also focusing on other carriers like audio, video, etc. to hide the secret data.
V. FUTURE SCOPE
In this modern era of technology, with the increase in the need for secure and robust communication, the information technology sector looks towards future research in the field of steganography, as cryptography alone cannot provide secure communication. Some future research directions include:
1. Developing a system by combining the benefits of both cryptography and steganography.
2. Developing an environment which is platform independent.
3. Considering different media other than the traditional media, i.e. images and video.
4. Use of the best algorithms to achieve secure and robust communication.

VI. CONCLUSION
The paper presented above gives an understanding of cryptography and steganography concepts; along with it, the paper gives a review of the research and developments in the field of steganography through the various steganography techniques. The paper also provides suggestions regarding future research in the field of steganography.
REFERENCES
[1] T. Morkel, J.H.P. Eloff and M.S. Olivier, "An Overview of Image Steganography."
[2] Ishwarjot Singh, J.P. Raina, "Advance Scheme for Secret Data Hiding System using Hop field & LSB," International Journal of Computer Trends and Technology (IJCTT), Volume 4, Issue 7, July 2013.
[3] Yang, Chunfang, Liu, Fenlin, Luo, Xiangyang, and Zeng, Ying, "Pixel Group Trace Model-Based Quantitative Steganalysis for Multiple Least-Significant Bits Steganography," IEEE Transactions on Information Forensics and Security, Vol. 8, No. 1, January 2013.
[4] G. Manikandan, N. Sairam and M. Kamarasan, "A Hybrid Approach for Security Enhancement by Compressed Crypto-Stegno Scheme," Research Journal of Applied Sciences, Engineering and Technology.
[5] Swati Malik, Ajit, "Securing Data by Using Cryptography with Steganography," International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 5, May 2013.

An Overview of Smart City


Sunil Gupta
Reader, Dept of EEE
MSIT, GGSIP University
New Delhi
sun16delhi1@gmail.com

Abstract - Smart Grid technologies realize the smart generation, transmission, distribution, and utilization of electrical energy throughout the power system. The paper demonstrates the concept of the smart city through the integration of renewable energy sources into the conventional power grid. With this, the paper is an attempt to figure out the possibilities embedded in smart grid technologies to support the applications and goals of the smart city.
Keywords - renewable energy sources; smart grid technologies; smart city

I.

INTRODUCTION

The Smart City concept first emerged in the late 90s. Around this time the European Commission supported initiatives like Eurocities. Many projects
supported initiatives like Euro cities. Many projects
were launched in cities but the new Smart City
approach is somewhat different. The Smart City aims
to make optimal and sustainable use of all resources,
while maintaining an appropriate balance between
social, environmental and economic costs. There are
many components in building the Smart City that
includes smart government services; smart transport
and traffic management; smart energy generation,
transmission and distribution technologies; smart
health care services; smart water and waste
management etc. E-Services are still a key component
of the Smart City concept; however, other dimensions
are now included, e.g. broadband, contactless
technologies, smart energy, Open Data, Big Data [1,
2]. In the Smart City, maximum use is made of
information and communication technologies (ICT)
to improve the functioning, management, and
supervision of the variety of systems and services,
with an emphasis on saving energy, water, land and
other natural resources. The main categories that
define smart cities include the quality of the
environment, energy, water and wastewater,
transportation and traffic, information and
communication systems, quality of life as discussed
below:
The Smart Energy City is highly energy and
resource efficient, and is increasingly powered by
renewable energy sources; it relies on integrated

and resilient resource systems, as well as insight-driven and innovative approaches to strategic
planning. The application of information,
communication and technology are common
means to meet these objectives :
A smart city uses digital technologies or ICT
to enhance quality and performance of urban
services, to reduce costs and resource
consumption, and to engage more effectively
and actively with its citizens.
Smart city applications are developed with the
goal of improving the management of urban
flows and allowing for real time responses to
challenges. The Smart Energy City, as a core
to the concept of the Smart City, provides its
users with a livable, affordable, climate-friendly and engaging environment that
supports the needs and interests of its users
and is based on a sustainable economy.
The Smart Energy City is highly energy and
resource efficient, and is increasingly powered
by renewable energy sources; it relies on
integrated and resilient resource systems, as
well as insight-driven and innovative
approaches to strategic planning. The
application of information, communication
and technology are common means to meet
these objectives.
The Smart Energy City leverages the Smart
City vision as a tool to help set the trajectory
for an overall smarter city development. This
development will encompass all social,
economic and environmental aspects of
sustainability. Major technological, economic
and environmental changes have generated
interest in smart cities, including climate
change, economic restructuring, move to
online retail and entertainment, ageing
populations, and pressures on public finances.
Beyond these objectives, this paper demonstrates the concept of the smart city through the integration of renewable energy sources into the conventional power grid. With this, the paper is an attempt to figure out the possibilities embedded in smart grid technologies to support the applications and goals of the smart city.
II. SMART GRID OBJECTIVES

The European Technology Platform defines the Smart Grid as an electricity network that can intelligently integrate the actions of all users connected to it (generators, consumers and those that do both) in order to efficiently deliver sustainable, economic and secure electricity supply [3], [4]. The primary objectives of the Smart Grid are to optimize asset usage, reduce overall losses, improve power quality, enable active customer participation, make energy generation, transmission and distribution eco-friendly, and make the detection, isolation and rectification of system disturbances automatic. To achieve these objectives, the Smart Grid utilizes technological enhancements in equipment and methodologies that are cost effective, innovative, and reliable [5].

Fig.1. Integration of conventional and clean energy generation technologies [6].

The smart grid makes use of various forms of clean energy technologies to reduce the burden on fossil fuels and the emission of greenhouse gases, as shown in Fig. 1 [6]. The energy management system functions help consumers to choose among the various tariff plan options, reduce peak load demand and shift usage to off-peak hours, and save money and energy by using more efficient appliances and equipment. The Smart City not only provides a means of distributed green power generation but also helps in load balancing during peak hours through energy storage technologies and applications such as Plug-in Hybrid Electric Vehicles (PHEV). Thus, the key drivers for Smart Grids are to maximize CO2-free energy and reduce environmental impact, improve energy efficiency across the value chain, and increase grid reliability and stability.

III. SMART CITY: A PROTOTYPE

The working model of a smart city, as shown in Fig. 2, consists of the smart grid elements described in Table I.
Solar power generation: Solar power is the
conversion of sunlight into electricity either
directly using Photo-Voltaic (PV) cells, or
indirectly using concentrated solar power.
Here, the smart city appliances utilized for
the conversion of solar/photo energy into
useful electrical energy should be highly
efficient and secure.
Table I: Smart grid elements

Component          | Specification | Utilization
Stepper motor      | 500 rpm       | Wind plant
Dynamo (Generator) | 12 V          | Hydro power plant
Solar panel        | 3 W           | Solar plant
Solar panel        | 2 W           | Solar plant
Solar panel        | 1.5 W         | Solar plant
LED bulbs          | 1 W           | Lighting
Water pump         | 220 V         | Hydro water flow
Battery            | 4 V           | Storage of electricity
Wind blades        | -             | Wind plant
Turbine            | 6 blades      | Hydro plant
Battery            | 4 V           | Battery bank

Solar street lights are raised light sources which are powered by photovoltaic panels, generally mounted on the lighting structure or integrated in the pole itself. The photovoltaic panels charge a rechargeable battery, which powers a fluorescent or LED lamp during the night.
Hydroelectricity is the term referring
to electricity generated by hydropower; the
production of electrical power through the use
of the gravitational force of falling or flowing
water. It is the most widely used form
of renewable energy.

Fig.2. Working model of a smart city.

Wind energy, or wind power, is extracted from air flow using wind turbines or sails to produce electrical energy. Windmills are used for their mechanical power, wind pumps for water pumping, and sails to propel ships. Wind power, as an alternative to fossil fuels, is plentiful, renewable, widely distributed and clean, produces no greenhouse gas emissions during operation, and uses little land.
A smart home, or smart house, is a home that incorporates advanced automation technologies and systems to provide the inhabitants with sophisticated monitoring and control over the building's functions. IT solutions for advanced energy and power management allow owners to improve the efficiency of their operations and participate as a more active player in the grid system by adjusting load and self-generation in response to changes in energy pricing. For example, a smart home may control lighting, temperature, multimedia, security, and window and door operations, as well as many other functions; it can intelligently control everything from window shutters to lighting, heating and cooling in response to weather, occupancy and energy prices. Lighting control systems can deliver power savings of up to 50 percent and
building automation up to 60 percent. Smart City
empowers the consumer through smart metering
which allows two way interaction of energy and
information flow between the utility and end
consumers. It manages their energy use and thus
reduces the energy cost. It also improves customer
satisfaction with increased reliability in grid
operations and reduced outages [7].
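As a small illustration of the tariff-choice and peak-shifting behaviour described above (the rates, hours and readings below are made-up values, not data from the paper), a day's smart-meter readings can be costed under a flat plan and a time-of-use plan:

    # Hypothetical tariff rates (per kWh) and 24 hourly smart-meter readings (kWh).
    FLAT_RATE = 6.0
    PEAK_RATE, OFFPEAK_RATE = 9.0, 4.0
    PEAK_HOURS = range(18, 23)            # assume 18:00-22:59 is the evening peak

    readings = [0.3] * 6 + [0.8] * 12 + [1.5] * 5 + [0.5]   # one day's load profile

    flat_cost = FLAT_RATE * sum(readings)
    tou_cost = sum((PEAK_RATE if h in PEAK_HOURS else OFFPEAK_RATE) * kwh
                   for h, kwh in enumerate(readings))
    print(flat_cost, tou_cost)

Shifting load out of PEAK_HOURS lowers the time-of-use cost, which is exactly the consumer behaviour the smart-metering and energy-management functions are meant to encourage.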

IV. CONCLUSION
Requirements of the Smart Grid scenario in the Smart City have been presented. The potential of the technical features available with smart grid technologies to support these requirements, and also to streamline grid operations at the distribution level with improved reliability and efficiency, has been explored. The analysis shows that adopting advanced and innovative communication and information technologies in various utilities provides opportunities for advanced and futuristic Smart City applications.
REFERENCES
[1] www.itu.int
[2] www.metropolitansolutions.de
[3] C. Cecati, G. Mokryani, A. Piccolo, and P. Siano, An
overview on the smart grid concept, 36th Annual
Conference on IEEE Industrial Electronics Society,
IECON, pp. 3322-3327, 2010.
[4] Z. Xue-Song, C. Li-Qiang and M. You-Jie , Research on
smart grid technology, International Conference on
Computer Application and System Modeling (ICCASM),
pp.599-603, 2010.
[5] Ikbal Ali, Mini S. Thomas, Sunil Gupta, Substation
communication architecture to realize the future smart
grid, International Journal of Energy Technologies and
Policy, Vol. 1(4), pp. 25-35, 2011.
[6] John D. McDonald, P.E., Smart Grid Applications, Standards Development and Recent Deployments, GE Energy T&D. http://www.ieeepesboston.org/files/2011/06/McDonaldSlides.pdf

Three phase Line-Line Fault Detection using


8051 Microcontroller
Rakhi Kamra
Asst. Prof., Dept of EEE
MSIT, GGSIP University
New Delhi

Anish Sharma
Student, Dept of EEE
MSIT, GGSIP University
New Delhi

Anshul Mendiratta
Student, Dept of EEE
MSIT, GGSIP University
New Delhi

Puja Rani Samanta
Student, Dept of EEE
MSIT, GGSIP University
New Delhi

Abstract - The paper explains the concept of fault detection using the 8051 microcontroller. A model simulating transmission lines and the possibility of various faults occurring between the lines is interfaced with the microcontroller AT89C52, which detects the fault. The IC AT89C52 belongs to the 8051 microcontroller family and is interfaced with an output peripheral such as an LCD to indicate the status of the transmission lines. The microcontroller scans all three phases with the help of relays and generates a signal indicating the fault along with its location.
Keywords - Microcontroller AT89C52, LCD, relays

I.

INTRODUCTION

Advancement in technology has led to great opportunities to improve power transmission schemes and their protection. Unlike before, when analogue systems were employed to protect overhead and underground transmission lines, the protection scheme has now been digitized. It has proven to be more accurate and fast, giving promising results and great efficiency. The introduction of integrated circuits and microcontrollers has led to a much more secure and advanced defensive system against faults occurring on transmission lines.
In this project we employ the microcontroller AT89C52, which is basically a low-power, high-performance CMOS 8-bit micro-computer with 8K bytes of Flash programmable and erasable read only memory (PEROM).
The objective of this project is to determine the distance of an underground cable fault from the base station in km. The proposed system is to find the exact location of the fault. An overhead transmission line is one of the main components in every electric power system. The transmission line is exposed to the environment, and the possibility of experiencing faults on the transmission line is generally higher than that on other main components. Line faults are the most common faults; they may be triggered by lightning strokes,
trees may fall across lines, fog and salt spray on


dirty insulators may cause the insulator strings to
flash over, and ice and snow loadings may cause
insulator strings to fail mechanically. When a fault
occurs on an electrical transmission line, it is very
important to detect it and to find its location in
order to make necessary repairs and to restore
power as soon as possible. The time needed to
determine the fault point along the line will affect
the quality of the power delivery. Therefore, an
accurate fault location on the line is an important
requirement for a permanent fault. Pointing to a
weak spot, it is also helpful for a transient fault,
which may result from a marginally contaminated
insulator, or a swaying or growing tree under
the line. Most fault locators are based only on local measurements. Currently, the most widely
used method of overhead line fault location is to
determine the apparent reactance of the line during
the time that the fault current is flowing and to
convert the ohmic result into a distance based on
the parameters of the line. It is widely recognized
that this method is subject to errors when the fault
resistance is high and the line is fed from both
ends, and when parallel circuits exist over only
parts of the length of the faulty line. Accurate fault
location is important to power utility distribution
networks in order to insure reliable economic
operation of the system. Despite the reliability of
the power network componentry utilized in today's
power distribution network, faults may nevertheless
occur. Reliability in such a power distribution
network depends on rapid location and isolation of
any fault occurring within the distribution network.
In order to quickly isolate and repair such a fault,
the fault must be accurately located. Accurate fault
location reduces the number of switching
operations required to isolate a faulted line section
after a permanent fault. This results in quick
restoration of power supply to those customers not
serviced by the faulted line section, and further
facilitates quick repair of the faulted line section,
also speeding up the restoration of power to those

customers serviced by the faulted line section. It is


also beneficial to develop an accurate estimation of
the resistance of the fault. The fault resistance
value is useful for post fault analysis of the fault
and of network grounding. In the system of the
present application, consideration of the fault
resistance also facilitates more accurate estimation
of fault position. In such a distribution network,
when a fault occurs, existing relaying schemes
typically make correct and fast tripping decisions
based upon relatively simple measurements
computed in real time. Such relaying schemes
automatically isolate a faulted line section so that
power distribution remains on un-faulted portions
of the distribution network. The faulted line section
must be then located and repaired. Since line repair
must be manually performed by the distribution
system's
maintenance
personnel,
accurate
identification of physical fault location is not
required instantaneously, even several minutes after
the fault has occurred being acceptable. This allows
for elaborate calculations which may be more
complicated than those employed in tripping the
relays to isolate the faulted line section. Because of
this, fault location techniques are normally
employed after the fault has occurred using stored
fault data.
The monitored data includes voltage and current
waveform values at the line terminals before,
during, and after the fault. These values are either
stored as waveform samples, or computed phasors,
by microprocessor based line relays, Digital Fault
Recorders "DFRs" or dedicated fault-locating
systems, all installed at the power substation. The
use of this data to identify fault location may be
performed at the substation within the line relay or
DFR, which may display the results, or may be
transmitted to a remote site using low speed data
communications or Supervisory Control and Data Acquisition (SCADA) systems, as is known in the art.
II.

TECHNOLOGIES USED

The project uses the standard concept of Ohm's law: when a low DC voltage is applied at the feeder end through a series resistor (the cable lines), the current varies depending upon the location of the fault in the cable. In case there is a short circuit (line to ground), the voltage across the series resistors changes accordingly; this voltage is fed to an ADC to develop precise digital data, which the programmed 8051-family microcontroller converts to and displays as a distance in km.
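The following minimal Python sketch (our own illustration of the idea, not the authors' firmware; the supply voltage, sense resistance, per-km resistance and ADC width are assumed values) shows how such an ADC reading could be converted back into a fault distance:

    # Assumed values: 5 V DC test supply, 1 kOhm sense resistor at the feeder,
    # 1 kOhm of simulated cable resistance per km, 10-bit ADC.
    V_SUPPLY = 5.0
    R_SENSE = 1000.0
    R_PER_KM = 1000.0
    ADC_FULL_SCALE = 1023

    def adc_to_distance_km(adc_count: int) -> float:
        # Voltage measured across the faulted line section (junction to ground).
        v_line = V_SUPPLY * adc_count / ADC_FULL_SCALE
        if v_line >= V_SUPPLY:
            return float("inf")                    # no fault current is flowing
        # Voltage divider: v_line = V_SUPPLY * R_fault / (R_SENSE + R_fault)
        r_fault = R_SENSE * v_line / (V_SUPPLY - v_line)
        return r_fault / R_PER_KM

    # A dead short 2 km out gives v_line = 5 * 2000/3000 = 3.33 V, i.e. an ADC
    # count of about 682, which maps back to roughly 2 km.
    print(round(adc_to_distance_km(682), 2))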

III.

HARDWARE REQUIRED

Following is the list of components used and their description.
A. 8051 microcontroller AT89S52
The 89S52 has 4 different ports, each one having 8 input/output lines, providing a total of 32 I/O lines. Those ports can be used to output data and orders to other devices, or to read the state of a sensor or a switch. Most of the ports of the 89S52 have a 'dual function', meaning that they can be used for two different functions.

Fig. 1 MC 8051 Block Diagram

B. Relay
A relay is an electrically operated switch. Many relays use an electromagnet to mechanically operate a switch, but other operating principles are also used, such as solid-state relays. Relays are used where it is necessary to control a circuit by a low-power signal (with complete electrical isolation between the control and controlled circuits). Relays were used extensively in telephone exchanges and early computers to perform logical operations.

Fig 2 Sugar Cube Relay, 25 volts DC, 250 volts AC

C. Liquid Crystal Display (LCD)


Certain organic, large-molecule liquids possess properties which cause them to interfere with the passage of light through them. One type, called the twisted nematic type, is the most useful in today's LCDs. In these, the liquid crystals have thread-like shapes: the units join head to tail, millions of molecules forming lengthy chains. Moreover, each plane is twisted a few degrees from the next.

Some of the recent chemicals of this variety are


made of pyrimidines, phenyl cyclohexanes, bicyclohexanes and 4-(4-methoxybenzylidene)-n-butylaniline. They exhibit a
crystalline structure even in liquid form at ordinary
temperatures. The property of the liquid is
anisotropic in the two perpendicular directions. The
cell thickness is so designed that there is a 90 degree turn of the molecules between the top and the bottom faces. The twisted nematic has the property that it twists light which passes through it. Polaroid filters are fitted above and below the cell so that light is polarized as it enters and is twisted through 90 degrees, exiting through a filter kept at 90 degrees to the one at
top. The light is then reflected via a mirror at the
back and returns via the same pathway. It has just a
12 mm thin layer of liquid between two or more
sheets of glass cum polarizer filters. One glass plate
has the 7 segment electrodes etched on it and a
conductive coating of tin oxide or Tin cum Indium
oxide. The other plate has the common electrode.
The conductive coat is treated further for good
surface contact to liquid. The cell when assembled
appears as clear glass: the segments are not visible.
When a voltage is applied between the plates, the
molecules move with the dipoles aligned in the cell
axis. Thus those regions under the segments, which
have the electric field, have a contrasty appearance
when viewed in light, while other excited segments
are invisible. The voltage needed is preferable 2-20
V A.C. The cathode (or front plane) voltage input
to the LCD goes through an analog switch that is
on at any time so that a.c. voltage is applied to the
appropriate segment. The anode (back plane)
receives the a.c. supply. The display driving
switches are from a set of MOSFET switches,
which also form part of the integrated circuit. For example, the C 1200 clock LSI IC chip from Computer Syst. Inc., USA, is a digital clock chip with an LCD display driver. Turn-on times for LCD displays vary from 0.2 to 100 ms, depending on the voltage applied; the turn-off time is 30-100 ms, so these displays are not suitable for very fast changing numbers. The power consumption is 1 to 10 microwatt/cm2. The voltage threshold for a watch-type LCD display is 1 to 2 V. The operating a.c. frequency is 50-100 kHz.
IV. FAULT DETECTION

In this project we employ the microcontroller AT89C52, which is basically a low-power, high-performance CMOS 8-bit microcomputer with 8K bytes of Flash programmable and erasable read-only memory (PEROM). The project uses the standard concept of Ohm's law: when a low DC voltage is applied at the feeder end through a series resistor (the cable lines), the current varies depending upon the location of the fault in the cable. In case of a short circuit (line to ground), the voltage across the series resistors changes accordingly; this voltage is fed to an ADC to develop precise digital data, which the programmed MC of the 8051 family displays as a distance in km.

A. Circuit Diagram
The following figure shows the circuitry of the three-phase fault detection module.

Fig 3 Layout of working model

B. Block Diagram without Fault
The following figure shows the block diagram of the three-phase model without any fault condition. [Block diagram: each of Phase A, Phase B and Phase C of the 3-phase transmission line is modelled by a switch matrix (S1a-S1c, S2a-S2c, S3a-S3c) with 1 kΩ series resistances marking the 1 KM, 2 KM and 3 KM sections; the phases feed the MC through a relay driver and Relays 1-3.]

Fig 4 Block diagram showing the three phase module without any faulty line.
Table I. Table of Abbreviations

MC                    Microcontroller
S1a, S1b, S1c, etc.   Switch matrix
1k                    1 kilo-ohm resistance

C. Calculations
In the case of a copper conductor, the resistance per km is given as R (copper) = 22.5 / S, where S is the cross-sectional area in mm². Here S = 10 mm², hence R (copper) = 22.5/10 = 2.25 ohms/km.
In the case of an aluminium conductor, the resistance per km is given as R = 36 / S. Again S = 10 mm², hence R = 36/10 = 3.6 ohms/km.
For the practical analysis, since 10 mm² of copper gives 2.25 ohms/km and the resistance is inversely proportional to the area, a resistance of 1 kilo-ohm per km corresponds to an area of (10 x 2.25)/1000 = 0.0225 mm² of copper conductor. Each 1 kΩ resistor in the working model therefore represents 1 km of such a line.
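As a rough illustration of how the measured quantity maps to a distance, the following Python sketch mimics the calculation the firmware would perform. The supply voltage, sense-resistor value and ADC resolution used here are assumptions for illustration and are not taken from the paper's firmware.

# Hypothetical sketch of the distance calculation the MC performs; the names,
# supply voltage and resistor values below are assumptions, not the paper's code.

SUPPLY_V = 5.0          # assumed low DC test voltage applied at the feeder end
SENSE_R = 1000.0        # assumed sense resistor (ohms) used to measure current
R_PER_KM = 1000.0       # 1 kOhm per km, as in the paper's switch-resistor model

def fault_distance_km(adc_counts, adc_bits=10, v_ref=5.0):
    """Estimate fault distance from the ADC reading of the sense-resistor voltage."""
    v_sense = adc_counts * v_ref / (2 ** adc_bits - 1)   # ADC counts -> volts
    if v_sense <= 0.0:
        return None                                      # no current flowing: no fault detected
    current = v_sense / SENSE_R                          # Ohm's law through the sense resistor
    r_line = SUPPLY_V / current - SENSE_R                # remaining resistance is the line up to the fault
    return round(r_line / R_PER_KM)                      # 1 kOhm per km in this model

if __name__ == "__main__":
    print(fault_distance_km(341))   # example reading -> roughly 2 km in this model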

D. Block Diagram during Faulty Condition

The following figure shows the block diagram during a faulty condition. [Block diagram: phase B is shorted by switch S2b at the 2 km point; the resulting change in current through the 1 kΩ series resistors is sensed and digitised by the ADC, the MC drives Relay 2 through the relay driver, and the LCD shows "NO FAULT" for the healthy phases and "FAULT AT PH-B (2KM)" for the faulted phase.]

Fig 5 Block diagram of the faulty condition: phase B has been shorted by switch S2b to indicate a fault. The MC immediately scans and detects the fault and displays it on the LCD screen.
V. ADVANTAGES

- The system is snappy and quick to respond.
- Digitalization helps the system interface with various other modules.
- Regular updates of the digital module are available, which improves overall efficiency.
- In case of any damage, the whole setup does not need to be replaced.
- Low power requirement and high efficiency.

VI. APPLICATIONS

- Applicable in industrial areas which consume heavy three-phase AC power to run their machinery.
- Applicable in overhead transmission lines, both three-phase and single-phase.
- Applicable to ground faults.
- Applicable in equipment protection, such as heavy generators and transformers which require protection from faulty transmission lines.
- Applicable in domestic areas such as houses.
VII. CONCLUSION

Both the theoretical facts and the practical analysis prove the necessity of employing MC modules in protection schemes. The efficiency achieved by these modules far exceeds that of any analogous setup. The three-phase module prepared for fault detection works effectively, giving a precise and discrete location of the fault.

REFERENCES

[1] www.edgefxkits.com/three-phase-fault-analysis-with-auto-reset-on-temporary-fault-and-permanent-trip-otherwise
[2] www.scribd.com/doc/159496497/3-PHASE-FAULT-DETECTION-2-docx
[3] www.academia.edu/2236873/Microcontroller_based_Fault_Detector
[4] www.irjet.net/archives/V2/i1/Irjet-v2i115.pdf
[5] www.digikey.ch/en/pdf/c/cr-magnetics/3-phase-imbalance-ground-fault-detection
[6] www.mathworks.com/help/physmod/sps/powersys/ref/threephasefault.html?requestedDomain=www.mathworks.com
[7] www.mathworks.com/help/physmod/sps/powersys/ref/threephasefault.html?requestedDomain=www.mathworks.com

EFFECTIVE KEY GENERATION FOR MULTIMEDIA APPLICATION: CRYPTOGRAPHY AND STEGANOGRAPHY TOOL
Neeraj Deshwal, Student, Dept of IT, MSIT, GGSIP University, New Delhi
Anupama Kaushik, Asst. Prof., Dept of IT, MSIT, GGSIP University, New Delhi
Mahesh Kumar, Student, Dept of IT, MSIT, GGSIP University, New Delhi
Prabhat Kr. Chaubey, Student, Dept of IT, MSIT, GGSIP University, New Delhi

Abstract- The Effective Key Generation for Multimedia Application is a desktop application which deals with security during the transmission of data. Security for the data is required, as there is always a chance for someone to read the secret data. The security is implemented using steganography. Steganography is the art of hiding information in ways that prevent the detection of hidden messages. This paper describes software which alters the originality of the data files into encrypted form using the Tiny Encryption Algorithm. This algorithm is designed for simplicity and good performance. In the encryption scheme, information is encrypted using the Tiny Encryption Algorithm, which changes it into an unreadable cipher text. After encryption, the encrypted data is embedded in a video using the concept of steganography, and this video file is then sent via email. The application also has a reversal process which decrypts the data to its original format upon proper request by the user.
Keywords: Encryption, steganography

I.

INTRODUCTION

Effective Key Generation for Multimedia Application is mainly designed to provide security to data. The sender encrypts the data using the Tiny Encryption Algorithm, which requires little memory, uses simple operations and is easy to implement [1]. While encrypting the data, a key is entered by the sender. The key is used to provide security to the system and is known only to the sender and the receiver. After the key is entered, the data is encrypted, and then the encrypted data is embedded in a video using the concept of steganography: the steganography routine reads the video and the encrypted data and combines them into a single video. This video is then sent, so whenever a third person tries to open it, only the video is visible to them. The receiver receives the video and then de-embeds the encrypted data from it. The decryption is performed when the receiver enters the same key. This is how the data is transferred from sender to receiver in a secured manner. [2]
II.

TINY ENCRYPTION ALGORITHM

The Tiny Encryption Algorithm (TEA) is a


cryptographic algorithm designed to minimize
memory footprint and maximize speed. It is a
Feistel type cipher that uses operations from
mixed (orthogonal) algebraic groups. This
research presents the cryptanalysis of the Tiny
Encryption Algorithm. In this research we
inspected the most common methods in the
cryptanalysis of a block cipher algorithm. TEA
seems to be highly resistant to differential
cryptanalysis, and achieves complete diffusion
(where a one bit difference in the plaintext will
cause approximately 32 bit differences in the
cipher text) after only six rounds. Time
performance on a modern desktop computer or
workstation is very impressive.
As computer systems become more pervasive and complex, security is increasingly important. Cryptographic algorithms and protocols constitute the central component of systems that protect network transmissions and store data. The security of such systems greatly depends on the methods used to manage, establish, and distribute the keys employed by the cryptographic techniques. Even if a cryptographic algorithm is ideal in both theory and implementation, the

strength of the algorithm will be rendered useless


if the relevant keys are poorly managed[1].
The following notation is necessary for our
discussion.
Hexadecimal numbers will be subscripted with h, e.g., 10h = 16.
Bitwise Shifts: The logical left shift of x by y bits is denoted by x << y. The logical right shift of x by y bits is denoted by x >> y.
Bitwise Rotations: A left rotation of x by y bits is denoted by x <<< y. A right rotation of x by y bits is denoted by x >>> y.
Exclusive-OR: The operation of addition of n-tuples over the field F2 (also known as exclusive-or) is denoted by x ⊕ y.
The Tiny Encryption Algorithm is a Feistel type
cipher that uses operations from mixed
(orthogonal) algebraic groups. A dual shift causes
all bits of the data and key to be mixed
repeatedly. The key schedule algorithm is simple;
the 128-bit key K is split into four 32-bit blocks
K = ( K[0], K[1], K[2], K[3]). TEA seems to be
highly resistant to differential cryptanalysis and
achieves complete diffusion (where a one bit
difference in the plaintext will cause
approximately 32 bit differences in the cipher
text). Time performance on a workstation is very
impressive.[3]

The inputs to the encryption algorithm are a


plaintext block and a key K. The plaintext is P = (Left[0], Right[0]) and the cipher text is C = (Left[64], Right[64]). The plaintext block is split into two halves, Left[0] and Right[0]. Each half is used to encrypt the other half over 64 rounds of processing, and the two halves are then combined to produce the cipher text block.
Each round i has inputs Left[i-1] and Right[i-1],
derived from the previous round, as well as a
sub key K[i] derived from the 128 bit overall K.
The sub keys K[i] are different from K and from
each other.
The constant delta = (√5 − 1) × 2^31 = 9E3779B9h is derived from the golden ratio to ensure that the sub keys are distinct, and its precise value has no cryptographic significance.
The round function differs slightly from a classical Feistel cipher structure in that integer addition modulo 2^32 is used instead of exclusive-or as the combining operator. [4]

TEA belongs to the family of block ciphers in which the cipher text is calculated from the plain text by repeated application of the same transformation, or round function. In a Feistel cipher, the text being encrypted is split into two halves. The round function, F, is applied to one half using a sub key, and the output of F is exclusive-or-ed (XORed) with the other half. The two halves are then swapped. Each round follows the same pattern except for the last round, where there is often no swap. The focus of this section is the TEA Feistel cipher.

Fig. 1 TEA structure
Fig. 2 An abstraction of the ith cycle of TEA

Fig. 2 presents the internal details of the ith cycle of TEA. The round function, F, consists of the key addition, bitwise XOR, and left and right shift operations. We can describe the output (Left[i+1], Right[i+1]) of the ith cycle of TEA with the input (Left[i], Right[i]) as follows:

Left[i+1] = Left[i] ⊞ F(Right[i], K[0,1], delta[i]),
Right[i+1] = Right[i] ⊞ F(Left[i+1], K[2,3], delta[i]),
delta[i] = (i+1)/2 * delta,

where ⊞ denotes integer addition modulo 2^32. The round function, F, is defined by

F(M, K[j,k], delta[i]) = ((M << 4) ⊞ K[j]) ⊕ (M ⊞ delta[i]) ⊕ ((M >> 5) ⊞ K[k]).
The round function has the same general structure
for each round but is parameterized by the round
sub key K[i]. The key schedule algorithm is
simple; the 128-bit key K is split into four 32-bit
blocks K = ( K[0], K[1], K[2], K[3]). The keys
K[0] and K[1] are used in the odd rounds and the
keys K[2] and K[3] are used in even rounds.
Decryption is essentially the same as the encryption process; in the decode routine the cipher text is used as input to the algorithm, but the sub keys K[i] are used in the reverse order. The intermediate value of the decryption process is equal to the corresponding value of the encryption process with the two halves of the value swapped. For example, if the output of the ith encryption round is
ELeft[i] || ERight[i] (ELeft[i] concatenated with ERight[i]),
then the corresponding input to the (64-i)th decryption round is
DRight[i] || DLeft[i] (DRight[i] concatenated with DLeft[i]).
After the last iteration of the encryption process, the two halves of the output are swapped, so that the final cipher text C is ERight[64] || ELeft[64]. This cipher text is used as the input to the decryption algorithm: the input to the first decryption round is ERight[64] || ELeft[64], which is equal to the 32-bit swap of the output of the 64th round of the encryption process.
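To make the description above concrete, the following is a minimal Python sketch of TEA encryption and decryption (32 cycles of two rounds each, delta = 9E3779B9h, all arithmetic modulo 2^32). It is an illustrative reference implementation of the standard TEA definition, not the code of the tool described in this paper; the test key and plaintext are arbitrary.

# Minimal TEA sketch: 32 cycles, 128-bit key split into four 32-bit words.
MASK32 = 0xFFFFFFFF
DELTA = 0x9E3779B9

def tea_encrypt(left, right, key):            # key = (K0, K1, K2, K3), 32-bit words
    total = 0
    for _ in range(32):                        # 32 cycles of two Feistel rounds each
        total = (total + DELTA) & MASK32
        left = (left + (((right << 4) + key[0]) ^ (right + total)
                        ^ ((right >> 5) + key[1]))) & MASK32
        right = (right + (((left << 4) + key[2]) ^ (left + total)
                          ^ ((left >> 5) + key[3]))) & MASK32
    return left, right

def tea_decrypt(left, right, key):
    total = (DELTA * 32) & MASK32              # start from the final sum, run rounds backwards
    for _ in range(32):
        right = (right - (((left << 4) + key[2]) ^ (left + total)
                          ^ ((left >> 5) + key[3]))) & MASK32
        left = (left - (((right << 4) + key[0]) ^ (right + total)
                        ^ ((right >> 5) + key[1]))) & MASK32
        total = (total - DELTA) & MASK32
    return left, right

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)   # arbitrary test values
cipher = tea_encrypt(0x12345678, 0x9ABCDEF0, key)
assert tea_decrypt(*cipher, key) == (0x12345678, 0x9ABCDEF0)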
III. STEGANOGRAPHY

Steganography is the art of hiding information within other information; the word steganography means "secret writing". Nowadays cryptography has become a very popular science. Cryptography conceals the content of information, but an encrypted data package is itself evidence of the existence of valuable information. Steganography makes the cipher text invisible to unauthorized users [5, 6]. Steganography is the practice of hiding private or sensitive information within something that appears to be nothing out of the usual, and it is used to protect important information: the information is hidden so that it appears that no information is hidden at all. If a person views the object in which the information is hidden, the hidden content is not visible to them, and therefore the person will not attempt to decrypt it. The most common use of steganography is to hide a file inside another file. When information is hidden inside a file, the data is usually also encrypted with a password [7]. There are various methods used to hide information inside picture, audio and video files; the two common methods are LSB (Least Significant Bit) substitution and injection.

DCT-based steganography [6]. Algorithm to embed the encrypted information:
Step 1: Read the video file.
Step 2: Read the encrypted information that will be embedded in the video.
Step 3: To embed the information in the video, divide each video frame into 8x8 blocks.
Step 4: Process each frame from left to right, top to bottom.
Step 5: Apply the DCT to each block.
Step 6: Write the steganography video.

Algorithm to de-embed the encrypted information:
Step 1: Read the steganography video.
Step 2: Divide the steganography video into frames and break each frame into 8x8 blocks of pixels.
Step 3: Process each frame from left to right, top to bottom, subtracting 128 from each block of pixels.
Step 4: Apply the DCT to each block.
Step 5: Retrieve the encrypted information.
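The paper does not specify which DCT coefficients carry the hidden bits, so the following Python sketch (using numpy and scipy) shows one common variant purely to illustrate Steps 3-6 above: one message bit is stored per 8x8 block by forcing the sign of a single mid-frequency coefficient. The coefficient position, embedding strength and function names are assumptions for illustration, not details from the paper.

# Simplified, hypothetical DCT-based embedding in one grayscale frame (float, 0-255).
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

def idct2(block):
    return idct(idct(block.T, norm='ortho').T, norm='ortho')

def embed_bits(frame, bits, strength=20.0):
    """Embed one bit per 8x8 block by setting the sign of an assumed mid-frequency coefficient."""
    out = frame.astype(float) - 128.0                  # level shift, as in Step 3 of extraction
    h, w = out.shape
    idx = 0
    for r in range(0, h - 7, 8):                       # left to right, top to bottom
        for c in range(0, w - 7, 8):
            if idx >= len(bits):
                return out + 128.0
            coeffs = dct2(out[r:r+8, c:c+8])
            coeffs[4, 3] = strength if bits[idx] else -strength   # assumed coefficient position
            out[r:r+8, c:c+8] = idct2(coeffs)
            idx += 1
    return out + 128.0

def extract_bits(frame, n_bits):
    """Recover the embedded bits from the sign of the same coefficient."""
    data = frame.astype(float) - 128.0
    h, w = data.shape
    bits = []
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if len(bits) >= n_bits:
                return bits
            bits.append(1 if dct2(data[r:r+8, c:c+8])[4, 3] > 0 else 0)
    return bits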
IV. FUTURE SCOPE

- Correction of errors using various error-correction techniques or the development of new techniques.
- Compression of data using existing techniques or the development of new techniques.

V.

CONCLUSION

The software has been developed and tested with sample data and processes it according to our requirements. The system performance is more efficient than the existing system. The project meets our primary requirements.

REFERENCES
[1] Andem, Vikram Reddy. A Cryptanalysis of the Tiny Encryption Algorithm, 2003.
[2] Wheeler, D.J., and Needham, R.J. TEA, a tiny encryption algorithm. In Fast Software Encryption: Proceedings of the 2nd International Workshop, LNCS 1008, 1994.
[3] Hernández, Julio César; Isasi, Pedro; Ribagorda, Arturo. "An application of genetic algorithms to the cryptoanalysis of one round TEA". Proceedings of the 2002 Symposium on Artificial Intelligence and its Application, 2002.
[4] Johnson N. and Jajodia S., "Steganography: Seeing the Unseen", IEEE Computer Magazine, vol. 25, no. 4, pp. 26-34, 1998.
[5] R. Anderson, R. Needham, and A. Shamir. The steganographic file system. In IWIH: International Workshop on Information Hiding, 1998.
[6] A. Swathi, Dr. S.A.K. Jilani, "Video Steganography by LSB Substitution Using Different Polynomial Equations", International Journal of Computational Engineering Research (ijceronline.com), Vol. 2, Issue 5.
[7] T. Morkel, J.H.P. Eloff, M.S. Olivier, "An overview of image steganography".

NANOTECHNOLOGY FOR POWERFUL


SOLAR ENERGY
Sudesh Pahal
Asst Prof., Dept of ECE
MSIT, GGSIP University
New Delhi

Mayank Bhargava
Student, Dept of ECE
MSIT, GGSIP University
New Delhi

Abstract- At the nanoscale, the physical, chemical and biological properties of nano-materials differ in fundamental and valuable ways from the properties of individual atoms and bulk matter. By exploiting these nanoscale properties, a wide range of potential applications for nano-materials can be created; one of these applications is the development of revolutionary energy sources.
These alternative sources of energy include hydrogen, geothermal energy, unconventional natural gas, nuclear fission and solar energy. The use of hydrogen as an alternative energy source is currently held back by gaps in technology, in particular the lack of efficient and economical storage and transport.
Nanotechnology provides new approaches to fundamental questions about the interaction of hydrogen with nano-materials, which enable more efficient and economical storage and transport of hydrogen. Applications of nanotechnology also help to make solar energy more economical. Nanoscale photovoltaic cells are used to improve conversion efficiency, to create cost-effective solar energy storage systems, and to harvest solar energy on a large scale. Nanotechnology is also used in DC-DC power converters, fuel cells, nanocomposites for high-temperature applications, CO2 reduction and clean-up, air and water filtration, waste and water treatment, hazardous materials disposal, in-building environmental systems, and remediation.

Sabya Sanchi Mishra


Student, Dept of ECE
MSIT, GGSIP University
New Delhi

Harshit Chaudhary
Student, Dept of ECE
MSIT, GGSIP University
New Delhi

electricity. Conventional solar cells have two main


drawbacks: they can only achieve efficiencies
around ten percent and they
are expensive to manufacture. The first drawback,
inefficiency, is almost unavoidable with silicon
cells. This is because the incoming photons, or
light, must have the right energy, called the band
gap energy, to knock out an electron. If the photon
has less energy than the band gap energy then it
will pass through. If it has more energy than the
band gap, then that extra energy will be wasted as
heat. Scott Aldous, an engineer for the North Carolina Solar Center, explains that "these two effects alone account for the loss of around 70 percent of the radiation energy incident on the cell" [1].

Keywords: nano-materials, CO2 reduction and cleanup, waste and water treatment, nanocomposite

I INTRODUCTION
Conventional solar cells are called photovoltaic
cells. These cells are made out of semiconducting
material, usually silicon. When light hits the cells,
they absorb energy through photons. This absorbed
energy knocks out electrons in the silicon, allowing
them to flow. By adding different impurities to the
silicon such as phosphorus or boron, an electric
field can be established. This electric field acts as a
diode, because it only allows electrons to flow in
one direction [1]. Consequently, the end result is a
current of electrons, better known to us as

Fig. 1 Nanotechnology for powerful solar energy



Consequently, according to the Lawrence


Berkeley National Laboratory, the maximum
efficiency achieved today is only around 25 percent
[2]. Mass-produced solar cells are much less

efficient than this, and usually achieve only ten


percent efficiency.
Nanotechnology might be able to increase the
efficiency of solar cells, but the most promising
application of nanotechnology is the reduction of
manufacturing cost. Chemists at the University of
California, Berkeley, have discovered a way to
make cheap plastic solar cells that could be painted
on almost any surface.

When employing SWNTs as conducting scaffolds


in a TiO2 based DSSC, the photoconversion
efficiency can be boosted by a factor of 2 [10].
TiO2 nanoparticles were dispersed on SWNT films
to improve the photoinduced charge separation and
transport of carriers to the collecting electrode
surface. An alternative material to TiO2 used in
DSSCs is ZnO [11]. ZnO has a similar band gap
(3.2 eV) and band edge position to TiO2 [12], with
similar or smaller crystallite sizes than that of
typical TiO2. ZnO nanowires have been used in
DSSCs [13].
II DYE-SENSITIZED NANOCRYSTALLINE
SOLAR CELL (DSC)

Fig 2. Picture of a solar cell, which utilizes nanorods to convert


light into electricity [3].

These new plastic solar cells achieve efficiencies of


only 1.7 percent; however, Paul Alivisatos, a professor of chemistry at UC Berkeley, states, "This technology has the potential to do a lot better. There is a pretty clear path for us to take to make this perform much better" [3].
The conversion efficiency of dye-sensitized solar
cells (DSSCs) has currently been improved to
above 11% [4]. DSSCs with high conversion
efficiency and low cost have been proposed as an
alternative to silicon based photovoltaics [5, 6].
In conventional liquid-electrolyte-based DSSCs, the encapsulation problem posed by the use of the liquid electrolyte, together with solvent leakage and evaporation, are the main challenges; therefore, much work is being done to make an all solid-state DSSC [7].
In addition, the use of solvent free electrolytes in
the DSSC is expected to offer stable performance
for the device. Plastic and solid-state DSSCs
incorporating single-walled nanotubes (SWNTs)
and imidazolium iodide derivative have been
fabricated [8]. The introduction of carbon
nanotubes (CNTs) can improve solar cell
performance through reduction of the series
resistance. TiO2 coated CNTs were recently used
in DSSCs [9]. Compared with a conventional TiO2 cell, a TiO2-coated CNT (0.1 wt%) cell gives an increase in short-circuit current density (JSC), resulting in a 50% increase in conversion efficiency.

A schematic presentation of the operating


principles of the DSC is given in Fig. 3. At the
heart of the system is a mesoporous oxide layer
composed of nanometer-sized particles which have
been sintered together to allow for electronic
conduction to take place [19]. The material of
choice has been TiO2 (anatase) although alternative
wide band gap oxides such as ZnO [14], and
Nb2O5 [15] have also been investigated. Attached
to the surface of the nanocrystalline film is a
monolayer of the charge transfer dye. Photoexcitation of the latter results in the injection of an electron into the conduction band of the oxide. The original state of the dye is subsequently restored by electron donation from the electrolyte, usually an organic solvent containing a redox system, such as the iodide/triiodide couple. The regeneration of the sensitizer by iodide intercepts the recapture of the conduction band electron by the oxidized dye. The iodide is regenerated in turn by the reduction of triiodide at the counter electrode, the circuit being completed via electron migration through the external load. The voltage generated under
external load. The voltage generated under
illumination corresponds to the difference between
the Fermi level of the electron in the solid and the
redox potential of the electrolyte. Overall the
device generates electric power from light without
suffering any permanent chemical transformation.
A recent alternative embodiment of the DSC
concept is the sensitized heterojunction usually
with an inorganic wide band gap nanocrystalline
semiconductor of n-type polarity as electron
acceptor, the charge neutrality on the dye being
restored by a hole delivered by the complementary
semiconductor, inorganic [16,17] or organic [18]
and of p-type polarity.
The prior photo-electrochemical variant, being
further advanced in development, has an AM 1.5
solar conversion efficiency of over 10%, while that
of the solid-state device is, as yet, significantly
lower.


Fig. 3. Operation and energy level scheme of the dye-sensitized nanocrystalline solar cell. Photoexcitation of the sensitizer (S) is followed by electron injection into the conduction band of the mesoporous oxide semiconductor. The dye molecule is regenerated by the redox system, which itself is regenerated at the counter electrode by electrons passed through the load. Potentials are referred to the normal hydrogen electrode (NHE). The open-circuit voltage of the solar cell corresponds to the difference between the redox potential of the mediator and the Fermi level of the nanocrystalline film, indicated with a dashed line. [19]
III PRESENT DSC RESEARCH AND DEVELOPMENT
1. Panchromatic sensitizers
Upon excitation, the sensitizer should inject electrons into the solid with a quantum yield of unity. The energy level of the excited state should be well matched to the lower bound of the conduction band of the oxide to minimize energetic losses during the electron transfer reaction. Its redox potential should be sufficiently high that it can be regenerated via electron donation from the redox electrolyte or the hole conductor. Finally, it should be stable enough to sustain about 10^8 turnover cycles, corresponding to about 20 years of exposure to natural light. Much of the research in dye chemistry is devoted to the identification and synthesis of dyes matching these requirements while retaining stability in the photoelectrochemical environment. The attachment group of the dye ensures that it spontaneously assembles as a molecular layer upon exposing the oxide film to a dye solution. This molecular dispersion ensures a high probability that, once a photon is absorbed, the excited state of the dye molecule will relax by electron injection to the semiconductor conduction band. However, the optical absorption of a single monolayer of dye is weak, a fact which was originally cited as ruling out the possibility of high-efficiency sensitized devices, since it was assumed that smooth substrate surfaces would be imperative in order to avoid the recombination losses associated with rough or polycrystalline structures in solid-state photovoltaics. Light harvesting by nanocrystalline TiO2 films resolves this dilemma of light harvesting by surface-immobilized molecular absorbers.

2. Photovoltaic performance stability

A photovoltaic device must remain serviceable for 20 years without significant loss of performance. The stability of all the constituents of the nanocrystalline injection solar cell, that is, the conducting glass, the TiO2 film, the sensitizer, the electrolyte, the counter electrode and the sealant, has therefore been subjected to close scrutiny. The stability of the TCO glass and the nanocrystalline TiO2 film being unquestionable, investigations have focused on the four other components. As a pure solid, the N3 dye is stable even in air up to 280 °C, where decarboxylation sets in. Under long-term illumination it sustained 10^8 redox cycles without noticeable loss of performance, corresponding to 20 years of continuous operation in natural sunlight. The reason for this outstanding stability is the very rapid deactivation of its excited state via charge injection into the TiO2, which occurs in the femtosecond time domain. This is at least eight orders of magnitude faster than any other competing channel of excited-state deactivation, including those leading to chemical transformation of the dye. The oxidized state N3+ of the dye, produced by the electron injection, is much less stable, although the N3/N3+ couple shows reversible electrochemical behavior in different organic solvents, indicating that the lifetime of N3+ is at least several seconds under these conditions. However, when maintained in the oxidized state, the dye degrades through loss of sulfur. Regeneration of the N3 in the photovoltaic cell should therefore occur rapidly, i.e. within nanoseconds or microseconds, to avoid this unwanted side reaction. Lack of adequate conditions for regeneration of the dye has led to cell failure.
IV CONCLUSIONS
The dye-sensitized nanocrystalline electrochemical photovoltaic system has become a standard device for the conversion of solar energy into electricity. Recent developments in the area of sensitizers for these devices have led to dyes which absorb across the visible spectrum, giving higher efficiencies. The development of a solid-state heterojunction dye solar cell holds additional potential for cost reduction and simplification of the manufacturing of dye solar cells.
REFERENCES
[1] Aldous, Scott. "How Solar Cells Work." How Stuff Works. 22 May 2005. <http://science.howstuffworks.com/solarcell1.htm>.
[2] Paul Preuss. "An unexpected discovery could yield a full spectrum solar cell." Research News, Berkeley Lab. 18 November 2002. <http://www.lbl.gov/Science-Articles/Archive/MSD-full-spectrum-solar-cell.html>.
[3] Sanders, Bob. "Cheap, Plastic Solar Cells May Be On The Horizon." UC Berkeley Campus News. 28 March 2002.
[4] Wang Z S, Yamaguchi T, Sugihara H and Arakawa H 2005 Langmuir 21 4272
[5] Gratzel M 2001 Nature 414 338
[6] Hagfeldt A and Gratzel M 2000 Acc. Chem. Res. 33 269
[7] Bach U, Lupo D, Comte P, Moser J E, Weissortel F, Salbeck J, Spreitzer H and Gratzel M 1998 Nature 395 583
[8] Ikeda N and Miyasaka T 2007 Chem. Lett. 36 466
[9] Lee T Y, Alegaonkar P S and Yoo J B 2007 Thin Solid Films 515 5131
[10] Kongkanand A, Dominguez R M and Kamat P V 2007 Nano Lett. 7 676
[11] Meulenkamp E A 1999 J. Phys. Chem. B 103 7831
[12] Hagfeldt A and Gratzel M 1995 Chem. Rev. 95 49
[13] Law M, Greene L E, Johnson J C, Saykally R and Yang P 2005 Nat. Mater. 4 455
[14] K. Tennakone, G.R.R. Kumara, I.R.M. Kottegoda, V.S.P. Perera, Chem. Commun. 15 (1999).
[15] K. Sayama, H. Suguhara, H. Arakawa, Chem. Mater. 10 (1998) 3825.
[16] K. Tennakone, G.R.R.A. Kumara, A.R. Kumarasinghe, K.G.U. Wijayantha, P.M. Sirimanne, Semicond. Sci. Technol. 10 (1995) 1689-1693.
[17] B. O'Regan, D.T. Schwarz, Chem. Mater. 10 (1998) 1501-1509.
[18] U. Bach, D. Lupo, P. Comte, J.E. Moser, F. Weissortel, J. Salbeck, H. Spreitzer, M. Gratzel, Nature 395 (1998) 544.
[19] Michael Gratzel, 2003. Review: Dye-sensitized solar cells. Journal of Photochemistry and Photobiology C: Photochemistry Reviews 4 (2003) 145-153.


A Study on Nature Inspired


Metaheuristics Techniques
Anupama Kaushik
Asst Prof., Dept. of IT
MSIT, GGSIP University
New Delhi

Abstract A metaheuristic is a high-level


problem-independent algorithmic framework that
provides a set of guidelines or strategies to develop
heuristic optimization algorithms especially with
incomplete or imperfect information or limited
computation capacity. Compared to optimization algorithms and iterative methods, these algorithms do not guarantee that a globally optimal solution can be found on some class of problems. Many
metaheuristics implement some form of stochastic
optimization, so that the solution found is dependent
on the set of random variables generated. By
searching over a large set of feasible solutions,
metaheuristics can often find good solutions with less
computational effort than optimization algorithms,
iterative methods, or simple heuristics. This paper
discusses three metaheuristic algorithms, Firefly ,
Cuckoo Search and Bat inspired algorithm.
KeywordsMetaheuristic; Firefly; Cuckoo Search;
Bat inspired algorithm;

I.

INTRODUCTION

Now-a-days optimization is everywhere from


engineering design to business planning and from
routing of the Internet to holiday planning. In all
these activities we are trying to achieve certain
objectives or to optimize something such as profit,
quality and time. As resources, time and money
are always limited in real-world applications, we
have to find solutions to optimally use these
valuable resources under various constraints.
Mathematical optimization or programming is the study of such planning and design problems using mathematical tools. Nowadays, computer simulations have become an indispensable tool for solving such optimization problems with various efficient search algorithms.
Very recently a class of algorithms called heuristics has come up which provides quality solutions to a tough optimization problem in a reasonable amount of time, but with no guarantee that optimal solutions are reached. A further development over heuristic algorithms is the so-called metaheuristic algorithms. Here "meta-" means "beyond" or "higher level", and they generally perform better than simple heuristics. In addition, all metaheuristic
algorithms use certain tradeoff of randomization and
local search. It is worth pointing out that no agreed
definitions of heuristics and meta- heuristics exist
in the literature; some use heuristics and
metaheuristics interchangeably. However, the
recent trend tends to name all stochastic
algorithms with randomization and local search as
metaheuristic.
Heuristics is a way of producing acceptable solutions to a complex problem by trial and error in a reasonably practical time. The complexity of the problem of interest makes it impossible to search every possible solution or combination; the aim is to find a good feasible solution in an acceptable timescale.
solutions can be found and we even do not know
whether an algorithm will work and why if it does
work. The idea is to have an efficient but practical
algorithm that will work most of the time and is able
to produce good quality solutions.
Among the quality solutions found, it is expected that some of them are nearly optimal, though there is no guarantee of such optimality. Two major
components of any metaheuristic algorithms are:
intensification and diversification, or exploitation
and exploration. Diversification means to generate
diverse solutions so as to explore the search space on
the global scale, while intensification means to
focus on the search in a local region by exploiting
the information that a current good solution is found
in this region. This is used in combination with the selection of the best solutions. The selection of the
best ensures that the solutions will converge to the
optimality,
while
the
diversification
via

randomization avoids the solutions being trapped


at local optima and, at the same time, increases
the diversity of the solutions. The good
combination of these two major components will
usually ensure that the global optimality is
achievable.
Metaheuristic algorithms can be classified in
many ways. One way is to classify them as:
population-based and trajectory-based.
For
example, genetic algorithms are population-based
as they use a set of strings, so is the particle
swarm optimization (PSO) which uses multiple
agents or particles.
On the other hand, simulated annealing uses a
single agent or solution which moves through the
design space or search space in a piecewise style.
A better move or solution is always accepted,
while a not-so-good move can be accepted with a
certain probability. The steps or moves trace a trajectory in the search space, with a non-zero
probability that this trajectory can reach the global
optimum.
This paper is organized as follows: Section II describes the firefly algorithm, Section III describes the bat algorithm, and Section IV describes the cuckoo search algorithm.
II.

FIREFLY ALGORITHM

The firefly algorithm (FA) is a metaheuristic algorithm given by Xin-She Yang [1]. It was inspired by the flashing behaviour of fireflies and is based on the following three rules:
1. All fireflies are unisexual, so that any one firefly will be attracted to all other fireflies.
2. Their attractiveness is proportional to their brightness, and for any two fireflies, the less bright one will be attracted to the brighter one; however, the perceived brightness decreases as their distance increases.
3. The brightness of a firefly is determined
by the characteristics of the objective
function.
Since the firefly's attractiveness is proportional to the light intensity seen by adjacent fireflies, the variation of attractiveness β with the distance r is given as

β = β0 exp(-γ r^2)                                        (1)

where β0 is the attractiveness at r = 0.
The distance between any two fireflies i and j, at locations xi and xj respectively, is the Cartesian distance [2]:

rij = ||xi - xj|| = sqrt( Σ (k = 1 to d) (xi,k - xj,k)^2 )   (2)

where xi,k is the kth component of the spatial coordinate xi of the ith firefly. In 2-D we have

rij = sqrt( (xi - xj)^2 + (yi - yj)^2 )                   (3)

The movement of a firefly i that is attracted to another, more attractive (brighter) firefly j is determined by [1]

xi(t+1) = xi(t) + β0 exp(-γ rij^2) (xj(t) - xi(t)) + αt εi(t)   (4)

where the second term is due to the attraction and the third term is randomization, with αt being the randomization parameter and εi(t) a vector of random numbers drawn from a Gaussian or uniform distribution at time t.
As αt controls the randomness, this parameter can be tuned during the iterations to vary with the iteration counter t. So αt can be expressed as αt = α0 δ^t, 0 < δ < 1, where α0 is the initial randomness scaling factor and δ is essentially a cooling factor. For most applications, we can use δ = 0.95 to 0.97.
The FA is more efficient if α0 is associated with the scaling of the design variables. Let L be the average scale of the problem; then α0 = 0.01 L can be set initially. The factor 0.01 is used in determining α0 because the random walk requires a number of steps to reach the target while balancing the local exploitation without jumping too far in a few steps [3, 4].
The parameter γ controls the variation of the attractiveness. It is also related to the scaling L and can be set accordingly; if the scaling variations are not significant, then we can set γ = O(1). For most applications, γ typically varies from 0.1 to 10 [2].
The basic steps of the Firefly Algorithm (FA) can
be summarised as the pseudo code shown in Fig. 1.

Procedure FA:
1. Define the objective function f(x), where x = (x1, ..., xd).
2. Generate the initial population of fireflies xi (i = 1, 2, ..., n).
3. Determine the light intensity Ii at xi via f(xi).
4. While (t < MaxGen)
5.   For i = 1 to n (all n fireflies)
6.     For j = 1 to n (n fireflies)
7.       If (Ij > Ii)
8.         Move firefly i towards j using equation (4);
9.       End if
10.      Attractiveness varies with distance r via exp(-γ r^2);
11.      Evaluate new solutions and update light intensity;
12.    End for j;
13.  End for i;
14.  Rank the fireflies and find the current best;
15. End while;


Fig.1. Pseudocode Firefly Algorithm
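As a concrete illustration of the update in equation (4) and the pseudo code of Fig. 1, the short Python sketch below performs one sweep of firefly movements for an assumed minimisation problem (a lower objective value is treated as a brighter firefly). The parameter values (β0 = 1, γ = 1, cooling factor 0.97) follow the ranges quoted above; the toy sphere objective and population size are assumptions for illustration.

# One-sweep firefly movement sketch (minimisation assumed); not the authors' code.
import numpy as np

def firefly_step(X, f, beta0=1.0, gamma=1.0, alpha_t=0.01, rng=np.random.default_rng()):
    """One sweep over all firefly pairs; X is an (n, d) array of positions."""
    n, d = X.shape
    intensity = np.array([f(x) for x in X])        # lower objective = brighter firefly
    X_new = X.copy()
    for i in range(n):
        for j in range(n):
            if intensity[j] < intensity[i]:        # j is brighter, so i moves towards j
                r2 = np.sum((X_new[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)                 # Eq. (1)
                X_new[i] += beta * (X[j] - X_new[i]) \
                            + alpha_t * rng.standard_normal(d)     # Eq. (4)
    return X_new

# toy usage: minimise the sphere function in 2-D
f = lambda x: np.sum(x ** 2)
X = np.random.default_rng(0).uniform(-5, 5, size=(20, 2))
alpha = 0.5
for t in range(100):
    X = firefly_step(X, f, alpha_t=alpha)
    alpha *= 0.97                                   # cooling of the randomisation parameter
print(X[np.argmin([f(x) for x in X])])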

III.

BAT ALGORITHM

Bat algorithm (BA) is a bio-inspired algorithm


developed by Yang in 2010 [5]. The standard bat algorithm is based on the echolocation or bio-sonar characteristics of microbats. There are about 1000 different species of bats [6]. Their sizes vary widely, ranging from the tiny bumblebee bat of about 1.5 to 2 grams to the giant bats with a wingspan of about 2 m that may weigh up to about 1 kg. Most bats use echolocation to a certain degree; among all the species, microbats use echolocation extensively, while megabats do not. Microbats typically use a type of sonar, called echolocation, to detect prey, avoid obstacles, and locate their roosting crevices in the dark. They can emit a very loud sound pulse and listen for the echo that bounces back from the surrounding objects [7]. Their pulses vary in properties and can be correlated with their hunting strategies, depending on the species. Most bats use short, frequency-modulated signals to sweep through about an octave, and each pulse lasts a few thousandths of a second (up to about 8 to 10 ms) in the frequency range of 25 kHz to 150 kHz. Typically, microbats can emit about 10 to 20 such sound bursts every second, and the rate of pulse emission can be sped up to about 200 pulses per second when homing in on their prey. Since the speed of sound in air is about v = 340 m/s, the wavelength λ of the ultrasonic sound bursts with a constant frequency f is given by λ = v/f, which is in the range of 2 mm to 14 mm for the typical frequency range from 25 kHz to 150 kHz. These wavelengths are of the same order as their prey sizes. In reality, microbats use the time delay between their ears and loudness variations to sense their three-dimensional surroundings.
Based on the above description and characteristics
of bat echolocation, Xin-She Yang [5, 8] developed
the bat algorithm with the following three idealised
rules:
1. All bats use echolocation to sense distance,
and they also know the difference between
food/prey and background barriers in some
magical way;
2. Bats fly randomly with velocity vi at position xi with a frequency fmin, varying wavelength λ and loudness A0 to search for prey. They can automatically adjust the wavelength (or frequency) of their emitted pulses and adjust the rate of pulse emission r ∈ [0, 1], depending on the proximity of their target;
3. Although the loudness can vary in many ways, we assume that the loudness varies from a large (positive) A0 to a minimum constant value Amin.

A. Bat Motion
Each bat is associated with a velocity vi(t) and a location xi(t), at iteration t, in a d-dimensional search or solution space. Among all the bats, there exists a current best solution x* [8]. Therefore, the above three rules can be translated into the following updating equations for positions xi and velocities vi:

fi = fmin + (fmax - fmin) β                (5)
vi(t) = vi(t-1) + (xi(t-1) - x*) fi        (6)
xi(t) = xi(t-1) + vi(t)                    (7)

where β ∈ [0, 1] is a random vector drawn from a uniform distribution. Either wavelengths or frequencies can be used for implementation; one can use fmin = 0 and fmax = O(1), depending on the domain size of the problem of interest.
problem of interest. Initially, each bat is randomly
assigned a frequency which is drawn uniformly
from [fmin, fmax]. For this reason, bat algorithm can
be considered as a frequency-tuning algorithm to
provide a balanced combination of exploration and
exploitation. The loudness and pulse emission rates
essentially provide a mechanism for automatic
control and auto zooming into the region with
promising solutions.
B. Variations of Loudness and Pulse Rates
In order to provide an effective mechanism to control exploration and exploitation and to switch to the exploitation stage when necessary, the loudness Ai and the rate ri of pulse emission are varied during the iterations [8]. Since the loudness usually decreases once a bat has found its prey, while the rate of pulse emission increases, the loudness can be chosen as any value of convenience between Amin and Amax, assuming Amin = 0 means that a bat has just found the prey and temporarily stops emitting any sound.
With these assumptions,

Ai(t+1) = α Ai(t),   ri(t+1) = ri(0) [1 - exp(-γ t)]   (8)

where α and γ are constants. In essence, α here is similar to the cooling factor of a cooling schedule in simulated annealing. For any 0 < α < 1 and γ > 0, we have Ai(t) → 0 and ri(t) → ri(0) as t → ∞. In the simplest case, we can use α = γ.
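A compact Python sketch of how equations (5)-(8) fit together is given below. It assumes a minimisation objective, α = γ = 0.9, and a small local random walk around the current best solution (a common ingredient of the method in [8] included here as an assumption); the toy sphere objective, population size and iteration count are also assumptions for illustration.

# Bat algorithm sketch: frequency tuning (5)-(7) plus loudness/pulse-rate updates (8).
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sum(x ** 2)                      # toy objective (sphere function)

n, d, f_min, f_max = 25, 2, 0.0, 1.0
alpha = gamma = 0.9
x = rng.uniform(-5, 5, size=(n, d))
v = np.zeros((n, d))
A = np.ones(n)                                    # initial loudness A0
r0 = rng.uniform(0, 1, size=n)                    # initial pulse emission rates
r = r0.copy()
best = x[np.argmin([f(xi) for xi in x])].copy()

for t in range(1, 201):
    beta = rng.uniform(0, 1, size=(n, 1))
    freq = f_min + (f_max - f_min) * beta         # Eq. (5)
    v = v + (x - best) * freq                     # Eq. (6)
    x_new = x + v                                 # Eq. (7)
    for i in range(n):
        if rng.uniform() > r[i]:                  # occasional local walk around the best (assumed)
            x_new[i] = best + 0.01 * A.mean() * rng.standard_normal(d)
        if f(x_new[i]) <= f(x[i]) and rng.uniform() < A[i]:
            x[i] = x_new[i]
            A[i] *= alpha                         # Eq. (8): loudness decreases...
            r[i] = r0[i] * (1 - np.exp(-gamma * t))  # ...while the pulse rate increases
    best = x[np.argmin([f(xi) for xi in x])].copy()

print(best, f(best))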
IV.

CUCKOO SEARCH

Cuckoo search (CS) is an optimization algorithm developed by Xin-She Yang and Suash Deb in 2009 [9]. It was inspired by the obligate brood parasitism of some cuckoo species, which lay their

eggs in the nests of other host birds (of other


species). Some host birds can engage direct conflict
with the intruding cuckoos. For example, if a host
bird discovers the eggs are not their own, it will
either throw these alien eggs away or simply
abandon its nest and build a new nest elsewhere.
Some cuckoo species such as the New
World brood-parasitic Tapera have evolved in such
a way that female parasitic cuckoos are often very
specialized in the mimicry in colors and pattern of
the eggs of a few chosen host species [10]. Cuckoo
search idealized such breeding behavior, and thus
can be applied for various optimization problems.
Cuckoo search (CS) uses the following
representations [9]:
Each egg in a nest represents a solution, and a
cuckoo egg represents a new solution. The aim is to
use the new and potentially better solutions
(cuckoos) to replace a not-so-good solution in the
nests. In the simplest form, each nest has one egg.
The algorithm can be extended to more complicated
cases in which each nest has multiple eggs
representing a set of solutions.

CS is based on three idealized rules [9]:
1. Each cuckoo lays one egg at a time, and dumps its egg in a randomly chosen nest;
2. The best nests with high-quality eggs will carry over to the next generation;
3. The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability pa ∈ [0, 1]. In this case, the host bird can either throw the egg away or abandon the nest and build a completely new nest in a new location.
This last assumption can be approximated by a fraction pa of the n nests being replaced by new nests (with new random solutions at new locations). For a maximization problem, the quality or fitness of a solution can simply be proportional to the objective function; other forms of fitness can be defined in a similar way to the fitness function in genetic algorithms.
When generating a new solution xi(t+1) for, say, cuckoo i, a Levy flight is performed:

xi(t+1) = xi(t) + α ⊕ Levy(λ)              (9)

where α > 0 is the step size, which should be related to the scales of the problem of interest; in most cases we can use α = O(1). The product ⊕ means entry-wise multiplication. Levy flights essentially provide a random walk, while their random steps are drawn from a Levy distribution for large steps:

Levy ~ u = t^(-λ),   1 < λ ≤ 3             (10)

which has an infinite variance with an infinite mean. Here the consecutive jumps/steps of a cuckoo essentially form a random walk process which obeys a power-law step-length distribution with a heavy tail. It is worth pointing out that, in the real world, if a cuckoo's egg is very similar to a host's eggs, then this cuckoo's egg is less likely to be discovered; thus the fitness should be related to the difference in solutions.
Based on these three rules, the basic steps of the Cuckoo Search (CS) can be summarised as the pseudo code shown in Fig. 2.
pseudo code shown in Fig. 2.
Procedure Cuckoo Search:
1. Objective function f(x), x = (x1, ..., xd)^T;
2. Initialize a population of n host nests xi (i = 1, 2, ..., n);
3. while (t < MaxGeneration) or (stop criterion)
4.   Get a cuckoo (say i) randomly by Levy flights;
5.   Evaluate its quality/fitness Fi;
6.   Choose a nest among n (say j) randomly;
7.   if (Fi > Fj)
8.     Replace j by the new solution;
9.   end
10.  Abandon a fraction (pa) of the worse nests [and build new ones at new locations via Levy flights];
11.  Keep the best solutions (or nests with quality solutions);
12.  Rank the solutions and find the current best;
13. end while
14. Postprocess results and visualisation;
Fig.2. Pseudocode Cuckoo Search
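To illustrate the Levy-flight step of equation (9) (step 4 of the pseudo code), the Python sketch below draws Levy-distributed steps with Mantegna's algorithm, a common way of generating such steps. The exponent λ = 1.5, the step scaling α = 0.01 and the scaling of the step towards the current best nest are assumptions for illustration rather than details given in the paper.

# Levy-flight step sketch (Mantegna's algorithm); illustrative assumptions only.
import numpy as np
from math import gamma, sin, pi

def levy_step(d, lam=1.5, rng=np.random.default_rng()):
    """Draw a d-dimensional Levy-flight step with exponent lam."""
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0, sigma, size=d)
    v = rng.normal(0, 1, size=d)
    return u / np.abs(v) ** (1 / lam)

def new_cuckoo(x_i, best, alpha=0.01, rng=np.random.default_rng()):
    """Eq. (9): x_i(t+1) = x_i(t) + alpha (entry-wise) Levy(lambda).
    Scaling the step by (x_i - best) is a common practical choice, assumed here."""
    step = levy_step(len(x_i), rng=rng)
    return x_i + alpha * step * (x_i - best)

x_i = np.array([2.0, -1.0])
best = np.array([0.5, 0.5])
print(new_cuckoo(x_i, best))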

V.

CONCLUSION

In this paper three metaheuristic techniques i.e.


Firefly Algorithm, Cuckoo Search and Bat
Algorithm are discussed.
These can be applied to different types of
applications as they imitate the best features in
nature, especially the selection of the fittest in
biological systems which have evolved by natural
selection over millions of years. Metaheuristic
algorithms attempt to find the best (feasible)
solution out of all possible solutions of an
optimization problem. To this end, they evaluate
potential solutions and perform a series of
operations on them in order to find different, better
solutions.


REFERENCES
[1] Xin-She Yang and Xingshi He. 2013. Firefly Algorithm: Recent Advances and Applications. Int. J. Swarm Intelligence. 1: 36-50.
[2] Saibal K. Pal, C.S. Rai, and Amrit Pal Singh. 2012. Comparative Study of firefly algorithm and particle swarm optimization for noisy non-linear optimization problems. I.J. Intelligent Systems and Applications. 10: 50-57.
[3] X.S. Yang. 2009. Firefly algorithms for multimodal optimization. In Proceedings of the 5th Symposium on Stochastic Algorithms, Foundations and Applications (Eds. O. Watanabe and T. Zeugmann), Lecture Notes in Computer Science. 5792: 169-178.
[4] X.S. Yang. 2010. Engineering Optimisation: An Introduction with Metaheuristic Applications, John Wiley and Sons, USA.
[5] X. S. Yang, A New Metaheuristic Bat-Inspired Algorithm, in: Nature Inspired Cooperative Strategies for Optimization (NISCO 2010) (Eds. J. R. Gonzalez et al.), Studies in Computational Intelligence, 284, Springer Berlin, 65-74 (2010). http://arxiv.org/abs/1004.4170
[6] Colin, T., (2000). The Variety of Life. Oxford University Press, Oxford.
[7] Richardson, P., (2008). Bats. Natural History Museum, London.
[8] Xin-She Yang, Bat algorithm: literature review and applications, Int. J. Bio-Inspired Computation, Vol. 5, No. 3, pp. 141-149 (2013).
[9] Yang, X.-S., and Deb, S. (2010), Engineering Optimisation by Cuckoo Search, Int. J. Mathematical Modelling and Numerical Optimisation, Vol. 1, No. 4, 330-343 (2010).
[10] https://en.wikipedia.org/wiki/Cuckoo_search

Discrete Wavelet Transform Image


compression
Sandeep Singh, Asst. Prof., Dept of ECE, MSIT, GGSIP University, New Delhi
Nitin Khanna, Student, Dept of ECE, MSIT, GGSIP University, New Delhi
Amit Kumar, Student, Dept of ECE, MSIT, GGSIP University, New Delhi
Kapil Kumar, Student, Dept of ECE, MSIT, GGSIP University, New Delhi

Abstract- This paper proposes a simple and efficient calculation scheme for the 2D Haar wavelet transform in image compression using a sparse orthogonal matrix. Since image compression is a key process in the transmission and storage of digital images, used to tackle space and bandwidth problems, this research suggests a new image compression scheme based on the discrete wavelet transform (DWT) using simple matrix operations. The correctness of the algorithm has been verified on some real images and compared with the results obtained on the corresponding images by other compression algorithms. The work is particularly targeted towards wavelet image compression using the Haar mother wavelet, and the discussion in this paper is also limited to Haar mother wavelets only.
Keywords: Sparse orthogonal matrix, Image compression, Haar wavelet transform.

I.

INTRODUCTION

1.1 Sparse-Orthogonal Matrix


In contrast to a dense matrix, a sparse matrix is a matrix in which most of the elements are zeros. Sparse matrices are often used in scientific or engineering applications, for example in solving partial differential equations. When storing and manipulating sparse matrices in a computer program or hardware, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix and avoid storing or operating on the zero elements.
An orthogonal matrix, on the other hand, is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors), i.e.

Q·Q^T = I                                   (1)

where I is the identity matrix. Numerical analysis takes advantage of many of the properties of orthogonal matrices, for example in linear algebra: it is often desirable to compute an orthonormal basis for a space, or an orthogonal change of bases, and both take the form of orthogonal matrices. As the determinant is ±1 and all eigenvalues are of magnitude 1, these matrices are of great benefit for numeric stability.
1.2 Image Compression
Image compression is a key technology in the transmission and storage of digital images because of the vast data carried by them. Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clipart, or comics. Lossy compression methods, especially when used at lower bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where a minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces imperceptible differences may be called visually lossless. This research suggests an innovative image compression method with a simple proposal based on the discrete wavelet transform (DWT).


Fig 1 . Basic Process of Image compression

where f(x, y) is the input image and the output is the compressed image.

II. HAAR TRANSFORM

Alfred Haar (1885-1933) was a Hungarian mathematician who worked in analysis, studying orthogonal systems of functions, partial differential equations, Chebyshev approximations and linear inequalities. In 1909, Haar introduced the Haar wavelet theory. A Haar wavelet is the simplest type of wavelet. In discrete form, Haar wavelets are related to a mathematical operation called the Haar transform. The technical disadvantage of the Haar wavelet is that it is not continuous, and therefore not differentiable. This property can, however, be an advantage for the analysis of signals with sudden transitions, such as the monitoring of tool failure in machines [2].
The Haar wavelet's mother wavelet function ψ(t) can be described as:

ψ(t) = 1   for t in [0, 1/2)
     = -1  for t in [1/2, 1)
     = 0   otherwise                          (2)

Its scaling function φ(t) can be described as:

φ(t) = 1   for 0 ≤ t < 1
     = 0   otherwise                          (3)

Some of the basic advantages of the Haar transform are:
a) It is conceptually simple and fast.
b) It is memory efficient, since it can be calculated in place without a temporary array.
c) It is exactly reversible without the edge effects that are a problem with other wavelet transforms.
d) It provides a high compression ratio and high PSNR (peak signal-to-noise ratio).
e) It increases detail in a recursive manner.
Although the use of wavelet transforms has been shown to be superior to the DCT when applied to image compression, some of the finer details in the image can be sacrificed for the sake of saving a little more bandwidth or storage space. This also means that lossy compression techniques such as the DCT can be used in this area.

III. ALGORITHM FOR HAAR TRANSFORM OF AN IMAGE

Step 1: Extract the square image matrix into a variable, say im_in, in gray format so that it is in 2-D space.
Step 2: Assign the order of the square image matrix to a variable, say size.
Step 3: Make an orthogonal sparse matrix, say spa_row, of the same order size.
Step 4: Take the transpose of spa_row and store it in a variable, say spa_col.
Step 5: For the 1-D row transform, multiply im_in and spa_row and store the result in im_out_1d.
Step 6: For the 2-D transform, multiply im_out_1d and spa_col and store the result in im_out_2d.
Step 7: Store im_out_1d and im_out_2d in output files in .jpg or .png format.

The results obtained will be similar to the results shown later in this paper. Observe the vertical and horizontal detail coefficient results in the second and third quadrants (in the 2-D sample). The image in the first quadrant is a close approximation of the N×N input image, where N is the order of the pixel matrix of the input image.

Fig. 2 Input image
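The following Python (numpy) sketch follows the spirit of Steps 1-7 above: it builds a single-level sparse orthogonal Haar matrix, checks the orthogonality property of Eq. (1), and performs the 1-D and 2-D transforms. The exact orientation of the matrix products in the paper's steps is ambiguous, so the standard W·A·W^T form is used here, and the random test image is an assumption for illustration.

# Single-level 2-D Haar transform via a sparse orthogonal matrix (illustrative sketch).
import numpy as np

def haar_matrix(n):
    """Orthogonal single-level Haar analysis matrix (n must be even): the first n/2
    rows average neighbouring pixels, the last n/2 rows take their differences."""
    W = np.zeros((n, n))
    s = 1.0 / np.sqrt(2.0)
    for k in range(n // 2):
        W[k, 2 * k] = W[k, 2 * k + 1] = s          # approximation (low-pass) rows
        W[n // 2 + k, 2 * k] = s                   # detail (high-pass) rows
        W[n // 2 + k, 2 * k + 1] = -s
    return W                                        # sparse: only two non-zeros per row

img = np.random.rand(8, 8)                          # stand-in for im_in (square, gray)
W = haar_matrix(img.shape[0])                       # spa_row analogue
assert np.allclose(W @ W.T, np.eye(img.shape[0]))   # orthogonality check, Eq. (1)

rows_1d = img @ W.T                                 # 1-D (row) transform, im_out_1d analogue
full_2d = W @ img @ W.T                             # 2-D transform, im_out_2d analogue
restored = W.T @ full_2d @ W                        # exactly reversible (advantage c)
assert np.allclose(restored, img)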

Fig 3. 1-D sampled output

Fig 4. 2-D sampled output

IV. CONCLUSION

As seen above, the size of the image reduces to half of its original size every time it is sampled using the Haar wavelet transform. Hence a pyramid-type structure is formed every time we pass the image through the Haar wavelet transform filter. Since the original image is at least twice the size of the output image, the overall carrier and bandwidth requirements are reduced significantly. Since the Haar wavelet is easy to apply, the overall complexity of the system is also reduced considerably.

REFERENCES
[1] S.S. Tamboli, V. R. Udupi, "Image Compression Using Haar Wavelet Transform", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 2, Issue 8, August 2013.
[2] M. Mozammel Hoque Chowdhury and Amina Khatun, "Image Compression Using Discrete Wavelet Transform", IJCSI, Vol. 9, Issue 4, No. 1, 2012.
[3] A. Alice Blessie, J. Nalini and S.C. Ramesh, "Image Compression Using Wavelet Transform Based on the Lifting Scheme and its Implementation", IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 3, No. 1, May 2011.
[4] S. Bhavani, K. Thanushkodi, "A Survey on Coding Algorithms in Medical Image Compression", International Journal on Computer Science and Engineering, Vol. 02, No. 05, pp. 1429-1434, 2010.
[5] G. K. Kharate, V. H. Pati, "Color Image Compression Based On Wavelet Packet Best Tree", International Journal of Computer Science, Vol. 7, No. 3, March 2010.
[6] Othman Khalifa, "Wavelet Coding Design for Image Data Compression", The International Arab Journal of Information Technology, Vol. 6, No. 2, pp. 118-127, April 2009.
[7] N. A. Koli, M. S. Ali, "A Survey on Fractal Image Compression Key Issues", Information Technology Journal, 2008.
[8] M. R. Haque and F. Ahmed, "Image data compression with JPEG and JPEG2000", 8th International Conference on Computer and Information Technology, pp. 1064-1069, 2005.
[9] Aldroubi, Akram and Unser, Michael (editors), Wavelets in Medicine and Biology, CRC Press, Boca Raton FL, 1996.
[10] Benedetto, John J. and Frazier, Michael (editors), Wavelets, Mathematics and Applications, CRC Press, Boca Raton FL, 1996.

Smart Grid Technologies to Realize the


Future Smart-City
Sunil Gupta
Reader, Dept of EEE
MSIT, GGSIP University
New Delhi
sun16delhi1@gmail.com

Abstract Recent trend across the globe is to use


advanced state-of-the-art electronics, communication
and information technologies in various utilities to
realize the concept of Smart City. This paper focuses
on integration of the new innovative Smart Grid
technologies throughout the power system for the
smart generation, transmission, distribution, and
consumption of electrical power. It demonstrates the
smart grid, a major component of the smart city, through the integration of conventional and renewable energy sources. The paper is an attempt to figure out the possibilities embedded in smart grid technologies to support the applications and goals of the smart city.
Keywords: communication; IEC 61850 communication protocol; renewable energy sources; smart city; smart grid technologies

I.

INTRODUCTION

Cities around the world are facing the challenge of accommodating their increasing populations sustainably and of becoming more sustainable, competitive and livable. Many intelligent power and automation solutions already exist to enable cities to automate their key public and industrial services in the areas of city communication platforms, electricity grids, water networks, transport, buildings, district heating and cooling, etc. However, smart cities may be viewed as cities of the future. It is highly likely that smart city models will, over the coming decade, become very feasible strategies for city development.
A smart city is defined as "a city well performing in a forward-looking way in [economy, people, governance, mobility, environment, and living] built on the smart combination of endowments and activities of self-decisive, independent and aware citizens", or as "a city that monitors and integrates conditions of all of its critical infrastructures, including roads, bridges, tunnels, rails, subways, airports, sea-ports, communications, water, power, even major buildings, can better optimize its resources, plan its preventive maintenance activities, and monitor security aspects while maximizing services to its citizens" [1].

A smart power network and its components play an important role in maintaining high reliability and availability of the power supply. With the proliferation of electronics, communication and information technologies in power utilities, there is an immediate need to integrate non-conventional means of power generation, transmission and distribution into the conventional power grid, both to meet the critical demands of the smart grid and to be ready to tackle demand growth and the changing scenario due to restructuring and deregulation. The Smart Grid, being an integral part of the smart city, provides an opportunity to generate, transmit and distribute energy optimally, achieved through technological innovations, energy-efficient management, healthy market competition and intelligent decisions in management and operation.
This paper demonstrates the concept of the smart city, focusing on the integration of new, innovative technologies throughout the power system for the smart generation, transmission, distribution and consumption of electrical power. The Smart Grid, a major component of the smart city, is demonstrated through the integration of conventional sources such as hydro and renewable sources such as wind and solar energy. The paper is an attempt to figure out the possibilities embedded in smart grid technologies to support the applications and goals of the smart city.
II. SMART CITY CRITICAL COMPONENTS
AND TECHNOLOGIES
A Smart City is a developed urban area that creates sustainable economic development and a high quality of life by excelling in six key areas, i.e. Smart Economy, Smart Mobility, Smart Environment, Smart People, Smart Living and Smart Governance, as shown in Fig. 1; this can be achieved through a strong ICT infrastructure [1]. The early concept of the urban city has changed into the Smart City, adapting to the need for green and sustainable cities and to the introduction of the smartphone and 4G mobile network infrastructures.

Fig. 1 Key areas (components) of a smart city: economy, mobility, environment, people, living and governance [1].

E-services are a key component of the Smart City concept; however, other dimensions are now included, e.g. broadband, contactless technologies, smart energy, Open Data and Big Data. The many components and technologies involved in building the Smart City include [2]:

- Energy efficiency solutions for buildings and homes, automation systems, renewable installations and electric vehicle charging infrastructure.
- Utility-engaged consumers and a utility-grade smart home.
- Sensing and metering infrastructure.
- Urban and interurban real-time adaptive traffic control: traffic signals, HD enforcement systems, supervision and monitoring systems for expressways, CCTV, traffic flow detection, weather detection and guidance systems.
- An integrated mobility management platform that improves the efficiency of multimodal transportation in major corridors.
- GIS for upgrading the power grid towards the Smart Grid, including integration of renewable energy, energy storage, smart meter deployment, micro-grids, low-voltage and medium-voltage grid management, and user automation for energy efficiency and peak shedding.
- SCADA to improve the efficiency of, and resilience to disruptions of, water distribution systems, and the efficiency of electricity distribution systems.
- Weather information systems for airports, storm water management, air quality monitoring, metro fare collection, etc.
Buildings account for the largest share of
energy consumed in most cities. Opportunities exist
to make them more efficient, as well as to control
their energy use intelligently, for example through
the automatic adjustment of window blinds when
the sun is too intense and by linking lighting and air

conditioning to occupancy. IT solutions for


advanced energy and power management enable
industry and commercial building owners to
improve the efficiency of their operations and
participate as a more active player in the grid
system by adjusting load and self-generation in
response to changes in energy pricing. ABB offers
full building automation solutions to intelligently
control everything from window shutters to
lighting, heating and cooling in response to
weather, occupancy and energy prices. Lighting
control systems can deliver power savings of up to
50 percent and building automation up to 60
percent. In addition, building automation solutions
cover building security and communications
access.
About two-thirds of all the electrical energy
produced in the world is converted into mechanical
energy by electric motors. Most of these motors are
used to power fans, pumps and compressors and
are operated at constant speed all the time, even
when not needed, using throttles or valves to
control the flow of fluids or gases. Energy
efficiency solutions such as variable speed drives
(VSDs) can achieve large energy savings of the
order of 50 percent or more and reduce emissions
accordingly [3].
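As a rough back-of-the-envelope illustration of why such savings are possible (a sketch of the well-known affinity laws for centrifugal fans and pumps, not taken from [3]): shaft power scales roughly with the cube of speed, so running at reduced speed with a VSD instead of throttling at full speed cuts power sharply.

```python
def vsd_power_fraction(speed_fraction):
    """Affinity-law estimate: fan/pump shaft power scales roughly with the cube of speed."""
    return speed_fraction ** 3

for s in (1.0, 0.9, 0.8, 0.7):
    print(f"{s:.0%} speed -> about {vsd_power_fraction(s):.0%} of full power")
# e.g. 80% speed -> about 51% of full power, i.e. a saving of roughly half
```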
ABB is delivering reliable power and efficient
climate control for the Burj Khalifa building,
Dubai, UAE. The world's largest and tallest
building is using ABB drives to ensure efficient air
conditioning and intelligent control over the
electricity network and building loads. Smart
Hoche, a project in France, empowers residential
consumers to manage energy and water
consumption by providing them with real-time
information on their consumption and tools with
which to manage it. ABB is providing an energy
gateway and communications for electricity, cold
and hot water plus heating, linking to visualization
over the internet and locally in the home.
Cities are particularly good environments for
electric vehicle (EV) use since most journeys are
local, so drivers need never be far from a charging
station. Even so, charging infrastructure needs to be
built, sometimes with grid upgrades, to
accommodate the different needs of EV drivers and
to maintain electric grid reliability. Charging
infrastructure options available today can charge a vehicle in 5-6 minutes (the equivalent of a gas station refill),
for a couple of hours while owners work or shop,
or overnight. ABB is deploying EV charging
infrastructure in cities around the world. In the
Netherlands, ABB helped install 23 fast chargers to
support 550 fast charge cars on the road in 2011.
And in Estonia, ABB has installed 200 DC fast
chargers and 507 AC chargers at office locations in
a fully turnkey project. ABB has developed a new
technology that helps to power the world's first
flash charging electric bus system. The new
charging technology can be deployed on a large
capacity electric bus carrying as many as 135

passengers. The battery on the bus is charged at


selected stops with a 15-second energy boost from
a new type of automatic fast charger. This happens
while passengers board and disembark. At the end
of the bus line a 3 to 4 minute boost enables the full
recharge of the batteries. TOSA (Trolleybus
Optimization System Alimentation) is a project in
Geneva which is demonstrating this new
technology on a real bus line between Geneva
airport and the city's international exhibition
center.
The kinetic (movement) energy of a train can
represent up to 80 percent of the total energy
consumption of a rail transportation system.
Whenever a train brakes at a station, its kinetic
energy is converted into electricity and returned on
the traction power line. Where there is no
connection to the AC network, onboard loads and
other trains take a small portion of this energy and
the surplus is dissipated by the onboard braking
resistors. Thereby, recovery of the braking energy
of a moving train is no longer a luxury. Rail transit
authorities must meet the challenges of increasing
the energy efficiency of rail transportation and
reducing its environmental impact. ENVILINE
energy management solutions offer up to 30
percent energy savings for direct current (DC) rail
transportation.
Southeastern Pennsylvania Transit Authority
(SEPTA) in USA has been using the ENVILINE
energy storage system since April 2012. Equipped
with lithium-ion batteries, the solution recycles the
braking energy to reduce the energy consumption
by as much as 10 percent while simultaneously
providing frequency regulation services to the local
Regional Transmission Organization (RTO). One
of the substations on the Warsaw metro line 2 in
Poland is equipped with a 40 MJ (mega joule)
energy storage system based on double-layer supercapacitors to exploit the recovered braking energy
from the metro cars.
III. SMART GRID
Key drivers for Smart Grids are to maximize
CO2 free energy and reduce environmental impact,
improve energy efficiency across the value chain,
and increase grid reliability and stability. The
European Technology Platform defines the Smart
Grid as an electricity network that can intelligently
integrate the actions of all users connected to it
generators, consumers and those that do both, in
order to efficiently deliver sustainable, economic
and secure electricity supply [4], [5]. The primary
objectives in Smart Grid are to optimize assets
usage, reduce overall losses, improve power
quality, enable active customer participation, make
energy generation, transmission and distribution
eco-friendly and to make detection, isolation and
rectification of system disturbances automatic.

Fig 2. Smart grid technologies.


To achieve these objectives, the Smart Grid utilizes
technological enhancements in equipments and
methodologies, as shown in Fig.2 that are cost
effective, innovative, and reliable [6].

Fig.3 Smart meter

The Smart Grid collects and analyses real-time operational data across a distribution network. The Smart City empowers the consumer through smart metering, as shown in Fig. 3, which allows a two-way interaction of energy and information flow between the utility and end consumers. It helps consumers manage their energy use and thus reduces the energy cost, and it improves customer satisfaction through increased reliability in grid operations and reduced outages. The smart grid makes use of various forms of clean energy technologies to reduce the burden on fossil fuels and the emission of greenhouse gases. The energy management functions help consumers choose among various tariff plan options, reduce peak load demand and shift usage to off-peak hours, and save money and energy by using more efficient appliances and equipment. The Smart City not only provides a means of distributed green power generation but also helps in load balancing during peak hours through energy storage technologies and applications such as Plug-in Hybrid Electric Vehicles (PHEVs) [7].
In the Smart Grid, the main focus is on smart distribution. It includes advanced technologies such as Distribution Automation (DA), increased interconnection and effective utilization of Distributed Energy Resources (DER), and customer participation in distributed applications by means of innovative methods [8]. To realize these functions in the Smart Grid, a substation needs a fully integrated and fully automatic system that performs data acquisition and processing and protection and control functions accurately, and delivers quality power efficiently with minimum environmental impact from greenhouse gases. Demand response, reduction in peak load demand and power losses, and improved power quality with fewer outages, and hence an overall improvement in distribution system reliability and performance, are the main objectives that can be achieved by automating substations using advanced networking technology.
A modern Substation Automation System (SAS) uses IEC 61850 for the real-time operation of the power system [9]. In IEC 61850 based modern substations, copper cables are replaced by communication links between primary and secondary devices. This results in very significant improvements in both the cost and the performance of the electric power system [10-13]. An IEC 61850 based SAS reduces operational and maintenance expenses by integrating multiple functions in a single IED (Intelligent Electronic Device) [14]. These functions are distributed among IEDs on the same, or on different, levels of the SAS. This enables distributed intelligence in a network for developing various new and improved applications, and it improves the functionality, design and construction of modern substations [15-18]. Hence the IEC 61850 communication standard allows the substation designer to focus more attention on other important issues such as the intelligence, reliability, availability, security and efficiency of the power network.
IV. SMART CITY- PERSPECTIVES IN INDIA
The world's major cities have embarked on smart city projects, including Seoul, New York, Tokyo, Shanghai, Singapore, Amsterdam, Cairo, Dubai, Kochi and Malaga. These eco-friendly cities would provide world-class facilities with 24-hour power supply and drinking water, mass rapid urban transportation with bicycle and walking tracks, complete waste and water recycling, systems for smart grids (digitally managed systems to control energy consumption) and smart metering. Nearly 10 to 12 countries have evinced keen interest in developing smart cities with India.

Fig.4 Smart City: a prototype.

Japan, UK, Sweden, Singapore, Malaysia, Australia


and The Netherlands are keen to partner with India.
As the government draws out the blueprint of 100 smart cities across the country, India's first smart city project at Dholera is expected to get a boost at the seventh edition of the Vibrant Gujarat Summit. Spread over a 920 square kilometre area, the Dholera smart city is located about 110 kilometres from Ahmedabad along the DMIC. Gujarat International Finance Tech-City, or GIFT, is an under-construction city in the Indian state of Gujarat, about 12 km from Ahmedabad
International Airport. Its main purpose is to provide
high quality physical infrastructure (electricity,
water, gas, district cooling, roads, telecoms and
broadband), so that finance and tech firms can
relocate their operations there from Mumbai,
Bangalore, Gurgaon etc. where infrastructure is
either inadequate or very expensive. It will have a
special economic zone (SEZ), international
education zone, integrated townships, an
entertainment zone, hotels, a convention center, an
international techno park, Software Technology
Parks of India (STPI) units, shopping malls, stock
exchanges and service units.
The Union industry ministry has a plan to
develop seven cities around the Delhi-Mumbai
Industrial Corridor (DMIC) that will cross six
states. The DMIC project, comprising -- Uttar
Pradesh, Haryana, Rajasthan, Gujarat, Maharashtra
and Madhya Pradesh -- is being developed in
collaboration with Japan as a manufacturing and
trading hub. The plan is to have brand new cities
along Delhi-Mumbai Dedicated Rail Freight
Corridor which is under implementation. On these
lines the Chennai-Bangalore Industrial corridor and
Chennai-Hyderabad Industrial Corridors are also
proposed and are being developed. The focus of
these corridors will be automobile and ancillaries in
Chennai, aerospace in Bangalore and pharmaceuticals in Hyderabad. The Chennai-Bangalore Industrial Corridor is expected to cover


the cities of Ranipet and Hosur. Social
infrastructure is also encouraged along this corridor
which is an integral part of any industrialization.
Karnataka government wants to extend this
corridor to Belgaum and Mangalore with plans to
integrate mining, food parks and cements as part of
the corridor industries. Tamil Nadu government is
also planning industrial corridors along the Chennai-Madurai-Tuticorin-Tirunelveli corridor and the Coimbatore-Salem corridor. These corridors are
expected to encourage industrialization and
integration of regional economies. This could also
be seen in the rising real estate prices along the
upcoming and proposed corridors.
The European Business and Technology Centre
(EBTC) plans to initiate a pilot project to
demonstrate smart city concept at the industrial
town of Haldia in West Bengal. The project would
focus on lowering carbon footprint. The pilot
project would focus on bringing down environment
related hurdles that the industrial units in Haldia
face while expanding their operations [19].
The future smart cities will come in two packages: greenfield and brownfield. Greenfield cities would be built anew, and this would be one of the biggest urbanization drives of the 21st century. Jobs would be created in IT, big data, app creation, hardware maintenance, waste management, energy efficiency, renewable energy companies, transport and infrastructure builders, lighting (LED) and other service providers. Experts in green building codes, energy auditors, green building companies, architects, renewable energy and waste management experts, engineers and managers will be in demand.
V. CONCLUSION
The requirements of the Smart Grid scenario in the Smart City have been presented. The potential of the technical features available with smart grid technologies to support these requirements, and also to streamline grid operations at the distribution level with improved reliability and efficiency, has been explored. The analysis shows that adopting advanced and innovative communication and information technologies in various utilities provides many opportunities for advanced and futuristic Smart City applications.

REFERENCES
[1] www.itu.int
[2] www.metropolitansolutions.de
[3] www.abb.com
[4] C. Cecati, G. Mokryani, A. Piccolo, and P. Siano, "An overview on the smart grid concept," 36th Annual Conference of the IEEE Industrial Electronics Society (IECON), pp. 3322-3327, 2010.
[5] Z. Xue-Song, C. Li-Qiang and M. You-Jie, "Research on smart grid technology," International Conference on Computer Application and System Modeling (ICCASM), pp. 599-603, 2010.
[6] John D. McDonald, P.E., "Smart Grid Applications, Standards Development and Recent Deployments," GE Energy T&D. http://www.ieeepesboston.org/files/2011/06/McDonaldSlides.pdf
[7] Ikbal Ali, Mini S. Thomas, Sunil Gupta, "Substation communication architecture to realize the future smart grid," International Journal of Energy Technologies and Policy, Vol. 1(4), pp. 25-35, 2011.
[8] Ikbal Ali, Mini S. Thomas, Sunil Gupta and Suhail Hussain, "Information modeling for distributed energy resource integration in IEC 61850 based substations," 12th IEEE India International Conference (INDICON 2015), Jamia Millia Islamia (JMI), 17-20 December 2015.
[9] IEC 61850: Communications Networks and Systems in Substations, 2002-2005. Available online: http://www.iec.ch/
[10] Ikbal Ali, Mini S. Thomas, Sunil Gupta, "Features and scope of IEC 61850," National Electrical Engineering Conference (NEEC-2011), Delhi Technological University (DTU), 16-17 December 2011.
[11] Ikbal Ali, Mini S. Thomas, Sunil Gupta, "Integration of PSCAD based power system & IEC 61850 IEDs to test fully digital protection schemes," IEEE PES Innovative Smart Grid Technologies Conference (IEEE ISGT ASIA 2013), 10-13 November 2013, Bangalore, India.
[12] Ikbal Ali, Mini S. Thomas, Sunil Gupta, "Sampled values packet loss impact on IEC 61850 distance relay performance," IEEE PES Innovative Smart Grid Technologies Conference (IEEE ISGT ASIA 2013), 10-13 November 2013, Bangalore, India.
[13] Ikbal Ali, Mini S. Thomas, Sunil Gupta, "Methodology & tools for performance evaluation of IEC 61850 GOOSE based protection schemes," IEEE Power India International Conference (PICON 2012), December 2012, DCRUST, Murthal, Haryana, India.
[14] J.D. McDonald, "Substation automation, IED integration and availability of information," IEEE Power and Energy Magazine, vol. 1, no. 2, pp. 22-31, Mar.-Apr. 2003.
[15] M.C. Janssen and A. Apostolov, "IEC 61850 impact on substation design," IEEE/PES Transmission and Distribution Conference and Exposition, Chicago, 2008, pp. 1-7.
[16] A. Apostolov and D. Tholomier, "Impact of IEC 61850 on power system protection," IEEE PES Power Systems Conference and Exposition (PSCE'06), Atlanta, pp. 1053-1058.
[17] S. Mohagheghi, J. C. Tournier, J. Stoupis, L. Guise, T. Coste, C. A. Andersen, and J. Dall, "Applications of IEC 61850 in distribution automation," IEEE/PES Power Systems Conference and Exposition (PSCE), pp. 1-9, 2011.
[18] R.E. Mackiewicz, "Overview of IEC 61850 and benefits," IEEE PES Transmission and Distribution Conference and Exhibition, 21-24 May 2006, pp. 376-383.
[19] smart-cities-india.com

Study of Methods of Noise Reduction in Fuzzy Based Image Segmentation

Jyoti Arora, Asst Prof., MSIT, GGSIP University, New Delhi
Digvijay Singh Latwal, Student, MSIT, GGSIP University, New Delhi
Himanshu Gupta, Student, MSIT, GGSIP University, New Delhi
Harish Phulara, Student, MSIT, GGSIP University, New Delhi

Abstract- Image segmentation is the process of partitioning a digital image into multiple segments, i.e. sets of pixels. The goal of segmentation is to simplify
and change the representation of an image into
something that is more meaningful and easier to
analyze. Image segmentation is typically used to
locate objects and boundaries i.e. lines, curves etc. in
images. However, some inherent noise is present in
every image. In the present paper, the emphasis is on
reducing noise, especially salt and pepper noise from
the image by applying different filters. Several filters
were studied such as median filter, mean filter etc.
Properties of image such as PSNR and MSE were
calculated to analyse the filters.
After removal of the salt and pepper noise, FCM
algorithm was applied to the images.
Keywords Image segmentation, Fuzzy c-means,
Pattern Recognition, Machine Learning, median
filter, mean filter, FCM, Laplacian filter, Gaussian
filter, MSE, SN ratio.

I. INTRODUCTION
Segmentation is a key step towards image analysis in various image processing applications such as object recognition, pattern recognition and medical imaging. It can be stated as partitioning the image
into different regions each having homogeneous
features such as color, texture, and so on. This type
of segmentation is called clustering which is very
important in classifying different patterns in an
image. Data Mining comprises dependency
detection, class identification, class description, and
outlier/exception identification, the last focusing on
a very small percentage of data points which are
often ignored as noise. Cluster analysis plays an
important role in many engineering areas such as
data analysis, image analysis and pattern
recognition. Clustering helps in finding natural
boundaries in the data and fuzzy clustering is used
to handle the problem of vague boundaries of
clusters. In fuzzy clustering, the requirement of
crisp partition of the data is replaced by a weaker
requirement of fuzzy partition, where the


association amongst the data is represented by


fuzzy relations. Outlier identification and clustering
are interrelated processes. The fuzzy clustering
identifies groups of similar data, whereas the
outlier identification extracts noise from the data
which does not belong to any cluster. Areas of
application of fuzzy cluster analysis include for
example data analysis, pattern recognition, and
image segmentation.
The paper is organized as follows. Section II gives
the related background work on fuzzy c means
segmentation technique [11]. In section III, a brief
introduction of the filters that were used in the task
is given. Section IV discusses the approach that
was adopted during the research in detail. It also
deals with the method that was used for the work.
In section V, the experimental results that were
obtained are listed in a table. Section VI talks about
our conclusion and future scope of the paper.

II. RELATED WORK


The accuracy of image segmentation depends upon
a number of factors. This section reviews the
related works that have been done in this field.
Stephen V. Rice, Junichi Kanai and Thomas A.
Nartker [2] presented a report describing the
accuracy of fcm. Ivan Dervisevic [3] provides a
detailed performance analysis of various algorithms
in the segmentation task, with a highest accuracy of
70.21%. In his paper, the focus is on segmentation
of single typewritten characters. His paper further
discusses the results obtained after classification of
images. The images used as dataset for
classification were altered by the application of
some feature selection functions on the original
images. Faisal Mohammad, Jyoti Anarase, Milan
Shingote, Pratik Ghanwat [4] presented a paper on
fuzzy c means using pattern matching with
recognition rate of 70% for noisy data to up to
75%. Kaushik Roy, Obaidullah Md Sk, Nibaran
Das [5], in their paper compared different filters for
script identification from handwritten text. They

have proposed a scheme to identify document level


script from Indian texts written in six popular
scripts namely Devnagari, Bengali, Malayalam,
Urdu, Oriya & Roman script documents and
compared the performance using different well
known classifiers. In their paper the accuracy of the
SMO classifier is reported to be 89.7 % with a
convergence time of about 0.05 seconds. Gerard
Bloch Fabien Lauer, Ching Y. Suen b. [16], in their
paper describe a trainable feature extractor for
handwritten digit recognition. Similar work on
comparison of various machine learning classifiers
for the recognition task has been done by Elijah
Olusayo Omidiora, Ibrahim A.A., Olusayo D. F.
[17] . Another related work in the field is given by
Walid Moudani and Abdel Rahman Sayed [18].
Their paper focuses on improving image
segmentation using techniques of data mining.
They have proposed a skin detection method using
the J48 decision tree. The problem statement in
their paper involved identifying skin pixels in the
image. Mohini D Patil and Dr.Shirish S.Sane [19]
have presented a comparative study on how to
effectively classify data after dimension reduction.
They have described some of the techniques
available for dimension reduction. The paper also
deals with achieving effective and relevant
dimension reduction by using the tools given in the
Weka workbench. They performed the experiments
for their study using a number of standard datasets
which have been extensively studied. Finally, it
concludes by summarizing the results obtained
after training various classifiers like the J48
decision tree and nearest neighbour classifiers like
ANN and KNN on the dataset. Another work worth
mentioning is presented by Jisu Oh and Shan
Huang [20].
III. FILTERS USED
Digital images are prone to a variety of types of
noise. Noise is the result of errors in the image
acquisition process that result in pixel values that
do not reflect the true intensities of the real scene.
There are several ways that noise can be introduced
into an image, depending on how the image is
created. Hence various filters were used on the
images.
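Since the experiments later in the paper corrupt the test image with salt and pepper noise before filtering, a minimal NumPy sketch of such corruption is shown below (an illustration only; the noise fraction and the 0/255 extremes are assumed settings, not the authors' values).

```python
import numpy as np

def add_salt_and_pepper(img, amount=0.05, seed=0):
    """Corrupt a grey-scale image by setting a fraction `amount` of its pixels
    to pure black (pepper) or pure white (salt), roughly half of each."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0          # pepper
    noisy[mask > 1 - amount / 2] = 255    # salt
    return noisy
```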
Median Filter - The median filter is a nonlinear digital filtering technique, often used to remove noise. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise.
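A rough sketch of the median filtering described above (illustrative NumPy code, not the implementation used in this work):

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each pixel with the median of its 3x3 neighbourhood (edge-padded)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```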

Mean Filter - The idea of mean filtering is simply to replace each pixel value in an image with the mean ('average') value of its neighbours, including itself. This has the effect of eliminating pixel values which are unrepresentative of their surroundings. Mean filtering is usually thought of as a convolution filter. Often a 3×3 square kernel is used, as shown in Fig. 1, although larger kernels (e.g. 5×5 squares) can be used for more severe smoothing. (Note that a small kernel can be applied more than once in order to produce a similar, but not identical, effect to a single pass with a larger kernel.)

Fig 1. 3×3 averaging kernel often used in mean filtering
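A minimal sketch of convolving an image with the 3×3 averaging kernel of Fig. 1 (illustrative code, not the authors' implementation):

```python
import numpy as np

def mean_filter_3x3(img):
    """Convolve the image with a 3x3 averaging kernel (each weight = 1/9)."""
    kernel = np.full((3, 3), 1.0 / 9.0)
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out
```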

Gaussian Filter - Mathematically, a Gaussian filter modifies the input signal by convolution with a Gaussian function; this transformation is also known as the Weierstrass transform. The Gaussian function would theoretically require an infinite window length. However, since it decays rapidly, it is often reasonable to truncate the filter window and implement the filter directly for narrow windows, in effect using a simple rectangular window function. In other cases the truncation may introduce significant errors, and better results can be achieved by using a different window function. Filtering involves convolution. The one-dimensional Gaussian filter has an impulse response given by

g(x) = (1 / (σ√(2π))) exp(−x² / (2σ²))

where σ is the standard deviation of the Gaussian.

Gaussian filtering is used to blur images and


remove noise and detail.
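The truncated, sampled Gaussian kernel and a separable 2-D blur can be sketched as follows (an illustration under the impulse response above; `sigma` and `radius` are assumed parameters, not values used in the experiments):

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Sampled, truncated 1-D Gaussian impulse response, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / g.sum()

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur: filter rows, then columns, with the same 1-D kernel."""
    k = gaussian_kernel_1d(sigma, radius)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```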
Laplacian Filter-The Laplacian is a 2-D isotropic
measure of the 2nd spatial derivative of an image.
The Laplacian of an image highlights regions of
rapid intensity change and is therefore often used
for edge detection (see zero crossing edge
detectors).

The Laplacian is often applied to an image that has


first been smoothed with something approximating
a Gaussian smoothing filter in order to reduce its
sensitivity to noise, and hence the two variants will
be described together here.
The Laplacian L(x,y) of an image with pixel intensity values I(x,y) is given by

L(x,y) = ∂²I/∂x² + ∂²I/∂y²

The operator normally takes a single grey-level image as input and produces another grey-level image as output.

IV. PROPOSED APPROACH
This section discusses the method that was adopted during the course of the implementation of the FCM algorithm on the input images. Salt and pepper noise was introduced to the original image that needs to be segmented.

Each data point in the image was assigned a random membership value for each cluster. Any point x has a set of coefficients w_k(x) giving its degree of membership in the k-th cluster. With FCM, the centroid of a cluster is the mean of all points, weighted by their degree of membership in the cluster. The goal is to minimize the objective function

argmin Σ_i Σ_j w_ij^m ||x_i − c_j||²

The FCM algorithm was applied to the original image as well as to the resultant image of the median filter.

Fig 4
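A compact sketch of the standard FCM updates implied by the objective function above (the usual alternating membership/centroid updates; an illustration, not the authors' implementation):

```python
import numpy as np

def fcm(x, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means: minimise sum_i sum_j w_ij^m * ||x_i - c_j||^2.

    x : array of shape (n_points, n_features).
    Returns the membership matrix u (n_points, n_clusters) and the centroids c.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                  # random fuzzy partition

    for _ in range(n_iter):
        um = u ** m
        c = (um.T @ x) / um.sum(axis=0)[:, None]       # membership-weighted centroids
        d = np.linalg.norm(x[:, None, :] - c[None, :, :], axis=2) + 1e-12
        ratio = d[:, :, None] / d[:, None, :]          # d_ij / d_ik for all k
        u_new = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, c

# Example: segment a grey-level image by intensity into two fuzzy clusters.
# u, c = fcm(img.reshape(-1, 1).astype(float), n_clusters=2)
# labels = u.argmax(axis=1).reshape(img.shape)
```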

V. RESULTS
The following table summarizes the performance of all the filters that were used, comparing the MSE and PSNR values obtained after applying each filter to the target image. As is evident from the table, the median filter shows the best results in removing salt and pepper noise.

Fig 2. Image II. Noisy image

After the introduction of salt and pepper noise to the original image, four different filters were applied to the noisy image: the mean filter, median filter, Gaussian filter and Laplacian filter.

Filters      Median filter   Gaussian filter   Mean filter   Laplacian filter
PSNR (dB)    23.8            22.6              21.42         11.96
MSE          267.0           339.7             468.5         414.2

Table I. Compiled results
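For reference, the two quality measures compared in Table I can be computed as follows (a generic sketch assuming 8-bit images with a peak value of 255):

```python
import numpy as np

def mse(original, filtered):
    """Mean squared error between two images of the same shape."""
    diff = original.astype(float) - filtered.astype(float)
    return np.mean(diff ** 2)

def psnr(original, filtered, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    e = mse(original, filtered)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```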

VI. CONCLUSION AND FUTURE SCOPE

Fig 3

The mean squared error and peak signal-to-noise ratio of the resultant images of all four filters were calculated and compared. After comparison of the MSE and PSNR values of all four resultant images, the median filter comes out to be the best filter for images with salt and pepper noise. For segmentation, the Fuzzy C-Means algorithm was used.

A conventional FCM algorithm does not fully utilize the spatial information in the image. A fuzzy c-means (FCM) algorithm that incorporates spatial information into the membership function for clustering is a powerful method for noisy image segmentation. There is a kind of hesitation present which arises from the lack of precise knowledge in defining the membership function. These limitations do, however, present an opportunity to work towards finding ways and techniques to improve the performance of the median filter and the FCM algorithm. This will help strengthen the confidence in this work and may bring out some interesting or unexpected results which warrant further investigation.
ACKNOWLEDGEMENT
We would like to thank Mrs.Jyoti Arora for her
continuous support and critical comments over the
entire course of the work. We are also grateful to
Maharaja Surajmal Institute of Technology for its
support.

REFERENCES
[1] Ian H. Witten and Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques, second edition, Morgan Kaufmann Publishers, Elsevier, San Francisco, CA, 2005.
[2] Stephen V. Rice, Junichi Kanai and Thomas A. Nartker, A Report on the Accuracy of OCR Devices, Information Science Research Institute, University of Nevada, Las Vegas, 1993.
[3] Ivan Dervisevic, Machine Learning Methods for Image Segmentation, 2006.
[4] Faisal Mohammad, Jyoti Anarase, Milan Shingote, Pratik Ghanwat, Optical Character Recognition Implementation Using Pattern Matching, ISB&M School of Technology, Nande, Pune.
[5] Kaushik Roy, Obaidullah Md Sk, Nibaran Das, Comparison of Different Classifiers for Script Identification from Handwritten Document, 2013.
[6] Ross Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann Publishers, San Mateo, CA, 1993.
[8] John C. Platt, Fast Training of Support Vector Machines using Sequential Minimal Optimization, Microsoft Research.
[9] Harry Zhang, The Optimality of Naive Bayes, FLAIRS 2004 conference.
[10] Jisu Oh and Shan Huang, Spatial Outlier Detection and Implementation in WEKA, Computer Science Department, University of Minnesota.
[11] R. Schalkoff, Pattern Recognition: Statistical, Structural, and Neural Approaches, Wiley, 1992.
[12] G. M. Provan and M. Singh, Learning Bayesian networks using feature selection, in Fifth International Workshop on Artificial Intelligence and Statistics, 1995.
[13] M. Pazzani, Searching for attribute dependencies in Bayesian classifiers, in Fifth International Workshop on Artificial Intelligence and Statistics, 1995.
[14] P. Langley, W. Iba and K. Thompson, An analysis of Bayesian classifiers, in Proceedings of the Tenth National Conference on Artificial Intelligence, 1992.
[15] Bi Ran and Leong Tze Yun, Hand Written Digit Recognition and Its Improvement, Department of Computer Science, School of Computing, National University of Singapore.
[16] Gerard Bloch, Fabien Lauer, Ching Y. Suen, A trainable feature extractor for handwritten digit recognition, 2007.
[17] Elijah Olusayo Omidiora, Ibrahim A. A., Olusayo D. F., Comparison of Machine Classifiers for Recognition of Online and Offline Handwritten Digits, Ladoke Akintola University of Technology.
[18] Walid Moudani and Abdel Rahman Sayed, Efficient Image Classification using Data Mining, Lebanese University, 2011.
[19] Mohini D. Patil and Dr. Shirish S. Sane, Effective Classification after Dimension Reduction: A Comparative Study, Pune University, 2014.

Extending traditional methods


Rinky Dwivedi
Asst Prof., Dept of CSE
MSIT, GGSIP University
New Delhi
Abstract- "One size fits all" suggests that every problem fits into a single methodological framework; the correct phrase should be "one size fits one". Since problems are different, people are different and teams are different, our experience of research in the field of method engineering leads us to the conclusion that software companies do deconstruct methods and use only some of their practices to develop a project. Project personnel also believe that it would be impossible, or at least impractical, to implement all practices strictly. However, the fact that one size does not fit all projects does not lead us to the conclusion that every project team needs to start from scratch and design its own process: previously developed method elements can be reused. This motivates the creation of a blend of different methods based on rich knowledge of the past usage of these methods under different requirement sets. Extending a method means balancing the consistency needs of business enterprises with the flexibility required by project teams. The applicability of the method thus formed is significantly better than that of the existing methods, because the extended method contains the required constituents of different methods.
Keywords- Method Engineering, Method Extension, Configurability.

I. INTRODUCTION
Software engineering methods cover the software development process to different extents (Henderson-Sellers et al., 2007). Some of them have a narrow scope; for example, the Data Flow Diagram is an instance of a traditional software engineering method. It focuses on the design phase of software development (Yourdon, 1989) and aims at showing the flow of data in a system. However good a software engineering method is at the design phase, it only supports a delimited part of the development process. As a consequence, projects relying only on a standard software engineering method cover a limited part of the development process and have a shortcoming in quality control (Tuunanen and Rossi, 2004; Moaven et al., 2008). Hence, work may have to be performed in an ad-hoc manner in the unsupported part, which could add to the project risk list.
The method configuration approach consists of one primary process, method configuration, and one extended process, method extension. The configuration process is fundamental to our approach, and method extension should be attempted only when configurability fails to deliver the desired method. This failure can happen:
- if a method concept is missing in the desired method;
- if a method needs to extend its functionality.
The method can be extended either by adding more concepts to it, where the purpose is to make the method more efficient for the software development phase for which it was initially designed, or by extending the method to become functional in other phases of software development. Section 2 describes the position of method configuration with respect to existing Situational Method Engineering (SME) approaches. Section 3 explains method extension for missing product entities, and subsequent sections further explain extending the conceptual model and extending the component model.

II. EXISTING SME APPROACHES

To position the method configuration process


within the existing SME research we will first
describe the characteristics that can be used to
compare different SME approaches. There are
several aspects in which SME approaches differ,
but most notably are:
Meta Model and its underlying principles
Reusable building blocks
A. Meta model and its underlying principles
Meta-model (Meta-modeling) is the principal
technique used for understanding, comparing and
evaluating methods. It is generally defined as a
model of models. Broadly, two types of
information are required to develop a meta-model:
the structure of products and the procedures to
produce the products i.e. process. Reflecting the

product-process dichotomy of methods, two types


of meta-models have been developed. The first of
these are meta-data models for the product aspects
of methods. Such models introduce a system of
concepts in which the static and data aspects of
methods as well as constraints defined in them can
be represented. The second kind of meta-models
deals with the process aspects of methods; these
meta-models are called meta-activity models and
specify a system of concepts to define tasks and
task transition criteria. Next came the need to couple the product and process aspects of methods, for which the contextual meta-model and the decisional meta-model have been proposed. In the contextual meta-model, context is defined as
<situation, decision>, where decision reflects the
choice the application engineer makes in a
situation. The decisional meta-model is process and
paradigm independent, it views a method as a set of
decisions and dependencies between them together
with a mechanism for decision enactment. After
these, the requirements moved towards the need of
generic models, generic models abstract out the
common properties of meta-models. This results in
a three-layer framework consisting of the generic,
meta-model and method layers capable enough of
handling the link types between the architectures.
The configurable Meta model presented in this
proposal is engineered from generic model hence
making it an important proposal in the field. Since
the generic concepts are entered in the configurable
Meta model, it makes sense to model a number of
methods from it and after customization; a
complete and consistent situated method can be
formed.
B. Reusable building blocks
SME reuses method components, which are the
building blocks of development methods to create
situational methods in SME. The methods are built
from small parts of existing methods i.e. smaller
pieces (e.g. fragments, patterns, components,
chunks). Method fragment consists of process
fragments and product fragments. The process
fragment describes the stage, activities and task
whereas the product fragment concerns the
structure of a process product (diagrams,
deliverables etc.). Further, method fragments are defined as "standard building blocks based on a coherent part of a method". A situational method
can be constructed by combining a number of
method fragments.
A Pattern is described as a problem which occurs
over and over again in our environment and then
describes the core of the solution to that problem,
in such a way that you can use this solution a
million times over, without ever doing the same
twice. It is necessary to specialize the pattern and
to fill the abstract solution with additional

information in order to meet the specific conditions


of the actual case. It is used to guide model based
design of software. We can apply pattern for the
issue at hand maps with the general problem
specification in particular pattern. For example,
Analysis pattern which contains the knowledge on
how to appropriately represent a certain fact in
requirement engineering.
A component provides the partial solution to a
specific problem. They are more concrete as
compared to the patterns and can be used without
modification. Process building blocks are an
example for model components. Researcher
observed that components may also be specialized
or configured before or after aggregation. Method
components, the building blocks of SME, are
development methods or any coherent parts of
them.
A method chunk represents a reusable building
block for situation-driven method construction or
adaptation whereas a road-map represents a path in
a method or a specific sequence of method chunks
in a method.
In addition to above, many other existing method
modules have been used as an input for analysis.
The aim of the analysis has been to determine the
limitation of the current design and what changes
are made in order to transform it as a configurable
method component. Subsequently, the analyses
have focused on configuring a configurable method
component and adapt it for current organizational
definition.
Our notion of configurable method is similar to the
method chunk; it combines product and process
perspective into the same modelling component.
The configurable method can be atomic,
compound, transformational and constructional
besides these a new essentiality concept is added in
the method that transforms a method into
configurable method.
III.

EXTENDING METHOD WITH


MISSING PRODUCT ENTITIES

The methods in the method base exist in one of these three forms:

- The method is sufficient and complete to create the desired method.
- The retrieved method cannot lead towards the desired method; it is discarded, and another one is considered.
- The retrieved configurable method partially meets the requirements; in this case, method extension is to be performed.
During method extension, the method engineer
selects and integrates the missing product entities
to form the desired method. For method extension,

the external view of the method that represents only


the utility of the method is considered.
Steps for method extension:
1. The method engineer selects the missing method concepts.
2. Instantiate the 'is composed of' or 'is mapped to' relationships in which the new method concept or component participates.
3. Add the new method concept to the original method conceptual model or method component model of the method. The new method concept or component has essentiality = variable in the method conceptual model or method component model.
4. Generate the modified set of purposes and dependencies.

The modified set of purposes and dependencies is generated based on a set of rules adapted from the generic rules given by Gupta and Prakash (2001). To give a better understanding, a flavour of these rules is given below.

IV. ADDING A NEW METHOD CONCEPT IN THE METHOD CONCEPTUAL MODEL
Whenever a new method concept is added to the method conceptual model, the following operations need to be performed:
- The new method concept must be imported from some other method.
- The new method concept must enter into the 'is composed of' relationship with some existing method concept in the method.
- It must satisfy the completeness, conformity and fidelity constraints.

For example, consider adding Generalisation to the use-cases in the basic Use Case Diagram that shows the users' interaction with the system (Rumbaugh et al., 1991). In this situation, the newly added method concept must enter into the chunk of purposes, i.e. the basic life cycle purposes, relational purposes, constraint enforcement purposes and integration class purposes. Since the set of purposes gets modified, the dependencies and the constraints have to mutate accordingly.
For every method concept Si to be added, define the purposes <Si, create> and <Si, delete>; for the example: <Generalisation, create> and <Generalisation, delete>.
Rules for Relational purposes
Rule-1: For all added method concepts Si such that
Si is composed of Sj that already exists and Si is

not a collection of concepts, generate the relational


purposes
< Sj, Si , O> and < Sj, Si , O'>
The operations O and O' are defined as follows:
If Sj is simple definitional and Si is
complex definitional, then they are attach
and detach respectively.

If Sj is complex definitional and Si is


complex definitional, then they join and
dejoin respectively.

If Sj is simple constructional or complex


constructional and Si is complex
constructional, then they are associate and
dissociate respectively.

If Sj is definitional and Si is constructional or link, then they are couple and uncouple respectively.

Rule-2: If Si is a collection of concepts and is composed of Sk and Sj, where Sk is of link type and Sj is of constructional type, then generate the following purposes: <Sj1, Sj2, Sk, Si, relate> and <Sj1, Sj2, Sk, Si, unrelate>, where Sj1 and Sj2 are two instances of Sj.
For our example, the following purposes are generated: <use_case1, use_case2, extend_link, generalisation, relate> and <use_case1, use_case2, extend_link, generalisation, unrelate>.
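Read operationally, Rule-2 can be sketched as a small function; the concept names below are taken from the worked example above, and the sketch is illustrative only, not part of the proposed tooling.

```python
def rule2_purposes(s_i, s_k, s_j):
    """Rule-2: for a collection concept Si composed of a link-type Sk and a
    constructional Sj, generate the relate/unrelate purposes over two instances of Sj."""
    sj1, sj2 = s_j + "1", s_j + "2"
    return [(sj1, sj2, s_k, s_i, "relate"),
            (sj1, sj2, s_k, s_i, "unrelate")]

# The worked example above: adding Generalisation to the Use Case model.
for purpose in rule2_purposes("generalisation", "extend_link", "use_case"):
    print(purpose)
# ('use_case1', 'use_case2', 'extend_link', 'generalisation', 'relate')
# ('use_case1', 'use_case2', 'extend_link', 'generalisation', 'unrelate')
```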
Method conceptual model of Use Case conf:

Method Concept       Essentiality
<Actor>              <Common>
<use-case>           <Common>
<Generalisation>     <Variable>
<assoc-link>         <Variable>
<include-link>       <Variable>
<extends-link>       <Variable>
V. ADDING A NEW METHOD COMPONENT IN THE METHOD COMPOUND MODEL
Whenever a new method component is added to the method component model of a compound method, the following operations need to be performed:
- The new method component must be imported from some other method.
- The new method component must enter into the 'is mapped to' relationship with some method component that exists in the method.
- It must satisfy the icompleteness, iconformity and ifidelity constraints.

For example, consider extending the object model of the OMT method (Booch, 1994) with the activity diagram of the UML method.
Generating Integration Purposes
Rule-1 : For every method concept si of a method
component M1 that is mapped to the added method
concept sj of other method component M2,
generate following integration purposes:
(i) M2: <sj, export >, <sj, withdraw >
(ii) M1: <sj, import>, <sj, dump>
For example,
UML: <activity diagram, export>, < activity
diagram, withdraw>
OMT: < activity diagram, import>, < activity
diagram, dump>
Rule-2: If type of the method component is
constructional, then for method concept Si of
method M1 which is mapped to method concept Sj
in method component M2 generate following
integration purposes:
M1:<sj, si, correspond>, <sj, si, separate>
For example,
OMT :< object model, activity diagram,
correspond > < object model, activity diagram,
separate>
Rule-3: If type of the method component is
transformational, then for every method concept si
of method component M1 which is mapped to
method concept sj in method component M2
generate following integration purposes:
M1: <sj, si, convert>, <sj, si, deconvert>
Compositional Constraint Purposes
Rule-1: For every compositional completeness
structure, si_icompleteness, generate the purpose
<si, si_icompleteness, enforce_si_icompleteness>
Rule-2: For every compositional conformity
structure, si_iconformity, generate the purpose
<si, si_iconformity, enforce_si_iconformity>
Rule-3: For every compositional fidelity structure,
si_ifidelity, generate the purpose
<si , si _ifidelity, enforce_si _ifidelity>
For example, these rules will generate the following purposes:
<activity_diagram, activity_diagram_icompleteness, enforce_activity_diagram_icompleteness>
<activity_diagram, activity_diagram_iconsistency, enforce_activity_diagram_iconsistency>
<activity_diagram, activity_diagram_ifidelity, enforce_activity_diagram_ifidelity>
VI. CONCLUSION
During the research, it was observed that there may be some requirements that are not covered by the configured method alone. To satisfy the complete set of project requirements, the candidate method may be extended with other concepts. Method extension is a detailed and tedious task: it starts from the process framework of the method, followed by the actors responsible for the process and its management, and finally the practices that need to be added to implement the extended phase.
Extended methods are goal-driven, not artifact-driven. They do not prescribe practices or specific artifacts; rather, they suggest alternative strategies that can be applied at certain parts of the lifecycle. Extended methods pick up where the original methods leave off.
REFERENCES
[1] Booch, G. (1994). Object Oriented Analysis and Design with Applications. Benjamin/Cummings Publishing Company Inc., Redwood City, CA, second edition.
[2] Gupta, D. and Prakash, N. (2001). Engineering methods from their requirements specification. Requirements Engineering Journal, (6), 135-160.
[3] Henderson-Sellers, B., France, B., Georg, G. and Reddy, R. (2007). A method engineering approach to developing aspect-oriented modeling processes based on the OPEN process framework. Information and Software Technology, 49(7), 761-773.
[4] Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F. and Lorensen, W. (1991). Object Oriented Modelling and Design. Prentice Hall International, Englewood Cliffs, New Jersey.
[5] Tuunanen, T. and Rossi, M. (2004). Engineering a Method for Wide Audience Requirements Elicitation and Integrating It to Software Development. In Proceedings of the 37th Hawaii International Conference on System Sciences, Hawaii, pp. 1-10.
[6] Moaven, S., Habibi, J. and Ahmadi, H. (2008). Towards an Architectural-Centric Approach for Method Engineering. In IASTED Conference on Software Engineering, Austria, pp. 74-79.
[7] Yourdon, E. (1989). Modern Structured Analysis. Prentice-Hall, London.

Comparison of Different Steganography Techniques
Minakshi Tomer
Asst Prof., Dept of IT
MSIT, GGSIP University,
New Delhi
minakshi.tomer@msit.in

Laveena Rathi,
Student, Dept of IT
MSIT, GGSIP University,
New Delhi
laveena.rathi88@gmail.com

Abstract- The enhancement of data transfer through the internet has made it easier to send information accurately and faster from a source to its destination. However, information security is one of the most prominent concerns in any communication. There are many security attacks related to information security, and a lot of techniques have been implemented to prevent these attacks. Data hiding is a field related to information security and can be achieved by using digital steganography. Digital steganography is the art of hiding information in a way that prevents the detection of any hidden information. Steganography means hiding a secret message inside a larger one (the source cover) in such a way that an intruder cannot sense the presence or the contents of the hidden message. For embedding a secret message in images there exist a large number of steganography techniques; some of them are more complex than others, and all of them have their respective strong and weak points. This paper introduces the concept of steganography using the LSB method and compares the different steganography techniques available.
Keywords- Cover image, Encryption, Image Steganography, Decryption, Information Hiding, Least Significant Bit, Steganography.

I.

INTRODUCTION

The word Steganography comes from the


Greek steganos (covered or secret) and graphy
(writing or drawing) and thus means, literally,
covered writing. It is a data hiding techniques,
which aims at transmitting a message on a channel
where some other kind of information is already
being transmitted. The goal of steganography is to
hide messages inside the images in such a way that
does not allow any enemy to even detect that
there is a secret message present in the image.
Steganography attempts to hide the existence of
communication [1].
The four main categories of file formats that
can be used for steganography are text, image,
audio and video. Given the proliferation of digital

Pooja
Student, Dept of IT
MSIT, GGSIP University,
New Delhi
pooja1pathak@gmail.com

images, especially on the Internet, and given the large amount of redundant bits present in the digital representation of an image, images are the most popular cover objects for steganography [2].
Implementation of digital image Steganography
needs these three components:
i. The Cover image,
ii. The Message,
iii. The Key
Fig 1. Block diagram of steganography: encryption embeds the message file in a cover image to produce a BMP stego-image file, and decryption recovers the image and the message file.

Steganography, watermarking and cryptography


are the three fields which are said to be closely
related to each other and belong to same family i.e.
Information Security. Information Security systems
consist of three components namely, Cryptography,
Steganography and Watermarking. Last two
techniques come under information hiding.
1. Steganography vs. Cryptography: Steganography
and cryptography are very closely related to each
other such that they are said to belong to same
family of data security. Cryptography is the art of
transforming plain data to the unreadable form. It is
a process of intermixing the plain text into cipher
text by using some cryptographic algorithms.
Cipher texts are not understandable by anyone. On
the other hand, as alleged, Steganography is the art

of hiding data behind the cover using


steganography algorithms. Both steganography and
cryptography can be used for providing extra layer
to the data security framework. Steganography and
cryptography are techniques used to protect information from unwanted parties, but neither technology alone is perfect. Once the presence of hidden information is revealed or even suspected, the purpose of steganography is partly defeated. The strength of steganography increases by combining it with cryptography [3].
2. Steganography vs. Watermarking: There exists a
very significant difference between steganography
and watermarking. In steganography technique of
data hiding secret information must never be visible
to viewer. Viewer should be unaware of presence
of any information. On the other hand this is not
necessary in watermarking technique of data
hiding. Watermark may or may not be visible to the
viewer depending on the application for which it is
required.
II. HISTORY
The use of steganography can be traced back to 440 B.C.
1. Wax Tablets: In ancient Greece, people used to write secret messages on wood and then cover it with wax; a normal message was written on the wax to conceal the secret message.
2. Shaved Heads: This method was also used in ancient Greece. A slave's head was shaved and the secret message was written on the scalp. The slave's hair was then allowed to grow back, and the receiver could read the message after shaving the slave's head again.
3. Invisible Ink: Invisible ink was used to write secret messages, which became visible only when the papers carrying the messages were heated. Aqueous substances such as milk, vinegar and fruit juices were used to make invisible inks. This technique was used by the French Resistance during World War II, who wrote secret messages on the backs of couriers using invisible ink.
4. Morse Code: Morse code was used to write secret messages into knitting yarn; the clothes made out of the yarn were then worn by the carrier.
III. TYPES OF STEGANOGRAPHY TECHNIQUES

1. Spatial Domain Steganography
In spatial domain steganography the secret information is embedded directly in the pixel values; in other words, the pixels are directly altered to store the secret message. These techniques are very simple to implement, but they act on the image more directly than other techniques.

1.1. Least Significant Bit Substitution Algorithm:

The LSB substitution algorithm is one of the simplest: the LSBs of the cover image are altered according to the secret message. It is a simple but very effective technique for encapsulating secret messages in images. In grayscale images each pixel is 8 bits; colour RGB images use 24 bits per pixel to store colour information, with 8 bits each for the Red, Green and Blue components. The advantages of this algorithm are its simplicity and high perceptual quality, and a high embedding capacity can also be achieved. However, the algorithm is very sensitive to image manipulations such as cropping, scaling and rotation, lossy compression and the addition of noise. A number of variations of this algorithm exist, including edge and texture masking of the cover image to determine the number k of LSBs used for data embedding [4], adaptive LSB algorithms based on brightness, optimized LSB algorithms using cat swarm and genetic algorithms [5, 6], and image steganography based on histogram modification [7, 8]. This research work mainly focuses on the LSB steganography algorithm, so the other algorithms available in the literature will not be discussed in detail.
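As a concrete illustration of the bit manipulation involved, the short Python sketch below (an illustrative example, not code from the cited works) shows how the LSB of a single 8-bit pixel value can be cleared and replaced by one message bit.

def set_lsb(pixel_value: int, message_bit: int) -> int:
    """Replace the least significant bit of an 8-bit pixel with message_bit (0 or 1)."""
    return (pixel_value & 0xFE) | (message_bit & 0x01)

# Example: embedding bit 0 into the grayscale value 181 (10110101) gives 180 (10110100).
print(set_lsb(0b10110101, 0))  # -> 180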
1.2. Pixel Value Differencing:
In this technique the cover image is subdivided into non-overlapping blocks of at least two connected pixels, and the difference between two connected pixels is altered to hide the data. Larger differences between the pixel values of the cover image allow larger alterations. The area of the image decides the hiding capacity of this technique: if an edge area is chosen, the difference between the connecting pixels is high, whereas in smooth areas the difference is low. The ideal choice is therefore to embed the secret message in edge areas, which have more embedding capacity. The stego image produced by this technique has good quality and better imperceptibility results [9].
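A minimal Python sketch of the pixel value differencing idea is given below. It follows the commonly used Wu-Tsai style range table, but the table boundaries, the pair ordering and the simple way the two pixels are adjusted are assumptions made for illustration only.

import math

# Assumed range table: (lower, upper) bounds on the absolute difference of a pixel pair.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def pvd_embed_pair(p1: int, p2: int, bitstream: str):
    """Embed as many bits as the pair's range allows; return the new pair and remaining bits."""
    d = abs(p2 - p1)
    lower, upper = next((lo, up) for lo, up in RANGES if lo <= d <= up)
    t = int(math.log2(upper - lower + 1))          # capacity of this pair in bits
    bits, rest = bitstream[:t], bitstream[t:]
    if not bits:
        return p1, p2, rest
    new_d = lower + int(bits, 2)                   # the new difference encodes the secret bits
    change = new_d - d
    # Apply the change to the larger pixel so the sign of the difference is preserved.
    if p2 >= p1:
        p2 = min(255, max(0, p2 + change))
    else:
        p1 = min(255, max(0, p1 + change))
    return p1, p2, rest

print(pvd_embed_pair(100, 120, "10110"))   # the pair (100, 120) can carry 4 bits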
1.3. Grey Level Modification:
In this technique data is mapped by applying some modifications to the gray values of the image pixels. The technique does not hide or embed data directly; instead it maps the data by using some mathematical function. A set of pixels is selected for mapping using this mathematical function, and the concept of odd and even numbers is used to map the data onto the cover image. High hiding capacity and low computational cost are some advantages of this technique [10].
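The following sketch illustrates the odd/even mapping idea described above; the convention that an even gray value represents bit 0 and an odd value bit 1 is an assumption chosen for illustration.

def glm_map_pixel(gray: int, bit: int) -> int:
    """Map one secret bit onto a gray value: even value <-> bit 0, odd value <-> bit 1."""
    if gray % 2 == bit % 2:
        return gray                                  # parity already matches the bit
    return gray - 1 if gray > 0 else gray + 1        # minimally adjust the gray level

def glm_read_pixel(gray: int) -> int:
    """Recover the mapped bit from the parity of the gray value."""
    return gray % 2

assert glm_read_pixel(glm_map_pixel(156, 1)) == 1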

1.4. Prediction-based Steganography:
In this technique pixel values are predicted by using a predictor. It removes the loopholes of techniques that directly embed the secret data into the pixel values. Prediction error values (EVs) are used in order to improve the hiding capacity and visual quality: the values of the EVs are altered to hide the secret data. The technique mainly consists of two steps, namely prediction and entropy coding. In the prediction step a predictor is used to estimate the pixel values of the cover image, and in the second step entropy coding of the prediction error values is performed.
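A highly simplified sketch of prediction-error embedding is shown below. The choice of the left neighbour as the predictor and the embedding of one bit in the parity of the prediction error are illustrative assumptions; practical schemes use more elaborate predictors and entropy coding of the errors.

import numpy as np

def predict_left(img: np.ndarray, r: int, c: int) -> int:
    """Trivial predictor: the pixel to the left (or 128 at the border)."""
    return int(img[r, c - 1]) if c > 0 else 128

def embed_bit(img: np.ndarray, r: int, c: int, bit: int) -> None:
    """Adjust the pixel so the parity of its prediction error carries one secret bit."""
    pred = predict_left(img, r, c)
    err = int(img[r, c]) - pred
    if (err & 1) != bit:
        err += 1 if err >= 0 else -1          # minimally change the error value
    img[r, c] = np.clip(pred + err, 0, 255)

def extract_bit(img: np.ndarray, r: int, c: int) -> int:
    """Read the bit back from the parity of the prediction error."""
    return (int(img[r, c]) - predict_left(img, r, c)) & 1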
1.5. Quantization Index Modulation (QIM):
Quantization index modulation is a spatial domain steganography technique in which the secret information is embedded in the cover image by modulating an index with the embedded information and then applying quantization to the host signal with the associated quantizer(s). This technique has a number of advantages, such as high embedding capacity and high robustness.
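The sketch below shows the basic QIM idea for a single sample value: the message bit selects one of two interleaved quantizers, and the decoder recovers the bit by finding the nearer quantizer grid. The step size delta is an assumed parameter, not one taken from the literature cited here.

def qim_embed(sample: float, bit: int, delta: float = 8.0) -> float:
    """Quantize the sample with the quantizer selected by the message bit."""
    offset = (delta / 2.0) * bit                                  # bit 0 -> grid at multiples of delta,
    return round((sample - offset) / delta) * delta + offset      # bit 1 -> grid shifted by delta/2

def qim_extract(sample: float, delta: float = 8.0) -> int:
    """Decode by checking which of the two quantizer grids is closer to the sample."""
    d0 = abs(sample - round(sample / delta) * delta)
    d1 = abs(sample - (round((sample - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1

assert qim_extract(qim_embed(123.0, 1)) == 1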
2. Transform or Frequency Domain Steganography
Transform (frequency) domain steganography techniques are the most complex ways to encapsulate secret data in a cover image. Every digital image is made up of high and low frequency components: smooth areas of the image correspond to low frequencies, whereas edge or sharp areas correspond to high frequencies. Changes made in low frequency areas are easily visible to human eyes, so it is not possible to embed an equal amount of secret information in all regions of the cover image. Transform domain methods have a number of advantages over spatial domain methods: they are more robust against compression, image processing and cropping, they are less prone to attacks, and they do not depend on the image file format. Transform domain steganography techniques are broadly classified into the following types:
2.1 Discrete Wavelet Transformation Technique:
In discrete wavelet transformation (DWT) techniques the cover image is divided into four sub-bands, where the higher bands represent finer details and the lower band holds the most important information. Entropy coders are used to locate and encode the transform coefficients. DWT has the extra benefit over DCT that it offers more efficient energy compaction, without blocking artifacts after coding. The multi-resolution nature of the DWT makes it well suited for scalable image coding. Many other transforms can be applied together with the DWT, such as the integer transform, curvelet transform, contourlet transform, dual-tree DWT etc.
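As a rough illustration of how a DWT-based scheme might modify coefficients, the Python sketch below uses the PyWavelets library to decompose the cover image and alter the diagonal (HH) detail coefficients. The choice of the Haar wavelet, the HH band and the even/odd coefficient adjustment are assumptions for illustration; this is not the exact method of any paper cited above.

import numpy as np
import pywt

def dwt_embed(cover: np.ndarray, bits: np.ndarray, strength: float = 2.0) -> np.ndarray:
    """Hide a flat array of 0/1 bits in the diagonal detail band of a one-level Haar DWT."""
    ll, (lh, hl, hh) = pywt.dwt2(cover.astype(float), "haar")
    flat = hh.flatten()
    n = min(bits.size, flat.size)
    # Snap each selected coefficient to a multiple of `strength`, shifted by strength/2 for bit 1.
    flat[:n] = np.round(flat[:n] / strength) * strength + bits[:n] * (strength / 2.0)
    hh = flat.reshape(hh.shape)
    stego = pywt.idwt2((ll, (lh, hl, hh)), "haar")
    return np.clip(stego, 0, 255)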

2.2 Discrete Cosine Transformation Technique:
Discrete cosine transformation (DCT) is a very popular steganography technique, best suited for images in JPEG format. JPEG images are the most widely used on the internet and employ lossy compression; the DCT is extensively used for image and video compression. The JPEG quantization table is used to quantize every DCT block, and the quantized coefficients are used for hiding the secret message, after which coding methods such as Huffman coding are applied. In this technique the high frequency region is best for information hiding, because its coefficients often become zero after quantization, so a coefficient value does not need to be modified if the embedded data bit is zero. Some DCT-based steganography tools are JSteg/JPHide, F5, YASS (Yet Another Steganographic Scheme) and OutGuess.
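The sketch below illustrates the JPEG-style workflow described above for a single 8x8 block: forward DCT, scalar quantization, and hiding one bit in the parity of one quantized mid-frequency coefficient. The quantization step, the chosen coefficient position and the parity rule are illustrative assumptions, not the exact JSteg or F5 algorithms.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):  return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(block): return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_bit_in_block(block: np.ndarray, bit: int, pos=(4, 3), q: int = 16) -> np.ndarray:
    """Hide one bit in the parity of a quantized mid-frequency DCT coefficient of an 8x8 block."""
    coeffs = dct2(block.astype(float) - 128.0)       # level shift as in JPEG
    quant = int(np.round(coeffs[pos] / q))           # crude scalar quantization
    if (quant & 1) != bit:                           # adjust parity so it carries the bit
        quant += 1 if quant >= 0 else -1
    coeffs[pos] = quant * q
    return np.clip(idct2(coeffs) + 128.0, 0, 255)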

2.3 Spread Spectrum Steganography:
Spread spectrum is a very well known technique in digital and wireless communication. It is a process in which a narrowband signal is spread across a wider band of frequencies. After spreading, the resulting signal is added to the cover image, and the output is a stego image carrying the secret information. The embedded signal has very low power, which makes the presence of steganography very difficult to detect; the signal-to-noise ratio (SNR) in this case is very low. The pseudo-random noise generated at the transmitter must be synchronized with that generated at the receiver to obtain the desired results [11]. This technique uses a symmetric key system, which requires the transmitter and receiver to use the same key for communication. Its advantages are good stego image quality and robustness against various attacks, and it becomes very difficult for an attacker to detect and extract the embedded secret information. Further improvements could enhance the embedding capacity and reduce the bit error rate during the embedding process. To analyze the performance of this steganographic algorithm in terms of stego image quality with respect to the original cover image, the peak signal-to-noise ratio (PSNR) and mean square error (MSE) can be used.
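Since MSE and PSNR are mentioned as the evaluation metrics, a small sketch of how they are typically computed for an 8-bit stego image is given below.

import numpy as np

def mse(cover: np.ndarray, stego: np.ndarray) -> float:
    """Mean squared error between the cover and stego images."""
    return float(np.mean((cover.astype(float) - stego.astype(float)) ** 2))

def psnr(cover: np.ndarray, stego: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means the stego image is closer to the cover."""
    error = mse(cover, stego)
    return float("inf") if error == 0 else 10.0 * np.log10(max_value ** 2 / error)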
2.4 Adaptive Steganography:
The advantages of transform domain steganographic methods over image (spatial) domain techniques encourage their use. Adaptive steganography is also known as model-based steganography or statistics-aware embedding. The statistical properties of the cover image are used in this technique, and the secret information is embedded in the cover image without changing those properties. It can be of two types: in the first, random adaptive pixels are selected depending on the cover image; in the second, pixels with a higher local standard deviation are selected. This technique has a large embedding capacity and provides high security for the stego image against various attacks. In summary, every steganographic method has its own merits and demerits; depending on the type and requirements of the application, one can choose the method that best fits those requirements.
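The second variant mentioned above (selecting pixels with a higher local standard deviation) can be sketched as follows; the 3x3 window and the fixed threshold are assumptions chosen for illustration.

import numpy as np
from scipy.ndimage import uniform_filter

def busy_pixel_mask(img: np.ndarray, threshold: float = 10.0, size: int = 3) -> np.ndarray:
    """Return a boolean mask of pixels whose local standard deviation exceeds a threshold."""
    img = img.astype(float)
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img ** 2, size)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    return local_std > threshold   # candidate embedding positions lie in textured regions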
IV. LEAST SIGNIFICANT BIT TECHNIQUE

Least significant bit (LSB) insertion is the most common and simplest way to hide information in an image file. In this method a bit of the message replaces the least significant bit of a byte of the cover image. The technique is used successfully for image steganography: to the human eye the stego image looks the same as the carrier image. For a computer, an image file is simply a file that records the colours and intensities of light in different areas of the image. One of the best types of image file to hide information inside is a 24-bit BMP (Bitmap) image; when an image is of high quality and resolution it becomes very easy to hide information inside it. Although 24-bit images are best for hiding information, their large size may arouse suspicion when uploaded on the internet, so some people choose 8-bit BMPs or another image format such as GIF [12]. The least significant bit, i.e. the eighth bit of a byte, is used to hide information by changing its value to a bit of the secret message. In a 24-bit image, one can store 3 bits in every pixel by changing one bit of each of the red, green and blue colour components. The new combination of the RGB components of each pixel of the image will then be as shown in Fig 2.

Fig 2. RGB components of pixels

Structure of a text file: any text file consists of a stream of characters; each character is 1 byte (its ASCII code) and each byte consists of 8 bits.

Now let us see how LSB insertion works. Suppose that we have three adjacent pixels (9 bytes) with the following RGB encoding [13]:

10010101 00001101 11001001
10010110 00001111 11001011
10011111 00010000 11001011

When the number 300, whose binary representation is 100101100, is embedded into the least significant bits of this part of the image, overlaying these 9 bits over the LSBs of the 9 bytes above gives the following:

10010101 00001100 11001000
10010111 00001110 11001011
10011111 00010000 11001010

Although the number 300 was embedded into the grid, only 5 bits needed to be changed according to the embedded message. On average, only half of the bits in an image need to be modified to hide a secret message using the maximum cover size [14].

ENCRYPTION STAGE:
The encryption stage uses two types of files. One is the secret file which is to be transmitted securely; the other is a carrier file, such as an image, which will carry the secret data. In the encryption stage the secret information is embedded into the image.

Fig 3. Encryption Phase Process

DECRYPTION STAGE:
The decryption stage is the opposite of the encryption stage. It takes as input the carrier image in which the data is hidden. The decryption section uses the Least Significant Bit (LSB) algorithm, by which the encoded bits inside the image file are decoded back to their original state, giving the output as a text document.

Fig 4. Decryption Phase Process
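The worked example above can be reproduced with the short Python sketch below, which overlays the 9 bits of the number 300 on the least significant bits of the 9 sample bytes and counts how many bytes actually change.

cover_bytes = [0b10010101, 0b00001101, 0b11001001,
               0b10010110, 0b00001111, 0b11001011,
               0b10011111, 0b00010000, 0b11001011]

secret = 300                                    # binary 100101100
bits = [int(b) for b in format(secret, "09b")]

stego_bytes = [(byte & 0xFE) | bit for byte, bit in zip(cover_bytes, bits)]
changed = sum(1 for old, new in zip(cover_bytes, stego_bytes) if old != new)

print([format(b, "08b") for b in stego_bytes])   # matches the second grid above
print("bytes changed:", changed)                 # -> 5

# Extraction simply collects the LSBs again:
recovered = int("".join(str(b & 1) for b in stego_bytes), 2)
assert recovered == 300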

V. ALGORITHM

1. Select a cover image of size m*n and a message file as input.
2. Get the LSB of the Red component of the cover image, say R.
3. Convert the message into a bit stream, say K.
4. XOR the values of R and K.
5. If the result is 1, replace the LSB of Green by 1; else replace the LSB of Blue by 1.
6. Repeat until the whole message is embedded.

Fig 5. Flowchart to encrypt using LSB
Fig 6. Flowchart to decrypt stego image using LSB
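A minimal sketch of the embedding steps above is given below, assuming an RGB cover image stored as a NumPy array and a message already converted to a bit list; visiting the pixels in row-major order is an assumption not stated in the paper.

import numpy as np

def embed_message(cover_rgb: np.ndarray, message_bits: list) -> np.ndarray:
    """Embed bits using the XOR-of-red-LSB rule: result 1 -> set green LSB, else set blue LSB."""
    stego = cover_rgb.copy()
    h, w, _ = stego.shape
    for idx, k in enumerate(message_bits):
        if idx >= h * w:
            break                               # cover image too small for the message
        r, c = divmod(idx, w)
        red_lsb = int(stego[r, c, 0]) & 1       # step 2: LSB of the red component
        if (red_lsb ^ k) == 1:                  # step 4: XOR with the message bit
            stego[r, c, 1] |= 1                 # step 5: set the LSB of green
        else:
            stego[r, c, 2] |= 1                 # step 5: otherwise set the LSB of blue
    return stego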

VI. CONCLUSION

Steganography has been used for ages and has its roots in ancient Greece. Null ciphers, microdots and invisible ink were also very popular steganographic techniques in earlier times, and all of these techniques encourage modern day engineers and scientists to invent new steganography techniques for the digital era of computers. The LSB method has several disadvantages. A major one is that an intruder can change the least significant bit of all the image pixels, thereby destroying the hidden message; the image quality is also degraded slightly, in the range of +1 or -1 at each pixel position. The LSB technique is not immune to noise or compression. It is observed that the embedding procedure is much simpler in spatial domain steganography than in the more complex transform domain techniques. Spatial domain techniques are simple and give high stego visual quality, whereas transform domain techniques are more robust and less prone to image processing attacks. This paper has reviewed different techniques for embedding secret messages, along with their advantages and disadvantages.

REFERENCES
[1] K. B. Raja, C. R. Chowdary, Venugopal K. R., and L. M. Patnaik, "A Secure Image Steganography using LSB, DCT and Compression Techniques on Raw Images," Department of Computer Science Engineering, Bangalore, IEEE, 2005.
[2] Tahir Ali and Amit Doegar, "A Novel Approach of LSB Based Steganography Using Parity Checker," IJARCSSE, vol. 5, issue 1, January 2015.
[3] Falesh M. Shelke, Ashwini A. Dongre and Pravin D. Soni, "Comparison of different techniques for Steganography in images," IJAIEM, vol. 3, issue 2, February 2014.
[4] H. Yang, X. Sun, and G. Sun, "A High-Capacity Image Data Hiding Scheme using Adaptive LSB Substitution," Radio Engineering, vol. 18, no. 4, 2009.
[5] Z. H. Wang, C. C. Chang, and M. C. Li, "Optimizing Least Significant Bit Substitution using Cat Swarm Optimization Strategy," Information Sciences, vol. 192, no. 1, 2012.
[6] S. Wang, B. Yang, and X. Niu, "A Secure Steganography Method Based on Genetic Algorithm," Journal of Information Hiding and Multimedia Signal Processing, vol. 1, no. 1, 2010.
[7] Z. Zhao, H. Luo, Z. M. Lu, and J. S. Pan, "Reversible Data Hiding Based on Multilevel Histogram Modification and Sequential Recovery," International Journal of Electronics and Communication, vol. 65, no. 10, 2011.
[8] C. C. Lin, W. L. Tai, and C. C. Chang, "Multilevel Reversible Data Hiding Based on Histogram Modification of Difference Images," Pattern Recognition, vol. 41, no. 12, 2008.
[9] D. Wu and W. H. Tsai, "A Steganographic Method for Images by Pixel Value Differencing," Pattern Recognition, vol. 24, no. 9-10, 2003.
[10] V. M. Potdar and E. Chang, "Gray Level Modification Steganography for Secret Communication," Proceedings of the 2nd IEEE International Conference on Industrial Informatics (INDIN), Berlin, Germany, June 2004.
[11] L. M. Marvel, C. G. Boncelet, and C. T. Retter, "Spread Spectrum Image Steganography," IEEE Transactions on Image Processing, vol. 8, no. 8, 1999.
[12] V. Lokeswara Reddy, A. Subramanyam and P. Chenna Reddy, "Implementation of LSB Steganography and its Evaluation for Various File Formats," Int. J. Advanced Networking and Applications, vol. 02, issue 05, pp. 868-872, 2011.
[13] T. Morkel, J. H. P. Eloff and M. S. Olivier, "An Overview of Image Steganography," in Proceedings of the Fifth Annual Information Security South Africa Conference (ISSA2005), Sandton, South Africa, June/July 2005.
[14] Rahul Joshi, Lokesh Gagnani and Salony Pandey, "Image Steganography With LSB," IJARCET, vol. 2, issue 1, January 2013.

DISCOVERY LEARNING: HOW TO MAKE IT EFFECTIVE?
Monika Davar
Asst Prof.,
MSI, GGSIP University
New Delhi
monikadavar@yahoo.com

Abstract-- Discovery learning is an instructional approach in which the students investigate a problem, interact with the environment, explore and manipulate objects and conduct experiments to find a solution. Students discover knowledge and learn from their own observations and experimentation. The majority of recent research indicates that pure discovery learning, with no or minimal guidance from the teacher, is not very effective, while discovery under the guidance and direction of the teacher has been found to be effective. This paper lists the factors to be considered for implementing the discovery learning approach. It further deals with the stages of discovery learning and illustrates how discovery learning can be implemented effectively.
Keywords- Discovery learning, direct instructional
guidance, enactive level, iconic level, symbolic level

I INTRODUCTION
Discovery learning is an instructional approach based on constructivism. It originated in the 1960s with Bruner and is supported by the works of Piaget and S. Papert. Discovery learning can occur whenever the students are provided with a problem.
whenever the students are provided with a problem.
It embarks students on an exciting, innovative and
thought provoking journey of investigation and
inquiry. The student draws on his previous
knowledge and experiences to investigate a
problem, interacts with the environment, explores
and manipulates objects and conducts experiments
to find a solution.
Jerome Bruner discussed and popularized this method in his book "The Process of Education". The rationale behind using the discovery approach is that pupils' motivation to learn science increases if they experience the thrill scientists get from discovering scientific knowledge. The students also learn about the nature of science through the process of discovery. It provides opportunities for students to analyze the problem, collect data, conduct experiments and arrive at solutions. In pure discovery, minimal guidance is given by the
discovery minimum guidance is given by the
teacher. There is no direct transmission or
imparting of facts. Rather students discover
knowledge and learn from their own observations

& experimentation. Focus is more on the processes


of science rather than the product. Students get
training in the scientific method of attacking and
solving a problem.
II RESEARCH ON EFFECTIVENESS OF
DISCOVERY LEARNING
There are two schools of thought regarding the effectiveness of discovery learning. Some experts, such as Bruner (1961), Papert (1980), and Steffe and Gale (1995), advocate that learners learn more effectively when they discover or construct information for themselves rather than being provided with it. Others, like Cronbach and Snow (1977), Klahr and Nigam (2004), Mayer (2004) and Kirschner, Sweller and Clark (2006), feel that novice learners learn better when provided with direct instructional guidance which fully explains the concepts rather than discovering them.
Bruner advocated teaching by organising concepts
and learning by discovery. He believed that
curriculum should foster the development of
problem solving skills through the processes of
inquiry and discovery. As per Bruner (1960),
Intellectual activity of the child is no different in
kind from the intellectual activity of a scientist,
only different in degree.
Research on minimal guidance was conducted by Kolb (1971) and Kolb and Fry (1975). They followed a procedure in which the learner carries out actions, sees and understands their effects, and then grasps the general principles, with minimal guidance. They found that attempts to validate this kind of learning (which they called experiential learning) were not completely successful. Perkins (1991) also reported that guidance was necessary.
As per Steffe and Gale (1995), knowledge is
constructed by learners and thus learners need to
have opportunity to construct by being presented
with goals but instruction should be minimum.
Richard E. Mayer (2004) reviewed research on the discovery of problem-solving rules in the 1960s, the discovery of conservation strategies in the 1970s and the discovery of Logo programming strategies in the 1980s, and found in each case that guided discovery was more effective than pure discovery in helping students learn and transfer. As per Mayer, pure discovery with no guidance from the teacher, even if it involves a lot of hands-on activities and group discussion, fails to promote the selection of relevant information. On the other hand, guided discovery, in which the teacher provides systematic guidance to solve the given problem, was considered the best method for constructivist learning.
Research by Moreno (2004) found that students learn more deeply from strongly guided instruction than from discovery. Klahr and Nigam (2004), on the basis of their research studies, also report that the quality of learning is better with direct instruction in science. According to them, teachers found discovery learning successful only when students had prerequisite knowledge and underwent some prior structured experiences.
As per Paul A. Kirschner, John Sweller and Richard E. Clark (2006), "The past half century of empirical research on this issue has provided overwhelming and unambiguous evidence that minimal guidance during instruction is significantly less effective and efficient than guidance specifically designed to support the cognitive processing necessary for learning." They suggest that, based on the current knowledge of human cognitive architecture, minimally guided instruction is likely to be ineffective.
The majority of recent research thus indicates that pure discovery learning, with no or minimal guidance from the teacher, is not very effective, whereas discovery or problem solving under the guidance and direction of the teacher was found to be effective. Pure discovery learning was found to be successful only in those cases where learners have some previous knowledge about the concepts and issues related to the problem. Hence, it is suggested that whenever teachers use discovery learning for teaching science, they should act as a facilitator, helper and guide to ensure that learners move in the right direction and grasp the concepts effectively.
III MOVING TOWARDS EFFECTIVE IMPLEMENTATION OF DISCOVERY LEARNING

For discovery learning to be effective, the teacher's approach should be to teach science "with a question mark" rather than in statement form. This means that the teacher should ask questions to make students think and discover knowledge rather than directly state the facts. He should also encourage the students to ask questions. The
teacher should not only have sound scientific
knowledge but should also possess curiosity and
zeal for investigation. The teacher should have a
scientific attitude. Unless the teacher has scientific
attitude we cannot expect students under his
guidance to develop an attitude of a discoverer.
Guidance is necessary and should be provided for
students to proceed in the right direction. Teacher
should be a guide, motivator and friend of the
students. The teacher should provide references and
resources for the students to study and investigate
the problem. The teacher should monitor that
students are moving in the right direction. He
should help them in arriving at the desired
conclusion by asking relevant questions at various
stages. He should guide their thinking in the right
direction, whenever they start moving in the wrong
direction.
An atmosphere of freedom should be provided for
students to express themselves and become self-reliant. Students should never be snubbed for their
mistakes. Any ideas given by the students to solve
the problem should not be discarded. Rather by
testing various ideas, it can be shown to the learner
why it is not relevant or acceptable. Adequate
resources and infrastructure should be provided to
students to investigate, conduct experiments and
find solution to problems.
IV ILLUSTRATION OF THE STAGES OF
DISCOVERY METHOD
In a discovery lesson, the teacher decides in
advance the process or concept of scientific
knowledge which is to be discovered by the
students. The lesson proceeds through a hierarchy
of stages which may be associated with the
following levels of thought described by Bruner:
Stage -1: Enactive level
Students perform activities related to what is to be
discovered. At this stage, a student is able to think
about the nature of the physical world in terms of
personal experience. For example, to explain
Boyle's law, students are instructed to perform
activities (to discover the relationship between
pressure and volume). Students are instructed to
draw some air into a large syringe, plug the exit
and mount the syringe vertically. Then load the top
of the piston successively with objects of equal
weight (i.e. one textbook, then second followed by
third, fourth etc) and note the observations in terms
of change in volume.

Stage 2: Iconic level
At this stage the pupil might describe what happened. Based on their observations, they discover the relationship that as the pressure (in terms of the increasing number of books pushing the piston) is increased, the volume decreases. The teacher directs their thinking in the right direction.
Stage 3: Symbolic level
This is the stage where the students replace mental images (of the objects used in these activities) with symbols. For example, students now express the relationship between pressure and volume in terms of symbols:
p ∝ 1/v (where p = pressure and v = volume).
Then they conclude that pv = k (a constant).


V IMPLEMENTING DISCOVERY METHOD EFFECTIVELY

For teaching the reaction of non-metals with oxygen and water, the teacher can use the heuristic method as follows:

A. Activities
The teacher can give the following instructions to students to perform some activities:
a) Take a small amount of sulphur powder (a non-metal) in a long-handled spoon. Heat it over the flame of a burner.
b) When the sulphur starts burning, lower the spoon into a gas jar.
c) After some time, cover the mouth of the gas jar partly with a lid (while the sulphur is still burning). What do you observe? Note your observations.
d) Now add about 25 ml of water to the gas jar. Note your observations.
e) Test the nature of the solution formed with litmus paper and note your observation.

Students perform these activities and note their observations.
B. Observations:
Students will observe that as the sulphur burns inside the gas jar, a gas is formed. On adding water, a solution is formed which turns blue litmus paper red.

C. Conclusion:
On the basis of their observations and with the help of the teacher, students will draw their conclusion. Students may infer that non-metals (such as sulphur) combine with oxygen on heating to form their oxides (such as sulphur dioxide), and that the oxides of non-metals, when dissolved in water, form acids. They then try to represent their discovery in symbolic form:
(1) S + O2 → SO2
(2) SO2 + H2O → H2SO3
The teacher provides guidance wherever needed, but does not impart facts directly.

VI CONCLUSION
Discovery learning inculcates scientific attitude by


promoting the habits of keen observation, critical
evaluation and drawing relevant inferences. With
its emphasis on training in scientific method, it
fulfills the major objectives of teaching science.
This method is psychologically sound as it focuses
on active participation of the learners. It provides
better understanding and retention of concepts as
knowledge is acquired through concrete
experiences. Learning becomes an adventure and a
pleasant experience as it is based on discovering
something. Learning environment is democratic. It
provides freedom to students to investigate, clarify
doubts and reach their own conclusions. The teacher's role is not authoritarian; he acts as a helper and
guide. This leads to friendly relations and good
rapport between teacher and students. This method
caters to individual differences among the students.
They work at their own pace. It encourages the
attitude of research in students and helps them to
appreciate the work of scientists.
Considering the above benefits, it is desirable to
use discovery learning for teaching science. But to
ensure that it provides maximum benefit to the
learners, the teacher's guidance, an environment of
freedom, opportunities to investigate and adequate
resources must be provided to the learners.
REFERENCES
[1] D. Klahr and M. Nigam, "The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning," Psychological Science, vol. 15, pp. 661-667, 2004.
[2] J. Sweller, "Cognitive load during problem solving: Effects on learning," Cognitive Science, vol. 12, pp. 257-285, 1988.
[3] J. S. Bruner, "The art of discovery," Harvard Educational Review, vol. 31, pp. 21-32, 1961.
[4] L. S. Shulman, "Those who understand: Knowledge growth in teaching," Educational Researcher, vol. 15, pp. 4-14, 1986.
[5] L. Steffe and J. Gale, Constructivism in Education, Hillsdale, NJ: Lawrence Erlbaum Associates, Inc., 1995.
[6] M. Davar, Heuristic Method in Teaching of Science, 1st ed., Delhi, India: PHI Learning Private Limited, 2012, pp. 163-168.
[7] R. Mayer, "Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction," American Psychologist, vol. 59, pp. 14-19, 2004.
[8] P. A. Kirschner, J. Sweller and R. E. Clark, "Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential and inquiry-based teaching," Educational Psychologist, vol. 41(2), pp. 75-86, 2006.

Digital Watermarking with Visual Cryptography

Sunesh
Asst Prof., Dept of IT
MSIT, GGSIP University
New Delhi
suneshmlk@gmail.com

Mamta
Asst Prof., Dept of IT
MSIT, GGSIP University
New Delhi
mamta.ghalan@gmail.com

Juhi Jha
Student, Dept of IT
MSIT, GGSIP University
New Delhi
juhi.prashar1@gmail.com

Ekta Sharma
Student, Dept of IT
MSIT, GGSIP University
New Delhi
shekta@yahoo.com

Nikhar Desai
Student, Dept of IT
MSIT, GGSIP University
New Delhi
nikds013@gmail.com

Abstract: Due to the growing usage of the internet throughout the world, a large amount of data is available on the web for use and distribution. At the same time, data security has become one of the most important issues in data communication, especially in the field of computer networks. The distribution of data over different kinds of communication channels makes copyright protection a very important issue in the digital world. For copyright protection of digital images, watermarking techniques and visual cryptographic schemes have recently been used in different approaches. A watermarking technique is used to embed secret information into an original image, with different purposes and different features, and can be used to assess the ownership of the modified image. Visual cryptography can be defined as a way to decompose a secret image into shares and distribute them to a number of participants, so that only legitimate subsets of participants can reconstruct the original image by combining their shares. These two techniques, when used together, can provide important solutions for the copyright protection of a given image, as shown by several proposals that have appeared in the literature. In this work we try to provide a general model for the watermarking schemes obtained from the combination with visual cryptography, the techniques involved and their possible applications in new scenarios.
Keywords: Visual Cryptography, Digital Watermarking, Shares, Image Security, Watermark, Embedding, Extraction

I. INTRODUCTION
In our digital society copyright protection is a very important issue. A very large amount of multimedia data is generated daily and distributed using different kinds of consumer electronic devices and very popular communication channels, such as the Web and the social networks. The increased facilities for the production of digital information have lowered costs and have made image processing and distribution possible for all users. At the same time, it becomes very difficult to protect intellectual property rights and to control the diffusion of source multimedia data [1], since digital information is very easy to duplicate in a way indistinguishable from the original, or to tamper with by modifying the data and producing a new "original" product.
A solution to some of the problems related to copyright protection and tampering verification of multimedia data can be provided by the adoption of a digital watermarking technique [2]. Commonly, a digital watermark consists of extra information inserted into the original data in a way that is usually imperceptible, to avoid distortion of the image, and robust, to resist removal attempts. An illegitimate copy can then be recognized by testing for the presence of a valid watermark, and a dispute on the ownership of the image can be resolved. Different kinds of watermarking techniques, providing different features and characteristics, have been presented in the literature.
The Visual Cryptography (VC) [3] scheme was introduced by Naor and Shamir in 1994 for encrypting visual information such as handwritten notes, pictures etc. Visual cryptography enables distributing sensitive visual material to the participants involved in the scheme through public communication channels. The produced random-looking shares do not reveal any information if they are not combined as prescribed. Only qualified sets of participants are able to reconstruct the image, by simply stacking together the shares they own. The attractiveness of this paradigm lies in the fact that the reconstruction phase does not require any computation; it is performed directly by the human visual system. Deviating a little from the main goal of the original schemes, visual cryptography has been exploited in many applications as a means to protect a secret image.
This paper highlights the combined use of digital watermarking and visual cryptography schemes for image data protection. The rest of the paper is organized as follows: Section II focuses on the visual cryptography scheme, Section III on digital watermarking, and Section IV on the combined watermarking and visual cryptography scheme.

II. VISUAL CRYPTOGRAPHY

Visual cryptography allows the encoding of a secret image of black and white pixels into n shares, which are distributed to a set of n participants. Each share is composed of black and white subpixels, printed in close proximity to each other so that the human visual system averages their individual black/white contributions. Only qualified subsets of participants can visually recover the secret image; the other subsets of participants, known as forbidden sets, cannot gain any information about the secret image [3].
The shares can be represented with an n*m matrix S, where each row represents one share and each element is either 0 (for a white subpixel) or 1 (for a black subpixel). The matrix which represents the shares is called the distribution matrix. A group of participants stacks their shares together to reconstruct the secret image. Pixel expansion and contrast are very important parameters of a visual cryptography scheme: the pixel expansion corresponds to the number of subpixels contained in each share, while the contrast measures the difference between a black and a white pixel in the reconstructed image.
Naor and Shamir's Basic (2,2) VC Scheme
The basic idea of the (2,2) Naor and Shamir encoding scheme is depicted in Figure 1. The scheme encodes each single pixel p of a binary image into two shares S1 and S2. If p is white, the dealer randomly chooses one of the first two rows of the table in Figure 1 to build S1 and S2; if p is black, the dealer randomly chooses one of the last two rows of the table. The probabilities of the two encoding cases are the same, independently of whether the original pixel is black or white. Thus, an adversary looking at a single share has no information about the original value of p. When the two shares are stacked together, if p is black two black sub-pixels will appear, while if p is white one black sub-pixel and one white sub-pixel will appear, as reported in the rightmost column of the table. The human visual system can distinguish whether p is black or white, since there is a contrast between the two reconstructed pixels, even if, instead of a really white colour, the colour of the merged subpixels is gray. [4]

Fig. 1. Basic (2,2) VC scheme with 2 subpixels

As shown in the table, two subpixels are associated with each pixel in the reconstruction; that is, the pixel expansion of the scheme is 2. During this process the original image is stretched out, causing a distortion. To avoid this, a larger pixel expansion can be selected to maintain the aspect ratio, for example a 4 (= 2x2) pixel expansion, as reported in Figure 2. The encoding and decoding procedures are the same as in the first case. More precisely, considering the base matrices M0 and M1, the collections C0 and C1 are composed of all the permutations of the columns of those base matrices. [5]
As shown in the table reporting the stacked result, the reconstructed pixel r, obtained by stacking S1 and S2, may contain two white and two black subpixels if p is white, or all four black subpixels when p is black. When all pixels of the secret image are encoded in this way, and an independent selection is made for encoding each pixel p, the encoded shares S1 and S2 are indeed random pictures, containing no information on the original image. When S1 and S2 are superimposed, all four subpixels are black in the reconstructed blocks corresponding to each black pixel p, while two subpixels are white and the other two are black for each white pixel. Based on the contrast obtained, the human visual system can distinguish between the white and black pixels of the original image. In some cases, the OR operation is substituted with the XOR operation, obtaining some improvement in the reconstruction of the image. The price paid is that the reconstruction is no longer performed by the human visual system; some computation is needed to perform the XOR of the shares.

Fig 2. (2,2) VC scheme with 4 subpixels
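A small Python sketch of the basic (2,2) scheme with two subpixels per secret pixel is shown below. Each secret pixel is expanded horizontally, the random pattern choice plays the role of the dealer's choice in Fig. 1, and physical stacking is modelled with a pixel-wise OR (1 = black); these modelling choices are assumptions for illustration.

import numpy as np

def make_shares(secret: np.ndarray, rng=np.random.default_rng()):
    """secret: 2-D array with 1 = black, 0 = white. Returns two shares with 2x horizontal expansion."""
    h, w = secret.shape
    share1 = np.zeros((h, 2 * w), dtype=np.uint8)
    share2 = np.zeros((h, 2 * w), dtype=np.uint8)
    for r in range(h):
        for c in range(w):
            pattern = [1, 0] if rng.integers(2) == 0 else [0, 1]   # dealer's random choice
            share1[r, 2 * c:2 * c + 2] = pattern
            if secret[r, c] == 0:          # white pixel: both shares get the same pattern
                share2[r, 2 * c:2 * c + 2] = pattern
            else:                          # black pixel: the shares get complementary patterns
                share2[r, 2 * c:2 * c + 2] = [1 - pattern[0], 1 - pattern[1]]
    return share1, share2

def stack(share1: np.ndarray, share2: np.ndarray) -> np.ndarray:
    """Stacking transparencies corresponds to a pixel-wise OR."""
    return np.maximum(share1, share2)

secret = np.array([[1, 0], [0, 1]], dtype=np.uint8)
s1, s2 = make_shares(secret)
print(stack(s1, s2))   # black pixels reconstruct as two black subpixels, white as one black + one white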
A pictorial representation of the visual cryptography scheme is shown in Fig. 3. Suppose confidential data, say the symbol "e", is to be sent secretly to a trusted authority; the VC scheme splits it into two shares, share 1 and share 2. The individual shares do not provide any information about the secret data, but stacking the two shares reveals it, i.e. the "e" symbol in this example. [6]

Fig. 3. Visual cryptography scheme

III. DIGITAL WATERMARKING
Digital watermarking is another technique used to secure data inside another piece of data, called the cover media. Digital watermarking is the act of hiding a message related to a digital signal (i.e. an image, song or video) within the signal itself, and it is used for copyright protection of the image. Copyright protection of digital data is defined as the process of proving the intellectual property rights to a court of law against the unauthorized reproduction, processing, transformation or broadcasting of digital data. It is a concept closely related to steganography, in that both hide a message inside a digital signal; what separates them is their goal. Watermarking tries to hide a message related to the actual content of the digital signal, while in steganography the digital signal has no relation to the message and is merely used as a cover to hide its existence. Watermarking can be categorised broadly as visible and invisible watermarking. For the copyright protection of many multimedia products there is a requirement for secret data to be embedded into the image without being visible; this is called invisible watermarking. The embedding and extraction process of the digital watermarking technique is shown in Fig. 4. [7]
Fig 4. Embedding and extraction process of the steganography technique

In digital image watermarking, many methods exist to embed information inside another image. The following are the most popular methods.
A. Least significant bit (LSB) method
The LSB is the lowest bit in a binary number; for example, if the bits of a binary number are 10101001, the least significant bit is the rightmost 1 [9]. LSB-based embedding inserts the secret data into the least significant bits of the pixel values of a cover image. For example, inserting secret bits into the 8th (least significant) bit of some or all of the bytes of a cover image works as follows:
Pixels of the cover image:
(11101111 11001001 10111000)
(10110111 01011010 11101101)
(10011000 10000111 01111001)
After changing the LSBs:
(11101110 11001001 10111001)
(10100111 01011010 11101100)
(10011001 10000111 01111001)
Here, secret bits 01110101 are embedded into the first eight bytes of the cover image and only three bits are changed. Such minimal changes are not noticed by the human visual system; LSB insertion is also very easy to implement, which makes it the most popular method of this kind.
ALGORITHM USED IN LSB BASED WATERMARKING

For the embedding phase:
Step 1. Read the cover image in which the secret image is to be hidden.
Step 2. Read the secret image and convert it into binary form.
Step 3. Compute the LSB of each pixel of the cover image.
Step 4. Replace the least significant bit (LSB) of the cover image with each bit of the secret image, one by one.
Step 5. Write the watermarked image.
For the extraction phase:
Step 1. Read the watermarked image.
Step 2. Compute the LSB of each pixel of the watermarked image.
Step 3. Retrieve the bits and convert them into the corresponding image.
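A hedged Python sketch of the embedding and extraction steps above is given below, assuming the secret watermark is a flat array of bits small enough to fit in the cover; the row-major scanning order and one bit per pixel are assumptions.

import numpy as np

def lsb_embed(cover: np.ndarray, watermark_bits: np.ndarray) -> np.ndarray:
    """Replace the LSBs of the first len(watermark_bits) cover pixels with the watermark bits."""
    stego = cover.copy().reshape(-1)
    n = watermark_bits.size
    stego[:n] = (stego[:n] & 0xFE) | watermark_bits.astype(stego.dtype)
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the LSBs of the first n_bits pixels."""
    return (stego.reshape(-1)[:n_bits] & 1).astype(np.uint8)

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
wm = np.random.randint(0, 2, size=256, dtype=np.uint8)
assert np.array_equal(lsb_extract(lsb_embed(cover, wm), wm.size), wm)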
B. Transform Domain Techniques
A digital watermarking algorithm based on the DCT domain can embed a true colour watermark image into a host colour image for the purpose of protecting the copyright of digital products. Before embedding the watermark, the coordinates of the R, G and B components of the watermark image are first scrambled using a chaotic sequence, which gives the watermark good security. Watermarks of different strengths are embedded into different regions of the host image, balancing the robustness and imperceptibility of the watermark.
ALGORITHM USED IN DCT BASED WATERMARKING

For the embedding phase:
Step 1. Read the carrier (cover) image.
Step 2. Read the secret image and convert it into binary form.
Step 3. Extract the RGB components of the cover image.
Step 4. Apply the DCT to each component.
Step 5. Across the rows and columns of each RGB channel, add the multiplied value of the secret image and the DCT coefficients.
Step 6. Perform the inverse DCT.
Step 7. Write the watermarked image.
For the extraction phase:
Step 1. Read the watermarked image.
Step 2. Split the watermarked image into its RGB components.
Step 3. Apply the DCT to each RGB channel.
Step 4. Across the rows and columns of each RGB channel, subtract the multiplied value of the secret image and the DCT coefficients.
Step 5. Perform the inverse DCT.
Step 6. Write the recovered cover image.
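A simplified sketch of additive DCT embedding and extraction is given below, applied to one colour channel. The use of a global 2-D DCT, the embedding strength alpha, and the need for the original cover at extraction time (a non-blind scheme) are assumptions made for illustration and are not taken from the steps above.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):  return dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(x): return idct(idct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_channel(cover: np.ndarray, watermark: np.ndarray, alpha: float = 5.0) -> np.ndarray:
    """Add the scaled binary watermark to the DCT coefficients of one channel, then invert the DCT."""
    coeffs = dct2(cover.astype(float))
    coeffs += alpha * watermark.astype(float)        # watermark must match the channel size
    return np.clip(idct2(coeffs), 0, 255)

def extract_channel(stego: np.ndarray, cover: np.ndarray, alpha: float = 5.0) -> np.ndarray:
    """Recover the watermark by subtracting the cover's DCT coefficients from the stego's."""
    diff = dct2(stego.astype(float)) - dct2(cover.astype(float))
    return (diff / alpha > 0.5).astype(np.uint8)     # threshold back to a binary watermark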
IV. WATERMARKING WITH VISUAL CRYPTOGRAPHY
As discussed in Section III, in watermarking schemes the produced watermark is directly embedded into the image to be protected, in order to prevent abuse and illegitimate distribution of the image. When a visual cryptographic scheme is used in combination, the watermark is usually given as input to the VC scheme, obtaining a number of shares. One of the shares is then used as the watermark and given as input to the embedding phase of the watermarking algorithm, while the other ones are stored and protected as a key. [8]
The typical scenario considered in a combined watermarking and VC based scheme includes the following:
1. the owner of the image, who wants to mark his own image and prevent unauthorized use of it;
2. a trusted authority (TA), who participates in the scheme and arbitrates the ownership of the image if a dispute occurs;
3. the adversary, who wants to alter the image and/or its watermark, use it, and may cheat about the ownership of a stolen image. [7]
Watermarking with a (2,2) VC Scheme
Most of the schemes combining watermarking with visual cryptography are based on the use of a (2,2) VC scheme. As mentioned in [4], such VC schemes can be thought of as a private-key cryptosystem: the secret message is encoded into two random-looking shares, one of which can be freely distributed and acts as the ciphertext, whereas the other serves as the secret key. The original image is reconstructed by stacking the two shares together. In the combined watermarking and VC scheme, one of the shares generated by the VC scheme is used as the secret image that is then embedded into the cover image to produce the watermarked image.

Fig 5. Embedding phase for watermarking combined with a (2,2) VC scheme

The owner of the image I gives the watermark W as input to the (2,2) VC scheme in order to obtain two shares, W1 and W2, which appear as random images. One of the shares, W2, is kept as a key, such that only the legitimate extractor can reconstruct the watermark and show it to a third party. The other share, W1, is embedded into the cover image, performing an embedding operation that depends on the particular kind of watermarking technique considered. At the end of the process, the watermarked image WI is generated. [10]

Fig 6. Encryption process of watermarking and VC schemes

In the extraction phase, W1 is extracted from the watermarked image WI. The extracted W1 and the key W2 are stacked together to regenerate the secret image W, as shown in Fig. 7.

Fig. 7. Extraction phase for watermarking combined with a (2,2) VC scheme

V. CONCLUSION


Visual cryptography is very useful for the copyright protection of image-based data. Since in this scheme the decoding is done directly by the human visual system, it takes little time and the computational complexity is very low. The digital watermarking technique makes the secret data invisible to users and hence leaves no clue about the existence of secret data being transmitted inside the cover image. Data security can be increased by using these two schemes together: the VC scheme scrambles the data so that a single share gives no information about the secret data, and digital watermarking hides this scrambled data inside another cover image. The integration of the VC scheme and digital watermarking increases the computational complexity to some extent, but it provides high security for the data. Hence, various kinds of confidential information can be protected by selecting a suitable security mechanism as per the users' requirements.

REFERENCES

[1] Ding Wei, "Digital Image Watermarking Based on Discrete Wavelet Transform," J. Computer Science & Technology, vol. 17, no. 2, March 2002.
[2] M. Chandra, S. Pandey and R. Chaudhary, "Digital watermarking technique for protecting digital images," 2010 3rd IEEE International Conference, vol. 7, pp. 226-233, 9-11 July 2010.
[3] Suhas B. Bhagate and P. J. Kulkarni, "An Overview of Various Visual Cryptography Schemes."
[4] Adel Hammad Abusitta, "A Visual Cryptography Based Digital Image Copyright Protection," Journal of Information Security, vol. 3, pp. 96-104, 2012.
[5] Ming Sun Fu and Oscar C. Au, "Joint Visual Cryptography and Watermarking," 2004 IEEE International Conference on Multimedia and Expo (ICME), 2004.
[6] Th. Rupachandra Singh and Kh. Manglem Singh, "Robust Video Watermarking Scheme Based on Visual Cryptography," IEEE, 2012.
[7] Stelvio Cimato and James Ching-Nung Yang, "Visual Cryptography Based Watermarking: Definition and Meaning," Springer-Verlag Berlin Heidelberg, 2013.
[8] D. Mathivadhani and C. Meena, "Digital Watermarking and Information Hiding Using Wavelets, SLSB and Visual Cryptography Method," IEEE, 2010.
[9] Moumita Pramanik and Kalpana Sharma, "Analysis of Visual Cryptography, Steganography Schemes and its Hybrid Approach for Security of Images," ISSN 2250-2459, vol. 4, issue 2, February 2014.
[10] Monish Kumar Dutta and Asoke Nath, "Scope and Challenges in Visual Cryptography."
