
ESWC: Efficient Scheduling for the Mobile Sink in Wireless Sensor Networks

with Delay Constraint


Abstract:
We exploit sink mobility to prolong the network lifetime in wireless sensor networks, where the information delay caused by moving the sink must be bounded. Due to the combinatorial complexity of this problem, most previous proposals focus on heuristics, and provably optimal algorithms remain unknown. We build a unified framework for analyzing this joint sink mobility, routing, and delay problem. We discuss the induced sub-problems and present efficient solutions for them. Then, we generalize these solutions and propose a polynomial-time optimal algorithm for the original problem. In simulations, we show the benefits of involving a mobile sink and the impact of network parameters (e.g., the number of sensors, the delay bound, etc.) on the network lifetime. Furthermore, we study the effects of different trajectories of the sink and provide important insights for designing mobility schemes in real-world mobile WSNs.

INTRODUCTION:
The wireless sensor network (WSN), one of the fastest growing research areas, has attracted a great deal of research activity. Due to the maturity of embedded computing and wireless communication techniques, significant progress has been made. Typically, a WSN consists of a data collection unit (also known as the sink or base station) and a large number of sensors that can sense and monitor the physical world, and thus it is able to provide rich interactions between a network and its surrounding physical environment in a real-time manner.
The capacity-limited power sources of small sensors constrain us from fully benefitting from WSNs. Due to the unique many-to-one (converge-cast) traffic pattern, the traffic of the whole network converges onto a specific set of sensor nodes (e.g., the neighboring nodes of the sink), which results in the hotspot problem. Much research effort has been dedicated to resolving this issue, for example, energy-efficient communication protocols and multi-sink systems. However, as long as the sink and sensor nodes are static, this issue cannot be fully tackled. Therefore, there is a recent trend to exploit mobility of the sink as a promising approach to the hotspot problem.
Early schemes let the sink move randomly within the network; one such proposal presented an architecture in which mobile entities (named MULEs) pick up data from sensors when in close range in sparse sensor networks. Schemes based on random mobility are straightforward and easy to implement. However, they suffer from shortcomings like uncontrolled behavior and poor performance. Hence, recent research resorts to controlled mobility to improve the performance. For controlled mobility, the key problem is to deterministically schedule the sink to travel around the network to collect data. It has been shown that, by properly setting the trajectory, even limited mobility can significantly improve the network lifetime.
However, mobility also brings a new issue, i.e., the delay of data delivery caused by the movement of the sink. Some previous proposals tried to avoid this issue by assuming so-called fast mobility, where the speed of the sink is sufficiently high that the resulting delay can be tolerated, while others address this delay-bounded mobility problem with heuristics and little theoretical understanding.
The delay-bounded sink mobility problem (DeSM) in WSNs assumes that the WSN is deployed to monitor the surrounding environment and that the data generation rates of sensors can be estimated accurately. We first confine the mobile sink to a set of sink sites and build a unified framework that covers most joint sink mobility, data routing, and delay strategies. Based on this framework, we develop a mathematical formulation that is general and captures these different issues. However, this formulation is a mixed-integer nonlinear programming (MINLP) problem and is time consuming to solve directly. Therefore, instead of tackling the MINLP directly, we first discuss several induced sub-problems, for example, sub-problems with a zero/infinite delay bound or with connected sink sites (sink sites are connected if for any two sites there exists a path that connects them and each edge of that path meets the delay constraint). We show that these sub-problems are tractable and present optimal algorithms for them. Then, we generalize these solutions and propose a polynomial-time optimal approach for the original DeSM problem. In simulations, we show the benefits of involving a mobile sink and the impact of network parameters (e.g., the number of sensors, the delay bound, and so on) on the network lifetime. Furthermore, we study the effects of different trajectories of the sink and provide important insights for designing mobility schemes in real-world mobile WSNs. Our main contributions are the following:

1. A unified formulation of DeSM, which is general and practical. We discuss sub-problems of DeSM and offer efficient algorithms for them to guide the design of our algorithm for the original DeSM.
2. We generalize the algorithms for the sub-problems and present an optimal algorithm with polynomial complexity for DeSM.
3. We study the effects of different trajectories of the sink and provide important insights via extensive simulations.

System Analysis:
Problem Definition:
Sensor nodes, which are stationary, keep monitoring the surrounding environment and generating data. A mobile sink is used to gather the sensed data by traveling around the network. We assume that only at certain locations can the sink communicate with the outside network and deliver cached data to users. Due to interference and security issues, for a sensor network deployed in a battlefield for a surveillance mission, it is reasonable that the sink can connect with the headquarters only at certain locations using wireless techniques like WiMAX or LTE. These locations are represented by squares in the figure. The sink has a maximum speed Vmax (in m/s). We assume that while the sink is moving, sensors will buffer their newly generated data; only when the sink stays at one of the sink sites will sensors start transmitting data to the sink through multi-hop routing.
Network Model: The set of links between sensors and sink sites does not explicitly consider radio interference; we assume the data generation rates of sensors can be properly scaled so that underlying MACs like TDMA can eliminate the interference among communications.
Energy Model: The total energy consumption of node i cannot exceed its initial energy Ei. Typically, the radio module is the most energy-consuming part, and its energy consumption consists of three parts: transmission, reception, and sleep. The power characteristics of two representative radio modules that have been widely used in wireless sensor platforms illustrate this. Since the power consumption in the sleep state is usually several orders of magnitude lower than in the other states, it has a negligible impact on the network lifetime and thus can be ignored.
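As a hedged sketch (not taken verbatim from the formulation), the per-node energy budget implied by this model can be written as follows, where e_t and e_r are assumed per-bit transmission and reception costs and B_i^{tx}, B_i^{rx} denote the total number of bits node i transmits and receives over the whole network lifetime:

% Assumed notation: e_t, e_r (per-bit costs), B_i^{tx}, B_i^{rx} (total traffic of node i)
e_t\, B_i^{\mathrm{tx}} + e_r\, B_i^{\mathrm{rx}} \;\le\; E_i \qquad \forall\, i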

Modules:
WIRELESS SENSOR NETWORK
DESM PROBLEM
EXTENDED SSDR (E-SSDR) ALGORITHM FOR DESM
NUMERICAL RESULTS

MODULES DESCRIPTION:
WIRELESS SENSOR NETWORK
As sensor nodes for event monitoring are expected to work for a long time without recharging their batteries, a sleep scheduling method is commonly used during the monitoring process. Obviously, sleep scheduling can cause transmission delay, because sender nodes must wait until receiver nodes are active and ready to receive the message. The delay can become significant as the network scale increases. Therefore, a delay-efficient sleep scheduling method needs to be designed to ensure low broadcasting delay from any node in the WSN.
There is a trend to investigate mobility as a means of relieving the traffic burden and enhancing energy efficiency in WSNs. Sink mobility can be classified into two categories: random mobility and controlled mobility. Sinks in the first category move randomly within the network. Schemes based on random mobility are easy to implement, but they suffer from shortcomings like uncontrolled behavior and poor performance. Recent research tends to use controlled mobility to improve the performance. The core challenge is to jointly schedule different issues (e.g., sink mobility, data routing, information delay, and so on) to optimize the network lifetime. For this paradigm, early work first attacked the problem and proposed a heuristic algorithm. Other work relaxed the problem by doing the sink scheduling and data routing separately, but the proposed routing scheme works only in a grid network topology. Further studies proposed several greedy algorithms, including the first algorithm with a performance guarantee for a single sink. These were extended by considering issues like multiple sinks and the maximum number of hops from each sensor to a sink; three-stage heuristics have been developed to find a high-quality trajectory for each sink as well as the actual sojourn time at each sojourn location. Our recent research proposed a generalized column-generation-based algorithm that can be applied to a set of sink mobility problems with near-optimal performance.
Many of these works assume that sinks move at high speed, so that the information delay caused by moving the sink can be ignored. However, on the one hand, mobile sinks in the physical world usually have limited speed. On the other hand, underlying applications like real-time surveillance demand a delay upper bound. Therefore, it is natural to take the delay issue into consideration. Some work jointly considered sink mobility and the delay issue but assumed that the routes are predetermined, or used multiple controllable sinks traveling among event locations to efficiently gather data; they considered issues like the travel distance of a sink and the time delay, but only one-hop routing was used. Other work jointly considered multi-hop routing, sink mobility, and a delay bound to improve energy efficiency; however, it still used the fast-mobility assumption, so the delay is caused by nodes holding their transmissions until the location of the sink is most favorable for energy saving, not by the movement of the sink.
Another line of work studied the message delivery capacity problem in delay-constrained mobile sensor networks, where the sink nodes are static while sensor nodes are mobile. It focused on maximizing the percentage of sensing messages that can be successfully delivered to sink nodes within a given time constraint. That network model is fundamentally different from ours and is somewhat similar to delay-tolerant networks (DTNs).

DESM PROBLEM
For a sensor network deployed in a battlefield for a surveillance mission, it is reasonable that the sink can connect with the headquarters only at certain locations using wireless techniques like WiMAX or LTE. These locations are represented by squares in the figure. The sink has a maximum speed Vmax (in m/s). We assume that while the sink is moving, sensors will buffer their newly generated data. Only when the sink stays at one of the sink sites do sensors start transmitting data to the sink through multi-hop routing. This can potentially cause a high delay for data packets.
The network lifetime T can be divided into epochs with different lengths: a new epoch begins when S0 arrives at some sink site and sensors start to transmit data to the sink. We define tp as the time span of the pth epoch; the travel time of the sink to the sink site is not included in tp. To associate S0 with sink sites in the pth epoch, we define the indicator xpk: xpk = 1 if S0 resides at sink site k during the pth epoch, and xpk = 0 otherwise.

Zero Delay: This is the sub-problem of DeSM where real-time applications are involved. In this case, the underlying applications demand a delay bound of 0. Therefore, S0 can only choose one sink site and must remain stationary during the operation of the WSN. We name this sub-problem the Z-DeSM problem; the corresponding linear program (LP) is referred to as ZERO hereinafter. The flow-balance constraint guarantees that, for each sensor, the data sent to its neighbors and the sink equals the data received from its neighbors plus the data generated by itself.
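A hedged sketch of one way to write ZERO for the sink fixed at a single candidate site: working with total data volumes F_{ij} (from sensor i to sensor j) and F_{i0} (from i to the sink) over the whole lifetime T keeps the program linear; R_i is the data generation rate of sensor i and e_t, e_r are the assumed per-bit costs from above. These symbols are illustrative, not the notation of the original formulation.

\max_{T,\,F \ge 0}\; T
\text{s.t.}\quad \sum_{j} F_{ij} + F_{i0} \;=\; \sum_{j} F_{ji} + R_i\,T \qquad \forall\, i \quad \text{(flow balance)}
\qquad\;\; e_t\Big(\sum_{j} F_{ij} + F_{i0}\Big) + e_r \sum_{j} F_{ji} \;\le\; E_i \qquad \forall\, i \quad \text{(energy budget)}

One natural way to obtain the ZERO solution is to solve this LP once per candidate sink site and keep the site with the largest T.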

Infinite Speed: This is the sub-problem of DeSM where the sink can move at a very high (approximately infinite) speed; this assumption has been widely used in previous proposals. We name this sub-problem the I-DeSM problem. Since S0 can move at a very high speed, the travel route does not affect the optimal network lifetime as long as the sojourn time S0 stays at each sink site remains the same. Therefore, we define tk as the sojourn time S0 stays at site k, fkij as the sojourn flow from node i to node j when S0 stays at site k, and fik as the sojourn flow from node i to S0 when it stays at site k.
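As another hedged sketch with the same assumed notation, the resulting program (referred to as INFI below) can be kept linear by using per-site data volumes F^k_{ij} = t_k f^k_{ij} and F^k_i = t_k f_{ik}:

\max_{t,\,F \ge 0}\; T = \sum_{k} t_k
\text{s.t.}\quad \sum_{j} F^{k}_{ij} + F^{k}_{i} \;=\; \sum_{j} F^{k}_{ji} + R_i\,t_k \qquad \forall\, i,\,k
\qquad\;\; \sum_{k}\Big[\, e_t\Big(\sum_{j} F^{k}_{ij} + F^{k}_{i}\Big) + e_r \sum_{j} F^{k}_{ji} \Big] \;\le\; E_i \qquad \forall\, i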
Connected Sink Sites:
We refer to the sub-problem in which the sink site graph is connected as C-DeSM. For it we develop an optimal algorithm, named the Sink-Scheduling and Data-Routing (SSDR) algorithm, which runs in polynomial time. The SSDR approach can be summarized in the following three steps.
1. Run the INFI optimization to get the optimal solution {T = Σk tk, fkij, fik}.
2. Run a depth-first search to find a path on the sink site graph that includes all the sink sites (the path may visit some sites multiple times).
3. Run the sink-schedule-flow-rebuild algorithm (as shown in Algorithm 1) to assign the visit time of every point on the path found in Step 2 and the corresponding routes based on the data obtained in Step 1.
In other words, we divide C-DeSM into two parts and conquer them one by one. In Step 1, we ignore the path S0 will travel and use INFI to obtain the sojourn time S0 stays at each site k and the corresponding routes. Then, we select a path that contains all the sink sites, which is easy since the sink site graph is connected. In the final step, which is the key step, we assign the visit time of every point on that path and the corresponding flows.
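Step 2 only needs some walk on the connected sink site graph that passes through every site. Below is a minimal sketch of one way to obtain such a walk with a depth-first traversal; the adjacency-dict representation and the function name are illustrative assumptions, not code from the original work.

# Hedged sketch: build a closed walk that visits every sink site of a connected
# sink-site graph, by recording the depth-first traversal including backtracking.
def dfs_walk(graph, start):
    """Return a walk (list of sites) covering all sites; sites may repeat."""
    visited, walk = set(), []

    def visit(site):
        visited.add(site)
        walk.append(site)
        for nxt in graph[site]:
            if nxt not in visited:
                visit(nxt)
                walk.append(site)  # backtrack keeps consecutive walk entries adjacent

    visit(start)
    return walk

if __name__ == "__main__":
    # Example sink-site graph: edges exist only where the travel time meets the delay bound.
    sites = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2]}
    print(dfs_walk(sites, 1))  # e.g. [1, 2, 3, 2, 4, 2, 1]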

EXTENDED SSDR (E-SSDR) ALGORITHM FOR DESM:

If two sites from different sub-graphs could both be visited within the delay bound, there would exist a sink path including these two sites that meets the delay constraint; thus, the two sites would be connected and should belong to the same sub-graph. This contradiction completes the argument. The E-SSDR approach solves the original DeSM optimally as follows:
Step 1 - Divide the sink site graph G0 into connected sub-graphs.
Step 2 - Apply the SSDR approach to each sub-graph and obtain the optimal sink path as well as the corresponding routes.
Step 3 - Choose the solution of the sub-graph with the longest network lifetime as the output.
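A minimal sketch of this outer loop is given below. It assumes a helper ssdr(subgraph) that returns a (lifetime, schedule) pair for a connected sub-graph; this helper and the data layout are hypothetical, while the SSDR internals are described above.

# Hedged sketch of the E-SSDR outer loop: split the sink-site graph into
# connected components, run SSDR on each, and keep the best solution.
def connected_components(graph):
    seen, comps = set(), []
    for s in graph:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in graph[u] if v not in comp)
        seen |= comp
        comps.append({u: [v for v in graph[u] if v in comp] for u in comp})
    return comps

def e_ssdr(graph, ssdr):
    best = (0.0, None)
    for sub in connected_components(graph):      # Step 1
        lifetime, schedule = ssdr(sub)           # Step 2: SSDR per sub-graph
        best = max(best, (lifetime, schedule), key=lambda x: x[0])
    return best                                  # Step 3: longest lifetime wins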
Connectivity Analysis: We analyze the connectivity of the sink site graph. Assume that sink sites are uniformly distributed in the deployment area of side length L, so the site density is
ρ = m / L².
The number of sites within an arbitrary region of area A can then be approximated by a spatial Poisson distribution with mean ρA; from this we obtain the probability that a single site is disconnected from all others and, in turn, the probability Pcon that the sink site graph is connected (a hedged sketch of these standard approximations is given below). The results show the impact of the speed of the sink V and the number of sink sites m on Pcon: by properly choosing V and m, we can obtain a connected sink site graph with a high probability.
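A hedged sketch of the standard spatial-Poisson approximations referred to above, assuming the sites form an (approximate) Poisson field of density ρ and taking r = Vmax·D as the connectivity range between sink sites (the farthest distance the sink can cover within the delay bound D); both the Poisson treatment and this choice of r are assumptions for illustration, not formulas quoted from the original analysis.

% Assumed symbols: rho (site density), A (region area), r = Vmax * D (connectivity range)
P\{\,m_0 \text{ sites in a region of area } A\,\} \;\approx\; \frac{(\rho A)^{m_0}}{m_0!}\, e^{-\rho A}
P_{\mathrm{iso}} \;\approx\; e^{-\rho \pi r^{2}}, \qquad P_{\mathrm{con}} \;\approx\; \bigl(1 - e^{-\rho \pi r^{2}}\bigr)^{m}

Under these approximations, increasing either V (hence r) or m drives Pcon toward 1, which matches the observation above.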

NUMERICAL RESULTS:
Linear Trajectory:

A mobile sink always brings benefits to the network compared with a static sink, even under a delay constraint. For example, we use a box plot to summarize the results when n = 100, which contains five quantities: the lower quartile (25 percent), the median, the upper quartile (75 percent), and the two extreme observations. The average lifetime improvement of INFI and of the 2 s delay-bound case over ZERO is around 50 and 13 percent, respectively. The main reason is the load-balancing effect brought by mobility; namely, mobility relieves the traffic burden on a specific set of sensor nodes. In the original problem, the sink is constrained to move among a limited number of sites. The delay requirement further constrains the mobility of the sink, but as long as the sink can move, this load-balancing effect exists.
As the number of sensors increases, the achieved lifetime first keeps increasing and then begins to decrease after some point (see the blue line with star marks in the figure). There are two effects of increasing the number of sensors. The first is the increased number of neighbors of the sink, which results in more alternative routes. The other is the increased traffic load, since all sensors generate data. Initially, the first effect dominates the second and the lifetime keeps increasing. However, after some point, e.g., n = 60, the second effect dominates the first and the lifetime decreases.

Boundary Trajectory:

The boundary trajectory contains 40 sites, and the distance between any two neighboring sites is 10 m. Therefore, if the delay bound is at least 1 s, the sites are connected; otherwise, the sink is static. We vary n from 40 to 100 with an increment of 20. For each n, we run 30 trials. Regarding the impact of n on the lifetime under different scenarios, observations similar to the linear trajectory case are obtained; for example, mobility always brings benefits to the network. For n = 100, the average lifetime improvement of INFI and of the 2 s delay-bound case over ZERO is around 35 percent. For the static case, we confirmed the impact of increasing n on the lifetime: initially, the lifetime increases with n, but after some point it begins to decrease. Considering the pause-time distribution when the delay bound is 2 s and n = 100, the square trajectory can be seen as four connected linear trajectories, and on each linear trajectory the sink always prefers to stay at the center sites. This is because we use a square area, and these center sites are much more effective than the corner sites in the sense of load balancing. This observation is inconsistent with some prior work, which claimed that the sink should spend the residual time equally over the boundary trajectory; however, that work considered a circular area instead of a square one, and using a circular area their result can be verified.
Arbitrary trajectory:
In this case, we have little control over the distribution of sink sites, for example, in a battlefield. Due to space limits, we refer the reader to a supplementary file, available online, for the simulation results of the arbitrary trajectory.


HARDWARE CONFIGURATION
The hardware used for the development of the project is:

PROCESSOR     : PENTIUM IV 2.8 GHz
RAM           : 256 MB SD RAM
MONITOR       : 15" COLOR
HARD DISK     : 40 GB
FLOPPY DRIVE  : 1.44 MB
CD DRIVE      : LG 52X DVD RAM
KEYBOARD      : STANDARD 102 KEYS
MOUSE         : 3 BUTTONS

SOFTWARE CONFIGURATION
The software used for the development of the project is:

OPERATING SYSTEM : Linux
ENVIRONMENT      : Network Simulator (NS-2.35)

LINUX

Linux is a free computer operating system that was created by Linus Torvalds and has grown through contributions from software developers all over the world. It includes system utilities and libraries from the GNU Project and is therefore sometimes referred to as GNU/Linux. It is developed as open source: its underlying source code is available for anyone to use, modify, and redistribute freely, and in some instances the entire operating system consists of free/open-source software.
Linux's roots are based in the UNIX operating system; UNIX provides most of the framework that was used to create Linux. Today Linux is used in numerous domains, from embedded systems to supercomputers, and has gained a stronghold in server installations. Red Hat Linux is the most popular commercial distribution of Linux; basically, Red Hat has made it possible for Linux to be used by people other than computer geeks.
Linux Processes
So that Linux can manage the processes in the system, each process is represented
by a task_struct data structure (task and process are terms that Linux uses
interchangeably). The task vector is an array of pointers to every task_struct data
structure in the system.
This means that the maximum number of processes in the system is limited by the
size of the task vector; by default it has 512 entries. As processes are created, a
new task_struct is allocated from system memory and added into the task vector.
To make it easy to find, the current, running, process is pointed to by the current
pointer.
As well as the normal type of process, Linux supports real-time processes. These processes have to react very quickly to external events (hence the term "real time") and they are treated differently from normal user processes by the scheduler. Although the task_struct data structure is quite large and complex, its fields can be divided into a number of functional areas:
State
As a process executes it changes state according to its circumstances. Linux
processes have the following states:
* Running
The process is either running (it is the current process in the system) or it is
ready to run (it is waiting to be assigned to one of the system's CPUs).
* Waiting
The process is waiting for an event or for a resource. Linux differentiates
between two types of waiting process; interruptible and uninterruptible.
Interruptible waiting processes can be interrupted by signals whereas
uninterruptible waiting processes are waiting directly on hardware
conditions and cannot be interrupted under any circumstances.
* Stopped
The process has been stopped, usually by receiving a signal. A process that
is being debugged can be in a stopped state.

* Zombie

This is a halted process which, for some reason, still has a task_struct data
structure in the task vector. It is what it sounds like, a dead process.
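To make the state field concrete, here is a minimal sketch (assuming a Linux host with the usual /proc layout) that lists each process's PID, name, and one-letter state code as reported by the kernel; the state letters correspond to the categories above (R running, S/D waiting, T stopped, Z zombie). The function name is illustrative.

# Hedged sketch: read the kernel-reported state of every process from /proc.
# Field 3 of /proc/<pid>/stat is the state letter (R, S, D, T, Z, ...).
import os

def list_process_states():
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/stat") as f:
                data = f.read()
        except OSError:
            continue  # process exited while we were scanning
        # The command name is wrapped in parentheses and may contain spaces.
        name = data[data.index("(") + 1:data.rindex(")")]
        state = data[data.rindex(")") + 2:].split()[0]
        print(f"{pid:>7}  {state}  {name}")

if __name__ == "__main__":
    list_process_states()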
Scheduling Information
The scheduler needs this information in order to fairly decide which process in the system most deserves to run.
Identifiers
Every process in the system has a process identifier. The process identifier is not an index into the task vector, it is simply a number. Each process also has user and group identifiers; these are used to control this process's access to the files and devices in the system.
Inter-Process Communication
Linux supports the classic Unix IPC mechanisms of signals, pipes and semaphores, and also the System V IPC mechanisms of shared memory, semaphores and message queues. The IPC mechanisms supported by Linux are described in the section on IPC.
Links
In a Linux system no process is independent of any other process. Every process in the system, except the initial process, has a parent process. New processes are not created, they are copied, or rather cloned, from previous processes. Every task_struct representing a process keeps pointers to its parent process and to its siblings (those processes with the same parent process) as well as to its own child processes. You can see the family relationship between the running processes in a Linux system using the pstree command:
init(1)-+-crond(98)
|-emacs(387)
|-gpm(146)
|-inetd(110)
|-kerneld(18)
|-kflushd(2)
|-klogd(87)
|-kswapd(3)
|-login(160)---bash(192)---emacs(225)
|-lpd(121)
|-mingetty(161)
|-mingetty(162)
|-mingetty(163)
|-mingetty(164)
|-login(403)---bash(404)---pstree(594)

|-sendmail(134)
|-syslogd(78)
|-update(166)
Additionally, all of the processes in the system are held in a doubly linked list whose root is the init process's task_struct data structure. This list allows the Linux kernel to look at every process in the system. It needs to do this to provide support for commands such as ps or kill.
Times and Timers
The kernel keeps track of a process's creation time as well as the CPU time that it consumes during its lifetime. Each clock tick, the kernel updates the amount of time, in jiffies, that the current process has spent in system and in user mode. Linux also supports process-specific interval timers; processes can use system calls to set up timers that send signals to themselves when the timers expire. These timers can be single-shot or periodic.
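A minimal sketch of such an interval timer from user space, using Python's signal module (which wraps the underlying setitimer/SIGALRM mechanism); the handler and the interval values are illustrative.

# Hedged sketch: a periodic interval timer that delivers SIGALRM to this process.
import signal
import time

def on_alarm(signum, frame):
    print("timer fired (SIGALRM)")

signal.signal(signal.SIGALRM, on_alarm)
# First expiry after 1.0 s, then every 1.0 s (use an interval of 0 for single-shot).
signal.setitimer(signal.ITIMER_REAL, 1.0, 1.0)

for _ in range(3):
    time.sleep(1.5)  # sleeping is interrupted/resumed around each signal delivery
signal.setitimer(signal.ITIMER_REAL, 0)  # disarm the timer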
File system
Processes can open and close files as they wish, and the process's task_struct contains pointers to descriptors for each open file as well as pointers to two VFS inodes. Each VFS inode uniquely describes a file or directory within a file system and also provides a uniform interface to the underlying file systems. How file systems are supported under Linux is described in the section on file systems. The first inode points to the root of the process (its home directory) and the second to its current or pwd directory; pwd is derived from the Unix command pwd, print working directory. These two VFS inodes have their count fields incremented to show that one or more processes are referencing them. This is why you cannot delete a directory that a process has set as its pwd directory, or for that matter one of its sub-directories.
Virtual memory
Most processes have some virtual memory (kernel threads and daemons do
not) and the Linux kernel must track how that virtual memory is mapped
onto the system's physical memory.
Processor Specific Context
A process could be thought of as the sum total of the system's current state. Whenever a process is running it is using the processor's registers, stacks and so on. This is the process's context and, when a process is suspended, all of that CPU-specific context must be saved in the task_struct for the process. When a process is restarted by the scheduler, its context is restored from here.

COMPONENTS OF THE LINUX SYSTEM


The Linux system is composed of three main components:

Kernel
The kernel is responsible for maintaining all the important abstractions of the
operating system, including such things as virtual memory and processes.
System libraries
The system libraries define a standard set of functions through which applications
can interact with the kernel, and that implement much of the operating system
functionality that does not need the full privileges of kernel code.
System utilities
The system utilities are the programs that perform individual, specialized
management tasks. Some may be invoked just once to initialize and configure
some aspects of the system; others may run permanently.
FEATURES
Every Linux kernel can offer the following features:
Multi-user
In addition to the many user accounts available on a Linux system, it is possible to have multiple users logged in and working on the system at the same time. User accounts can be password-protected, so that users can control who has access to their applications and data.
Multitasking
It is possible to have many programs running at the same time, which means not only having many programs going at once, but also that the Linux operating system can itself have programs running in the background. These background processes are referred to as daemons.
Graphical User Interface
The powerful framework for working with graphical applications in Linux is referred to as the X Window System. X handles the functions of opening X-based GUI applications and displaying them on an X server process.
Networking Connectivity:
Linux offers support for a variety of Local Area Network boards, modems and
serial devices. In addition to LAN protocols, all the most popular upper level
networking protocols can be built-in. The most popular of these protocols is
TCP/IP.
Network Servers:
A variety of software packages are available that enable Linux to be used as a print server, file server, FTP server, mail server, Web server, news server or workgroup server.

Application Support

Because of compatibility with several different application programming interfaces, a wide range of freeware and shareware software is available for Linux. Most GNU software from the Free Software Foundation will run on Linux.
Hardware Support
It is possible to configure support for almost every type of hardware that can be
connected to a computer. There is support for floppy disk drives, CD-ROMs, removable disks, sound cards, tape devices, and video cards.
ADVANTAGES OF LINUX
The majority of Linux variants are available for free or at a much lower price than Microsoft Windows.
Linux is and has always been a very secure operating system; although it can still be attacked, it is much more secure than Windows.
The majority of Linux variants and versions are notoriously reliable and can often run for months or years without needing to be rebooted.
With Linux, source code and help are always available on the internet.
Linux's best assets are its price and its reliability.

NETWORK SIMULATOR
Ns is a public-domain simulator boasting a rich set of Internet protocols, including terrestrial, wireless and satellite networks. ns is the most popular choice of simulator used in research papers appearing in select conferences like SIGCOMM. ns is constantly maintained and updated by its large user base and a small group of developers at ISI. Ns is a discrete event simulator targeted at networking research. It provides substantial support for simulation of TCP, routing, and multicast protocols over wired and wireless (local and satellite) networks. It includes an optional network animator.
THE NETWORK ANIMATOR
Ns together with its companion, nam, forms a very powerful set of tools for teaching networking concepts. ns contains all the IP protocols typically covered in undergraduate and most graduate courses, and many experimental protocols contributed by its ever-expanding user base. With nam, these protocols can be visualized as animations.
STARTING NS
You start ns with the command 'ns <tclscript>' (assuming that you are in the directory with the ns executable, or that your path points to that directory), where '<tclscript>' is the name of a Tcl script file which defines the simulation scenario (i.e., the topology and the events). You could also just start ns without any arguments and enter the Tcl commands in the Tcl shell, but that is definitely less comfortable. For information on how to write your own Tcl scripts for ns, refer to the ns documentation and tutorials. Everything else depends on the Tcl script: it might create some output on stdout, it might write a trace file, or it might start nam to visualize the simulation, or all of the above.
STARTING NAM
You can either start nam with the command 'nam <nam-file>' where '<nam-file>' is
the name of a nam trace file that was generated by ns, or you can execute it directly
out of the Tcl simulation script for the simulation which you want to visualize.
Below you can see a screenshot of a nam window where the most important
functions are being explained.

Network Animating Window

SYSTEM DESIGN
Data Flow Diagram:
(Figure: data flow diagram. Elements shown: WSN, Network lifetime, Mobile sink, Unified framework, Neighbor Sink, Network parameters, Joint sink, Delay-constrained mobility, E-SSDR, DeSM, Network Model, Energy Model, I-DeSM, Zero Delay, Energy cost, Connected sub-graphs, SSDR approach, Optimal sink path, Infinite Speed, Travel time, Longest network lifetime, Connected Sink Sites, Result.)

SYSTEM IMPLEMENTATION:
Design is a multi-step process that focuses on data structures, software architecture, procedural details (algorithms, etc.), and the interfaces between modules. The design process also translates the requirements into a representation of the software that can be assessed for quality before coding begins.
Computer software design changes continuously as new methods, better analysis, and broader understanding evolve. Software design is at a relatively early stage in its evolution. Therefore, software design methodology lacks the depth, flexibility and quantitative nature that are normally associated with more classical engineering disciplines. However, techniques for software design do exist, criteria for design quality are available, and design notation can be applied.

INPUT DESIGN
Input design is the process of converting user-originated inputs to a computer-based format. Input design is one of the most expensive phases of the operation of a computerized system and is often a major problem area of a system.
In this project, the input design is realized through various NAM forms and methods:
Node creation
Protocol implementation
Simulation results

OUTPUT DESIGN
Output design generally refers to the results and information that are generated by the system. For many end users, output is the main reason for developing the system and the basis on which they evaluate the usefulness of the application. In any system, the output design determines the input to be given to the application.

SYSTEM TESTING:
Cyclomatic Testing:
The cyclomatic complexity of a section of source code is the count of the number of linearly independent paths through the source code. For instance, if the source code contains no decision points such as IF statements or FOR loops, the complexity is 1, since there is only a single path through the code. If the code has a single IF statement containing a single condition, there are two paths through the code: one path where the IF statement is evaluated as TRUE and one path where it is evaluated as FALSE.
Mathematically, the cyclomatic complexity of a structured program is defined with reference to a directed graph containing the basic blocks of the program, with an edge between two basic blocks if control may pass from the first to the second (the control flow graph of the program). The complexity is then defined as
M = E - N + 2P
where
M = cyclomatic complexity
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components.
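A minimal sketch applying this formula to a small, hypothetical control-flow graph (a single function containing one IF statement, so the expected complexity is 2):

# Hedged sketch: compute cyclomatic complexity M = E - N + 2P for a small
# hand-built control-flow graph. The graph below is a hypothetical example:
# entry -> condition -> (then | else) -> exit, i.e. one IF statement.
edges = [
    ("entry", "cond"),
    ("cond", "then"),   # IF condition true
    ("cond", "else"),   # IF condition false
    ("then", "exit"),
    ("else", "exit"),
]
nodes = {n for e in edges for n in e}
P = 1  # one connected component: a single function

M = len(edges) - len(nodes) + 2 * P
print(M)  # 5 - 5 + 2 = 2 linearly independent paths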

Graph Based Testing:


The first step in black-box testing is to understand the objects that are modeled in the software and the relationships that connect these objects. Testing begins with creating a graph of important objects and their relationships and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.
We start with creating a graph:
Directed links ----> relations between objects
Node weights ----> attributes of the objects
Nodes ----> objects
Graph-based testing begins with the definition of all nodes and node weights, i.e., objects and attributes are identified. The data model can be used as a starting point. Based on this, we can conduct the following behavioral testing methods:
Transaction flow modeling
Finite state modeling
Data flow modeling
We must then aim for node coverage and link coverage to ensure that all objects and their relations are exercised.
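A minimal sketch of such a node- and link-coverage check over a test graph; the graph, the executed-test records, and all names here are illustrative assumptions.

# Hedged sketch: verify that a set of executed tests covers every node (object)
# and every link (relationship) of the behavioral test graph at least once.
graph_nodes = {"user", "order", "invoice"}
graph_links = {("user", "order"), ("order", "invoice")}

# Each executed test records the objects it touched and the relations it exercised.
executed_tests = [
    {"nodes": {"user", "order"}, "links": {("user", "order")}},
    {"nodes": {"order", "invoice"}, "links": {("order", "invoice")}},
]

covered_nodes = set().union(*(t["nodes"] for t in executed_tests))
covered_links = set().union(*(t["links"] for t in executed_tests))

print("uncovered nodes:", graph_nodes - covered_nodes)  # empty set -> full node coverage
print("uncovered links:", graph_links - covered_links)  # empty set -> full link coverage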

CONCLUSION:
We presented a unified framework to analyze the sink mobility problem in WSNs with a delay constraint, together with a mathematical formulation that jointly considers different issues such as sink scheduling, data routing, bounded delay, and so on. The formulation is general and can be extended. However, this formulation is a MINLP and is time consuming to solve directly. Therefore, we discussed several induced sub-problems and developed corresponding optimal algorithms. Then, we generalized these solutions and proposed a polynomial-time optimal approach for the original problem. We showed the benefits of involving a mobile sink and the impact of network parameters (e.g., the number of sensors, the delay bound, and so on) on the network lifetime. Furthermore, we studied the effects of different trajectories of the sink and provided important insights for designing mobility schemes in real-world mobile WSNs.

REFERENCES:
1. W. Song, R. Huang, M. Xu, B.A. Shirazi, and R. Lahusen, "Design and Deployment of Sensor Network for Real-Time High-Fidelity Volcano Monitoring," IEEE Trans. Parallel and Distributed Systems, vol. 21, no. 11, pp. 1658-1674, Nov. 2010.
2. C. Liu, K. Wu, and J. Pei, "An Energy-Efficient Data Collection Framework for Wireless Sensor Networks by Exploiting Spatiotemporal Correlation," IEEE Trans. Parallel and Distributed Systems.
3. H. Kim, Y. Seok, N. Choi, Y. Choi, and T. Kwon, "Optimal Multi-Sink Positioning and Energy-Efficient Routing in Wireless Sensor Networks," Proc. Int'l Conf. Information Networking: Convergence in Broadband and Mobile Networking, pp. 264-274, 2005.
4. S. Jain, R.C. Shah, G. Borriello, W. Brunette, and S. Roy, "Exploiting Mobility for Energy Efficient Data Collection in Sensor Networks," Proc. Second IEEE/ACM Workshop Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt), Mar. 2004.
5. Z.M. Wang, S. Basagni, E. Melachrinoudis, and C. Petrioli, "Exploiting Sink Mobility for Maximizing Sensor Networks Lifetime," Proc. 38th Hawaii Int'l Conf. System Sciences, Jan. 2005.
6. J. Luo and J.-P. Hubaux, "Joint Sink Mobility and Routing to Increase the Lifetime of Wireless Sensor Networks: The Case of Constrained Mobility," IEEE/ACM Trans. Networking, vol. 18, no. 5, pp. 1387-1400, 2010.
7. E. Guney, N. Aras, I.K. Altinel, and C. Ersoy, "Efficient Integer Programming Formulations for Optimum Sink Location and Routing in Heterogeneous Wireless Sensor Networks," Computer Networks, vol. 54, no. 11, pp. 1805-1822, 2010.
