Abstract: TCP was primarily designed for wired networks and has become very efficient and robust through years of enhancements. However, experiments and research have shown that TCP's congestion control algorithm performs very poorly over Wireless Sensor Networks (WSNs), with severe unfairness among flows. This is because the deployment of a sensor network causes unpredictable patterns of connectivity and varied node density, resulting in uneven bandwidth provisioning on the forwarding paths. This paper studies TCP's fairness issues in WSNs and designs an extended congestion control algorithm based on the characteristics of WSNs. The protocol is designed as an extension to DCCP (Datagram Congestion Control Protocol) with a new congestion control component. We also implemented this congestion control algorithm in NS-2. Simulation results show improvements in fairness achieved by using the extended congestion control algorithm.
Key Words: Max-min Fairness, Datagram Congestion Control Protocol, congestion control, wireless sensor networks
1 INTRODUCTION

A WSN typically consists of a large number of tiny wireless sensor nodes (often referred to as nodes or motes) that are densely deployed [1]. Nodes measure some ambient conditions in the environment surrounding them. These measurements are then transformed into signals that can be processed to reveal characteristics of the phenomenon. The collected data are routed to special nodes, called sink nodes (or Base Stations, BS), typically in a multi-hop fashion. The sink node then sends the data to the user. Depending on the distance between the user and the network, a gateway may be needed to bridge the two, either through the Internet or via satellite. Two sensors are neighbors if they can directly communicate with each other, as shown in Fig. 1.

Fig. 1. Wireless Sensor Networks.

Sensor networks have a wide range of applications in habitat observation, health monitoring, object tracking, battlefield sensing, etc. They differ from traditional wireless networks in many aspects [2,3]. In particular, sensor nodes are limited in computation capability, memory space, communication bandwidth, and, above all, energy supply.

Nowadays, nodes are intended to be small and cheap. Consequently, their resources are limited (typically a limited battery and reduced memory and processing capabilities). Because of the restrained transmission power, wireless sensor nodes can only communicate locally, with a certain number of local neighbors. Nodes therefore have to collaborate in order to accomplish their tasks: sensing, signal processing, computing, routing, localization, security, etc. Consequently, WSNs are, by their nature, collaborative networks [4]. As most wireless networks are built on IEEE 802.11 wireless links, the work here also assumes that the MAC layer is an IEEE 802.11-like random access protocol.

The sensors share the same wireless medium, and each packet is transmitted as a local broadcast in the neighborhood. We assume the existence of a MAC protocol which ensures that, among the neighbors in the local broadcast range, only the intended receiver keeps the packet and the other neighbors discard it. The sensors are statically located after deployment. We study data packets sent from sensors to base stations. The base stations are connected via an external network to a data collection center. A data packet may be sent to any base station as long as there is a forwarding path.

Transmission Control Protocol (TCP) is a reliable, end-to-end transport protocol which is widely used for data services and is very efficient in wired networks. However, experiments and research have shown that TCP's congestion control algorithm performs very poorly over wireless sensor networks, with degraded throughputs [5]. Research has therefore focused on further improving TCP to address the special characteristics of wireless sensor networks.

This project is supported by the National Natural Science Foundation of P. R. China under Grants 60474029, 60774023, and 60774045, and by the China Postdoctoral Science Foundation under Grant 2005038558.

978-1-4244-2723-9/09/$25.00 © 2009 IEEE
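TCP's loss-driven rate control, AIMD (Additive Increase, Multiplicative Decrease), grows the congestion window by a constant every RTT and cuts it sharply when a loss is detected. A minimal sketch of that behavior, with illustrative constants (the `alpha`, `beta`, and loss pattern below are assumptions for demonstration, not values from this paper):

```python
# Minimal AIMD sketch: additive increase of the congestion window each
# loss-free RTT, multiplicative decrease on loss detection.

def aimd_window(losses, alpha=1.0, beta=0.5, initial=1.0):
    """Return the congestion window after each RTT.

    losses -- iterable of booleans, True if a loss was detected that RTT.
    alpha/beta -- illustrative AIMD parameters (assumed, not from the paper).
    """
    cwnd = initial
    trace = []
    for lost in losses:
        if lost:
            cwnd = max(1.0, cwnd * beta)   # multiplicative decrease on loss
        else:
            cwnd += alpha                  # additive increase per RTT
        trace.append(cwnd)
    return trace

# A single loss halves the window, illustrating the sharp bandwidth change
# that delay-sensitive applications dislike.
print(aimd_window([False, False, False, True, False]))
# → [2.0, 3.0, 4.0, 2.0, 3.0]
```

The sawtooth this produces is why one lost packet abruptly halves the sending rate, which motivates the smoother congestion control that DCCP-style protocols aim for.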
Currently, the vast majority of traffic in the Internet relies upon the congestion control mechanism provided by TCP. However, applications such as streaming video and Internet telephony prefer timeliness to reliability. The reliability and in-order delivery algorithms provided by TCP often result in arbitrary delay, and TCP's AIMD (Additive Increase, Multiplicative Decrease) rate control algorithm causes a very sharp bandwidth change upon the detection of a single packet loss. Consequently, such applications often choose UDP, with either their own congestion control mechanisms implemented on top of it or none at all. Long-lasting UDP flows without any congestion control mechanism present a potential threat to the network. Moreover, congestion control mechanisms are difficult to implement and may behave incorrectly.

A sensor is congested if it receives more traffic than its maximum forwarding rate. The nature of sensor deployment leads to unpredictable patterns of connectivity and varied node density, which causes uneven bandwidth provisioning on the forwarding paths. The data sources are often clustered at sensitive areas under scrutiny and may take similar paths to the base stations. When data converge toward a base station, congestion may occur at sensors that receive more data than they can forward.

Congestion causes many problems. When a packet is dropped, the energy spent by upstream sensors on the packet is wasted. The further the packet has traveled, the greater the waste. When a sensor X is severely congested, any attempts by upstream neighbors to send to X waste their effort (and energy) and, worse yet, are counter-productive, because they compete for channel access with neighboring sensors. Finally, and above all, the data loss due to congestion may jeopardize the mission of the application. While fusion techniques [6] can be used for data aggregation, applications may require some specifics (e.g., exact locations of the reporting sensors) to be kept [1], which places a limit on how much fusion can do.

2 RELATED WORK

2.1 Congestion control

TCP provides connection-oriented, reliable data transmission. The basic idea of TCP congestion control is that TCP senders probe the network for available resources and increase the transmission rate until packet losses are detected. TCP takes packet loss as an indication of network congestion and triggers the appropriate congestion control schemes.

The problem of congestion control in sensor networks is largely open. A typical approach is for a congested sensor to send backpressure messages to its neighbors [7], which reduce their data rates and may further propagate the backpressure messages upstream. However, the important issue of ensuring fairness among the sensors during their rate reduction is not addressed by this approach.

In ESRT [8], by monitoring the congestion notification bit carried in the packet header, the base station decides a common rate for all sensors such that no packet will be lost in the network. This approach achieves fairness but is too pessimistic, because every sensor must conform its rate to the worst rate in the most congested area. Directed Diffusion [9] and SPEED [10] were not specifically designed for congestion control, but they may be adapted for this purpose to a certain degree.

2.2 TCP Fairness

TCP flows experience severe unfairness in wireless sensor networks. TCP's window-based congestion control adjusts the congestion window size every RTT. Flows with longer RTTs increase the congestion window more slowly than flows with shorter RTTs. At the network routers, an unfair packet-dropping scheme, such as a simple FIFO drop-tail scheme, may cause some flows to experience more losses than others. Medium access at a gateway is inherently unfair when using a MAC protocol such as IEEE 802.11. When multiple upstream and downstream flows coexist, the upstream flows (from senders to the gateway) tend to occupy the whole medium and the downstream flows (from the gateway to receivers) almost stop transmitting. Unfairness between the upstream and downstream flow throughputs is extremely high, with a ratio of up to 800 between them [11]. In a wireless sensor network, IN TCP flows (from the wired part to the wireless part) get more bandwidth than the coexisting OUT TCP flows (from the wireless part to the wired part) [12]. IN flows obtain a much higher share of the bandwidth when mixed flows exist, due to exposed- and hidden-node effects. TCP's own timeout and back-off schemes further worsen the unfairness.

Max-min flow control was first proposed by Jaffe [13] to distribute the network bandwidth fairly among a set of best-effort flows. The name max-min comes from the strategy of maximizing the bandwidth allocated to those flows that receive the minimum bandwidth. Much further research [14,15] has been done since then. All these works assume that each flow has a fixed routing path. Two basic properties of max-min flow control are:

• Fairness property. At each link, any passing flow is entitled to an equal share of the link capacity unless the flow is limited to a smaller bandwidth at another link on its path.

• Maximum throughput property. The entire capacity of a link must be allocated to the flows unless every passing flow has a bottleneck link elsewhere which limits the bandwidth that the flow can receive.

A bottleneck algorithm that assigns the max-min bandwidth to every flow was described in [16, 17] and is repeated here: find the global bottleneck link, i.e., the link with the smallest bandwidth per flow; assign an equal share of that link's capacity to each passing flow; then remove the link and the passing flows from the network. When a flow is removed, the capacities of all links on its routing path are reduced by the bandwidth assigned to the flow. Repeat this process until every flow has been assigned a bandwidth and removed from the network.

In general, TCP works poorly in wireless sensor networks. This is caused by the high bit error rate over wireless links, as well as by TCP's built-in congestion control algorithm interacting with the contention-based medium access of IEEE 802.11. A large amount of research has focused on improving the fairness issues discussed above.
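The max-min bottleneck algorithm repeated above from [16, 17] can be sketched as follows. The representation of the network (dictionaries mapping links to capacities and flows to link paths) is an illustrative assumption, not the notation of the original papers:

```python
# Sketch of the max-min bottleneck algorithm: repeatedly find the link
# with the smallest per-flow share, assign that share to its passing
# flows, reduce capacities along their paths, and remove link and flows.

def max_min_allocation(capacity, flows):
    """capacity: dict link -> capacity; flows: dict flow -> list of links.

    Returns dict flow -> assigned max-min bandwidth.
    """
    capacity = dict(capacity)
    remaining = {f: list(path) for f, path in flows.items()}
    rate = {}
    while remaining:
        # Per-flow share on every link that still carries unassigned flows.
        share = {}
        for link, cap in capacity.items():
            n = sum(1 for path in remaining.values() if link in path)
            if n:
                share[link] = cap / n
        bottleneck = min(share, key=share.get)  # smallest bandwidth per flow
        s = share[bottleneck]
        for f in [f for f, path in remaining.items() if bottleneck in path]:
            rate[f] = s
            for link in remaining[f]:           # reduce capacity along path
                capacity[link] -= s
            del remaining[f]
        del capacity[bottleneck]                # remove the saturated link
    return rate

# Example: f1 and f2 share link L1 (capacity 6); f2 and f3 share L2
# (capacity 10). L1 is the bottleneck, so f1 = f2 = 3; f3 then gets
# L2's leftover capacity.
print(max_min_allocation(
    {"L1": 6, "L2": 10},
    {"f1": ["L1"], "f2": ["L1", "L2"], "f3": ["L2"]}))
# → {'f1': 3.0, 'f2': 3.0, 'f3': 7.0}
```

Note how the example exhibits both properties stated above: f1 and f2 split L1 equally (fairness), while f3 absorbs all of L2's capacity left over after f2's allocation (maximum throughput).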