
1. P. Bose, P. Morin, I. Stojmenovic, and J. Urrutia, Routing with Guaranteed Delivery in Ad Hoc Wireless Networks, Wireless Networks, vol. 7, no. 6, pp. 609-616, 2001.

This paper considers routing problems in ad hoc wireless networks modeled as unit graphs, in which nodes are points in the plane and two nodes can communicate if the distance between them is less than some fixed unit. The authors describe the first distributed algorithms for routing that do not require duplication of packets or memory at the nodes and yet guarantee that a packet is delivered to its destination. These algorithms can be extended to yield algorithms for broadcasting and geocasting that do not require packet duplication. A byproduct of their results is a simple distributed protocol for extracting a planar subgraph of a unit graph. They also present simulation results on the performance of these algorithms. Keywords: wireless networks, routing, unit graphs, online algorithms, Gabriel graphs.
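The planar subgraph extraction mentioned above relies on the Gabriel graph condition: an edge (u, v) of the unit graph is kept only if no other node lies strictly inside the circle whose diameter is the segment uv. Below is a minimal, centralized Python sketch of this test (node positions assumed to be (x, y) tuples; the paper's protocol applies the same test distributedly using only one-hop neighbor information):

```python
import math

def gabriel_edge(u, v, nodes):
    """Keep edge (u, v) only if no other node w lies strictly inside
    the circle whose diameter is the segment uv (Gabriel condition)."""
    mx, my = (u[0] + v[0]) / 2.0, (u[1] + v[1]) / 2.0          # circle center
    r2 = ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) / 4.0        # squared radius
    for w in nodes:
        if w == u or w == v:
            continue
        if (w[0] - mx) ** 2 + (w[1] - my) ** 2 < r2:
            return False  # w witnesses that (u, v) is not a Gabriel edge
    return True

def planar_subgraph(nodes, unit=1.0):
    """Extract the Gabriel subgraph of the unit graph over `nodes`."""
    edges = []
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if math.dist(u, v) < unit and gabriel_edge(u, v, nodes):
                edges.append((u, v))
    return edges
```

Because any witness inside the circle of an edge shorter than the unit distance is necessarily a common neighbor of both endpoints, each node can evaluate this test with purely local information, which is what makes the protocol distributed.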

2. F. Bian, R. Govindan, S. Shenker, and X. Li, Using Hierarchical Location Names for Scalable Routing and Rendezvous in Wireless Sensor Networks, Proc. Second ACM Intl Conf. Embedded Networked Sensor Systems (SenSys 04), pp. 305-306, 2004.

Until practical ad hoc localization systems are developed, early deployments of wireless sensor networks will manually configure location information in network nodes in order to assign spatial context to sensor readings. The authors argue that such deployments will use hierarchical location names (for example, a node in a habitat monitoring network might be said to be node number N in cluster C of region R) rather than positions in a two- or three-dimensional coordinate system. They show that these hierarchical location names can be used to design a scalable routing system called HLR. HLR provides a variety of primitives, including unicast, scoped anycast and broadcast, as well as various forms of scalable rendezvous. These primitives can be used to implement most data-centric routing and storage schemes proposed in the literature; those schemes currently need precise position information and geographic routing in order to scale well. They evaluate HLR using simulations as well as an implementation on the Mica-2 motes.
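As an illustration only (not HLR's actual data structures or forwarding rule), the sketch below represents a hierarchical location name as a tuple such as ('R', 'C', 'N') and greedily forwards toward the neighbor whose name shares the longest prefix with the destination's name:

```python
def common_prefix_len(a, b):
    """Length of the shared prefix of two hierarchical names,
    e.g. ('R3', 'C1', 'N7') vs ('R3', 'C2', 'N4') -> 1."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(my_name, neighbor_names, dest_name):
    """Forward to the neighbor whose name shares the longest prefix
    with the destination; return None if no neighbor is 'closer'."""
    if not neighbor_names:
        return None
    best = max(neighbor_names, key=lambda n: common_prefix_len(n, dest_name))
    if common_prefix_len(best, dest_name) > common_prefix_len(my_name, dest_name):
        return best
    return None
```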

3. C. Bettstetter, The Cluster Density of a Distributed Clustering Algorithm in Ad Hoc Networks, Proc. IEEE Intl Conf. Comm. (ICC), pp. 4336-4340, 2004.

The paper considers a wireless multihop network whose nodes are randomly distributed according to a homogeneous Poisson point process of density λ (in nodes per unit area). The network employs Basagni's distributed mobility-adaptive clustering (DMAC) algorithm to achieve a self-organizing network structure. The author shows that the cluster density, i.e., the expected number of clusterheads per unit area, is λc = λ / (1 + μ/2), where μ denotes the expected number of neighbors of a node. Consequently, a clusterhead is expected to incorporate half of its neighboring nodes into its cluster. This result also holds in a scenario with mobile nodes and serves as a bound for inhomogeneous spatial node distributions.
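The formula follows directly from the membership statement: on average each cluster consists of its clusterhead plus half of the head's μ neighbors, so equating the node density with the cluster density times the mean cluster size gives

```latex
\lambda \;=\; \lambda_c \Bigl(1 + \tfrac{\mu}{2}\Bigr)
\quad\Longrightarrow\quad
\lambda_c \;=\; \frac{\lambda}{1 + \mu/2}.
```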

4. F. Araujo et al., CHR: A Distributed Hash Table for Wireless Ad Hoc Networks, Proc. 25th IEEE Intl Conf. Distributed Computing Systems Workshop, 2005.

This paper focuses on the problem of implementing a distributed hash table (DHT) in wireless ad hoc networks. Scarceness of resources and node mobility turn routing into a challenging problem, and therefore the authors claim that building a DHT as an overlay network (as in wired environments) is not the best option. Hence, they present a proof-of-concept DHT, called Cell Hash Routing (CHR), designed from scratch to cope with problems like limited available energy, communication range, or node mobility. CHR overcomes these problems by using position information to organize a DHT of clusters instead of individual nodes. By using position-based routing on top of these clusters, CHR is very efficient. Furthermore, its localized routing and its load-sharing schemes make CHR very scalable with respect to network size and density. For these reasons, they believe that CHR is a simple and yet powerful adaptation of the DHT concept for wireless ad hoc environments.
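The core idea of hashing data to geographic clusters rather than to individual nodes can be illustrated with a small sketch (a generic grid-cell layout and hash function are assumed here; this is not CHR's actual cell construction):

```python
import hashlib

def key_to_cell(key, area_width, area_height, cell_size):
    """Map a data key to a fixed grid cell of the deployment area.
    Whatever nodes currently lie inside that cell are responsible for the key."""
    h = hashlib.sha1(key.encode()).digest()
    x = int.from_bytes(h[:8], "big") / 2**64 * area_width    # pseudo-random x in [0, width)
    y = int.from_bytes(h[8:16], "big") / 2**64 * area_height # pseudo-random y in [0, height)
    return (int(x // cell_size), int(y // cell_size))

# A put/get then reduces to position-based routing toward the cell's center;
# the nodes located in that cell store or return the item.
cell = key_to_cell("temperature/42", area_width=100.0, area_height=100.0, cell_size=10.0)
```

Because the cell, not a particular node, owns the key, node mobility within a cell does not invalidate the mapping, which is the main reason this approach suits ad hoc networks.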

5. H. Frey and I. Stojmenovic, On Delivery Guarantees of Face and Combined Greedy-Face Routing in Ad Hoc and Sensor Networks, Proc. ACM MobiCom, pp. 390-401, 2006.

It was recently reported that all known face and combined greedy-face routing variants cannot guarantee message delivery in arbitrary undirected planar graphs. The purpose of this article is to clarify that this is not true in general. Specifically, in relative neighborhood and Gabriel graphs, recovery from a greedy routing failure is always possible without changing between any adjacent faces. Guaranteed delivery then follows from guaranteed recovery while traversing the very first face. In arbitrary graphs, however, a proper face selection mechanism is important, since recovery from a greedy routing failure may require visiting a sequence of faces before greedy routing can be restarted. A prominent approach is to visit the sequence of faces intersected by the line connecting the source and destination nodes. Whenever an edge intersecting this line is encountered, the critical part is to decide whether the traversal has to change to the next adjacent face or not. Failures may arise from face routing procedures that force a change of the traversed face at each such intersection. Recently observed routing failures produced by the GPSR protocol in arbitrary planar graphs result from incorporating such a face routing variant; they cannot be produced by the well-known GFG algorithm, which does not force a face change at every intersection. Besides methods that visit the faces intersected by the source-destination line, the authors discuss face routing variants which simply restart face routing whenever the next face has to be explored. They give the first complete and formal proofs that several proposed face routing and combined greedy-face routing schemes do guarantee delivery in specific graph classes or even in arbitrary planar graphs. They also discuss the reasons why other methods may fail to deliver a message or may even end up in a loop.
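The combined greedy-face strategy discussed above can be summarized as: forward greedily while the distance to the destination decreases, and at a local minimum switch to face traversal of the planar subgraph until a node closer than the point of failure is reached. The sketch below is a simplified GFG-style illustration only; `neighbors_of` and `face_next` are assumed helper functions (the latter standing in for a right-hand-rule face traversal), not the paper's protocol or proofs:

```python
import math

def combined_greedy_face(src, dst, neighbors_of, face_next):
    """Illustrative combined greedy-face routing loop.

    neighbors_of(u) -> neighbor positions of u in the unit graph.
    face_next(u, prev, dst) -> next hop along the current face of the
        planar subgraph (assumed right-hand-rule traversal).
    """
    u, prev, mode, recovery_dist = src, None, "greedy", None
    while u != dst:
        if mode == "greedy":
            # Forward to the neighbor strictly closest to the destination.
            best = min(neighbors_of(u), key=lambda v: math.dist(v, dst), default=None)
            if best is not None and math.dist(best, dst) < math.dist(u, dst):
                prev, u = u, best
                continue
            # Local minimum: remember the failure distance, fall back to face mode.
            mode, recovery_dist = "face", math.dist(u, dst)
        # Face mode: walk the planar subgraph until a node closer to the
        # destination than the failure point is found, then resume greedy.
        prev, u = u, face_next(u, prev, dst)
        if math.dist(u, dst) < recovery_dist:
            mode = "greedy"
    return u
```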

6. M. Albano, S. Chessa, F. Nidito, and S. Pelagatti, Q-NiGHT: Adding QoS to Data Centric Storage in Non-Uniform Sensor Networks, Proc. Eighth Intl Conf. Mobile Data Management (MDM), pp. 166-173, 2007.

Storage of sensed data in wireless sensor networks is essential when the sink node is unavailable due to failures and/or disconnections, but it can also provide efficient access to sensed data for multiple sink nodes. Recent approaches to data storage rely on Geographic Hash Tables for efficient data storage and retrieval. These approaches, however, do not support different QoS levels for different classes of data, as the programmer has no control over the level of redundancy of the data. They also result in a severe imbalance in the storage load of individual sensors, even when sensors are uniformly distributed. This may cause serious data losses, waste energy, and shorten the overall lifetime of the sensor network. In this paper, the authors propose a novel protocol, Q-NiGHT, which (1) provides direct control over the level of QoS in data dependability, and (2) uses a strategy similar to the rejection method to build a hash function that scatters data approximately with the same distribution as the sensors. The benefits of Q-NiGHT are assessed through a detailed simulation experiment, also discussed in the paper. Results show its good performance on different sensor distributions in terms of both protocol costs and load balance between sensors.
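The rejection-method idea mentioned in point (2), i.e., hashing data to locations distributed approximately like the sensors themselves, can be sketched as follows (an illustration with an assumed density estimate, not Q-NiGHT's actual hash construction):

```python
import hashlib

def density_hash(key, density, max_density, area=(100.0, 100.0)):
    """Map `key` to a storage position distributed like the sensors.

    density(x, y) -> estimated node density at (x, y)
    max_density   -> an upper bound on density over the area
    Candidate positions are derived deterministically from the key and
    rejected with probability 1 - density/max_density (rejection method),
    so accepted positions follow the node distribution.
    """
    counter = 0
    while True:
        h = hashlib.sha1(f"{key}:{counter}".encode()).digest()
        x = int.from_bytes(h[0:8], "big") / 2**64 * area[0]
        y = int.from_bytes(h[8:16], "big") / 2**64 * area[1]
        accept = int.from_bytes(h[16:20], "big") / 2**32
        if accept < density(x, y) / max_density:
            return (x, y)
        counter += 1
```

Scattering data according to the sensor distribution is what prevents the storage imbalance that plain uniform hashing causes in non-uniform deployments.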

7. M. Albano and S. Chessa, Distributed Erasure Coding in Data Centric Storage for Wireless Sensor Networks, Proc. 14th IEEE Symp. Computers and Comm. (ISCC), pp. 67-75, 2009.

In-network storage of data in wireless sensor networks helps reduce the communication inside the network and favors data aggregation. In this paper, the authors consider the use of erasure codes in combination with in-network storage. They provide an abstract model of in-network storage to show how erasure codes can be used, and they discuss how this can be achieved in two case studies. They also define a model aimed at evaluating the probability of correct data encoding and decoding, and they exploit this model and simulations to show how the parameters of the erasure code and the network should be configured in order to achieve correct data coding and decoding with high probability, even in the presence of sensor faults.
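For an (n, k) erasure code, data can be decoded as long as at least k of the n stored fragments survive; if each fragment independently survives with probability p, the decoding probability is the binomial tail computed below (an illustrative calculation under this independence assumption, not the paper's specific model):

```python
from math import comb

def decode_probability(n, k, p):
    """P(at least k of n fragments survive), each surviving independently
    with probability p -- the condition for an (n, k) erasure code to decode."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Example: 8 fragments, any 4 suffice, each survives with probability 0.8.
print(decode_probability(8, 4, 0.8))  # about 0.99
```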
