
Light-trains: A Cross-Layer Delivery Mechanism for High Bandwidth Applications in Moving Metro-Trains

Ashwin Gumaste1, Nasir Ghani2 and Si Qing Zheng3


1 Indian Institute of Technology, Bombay, India; 2 Tennessee Tech University, Cookeville, USA; 3 The University of Texas at Dallas, USA
ashwing@ieee.org, nghani@tntech.edu, sizheng@utdallas.edu

Abstract: Trains as a mass-transport system are a strong application case for coarsely granular bandwidth-on-demand networking. Multiple trains time-sharing the same track, traveling at significant speed, and carrying a large number of bandwidth-savvy users represent a premier motivation for the merger of optical and wireless communications. Wireless delivery methods alone cannot meet the needs of such a large moving mass of bandwidth-intensive users. We propose a method to efficiently integrate a wireless network over a flexible optical backbone. The proposed method is built upon the light-trails optical platform and involves a unique control layer that adapts to an overlaid wireless network. Here, we propose a cross-layer protocol that enables seamless communication for these light-trains. We also show the benefits of such a scheme in terms of throughput, end-to-end delay, bandwidth efficiency, and scalability through a simulation study.

I. INTRODUCTION

The growth and spread of the Internet and other web services into the wireless domain has brought ubiquity and mobility to users. High-bandwidth applications are now being distributed through wireless networks, and a significant number of the end-users of such high-speed wireless applications reside in mass-transport systems. The metro train represents a popular form of mass transportation, carrying a large number of bandwidth-hungry users within its confines. Such a train is a single moving entity that requires dynamic bandwidth allocation as it moves through a wireless network. Further, several trains can co-exist on the same track in a statistical TDM fashion. This makes the bandwidth-allocation problem along a track complex, in light of the limited bandwidth-distance product offered by a wireless gateway. This triple of needs, namely the large data requirement in the train, the significant train speed, and the multiplicity of trains along a track, adds complexity to the design of hand-off mechanisms and to the provisioning of the underlying core network. We propose a mechanism that enables bandwidth-on-demand to trains in a cross-layer network comprising a wireless network supported by a metro-core optical network.

The delivery of bandwidth to end-users in a train leads to a new type of hierarchy. End-users within the train receive and transmit data through access points in each compartment, and hence are part of an ad hoc (IEEE 802.11-type) network. The access points are in turn connected to an Ethernet switch in the train. The question then is how to deliver large bandwidth chunks to and from the switch to the rest of the Internet (core network). Provisioning the underlying physical layer, which is typically an optical network, while taking this moving entity into account poses a new design problem. We propose the light-trains model as an engineering solution for providing bandwidth on demand to moving trains. In Section II we discuss the system design of the light-trains model. Section III describes the node architecture, while Sections IV and V propose an efficient protocol for the cross-layer design. Detailed simulations are presented in Section VI, and conclusions in Section VII.

II. LIGHT-TRAIN SYSTEM

Consider a train m moving along a track u, and assume there are Gu more trains sharing u in the same direction, as shown in Fig. 1. As train m moves along the track, it communicates with the rest of the core network through wireless gateways situated at periodic intervals along the track. The wireless gateways are overlaid on a fiber network. We associate the concept of geographic range with every wireless gateway, defined as the time a train spends communicating with the gateway at a particular velocity. In addition to geographic range, we define the provisioning zone as the overlap area between the geographic ranges of two adjacent wireless gateways. To maintain ubiquity, trains move from one wireless gateway to another seamlessly so that communication is not dropped. Power-based hand-off is the typical technique used for maintaining ubiquity, and hand-off occurs in the provisioning zones. Since multiple trains share the same track, we are compelled not to use the track itself as a medium for data transport. Instead, an underlying optical network is a good choice, as track operators generally have rights of way along their routes.

In order to support communication to moving trains, the underlying optical network must meet certain characteristics. The principal requirement is dynamic provisioning of bandwidth: data paths must be provisioned for wireless gateways along the track through the optical network on an on-demand basis. This time-sensitive requirement, coupled with the fact that multiple trains will be sharing the same track, implies that pre-provisioning will generally lead to an inefficient (or infeasible) solution. To enable ubiquity when a train moves from the geographic range of one wireless gateway to another, the data available at one gateway must also be made available, in its entirety or in partial form, to the neighboring gateway with minimal delay. This leads to the second requirement on the optical layer: support for optical multicasting. A third characteristic for cross-layer integration is signaling: wireless gateways must be able to signal each other about ongoing communication activities as trains move through their geographic ranges. Such signaling is particularly important between neighboring gateways. To illustrate, consider a train that moves from the geographic range of one wireless gateway to the next while a user on board is downloading a large file. The download time exceeds the time spent in the range of the first gateway, so the file cannot be delivered there in full. In such a case the file must be made available to the next adjacent gateway so that the user can complete the download. Further, the second gateway must know what fraction of the file was successfully transmitted through the first gateway, so that it transmits only the remainder. This necessitates good signaling at both the optical and the data layer between nodes along the track.
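The geographic range, provisioning zone, and fractional hand-off signaling described above can be sketched in a few lines. This is our own illustrative model, not code from the paper; all names, and the assumption of uniform gateway coverage, are ours.

```python
# Illustrative sketch (not from the paper): geographic range as dwell time,
# provisioning zone as coverage overlap, and the hand-off notice that tells
# the next gateway how much of a transfer is already complete.

def geographic_range_s(coverage_m: float, velocity_mps: float) -> float:
    """Time a train at velocity_mps spends inside coverage_m of coverage."""
    return coverage_m / velocity_mps

def provisioning_zone_m(spacing_m: float, coverage_m: float) -> float:
    """Overlap between two adjacent gateways with equal coverage lengths."""
    return max(0.0, coverage_m - spacing_m)

def handoff_notice(file_size_bytes: int, bytes_sent: int) -> dict:
    """Control message from the departing gateway: fraction already delivered."""
    return {"file_size": file_size_bytes,
            "fraction_sent": bytes_sent / file_size_bytes,
            "resume_offset": bytes_sent}

def remaining_bytes(notice: dict) -> int:
    """Bytes the next gateway must still transmit."""
    return notice["file_size"] - notice["resume_offset"]
```

For example, roughly 1.6 km of coverage at 400 km/h (about 111 m/s) yields a geographic range of about 14.4 s, consistent with the simulation parameters in Table 1.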

Fig. 1. Conceptual layout of metro trains over a wireless network and underlying optical network, showing the metro train, IP router, range of the wireless channel, optical access line (underlying network), and combined light-trail node/wireless access point.

An optical networking solution that enables dynamic provisioning and optical multicasting, and that provides for effective signaling, is the light-trails solution originally proposed by us in [1-4]. As opposed to a conventional end-to-end circuit or lightpath, a light-trail is a generalization of a lightpath [5] such that multiple nodes along the path can interact with each other. A light-trail is analogous to an optical bus, with the added advantage that communication and arbitration of the bus are carried out by an out-of-band (OOB) control channel. To create a light-trail, nodes require an architecture that supports three functionalities: the ability to drop-and-continue the optical signal, to passively add optical signal, and to support the OOB control. A light-trail is defined between two extreme nodes that regulate signal flow in the optical bus: the convener and the end-node. Signal flows in the direction from the convener to the end-node. Light-trail setup and tear-down involve optical switch configuration; this procedure is called macro-management, see [6] for full details. Once a light-trail is set up, its nodes can communicate by setting up connections. Connection setup and tear-down over a light-trail require no optical switching and are carried out by the OOB control channel using a protocol we define in Section IV. This procedure is called micro-management.

III. NODE ARCHITECTURE

In this section we describe the node architecture that supports cross-layer integration of wireless and optical light-trail networks for providing bandwidth to a moving train. Fig. 2 shows the architecture of the cross-layer node. A wireless gateway is connected to an optical light-trail infrastructure through a layer-2 switch and supporting electronic buffers. The wireless gateway and the light-trail infrastructure are assumed to be independent; the light-trail network control channel binds the two layers together to function as a single entity. The binding is done via the protocol described in Sections IV and V. The light-trail infrastructure is modified from the one shown in [3, 7] to suit cross-layer integration. Select DWDM channels (light-trails) are accessible to the node through a light-trail access unit (LAU) [7]; the remaining channels pass through with limited power loss. To do so, we introduce a sub-system called the wavelength selectable switch (WSS) [8] that allows switching of any combination of channels from the express path to the LAUs (each consisting of two arrayed waveguide gratings (AWGs), couplers, and an ON/OFF switch, as shown in the bubble in Fig. 2). The light-trails accessible to the node are processed within the bubble: the light-trail group is first de-multiplexed into individual light-trails by an AWG de-multiplexer, and each light-trail then feeds two 2x2 couplers connected to each other through an ON/OFF switch. The first coupler drops-and-continues the optical signal, while the second coupler allows the node to passively add data into the light-trail. The switch is ON at intermediate nodes and OFF at the convener and end-nodes.

The dropped signal from the first coupler of a LAU is fed to an electronic buffer. Likewise, data from the wireless gateway that is to be sent into the light-trail is collected in another buffer connected to the other (add) coupler. Scheduling of the buffers into the wireless gateway as well as into the light-trail is done through the control channel. We define two directions of communication at this node: forward communication, from the optical layer to the wireless layer (train), and reverse communication, from the train (wireless layer) to the optical layer; see Figs. 2 and 5.

Control Channel Processing Unit (CCPU): the OOB control channel is optically dropped and electronically processed at every node in the light-trail. This makes the control channel synchronous with respect to every node. Control packets are 64 bytes long and encapsulated within the control channel, which runs at 155 Mb/s and is standardized by the ITU for optical supervisory channels (OSC). In addition to the light-trail and connection provisioning fields mentioned in [2], control packets also carry service information and inter-node management data.

Layer-2 Digital Switch: the digital switch shown in the figure connects the light-trail access buffers to the wireless gateway. An important function of the switch is to differentiate between data packets and management packets that arrive to and from both the optical and wireless layers. In forward communication, the switch aggregates the management packets from the CCPU with the data packets from the light-trail buffer while broadcasting into the wireless medium. In reverse communication, the switch separates data and management packets arriving from the wireless domain (train): data packets are sent to the light-trail access buffers, whereas management packets are sent to the CCPU. The switch is also responsible for sending status packets to the CCPU. Status information includes buffer occupancy levels, fractional connection transmittance level (the percentage of a connection transmitted by a given gateway), and provisioning of the wireless gateway when a train arrives (or leaves).
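The three status fields reported to the CCPU can be captured in a small record type. The following is a hypothetical shape of such a status packet, with field names of our choosing; the paper does not specify its wire format.

```python
from dataclasses import dataclass

# Hypothetical status packet sent by the layer-2 switch to the CCPU; the three
# fields mirror the status information listed in the text.

@dataclass
class StatusPacket:
    buffer_occupancy: float          # fraction of the access buffer in use
    fractional_transmittance: float  # fraction of connection sent by this gateway
    train_event: str                 # "arrive", "leave", or "none"

    def is_handoff_event(self) -> bool:
        """True when the packet reports a train arriving at or leaving range."""
        return self.train_event in ("arrive", "leave")
```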
Fig. 2. Cross-layer node architecture supporting a wireless gateway and light-trail based optical network, comprising the digital switch, buffer set, CCPU with OSC card, and the AWG bubble (1x2/2x1 WSS, control-channel filters, and couplers for drop-and-continue and passive add). Note the 1x2 WSS conserving power and cost by processing only select WDM channels at a site.

IV. CROSS-LAYER PROTOCOL DESIGN

In this section we define a protocol that integrates the optical and wireless layers. A core network element (either an IP router or an aggregation switch) is situated at both the convener and end-nodes of every light-trail below a given track. Consider a particular train m that moves along the track with variable velocity, and let there be P gateways on the track. The range of each gateway depends on factors such as terrain, wireless channel loss, and train velocity. Each gateway has two electronic light-trail access buffers for every supported wavelength in the optical layer, see Fig. 2. In particular, the protocol implements the following key tasks:
1. Setup and tear-down of optical connections between IP network elements and nodes.
2. Traffic engineering of connection data amongst nodes to facilitate seamless communication to and from the train as it moves along the track.

3. Setup and tear-down of light-trails to maintain the virtual light-trail topology and maximize utilization.
We now explain how the protocol accomplishes each of the three tasks.
Case 1: Connection setup. Consider a train m moving through gateway Pi. Upon a bandwidth request in the train, the request is broadcast to the wireless gateway. The gateway in turn sends this request to the switch, which forwards it in the form of a control packet to the CCPU. The light-trail management system is then invoked to set up a connection.
Case 2: Hand-off procedure. This procedure ensures seamless communication when a train moves from the geographic range of one gateway to the next. When a train approaches the provisioning zone of a gateway, a wireless hand-off is initiated. So that the underlying optical layer can support this hand-off, we define a procedure for dynamic bandwidth allocation at the optical layer at the next node toward which the train is moving. During hand-off, the ingress gateway (from which the train is departing) sends a control packet to the CCPU informing the light-trail management system of the change. This control packet is also sent to the next (egress) gateway, to which the train is moving. There are two tasks involved: first, to continue the provisioned connection that is feeding (draining) data to (from) the train; and second, to ensure that the data transmitted by the next gateway to the train minimizes transmission replication. These tasks are carried out as follows. When a light-trail connection is provisioned, the data that arrives at a node from the nearest IP router is also broadcast to the nodes further down in the direction of the train's movement, due to the optical multicast property of the light-trail. Note that the directions of optical signal propagation and train movement need not be the same.
Several nodes in the direction of the train's movement have access to the data that arrives over the connection. However, only select nodes in the optical layer retain this data for transmission into the wireless layer. The decision to retain data (in Layer-2 format) is based on a simple retaining policy. Retaining policy: a node retains a data packet that it receives from the light-trail if either the train for which the packet is destined is in its range, or another node has signaled that a train matching the packet's destination is scheduled to arrive in its range (see also the multi-gateway connection discussion in Section V). A packet that is not to be retained is promptly discarded from the buffer. As the train moves through the gateway, the retained data is transmitted into the wireless channel. As the train leaves the range of one gateway and enters the next, the first gateway informs the next gateway, through the control channel, of the fraction of data that has already been transmitted. This way each gateway is aware of the fraction of the connection that has been broadcast to the train. The new gateway in whose range the train enters then begins transmitting only the data
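The retaining policy above reduces to a two-clause membership test. A minimal sketch, with our own data structures standing in for the node's state:

```python
# Sketch of the retaining policy: keep a packet only if its destination train
# is in this gateway's range now, or a neighbor has signaled that the train
# is scheduled to arrive here.

def should_retain(dest_train: str,
                  trains_in_range: set,
                  scheduled_arrivals: set) -> bool:
    return dest_train in trains_in_range or dest_train in scheduled_arrivals

def filter_buffer(buffer, trains_in_range, scheduled_arrivals):
    """Packets that fail the policy are promptly discarded from the buffer."""
    return [p for p in buffer
            if should_retain(p["dest"], trains_in_range, scheduled_arrivals)]
```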

that has not yet been transmitted. We assume that optical propagation delay and electronic processing times are negligible compared with the time a train spends in a gateway's range. Realistically, we assume train speeds will not exceed 500 km/h and that gateway ranges will be on the order of several hundred milliseconds to several seconds (300 m to 1.5 km) [9]. In the reverse communication direction, as the train moves from the range of one gateway to the next, it transmits data generated by its onboard users. This data is collected from the switch and sent to the light-trail buffers. The buffers are emptied into the light-trail when a connection is established between them and the node to which the IP router is connected (typically the end-node, as shown in Fig. 1). The light-trail protocol proposed in Section V enables connections from nodes to be provisioned hierarchically for reverse communication.

V. LIGHT-TRAIL PROTOCOL DESIGN

In this section we describe the protocol that enables connection and light-trail setup and tear-down, taking into account the overlaid wireless network and moving train. We define the following control packet types for our proposed protocol:
Buffer-status packets: these packets inform the light-trail management system of the status of a buffer at a node. Buffer status is conveyed through a pair of measures: buffer occupancy and service criticality. The former is a measure of how full the buffer is, whereas the latter is a time-out feature that enables latency-sensitive services to be provisioned over shared-medium light-trails. Computation of buffer status is a key feature of our protocol; it is denoted by a parameter called packet_value used in the protocol.
Session-interruption packets (SIP): these packets are used for communication between nodes of a light-trail. When a train passes from one gateway range to the next, the ingress gateway sends a SIP to the egress gateway.
The SIP carries information indicating what fraction of the connection has been transmitted to the train.
Light-trail management action packets: these messages are used by the convener to direct the respective actions at the nodes. This type includes successful_ack, neg_ack, connection_disrupt and macro_ack, defined as follows. Successful_ack: indicates to a node that it has successfully reserved bandwidth. Neg_ack: has the opposite function of successful_ack. Connection_disrupt: part of the SIP functions; indicates to nodes when a connection is disrupted. Macro_ack: notifies nodes of the need to set up a new light-trail.
Light-trail controller: the node given responsibility for assigning connections to the light-trail. The controller is ideally situated in proximity to the IP router. For forward-direction communication the controller is the convener node, whereas for reverse-direction communication it is the end-node.

Physical constant of the light-trail: we defined in [10] a physical constant of the light-trail called Tg, or guard_band_time. This is the minimum time required for any micro-management function to be carried out, e.g., laser turn-on, burst-mode receiver locking, etc. Typically this value is about 10 μs for most commercial components [10].
Connection durations: connections are also classified by duration. If the duration of a connection is less than the geographic range of a gateway, it is called a single-gateway connection (SGC). Conversely, if the duration spans multiple gateway ranges, it is called a multi-gateway connection (MGC).

V. A. Protocol Working

Forward communication: in this direction, gateways seek data from an IP router, which is typically connected at the convener of the light-trail. The convener node is the light-trail controller for the forward connection and has multiple electronic buffers, each allocated to a particular node (gateway) in the light-trail. The convener evaluates which node in the light-trail deserves priority for connection establishment at a given time. This evaluation is done by matching the buffer-status packet_value sent by a light-trail node against the corresponding buffer-status value of the node that currently holds the connection. For the reverse connection (RC), a requesting node sends its buffer-status packets to the end-node of the light-trail, which is now the light-trail controller, since data flows from nodes in the light-trail towards the end-node. The controller matches the value of the buffer-status packet against the current packet_value for that light-trail and decides whether to grant the connection request to the initiating node.

V. B.
Convention and Specifications

Consider the following variable definitions for trains, gateways, and user services.

Train specifications: Let $BW(t) = \{BW_1(t), BW_2(t), \dots, BW_a(t)\}$ represent the buffer values in the trains at time $t$ in the forward communication direction, where $BW_m(t)$ is the buffer value of the $m$-th train. Likewise, let $\overline{BW}(t) = \{\overline{BW}_1(t), \overline{BW}_2(t), \dots, \overline{BW}_a(t)\}$ represent the buffer values in the trains at time $t$ in the reverse communication direction, with $\overline{BW}_m(t)$ the value for the $m$-th train. Let $BT_m$ denote the size of the buffer in any train. Finally, let $v^u(t) = \{v^u_1(t), v^u_2(t), \dots, v^u_a(t)\}$ represent the velocities of the $a$ trains on track $u$ at time $t$.

Gateway specifications: Let the index $i$ denote a particular gateway, so that $P^u = \{P^u_1, P^u_2, \dots, P^u_w\}$ represents the set of wireless gateways along track $u$. Let $B(t) = \{B_1(t), B_2(t), \dots\}$ represent the set of buffer values of the gateways at time $t$ for forward communication, and $\overline{B}(t) = \{\overline{B}_1(t), \overline{B}_2(t), \dots\}$ the corresponding set of buffer occupancy values for reverse communication. $B_{\max}$ is the size of the buffer at every node, and $D_i(t)$, $\overline{D}_i(t)$ represent the data-rates in the forward and reverse communication directions in the wireless channel for the $i$-th gateway at time $t$. Finally, $R_i(t)$ is defined as the range of the $i$-th gateway at time $t$. Fig. 3 illustrates $BW_m(t)$, $\overline{BW}_m(t)$, $B_i(t)$ and $\overline{B}_i(t)$.

Services specifications: Let $S_1, S_2, \dots, S_q, \dots, S_h$ represent the set of $h$ services present in the network, and let $\delta_1, \delta_2, \dots, \delta_q, \dots, \delta_h$ represent the corresponding maximum permissible delays, in ascending order ($\delta_1 < \delta_2 < \dots < \delta_h$). Let $z_{mq}$ represent the time elapsed since the first packet of service type $q$ entered the buffer in the train for forward communication, with $\overline{z}_{mq}$ the corresponding value in the reverse direction. Furthermore, let $y_{ikq}$ represent the time elapsed since the first packet of service type $q$ entered the buffer at node $i$ destined for communication in light-trail $k$ in the forward direction, with $\overline{y}_{ikq}$ defined analogously for reverse communication.
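The SGC/MGC classification defined earlier in this section reduces to comparing a connection's duration against the gateway's geographic range, both expressed in time. A minimal sketch of that test, using illustrative numbers drawn from Table 1:

```python
# Classify a connection as single-gateway (SGC) or multi-gateway (MGC) by
# comparing its duration to a gateway's geographic range, both in seconds.

def connection_type(duration_s: float, gateway_range_s: float) -> str:
    return "SGC" if duration_s <= gateway_range_s else "MGC"
```

With the 14.4 s range used in the simulations, a 5 s transfer is an SGC, while a 60 s video session spans multiple gateways and is an MGC.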
Using the above-defined variables, we now explain the algorithm for computing the buffer status (i.e., packet_value) for forward communication. The buffer-status computation at a gateway node $i$ intended for transmission in light-trail $k$ at time $t$ is decomposed into two factors:

$$Buff_{ik}(t) = \max\big(Buff^1_{ik}(t),\; Buff^2_{ik}(t)\big) \qquad (1)$$

where the first term on the right-hand side is the buffer-status value due to occupancy level, while the second term reflects the latency condition of the buffer (relative to the maximum allowable latency of its packets). The first term on the right-hand side of (1) is given by:

$$Buff^1_{ik}(t) = \min\!\left(\frac{BW_m(t)}{BT_m},\; \frac{B_i(t)}{B_{\max}}\right) \qquad (2)$$

Eq. (2) takes the minimum of the normalized occupancies of the buffers in the train and at the node. The minimization reflects the critical requirement, as both buffers act as a single back-to-back entity, as shown in Fig. 3.

Fig. 3. Buffer representations at the gateway and in the train, along with the range $R_i(t)$ of the $i$-th gateway.

To compute $Buff^2_{ik}(t)$, we define the service statistic at the node in the forward direction as

$$\psi_{ik}(t) = \min_{q=1,\dots,h}\big(\delta_q - y_{ikq}\big)$$

and

$$q'_i = \arg\min_{q=1,\dots,h}\big(\delta_q - y_{ikq}\big)$$

denoting the service that corresponds to the critical value in the buffer at node $i$. Similarly, the service statistic in train $m$ in the forward direction is

$$\tau_m(t) = \min_{q=1,\dots,h}\big(\delta_q - z_{mq}\big)$$

and the service that gives rise to this statistic is

$$q'' = \arg\min_{q=1,\dots,h}\big(\delta_q - z_{mq}\big)$$

Hence $Buff^2_{ik}(t)$ is a maximization over the two service statistics (in the train and at the node); the maximization ensures collective treatment of the moving train and the static node:

$$Buff^2_{ik}(t) = \max\!\left(\frac{\psi_{ik}(t)}{\delta_{q'_i}},\; \frac{\tau_m(t)}{\delta_{q''}}\right) \qquad (3)$$

The buffer status in the reverse communication direction is computed analogously:

$$\overline{Buff}_{ik}(t) = \max\big(\overline{Buff}^1_{ik}(t),\; \overline{Buff}^2_{ik}(t)\big) \qquad (4)$$

The two terms on the right-hand side of (4) are computed as follows. The occupancy term is:

$$\overline{Buff}^1_{ik}(t) = \frac{1}{2}\left(\frac{\overline{BW}_m(t)}{BT_m} + \frac{\overline{B}_i(t)}{B_{\max}}\right) \qquad (5)$$

The second factor, $\overline{Buff}^2_{ik}(t)$, is due to service provisioning and is computed similarly to the service statistics for forward communication, i.e.,

$$\overline{\psi}_{ik}(t) = \min_{q=1,\dots,h}\big(\delta_q - \overline{y}_{ikq}\big), \qquad \overline{q}'_i = \arg\min_{q=1,\dots,h}\big(\delta_q - \overline{y}_{ikq}\big)$$

$$\overline{Buff}^2_{ik}(t) = \max\!\left(\frac{\overline{\psi}_{ik}(t)}{\delta_{\overline{q}'_i}},\; \frac{\overline{\tau}_m(t)}{\delta_{\overline{q}''}}\right) \qquad (6)$$

The values $Buff_{ik}(t)$ and $\overline{Buff}_{ik}(t)$ are the buffer-status (packet_value) values for node $i$ vying for a connection in light-trail $k$ at time $t$, computed from Eqs. (1)-(6). Both values are less than unity and are

proportional to the network parameters (delay and buffer occupancy) of the node relative to the other nodes in the light-trail, as well as to the train's network parameters. We now consider four cases of connection request and their corresponding algorithms:
SGC request in FC: this request occurs when data is to be sent to a moving train and the connection duration is less than a single gateway range. A user in the train requests this connection, and the request is sent to the convener. The

convener node then determines whether it can set up a connection to this gateway for the train by invoking Algorithm #1 (detailed below Fig. 5).
MGC request in FC: this request occurs when data is to be sent to a moving train and the connection duration spans the ranges of several gateways. The convener first invokes Algorithm #1; then, when the train moves from one gateway to another, it receives a corresponding SIP and invokes the hand-off procedure, i.e., Algorithm #2 (detailed below Fig. 5).
SGC request in RC: this request is similar to an SGC in the FC direction, except that it is the end-node (the light-trail controller for RC), rather than the convener, that invokes Algorithm #1.
MGC request in RC: this request is similar to an MGC in FC; the controller in this case is again the end-node.
Algorithm #1 runs at the light-trail controller. In the if statement, the packet_value sent by nodes is compared with CURR_LT(k), the packet_value of the node that holds the current connection in the light-trail. If the value of the node seeking the connection is greater, the light-trail bandwidth is assigned to the seeking node through a successful_ack whose argument (arg) is the seeking node; at the same time, a connection_disrupt message is passed to the node that held the light-trail bandwidth. Deadlocks and no-transmissions are handled as special cases of the algorithm.
Connection type | Forward Connection | Reverse Connection
SGC | At convener; at gateway node | At gateway node (drain only); at convener
MGC | At convener; at gateway node | At gateway node; at convener

interruption packet and correlates this value to the ongoing connection. The node then begins to transmit from the point where the previous node ceased its transmission. Transmission of data stops as the train enters the provisioning margin of the gateway supported by the node. This is followed by sending a SIP to the next node along the track notifying it of the point where transmission has stopped.
Algorithm # 1 (at the light-trail controller):
  // for each buffer_status packet received:
  if packet_value > CURR_LT(k)_value
      send a successful_ack to arg[packet_value]
      send connection_disrupt to arg[CURR_LT(k)_value]
      pause for guard_band_time
      initiate connection to arg[packet_value]
      CURR_LT(k)_value := packet_value
  elseif packet_value <= CURR_LT(k)_value
      send neg_ack to arg[packet_value]
      if packet_value > max[packet_value]
          send macro_ack to arg[packet_value]
          send new-LT_Wave_No to arg[packet_value]
          update L2 database, IP table
      endif
  end

Algorithm # 2 (at each node when a train is undergoing hand-off, entering or leaving):
  // train enters a gateway range
  compute SIP(i-1)
  correlate SIP(i-1) with buffer(i)
  set counter := 0
  transmit buffer(i)
  if counter < Ri(t) - 2*PM
      stop transmit
      send SIP(i) to node i+1
  end

Algorithms used at the light-trail controller and nodes for bandwidth allocation (#1) and hand-off (#2).
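A hypothetical Python rendering of Algorithm #1 is given below. Message transmission is abstracted into a send callback, the guard-band pause is elided to a comment, and max_value stands in for the max[packet_value] time-out threshold described in the text; none of these names come from the paper.

```python
# Hypothetical rendering of Algorithm #1 at the light-trail controller.
# send(msg_type, node) abstracts control-channel transmission; curr holds the
# node currently owning the light-trail connection and its packet_value.

def arbitrate(packet_value, seeker, curr, send, max_value=1.0):
    if packet_value > curr["value"]:
        send("successful_ack", seeker)
        send("connection_disrupt", curr["node"])
        # (pause for guard_band_time before switching the connection over)
        send("initiate_connection", seeker)
        curr["node"], curr["value"] = seeker, packet_value
    else:
        send("neg_ack", seeker)
        if packet_value > max_value:  # time-out escalation: new light-trail
            send("macro_ack", seeker)
            send("new_LT_wave_no", seeker)
    return curr
```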

Fig. 4. Algorithm for connection assignment.


VI. SIMULATION STUDY

To evaluate the performance of the light-trains framework, we developed a detailed C# simulation model. We assume the train network shown in Fig. 6, with 8 IP routers and bi-directional (2-fiber) light-trails, and 30 gateways between each IP-router segment. The train speed, gateway ranges, and other parameters are listed in Table 1. Traffic requests in the train are modeled as voice, video, or data, differentiated by maximum permissible delay; all three services are assumed to be encapsulated in Ethernet Layer-2 frames. The data-rate in the wireless layer between the train and a gateway is a function of physical fading, modeled per the UMTS standard [9]. Load in the network is computed from the total number of bits

Fig. 5. Forward and Reverse Communication (showing the IP router, the combined wireless gateway/light-trail node, and the light-trail buffers).

In the elseif statement, if the packet_value of the seeking node is less than or equal to the value of the node that currently holds the light-trail bandwidth, the controller sends a negative acknowledgement. Further, if a time-out condition arises in which the seeking node must meet a service guarantee, the controller allows the seeking node to set up a new light-trail; a macro acknowledgement is invoked in this case, and the Layer-2 and Layer-3 tables are correspondingly updated at the light-trail nodes and IP router, respectively.
Algorithm #2 runs at all light-trail nodes other than the controller. The node computes the value of the session

transmitted by the train averaged over the average data-rate and time; this gives us the normalized load. In the simulation the load is kept under 50%, as higher loads result in excessive retransmission.
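As a concrete reading of this load definition (the function and the numbers below are our own illustration, not taken from the simulator):

```python
def normalized_load(bits_transmitted, avg_rate_bps, duration_s):
    """Normalized load: bits actually sent by the train divided by the
    capacity offered over the observation window (avg. rate x time)."""
    return bits_transmitted / (avg_rate_bps * duration_s)

# e.g. a train that sent 27 Mb over 10 s, with an average wireless
# rate of 54 Mb/s, presents a normalized load of 27 / 540 = 0.05
load = normalized_load(27e6, 54e6, 10.0)
```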
Table 1. Simulation specification.

    Parameter              Specification
    Train speed            400 km/h
    Range                  14.4 s
    Data-rate              1~54 Mb/s
    Provisioning margin    10 ms
    BTm                    75 Mb
    Bmax                   150 Mb
    Light-trail line-rate  1.25 Gb/s
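A quick sanity check on these numbers: if we take the geographic range of a gateway to be simply the train speed times the dwell time (our reading of the definition in Section II, not a formula stated explicitly in the paper), the figures in Table 1 imply roughly 1.6 km of track covered per gateway:

```python
# Assumed relation: geographic range (m) = train speed (m/s) x dwell time (s)
speed_kmh = 400.0   # train speed from Table 1
dwell_s = 14.4      # per-gateway "range" (expressed as time) from Table 1

speed_ms = speed_kmh * 1000.0 / 3600.0   # ~111.1 m/s
geographic_range_m = speed_ms * dwell_s  # -> 1600.0 m of track per gateway
```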



Fig. 8. Throughput of the light-train system for 30 gateways and different file sizes as a function of network load.

Fig. 6. Topology used for simulation.



Fig. 9. Average delay experienced by services as a function of load.


VII. CONCLUSION

In this paper we have proposed an efficient light-trains framework to provide bandwidth on demand to moving metro trains in a hybrid wireless-optical environment. In particular, we introduce a new node architecture that provides for cross-layer design between the wireless and optical layers. At the optical layer we use a modified light-trail technology that provides bandwidth on demand to nodes. Furthermore, we propose a protocol that enables dynamic bandwidth provisioning to multiple moving trains, detailing message types and hand-off functionalities. Finally, a sample simulation study is presented to numerically gauge delay as a function of load for different services.
REFERENCES
[1] A. Gumaste, "Light-trail and Light-frame Architecture for Optical Networks," Ph.D. thesis, UT-Dallas, Fall 2003.
[2] A. Gumaste and I. Chlamtac, "Light-trails: An IP Centric Solution in the Optical Domain," Proc. IEEE HPSR 2003, Torino, Italy.
[3] A. Gumaste and I. Chlamtac, "Light-trails: An Optical Solution for IP Transport," OSA Journal of Optical Networking, May 2004, pp. 864-891.
[4] A. Gumaste, S. Ganguly, N. Ghani, A. Somani and S. Q. Zheng, "An Optical Networking Infrastructure for Enterprise Computational Grids using the Light-trail Model," submitted.
[5] A. Gumaste, I. Chlamtac and J. Jue, "Light-frames: A Pragmatic Framework for Conducting Optical Packet Transport," Proc. IEEE Int'l Conf. on Communications (ICC), Paris, 2004.
[6] A. Gumaste and S. Q. Zheng, "Next Generation Optical Storage Area Networks: The Light-trails Approach," IEEE Communications Magazine, Mar. 2005, vol. 21, no. 3, pp. 72-79.
[7] J. Fang, W. He and A. K. Somani, "Optimal Light Trail Design in WDM Optical Networks," Proc. IEEE ICC 2004, Paris.
[8] J. C. Tsai, L. Fan, C. H. Chi, D. Hah and M. C. Wu, "A Large Port-Count 1x32 WSS Using a Large Scan-Angle, High Fill-Factor, Two-Axis Analog Micromirror Array," Proc. European Conf. on Optical Comm. (ECOC), 2004.
[9] Online: http://www.umtsworld.com/technology/overview.htm
[10] A. Gumaste and S. Q. Zheng, "Dual Auction and Recourse Protocol for WDM Light-train Networks," Proc. IEEE WOCN 2006, Bangalore, India.


Fig. 7(a-c). Delay in accessing files for users in a train as a function of network load, for different numbers of trains on a 30-gateway track.

The delay profiles seen in the light-trains network (with differing numbers of trains sharing the track) are shown as a function of average file size and load in Figs. 7a-c. For a larger number of trains sharing a single track, the delay experienced is higher than for a smaller number of trains, even at the same load and for similar file-size requests. This is due to the dynamic response of the light-trail network: though inherently dynamic, its agility in provisioning connections decreases with the number of demands. Since the number of demands in a 6-train system is larger than in a 2-train system, the connection sizes tend to be smaller, and hence utilization suffers, resulting in larger delay. We also see that for smaller file-size requests the delay is significantly smaller, as seen in the top two charts of Fig. 7, while for large file sizes the delay grows exponentially with load, as seen in the bottom chart. Furthermore, Fig. 8 plots the throughput of the light-train network as a function of load for different file sizes and different train/track sharing ratios. Finally, Fig. 9 shows the performance of the protocol for the three key service types provisioned over the light-trains framework (voice, video and data) as a function of load. Our objective of keeping voice and video delay levels consistent and oblivious to the network load is achieved, as their associated delays increase only gradually with load; the protocol takes the service type into account, leading to lower delay for voice and video as compared to data.
