
Multi-level Priority Queue Scheduling Algorithm for Critical Packet Loss Elimination during Handoff.

Mahshid Madani (x1aut@unb.ca), University of New Brunswick Saint John, Canada
Janet Light (jlight@unbsj.ca), University of New Brunswick Saint John, Canada

Abstract
Mobile IP provides node mobility by allowing a change in the point of attachment to the network. Handling the mobile network performance degradation caused by critical packet loss during handoff is the main focus of this paper. Critical packets are defined according to application objectives. A combination of efficient scheduling algorithms, optimum window size, and priority buffers is proposed as a solution to decrease critical packet loss and improve the quality of service. Simulation results substantiate the ideas presented in this paper.

1. Introduction
When a wireless node moves from its home network to a foreign network, it attaches to the new network using the normal connection procedure. However, packet loss or connection termination can occur during this phase. Packets that were directed to the previous base station (BS) are forwarded to the new BS with the help of the home agent (HA). The HA maintains a routing table that keeps track of the mobile node's current location and its Care-of Address (CoA). From then on, the home agent routes all packets addressed to the home address to the mobile node in the foreign network, as shown in Fig. 1.

The degradation is worse when the mobile node visits many different networks [8], and performance also suffers when the node moves between heterogeneous networks, such as from a data network to a cellular network and vice versa. A logical solution is to store packets in a buffer at the BS and redirect them to the new BS whenever handoff takes place. Our objective here is to introduce a solution that decreases or eliminates critical packet loss during handoff. Critical packets are the packets that incur the highest transmission cost, and they are assigned the highest priority in the scheduling algorithms used here. A variety of existing scheduling algorithms are compared for performance. A random, multi-level priority queue scheduling algorithm developed here, combined with optimally sized buffers in the simulated BS, gives the best performance. The rest of the paper is organized as follows: Section 2 discusses the handoff and smooth handoff process. Section 3 explains the queue management and packet discarding techniques used here. Section 4 evaluates the performance of the proposed scheduling algorithms. Section 5 explains the network model used. Section 6 is dedicated to the proposed simulation program, followed by the results, comparison, and discussion.

2. Handoff Schemes
Handoff is the process of switching communication from one BS to another. Handoff decisions are based on the signal strength, the bit error rate, and the estimated distance from the BS. The efficiency of handoff algorithms is judged by the following criteria:
1- The quality of the received signal strength from the BS.
2- The number of handoffs between neighboring cells.
3- The number of bad handoffs, where a node gets attached to a BS with low signal strength.
4- The number of unnecessary handoffs, where a handoff occurs even though the current BS still provides satisfactory signal strength.
5- The delay in making a handoff.
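As an illustration of how criteria 1 and 4 interact, the sketch below encodes a signal-strength handoff decision with a hysteresis margin to suppress unnecessary handoffs. The thresholds, the hysteresis value, and the function itself are illustrative assumptions, not part of the paper's algorithms.

```python
# Hypothetical sketch: a handoff decision driven by received signal
# strength (criterion 1) with hysteresis against unnecessary handoffs
# (criterion 4). All numeric values are assumed for illustration.

def should_handoff(rss_current_dbm: float, rss_candidate_dbm: float,
                   min_acceptable_dbm: float = -90.0,
                   hysteresis_db: float = 5.0) -> bool:
    """Hand off only if the current BS is unsatisfactory, or the
    candidate BS is stronger by more than the hysteresis margin."""
    if rss_current_dbm < min_acceptable_dbm:
        return True  # current link too weak: hand off
    # hysteresis suppresses ping-pong handoffs between equal-strength BSs
    return rss_candidate_dbm > rss_current_dbm + hysteresis_db

print(should_handoff(-95.0, -88.0))  # weak current link -> True
print(should_handoff(-70.0, -68.0))  # within hysteresis -> False
```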

Figure 1. Mobile IP System Components

Performance degradation occurs when a mobile node leaves one network and attaches to another, and the node suffers more from this process especially when it roams frequently.

Proceedings of the 3rd Annual Communication Networks and Services Research Conference (CNSR05) 0-7695-2333-1/05 $20.00 2005 IEEE

At any instant, the connections within a cell can have different traffic characteristics, such as constant or variable bit rates, and different Quality of Service (QoS) requirements: throughput, delay, and packet loss threshold. Packet loss during handoff is the main source of quality degradation in the wireless environment.

Dynamic resource allocation enables servicing newly arrived customers without compromising the QoS. If there are not enough resources available in the new cell, the QoS is compromised or the connection is terminated. Many admission control schemes proposed in [18] suggest premature termination of connections, which is more undesirable than admitting fewer mobile nodes. One strategy is to reserve a percentage of the BS's resources to serve handoff requests. Another is to dynamically allocate resources in the base station [6]. The resource requirements for successful handoffs can be calculated theoretically. This quantitative measurement helps in defining and allocating adequate capacity, which in turn decreases the probability of new handoffs being blocked.

Mobile IP offers an inter-subnetwork handoff protocol that provides a mechanism for seamless handoff. However, packet loss still happens in a wireless mobile network, especially when a user roams between different networks, in particular from a data to a voice network and vice versa. The TCP protocol was not originally designed for a wireless environment, and it interprets packet loss due to handoff as a sign of congestion in the network [8]. That is why TCP performance suffers greatly when a mobile node attaches to a different subnetwork. Caceres and Iftode [6] have proposed a fast retransmit method to be implemented in the TCP protocol, but the solution requires additional duplicate packets. Seamless handoff, proposed as a method for packet loss control in [12], requires additional packet re-routing. All these solutions, however, require a large amount of resources.
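The "reserve a percentage of BS resources for handoffs" strategy mentioned above is often realized as a guard-channel admission test; the sketch below is an illustrative assumption (capacity and reserved count are invented), not a scheme from the paper or from [18].

```python
# Illustrative guard-channel admission sketch: new connections may only
# use capacity minus the reserved guard channels, while handoff requests
# may use the full capacity. `capacity` and `handoff_reserved` are
# assumed values for demonstration.

def admit(kind: str, in_use: int, capacity: int = 20,
          handoff_reserved: int = 2) -> bool:
    if kind == "handoff":
        return in_use < capacity          # handoffs see full capacity
    return in_use < capacity - handoff_reserved  # new calls see less

print(admit("new", 18))      # blocked: guard channels are reserved
print(admit("handoff", 18))  # admitted: handoff may use reserve
```

This is why premature termination of ongoing connections can be traded for a slightly higher blocking rate of new connections.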
Our goal is to avoid quality degradation by eliminating packet loss during handoff. At the same time, to avoid data stream disruption, throughput must stay the same during handoff. Mobility management techniques, such as network layer and transport layer mobility management, are implemented to reduce packet loss during handoff. Here, however, we propose congestion control methods using suitable queuing techniques to ensure the QoS.

A. Smooth Handoff
Smooth handoff is the process of temporarily storing packets in the base station and forwarding them when the mobile node gets attached to the new network. We have adopted this method, which was introduced as a solution to the packet loss problem during handoff in [16]. Whenever a new foreign agent receives a registration message from a new mobile node, it sends a binding update to the old foreign agent to inform it of the node's new IP address, the CoA. It also relays the registration request. If the old foreign agent receives a packet heading toward the mobile node, it forwards it to the new foreign agent. The new foreign agent also updates the home agent by sending a warning message. This causes the

correspondent node to be informed and updated with the new CoA. From then on, the correspondent node sends packets directly to the new foreign agent. This route optimization technique, which avoids triangle routing, is shown to reduce packet loss during handoff [12]. The main reason is that, most of the time, the new foreign agent is farther from the home agent than from the old foreign agent. The reduction in transmission delay achieved by this route optimization is the main source of packet loss reduction during handoff.

B. Packet Delay and Arrival Rate
The system considered here is a packet multiplexer. Each packet spends time T in the system. The system may become blocked because of a lack of resources, causing packets to be lost or blocked. The measures of interest for evaluating the performance of the system are:
- the time spent in the system (service time), T;
- the number of packets in the system, N(t);
- the ratio of arriving packets that are blocked or lost, Pl;
- the average number of messages per second that pass through the system (throughput).
Both the number of packets and the time they spend in the system are random here. Let A(t) be the number of packet arrivals in the interval from t0 = 0 to t, B(t) the number of packets lost or blocked in that interval, and D(t) the number of departures in the same interval. Then the number of packets in the system at time t is:
N(t) = A(t) - D(t) - B(t)
It is assumed that the system holds no packets at time 0. Following Leon-Garcia [1]:
Long-term arrival rate: λ = lim(t→∞) A(t)/t packets/sec
Throughput: lim(t→∞) D(t)/t packets/sec
Average number in the system: E[N] = lim(t→∞) (1/t) ∫0..t N(τ) dτ packets
Ratio of blocked (lost) packets: Pl = lim(t→∞) B(t)/A(t)
Given the arrival function A(t), we can trace the time and number of packets that have arrived in the system.
Assume counting starts at t0 = 0, with inter-arrival times t1, t2, t3, ..., so that the first packet arrives at time t1, the second at t1 + t2, and the third at t1 + t2 + t3; assuming none has departed, there are 3 packets in the system at time t1 + t2 + t3. Hence, the arrival rate up to the nth arrival is n/(t1 + t2 + ... + tn) packets per second, and the long-term arrival rate is:
λ = lim(n→∞) n/(t1 + t2 + ... + tn) = 1/E[t]
Here we assume the inter-arrival times are statistically independent and identically distributed. According to Little's formula, the average number of packets in the system equals the arrival rate multiplied by the average time spent in the system:
E[N] = λ E[T]
For a system with blocking, Little's formula becomes:
E[N] = λ (1 - Pl) E[T]


The average number of packets in the network, E[Nnet], equals the network packet arrival rate λnet multiplied by the average time a packet spends in the network, E[Tnet]: E[Nnet] = λnet E[Tnet]. We assume a packet-switching network with multiple multiplexers in each switch. Little's formula is applied to each multiplexer in the network. The overall network delay depends on each multiplexer's delay and also on the overall arrival rate to the network and to each multiplexer. For each multiplexer, we calculate the delay from the arrival rate and the rate at which the multiplexer can transmit packets, which determines the service time. The arrival times used here are random.
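As a quick numeric illustration of Little's formula E[N] = λ E[T], consider a lightly loaded deterministic single-server multiplexer; the rate and service time below are assumed values, not parameters from the paper's simulation.

```python
# Little's formula E[N] = lambda * E[T] for a deterministic queue with
# no waiting (arrivals every 10 ms, 4 ms service, so the server is
# always idle when a packet arrives). Values are illustrative.

arrival_rate = 100.0   # lambda, packets/sec
service_time = 0.004   # E[T] in seconds (no queuing delay here)

avg_in_system = arrival_rate * service_time
print(avg_in_system)   # 0.4 packets in the system on average
```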

3. Queue management and scheduling

Queue management algorithms manage the length of packet queues by dropping packets when necessary. Scheduling algorithms, in contrast, determine a logical order for packet transfer in the queue and manage the allocation of bandwidth among the data flows. Congestion is the main source of packet drops considered here, and queue management is the primary tool for congestion control in wireless as well as wired networks. Dropping packets, although necessary for congestion control, is a source of the global synchronization problem, lockouts, and increased average per-packet delay. Global synchronization arises when a queue drops a large number of consecutive packets in a short period of time; this leads to network slowdown and under-utilization of network resources on some links. Lockout happens when a small proportion of flows captures a large share of the bottleneck bandwidth, causing unfair allocation of resources. Both issues degrade overall network performance. The queuing system assumed here has the following characteristics: the inter-arrival time is deterministic; the service time is deterministic; a single-server model is used; at most k packets are allowed in the system. Queues are used as the storage medium here. Short-term congestion, mostly caused by packet bursts, can be corrected using appropriate queuing techniques, while long-term congestion is corrected by packet discarding. A packet discarding algorithm equipped with congestion determination tools is needed to decide when to discard. Our algorithm treats all packets of the same precedence level equally; it differentiates only when two packets with different priority classes enter the network at the same time, which triggers the corresponding packet discarding algorithm.

3.1 Packet buffering

In this paper, we assume that only the current foreign agent performs packet buffering. Whenever this agent receives the binding update message from the new foreign agent, it forwards the buffered packets to the new foreign agent. On the other hand, it has been suggested that the sending rate of the agent advertisement message be limited to a maximum of one per second to reduce the network traffic caused by these messages [7]. Hence, even if the new foreign agent uses the maximum advertisement sending rate, in the worst case the mobile host does not receive an advertisement message for one second after moving to the new foreign agent. This causes the mobile node to lose packets, because it cannot send a registration message to the new foreign agent within this time bound. This idea is used to implement the simulation here and to compare the effectiveness of different scheduling algorithms in reducing this variety of packet loss. Eom et al. [8] have proposed an extension to the route optimization method that aims to reduce this category of packet loss by incorporating a local handoff protocol. The disadvantage is that, because the beacon message is shorter than the advertisement message, its sending rate is much higher than that of the advertisement message.

3.2 Packet Discarding

Most congestion control schemes define an acceptable congestion threshold. If the congestion level exceeds the acceptable threshold but remains below a critical level, packets are discarded with linearly increasing probability; once the critical level is exceeded, all packets are discarded. Data streams with different short-term burst shapes but equal long-term packet rates should have their packets discarded with equal probability, which means a random function can be used in the dropping algorithm; this scheme is used here. However, if the long-term packet arrival rates are not equal and follow an exponential arrival pattern, plain randomness cannot be used in the dropping algorithm. An exponential long-term arrival process follows the Poisson distribution, which is considered when implementing the latter case. Four different dropping algorithms have been implemented and compared in the simulation, assuming equal arrival rates and using randomness.
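The two-threshold discarding rule from Section 3 (linearly increasing drop probability between an acceptable threshold and a critical level, dropping everything beyond it) can be sketched as follows; the threshold values and the use of a uniform random draw are illustrative assumptions.

```python
import random

def drop_probability(queue_len: int, low: int = 50,
                     critical: int = 100) -> float:
    """Below `low`: never drop. Between `low` and `critical`: drop
    probability grows linearly. At or above `critical`: always drop.
    `low` and `critical` are assumed threshold values."""
    if queue_len < low:
        return 0.0
    if queue_len >= critical:
        return 1.0
    return (queue_len - low) / (critical - low)

def should_drop(queue_len: int) -> bool:
    # random draw realizes the linearly increasing drop probability
    return random.random() < drop_probability(queue_len)

print(drop_probability(49))   # 0.0 - below the acceptable threshold
print(drop_probability(75))   # 0.5 - halfway to the critical level
print(drop_probability(120))  # 1.0 - beyond the critical level
```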

4. Performance Evaluation of Scheduling and Dropping Algorithms


Cost functions are used to evaluate scheduling algorithm performance. The cost function C(t) evaluates the cost incurred when the queuing delay is t, expressed as cost per unit of packet length. In our simulation, we assume the cost is proportional to the packet length, because the duration of a packet's transmission is proportional to its length: the larger the packet, the higher the transmission cost. However, the cost function can also be defined differently according to the application's objective. For instance, if a packet's queuing delay is less than a specified threshold, i.e., if it is meeting its handoff deadline, there is no cost; if a packet's queuing delay exceeds this threshold, the cost C is incurred.
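A minimal sketch of the deadline-style cost function described above: zero cost while the queuing delay is within the handoff deadline, and a cost proportional to packet length once it is exceeded. The `cost_per_bit` constant and the sample values are assumptions for illustration.

```python
# Deadline-based cost: no cost while the packet meets its handoff
# deadline, a length-proportional cost once the deadline is exceeded.
# `cost_per_bit` and the example numbers are assumed values.

def packet_cost(queuing_delay: float, deadline: float,
                packet_len_bits: int,
                cost_per_bit: float = 0.001) -> float:
    if queuing_delay <= deadline:
        return 0.0  # deadline met: no cost
    return cost_per_bit * packet_len_bits  # larger packet, higher cost

print(packet_cost(0.05, 0.1, 20000))  # within deadline: 0.0
print(packet_cost(0.20, 0.1, 20000))  # late: proportional to 20000 bits
```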

Scheduling Algorithms

Scheduling algorithms such as Static Priority (SP), Earliest Deadline First (EDF), and First Come First Served (FCFS) have been used extensively [2,3,4,5]. The scheduling algorithms implemented and tested for performance in this project are: First In First Out (FIFO), generally called "without scheduling"; Static Priority (SP) with random dropping; Shortest In First Out (SIFO); and Prioritizing Packets According to Type in SIFO. The number of packets dropped during handoff in a connection is positively related to the maximum window size and to the time delay during which packets are being dropped, and negatively related to the round trip time of the connection. The following formula from [8] is used here to calculate the number of packets dropped during handoff:
min((mws / rtt) × t_loss, mws)     (1)
where mws is the maximum window size, rtt is the round trip time, and t_loss = time delay + beacon time + T_new_fa + T_HA. The time delay is the delay between the home agent and the old foreign agent. The beacon time is the time needed for the agent advertisement message to be received. T_new_fa is the period from when the mobile node receives the beacon message to when the new foreign agent sends the registration request message. T_HA is the period from when the new foreign agent sends the registration message to when the home agent receives it.

C. TCP Window Size
The receive window size is the amount of data, in bytes, that can be received and buffered during a TCP connection. The maximum window size varies among protocols and is the amount of data a sending host can send before it must wait for an acknowledgement and window update from the receiving node. TCP can adjust the window in increments of the Maximum Segment Size (MSS), which is negotiated during the connection setup phase. This adjustment can increase the percentage of full-sized TCP segments, which is beneficial during bulk data transmission. The process for determining the receive window size is as follows:
- The first connection request to the correspondent node advertises a receive window size of 16,384 bytes (16 KB).
- After the connection is established, the window size can be rounded up to an even increment of the maximum segment size, and can be adjusted up to a maximum of 64 KB.
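Formula (1) can be evaluated directly; the window, round-trip, and loss-time values below are illustrative assumptions, not the paper's simulation parameters.

```python
# Evaluating the packet-drop estimate of formula (1) from [8]:
#   dropped = min((mws / rtt) * t_loss, mws)
# All numeric inputs below are assumed example values.

def data_dropped(mws: float, rtt: float, t_loss: float) -> float:
    return min((mws / rtt) * t_loss, mws)

# a 64 KB window, 200 ms round trip, 100 ms loss interval:
print(data_dropped(65536, 0.2, 0.1))  # half a window's worth of data
# a long outage is capped at one full window:
print(data_dropped(65536, 0.2, 1.0))  # the whole 64 KB window
```

The cap at mws reflects that the sender stalls after one unacknowledged window, which is why a larger maximum window size drops more data during handoff.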

5. The Network Model


Packets from different users are tunneled toward the destination. These data stream packets are then copied into a buffer. Whenever the mobile node attaches to the new foreign agent, the previous foreign agent is notified through a binding update from the new foreign agent. The buffered packets are then re-tunneled to the new foreign agent, and all packets with the mobile node's destination address that were being sent to the previous foreign agent are now tunneled to the new one. There is a chance of packet duplication at this point. To solve this issue, during the registration period the mobile node sends the source address and datagram specifications of the most recently received packets to the previous foreign agent, which checks this information and drops the packets that have already reached the mobile node. Soon after reaching the new wireless cell, in the handoff region, the mobile node registers itself with the new foreign agent. The delay involved in finding the new foreign agent, registering, and receiving the reply causes packets that reach the new foreign agent before the registration reply to be treated as unauthorized traffic and discarded. The handoff process has the following phases:
1- Handoff takes place when the Mobile Node (MN) in sub-network A moves to subnet B.
2- After a time delay of t0, the mobile node obtains a new IP address, the CoA.
3- Within a short delay of t1, the mobile node sends the registration request to the gateway foreign agent. In this process, the gateway is notified of the Mobile Node's CoA and updates its binding table.
4- A registration message is sent to the new foreign agent.
5- The new foreign agent forwards the registration request to its gateway, the gateway foreign agent.
6- The new foreign agent sends the binding update to the previous foreign agent.
7- The previous foreign agent receives the binding update, updates its binding cache, and forwards the packets in its buffer to the new foreign agent. These buffered packets are those that the mobile node has not yet received.
8- The gateway foreign agent processes the registration request, updates its routing table, and sends a reply to the mobile node via the new foreign agent. If the new foreign agent does not receive this reply from the gateway, the traffic being sent to it is considered unauthorized and the packets are discarded.
9- Upon receipt of the registration reply from the gateway, the new foreign agent transfers the forwarded packets to the mobile node.
Packet routing in mobile IP happens in one of the following ways:
1- Packets are directly forwarded to the mobile node via the previous foreign agent.
2- Packets are buffered in the previous foreign agent and then forwarded to the new foreign agent.
The following issues might arise: packets are discarded because they reach the new foreign agent before the


registration reply is received from the gateway; packets reach the new foreign agent safely because the registration reply has already been received and the connection has been established; or the buffer in the previous foreign agent overflows and packets get lost. The following assumptions are made in our network model:
1. We consider packet forwarding from the gateway foreign agent onward, because the handoff does not affect the route of the data stream until it reaches the gateway foreign agent.
2. The datagram packets received from users are multiplexed into one data stream according to the TCP/IP routing protocol. Each user's data is packetized and mixed with other users' packets in the data stream. Each packet carries the source and destination information in its header and can find its way to the destination with the help of routers.
3. Packet sizes can be up to 65,535 bytes, the maximum packet length, with a payload of up to 65,515 bytes. In practice, however, the maximum possible length is rarely used because most physical networks have their own length limitations; for instance, the payload is limited to 1500 bytes in Ethernet. In our network model, packets of 20,000 to 20,010 bits are created and forwarded under different maximum window sizes.
4. The propagation delay is assumed to be 5 ms per router, plus 5 ms of link delay for packets transferred from the previous foreign agent to the new foreign agent. With this assumption, a route between a correspondent node and the mobile node, with 2 routers on the path, takes 20 ms: 5 ms for each router and 10 ms for the 2 links between the previous and new foreign agents. The handoff time delay and round trip time have not been considered in the simulation.
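The delay arithmetic of assumption 4 can be reproduced directly; the function name is introduced here for illustration only.

```python
# Reproducing the propagation-delay arithmetic of assumption 4:
# 5 ms per router plus 5 ms per link between the agents.

ROUTER_DELAY_MS = 5
LINK_DELAY_MS = 5

def path_delay_ms(n_routers: int, n_links: int) -> int:
    return n_routers * ROUTER_DELAY_MS + n_links * LINK_DELAY_MS

# 2 routers and 2 links (previous FA -> new FA), as in the text:
print(path_delay_ms(2, 2))  # 20 ms
```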

6. Simulation
Critical packets are defined according to application objectives. Priority queues are proposed and used as a basis for improving performance by decreasing the number of critical packets lost. These priority queues are added to the BSs to re-order packets according to their criticality. Existing scheduling and dropping algorithms are added to the simulation, and a large range of buffer sizes is considered. Finally, the relationship between buffer size and critical packet loss under the different scheduling algorithms is evaluated, and the throughput is calculated in each case. The results with scheduling algorithms and priority queues are compared against the implementation without any scheduling method or priority queue. Throughout the simulation, we assume two cell networks side by side and a Mobile Node (MN) handing off from the first cell (A) to the second cell (B). Each cell contains a BS that serves as the mobile node's point of attachment to the network. Buffers of different sizes are created within the base stations of cells A and B, and it is assumed that the buffer contents are transferred from cell A to cell B at the exact time of handoff. If a buffer exists in the base station of cell A, packets en route to the mobile node in this zone are stored in the buffer and forwarded to cell B as soon as the MN hands off. If no buffer exists in cell A, all packets in transit to the MN in cell A are dropped.

The simulation steps are:
- Packet sizes are randomly generated in the range of 20,000 to 20,010 bits. Packets are randomly assigned priority and payload information. In priority-according-to-type, packets are assigned random types.
- Buffer sizes are randomly generated in the range 98,000 to 128,000 bits for a maximum window size of 128 KB, and 3,000 to 101,000 bits for a maximum window size of 16 KB.
- The simulation runs on 100 instances of different buffer sizes while the number of packets and their sizes are kept constant: the random generation of packet sizes and payloads is done outside the 100-iteration loop. Other conditions, such as congestion and bandwidth, are kept equal during the simulation.
- Packets are assumed to reach the BS's buffer randomly. BSs keep track of information such as traffic in the zone, number of users, routing information, and bandwidth.
- Different priority patterns are considered, and the corresponding scheduling algorithm is used to improve performance. For instance, in the Shortest Come First Out scheme, the shortest packets are assigned the highest priority and the SIFO algorithm is used to re-order packets. In priority-according-to-type, video packets are assigned the highest priority.
The scheduling algorithms implemented in this project are: First In First Out (FIFO), or Last Come First Dropped; Static Priority (SP) with random dropping; Shortest In First Out (SIFO); and Prioritizing Packets According to Type in SIFO. In FIFO, the buffer accepts packets up to its maximum size and overflows if this maximum is exceeded: when a packet arrives and the buffer is full, the packet is dropped, regardless of the performance degradation. In the SP algorithms, a multi-level queue buffer is created, and packets are re-ordered according to four priority levels in these queues. If the buffer is full, the queued packets with the lowest static priority are dropped. In this scheme, packets are assigned priorities 0-3 according to a 2-bit priority flag in the frame header.


Fig.2 MWS and Total Packet Loss


The SIFO and Prioritizing Packets According to Type algorithms have been implemented using priority queues. In these two schemes, a diverse mix of heterogeneous traffic with different resource requirements is created: different-sized packets in SCFO, and video, voice, and data packets in priority-according-to-type. Dynamic resource allocation to packets according to size or type is considered. However, it is necessary to assign a fixed threshold on the amount of high-priority packets allowed in the network; otherwise, the network would be flooded with highest-priority packets. In the simulation program, four distinct traffic categories are created: voice, video, data, and a reserved type for currently unspecified traffic. Establishing these connections obviously requires variable resource allocation: the bandwidth required by a video stream is much higher than that of a data stream, so more resources are needed to serve an incoming stream of video packets than of data packets. Critical packets in this scheme are those with the highest priority, such as the video packets in our simulation. In the SCF algorithm, upon arrival of a new packet, its size is compared with the other packets in the queue and its position is decided; if two packets of the same size arrive, first-come-first-served is applied between them. The drawback of using priority queues with SCF is the algorithm's cost, because the elements of the queue have to be re-ordered every time a new packet arrives. With n packets in the queue, upon arrival of n new packets, in the worst case n(n+1) orderings must be re-evaluated, giving the algorithm O(n²) running time.
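The multi-level queue with drop-lowest-priority-on-overflow can be sketched as below. The 0-3 priority levels follow the text; the class name, capacity, and displacement rule details are illustrative assumptions, not the paper's exact implementation.

```python
class MultiLevelBuffer:
    """Four priority levels (0 = lowest, 3 = highest), matching the
    2-bit priority flag in the text. When the buffer is full, an
    arrival displaces a packet of strictly lower priority, or is
    itself dropped if no such packet exists (an assumed tie rule)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.levels = [[] for _ in range(4)]  # one FIFO queue per level

    def size(self) -> int:
        return sum(len(q) for q in self.levels)

    def enqueue(self, packet, priority: int) -> bool:
        if self.size() >= self.capacity:
            # index of the lowest-priority level holding a packet
            victim = next(i for i, q in enumerate(self.levels) if q)
            if victim >= priority:
                return False  # arrival is no more important: drop it
            self.levels[victim].pop(0)  # displace a lower-priority one
        self.levels[priority].append(packet)
        return True

buf = MultiLevelBuffer(capacity=3)
for pkt, pri in [("a", 1), ("b", 3), ("c", 1), ("d", 2)]:
    buf.enqueue(pkt, pri)
print(buf.levels)  # the oldest priority-1 packet was displaced by "d"
```

For SCF, keeping the queue ordered by size via binary-search insertion (e.g. Python's `bisect.insort`) would cost O(log n) to locate plus O(n) to shift per arrival, consistent with the O(n²) worst case over n arrivals noted above.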

Table 2: Throughput

Maximum window size | Throughput (15,893 B buffer) | Throughput (3,850 B buffer)
16 KB  | 88.88% | 22.46%
64 KB  | 25.00% | 9.23%
128 KB | 12.64% | 4.59%
Fig.3 Throughput Comparison of 3 Window Sizes

Fig.4 plots the total and critical packet loss with no scheduling algorithm, in a protocol with a maximum window size of 128 KB. Fig.5 shows the critical and total packet loss after the random priority algorithm has been applied to the data stream. The improvement in critical packet loss is obvious: in this case, critical packet loss reaches total elimination much faster than without any scheduling scheme.
7. Results and Discussion


Fig.2 shows the relationship between Maximum Window Size (MWS) and total packet loss, for buffer sizes ranging from 3,000 to 15,200 bytes. As the graph shows, there is a direct relationship between maximum window size and both total and critical packet loss: the larger the maximum window size, the higher the packet loss during handoff. The same relationship between maximum window size and total and critical packet loss holds for all four scheduling algorithms considered here. Table 2 and Fig.3 present the throughput calculated for each of the three window sizes.
Fig.4 Critical / Total Packet Loss, mws 128 KB, No Scheduling (52 packets total)



Fig.5 Critical / Total Packet Loss, mws 128 KB, Random Priority (52 packets total)

Fig.6 compares the total and critical packet loss for all four scheduling algorithms over different buffer sizes. It is obvious from the plots that random priority using a multi-level queue provides the best performance, achieving total elimination of packet loss with far fewer buffer resources. The SCFO algorithm comes second. Better performance was also expected from the priority-according-to-type algorithm; a closer look at the spread of traffic types shows that critical packets are less widely distributed in this particular data stream, so they benefit less from the scheduling algorithm. In the same data stream, it is also clear that the total packet loss is almost equal for all four categories. There is a valid explanation for this as well: scheduling is proposed as a solution for decreasing critical packet loss by re-ordering packets in the priority queues, whereas total packet loss is largely a function of buffer size, and therefore benefits less from scheduling. Table 3 presents the overall result of applying the scheduling algorithms to the data stream and using different receive window sizes to eliminate packet loss.

References
[1] A. Leon-Garcia and I. Widjaja, Communication Networks: Fundamental Concepts and Key Architectures, McGraw-Hill, pp. 813-817.
[2] J. M. Peha and F. A. Tobagi, "Cost-Based Scheduling and Dropping Algorithms to Support Integrated Services," IEEE Transactions on Communications, vol. 44, no. 2, pp. 192-202, Feb. 1996.
[3] S. S. Panwar, D. Towsley, and J. K. Wolf, "Optimal Scheduling Policies for a Class of Queues with Customer Deadlines to the Beginning of Service," Journal of the ACM, vol. 35, no. 4, pp. 832-844, Oct. 1988.
[4] D. Ferrari and D. C. Verma, "A Scheme for Real-Time Channel Establishment in Wide-Area Networks," IEEE Journal on Selected Areas in Communications, vol. 8, no. 3, pp. 368-379, Apr. 1990.
[5] J. Hyman, A. A. Lazar, and G. Pacifici, "MARS: The MAGNET II Real-Time Scheduling Algorithm," in Proc. ACM SIGCOMM, Sept. 1991, pp. 285-293.
[6] R. Caceres and L. Iftode, "Improving the Performance of Reliable Transport Protocols in Mobile Computing Environments," IEEE Journal on Selected Areas in Communications, vol. 13, no. 5, pp. 100-109, 1995.
[7] C. E. Perkins, "IP Mobility Support for IPv4," revised Internet draft, draft-ietf-mobileip-rfc2002-bis-03.txt, 2001.
[8] D. S. Eom, H. Lee, M. Sugano, M. Murata, and H. Miyahara, "Improving TCP Handoff Performance in Mobile IP Based Networks," Computer Communications, vol. 25, pp. 635-646, 2002. Online, available: Elsevier Science Direct, last visited: Feb. 2004.
[9] F. L. Severance, System Modeling and Simulation, Wiley, 2001.
[10] Proc. 4th ACM International Workshop on Modeling, Analysis, and Simulation of Wireless and Mobile Systems (MSWiM 2001), ACM, 2001.
[11] B. A. Forouzan, Data Communications and Networking, McGraw-Hill, 2001, pp. 705-709.
[12] Y. Bejerano, I. Cidon, and J. Naor, "Efficient Handoff Rerouting Algorithms: A Competitive On-Line Algorithmic Approach," IEEE/ACM Transactions on Networking, vol. 10, no. 6, Dec. 2002.
[13] Y. Pan, M. Lee, J. B. Kim, and T. Suda, "An End-to-End Multi-Path Smooth Handoff Scheme for Stream Media," ACM 1-58113-768-0/03/0009, Sept. 2003.
[14] V. Veeravalli and O. Kelly, "A Locally Optimal Handoff Algorithm for Cellular Communications," IEEE Transactions on Vehicular Technology, vol. 46, no. 3, Aug. 1997.

7. Conclusion
This paper explores performance improvement of the handoff process with the aim of decreasing or eliminating critical packet loss in a mobile environment. The combination of efficient scheduling algorithms, optimum window size and priority schemes proposed here decreases critical packet loss and improves QoS. The simulation results show that a larger receive window size causes a higher rate of packet loss during handoff. The simulation also shows that the shortest-come-first-out (SCFO) algorithm, implemented using priority queues, provides better performance by decreasing critical packet loss. However, the random priority scheme and the priority-according-to-type scheme have shown better results when critical packets dominate the received data stream. Overall, re-ordering packets according to size, type and cost using a priority queue has been shown to substantially decrease or eliminate critical packet loss.
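The priority-queue reordering summarized above can be illustrated with a small sketch. This is an assumption-laden illustration, not the paper's actual implementation: the class name, the three priority levels, and the drop-on-overflow policy are all hypothetical, but the ordering rule (higher-priority level first, then shortest packet first) follows the SCFO-with-priority-levels idea described in the text.

```python
import heapq
from itertools import count

# Hypothetical priority levels; lower value = served first.
CRITICAL, NORMAL, BACKGROUND = 0, 1, 2

class HandoffBuffer:
    """Multi-level priority queue sketch: packets buffered at the base
    station during handoff are drained highest-priority-first and, within
    a priority level, shortest-first (SCFO), so that critical packets are
    forwarded before the buffer overflows."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.heap = []        # entries: (priority, size, seq, payload)
        self.seq = count()    # arrival order breaks remaining ties
        self.dropped = 0

    def enqueue(self, priority, payload):
        size = len(payload)
        if self.used + size > self.capacity:
            self.dropped += 1  # buffer full: packet is lost
            return False
        heapq.heappush(self.heap, (priority, size, next(self.seq), payload))
        self.used += size
        return True

    def dequeue(self):
        # Forward the most urgent, then shortest, buffered packet.
        priority, size, _, payload = heapq.heappop(self.heap)
        self.used -= size
        return priority, payload
```

With this ordering, a critical packet arriving after several normal packets is still forwarded first, and among critical packets the shortest leaves the buffer earliest, which is the combined size/type/cost reordering the conclusion refers to.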
[Fig. 6: Critical packet loss vs. buffer size (0-140,000 bytes) for all four scheduling algorithms (Type Pri, SCFO, No Sched, Random Pri); mws 128 KB, total of 52 packets.]

Using a multi-level priority queue, as implemented in the random priority scheme, greatly decreases critical packet loss and can be regarded as a technique for overcoming the current inefficiencies of the handoff process in mobile networks. It has also been shown that a smaller receive window size markedly increases throughput, implying the use of fewer buffer resources and an improved effective cost.

Scheduling Scheme                     Buffer Req. (mws 16 KB)   Buffer Req. (mws 128 KB)
Multi-Level Queue, Random Priority    2,500 bytes               50,000 bytes
Shortest Come First Out               10,000 bytes              100,000 bytes
No Scheduling                         >12,000 bytes             110,000 bytes
Priority According to Type            >12,000 bytes             125,000 bytes

Table 3: Critical Packet Loss Elimination Resource Requirements

Proceedings of the 3rd Annual Communication Networks and Services Research Conference (CNSR'05), 0-7695-2333-1/05 $20.00 © 2005 IEEE
