
Hybrid Packet/Fluid Flow Network Simulation

Cameron Kiddle, Rob Simmonds, Carey Williamson, and Brian Unger
{kiddlec,simmonds,carey,unger}@cpsc.ucalgary.ca
Department of Computer Science, University of Calgary, Calgary, Alberta, Canada

Abstract
Packet-level discrete-event network simulators use an event to model the movement of each packet in the network. This results in accurate models, but requires that many events are executed to simulate large, high bandwidth networks. Fluid-based network simulators abstract the model to consider only changes in the rates of traffic flows. This can result in large performance advantages, though information about the individual packets is lost, making this approach inappropriate for many simulation and emulation studies. This paper presents a hybrid model in which packet flows and fluid flows coexist and interact. This enables studies to be performed with background traffic modeled using fluid flows and foreground traffic modeled at the packet level. Results presented show up to 20 times speedup using this technique. Accuracy is within 4% for latency and 15% for jitter in many cases.

Keywords: Network Simulation, Simulation Abstraction Techniques, Fluid Simulation, Scalable Network Simulation

1. Introduction
Discrete-event network simulators often model traffic at the packet level, with an event being used to represent packet arrivals at or departures from network devices or buffers. This can lead to accurate models. However, when simulating large networks and high bandwidth links, the computational cost of processing the resulting huge number of events representing the traffic as packet flows can be prohibitive. When simulators are used within real-time network emulation environments, this cost severely restricts the size and type of network that can be modeled. Parallel discrete-event simulation (PDES) techniques can increase model scalability, i.e., the size of network and the traffic densities that can be executed in real time. Modeling larger bandwidth links is less amenable to parallelization techniques due to the sequential nature of each packet flow at each network port.

Therefore, model abstraction techniques are required to simulate large traffic flows. Fluid-based modeling can be used to simplify traffic flows in a network simulation [2, 3, 4, 8, 9]. With a fluid model, events are only generated when the rate of a flow changes. If the flows change rate infrequently, large performance gains can be achieved using this technique. Since model detail is reduced, the level of accuracy of the simulation results obtained using this abstraction technique will not be as high as when packet-level simulation is used. As with all abstraction techniques, the appropriateness of the method depends on the simulation requirements.

One problem with fluid models is that information about individual packets is lost. Therefore, they cannot be used for simulations studying subtle protocol dynamics on individual flows. They also cannot be used for simulators that act as components of network emulation systems that interact with real applications running on real networks. These real applications communicate using individual packets, so a simulator interacting with them must handle individual packets.

One approach to maintaining packet information while reducing the overall traffic modeling cost is to use hybrid simulators that handle both packet and fluid flows [5, 11, 13]. Traffic flows that must carry the full packet information are modeled using an event for each packet arrival or departure, while background flows, for which less detailed information is required, are modeled using fluid flows. A challenge faced by these systems is accurately modeling the interactions between packet flows and fluid flows.

This paper describes the design of a hybrid model that has been implemented within a parallel IP packet-level network simulator called the Internet Protocol Traffic and Network (IP-TN) simulator [12]. This simulator forms the basis of the IP-TNE network emulation system, making it essential that the abstraction techniques employed do not prohibit the modeling of individual packets. Results are presented showing the performance and accuracy achieved.

The rest of the paper is laid out as follows. Section 2 describes related work in the area. Section 3 describes the design of the hybrid model. Section 4 presents the experimental methodology, and Sections 5 and 6 present the results. Conclusions and future work are presented in Section 7.

2. Related Work
The accuracy and/or performance of fluid-based techniques for network simulation have been examined in [2, 3, 4, 8, 9]. Reasonable accuracy has been achieved along with considerable performance improvement under certain circumstances. Compared to packet-level simulation, the largest performance gains are achieved with small networks and cases where the number of packets represented is much larger than the number of rate changes. For larger networks, a property described as the ripple effect can reduce the performance advantage of the fluid-based simulator. The ripple effect describes the situation where the propagation of rate changes leads to rate changes in other flows, which then need to be propagated. On/Off sources are commonly used as traffic models in the fluid simulation literature. The use of these models makes it easy to study accuracy and performance issues. Some work has involved the use of fluid-based TCP models [6, 7].

The hybrid technique in which packet flows and fluid flows are integrated is a recent development. An extension adding fluid models to the QualNet simulator is presented in [13]. QualNet is the commercial version of the Global Mobile Information System Simulator (GloMoSim) [14]. The simulated system is divided into components that model at the packet level and components that use analytical fluid-based models. The fluid-based components calculate delays and loss of traversing packet flows and pass this information to the destination packet modeling components.

The Hybrid Discrete-Continuous Flow Network Simulator (HDCF-NS) [5] also enables packet flows and fluid flows to coexist. The manner in which fluid flows are modeled is described in detail, but little information is given on how the packet and fluid flows interact.

A third hybrid approach is described in [11]. This utilizes two simulators that interact via the Georgia Tech Runtime Infrastructure Kit (RTIKIT) [1]. The HDCF-NS simulator is used to simulate the background fluid flows and the parallel and distributed ns (pdns) simulator [10] is used to simulate the packet flows. Messages are sent to pdns from HDCF-NS via RTIKIT whenever fluid rate changes occur. Packet loss and queuing delay in pdns are based on both packet and fluid levels. The system currently does not allow packet flows to affect fluid flows.

3. Hybrid Implementation

The hybrid scheme presented in this paper is similar to the one used in HDCF-NS since it combines packet and fluid discrete-event models in a single simulator. However, little information is given in [5] on the hybrid implementation details of HDCF-NS. QualNet differs from our approach in that different sections of the network support either fluid or packet flows but not both; also, the fluid sections use an analytical model instead of a discrete-event model. The approach using both HDCF-NS and pdns differs from the approach taken here in that multiple simulators are used and packet flows cannot affect background fluid flows.

The approach taken in this paper adds fluid modeling to the IP-TN simulator by adding a new type of output buffer at network links. This hybrid buffer must be able to process packets and fluid advertisements. A fluid advertisement specifies a rate change for a particular fluid flow. It is possible for a packet and multiple advertisements to be sent at the same time; in this case they are represented by a single event. The IP routing lookup was modified to demultiplex packets and advertisements that arrive together.

The hybrid buffer models a FIFO queue that operates in one of three modes: packet mode, fluid mode or hybrid mode. Initially, the mode of the buffer is undefined. If a buffer first receives a packet, it enters packet mode and remains in packet mode as long as it only processes packets. If a buffer first receives a fluid advertisement, it enters fluid mode and remains in fluid mode as long as it only processes fluid advertisements. A buffer switches to hybrid mode if it is in packet mode and receives a fluid advertisement, or if it is in fluid mode and receives a packet. Currently, once a buffer switches into hybrid mode, it remains in hybrid mode for the remainder of the simulation. The hybrid buffer modes are described in the following sections.
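The mode selection just described amounts to a small state machine. The following C++ fragment is a minimal sketch of that logic only; the class and member names are hypothetical and are not taken from the IP-TN source.

    #include <cstdint>

    // Hypothetical sketch of the hybrid buffer's mode selection.
    enum class BufferMode : std::uint8_t { Undefined, Packet, Fluid, Hybrid };

    class HybridBufferModes {
    public:
        // Called when a packet arrives at the output buffer.
        void onPacket() {
            if (mode_ == BufferMode::Undefined)   mode_ = BufferMode::Packet;
            else if (mode_ == BufferMode::Fluid)  mode_ = BufferMode::Hybrid;  // fluid buffer sees a packet
            // Packet mode is unchanged; hybrid mode is permanent for the rest of the run.
        }

        // Called when a fluid rate advertisement arrives.
        void onFluidAdvertisement() {
            if (mode_ == BufferMode::Undefined)   mode_ = BufferMode::Fluid;
            else if (mode_ == BufferMode::Packet) mode_ = BufferMode::Hybrid;  // packet buffer sees fluid traffic
        }

        BufferMode mode() const { return mode_; }

    private:
        BufferMode mode_ = BufferMode::Undefined;
    };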

3.1. Packet Mode


Operation of the hybrid buffer in packet mode is depicted in Figure 1, where x(t) is the buffer usage at time t and xC is the maximum buffer capacity; the buffer service rate is the bandwidth of the outgoing link.

Figure 1. Hybrid buffer in packet mode.

If there is insufficient space in the buffer, the packet is dropped when it arrives. Otherwise, the packet is added to the buffer and the arrival time at the next node is calculated based on queuing delay, transmission delay and propagation delay. A new event to represent the packet is generated and dispatched to the next node in the network.
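As a minimal sketch of this packet-mode behaviour (drop-tail admission plus queuing, transmission and propagation delay), the following hypothetical C++ fragment illustrates the arrival-time calculation; the names and the simplified backlog handling are assumptions, not IP-TN code.

    // Drop-tail FIFO served at the bandwidth of the outgoing link. The draining
    // of the backlog on packet departures is omitted for brevity.
    struct PacketModeBuffer {
        double capacityBytes;        // xC, maximum buffer capacity
        double serviceRateBps;       // bandwidth of the outgoing link (bits/s)
        double propagationDelaySec;  // propagation delay of the outgoing link
        double backlogBytes = 0.0;   // x(t), current buffer usage

        // Returns the arrival time at the next node, or a negative value if the
        // packet is dropped because there is insufficient space in the buffer.
        double onPacket(double now, double packetBytes) {
            if (backlogBytes + packetBytes > capacityBytes)
                return -1.0;                                        // drop on arrival
            const double queuingDelay = backlogBytes * 8.0 / serviceRateBps;
            const double txDelay      = packetBytes  * 8.0 / serviceRateBps;
            backlogBytes += packetBytes;
            return now + queuingDelay + txDelay + propagationDelaySec;
        }
    };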


3.2. Fluid Mode


In fluid mode, state is maintained for each fluid flow i indicating its advertised incoming rate λ_i(t), its advertised outgoing rate r_i(t), and its loss rate l_i(t) at time t. Other state maintained by the buffer includes the aggregate input rate λ(t), the aggregate output rate r(t), the aggregate loss rate l(t), the buffer usage x(t), the buffer capacity xC, and the buffer service rate C, which is the bandwidth of the outgoing link. Operation of the buffer in fluid mode is depicted in Figure 2.

Figure 2. Hybrid buffer in fluid mode.

When a fluid advertisement (or set of advertisements) arrives at time t, the buffer usage, packet loss, the new aggregate input rate and the output flow rates are calculated. The buffer usage is calculated as x(t) = x(t0) + (λ(t0) - C)(t - t0), where t0 is the time at which the last fluid advertisements arrived. If the calculated value of x(t) is less than 0, then x(t) is set to 0. If the calculated value of x(t) exceeds xC, then x(t) is set to xC. Loss rates and output rates of the individual flows at time t are calculated after x(t) has been determined, as follows:

1. (Underload/Empty) if λ(t) <= C and x(t) = 0, then r_i(t) = λ_i(t) and l_i(t) = 0.

2. (Underload/Draining) if λ(t) <= C and x(t) > 0, then r_i(t) = (λ_i(t)/λ(t)) C and l_i(t) = 0. (Note that an event must be generated for the predicted time that the buffer will empty, so output rates can be re-advertised.)

3. (Overload/Filling) if λ(t) > C and x(t) < xC, then r_i(t) = (λ_i(t)/λ(t)) C and l_i(t) = 0. (Note that an event does not need to be generated for the predicted time that the buffer will become full, as the output rates of the flows will remain the same. Packet loss can be determined upon arrival of the next set of fluid advertisements.)

4. (Overload/Loss) if λ(t) > C and x(t) = xC, then r_i(t) = (λ_i(t)/λ(t)) C and l_i(t) = (λ_i(t)/λ(t)) (λ(t) - C).

For each flow in which the output rate changed, a fluid advertisement is created. The next hop arrival time of the fluid advertisements is calculated based on queuing delay and the propagation delay of the link. An event is then generated to signal the arrival of the fluid advertisements at the next hop at the calculated time. Note that transmission delay is not included in the calculation of the arrival time. Characterizing the transmission delay is difficult since different flows may have different packet sizes. Propagation delay is usually significantly larger than transmission delay, so this should not introduce much inaccuracy.
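The following C++ fragment is a minimal sketch of the buffer-usage update and the four cases as reconstructed above. The class, the container choices and the proportional-sharing form of the per-flow output and loss rates are assumptions made for illustration, not the IP-TN implementation.

    #include <algorithm>
    #include <unordered_map>

    struct FluidBuffer {
        double xC;                            // buffer capacity (bits)
        double C;                             // service rate = outgoing link bandwidth (bits/s)
        double x  = 0.0;                      // buffer usage at the last update
        double t0 = 0.0;                      // time of the last advertisement update
        std::unordered_map<int, double> in;   // per-flow advertised input rate (bits/s)
        std::unordered_map<int, double> out;  // per-flow advertised output rate
        std::unordered_map<int, double> loss; // per-flow loss rate

        double aggregateInput() const {
            double sum = 0.0;
            for (const auto& kv : in) sum += kv.second;
            return sum;
        }

        // Apply a set of advertisements arriving at time t and recompute rates.
        void onAdvertisements(double t, const std::unordered_map<int, double>& rates) {
            // Evolve buffer usage since the last update, clamped to [0, xC].
            x = std::clamp(x + (aggregateInput() - C) * (t - t0), 0.0, xC);
            t0 = t;
            for (const auto& kv : rates) in[kv.first] = kv.second;

            const double lambda = aggregateInput();
            for (const auto& kv : in) {
                const int    flow  = kv.first;
                const double li    = kv.second;
                const double share = (lambda > 0.0) ? li / lambda : 0.0;
                if (lambda <= C && x == 0.0) {         // 1. Underload/Empty
                    out[flow] = li;        loss[flow] = 0.0;
                } else if (lambda <= C && x > 0.0) {   // 2. Underload/Draining
                    out[flow] = share * C; loss[flow] = 0.0;
                    // an event for the predicted buffer-empty time would be scheduled here
                } else if (x < xC) {                   // 3. Overload/Filling (lambda > C)
                    out[flow] = share * C; loss[flow] = 0.0;
                } else {                               // 4. Overload/Loss (lambda > C, x == xC)
                    out[flow] = share * C; loss[flow] = share * (lambda - C);
                }
            }
        }
    };

In this sketch, a fluid advertisement would be emitted for each flow whose entry in out changed, as described above.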

3.3. Hybrid Mode


In hybrid mode, a single buffer that operates in a similar way to the buffer in fluid mode is used. As both packet flows and fluid flows are handled, it is important that packet flow levels affect fluid flow levels and vice versa. This is achieved by estimating the aggregate input rate of the packet flows, λ_p(t), and adding it to the aggregate input rate of the fluid flows, λ_f(t), to obtain the overall aggregate input rate λ(t). An output rate is not advertised for the packet flows, so there is only an aggregate output rate of fluid flows, r_f(t). The operation of the buffer in hybrid mode can be seen in Figure 3.

Figure 3. Hybrid buffer in hybrid mode.

The process and calculations involved in handling fluid advertisements are the same as in fluid mode, except that the current value of λ_p(t) is included as part of λ(t). Fluid advertisements are sent for all fluid flows with rate changes.

Each time a packet arrives, a new estimate of λ_p(t) is calculated. The estimate is based on a sliding window over a fixed number of the most recently arrived packets: the sum of their sizes is divided by the time interval in which they arrived. Other rate estimation algorithms, based on exponentially weighted moving averages and time windows, were explored but were found to be too sensitive to parameter settings. Once the new estimate has been calculated, it is compared with the current λ_p(t). If the difference between the new estimate and the current rate is greater than a threshold percentage, the current rate is set to the new estimate. Modifying λ_p(t) each time a packet arrives could lead to a greater impact from the ripple effect in the fluid flows; the threshold value allows some control in balancing accuracy and performance. Each time λ_p(t) is modified, a timeout event is generated. This allows λ_p(t) to be reset to zero during a period when no packets are arriving.


The timeout is set to occur at the time by which the window's worth of packets would be expected to arrive at the current rate λ_p(t). When the timeout occurs, λ_p(t) is set to zero if no packets have arrived since the timeout was generated. Otherwise, a new timeout is generated in the same manner as before. The next time a packet arrives, the packets already in the window are still used in the estimation.

For each packet arrival, buffer usage is calculated as in fluid mode. Loss in fluid flows, the aggregate input rate and new output rates of fluid flows are calculated if λ_p(t) was modified, or if the packet arrived with fluid advertisements. If the buffer is full, the packet is lost with a probability equal to the proportion in which the fluid flows lose packets. If the packet is not lost, it is sent along with any fluid advertisements that may have been created due to a change in λ_p(t). The arrival time at the next network node is calculated based on the queuing delay and propagation delay, as is done in fluid mode. Since fluid modeling principles are used, the transmission delay of a packet is ignored. Also, if two packets were to arrive at the buffer at the same time, they would both arrive at the next network node at the same time.

To account for transmission delays and queuing among packet flows, an alternative solution with two buffers could be used. One buffer could handle packets and the other buffer could handle fluid flows. Effects of fluid flows on packet flows could be modeled by adjusting the buffer usage of the packet buffer based on the amount of data arriving from fluid flows. An estimate of the aggregate input rate of packet flows could be used to affect the output rates and loss of the fluid flows. This alternative could potentially lead to more accurate results, but would be difficult to implement efficiently. Buffer usage in the two buffers could differ, which could result in events that signal arrivals of packets and fluid advertisements being sent out of arrival time order. IP-TN uses conservative parallel discrete-event simulation techniques, which require that events sent between nodes are received in arrival time order. To achieve this, the alternative solution would require two events, instead of just one, to be generated each time a packet or set of fluid advertisements is to be sent. One event would signal the actual sending of the packet or set of fluid advertisements; the other would signal its arrival at the next hop. The send events would be processed in send time order, ensuring that arrival events are received in arrival time order.
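A minimal C++ sketch of the sliding-window rate estimation, threshold test and timeout reset described in this section is given below; the class name, parameter names and return conventions are hypothetical.

    #include <cmath>
    #include <cstddef>
    #include <deque>

    class PacketRateEstimator {
    public:
        PacketRateEstimator(std::size_t windowPackets, double thresholdFraction)
            : window_(windowPackets), threshold_(thresholdFraction) {}

        // Record a packet arrival; returns true if the advertised rate lambda_p changed.
        bool onPacket(double now, double sizeBits) {
            samples_.push_back(Sample{now, sizeBits});
            if (samples_.size() > window_) samples_.pop_front();
            if (samples_.size() < 2) return false;
            const double interval = samples_.back().time - samples_.front().time;
            if (interval <= 0.0) return false;
            double bits = 0.0;
            for (const Sample& s : samples_) bits += s.sizeBits;
            const double estimate = bits / interval;
            // Re-advertise only if the change exceeds the threshold percentage,
            // which limits the ripple effect in the fluid flows.
            if (rate_ == 0.0 || std::fabs(estimate - rate_) > threshold_ * rate_) {
                rate_ = estimate;
                return true;   // caller schedules a timeout for the expected window duration
            }
            return false;
        }

        // Called when the timeout fires: reset lambda_p to zero if no packet has
        // arrived since the timeout was generated; otherwise the caller re-arms it.
        void onTimeout(double lastArrivalTime, double timeoutGeneratedAt) {
            if (lastArrivalTime <= timeoutGeneratedAt) rate_ = 0.0;
        }

        double rate() const { return rate_; }   // current estimate of lambda_p(t)

    private:
        struct Sample { double time; double sizeBits; };
        std::deque<Sample> samples_;
        std::size_t window_;
        double threshold_;
        double rate_ = 0.0;
    };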

4. Experimental Methodology

This section describes the methodology for the simulation experiments, including the experimental environment, network model, traffic source models, experimental design, and performance metrics.

4.1. Experimental Environment


Simulation experiments were conducted on an 8-processor Compaq ProLiant server with 700 MHz Intel PIII Xeon processors and 4 GB RAM. The ProLiant was running RedHat Linux 7.3 with the v2.4.18 kernel. The GNU g++ v2.96 compiler was used with the -O2 optimization flag. Tests were run using sequential execution on one processor to examine the effects of just the hybrid model, without interference from the effects of parallelism. In runs comparing packet and hybrid simulations, identical random number seeds were used.

4.2. Network Model


The network model used for the simulation experiments is shown in Figure 4. This model is a variation of the classic tandem network model, but with bi-directional links. With this model, we can test the performance and the accuracy of our hybrid simulation methodology as the size of the network and the number of traffic flows are varied.

Figure 4. Simulated network model.

Three main parameters characterize the network model: K, L, and N.


K is the number of foreground flows in the network. These flows traverse the entire backbone of the network, from end to end. L is the number of background flows on each link in the network backbone. One of the background flows traverses the entire network; the other background flows each traverse only a single link in the network backbone. The total number of traffic flows on any given backbone link is K + L. These flows compete for resources in the network (i.e., for network bandwidth, and for the buffers at the router output queues). N is the number of links (router hops) in the backbone of the network. In particular, N is the number of queues at which flows interact and compete for resources.

The details of the network topology model are as follows. Links from the traffic sources to the routers have a 1 millisecond propagation delay and 10 Mbps transmission capacity, as do the links from the routers to the traffic sinks.

Each link in the network backbone has a 5 millisecond propagation delay and a transmission capacity that is set to achieve different target levels of offered network load in the experiments. Output buffers on the routers have a maximum size that corresponds to 20 milliseconds of queuing delay at the capacity of the outgoing link.

In our experiments, the foreground flows are always modeled as packet flows. The background flows are modeled either as fluid flows or as packet flows, depending on the simulation experiment.
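As an illustrative worked example of the buffer sizing rule (the 20 Mbps capacity here is hypothetical, chosen only for the arithmetic): a buffer sized for 20 milliseconds of queuing delay on a 20 Mbps link holds 0.020 s x 20 Mbps = 400 kilobits, i.e., 50 kilobytes, or roughly 86 packets of 576 bytes.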

Table 1. Experimental factors and levels for simulation experiments.

Factor                   Levels
Foreground Flows (K)     1, 2, 4
Background Flows (L)     2, 4, 8, 16, 32
Router Hops (N)          1, 2, 4, 8, 16
Traffic Flow Type        Open-loop, Closed-loop
Background Flow Model    Packet, Fluid
Network Load             Light (25%), Medium (70%), Heavy (100%)

4.3. Traffic Source Models


Two different types of traffic source models are used in the experiments: open-loop and closed-loop. An open-loop model generates traffic according to its statistical parameters, independent of the state of the network; that is, there is no feedback control in the model. All traffic moves unidirectionally from source to sink in our network model. A closed-loop model has a built-in feedback loop for traffic control. Data packets flow from source to sink in our network model, while acknowledgment packets traverse the reverse route.

The open-loop source model used is an Exponential On/Off traffic source. In the On state, the source generates traffic at a specified peak rate of 5 Mbps. In the Off state, no traffic is generated. The sojourn times in each state are drawn independently from an exponential distribution with a specified mean. In our model, each source spends (on average) 50% of the time in each state, with a resulting mean rate of 2.5 Mbps. During each On period, an average of 100 packets of size 576 bytes are generated. Both packet-based and fluid-based versions of this model exist in IP-TN.

The closed-loop source model used is a simulated version of TCP Reno. This protocol model includes TCP's three-way handshake, sequence numbers, acknowledgments, sliding window flow control, slow-start, congestion avoidance, timeouts, and retransmissions. In particular, each TCP source transmits packets according to TCP's flow control and congestion control algorithms. Only a packet-based version of this model exists in IP-TN at this time.
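For illustration, the Exponential On/Off source parameters above can be expressed as a small C++ sketch; the structure and names are hypothetical, and only the drawing of On/Off sojourn times is shown.

    #include <random>
    #include <utility>

    // 5 Mbps peak rate, equal mean On and Off times (50% duty cycle, 2.5 Mbps mean),
    // and an average of 100 packets of 576 bytes per On period.
    struct ExpOnOffSource {
        double peakRateBps = 5e6;
        double meanOnSec   = 100.0 * 576.0 * 8.0 / 5e6;  // ~0.092 s to emit 100 packets at 5 Mbps
        double meanOffSec  = meanOnSec;                  // equal sojourn times give 50% of time in each state
        std::mt19937 rng{12345};
        std::exponential_distribution<double> onDist{1.0 / meanOnSec};
        std::exponential_distribution<double> offDist{1.0 / meanOffSec};

        // Draw the durations of the next On and Off periods (seconds).
        std::pair<double, double> nextCycle() {
            return {onDist(rng), offDist(rng)};
        }
    };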


4.4. Experimental Design


Table 1 summarizes the experimental factors and levels used in the experiments. A multi-factor experimental design is used. For space reasons, only a subset of these experiments appear in the paper. Fixed values were used for the parameters of the packet rate estimation algorithm (the sliding-window size and the rate-change threshold).

4.5. Performance Metrics

The performance metrics fall into two main categories: metrics for simulation execution-time performance, and metrics for quantifying the results of the network simulation. The latter network-centric metrics are used to assess the accuracy of the hybrid simulation results compared to the packet simulation results.

The metric used for simulation execution-time performance is relative speedup. This is defined as the ratio of the execution time for the packet simulation to the execution time for the hybrid simulation. Higher values of this metric indicate performance advantages for the hybrid simulation.

The metrics used for assessing network-level performance include the mean end-to-end packet transfer latency and the jitter (e.g., standard deviation) of the end-to-end packet transfer latency. For simplicity, these performance metrics are calculated for only one of the foreground traffic flows in the network, called the primary flow. The results then focus on the relative error in these metrics for the primary flow in the hybrid simulation. That is, we express the latency results from the hybrid simulation as a percentage difference from the latency results for the packet simulation. A similar calculation applies for the jitter metric.

The experiments with the closed-loop TCP traffic model use one additional metric, namely the TCP transfer duration. This metric represents the elapsed time between receiving the first byte and the last byte of a TCP data transfer. It is used to assess the cumulative modeling error over the duration of a multi-packet TCP transfer.
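The metrics above reduce to simple ratios; the following hypothetical C++ helpers illustrate how relative speedup, relative percent error and jitter (as a standard deviation) might be computed from measured values.

    #include <cmath>
    #include <numeric>
    #include <vector>

    // Ratio of packet-simulation runtime to hybrid-simulation runtime; >1 favours the hybrid.
    double relativeSpeedup(double packetRuntimeSec, double hybridRuntimeSec) {
        return packetRuntimeSec / hybridRuntimeSec;
    }

    // Relative percent error of a hybrid result against the packet result,
    // applicable to latency, jitter, or TCP transfer duration.
    double relativePercentError(double hybridValue, double packetValue) {
        return 100.0 * (hybridValue - packetValue) / packetValue;
    }

    // Jitter taken as the standard deviation of per-packet latencies (assumed non-empty).
    double jitter(const std::vector<double>& latencies) {
        const double mean = std::accumulate(latencies.begin(), latencies.end(), 0.0)
                            / latencies.size();
        double var = 0.0;
        for (double l : latencies) var += (l - mean) * (l - mean);
        return std::sqrt(var / latencies.size());
    }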

5. Results for Open-Loop Traffic


The first set of simulation experiments studies the performance and the accuracy of the hybrid implementation using open-loop traffic. Identical and independent Exponential On/Off sources, as described in Section 4.3, are used. The sources generate unidirectional traffic to the sinks. In these experiments, the number of router hops in the network is varied from 1 to 16. The number of foreground flows is varied from 1 to 4, while the number of background flows is varied from 2 to 32.


Figure 5. Plots of relative speedup of hybrid implementation vs N for (a) Medium load with K=1 and (b) Heavy load with K=1.

Each simulation configuration was run 10 times using different random number seeds, with the average values for the performance metrics calculated from the 10 runs. Each run simulated 600 seconds.

Three different levels of network load are studied: Light, Medium, and Heavy. These scenarios correspond to no packet loss, low packet loss, and high packet loss, respectively. For Light load, the backbone link capacities are set to double the capacity required to handle all flows being simultaneously in the On state. The average offered network load is 25%, and queuing within the network is negligible. (For space reasons, these results are omitted from the paper.) For Medium load, the backbone link capacities are set so that the average offered network load is about 70%, though the peak load when many flows are active can clearly exceed the network capacity. These transient overloads induce queuing delays at the points of congestion, and occasional losses of packets. For Heavy load, the backbone link capacities provide exactly enough capacity to handle the long-term average load from the On/Off sources. Significant queuing delays and packet losses can occur when instantaneous demand exceeds this capacity.
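As an illustrative example of how the load levels relate to the link capacities (the specific K and L values here are chosen only for the arithmetic): with K = 1 foreground flow and L = 8 background flows, a backbone link carries 9 On/Off sources with a combined mean rate of 9 x 2.5 Mbps = 22.5 Mbps and a combined peak rate of 45 Mbps. A Light-load capacity of double the combined peak, 90 Mbps, gives 22.5/90 = 25% average offered load, while a Heavy-load capacity equal to the 22.5 Mbps combined mean gives 100%.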


5.1. Simulation Performance


Figure 5 shows the relative speedup results for the hybrid simulation compared to the packet simulation, for a single foreground flow in the network (K=1). Figure 5(a) presents the results for Medium load, while Figure 5(b) presents the results for Heavy load. Each line on the graphs represents a different number of background flows; these are fluid flows in the hybrid simulation, and packet flows in the packet simulation. In both graphs, the relative speedup values are shown as a function of network size N, the number of router hops. Note that the horizontal axes use a logarithmic scale.

The simulation results in Figure 5 show (as expected) that the hybrid simulation is faster than the packet simulation. The relative speedup advantage varies from a factor of 2 to a factor of 20, depending on the network topology and traffic model used.

Figure 5(a) presents the results for Medium network load. Here, the performance advantage of the hybrid simulation clearly increases as the number of (fluid) background flows increases. However, there is a diminishing returns effect as well, which limits the relative speedup achieved as the network size increases. This phenomenon is attributed to the ripple effect: as the number of possible congestion points in the network increases, it is more likely that flow interactions and packet losses trigger rate changes in the fluid flows, increasing the number of simulation events in the hybrid model. The diminishing returns effect is not present for the Light load scenario: relative speedup always improves as L is increased and as N is increased.

The impact of the ripple effect is even more pronounced in Figure 5(b), for Heavy network load. The relative speedup values here are much lower than in Figure 5(a). Furthermore, adding more (fluid) background flows does not always improve speedup. Initially, as the number of background flows is doubled from 2 to 4, and from 4 to 8, the speedup advantage of the hybrid simulation improves. However, doubling again to 16 improves speedup only marginally on small networks, and makes speedup worse (compared to L=8) for large networks.

Figure 6. Plots of relative percent error in latency vs N for (a) Medium load with K=1 and (b) Heavy load with K=1 and relative percent error in jitter vs N for (c) Medium load with K=1 and (d) Heavy load with K=1.

Doubling again to 32 reduces the speedup advantage (compared to L=16) of the hybrid simulation across the full range of network sizes considered. In general, speedup tends to decrease as N increases, due to the increased ripple effect. For larger N or larger L, the hybrid implementation could be slower than the packet implementation.

The relative speedup with additional foreground flows at Medium load (not shown here) is about half that in Figure 5(a). This decrease makes sense, given the increase in the number of packet events in the simulation. The qualitative shape of the speedup curves remains the same as the number of foreground flows is varied.

5.2. Simulation Accuracy


Figure 6 shows the simulation accuracy results for the Medium and Heavy network load scenarios, for a single foreground flow (K=1). These graphs present the relative percentage error in mean end-to-end transfer latency (Figures 6(a) and (b)) and jitter (Figures 6(c) and (d)) for the hybrid simulation, compared to the packet simulation.

These results show that the relative error in mean transfer latency is low (e.g., less than 4% for all cases depicted in Figure 6). Results for Light load (not shown here) also have a relative error less than 4%, as expected. The relative error in the jitter metric tends to be higher, though it is still under 15% in all cases considered. This observation implies that the distribution of end-to-end delays is similar in both the packet and hybrid simulations. For Light load (not shown here), the relative error in jitter is almost -100%. This is because there is little or no jitter in the hybrid simulation due to negligible queuing, whereas there is some jitter in the packet simulation due to some queuing.

The results in Figure 6 also show that the relative error in latency and jitter tends to decrease (and stabilize) as the number of background flows is increased.

Figure 7. Plots of (a) relative percent error in latency vs N for Medium load with K=4 and (b) relative percent error in jitter vs N for Medium load with K=4.

One possible explanation is that as the number of background flows increases, the flow interactions at the buffers increase, resulting in fluid dynamics that better approximate the statistical multiplexing in the packet simulation. Furthermore, the variance of the background traffic tends to decrease relative to the mean as sources are aggregated, since the sources are independent.

Increasing the number of foreground flows tends to increase the relative error in both the latency and jitter metrics. This effect is illustrated in Figure 7 for K=4, and is attributed to the dynamics of the packet rate estimation algorithm.

6. Results for Closed-Loop Traffic

The second set of simulation experiments studies the accuracy of the hybrid simulation for closed-loop traffic. A Web client/server model is used to model a single foreground flow, with multiple TCP transfers taking place (one at a time, 100 seconds apart) on this foreground flow during the simulation. Background flows use the Exponential On/Off source model (packet or fluid). The unidirectional background traffic competes with the TCP data packets flowing from the server to the client. TCP acknowledgment packets return on the uncontested reverse channel.

The purpose of the experiment is to compare TCP transfer durations for both the packet and hybrid simulations. In particular, we study the cumulative effect of the relative errors in packet transfer latencies on the overall TCP transfer duration observed by a Web client. For this experiment, we consider TCP transfer sizes ranging from 1 KB to 50 KB, which spans the typical range of Web document sizes. We focus only on the simulation accuracy results for this performance metric, using a single simulation run. The speedup results are not presented, since they are overly optimistic: they are dominated by efficient fluid-only execution of the background flows in between the arrivals of the foreground TCP transfers.

6.1. Simulation Accuracy


Figure 8 presents the simulation results from these experiments. The first row of graphs (Figures 8(a) and (b)) is for L=8 background flows at 70% network load, while the second row of graphs (Figures 8(c) and (d)) shows the results for L=32 background flows at 90% network load. The load values represent the average offered load from the background flows, since the foreground flow is inactive most of the time. In all cases, there is only a single foreground TCP flow. The first column of graphs (Figures 8(a) and (c)) is for a single hop network (N=1), while the second column of graphs (Figures 8(b) and (d)) is for N=8. The packet loss ratios indicated are for the background flow that traverses the entire network; the foreground flow should experience a similar packet loss ratio. In general, the average packet loss ratio increases with the number of hops traversed.

These four graphs use scatterplots to present the simulation results. Each point in the plots represents the TCP transfer duration (in seconds, on the vertical axis) for a completed TCP connection with the transfer size in packets indicated on the horizontal axis. (Note that the vertical axis is log scale, while the horizontal axis is linear scale.) Each + represents a transfer time result from the packet simulation, while each x represents a result from the corresponding hybrid simulation.

The results in Figure 8 show that there is close agreement between the TCP transfer durations reported by the packet and hybrid simulations.


Figure 8. Plots of transfer time vs transfer size for (a) N=1, L=8, packet loss=0.1% (b) N=8, L=8, packet loss=1% (c) N=1, L=32, packet loss=0.5% and (d) N=8, L=32, packet loss=3%.

For many transfer sizes, the + and x points coincide, indicating that the hybrid model provides an excellent approximation of the TCP transfer duration in the packet simulation.

All four graphs show a distinctive structure representative of TCP. In particular, as the transfer size is increased, a step-like structure appears, indicating the additional round-trip times required to complete the transfer. The step-like structure is most evident at Light load (not shown here), since there is little or no queuing delay in the network. At higher loads, queuing delays, packet losses, and retransmissions can add to the transfer duration, producing points above the lower bound corresponding to network round-trip times.

The encouraging observation is the close agreement in transfer durations even for some points above the TCP lower bound. This suggests that the hybrid model produces queuing delays and packet losses that are similar to the packet model, triggering similar TCP behaviors at the endpoints.

In addition, these results suggest that relative errors in modeling end-to-end packet transfer delay do not accumulate; rather, they seem to average out over a multi-packet transfer. This observation is particularly promising for network emulation purposes.

The fidelity of the hybrid simulation is better for the single-hop network (Figures 8(a) and (c)) than for N=8 (Figures 8(b) and (d)), and better at 70% load (Figures 8(a) and (b)) than at 90% load (Figures 8(c) and (d)). There are some discrepancies between + and x points in all four graphs. These discrepancies may represent packet loss events triggered in one simulation model but not the other, or simply packet losses that occur at different places within the multi-packet transfer. Nevertheless, the distribution of transfer durations appears similar in both the packet and hybrid models.

7. Conclusions and Future Work


This paper presented a hybrid network simulation model that integrates both packet and fluid flows. Initial results show up to 20 times speedup using the hybrid approach over a purely packet-based approach. Accuracy is within 4% for latency and 15% for jitter in many cases, though accuracy decreases as the number of packet foreground flows increases. Performance improves when more fluid background flows are modeled, as long as congestion is not too high. Performance decreases in cases where a large congested network leads to the ripple effect dominating. Increasing the ratio of packet flows to fluid flows also decreases performance.

In the network model for which simulation results are presented, each link is modeled to have approximately the same congestion level. Therefore, each backbone link was a bottleneck when congestion levels were high. Since it is unusual for all links to be bottlenecks, the performance levels achieved using these techniques with real simulation models are likely to be much higher than for the benchmark results presented. However, this approach may not be appropriate for use in models with large numbers of packet flows and high congestion.

Further research is required to fully understand the impact the hybrid implementation has on accuracy and performance. It is possible that components of the hybrid implementation could be improved to offer greater accuracy and performance. Also, studies need to be performed to determine the performance capabilities of using this approach along with parallel discrete-event simulation techniques. Finally, experiments need to be performed to determine how much performance can be increased for real-time network emulation experiments using this approach.

8. Acknowledgments
Financial support for this research was provided by NSERC (Natural Sciences and Engineering Research Council of Canada), iCORE (Informatics Circle of Research Excellence) and ASRA (Alberta Science and Research Authority). Other members of the IP-TN development team include Mike Bonham, Roger Curry, Mark Fox, Hala Taleb, Kitty Wong and Xiao Zhong-e. The authors wish to thank Christiane Lemieux, who supervised a course project conducted by Cameron Kiddle on fluid modeling techniques. Also, the authors wish to thank the anonymous referees for their constructive comments regarding the paper.

References
[1] R. M. Fujimoto, T. McLean, K. Perumalla, and I. Tacic. Design of high performance RTI software. In Proceedings of the Fourth IEEE Workshop on Distributed Simulation and Real-Time Applications, pages 89-96, 2000.
[2] G. Kesidis, A. Singh, D. Cheung, and W. W. Kwok. Feasibility of fluid event-driven simulation for ATM networks. In Proceedings of the IEEE Global Telecommunications Conference, pages 2013-2017, 1996.

[3] B. Liu, D. R. Figueiredo, Y. Guo, J. Kurose, and D. Towsley. A study of networks simulation efficiency: Fluid simulation vs. packet-level simulation. In Proceedings of the Twentieth Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), pages 1244-1253, 2001.
[4] B. Liu, Y. Guo, J. Kurose, D. Towsley, and W. Gong. Fluid simulation of large scale networks: Issues and tradeoffs. In Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications, pages 2136-2142, 1999.
[5] B. Melamed, S. Pan, and Y. Wardi. Hybrid discrete-continuous fluid-flow simulation. In Scalability and Traffic Control in IP Networks, Sonia Fahmy and Kihong Park, editors, Proceedings of SPIE Vol. 4526, pages 263-270, 2001.
[6] V. Misra, W. Gong, and D. Towsley. Fluid-based analysis of a network of AQM routers supporting TCP flows with an application to RED. Computer Communication Review, 30(4):151-160, 2000.
[7] D. Nicol. Discrete event fluid modeling of TCP. In Proceedings of the 2001 Winter Simulation Conference, pages 1291-1299, 2001.
[8] D. Nicol, M. Goldsby, and M. Johnson. Fluid-based simulation of communication networks using SSF. In Proceedings of the 11th European Simulation Symposium, pages 270-274, 1999.
[9] J. M. Pitts. Cell-rate modelling for accelerated simulation of ATM at the burst level. IEE Proceedings-Communications, 142(6):379-385, 1995.
[10] G. F. Riley, R. M. Fujimoto, and M. H. Ammar. A generic framework for parallelization of network simulations. In Proceedings of the Seventh International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, pages 128-135, 1999.
[11] G. F. Riley, T. M. Jaafar, and R. M. Fujimoto. Integrated fluid and packet network simulations. In Proceedings of the Tenth IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, pages 511-518, 2002.
[12] R. Simmonds, R. Bradford, and B. Unger. Applying parallel discrete event simulation to network emulation. In Proceedings of the 14th Workshop on Parallel and Distributed Simulation, pages 15-22, 2000.
[13] T. K. Yung, J. Martin, M. Takai, and R. Bagrodia. Integration of fluid-based analytical model with packet-level simulation for analysis of computer networks. In Internet Performance and Control of Network Systems II, Robert D. van der Mei and Frank Huebner-Szabo de Bucs, editors, Proceedings of SPIE Vol. 4523, pages 130-143, 2001.
[14] X. Zeng, R. Bagrodia, and M. Gerla. GloMoSim: A library for parallel simulation of large-scale wireless networks. In Proceedings of the 12th Workshop on Parallel and Distributed Simulation, pages 154-161, 1998.
