
QoS Based Bandwidth Allocation Schemes for Passive Optical Networks

Nadav Aharony, Eyal Nave, Nir Naaman and Shay Auster


Department of Electrical Engineering

Technion - Israel Institute of Technology, Haifa 32000, Israel


E-mail: {nadav@, eyal@, auster@, cn17s02@}comnet.technion.ac.il  Web: www.comnet.technion.ac.il/~cn17s02

Abstract
"Ethernet in the First Mile" (EFM) is an upcoming standard currently being drafted by the 802.3ah task force in the IEEE organization. One of the tracks in the standard deals with Ethernet over Passive Optical Network (EPON), a centralized point-to-multi-point network with a shared upstream channel. This paper deals with the problem of upstream bandwidth allocation in EPON networks. We suggest, analyze, and compare several bandwidth allocation algorithms. As a novel type of network, EPON presents many new challenges, and one of the main goals of this project was to get a feel for the network in order to decide on the next steps in its exploration. To that end, we have constructed an OPNET model of an EPON network based on the IEEE standard. The model contains one module for the network's central-office device, the Optical Line Terminal (OLT), and another module for the customer premises device, the Optical Network Unit (ONU). The network model implements EPON's Time Division Multiple Access (TDMA) multiplexing scheme and supports the relevant messages as defined by the standard. Although designed for EPON, our OPNET model can be easily converted to simulate other shared, TDMA based, networks. The new OPNET modules were designed as a test-bed for bandwidth allocation algorithms; they enable smooth insertion of a wide range of algorithms and easy extension of the currently supported features. We used our model to simulate several bandwidth allocation algorithms, starting from a simple static algorithm and advancing to more sophisticated algorithms that support Quality of Service (QoS) and guarantee fair bandwidth allocation. Our results demonstrate the impact of the bandwidth allocation algorithm on the overall network performance.

1. Introduction
Passive Optical Network (PON) is an emerging access technology, offering a high bandwidth point-to-multipoint optical fiber network. It is called "passive" since there are no active devices (e.g. repeaters) along the route apart from the end units. The only interior elements used in such networks are passive combiners, couplers, and splitters. This greatly reduces the costs and complexity of the deployment and maintenance of the network. PONs are intended to solve the access network's bandwidth bottleneck by offering a cost-effective, flexible, and high bandwidth solution. Today, new housing developments in many places around the world are built with fiber-based connections to the home, and network providers are conducting field-tests and experiments with fiber access. Eventually, fiber access is predicted to replace the old copper infrastructure the world over.

A PON consists of two main types of end devices: an Optical Line Terminal (OLT) and an Optical Network Unit (ONU). The OLT resides in the Central Office (CO) and is connected to an uplink fiber and a downlink fiber, linking it to the network end-units. The OLT is responsible for control and management of the PON, and also acts as the gateway to the outside world and adjacent networks. The ONU is the client side of the network and resides near or inside the Customer Premises (CP). An ONU may serve a single residence or business, or it may serve several subscriber residences or an entire apartment building. PONs use point-to-multipoint topologies; they may be connected as a tree, a bus, or a ring. Figure 1 illustrates the components of an EPON deployment.

Figure 1 - EPON Network Illustration

The term uplink refers to information flowing from the end units (the ONUs in our case) to the central office equipment (the OLT). The term downlink refers to information flowing from the OLT to the ONUs. All communication within a PON is mediated by the OLT. On the downlink, the OLT broadcasts all information to the fiber. Each ONU filters out only the transmissions that are directed to it (see Figure 2). Encryption and authentication features may be applied to the traffic in order to make sure

that only the intended party or parties have access to the information. On the uplink, the ONUs send their traffic to the OLT, which then forwards it to its destination. ONUs are not able to directly "see" the upstream traffic from their network peers (see Figure 3). The OLT is responsible for all bandwidth allocations in the network. An ONU is allowed to transmit only in time slots that have been assigned to it by the OLT. The OLT has a great amount of flexibility in implementing a bandwidth allocation algorithm; allocations may change dynamically over time to adapt to the different bandwidth requirements of the ONUs. The bandwidth allocation algorithm may implement anything from a strict and even division of the bandwidth among all ONUs, to giving all bandwidth to a single ONU (both uplink and downlink). It is also up to the algorithm to maintain the level of QoS that has been guaranteed to each ONU. This work inspects different aspects of bandwidth allocation in PONs.

Figure 2 - EPON TDMA Downlink

Figure 3 - EPON TDMA Uplink

1.1 Ethernet over PON (EPON)
"Ethernet in the First Mile" (EFM) is an upcoming standard currently being drafted by the 802.3ah task force in the IEEE organization. It will define an access standard based on the Ethernet protocol for several media architectures and physical link types. One of the tracks in the standard deals with Ethernet over Passive Optical Network (EPON). EPON deals with a symmetric point-to-multipoint connection at very high speeds (currently 1 Gb/s, in the future 10 Gb/s and possibly even more). The protocol does not limit the number of users, though a typical scenario refers to the connection of up to 64 end points per PON fiber. A main advantage of using Ethernet datagrams over the PON is the fact that most networks on both sides of the PON (the customer and the service providers) are based on Ethernet. Using Ethernet in the access link will save unnecessary format conversions. Another great benefit is that Ethernet equipment is widely available "off the shelf" and is manufactured by many vendors.

A unique network management protocol has been devised by the EFM task force - the Multi Point Control Protocol (MPCP) [1]. MPCP defines a TDMA multiplexing scheme, in which the upstream channel is divided into time-slots. Each ONU is allocated time-slots in which to send its uplink traffic. The time slots are at the granularity of a Time Quanta (TQ), which is defined as the time that it takes to broadcast 2 octets of data. MPCP constitutes an absolute timing model: a global clock exists in the OLT, and the ONUs set their local clocks to the OLT clock. All control messages in the network are time-stamped with the local clock, and through them the devices are able to synchronize their network clocks. According to the standard, ONU devices must support 802.1Q queuing, meaning a queuing system that supports 8 priorities (or traffic classes). There is no definition of how the priorities must be used. The priority queues may also be used for QoS purposes, where each traffic type is given a distinct priority.

1.2 Bandwidth Allocation in EPON
This project concentrates on upstream bandwidth allocation in EPON. It is important to note that the 802.3ah standard does not dictate the bandwidth allocation algorithm and leaves it open to the implementation of each vendor/manufacturer. There are many possibilities for the management of the bandwidth. The simplest regime is "static allocation", in which each ONU is given a constant bandwidth allocation, or grant, which is re-allocated at constant intervals. This type of allocation is very inefficient, since the end-stations may not require the entire grant bandwidth all the time, and the wasted bandwidth might have been allocated to someone else. For this reason, more advanced allocation algorithms have been devised, which are dynamic in nature. These algorithms usually have access to input about the current, near real-time needs of the end stations, and also have access to the service level agreement (SLA) definitions of each end-user. These schemes usually attempt to achieve fairness in the allocations to end-users of the same class. Dynamic algorithms may work either on a continuous timeline or as cycle based. A continuous timeline algorithm receives bandwidth requests from the end stations and allocates bandwidth according to them in a continuous fashion. A cycle based algorithm divides the timeline into consecutive "cycles", and calculates the bandwidth allocation for an entire cycle at a time. Ultimately, the bandwidth allocation processes in the OLT are supposed to reflect the QoS and SLA requirements and implement them in this segment of the network. The OLT is also responsible for taking into consideration all aspects of equipment and physical delays in order to avoid collisions. Collisions occur when more than one ONU's signal reaches the OLT's receiver at the same time. The grant information is sent in MPCP GATE messages, which tell each ONU its start time to transmit and the length of the allocated transmission. In order to make correct allocation decisions, the ONUs send MPCP REPORT messages, informing the OLT of their queue status. Each allocation algorithm might use this information in a different way. Even if an ONU did not request any bandwidth for the next cycle, it would typically be allocated a minimal grant that would allow it to at least send a REPORT message for the cycle afterwards.

EPON's bandwidth allocation scheme is different from that of other shared access networks such as data over cable networks. The standard for data over cable, DOCSIS [3], allows contention between end stations for the available bandwidth. In EPON there is no contention and therefore there is no danger of data collision. The downside of this mode is the obligation to allocate the minimal grant to each end-station. The difference between the two networks is that the total available bandwidth of EPON is very large compared to coaxial cables (1 Gb/s compared to only 30 Mb/s), and the number of users is much smaller (a few dozen compared to a few hundred or thousand). Because of this, the minimal grants for the ONU reports are negligible.

1.3 Goals
In this project we attempt to explore some of the aspects of QoS bandwidth allocation in the 802.3ah EPON architecture. Since this is a novel network model and a very extensive field, this project can offer a starting point for more advanced research on the subject. The bandwidth allocation process can be divided into several sub-processes (which may or may not be independent of each other): Gathering the input for the decision making process (such as bandwidth requests or the bandwidth definitions for each end unit).
Dividing the available bandwidth between the end units (determining the quota for each) within a defined timeframe. Scheduling the allocated quotas of all end units for the defined time frame. Informing the end units when they are allowed to broadcast (in parallel with the frequency of the scheduling changes). For each of these sub-processes there are many work modes and algorithms that may be devised and compared. Since there are numerous approaches that can be used, the goal of this project is not to provide a complete and optimal solution, but rather to present a preliminary comparison of several algorithms, in order to get a feel for where to go next. The current project concentrates on a narrow section of the network - mainly the bandwidth allocation in the uplink direction.
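To give a feel for the time scales these sub-processes work at, the MPCP quantities defined above can be made concrete with a small calculation. This is a sketch; it only assumes the 1 Gb/s line rate and the 2-octet TQ definition given earlier.

```python
# Sketch: MPCP time-quanta arithmetic at the EPON line rate.
# Assumes the 1 Gb/s rate and the 2-octet TQ definition from the text.

LINE_RATE_BPS = 1_000_000_000   # 1 Gb/s EPON line rate
TQ_BITS = 2 * 8                 # one Time Quanta = 2 octets

def tq_duration_s(line_rate_bps: int = LINE_RATE_BPS) -> float:
    """Duration of a single TQ in seconds."""
    return TQ_BITS / line_rate_bps

def grant_duration_s(tq_count: int, line_rate_bps: int = LINE_RATE_BPS) -> float:
    """Duration of a grant expressed as a whole number of TQ units."""
    return tq_count * tq_duration_s(line_rate_bps)

print(tq_duration_s())         # 1.6e-08 -> one TQ lasts 16 ns
print(grant_duration_s(1000))  # a 1000-TQ grant lasts 16 microseconds
```

At these granularities, a grant of a few thousand TQ is still only tens of microseconds long, which is why per-cycle allocation decisions are feasible.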

OPNET has been chosen as the simulation environment for the project. Since EPON is still a novel technology and the standard is still being developed, there were no ready-made simulation modules. A new environment and set of modules had to be designed and implemented.

2. Problem Definition
The problem we set out to investigate is the bandwidth allocation of the EPON's uplink, with the following guidelines. Two types of network traffic are defined:
o Committed Rate (CR) - A specified bit rate that the network must allocate to the ONU if it requests it. The requirement is for the average bit rate over one second, and there are no requirements for minimum delays between the bandwidth allocations within this timeframe.
o Best Effort (BE) - Traffic that is not promised to the ONU in the SLA, and that will be allocated only if all CR requirements in the network have been fulfilled and there is still available bandwidth to allocate. The only guideline is the attempt to allocate the available bandwidth as fairly as possible between the ONUs.
Fairness is defined as follows:
o Don't ask, don't get - An ONU that did not request any BE bandwidth will not be allocated any.
o History window - A certain history is kept of the amount of BE bandwidth allocated to each ONU. ONUs that were allocated less bandwidth within this history window are entitled to receive bandwidth until they are aligned with the others. The window size is not defined; it is a parameter to be tested, but note that too large a window might enable a single ONU that was quiet for a long time to take over all of the available BE bandwidth and deny service to the others.
o Among ONUs that have the same allocation history, fairness is defined as an equal division of the bandwidth.
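The fairness rules above can be sketched as a small ordering routine. The function and its inputs are our own illustration, not the implemented algorithm (which is described in Section 4): it applies "don't ask, don't get" by skipping non-requesting ONUs, and the history window by serving the least-served ONUs first.

```python
# Sketch of the BE fairness rules from the problem definition.
# History window: ONUs granted less BE bandwidth in the window come first;
# "don't ask, don't get": ONUs with no request are skipped entirely.

def be_allocation_order(requests: dict[int, int],
                        history: dict[int, int]) -> list[int]:
    """Order ONU ids for BE allocation.

    requests: onu_id -> requested BE bandwidth (0 means no request)
    history:  onu_id -> BE bandwidth already granted in the window
    """
    asking = [onu for onu, req in requests.items() if req > 0]
    # Least-served first; ties (equal history) fall back to a stable
    # order by id, approximating the "equal division" rule.
    return sorted(asking, key=lambda onu: (history.get(onu, 0), onu))

# Example: ONU 3 asked for nothing; ONU 2 was served least, so it goes first.
order = be_allocation_order({1: 500, 2: 800, 3: 0}, {1: 4000, 2: 1000, 3: 0})
print(order)  # [2, 1]
```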

3. OPNET Implementation
In this section we describe the EPON environment that was implemented and its components. Since there was no available OPNET module for the EPON standard, we constructed an EPON modeling environment from scratch. This required simulating both the physical PON characteristics and the upcoming 802.3ah standard. Even though the standard is not complete at this time, the aspects of 802.3ah that were implemented are those at the final stages of ratification. They were also designed so that they can easily be modified, if necessary.

3.1 Network Model Assumptions
Several assumptions have been made in order to simplify the simulation environment and to allow better focus on the bandwidth allocation problem:

Static network configuration - The number of ONUs remains constant, i.e., no ONUs are joining or leaving the network. Accordingly, the network registration processes defined in 802.3ah were not implemented.

Single subscriber ONUs - Each ONU is assumed to be serving only a single subscriber (and not, for example, an entire apartment building). When more than one distinct subscriber is served by an ONU, each subscriber will have his own SLA definitions and will need to receive specific allocations according to them. This would require increased complexity from the network devices and the allocation algorithms.

Bandwidth allocated per ONU - All bandwidth allocations are done per ONU. The OLT does not instruct the ONU how to utilize the granted bandwidth. This is the simplest work mode and probably the most common, at least for the initial deployments of EPON. More advanced allocation models are also available. For example, the "granted entity" might not be the ONU but a specific IP address at the customer's premises, or specific traffic types. Another example is the allocation of a separate grant for each queue priority.

Global Clock - All simulation modules use the same global clock provided by the network. Consequently, the exchange of synchronization information, which is done in reality, is not implemented.

No Packet Loss - The initial model assumes that no packets are lost. Future versions would take into account a realistic bit error rate of about 10^-12.

No Propagation Delays - To simplify the initial model, propagation delays have been ignored. Propagation delays will be added to future versions.

Part of the plans for future work is to expand the simulation to remove some of these leniencies, in order to make it more realistic.

3.2 TDMA Model and the Bandwidth Allocation Concept
The TDMA timing regime was implemented using OPNET's global simulation time and the scheduling of self interrupts and remote interrupts in advance. These interrupts invoke processes that remove packets from the queue and transmit them, make REPORT packets, schedule future grants, etc. The ONUs set interrupts to start transmitting at offsets based on their GATEs' start times and lengths. With the assumption that the OLT allocates the grants without overlaps, there should be no collisions of uplink data between different ONUs. The timeline is divided into cycles, which are defined as an attribute of the OLT. The division into cycles is not mandatory and is a matter of our implementation of the standard. The OLT utilizes several "markers" to schedule actions for itself in the future. The most significant marker on the timeline is a pointer to the start time of the next cycle. All the other action markers are set accordingly. As in reality, the GATE allocations must be received by the ONUs before the cycle starts, which provides them with ample time to transmit at their assigned times. For transmissions, there is another marker, "send_grants", which schedules an interrupt at a certain offset before the next cycle's start. This interrupt tells the OLT to send the GATE messages that have already been prepared by the algorithm. Before this can happen, another marker, "do_schedule", must instruct the OLT to actually "execute" the algorithm and make the necessary calculations. There are also other time markers that are used by the ONUs to schedule their GATE start and stop times, and to simulate the REPORT message generation and the actual sending of the outstanding packet queue in a realistic manner. As mentioned above, the basic unit of time is a Time Quanta, or TQ. All allocations are in multiples of TQ units. All control packets created are time-stamped with the current simulation time. Figure 4 depicts an example of the timeline from the OLT and ONU points of view (in a 3 ONU scenario):

Figure 4 - Bandwidth Allocation timing model; a 3 ONU example. The top timeline depicts the OLT's point of view; the bottom timeline is seen by the ONUs.

As mentioned, the timelines are divided into cycles. The cycles are marked in the OPNET environment by the next_cycle_start interrupt. The OLT's interrupts are scheduled in advance, one cycle at a time. As described above, before the cycle starts two actions must be completed - the actual allocation for that cycle, and the broadcast of all GATE messages for the upcoming grant. Each ONU is allocated its grant in which it is allowed to send its data. At the beginning of each grant (or at its end) the ONU sends its REPORT message. The reports reach the OLT and the most recent report from each ONU is saved. When the do_schedule interrupt is invoked, the last report received is used by the allocation algorithm. In the depicted example, the REPORTs from ONUs 1 and 2 are received in time to be considered for cycle N-1, whereas ONU 3's REPORT has to wait until cycle N. Note: the described timing model may easily be changed by changing the interrupt allocation scheme. For example, the allocation algorithm may be executed several times before the grants are sent, optimizing the original execution.

3.3 Modeling MPCP in OPNET
As mentioned earlier, MPCP defines different types of messages for managing the EPON. Currently, the only MPCP

relevant messages needed to control the PON are the GATE and REPORT messages. The implementation of the MPCP messages in OPNET is handled with consideration for compatibility with the IEEE 802.3ah standard. If modifications occur in the standard, the implementation of the MPCP messages can easily be modified to match them, though the functionality is not expected to be altered.

GATE Message: GATE messages are sent by the OLT to each ONU, typically at every cycle, in order to control the TDMA regime and avoid collisions on the uplink bus. The main fields are start time and length, which declare the OLT's GATE specifications. An ONU must not transmit outside its allocated grant. Up to 4 different grants may be allocated within a single GATE packet. For each grant, the GATE specifies its start time and length. In addition, each grant specifies whether the ONU should attach a status report to the grant's data.

Figure 5 - GATE message

REPORT Message: REPORT messages are used by the ONUs to inform the OLT of their packet load in each sub-queue. For each sub-queue, two values may be specified - the total queue size, and the queue size up to a predefined threshold. This second value may be used by certain allocation algorithms to improve the allocation efficiency, but it is out of scope for the current discussion. A bitmap specifies which of the values contain data.

Figure 6 - REPORT message

3.4 Simulating the PON's physical characteristics
In order to connect the OLT to the ONUs, OPNET's bus linkage component and OPNET's bus tap were used. Two bus components are utilized - one simulates the uplink fiber and the other simulates the downlink fiber. The OLT transmits packets only on the downlink bus and receives its packets only through the uplink bus. The ONUs transmit only on the uplink bus and receive their packets only through the downlink bus; therefore ONUs cannot communicate amongst themselves without going through the OLT first. The OLT's grant allocation mechanism and inter-packet/inter-grant gaps avoid any possibility of data collision on the bus.

Figure 7 - EPON scenario implemented in OPNET with 4 ONUs

3.5 OLT Node Module
The OLT node model is divided into two main data-paths: PON upstream and PON downstream (see Figure 8). The PON upstream is connected to the uplink fiber of the network through the pon_receiver bus receiver module. It receives all

data passed through the fiber, and passes it on to the packet_classifier process, which determines whether the packet is a control packet destined for the OLT or a data packet. The data packets are routed to the port_1_transmitter and passed on, out of the EPON network. Control packets are passed on to the scheduler process, which is the main process of the OLT device. The PON downstream is connected to the downlink fiber of the network through the pon_transmitter. It receives packets from olt_q, the OLT's queue process, which is based on OPNET's abc_fifo process model. The queue receives GATE messages from the scheduler, and optional downlink traffic from the outside. Since we concentrate on the upstream process, the downstream is used mostly to deliver the OLT's control messages. The external ports Tx and Rx can be used to test a TCP/IP connection between a station on the PON and the outside.
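The packet_classifier decision can be sketched as a one-line check. MPCP messages are MAC Control frames, which carry EtherType 0x8808; the routing targets below are the process names from the node model, and the rest is our illustration rather than the model's actual code.

```python
# Sketch of the packet_classifier decision in the OLT node model:
# MPCP control frames use the MAC Control EtherType (0x8808);
# everything else is forwarded out of the EPON as data.

MAC_CONTROL_ETHERTYPE = 0x8808

def classify(ethertype: int) -> str:
    """Return the process a received frame is routed to inside the OLT."""
    if ethertype == MAC_CONTROL_ETHERTYPE:
        return "scheduler"           # GATE/REPORT handling
    return "port_1_transmitter"      # data leaves the EPON network

print(classify(0x8808))  # scheduler
print(classify(0x0800))  # port_1_transmitter (e.g. IPv4 data)
```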

Figure 8 - OLT node model implemented in OPNET

The resource_manager is responsible for holding node-wide information (such as its MAC address) and executing node initialization actions, if needed. The scheduler (Figure 9) is responsible for managing the EPON upstream, and for implementing the timing model described earlier. It is composed of the following states:
init - Initializes the process model and specifically the selected allocation algorithm. Also jump-starts the TDMA scheme by allocating the initial cycle interrupts.
idle - The default state in-between process interrupts.
msg_arrival - Receives control messages that were routed to the scheduler process, and distributes them according to the message type. Currently it handles only REPORT messages, but it may easily be adapted to handle more.
store_reports - Receives the REPORT and processes it as required by the selected allocation algorithm.
schedule - The key state of the scheduler process. The actual execution of the bandwidth allocation algorithm is done within this state. It relies on the formatted REPORT data from the store_reports state, and its output is a list of grant data for the upcoming cycle.
send_grants - Uses the output of the schedule state to prepare standard GATE messages, and broadcasts them to the PON.

Figure 9 - OLT scheduler state machine

3.6 ONU Node Module
Similar to the OLT, the ONU device is also composed of PON upstream and PON downstream data paths (see Figure 10). The packets received from the pon_receiver connected to the PON's downlink are passed to the packet_filter, which passes on only the packets destined for the specific ONU. Other packets are destroyed in the packet_filter_sink. Next, the packet_classifier determines whether the packet is a control packet or a data packet. The data packets are routed to the appropriate external port. Control packets are passed on to the scheduler. The different external ports are used to easily insert simulation traffic into the different priority queues, each of which corresponds to a different port number (currently only four priorities are used).

Figure 10 - ONU node model implemented in OPNET

The resource_manager is responsible for holding node-wide information (such as its MAC address) and executing node initialization actions, if needed.

The scheduler (Figure 11) is responsible for the implementation of the timing model from the ONU's point of view. It is composed of the following states:
init - Initializes the process model.
idle - The default state in-between process interrupts.
msg_arrival - Similar to the OLT state of the same name. Currently, it handles only GATE messages.
gate_arrival - Receives the GATE message and is responsible for extracting the allocated grant data and for scheduling interrupts accordingly.
gate_start - Conducts the necessary actions at the start-time of each grant.

make_report - At the appropriate interrupt, this state creates a REPORT packet and assigns appropriate values to its fields.

Figure 11 - ONU scheduler state machine

onu_q, the ONU's queue process (depicted in Figure 12), is implemented as an active queue, meaning that it is autonomous in the insertion and extraction of packets, according to its limitations and the allocated grants. As mentioned earlier, it comprises 8 sub-queues, in accordance with 802.1Q. Control messages receive the highest priority. The queue may perform in one of three modes:
o Unlimited sub-queue size.
o Limited queue size with a specific size for each sub-queue.
o Limited queue size with shared memory for the sub-queues, meaning that their sizes may vary and a high priority packet in one sub-queue may cause a tail-drop of a lower priority packet in another sub-queue.
The queue performs three main tasks:
o Insertion of packets - managed by the ins_tail state, according to the selected work-mode for the queue size.
o Packet transmission - the beginning of the GATE is set up by the start_gate state, and the actual transmission is managed by send_head. Within the GATE, packets are transmitted one by one. A single packet is sent and then the process sets an interrupt for the next packet to be sent at the actual time that the previous packet finishes, according to the ONU line-rate.
o Idle handling - for this implementation, the queue has 2 idle states: one for when the ONU is not transmitting (the branch state) and one for when the ONU is within a grant's timeframe (gate_idle).

Figure 12 - ONU queue state machine

4. Bandwidth Allocation Algorithms
Three bandwidth allocation algorithms have been implemented: static, semi-static, and dynamic. The algorithms are described in the following subsections.

4.1 Static Allocation
This is perhaps the simplest algorithm possible, and the one we used in our model development and validation stage. A cycle size parameter is specified, setting the size of each allocation cycle in units of TQ. The allocated grant size is the same for each ONU, and is set by the simple calculation:

grant = cycle_size / number_of_ONUs

This division implements fairness in the grant allocation. However, other divisions are also possible, as long as the sum of the static allocations does not exceed the cycle's size. There is no consideration of the actual needs of each ONU, and each receives the same allocation in every cycle. Figure 13 depicts an example of the algorithm's allocation for 3 ONUs.
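The static division can be sketched in a few lines. The function name is our own; it simply applies the grant calculation above and lays the grants out back to back within the cycle.

```python
# Sketch of the static allocation: every ONU gets an equal share of the
# cycle, regardless of its actual load (all sizes in TQ units).

def static_allocation(cycle_size: int, num_onus: int) -> list[tuple[int, int]]:
    """Return (start, length) grants, back to back, one per ONU."""
    grant = cycle_size // num_onus   # grant = cycle_size / number_of_ONUs
    return [(i * grant, grant) for i in range(num_onus)]

print(static_allocation(9000, 3))  # [(0, 3000), (3000, 3000), (6000, 3000)]
```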

Figure 13 - Static Allocation Illustration

4.2 Semi-Static Allocation Algorithm
This simple algorithm is somewhat of a hybrid between the static allocation and the dynamic one. The semi-static algorithm, as opposed to the static algorithm, uses the REPORTs collected by the OLT from the ONUs to determine which ONUs requested bandwidth. Each REPORT acts as a Boolean variable that is False if the ONU did not request any bandwidth, and True if the ONU requested bandwidth. The algorithm ignores the actual size of the bandwidth requests. Each ONU (both idle and requesting) is granted a minimal allocation in every cycle, which is sufficient for it to send a REPORT packet with requests for the next cycle. The remainder of the cycle's length is divided equally among the ONUs that had bandwidth requests for this cycle. Unlike the static algorithm, the semi-static does not allocate data bandwidth to ONUs that are idle. Figure 14 presents a sample output of the semi-static algorithm for 3 ONUs in the network.
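The semi-static rule can be sketched as follows. The function name and the MIN_GRANT value are our own illustrations; the paper only requires that the minimal grant be large enough to carry a REPORT message.

```python
# Sketch of the semi-static algorithm: REPORTs act only as booleans, the
# minimal grant goes to everyone, and the remainder of the cycle is split
# equally among requesting ONUs (sizes in TQ; MIN_GRANT is illustrative).

MIN_GRANT = 64  # assumed to be enough to carry one REPORT message

def semi_static_allocation(cycle_size: int,
                           requested: dict[int, bool]) -> dict[int, int]:
    """onu_id -> granted TQ for one cycle."""
    grants = {onu: MIN_GRANT for onu in requested}   # everyone may REPORT
    askers = [onu for onu, wants in requested.items() if wants]
    if askers:
        share = (cycle_size - MIN_GRANT * len(requested)) // len(askers)
        for onu in askers:
            grants[onu] += share
    return grants

# ONU 3 is idle, so ONUs 1 and 2 split the remainder of the cycle.
print(semi_static_allocation(10000, {1: True, 2: True, 3: False}))
# {1: 4968, 2: 4968, 3: 64}
```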

Figure 14 - Semi-Static Allocation Example

4.3 Dynamic Allocation Algorithm
The dynamic algorithm allows for a dynamic allocation of bandwidth, which varies from cycle to cycle and is adapted to the end-units' SLAs, the network's current requirements, and fairness in the division of excess bandwidth. The algorithm currently supports two types of traffic - best effort (BE) traffic and committed rate (CR) traffic, which is bandwidth that the ONU is entitled to but which does not have delay/jitter limitations. Delay and jitter constraints would force the allocation algorithm to schedule the grants with added definitions of mandatory lengths and intervals, which add a great deal to the complexity of the allocation problem. Three important elements are defined:
o Window - A window is defined as the time interval used to measure whether the network complies with the ONUs' SLA constraints. In addition, it defines the history interval during which the algorithm enforces fairness in the allocation of BE traffic among the network's ONUs.
o Cycle - Each window is divided into several equal length cycles (the total number of cycles per window is a parameter of the algorithm). The cycle bounds the total amount of bandwidth that can be allocated to the entire network in each execution of the algorithm. Thus, the delay between the ONU requests and the corresponding responses is controlled.
o Sub-cycle - A sub-cycle is enabled when the total sum of requests is lower than the remaining cycle size. In this case, each ONU receives a minimal grant and all requests are granted.
The algorithm has two work modes, based on the network load: a simple mode for low traffic loads and a complex mode for high traffic loads. Decision points are defined along the time-line, where the algorithm chooses the work-mode for the next cycle or window. Low-load is selected if the sum of requests is less than the cycle length; in this case the algorithm allocates a sub-cycle. If the sum of requests is more than the cycle length, the OLT enters high-load mode and remains in it for at least the duration of one window (see Figure 15).

Figure 15 - Dynamic algorithm mode arbitration

As mentioned, each window is divided into consecutive cycles. Within them, the algorithm allocates the CR requests first; if there is room left in the cycle, the algorithm allocates the BE requests according to the algorithm's fairness guidelines. Due to length constraints, the details of the BE allocation mechanism are omitted from this paper. The main idea is that an ONU is allocated BE bandwidth according to the history of the BE allocations it has received since the beginning of the current window. ONUs that have been allocated less BE bandwidth get higher priority.
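The per-cycle decision can be sketched as follows. Since the full BE mechanism is omitted from the paper, the BE part below is only our reading of the stated idea (CR first, then BE in order of least allocation history); all helper names are hypothetical.

```python
# Sketch of the dynamic algorithm's per-cycle decision: mode arbitration,
# then CR requests first, then BE shared by allocation history.
# (Helper names are ours; the paper omits the full BE mechanism.)

def choose_mode(total_requests: int, cycle_length: int) -> str:
    """Low-load -> allocate a sub-cycle; high-load -> window mode."""
    return "low_load" if total_requests < cycle_length else "high_load"

def allocate_cycle(cycle_length: int,
                   cr_requests: dict[int, int],
                   be_requests: dict[int, int],
                   be_history: dict[int, int]) -> dict[int, int]:
    grants = dict(cr_requests)                    # CR is served first
    remaining = cycle_length - sum(cr_requests.values())
    # Least-served ONUs get BE priority within the history window.
    for onu in sorted(be_requests, key=lambda o: be_history.get(o, 0)):
        take = min(be_requests[onu], max(remaining, 0))
        grants[onu] = grants.get(onu, 0) + take
        remaining -= take
    return grants

print(choose_mode(13000, 10000))  # high_load (requests exceed the cycle)
grants = allocate_cycle(10000,
                        cr_requests={1: 3000, 2: 2000},
                        be_requests={1: 4000, 2: 4000},
                        be_history={1: 9000, 2: 1000})
print(grants)  # {1: 4000, 2: 6000} - ONU 2's BE request is filled first
```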

Figure 16 - Dynamic algorithm bandwidth allocation to three ONUs in low load (sub-cycles) and in high load (windows and sub-cycles)

5. Simulation Results

5.1 Initial Testing

Before starting the algorithm tests and comparisons, a simple test scenario was constructed to give a general perspective on how the prototype functions and to convey the general characteristics of the EPON as reflected by the simulation. The following section shows examples of several of the properties tested.

ONU queue population vs. grant size – These tests were conducted to verify the behavior of the ONU as a function of the allocated grant size. The allocation was done using the static allocation algorithm. A constant priority-1 source was used for all the simulations; the source was active between seconds 1 and 2 and then halted. The tests differ in the size of the allocated grant, which was set by configuring the number of ONUs in the PON for the allocation algorithm: the more ONUs are connected, the smaller the grant each one receives. The queue size is unlimited in these tests, in order to see how far the queue fills up in each case.

Three scenarios were tested: one where the grant allocation rate is about the rate of the source bandwidth (titled large-GATEs); one where the allocated rate is much smaller than the source bandwidth, so queue explosion was expected (titled small-GATEs); and one where the grant size is in between (titled medium-GATEs). The results are depicted in Figure 17, showing that the network behaves as expected.
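The expected queue behavior in the three scenarios can be mimicked with a toy fluid model. All names and numbers below are illustrative, not taken from the simulation: the backlog grows at roughly `source_rate - grant_rate` while the source is on, then drains at the granted rate.

```python
def queue_trace(source_rate, grant_rate, t_on=1.0, t_off=2.0,
                t_end=3.0, dt=0.01):
    """Fluid approximation of the ONU queue (in bits): fill while the
    source is active (seconds t_on..t_off), drain at the granted rate
    afterwards. Returns the queue size sampled every dt seconds."""
    q, trace, t = 0.0, [], 0.0
    while t < t_end:
        arrivals = source_rate * dt if t_on <= t < t_off else 0.0
        q = max(0.0, q + arrivals - grant_rate * dt)
        trace.append(q)
        t += dt
    return trace

# "large-GATEs": grant rate matches the source -> the queue stays near zero.
# "small-GATEs": grants far below the source -> the queue "explodes" while
# the source is on and drains only slowly afterwards.
```

Running it with, say, a 30 Mbps source against 30 Mbps and 5 Mbps grant rates reproduces the qualitative shapes of the large-GATEs and small-GATEs curves.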

Figure 17 - Queue size vs. simulation time; top: "large-GATEs", middle: "medium-GATEs", bottom: "small-GATEs"

End-to-end delay vs. the number of active ONUs – This simple test shows how the end-to-end delay increases with the number of ONUs. All sources are constant bit-rate at about 30 Mbps, and the static allocation algorithm is used. Data was collected through a series of simulations with a varying number of ONUs. Note that the delay starts to increase only after crossing a threshold of about 30 ONUs (which corresponds to a static allocation of about 30 Mbps, the same as the traffic sources' rate), and then increases linearly. The specific delay value is a function of the cycle size, but the main goal here was to observe the network's trend.

ONU queue overflow mode test – In this experiment, we set out to test the shared-memory mode of the ONU's sub-queues. The total queue size was set to 400,000 bits, and two priorities were fed with source data. The source for the higher priority was active between seconds 1 and 2, and the lower-priority source was active between seconds 1 and 2.5. Figure 18 shows how the higher priority dominates the total queue, dropping the lower priority's packets. When the higher-priority source is done and its packets are transmitted, the lower priority can again add packets to the queue and transmit them.

Figure 18 - ONU's sub-queues' size in bits vs. simulation time
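The ~30-ONU knee follows directly from the static share: delay stays flat while each ONU's slice of the upstream exceeds the 30 Mbps source rate, and grows once it falls below it. A back-of-the-envelope sketch of that relation (the helper names are ours):

```python
def static_per_onu_rate(line_rate_bps, num_onus):
    """Static TDMA: the upstream is divided evenly, so each ONU's share
    is line_rate / N (guard times and REPORT overhead ignored)."""
    return line_rate_bps / num_onus

def saturation_onus(line_rate_bps, source_rate_bps):
    """Number of ONUs beyond which a constant-rate source outpaces its
    static share, so queues and end-to-end delay grow without bound."""
    return line_rate_bps / source_rate_bps
```

Assuming an effective upstream throughput of roughly 900 Mbps (our assumption; the paper does not state the usable line rate here), `saturation_onus(900e6, 30e6)` gives 30 ONUs, consistent with the knee described above.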

Figure 19 - End-to-end delay in seconds vs. number of ONUs in the PON

5.2 Algorithm Comparison

For the comparison of the three algorithms' behavior, a 16-ONU scenario was constructed. We wanted to create a diverse environment that would allow us to examine as many aspects of comparison as possible within the same simulation. The network's packet sources generated Ethernet packets: the packet length is drawn from an exponential distribution with a mean of 3000 bits, but packets over 1500 bytes are discarded (the Ethernet MTU size). The packet inter-arrival time is exponentially distributed; the average source bit-rate was determined by setting different inter-arrival mean values. Three source modes were defined:
o High load – 100 Mbps
o Medium load – 50 Mbps

o Low load – 5 Mbps
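The source model lends itself to a short sketch. This is a simplified stand-in for the OPNET source, with hypothetical names; the inter-arrival mean below ignores the small bias introduced by discarding oversize packets.

```python
import random

MEAN_PKT_BITS = 3000
MTU_BITS = 1500 * 8  # Ethernet MTU in bits

def packet_source(target_bps, duration_s, seed=0):
    """Yield (arrival_time, size_bits) pairs mimicking the paper's sources:
    exponential sizes with mean 3000 bits, packets over the MTU discarded,
    and exponential inter-arrival times tuned toward the target bit-rate."""
    rng = random.Random(seed)
    # Approximation: mean inter-arrival chosen so that accepted packets
    # carry roughly target_bps on average.
    mean_interarrival = MEAN_PKT_BITS / target_bps
    t = 0.0
    while True:
        t += rng.expovariate(1.0 / mean_interarrival)
        if t >= duration_s:
            return
        size = rng.expovariate(1.0 / MEAN_PKT_BITS)
        if size <= MTU_BITS:  # oversize packets are discarded, not truncated
            yield t, size
```

Because the oversize packets that get discarded are also the largest ones, the realized bit-rate lands a few percent below the nominal target; in a simulation this is usually compensated by tuning the inter-arrival mean.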

The 16 ONU sources were configured as follows:
o 8 ONUs – high load
o 4 ONUs – medium load
o 3 ONUs – low load
o 1 ONU – its source was idle throughout the simulation.

Roughly half of the ONU sources of each load type were defined to be stable: they kept the same packet-generation parameters throughout the simulation. The rest of the ONUs were defined to have a burst that begins some time after the simulation starts and ends some time before it finishes. The simulation's duration was 0.69 seconds, and the timeline was divided as follows:
o Segment 0 (0–0.0001 sec) – Init margin, all sources idle.
o Segment 1 (0.0001–0.05 sec) – Only the stable ONU sources are active. The mean of the network's total bit-rate was 510 Mbps.
o Segment 2 (0.05–0.3 sec) – The bursty ONU sources join the stable ones, raising the mean total bit-rate to a peak of 1015 Mbps.
o Segment 3 (0.3–0.5 sec) – The bursty ONU sources all go down to a mean bit-rate of 5 Mbps. The mean total bit-rate is now 545 Mbps.
o Segment 4 (0.5–0.69 sec) – The bursty ONU sources stop transmitting; only the stable ones remain. Mean total bit-rate of 510 Mbps.

Allocation algorithm configuration:
o TQ – 16×10^-9 [sec]
o Cycle size – 500 [μsec]
o Cycles per window – 10 (for the dynamic algorithm)

Figure 21 - Queueing delay [sec] vs. time [sec] for a high-load ONU

In a 16-ONU scenario, the static algorithm would allocate about 30 Mbps per ONU. Clearly, a 100 Mbps high-load source will not be able to send all of its data, and its queue will explode. During time segment 1, the dynamic and semi-static algorithms exploit the fact that they do not allocate bandwidth to non-requesting ONUs, and are thus able to allocate more bandwidth to the requesting ONUs. In this segment the total requests are smaller than the network capacity, so the dynamic algorithm is able to allocate each ONU the amount it requested; the semi-static division of the available bandwidth is also sufficient even for the high-load ONUs' requests.
Thus, the queuing delay of packets under these two algorithms during this segment tends towards zero. During segment 2, the bursty ONUs start requesting bandwidth, so the dynamic and semi-static algorithms have to take them into consideration. The semi-static algorithm produces an output very similar to the static algorithm's (all ONUs except the idle one are requesting some amount, so the cycle's bandwidth is divided equally among 15 ONUs instead of 16 as in the static case). The dynamic algorithm is able to serve the low and medium loads according to their specific requests; when these ONUs request less bandwidth than their allowed limit, the dynamic algorithm is able to allocate the additional bandwidth to the high-load ONUs. During segment 3, the bursty ONUs' requests fall to 5 Mbps, but they are still requesting bandwidth; so while the dynamic algorithm adapts its allocations to the decrease in requests, the semi-static algorithm still allocates the same amount as before. During segment 4, the bursty ONUs stop requesting bandwidth; only now can the semi-static algorithm stop taking them into consideration and allocate more bandwidth to the other ONUs.

Figure 20 - Total bits granted [bits] vs. time [sec] for a high-load ONU

Sample results: Figure 20 and Figure 21 show, for each algorithm, the total number of bits that a high-load ONU was allocated and its queuing delay, respectively, in accordance with the timeline described above. Other results, not shown here, indicate that the queue size behaves similarly to the queuing delay; note that the queuing delay is proportional to the queue size. Figure 22 and Figure 23 show the same for a low-load ONU, and Figure 24 and Figure 25 for a medium-load ONU.

Some additional observations: According to Figure 22, the low-load ONU seems to be allocated smaller grants by the semi-static than by the static algorithm. This seems to contradict the fact that the smallest possible allocation for a requesting ONU is Grant = Cycle_Size / Number_of_ONUs, the same as in the static algorithm, so the static should never be able to allocate more than the semi-static. A closer look reveals the explanation: the low-load's request rate may sometimes be lower than the REPORT packet frequency in the semi-static algorithm. This means that there will be some cycles in which the low-load request is zero even though the source is active; in these cycles the ONU is not allocated any bandwidth for data. Consequently, the slope of the total allocation is more moderate.
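The minimal-grant formula can be checked numerically. The values below are illustrative (they use the 500 μs cycle and 16 ONUs from the comparison scenario); the helper name is ours.

```python
def static_grant(cycle_size_s, num_onus):
    """Static allocation: every requesting ONU gets an equal slice of
    the cycle, i.e. Grant = Cycle_Size / Number_of_ONUs."""
    return cycle_size_s / num_onus

# A 500 us cycle shared by 16 ONUs gives each ONU a 31.25 us slot per cycle.
slot = static_grant(500e-6, 16)
```

This equal slice is exactly what the static algorithm always hands out, and the floor below which the semi-static algorithm never drops for a requesting ONU.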

Figure 23 - Queueing delay [sec] vs. time [sec] for a low-load ONU

The phenomenon described in the previous observation also affects the queuing delay: a packet arriving during a cycle in which the ONU was not allocated any bandwidth is delayed longer than when there is an allocation (as in the static case). This is seen in Figure 23: during time segment 2, where the network operates in high-load mode, the delays for the semi-static and dynamic algorithms increase in comparison to the static algorithm. The delays under the dynamic algorithm during segment 2 are higher than the semi-static delays. This is because the semi-static algorithm allocates more bandwidth than actually requested, so packets arriving after a REPORT was sent may be transmitted before they are ever reported, which lowers the overall delay of a packet in the queue. When the dynamic algorithm operates in low-load mode, the sub-cycles grant all requests, and the next sub-cycle starts right after the current one ends. As seen in Figure 21, Figure 23, and Figure 25, the delays then tend towards zero once the queues stabilize, a feature that is not possible in a work mode with a fixed cycle size, such as the static and semi-static algorithms and the high-load mode of the dynamic algorithm.

Figure 22-Total Bits Granted [bits] vs. Time [sec] for a Low-Load ONU

Figure 24-Total Bits Granted [bits] vs. Time [sec] for a Medium-Load ONU


Figure 25 - Queueing delay [sec] vs. time [sec] for a medium-load ONU

Figure 26 shows the queue delay for a high-load burst ONU. At 0.3 sec the source drops from 100 Mbps to 5 Mbps, and it shuts down completely after 0.5 seconds. The figure shows that all three algorithms manage to empty the queue and handle the burst event, but the dynamic algorithm clearly empties the queue fastest, followed by the semi-static and then the static algorithm.

Figure 27 - Bits granted [bits] vs. time [sec] for each load type using the dynamic algorithm

In Figure 28 we see that the semi-static algorithm groups the medium- and high-load ONUs together, and that the allocations increase at similar rates for all requesting ONUs. The exception is the low-load mode, which displays a lower slope, as explained previously. Figure 29 shows the expected result that the static allocation treats all ONUs in the same manner and allocates the same bandwidth to each of them.

Figure 26 - Queueing delay [sec] vs. time [sec] for a high-load burst ONU

Figure 27, Figure 28, and Figure 29 show the total allocation of each algorithm to each of the ONU types. Figure 27 shows the results for the dynamic algorithm: the allocation is dependent on each ONU's specific needs, so each of the six types is treated separately. The reason that the burst ONUs continue to receive noticeable allocations even after they shut down is that the total allocation also includes allocations for REPORT messages, and the dynamic algorithm working in low-load mode requests a large number of REPORTs from all network stations (recall that the bandwidth dedicated to those REPORTs is not utilized at low network loads). Another observation is that the low-load of 5 Mbps is negligible.

Figure 28 - Bits granted [bits] vs. time [sec] for each load type using the semi-static algorithm


Figure 29 - Bits granted [bits] vs. time [sec] for each load type using the static algorithm

6. Conclusions

Although the research is only at its beginning, several conclusions are already evident:
For heterogeneous-source networks, the dynamic algorithm achieves the best division of the network bandwidth, as it adapts the allocations to each end-station's needs. It also handles high-load bursts most effectively.
For ONUs with low-load sources in a highly loaded network, the dynamic algorithm shows the worst delay performance. This is because the other algorithms allocate more than the low-load ONUs need, and those ONUs may use the extra allocations to send new packets faster, without the need to report them first. On the other hand, the rest of the ONUs in the network suffer more, because bandwidth is wasted while they still have outstanding requests. For the same reason, the static allocation provides better delay results for the low-load ONUs than the semi-static algorithm, since it keeps allocating bandwidth even when it does not receive a request.
The downsides of the static algorithm are clear: it wastes many empty grants that may be needed by other ONUs, and it prevents over-subscription of the network, limiting the number of ONUs and the bandwidth allocated per ONU.
For ONUs that request less bandwidth than the semi-static algorithm eventually allocates them, the semi-static algorithm takes on the downsides of the static algorithm described above. On the other hand, it handles situations with idle ONUs well. Time segments 3 and 4 demonstrate these two behavior characteristics.

7. Current Activities and Future Work

We are currently exploring the dynamic algorithm further, mostly studying its behavior under different attribute parameters and work modes (such as testing the algorithm's behavior without the low-load mode, or with a limit on the minimal size of a sub-cycle). Concurrently, we are working on enhancements to the simulation environment, adding more features and bringing the simulation closer to realistic results (such as the addition of propagation delays to the network). Future work includes adding traffic types to the algorithm, such as delay-critical traffic, and testing additional allocation algorithms of different natures.

References

[1] "Point to Multipoint Ethernet Passive Optical Network (EPON) Tutorial", Gerry Pesavento, EFM Task Force. http://www.ieee802.org/3/efm/public/jul01/tutorial/pesavento_1_0701.pdf
[2] "MPCP State of the Art", Ariel Maislos et al., EFM Task Force. http://grouper.ieee.org/groups/802/3/efm/public/jan02/maislos_1_0102.pdf
[3] "MPCP: Message Format", Onn Haran et al., EFM Task Force. http://grouper.ieee.org/groups/802/3/efm/public/jan02/haran_1_0102.pdf
[4] IEEE 802.3ah Ethernet in the First Mile Task Force web site, including task force meeting presentation material and meeting summaries. http://grouper.ieee.org/groups/802/3/efm/public/
[5] CableLabs, "Data-Over-Cable Service Interface Specifications – Radio Frequency Interface Specification" (status: interim), March 1999.

