
White Paper

Eclipse Transportation of Gigabit Ethernet


ETSI

Introduction
The DAC GE is a GigE plug-in card for the Eclipse INU/INUe. This paper introduces the DAC GE, its features, function and operation, and illustrates its applications within an Eclipse network.

Contents

DAC GE Features and Function
- DAC GE Description
- Transport Channel Capacity and Link Bandwidth
- Transport Channel / Link Options
- Modes of Operation
- Basic Port Parameters
- Advanced Port and Switch Parameters

Operational Guidelines
- Band Plan and System Gain Implications
- Platform Layouts
- QoS
- VLAN Tagging / LAN Aggregation
- RWPR
- Link Aggregation

Configuration and Diagnostics
- Configuration Screens
- Portal Diagnostic Screens
- ProVision Diagnostic Screens
- Throughput Testing

Example Networks
- Inter-site Network
- Adding In-house Capacity to a Telco Network
- DSL Network
- Municipal Broadband Network
- WiMAX Backhaul Network
- Metro Edge Switch

Summary

Glossary

Headquarters
Harris Stratex Networks, Inc.
Research Triangle Park
637 Davis Drive
Morrisville, North Carolina 27560
United States
Tel: 919-767-3230
Fax: 919-767-3233
www.harrisstratex.com

Copyright 2008 Harris Stratex Networks, all rights reserved.

DAC GE Features and Function


The DAC GE is one of more than twelve transport option cards available for the Eclipse INU/INUe. This section describes its features, capacity and Ethernet bandwidth options, modes of operation, and operational parameters.

DAC GE Description
The DAC GE transports Gigabit Ethernet data. It incorporates a full-featured layer 2 switch with support for link aggregation, enhanced RSTP, VLAN tagging, and extensive QoS options. Features include:
- Multiple user ports: three RJ-45 10/100/1000Base-T ports and one SFP optical 1000Base-LX port.
- Two channel ports for connection to radio or fiber links.
- Programmable port/channel switching fabric: transparent, VLAN, or mixed.
- Capacity increments of Nx2 Mbit/s, or Nx150 Mbit/s to a maximum of 300 Mbit/s per DAC GE.
- Native Ethernet traffic configurable to ride side by side with native PDH E1 traffic.
- Extremely low latency: less than 360 microseconds for 2000-byte packets.
- Comprehensive QoS policing and prioritization options (802.1p and DiffServ).
- VLAN tagging (802.1Q and Q-in-Q).
- RWPR™ enhanced RSTP (802.1D-2004).
- Layer 2 link aggregation (802.3ad).
- Layer 1 link aggregation.
- Flow control (802.3x).
- Jumbo frames to 9600 bytes.
- Comprehensive RMON and performance indicators (RFC 1757).
- User-friendly configuration tool with a rich graphical interface.
- Compatibility with DAC ES, IDU ES, and IDU GE 20x.
Figure 1: DAC GE


Figure 2 illustrates the basic operational blocks. Ports 1 to 4 connect via the physical interface to an Ethernet switch, which supports user configuration of switching fabric (operational mode), speed, QoS, VLANs, RSTP, layer 2 link aggregation, flow control, frame size, and interconnection to transport channels C1 and C2. The gate array (FPGA) manages signal framing to/from the INU backplane bus, which provides channel interconnection to a RAC or RACs for over-air transmission, or to a DAC 155oM for fiber transport. The fully integrated switch analyzes the incoming Ethernet frames for source and destination MAC addresses and determines the output port/channel over which the frames will be delivered. Payload throughputs are determined by the configured port and channel speeds (bandwidth), QoS settings, and internal and external VLAN information. Table 1 lists typical specifications for the single-mode optical port.
Table 1: SFP Optical Port Specifications
Wavelength:            1310 nm
Maximum launch power:  -3 dBm
Minimum launch power:  -9.5 dBm
Link distance:         Up to 10 km / 6 miles with 9/125 µm single-mode fiber;
                       550 m / 600 yards with 50/125 µm or 62.5/125 µm fiber

Figure 2: DAC GE Architecture

NOTE: Nominal Nx2, 150 or 300 Mbit/s throughputs are used in this paper. For 150 and 300 Mbit/s selections, measured maximum throughputs for a 1518 byte frame are 152 and 308 Mbit/s respectively.


Transport Channel Capacity and Link Bandwidth


DAC GE transport channel capacity is selected in multiples of 2 Mbit/s to a maximum of 200 Mbit/s, or in multiples of 150 Mbit/s to a maximum of 300 Mbit/s. The channels are mapped via the FPGA to the backplane bus for cross-connection to a RAC or RACs for a radio link, or to a DAC 155oM for a fiber link (Nx2 Mbit/s only). Radio link capacity is configured to provide the required traffic (payload) capacity. The resultant RF bandwidth is a function of the radio link capacity and the selected modulation rate. The exception to this rule is adaptive modulation, where for a given RF channel bandwidth the modulation rate, and hence capacity, is increased when path conditions permit. Radio link capacity can be dedicated to Ethernet, or shared with companion PDH or SDH traffic.

DAC GE Transport Channel Capacity


Ethernet channel throughput options depend on the Eclipse backplane bus setting: Nx2 Mbit/s or Nx150 Mbit/s. An Nx2 Mbit/s backplane supports a maximum of 200 Mbit/s on one channel (C1 or C2), or a total of 200 Mbit/s using both channels (C1 and C2). An Nx150 Mbit/s selection supports 300 Mbit/s on one channel (C1 or C2), or 150 Mbit/s on one or both channels. Each channel can be mapped to a different link, or to the same link. With an Nx2 Mbit/s setting a DAC GE is air-compatible with a DAC ES, IDU GE 20x, or IDU ES.

Link Capacity
Link capacity is configured to support the required Ethernet capacity, plus any companion E1 or SDH traffic. The maximum capacity that can be configured on one physical radio link, which is also the maximum supported on one INU/INUe backplane, is 200 Mbit/s for an Nx2 Mbit/s selection, or 300 Mbit/s for an Nx150 Mbit/s selection. The maximum capacity that can be configured on one fiber (DAC 155oM) link is 128 Mbit/s. Two or more physical links can be link aggregated to support a logical link with a capacity equal to the sum of the individual link capacities. In this way, co-located INUs with DAC GEs can be installed to provide up to a 1 Gbit/s connection.

Liquid Bandwidth
Liquid bandwidth refers to the Eclipse platform's ability to seamlessly assign link capacity to Ethernet traffic and to companion TDM E1 or STM1 traffic. This scalability is enabled by the unique universal modem design, where Ethernet and/or TDM data is transported natively. The modulation process does not distinguish between the types of data to be transported, Ethernet or TDM; data is simply mapped into byte-wide frames to provide a particularly efficient and flexible wireless transport mechanism.


The result is that when configured for Ethernet and/or TDM data, the full configured link capacity is available for user throughput.

For an Nx2 Mbit/s backplane selection, assignment is fully scalable in 2 Mbit/s / E1 steps to optimize throughput granularity for network planning purposes. This is illustrated in Figure 3, which indicates possible assignments to Ethernet and to companion NxE1 capacity for a selected link capacity.
Figure 3: Payload Assignment Graph for Ethernet and Companion E1 traffic
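As a minimal illustration of this granularity (a sketch only; the 64x2 Mbit/s link and 16xE1 split below are hypothetical planning values, not a recommended configuration):

    # Sketch: splitting an Nx2 Mbit/s link between Ethernet and companion E1s.
    LINK_N = 64                        # link capacity in 2 Mbit/s increments
    e1_circuits = 16                   # companion E1s, one 2 Mbit/s slot each
    ethernet_n = LINK_N - e1_circuits  # remaining slots available to Ethernet
    print(f"Ethernet share: {ethernet_n}x2 Mbit/s = {ethernet_n * 2} Mbit/s")
    # -> Ethernet share: 48x2 Mbit/s = 96 Mbit/s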

With an Nx150 Mbit/s backplane selection the link capacity options are 150 Mbit/s or 300 Mbit/s. This applies to a radio link only. For a 150 Mbit/s link, capacity is dedicated to Ethernet or to STM1. For a 300 Mbit/s link, the 300 Mbit/s can be dedicated to Ethernet, or to 150 Mbit/s Ethernet and 1xSTM1, or to 2xSTM1.

Radio Link Capacity and RF Bandwidth: Fixed Modulation


Radio link capacity is configured within the RACs, where, depending on the capacity selected, one or more modulation options are available to support different RF channel bandwidths. Two RAC types are available:
- RAC 30 or RAC 3X for standard link operation, where RAC 30 supports RF bandwidths up to 28 MHz, and RAC 3X supports bandwidths from 28 to 56 MHz.
- RAC 40 or RAC 4X for Co-channel Dual Polarized (CCDP) link operation, where two links are operated on the same frequency channel, one using the vertical polarization, the other the horizontal. RAC 40 supports an RF bandwidth of 28 MHz; RAC 4X supports bandwidths of 28, 40, or 56 MHz.

Three ODU types are available: ODU 300sp, ODU 300hp, and ODU 300ep.
- ODU 300sp supports Ethernet capacities to 80 Mbit/s on frequency bands 7 to 38 GHz, with QPSK or 16 QAM modulation options. Standard Tx power.


- ODU 300hp supports Ethernet capacities to 300 Mbit/s on frequency bands 6 to 38 GHz, with modulation options from QPSK to 256 QAM. High Tx power.
- ODU 300ep supports Ethernet capacities to 300 Mbit/s on the 5, 13, or 15 GHz bands, with modulation options from QPSK to 256 QAM. Extended Tx power.

Table 2 lists the RF bandwidths supported by Eclipse RAC/ODU combinations for Ethernet capacities from 40 to 300 Mbit/s.

Table 2: Ethernet Capacity, RF Bandwidth, Modulation and RAC/ODU

Ethernet Capacity    RF Channel Bandwidth   Modulation   RAC              ODU 300
(Mbit/s) (1)         (MHz)
40                   14                     16 QAM       RAC 30           sp, hp, ep
40                   28                     QPSK         RAC 30           sp, hp, ep
65                   14                     64 QAM       RAC 30           hp, ep
80                   28                     16 QAM       RAC 30           sp, hp, ep
100                  28                     32 QAM       RAC 30, 40       hp, ep
130                  28                     64 QAM       RAC 30, 40       hp, ep
130                  56                     16 QAM       RAC 3X, 4X       hp, ep
150                  28                     128 QAM      RAC 30, 3X, 40   hp, ep
150 (2)              40                     64 QAM       RAC 3X, 4X       hp, ep
150                  56                     16 QAM       RAC 3X, 4X       hp, ep
190                  28                     256 QAM      RAC 3X, 4X       hp, ep
200 (2)              40                     128 QAM      RAC 3X, 4X       hp, ep
200                  56                     64 QAM       RAC 3X, 4X       hp, ep
200                  56                     32 QAM       RAC 4X           hp, ep
300                  56                     128 QAM      RAC 3X, 4X       hp, ep

1. 10 and 20 Mbit/s options are also available.
2. 5, 6, 10 or 11 GHz bands only for 40 MHz operation.
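For planning sketches, Table 2 can be treated as a simple lookup, as in this partial, illustrative extract (rows with multiple RAC-dependent modulations are simplified; consult the full table for real designs):

    # Sketch: partial Table 2 lookup, (Ethernet Mbit/s, RF MHz) -> modulation.
    TABLE_2 = {
        (40, 28): "QPSK",      (80, 28): "16 QAM",   (100, 28): "32 QAM",
        (150, 28): "128 QAM",  (150, 56): "16 QAM",  (190, 28): "256 QAM",
        (300, 56): "128 QAM",
    }

    def modulation(capacity_mbits, bandwidth_mhz):
        return TABLE_2.get((capacity_mbits, bandwidth_mhz), "not supported")

    print(modulation(150, 28))   # -> 128 QAM
    print(modulation(300, 56))   # -> 128 QAM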

Radio Link Capacity and RF Bandwidth: Adaptive Modulation


Instead of using a fixed modulation rate to provide a guaranteed capacity and service availability under all path conditions, the modulation rate, and hence capacity, is increased when path conditions permit. On a typical link this means a higher capacity will be available for better than 99.5 percent of the time. Adaptive modulation provides a particularly cost- and traffic-efficient solution when used in conjunction with data prioritization, where with appropriate QoS settings all high priority traffic, such as voice and video, continues to get through when path conditions are poor. Outside these conditions, best-effort lower priority traffic, such as email and file transfers, enjoys data bandwidths that can be two or three times the guaranteed bandwidth.


Adaptive modulation for Eclipse is provided by the RAC 30A plug-in. It uses one of three automatically and dynamically switched modulations - QPSK, 16 QAM, or 64 QAM - selected by an adaptive modulation engine that can handle fading fluctuations up to 100 dB/s. Modulation switching is hitless: during a change to a lower modulation, remaining higher-priority traffic is not affected, and existing traffic is similarly unaffected during a change to a higher modulation. Table 3 highlights RAC 30A function, whereby for a given RF channel bandwidth of 7, 14 or 28 MHz, a twofold improvement in data throughput is provided for a change from QPSK to 16 QAM, and a threefold improvement for a change to 64 QAM. RAC 30A may be used as an upgrade solution for existing links to deliver higher capacities with minimum disruption and cost, and in new links where:
- A narrower channel bandwidth, such as 7 MHz, can be used instead of 14 MHz or 28 MHz, or
- An antenna up to two sizes smaller than normally required can be used.

Traffic can be Ethernet, TDM, or a mix of both, with a 2 Mbit/s or 1.5 Mbit/s granularity. RAC 30A is compatible with Eclipse ODU 300hp/ep/sp, and with the RAC 30 (V2 and V3), meaning that during an upgrade to RAC 30A operation there is no need to replace both ends at the same time. This simplifies the upgrade program while ensuring minimum downtime.
Table 3: RAC 30A Adaptive Modulation Figures
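The twofold and threefold relationships above can be expressed as a short sketch (illustrative only; the base QPSK capacities are approximate figures taken from Table 2, and real throughputs depend on configuration):

    # Sketch: approximate RAC 30A throughput versus active modulation.
    QPSK_BASE = {7: 10, 14: 20, 28: 40}                # approx. Mbit/s per channel
    MULTIPLIER = {"QPSK": 1, "16 QAM": 2, "64 QAM": 3}

    def throughput(bandwidth_mhz, modulation):
        return QPSK_BASE[bandwidth_mhz] * MULTIPLIER[modulation]

    for mod in MULTIPLIER:
        print(f"28 MHz, {mod}: ~{throughput(28, mod)} Mbit/s")
    # -> ~40, ~80, ~120 Mbit/s as the engine steps QPSK -> 16 QAM -> 64 QAM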

Higher Capacity Links


Where higher Ethernet capacities are required, two or more INUs are co-located to provide parallel-path links. These can be operated on different frequency channels, or, more commonly, two links are operated on the same frequency channel using XPIC RAC 40s or RAC 4Xs in a CCDP configuration. In this way, Ethernet capacities to 600 Mbit/s (2 or 4 links), 900 Mbit/s (3 links), or 1000+ Mbit/s (4 links) are efficiently enabled. CCDP operation enables two equal-capacity links to operate within the same frequency channel using the vertical and horizontal polarizations. When link-aggregated, the capacity of the individual links is combined on a single user interface.


Note that while four 300 Mbit/s links can be installed to support a combined over-air data capacity of 1200 Mbit/s, where link aggregation is enabled, the maximum that can be supported on one DAC GE interface is 1000 Mbit/s. For information on DAC GE link aggregation see Link Aggregation. Figure 4 summarizes the radio channel (link) options on an Nx150 Mbit/s selection for Ethernet capacities from 150 to 600 Mbit/s. Up to 300 Mbit/s a single INU is used. For higher capacities two co-located INUs are used. Normally a 300 Mbit/s Ethernet link is installed using one radio link. This requires an RF channel bandwidth of 56 MHz. But on RF bands where 56 MHz channeling is not supported, or where a free 56 MHz channel is not available, two 150 Mbit/s links can be installed on one 28 MHz RF path using RAC 40s or RAC 4Xs for CCDP operation. Even on bands where 300 Mbit/s / 56 MHz channels are available, it may still be preferable to operate two 150 Mbit/s / 28 MHz channels to secure the additional system gain of such links, and to take advantage of the inherent redundancy provided by link aggregation should one of the links fail. See Band Plan and System Gain Implications.
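The interface ceiling described above reduces to a one-line rule (a sketch only; the helper name is illustrative):

    # Sketch: usable aggregate on one DAC GE interface caps at the GigE rate.
    GIGE_LIMIT = 1000  # Mbit/s, one DAC GE user interface

    def usable_aggregate(link_capacities_mbits):
        return min(sum(link_capacities_mbits), GIGE_LIMIT)

    print(usable_aggregate([300, 300, 300, 300]))  # 1200 over air -> 1000 usable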

Figure 4: Ethernet Radio Path Options for 150 to 600 Mbit/s

Capacity License
Capacity is licensed according to the required RAC capacity. The license is software enabled within a compact flash card, which plugs into the NCC. The base license is 20 Mbit/s for up to 6 RACs. Beyond this base, capacity is licensed on a per-RAC basis. Licensed capacity is field upgradeable.

DAC GE Transport Channel / Link Options


The following diagrams illustrate DAC GE configurations for one- and two-channel operation on simple links, ring links, and aggregated links. Figure 5 illustrates single-channel operation:
- A RAC 30 is used for operation on RF channel bandwidths to 28 MHz, supporting Ethernet capacities to 150 Mbit/s.


- A RAC 3X is used for operation on RF channel bandwidths from 28 to 56 MHz, supporting Ethernet capacities to 300 Mbit/s.

Figure 5: Simple Single Channel Radio Link

Figure 6 illustrates co-channel CCDP operation from one INU. This has particular application where 300 Mbit/s Ethernet data is required but a 56 MHz radio channel is not available. Instead, the two RF links operate on the same 28 MHz RF channel and are link aggregated to provide a single 300 Mbit/s user interface.
- RAC 40s are used. Each is configured for 150 Mbit/s. RF channel bandwidth is fixed at 28 MHz.
- The two 150 Mbit/s Ethernet channels can, if required, be operated as two independent Ethernet connections (no link aggregation).
- The radio links can also be configured for adjacent-channel operation, using RAC 30s or RAC 3Xs.

Figure 6: Co- or Adjacent Channel Links

Figure 7 illustrates a ring node configuration where a single INU/DAC GE supports east and west traffic using RAC 30s for channel bandwidths to 28 MHz, or RAC 3Xs for bandwidths from 28 to 56 MHz. RWPR™ (Resilient Wireless Packet Ring) is enabled on the DAC GE to provide enhanced RSTP; an external RSTP switch is not required.
- With an Nx2 Mbit/s selection, ring link capacity can be split between Ethernet and E1 traffic. E1 circuits can be ring-protected using Eclipse Super-PDH ring protection, or simply configured for point-to-point operation. The maximum capacity supported for Ethernet is 100 Mbit/s - less if E1 circuits are also configured. (Backplane maximum is 200 Mbit/s: 100 Mbit/s east and 100 Mbit/s west.)
- With an Nx150 Mbit/s selection, only Ethernet traffic (150 Mbit/s) is supported on the ring. (Backplane maximum is 300 Mbit/s: 150 Mbit/s east and 150 Mbit/s west.)


Figure 7: Ring Configuration with One INU

Figure 8 illustrates a 300 Mbit/s ring node using 300 Mbit/s links. Two INUs, each with a DAC GE, are required, with an Ethernet cable connection between the DAC GEs. RWPR is enabled in both DAC GEs such that each is a separately managed switch on the ring. RAC 3Xs are required. The RF channel bandwidth is 56 MHz.

A 300 Mbit/s ring can also be configured using two link-aggregated 150 Mbit/s links. The parallel-path links are first link aggregated, and then RWPR ring protected. A 600 Mbit/s ring requires four INUs at each ring node, using two link-aggregated 300 Mbit/s links east and west.
Figure 8: Ring Configuration with Two Co-located INUs

Figure 9 illustrates 600 Mbit/s L2 link aggregation using two 300 Mbit/s links and RAC 4Xs for CCDP link operation. Both links operate on the same 56 MHz radio channel. A 600 Mbit/s aggregated link can also be configured using four co-path 150 Mbit/s links, or by using one 300 Mbit/s link with two 150 Mbit/s links. Similarly:
- A 450 Mbit/s link is established using one 300 Mbit/s link with one adjacent-channel 150 Mbit/s link, or by using three 150 Mbit/s links, where two can be configured for CCDP operation and the third configured on an adjacent channel.
- A 900 Mbit/s link is established using three 300 Mbit/s links. Two can be configured for CCDP operation; the third must be on an adjacent channel.


A 1 Gbit/s link requires four 300 Mbit/s links. With CCDP operation the links are paired to operate on two adjacent 56 MHz radio channels.

Figure 9: 600 Mbit/s Link

For more information on platform options refer to Platform Layouts.

Modes of Operation
DAC GE supports three operational modes - transparent, mixed, or VLAN - which determine the layer 2 (L2) port-to-port and port-to-channel switching fabric.

Transparent Mode
This is the default, broadcast mode, which includes options for L2 Link Aggregation.

Transparent Mode with Aggregation Disabled


All ports and channels are interconnected. It supports four customer LAN connections (ports 1 to 4) with bridging to two separate transport channels (C1 or C2).
Figure 10: Transparent Mode with Aggregation Disabled

To avoid a traffic loop, only C1 or C2 is used over the same radio path. C1 and C2 may be used where the DAC GE supports two back-to-back ring links where one channel is assigned to the east, the other to the west.

Transparent with Aggregation


Two or more links are configured to support a single logical link with a capacity that is the sum of the individual link capacities. Options are provided within Portal to select channel and/or port aggregation:


- A channel selection of C1 and C2 applies where the link or links to be aggregated are installed on the same INU as the DAC GE. A typical application is the CCDP configuration of two 150 Mbit/s links to provide a 300 Mbit/s aggregate capacity on one 28 MHz radio channel.
- A channel plus port selection applies where the link or links to be aggregated are installed on separate, co-located INUs. A typical application is the CCDP configuration of two 300 Mbit/s links to provide a 600 Mbit/s aggregate capacity on one 56 MHz radio channel.

A customizable aggregation weighting (load balancing) option is provided for use where the links to be aggregated are not of equal capacity. Balanced aggregation weights are applied by default. Where one of the aggregated links is of different capacity, such as a 300 Mbit/s link aggregated with a 150 Mbit/s link, the weighting on the 300 Mbit/s link should be set to 11 and on the 150 Mbit/s link to 5. The aggregation weights must be assigned such that they always total 16.
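This weighting rule can be sketched in a few lines (a hypothetical helper; in the product the weights are entered through Portal configuration, not an API):

    # Sketch: derive per-link aggregation weights that always total 16,
    # in proportion to link capacity (e.g. 300 + 150 Mbit/s -> 11 + 5).
    def aggregation_weights(capacities_mbits, total=16):
        cap_sum = sum(capacities_mbits)
        weights = [round(c * total / cap_sum) for c in capacities_mbits]
        weights[-1] += total - sum(weights)  # absorb rounding so the sum is 16
        return weights

    print(aggregation_weights([300, 150]))  # -> [11, 5]
    print(aggregation_weights([150, 150]))  # -> [8, 8] (balanced default)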

Figure 11 illustrates C1 and C2 aggregation; traffic on channels C1 and C2 is aggregated and bridged to ports P1 to P4 to support a common network connection on all ports. The default weighting applied is 8/8.
Figure 11: Transparent Mode with C1 and C2 Aggregation

For more information, including a layer 1 (L1) aggregation option, refer to Link Aggregation.

Mixed Mode
Mixed Mode supports two separate network connections where P1-C1 provides dedicated transport for port 1 traffic, and a second transparent/broadcast mode connection is provided with P2, P3, P4 and C2 interconnected.


Figure 12: Mixed Mode Port and Channel Assignment

The two channels can be assigned on the same path, or used to support east and west paths in a ring network using an external RSTP switch, where C1 is assigned to one direction and C2 to the other.

VLAN Mode
VLAN Mode supports four separate LAN connections. P1-C1 is the same as for Mixed Mode, where dedicated transport is provided for port 1 traffic. For ports 2, 3 and 4, three separate (virtual) LANs (VLANs 2, 3 and 4) are multiplexed to C2, with internal Q-in-Q tagging of the packets ensuring correct end-to-end matching of LANs over the link.
Figure 13: VLAN Mode Port and Channel Assignment

The two channels can be assigned on the same path, or used to support east and west paths in a ring network using an external RSTP switch, where C1 is assigned to one direction and C2 to the other.

Basic Port Parameters


User selection/confirmation is provided for the following port-based parameters:
- Enabled/Disabled. A port must be enabled to allow traffic flow.
- Name. A port name or other relevant port data can be entered.
- Connection Type and Speed. Provides per-port selection of auto or manual settings for half or full duplex operation. In auto, the DAC GE self-sets these options based on the traffic type detected.
- Interface Type. Provides per-port selection of auto or manual settings for the interface type: MDI or MDI-X (straight or crossover respectively).
- Priority. Provides a four-level (low, medium-low, medium-high, or high) priority setting for each port. This port prioritization only has relevance to ports using a shared transport channel. Traffic is fair-queued so that traffic on a low priority port is allocated some bandwidth when availability is contested.
- Port Up. Indicates that a valid connection with valid Ethernet framing has been detected.
- Resolved. Indicates the DAC GE has resolved an auto selection for speed/duplex.


Advanced Port and Switch Parameters


Priority Mapping
Provides selection of queue-controller operation for the following options. Only one option can be selected, and a selection applies to all ports.
- Port default. Enables the setting of a four-level port priority on each of the four ingress ports. This is the basic port priority option described in Basic Port Parameters above.
- 802.1p. Provides prioritization based on the three-bit priority field of the 802.1p VLAN tag. Each of the eight possible tag priority values (0 to 7, with 7 the highest) is mapped into a four-level (2-bit) priority. Mapping is user configurable.
- DiffServ. Provides prioritization based on the DSCP (Differentiated Services Code Point) field in the IP header. It is designed to tag a packet so that it receives a particular forwarding treatment, or per-hop behavior (PHB), at each network node. The six bits available enable 64 discrete DSCP values or priorities (0 to 63), with 63 the highest. Mapping is user configurable.
- No priority. Incoming packets are passed transparently.
For more information, see Traffic Priority.

Flow Control
Flow Control is implemented through use of IEEE 802.3x pause frames, which tell the terminal node to stop or restart transmission to ensure that the amount of data in the receive buffer does not exceed a high water mark. For more information, see Flow Control.

Disable Address Learning


Address learning is implemented by default to support efficient management of Ethernet traffic in multi-host situations. The option to disable address learning is primarily for use in a ring network where protection for the Ethernet traffic is provided by an external RSTP switch. To avoid conflict between the self-learning functions within the DAC GE and the external RSTP switches during path-failure situations, the DAC GE capability must be switched off. Otherwise, in the event of an Ethernet path failure and subsequent re-direction of Ethernet traffic by the external switch to the alternate path, the DAC GE will prevent re-direction of current/recent traffic until its address register matures and deletes unused/unresponsive destination addresses, which may take several minutes.

Maximum Frame Size


Maximum frame size sets the largest frame that can be transmitted without being broken down into smaller units (fragmented). The DAC GE supports jumbo frames to 9600 bytes; the configurable range is 64 to 9600 bytes, and a selection applies to all ports. The frame size should not be set above 7500 bytes for bi-directional traffic; 9600 bytes can be used for uni-directional requirements (frame sizes to 9600 bytes in one direction, with normal frame sizes in the other). For more information refer to MTU Size.

RWPR™

DAC GE incorporates RSTP in the form of RWPR-enhanced RSTP. RSTP is a link management protocol for layer 2 ring or mesh networks. It prevents the formation of network loops and provides path redundancy. When a link in the network fails, RSTP redirects traffic around the failure by unblocking a standby link. Service recovery (reconvergence) times are typically 2 to 7 seconds. RWPR (Resilient Wireless Packet Ring) employs patent-pending, Harris Stratex-developed mechanisms to enhance RSTP reconvergence times. RWPR essentially eliminates failure detection time (less than 1 ms) using a unique rapid failure detection (RFD) algorithm, then uses dynamic message timing to accelerate the RSTP convergence process. The result is carrier-class reconvergence times - as low as 50 ms.

For more information refer to Operational Guidelines: RWPR.

Link Status Propagation


Link Status Propagation enables externally-connected equipment to rapidly detect the status of a DAC GE channel. It operates by instantly forcing a port shutdown at both ends of the link in the event of a channel failure, such as a path fade, or at the far end of a link in the event of an Ethernet cable disconnection, or external device failure on a DAC GE port. A port shutdown is immediately detected by the connected equipment so that it can act on applicable alarm/switching options. For more information, refer to Link Status Propagation.

VLAN Tagging
DAC GE supports 802.1Q and Q-in-Q tagging:
- 802.1Q. Untagged frames are tagged.
- Q-in-Q. All frames are tagged, including those with existing tags.

Selections are made on a per-port basis. A VLAN ID can be entered (range 0 to 4095) or left as default. A VLAN membership filter can also be selected. With this capability DAC GE can tag, 802.1p prioritize, and aggregate Ethernet traffic from two, three or four ports onto a common trunk/channel. For more information on VLAN tagging, refer to Operational Guidelines, Customized VLAN Tagging.


Operational Guidelines
This section provides an introduction to DAC GE deployment. Topics addressed are:
- Band Plan and System Gain Implications
- Platform Layouts
- QoS
- VLAN Tagging / LAN Aggregation
- RWPR
- Link Aggregation

Band Plan and System Gain Implications


For the capacities of interest, ETSI band plans support RF channel bandwidths of 3.5, 7, 14, 28, 40 or 56 MHz with, depending on the capacity/bandwidth option, modulation rates from QPSK to 256 QAM.

Band Plan Implications


For 300 Mbit/s data, which is the maximum supported on one radio link, the RF channel size required is 56 MHz. However, in some countries or regions 56 MHz channeling may be unavailable, or is restricted to the higher bands (18 GHz and above). Where 300 Mbit/s data is required in such situations, two 150 Mbit/s links can be installed on 28 MHz channeling. The two links can be operated on the same 28 MHz channel using XPIC RAC 40s for CCDP operation, and where a single 300 Mbit/s user interface is required, DAC GE link aggregation is enabled.

System Gain Implications


The maximum data that can be transported on a channel is a function of the modulation rate: the higher the rate, the higher the capacity, but the lower the system gain. At the low end, QPSK or 16 QAM is used; at the top, 128 or 256 QAM. There is a marked difference in system gain. For example, a 150 Mbit/s ODU 300hp link on a 28 MHz channel (128 QAM) delivers a system gain at 18 GHz of 85.5 dB using a RAC 30v3, or 84.5 dB using a RAC 3X or RAC 40. The same 150 Mbit/s on a 56 MHz channel uses 16 QAM and delivers a system gain of 93.5 dB using a RAC 3X or 4X. The 8 to 9 dB improvement represents a difference of about one antenna size at both ends, but comes at the cost of a 56 MHz channel rather than a 28 MHz channel. Where 300 Mbit/s is required, the most obvious choice is one 300 Mbit/s link on a 56 MHz channel (128 QAM). However, 300 Mbit/s can also be provisioned using two 150 Mbit/s links on one 28 MHz channel (128 QAM) with CCDP operation, where aside from the benefits of 28 MHz channeling over 56 MHz, it may also be preferable for system gain reasons. Refer to Table 4, which compares 10⁻⁶ (BER) system gains at 18 GHz for the ODU 300hp.
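The "antenna size" equivalence can be sanity-checked with the standard parabolic antenna gain relation (a back-of-envelope sketch; real planning uses vendor gain tables and full path budgets). Changing dish diameter from D1 to D2 changes gain at that end by:

    \Delta G = 20 \log_{10}\!\left(\frac{D_2}{D_1}\right) \ \text{dB}

Stepping one common size, from 1.2 m to 1.8 m, gives 20 log10(1.8/1.2) ≈ 3.5 dB per end, or about 7 dB across both ends - broadly consistent with the 8 to 9 dB system gain difference quoted above.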


Table 4: Capacity Versus Bandwidth, Modulation and System Gain - ETSI 18 GHz

Link Capacity   RF Ch Bandwidth   Modulation    System Gain   RAC Type
300 Mbit/s      56 MHz            128 QAM       81 dB         RAC 3X, 4X
210 Mbit/s      56 MHz            64 QAM        88 dB         RAC 3X
200 Mbit/s      56 MHz            32 QAM        89 dB         RAC 4X
190 Mbit/s      28 MHz            256 QAM       78 dB         RAC 3X, 4X
150 Mbit/s      28 MHz            128 QAM       84.5 dB       RAC 30, 3X, 40
150 Mbit/s      28 MHz            128 QAM (1)   85.5 dB       RAC 30v3
150 Mbit/s      56 MHz            16 QAM        93.5 dB       RAC 3X, 4X

1. Enhanced system gain option for RAC 30v3.

From this data it can be seen that maximum practical hop distances are higher for solutions using dual 150 Mbit/s links, compared to a single 300 Mbit/s link:
- Dual 150 Mbit/s links on one 28 MHz CCDP channel deliver a 3.5 dB to 4.5 dB gain advantage over a single 300 Mbit/s link on a 56 MHz channel, which equates to about one antenna size at one end. Not only do the two 150 Mbit/s links provide better system gain, they use half the channel bandwidth.
- Dual 150 Mbit/s links on one 56 MHz CCDP channel deliver a 12.5 dB gain advantage over a single 300 Mbit/s link on a 56 MHz channel, which equates to more than one antenna size at both ends.
Where a capacity of 150 Mbit/s is not sufficient, 200 Mbit/s may provide a solution, particularly where system gain is an issue for the higher 300 Mbit/s option. While using the same 56 MHz channeling as a 300 Mbit/s link, the 200 Mbit/s option delivers a system gain advantage of 7 to 8 dB, which equates to about one antenna size at both ends.

Fixed Versus Adaptive Modulation


Fixed modulation refers to a fixed modulation rate. For a required link capacity there can be several modulation and RF channel size options, allowing a trade-off between system gain and RF channel size based on the modulation rate used. Where a high system gain is needed - one that cannot be met by an increased antenna size - a more robust, lower modulation rate is required, which in turn dictates the need for a larger RF channel size.

When a link path is planned, it is normally configured to provide optimum reliability under all path conditions. This is usually expressed as an availability figure, where 99.999% (five nines) availability over time is regarded as the industry standard objective. But the resilience needed to achieve this is typically required for just a small fraction of time, typically less than 0.1% to 0.5%. This means that for 99.9% to 99.5% of the time, a higher modulation rate could be used to achieve a higher capacity on the channel, or a smaller RF channel could be used to achieve the same capacity. This is where adaptive modulation provides real benefit.

Adaptive modulation refers to the dynamic adjustment of the modulation rate to ensure maximum data bandwidth is provided most of the time (on a given RF channel size), with a guaranteed bandwidth provided all of the time. For example, a link using robust QPSK modulation can have a system gain providing as much as 30 dB of fade margin, but that margin is only needed to protect the link against worst-case fades that may occur for just a few minutes in a year. For the rest of the year the margin is unused. By using less robust but more efficient higher modulation rates, the available fade margin can be transformed into more data throughput. Adaptive modulation dynamically changes the modulation so that the highest capacity is available at any given time.

RAC 30A is the Eclipse adaptive modulation RAC card. When used in conjunction with the QoS traffic prioritization options provided on the DAC GE, a link can be configured to ensure all high priority traffic continues to get through when path conditions deteriorate; only low priority best-effort data, such as email and file transfer traffic, is discarded. Adaptive modulation is especially applicable to longer hops where system gain is an issue in providing the fade margin needed for five-nines availability. Using fixed modulation, large and expensive antennas may be required to deliver the required path budget, which in turn may involve the added time and cost of installing high-strength support structures and associated planning approvals. With adaptive modulation, a higher system gain (lower modulation rate) is switched into service to avoid what would otherwise be a path-fade situation, and DAC GE QoS settings ensure all essential traffic is unaffected by the reduction in link capacity. The adjustment in system gain enabled by the RAC 30A between 64 QAM and QPSK operation is 16 to 18 dB, which broadly equates to two antenna sizes at both ends - for example, instead of 1.8 m antennas, 0.6 m antennas could be used. Refer to Transport Channel Capacity and Link Bandwidth for data on RAC 30A configuration options.

Figure 14 illustrates the RAC 30A modulation/capacity steps and the typical percentage availability of each over time. QPSK, as the most robust modulation, is used to support critical traffic with 99.999% availability. Less critical traffic is assigned to the higher modulations. Most importantly, the highest modulation is typically available for better than 99.5% of the time.
Figure 14: Adaptive Modulation Illustration

Redundancy Implications
Using a single INU, dual 150 Mbit/s links may be L1 or L2 link-aggregated to provide redundancy in the event one link fails. With appropriate traffic priority settings all high priority data will continue to get through in spite of the halved Ethernet bandwidth. To provide a similar level of redundancy on a single 300 Mbit/s link, hot-standby or diversity protection is required. Such redundancy is also provided where two INUs are used to support dual 190, 200, or 300 Mbit/s links, or where up to four INUs are used to support a total data capacity of 1 Gbit/s.


Platform Layouts
This section provides guidance on platform layouts for termination and intermediate nodes:
- A network termination node is a node used on single-hop links, or installed at the end of a network.
- An intermediate node is a node used within ring and star networks that have two or more links configured to different nodes.

Network Termination Nodes


One INU supports Ethernet connections to 200 Mbit/s with an Nx2 Mbit/s backplane, or 300 Mbit/s with an Nx150 Mbit/s backplane. The INU is installed with:
- One RAC/ODU and one DAC GE for 1+0 non-protected link operation.
- Two RAC/ODUs and one DAC GE for 1+1 protected/diversity operation.
- Two RAC/ODUs and one DAC GE for 1+1 co-channel (CCDP) operation.
- One NPC card where 1+1 protection of the NCC card is required.

Where both Ethernet and PDH or SDH data are to be transported over the link, the appropriate DAC cards are also installed:
- One or more DAC 16x or DAC 4x cards for NxE1.
- One DAC 1x155o, DAC 2x155o, or DAC 2x155e for STM1.

Figure 15 illustrates single-channel Ethernet-only operation with no NPC option.


Figure 15: Single Channel Link Node

Figure 16 illustrates CCDP link operation using RAC 40s or RAC 4Xs. Each link is configured to transport 150 Mbit/s, with the network connections to each held separate, or L2 or L1 link-aggregated to provide a single 300 Mbit/s logical link. A single dual-polarized antenna is used.
- RAC 40 CCDP supports 2x150 Mbit/s, 128 QAM links on one 28 MHz RF channel.
- RAC 4X CCDP supports 2x150 Mbit/s, 16 QAM links on one 56 MHz RF channel.

The two links may also be operated on adjacent channels using RAC 30s or RAC 3Xs.
- RAC 30 supports 2x150 Mbit/s, 128 QAM links using two adjacent 28 MHz RF channels.
- RAC 3X supports 2x150 Mbit/s, 16 QAM links using two adjacent 56 MHz RF channels.


Figure 16: 300 Mbit/s (2x150 Mbit/s) CCDP Terminal

Where Ethernet throughputs of 450 Mbit/s, 600 Mbit/s, 900 Mbit/s, or higher are required, two or more Eclipse Nodes are co-located. Figure 17 illustrates a 600 Mbit/s configuration, with each Node supporting 300 Mbit/s on one 56 MHz RF channel using RAC 4X CCDP operation. The two 300 Mbit/s streams can be held as separate Ethernet links or, as shown, L2 link-aggregated on one INU to provide a single 600 Mbit/s interface.
Figure 17: 600 Mbit/s (2x300 Mbit/s) CCDP Terminal

Figure 18 illustrates a 600 Mbit/s configuration using four 150 Mbit/s co-path links, which are configured as two separate 2x150 Mbit/s CCDP links. The four 150 Mbit/s data streams can be held as separate network connections or link-aggregated, as shown, to provide one 600 Mbit/s logical link. A single dual-polarized antenna is used. Two 28 MHz RF channels are required - each RAC 40 CCDP pairing occupies one 28 MHz RF channel.


Figure 18: 600 Mbit/s (4x150 Mbit/s) CCDP Terminal

More Information
For more information on CCDP link operation, including protected CCDP ring links, refer to the Eclipse User Manual, Volume II, Chapter 3, Co-channel Operation.

Ring and Star Ethernet Network Nodes


This section describes platform layouts and capacities for ring and star network nodes. The maximum capacity supported on a single Eclipse INU is 200 Mbit/s using an Nx2 Mbit/s backplane, or 300 Mbit/s using an Nx150 Mbit/s backplane. For a star node the backplane capacity calculation is a simple summing of the through capacity and any dropped capacity. For a simple 2-link Nx2 Mbit/s node, this means that 200 Mbit/s can be passed through the node, RAC to RAC, or through plus drop, where some of the capacity is RAC to RAC and the balance is RAC to DAC GE. Similarly, for a 2-link Nx150 Mbit/s node, 300 Mbit/s can be passed through RAC to RAC, or through plus drop, where 150 Mbit/s is passed through RAC to RAC and 150 Mbit/s is dropped, RAC to DAC GE. For an Ethernet ring node, capacities on the east and west links are terminated on a DAC GE. The backplane capacity used is a simple summing of these link capacities. On a ring the links are normally of equal capacity, meaning a maximum 100 Mbit/s ring is supported on an INU using an Nx2 Mbit/s backplane, or a 150 Mbit/s ring using an Nx150 Mbit/s backplane.
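These summing rules can be captured in a small validation sketch (a hypothetical helper; the actual limits and cross-connect rules are enforced by Portal and documented in the Node Capacity Rules appendix):

    # Sketch: backplane usage check for an Eclipse node.
    BACKPLANE_MAX = {"Nx2": 200, "Nx150": 300}  # Mbit/s

    def backplane_ok(backplane, link_capacities_mbits):
        """True if the summed link capacities fit the backplane maximum."""
        return sum(link_capacities_mbits) <= BACKPLANE_MAX[backplane]

    print(backplane_ok("Nx2", [100, 100]))    # 100 Mbit/s ring, east + west -> True
    print(backplane_ok("Nx150", [150, 150]))  # 150 Mbit/s ring -> True
    print(backplane_ok("Nx2", [150, 100]))    # exceeds 200 Mbit/s -> False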


150 Mbit/s Ring Node
Figure 19 illustrates a simple RWPR ring node, where the customer's LAN is supported directly from the DAC GE operating in transparent mode. If the RSTP function is provided on external switches (RWPR not enabled on the DAC GE), mixed mode is used, with P1/C1 supporting east or west and P2 to P4 the opposite direction on C2. Link Status Propagation should also be enabled. For information on RWPR operation, see RWPR in Operational Guidelines.
Figure 19: Single INU Node

300 Mbit/s Ring Node
Two topology options are illustrated: one for 300 Mbit/s radio links, the other for two CCDP 150 Mbit/s links with link aggregation. The backplane bus is set for Nx150 Mbit/s operation. Figure 20 illustrates the east/west 300 Mbit/s link option. 56 MHz channels are required. The two co-located INUs are interconnected via their DAC GE ports. RWPR is configured on each DAC GE; each operates as a separate RSTP bridge on the ring.

Figure 20: 300 Mbit/s Node

Figure 21 illustrates the co-path CCDP 150 Mbit/s link option. Compared to the previous solution, which required 56 MHz channeling, this solution uses one 28 MHz channel east and west.

Separate DAC GEs are required for the aggregation and RWPR ring functions:
- DAC GE A in INU 1 aggregates the east 150 Mbit/s links to provide a single 300 Mbit/s connection on ports 1 to 4, using transparent mode with C1 and C2 aggregation.
- DAC GE B in INU 2 aggregates the west 150 Mbit/s links to provide a single 300 Mbit/s connection on ports 1 to 4, using transparent mode with C1 and C2 aggregation.
- DAC GE C in INU 2 provides the RWPR ring switch function and hosts the local LAN. Note that the east, west, and local LAN interfaces are all port-connected; the DAC GE C transport channels are not configured (no backplane bus access is required).
The east and west aggregated links are treated as one logical link by RWPR. If one link in the aggregated pair fails, ring switching does not occur - both links must fail to initiate switching.

Figure 21: 300 Mbit/s Node: 2x150 Mbit/s CCDP Links East and West

600 Mbit/s Ring Node
Figure 22 illustrates a 600 Mbit/s solution. Paired RAC 4X CCDP 300 Mbit/s links are configured on single 56 MHz channels east and west. Separate DAC GEs are required for the aggregation and RWPR ring functions:
- DAC GE A in INU 1 west is configured for transparent mode, single-channel operation.
- DAC GE B in INU 2 west is configured for transparent mode with P1/C1 link aggregation to provide a 600 Mbit/s aggregate of west 1 and west 2 on P2.
- DAC GE C in INU 1 east is configured for transparent mode, single-channel operation.
- DAC GE D in INU 2 east is configured for transparent mode with P1/C1 link aggregation to provide a 600 Mbit/s aggregate of east 1 and east 2 on P2.
- DAC GE E in INU 2 east provides the RWPR ring switch function for the 600 Mbit/s east and west aggregated links and hosts the local LAN. The east, west, and local LAN interfaces are all port-connected; the DAC GE E transport channels are not configured (no backplane bus access is required).

The east and west aggregated links are treated as one logical link by RWPR. If one link in the aggregated pair fails, ring switching does not occur - both links must fail to initiate switching. Link status propagation should be configured on the A and C DAC GEs. See Link Status Propagation.

Figure 22: 600 Mbit/s Node: 2x300 Mbit/s CCDP Links East and West

Ethernet and E1 Ring Networks
Figure 23 illustrates a ring network configured for Ethernet and E1 traffic, which requires an Nx2 Mbit/s backplane setting. RWPR is enabled for the Ethernet data, and Eclipse ring-wrapping is configured for the E1 circuits. In the example shown:
- The network is transporting 76 Mbit/s Ethernet plus 16xE1 ring-protected circuits.
- The 76 Mbit/s Ethernet is carried on 38x2 Mbit/s circuits, which are configured as point-to-point circuits on the ring links.
- The 16xE1 Eclipse ring-protected circuits are all sourced/sunk at the core network site.
- At the core network site the resultant backplane bus usage is 200 Mbit/s (100x2 Mbit/s), which is the maximum for an INU/INUe:
  - The Ethernet capacity uses 38x2 Mbit/s east plus 38x2 Mbit/s west, for a total backplane usage of 76x2 Mbit/s = 152 Mbit/s.
  - The E1 circuits use 24x2 Mbit/s at the core site (each ring-protected drop-insert circuit uses 1.5 backplane bus circuit connections).

The ring links must support 54x2 Mbit/s (38 + 16), which means the best fit for link (RAC) capacity is 64x2 Mbit/s. 52x2 Mbit/s is the next lowest configurable link capacity, which could be used if the Ethernet capacity in the example were reduced to 72 Mbit/s, or the E1 ring circuits reduced to 14xE1. For more information on backplane bus rules, refer to the Node Capacity Rules appendix in the Eclipse User Manual, or to the Harris Stratex Networks paper Eclipse Super-PDH Ring Capacity Guide.
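The arithmetic in this example can be checked in a few lines (a sketch of the worked example above; the 1.5-connection figure per ring-protected E1 is inferred from the totals quoted):

    # Sketch: backplane and ring-link arithmetic for the Figure 23 example.
    ethernet_slots = 38 * 2           # 38x2 Mbit/s east + 38x2 Mbit/s west
    e1_slots = int(16 * 1.5)          # 16 ring-protected E1s -> 24 connections
    print((ethernet_slots + e1_slots) * 2)   # -> 200 Mbit/s, the INU maximum

    ring_link_slots = 38 + 16         # Ethernet + E1 carried on each ring link
    print(ring_link_slots)            # -> 54; best configurable fit is 64x2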

Figure 23: 100 Mbit/s Ring: Ethernet and E1 Circuits

QoS
QoS refers to parameters that affect traffic throughput: bandwidth, delay (latency), jitter, and loss. These are discussed under:
- Traffic Priority
- Latency
- MTU Size
- Flow Control

Traffic Priority
QoS is mostly referred to in the context of a priority service for selected traffic, where considerations go hand-in-hand with bandwidth: the more restricted the Ethernet bandwidth, the greater the likelihood of delays and dropped frames, and the greater the need to provide priority for delay-sensitive multimedia traffic such as voice and video. Packetized voice (VoIP), while not demanding of bandwidth, is intolerant of delay, jitter, and packet loss. This also applies to video, but with the added complication that video is often very bursty, with high bandwidth demands during scenes containing considerable movement.

Priority servicing also applies where a service differentiation is required, such as the prioritization of one customer's traffic over another's. Generally, where throughput bottlenecks occur traffic is buffered, and it is how traffic in a buffer is queued and prioritized for transmission that concerns QoS priority management. The most common tools for this purpose are port prioritization and frame/packet tagging. Note that DAC GE prioritization is fair-weighted to ensure that even low priority traffic is afforded some bandwidth when the available bandwidth is contested.


Port Prioritization
Port prioritization prioritizes traffic on one port over another. It is only applicable where two or more ports share a common channel. With DAC GE a four-level port prioritization applies: low, medium low, medium high, and high. Operation requires a Port Priority selection in the Priority Mapping screen.

Tag Prioritization
Unlike port prioritization, frame/packet tag prioritization allows traffic on one port to be prioritized over other traffic on the same port. Traffic is assigned a priority tag within the layer 3 DSCP (DiffServ) header or the layer 2 Class of Service (CoS / 802.1p) header, which depending on the application may be set from within the application itself, or applied by a network device such as a switch with a port-based tagging capability. DAC GE can be configured to prioritize traffic using either tagging scheme. Incoming tagged frames are read, and each frame is queued according to its tag priority level and the prioritization mapping applied within the DAC GE for tagged frames.
- 802.1p priority: incoming 802.1p tagged frames are queued and sent in order of priority. DiffServ tagged and untagged packets are not prioritized (unless subject to port prioritization). Note that 802.1p prioritization is set within the 802.1Q VLAN tagging options.
- DiffServ priority: incoming DiffServ tagged frames are queued and sent in order of priority. 802.1p tagged and untagged packets are not prioritized (unless subject to port prioritization).

As the DAC GE has a four-level priority stack, user-configurable mapping is applied to accommodate the 8 levels of 802.1p priority tagging and the 64 DSCP values of DiffServ. Table 5 shows the default mapping.
Table 5: DAC GE Default Priority Mapping

DAC GE Priority Level   802.1p Priority   DiffServ Priority
High                    6, 7              48 - 63
Medium High             4, 5              32 - 47
Medium Low              2, 3              16 - 31
Low                     0, 1              0 - 15

802.1p or DiffServ queuing priorities are not contested with port priority settings. The priority mapping options provide selection of 802.1p, DiffServ, port priority, or no priority. A selection applies to all ports. DAC GE includes a layer 2 tagging capability. See VLAN Tagging / LAN Aggregation.
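The default mapping in Table 5 reduces to taking the top two bits of each field. A minimal sketch (illustrative only; in the product this mapping is set through Portal and is user configurable):

    # Sketch: DAC GE default mapping of 802.1p (3 bits) and DSCP (6 bits)
    # into the four internal priority levels (0 = Low ... 3 = High).
    LEVEL_NAMES = ["Low", "Medium Low", "Medium High", "High"]

    def level_from_8021p(pcp):
        return LEVEL_NAMES[pcp >> 1]    # 0-7 -> 0-3

    def level_from_dscp(dscp):
        return LEVEL_NAMES[dscp >> 4]   # 0-63 -> 0-3

    print(level_from_8021p(5))   # -> Medium High (802.1p values 4, 5)
    print(level_from_dscp(50))   # -> High (DSCP 48 - 63)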

Latency
Network latency refers to the time taken for a data packet to get from source to destination. For an IP network it is particularly relevant to voice (VoIP) and videoconferencing: the lower the latency, the better the quality. Latency is typically measured in microseconds or milliseconds for one-way and two-way (round-trip) transits. For phone conversations a one-way end-to-end latency of 200 ms is considered acceptable. Other applications are more tolerant: Internet access should be less than 5 seconds, whereas for non-real-time applications such as email and file transfers, latency issues do not normally apply. For Eclipse 150 Mbit/s or 300 Mbit/s links, the per-hop latency (delay time) is captured in Table 6. The delays are primarily a function of:


- Link (RAC) FEC/interleaver operation, where the common buffer size means the buffer is filled and emptied at a faster rate for 300 Mbit/s links compared to 150 Mbit/s or lower.
- Normal packet processing delays within switch port/channel buffers: the larger the frame size, the higher the latency. (At 64 bytes the latency is primarily due to link FEC/interleaver operation.)

Table 6: Typical One-way Latency for 150 and 300 Mbit/s Single-link Throughputs

Frame Size (bytes)   150 Mbit/s   300 Mbit/s
64                   150 µs       79 µs
128                  165 µs       81 µs
512                  220 µs       99 µs
1024                 295 µs       125 µs
1518                 367 µs       149 µs

From this it can be seen that the latency of a DAC GE / DAC GE link, or of multiple links, is well within the VoIP maximum of 200 ms. Other contributors to overall latency are the devices connected to the Eclipse network. For a VoIP circuit these will include the external gateway processes of voice encoding and decoding, IP framing, packetization, and jitter buffers. Contributing to external network latency are devices such as routers and firewalls.
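A simple budget check across a multi-hop route (a sketch; the per-hop figures come from Table 6 worst-case 1518-byte frames, while the external codec/router allowance is a hypothetical placeholder):

    # Sketch: one-way latency budget for VoIP over several radio hops.
    PER_HOP_US = {150: 367, 300: 149}   # worst case, 1518-byte frames (Table 6)

    def one_way_latency_ms(hops, link_mbits, external_ms=60.0):
        """Radio hops plus an assumed external (codec/router) delay, in ms."""
        return hops * PER_HOP_US[link_mbits] / 1000 + external_ms

    total = one_way_latency_ms(hops=10, link_mbits=150)
    print(f"{total:.1f} ms (budget: 200 ms)")   # -> 63.7 ms, comfortably inside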

MTU Size
Within a GigE network jumbo-sized MTUs (Maximum Transmission Units) are supported. MTU refers to the byte size of a layer 3 packet before the addition of a header and footer in the layer 2 encapsulation (framing) of a packet. For 10 and 100 Mbit/s Ethernet the MTU maximum is typically 1500 bytes. For GigE, MTU sizes to 9000+ bytes are supported.

Jumbo frames are used where large amounts of data must be delivered with best efficiency. In practice care must be exercised, as not all network devices from source to destination may be able to handle jumbo frames, or at least large jumbo frames. DAC GE supports jumbo frames to 9600 bytes for uni-directional traffic (1), and to 7500 bytes for bi-directional traffic. In practice frame sizes above 4000 bytes are seldom used by network operators.

Layer 2 Framing
Layer 2 switch framing adds a 14-byte (minimum) MAC/LLC (Media Access Control / Logical Link Control) header and a 4-byte FCS (Frame Check Sequence) footer, resulting in a 1518-byte frame for a standard 1500-byte packet. The header and footer sizes remain constant for smaller packets, meaning they represent a higher percentage of the frame as packet size reduces, and conversely a smaller percentage as packet size increases. See Figure 24 for a typical 1500-byte packet, and Table 7 for a content description.

1. Frame sizes to 9600 bytes in one direction, and normal frame sizes in the other direction.

Figure 24: Ethernet Frame Structure

Table 7: 10/100 Mbit/s Ethernet Frame Content Description

Field              Description                                           Bytes
IFG                Inter-Frame Gap                                       12 min.
PRE                Preamble (clocking), plus SFD                         8
MAC/LLC            Media Access Control / Logical Link Control:
                   - Standard Ethernet frame: Destination Address (6),
                     Source Address (6), Length/Type (2)                 14
                   - 802.1Q frame: Destination Address (6), Source
                     Address (6), VLAN Q tag (4), Length/Type (2)        18
                   - 802.1Q-in-Q frame: Destination Address (6),
                     Source Address (6), VLAN Q tag (4), VLAN Q-in-Q
                     tag (4), Length/Type (2)                            22
IP Header                                                                20*
TCP Header                                                               20
Application Data                                                         1460**
FCS                Frame Check Sequence                                  4

* Typically 20 bytes but can be up to 60 bytes.
** 1460 bytes is the MSS (Maximum Segment Size) assuming a 20-byte IP header.
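From the table values, the wire efficiency of a standard frame follows directly (a back-of-envelope check, counting the IFG and preamble as wire overhead):

    \text{efficiency} = \frac{\text{application data}}{\text{frame} + \text{IFG} + \text{PRE}} = \frac{1460}{1518 + 12 + 8} \approx 94.9\%

For a minimum 64-byte frame carrying a 46-byte payload, the same ratio falls to 46/84 ≈ 54.8%, which is why jumbo frames improve bulk-transfer efficiency.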

Path MTU
Where the end-to-end MTU (path MTU) is larger than the source MTU there is no problem. If this is not the case, and one or more devices in the path have a lower link MTU, packets will be fragmented, dropped, or returned to source, depending on whether or not the Don't Fragment bit has been set in the IP header, and on whether or not correct MTU negotiation occurs. Ideally, TCP/IP at source will attempt to discover the downstream MTU and adjust (negotiate) packet sizes to the smallest link MTU in the path. The process can be summarized as:


Where a router in the path discovers that it cannot send a packet on its outgoing port because the downstream path MTU is too low and fragmentation is not allowed, then under MTU negotiation it responds to source with an ICMP (Internet Control Message Protocol) message carrying the destination address that failed and the MTU allowed. The source then resets the packet size to accommodate the path MTU. Compared to always sending small packets, or allowing packets to be fragmented en route and reassembled at destination, this method is the most efficient: it uses the least network resources.

Where operators have complete control of the path, such as on a company LAN, ensuring all link MTUs support jumbo frames is not difficult. But on a WAN this is not the case, therefore discovering the path MTU is important when determining expected performance. Note that some router/switch vendors use proprietary trunking protocols between their products, which require jumbo frame support. For example ISL (Cisco) uses a frame size of 1554 bytes. In legacy networks, which have a mix of Fast and Gigabit Ethernet devices, and therefore the potential for MTU incompatibility, one solution is to segregate the GigE jumbo-frame traffic onto a VLAN that has a known path MTU; all packets for jumbo-framed transmission are tagged and partitioned in a VLAN in which all equipment supports the required MTU.
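On a Linux host, the discovered path MTU can be read directly from Python (a platform-specific sketch: the socket options are Linux-only, the constant values are taken from linux/in.h, and the probe host is a placeholder):

    # Sketch: read the kernel's discovered path MTU for a route (Linux only).
    import socket

    # Constants from <linux/in.h>; not always exported by the socket module.
    IP_MTU_DISCOVER = 10
    IP_PMTUDISC_DO = 2      # always set the Don't Fragment bit
    IP_MTU = 14             # query the currently known path MTU

    def path_mtu(host, port=80):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        s.connect((host, port))    # kernel learns the route's MTU
        mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
        s.close()
        return mtu

    print(path_mtu("example.com"))   # e.g. 1500 on a standard Ethernet path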

Applications
Frame size selection within the DAC GE has particular application where it is installed as an edge device. It can be set to provide a policing function to ensure over-sized frames are not sent; the local (source) TCP layer should discover this downstream MTU restriction and resize packets accordingly. Restricting the MTU at the edge avoids unnecessary loading on downstream network devices up to the point of the restriction. Note that jumbo frames may cause instability on a network for applications that are sensitive to delay and jitter, such as voice, where small 64 to 128 byte frame sizes are typically used. Where there is potential for applications using large frames to negatively impact applications using small frames, restricting the network MTU is an option. When operating with a DAC ES / IDU ES at the far end, bear in mind that its maximum frame size setting is 1536 bytes.

Flow Control
Flow control is a tool to assist traffic management in a congested network. Where traffic increases beyond the carrying capacity of a network, congestion will occur and packets will be discarded. For most protocols this does not pose a significant issue, as lost packets will be retransmitted. However, for voice, video and some data applications, lost data cannot be recovered and will be observed by the customer as a poor or unacceptable service. The Flow Control option on the DAC GE can mitigate this problem. While the DAC GE memory buffer absorbs short traffic bursts to smooth out delivery, if throughput increases beyond the carrying capacity of the radio its buffer will fill, whereupon traffic will be discarded unless flow control is enabled. With flow control, a high-water point is established in the buffer. When triggered, an 802.3x pause frame is sent back towards the source Ethernet address to force the sending device to reduce the rate at which it is forwarding traffic. This supports graceful reduction of traffic and results in radio link bandwidth being used more efficiently. For it to be fully effective, all devices in the end-to-end path must support flow control.
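For illustration, the sketch below lays out the fields of an IEEE 802.3x pause frame of the kind described above; the source MAC address is hypothetical, and the FCS (appended by the MAC hardware) is omitted:

```python
# IEEE 802.3x pause frame layout (FCS appended by the MAC, not shown).
pause_quanta = 0xFFFF  # pause duration, in units of 512 bit-times

frame = (
    bytes.fromhex("0180c2000001")      # reserved MAC-control multicast address
    + bytes.fromhex("020000000001")    # source MAC (hypothetical)
    + (0x8808).to_bytes(2, "big")      # EtherType: MAC control
    + (0x0001).to_bytes(2, "big")      # opcode: PAUSE
    + pause_quanta.to_bytes(2, "big")  # requested pause time
)
frame += bytes(60 - len(frame))        # zero-pad to the 60-byte minimum
print(frame.hex())
```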


Figure 25: Flow Control Mechanism

In a congested network, attention must also be given to prioritizing traffic so that more important traffic is queued ahead of less important traffic. See Traffic Priority.

VLAN Tagging / LAN Aggregation


VLANs (virtual LANs) enable the aggregation of two or more LANs (or VLANs) for transport as separate (segregated) network entities on a common trunk. For the DAC GE it means that up to four separate networks (sub-networks) can be transported over one radio channel. If a network is not segmented (single LAN), every message sent is broadcast throughout the LAN. Segmentation onto VLANs means that each is operated as a separate network; traffic on one will not be seen on another, resulting in more efficient and secure network groupings. Groupings might be user or customer based, each on their own VLAN.

DAC GE provides options to automatically set VLAN tags on ingressing traffic, or to customize the process.

Automatic VLAN Tagging


An internal, automatic option is enabled in the VLAN Mode of Operation, where all traffic ingressing ports 2, 3 and 4 is transported over radio channel 2 to its matching (same-number) port at the far end of the link using 802.1Q-in-Q tagging. This VLAN mode also supports a dedicated port 1-to-port 1 connection over radio channel 1. See VLAN Mode. The VLAN Mode is only for use in DAC GE-to-DAC GE links; the VLAN tagging does not exist beyond the far-end DAC GE. Figure 22 illustrates a typical application, where a DAC GE link is used to transport four separate LANs/VLANs between customer sites. The VLAN Mode does not assign a priority on the VLANs. However, each port can be port-prioritized, so that traffic ingressing a port can be prioritized against traffic from another port. See Traffic Priority.

Customized VLAN Tagging


The DAC GE VLAN Tagging screen supports 802.1Q and 802.1Q-in-Q tagging. The process includes 802.1p prioritization tagging.
- 802.1Q: only untagged frames are tagged.
- 802.1Q-in-Q: all frames are tagged; those with an existing tag are double-tagged.

These options are only available for Transparent mode, and on ports P2 to P4 for Mixed mode. A VLAN ID can be entered (range 0 to 4095) or left at the default. The IDs at each end of the VLAN must match (have the same ID number).
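A hedged sketch of the double-tagging described above, using the Scapy packet library (assumed to be installed); the addresses and VLAN IDs are illustrative:

```python
from scapy.all import Ether, Dot1Q, IP

# 802.1Q: one tag carrying the VLAN ID and 802.1p priority.
single = (Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")
          / Dot1Q(vlan=100, prio=3)
          / IP(dst="192.0.2.1"))

# 802.1Q-in-Q: an outer (service) tag stacked in front of the customer tag.
# Downstream 802.1p devices act on the outer tag's priority.
double = (Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")
          / Dot1Q(vlan=200, prio=5)    # outer tag, added at the trunk edge
          / Dot1Q(vlan=100, prio=3)    # original customer tag, untouched
          / IP(dst="192.0.2.1"))

double.show()
```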


A VLAN membership filter can also be selected. Only VLAN IDs within the membership range are allowed to transit the relevant port/channel. With this capability the DAC GE can tag, prioritize and aggregate traffic from two, three or four ports onto a common radio trunk. At the far end of the DAC GE trunk, which may be over multiple hops, options are provided to remove the VLAN tags applied by the DAC GE, or to retain them intact for VLAN traffic management at downstream devices. VLAN tagging is typically used at the edge of a network to tag and assign a traffic priority. In this way up to four separate LANs (ports 1 to 4) can be carried as virtual LANs on a single Eclipse radio trunk; Eclipse acts both as an edge switch and as the radio trunk link to the core network. Each VLAN is held separate on the trunk and accorded the priority set within the 802.1p priority stack at all intermediary 802.1p devices. This allows network providers to discriminate on the service priority accorded to each VLAN over their network. See Traffic Priority.

Traffic Prioritization and VLAN Tagging Process


Figure 26 shows a simplified view of DAC GE processing of priority and VLAN tagging. A frame ingressing a port is checked for frame priority at the Categorizer, which, based on the DAC GE priority mapping settings, determines its status at its port ingress queue.
- The DAC GE switch has an ingress and an egress queue on each port. Between them they share a 16 kbyte memory pool for storing frames. When an egress queue exceeds a high threshold, the ingress port(s) are stopped from forwarding more data, meaning congestion is fed back to the ingress queue(s).
- Port priority (low to high) is normally used to prioritize traffic on one port over that from another port where ports share a common channel (egress port).
- The same four-level, low-to-high prioritization mechanism is used to prioritize ingressing VLAN-tagged traffic against untagged traffic (a sketch of the four-level dequeue follows this list).

Dequeued frames are forwarded on an internal bus to the egress queue of the output port(s) or channel(s).
- For a port-to-channel connection, frames are forwarded from a port ingress queue to the channel egress queue.
- With VLAN aggregation, all frames from multiple ports are forwarded from each port to the egress queue of the assigned channel port.

Dequeued egress frames are forwarded for transmission via the transmit modifier where, if configured, a VLAN tag is added. Adding a VLAN tag does not impact the prioritization/queuing of an ingressing frame on the DAC GE that has applied the tag; its impact applies at downstream switches/routers (up to the point where the tag is stripped).
- At the downstream DAC GE, the channel ingress port has no prioritization capability, meaning 802.1p VLAN prioritization applied at the upstream DAC GE has no effect on the prioritization of VLANs ingressing the downstream DAC GE. But as the DAC GE-to-DAC GE trunk has a fixed end-to-end capacity (over one or multiple hops), what is transmitted at one end is received at the other; there is no intermediate switch function or resizing of trunk capacity that would otherwise benefit from queuing and prioritization.
- However, if the associated downstream DAC GE port is connected to a lower capacity interface, traffic from the resulting egress queue will be forwarded in priority order according to the priority settings for the port.
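The four-level queuing just described can be pictured with the minimal sketch below. It assumes strict-priority service purely for illustration; the DAC GE's actual scheduling behavior follows its configured QoS settings:

```python
from collections import deque

# Four queue levels; index 3 is the highest priority.
queues = [deque() for _ in range(4)]

def enqueue(frame, level):
    queues[level].append(frame)

def dequeue():
    # Strict priority: always drain the highest non-empty queue first.
    for q in reversed(queues):
        if q:
            return q.popleft()
    return None

enqueue("bulk file transfer", 0)
enqueue("voice packet", 3)
print(dequeue())   # -> "voice packet"
```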

If incoming frames have a pre-existing VLAN tag, the priority setting assigned by an additional DAC GE VLAN tag (the outer tag) takes precedence at downstream 802.1p prioritized devices.

Figure 26: DAC GE Priority Processing and VLAN Tagging

The VLAN ID and priority of a Q or Q-in-Q tagged frame are carried in the MAC/LLC header; see Layer 2 framing in MTU Size. It is important to configure a DAC GE according to its function:
- Where the requirement is simply to transport up to four LANs/VLANs over a DAC GE-to-DAC GE trunk, use the VLAN Mode of Operation.
- Use the VLAN Tagging capability where VLAN tags must be retained beyond a DAC GE-to-DAC GE trunk, or where traffic from all four ports must be aggregated onto a common channel.
- The tagging options presented within the DAC GE VLAN Tagging screen must be carefully analyzed to ensure appropriate selection. See VLAN Tagging for the options.
- The VLAN Tagging option most used for trunk aggregation is Q-in-Q.

Figure 27 illustrates the concept of LAN/VLAN aggregation using the VLAN operational mode and, separately, the customizable VLAN tagging options.


Figure 27: LAN Aggregation

Figure 28 illustrates how a VLAN trunk is carried into a WAN. VLAN tagging is applied at Site X using Q-in-Q. At Site Y, Do Nothing is selected in the VLAN Tagging screen, so that the tags applied at Site X are passed through intact to the external switch.
Figure 28: LAN/VLAN Aggregation with Transparent Trunk Extension

Figure 29 depicts a practical application with a DAC GE configured as an edge switch in a Metro network. Four VLANs, each from different companies, are aggregated on an Eclipse radio trunk; Eclipse acts as the aggregating edge switch and radio access point to the wider network.


Figure 29: Eclipse with DAC GE as a Metro Edge Switch

RWPR™
Within Ethernet ring networks, data can be protected using the redundancy available when two or more paths are provided between common end-points. Such networks may be built entirely within an Eclipse ring, or within a network combining Eclipse, third-party devices and/or other Harris Stratex products. The contention that would otherwise occur with the arrival of looped Ethernet frames is managed by the Rapid Spanning Tree Protocol (RSTP), which creates a tree that spans all switches in the ring, forcing redundant paths into a standby, or blocked, state. If one network segment subsequently becomes unreachable because of a device or link failure, the RSTP algorithm reconfigures the tree to activate the required standby path. RSTP is defined within IEEE 802.1d-2004 and is an evolution of the Spanning Tree Protocol (STP).

Normal RSTP service recovery (reconvergence) involves a progressive exchange of messages between all nodes, beginning with those immediately adjacent to the failure point. Reconvergence times normally range between 2 and 7 seconds, depending on the failure detection process. The RWPR implementation within the DAC GE accelerates RSTP reconvergence through a unique rapid-failure-detection (RFD) mechanism and dynamic Hello timing, to deliver reconvergence times as low as 50 ms. RWPR failure detection provides an end-to-end solution across each DAC GE to DAC GE link, meaning it acts independently of any intermediate hops (Eclipse repeaters or external switches). RWPR requires Eclipse SW release 3.4 or later; DAC GE users on earlier SW gain access to RWPR capabilities through a software upgrade.

RWPR benefits include:
- Carrier-class network re-convergence times, to better support time-sensitive service level agreements.
- Reliable and consistent RSTP operation, even in the presence of link fading.
- Support for radio and fiber links; both may be included in Eclipse ring networks.
- Aggregated links may be used within RWPR ring topologies to support 600+ Mbit/s rings.


- Lower cost network solutions: edge devices do not need to support RSTP on Eclipse connections.

Where an Eclipse ring network is to include external switches, or Eclipse radios are installed within an existing network to establish a ring, there are two primary options, given that RSTP on external switches cannot inter-operate with Eclipse RWPR:
- The RSTP function is provided by the DAC GEs using RWPR, and the external switches are configured for transparent operation (not RSTP enabled). The network section comprising the external switch or switches is viewed simply as a path between the DAC GEs at each end of the path.
- The RSTP function is provided by external switches, and RWPR is not enabled on the DAC GEs.

Introduction to RWPR Operation


RWPR configuration uses industry-standard RSTP procedures to set and act on switch priority, port cost and port priority. The STP algorithm within RSTP calculates the best path throughout a switched layer 2 network. It defines a tree with a root switch, and a loop-free path from the root to all other switches in the network. All paths that are not needed to reach the root switch are placed in a blocked mode. If a path within the tree fails and a redundant (blocked) path exists, the spanning-tree algorithm recalculates the tree topology and activates the redundant path. Only the traffic affected by a topology change is interrupted. When a failed path is restored to service and it provides a lower cost path to the root switch, RSTP will initiate a topology change to re-instate the restored path.

The switches determine the tree structure automatically through the regular exchange of bridge protocol data unit (BPDU) messages. BPDUs contain information about the sending switch and its ports, including the switch MAC address, switch priority, port priority, and path cost. Spanning-tree uses this information to elect the root switch, and the path to and through other switches on the network, where for each switch a root-facing port and a designated port or ports are set. (Within an RWPR/RSTP context, port refers to a DAC GE port or channel; a channel is a radio-facing port.)
- For each switch in the network, RSTP calculates the lowest path cost to root on each port. It then sets the port with the lowest cost to root as its root port.
- When two ports on a switch form part of a loop, the spanning-tree path cost and port ID values determine which port is put in the forwarding state and which is put in the discarding/blocking state.

Each switch starts as a root switch with a zero root-path cost. After exchanging BPDUs with its neighbors, RSTP elects the root switch, and the topology of the network from the root switch. A minimal sketch of the election follows this list.
- The root switch is the logical center of the network.
- The switch with the highest switch priority (lowest switch ID) is elected as the root switch.
- The switch ID comprises a user-settable priority value and the switch MAC address. If switches are configured for the same priority value, or left at default, the switch with the lowest MAC address becomes the root switch.
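A minimal sketch of this election, with hypothetical priorities and MAC addresses: the bridge ID is compared as a (priority, MAC) pair, and the numerically lowest ID wins:

```python
# Root election: the lowest (priority, MAC) bridge ID wins.
bridges = [
    (32768, "00:04:ff:00:00:0a"),  # default priority
    (4096,  "00:04:ff:00:00:2c"),  # deliberately lowered: intended root
    (32768, "00:04:ff:00:00:01"),  # would win only on a priority tie
]

def bridge_id(priority, mac):
    return (priority, int(mac.replace(":", ""), 16))

root = min(bridges, key=lambda b: bridge_id(*b))
print("elected root:", root)   # -> (4096, '00:04:ff:00:00:2c')
```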

Port cost and priority settings are used by spanning-tree to elect the network topology beneath the root switch. The spanning-tree algorithm uses data from both to determine an optimum network tree (optimum paths to the root switch), with contesting ports set for forwarding or blocking.
- Path cost is set to represent the data bandwidth (speed) available on the path, and is assigned a value such that the higher the speed, the lower the cost; highest priority is given to the highest speed (lowest cost value). Costs are added through the network: if the path from a switch has a cost of 100, and the path from it to the next switch towards the root switch is also 100, the combined cost up to the second switch is 200. A lower cost route is always elected over a route with a higher cost.
- A port priority can be set to represent how well a port is located within a network to pass traffic back to the root. Port priority is contained within a port ID, which comprises a port priority setting and the port number.
- Where costs to root cannot assist spanning-tree to set a priority, the port with the lowest port ID is used to decide port states, such as forwarding or blocking. Where ports are set for the same port priority, spanning-tree selects the port with the lowest port number as the forwarding port.
- If path costs are not assigned, spanning-tree uses the port ID in its selection of port status.

Table 7 lists the RSTP port roles and states.
Table 7: RSTP Port Roles and States

  Root Port (Forwarding): Assigned to the one port on each bridge that provides the lowest cost path to the Root bridge (switch).
  Designated Port (Forwarding): Assigned to the one port attached to each LAN (or point-to-point link) that provides the lowest cost path from that LAN to the Root bridge.
  Backup Port (Discarding): Any operational bridge port that is not a Root Port or Designated Port, where that bridge is the designated bridge for the attached LAN. A Backup Port acts as a backup for the path provided by a Designated Port in the direction of the leaves of the spanning tree.
  Alternate Port (Discarding): Any operational bridge port that is not a Root Port or Designated Port, where that bridge is not the designated bridge for the attached LAN. An Alternate Port offers an alternate path in the direction of the Root bridge.
  Unknown Port (Discarding): Broken port or link-down port.
  Edge Port (Forwarding): Port connected only to user LANs or equipment without a bridge.
  Disabled Port (Discarding): Administratively disabled.

It is not essential for every switch within a ring to be spanning-tree enabled, but it is recommended. Switched ring segments that are not RWPR/RSTP enabled are not represented in the STP tree and, depending on the location of a path failure, may become isolated.

Switches that are not running spanning-tree do forward BPDUs, so that spanning-tree switches on each side can exchange BPDUs.

Figure 30 illustrates a 150 Mbit/s ring layout where RSTP is enabled on the DAC GEs using the RWPR option; the customer's LAN is supported directly from the DAC GE at each site. In this example the Eclipse backplane bus is set for Nx150 Mbit/s operation. Figure 31 illustrates example DAC GE RWPR settings for this network, which ensure correct election of the root switch at the network core and establish the preferred topology on the remaining switches.
Figure 30: 150 Mbit/s Ring: Ethernet

Figure 31: Eclipse RWPR Switch Network Example

In this example network:
- The root switch is configured with the lowest bridge priority value (lowest value = highest priority). If the root switch fails, the lower-left switch would become the root switch.
- Data bandwidths are equal (150 Mbit/s) on all ring links.
- DAC GE RWPR costs (path costs) have been set to 300 on channels C1 and C2 on all DAC GEs. This means that from the root switch, RWPR costs are equal (1200) to the top-left switch, so RWPR costs alone do not elect a preferred route, clockwise or anti-clockwise, to the top-left switch.


- RSTP next looks at the port priority (RWPR priority) settings on the top-left, equal-path-cost switch to determine channel port status: forwarding (root) or blocked. As the preferred route is clockwise, C2 has been configured with a lower RWPR priority (higher value number), so C2 becomes the blocked port and all traffic to this switch travels clockwise to/from the root switch via C1.
- If C1 and C2 on the top-left switch were configured with the same RWPR priority, RSTP would next examine the port numbers to assist the election of a preferred route. In this example C1 has the lowest port number, so C1 would be confirmed as the forwarding (root) port and C2 as the blocked port.

Where higher capacity RWPR networks are required, options include the use of co-located INUs and the L2 link aggregation options:
- For a 300 Mbit/s ring, two topology options are supported: one using 300 Mbit/s radio links on 56 MHz channels, the other using co-path 150 Mbit/s links on 28 MHz CCDP channels with L2 (layer 2) or L1 (layer 1) link aggregation. Two INUs are required at each network node, one to the east and one to the west, with their DAC GEs interconnected port-to-port.
- A 600 Mbit/s ring may be constructed using two L2-aggregated 300 Mbit/s links east and west, or four L2-aggregated 150 Mbit/s links.
- When L2 or L1 aggregated links are used on a ring, the links are first link-aggregated on each hop, and then RWPR ring protected. Separate DAC GEs are required for the L2 link aggregation and RWPR ring functions.
- East and west L2-aggregated links are treated as one logical link by RWPR. If one link in the aggregated pair fails, ring switching does not occur; both links must fail to initiate switching.
- East and west L1-aggregated links are also treated as one logical link by RWPR. However, because all traffic on the logical link is interrupted for about 10 ms if one link in the aggregated pair fails, ring switching does occur. Similarly, because a 10 ms traffic interrupt occurs when the failed link is restored, RSTP switch action will again be initiated, whereupon ring traffic will be restored to the link provided it is cost-effective. See Layer 1 Backplane Bus Link Aggregation.

To view the INU/INUe layout for 150 Mbit/s to 600 Mbit/s ring nodes, see Platform Layouts: Ring and Star Nodes.

Reconvergence Times
When a link within an RSTP ring fails, failure detection time and network reconfiguration time must be included in the total reconvergence time. The following section contrasts the RWPR reconvergence process in an Eclipse ring network against the RSTP process in a wireless network using external RSTP switches.

Failure Recovery Times Using External RSTP Switches


When a point-to-point radio link fails, the failure can be due to a path failure, an equipment failure, or both. For most failure situations the Ethernet port on the radio will remain up, meaning no immediate indication is provided to a connected switch that the link has failed or is degraded (high BER). In these situations an RSTP switch can only determine the status of a link using Hello BPDU (Bridge Protocol Data Unit) messaging.

Under RSTP, Hello BPDUs are sent out of all switch ports on the network so that every switch is aware of its neighbors. Hello BPDUs have a default 2 second interval. When three BPDUs are missed in a row (6 seconds in total), that neighbor is presumed to be down and the switch initiates RSTP convergence.

Some Ethernet links can bypass this lengthy process using an Ethernet PHY port shutdown capability. The Ethernet port on the link is electrically shut down (transmit muted) for a path failure or degrade. This is detected as a link failure (port down) on the companion RSTP switch port. Total detection time is generally within 200 to 500 ms.

Add to these times the typical RSTP network convergence times of between 200 ms and 1 second, and the total failure recovery latency is in the order of 7 seconds for a Hello BPDU timeout process, or 1 to 2 seconds for a PHY port shutdown event.

Eclipse Carrier-Class RWPR Failure Recovery Latency


When an Eclipse link in an RWPR network fails (software, equipment, path, or diagnostic failure event), its RFD (rapid failure detection) mechanism immediately forces initiation of RSTP convergence (within 1 ms). Additionally, a dynamic Hello time is used on the Ethernet ports to accelerate convergence under RSTP, since during this period port states can change frequently through message exchanges between neighbor switches. The polling timer is advanced from the default 500 ms to 10 ms when a switch receives a topology change message or detects a topology change event.

The result is that failure recovery latencies are considerably lower than under normal RSTP operation. For a 5 node RWPR ring, typical maximum traffic outages are:
- 60 ms for a link down (link failure).
- 50 ms for a link up (failed link restored to service).
- 800 ms for an Ethernet PHY port down.
- 40 ms for an Ethernet PHY port up.

These times satisfy MEF guidelines for carrier-class reliability (redundancy). Only an integrated switch solution can provide this level of performance; it cannot be matched by wireless networks using external switches.

Link Aggregation
Link aggregation groups a set of ports so that two network nodes can be interconnected using multiple links to increase link capacity and availability between them. When aggregated, two or more physical links operate as a single logical link with a traffic capacity that is the sum of the individual link capacities. This doubling, tripling or quadrupling of capacity is relevant where more capacity is required than can be provided on one physical link.

Link aggregation also provides redundancy between the aggregated links. If a link fails, its traffic is redirected onto the remaining link, or links. If the remaining link or links do not have the capacity needed to avoid a traffic bottleneck, appropriate QoS settings are used to prioritize traffic so that all high priority traffic continues to get through. To provide a similar level of redundancy without aggregation, hot-standby or diversity protection is required, but with such protection the standby equipment is not used to pass traffic.


Link aggregation can be implemented at different levels in the protocol hierarchy, and depending on the level will use different information to determine which packets, frames or bytes go over the different links. A layer 3 (L3) implementation uses source and/or destination IP addresses in the IP header; higher layer implementations use logical port information and other layer-relevant information. Layer 2 (L2) link aggregation uses source and/or destination MAC address data in the Ethernet frame MAC/LLC header. A layer 1 (L1, physical layer) aggregation acts on the bit or byte data stream.

For Eclipse, two modes of link aggregation can be configured:
- L2 link aggregation, using the DAC GE switch.
- L1 link aggregation, using circuit cross-connects on the INU/INUe backplane bus.

Layer 2 DAC GE Link Aggregation


DAC GE link aggregation was introduced in Modes of Operation and in Network Termination Nodes. The same rapid failure detection capability used for RWPR is also used to support fast-switched link aggregation: traffic transfer from a failed link occurs within microseconds, well within the 50 ms carrier-class benchmark. Traffic streams transiting the logical link are split between the physical links based on their source and destination MAC addresses and the aggregation key allocated to each of the physical links in the aggregation group. This splitting prevents the occurrence of an IP loop, even though all traffic is sent and received on a common LAN interface at each end of the logical link. An aggregation key process sets the weighting (load balancing) of traffic between the links of the aggregated group. For the DAC GE there are 16 keys, and traffic is allocated between the keys on a random basis. A sketch of proportional key assignment follows Figure 32.
- Keys are only assigned to the channels and ports used to connect to the links in the aggregated link group.
- The number of keys assigned to the channels/ports is based on the split of capacity between the links. For example, if the aggregated group comprises two links of equal capacity, the keys are assigned as 8/8; but for a 300 Mbit/s link aggregated with a 150 Mbit/s link, the former should be allocated 11 keys and the latter 5 keys. The number of keys applied must always total 16.
- The default assignment of keys is balanced; the 16 keys are split evenly, or near-evenly for an odd-number split. This assignment can be edited to support instances where the links are not of equal capacity.
- Figure 32 shows the Edit Aggregation window for a C1, P1, P2 aggregated grouping. Such a configuration would be used where three co-located INUs support separate physical links, for example three 300 Mbit/s links providing a single 900 Mbit/s logical link at ports P3 and P4.


Figure 32: Edit Aggregation Window
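The 16-key weighting can be reproduced with the small sketch below. The DAC GE's exact assignment algorithm is not published; this naive proportional rounding simply matches the documented 8/8 and 11/5 splits:

```python
def allocate_keys(capacities, total_keys=16):
    """Split the 16 aggregation keys in proportion to link capacity."""
    shares = [c / sum(capacities) * total_keys for c in capacities]
    keys = [round(s) for s in shares]
    keys[-1] += total_keys - sum(keys)   # force the total to exactly 16
    return keys

print(allocate_keys([150, 150]))  # -> [8, 8]
print(allocate_keys([300, 150]))  # -> [11, 5]
```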

Where there is only a single MAC address pair in play, such as a connection between two routers, L2 link aggregation is not effective: all traffic goes via one link, and the other link or links in the aggregation group carry no traffic. Where there are just a few MAC source and destination addresses in play, link load sharing (balancing) may be less than optimum, particularly where just one or a few of the sessions have very high throughput demands. With normal LAN traffic densities (20+ concurrent sessions), L2 aggregation keying generally ensures equitable balancing of traffic between the links. When a link fails, all traffic from the failed link is redirected to the remaining link, or links; if it is to more than one link, aggregation keying applies an equitable split between the links. When a failed link is returned to service, aggregation keying restores load sharing across all links. Links can be configured for Nx2 Mbit/s or Nx150 Mbit/s operation. Identical link capacities (DAC GE channel capacities) are normally configured, though this is not a requirement: for example, a 300 Mbit/s link can be aggregated with a 150 Mbit/s link to support a 450 Mbit/s logical link, or a 65 Mbit/s link (32xE1) with a 105 Mbit/s link (52xE1) to support a 170 Mbit/s logical link.
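The single-MAC-pair limitation follows directly from how streams are placed: each source/destination MAC pair maps deterministically to one key, and hence one link. The hash below is illustrative only; the DAC GE's actual function is not published:

```python
import hashlib

def key_for(src_mac: str, dst_mac: str, total_keys: int = 16) -> int:
    """Deterministically place a MAC pair on one of the 16 keys."""
    digest = hashlib.md5((src_mac + "|" + dst_mac).encode()).digest()
    return digest[0] % total_keys

# Two routers: one MAC pair, so every frame lands on the same key/link.
print(key_for("02:00:00:00:00:01", "02:00:00:00:00:02"))
print(key_for("02:00:00:00:00:01", "02:00:00:00:00:02"))  # same key again
```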

Figure 33 illustrates C1, C2 and P1 aggregation, and the use of customized aggregation weighting for an application where C1 and C2 are each mapped to a 150 Mbit/s link, and P1 to a 300 Mbit/s link on a co-located INU. 8 keys are assigned to the 300 Mbit/s link, and 4 to each 150 Mbit/s link.

Note: at L2, a session is established between source and destination MAC addresses. Once established, all traffic uses the established link; it is carried over one radio link or the other, and cannot be split over both.

Figure 33: Transparent Mode with C1, C2 and P1 Aggregation

Figure 34 illustrates the link aggregation concept. A 300 Mbit/s LAN connection is transported on two 150 Mbit/s L2 aggregated links. The aggregation process ensures that any one traffic stream is only transported over one link.
Figure 34: Basic Link Aggregation

Figure 35 illustrates combined link and VLAN aggregation. Four company VLANs are transported on a 300 Mbit/s logical link comprising two 150 Mbit/s aggregated links.
Figure 35: Combined Link and VLAN Aggregation

Two or more Eclipse Nodes may be co-located to provide two, three or four parallel paths with link aggregation used to support a single network connection. Figure 36 illustrates aggregation of two 300 Mbit/s links.

On the upper link, Link Status Propagation must be selected to enable rapid detection of a failure by the link aggregation management function supported on the lower link. See Link Status Propagation.
Figure 36: Combined Link and VLAN Aggregation: 600 Mbit/s

Figure 37 illustrates a meshed campus network where link aggregation is used to provide a transport domain of linked switches, ensuring any one switch has at least two paths to any other switch. Switch matrixing with link aggregation ensures that IP loops do not exist. In this example the DAC GE links are simply configured for transparent operation (two independent transport channels); the external switches at each end support the link and VLAN aggregation, and the workgroup connections. The benefits of such a meshed topology with link aggregation include:
- Ports are used concurrently, supporting more ports for more bandwidth compared to a star or spanning-tree network.
- Active redundancy through multiple aggregated ports; if one port or link fails, there is an alternative.
- QoS settings are used to ensure that all high-priority traffic continues to get through.

Figure 37: Campus Link Aggregation

Layer 1 Backplane Bus Link Aggregation


Eclipse L1 backplane bus aggregation applies to paired links on an INU. It is not applicable to link aggregation where the links are established on separate INUs. Traffic is split over two separate physical links, with the split applied on the INU backplane bus at the circuit level using circuit cross-connects. See Figure 38.


The combined capacity of both links cannot exceed the INU backplane capacity, which is 200 Mbit/s for an Nx2 Mbit/s setting, or 300 Mbit/s for an Nx150 Mbit/s setting.

Because the link aggregation is made at the circuit (physical) level, no intervention from the DAC GE L2 switch is needed. Ethernet traffic is split between the link timeslots on a byte basis (parallel bus); the data within an Ethernet frame is transported across both links. Compared to L2 aggregation this provides optimum payload sharing regardless of the throughput demands of individual user connections: whether there is one, a few, or many concurrent sessions, traffic is always optimally split between the links. If the links are of equal capacity, traffic is split 50/50 between them. A conceptual sketch of the byte-level split follows Figure 38.

From software release 3.6, data recovery is provided in the event that one of the two links fails: data that was assigned to the failed link is recovered on the remaining link (circuits mapped to the failed link are re-directed to the remaining link). When a failed link is returned to service, normal circuit mapping is restored. When a recovery action occurs, either for a failed link or when a failed link is restored to service, all traffic is affected for approximately 10 ms. The links may also be co-channel XPIC configured using RAC 40s or RAC 4Xs.
Figure 38: Layer 1 Aggregation
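As a conceptual sketch of the byte-level split (the real split happens on backplane timeslots, not in software), dealing alternate bytes to each link shows why any one frame rides on both links and why equal-capacity links load 50/50:

```python
def l1_split(payload: bytes):
    """Deal alternate bytes to the two links (conceptual only)."""
    return payload[0::2], payload[1::2]

def l1_merge(link_a: bytes, link_b: bytes) -> bytes:
    out = bytearray()
    for a, b in zip(link_a, link_b):
        out.extend((a, b))
    out.extend(link_a[len(link_b):])   # odd-length tail
    return bytes(out)

frame = b"any single Ethernet frame is carried across both links"
a, b = l1_split(frame)
assert l1_merge(a, b) == frame
print(len(a), len(b))   # near-equal byte counts -> 50/50 load
```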

Layer 2 with Layer 1 Link Aggregation


L2 and L1 link aggregation can be deployed in tandem. For example, where two DAC GEs are used to support four co-path links, L1 aggregation can be used for transport channel aggregation, and L2 for aggregating the traffic between the DAC GEs. For applications where just two physical links on one INU are to be aggregated, L1 aggregation is generally recommended, as it supports equitable loading (load balancing) regardless of the number of data sessions in play. It also supports higher burst speeds compared to L2 aggregation.

Comparison of L2 and L1 Link Aggregation


Table 8 highlights the main operating characteristics of the two Eclipse link aggregation options:


Table 8: Comparison of DAC GE L2 and Eclipse Backplane L1 Link Aggregation

Number of links that can be aggregated
  L2: 4+. Supports aggregation across multiple INUs/DAC GEs.
  L1: Maximum 2. L1 aggregation is per-INU.

Maximum aggregated throughput
  L2: 1000 Mbit/s (4x300 Mbit/s links). While total over-air capacity is 1200 Mbit/s, the maximum supported on one DAC GE interface is 1000 Mbit/s; the additional 200 Mbit/s can be used to provide a separate LAN connection.
  L1: 300 Mbit/s (2x150 Mbit/s). One INU.

Load balancing
  L2: Effective for multiple (20+) concurrent traffic streams. Not effective at all where one source/destination MAC address pair is in play, such as between two routers. Limited effectiveness where a few concurrent sessions are in play, especially where one or two traffic streams dominate throughput on one link. Traffic is assigned to a link using an algorithm that does a simple placing based on the source and destination MAC addresses for each traffic stream; individual traffic streams cannot be split across links.
  L1: Fully effective on all traffic. The traffic stream from one DAC GE transport channel is simply split across both links on a byte-by-byte basis.

Session burst capacity
  L2: Individual session throughputs cannot burst beyond the capacity provided on the individual links in the link group.
  L1: Throughputs can burst to the combined capacity provided on the aggregated link group.

Traffic redundancy
  L2: Yes. Traffic from a failed link is re-directed to the other link, or links. Service restoration times are typically less than 20 ms. Traffic on the surviving link(s) is not affected during the re-direction process unless displaced by higher priority traffic from the failed link; traffic priority options are used to ensure all high priority traffic continues to get through on the remaining link, or links.
  L1: Yes. Traffic from a failed link is redirected to the remaining link (from SW release 3.6). During re-direction all traffic is interrupted for ~10 ms. Traffic priority options are used to ensure all high priority traffic continues to get through on the remaining link.

Recovery from a link failure
  L2: When a failed link is returned to service, existing traffic is subdivided by the load sharing mechanism to assign traffic to the restored link. Only re-assigned traffic is momentarily interrupted.
  L1: Traffic is restored end-to-end across both links. All traffic is momentarily interrupted for ~10 ms when the failed link is returned to service.

Ring network implications
  L2: If one physical link within an aggregated link group fails, RSTP action is not initiated. Similarly, no RSTP action occurs when the failed link is returned to service.
  L1: If one of the two physical links in an L1 aggregated pair fails, RSTP switch action is initiated. Similarly, RSTP action is initiated when the failed link is returned to service; traffic may switch back to this aggregated link if it is cost-effective.

Aggregation overhead
  L2: Not significant.
  L1: None.

Compatible with 3rd party devices
  L2: No.
  L1: Not applicable.

Can be used with Eclipse co-channel XPIC links
  L2: Yes.
  L1: Yes (from SW release 3.4).


Link Status Propagation


Link Status Propagation is used where externally-connected equipment is required to rapidly detect the status of a DAC GE channel. It operates by capturing the DAC GE channel status (up/down) on the DAC GE ports, forcing a PHY (physical) port shutdown in the event of a channel failure. This port shutdown is immediately detected by the connected equipment so that it can act on applicable alarm actions, such as initiation of RSTP reconvergence or link aggregation switching. Specifically, it supports:
- Shut-down of the DAC GE user ports at both ends of a DAC GE link when a transport channel is down due to a radio link failure, or similar. When the transport channel is restored, the user ports are automatically restored at both ends.
- Shut-down of the DAC GE transport channel in the event of an Ethernet cable disconnection or external device failure on the relevant DAC GE port. The channel shut-down automatically propagates the failed condition to the far-end DAC GE. When traffic is restored on the port, the transport channel is restored, and subsequently the far-end DAC GE port.

Applications that use link status propagation include:
- Ring/mesh networks using external RSTP switches.
- DAC GE L2 link aggregation, when co-located INUs are installed to provide the physical links. Link aggregation operation uses port status to confirm the status of the aggregated link, and is selected only on the partner INU/DAC GEs (the DAC GEs that are not hosting the link aggregation function). Refer to Figure 39.
  - Link A, which hosts the aggregation function, requires knowledge of Link B port status to confirm the operational status of Link B, the aggregated link.
  - With link status propagation enabled on Link B, its operational status is reflected automatically via its port P1 to P1 on Link A.
  - Link status propagation should not be enabled on Link A. It must not be enabled where you wish to retain service on one link if either link fails; if enabled, P2 on Link A will shut down if either link fails.
Figure 39: Link Aggregation Example

Port status propagation can only be executed without ambiguity if there is a one-to-one (unique) relationship between related ports. It must not be enabled for many-to-many or one-to-many port/channel (non-unique) relationships. Aggregated ports are considered as one logical port, which means status propagation can be enabled in this instance.

For more information on the application and operation of Link Status Propagation, refer to the Eclipse User Manual, Volume IV, Chapter 7.


Configuration and Diagnostics


This section introduces the DAC GE configuration and diagnostic screens within Eclipse Portal and ProVision, and recommended throughput testing tools. Configuration and internal diagnostics are managed from Portal, the PC craft tool for Eclipse. For network-wide access these functions are also supported by ProVision, the Harris Stratex Networks element manager. Portal is a web-enabled application supported in the Eclipse system software: once installed on a PC, it automatically downloads support from the radio as needed, ensuring Portal always matches the version of system software supplied, or subsequently downloaded in any radio upgrade. ProVision is a network-wide manager of Eclipse Nodes and Terminals. It is installed on a Windows or Solaris server, typically at a network operating center, and communicates with Eclipse radios using standard SNMP UDP/IP addressing and routing; each Node or Terminal has a unique IP address.

Configuration Screens
Figure 40 shows the main Configuration screen; Figure 41, Figure 42 and Figure 43 show the supporting windows for RWPR, Priority Mapping and VLAN Tagging.

Main Configuration Screen


Functionality enabled within the Configuration screen includes:
- Transport channel capacity: 150 Mbit/s or 300 Mbit/s for a 150 Mbit/s backplane setting, or 2 to 200 Mbit/s for a 2 Mbit/s setting. See Transport Channel Capacity and RF Bandwidth.
- Mode of operation: selection of Transparent (with or without aggregation), Mixed or VLAN. See Modes of Operation.
- Port on/off.
- Connection type and speed. See Basic Port Parameters.
- Flow control on/off. See Flow Control.
- Address learning on/off. See Disable Address Learning.
- Maximum frame size. See Maximum Frame Size.
- Port-Up: indicates detection of a valid Ethernet connection and Ethernet framing.
- Resolved: indicates that auto-negotiation has resolved speed/duplex.
- RWPR setting. See RWPR Configuration.


Figure 40: DAC GE Configuration Screen: Basic Setting

RWPR Configuration
Figure 41 shows an RWPR Setting window for a 150 Mbit/s ring node; Transparent mode would normally be used. For information on RWPR and RSTP see Introduction to RWPR Operation. The RWPR-specific settings support:
- RWPR enable. The screen shows C1 and C2 enabled, with P1 supported on a 155 Mbit/s ring from C1 or C2.
- RWPR cost. Sets the path/port cost.
- RWPR priority. Sets the port/channel priority.
- Bridge priority. Sets the priority on the bridge (switch) ID. Priorities are set on each DAC GE to ensure the DAC GE at the logical center of the network is elected as the root switch.
- Bridge ID. A unique identifier where the most significant bits are occupied by the bridge priority setting, and the remainder by the switch MAC address. If a bridge priority is not set, the DAC GE with the lowest MAC address is elected as the root switch.


Figure 41: DAC GE Configuration Screen: RWPR Setting Window

Priority Mapping
Figure 42 shows a Priority Mapping screen, which provides a facility to view and select the priority mode and associated mapping for queuing at the DAC GE ports. The DAC GE has a four-level (two-bit) priority field; the screen enables customization of the mapping of the 8 priority levels of 802.1p tagging, or the 64 levels of DiffServ, into the 4 levels of the DAC GE. See Traffic Priority. Default mappings are provided; Figure 42 shows the defaults for 802.1p mapping, and a hypothetical DiffServ fold-down is sketched after the figure.

Figure 42: DAC GE Priority Mapping Screen
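As an illustration of the fold-down, the sketch below maps the 64 DiffServ code points onto four queue levels with a simple 16-per-level rule. This rule is hypothetical; the DAC GE ships with its own editable defaults:

```python
def dscp_to_queue(dscp: int) -> int:
    """Fold the 6-bit DSCP field into four queue levels (3 = highest)."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit field (0-63)")
    return dscp // 16

print(dscp_to_queue(0))    # best effort -> queue 0
print(dscp_to_queue(46))   # EF (voice)  -> queue 2 under this example rule
print(dscp_to_queue(56))   # CS7         -> queue 3
```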

VLAN Tagging
Figure 43 displays a VLAN Tagging screen, which supports options for each VLAN/port. The Mode options of 802.1Q or Q-in-Q are only applicable to ports configured for transparent/broadcast mode, that is, ports 1 to 4 for Transparent mode and ports 2 to 4 for Mixed mode. See Customized VLAN Tagging. For each option a description panel appears at the bottom of the screen, showing the impact of a selection on incoming and outgoing frames; Figure 43 illustrates the action for a Q-in-Q selection. Selection of a VLAN ID, priority and membership filter is also provided.


Figure 43: DAC GE VLAN Tagging Screen

Portal Diagnostics Screens


Diagnostics for DAC GE are supported within the System/Controls, History, Performance and Alarms screens.

System/Controls Screens
Systems/Controls hosts three DAC GE screens:
- Diagnostics
- Port/Channel Status
- MAC Address Table

Diagnostics

The Diagnostics screen, Figure 44, displays port status, operational mode, configured port and channel speeds, RWPR status, and a port shutdown prompt. The shutdown includes a safety timer function. If RWPR is configured, the data includes:
- RWPR bridge ID.
- Number of topology changes and the time since the last change.
- Cost to root, and port/channel RWPR status: learning, forwarding, discarding/blocking, or discarding/disabled.


Figure 44: Diagnostics Screen

Port/Channel Status

The Port/Channel Status screen, Figure 45, details:
- Port and channel connection status and speed setting
- Port Rx and Tx frame totals for enabled ports
- Channel Rx and Tx frame totals
- Actual link Rx and Tx data rates

Figure 45: Port / Channel Status Screen

MAC Address Table

The MAC Address Table, Figure 46, lists the addresses held in the MAC register, with filter options to sort by MAC Address, Port Members, Status, and VLAN ID.
- An address can be entered to check its presence within the table.
- Port Member provides a port-based filter on the MAC address listing; one or more ports (P1 to P4) can be selected.

- Status provides a filter on address type: dynamic, static or invalid.
- VLAN ID provides a filter on VLAN ID range: 0 to 4095.

Figure 46: MAC Address Table

Ethernet History Screens


The History screen can be set to 15-minute or daily views: 15-minute provides seven days' worth of 15 minute data bins; daily provides one month's worth of 1 day data bins. History data is held on a FIFO basis, with data in excess of the 15 minute or daily bin maximums deleted in favor of new data. See Figure 47. Data is collected (binned) from the time a terminal is powered on. The data captured includes:
- Ethernet Rx and Tx statistics for each of the four ports and two channels
- Events
- Configuration changes

To the left of the screen, summary windows display data in time order for each port and channel. Higher resolution data for the selected port/channel is displayed in the Detail window to the right. Throughputs, frame types, discards and errors are graphed. The period displayed (number of bins) is selected using the arrows above the main window, or by dragging the edges of the summary window. Each bin is presented as a vertical segment, with the width of the segment dependent on the period selected. A report range can be selected from within the main window by clicking and dragging.


Figure 47: Ethernet History Screen


Performance Screen
The Performance screen supports two windows: Statistics and Graphs.

Performance: Statistics

The Performance Statistics screen, Figure 48, presents a full suite of RMON performance statistics, per port and channel, from the time a Start All command is entered. From this point, data is updated for as long as Portal/ProVision is logged onto the Node, or until the Stop All or Clear All buttons are selected. Viewing other screens does not affect the aggregation of performance statistics. Table 9 lists the RMON data and its description.
Figure 48: Performance Screen: Statistics

Table 9: RMON Performance Statistics

  InUcastPkts: Number of unicast frames received.
  InBroadcastPkts: Number of broadcast frames received. (InBroadcasts)
  InMulticastPkts: Number of multicast frames received. (InMulticasts)
  HCInOctets: Total data octets received in frames with a valid FCS (64-bit counter). (InGoodOctets)
  InDiscards: Number of valid frames received that are discarded due to a lack of buffer space. (InDiscards)
  OutUCastPkts: Number of unicast frames transmitted. (OutUnicasts)
  OutBroadcastPkts: Number of broadcast frames transmitted. (OutBroadcasts)
  OutMulticastPkts: Number of multicast frames transmitted. (OutMulticasts)
  HCOutOctets: Total data octets transmitted (64-bit counter). (OutOctets)
  Dot3 InPauseFrames: Number of valid pause frames received.
  Dot3 FCSErrors: Number of frames received with valid length and an integral number of octets but an invalid FCS.
  Dot3 AlignmentErrors: Number of frames received with valid length but a non-integral number of octets.
  Dot3 FrameTooLongs: Number of frames received with a valid FCS but more than 1518 octets long.
  Dot3 OutPauseFrames: Number of pause frames transmitted.
  Dot3 StatsLateCollisions: Number of times that a late collision was detected during a frame transmission.
  Dot3 ExcessiveCollisions: Number of frames discarded after 16 failed transmission attempts.
  Dot3 MultipleCollisionFrames: Number of transmitted frames that experienced more than one collision.
  Dot3 SingleCollisionFrames: Number of transmitted frames that experienced exactly one collision.
  Dot3 DeferredTransmissions: Number of transmitted frames that were delayed because the medium was busy.
  InBadOctets: Total data octets received in frames with an invalid FCS. Undersize and oversize frames are included. The count includes the FCS but not the preamble.
  UndersizedFrames: Total frames received with a length of less than 64 octets but with a valid FCS.
  InFragments: Total frames received with a length of less than 64 octets and an invalid FCS.
  In64Octets: Total frames received with a length of exactly 64 octets, including those with errors.
  In127Octets: Total frames received with a length of between 65 and 127 octets inclusive, including those with errors.
  In255Octets: Total frames received with a length of between 128 and 255 octets inclusive, including those with errors.
  In511Octets: Total frames received with a length of between 256 and 511 octets inclusive, including those with errors.
  In1023Octets: Total frames received with a length of between 512 and 1023 octets inclusive, including those with errors.
  InMaxOctets: Total frames received with a length of between 1024 and MaxSize octets inclusive, including those with errors.
  InJabber: Total frames received with a length of more than MaxSize octets but with an invalid FCS.
  InFiltered: If 802.1Q is disabled on this port, total valid frames received that are not forwarded to the destination port (valid frames discarded due to a lack of buffer space are not included). If 802.1Q is enabled on this port, total valid frames received (tagged or untagged) that were discarded due to an unknown VID (the frame's VID was not in the VTU).
  OutFCSErrored: Total frames transmitted with an invalid FCS.
  Out64Octets: Total frames transmitted with a length of exactly 64 octets, including those with errors.
  Out127Octets: Total frames transmitted with a length of between 65 and 127 octets inclusive, including those with errors.
  Out255Octets: Total frames transmitted with a length of between 128 and 255 octets inclusive, including those with errors.
  Out511Octets: Total frames transmitted with a length of between 256 and 511 octets inclusive, including those with errors.
  Out1023Octets: Total frames transmitted with a length of between 512 and 1023 octets inclusive, including those with errors.
  OutMaxOctets: Total frames transmitted with a length of between 1024 and 1522 octets inclusive, including those with errors.
  Collisions: Total number of collisions during frame transmission.
Performance: Graphs

The Performance Graphs screen, Figure 49, graphs the Tx and Rx throughputs, discards and errors for each of the four ports and two channels in real time (updated at 2 second intervals). It also presents indications of port/channel status, and configured port and channel speeds/capacities.


Figure 49: Performance Screen: Graphs

Alarms Screen
The Alarms screen uses a hierarchy tree to present a comprehensive set of alarms for hardware, software, diagnostics and traffic. This screen, together with the Performance and System/Controls screens, enables the health of a DAC GE link and its traffic to be readily ascertained. The example screen shows a manually-opened alarm tree; normally the tree only opens to an alarm point, or points.
Figure 50: Alarms Screen


ProVision Ethernet Diagnostic Screens


While a Portal session to an INU can be opened from within ProVision to access all Portal configuration and diagnostics screens, ProVision also directly supports Ethernet diagnostics from its network-based screens for Ethernet performance and RMON performance thresholds. From ProVision, an INU/INUe is selected in the map or tree view of the network, then the relevant DAC GE is selected, followed by the diagnostic options, which include Ethernet Performance, Performance Thresholds, and Circuit Trace.

Ethernet Performance
The Ethernet performance screens are similar in function to the Portal Ethernet history screens, except that data for each port and channel is collected in 5 minute, 15 minute, or daily bins, with selection/presentation options for packet types, packet sizes, throughput, discards and errors. The following graphics illustrate these options. Figure 51 shows an expanded view of unicast and broadcast packets-in on Channel 1 over a 1-hour period. For the selected port or channel, up to three parameters can be displayed at one time, from a selection of:
- Unicast Packets In
- Broadcast Packets In
- Multicast Packets In
- Octets In
- Discards In

Figure 51: Expanded View of Unicast and Broadcast Packets In

Figure 52 shows the Packet Types screen, which displays, as a pie chart for a selected period and port or channel, the relative occurrence of three packet types: unicast, broadcast, and multicast.

Figure 52: Packet Types

Figure 53 shows the Packet Sizes screen for a Fast Ethernet connection. For a selected period it displays the distribution of packet sizes within the total packets in and packets out for each port and channel.
Figure 53: Packet Sizes

Figure 54 shows the Throughput and Errors screen. For the selected period it displays receive and transmit throughput, errors and discards for a port or channel.
Figure 54: Ethernet Throughput, Errors and Discards

Performance Thresholds
For a DAC GE, the Performance Thresholds screens enable an alarm function to be associated with one or more RMON performance statistics, which are captured in 5-minute, 15-minute or daily bins for each port and channel. The example in Figure 55 shows that an informational alarm event will be raised if 5 or more multicast packets occur on port 1 within a 5-minute period.
Figure 55: Performance Threshold Settings
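The threshold mechanism amounts to a simple comparison per bin. A minimal sketch of the logic follows, assuming the per-bin counts are already available; the bin length and threshold mirror the Figure 55 example, and the function names are illustrative.

```python
# Sketch: RMON-style performance threshold check.
# Mirrors the Figure 55 example: alarm if >= 5 multicast packets in a 5-minute bin.

THRESHOLD = 5          # packets per bin
BIN_LENGTH_S = 300     # 5-minute bins

def check_bins(bin_counts):
    """Yield (bin_index, count) for every bin that breaches the threshold."""
    for i, count in enumerate(bin_counts):
        if count >= THRESHOLD:
            yield i, count

multicast_in_per_bin = [0, 2, 7, 1, 5]   # illustrative port-1 data
for idx, count in check_bins(multicast_in_per_bin):
    print(f"informational alarm: bin {idx}: {count} multicast packets "
          f"in {BIN_LENGTH_S} s (threshold {THRESHOLD})")
```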

Ethernet Bandwidth Utilization


The Ethernet Bandwidth Utilization screen displays an overview of bandwidth usage for a selected part of the network, which can be set by region, devices or circuit. It allows users to rapidly identify whether throughput demands are exceeding the maximum bandwidth available on the link or links. Normally, users select a Logical Link network or set of circuits to view. Data is displayed for enabled ports and channels; if a port was operated for three days and then disabled, the data from the port's enabled period would still be displayed, but no additional data would be saved. The example Bandwidth Utilization screen, Figure 56, shows Rx and Tx utilization for all circuits within a specified circuit bundle, using both a graphical and a table display.
Figure 56: Ethernet Bandwidth Utilization

Circuit Trace
This feature initiates the tracing of all circuits that originate or terminate at the selected DAC or Eclipse radio. It is not specific to the DAC GE, but if initiated from a DAC GE, only circuits to/from that DAC GE will be portrayed (assuming no prior circuit trace action has been initiated from another DAC or INU). The trace is presented as a map-view of the network, showing all INUs traversed by the traced circuit(s). The data associated with a trace includes:
- Circuit name (INU/DAC to INU/DAC)
- Circuit status
- Circuit capacity
- Link G.826 performance

Circuit trace is particularly useful for supporting and confirming circuit provisioning during network rollout. For example, upon completion of the required circuit routing within each INU, you can globally verify the changes at the network level through the circuit trace feature. It is also a useful tool for locating incorrect circuit cross-connections and mismatched data assignments within a network.

Throughput Testing
To independently test throughput, a two-tester test using testers such as the Sunrise Sunset MTT or Agilent FrameScope is recommended; these provide unambiguous confirmation of throughput with the greatest ease. Throughput can also be confirmed with a single tester and a loopback unit. Throughput testing using PC-based software applications such as Iperf over Fast Ethernet connections is not recommended unless you are aware of their limitations. Such applications normally rely heavily on the Windows operating system, and performance will vary with the version of Windows, the Windows configuration, the number of currently running applications, and the underlying PC hardware, such as processor power and memory capacity. Bear in mind that the Eclipse Ethernet diagnostics screens (Portal and ProVision) give a good indication of throughput and, together with the RMON data, support a robust picture of port and channel performance.
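When interpreting tester results, it also helps to know the theoretical line-rate maximums for a given frame size. The sketch below computes them from the standard per-frame on-wire overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap); the GigE link rate and frame sizes chosen are illustrative.

```python
# Sketch: theoretical Ethernet throughput limits for a given link rate and frame size.
# Per-frame on-wire overhead: 7-byte preamble + 1-byte SFD + 12-byte inter-frame gap.

OVERHEAD_BYTES = 20  # preamble(7) + SFD(1) + IFG(12)

def max_frames_per_second(link_bps: float, frame_bytes: int) -> float:
    return link_bps / ((frame_bytes + OVERHEAD_BYTES) * 8)

def max_l2_throughput_bps(link_bps: float, frame_bytes: int) -> float:
    """Layer 2 payload rate: frames/s times the frame size itself."""
    return max_frames_per_second(link_bps, frame_bytes) * frame_bytes * 8

for size in (64, 512, 1518):
    fps = max_frames_per_second(1e9, size)   # GigE example
    tput = max_l2_throughput_bps(1e9, size)
    print(f"{size:>5} B frames: {fps:,.0f} frames/s, {tput / 1e6:,.1f} Mbit/s L2")
# 64 B  -> 1,488,095 frames/s,  761.9 Mbit/s
# 1518 B ->   81,274 frames/s,  987.0 Mbit/s
```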

Example Networks
Examples are included for:
- Inter-site Network
- Adding In-house Capacity to a Telco Network
- DSL Network
- Municipal Broadband Network
- WiMAX Backhaul Network
- Metro Network Edge Switch

Inter-site Network
Figure 57 illustrates an Eclipse link installed to provide inter-site Ethernet and E1 services. The link application shows:
- A protected Eclipse 130 Mbit/s link (64 QAM, 28 MHz channel) delivers all network services between the sites.
- 128 Mbit/s is configured as a GigE connection.

- 2 Mbit/s supports an E1 PBX trunk connection.

Other link options include:
- Adaptive modulation. RAC 30As are installed in place of the RAC 30s to provide adaptive modulation. Protected options are hot-standby or space diversity. This has particular application on longer hops where system gain is an issue on the rain-affected bands, 13 GHz to 38 GHz. Without adaptive modulation, the path engineering required to deliver five-nines (99.999%) availability may mean large and expensive antennas are required, which in turn may involve the added time and cost of installing high-strength support structures and securing town planning approvals. But with adaptive modulation, a higher system gain (lower modulation rate) is switched into service to avoid what would otherwise be a rain-fade outage, and QoS settings are used to ensure all essential traffic is unaffected by the reduction in link capacity. On a 28 MHz channel the RAC 30A adaptive modulation steps are 126 Mbit/s / 64 QAM, 84 Mbit/s / 16 QAM, or 42 Mbit/s / QPSK. Taking the 18 GHz ODU 300hp as an example, the respective system gains for the three steps are 87.5 dB, 96 dB, and 105 dB. The difference between 87.5 dB and 105 dB is 17.5 dB, which broadly equates to two antenna sizes at both ends. For example, instead of 1.8m antennas, 0.6m antennas could be used (see the worked check following this list of options and benefits). The synergy of RAC 30A adaptive modulation with the QoS awareness of the DAC GE means that link capacity and capital cost can be optimized in a way not possible with a fixed-modulation-rate link.
- Higher capacities. The link capacity can be extended to 1 Gbit/s:
  - 150 Mbit/s can be configured on the same 28 MHz RF channel by using the 128 QAM option.
  - 300 Mbit/s can be configured as a single protected link on a 56 MHz channel.
  - 300 Mbit/s can be configured as two 150 Mbit/s co-channel XPIC links on a 28 MHz channel, with link aggregation providing a single 300 Mbit/s network interface and link redundancy.

  - 600 Mbit/s can be configured as two 300 Mbit/s co-channel XPIC links on a 56 MHz channel, with link aggregation providing a single 600 Mbit/s network interface and link redundancy.
  - 600 Mbit/s can also be configured using four 150 Mbit/s co-channel XPIC links on two adjacent 28 MHz channels.

- DAC GE protection. Where full redundancy is also required for the DAC GE function, two DAC GEs are installed, each supporting its own link, and an external switch is used to provide link aggregation. Note, however, that this simply transfers the single point of failure from the DAC GE to the external switch.
- Ethernet only. Operating an Ethernet-only link and using a TDM-over-IP converter at each end of the link to convert the E1 PBX traffic to IP.

Benefits include:
- Cost savings, by bypassing the need for data and PBX service provider connections between the sites, and having external network connections to just one site.
  - A check against inter-site lease rates should show break-even on a protected link occurring within one to three years, with significant savings thereafter.
  - A check against rates for maintaining separate Internet, Intranet, and PBX connections for each site should show even greater cost savings.
- Guaranteed access to bandwidth. The capacity on the link is dedicated to company use. It may be managed by the owners, or installed and contract-managed by a local services company. Owners have full control of service delivery and service quality.
- Reliability. A protected Eclipse installation will provide connection reliability to at least match that provided by leased fiber. Eclipse operates on licensed bands to ensure exclusive use of the allocated radio channel or channels over the link path.
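The antenna-size claim above can be sanity-checked with the widely used parabolic-antenna gain approximation, G(dBi) = 17.8 + 20·log10(D) + 20·log10(f), with D in metres and f in GHz (roughly 55% aperture efficiency). The sketch below is a back-of-the-envelope check only, reusing the 18 GHz figures from the adaptive modulation example; it is not a substitute for proper path engineering.

```python
# Sketch: link-budget effect of dropping from 1.8 m to 0.6 m antennas at 18 GHz.
# Uses the common parabolic-antenna gain approximation (~55% efficiency):
#   G(dBi) = 17.8 + 20*log10(D_m) + 20*log10(f_GHz)
import math

def antenna_gain_dbi(diameter_m: float, freq_ghz: float) -> float:
    return 17.8 + 20 * math.log10(diameter_m) + 20 * math.log10(freq_ghz)

f_ghz = 18.0                                   # as in the ODU 300hp example
per_end = antenna_gain_dbi(1.8, f_ghz) - antenna_gain_dbi(0.6, f_ghz)
both_ends = 2 * per_end                        # the change applies at each end

print(f"Per-antenna gain difference: {per_end:.1f} dB")          # ~9.5 dB
print(f"Link budget difference, both ends: {both_ends:.1f} dB")  # ~19.1 dB
# This ~19 dB broadly matches the 17.5 dB system-gain spread between
# 64 QAM and QPSK, which is why adaptive modulation can let 0.6 m
# antennas stand in for 1.8 m antennas during deep rain fades.
```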

Figure 57: Inter-site Network

Adding In-house Capacity to a Telco Network


Figure 58 illustrates a situation where a company wishes to stay with an existing service provider for inter-site data and voice services, and to deliver additional capacity in-house by installing an Eclipse GigE link. The GigE link operates with the service provider link in a link-aggregated or RSTP configuration. The application shows:
- Existing IP network connections provided by a Metro Ethernet service provider.
- An Eclipse link installed to provide up to 300 Mbit/s, or 200 Mbit/s Ethernet plus optional E1 PBX trunk connections.
- Network capacity.
  - Link aggregation option: the total Ethernet connection capacity is the sum of the service provider and DAC GE capacities.
  - RSTP option: the connection capacity is the DAC GE capacity. The service provider connection serves only to provide redundancy.
- Network redundancy.
  - Link aggregation: the DAC GE link is incorporated within a link-aggregated group comprising the DAC GE and the service-provider link. If one of the two traffic paths fails, traffic from the failed path is redirected to the remaining path. Traffic prioritization (VLAN priority tagging) is used to ensure all essential traffic continues to get through should the reduced bandwidth result in a bottleneck (a sketch of this prioritization follows the benefits list below).
  - RSTP: the DAC GE link is a link in a layer 2 RSTP mesh network comprising the service provider and DAC GE paths. If the DAC GE link fails, traffic prioritization (VLAN priority tagging) is used to ensure all essential traffic continues to get through on the lower-capacity service provider connection. RSTP settings ensure that the main site is the root switch and that the preferred path to the satellite site is via the Eclipse link.
  - Legacy PBX trunk connections can be maintained using the NxE1 tribs on the INU DAC 4x or DAC 16x cards, with redundancy supported on the Ethernet network using a VoIP interface. Alternatively, all trunks can be VoIP.
  - Network redundancy is provided door-to-door. While the service provider network should have built-in redundancy, this usually does not cover last-mile connections.
- VLANs are used to support different user groups within the company, such as R&D, Finance and Sales, for optimum workgroup management, security, and traffic control.
- All external Internet and phone portals are provided through the main site, to avoid the cost of maintaining separate external network connections to both sites.

Benefits include:
- Cost savings. The additional capacity is provided in-house, thereby eliminating the cost of an extended-capacity lease from the service provider. Break-even on the installation of the INU / DAC GE link should occur within one to two years. And by removing the need to maintain separate external Internet and PBX services to both sites, even greater savings are possible.
- Reliability. The link-aggregated or RSTP solutions provide network redundancy right to the company's premises.
- Guaranteed access to bandwidth. All bandwidth on the Eclipse link is dedicated to company use.
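To illustrate how VLAN priority tagging keeps essential traffic flowing when capacity is reduced, the sketch below shows 802.1p-style strict-priority dequeueing. The four-queue PCP mapping is hypothetical and for illustration only; the DAC GE's actual QoS mapping is user-configurable.

```python
# Sketch: strict-priority scheduling keyed on the 802.1p PCP value (0-7).
# The PCP-to-queue mapping and four-queue depth are hypothetical.
from collections import deque

PCP_TO_QUEUE = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}
queues = [deque() for _ in range(4)]  # queue 3 = highest priority

def enqueue(frame_id, pcp):
    queues[PCP_TO_QUEUE[pcp]].append(frame_id)

def dequeue():
    """Always serve the highest-priority non-empty queue first."""
    for q in reversed(queues):
        if q:
            return q.popleft()
    return None

enqueue("bulk-transfer", pcp=0)
enqueue("voip", pcp=6)
print(dequeue())  # -> 'voip': sent first even though it arrived second
```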

Figure 58: Inter-Site Link Aggregated or RSTP Application

DSL Network
Figure 59 illustrates a Digital Subscriber Line backhaul network to a fiber core. Radio is used to extend the reach of cable-based DSL to remote customer sites, for improved access and access speeds. The application shows:
- Eclipse Nodes supporting ring/mesh networks from the fiber core.
- The DAC GEs directly support the RSTP function using RWPR. External RSTP switches are not required.
- RWPR network capacity can be increased to 300 Mbit/s by installing CCDP links.

Benefits include:
- A radio solution supports roll-out of new connections in a fraction of the time and cost of installing fiber.
- Extensions using radio may mean the difference between keeping the network in-house, or leasing fiber or copper from a telco. By keeping it all in-house, operators retain full control of service quality and security.
- Eclipse RWPR supports carrier-class network re-convergence times.
- Reliability. The ring-based network topology and the inherently high reliability of Eclipse ensure the highest levels of network integrity.
- ProVision installed at the NOC ensures full visibility of all Eclipse radios and their performance.

Figure 59: Telecoms DSL Subscriber Network

Municipal Network
Figure 60 illustrates a broadband network for a municipal corporation, health board, public utility, or similar organization. The application shows:
- Eclipse Nodes supporting 300 Mbit/s and 150 Mbit/s link connections between sites.
- All connections are protected, either within a ring or, for a spur site, by a hot-standby or diversity link.
- The ring RSTP function is provided by the Eclipse RWPR option. External RSTP-capable switches are not required.
- One Eclipse Node supports two separate east-west or co-path 150 Mbit/s radio links, or one 300 Mbit/s link.
- The long 20 km / 12 mile path is configured as two co-path (CCDP) XPIC links to secure the higher system gain provided by 150 Mbit/s links compared to a single 300 Mbit/s link. See System Gain Implications.
  - The co-path 150 Mbit/s links are Eclipse link-aggregated to provide a single 300 Mbit/s logical link. L1 or L2 aggregation can be used.
- At a 300 Mbit/s repeater site (no local LAN connection), two back-to-back DAC GE equipped Nodes are installed with port-to-port interconnection between the DAC GEs.
- At a 150 Mbit/s repeater site (no local LAN connection), one Node without a DAC GE supports both directions. Interconnection between the two RAC/ODUs is simply configured on the INU backplane bus.

Other options include:
- Protected spur links can instead be configured for link aggregation. With link aggregation (L1 or L2), each link supports redundancy for the other in the event of a link failure. For example, the protected 300 Mbit/s connection can be operated as two XPIC co-path 150 Mbit/s links, which use half the channel bandwidth (28 MHz as against 56 MHz).

Benefits include:
- Reliability. The network topology and the inherently high reliability of Eclipse ensure the highest levels of network integrity.
- ProVision installed at the NOC ensures full visibility of all Eclipse radios and their performance, including Ethernet performance and statistics.
- The network is in-house. All bandwidth on the physical network is dedicated to corporation use. It may be managed by the owners, or installed and contract-managed by a local services company. Regardless, owners have full control of service delivery, service quality, security and network expansion.
- Significant cost savings compared to provisioning an equivalent-capacity network of equivalent reliability from a local service provider.
- Carrier-class ring network re-convergence times using RWPR.

Figure 60: Municipal Network

WiMAX Backhaul
Figure 61 shows a backhaul network using radio and fiber rings to the core. While a WiMAX application is shown with WiFi at some network ends, the solution also applies generally to cellular or other broadband networks. The application shows:
- Eclipse used as a cost-efficient backhaul solution for WiMAX base stations.
  - WiMAX products currently support up to about 20 Mbit/s per sector for a four-sector base station. On short hops, higher modulation rates are used to support higher capacities when needed. On long hops (20+ km / 12+ miles), system gain implications may require the use of lower-order modulation, with capacities restricted to not more than 1.5/2 Mbit/s.
  - WiMAX in-band backhaul does not scale well; as the network grows it soon runs out of capacity / RF bandwidth.
  - Eclipse, on the other hand, provides dedicated Fast Ethernet or GigE connectivity, and links can be engineered cost-efficiently to support capacity maximums over both short and long line-of-sight distances.
- On the 1 Gbit/s ring, two DAC GE links provide a 600 Mbit/s closure. The links operate in a single 56 MHz channel using CCDP RAC 4Xs. In this example, where the total DAC GE link capacity is less than the nominal ring capacity, RSTP switch settings may be configured so that traffic does not traverse the DAC GE links unless there is a failure in one of the fiber links.
  - When a fiber link fails, traffic going via the DAC GE links to the break point is restricted to a maximum of 600 Mbit/s. Traffic traversing the ring to the other side of the break retains a 1 Gbit/s maximum.
  - The DAC GE links may be operated as two independent 300 Mbit/s links, with link aggregation set in the attached L2 switches, or the built-in link aggregation option can be used to present a single 600 Mbit/s logical link interface (the sketch following this list illustrates the aggregation mechanism).
  - The two co-path links are operated on adjacent 56 MHz RF channels using a single dual-polarized antenna at each end.
  - This example represents an effective and cost-efficient use of a radio closure.
- Similarly, a 300 Mbit/s DAC GE link is used as a closure on a 500 Mbit/s fiber ring. The capacity can be configured as a 300 Mbit/s radio link on a 56 MHz channel, or as 2x150 Mbit/s RAC 40 CCDP links on one 28 MHz channel.
- A 150 Mbit/s enhanced-RSTP ring is directly supported from DAC GE links using the RWPR settings. External RSTP switches are not required.
- A single 80 Mbit/s DAC GE link is used to close a ring established on the WiMAX base stations.

Benefits include:
- With its dedicated, flexible and lower-cost connection solutions, Eclipse provides a compelling alternative to the use of WiMAX base stations for backhaul, especially on longer hops and towards the core, where a higher proportion of the data carried is backhaul.
- Where fiber connections are not available, an Eclipse solution can be up and running in a fraction of the time and cost of installing fiber.
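Link aggregation distributes frames across member links using a hash of frame header fields, so that each flow stays on one link and frame order is preserved. A minimal sketch of the idea follows, assuming a simple source/destination MAC hash; the DAC GE's actual weighting algorithm is user-selectable and is not detailed here.

```python
# Sketch: L2 link-aggregation member selection by MAC-address hash.
# The hash function is illustrative; real implementations vary.

NUM_LINKS = 2  # e.g. two 300 Mbit/s co-path links forming a 600 Mbit/s group

def select_link(src_mac: str, dst_mac: str) -> int:
    """Pick a member link so a given src/dst pair always uses the same link."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % NUM_LINKS

print(select_link("00:11:22:33:44:55", "66:77:88:99:aa:bb"))  # -> link 0
print(select_link("00:11:22:33:44:56", "66:77:88:99:aa:bb"))  # -> link 1
```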

Figure 61: WiMAX Backhaul

Metro Edge Switch


Figure 62 shows a pair of Eclipse Nodes installed at a multi-tenanted office block to provide an edge function in a Metro network. The application shows:
- Co-located Eclipse Nodes supporting a 600 Mbit/s logical link using L2 link aggregation on separate 300 Mbit/s physical links.
- CCDP operation. The two 300 Mbit/s links occupy just one 56 MHz RF channel using RAC 4Xs.
- Link redundancy. If one of the links fails, a 300 Mbit/s connection is maintained. A similar level of redundancy could be provided using a hot-standby link pairing, but maximum throughput would be fixed at 300 Mbit/s; in effect, aggregation allows you to capitalize on what would otherwise have been standby capacity.
- At the office site, customer VLANs are aggregated (VLAN aggregated) onto the 600 Mbit/s link. Customer VLAN frames are tagged (double tagged) using the DAC GE Q-in-Q VLAN tagging option. The table below illustrates the double-tagging mechanism; the tagging includes an 802.1p priority classifier (see the frame sketch after the benefits list).

Customer    Customer VLAN ID (Inner Q Tag)    Link VLAN ID (Outer Q Tag)
A           10                                1
B           20                                2
C           30                                3

At the core end of the link, the DAC GE Q-in-Q tagging can be stripped off, or maintained into the core network for downstream management purposes.

Benefits include:
- Eclipse provides a fast and cost-effective connection to the core, and does so with the built-in redundancy of link aggregation.
- A scaled approach to the required IP bandwidth. A single Eclipse Node is used to provide a 300 Mbit/s initial connection, and when more capacity is needed an additional Node is added to provide a total of 600 Mbit/s. Provided a dual-polarized antenna is installed at the outset, the upgrade to 600 Mbit/s is very straightforward.
- Built-in Q-in-Q VLAN tagging. The DAC GE links can be fully integrated into the service provider VLAN hierarchy. Customer VLANs are held separate from the service provider VLANs.
- Customer VLANs can be prioritized to provide a differentiated service over the wireless VLAN trunk, which has application when establishing service level agreements (SLAs).
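The double tagging in the table can be seen on the wire as two stacked 802.1Q headers. A minimal sketch using the Scapy library follows, assuming customer A's row (inner VID 10, outer VID 1), ethertype 0x8100 for both tags (as was common for Q-in-Q before 802.1ad introduced 0x88a8), and placeholder addresses.

```python
# Sketch: building a Q-in-Q (double-tagged) frame with Scapy.
# Outer tag = service-provider link VLAN; inner tag = customer VLAN.
# VIDs follow the customer-A table row; MAC and IP addresses are placeholders.
from scapy.all import Ether, Dot1Q, IP

frame = (
    Ether(src="00:00:00:00:00:01", dst="00:00:00:00:00:02")
    / Dot1Q(vlan=1, prio=5)   # outer (link) tag, with an 802.1p priority of 5
    / Dot1Q(vlan=10)          # inner (customer) tag
    / IP(dst="192.0.2.1")
)
frame.show()  # prints the stacked headers: Ether / 802.1Q / 802.1Q / IP
```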

Figure 62: Metro Network Edge Application

Summary
The carrier-class GigE capabilities of Eclipse deliver unequalled feature and capacity options for:
- Backhaul networks.
- Metro, access, and municipal networks.
- Backhaul networks where migration from PDH to Ethernet is required now, or in the future. Up to 100xE1 circuits can be configured at the outset and migrated, as and when needed, in 2 Mbit/s steps to Ethernet. Maximum Ethernet capacity is 300 Mbit/s per link.
- RSTP networks. The built-in RWPR capability delivers carrier-class re-convergence times.
- High-capacity network connections to 1.2 Gbit/s. Two co-path links are used for 600 Mbit/s, three for 900 Mbit/s, and four for 1.2 Gbit/s. A single user interface can be presented for capacities to 1 Gbit/s using the built-in link aggregation option. 300 Mbit/s links can be paired for co-channel CCDP operation on a single 56 MHz channel.

Exceptional features include:
- Fast and Gigabit native Ethernet transport options.
- Throughputs to 200 Mbit/s or 300 Mbit/s over a single wireless connection.
- Throughputs to 600 Mbit/s over a dual wireless connection.
- Throughputs to 1200 Mbit/s over a quad wireless connection.
- High-capacity CCDP XPIC operation to deliver 600 Mbit/s (2x300 Mbit/s) on one 56 MHz radio channel, or 300 Mbit/s (2x150 Mbit/s) on one 28 MHz channel.
- Companion PDH traffic options: E1 side channels can be seamlessly included within the total payload in 2 Mbit/s steps.
- Integrated 4-port wire-speed GigE switch with three 1000Base-T ports and one optical 1000Base-LX port.
- High throughput performance and extremely low latency.
- Advanced QoS policing and prioritization capabilities.
- Congestion avoidance using pause frames.
- Carrier-class, RWPR-enhanced RSTP.
- L2 and L1 link aggregation options.
- User selection of L2 aggregation weighting (load balancing).
- Per-port Q or Q-in-Q VLAN tagging.
- Embedded RMON on all ports and channels.
- Comprehensive real-time and historical performance monitoring.
- User-friendly configuration via a rich graphical interface.
- Low, cost-competitive installation and operating costs.

Glossary

802.1d: IEEE standard for spanning tree. 802.1d-2004 defines the current RSTP iteration.
802.1p: IEEE standard for QoS traffic prioritization, using three bits in the CoS header (defined in 802.1Q) to allow switches to reorder packets based on priority level.
802.1Q: IEEE standard for virtual LANs (VLANs).
802.3ad: IEEE standard for layer 2 link aggregation.
ACAP: Alternate-channel alternate-polarization.
CCDP: Co-channel dual polarized.
CoS: Class of service; a layer 2 header in an Ethernet frame.

DiffServ: Differentiated services. A layer 3 packet header.
DSL: Digital subscriber line.
DSLAM: Digital subscriber line access multiplexer.
RFD: Rapid failure detection.
L1: Layer 1.
L2: Layer 2.
L3: Layer 3.
MSS: Maximum segment size.
MTU: Maximum transmission unit.
NTU: Network terminating unit.
PDH: Plesiochronous digital hierarchy. Asynchronous multiplexing scheme in which multiple digital synchronous circuits run at slightly different clock rates.
Phy: Physical, layer 1 level/interface.
QoS: Quality of service.
RSTP: Rapid spanning tree protocol.
RWPR™: Resilient Wireless Packet Ring.
SDH: Synchronous digital hierarchy. European standard for synchronous data communications over fiber-optic media. Transmission rates range from 51.84 Mbit/s (STM0) and 155.52 Mbit/s (STM1) through to 10+ Gbit/s.
SFP: Small-form-factor pluggable.
SLA: Service level agreement.
SONET: Synchronous optical network. North American standard for synchronous data communications over fiber-optic media. Compatible with SDH for transmission rates ranging from 51.84 Mbit/s (OC1) and 155.52 Mbit/s (OC3) through to 10+ Gbit/s.
STP: Spanning tree protocol.
TDM: Time division multiplexing. Multiple low-speed signals are multiplexed to/from a high-speed channel, with each signal assigned a fixed time slot in a fixed rotation.
ToS: Type of service; a layer 3 header in an IP packet.
VLAN: Virtual LAN. IEEE 802.1Q tagging mechanism.
WiFi: Acronym for Wireless Fidelity. WiFi is a trademark of The Wi-Fi Alliance (www.wi-fi.org).
WiMAX: Acronym for Worldwide Interoperability for Microwave Access. Interoperability brand behind the IEEE 802.16 Metropolitan Area Network standards.
XPIC: Cross-polarized interference cancellation.
