Product Description
Document Version: 30
October 2010
Notice
This document contains information that is proprietary to Ceragon Networks Ltd.
No part of this publication may be reproduced, modified, or distributed without prior written authorization
of Ceragon Networks Ltd.
This document is provided as is, without warranty of any kind.
Registered Trademarks
Ceragon Networks®, FibeAir® and CeraView® are registered trademarks of Ceragon Networks Ltd.
Other names mentioned in this publication are owned by their respective holders.
Trademarks
CeraMap™, ConfigAir™, PolyView™, EncryptAir™, CeraMon™, EtherAir™, and MicroWave Fiber™ are trademarks of Ceragon Networks Ltd.
Other names mentioned in this publication are owned by their respective holders.
Statement of Conditions
The information contained in this document is subject to change without notice.
Ceragon Networks Ltd. shall not be liable for errors contained herein or for incidental or consequential
damage in connection with the furnishing, performance, or use of this document or equipment supplied
with it.
Information to User
Any changes or modifications of equipment not expressly approved by the manufacturer could void the
user’s authority to operate the equipment and the warranty for such equipment.
Copyright © 2010 by Ceragon Networks Ltd. All rights reserved.
www.ceragon.com
Table of Contents
1 Introduction
1.1 FibeAir IP-10 G-Series main features
1.2 Applications
1.2.1 Mobile Backhaul
1.2.2 Converged Fixed/Wireless Networks
1.3 Advantages
2 Overview
2.1 System Overview
2.1.1 Interfaces
2.1.2 Available Assembly Options
2.2 RF Unit
2.3 FibeAir IP-10 Value Structure
2.4 FibeAir IP-10 Functionality
2.5 Features
2.5.1 High Spectral Efficiency
2.5.2 Native2 Microwave Radio Technology
2.5.3 Adaptive Coding & Modulation
2.5.4 Enhancing Spectral Efficiency using XPIC
2.5.5 Integrated Carrier Ethernet Switching
2.5.6 Integrated Quality of Service (QoS)
2.5.7 Intelligent Ethernet Header Compression (patent-pending)
2.5.8 Extensive Radio Capacity/Utilization Statistics
2.5.9 In-Band Management
2.5.10 Synchronization Solution
2.5.11 Integrated Nodal Solution
2.5.12 TDM Cross-Connect Unit
2.5.13 ABR - Capacity Doubling Innovation
4 Typical Configurations
4.1 Point-to-point configurations
4.1.1 1+0
4.1.2 1+1 HSB
4.1.3 1+0 with 32 E1s/T1s
4.1.4 1+0 with 64 E1s/T1s
4.1.5 2+0/XPIC Link, with 64 E1/T1s, "no Multi-Radio" Mode
4.1.6 2+0/XPIC Link, with 64 E1/T1s, "Multi-Radio" Mode
4.1.7 2+0/XPIC Link, with 32 E1/T1s + STM1/OC3 Mux Interface, no Multi-Radio, up to 168 E1/T1s over the radio
IP-10 features impressive market-leading throughput capability together with advanced networking
functionality.
Some of the key points that place IP-10 at the top of the wireless IP offerings:
Supports all licensed bands, from 6 to 38 GHz
Supports channel bandwidths from 3.5 MHz to 56 MHz
Supports throughputs from 10 to 500 Mbps per radio carrier (QPSK to 256 QAM)
Incorporates advanced integrated Ethernet switching capabilities
In addition, using unique Adaptive Coding & Modulation (ACM), your network benefits from non-stop,
dependable capacity delivery.
IP-10 G-Series
Supported radio configurations: 1+0, 1+1 HSB, 1+1 SD/FD, 2+0 with XPIC, 2+2 HSB with XPIC
XPIC option: Yes
Max radio capacity: 500 Mbps (1 Gbps using 2+0/XPIC)
Multi-radio support: 2+0 and 2+2 HSB
# of Ethernet interfaces: 5 x FE (RJ-45) + 2 x GE combo (RJ-45/SFP)
Full Carrier Ethernet switching feature-set, including ring protection: Yes
# of E1/T1 integrated IDU interfaces (option): 16 E1, 16 T1, or none
# of E1/T1s per radio carrier: 84 E1/T1s
T-Card slot (additional 16 E1/T1 interfaces or STM1/OC3 Mux): Yes
Nodal/XC/SNCP 1+1 support: Yes
ABR (SNCP 1:1) support: Yes
Sync unit option: Yes
V.11/RS232 User Channel option: 2 x Async V.11/RS232 or 1 x Sync V.11
For Cellular Networks, FibeAir IP-10 family supports both Ethernet and TDM for cellular backhaul
network migration to IP, within the same compact footprint. The system is suitable for all migration
scenarios where carrier-grade Ethernet and legacy TDM services are required simultaneously.
For WiMAX Networks, the FibeAir IP-10 family enables connectivity between WiMAX base stations and
facilitates the expansion and reach of emerging WiMAX networks. FibeAir IP-10 provides a robust and
cost-efficient solution with advanced native Ethernet capabilities.
FibeAir IP-10 family offers cost-effective, high-capacity connectivity for carriers in cellular, WiMAX and
fixed markets. The FibeAir IP-10 platform supports multi-service and converged networking
requirements for both legacy and the latest data-rich applications and services.
Ceragon's FibeAir IP-10 delivers integrated high-speed data, video and voice traffic in the most
cost-effective manner. Operators can leverage FibeAir IP-10 to build a converged network
infrastructure based on high-capacity microwave to support multiple types of service.
FibeAir IP-10 is fully compliant with the MEF-9 and MEF-14 standards for all service types (EPL, EVPL and
E-LAN), making it the ideal platform for operators looking to provide high-capacity Carrier Ethernet
services that meet customers' demand for coverage and stringent SLAs.
[Figure: IP-10 front panel — fans; TDM interfaces drawer add-on slot (16 x E1/T1s); craft terminal; external alarms (DB9); GND; RFU interface (N-Type); -48V DC power; protection interface (DB9); 2 x GE "combo" ports, electrical (RJ45) or optical (SFP); 5 x FE electrical ports (RJ45); engineering order-wire (optional); user channel (optional)]
Main Interfaces:
5 x 10/100Base-T
2 x GbE combo ports: 10/100/1000Base-T or SFP 1000Base-X
16 x T1/E1 (optional)
RFU/ODU interface, N-type connector
Additional Interfaces:
TDM T-Card Slot options:
16 x E1
16 x T1
1 x STM-1/OC-3
The T-cards are field-upgradable, and add a new dimension to the FibeAir IP-10 migration flexibility.
AUX package (optional):
TDM options:
o Ethernet only (no TDM)
o Ethernet + 16 x E1 + T-Card Slot
o Ethernet + 16 x T1 + T-Card Slot
Sync unit
XPIC support
With or without AUX package - EOW, User channel
Figure 3: FibeAir IP-10 – functional block diagram
At the heart of the IP-10 solution is Ceragon's market-leading Native2 microwave technology.
With this technology, the microwave carrier supports native IP/Ethernet traffic together with optional
native PDH. Neither traffic type is mapped over the other, while both dynamically share the same overall
bandwidth.
This unique approach allows you to plan and build optimal all-IP or hybrid TDM-IP backhaul networks,
making it ideal for any RAN (Radio Access Network) evolution path selected by the wireless
provider (including green-field 3.5G/4G all-IP installations).
In addition, Native2 ensures:
Very low link latency: <0.15 msec @ 400 Mbps
Very low-overhead mapping of both Ethernet and TDM traffic to the microwave radio frame
High-precision native TDM synchronization distribution
Figure 5: Native2 Microwave Radio Technology
ACM maintains the highest possible modulation, from QPSK up to 256 QAM, as environmental conditions change.
The benefits of this dynamic feature include:
Maximized spectrum usage
Increased capacity over a given bandwidth
8 modulation/coding work points (~3 dB system gain for each point change)
Supports both Ethernet and E1/T1 traffic
Hitless and errorless modulation/coding changes, based on signal quality
Adaptive Radio Tx Power per modulation for maximal system gain per working point
Configurable drop priority between E1/T1 traffic and Ethernet traffic
An integrated QoS mechanism enables intelligent congestion management to ensure that your
high priority traffic is not affected during link fading
Each E1/T1 is assigned a priority to enable differentiated E1/T1 dropping during severe link
degradation
XPIC (Cross Polarization Interference Canceller) is one of the best ways to break the barriers of spectral
efficiency. Using dual-polarization radio over a single-frequency channel, the system transmits two
separate carrier waves over the same frequency, but on orthogonal polarizations. Despite its obvious
advantages, one must keep in mind that typical antennas cannot completely isolate the two polarizations.
The relative level of interference is referred to as cross-polarization discrimination (XPD). While lower
spectral-efficiency systems (with low SNR requirements, such as QPSK) can easily tolerate such
interference, higher modulation schemes cannot, and require a cross-polarization interference canceller
(XPIC). The XPIC algorithm allows detection of both streams even under XPD levels as poor as 10 dB.
This is done by adaptively subtracting from each carrier the interfering cross carrier, at the right
phase and level. For high-modulation schemes such as 256 QAM, an improvement factor of more than 20
dB is required so that cross-interference no longer limits performance. XPIC implementation involves
system complexity and cost, since each demodulator must cancel the other channel's interference.
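Conceptually, the cancellation can be shown with a toy numeric example. This is not Ceragon's algorithm — real XPIC estimates the coupling adaptively in phase and level — but a sketch assuming the cross-coupling coefficient is known and equal on both polarizations:

```python
# Illustrative XPIC cancellation with known, static cross-coupling.
# Real XPIC estimates the coupling adaptively; here the coefficient is
# given, so recovery reduces to solving a 2x2 linear system.

def xpic_recover(r_h: complex, r_v: complex, c: complex) -> tuple[complex, complex]:
    """Recover (H, V) from received r_h = H + c*V and r_v = V + c*H."""
    det = 1 - c * c                      # determinant of [[1, c], [c, 1]]
    h = (r_h - c * r_v) / det            # subtract the scaled cross carrier
    v = (r_v - c * r_h) / det
    return h, v

if __name__ == "__main__":
    H, V = (1 + 2j), (-3 + 0.5j)         # transmitted symbols on each polarization
    c = 0.3                              # leakage coefficient (XPD ~ -20*log10(c) ~ 10.5 dB)
    r_h, r_v = H + c * V, V + c * H      # what each receiver actually sees
    h, v = xpic_recover(r_h, r_v, c)
    print(abs(h - H) < 1e-9 and abs(v - V) < 1e-9)  # True
```

The subtraction works because r_h - c*r_v = H(1 - c²): the interfering V term cancels exactly when the coupling estimate matches the channel.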
MEF certification:
- MEF-9 & MEF-14 certified for all service types (EPL, EVPL and E-LAN)

Capacity and scalability:
- Up to 500 Mbps per radio carrier
- Up to 1 Gbps per channel (with XPIC)
- Multi-Radio support
- Integrated non-blocking switch with 4K VLANs
- 802.1ad provider bridges (QinQ)
- Scalable nodal solution
- Scalable networks (1000's of NEs)

Advanced QoS:
- Advanced CoS classification capabilities
- Advanced traffic policing/rate-limiting
- CoS-based packet queuing/buffering with 8 queues
- Hierarchical scheduling schemes
- Traffic shaping
- Tail-drop or WRED
- Color-awareness (CIR/EIR support)

Reliability:
- Highly reliable & integrated design
- Fully redundant 1+1/2+2 HSB & nodal configurations
- Hitless ACM (QPSK – 256 QAM) for enhanced radio link availability
- RSTP/MSTP
- Wireless Ethernet Ring/Mesh support
- 802.3ad link aggregation
- Fast link state propagation, <50 msec restoration time (typical)

Management:
- Extensive multi-layer management
- Ethernet service OA&M – 802.1ag and Y.1731
- Advanced Ethernet radio statistics support
IP-10 integrated QoS enables support for differentiated Ethernet services with SLA assurance.
Two levels of QoS are supported – “Standard QoS” and “Enhanced QoS”.
The table below lists the main QoS features supported.
Ingress traffic rate-limiting (policing):
- Standard QoS: per port, CoS and traffic type (Broadcast, Multicast, etc.)
- Enhanced QoS: per port, CoS and traffic type (Broadcast, Multicast, etc.)

Scheduling method:
- Standard QoS: SP, WRR or Hybrid
- Enhanced QoS: hierarchical scheduling — 4 scheduling priorities + WFQ between queues in the same priority

Ethernet statistics:
- Standard QoS: RMON (transmitted & dropped frames)
- Enhanced QoS: RMON, plus statistics per CoS queue

CIR/EIR support:
- Standard QoS: CIR only
- Enhanced QoS: CIR + EIR ("color-awareness")
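The CIR/EIR distinction can be sketched as a color-aware, two-rate token-bucket policer (in the spirit of the standard two-rate three-color marker; this is not the IP-10 implementation, and bucket refill over time is omitted for brevity):

```python
# Sketch of color-aware CIR/EIR policing with two token buckets.
# Green traffic is within the committed rate, yellow within the excess
# rate, red out of profile. Bucket refill is omitted for brevity.
from dataclasses import dataclass

@dataclass
class Policer:
    cir_tokens: float   # committed-rate bucket (bytes)
    eir_tokens: float   # excess-rate bucket (bytes)

    def mark(self, frame_bytes: int) -> str:
        if self.cir_tokens >= frame_bytes:
            self.cir_tokens -= frame_bytes
            return "green"            # within CIR: delivery committed
        if self.eir_tokens >= frame_bytes:
            self.eir_tokens -= frame_bytes
            return "yellow"           # within EIR: forwarded, dropped first on congestion
        return "red"                  # out of profile: discarded

p = Policer(cir_tokens=3000, eir_tokens=1500)
print([p.mark(1500) for _ in range(4)])
# ['green', 'green', 'yellow', 'red']
```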
Intelligent Ethernet Header Compression improves effective throughput by up to 45% without affecting
user traffic.
Ethernet packet size (bytes)    Capacity increase by compression
64 45%
96 29%
128 22%
256 11%
512 5%
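The table's figures are consistent with a roughly constant per-frame byte saving. A minimal sketch, assuming a hypothetical fixed saving of 20 bytes per frame (this reproduces the 45% figure for 64-byte frames; the larger packet sizes in the table imply a slightly larger saving):

```python
# Relative capacity gain from shaving a (hypothetically) fixed number of
# header bytes off every frame: gain = original_size / compressed_size - 1.
def compression_gain(frame_bytes: int, saved_bytes: int = 20) -> float:
    compressed = frame_bytes - saved_bytes
    return frame_bytes / compressed - 1.0

for size in (64, 96, 128, 256, 512):
    print(size, f"{compression_gain(size):.0%}")
```

As the loop shows, the relative gain shrinks as packets grow, which is why header compression matters most for small-packet traffic such as voice.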
IP-10 can optionally be managed in-band, via its radio and Ethernet interfaces. This method of
management eliminates the need for a dedicated interface and network. In-band management uses a
dedicated management VLAN, which is user-configurable.
FibeAir IP-10 synchronization solution ensures maximum flexibility by enabling the operator to select
any combination of techniques suitable for the network.
Any combinations of the following techniques can be used:
Synchronization using native E1/DS1 trails
"PTP-optimized" transport:
o Supports IEEE-1588, NTP, etc.
o Guaranteed ultra-low PDV (<0.05 msec per hop)
o Unique support for ACM and narrow channels
SyncE support (G.8262)
The Nodal solution features integrated Native2 networking functionality between all ports/radios, with
native Ethernet switching and native E1/T1 cross-connect, up to 84 E1s or 84 T1s per radio carrier, and
full high-availability support covering cross-connect/switching elements, control/management elements,
radio carriers, and TDM/Ethernet interfaces.
Single IP address
The FibeAir IP-10 Cross Connect (XC) is a high-speed circuit connection scheme for transporting TDM
traffic from any given port "x" to any given port "y".
The system is composed of several inter-connected (stacked) IDUs, with integrated and centralized TDM
traffic switching.
The XC capacity is 180 E1 VCs (Virtual Containers) or 180 T1 VCs, whereby each E1/T1 interface or
"logical interface" in a radio in any unit of the stack can be assigned to any VC.
Integrated TDM Cross Connect is performed by defining end to end trails. Each trail consists of segments
represented by Virtual Containers (VCs). The XC functions as the forwarding mechanism between the
two ends of a trail.
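A trail can be thought of as a VC-indexed mapping between two endpoints, with the XC forwarding between them. The sketch below is a hypothetical model of that bookkeeping — the port-naming scheme and API are illustrative; only the 180-VC capacity comes from the text:

```python
# Minimal sketch of trail-based TDM cross-connect: each trail is a pair of
# endpoints, and the XC forwards between them via an assigned VC number.

class CrossConnect:
    def __init__(self, capacity: int = 180):             # 180 E1 (or T1) VCs
        self.capacity = capacity
        self.trails: dict[int, tuple[str, str]] = {}     # vc -> (port_x, port_y)

    def add_trail(self, vc: int, port_x: str, port_y: str) -> None:
        if not 1 <= vc <= self.capacity:
            raise ValueError("VC out of range")
        self.trails[vc] = (port_x, port_y)

    def forward(self, vc: int, ingress: str) -> str:
        """Return the egress port at the far end of the trail."""
        a, b = self.trails[vc]
        return b if ingress == a else a

xc = CrossConnect()
xc.add_trail(7, "idu1/e1-3", "idu2/radio-12")   # E1 port to a radio "logical interface"
print(xc.forward(7, "idu1/e1-3"))               # idu2/radio-12
```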
Ceragon's native support for TDM traffic leverages the resiliency advantages of wireless SDH rings, with
their intrinsic Sub-Network Connection Protection (SNCP) path-protection capabilities. In SNCP,
information is redundantly transmitted on the ring in both "east" and "west" directions, while the receiver
selects which transmission to receive.
In today's super-competitive mobile industry, many carriers wish to reallocate the redundant protection
bandwidth for other uses, such as low-priority, high-volume data transfer. The benefits are clear –
exciting sales opportunities arise as newly-generated capacity can be sold to support the interpersonal
communications shift to Facebook, as well as the ever-growing demand for YouTube access.
No less importantly, this reallocation of bandwidth from TDM to Ethernet – and back – must be risk-free,
with no interruption of revenue-generating services.
In response to the needs described above, Ceragon proposes a novel approach to improve the efficiency of
ring-based protection, using a technique called Protected Adaptive Bandwidth Recovery (“ABR”),
which enables full utilization of the bidirectional capabilities inherent in ring technologies. With ABR,
the TDM-based information is transmitted in one direction only, while the unused protection capacity is
allocated for Ethernet traffic. In the event of a failure, the unused capacity is re-allocated for TDM
transmission.
This technique extends the Native2 approach to dynamic allocation of link capacity between TDM and
Ethernet flows to the network level.
[Figure: ABR capacity doubling — left: E1 alternate path reserved & allocated alongside the E1 main path; right: E1 alternate path reserved with no allocated bandwidth, freeing bandwidth for broadband]
Standard SNCP:
• Each E1/T1 flow consists of a primary and a protection path
• Both paths RESERVE & ALLOCATE capacity
• All allocated bandwidth is consumed and cannot be used by other applications

Protected ABR:
• Each E1 flow consists of a primary and a protection path
• Capacity is RESERVED but NOT ALLOCATED; allocation happens only on demand, during a failure
• In normal state, the primary path consumes capacity while the rest can be used for other applications, such as mobile broadband
Adaptive Coding and Modulation refers to the automatic adjustment that a wireless system can make in
order to optimize over-the-air transmission and prevent weather-related fading from causing
communication on the link to be disrupted. When extreme weather conditions, such as a storm, affect the
transmission and receipt of data and voice over the wireless network, an ACM-enabled radio system
automatically changes modulation allowing real-time applications to continue to run uninterrupted.
Varying the modulation also varies the number of bits transferred per symbol, thereby enabling
higher throughputs and better spectral efficiency. For example, 256 QAM modulation can deliver
approximately four times the throughput of 4 QAM (QPSK).
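The four-times figure follows directly from the bits carried per symbol:

```python
# Bits carried per symbol for a QAM constellation: log2(points).
# 256 QAM carries 8 bits/symbol vs. 2 for 4 QAM (QPSK) -> ~4x throughput
# at the same symbol rate, before coding overhead.
from math import log2

def bits_per_symbol(constellation_points: int) -> int:
    return int(log2(constellation_points))

print(bits_per_symbol(256) / bits_per_symbol(4))   # 4.0
```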
Ceragon Networks employs full-range dynamic ACM in its new line of high-capacity wireless backhaul
products, FibeAir IP-10. In order to ensure high transmission quality, Ceragon solutions implement
hitless/errorless ACM that copes with fading of up to 90 dB per second. A quality-of-service awareness
mechanism ensures that high-priority voice and data packets are never dropped, thus maintaining
even the most stringent service level agreements (SLAs).
The hitless/errorless functionality of Ceragon's ACM has another major advantage: it ensures that
TCP/IP sessions do not time out. Lab simulations have shown that short fades (for example, when a
system must briefly interrupt the signal to switch between modulations) may lead to timeout of the
TCP/IP sessions, even when the interruption lasts only 50 milliseconds. TCP/IP timeouts are followed
by a drastic throughput decrease over the time it takes for the TCP sessions to recover, which may be
as long as several seconds. A hitless/errorless ACM implementation avoids this problem.
So how does it really work? Let's assume a system configured for 128 QAM with ~170 Mbps capacity
over a 28 MHz channel. When the received-signal Bit Error Ratio (BER) reaches a predetermined
threshold, the system preemptively switches to 64 QAM, and throughput steps down to ~140 Mbps.
This is an errorless, virtually instantaneous switch. The system then runs at 64 QAM until the fading
condition either intensifies or disappears. If the fade intensifies, another switch takes the system down
to 32 QAM. If, on the other hand, the weather condition improves, the modulation is switched back to
the next higher step (e.g. 128 QAM), and so on, step by step. The switching continues automatically and
as quickly as needed, and can reach all the way down to QPSK during extreme conditions.
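The stepping behavior described above can be sketched as a small state machine. The ~170/~140 Mbps figures for 128/64 QAM at 28 MHz come from the text; the rest of the capacity ladder and the BER thresholds are illustrative, and real ACM decisions use signal-quality metrics beyond a single BER reading:

```python
# Sketch of ACM stepping: move one modulation step at a time as the
# measured BER crosses a threshold. The 128/64 QAM capacities follow the
# text (~170/~140 Mbps @ 28 MHz); the rest of the ladder and the
# thresholds are illustrative.

LADDER = [("QPSK", 45), ("16QAM", 90), ("32QAM", 115),
          ("64QAM", 140), ("128QAM", 170), ("256QAM", 200)]
BER_STEP_DOWN = 1e-8     # pre-emptive step-down threshold (illustrative)
BER_STEP_UP = 1e-10      # hysteresis: step back up only when well clear

def next_step(index: int, ber: float) -> int:
    if ber >= BER_STEP_DOWN and index > 0:
        return index - 1                     # fade intensifying: drop a step
    if ber <= BER_STEP_UP and index < len(LADDER) - 1:
        return index + 1                     # conditions improved: climb a step
    return index

i = 4                                        # start at 128 QAM / ~170 Mbps
for ber in (1e-7, 1e-7, 1e-12):              # fade, deeper fade, recovery
    i = next_step(i, ber)
    print(LADDER[i])
```

Stepping one level at a time, with hysteresis between the two thresholds, prevents the system from oscillating between adjacent modulations near the switching point.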
[Figure: ACM capacity steps in Mbps (@ 28 MHz channel) vs. unavailability]
Ceragon's Adaptive Modulation has a remarkable synergy with the equipment's built-in Layer 2 Quality
of Service mechanism. Since QoS provides priority support for different classes of service, according to a
wide range of criteria (see below), it is possible to configure the system to discard only low-priority
packets as conditions deteriorate. The FibeAir IP-10 platform can classify packets according to the
outermost header: VLAN 802.1p, IP TOS/TC (IP precedence), and VLAN ID. All classes use 4 levels of
prioritization, with user-selectable options between strict-priority queuing and weighted fair queuing
with user-configurable weights.
If the user wishes to rely on the QoS of external switches, Adaptive Modulation can work with them via
the flow-control mechanism supported in the radio.
When planning ACM-based radio links, the radio planner attempts to apply the lowest transmit power
that will perform satisfactorily at the highest level of modulation. During fade conditions requiring a
modulation drop, most radio systems cannot increase transmit power to compensate for the signal
degradation, resulting in a deeper reduction in capacity. Ceragon's FibeAir IP-10 is capable of adjusting
power on the fly, optimizing the available capacity at every modulation point, as illustrated in Figure 8
below. The diagram shows that with plain ACM, operators that want to benefit from high levels of
modulation (say, 256 QAM) have to settle for low system gain, in this case 18 dB, for all the other
modulations as well. With FibeAir IP-10, power levels are adjusted automatically, achieving the extra
4 dB of system gain required to maintain optimal throughput levels under all conditions.
Figure 8: Ceragon’s unique ACM with Adaptive Power vs. plain ACM
Another unique advantage of the FibeAir system is its ability to use these sophisticated adaptive
techniques in a hybrid TDM/packet model as well. With Ceragon's innovative Native2 migration solution,
in which TDM and Ethernet traffic is natively and simultaneously carried over a single microwave link,
both E1/DS1 and Ethernet services can have configurable priority. When more than one E1/DS1 channel
is connected to a cell site, one of the channels can be given a higher priority in order to maintain network
synchronization as well as a minimum level of service. The rest of the E1/DS1 channels may be
forwarded at a lower priority.
Figure 9: Ceragon's unique Adaptive Coding & Modulation adaptation for TDM
There are substantial benefits to be reaped from applying ACM in TDM networks as well. An operator
may increase capacity on an existing link while maintaining the same availability for its existing revenue-
generating services. Additional data E1/DS1s are easily offloaded in this virtual link to a channel offering
slightly lower availability. Optimally, one E1/DS1 can be given a higher priority connection to maintain
synchronization and a minimum level of service at all times (higher than five-9s).
The rest of the E1s/DS1s may be associated with a lower priority. When migrating to a packet network,
this model can still be effectively applied. It is important to note that it is possible to define packet-based
services at a higher priority than for TDM services, as some real-time services may run on new Ethernet
ports, while other, best-effort data services are forwarded over legacy TDM networks.
When operating in a dual-carrier configuration, the system can optionally be configured to work in
"multi-radio" mode.
In this mode, traffic is divided between the two carriers optimally at the radio frame level, without
requiring Ethernet Link Aggregation, and independently of the number of MAC addresses, the number
of traffic flows, or their momentary traffic capacity. During fading events that cause ACM modulation
changes, each carrier fluctuates independently, with hitless switchovers between modulations,
increasing capacity over a given bandwidth and maximizing spectrum utilization.
The result is 100% utilization of radio resources; traffic load is balanced based on instantaneous radio
capacity per carrier and is independent of data/application characteristics (# of flows, capacity per flow
etc.).
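The proportional split can be sketched as follows; the offered load and per-carrier capacities are illustrative:

```python
# Load balancing across two carriers in proportion to their instantaneous
# radio capacity (a frame-level split, independent of flows and MACs).

def split_load(offered_mbps: float, cap1: float, cap2: float) -> tuple[float, float]:
    total = cap1 + cap2
    sendable = min(offered_mbps, total)          # excess is handled by QoS queues
    return sendable * cap1 / total, sendable * cap2 / total

# Both carriers at full modulation:
print(split_load(800, 500, 500))        # (400.0, 400.0)
# Carrier 2 faded to a lower modulation; carrier 1 takes more of the load:
print(split_load(800, 500, 300))        # (500.0, 300.0)
```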
[Figure: 2+2 (protected) Multi-Radio configuration — GE/FE traffic carried over F1 + F2, up to 1 Gbps; FE connection for HSB protection signaling]
3.3.1 Implementation
In a single-channel application, when an interfering channel is transmitted in the same bandwidth as the
desired channel, the resulting interference may degrade the BER of the desired channel.
The ETSI standard specifies that for systems that carry a bit rate of STM-1 (155Mb/s) over a channel
separation of 27.5 MHz, the required co-channel interference sensitivity is 37 dB. (ETSI EN 302 217-2-2
V1.1.3 (2004-12), section D.4.3) This means that if the interfering channel is 37 dB below the desired
channel, the receiver will be at a threshold of BER=10e-6.
Ceragon products support a co-channel sensitivity of 33 dB at a BER of 10e-6. When applying XPIC, in
order to prevent interference between the two transmitters, the system transmits the data using two
polarizations: horizontal and vertical. These polarizations are, in theory, orthogonal to each other, as
shown in the figure below.
Note that at the right side of the figure, "CarrierR" receives the "H+v" signal, which is the combination
of the desired signal "H" (horizontal) and the interfering signal "v" (in lower case, to denote that it is
the interfering signal). The same happens in "CarrierL" = "V+h". The XPIC mechanism takes the data
from "CarrierR" and "CarrierL" and, using a cost function, produces the desired data.
According to the ETSI standard, the limits of the co-channel interference sensitivity are 17 dB at 1 dB
degradation and 13 dB at 3 dB degradation, for the system to be at a BER of 10e-6 (ETSI EN 302 217-2-1
V1.1.3 (2004-12), section 6.5.2.1).
Ceragon XPIC reaches a BER of 10e-6 at a co-channel sensitivity of 5 dB. The improvement factor in an
XPIC system is defined as the difference in SNR at the 10e-6 threshold with and without the XPIC
mechanism.
XPIC radio may be used to deliver two separate data streams, such as 2xSTM1 or 2xFE, as shown in
Figure 13a. But it can also deliver a single stream of information, such as gigabit Ethernet or STM-4, as
shown in Figure 13b. The latter case requires a de-multiplexer to split the stream into two transmitters,
and a multiplexer to rejoin it with the right timing, because the different channels may experience
different delays. This feature is called "Multi-radio".
Figure 13: (a) XPIC system delivering two independent data streams.
(b) XPIC system delivering a single data stream (multi-radio).
Carrier Ethernet is a high speed medium for MANs (Metro Area Networks). It defines native Ethernet
packet access to the Internet and is today being deployed more and more in wireless networks.
The first native Ethernet services to emerge were point to point-based, followed by emulated LAN
(multipoint to multipoint-based). Services were first defined and limited to metro area networks. They
have now been extended across wide area networks and are available worldwide from many service
providers.
The term "carrier Ethernet" implies that Ethernet services are "carrier grade". The benchmark for carrier
grade was set by the legacy TDM telephony networks, to describe services that achieve "five nines
(99.999%)" uptime. Although it is debatable whether carrier Ethernet will reach that level of reliability,
the goal of one particular standards organization is to accelerate the development and deployment of
services that live up to the name.
Carrier Ethernet is poised to become the major component of next-generation metro area networks, which
serve as the aggregation layer between customers and core carrier networks. A metro Ethernet network,
which uses IP Layer 3 MPLS forwarding, is currently the primary focus of carrier Ethernet activity.
The standard service types for Carrier Ethernet include:
E-Line Service: This service is employed for Ethernet private lines, virtual private lines, and
Ethernet Internet access.
Ceragon's FibeAir IP-10 includes a built-in Carrier Ethernet switch. The switch operates in one of two
modes:
Carrier Ethernet Switch - Carrier Ethernet is active.
The Metro Ethernet Forum (MEF) runs a Certification Program with the aim of promoting the
deployment of Carrier Ethernet in Access Networks, MANs, and WANs. The program offers certification
for Carrier Ethernet equipment supplied to service providers.
The program covers the following areas:
MEF-9: Service certification
MEF-14: Traffic management and service performance
FibeAir IP-10 is fully MEF-9 & MEF-14 certified for all Carrier Ethernet services (E-Line & E-LAN).
Standardized Services: MEF-9 and MEF-14 certified for all service types (EPL, EVPL, and E-LAN)
Scalability:
- Up to 500 Mbps per radio carrier
- Integrated non-blocking switch with 4K VLANs
- 802.1ad provider bridges (QinQ)
- Scalable nodal solution
- Scalable networks (1000s of NEs)
3.4.4.1 Overview
QoS is a method of classification and scheduling employed to ensure that Ethernet packets are forwarded
and discarded according to their priority.
QoS works by slowing unimportant packets down, or, in cases of extreme network traffic, discarding
them entirely. This leaves room for important packets to reach their destination as quickly as possible.
Basically, once the router knows how much data it can queue on the modem at any given time, it can
"shape" traffic by delaying unimportant packets and "filling the pipe" with important packets first, then
using any leftover space to fill the pipe in descending order of importance.
Since QoS cannot speed up packets, it takes the total available upstream bandwidth, calculates how much
of the highest priority data it has, puts that in the buffer, and then goes down the line in priority until it
runs out of data to send, or the buffer fills up. Any excess data is held back or "re-queued" at the front of
the line, where it will be evaluated in the next pass.
Importance is determined by the priority of the packet. The number of levels depends on the router. As
the names imply, Low/Bulk priority packets get the lowest priority, while High/Premium packets get the
highest priority.
QoS packets may be prioritized by a number of criteria, including priorities generated by the applications
themselves, but the most common techniques are MAC address, Ethernet port, and TCP/IP port.
Two levels of QoS are supported in IP-10 – “Standard QoS” and “Enhanced QoS”.
The FibeAir IP-10 platform stores and displays statistics in accordance with RMON and RMON2
standards.
The following groups of statistics can be displayed:
Ingress line receive statistics
Ingress radio transmit statistics
Egress radio receive statistics
Egress line transmit statistics
The statistics that can be displayed within each group include the following:
Ingress Line Receive Statistics
Sum of frames received without error
Sum of octets of all valid received frames
Number of frames received with a CRC error
Number of frames received with alignment errors
Number of valid received unicast frames
Number of valid received multicast frames
Number of valid received broadcast frames
Number of packets received with less than 64 octets
Number of packets received with more than 12000 octets (programmable)
Frames (good and bad) of 64 octets
Frames (good and bad) of 65 to 127 octets
Frames (good and bad) of 128 to 255 octets
Frames (good and bad) of 256 to 511 octets
Frames (good and bad) of 512 to 1023 octets
Frames (good and bad) of 1024 to 1518 octets
Frames (good and bad) of 1519 to 12000 octets
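The frame-size counters listed above can be viewed as an RMON-style histogram. The following is an illustrative sketch (not the device's implementation) that accumulates received frame lengths into those buckets, using RMON's standard 128-255 boundary and the programmable 12000-octet upper limit:

```python
# Accumulate frame lengths into RMON-style size buckets, plus the undersized
# (<64 octets) and oversized (>12000 octets, programmable) counters.
BUCKETS = [(64, 64), (65, 127), (128, 255), (256, 511),
           (512, 1023), (1024, 1518), (1519, 12000)]

def bucket_counts(frame_lengths):
    counts = {f"{lo}-{hi}": 0 for lo, hi in BUCKETS}
    undersized = oversized = 0
    for n in frame_lengths:
        if n < 64:
            undersized += 1            # "less than 64 octets" counter
        elif n > 12000:
            oversized += 1             # "more than 12000 octets" counter
        else:
            for lo, hi in BUCKETS:
                if lo <= n <= hi:
                    counts[f"{lo}-{hi}"] += 1
                    break
    return counts, undersized, oversized

counts, under, over = bucket_counts([60, 64, 100, 300, 1518, 9000, 15000])
```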
Ingress Radio Transmit Statistics
Sum of frames transmitted to radio
Sum of octets transmitted to radio
Number of frames dropped
Egress Radio Receive Statistics
Sum of valid frames received by radio
Sum of octets of all valid received frames
Sum of all frames received with errors
Egress Line Transmit Statistics
3.4.7.1 Overview
FibeAir IP-10 provides complete OA&M functionality at multiple layers, including:
Alarms and events
Maintenance signals (LOS, AIS, RDI, …)
Performance monitoring
Maintenance commands (Loopbacks, APS commands, …)
The following is a series of illustrations showing how FibeAir IP-10 is used to facilitate Carrier Ethernet
Services. The second and third illustrations show how IP-10 handles a node failure.
Carrier Ethernet Services Based on IP-10
Figure 26: Carrier Ethernet Services Based on IP-10 - Node Failure (continued)
Each IDU can be configured as a "main" or "extension" unit. The role an IDU plays is determined during
installation by its position in the traffic interconnection topology.
A main unit includes the following functions:
Central controller, management
TDM traffic cross-connect
Radio and line interfaces
An extension unit includes the following functions:
Radio and line interfaces
IP-10 design for the nodal solution is based on a "blade" approach. Viewing the unit from the rear, each
IDU can be considered a "blade" within a nodal enclosure. The same IP-10 unit can be used for both
terminal and nodal solutions.
For migration, the stacking concept offers an optimized tail-site solution and a low initial footprint
requirement for node sites. Additional footprint is required only gradually, as legacy equipment is
swapped out.
For greenfield deployments, the stacking concept offers a low initial investment without compromising
future growth potential, and risk-free deployment in the face of unknown future growth patterns,
including additional capacity, additional sites, and additional redundancy.
IP-10 can be stacked using 2RU nodal enclosures. Each enclosure includes two slots for hot-swappable
1RU units. Additional nodal enclosures and units can be added in the field as required, without affecting
traffic. Up to six 1RU units (three enclosures) can be stacked to form a single unified nodal device.
Using the stacking method, units in the bottom nodal enclosure act as main units, whereby a mandatory
active main unit can be located in either of the two slots, and an optional standby main unit can be
installed in the other slot. The switchover time is <50 msecs for all traffic affecting functions. Units
located in nodal enclosures other than the one on the bottom act as expansion units.
Radios in each pair of units can be configured as either dual independent 1+0 links, or single fully-
redundant 1+1 HSB links.
The following photos show the Nodal Enclosures and how they are stacked.
The nodal enclosure is a scalable unit. Each enclosure can be added to another enclosure for modular rack
installation.
The nodal solution management enables users to control the node as an integrated system, and provides
the means for the exchange of information between the IDUs in the stack.
The node is managed in an integrated manner through centralized management channels. The main unit's
control CPU is the node's central controller, and all management frames received from or sent to external
management applications must pass through it.
The node has a single IP management address, which is the address of the main unit (two addresses in the
case of main unit protection).
Several methods can be used for IP-10 node management:
Local terminal CLI
CLI via telnet
Web-based management
SNMP
PolyView NMS represents the node as a single unit
The Web EMS allows access to all IDUs in the stack from the main window
In addition, the management system provides access to other network equipment through in-band or out-
of-band network management.
To ease the reading and analysis of alarms and logs from several IDUs, the system time should be
synchronized to the main unit's time.
Feature Configuration
Some features are configured through the main unit only: TDM XC, user registration, login, and alarms.
Other features are configured individually in each extension unit: radio parameters and Ethernet switch
configuration.
Ethernet traffic in a nodal configuration is supported by interconnecting IDU switches with external
cables. Traffic flow (dropping to local ports, sending to radio) is performed by the switches, in
accordance with learning tables.
Each IDU in the stack can individually be configured for "smart pipe" or "carrier Ethernet switch" modes.
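The learning-table forwarding mentioned above can be sketched as follows. This is an illustrative model of a generic learning switch, not the IP-10 switch implementation; the port names are invented:

```python
# A learning switch records which port each source MAC was seen on, forwards
# known destinations to the learned port (e.g. a local line port or the radio),
# and floods unknown destinations to all other ports.
def make_switch(ports):
    table = {}                                     # MAC -> port learning table
    def forward(src_mac, dst_mac, in_port):
        table[src_mac] = in_port                   # learn the sender's port
        if dst_mac in table:
            return [table[dst_mac]]                # known destination: single port
        return [p for p in ports if p != in_port]  # unknown destination: flood
    return forward

forward = make_switch(["line1", "line2", "radio"])
assert forward("aa:01", "ff:ff", "line1") == ["line2", "radio"]  # unknown: flood
forward("bb:02", "aa:01", "radio")                               # learns bb:02 on radio
assert forward("aa:01", "bb:02", "line1") == ["radio"]           # learned entry used
```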
E1/T1 VC (Virtual Container) trails are supported, based on the integrated E1/T1 cross-connect. The XC
(cross-connect) function is performed by the active main unit. If a failure occurs, the backup main unit
takes over (<50 msecs down time). The XC capacity is 180 E1 VCs or 180 T1 VCs.
Each E1/T1 interface or "logical interface" in a radio in any unit in the stack can be assigned to any VC.
The XC is performed between two interfaces or "logical interfaces" with the same VC. XC functionality
is fully flexible. Any pair of E1/T1 interfaces, or radio "logical interfaces", can be connected. Each VC is
timed independently by the XC.
Integrated TDM Cross Connect is performed by defining end to end trails. Each trail consists of segments
represented by Virtual Containers (VCs). The XC functions as the forwarding mechanism between the
two ends of a trail.
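The trail-based forwarding described above can be modeled as a bidirectional map between (interface, VC) endpoints. The sketch below is hypothetical; the trail and interface names are invented, with "Trail C" taken from the example in the text:

```python
# The XC as a forwarding map: each trail connects two (interface, VC) endpoints,
# and the cross-connect forwards in both directions between them.
def build_xc(trails):
    """trails: dict trail_id -> ((iface_a, vc_a), (iface_b, vc_b))."""
    fwd = {}
    for end_a, end_b in trails.values():
        fwd[end_a] = end_b          # forward direction
        fwd[end_b] = end_a          # reverse direction
    return fwd

xc = build_xc({
    "Trail C": (("Radio 1", 3), ("Radio 4", 1)),   # the example from the text
    "Trail A": (("E1 port 5", 2), ("Radio 2", 7)), # hypothetical second trail
})
assert xc[("Radio 1", 3)] == ("Radio 4", 1)        # Trail C, as illustrated
```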
Basic XC Operation
As shown in the illustration, trails are defined from one end of a line to the other. The XC forwards
signals generated by the radios to/from the IDUs based on their designated VCs. In the example, the
cross-connect may forward signals on Trail C from Radio 1, VC 3 to Radio 4, VC 1.
The cross connect function provides connectivity for the following types of configurations:
STM1/OC3 interfaces
E1/T1 interfaces
E1/T1 trails are supported based on the integrated E1/T1 cross-connect (XC). The XC capacity is 180
E1/T1 bi-directional VC trails.
XC is performed between any two physical or logical interfaces in the node (in any main or expansion
unit) such as E1/T1 interface, radio VC (84 VCs supported per radio carrier), and STM1/OC3 mux
VC11/VC12. The function is performed by the “active” main unit. If a failure occurs, the backup main
unit takes over (<50 msecs down time).
Each VC trail is timed independently by the XC.
[Figure: IP-10 integrated XC and integrated STM1/OC3 mux, connecting an STM1/OC3 interface over an MW radio link]
For troubleshooting end-to-end E1/T1 trails across the network, additional PM (performance monitoring)
is necessary. A trail is defined as E1/T1 data delivered unchanged from one line interface to another,
through one or more radio links.
In each node along the trail path, data can be assigned to a different VC number, but its identity across the
network is maintained by a “Trail ID” defined by the user.
Additional PM functionality provides end-to-end monitoring over data sent in a trail over the network.
IP-10 supports an integrated VC trail protection mechanism called Wireless SNCP (Sub-Network
Connection Protection).
With Wireless SNCP, a backup VC trail can optionally be defined for each individual VC trail.
For each backup VC, the following needs to be defined:
Two “branching points” from the main VC that it is protecting.
A path for the backup VC (typically separate from the path of the main VC that it is protecting).
For each direction of the backup VC, the following is performed independently:
At the first branching point, duplication of the traffic from the main VC to the backup VC.
At the second branching point, selection of traffic from either the main VC or the backup VC.
Traffic from the backup VC is used if a failure is detected in main VC.
Switch-over is performed within <50 msecs.
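The duplicate-and-select behavior described above can be sketched in a few lines. This is an illustrative model of the SNCP principle, not the IP-10 implementation:

```python
# Wireless SNCP sketch: at the first branching point traffic is duplicated onto
# the main and backup VC trails; at the second branching point the selector
# uses the main VC unless a failure has been detected on it.
def branch(frame):
    """First branching point: duplicate traffic onto both VC trails."""
    return {"main": frame, "backup": frame}

def select(received, main_failed):
    """Second branching point: prefer the main VC; switch to backup on failure."""
    return received["backup"] if main_failed else received["main"]

vcs = branch("E1-payload")
assert select(vcs, main_failed=False) == "E1-payload"   # normal state: main VC
assert select(vcs, main_failed=True) == "E1-payload"    # failure: backup carries the same traffic
```

Because both VCs carry the same payload, the selector can switch without coordination with the far end, which is what allows the sub-50 ms switch-over noted above.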
[Figure: Wireless SNCP example: main and backup VC trails between IP-10 nodes A, B, C, and D, carrying E1 #1 and E1 #2]
This feature provides a fully integrated solution for protected E1/T1 services over a mixed wireless-
optical network.
[Figure: Protected E1/T1 trail over a mixed network: IP-10 integrated XC and integrated STM-1/OC-3 mux, with an STM1/OC3 fiber link between nodes]
Ceragon proposes a novel approach to improve the efficiency of ring-based protection, using a technique
called Protected Adaptive Bandwidth Recovery (“ABR”), which enables full utilization of the
bidirectional capabilities inherent in ring technologies. With ABR, the TDM-based information is
transmitted in one direction only, while the unused protection capacity is allocated for Ethernet traffic. In
the event of a failure, the unused capacity is re-allocated for TDM transmission. In the following sections,
we take a closer look at this solution, and at the technologies that are used to implement it. This technique
extends the Native2 approach to dynamic allocation of link capacity between TDM and Ethernet flows to
the network level.
Having selected a ring topology for wireless backhauling, a range of alternative protection schemes is
available for implementation.
A major drawback of ring topology is the allocation of redundant bandwidth in order to ensure network
availability. For example, the widely-implemented SNCP 1+1 unidirectional protection scheme, which
requires the simultaneous transmission of information in both directions on the ring, causes a loss of up to
50% of the ring's total bandwidth capacity.
A number of techniques have been devised for recovering and utilizing the lost bandwidth. The
techniques are described in the following sections.
SNCP 1+1 Unidirectional: Very fast. Phone service and synchronization are not affected. For 100%
recovery, the ring must reserve 50% spare capacity.
These protection schemes must be able to deal with additional challenges that add complexity to TDM
ring protection:
Hybrid Fiber/Microwave Rings. Microwave rings containing fiber segments must be able to
propagate E1 frames, fault indications, and other signals vital to the network.
Dual Homing. Protection rings remain vulnerable in situations where a fiber node suffers an
equipment failure. In order to ensure network availability, protection schemes must be able to
handle the forwarding of primary and standby transmissions from 2 different points of entry,
as shown in the figure below.
Dual Homing with ABR-based Native2
Ceragon's Native2 hybrid TDM & Ethernet technology, which allows for the transport of both TDM and
packet traffic over a unified microwave link, offers additional tools for the optimization of TDM traffic
over wireless rings.
In a typical SDH network, the receiving node monitors the transmission quality at its “east” and “west”
link interfaces, and selects the direction from which it will receive transmissions. The transmitting node,
therefore, sends traffic in both the east and west directions, causing the redundant use of bandwidth. This
form of protection is known as SNCP 1+1 Unidirectional Protection, and while it can generally provide
50 millisecond protection switching, it does so by reserving large quantities of bandwidth over a very
expensive wireless spectrum.
Ceragon's novel approach to the reduction of redundant protection bandwidth involves a change in the
role of the transmitting element. In this approach, the transmitting element determines the direction of
information transmission – east or west. The decision is based on the monitoring of status information
that the transmitting node receives from the network. The receiving node continues to monitor both
directions for the arrival of information, as described previously. This method achieves the goal of
protecting traffic without wasting capacity on unused reserved bandwidth.
The following section provides technological details on the implementation of this innovative feature, in
which Protected Adaptive Bandwidth Recovery (“ABR”) is applied to enable better spectrum
utilization for Ethernet services.
In Protected Adaptive Bandwidth Recovery (ABR), a protection mechanism based on SNCP 1:1
technology, the transmitting node selects a single direction in which to transmit information. The
direction is determined independently for each E1 path, based on status information sent periodically by
the receiving node back to the transmitter.
In the standby direction, the transmitting node – along with all the nodes in the standby path to the
receiver – removes the E1 bandwidth allocation, and sends periodic signals to the receiver to help it
monitor the transmissions from east and west. (Note: This requires special handling in hybrid fiber /
microwave networks). The de-allocated (recovered) E1 bandwidth can now be utilized by Ethernet
traffic.
The receiving node continues to accept information flows from either the east or west direction, and
detects the path in which the E1 payload is actually transmitted.
When a failure occurs in the working direction, the receiving node sends a Remote Defect Indication
(RDI) signal to the transmitter, which automatically switches to the standby path.
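The transmit-side selection described above can be sketched as a small state machine. This is an illustrative model only, not the IP-10 implementation; directions and state handling are invented:

```python
# ABR sketch: the transmitter sends each E1 in one direction only, and toggles
# between the working and standby paths when the receiver reports RDI.
def make_abr_path(working="east", standby="west"):
    state = {"active": working}
    def on_receiver_status(rdi):
        if rdi:   # failure reported on the currently active direction
            state["active"] = standby if state["active"] == working else working
        return state["active"]
    return state, on_receiver_status

state, on_status = make_abr_path()
assert on_status(rdi=False) == "east"   # normal state: a single working direction
assert on_status(rdi=True) == "west"    # RDI received: switch to the standby path
```

In the standby direction, no E1 bandwidth is allocated, so the recovered capacity remains available for Ethernet until a switch-over actually occurs.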
ABR can be selected for any number of E1 channels, and the resulting path co-exists with all other paths
in the network – be they unidirectional, bidirectional, protected, or unprotected. The case study below
describes a real-life example of how ABR delivers normal-state Ethernet capacity that may triple the
Ethernet capacity delivered when using SNCP 1+1. While malfunctions under SNCP 1+1 automatically
result in network degradation to a worst-case scenario (known as “failure state”), a network fault under
ABR results in a level of degradation that depends on the exact location of the failure, and worst-case
degradation is usually avoided.
ABR can also be used in a dual homing configuration – in which there are 2 possible points of entry into
the ring network. This provides added resiliency in case of failure in the transmitting node. In dual
homing mode, one transmission node sends the E1 payload, while the other transmission node sends
“standby” signaling as mentioned earlier.
In segments of a microwave network that are connected by fiber-optic links, E1 frames must be
propagated onto the optical cable, and restored again on the next microwave segment. The same goes for
fault indicators. When a wireless E1 is de-allocated and its bandwidth freed for Ethernet traffic, the
periodic signals sent from the transmitter to the receiver are also propagated optically and then
regenerated on the next microwave segment.
In order to enable full utilization of the FibeAir platform's networking capabilities, Ceragon offers
PolyView™ – Ceragon's innovative, user-friendly Network Management System (NMS), designed for
managing large-scale wireless backhaul networks. PolyView, a fully integrated radio and networking
management platform, provides complete trail management support.
PolyView's efficient trail maintenance capabilities allow network technicians to create, delete, modify,
and monitor TDM trails. Trails can be built either automatically, based on user-defined trail endpoints, or
manually, according to varying degrees of manual input, with full resource control.
In the figure below, the traffic emanating from 18 cell sites is merged into 4 aggregation sites, making up
a metro ring consisting of 28 MHz channels in a 1+0 configuration. In our basic scenario, 2G BTSs
support 4 E1s each, yielding a total of 72 E1s. SNCP 1+1 protection is employed.
In this scenario, the main question is how to migrate the network to support 3G-based data services, given
the severe spectrum limitations. This common legacy configuration leaves us with almost no capacity for
Ethernet traffic – in this case, approximately 2.3 Mbps per site of guaranteed Ethernet traffic (assuming a
64-byte frame size).
In the simple, TDM-only, SNCP 1+1 case presented in the figure above, all E1s flow in both directions,
meaning that 50% of the total capacity is reserved for failure states. In case of such a failure, E1 traffic
is forwarded in the opposite direction. From a capacity point of view, there is no difference between the
normal state and the failure state.
In the SNCP 1:1 scenario depicted in the above figure, TDM-only E1s flow only in one direction. An
alternate path is reserved, but no capacity is allocated. In case of a failure, E1s are re-routed in the
opposite direction over the reserved path, receiving the non-allocated capacity.
When planning a data network for broadband services, one should compute the guaranteed traffic
(Committed Information Rate – CIR), as well as the possible upside (Excess Information Rate – EIR).
Given the availability of bandwidth for both classes, we can determine the subscriber's overall Quality of
Experience.
In the scenario that appears in the figure above, when applying 100% protection – or in case of a worst-
case failure – up to 14.5 Mbps of Ethernet capacity is available per site. The whole ring can support 262
Mbps of traffic. If the 262 Mbps of protected-path bandwidth is reserved but not allocated, Ethernet
capacity is increased to 29 Mbps per cell site, aggregating into 116 Mbps at aggregation site S2, etc. In
Ethernet, the various failure state scenarios each have a different effect on capacity, as described in the
next section.
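As a rough sanity check on the figures quoted above (assuming the ring's 262 Mbps of traffic is shared evenly across the 18 cell sites, and that releasing the reserved protection path doubles the normal-state share):

```python
# Back-of-the-envelope check of the case-study capacities (values from the
# text; even sharing across sites is an assumption made for illustration).
ring_capacity_mbps = 262          # total traffic the ring can support
cell_sites = 18

# With 100% protection (or in a worst-case failure), each site's share:
per_site_protected = ring_capacity_mbps / cell_sites      # ~14.5 Mbps

# With the protection path reserved but not allocated, normal-state capacity
# roughly doubles:
per_site_normal = 2 * per_site_protected                  # ~29 Mbps

# Four cell sites feed each aggregation site (e.g. S2):
per_aggregation_site = 4 * per_site_normal                # ~116 Mbps
```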
The figure below depicts 3 failure states of varying severities (denoted “2”, “3”, and “4”):
Non-Affecting Failure. The failure in link A3 does not affect traffic, as STP has in any case
blocked this link. Ethernet traffic does not traverse this link.
Medium-Severity Failure. The link failure at A2 causes some traffic to flow normally, while
some traffic uses the reserved alternate path.
Worst-Case Failure. A failure in link A1 causes all traffic to flow over the reserved
alternate path.
There is no need for an STP block in any of these failure scenarios, since at least one link in the ring is
in any case out of service.
Traditional protection schemes include bandwidth reservation and actual allocation of capacity for the
alternate path. The reasoning for this was simple – in failure state, the network would not be able to
restore connectivity in a timely fashion. Today, higher processing speeds and improved network recovery
algorithms allow products such as Ceragon's FibeAir IP-10 to restore connectivity instantly – without
pre-allocation of capacity. Therefore, while high-priority E1 traffic is protected and alternate path
capacity is reserved, the unused capacity can be utilized for the delivery of broadband services, allowing
data users to enjoy additional capacity when it becomes available. Let's review an example:
A Native2 Ring with Protected-ABR at Work
While 72 E1 lines are delivered at all times, only the relevant 36 E1s are actually carried on each path.
On the Ethernet side, up to 262 Mbps of data are available in the normal state, while 41 Mbps are
guaranteed at failure (in the worst-case scenario):
17 Mbps of data per cell site vs. 2.3 Mbps in SNCP 1+1
17 Mbps per cell site for an A3 failure
6.4 Mbps per cell site for an A2/A4 failure
In summary, ABR can provide much higher capacities in all scenarios, with the exception of worst-case
failures. The increased capacity allows operators to improve customer satisfaction, and enhance
subscribers' overall Quality-of-Experience (QoE) with better performance in mail delivery, content
sharing, backup services, Facebook access, and video streaming.
Ceragon's ABR feature allows operators to reclaim unused E1 bandwidth and re-allocate it for Ethernet
traffic – without putting critical revenue-generating services at risk. Synchronization and other critical
signaling systems are preserved.
Ceragon's ABR approach has significant benefits over Pseudowire-based techniques when applied in a
2G-to-3G migration environment. It enables an operator to enjoy the inherent benefits of hybrid TDM
and Ethernet microwave environments:
ABR Benefits: Double Data Capacity, with no Impact on TDM in Failure State
Doubles ring capacity by using the TDM protection path to provide extra capacity for Ethernet services.
Leaves revenue-generating 2G voice traffic unaffected in the migration process, with no need for protocol
conversion.
Protects network synchronization and clock using currently deployed E1s, without the need to test and
verify new clock recovery mechanisms. Clock recovery techniques are sensitive to delay and delay
variation, and therefore have a severe impact on the operator‟s deployment strategy, often limiting the
number of links in a chain or a ring.
QoS awareness enables the operator to associate the appropriate class of availability and class of service
with each traffic type.
Mobile carriers operating wireless backhaul networks are discovering the advantages of deploying ring-
based topologies, which include enhanced quality and reduced costs. While carriers can exploit the
inherent strengths of such networks, such as unequalled reliability, it is understood that the price to be
paid in bandwidth capacity may be too high.
Ceragon offers a range of solutions for capacity recovery, based on its Native2 TDM-to-packet migration
strategy, and on the Protected Adaptive Bandwidth Recovery (ABR) feature described in the previous
sections.
These solutions enable a risk-free migration from 2G TDM-based communications, to a mixed 2G and 3G
network carrying both TDM and Ethernet, to an all-packet multi-RAN environment. They can be
deployed both in a single link with dynamic allocation of capacity between TDM and Ethernet, and in a
ring where a protection scheme such as SNCP 1:1 can be selected to recover capacity for 3G traffic.
Ceragon's innovative ABR mechanism maintains TDM protection levels and bandwidth reservation, but
performs bandwidth allocation “just in time” when a fault condition occurs. As a result, the cell site
bandwidth capacity is significantly increased, while the subscriber's overall quality of experience is
enhanced as well. In short – Ceragon's solutions provide the simplest, most cost-effective, and most
reliable way to migrate to 3G while doubling capacity at “zero” incremental expense.
The flexibility of Ceragon's FibeAir® IP-10 family allows carriers to implement a wide range of
backhauling strategies – whether TDM-based, packet, or a combination thereof. Designed to help carriers
reach their IP migration goals, Ceragon's Native2 solution is an excellent platform for capacity
optimizations – in any topology.
Synchronizing the network is an essential part of any network design plan. Event timing determines how
the network is managed and secured, and provides the only frame of reference between all devices in the
network.
Several unique synchronization issues need to be addressed for wireless networks:
Phase/Frequency Lock: Applicable to GSM and UMTS-FDD networks.
o Limits channel interference between carrier frequency bands.
o Typical performance target: frequency accuracy of < 50 ppb.
o Sync is the traditional technique used, with traceability to a PRS master clock carried over
PDH/SDH networks, or using GPS.
Phase Lock with Latency Correction: Applicable to CDMA, CDMA-2000, UMTS-TDD, and
WiMAX networks.
o Limits coding time division overlap.
o Typical performance target: frequency accuracy of < 20 - 50 ppb, phase difference of
< 1-3 msecs.
o GPS is the traditional technique used.
Wireless networks set to deploy over IP networks require a solution for carrying high precision timing to
base stations.
Throughout the globe, legacy SDH/PDH based TDM networks are being fragmented, leading to “islands
of TDM”.
Traditional TDM services are being carried over packet networks using Circuit Emulation over Packet
techniques (CESoP).
Two new approaches are being developed in an effort to meet the challenge of migration to IP:
Various ToP (Timing over Packet) techniques
Synchronous Ethernet
ToP refers to the distribution of frequency, phase, and absolute time information across an asynchronous
packet switched network.
The timing packet methods may employ a variety of protocols to achieve distribution, such as IEEE1588,
NTP, or RTP.
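These timing protocols rely on a two-way timestamp exchange to recover clock offset across the packet network. The classic four-timestamp computation can be illustrated as follows (a generic sketch of the mechanism, not Ceragon-specific; the timestamp values are invented):

```python
# Two-way time transfer (as used by IEEE 1588 / NTP): four timestamps yield
# estimates of the clock offset and the round-trip path delay.
def offset_and_delay(t1, t2, t3, t4):
    """t1: request sent (client clock), t2: request received (server clock),
    t3: reply sent (server clock), t4: reply received (client clock)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2    # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)           # total network round-trip delay
    return offset, delay

# Client clock running 5 units behind the server, with 2 units of delay each way:
offset, delay = offset_and_delay(t1=100, t2=107, t3=108, t4=105)
assert offset == 5 and delay == 4
```

Because the offset estimate assumes symmetric path delay, its accuracy degrades with packet delay variation; this is why transport with very low PDV matters for timing-over-packet deployments.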
SyncE is standardized in ITU-T G.8261 and refers to a method whereby the clock is delivered on the
physical layer.
The method is based on SDH/TDM timing, with similar performance, and does not change the basic
Ethernet standards.
Ceragon's synchronization solution ensures maximum flexibility by enabling the operator to select any
combination of techniques suitable for the network.
Combinations of the following techniques can be used:
Synchronization using native E1/T1 trails
“PTP optimized transport”
o Supports IEEE-1588, NTP, etc.
o Guaranteed ultra-low PDV (<0.05 msec per hop)
o Unique support for ACM and narrow channels
SyncE support (G.8262)
Using this technique, each T1/E1 trail carries a native TDM clock, which is compliant with GSM and
UMTS synchronization requirements.
Ceragon's IP-10 implements a PDH-like mechanism for providing high-precision synchronization of the
native TDM trails. This implementation ensures high-quality synchronization while keeping cost and
complexity low, since it eliminates the need for a sophisticated centralized SDH-grade "clock unit" at
each node. The system is designed to deliver E1 traffic and recover the E1 clock in compliance with
G.823 "synchronization port" jitter and wander requirements. This means that the user can use any (or
all) of the system's E1 interfaces to deliver a synchronization reference via the radio to a remote site (e.g.
a Node-B).
Ceragon's unique PTP optimized transport mechanism ensures that PTP control frames (IEEE-1588, NTP,
etc.) are transported with maximum reliability and minimum delay variation, to provide the best possible
timing accuracy (frequency and phase), meeting the stringent requirements of emerging 4G technologies
(LTE, etc.).
PTP control frames are identified using the advanced integrated QoS classifier.
Frame delay variation of <0.05 msec per hop for PTP control frames is supported, including when ACM
is enabled and when operating with narrow radio channels.
The SyncE technique supports synchronized Ethernet outputs as the timing source to an all-IP RBS. This
method offers the same synchronization quality provided over E1 interfaces to legacy RBS.
Ceragon's SyncE supports two modes:
Synchronization is distributed natively over the radio links. In this mode, no TDM trails or E1 interfaces
at the tail sites are required!
Synchronization is provided by the E1/STM-1 clock source input at the fiber hub site (SSU/GPS).
Figure 42: FibeAir IP-10 G-Series Typical Configurations - 1+0 with 64 E1s/T1s
Ethernet traffic
o One of the units is acting as the "master" unit and is feeding
Ethernet traffic to both radio carriers
o Traffic is distributed between the 2 carriers at the radio frame level
o The "Master" IDU can be configured for switch or pipe operation.
o The 2nd ("Slave") IDU has all its Ethernet interfaces and functionality effectively disabled.
TDM traffic
o Each of the 2 radio interfaces supports separate E1/T1 services
o E1/T1 Services can optionally be protected using SNCP
Figure 45: 2+0/XPIC Link, with 32 E1/T1s + STM1/OC3 Mux Interface, no Multi-Radio (up to 168 E1/T1s over the radio)
Figure 49: 1+1 HSB Link with 16 E1/T1s + STM1/OC3 Mux Interface (up to 84 E1/T1s over the radio)
Figure 50: Native2 2+2/XPIC/Multi-Radio MW Link, with 2xSTM1/OC3 Mux (up to 168 E1/T1s over the radio)
Figure 51: Chain with 1+0 Downlink and 1+1 HSB Uplink, with STM1/OC3 Mux
Figure 52: Node with 2 x 1+0 Downlinks and 1 x 1+1 HSB Uplink
Figure 53: Chain with 1+1 Downlink and 1+1 HSB Uplink, with STM1/OC3 Mux
Figure 54: Native2 Ring with 3 x 1+0 Links + STM1/OC3 Mux Interface at Main Site
Figure 55: Native2 Ring with 3 x 1+1 HSB Links + STM-1 Mux Interface at Main Site
Figure 56: Node with 1 x 1+1 HSB Downlink and 1 x 1+1 HSB Uplink
Figure 57: Native2 Ring with 4 x 1+0 Links, with STM1/OC3 Mux
Figure 58: Native2 Ring with 3 x 1+0 Links + Spur Link 1+0
Figure 59: Native2 Ring with 4 x 1+0 MW Links and 1 x Fiber Link (5 hops total)
Figure 60: Native2 Ring with 2 x 2+0/XPIC MW Links and 1 x Fiber Link (3 hops total), with 2 x STM1/OC3 Mux
5.4 PolyView
PolyView is Ceragon's powerful yet user-friendly NMS (Network Management System) that integrates
with other NMS platforms, and can also operate standalone where no other NMS is used. It provides
management functions for Ceragon's FibeAir systems at the network level, as well as at the individual
network element level.
Using PolyView, you can perform the following for Ceragon elements in the network:
Performance Reporting
Inventory Reporting
Software Download
Configuration Management
Trail Management
View Current Alarms (with alarm synchronization)
View an Alarm Log
Create Alarm Triggers
PolyView's user interface, CeraMap™, enables fast and easy design of multi-layered network element
maps. CeraMap helps manage the network from its building stage to its ongoing maintenance and
configuration procedures.
PolyView supports all Ceragon FibeAir products, and complements Ceragon's CeraView® and CeraWeb
by providing a higher (network) level of management support. PolyView is implemented in Java, which
enables it to run on different operating systems.
PolyView is security-protected, whereby configuration and software download operations can only be
performed by authorized system administrators.