Table of Contents

Editor's Note ........................................................ 2
Introduction ......................................................... 2
Participants and Devices ........................................ 3
Interoperability Test Results .................................... 4
EVPN ................................................................. 4
Segment Routing .................................................. 12
Topology ............................................................ 19
SDN .................................................................. 21
FlexE ................................................................. 28
Clocking ............................................................ 29
Remote Collaboration Aspects ................................ 36
Summary ............................................................ 37

Editor's Note

Among the longstanding series of annual EANTC multi-vendor interoperability events at Upperside's MPLS + SDN + NFV World Congress, this year's event has turned out to be special in multiple ways. Most importantly, the novel Coronavirus (COVID-19) has not managed to stop the hot staging. For two weeks in early March, more than 100 attendees from 16 vendors teamed up to validate the interoperability and integration of state-of-the-art network technologies. More than two-thirds of the participants joined remotely this time. At EANTC, we have conducted distributed interop and performance tests for virtualization technologies (NFV) since 2017; those tests typically took much longer due to the availability constraints of individual teams. The coordinated testing of physical equipment with such a large group of remote participants in real time is unprecedented in the industry. Within two weeks, we successfully completed more than 500 test combinations thanks to the enthusiasm and dedication of local and remote vendor teams, some of them in distant time zones with inconvenient working hours.

For the first time, we are publishing the white paper in advance of the actual showcase. The results of our testing are ready for deployment now, and we would like to share the details of the multi-vendor technology solutions right away. We will publish a series of eight short videos on the event page covering the full range of test areas, starting on March 31st, co-presented by the participating vendors, many with live multi-vendor demos.

Technically, our tests evolved from last year's coverage, confirming that many vendor solutions have been further solidified and extended. We focused on Segment Routing, EVPN, and SDN orchestration, covering more advanced scenarios with more participating vendors than before. From our point of view, right now is the best time for service providers to migrate to Segment Routing if they have not done so yet. More vendors joined the SRv6 tests as well, growing the ecosystem. Given the sometimes mutually exclusive support, it is likely that SR over MPLS, SR over VXLAN, and SRv6 will coexist in the future. Vendors who support all three modes might play a key role in future networks.

Open-source and white-box solutions take an increasingly important role in our tests. One vendor provided an open-source SDN controller based on OpenDaylight. Two other vendors jointly provided a white-box edge router solution. In previous years, we had already seen some open-source and/or white-box-related solutions (not all of them returned for participation this time). Clearly, the functionality and maturity of the participating solutions are improving.

There are many more results and successes to be shared, for example in the clocking area and with regard to Flexible Ethernet (FlexE). We hope that the white paper will provide all the details you are looking for!

One final word: The MPLS + SDN + NFV World Congress has been rescheduled to June 30 to July 3, 2020 (check for updates at https://www.upperside-conferences.com/). I really hope that the conference will take place in beautiful Paris, and I am looking forward to celebrating the technology innovations, our community, and life in general with all of you. Until then, stay safe and healthy!

Introduction

This year, we designed the interoperability test cases to probe the maturity of transport network solutions to support 5G networks, data center networking evolution, and multi-vendor domain orchestration. The event covered more than 60 test cases evaluated in many multi-vendor combinations. We classified the test cases into four main categories: Ethernet VPN (EVPN), Segment Routing, Software-Defined Networking (SDN), and packet network clock synchronization.

In the EVPN section, the participating vendors broadly supported the basic test scenarios for symmetric and asymmetric integrated routing and bridging (IRB). New test cases were added focusing on EVPN redundancy and
Multi-Vendor Interoperability Test 2020
This white paper documents only positive results (passed test combinations) individually, with vendor and device names. Failed test combinations are not mentioned in diagrams; they are referenced anonymously to describe the state of the industry. Our experience shows that participating vendors quickly proceed to solve interoperability issues after our tests, so there is no point in punishing them for their willingness to learn by testing. Confidentiality is vital to encourage manufacturers to participate with their latest (beta) solutions, and it enables a safe environment in which to test and to learn.

Terminology

We use the term tested when reporting on multi-vendor interoperability tests. The term demonstrated refers to scenarios where a service or protocol was evaluated with equipment from a single vendor only.

Test Equipment

With the help of participating test equipment vendors, we generated and measured traffic, emulated and analyzed control and management protocols, and performed clock synchronization analysis. We thank Calnex, Ixia, and Spirent for their test equipment and support throughout the hot staging.

Routing with EVPN: Symmetric Model

Initially, the data center overlay was viewed as a layer 2 domain. For example, between hosts separated by an EVPN, the only communication, if any exists, remains in layer 2. However, if these hosts are on different subnets, such inter-communication required a layer 3 data center gateway. In 2013, the IETF released the Integrated Routing and Bridging draft to replace the centralized gateway approach with an integrated mechanism: IRB. The IRB functionality is needed on the PE nodes to avoid inefficient forwarding of overlay traffic. We tested IRB for the sixth consecutive year, based on the IETF specification, which was last updated in 2019. We wanted to verify that, given the different options, the rapidly growing data center architecture remains interoperable.

Building on the single-homed PE solutions with IRB, the all-active multi-homed PE moved into sight. Ingress and egress PEs interacted with IRB using the symmetric as well as the asymmetric semantic. The EVPN encapsulation allowed both EVPN-MPLS and EVPN-VXLAN. The layer 2 service types included the VLAN-based and the VLAN-bundle-aware EVPN. Each of these options contributed a rich set of test combinations.

In the symmetric IRB test, the PEs established the EVPN with each other running symmetric IRB. In this mode, both IP and MAC lookups are required at both ingress and egress NVEs. We checked at the PEs that RT 2 (host IP address routing) and RT 5 (IP prefix) routes were exchanged via BGP between the PEs.
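As a rough illustration of the difference between the two IRB semantics, the following sketch contrasts the lookups performed at the ingress and egress NVEs. The structure and names are illustrative only, not any vendor's implementation:

```python
# Sketch of the lookup difference between symmetric and asymmetric IRB:
# with symmetric IRB both ingress and egress NVEs perform an IP lookup
# (plus a MAC lookup toward the attached host); with asymmetric IRB the
# ingress does all the routing and the egress only bridges.

def lookups(model):
    """Return the lookup sequence per NVE for a given IRB model."""
    if model == "symmetric":
        return {"ingress": ["MAC", "IP"], "egress": ["IP", "MAC"]}
    if model == "asymmetric":
        return {"ingress": ["MAC", "IP", "MAC"], "egress": ["MAC"]}
    raise ValueError(model)

assert "IP" in lookups("symmetric")["egress"]      # egress routes too
assert lookups("asymmetric")["egress"] == ["MAC"]  # egress only bridges
```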
• Single-Homed PE: Arista 7280R3, Arrcus QuantaMesh T4048-IX8A, Cisco Nexus 9300-FX, Cisco Nexus 9300-FX2, Cisco Nexus 9300-GX, Juniper QFX10002-72Q, Nokia 7750 SR-1, Spirent TestCenter N4U
• Spine: Arista 7280R2, Arrcus ArcRR

When a host comes up in the data center, EVPN learns it through the RT2 route. If the host moves, the unaged MAC address would lead to an inconsistency. EVPN therefore also provides a sequencing mechanism to track where a host moves, referred to as the MAC Mobility Extended Community, as defined by RFC 7432. When a newly learned MAC address is found in the MAC table as having been learned from a remote end, the sequence number of the MAC Mobility Extended Community shall increase by one, and the value is carried via the RT2 route.
E-LAN
The IGMP proxy function optimizes network usage through a selective multicast mechanism. Hosts express their interest in multicast groups on a given subnet/VLAN by sending IGMP membership reports (Joins) for the multicast group(s) they are interested in. An IGMP router periodically sends membership queries to find out whether there are hosts on that subnet still interested in receiving multicast traffic for that group. The goal of the IGMP proxy mechanism is to reduce the flood of IGMP messages (query and report) in EVPN instances between PE routers. The IETF draft (evpn-igmp-mld-proxy) defines a set of multicast routes, among which the Selective Multicast Ethernet Tag Route (Type 6: SMET) enabled the basic setup of an EVPN IGMP proxy in this test.

To verify that the proxy function had been enabled, we first sent multicast traffic without any Join messages generated from an emulated host. As expected, all traffic was dropped. Then we generated Join messages from the emulated host to the PE and checked at each PE that the multicast group had been learned and that the RT 6 route with the "IGMP Proxy Support" flag enabled was installed in the routing table. We then sent multicast traffic and did not observe any packet loss.

• PE: Arista 7050X3, Huawei NetEngine 8000 F1A, Juniper QFX5120-48Y, Nokia 7750 SR-1, Spirent TestCenter N4U
• Route Reflector: Arista 7280R2, Arrcus ArcRR

One vendor's solution stood apart in this test: its PE did not install the IGMP flag into the RT6 route, and the routes were discarded by the other PEs. Therefore, this solution was not compliant with the standard.

Assisted Replication

Assisted Replication (AR) is under standardization in the IETF for multicast distribution. It updates the ingress replication mechanism and introduces a remote node replication mechanism. The motivation behind it is to help software-based EVPN nodes efficiently replicate large amounts of broadcast and multicast traffic to remote nodes without affecting their replication performance. With the Assisted Replication feature, the ingress node passes all broadcast and multicast traffic to an AR replicator, which replicates the traffic further in the EVPN service on its behalf. In other words, the replicator contributes powerful resources, which greatly simplifies the requirements on ingress nodes; they only need to be enabled with Assisted Replication.
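The offload that AR provides to the ingress node can be sketched as follows; node names and topology are illustrative:

```python
# Sketch of BUM forwarding with and without Assisted Replication.
# With plain ingress replication the ingress leaf sends one copy per remote
# node; with AR it sends a single copy to the replicator, which fans out.

def ingress_replication(ingress, nodes):
    """Ingress sends one copy to every other node."""
    return [(ingress, n) for n in nodes if n != ingress]

def assisted_replication(ingress, nodes, replicator):
    """Ingress sends one copy to the AR replicator, which replicates further."""
    fanout = [(replicator, n) for n in nodes if n not in (ingress, replicator)]
    return [(ingress, replicator)] + fanout

leafs = ["leaf1", "leaf2", "leaf3", "leaf4"]
# the ingress transmits 3 copies without AR, but only 1 copy with AR
assert len(ingress_replication("leaf1", leafs)) == 3
sent = assisted_replication("leaf1", leafs, "replicator")
assert [c for c in sent if c[0] == "leaf1"] == [("leaf1", "replicator")]
```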
At the ingress node with AR enabled, we verified that a route to the replicator was present. Through port statistics, we observed that both multicast and broadcast traffic were directed from the ingress to the AR replicator, where the traffic was replicated to the other leafs as expected.

The following devices successfully participated in the test:
• Leaf: Juniper QFX10002-72Q, Nokia 7750 SR-1
• Replicator: Juniper QFX10002-72Q, Nokia 7750 SR-1
• Route Reflector: Arista 7280R2

Figure 17: EVPN-VXLAN and EVPN-MPLS Interworking

The following devices successfully participated in the test:
• Data Center Edge: Huawei NetEngine 8000 F1A, Juniper QFX10002-72Q
• Data Center Gateway: Huawei NetEngine 8000 X4, Juniper MX204
• Route Reflector: Juniper MX204

EVPN and IP-VPN Interworking

Expanding the scope of VPNs, an MPLS core network emerges between data centers. The gateway requires dual data planes and must provide translations between EVPN routing and MPLS labels.

• Data Center Gateway: Arista 7280R2, Cisco 3600-R, Huawei NetEngine 8000 X4, Juniper MX204
• Route Reflector: Juniper MX204
EVPN-VXLAN and EVPN-VXLAN Interworking

For EVPNs that traverse multiple data centers, each data center constitutes a different AS, and the gateway is responsible for exporting routes from one network segment to another.

We did not observe any packet loss.

Segment Routing

The momentum of Segment Routing keeps growing. This is driven by the maturity of the technology delivered to the market by the network vendors, supported by the customers' strong belief in the capabilities of the Segment Routing paradigm.

This year, the SR-MPLS interoperability testing included various topological combinations involving different interior gateway protocols (IGPs): IS-IS and OSPF. The SRv6 data plane had a good portion of test cases related to L3VPN, E-Line, TI-LFA, and Egress Node Protection.

Business VPNs with Segment Routing

Both layer 2 and layer 3 VPNs demonstrated their capabilities over Segment Routing, which in turn provided a rich set of protocol features for testing.

The traditional IPv4/IPv6 BGP-based L3VPN certainly found its place in the test, over the SR-MPLS data plane as well as the SRv6 data plane. EVPN services were located in the SRv6 data plane.

IPv4/IPv6 BGP-based L3VPN over Segment Routing MPLS Data Plane

Figure 20: L3VPN over Segment Routing MPLS (Underlay IGP with OSPFv2)

Across the different IGP (IS-IS, OSPFv2) scenarios, all participating vendors configured IPv4/IPv6 BGP-VPN L3VPN services. We started the test by sending bidirectional IPv4 and IPv6 traffic between each pair of PE routers. The generated traffic (emulated customer traffic) was encapsulated with two SR-MPLS labels: the VPN and transport segments. We verified the traffic flow between all the PE pairs without any packet loss.
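The two-label encapsulation used in this test can be sketched as follows; the label values are illustrative:

```python
# Sketch of the two-label SR-MPLS stack from the L3VPN test: a transport
# (node) segment on top and the VPN service label at the bottom of stack.
# MPLS header layout: label(20 bits) | TC(3) | S(1) | TTL(8).
import struct

def mpls_stack(transport_label, vpn_label, ttl=64):
    """Encode [transport, vpn] as an MPLS label stack (VPN has the S-bit set)."""
    words = []
    for label, s_bit in ((transport_label, 0), (vpn_label, 1)):
        words.append(struct.pack("!I", (label << 12) | (s_bit << 8) | ttl))
    return b"".join(words)

stack = mpls_stack(transport_label=16004, vpn_label=299776)
assert len(stack) == 8                                       # two 32-bit entries
assert (struct.unpack("!I", stack[4:8])[0] >> 8) & 1 == 1    # bottom of stack
assert struct.unpack("!I", stack[0:4])[0] >> 12 == 16004     # transport label
```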
The following devices successfully participated in the L3VPN test for both IPv4 and IPv6 services over MPLS-based Segment Routing, with OSPFv2 as the underlay IGP:
• P: Arrcus ArcRR, Nokia 7750 SR-1
• PE: ADVA Activator, Cisco Nexus 9300-FX2, Delta AGC7648SV1, Huawei NetEngine 8000 X4, Juniper ACX5448-D, Juniper MX204, Spirent TestCenter N4U, Metaswitch NOS Toolkit, Nokia 7750 SR-1

The following devices successfully participated in the L3VPN test for both IPv4 and IPv6 services over MPLS-based Segment Routing, with IS-IS IPv6 as the underlay IGP:
• P: Juniper MX204
• PE: Juniper MX204, Nokia 7750 SR-1, Spirent TestCenter N4U

EVPN and L3VPN over SRv6 Data Plane

IETF RFC 5120 defines an optional multi-topology IS-IS (M-IS-IS) mechanism, with multiple topologies inside one IS-IS domain, that requires several extensions to packet encoding and SPF procedures. Therefore, we added a new physical topology beside the basic IS-IS topology. The IS-IS standard topology is named MT-0 for backward compatibility.
In this case, the PE routers determined from IS-IS and the BGP VPN next hop which destination address to use. We generated a combination of IPv4/IPv6 bidirectional traffic and verified the traffic flow between the PE pairs; we did not observe any packet loss between the PE pairs.

The following devices successfully participated in the tests:
• P in MT-0: Arrcus ArcRR, Juniper MX204
• P in MT: Arrcus ArcRR, Juniper MX204
• PE in MT-0: Arrcus QuantaMesh T7080-IXAE, Cisco Nexus 9300-GX, Huawei NetEngine 8000 X4, Keysight Ixia IxNetwork
• PE in MT: Cisco Nexus 9300-GX, Huawei NetEngine 8000 X4, Keysight Ixia IxNetwork, Spirent TestCenter N4U

We tested EVPN over SRv6 between Huawei and Keysight. There were two service types: the E-Line service and EVPN L3VPN. We also generated inter-subnet traffic for EVPN L3VPN using both the MAC/IP route and the IP prefix route. The physical topology was as simple as direct connectivity between the PEs.

Figure 24: EVPN over SRv6

SR-MPLS and SRv6 facilitate TI-LFA deployment by encoding the loop-free post-convergence path into the segment list. The segment list steers the traffic toward a specific repair node (aka PQ node) using a Node-SID, or across a specific link using an Adjacency-SID.

The main target of the test was to verify the functionality and interoperability of the PLR: detecting a local link failure and, accordingly, activating a pre-computed loop-free backup path. The PLR inserts extra segments to steer the packets onto the backup path. The 2020 test campaign involved a variety of TI-LFA scenarios. We ran the test across different protection mechanisms, such as link and Shared Risk Link Group (SRLG) protection, and with different data planes, including SR-MPLS and SRv6.

TI-LFA for SR-MPLS

We built a physical full-mesh topology that consisted of four different network nodes to test link and SRLG TI-LFA over the SR-MPLS data plane. The participating vendors configured the network nodes with an L3VPN BGP-VPN service. We used Spirent TestCenter to generate IPv4 traffic from the emulated CEs.

Prior to the link failure, the ingress PE+PLR (network node 1) forwarded the traffic to the directly connected egress PE (network node 4). To simulate the link failure, we asked the vendor of network node 4 to shut down the link between network node 4 and network node 1 (the protected link) while traffic was still flowing from the traffic generator toward the ingress PE. In three of the cases, we observed an out-of-service time between 22 ms and 32 ms. The expected value was 50 ms.
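The post-convergence path that TI-LFA pre-installs can be sketched with a plain shortest-path search that excludes the protected link; the topology and costs below are illustrative, mirroring the four-node full mesh of the test:

```python
# Sketch of the TI-LFA idea: after excluding the protected link, the PLR
# computes the post-convergence shortest path and steers traffic along it
# with extra segments. Plain Dijkstra over an illustrative topology.
import heapq

def shortest_path(links, src, dst, excluded=frozenset()):
    """Dijkstra over undirected weighted links, skipping excluded links."""
    adj = {}
    for a, b, cost in links:
        if (a, b) in excluded or (b, a) in excluded:
            continue
        adj.setdefault(a, []).append((b, cost))
        adj.setdefault(b, []).append((a, cost))
    heap, seen = [(0, src, [src])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
    return None

# full mesh of four nodes, as in the test topology (costs illustrative)
links = [("n1", "n2", 10), ("n1", "n3", 10), ("n1", "n4", 10),
         ("n2", "n3", 10), ("n2", "n4", 10), ("n3", "n4", 10)]
assert shortest_path(links, "n1", "n4") == ["n1", "n4"]
# protected link n1-n4 fails: the PLR steers via a repair node instead
backup = shortest_path(links, "n1", "n4", excluded={("n1", "n4")})
assert backup[0] == "n1" and backup[-1] == "n4" and len(backup) == 3
```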
The following devices successfully participated in the TI-LFA over SR-MPLS test:
• Egress node: Huawei NetEngine 8000 X4, Juniper MX204, Nokia 7750 SR-1
• P node: Arista 7280R2, Huawei NetEngine 8000 X4, Nokia 7750 SR-1
• PLR: Arista 7280R2, Huawei NetEngine 8000 X4, Juniper MX204, Nokia 7750 SR-1
• PQ: Arista 7280R2, Huawei NetEngine 8000 X4, Juniper MX204, Nokia 7750 SR-1

One pair showed 63 ms out-of-service time; the expected value was 50 ms. During the TI-LFA with SRLG test, we observed 20 ms out-of-service time for the L3VPN service.

The following devices successfully participated in the TI-LFA with SRLG over SR-MPLS test:
• Edge node: Arista 7280R2, Juniper MX204
• P: Arista 7280R2
• PLR: Ciena 6500 T12, Nokia 7750 SR-1
• PQ: Huawei NetEngine 8000 X4, Juniper MX204

Two L3VPN pairs showed 1 to 4 s out-of-service time, which was not included in the report. One pair with IP service showed 3 s out-of-service time, which was due to IP convergence and therefore understandable.

TI-LFA over SRv6

For next-generation networks that are built on top of the SRv6 data plane, TI-LFA plays a fundamental role in service protection against link, node, and SRLG failures. TI-LFA on the SRv6 data plane applies the same concept: the PLR inserts extra SIDs into the SRH of the original packet to steer the traffic toward the repair (PQ) node. The SRv6 SIDs must be inserted and processed by SRv6-capable devices. Thus, the implementation of TI-LFA in the SRv6 data plane requires the PLR and PQ nodes to be SRv6-aware network nodes.
The following devices successfully participated in the test:
• Egress node: Keysight Ixia IxNetwork
• P node: Cisco Nexus 9300-GX, Huawei NetEngine 8000 X4, Juniper cRPD
• PLR: Juniper MX204
• PQ node with PSP (Penultimate Segment Popping) function: Arrcus QuantaMesh T7080-IXAE

In the SRLG test, we configured the links between the PLR and two of the P nodes with the same SRLG ID; the PLR shall compute and install the backup path through the link between itself and the PQ node. To emulate the link failure, we asked the vendor to shut down the link between the PLR and the P node while the traffic was flowing.

We recognized packet loss for a period of less than 50 ms; then the traffic was restored by directing it toward the PQ node. We identified that by monitoring the packet counters of the interface connecting the PQ node.

SR-TE and Seamless BFD

Seamless Bidirectional Forwarding Detection (S-BFD) is a lightweight version of the classical BFD protocol. S-BFD is designed to work with Segment Routing Traffic Engineering (SR-TE) sessions for SR-TE path monitoring. If the S-BFD session fails, the S-BFD process brings down the SR-TE session. If the S-BFD session recovers, the S-BFD process brings up the SR-TE session, which quickly preempts the primary role again. The efficiency of S-BFD stems from the lean mechanism of SR-TE path monitoring implemented by two logical nodes: the initiator and the reflector. The initiator sends the S-BFD packets toward the reflector through the monitored SR-TE policy. The reflector listens for the S-BFD packets and reflects them back toward the initiator.

For S-BFD testing purposes, we built a square physical topology which consisted of an ingress PE, an egress PE, and two P routers. We asked each pair of PEs to configure two SR-MPLS TE policies: one was the primary SR-MPLS TE path and the other one was the backup. Using Keysight IxNetwork, we started generating IPv4 unidirectional traffic from PE1 (initiator) to PE2 (reflector). As expected, we observed the traffic flowing through the primary SR-MPLS TE path. To emulate the SR-MPLS TE session teardown, we configured a simple packet filter on node P1 to filter and drop the SR-MPLS frames. In less than 50 ms, the S-BFD session went down and brought down the primary SR-MPLS TE path, and we witnessed the traffic being switched over to the backup SR-MPLS TE path.
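The initiator behavior exercised in this test can be sketched as a small state machine; the detect multiplier and names are illustrative assumptions, not a protocol implementation:

```python
# Sketch of the S-BFD initiator logic: the initiator declares the monitored
# SR-TE path down after a number of missed reflector replies, and the
# primary path is preempted again once replies return.

class SBFDInitiator:
    def __init__(self, detect_multiplier=3):
        self.detect_multiplier = detect_multiplier
        self.missed = 0
        self.path_up = True

    def interval_elapsed(self, reply_received):
        """Call once per transmit interval with whether a reply came back."""
        if reply_received:
            self.missed = 0
            self.path_up = True          # recovered: primary role preempted
        else:
            self.missed += 1
            if self.missed >= self.detect_multiplier:
                self.path_up = False     # bring the SR-TE session down
        return self.path_up

init = SBFDInitiator()
for _ in range(3):                        # reflector packets filtered out
    state = init.interval_elapsed(reply_received=False)
assert state is False
assert init.interval_elapsed(reply_received=True) is True   # filter removed
```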
After a while, we deactivated the packet filter and the S-BFD frames returned to flowing without any blocking. We checked the status of the S-BFD session and the primary SR-MPLS TE session; both of them were up. Thereafter, we confirmed the switchover of the traffic to the primary SR-MPLS TE path again.

The following devices successfully participated in the test:
• Initiator: Arista 7280R2, Huawei NetEngine 8000 F1A, Huawei NetEngine 8000 M8
• P node: Arista 7280R2, Huawei NetEngine 8000 F1A, Huawei NetEngine 8000 M8, Nokia 7750 SR-1
• Reflector: Arista 7280R2, Huawei NetEngine 8000 M8, Keysight Ixia IxNetwork, Nokia 7750 SR-1

… Juniper) must have in its routing table information about how to handle packets received with the End SID, and how to reach the SRv6 locator of the next hop on the path to the final destination.

The following devices successfully participated in the test:
• P with PSP function: Arrcus QuantaMesh T7080-IXAE, Juniper MX204
• P: Cisco Nexus 9300-GX, Juniper cRPD
• PE: Huawei NetEngine 8000 X4, Keysight Ixia IxNetwork
Topology

[Topology figure: the participating SDN controllers included the Cisco NSO Controller, Huawei NCE Controller, Keysight Ixia IxNetwork, Lumina SDN Controller, Nokia NSP Controller, Juniper NorthStar Controller, and Juniper HealthBot.]
through node 3 and node 4 for one PCE solution, as in the figure below. For the other PCE solution, the PCC created the path through node 3. We confirmed the successful traffic flow between the PCCs.

During the LSP state synchronization test with Nokia's PCE, one PCC solution sent the LSP state report with the delegation flag set to 0 after the PCEP session came back up, expecting the PCE to send a PCInitiate message to set the delegation in the LSP state report. Since the delegation flag was set to zero by the PCC in the LSP state report, the PCE was not able to update or delete the LSPs in the PCC. When the PCEP session was continuously up and running, the LSP state report was successfully sent with the delegation bit set to one. In this test, Huawei PCC was able to set the delegation bit to one in the LSP state report after the PCEP session was up.

During the LSP deletion test with Nokia PCE, the PCE sent the PCInitiate message with the R flag set to one. After deleting the LSP, one PCC sent the LSP state report with a non-zero LSP-ID. The LSP-ID should be zero in the LSP state report; this information is used by the PCE to remove the LSP on the PCE side. Since the LSP-ID was not set to zero, the PCE was not able to delete the LSP on its side. In this test, Juniper PCC successfully sent the LSP-ID with zero value, and Juniper and Nokia PCE successfully deleted the LSP in the PCCs with the delegation bit set to one. During the path re-optimization, Nokia PCE and the Lumina SDN Controller were able to optimize the path automatically once we changed the metric of the path.

This test verified the PCC capability for requesting an SR-TE/RSVP-TE path in a single IGP domain. Furthermore, we tested LSP updates with LSP delegation, and path optimization with LSP revocation.

The test setup included one centralized PCE and two PE nodes acting as PCCs. Meanwhile, one transport node was also used in the test for the path re-optimization purpose. The PE nodes and the transport node were configured with ISIS-SR in the MPLS domain to provide routing for the internal provider network. This routing information was used to generate labels and was synchronized with the PCE using the BGP-LS mechanism. Two PCE solutions were configured with SR-TE as the LSP signaling protocol. To verify the installed LSP, an L3VPN service was configured with MP-BGP as the service signaling protocol. The lowest IGP cost was set as the constraint for the LSP to follow the optimal path.

The test started with the PCEP session establishment between the PCE and the PCCs. After the successful PCEP session establishment, the PCCs requested the PCE for
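The two PCEP conformance points checked above (the delegation bit governing PCE control, and LSP-ID zero acknowledging a delete) can be sketched as follows; the report structure is an illustrative assumption, not a PCEP encoder:

```python
# Sketch of the PCEP checks exercised in this test (cf. RFC 8231/8281):
# an LSP state report with the delegation flag cleared revokes control,
# and a state report for a deleted LSP must carry LSP-ID zero before the
# PCE purges its copy of the LSP.

def pce_can_update(report):
    """The PCE may update/delete the LSP only while it holds the delegation."""
    return bool(report.get("delegate"))

def pce_purge_ok(report):
    """A delete is acknowledged with the R-flag semantics and LSP-ID zero."""
    return bool(report.get("remove")) and report.get("lsp_id") == 0

assert pce_can_update({"delegate": True, "lsp_id": 7}) is True
assert pce_can_update({"delegate": False, "lsp_id": 7}) is False   # revoked
assert pce_purge_ok({"remove": True, "lsp_id": 0}) is True
assert pce_purge_ok({"remove": True, "lsp_id": 42}) is False  # observed issue
```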
The following PCE and device combinations were tested:
• Nokia NSP Server with Juniper MX204-6, Huawei NetEngine 8000 X4, Juniper MX204-4, Nokia 7750 SR-1-2
• Juniper NorthStar Controller with Nokia 7750 SR-1-2, Huawei NetEngine 8000 X4, Juniper MX204-4
• Lumina SDN Controller with Juniper MX204-4, Huawei NetEngine 8000 F1A, Nokia 7750 SR-1-2
the path computation. Upon reception of the request, the PCE computed the LSP path based on the information obtained via BGP-LS and sent the path details to the PCCs. Using the PCE-computed path information, each PCC in the topology created one transport path in the direction facing the next PCC. We confirmed the successful traffic flow between the PCCs via the L3VPN service. After successful path creation, we increased the IGP cost on the links between the PE nodes. Meanwhile, the PCCs delegated the LSP to the PCE for the LSP update. In this case, the PCE identified the path constraint automatically, re-optimized the path, and sent the optimized path details to the PCCs. Both PCCs created a new transport path in the direction facing the next PCC through node 3. The traffic flow was successful between the PCCs via node 3. In the case of LSP revocation, the PCCs sent the LSP revocation (no delegation) report to the PCE. After the LSP revocation, we modified the IGP cost on the current path. Since the LSP delegation was inactive, the PCE was not able to optimize the path by itself. We performed the re-optimization command on the PCC, and Nokia PCC sent a PCReq to Nokia PCE for the path re-optimization. Upon reception of the PCC request, the PCE updated the LSP and sent the update message to the PCCs. The PCCs created the LSP in the direction facing the next PCC as in the initial stage. The traffic flow was seamless between the PCCs.

During the LSP revocation, one PCC solution achieved the revocation by sending the LSP state as a PCRpt-only message, which was accepted by the PCE to revoke the LSP delegation.

BGP Flowspec for IPv6

BGP Flowspec defines a new Multi-Protocol BGP (MP-BGP) Network Layer Reachability Information (NLRI) type, as specified in RFC 5575. The new NLRI collects layer 3 and layer 4 details that are used to define a flow specification. Actions are applied on the router based on the flow specification.

The test topology consisted of one Flowspec controller and two PE nodes. As the first step, the BGP session was established between the PE nodes. After the BGP session establishment, we generated four different UDP traffic streams between the same source and destination IPv6 addresses with different ports and DSCP values.

All four traffic streams were successfully forwarded without any loss. In the next step, we established the BGP sessions between the controller and the PE nodes and configured two Flowspec rules in the controller. The first rule was defined to match a destination IPv6 address and a UDP port for a rate-limiting scenario, and the second rule was defined to match a destination IPv6 address and a DSCP value for a full traffic drop. After the Flowspec policies were applied by the controller, we generated the same traffic streams again. The Arista 7280R2-X node met the two Flowspec conditions: it limited the traffic rate to 128 Kbit/s for one traffic stream and dropped all the packets of another stream. The remaining two streams were not affected, as expected. Since Arrcus supports only the drop Flowspec policy, the packet drop condition was met in the streams as specified by the controller.

Figure 37: BGP Flowspec for IPv6

• Nokia NSP Controller with Huawei NetEngine 8000 F1A, Nokia 7750 SR-1-2, Juniper MX204-4
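The two-rule classification used in this test can be sketched as follows; the prefixes, port, and DSCP value are illustrative, not the addresses used in the hot staging:

```python
# Sketch of the two Flowspec rules from the test: match on destination
# prefix plus UDP port for rate-limiting, and on destination prefix plus
# DSCP for a full drop; unmatched streams are forwarded unchanged.
from ipaddress import ip_address, ip_network

RULES = [
    {"dst": ip_network("2001:db8:1::/64"), "port": 5001,
     "action": ("rate-limit", 128)},          # limit to 128 Kbit/s
    {"dst": ip_network("2001:db8:1::/64"), "dscp": 46,
     "action": ("drop", None)},               # drop all matching packets
]

def classify(pkt):
    """Return the action of the first matching Flowspec rule, else forward."""
    for rule in RULES:
        if ip_address(pkt["dst"]) not in rule["dst"]:
            continue
        if "port" in rule and pkt["port"] != rule["port"]:
            continue
        if "dscp" in rule and pkt["dscp"] != rule["dscp"]:
            continue
        return rule["action"]
    return ("forward", None)

assert classify({"dst": "2001:db8:1::10", "port": 5001, "dscp": 0})[0] == "rate-limit"
assert classify({"dst": "2001:db8:1::10", "port": 5002, "dscp": 46})[0] == "drop"
assert classify({"dst": "2001:db8:1::10", "port": 5002, "dscp": 0})[0] == "forward"
```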
Egress Peer Engineering with Segment Routing

The Segment Routing architecture can be applied directly to the MPLS data plane with no change to the forwarding plane; it requires only a minor extension to the existing link-state routing protocols. The SR-based BGP-EPE solution allows a centralized SDN controller to program any egress peer policy at ingress border routers or hosts within the domain. Thanks to the BGP-LS extension, BGP peering node topology information (including peers, interfaces, and peering ASes) can be exported in a way that makes it possible to compute efficient BGP peering engineering policies and strategies.

This test verified that EPE Segment Routing can be used to allocate an MPLS label for each engineered peer and use a label stack to steer traffic to a specific destination. The test topology included a centralized EPE controller and three Autonomous Systems (AS). In AS 65001, DUT1 was configured as a PE router and DUT2 as an ingress Autonomous System Boundary Router (i_ASBR). DUT3 and DUT4 were located in different ASes, namely AS 65002 and AS 65003. The ingress port of DUT1 and the egress ports of DUT3 and DUT4 were connected to the traffic generator.

The EPE controller learned the BGP Peering SIDs and the external topology of DUT2 via BGP-LS EPE routes. DUT1 and DUT2 were configured with ISIS-SR in the MPLS domain to provide routing for the internal network. On DUT2, the paths to DUT3 and DUT4 were configured with different SR color policies, with DUT3 set as the best transport path. We generated IPv4 service traffic flows between DUT1 and DUT3 as well as between DUT1 and DUT4. The traffic was routed through DUT3 as expected. While traffic was flowing, the EPE controller pushed a color-based SR policy to DUT1 using BGP SR Policy to choose DUT4 as the transport path. After the policy was successfully applied, traffic was re-routed through DUT4. This test confirmed that DUT1 successfully encapsulated the traffic for delivery over the designated data-plane interconnection link.
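The label-stack steering described above can be illustrated with a small sketch. All SID label values and peer names below are hypothetical, not taken from the test setup; the point is only how an ingress router composes a stack of (node SID of the i_ASBR, EPE peering SID) to pin traffic to a chosen egress peer.

```python
# Sketch: composing an SR-MPLS label stack for BGP-EPE steering.
# All SID/label values and peer names are hypothetical.

def epe_label_stack(node_sid_label: int, peering_sid_label: int) -> list[int]:
    """Outermost label first: reach the i_ASBR via its node SID,
    then the peering SID selects the external peer link."""
    return [node_sid_label, peering_sid_label]

# Hypothetical values: i_ASBR node SID label 16002, and BGP
# peering SIDs toward two external ASes.
PEERING_SIDS = {"peer-as65002": 24001, "peer-as65003": 24002}

def steer(peer: str) -> list[int]:
    """Label stack the ingress PE would impose to force traffic
    out the given external peering link."""
    return epe_label_stack(16002, PEERING_SIDS[peer])

print(steer("peer-as65002"))  # [16002, 24001]
```

Re-steering traffic, as the EPE controller did in the test, then amounts to imposing a different peering SID at the bottom of the stack while the node SID at the top stays the same.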
Tested combinations:
• Keysight Ixia IxNetwork, Nokia 7750 SR-1-2, Juniper MX204-4, Arrcus QuantaMesh T4048-IX8A, Arrcus QuantaMesh T4048-IX8A
• Nokia NSP Controller, Arista 7280R2-X, Juniper MX204-4, Arrcus QuantaMesh T4048-IX8A, Arrcus QuantaMesh T4048-IX8A
• Nokia NSP Controller, Nokia 7750 SR-1-2, Juniper MX204-4, Arrcus QuantaMesh T4048-IX8A, Arrcus QuantaMesh T4048-IX8A
Participating devices: Cisco NSO (Orchestrator), Cisco NSO (Controller), Lumina SDN Controller, Metaswitch NOS Toolkit, Nokia 7750 SR-1, Nokia 7750 SR-1-2, Juniper MX204-6
The NETCONF protocol defines a simple mechanism through which a network device can be managed, configuration data can be retrieved, and new configuration data can be uploaded and manipulated. The protocol allows the device to expose a full and formal application programming interface (API). Applications can use this straightforward API to send and receive full and partial configuration data sets.

YANG is a data modeling language that was introduced to define the contents of a conceptual data store that allows networked devices to be managed using NETCONF. In this test, we verified that the IETF L2VPN service YANG model (RFC 8466) can be used to configure and manage L2VPNs. The test covered VPWS-specific parameters as well as BGP-specific parameters applicable to L2VPNs. A NETCONF-compliant client was used as a centralized controller to configure a group of PE nodes and provision L2VPN services.
In this test, we defined a set of configurable elements on the DUTs and used the NETCONF protocol from a compliant client to change parameters on the DUTs, which ran the NETCONF server. The NETCONF client connected to the DUTs (NETCONF servers) and established NETCONF sessions with them. After session establishment, the NETCONF client successfully retrieved the device running configuration and was also able to retrieve specific information, such as interface details, using subtree filtering. We then verified several configuration changes on the DUTs performed by the NETCONF client. We also confirmed that the NETCONF client was able to retrieve operational status and roll back the configuration to its previous state. Finally, the NETCONF client terminated the NETCONF session successfully.

Figure 41: L2VPN Service Creation Using NETCONF/YANG
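The retrieval step above can be sketched by assembling the corresponding NETCONF message. The sketch below only builds the <get-config> RPC with a subtree filter per RFC 6241; the `interfaces` filter element is illustrative (its data-model namespace is omitted), and no live session is attempted.

```python
# Sketch: building a NETCONF <get-config> RPC with a subtree filter
# (RFC 6241). The <interfaces> filter element is illustrative and
# its data-model namespace is omitted for brevity.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def get_config_rpc(message_id: str, filter_subtree: ET.Element) -> bytes:
    """Serialize an <rpc><get-config> message selecting the running
    datastore and restricting the reply via a subtree filter."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": message_id})
    get_config = ET.SubElement(rpc, f"{{{NC}}}get-config")
    source = ET.SubElement(get_config, f"{{{NC}}}source")
    ET.SubElement(source, f"{{{NC}}}running")
    flt = ET.SubElement(get_config, f"{{{NC}}}filter", {"type": "subtree"})
    flt.append(filter_subtree)
    return ET.tostring(rpc)

# Subtree filter: only interface details, as retrieved in the test.
subtree = ET.Element("interfaces")
msg = get_config_rpc("101", subtree)
print(msg.decode())
```

In practice a NETCONF client library would frame and send this over an SSH session; the sketch stops at the message itself.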
This test verified that the IETF L3VPN YANG model (RFC 8299) can be used to configure and manage the L3VPN service. We also verified VRF-specific parameters as well as BGP-specific parameters applicable to L3VPNs. The procedure was very similar to the previous test: we verified L3VPN service creation and deletion while confirming that the required traffic was flowing as expected.
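As a rough illustration of the RFC 8299 service model used here, the sketch below assembles a minimal `ietf-l3vpn-svc` payload as a Python dict with JSON-style naming (the test itself drove the devices over NETCONF/XML). The VPN and site identifiers are invented for the example, and only a small fragment of the model is shown.

```python
# Sketch: minimal L3VPN service payload in the spirit of RFC 8299
# (ietf-l3vpn-svc). Identifiers are invented, and only a fragment
# of the model is shown, using JSON-style naming.
import json

def l3vpn_service(vpn_id: str, site_ids: list[str]) -> dict:
    """Build a service-level view: one VPN plus the sites attached
    to it, which a controller would push toward the PE nodes."""
    return {
        "ietf-l3vpn-svc:l3vpn-svc": {
            "vpn-services": {
                "vpn-service": [{"vpn-id": vpn_id}],
            },
            "sites": {
                "site": [{"site-id": s} for s in site_ids],
            },
        }
    }

payload = l3vpn_service("vpn-blue", ["site-pe1", "site-pe2"])
print(json.dumps(payload, indent=2))
```

The point of the service model is visible even in this fragment: the operator describes the VPN and its sites, not per-device VRF or BGP configuration, and the controller derives the device-level parameters from it.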
Clocking

Given the growing importance of time synchronization in the industry and the crucial role it plays in modern technologies such as 5G, clocking tests have become essential, covering not only time accuracy and time error but also failover scenarios and security.
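The time error these tests quantify ultimately comes out of the basic PTP timestamp exchange. As a reminder, a minimal sketch with made-up timestamps shows how offset and mean path delay are derived from the four timestamps t1..t4, and how uncompensated path asymmetry turns directly into time error:

```python
# Sketch: offset/delay recovery from a PTP delay request-response
# exchange (IEEE 1588). Timestamps are made up, in microseconds.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: Sync sent by master, t2: Sync received by slave,
    t3: Delay_Req sent by slave, t4: Delay_Req received by master."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Symmetric 100 us path, slave perfectly aligned: offset comes out 0.
print(ptp_offset_and_delay(0, 100, 200, 300))   # (0.0, 100.0)

# Add 250 us extra delay in the master->slave direction only: the
# offset estimate is wrong by half the asymmetry (125 us).
print(ptp_offset_and_delay(0, 350, 200, 300))   # (125.0, 225.0)
```

This is why the protocol's delay computation assumes symmetric paths, and why asymmetry detection and correction matter for the accuracy levels targeted in these tests.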
• PE: ECI NPT-1800, Huawei ATN980C

The primary reference time clock (PRTC) was GPS, using an L1 antenna located on the roof of our lab. The synchronization test team tested brand-new software versions, products, and interface types, including PTP over 100 GbE.
Our tests helped to discover several small issues, but the R&D departments of the vendors reacted quickly, providing patches and troubleshooting support.

The following devices successfully participated in the test:
• Boundary Clock: Juniper QFX5110-48S
• Grandmaster: Huawei NetEngine 8000 X4, Microchip TimeProvider 4100
• Impairment: Calnex Paragon-X
• Phase and Frequency Analyzer: Calnex Paragon-T
• Slave Clock: Huawei NetEngine 8000 M8, Microchip TimeProvider 4100

Phase/Time Partial Timing Support

In response to the need for a phase/time synchronization solution that is more feasible in non-greenfield deployments, a PTP telecom profile for phase/time synchronization with partial timing support from the network has been developed.
We performed the test with the ITU-T G.8275.2 profile (PTP telecom profile for phase/time-of-day synchronization with partial timing support from the network) between the Grandmaster and the Boundary Clock, without any physical frequency reference such as SyncE; the grandmaster clock was provided with GPS input, while the slave and boundary clock started from a free-running condition.

This setup emulates a Partial Timing Support scenario, using impairment between the GM and the BC; the impairment was applied as per the PDV profile according to G.8261 test case 12, using the Calnex Paragon-X device.

For this test case, we had three vendors: Microchip, Juniper, and Huawei. The devices swapped roles to get as many pairs as possible. The Calnex Paragon-X was used to generate the impairment between the GM and BC, while the Paragon-T was used to take the measurements.

For one combination, we had to wait for the GM to lock on the BC before applying the impairment, because otherwise they were not able to lock; the remaining steps then continued successfully. Some of the devices had a limitation with the GM on a 10 Gbps link and the slave on a 1 Gbps link, which caused PTP locking issues, so we had to use the same speed for both.

Phase/Time Synchronization, Source Failover

Deploying a reliable time delivery method is a concept of IEEE 1588v2, and to achieve this, clock source redundancy is required, with primary and secondary grandmasters provided to the clocking system. For PTP, the whole chain includes a boundary clock and its slaves next to the clock source. The boundary clock determines the primary grandmaster with the best clock quality and interacts with the slaves via PTP to deliver precise time over the network.
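The grandmaster selection described above follows the IEEE 1588 best master clock comparison, which ranks candidates field by field. A simplified sketch with hypothetical announce data (real BMCA also breaks ties on clock identity and topology, which is omitted here):

```python
# Sketch: simplified IEEE 1588 BMCA dataset comparison. The real
# algorithm also breaks ties on clockIdentity/stepsRemoved; the
# announce values below are hypothetical.
from dataclasses import dataclass

@dataclass
class AnnounceData:
    priority1: int
    clock_class: int          # lower is better; class 6 = locked to PRTC
    clock_accuracy: int
    offset_scaled_log_variance: int
    priority2: int

def better_master(a: AnnounceData, b: AnnounceData) -> AnnounceData:
    """Return the preferred grandmaster: fields are compared in
    order, and the lower value wins at the first differing field."""
    def key(d: AnnounceData):
        return (d.priority1, d.clock_class, d.clock_accuracy,
                d.offset_scaled_log_variance, d.priority2)
    return a if key(a) <= key(b) else b

primary = AnnounceData(128, 6, 0x21, 0x4E5D, 128)  # GPS-locked GM
backup = AnnounceData(128, 7, 0x21, 0x4E5D, 128)   # GM in holdover
print(better_master(primary, backup).clock_class)  # 6
```

In a failover event the primary's advertised clock class degrades (e.g. on loss of its GPS reference), the comparison flips, and the boundary clock re-selects the secondary grandmaster.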
One slave clock did not send any PTP packets, but it was unclear whether the boundary clock had sent any unexpected sync messages. Therefore, the slave clock could not lock, and the test could not be completed.
Phase/Time Partial Timing Support

The combination passed the G.8271 accuracy level 6.
Phase/Time GNSS Vulnerability Test

To meet the security demands of 5G, GNSS must remain a reliable system. Professional GNSS receivers need to be aware of all possible vulnerabilities that could be exploited. Spoofing is an intelligent form of interference that makes the receiver believe it is at a false location.

To protect these receivers against spoofing, a GNSS security device can be used, which analyzes the GPS signal. GPS signal data is received and evaluated from each satellite to ensure compliance, along with analyzing the received signal characteristics.

This test verified the capability of a GNSS firewall to detect a spoofed GNSS signal and stop it.

In this test, we tested the BlueSky firewall from Microchip. Some of its ports can be configured as Verified output ports, which are able to detect a spoofed signal, and others as Hardened ports, which do not stop the signal.

We connected the BlueSky to the GNSS antenna and its outputs to the GMs. We emulated spoofing by disconnecting the GNSS signal, which caused the BlueSky to run on its internal oscillator, and we measured the time error on the slave clocks.

On the Hardened ports, the slave clocks did not detect any loss of the GNSS signal, did not show any transient response, and continued to show clock class 6 as the source of time. The Verified ports, by contrast, detected the GNSS spoofing and stopped forwarding the signal, which caused the slave clocks to raise an alarm for Master Clock Class degradation. This means the test was successfully passed.
Then we used the Calnex device to apply a 250 µs constant delay in one direction; the DUT detected the delay and showed an alarm as expected. The test was successfully passed.

Remote Collaboration Aspects

At EANTC, we have conducted many remote testing programs in the Network Functions Virtualization (NFV) space, for example the New IP Agency (NIA) test series with more than 80 participants, of which only 20 % were present locally, the Intel and Lenovo performance testing program for a substantial number of virtual network functions, and the NetSecOpen next-gen firewall performance certification program.
Summary

We successfully performed a wide range of EVPN tests, among which IRB, all-active multi-homing, MAC mobility, and loop prevention extensively addressed the EVPN control plane for MAC/route learning intelligence. Carrier Ethernet service tests included EVPNs with different service types and protocols that save resources and time between multicast nodes. Finally, EVPN scenarios for data center interconnection were evaluated. Throughout the event, we observed EVPN-capable devices supporting the entire test scope as well as solutions specializing in some advanced test cases. We observed only a few interoperability issues, helping vendors to further mature their EVPN implementations.

FlexE tests included two specific features: channelization and bandwidth adjustment. Both were successfully tested in this initial round of multi-vendor evaluation. We plan to expand this area with more test cases and more participating implementations next year.

In the Path Computation Element Protocol (PCEP) test area, we successfully tested PCE- and PCC-initiated stateful path computation with different vendor combinations. We successfully confirmed traffic rate limitation and traffic drop using BGP Flowspec for IPv6 with various rules. In the Egress Peer Engineering with Segment Routing tests, the participating vendor implementations successfully redirected traffic via ingress Autonomous System Boundary Routers (i_ASBR) by modifying the BGP SR color policy in the ingress PE router.

Another test case covered seamless BFD interoperability over an SR-TE path. We successfully examined the SRv6 data plane for EVPN services, traffic engineering, and TI-LFA. We reintroduced BGP Segment Routing this year with a spine layer consisting of two nodes, and three different vendors took the role of the leaf node.

The clock synchronization tests started with classic Partial and Full Timing Support implementations, went on with failover resiliency tests, and concluded with new tests of Class C boundary clocks, GNSS security, and PTP over MACsec. This year, the test coverage caught up with recently ratified standards, and precision was elevated to meet the demands of new 5G deployment scenarios. We were able to achieve G.8271 level 6 accuracy in most of the tests, and even for level 6C we tested the passive port feature and asymmetry detection.

Security aspects are not very popular in PTP testing, but they are becoming ever more important with larger-scale field deployments to less secure locations in 5G networks. We successfully tested a GNSS firewall and the encryption of PTP packets with MACsec, which are very difficult tests to pass.

Throughout all test areas, we faced a number of interoperability issues and problems, as expected in this type of event. It was a great experience to see very thoughtful, experienced, and knowledgeable engineers working together, troubleshooting and solving problems towards production-ready interoperability of advanced technology solutions.
While every reasonable effort has been made to ensure accuracy and completeness of this publication,
the authors assume no responsibility for the use of any information contained herein.
All brand names and logos mentioned here are registered trademarks of their respective companies
in the United States and other countries.
20200401 v1.2