
#214120

September 2014

Commissioned by Huawei Technologies Co., Ltd

Huawei CloudEngine 7800/6800/5800 Series Data Center Switch

Performance, Virtualization, Data Center Features and SDN Evaluation

EXECUTIVE SUMMARY

Huawei CloudEngine 7800/6800/5800 series switches are 40GbE/10GbE/GbE data center switches developed by Huawei Technologies Co., Ltd. Huawei commissioned Tolly to evaluate the CE7800/6800/5800 series switches' performance, virtualization capability, features and SDN functionality.

Tolly engineers verified that the CloudEngine switches provided high performance with low power consumption, virtualization capability with Huawei's iStack and SVF technologies, as well as numerous data center features including VEPA, TRILL, FCoE (FCF, NPV, FSB modes and DCB), and Huawei nCenter interoperation with VMware vCenter.

Tests also show that the CloudEngine switches supported OpenFlow SDN, including interoperability with the Huawei Agile Controller and the third-party controller "Ryu", L2/L3 line-rate forwarding, multiple flow tables, policy-based routing, and dynamic traffic engineering (TE).

THE BOTTOM LINE

Huawei CloudEngine 7800/6800/5800 Series Data Center Switches:

1. Support FCoE (FCF, NPV, FSB modes and DCB) with the CE7800/6800
2. Support virtualization with iStack (virtualize 16 physical switches into one logical switch) and SVF vertical virtualization (virtualize multiple homogeneous or heterogeneous physical switches into one logical switch, with local forwarding on leaf nodes)
3. Support a large Layer 2 TRILL network with 512 nodes and an active-active TRILL edge
4. Support OpenFlow SDN with topology discovery, L2/L3 line-rate forwarding, multiple flow tables, policy-based routing and dynamic traffic engineering, with interoperability with the Huawei Agile Controller and third-party SDN controllers

Table 1: Huawei CloudEngine 7800/6800/5800 Series Data Center Switch Layer 2 Throughput
(as reported by Ixia IxNetwork 7.22.9.9.9EA)

Throughput (% of line rate) by frame size:

Model (ports tested)                                    64B    128B   256B   512B   1024B  1280B  1518B  9216B
CE7850-32Q-EI (32 x 40GbE)                               -     100%   100%   100%   100%   100%   100%   100%
CE6850-48S6Q-HI (48 x 10GbE + 6 x 40GbE)                100%   100%   100%   100%   100%   100%   100%   100%
CE6810-48S4Q-EI (48 x 10GbE + 4 x 40GbE)                100%   100%   100%   100%   100%   100%   100%   100%
CE5850-48T4S2Q-HI (48 x GbE + 4 x 10GbE + 2 x 40GbE)    100%   100%   100%   100%   100%   100%   100%   100%
CE5810-48T4S-EI (48 x GbE + 4 x 10GbE)                  100%   100%   100%   100%   100%   100%   100%   100%
CE5810-24T4S-EI (24 x GbE + 4 x 10GbE)                  100%   100%   100%   100%   100%   100%   100%   100%

Note: Ports of the same type were connected in a snake traffic topology. For example, the CE5810-24T4S-EI had its 24 GbE ports in one snake topology and its four 10GbE ports in another.
Source: Tolly, May 2014

Test Results

Tolly engineers benchmarked the performance and feature set of a range of Huawei CloudEngine 7800/6800/5800 Series data center top-of-rack (ToR) switches outfitted with Gigabit Ethernet, 10GbE and 40GbE ports.

The feature evaluation included virtualization, data center functionality and OpenFlow capabilities. Test results are summarized below and detailed in the Test Setup and Methodology section. See Table 4 for the list of all verified items.

Performance

Layer 2 Throughput & Latency

For each device under test, the Layer 2 throughput was measured individually across a range of frame sizes from 64 bytes through 9216 bytes. Testing encompassed combinations of Gigabit Ethernet, 10GbE and 40GbE ports depending upon the device and model. In all cases, traffic of a particular topology was snaked from port to port.

As shown in Table 1, all models of the 5800/6800 tested delivered line rate at every frame size tested, from 64-byte frames to 9216-byte jumbo frames. The CE7850, outfitted with 32 40GbE ports, delivered line-rate throughput at all frame sizes tested from 128 bytes to 9216 bytes.

Tolly engineers measured latency at the same frame sizes in both 40GbE and 10GbE configurations. In tests of 40GbE ports on the CE7850 switch, latency ranged from 0.60 μs to 0.73 μs. In tests of 10GbE ports on the CE6850 switch, latency ranged from 0.87 μs to 0.95 μs. In tests of 10GbE ports on the CE6810 switch, latency ranged from 0.81 μs to 1.37 μs. See Table 2 for detailed results.

Power Consumption

To assist network architects in determining the operational costs of the data center switches, Tolly engineers measured the power consumption of the devices. Engineers benchmarked various combinations of ports across the CloudEngine 7800/6800/5800 family. Tests were carried out according to the ATIS recommendations and the results can be found in Table 3.

Table 2: Huawei CloudEngine 7800/6800 Series Data Center Switch Layer 2 Latency
(as reported by Ixia IxNetwork 7.22.9.9.9EA and Spirent TestCenter)

Latency (μs) by frame size:

Model (test configuration)                                    64B    128B   256B   512B   1024B  1280B  1518B  9216B
CE7850-32Q-EI (40GbE port 1 to port 2, cut-through)           0.60   0.62   0.63   0.68   0.73   0.73   0.73   0.73
CE6850-48S6Q-HI (10GbE port 1 to port 2, store-and-forward)   0.87   0.87   0.93   0.94   0.95   0.94   0.94   0.92
CE6810-48S4Q-EI (10GbE port 1 to port 2, cut-through)         0.81   0.86   0.96   1.16   1.38   1.37   1.38   1.37

Note: Line-rate traffic was used for testing. Cut-through latency was measured as FIFO (first in, first out) latency, while store-and-forward latency was measured as LIFO (last in, first out) latency. Thus, the store-and-forward results reported here do not include the time required to store the frame.
Source: Tolly, May 2014
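For context, the frame-storing time excluded from the LIFO figures is simply the serialization delay of the frame at line rate. The quick calculation below (a sketch for illustration, not part of Tolly's methodology) shows that this delay would otherwise dominate the reported values at large frame sizes:

    def serialization_delay_us(frame_bytes, rate_gbps):
        """Time to clock one frame onto the wire, in microseconds."""
        # bits / (Gbit/s * 1000) yields microseconds
        return frame_bytes * 8 / (rate_gbps * 1000)

    # A 1518-byte frame at 10GbE takes ~1.21 us just to be received in full,
    # more than the 0.94 us LIFO latency reported for the CE6850 above.
    print(serialization_delay_us(1518, 10))  # ~1.2144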

Features

Virtualization

iStack

iStack is Huawei's technology for virtualizing multiple ToR switches into one logical switch. Tolly engineers verified that 16 CE6850-48T4Q-EI switches could be stacked in a ring or line topology using the iStack technology.

Super Virtual Fabric

Super Virtual Fabric (SVF) is Huawei's technology for vertical virtualization, which can virtualize the access switches and the core/aggregation switches to function as one logical switch.

CE6850 switches were stacked together using iStack and served as the aggregation switch. Then, the stacked switch was virtualized with multiple CE5810 switches, which served as access switches. See Figure 1 for the topology.

Tolly engineers verified that switches in SVF supported local forwarding on the leaf nodes and that the stacking links supported link aggregation and load balancing.

Tolly engineers then swapped all CE5810 switches in the test bed with CE6810 switches and verified the same features.

Data Center Features

With the use of server virtualization and cloud computing in data centers, traditional networks face challenges including Layer 2 network scalability issues, the limit of 4,094 VLANs, increased demands on switch MAC tables, network requirements for FCoE traffic, and the difficulty of enforcing network policies on virtual machines (VMs) as they "live migrate" to different hosts or even different data centers.

Tolly engineers verified several features of the Huawei CE7800/6800/5800 switches that address these problems. TRILL was verified to expand the Layer 2 network.

Table 3: Huawei CloudEngine 7800/6800/5800 Series Data Center Switch Power Consumption
(as reported by Chroma Programmable AC Source 6560)

Power consumption (Watts):

Model (ports)                                          0% Traffic  30% Traffic  100% Traffic  ATIS Weighted Power  ATIS TEER (Gbps/Watt)  ATIS Weighted Watts/Gbps
CE7850-32Q-EI (32 x 40GbE)                             277.7       290.6        320.5         292.3                4.38                   0.23
CE6810-48S4Q-EI (48 x 10GbE + 4 x 40GbE)               124.0       130.0        136.0         130.0                4.92                   0.20
CE5850-48T4S2Q-HI (48 x GbE + 4 x 10GbE + 2 x 40GbE)   110.6       118.3        128.3         118.5                1.42                   0.71
CE5810-48T4S-EI (48 x GbE + 4 x 10GbE)                 69.9        70.2         72.0          70.4                 1.25                   0.80
CE5810-24T4S-EI (24 x GbE + 4 x 10GbE)                 49.8        50.3         50.7          50.3                 1.27                   0.79

Notes:
1. Switches were fully loaded with fans and power supplies.
2. The traffic-load columns are measured results; the ATIS columns are calculated results.
3. Alliance for Telecommunications Industry Solutions (ATIS) weighted power = (power consumption with 0% traffic) x 0.1 + (power consumption with 30% traffic) x 0.8 + (power consumption with 100% traffic) x 0.1. ATIS Telecommunication Energy Efficiency Ratio (TEER) = (maximum demonstrated throughput) / (ATIS weighted power).
4. ATIS weighted Watts/Gbps = 1 / (ATIS TEER).
5. iMIX traffic (5% 49-byte frames, 20% 576-byte frames, 42% 1,500-byte frames and 33% frames of 49-1,500 bytes) was used.
Source: Tolly, May 2014
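The note-3 arithmetic is easy to reproduce. A minimal sketch, using the CE7850 row of Table 3 as input:

    def atis_weighted_power(p0, p30, p100):
        """ATIS weighted power per note 3: 10% idle, 80% at 30% load, 10% at full load."""
        return 0.1 * p0 + 0.8 * p30 + 0.1 * p100

    def atis_teer(max_throughput_gbps, weighted_power_watts):
        """ATIS TEER = maximum demonstrated throughput / weighted power (Gbps/Watt)."""
        return max_throughput_gbps / weighted_power_watts

    # CE7850-32Q-EI: 32 x 40GbE = 1280 Gbps maximum demonstrated throughput
    w = atis_weighted_power(277.7, 290.6, 320.5)   # ~292.3 W
    teer = atis_teer(1280, w)                      # ~4.38 Gbps/W
    print(round(w, 1), round(teer, 2), round(1 / teer, 2))  # matches Table 3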

DCB features were verified to provide lossless Ethernet for FCoE. VEPA was verified to direct all network traffic of VMs to the physical switch for easier management. Huawei nCenter and VSI Manager were evaluated for providing network policy migration following VMs' live migration.

Tolly engineers also verified that the Huawei CE7850 and CE6850HI switches could act as the VXLAN overlay network tunnel endpoint and gateway. Tolly engineers further verified that the Huawei CE12800 data center core switch supported Huawei's Ethernet Virtual Network (EVN) to provide L2 connectivity across the L3 WAN network. This feature is discussed in Tolly Test Report #214119.

TRILL & High Availability

Tolly engineers verified that the Huawei CloudEngine switches supported Transparent Interconnection of Lots of Links (TRILL) with a large Layer 2 TRILL network that consisted of 512 nodes. Additionally, engineers verified support for high availability with an active-active TRILL edge.

Table 4: Huawei CloudEngine 7800/6800/5800 Series Data Center Switch - Tolly Verified Performance and Features

Performance
- Line-rate forwarding
- 10GbE port latency (cut-through) as low as 0.8 μs; 40GbE port latency (cut-through) as low as 0.6 μs
- Low power consumption

Virtualization
- iStack: virtualize 16 physical switches into one logical switch
- Super Virtual Fabric (SVF) vertical virtualization (virtualize aggregation and access switches into one logical switch) with local forwarding on leaf nodes

Data Center Features
- Transparent Interconnection of Lots of Links (TRILL): support a large L2 network with up to 512 nodes
- High availability with TRILL: active-active TRILL edge, with two nodes acting as a DFS group sharing one pseudo TRILL nickname
- FCoE (FCF, NPV, FSB modes)
- Data Center Bridging (DCB): PFC, ETS, DCBX (not including CE5800 switches)
- 802.1Qbg Virtual Edge Port Aggregator (VEPA)
- Network policy migration: controlled by Huawei nCenter, interoperating with VMware vCenter to implement in-service policy migration with virtual machine live migration
- Network policy migration: controlled by Huawei VSI Manager, interoperating with VMware vCenter and the IBM 5000V virtual distributed switch using VEPA to implement in-service policy migration with virtual machine live migration
- Underlying network for VXLAN or NVGRE overlay networks; the CE7850 and CE6850HI acted as the VXLAN Tunnel End Point (VTEP) and as the VXLAN overlay network gateway

OpenFlow SDN
- Controlled by the Huawei Agile Controller or third-party SDN controllers (Ryu tested)
- Topology discovery, L2/L3 line-rate forwarding, multiple flow tables, policy-based routing, dynamic traffic engineering (TE)

Source: Tolly, May 2014

FCoE/DCB

Engineers verified support for a key set of data center functionality in the areas of Fibre Channel over Ethernet (FCoE) and data center bridging (DCB). The Huawei CloudEngine switches supported FCF, NPV and FSB modes for FCoE. Engineers also verified interoperability between the Huawei CloudEngine switches and CNAs from major vendors including Emulex, QLogic and Intel. See Table 4 and the Test Methodology section for additional details.

Figure 1: Huawei Super Virtual Fabric (SVF) Test Bed. Source: Tolly, May 2014

VEPA and Network Policy Migration

Tolly engineers verified interoperability between Huawei nCenter and VMware vCenter to implement in-service policy migration with virtual machine migration. When a virtual machine was live migrated to another host, the network policy (ACL rules and QoS policies) for the virtual machine's group also migrated to the proper switch and port.

Engineers also verified interoperability between Huawei VSI Manager, VMware vCenter and the IBM 5000V virtual distributed switch, using 802.1Qbg Virtual Edge Port Aggregator (VEPA), to implement in-service policy migration with virtual machine live migration.

Overlay Network - VXLAN and NVGRE

Two major data center overlay network technologies are Virtual Extensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE). These overlay technologies provide Layer 2 connectivity for tunnel endpoints (e.g. virtual switches) over a physical Layer 3 network. They can expand the Layer 2 network for the virtual machines, overcome the limit on the number of VLANs by adding a new Layer 2 network segment header (a VNI for VXLAN, a VSI for NVGRE), and reduce the demands on the MAC tables of the physical switches.

As the underlying physical network only needs to provide Layer 3 connectivity for the tunnel endpoints (e.g. virtual switches), the physical switches do not need to change much. The Huawei CE7800/6800/5800 switches acted as the underlying network in the VXLAN and NVGRE overlay environments during the test.

To allow a virtual environment using VXLAN or NVGRE to communicate with non-VXLAN or non-NVGRE endpoints, as well as to provide Layer 3 connectivity for VXLAN or NVGRE endpoints in different network segments, a gateway is needed. Tolly engineers verified that the Huawei CE7850 and CE6850-HI switches could act as the gateway for the VXLAN overlay network, while the Huawei CE12800 switch could act as the gateway for either a VXLAN or an NVGRE overlay network.
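To make the encapsulation concrete, the sketch below composes a VXLAN frame with the Scapy library (illustrative only, not part of the test procedure; the addresses and VNI value are invented). The inner Ethernet frame rides inside a UDP datagram (destination port 4789) whose VXLAN header carries the 24-bit VNI segment ID:

    from scapy.layers.inet import IP, UDP
    from scapy.layers.l2 import Ether
    from scapy.layers.vxlan import VXLAN

    # Outer headers: the physical underlay only needs to route this IP/UDP flow.
    outer = Ether() / IP(src="192.0.2.1", dst="192.0.2.2") / UDP(dport=4789)
    # Inner frame: the virtual machine's original Ethernet traffic.
    inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / IP()
    # The 24-bit VNI allows ~16 million segments versus 4,094 VLANs.
    frame = outer / VXLAN(vni=5000) / inner
    frame.show()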

OpenFlow Software Defined Networking

Tolly engineers verified various capabilities in the area of software defined networking (SDN).

Topology Discovery

The Huawei Agile Controller can display the whole network topology using the LLDP topology discovery capability of the CE7800/6800/5800 switches.

Third-Party SDN Controller with Multiple Flow Tables

In addition to verifying SDN management via the Huawei Agile Controller, engineers also verified that the Huawei devices could be managed by a third-party controller, in this test the Ryu SDN Framework. Multiple flow tables were applied to the CloudEngine switches from the Ryu controller.

Flow Table Performance

Engineers verified that the CloudEngine switches delivered line-rate Layer 2 and Layer 3 performance on two 10GbE ports using SDN-based flow tables.

Policy-based Routing

Tolly engineers verified that policy controls could be used to route traffic through specific switches as configured via SDN.

Dynamic Traffic Engineering

Tolly engineers verified that dynamic traffic engineering (TE) could be used to adjust the forwarding path dynamically based on traffic load.

Test Setup & Methodology

Performance

Throughput

The CE7850-32Q-EI, CE6850-48S6Q-HI, CE6810-48S4Q-EI, CE5850-48T4S2Q-HI, CE5810-48T4S-EI and CE5810-24T4S-EI were tested using the RFC2544 throughput test suite in Ixia IxNetwork. For each device under test (DUT), all available ports of the same type were connected in a snake topology. See Table 1 for results.
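RFC 2544 throughput is conventionally found by a binary search for the highest offered load with zero frame loss. The sketch below shows that search logic in simplified form (illustrative only; the Ixia suite's actual parameters were not inspected, and measure_loss stands in for one test-tool trial):

    def rfc2544_throughput(measure_loss, resolution=0.1):
        """Binary-search the highest offered load (% of line rate) with zero loss.
        measure_loss(rate_pct) runs one trial and returns the frames lost."""
        lo, hi = 0.0, 100.0
        while hi - lo > resolution:
            mid = (lo + hi) / 2
            if measure_loss(mid) == 0:
                lo = mid   # no loss: this rate is sustainable, search higher
            else:
                hi = mid   # loss observed: search lower
        return lo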

Latency

Cut-through latency (FIFO) of the CE7850-32Q-EI and CE6810-48S4Q-EI was measured port to port with line-rate traffic using the RFC2544 latency test suite in Ixia IxNetwork. Store-and-forward latency (LIFO) was measured for the CE6850 switch using the RFC2544 latency test suite in Spirent TestCenter; thus, the store-and-forward results reported here do not include the time required to store the frame. See Table 2 for results.

Power Consumption

Power consumption was measured using the same traffic topology as the throughput test. Following the ATIS standard for data center switches, power consumption was measured at 0% traffic, 30% traffic and 100% traffic using iMIX traffic (5% 49-byte frames, 20% 576-byte frames, 42% 1,500-byte frames and 33% frames of 49-1,500 bytes). The ATIS weighted power, ATIS TEER and ATIS weighted Watts/Gbps of each switch were then calculated. See the notes of Table 3 for additional details.

The ATIS standard refers to the "Energy Efficiency for Telecommunications Equipment: Methodology for Measurement and Reporting for Router and Ethernet Switch Products" document published by the Alliance for Telecommunications Industry Solutions (https://www.atis.org/docstore/product.aspx?id=25324).

Virtualization

iStack

iStack is Huawei's technology for virtualizing multiple ToR switches into one logical switch. Tolly engineers verified that 16 CE6850-48T4Q-EI switches could be stacked in a ring or line topology using the iStack technology.

SVF

Super Virtual Fabric (SVF) is Huawei's technology for vertical virtualization, which can virtualize the access switches and core/aggregation switches into one logical switch.

CE6850 switches were stacked together using iStack and acted as the aggregation switch. The stacked switch was then virtualized with multiple CE5810 switches, which acted as access switches. See Figure 1 for the topology.

Tolly engineers verified that switches in SVF supported local forwarding. The stacking links between the aggregation switch and the access switches also supported load balancing. Tolly engineers then swapped all CE5810 switches in the test bed for CE6810 switches and verified the same features.

Data Center Features

TRILL

Transparent Interconnection of Lots of Links (TRILL) uses Layer 3 routing techniques to build a large Layer 2 network. Engineers used Spirent TestCenter to simulate one TRILL node on one port and 510 TRILL nodes on the other port. Both ports were connected to the CloudEngine switch under test. Tolly engineers verified that the switch under test showed all 511 TRILL neighbors; together with the switch under test, the whole TRILL network comprised 512 nodes. CE7850, CE6850 and CE5850 switches were all tested.

Engineers also configured two CE6850 switches in active-active status as a DFS group with a pseudo TRILL nickname for high availability and verified fast failover for switch and link failures.

FCoE

Tolly engineers verified that the Huawei CE6850 could act in FCF, NPV or FSB mode for FCoE. CNAs from major vendors including Emulex, QLogic and Intel were used during the test to verify the Huawei CE6850 switch's interoperability with them.

FCoE - FCF Mode

When a Fibre Channel over Ethernet (FCoE) switch operates in FCoE Forwarder (FCF) mode, it encapsulates FC frames in Ethernet frames and uses FCoE virtual links to simulate physical FC links. It provides standard FC switching capabilities and features on a lossless Ethernet network.

Tolly engineers verified that the CE6850-48S4Q-EI switch supported FCF mode both single-hop and multi-hop. In the single-hop test, one CE6850 switch connected the SAN storage and the physical server. In the multi-hop test, two CE6850 switches in series connected the SAN storage and the physical server. In both tests, the physical server could mount and access the LUNs on the SAN storage without any problem.

One Emulex OneConnect OCe11102-FM dual-port 10Gb/s FCoE Converged Network Adapter (CNA) was used on the physical server to connect to the CE6850 switch under test.

Figure 2: Huawei CloudEngine Switch Performance Test Bed. The CE7850-32Q-EI, CE6850-48S6Q-HI, CE6850-48S4Q-EI, CE6810-48S4Q-EI, CE5850-48T4S2Q-HI, CE5810-48T4S-EI and CE5810-24T4S-EI were each connected to the Ixia XM12 IP performance tester.

Note: Ports of the same type on one switch were connected in a snake topology: all available 40GbE ports in one snake topology, all available 10GbE ports in another, and all GbE ports in another.

Source: Tolly, May 2014

FCoE - NPV Mode

A Fibre Channel Storage Area Network (FC SAN) needs a large number of edge switches that are directly connected to nodes (servers and storage). FCoE switches in FCoE N-Port Virtualization (NPV) mode can expand the number of switches in an FC SAN.

The fabric is the main network, with FCoE switches in FCF mode. NPV switches reside between the nodes and the core FCoE FCF switches, on the edge of the fabric. NPV switches forward FCoE traffic from their connected nodes to the core FCF switch. The NPV switch appears as an FCF switch to the nodes, and as a node to the core FCF switch.

Tolly engineers verified that the Huawei CE6850 switch could work in NPV mode and interoperate with a Brocade VDX6700 FCoE switch in FCF mode. One physical server was connected to the Huawei CE6850 switch while one SAN storage array was connected to the Brocade VDX6700 FCoE switch. FCoE traffic was forwarded to the Brocade VDX6700 switch by the CE6850, and the physical server accessed the SAN storage without any problem.

One QLogic 8200 series 10Gbps CNA was used on the physical server to connect to the Huawei CE6850 switch.

FCoE - FSB Mode

An FCoE switch in FCoE Initialization Protocol Snooping Bridge (FSB) mode does not support the FC protocol itself. It uses FCoE Initialization Protocol (FIP) snooping to prevent attacks.

One Spirent TestCenter port simulating a server (FCoE initiator) was connected to a Huawei CE6850 switch in FSB mode. The CE6850 switch was then connected to a Huawei CE12800 switch in FCF mode. Another Spirent TestCenter port was connected to the CE12800 switch on the other end to simulate a SAN storage array (FCoE target).

An FCoE session was created between the simulated FCoE initiator and target. The CE6850 then stored the FCoE session information via FIP snooping.

Tolly engineers verified that only FCoE traffic matching the MAC address of the simulated SAN storage could be forwarded to the FCoE initiator. FCoE traffic not matching the MAC address of the FCoE session, as well as other types of non-FCoE traffic matching that MAC address, could not be passed to the FCoE initiator by the CE6850 switch in FSB mode.

Engineers also used a real physical server and SAN storage with the CE6850 in FSB mode and the CE12800 in FCF mode to verify connectivity. The physical server could access the storage without any problem. One Intel CNA was used on the physical server to connect to the CE6850 switch.

Data Center Bridging (DCB)

Data Center Bridging (DCB) is a suite of IEEE standards that provides many advantages for data centers, such as lossless Ethernet for FCoE traffic. Tolly engineers verified DCBX, PFC and ETS, which are components of DCB, on the Huawei CE6850 switch.

DCB - DCBX

The Data Center Bridging Capability Exchange (DCBX) protocol is an extension of the Link Layer Discovery Protocol (LLDP) used to discover peers and exchange configuration information between DCB-compliant switches.

Tolly engineers verified that the Huawei CE6850 switch could use the DCBX protocol to negotiate ETS and PFC settings. Only when the ETS and PFC settings matched between the CE6850 switch and the connected Spirent TestCenter could ETS and PFC function.

DCB - PFC and ETS

When Priority-based Flow Control (PFC) is enabled on a switch port for inbound traffic with certain 802.1p priorities, the port sends back-pressure signals to reduce the sending rate of those priorities from the upstream device if network congestion occurs.

Enhanced Transmission Selection (ETS) implements QoS based on Priority Groups (PGs). To configure ETS, engineers mapped 802.1p priorities 0 through 7 to PG0, PG1 and PG15, the groups offered by the Huawei CE6850 switch, which are scheduled with PQ+DRR. PG15 uses Priority Queuing (PQ), giving it unrestricted bandwidth, and carries management or IPC traffic. PG0 and PG1 use Deficit Round Robin (DRR) to share, by weight, the bandwidth left over by PG15.

In the test, engineers enabled PFC for priority 3 (the default priority for FCoE traffic) to ensure the FCoE traffic would see no frame loss. Priorities 0, 1, 2, 4 and 5 were assigned to PG0; priority 3 (FCoE traffic) was assigned to PG1; priorities 6 and 7 (IPC traffic) were assigned to PG15. The weight ratio for PG1 (FCoE traffic) and PG0 (LAN traffic) was set to 3:2.

There was one 10GbE link between the CE6850 switch under test and the receiving port of the Spirent TestCenter. Engineers sent 8Gbps of IPC traffic with 802.1p priority 7, 4Gbps of FCoE traffic with priority 3 and 2Gbps of regular Ethernet traffic with priority 0, using two 10GbE TestCenter ports, to oversubscribe the 10GbE link at the receiving end.
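The expected egress split on the congested 10GbE link follows directly from the scheduling rules above. The arithmetic below is a sketch of that expectation, not output from the test tools:

    LINK_GBPS = 10.0
    PQ_DEMAND = 8.0   # PG15 (priorities 6-7, IPC): strict priority, served first
    DRR_WEIGHTS = {"PG1_fcoe": 3, "PG0_lan": 2}   # DRR splits what PQ leaves over

    leftover = LINK_GBPS - PQ_DEMAND              # 2 Gbps remain for PG0/PG1
    total_w = sum(DRR_WEIGHTS.values())
    shares = {pg: leftover * w / total_w for pg, w in DRR_WEIGHTS.items()}
    print(shares)  # {'PG1_fcoe': 1.2, 'PG0_lan': 0.8} -- the measured rates below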

Tolly engineers then verified that the receiving end received all 8Gbps of the IPC traffic, which was in PG15 with unrestricted bandwidth; 1.2Gbps of FCoE traffic, which was in PG1; and 0.8Gbps of regular Ethernet traffic, which was in PG0. The receiving rates of 1.2Gbps FCoE traffic and 0.8Gbps regular Ethernet traffic matched the configured 3:2 weight ratio for PG1 and PG0.

Tolly engineers also verified that the sending rate of the FCoE traffic from TestCenter's sending ports was reduced to 1.2Gbps in total (0.6Gbps from each port) because PFC was enabled; thus the FCoE traffic had zero frame loss. The sending rate of the regular Ethernet traffic with priority 0 remained 2Gbps, and that traffic suffered frame loss because priority 0 was not enabled for PFC.

Network Policy Migration

Huawei provides two tools to help migrate network policy (ACL and QoS rules) along with virtual machines' live migration in a VMware vSphere environment.

Network Policy Migration - nCenter

The first tool is the nCenter component of Huawei's eSight network management application. Engineers first configured nCenter to manage the CE12800 and the CE6850 switch under test. In nCenter, engineers then configured the IP address of VMware vCenter 5.0.0_623373, which managed two VMware ESXi 5.0.0_623860 hosts. nCenter could thus use vCenter's APIs to interoperate with vCenter and migrate network policies on the Huawei CloudEngine switches along with the virtual machines. See Figure 3 for the test bed.

Figure 3: Huawei CloudEngine Switches Network Policy Migration Test Bed A. Source: Tolly, May 2014

One ACL policy (denying a destination IP, as an outbound policy) was assigned to the VM group that contained one VM on the first ESXi host. Engineers live migrated the VM to another ESXi host and used ping to check connectivity.

The VM was accessible on the network the whole time for traffic not matching the ACL (deny) policy. Tolly engineers verified that the ACL policy was migrated to, and enforced on, the proper switch and port along with the VM's migration.
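Conceptually, this pairs a vCenter migration event with a policy move between switch ports. The sketch below is purely illustrative pseudologic under that reading; the helper functions are hypothetical, and nCenter's internals were not examined in the test:

    def on_vm_migrated(event, policy_db):
        """React to a vCenter 'VM migrated' event by moving the VM group's
        ACL/QoS policy to the switch port now facing the VM's new host."""
        policy = policy_db.lookup(event.vm_group)          # hypothetical store
        old_sw, old_port = locate_port(event.source_host)  # hypothetical helpers
        new_sw, new_port = locate_port(event.dest_host)
        remove_policy(old_sw, old_port, policy)   # retract at the old location
        apply_policy(new_sw, new_port, policy)    # enforce at the new location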

Network Policy Migration - VSI Manager with VEPA

The method above can control ACL and QoS policies between different hosts, but it cannot control traffic between VMs on the same host and in the same VLAN, because that traffic only passes through the vSwitch on the host without reaching the physical Huawei CE6850 switch.

The Virtual Edge Port Aggregator (VEPA) standard was developed to direct all network traffic of every virtual machine to the physical switch, so that the ACL and QoS policies on the physical switch can control all virtual machine traffic. The built-in vSwitch in a VMware ESXi host does not support VEPA natively, so engineers installed the IBM 5000V distributed virtual switch on the VMware ESXi hosts and enabled VEPA on the 5000V. Engineers then configured Huawei VSI Manager to work with the IBM 5000V, VMware vCenter and the CE6850 switch under test. See Figure 4 for the test bed.

Figure 4: Huawei CloudEngine Switches Network Policy Migration Test Bed B. "5000V" refers to the IBM 5000V virtual distributed switch. Source: Tolly, May 2014

One ACL policy (denying a destination IP, as an outbound policy) was assigned to the VM group that contained one VM on ESXi Host B. Engineers live migrated the VM from ESXi Host B to ESXi Host C and used ping to check connectivity.

Tolly engineers verified that the ACL policy was migrated to, and enforced on, the proper port of the CE6850 switch along with the VM's migration.

Overlay Network Gateway - VXLAN

As shown in Figure 5, one Huawei CE7850 switch and one Huawei CE6850HI switch (CE7850-2 and CE6850-2 in the test bed) acted as the VXLAN Tunnel End Points (VTEPs). The CE7850 or CE6850HI switch at the top acted as the VXLAN overlay network gateway.

Engineers first verified the Layer 2 and Layer 3 connectivity within the VXLAN network. When VTEP-1 and VTEP-2 were configured with the same network segment header (VNI) and the two Spirent TestCenter ports were in the same subnet, the TestCenter ports were able to communicate with each other over the VXLAN network. When VTEP-1 and VTEP-2 were configured with different VNIs (two different VXLAN network segments) and the two Spirent TestCenter ports were in different subnets, the CE7850 or CE6850 switch at the top acted as the gateway and provided VXLAN Layer 3 connectivity between VTEP-1 and VTEP-2, so the two TestCenter ports could communicate in either case.

Engineers then verified the Layer 2 and Layer 3 connectivity between the VXLAN overlay network and the traditional network. CE6850HI-1 in the test bed simulated the traditional network outside the VXLAN overlay network. The CE7850 or CE6850HI switch at the top could use its port connected to CE6850-1 as a VTEP and provide Layer 2 and Layer 3 connectivity between the VXLAN overlay network and the traditional network.

Figure 5: Huawei CloudEngine Switches VXLAN Gateway Test Bed. Source: Tolly, May 2014

OpenFlow SDN

SDN - Topology Discovery

Two Huawei CE6850 switches and one Huawei CE7850 switch were configured to connect to the Huawei Agile Controller. Wireshark was used to capture the traffic between the controller and the switches.

Tolly engineers verified that Hello, Features_Request, Features_Reply, Set_Config, Get_Config_Request, Get_Config_Reply, Multipart_Request, Multipart_Reply, Packet_In, Flow_Mod and Flow_Removed packets of the OpenFlow 1.3 protocol were all captured. The OpenFlow header of each packet carried version 0x04, which is defined as the OpenFlow 1.3 protocol.

The Huawei Agile Controller also used LLDP to discover the topology of the switches. Tolly engineers verified that the topology map in the Huawei Agile Controller showed the network topology, and topology changes, accurately.

SDN - Multiple Flow Tables with Third-Party SDN Controllers

Tolly engineers verified that the Huawei CE6850 and CE7850 could be controlled by a third-party SDN controller, the Ryu SDN Framework (http://osrg.github.io/ryu/) version 3.8. Engineers used Wireshark to capture traffic. Hello, Features_Request, Features_Reply, Set_Config, Get_Config_Request, Get_Config_Reply, Multipart_Request, Multipart_Reply, Packet_In, Flow_Mod and Flow_Removed packets of the OpenFlow 1.3 protocol were all verified between Ryu and the CloudEngine switches under test.

After traffic was passed to the CE6850 and CE7850 switches, the Ryu controller advertised multiple flow tables to the Huawei CE6850 and CE7850 switches with Flow_Mod packets. Engineers verified that the flow tables were successfully applied to the switches.

Figure 6: Huawei CloudEngine Switches SDN Test Bed. Source: Tolly, May 2014

SDN - Layer 2 and Layer 3 Line-rate Forwarding

Traditional Layer 2 and Layer 3 forwarding is based on MAC and FIB tables. When managed by the Huawei Agile Controller, the CloudEngine switches can forward traffic using the flow tables applied by the SDN controller instead of the MAC and FIB tables.

When a switch receives traffic, it passes the traffic to the controller. The controller learns the MAC and IP addresses from the traffic and uses a shortest-path algorithm to calculate the Layer 2 and Layer 3 forwarding paths over the network topology it has discovered. For Layer 2 forwarding, the controller then applies flow tables with the proper Output command to each switch, so the switches know how to forward the traffic. For Layer 3 forwarding, the controller applies flow tables with the Output command as well as Set-Field and Decrement-TTL commands, so the switches know the forwarding path and rewrite the MAC addresses and TTL value of the traffic.

Tolly engineers verified the procedure above using two Huawei CE6850 switches and one Huawei CE7850 switch with the Huawei Agile Controller. Line-rate traffic was used on 10GbE ports of the switches. In both the Layer 2 and the Layer 3 forwarding tests, there was no frame loss with 10Gbps of 128-byte frames.

SDN - Policy-based Routing

Tolly engineers verified that, alongside the shortest-path algorithm used by the controller, specific policies could be defined. In the test, engineers defined that traffic from and to certain IPs must go through one specific switch. All matched traffic then went through that switch even though the path was not the shortest one. Other traffic still followed the shortest paths.
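As an illustration of how such a policy rule can coexist with shortest-path entries, the Ryu application sketch below installs a higher-priority flow entry steering one IP pair out a chosen port. The addresses, port number and priority values are invented for the example; this is not the configuration used in the test:

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class PolicyRoute(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_ready(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Policy entry: this IP pair must exit via port 3, toward the
            # designated transit switch. Priority 200 wins over the (assumed)
            # priority-100 shortest-path entries installed elsewhere.
            match = parser.OFPMatch(eth_type=0x0800,
                                    ipv4_src="10.0.1.10", ipv4_dst="10.0.2.20")
            inst = [parser.OFPInstructionActions(
                ofp.OFPIT_APPLY_ACTIONS, [parser.OFPActionOutput(3)])]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, table_id=0,
                                          priority=200, match=match,
                                          instructions=inst))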

SDN - Dynamic Traffic Engineering

The Huawei Agile Controller can direct the switches to adjust forwarding paths dynamically according to the traffic load.

The network topology is shown in Figure 6. All hosts were simulated by Spirent TestCenter. The traffic between the two upper hosts (shown in purple and brown) had higher priority. The traffic between the two lower hosts (shown in blue and green) had lower priority.

The rate threshold for falling over to the backup path was set to 8Gbps. The traffic between the upper two hosts was bidirectional at 4Gbps. The traffic between the lower two hosts was increased gradually from 0Gbps to 10Gbps and then reduced back to 3Gbps.

Tolly engineers verified that when the traffic between the lower two hosts reached 4Gbps, bringing the total load on the shared path to the 8Gbps threshold, the traffic path changed to the backup path calculated by the controller, as shown in Figure 6. When the lower-priority traffic dropped below 4Gbps, the traffic path reverted to the shortest path.
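The failover rule observed above reduces to a simple threshold comparison on the shared link's load. A minimal sketch of that decision follows (illustrative pseudologic, not Huawei's actual algorithm):

    THRESHOLD_GBPS = 8.0

    def choose_path(high_prio_gbps, low_prio_gbps):
        """Move the lower-priority flows to the backup path once the shared
        link's total offered load reaches the configured threshold."""
        total = high_prio_gbps + low_prio_gbps
        return "backup" if total >= THRESHOLD_GBPS else "shortest"

    assert choose_path(4.0, 4.0) == "backup"    # 4 + 4 = 8Gbps: reroute
    assert choose_path(4.0, 3.0) == "shortest"  # below threshold: revert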

Table 5: Devices Under Test

Device               Product Family        Software Version
CE7850-32Q-EI        CloudEngine 7850EI    V100R003C00SPC600
CE6850-48S6Q-HI      CloudEngine 6850HI    V100R003C00SPC600
CE6850-48S4Q-EI      CloudEngine 6850EI    V100R003C00SPC600
CE6810-48S4Q-EI      CloudEngine 6810EI    V100R003C00SPC600
CE5850-48T4S2Q-HI    CloudEngine 5850HI    V100R003C00SPC600
CE5810-48T4S-EI      CloudEngine 5810EI    V100R003C00SPC600
CE5810-24T4S-EI      CloudEngine 5810EI    V100R003C00SPC600

Source: Tolly, May 2014

Test Equipment Summary

The Tolly Group gratefully acknowledges the providers of test equipment/software used in this project.

Vendor     Product                                                                Web
Ixia       XM12 Chassis; IxNetwork 7.22.9.9.9EA                                   http://www.ixiacom.com
Spirent    HWS-11U-KIT Chassis; TestCenter v3.95; TestCenter v4.20.0576.0000      http://www.spirent.com

About Tolly

The Tolly Group companies have been delivering world-class IT services for more than 25 years. Tolly is a leading global provider of third-party validation services for vendors of IT products, components and services.

You can reach the company by e-mail at sales@tolly.com, or by telephone at +1 561.391.5610.

Visit Tolly on the Internet at: http://www.tolly.com

Terms of Usage
This document is provided, free-of-charge, to help you understand whether a given product, technology or service merits additional
investigation for your particular needs. Any decision to purchase a product must be based on your own assessment of suitability
based on your needs. The document should never be used as a substitute for advice from a qualified IT or business professional. This
evaluation was focused on illustrating specific features and/or performance of the product(s) and was conducted under controlled,
laboratory conditions. Certain tests may have been tailored to reflect performance under ideal conditions; performance may vary
under real-world conditions. Users should run tests based on their own real-world scenarios to validate performance for their own
networks.
Reasonable efforts were made to ensure the accuracy of the data contained herein but errors and/or oversights can occur. The test/
audit documented herein may also rely on various test tools the accuracy of which is beyond our control. Furthermore, the
document relies on certain representations by the sponsor that are beyond our control to verify. Among these is that the software/
hardware tested is production or production track and is, or will be, available in equivalent or better form to commercial customers.
Accordingly, this document is provided "as is," and Tolly Enterprises, LLC (Tolly) gives no warranty, representation or undertaking,
whether express or implied, and accepts no legal responsibility, whether direct or indirect, for the accuracy, completeness, usefulness
or suitability of any information contained herein. By reviewing this document, you agree that your use of any information contained
herein is at your own risk, and you accept all risks and responsibility for losses, damages, costs and other consequences resulting
directly or indirectly from any information or material available on it. Tolly is not responsible for, and you agree to hold Tolly and its
related affiliates harmless from any loss, harm, injury or damage resulting from or arising out of your use of or reliance on any of the
information provided herein.
Tolly makes no claim as to whether any product or company described herein is suitable for investment. You should obtain your own
independent professional advice, whether legal, accounting or otherwise, before proceeding with any investment or project related
to any information, products or companies described herein. When foreign translations exist, the English document is considered
authoritative. To assure accuracy, only use documents downloaded directly from Tolly.com.  No part of any document may be
reproduced, in whole or in part, without the specific written permission of Tolly. All trademarks used in the document are owned by
their respective owners. You agree not to use any trademark in or as the whole or part of your own trademarks in connection with
any activities, products or services which are not ours, or in a manner which may be confusing, misleading or deceptive or in a
manner that disparages us or our information, projects or developments.


© 2014 Tolly Enterprises, LLC. Tolly.com
