CONTENTS
Overview
MCT Topologies
MCT Components
Traffic Flow
Single Link Aggregation Group Entity in MCT
MAC Database Update (MDUP) over Cluster Control Protocol
Layer 2 Protocol Support in MCT
    xSTP
    MRP
    Layer 2 xSTP BPDU Tunneling
    Keep-alive VLANs
VRRP/VRRP-E Implementation on MCT
MCT Topology A Implementation: Single-Level MCT on Brocade NetIron XMR/MLX and CER/CES
MCT Topology B Implementation: Multi-tier MCT on the Brocade NetIron XMR/MLX and CER/CES
MCT Topology C Implementation: Brocade NetIron XMR/MLX and CER/CES MCT Integration in a Layer 2 MRP Metro Ring
Conclusion
About Brocade
OVERVIEW
Multi-Chassis Trunking (MCT) is a trunk that initiates at a single MCT-unaware server or switch and
terminates at two Brocade MCT-aware switches. MCT allows links that are physically connected to two
Brocade MCT-aware switches to appear to a downstream device as coming from a single device, as part of a
single link aggregation trunk interface. Multi-Chassis Trunking is available on the Brocade MLX Series and
Brocade NetIron XMR, CER, and CES devices. At the time of writing this paper, two peers can be configured
as an MCT system, and the two peers can be the same device type or a mix of any two of the platforms
listed above.
In a data center network environment, Link Aggregation Group (LAG) trunks are commonly deployed to provide
link-level redundancy and increase the link capacity between network devices. However, LAG trunks do not
provide switch-level redundancy: if the switch to which the LAG trunk is attached fails, the entire LAG trunk
loses network connectivity. With MCT, member links of the LAG are connected to two MCT-aware switches,
which are directly connected through an Inter-Chassis Link (ICL) that carries data flow and control messages
between them. In an MCT deployment, all links are active and traffic can be load-shared using a hashing
algorithm. If one MCT switch fails, a data path remains through the other switch, with traffic convergence
times in the millisecond range, which dramatically increases network resilience and performance.
MCT TOPOLOGIES
Brocade NetIron MCT topologies include the following:
Single-level MCT on the Brocade NetIron XMR/MLX and CER/CES (topology A). This topology
comprises access switches dual-homed to the Brocade NetIron XMR/MLX or CER/CES with a
link aggregation trunk interface using Gigabit Ethernet (GbE) or 10 GbE links. This topology can also
consist of a link aggregation trunk interface with each endpoint host connected with one or more
links to each XMR/MLX or CER/CES.
[Figure: Topology A — Brocade NetIron CER/CES access switches dual-homed to a pair of Brocade XMR/MLX MCT peers connected by an ICL, with a keep-alive VLAN between the peers.]
Multi-tier MCT on the Brocade NetIron XMR/MLX and CER/CES (topology B). This topology comprises
a pair of access switches (typically Brocade NetIron CER/CES) in MCT mode, with a unique LAG trunk
interface configured between the access MCT switches and a pair of aggregation/core layer switches
that are also in MCT mode (typically Brocade NetIron XMR/MLX). This design is often called double-sided
MCT.
[Figure: Topology B — a Brocade NetIron CER/CES MCT pair at the access layer connected by a LAG to a Brocade XMR/MLX MCT pair at the aggregation/core layer, each pair joined by its own ICL, with a keep-alive VLAN and an uplink to the Layer 3 network.]
Brocade NetIron XMR/MLX and CER/CES MCT integration in a Layer 2 MRP metro ring (topology C).
This topology comprises pairs of Brocade NetIron XMR/MLX and CER/CES in MCT mode within a
Layer 2 Metro Ring Protocol (MRP) metro ring topology. The ICL between the pair of MCT switches
is part of the MRP ring and is designed to always be in non-blocking mode. This topology provides
one more layer of aggregation in the MRP ring topology and an active/active path to the dual-homed
servers.
[Figure: Topology C — Brocade MLX MCT pairs in an MRP metro ring, with servers dual-homed by LAGs to MCT edge ports, an MCT client, and a routed Layer 3 network.]
MCT COMPONENTS
To properly understand MCT, consider Figure 4, which shows an example of an MCT deployment and its
functions and features.
[Figure 4: MCT components — cluster ABC: two MCT peer switches joined by an ICL, with CEPs, CCEPs, and a LAG to end stations.]
MCT peer switches. A pair of switches connected as peers through the ICL. The LAG interface is spread
across the two MCT peer switches and acts as a single logical endpoint to the MCT client.
MCT client. The MCT client is the device that connects with MCT peer switches through an IEEE
802.3ad link. It can be a switch or an endpoint server host in the single-level MCT topology or another
pair of MCT switches in a multi-tier MCT topology.
MCT Inter-Chassis Link (ICL). A single-port or multi-port GbE or 10 GbE interface between the two MCT
peer switches. This link is typically a standard IEEE 802.3ad link aggregation interface. ICL ports
should not be untagged members of any VLAN. The ICL is a tagged Layer 2 link that carries
packets for multiple VLANs. MCT VLANs are the VLANs on which MCT clients are operating. On the
Brocade NetIron XMR/MLX, non-MCT VLANs can coexist with MCT VLANs on the ICL. However, on the
Brocade NetIron CES/CER, only MCT VLANs are carried over the ICL.
NOTE: For MCT VLANs, MAC learning is disabled on ICL ports, while MAC learning is enabled on ICL
ports for non-MCT VLANs.
MCT Cluster Client Edge Port (CCEP). A physical port on one of the MCT peer switches that is a
member of the LAG interface to the MCT client. To have a running MCT instance, at least one link
aggregation interface is needed, with a member port on each peer switch.
MCT Cluster Edge Port (CEP). A port on MCT peer switches that is neither a Cluster Client Edge Port nor
an ICL port.
MCT Cluster Communication Protocol (CCP). A Brocade proprietary protocol that provides reliable,
point-to-point transport to synchronize information between the peers. CCP comprises two main
components: CCP peer management and CCP client management. CCP peer management handles
establishing and maintaining the TCP transport session between the peers, while CCP client management
provides event-based, reliable packet transport to CCP peers.
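As an illustration of how the ICL is typically built, the following sketch creates a two-port static LAG and tags its primary port into the CCP session VLAN. The LAG name, port numbers, and VLAN ID are placeholders, and the exact syntax may vary by NetIron software release:

NetIron(config)#lag "ICL" static id 1
NetIron(config-lag-ICL)#ports ethernet 1/1 ethernet 1/2
NetIron(config-lag-ICL)#primary-port 1/1
NetIron(config-lag-ICL)#deploy
NetIron(config)#vlan 4090 name Session-VLAN
NetIron(config-vlan-4090)#tagged ethernet 1/1

Because the ICL must be a tagged Layer 2 link, the ICL ports are tagged (never untagged) members of the session VLAN and of all MCT VLANs.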
TRAFFIC FLOW
MCT configuration is optimized to ensure that traffic through an MCT-capable system is symmetric.
In Figure 5, for example, traffic from the server directed to the core, or to a server attached to another
access switch, reaches a Brocade MLX (Agg 1 on the left), and the receiving Brocade MLX routes it directly to
the core or switches it directly to the destination access switch without unnecessarily passing it to the peer
Brocade MLX. Similarly, traffic reaching the Brocade MLX (Agg 1 on the right) from the core is forwarded
toward the access switch without traversing the MCT peer Brocade MLX switch. This is achieved
regardless of which Brocade MLX aggregation device is the primary Virtual Router Redundancy
Protocol (VRRP) device for a given VLAN.
[Figure 5: Symmetric traffic flow — Layer 3 traffic (left) and Layer 2 traffic (right) are each forwarded by the receiving aggregation switch (Agg 1) without crossing the ICL to Agg 2.]
SINGLE LINK AGGREGATION GROUP ENTITY IN MCT
The cluster ID is user configurable on each MCT peer and must be unique across the MCT system.
The <cluster-name> parameter specifies the cluster name, with a limit of 64 characters,
and the <cluster-id> parameter specifies the cluster ID (1-65535).
The client bridge ID is also user configurable on each MCT peer and must be unique for each client device
(switch or server).
The <id> parameter specifies the remote bridge ID; possible values are 1-35535.
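Putting the two parameters together, a minimal cluster and client definition might look like the following sketch. The cluster name ABC, the IDs, the interface numbers, and the peer address are illustrative placeholders only:

NetIron(config)#cluster ABC 1
NetIron(config-cluster-ABC)#rbridge-id 10
NetIron(config-cluster-ABC)#session-vlan 4090
NetIron(config-cluster-ABC)#icl ICL ethernet 1/1
NetIron(config-cluster-ABC)#peer 10.1.1.2 rbridge-id 20 icl ICL
NetIron(config-cluster-ABC)#deploy
NetIron(config-cluster-ABC)#client access1
NetIron(config-cluster-ABC-client-access1)#rbridge-id 100
NetIron(config-cluster-ABC-client-access1)#client-interface ethernet 1/3
NetIron(config-cluster-ABC-client-access1)#deploy

The same cluster name and ID, and the same client bridge ID for a given client, must be configured on both MCT peers.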
MAC DATABASE UPDATE (MDUP) OVER CLUSTER CONTROL PROTOCOL
[Figure: MAC Database (MDB) and Forwarding Database (FDB) synchronization between switch A and switch B over the CCP.]
The following MDB resolution algorithm is used on all the MDBs in a given switch to identify which
MAC should be installed in the FDB. The algorithm works as follows:
1. MACs learned locally are given the highest priority, or a cost of 0 (zero), so that they are always selected
as the best MAC.
2. Each MAC is advertised with a cost; low-cost MACs are given preference over high-cost MACs.
3. If a MAC is moved from an MCT MAC to a regular MAC, a MAC move message is sent to the peer, and the peer
should also move the MAC from CCEP ports to ICL links, adjusting the MDBs.
4. If the cost of a MAC is the same, the MAC learned from the lower RBridge ID wins and is installed in the FDB.
Cluster Local MAC (CL). MACs that are learned locally on CEP ports, on VLANs that belong to a cluster
VLAN range. The MACs are synchronized to the cluster peer and are subject to aging.
Cluster Remote MAC (CR). MACs that are learned via MDUP messages from the peer (CL on the peer).
The MACs are always programmed on the ICL port and they do not age. They are deleted only when
they are deleted from the peer. A remote MDB is created for these MACs with a cost of 1 (one).
Cluster Client Local MAC (CCL). MACs that are learned on CCEP ports, on VLANs that belong to a cluster
VLAN range. The MACs are synchronized to the cluster peer and are subject to aging. A local MDB is
created for these MACs with a cost of 0 (zero).
Cluster Client Remote MAC (CCR). MACs that are learned via MDUP messages from the peer (CCL on
the peer). The MACs are always programmed on the corresponding CCEP port and they do not age.
They are deleted only when they are deleted from the peer. A remote MDB is created for the MACs with
a cost of 1 (one).
LAYER 2 PROTOCOL SUPPORT IN MCT
xSTP
Only one of the MCT peers sends BPDUs toward the MCT client: the peer that is the
designated bridge on the ICL. Three new STP states are added in the MCT implementation:
The BLK_BY_ICL state indicates that the superior BPDUs were received on this interface, which could
have led to blocking of the ICL interface, so the ICL port guard mechanism has been triggered on this
port.
The FWD_BY_MCT state indicates that the MCT peer has set the CCEP state to forwarding.
The BLK_BY_MCT state indicates that the MCT peer has set the CCEP state to blocking.
MRP
Metro Ring Protocol (MRP) is a Brocade proprietary protocol that provides a scalable, loop-free
Layer 2 ring topology, typically for Metropolitan Area Networks (MANs), with fast reconvergence compared to
spanning tree protocols. The pair of MCT switches can act as a single logical node in the MRP topology,
the only restriction being that the ICL interface cannot be configured as an MRP secondary interface,
since the ICL interface cannot be in the blocking state. MRP should not be enabled on an MCT CCEP port,
and vice versa. MCT-MRP integration provides a solution with active/active dual homing to the MRP
ring, high availability, and fast recovery.
LAYER 2 xSTP BPDU TUNNELING
To disable xSTP BPDU tunneling globally, enter a command such as the following:
NetIron(config)#no cluster-l2protocol-forward
To disable xSTP BPDU tunneling on an interface, enter a command such as the following:
NetIron(config-if-e1000-1/2)#cluster-l2protocol-forward disable
[Figure: With xSTP BPDU tunneling, the Brocade MLX MCT switch pair appears to the client as a single MCT logical switch at Layer 2, reached over a standard IEEE 802.3ad link aggregation.]
Keep-alive VLANs
Using a LAG trunk interface for the ICL between the MCT peer switches is a best practice to provide
link redundancy. In addition, an optional keep-alive VLAN can be configured to carry
connectivity check messages when the ICL link fails. Only one VLAN can be configured as the keep-
alive VLAN. MCT operates in client isolation loose mode by default, which means that in the
event that the CCP fails because the ICL link fails:
If a keep-alive VLAN is configured, the MCT peers perform master/slave negotiation. After the negotiation,
the client ports are active and forward traffic only on the master MCT switch.
If no keep-alive VLAN is configured, the client ports on both MCT peer switches remain active and
forward traffic independently.
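A keep-alive VLAN is defined like any other VLAN and then referenced under the cluster configuration; it should run over a physical path separate from the ICL so that it survives an ICL failure. The VLAN and port numbers in this sketch are placeholders:

NetIron(config)#vlan 4091 name Keep-Alive
NetIron(config-vlan-4091)#tagged ethernet 2/1
NetIron(config)#cluster TOR 1
NetIron(config-cluster-TOR)#keep-alive-vlan 4091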
The MCT can also operate in client isolation strict mode. If the CCP fails, the client interfaces on both
MCT peer switches are administratively shut down. In this mode, the client is completely isolated from
the network when the CCP is not operational. The same isolation mode should be configured on both
MCT switches.
NetIron(config-cluster-TOR)#client-isolation strict
                         Loose mode with           Loose mode without        Strict mode
                         keep-alive VLAN           keep-alive VLAN
ICL normal operation     Client ports on both      Client ports on both      Client ports on both
                         MCT peers active          MCT peers active          MCT peers active
ICL failure              Client ports active only  Client ports on both      All client ports
                         on master MCT node        MCT peers active          shut down
ICL failure in an        Client ports active only  Not recommended or        All client ports
MCT-MRP topology         on master MCT node        supported                 shut down
VRRP/VRRP-E IMPLEMENTATION ON MCT
The MCT switch that acts as the backup router needs to ensure that packets sent to a VRRP-E virtual IP
address can be Layer 2 switched to the VRRP-E master router for forwarding. The MCT switch that acts as
the master router syncs the VRRP-E MAC to the other MCT switch, which acts as the backup router. Both data
traffic and VRRP-E control traffic travel through the ICL unless the short-path forwarding feature is
enabled.
With the VRRP-E server virtualization feature, short-path forwarding, enabled, the MCT VRRP-E backup
switch can forward both Layer 2 and Layer 3 packets directly, without sending them through the ICL to
the VRRP-E master switch, which provides a VRRP-E active/active topology.
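As a sketch of how short-path forwarding is enabled on the backup peer, the command is entered under the VRRP-E instance on the client VLAN's virtual interface. The VRID, VE number, priority, and addresses below are placeholders:

NetIron(config)#router vrrp-extended
NetIron(config)#interface ve 2
NetIron(config-vif-2)#ip address 10.2.2.3/24
NetIron(config-vif-2)#ip vrrp-extended vrid 2
NetIron(config-vif-2-vrid-2)#backup priority 100
NetIron(config-vif-2-vrid-2)#ip-address 10.2.2.1
NetIron(config-vif-2-vrid-2)#short-path-forwarding
NetIron(config-vif-2-vrid-2)#activate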
MCT TOPOLOGY A IMPLEMENTATION: SINGLE-LEVEL MCT ON BROCADE NETIRON XMR/MLX AND CER/CES
[Figure: Single-level MCT implementation example — CES 1 and CES 2 connected by an ICL (ports 1/7), with a Layer 3 network above and clients below.]
Create VLANs (including the session VLAN used by the CCP) and assign ports to the VLANs. Only ICL ports
should be assigned to the session VLAN.
Create LAGs on the MCT switches; in this example, there are four LAGs on each MCT switch:
LAG 1 serves as the ICL, and LAG 2 through LAG 4 are the connections from the MCT switch to the clients
(access switches and a server host).
Configure the MCT cluster in operation mode and the MCT cluster client. One MCT cluster client matches
each access switch or host respectively. Note the following:
If the ICL or client interfaces need to be configured as a LAG interface, then only the primary port of
the LAG needs to be specified in the ICL or client configuration.
Once the cluster is deployed, only the cluster member VLANs can be modified. Other configuration
changes are not allowed.
Once the client is deployed, no configuration under the client can be changed.
Configure VRRP-E on the MCT client VLAN 2: switch MLX1 is the master and switch MLX2 is the backup.
Note that if short-path-forwarding is enabled, the backup VRRP-E switch forwards both Layer 2 and Layer 3
traffic.
Layer 3 interfaces and protocols need to be configured and enabled on the interfaces facing the Layer 3
core so that the subnets of the access layer can be advertised out. In an MCT implementation,
Brocade recommends that you redistribute the related routes into the routing protocols.
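To illustrate the LAG and primary-port rule in the notes above, the following sketch defines a client-facing LAG and references only its primary port in the cluster client configuration. All names, IDs, and port numbers are placeholders:

NetIron(config)#lag "to-access1" static id 2
NetIron(config-lag-to-access1)#ports ethernet 1/3 ethernet 1/4
NetIron(config-lag-to-access1)#primary-port 1/3
NetIron(config-lag-to-access1)#deploy
NetIron(config)#cluster MCT-A 1
NetIron(config-cluster-MCT-A)#client access1
NetIron(config-cluster-MCT-A-client-access1)#rbridge-id 101
NetIron(config-cluster-MCT-A-client-access1)#client-interface ethernet 1/3
NetIron(config-cluster-MCT-A-client-access1)#deploy

Here ethernet 1/3 is the primary port of the client LAG, so it is the only port referenced in the client configuration.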
MCT TOPOLOGY B IMPLEMENTATION: MULTI-TIER MCT ON THE BROCADE NETIRON XMR/MLX AND CER/CES
[Figure: Multi-tier MCT implementation example — MCT peers connected by ICLs (ports 1/6), with a Layer 3 network above.]
Create VLANs (including the session VLAN used by the CCP) and assign the ports to VLANs. Layer 2 VLANs
span from the Brocade NetIron CES access switch up to Brocade MLX aggregation/core switches. VE of
the MCT-VLAN 2 is configured only on the Brocade MLX aggregation/core switch.
Create LAGs on the MCT switches, both on the Brocade MLX pair and on the Brocade NetIron CES pair. The LAG
between the Brocade MLX MCT pair and the Brocade NetIron CES MCT pair appears as a single
LAG interface entity.
Configure the cluster operation mode and cluster client. The Brocade MLX pair of MCT switches is the client
of the Brocade NetIron CES pair of MCT switches, and the Brocade NetIron CES pair of MCT switches is the
client of the Brocade MLX pair of MCT switches. The Brocade NetIron CES pair of MCT switches also has
another client: the server connected to the Brocade NetIron CES MCT switches through standard IEEE
802.3ad link aggregation.
Only the aggregation/core Brocade MLX MCT switches need to be configured with VRRP/VRRP-E.
Switch MLX1 is the master and switch MLX2 is the backup. Note that if short-path-forwarding is enabled as
recommended, the backup VRRP-E switch will forward both Layer 2 and Layer 3 traffic.
Layer 3 interfaces and protocols need to be configured and enabled on the interfaces facing the Layer 3
core so that the subnets of the access layer can be advertised out. The Layer 2/Layer 3 boundary
sits on the Brocade MLX MCT switches in the aggregation/core layer. In an MCT implementation, Brocade
recommends redistributing the related routes into the routing protocols.
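In the double-sided case, each MCT pair lists the other pair's LAG as a client. A condensed sketch for one Brocade MLX peer and one Brocade NetIron CES peer might look like the following, where all names, IDs, and ports are placeholders:

! On an MLX aggregation peer: the CES pair is a client
NetIron(config)#cluster AGG 1
NetIron(config-cluster-AGG)#client ces-pair
NetIron(config-cluster-AGG-client-ces-pair)#rbridge-id 201
NetIron(config-cluster-AGG-client-ces-pair)#client-interface ethernet 1/6
NetIron(config-cluster-AGG-client-ces-pair)#deploy

! On a CES access peer: the MLX pair and the server are both clients
NetIron(config)#cluster ACC 2
NetIron(config-cluster-ACC)#client mlx-pair
NetIron(config-cluster-ACC-client-mlx-pair)#rbridge-id 301
NetIron(config-cluster-ACC-client-mlx-pair)#client-interface ethernet 1/6
NetIron(config-cluster-ACC-client-mlx-pair)#deploy
NetIron(config-cluster-ACC)#client server1
NetIron(config-cluster-ACC-client-server1)#rbridge-id 302
NetIron(config-cluster-ACC-client-server1)#client-interface ethernet 1/10
NetIron(config-cluster-ACC-client-server1)#deploy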
MCT TOPOLOGY C IMPLEMENTATION: BROCADE NETIRON XMR/MLX AND CER/CES MCT INTEGRATION IN A LAYER 2 MRP METRO RING
[Figure: MCT-MRP integration example — Brocade MLX MCT pairs in a metro ring, with servers dual-homed to the MCT switches.]
Create VLANs and enable MRP on the VLANs. If the MCT switches are configured as MRP masters, make
sure that the ICL ports on the MCT switches are not configured as secondary ports.
Configure the cluster operation mode and cluster client. One cluster client matches each access switch or
host respectively.
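A sketch of the MRP piece of this configuration: MRP is enabled per VLAN, and on an MCT peer acting as the ring master the ring interfaces are chosen so that the ICL port is never the secondary (potentially blocking) interface. The VLAN, ring, and port numbers are placeholders:

NetIron(config)#vlan 2
NetIron(config-vlan-2)#metro-ring 1
NetIron(config-vlan-2-mrp-1)#master
NetIron(config-vlan-2-mrp-1)#ring-interfaces ethernet 1/1 ethernet 2/1
NetIron(config-vlan-2-mrp-1)#enable

Here ethernet 1/1 (the ICL port in this sketch) is the primary ring interface, so the blocking secondary role falls on the regular ring port ethernet 2/1.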
CONCLUSION
Brocade MCT provides a number of important benefits for a Layer 2 network in addition to a set of
enhancements for Layer 3 interconnect specifically resulting from the Layer 2 capabilities. With MCT,
customers can achieve enhanced system availability through redundant systems, loop management
without the use of Spanning Tree Protocol, full system bandwidth high availability, rapid link-failure recovery,
and link aggregation to any IEEE 802.3ad-capable edge device.
ABOUT BROCADE
Brocade provides innovative, end-to-end network solutions that help the world's leading organizations
transition smoothly to a virtualized world where applications and information can reside anywhere. These
solutions deliver the unique capabilities for a more flexible IT infrastructure with unmatched simplicity,
non-stop networking, optimized applications, and investment protection. As a result, organizations in a wide
range of industries can achieve their most critical business objectives with greater simplicity and a faster
return on investment.
For more information about Brocade products and solutions, visit www.brocade.com.
© 2010 Brocade Communications Systems, Inc. All Rights Reserved. 09/10 GA-IG-326-00
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and
Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks,
MyBrocade, and VCS are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries.
Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning
any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes
to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes
features that may not be currently available. Contact a Brocade sales office for information on feature and product availability.
Export of technical data contained in this document may require an export license from the United States government.