Link failure
STP reconvergence: network is down
Layer 3 as an alternative
Greater complexity; higher cost; VM mobility limited to rack
© 2010 Brocade Communications Systems, Inc. CONFIDENTIAL. For Internal Use Only
Imagine if
There was no requirement for STP in Layer 2 networks
All paths in the network were utilized, with traffic automatically distributed
Link failure did not result in a temporary outage, and paths were always deterministic
The network provided low-latency, lossless transmission and could carry both IP and storage traffic, without compromise
Ethernet Fabric
L3 to Agg. Layer
L2 STP
!
?
Distributed vSwitch
Imagine if
There were no physical barriers of VM migration
Ethernet Fabric
Distributed Intelligence
Network Management
Challenges Today
Core
Layer 3 BGP, EIGRP, OSPF, PIM
Aggregation/ Distribution
Layer 2/3 IS-IS, OSPF, PIM, RIP
NIC Mgmt.
HBA Mgmt.
Drives up OpEx
Imagine if
You could logically eliminate a layer of the network
You could connect 10 or 20 edge switches and manage them as one
You could scale the network without added complexity
There was a common tool to manage all components of the SAN and LAN
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Self-forming
Arbitrary topology
Network aware of all members, devices, VMs
Masterless control, no reconfiguration
VAL interaction
Logically flattens and collapses network layers
Scale the edge and manage it as if a single switch
Auto-configuration
Centralized or distributed management; end-to-end
Dynamic Services
Connectivity over Distance, Native Fibre Channel, Security Services, Layer 4-7, etc.
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
Standards-based
Extends existing Ethernet infrastructure
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
Dynamic Services
Active Path #1
Active Path #2
Establishes shortest paths through the Layer 2 network
Uninterrupted response to link failures
Backward-compatible; connects into existing infrastructures
Delivers multiple hops for all traffic types (including FCoE)
Utilizes the data-center-proven FSPF link-state protocol
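The shortest-path behavior above can be sketched with a generic link-state computation. This is a minimal Dijkstra illustration, not Brocade's FSPF implementation; the switch names, link costs, and two-spine topology are hypothetical.

```python
# Minimal sketch: equal-cost shortest-path selection, as a link-state
# protocol such as FSPF would compute it. All names/costs are hypothetical.
import heapq

def shortest_costs(graph, source):
    """Dijkstra over a dict {node: {neighbor: cost}}; returns cost to each node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Two-tier fabric: each edge switch connects to both spines at cost 1,
# so edge-to-edge traffic sees two equal-cost two-hop paths.
fabric = {
    "edge1": {"spine1": 1, "spine2": 1},
    "edge2": {"spine1": 1, "spine2": 1},
    "spine1": {"edge1": 1, "edge2": 1},
    "spine2": {"edge1": 1, "edge2": 1},
}
dist = shortest_costs(fabric, "edge1")
# A next hop is usable when going through it still yields the shortest cost;
# here both spines qualify, so both links carry traffic (no blocked ports).
equal_cost_next_hops = [
    n for n in fabric["edge1"]
    if fabric["edge1"][n] + shortest_costs(fabric, n)["edge2"] == dist["edge2"]
]
print(dist["edge2"])                  # 2
print(sorted(equal_cost_next_hops))   # ['spine1', 'spine2']
```

Unlike STP, which would block one spine link entirely, a shortest-path computation like this keeps every equal-cost path active.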
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
SAN A SAN B
LAN
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
Masterless Control
Switch or link failure does not require full fabric reconvergence
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
Port Profile
QoS, ACLs, Policies
VLAN ID
Storage Zoning
Brocade Network Advisor (BNA)
Port Profile ID
Profile Distribution
3. Server admin binds VM MAC address to Port Profile ID
4. MAC address/Port Profile ID association pulled by BNA; sent to fabric
5. Intra- and inter-host switching and profile enforcement offloaded from physical servers
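The port-profile workflow above can be sketched as a small data model: a profile bundles network policy, and binding a VM's MAC address to a profile ID lets any switch in the fabric apply the right policy wherever the VM appears. All class names, fields, and values below are hypothetical illustrations, not Brocade APIs.

```python
# Sketch of MAC-to-port-profile binding (AMPP-style). Names and fields
# are hypothetical; this only illustrates the association flow.
from dataclasses import dataclass, field

@dataclass
class PortProfile:
    profile_id: int
    vlan_id: int
    qos_class: str
    acls: list = field(default_factory=list)

class Fabric:
    def __init__(self):
        self.profiles = {}      # profile_id -> PortProfile, distributed fabric-wide
        self.mac_bindings = {}  # VM MAC -> profile_id, pushed via management

    def distribute_profile(self, profile):
        self.profiles[profile.profile_id] = profile

    def bind_mac(self, mac, profile_id):
        # Steps 3-4 above: server admin binds the MAC; the association
        # is sent into the fabric.
        self.mac_bindings[mac] = profile_id

    def policy_for(self, mac):
        """Policy any fabric switch enforces when this MAC appears on a port."""
        return self.profiles.get(self.mac_bindings.get(mac))

fabric = Fabric()
fabric.distribute_profile(PortProfile(profile_id=10, vlan_id=100, qos_class="gold"))
fabric.bind_mac("00:05:1e:aa:bb:cc", 10)
print(fabric.policy_for("00:05:1e:aa:bb:cc").vlan_id)  # 100
```

Because the policy travels with the MAC binding rather than the physical port, a migrating VM keeps its VLAN, QoS, and ACL settings on whichever switch it lands.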
MAC Bindings
Server Mgmt
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
Physical Virtual
Server
vNIC
vNIC
Virtual Switch
vNIC
vNIC
NIC
Switch
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
Will scale to greater than 1000 device ports without added management
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
VCS simplifies deployment, scalability, and management of the network
Enable VCS on each switch
VCS
VCS
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
Purpose-designed hardware
Switches with unique functionality can be added to the VCS fabric
Layer 4-7
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
Maintains fabric separation while extending VCS services to secondary site (e.g. discovery, distributed configuration, AMPP)
Site A
Site B
Ethernet Fabric
Distributed Intelligence
Logical Chassis
Dynamic Services
LAN
FC SAN
Brocade DCX
FC Storage FC Storage
VCS architecture
11/16/2010
Classic
Core
Aggregation
Self healing
2-switch at ToR
Access
VDX
VDX
Servers
1 Gbps Servers
10 Gbps Servers
Core
Classic
Core
Aggregation
Access
Servers
Core
High-density 10 Gbps LAG to VCS aggregation Logical Chassis
Aggregation router in distribution area
Build out aggregation as needed
Supports 30 racks of servers
Fabric
Core
Edge
Servers
10 Gbps Servers
Dual connectivity into the fabric for each server/storage array
Low-cost Twinax cabling in rack
Fiber optic cabling only used for connectivity from edge VCS to core
Single virtual LAG per fabric
Reduced management and maximum resiliency
Servers and Storage with 1 Gbps, 10 Gbps, and DCB Connectivity
Core
Connect Ethernet fabrics into the Fibre Channel SAN so new servers have access to existing storage
*** Enabled through a future VDX switch with native FC ports
LAG
Edge
BROCADE SOLUTIONS
MLX
2011
DCX
ADX
LAN
VCS
VCS
SAN
EXT
iSCSI
NAS
iSCSI
NAS
FCoE
FC
Environmental Flexibility
10 Gb and 1 Gb supported on every port
Direct-attached copper, active optical, and SFP optical connectivity options
Less than 17 in. switch depth and reversible front-to-back airflow
Ethernet Fabric
vLAG (similar to MLX MCT); sometimes referred to as internal VCS LAGs
AMPP: hypervisor-agnostic; VMware and Hyper-V tested (but should be no issue with others)
Multi-hop internal FCoE
Callisto-F with native FC ports on the roadmap
2010 Brocade Communications Systems, Inc. Company Proprietary Information 11/16/2010
ISL Trunking
No license
Brocade ISL Trunking provides high link utilization and ease of use
Frame-level, hardware-based trunking
Frames are evenly distributed across links in the trunk
Built into the Brocade fabric switching ASIC
ISL trunks form automatically
Ports must belong to the same port group in the switch
Once both switches are in VCS mode, multiple ISLs automatically form a trunk
No configuration necessary
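The even frame-level distribution described above can be contrasted with per-flow hashing in a small sketch. The round-robin scheme below is purely illustrative; the slide does not specify the ASIC's actual distribution mechanism beyond "evenly distributed," and all names here are hypothetical.

```python
# Illustrative frame-level trunk distribution: frames are spread evenly
# across trunk member links, so even a single large flow uses every link
# (unlike per-flow hashing, which pins each flow to one link).
from collections import Counter
from itertools import cycle

def distribute_frames(frames, trunk_links):
    """Assign frames to links round-robin, approximating even utilization."""
    assignment = {}
    links = cycle(trunk_links)
    for frame in frames:
        assignment[frame] = next(links)
    return assignment

frames = [f"frame{i}" for i in range(8)]
assignment = distribute_frames(frames, ["link0", "link1", "link2", "link3"])
load = Counter(assignment.values())
print(dict(load))  # each of the 4 links carries 2 frames
```

The practical consequence is the one the slide claims: utilization stays balanced regardless of the traffic mix, with no operator configuration.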
iSCSI DCB
Advantages
Provide deterministic delivery of iSCSI traffic, maximize iSCSI throughput, and minimize TCP retransmissions by eliminating congestive packet loss
Use DCB Ethernet enhancements (PFC, ETS)
Use DCBX to distribute DCB configuration to iSCSI devices
PFC and ETS use the existing iSCSI priority
New TLV required: switch advertises the priority to be used for iSCSI; the priority must be PFC-enabled
Advertisement of configuration only: the switch will not verify or enforce iSCSI device compliance
Requires that the device supports the new TLV
DCBX 1.01-compliant TLV: Application Protocol ID = 3260; priority map indicates the iSCSI priority
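The advertisement above pairs the well-known iSCSI TCP port (3260) with a priority bitmap. The sketch below packs one such application entry; the byte layout is a simplified illustration of the idea, not the exact DCBX 1.01 TLV wire format.

```python
# Simplified sketch of an iSCSI application advertisement entry:
# (protocol ID 3260, one-hot 802.1p priority bitmap). Not the exact
# DCBX 1.01 wire format -- illustration only.
import struct

def iscsi_app_entry(priority):
    """Pack (protocol ID, priority bitmap) for one application entry."""
    if not 0 <= priority <= 7:
        raise ValueError("802.1p priority must be 0-7")
    protocol_id = 3260            # well-known iSCSI TCP port
    priority_map = 1 << priority  # one bit per priority; bit N = priority N
    return struct.pack("!HB", protocol_id, priority_map)

entry = iscsi_app_entry(priority=4)
print(entry.hex())  # '0cbc10' -> 0x0cbc == 3260, bitmap 0x10 == priority 4
```

Since the slide notes this is advertisement only, a device that ignores the TLV simply keeps its own configuration; the switch does not enforce compliance.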
Licenses
VDX 6720-24 (min. 16 ports): VCS 2-node license (default), VCS multi-node license, FCoE license, 8-port POD (Ports on Demand) license
VDX 6720-60 (min. 40 ports): VCS 2-node license (default), VCS multi-node license, FCoE license, 10-port POD (Ports on Demand) license
Sentinel implementation, allowing XML files to load licenses; temporal license support in the future
Brocade MLXe
Ideal for...
High performance computing Dense data centers Very large-enterprise core
Brocade MLXe
MLXe Layout
[Diagram: MLXe-4, MLXe-8, and MLXe-16 chassis layouts, each showing 240G slots, management (Mgmt) modules, high-speed fabric (HSF) modules, and power supplies (PS)]
[Diagram: Multi-Chassis Trunking (MCT) pair joined by 100GbE inter-chassis links (ICL), with 10GbE links to the access layer, delivering over 30 Tbps]
All links active (vs. active/passive) and forwarding Layer 2/3 traffic
VRRP-E Backup Active/Active
Highest resiliency
High resiliency: < 200 ms link or node failover
Over 30 Tbps switching capacity in Multi-Chassis for investment protection
32x100G wire-speed ports for demanding networks
Core
ICL
MCT
LAG
LAG
Simplified architecture and scalable multipath Layer 2 domain
No Spanning Tree Protocol (STP) within Brocade VCS cloud or core
VM-aware infrastructure
Access
Brocade VCS
Investment protection
Servers
Future-proof and convergence-ready DCB/FCoE architecture
Pay-as-you-grow model for server access
Standby Active
MCT 2
LAG
Fewer network elements to manage
No STP from server to core
Single-pane NMS
Core
Deterministic sub-200 ms link and node failover from server to core
Collapsed access/aggregation for lower device failure and higher MTBF
48-T-A Blade
48-T-A Blade
LAG
LAG
Servers
Investment protection
Standby Active
1/10/100 GbE scalable wire-speed connectivity options
2 million MAC entries for large VM environments
Deterministic sub-200 ms link and node failover from server to core
Hitless failover within the FCX stack
MCT 1
Access
LAG
Investment protection
Standby Active
Servers
1/10/100 GbE scalable wire-speed connectivity options
Compatible with third-party devices
Pay-as-you-grow model for server access
Core
MCT 2
10 GbE
10 GbE
Access
LAG
10 GbE
10 GbE
Investment protection
1/10/100 GbE scalable wire-speed connectivity options
Dual-speed 1/10 GbE with Brocade TurboIron
Server I/O consolidation: FC and 10 GbE DCB/FCoE with Brocade 8000
Servers
Winner
Investment Protection
Highly programmable architecture that scales as you grow
Flexible migration paths to higher-density configurations without forklift upgrades
Simplified management; virtualization-ready
Green
Lowest energy cost per Gbps
Energy-efficient front-to-back airflow on highest-density models
Brocade MLXe
Lowest TCO
Low CapEx and OpEx
Broad range of chassis sizes for deployment simplicity
Proven reliability track record lowers risk
Highlights
48-port 1 GbE interface module
Eight MRJ21 connectors
Six 1 GbE links per connector
Key differentiators
Industry-leading 1 GbE capacity in a single router
Lowest power draw per GbE port
All the advanced capabilities of the NetIron MLX modules
Advanced data center virtualization with multi-VRFs, VLANs, and MPLS/VPLS
Advanced load balancing and resiliency at the data center access layer
8x10GE-M Module
Market Segments Benefits Platforms
All
High-density 10G module for large-scale network build-out; reduced CapEx and OpEx
MLX
Product Positioning
High-density wire-speed 10GbE modules
Leapfrog Cisco and Juniper to regain industry 10G leadership
256 wire-speed 10GbE ports in a single MLX-32 router
Hardware
8x10G-M; new hSFM required
Competitive Differentiation:
Industry-leading 10G wire-speed port density (256)
Industry-leading power efficiency: < 31 W per 10G port
Converged Enhanced Ethernet (CEE) and H-QoS capable
Extremely power-efficient modules, consuming ~42% less power per 10GbE port than the existing 4x10G
Campus Access
Greatest flexibility
Dual 10/1 GbE ToR and aggregation
10 GbE HPCC and iSCSI SAN/NAS
Blade server aggregation
Collapsed midmarket aggregation and core
24 1 GbE/10 GbE ports for seamless migration
Four 10/100/1000 Mbps copper ports
Redundant, removable, and load-sharing AC power supplies
Hot-swappable triple-fan assembly
Operational efficiency
Best-in-class power efficiency
Rack-space savings
Brocade Assurance Limited Lifetime Warranty
Optimum flexibility
Fixed and chassis configurations with interchangeable modules
Advanced functionality
Content switching and rewrite
Transparent cache switching
Hardware SSL acceleration
TCP/HTTP multiplexing
Multi-site redundancy using GSLB, FWLB
VMware application provisioning
[Diagram: ServerIron ADX bridging virtual and physical infrastructure: vCenter supplies VM metrics/management; the ADX directs application traffic from custom apps across network, VM, and application resources]