
BROCADE ONE

Eldad Siminchi
Pre-Sales Manager, Israel

THE ENTIRE NETWORK IS YOUR DATA CENTER


Acquired Foundry, 2008

Brocade:
•15 years of industry leadership in performance, reliability, scalability & energy efficiency
•90% of Fortune 1000 DCs
•70% SAN market share
•324 patents WW & 328 pending
•End-to-end family of 16G FC

Foundry:
•IP and Ethernet pioneer; price/performance leader
•DC, Enterprise & SP
•15,000+ customers WW
•Switching & Routing expertise
•10 GbE/100 GbE leadership
•Consistently first to market
A look into the future…

TODAY

Virtual Cluster Switching (VCS)

Ethernet Fabric
• Resilient while maintaining topology flexibility
• Massive scalability with integrated simplicity
• High performance, multiple paths
• Inherently flat architecture for virtualization & clouds

Distributed Intelligence
• Self-forming; arbitrary topology
• Fabric is aware of all members, devices, VMs
• Masterless control, no reconfiguration

Logical Chassis
• Logically flattens and collapses network layers
• Scale the edge and manage it as if a single switch
• Auto-configuration with automation
• Centralized or distributed management; end-to-end

Dynamic Services
• Connectivity over distance, native Fibre Channel, security services, Layer 4-7 services, etc.

VCS Ethernet Fabric Details

—1st true Ethernet fabric
– Layer 2 technology
—Link speed agnostic
—Data Center Bridging (DCB)
– Lossless, deterministic
– Priority-based Flow Control (PFC)
– Enhanced Transmission Selection (ETS)
– Data Center Bridging Exchange (DCBX)
—Transparent Interconnection of Lots of Links (TRILL)
– Active multipath
– Multihop routing
– Highly available, rapid recovery
– ECMP – Equal-Cost Multi-Path (see the sketch after this slide)
—LAN/SAN Convergence Ready
– FCoE and iSCSI traffic
—Standards-based
– Extends existing Ethernet infrastructure
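To make the ECMP bullet concrete, here is a minimal sketch (not Brocade code; the hashed fields and path names are illustrative assumptions) of how a fabric member might pin each flow to one of several equal-cost paths while spreading distinct flows across all of them:

```python
import hashlib

def ecmp_next_hop(src_mac: str, dst_mac: str, src_port: int,
                  dst_port: int, paths: list[str]) -> str:
    """Pick one of several equal-cost paths for a flow.

    Hashing a flow tuple (fields chosen here are illustrative) keeps
    every frame of a flow on one path, preserving ordering, while
    different flows spread across all active links.
    """
    key = f"{src_mac}-{dst_mac}-{src_port}-{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha1(key).digest()[:4], "big")
    return paths[digest % len(paths)]

# Two flows between the same pair of hosts can land on different links:
paths = ["rbridge-2", "rbridge-3", "rbridge-4"]  # hypothetical fabric members
print(ecmp_next_hop("02:aa", "02:bb", 49152, 80, paths))
print(ecmp_next_hop("02:aa", "02:bb", 49153, 80, paths))
```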
Ethernet Fabrics
A new network architecture

Classic Hierarchical Ethernet Architecture
(Core → Aggregation → Access → servers with 10 Gbps connections)
•Classic architectures often require three tiers in the physical network
•STP disables links in the fabric to prevent loops, limiting network utilization
•Each switch has to be managed individually

Ethernet Fabric Architecture
(Core → Edge, scaling at the edge → servers with 10 Gbps connections)
•Fabric architectures flatten and seamlessly scale out the Layer 2 network at the edge
•All links in the VCS fabric are active (a back-of-the-envelope comparison follows below)
•Switches in the VCS fabric are managed as one
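A quick way to see the utilization gap: spanning tree leaves exactly N−1 links forwarding in a network of N switches, while a TRILL-based fabric keeps every link active. A small sketch with illustrative counts (not from the deck):

```python
def active_links_stp(num_switches: int, num_links: int) -> int:
    """STP blocks every link outside the spanning tree: only
    num_switches - 1 links forward, no matter how many exist."""
    return min(num_links, num_switches - 1)

def active_links_fabric(num_switches: int, num_links: int) -> int:
    """A TRILL-based fabric load-balances across all links via ECMP."""
    return num_links

# Illustrative: 10 switches meshed with 20 inter-switch links.
switches, links = 10, 20
print(f"STP:    {active_links_stp(switches, links)} of {links} links forwarding")
print(f"Fabric: {active_links_fabric(switches, links)} of {links} links forwarding")
```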
VCS Distributed Intelligence Details

• Distributed Fabric Services
– Fabric is self-forming
– Information shared across all fabric members
– Fabric is aware of all devices connected
• Masterless Control
– Switch or link failure does not require full fabric reconvergence
• Shared Port Profile information
– Automatic Migration of Port Profiles (AMPP)
– Enables seamless VM migration without compromise
• Optimized Virtual Access Layer
– VEPA frees host resources from switching and policy enforcement
VCS Distributed Intelligence Details
Automatic Migration of Port Profiles (AMPP)

AMPP allows a VM to move, with the network reconfiguring automatically. A Port Profile bundles the Port Profile ID, QoS settings, ACLs, policies, VLAN ID, storage zoning, and MAC bindings, and is managed through Brocade Network Advisor (BNA).

1. Port Profiles created and managed in the fabric; distributed to all members
2. Discovered by BNA; pushed to orchestration tools
3. Server admin binds the VM MAC address to a Port Profile ID
4. MAC address/Port Profile ID association pulled by BNA; sent to the fabric
5. Intra- and inter-host switching and profile enforcement offloaded from the physical servers

(A toy model of this flow follows.)
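As a rough illustration of the AMPP flow above (a toy model; class and field names are ours, not Brocade's), the fabric can be viewed as a distributed map from VM MAC addresses to port profiles, so policy follows the MAC wherever it appears:

```python
from dataclasses import dataclass, field

@dataclass
class PortProfile:
    """Policy bundle distributed to every fabric member (illustrative fields)."""
    profile_id: int
    vlan_id: int
    qos_class: str
    acls: list[str] = field(default_factory=list)

class VcsFabric:
    """Toy model: all members share profiles and MAC bindings."""
    def __init__(self):
        self.profiles: dict[int, PortProfile] = {}
        self.mac_bindings: dict[str, int] = {}  # VM MAC -> profile ID

    def create_profile(self, profile: PortProfile):
        self.profiles[profile.profile_id] = profile   # step 1: distributed

    def bind(self, vm_mac: str, profile_id: int):
        self.mac_bindings[vm_mac] = profile_id        # steps 3-4: via BNA

    def on_mac_seen(self, switch: str, port: int, vm_mac: str):
        """Step 5: when a migrated VM's MAC shows up on any fabric port,
        the bound profile is applied there -- no manual reconfiguration."""
        prof = self.profiles[self.mac_bindings[vm_mac]]
        print(f"{switch} port {port}: applying profile {prof.profile_id} "
              f"(VLAN {prof.vlan_id}, QoS {prof.qos_class})")

fabric = VcsFabric()
fabric.create_profile(PortProfile(7, vlan_id=100, qos_class="gold"))
fabric.bind("02:00:00:00:00:aa", 7)
fabric.on_mac_seen("rbridge-1", 12, "02:00:00:00:00:aa")  # before migration
fabric.on_mac_seen("rbridge-4", 3, "02:00:00:00:00:aa")   # after the VM moves
```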
VCS Distributed Intelligence Details
Optimized Virtual Access Layer

• Today, access to the network lives in the hypervisor's virtual switch
– Consumes valuable host resources
• Virtual switch is offloaded to the physical switch
– Eliminates the software switch; the advantages of a distributed virtual switch plus Distributed Intelligence
– Leverages Virtual Ethernet Port Aggregator (VEPA) technology
• Virtual NICs are offloaded to the physical NIC
– Leverages Virtual Ethernet Bridging (VEB) technology
• Host resources are freed up for applications
– Gives 5-20% of host resources back to applications
• VMs have direct I/O with the network
• Network simplicity; common access across the entire VCS; the network is managed in the VCS

VCS Logical Chassis Details
Single Logical Switch Behavior

—VCS fabric behaves like a transparent LAN service
– For example, BPDUs in STP environments are passed through the fabric
—Fabric protocols used within the TRILL fabric
– TRILL, DCB, fabric services, etc.
—Industry-standard protocols used to communicate outside the fabric
– LACP, 802.1x, sFlow, LLDP, SPAN, IGMP snooping, private VLANs, etc.
VCS Architecture

Award-Winning Brocade VDX 6720 Data Center Switch
Cloud networking now has an Ethernet fabric

"Brocade One architecture signals a new era in data center management. Rather than requiring customers to standardize on a particular set of servers, storage and networking gear from one vendor to lower operating costs, the VDX 6720 signals the embedding of management intelligence at Layer 2 of the network… And while most IT organizations have yet to appreciate the impact of this fundamental change to networking in the data center, this offering heralds a change in the way we think about managing data centers for years to come."

— "10 Most Important Enterprise IT Products of 2010"
http://www.ctoedge.com/content/10-most-important-enterprise-it-products-2010?slide=11
Brocade VDX 6720 Data Center Switches
— Built for the Virtualized Data Center
– Uses Brocade fabric switching ASICs
– First switches to run new Brocade Network Operating
System
– Brocade VCS fabric technology
– Automatic Migration of Port Profiles (AMPP)
— Best-In-Class Performance and Density
– 24- and 60-port models with Ports On Demand
– Non-blocking, cut-through architecture, wire-speed
– 600 ns port-to-port latency; 1.8 µs across port groups
— Environmental Flexibility
– 10 Gb and 1 Gb supported on every port
– Direct-attached copper, active optical, and SFP optical
connectivity options
– Less than 17” switch depth and reversible front-to-back
airflow
— Enables Network Convergence
– Complete FCoE support, multihop
– iSCSI DCB support
— Highly Resilient and Efficient Design
– Hot code load and activation
– Remote Lights Out Management
– Streamlined design, optimal power efficiency

Brocade VDX Product Family
Delivering Brocade VCS technology

A new family of Ethernet Fabric switches

In customer production already:
• VDX 6720-24 – 24 ports, 1/10 Gbps; 600 ns latency; fastest Ethernet switch available
• VDX 6720-60 – 60 ports, 1/10 Gbps; high density; wire-speed

2011-2012:
• 24- and 60-port 1/10 Gbps models with FC ports for connectivity to the SAN; gives servers access to storage in a SAN
• 48-port 1 Gbps model; high-density 1 Gbps connectivity to the VCS fabric
• VCS technology in blade server chassis
• Wire-speed chassis with VCS capabilities; manage all switches as one; allows Ethernet fabrics to scale further
VCS Dynamic Services Details
Data Center to Data Center Connectivity

• Dynamic Service to connect data centers
– Extend the Layer 2 domain over distance
– Maintains fabric separation while extending VCS services to the secondary site (e.g. discovery, distributed configuration, AMPP)
• VCS Fabric Extension capabilities
– Delivers high-performance accelerated connectivity with full line-rate compression
– Secures data in flight with full line-rate encryption
– Load-balances throughput and provides full failover across multiple connections

(Site A VCS → Fabric Extension Service → public routed network → Fabric Extension Service → Site B VCS; encryption, compression, multicasting)
VCS Dynamic Services Details
Native Fibre Channel Connectivity

• Provide the VCS Ethernet fabric with native connectivity to FC storage
– Connect FC storage locally
– Leverage new or existing Fibre Channel SAN resources
• VCS Native Fibre Channel capabilities
– Adds Brocade's Fibre Channel functionality into the VCS fabric
– 8 Gbps and 16 Gbps FC, frame-level ISL Trunking, Virtual Channels with QoS, etc.

(LAN → VCS → native Fibre Channel → Brocade DCX → FC SAN, with FC storage on both the VCS and the SAN)
VCS Use Case #1
1/10 Gbps Top-of-Rack Access – Architecture

• Preserves existing architecture
– Leverages existing core/aggregation (MLX w/ MCT, Cisco w/ vPC/VSS, or other)
– Co-exists with existing ToR switches
– Supports 1 and 10 Gbps server connectivity
• Active-active network
– Load splits across connections
– No single point of failure
• Self-healing
– Fast link reconvergence, < 250 milliseconds
• High-density access with flexible subscription ratios
– Supports up to 36 servers per rack with 4:1 subscription

(WAN → Core → Aggregation → Access: existing 1 Gbps access switches and 2-switch VCS at ToR, uplinked via LAG → 1 Gbps, 1/10 Gbps, and 10 Gbps servers)

VCS Use Case #1
1/10 Gbps Top-of-Rack Access – Topology

• Active/active server connections
– Servers only see one ToR switch
– Half the server connections
• Reduced switch management
– Half the number of logical switches to manage
• Unified uplinks
– One LAG per VCS

Classic 10 GbE ToR vs. 2-switch VCS ToR, up to 36 servers per rack (diagram figures: 4 links, 20 ports, 72 ports; 4:1 10 Gbps subscription ratio to aggregation, sanity-checked below):

                             Classic ToR        VCS ToR
LAG utilization              Active/Passive     Active/Active
Bandwidth per server         20 Gbps (A/P)      20 Gbps (A/A)
Connections per server       4                  2
Logical switches per rack    2                  1
LAGs per rack                2                  1
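The 4:1 figure can be checked with simple arithmetic, reading the diagram as 72 server-facing 10 GbE ports (36 servers x 2) and a 20-port vLAG to aggregation; the helper below is ours, and that reading is an assumption:

```python
def oversubscription(edge_ports: int, uplink_ports: int,
                     port_gbps: float = 10.0) -> float:
    """Ratio of server-facing bandwidth to uplink bandwidth."""
    return (edge_ports * port_gbps) / (uplink_ports * port_gbps)

# Use case #1, as we read the diagram: 36 servers x 2 connections
# = 72 server-facing 10 GbE ports on the 2-switch VCS, with a
# 20-port vLAG up to aggregation.
ratio = oversubscription(edge_ports=36 * 2, uplink_ports=20)
print(f"{ratio:.1f}:1")   # -> 3.6:1, roughly the 4:1 the slide quotes
```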
VCS Use Case #1
1/10 Gbps Top-of-Rack Access – Layout

• Preserves existing network architecture
– Leverage VCS technology in stages
• 2-switch VCS in each server rack
– Managed as a single switch
– 1 Gbps and 10 Gbps connectivity
– Highly available; active/active
• High-performance connectivity to end-of-row aggregation
– One LAG to the core for simplified management and rapid failover

(Core → aggregation switches at the end of each row → 2-switch VCS at the top of each rack → servers with 1 Gbps or 10 Gbps connectivity)
VCS Use Case #2
10 Gbps Top-of-Rack Access for Blade Servers – Architecture

• Preserves existing architecture
– Leverages existing core/aggregation (MLX w/ MCT, Cisco w/ vPC/VSS, or other)
– Co-exists with existing ToR switches
• Provides low-cost, first-stage aggregation
– High-density blade servers without stress on existing aggregation
– Reduces cabling out of the rack
• Active-active network
– Load splits across connections
– No single point of failure
• Self-healing
– Fast link reconvergence, < 250 milliseconds
• High-density ToR aggregation with flexible subscription ratios
– Supports up to 4 blade chassis per rack with 2:1 subscription

(WAN → Core → Aggregation → Access: existing ToR switches and 2-switch VCS at ToR, uplinked via LAG → blade servers with 1 Gbps switches and blade servers with 10 Gbps switches/passthrough modules)

VCS Use Case #2
10 Gbps Top-of-Rack Access for Blade Servers – Topology

• 1st-stage network aggregation
– Ethernet fabric at ToR
– Aggregates 4 blade server chassis per rack (8 access switches)
– High-performance 2:1 subscription through the VCS (checked below)
• Reduced switch management
– Half the number of logical ToR switches to manage
• Unified uplinks
– One LAG per VCS
• Future: blade switches become members of the VCS fabric
– Drastic reduction in switch management

(Aggregation (MLX w/ MCT, Cisco w/ vPC/VSS, or other) → vLAG at a 4:1 10 Gbps subscription ratio through 1st-stage aggregation → 2-switch VCS per rack; diagram figures: 32 ports, 8 links, 64 ports, 8 links per blade switch → dual 10 Gbps switch modules per chassis (any vendor); up to 4 blade chassis per rack = 64 servers)
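The two ratios can again be checked by arithmetic. A sketch under our reading of the diagram (64 dual-homed servers, 8 uplinks per blade switch, a 32-port vLAG; these assignments are assumptions, not vendor data):

```python
def ratio(down_gbps: float, up_gbps: float) -> str:
    return f"{down_gbps / up_gbps:.0f}:1"

GBPS = 10.0
servers = 4 * 16            # 4 blade chassis per rack, 16 servers each = 64
server_ports = servers * 2  # dual 10 Gbps modules -> dual-homed servers

# Stage 1: blade switches into the 2-switch VCS (8 uplinks per blade switch).
blade_uplinks = 8 * 8       # 8 blade switches x 8 links = 64 ports
print("through VCS:", ratio(server_ports * GBPS, blade_uplinks * GBPS))  # 2:1

# Stage 2: VCS into the existing aggregation via a 32-port vLAG.
vlag_ports = 32
print("to aggregation:", ratio(server_ports * GBPS, vlag_ports * GBPS))  # 4:1
```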
VCS Use Case #2
10 Gbps Top-of-Rack Access for Blade Servers – Layout

• Preserves existing network architecture
– Leverage VCS technology in stages
• 2-switch VCS in each server rack
– Managed as a single switch
– 1st-stage aggregation of 10 Gbps blade switches
• High-performance connectivity to end-of-row aggregation
– One LAG to the core for simplified management and rapid failover

(Core → switches at the end of each row, 2nd-stage aggregation → 2-switch VCS at the top of each rack, 1st-stage aggregation → blade servers with 10 Gbps connectivity)
VCS Use Case #3
10 Gbps Aggregation; 1 Gbps Top-of-Rack Access – Architecture

• Low-cost, highly flexible logical chassis at the aggregation layer
– Building-block scalability
– Per-port price similar to a ToR switch
– Availability, reliability, and manageability of a chassis
– Flexible subscription ratios
• Ideal aggregator for 1 Gbps ToR switches
– Supports 1080 servers in 30 racks with 5:1 subscription, assuming 4 NICs per server
• Optimized multi-path network
– No single point of failure
– STP not necessary

(WAN → Core (MLX w/ MCT, Cisco w/ vPC/VSS, or other) → scalable VCS aggregation, uplinked via LAG → Access: existing access switches and ToR switch stacks (Brocade FCX or other) → existing 1 Gbps servers and new 1 Gbps servers)

VCS Use Case #3
10 Gbps Aggregation; 1 Gbps Top-of-Rack Access – Topology

• Scalable VCS aggregation
– Cost-effective building blocks
– 270 usable ports with 1:1 subscription through the VCS; supports 30 racks
– User-determined port count and subscription ratio
– 1:1 wire-speed for 1080 servers, assuming 2:1 subscription
• Aggregates 1 GbE access
– 3-switch stack in each server rack (ToR FCX stack, Juniper EX, or other; 144 ports)
– LAG across stack members to the VCS: 6 links (2 per FCX)
– Reduced management; no single point of failure
– Port math checked below

(Core (MLX w/ MCT, Cisco w/ vPC/VSS, or other) → LAG → VCS aggregation logical chassis; diagram figures: 90 ports, 180 ports, 270 usable ports → up to 36 servers per rack; 4 GbE connections per server)
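A quick check of the port math behind the 1080-server claim (the helper is ours; rack, server, and NIC counts come from the slides). The resulting ~2.4:1 rack-uplink ratio sits between the 2:1 and 5:1 figures quoted at different points in the design:

```python
GBPS_PER_NIC = 1.0
racks, servers_per_rack, nics_per_server = 30, 36, 4

servers = racks * servers_per_rack
print(servers)                                # 1080 servers

# Each rack's 3-switch FCX stack must terminate every NIC:
access_ports_per_rack = servers_per_rack * nics_per_server
print(access_ports_per_rack)                  # 144 -> matches the 144-port stack

# Per-rack uplink: 6 x 10 GbE links (2 per FCX) into the VCS aggregation.
downlink = access_ports_per_rack * GBPS_PER_NIC   # 144 Gbps of server bandwidth
uplink = 6 * 10.0                                 # 60 Gbps to the VCS
print(f"{downlink / uplink:.1f}:1")           # 2.4:1 at the rack uplink
```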
VCS Use Case #3
10 Gbps Aggregation; 1 Gbps Top-of-Rack Access – Layout

• 3-switch stack in each rack
– Managed as a single switch
– Redundancy throughout the network, without STP
– High-density 10 Gbps LAG to the VCS aggregation
• Logical chassis aggregation in the distribution area
– Build out aggregation as needed
– Supports 30 racks of servers
• High-performance, resilient connection to the core
– One LAG for simplified management and rapid failover

(Core → router → VCS aggregation in the distribution area → 3-switch FCX stack (or other) at the top of each rack → servers with 1 Gbps connectivity)
VCS Use Case #4
1/10 Gbps Access; Collapsed Network – Architecture

• Flatter, simpler network design
– Logical two-tier architecture
– Ethernet fabrics at the edge
• Greater Layer 2 scalability/flexibility
– Increased sphere of VM mobility
– Seamless network expansion
• Optimized multi-path network
– All paths are active
– No single point of failure
– STP not necessary

(WAN → Core (MLX w/ MCT, Cisco w/ vPC/VSS, or other) → LAG → VCS edge fabrics, with Fibre Channel connections to the SAN → 1/10 Gbps and 10 Gbps servers)

VCS Use Case #4a
1/10 Gbps Access; Collapsed Network – Topology – ToR Mesh

• Scale-out VCS edge fabric
– Self-aggregating; flattens the network
– ToR mesh topology for flexible subscription ratios
– 312 usable ports per 10-switch VCS
– Supports 144 servers in 4 racks, all with 10 Gbps connections
• Drastic reduction in management
– Each VCS managed as a single logical chassis
• Enables network convergence
– DCB and TRILL capabilities for multi-hop FCoE and enhanced iSCSI

(Core (MLX w/ MCT, Cisco w/ vPC/VSS, or other), L3 ECMP → 2 links per VCS member to the core router (20 total) → 10-switch VCS fabric, 200 usable ports; 4 links to the other switch in the rack, 9 links to adjacent switches → vLAG → up to 36 servers per rack, 5 racks per VCS; servers with 1 Gbps, 10 Gbps, and DCB connectivity)
VCS Use Case #4a
1/10 Gbps Access; Collapsed Network – Layout – ToR Mesh

• 2 VCS fabric members in each rack
– Dual connectivity into the fabric for each server/storage array
– Low-cost Twinax cabling within the rack
• 2nd-stage VCS fabric members in a middle-of-row rack
– Low-cost Laserwire cabling from the top-of-rack switches
– 1 VCS fabric per 4 racks of servers (assuming 36 servers per rack)
• Fiber-optic cabling used only for connectivity from the edge VCS to the core
• Single vLAG per fabric
– Reduced management and maximum resiliency

(Horizontal stacking using the ToR mesh architecture: 5 racks, 2 fabric members per rack per fabric → servers and storage with 1 Gbps, 10 Gbps, and DCB connectivity)
VCS Use Case #4b
1/10 Gbps Access; Collapsed Network – Topology – Clos Fabric

• Scale-out VCS edge fabric
– Self-aggregating; flattens the network
– Clos fabric topology for flexible subscription ratios
– 312 usable ports per 10-switch VCS (sanity-checked below)
– 48 ports available for FC SAN connectivity or VCS expansion
– Supports 144 servers in 4 racks, all with 10 Gbps connections
• Drastic reduction in management
– Each VCS managed as a single logical chassis
• Enables network convergence
– DCB and TRILL capabilities for multi-hop FCoE and enhanced iSCSI

(Core (MLX w/ MCT, Cisco w/ vPC/VSS, or other), L3 ECMP → 6 links per trunk, 24 total, at a 6:1 subscription ratio to the core → 10-switch fabric, 312 usable ports; fabric links of 12 ports per switch (48 ports and 36 ports in the diagram) → vLAG → up to 36 servers per rack, 4 racks per VCS; servers with 1 Gbps, 10 Gbps, and DCB connectivity)
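The 312-usable-port figure is consistent with a two-stage Clos of 60-port switches. A sketch under an assumed split that the deck does not spell out (8 edge plus 2 spine switches, 12 fabric ports per edge switch; only the uplink and reservation counts come from the slide):

```python
PORTS_PER_SWITCH = 60          # e.g. a VDX 6720-60 class switch (assumption)
edge_switches, spine_switches = 8, 2   # assumed split of the 10-switch fabric
fabric_ports_per_edge = 12     # "12 ports per switch" trunked into the spines
core_uplinks = 24              # 6 links per trunk, 24 total (from the slide)
fc_expansion_ports = 48        # reserved for FC SAN connectivity / expansion

edge_ports = edge_switches * (PORTS_PER_SWITCH - fabric_ports_per_edge)
usable = edge_ports - core_uplinks - fc_expansion_ports
print(usable)                  # 312 -> matches the slide

# 144 dual-homed servers fit comfortably:
print(144 * 2 <= usable)       # True (288 of 312 ports)

# And the quoted 6:1 ratio to the core:
print((144 * 10) / (core_uplinks * 10))   # 6.0
```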
VCS Use Case #4b
1/10 Gbps Access; Collapsed Network – Layout – Clos Fabric

• 2 fabric members in each rack
– Dual connectivity into the fabric for each server/storage array
– Low-cost Twinax cabling within the rack
• 2nd-stage fabric members in a middle-of-row rack
– Low-cost Laserwire cabling from the top-of-rack switches
– 1 VCS fabric per 4 racks of servers (assuming 36 servers per rack)
• Fiber-optic cabling used only for connectivity from the edge VCS to the core
• Single LAG per fabric
– Reduced management and maximum resiliency

(Core → 2nd stage of fabric in a middle-of-row rack; 4 racks, 2 fabric members per rack per fabric → servers and storage with 1 Gbps, 10 Gbps, and DCB connectivity)
VCS Use Case #5
1/10 Gbps Access; Network Convergence – Architecture

• Flatter, simpler network design
– Logical two-tier architecture
– VCS fabrics at the edge
• Greater Layer 2 scalability/flexibility
– Increased sphere of VM mobility
– Seamless network expansion
• Optimized multi-path network
– All paths are active
– No single point of failure
– STP not necessary
• Convergence-ready
– End-to-end enhanced Ethernet (DCB)
– Multi-hop FCoE support

(WAN → Core (MLX w/ MCT; 8x10 DCB blade) → LAG → VCS edge fabrics → 1/10 Gbps servers, 10 Gbps iSCSI storage, 10 Gbps servers, 10 Gbps FCoE storage, and 10 Gbps FCoE/lossless iSCSI storage)

VCS Use Case #5
1/10 Gbps Access; Network Convergence – Topology

• Scale-out VCS edge fabric
– Self-aggregating; flattens the network
– Clos fabric topology for flexible subscription ratios
– 312 usable ports per 10-switch VCS
– Available ports for FC SAN connectivity or VCS expansion
– Supports 144 servers in 4 racks, all with 10 Gbps connections
• Drastic reduction in management
– Each VCS managed as a single logical chassis
• Enables network convergence
– DCB and TRILL capabilities for multi-hop FCoE and enhanced iSCSI

(Core (MLX w/ MCT) → LAG; 6 links per trunk, 24 total, at a 6:1 subscription ratio in the VCS fabric → 10-switch VCS fabric, 312 usable ports; fabric links of 12 ports per switch (48 ports and 36 ports in the diagram) → vLAG → up to 36 servers per rack, 4 racks per VCS; servers with 1 Gbps, 10 Gbps, and DCB connectivity; 10 Gbps DCB FCoE/iSCSI storage)
VCS Use Case #5
1/10 Gbps Access; Network Convergence – Layout

• 2 fabric members in each rack
– Dual connectivity into the fabric for each server/storage array
– Low-cost Twinax cabling within the rack
• 2nd-stage fabric members in a middle-of-row rack
– Low-cost Laserwire cabling from the top-of-rack switches
– 1 VCS fabric per 4 racks of servers (assuming 36 servers per rack)
• Fiber-optic cabling used only for connectivity from the edge VCS to the core
• Single LAG per fabric
– Reduced management and maximum resiliency

(Core → 2nd stage of fabric in a middle-of-row rack; 4 racks, 2 fabric members per rack per fabric → servers and storage with 1 Gbps, 10 Gbps, and DCB connectivity)
VCS Use Case #6
1/10 Gbps Access; Convergence + FC SAN – Architecture

• Leverage existing resources
– Connect Ethernet fabrics into the Fibre Channel SAN; new servers gain access to existing storage
• Maximum storage flexibility
– Fibre Channel, FCoE, iSCSI, NAS
– Deploy the right storage technology without isolating it
• Optimal performance and availability
– No single point of failure
– Frame-level, hardware-based trunking between nodes

(WAN → Core (MLX w/ MCT; 8x10 DCB blade) → LAG, plus a DCB/FCoE link to the SAN core with FC and FCoE storage → VCS edge fabric with access to FC and FCoE storage → 10 Gbps servers; Tier 1 servers with 8 Gbps FC)

VCS Use Case #6
1/10 Gbps Access; Convergence + FC SAN – Topology

• VCS fabric connectivity into the Fibre Channel SAN
– High-performance Ethernet trunks from the VCS to the DCX core (6 links per trunk to the SAN)
– Allows shared storage resources to exist in the SAN
• Fibre Channel and FCoE storage
– Can be accessed by servers with Converged Network Adapters
• Future: connectivity from converged LAN aggregation to the SAN core
– MLX with DCB connects to the DCX with an FCoE blade (future network path)

(SAN A and SAN B: DCX with FCoE blades → MLX w/ MCT core → LAG → servers with 1 Gbps, 10 Gbps, and DCB connectivity; Fibre Channel and FCoE storage)
Brocade Ecosystem
Providing investment protection and best-in-class choice for highly virtualized networks

• Hypervisor – e.g. Hyper-V
• Server
• Network – Brocade One architecture
• Security
• Storage
THANK YOU
Not All FABRICS Are Created EQUAL
• Manual Configuration
• Rigid Topologies
• Inadequate Resiliency
• Chassis are not Fabrics
• Port extenders are not Fabrics