THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. © 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell. Dell, the DELL logo, and the DELL badge, PowerConnect, EqualLogic, PowerEdge and PowerVault are trademarks of Dell Inc. Broadcom is a registered trademark of Broadcom Corporation. Cisco and Cisco Nexus are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and other countries. Intel is a registered trademark of Intel Corporation in the U.S. and other countries. Microsoft, Windows, Windows Server, and Active Directory are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries.
SWRA1013
Table of Contents
1 Introduction
2 Objective
3 Conclusions
4 Reference architecture
   4.1 Reference architecture overview
   4.2 Server configuration
   4.3 Array configuration
   4.4 Switch configuration
      4.4.1 Overview of the switch configuration
      4.4.2 Global Switch Setting: Configure no-drop class and jumbo frames
      4.4.3 Array and server port configuration
      4.4.4 Link Aggregation Group configuration
      4.4.5 Switch port mapping
      4.4.6 TCP/IP configuration
Acknowledgements
This white paper was produced by the PG Storage Infrastructure and Solutions team between October 2011 and December 2011 at the Dell Labs facility in Round Rock, Texas. The team that created this white paper: Ron Bellomio, Tony Ansley, and Margaret Boeneke.
Feedback
We encourage readers of this publication to provide feedback on the quality and usefulness of this information. You can submit feedback to this email address:
SISfeedback@Dell.com
1 Introduction
The Cisco Nexus 5020 is a two-rack-unit (2RU), 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel switch built to provide 1.04 terabits per second (Tbps) of throughput with very low latency. It has 40 fixed 10 Gigabit Ethernet and FCoE Small Form-Factor Pluggable Plus (SFP+) ports. The hardware of the first 16 fixed ports supports both 10 Gigabit Ethernet and Gigabit Ethernet, providing a smooth migration path to 10 Gigabit Ethernet. Two expansion module slots can be configured to support up to 12 additional 10 Gigabit Ethernet and FCoE SFP+ ports, up to 16 Fibre Channel switch ports, or a combination of both. The switch has a serial console port and an out-of-band 10/100/1000-Mbps Ethernet management port. The switch is powered by 1+1 redundant, hot-pluggable power supplies and 4+1 redundant, hot-pluggable fan modules to provide highly reliable front-to-back cooling.
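As a quick sanity check on the figures above (assuming the 1.04 Tbps rating counts both directions of traffic on every port), the maximum port count and aggregate throughput line up as follows:

```python
# Sanity check of the Nexus 5020 capacity figures quoted above.
fixed_ports = 40       # fixed 10 GbE/FCoE SFP+ ports
expansion_ports = 12   # maximum additional 10 GbE ports via the two slots

total_ports = fixed_ports + expansion_ports  # 52
# 52 ports x 10 Gbps, counted in both directions, matches the quoted
# 1.04 Tbps aggregate throughput.
aggregate_tbps = total_ports * 10 * 2 / 1000
print(total_ports, aggregate_tbps)  # 52 1.04
```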
2 Objective
This document provides details on configuring the Cisco Nexus 5020 switch for use with Dell EqualLogic PS Series storage arrays. The goal of this exercise is not to provide a comprehensive set of possible configurations, but to illustrate one possible solution that provides excellent performance and scalability, as validated by testing in our labs.

Note: The Cisco Nexus 5020 switch is targeted at converged DCB environments, but the switch does not fully support the IEEE DCB standards for iSCSI over DCB. For this reason, this reference architecture focuses on using the Cisco Nexus 5020 switch as a non-DCB Ethernet network infrastructure.
The test objectives used while testing the Cisco Nexus 5020 switch configuration are defined below:

•  Test the ability of the switch configuration to pass iSCSI traffic as defined by realistic application workloads and server/storage configurations while meeting stringent networking performance parameters.
•  Determine the scalability behavior of the switch configuration for a standardized set of I/O workloads, and provide sizing guidance in terms of the number of storage arrays and servers that can be supported by a SAN consisting of Cisco Nexus 5020 switches.
3 Conclusions
The Cisco Nexus 5020 switch passed the switch validation objectives in a dual switch configuration with up to 12 arrays. With the Cisco Nexus 5020 switches, the SAN scaled easily to support 12 arrays and 12 hosts:

•  Sequential write performance scaled linearly to 97% of the theoretical baseline in terms of throughput as measured at the host.
•  Sequential read performance scaled linearly to 91% of the theoretical baseline in terms of throughput as measured at the host.
•  Random read/write performance, in terms of I/Os per second, scaled to 88% of the theoretical baseline on a per-server basis.
•  TCP retransmissions from the arrays, as polled periodically from array counters and SAN Headquarters (SANHQ), were low (< 0.1%) across all test configurations. This is another indicator that there are no bottlenecks or design issues within the switch that limit its ability to support the reference architecture.
•  The recommended scale is up to 12 arrays in a dual switch configuration, with the limit set by available port count rather than switch performance.
In conclusion, our test results indicate that the only limitation this reference architecture has with respect to scaling an EqualLogic SAN is the number of available ports that two switches provide, not any limitation in the performance of the switch.
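The scaling percentages above can be put in absolute terms with a small sketch. The per-host baseline of two 10 GbE ports is an assumption inferred from the host configuration described later, not a figure stated in the results themselves:

```python
# Hypothetical illustration of the scaling results quoted above.
# Assumption: the per-host theoretical baseline is the host's two
# 10 GbE iSCSI ports.
link_gbps = 10
ports_per_host = 2
baseline_gbps = link_gbps * ports_per_host      # 20 Gbps per host

seq_write_gbps = baseline_gbps * 0.97  # 97% of baseline (sequential write)
seq_read_gbps = baseline_gbps * 0.91   # 91% of baseline (sequential read)
print(round(seq_write_gbps, 1), round(seq_read_gbps, 1))  # 19.4 18.2
```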
4 Reference architecture
Because our goal is to help you deploy a switch-based SAN easily and quickly, we use a standard host configuration and a standard EqualLogic Group configuration, accept default switch settings wherever possible, and employ all accepted best-practice recommendations for EqualLogic SANs. When developing this reference architecture, we used the following guidelines:

•  The Cisco Nexus 5020 switch is targeted at converged DCB environments, but the switch does not fully support the IEEE DCB standards for iSCSI over DCB. For this reason, this reference architecture focuses on using the Cisco Nexus 5020 switch as a non-DCB Ethernet network infrastructure.
•  All hosts have two iSCSI Ethernet ports attached to the SAN.
•  All NICs are configured with the default, out-of-the-box settings where possible.
   o  The exceptions are Jumbo Frames and Flow Control, both of which are enabled for all testing.
•  Microsoft Windows Server is the operating system used for all hosts.
•  We use the EqualLogic Host Integration Toolkit on all hosts. In particular, we use the MPIO Device Specific Module to provide EqualLogic-aware multi-pathing.
•  Host connections to the SAN are equal to the number of active array ports connected to the SAN.
   o  Since each PS6x10 series array has two active array ports, for each array in the test configuration there is one host connected to the SAN.
•  All SAN infrastructures are single-subnet, non-routed designs.
•  All SAN infrastructures are based on standard IPv4 addressing.
•  All testing includes three pre-defined, standardized workloads that reflect various types of real-world SAN utilization.
Note: For more information on EqualLogic SAN design common practice recommendations, consult the EqualLogic Configuration Guide that can be found at www.delltechcenter.com/page/equallogic+configuration+guide
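The guidelines above, together with the switch configuration described later in this paper (ports 1-36 on each switch for edge connections, ports 37-40 for the inter-switch LAG), fix the port budget of the dual-switch SAN. The arithmetic can be sketched as:

```python
# Port budget for the 12-array / 12-host reference architecture.
# Two active 10 GbE ports per PS6x10 array and two iSCSI NICs per
# host, as stated in the guidelines above.
arrays, hosts = 12, 12
edge_ports_needed = arrays * 2 + hosts * 2    # 48 across the switch pair

edge_ports_per_switch = 36                    # ports 1-36 on each switch
switches = 2
edge_capacity = edge_ports_per_switch * switches  # 72 across the pair

print(edge_ports_needed, edge_capacity)  # 48 72
```

The 48 edge ports required fit comfortably within the 72 available, which is consistent with the conclusion that port count, not performance, bounds the configuration.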
Server specifications

Server
   Model: PowerEdge R610
   BIOS: 2.1.9 A02
   Chipset: Intel 5500-5520
   OS: Windows Server 2008 R2 SP1

Host Converged Network Adapter (CNA)
   Model: Intel x520 Dual Port
   Advanced Network Services: Driver Installed
   FCoE using Data Center Bridging: Driver Not Installed
   iSCSI using Data Center Bridging: Driver Installed

iSCSI Initiator
   Microsoft Windows Server 2008 R2 SP1

Intel x520 Driver
   Provider: Intel
   Version: 2.8.32.0
   Date: 3/4/2011

MPIO Configuration
   Dell EqualLogic Host Integration Toolkit Version 3.5.1
   Dell EqualLogic MPIO Device Specific Module
   Maximum Sessions per Slice: 2
   Maximum Sessions per Volume: 6
EqualLogic storage
   Array Model
   Firmware
   Enabled performance load balancing in pools
Table 3
Switch settings

Cisco Nexus 5020
   Firmware: 5.0(3)N2(2)
   Configuration:
      •  Dynamic Link Aggregation Group (LACP LAG), with flow control enabled on each port channel group
      •  Enable jumbo frames policy
      •  Enable spanning-tree on edge-port
      •  Set Layer 2 mode
      •  Enable port
      •  Activate flow control
   Cabling:
      •  Cisco 10G Active SFP-H10GB-CU3M 3 meter cable
      •  Array: SFP+ SR Optical Transceiver (DP/N 0N743D); Switch: Cisco Nexus SFP SR Optical Transceiver (SFP-10G-SR, 10-2415-03); LC-LC Fiber Optic Cable
4.4.2 Global Switch Setting: Configure no-drop class and jumbo frames
To get flow control to work properly on the individual ports, you must configure a no-drop class and also apply jumbo frames. The following CLI commands must be run on both switches.

1. Define the qos class-map:
   sw1(config)# ip access-list all_traffic
   sw1(config-acl)# permit ip any any
   sw1(config)# class-map type qos class-nodrop
   sw1(config-cmap-qos)# match access-group name all_traffic

2. Define the qos policy-map:
   sw1(config)# policy-map type qos policy-qos
   sw1(config-pmap-qos)# class type qos class-nodrop
   sw1(config-pmap-c-qos)# set qos-group 2

3. Apply the qos policy-map under system qos:
   sw1(config)# system qos
   sw1(config-sys-qos)# service-policy type qos input policy-qos

4. Define the network-qos class-map:
   sw1(config)# class-map type network-qos class-nodrop
SWRA1013
   sw1(config-cmap-nq)# match qos-group 2

5. Configure class-nodrop as the no-drop class in the network-qos policy-map and add jumbo frames:
   sw1(config)# policy-map type network-qos policy-nq
   sw1(config-pmap-nq)# class type network-qos class-nodrop
   sw1(config-pmap-nq-c)# mtu 9216
   sw1(config-pmap-nq-c)# pause no-drop

6. Apply the network-qos policy-map under the system qos context:
   sw1(config-pmap-nq-c)# system qos
   sw1(config-sys-qos)# service-policy type network-qos policy-nq

7. Configure bandwidth allocation using a queuing policy-map:
   sw1(config)# class-map type queuing class-nodrop
   sw1(config-cmap-que)# match qos-group 2
   sw1(config)# policy-map type queuing policy-queuing
   sw1(config-pmap-c-que)# class type queuing class-default
   sw1(config-pmap-c-que)# bandwidth percent 5
   sw1(config-pmap-c-que)# class type queuing class-fcoe
   sw1(config-pmap-c-que)# bandwidth percent 0
   sw1(config-pmap-c-que)# class type queuing class-nodrop
   sw1(config-pmap-c-que)# bandwidth percent 95
   sw1(config-pmap-c-que)# system qos
   sw1(config-sys-qos)# service-policy type queuing output policy-queuing

8. Enable PAUSE for all interfaces and the port channel:
   sw1(config)# interface e1/1
   sw1(config-if)# flowcontrol send on
   sw1(config-if)# flowcontrol receive on
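As a cross-check of the queuing configuration in step 7, the bandwidth percentages assigned to the three classes should total 100%, and the network-qos MTU of 9216 from step 5 leaves headroom above a 9000-byte jumbo frame payload. A small sketch of that arithmetic:

```python
# Cross-check of the bandwidth allocation configured in step 7 above.
# Values are the percentages from the CLI commands; NX-OS expects the
# queuing classes on an interface to total no more than 100%.
bandwidth_percent = {
    "class-default": 5,
    "class-fcoe": 0,
    "class-nodrop": 95,   # the no-drop iSCSI class (qos-group 2)
}
total = sum(bandwidth_percent.values())
print(total)  # 100

# MTU headroom: 9216 (step 5) minus a 9000-byte jumbo payload.
jumbo_headroom = 9216 - 9000
print(jumbo_headroom)  # 216
```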
4.4.3 Array and server port configuration

For this reference architecture, switch ports 1 through 36 on each switch were used for either server or array connections. The following commands were used to configure the array and server ports on each switch.

sw1# configure
Enter configuration commands one per line. To end, press <Ctrl> <z>.
sw1(config)# interface ethernet 1/1-36
sw1(config-if-range)# flowcontrol send on
sw1(config-if-range)# flowcontrol receive on
sw1(config-if-range)# spanning-tree port type edge default
sw1(config-if-range)# exit
sw1(config)# interface ethernet 1/1-36
sw1(config-if-range)# switchport mode access
sw1(config-if-range)# switchport access vlan 1
sw1(config-if-range)# do copy running-config startup-config
sw1(config-if-range)# exit
sw1(config)# exit
2. Add interfaces to the port channel:
   sw1# configure
   Enter configuration commands one per line. To end, press <Ctrl> <z>.
   sw1(config)# feature lacp
   sw1(config)# exit
   sw1# configure
   Enter configuration commands one per line. To end, press <Ctrl> <z>.
   sw1(config)# interface ethernet 1/37-40
   sw1(config-if-range)# channel-group 1 mode passive
   sw1(config-if-range)# exit
   sw1(config)# exit

3. Configure the port channel:
   sw1# configure
   Enter configuration commands one per line. To end, press <Ctrl> <z>.
   sw1(config)# interface port-channel 1
   sw1(config-if)# switchport mode trunk
   sw1(config-if)# switchport trunk allowed vlan 1
   sw1(config-if)# flowcontrol send on
   sw1(config-if)# flowcontrol receive on
   sw1(config-if)# exit
   sw1# conf

4. Configure vPC (Virtual Port Channel):
   sw1(config)# feature vpc
   sw1(config)# vpc domain 1
   sw1(config-vpc-domain)# peer-keepalive destination 192.168.7.252   (192.168.7.251 used on sw2)
   sw1(config-vpc-domain)# int ethernet 1/37-40
   sw1(config-if-range)# channel-group 1 mode active
   sw1(config-if-range)# int po1
   sw1(config-if)# vpc peer-link
   sw1(config-if)# switchport mode trunk
   sw1(config-if)# switchport trunk allowed vlan 1
   sw1(config-if)# exit
   sw1(config)# exit
Array     Port 0           Port 1
EQL01     192.168.1.11     192.168.1.21
EQL02     192.168.1.12     192.168.1.22
EQL03     192.168.1.13     192.168.1.23
EQL04     192.168.1.14     192.168.1.24
EQL05     192.168.1.15     192.168.1.25
EQL06     192.168.1.16     192.168.1.26
EQL07     192.168.1.17     192.168.1.27
EQL08     192.168.1.18     192.168.1.28
EQL09     192.168.1.19     192.168.1.29
EQL10     192.168.1.30     192.168.1.40
EQL11     192.168.1.31     192.168.1.41
EQL12     192.168.1.32     192.168.1.42
Server    NIC 1             NIC 2
SVR01     192.168.1.210     192.168.1.211
SVR02     192.168.1.212     192.168.1.213
SVR03     192.168.1.214     192.168.1.215
SVR04     192.168.1.216     192.168.1.217
SVR05     192.168.1.218     192.168.1.219
SVR06     192.168.1.220     192.168.1.221
SVR07     192.168.1.222     192.168.1.223
SVR08     192.168.1.224     192.168.1.225
SVR09     192.168.1.226     192.168.1.227
SVR10     192.168.1.228     192.168.1.229
SVR11     192.168.1.230     192.168.1.231
SVR12     192.168.1.232     192.168.1.233
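The server addressing above follows a regular pattern: each server takes a consecutive pair of addresses starting at 192.168.1.210. A short sketch reproduces it (the function name is illustrative, not part of any tool):

```python
# Reproduce the server NIC addressing pattern from the table above:
# SVRnn uses the consecutive address pair starting at
# 192.168.1.(210 + 2*(nn - 1)).
def server_nic_ips(server_number):
    first = 210 + 2 * (server_number - 1)
    return (f"192.168.1.{first}", f"192.168.1.{first + 1}")

print(server_nic_ips(1))   # NIC 1 / NIC 2 for SVR01
print(server_nic_ips(12))  # NIC 1 / NIC 2 for SVR12
```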