
Front cover

IBM PureFlex System and IBM Flex System Products and Technology
Describes the IBM Flex System Enterprise Chassis and compute node technology

Provides details about available I/O modules and expansion options

Explains networking and storage configurations

David Watts, Randall Davis, Dave Ridley

ibm.com/redbooks

International Technical Support Organization

IBM PureFlex System and IBM Flex System Products and Technology

October 2013

SG24-7984-03

Note: Before using this information and the product it supports, read the information in Notices on page xi.

Fourth Edition (October 2013)

This edition applies to the following products:

IBM PureFlex System
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System x220 Compute Node
IBM Flex System x222 Compute Node
IBM Flex System x240 Compute Node
IBM Flex System x440 Compute Node
IBM Flex System p260 Compute Node
IBM Flex System p270 Compute Node
IBM Flex System p24L Compute Node
IBM Flex System p460 Compute Node
IBM Flex System V7000 Storage Node
IBM 42U 1100mm Enterprise V2 Dynamic Rack
IBM PureFlex System 42U Rack and 42U Expansion Rack
Copyright International Business Machines Corporation 2012, 2013. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices . . . . . . . . . . xi
Trademarks . . . . . . . . . . xii

Preface . . . . . . . . . . xiii
Authors . . . . . . . . . . xiv
Now you can become a published author, too! . . . . . . . . . . xvi
Comments welcome . . . . . . . . . . xvi
Stay connected to IBM Redbooks . . . . . . . . . . xvi

Summary of changes . . . . . . . . . . xvii
October 2013, Fourth Edition . . . . . . . . . . xvii
August 2013, Third Edition . . . . . . . . . . xvii
February 2013, Second Edition . . . . . . . . . . xviii

Chapter 1. Introduction . . . . . . . . . . 1
1.1 IBM PureFlex System . . . . . . . . . . 3
1.2 IBM Flex System overview . . . . . . . . . . 6
1.2.1 IBM Flex System Manager . . . . . . . . . . 6
1.2.2 IBM Flex System Enterprise Chassis . . . . . . . . . . 7
1.2.3 Compute nodes . . . . . . . . . . 8
1.2.4 Expansion nodes . . . . . . . . . . 9
1.2.5 Storage nodes . . . . . . . . . . 9
1.2.6 I/O modules . . . . . . . . . . 10
1.3 This book . . . . . . . . . . 10

Chapter 2. IBM PureFlex System . . . . . . . . . . 11
2.1 Introduction . . . . . . . . . . 12
2.2 Components . . . . . . . . . . 13
2.2.1 Configurators for IBM PureFlex System . . . . . . . . . . 14
2.3 PureFlex solutions . . . . . . . . . . 15
2.3.1 PureFlex Solution for IBM i . . . . . . . . . . 15
2.3.2 PureFlex Solution for SmartCloud Desktop Infrastructure . . . . . . . . . . 16
2.4 IBM PureFlex System Express . . . . . . . . . . 17
2.4.1 Available Express configurations . . . . . . . . . . 17
2.4.2 Chassis . . . . . . . . . . 20
2.4.3 Compute nodes . . . . . . . . . . 20
2.4.4 IBM Flex System Manager . . . . . . . . . . 21
2.4.5 PureFlex Express storage requirements and options . . . . . . . . . . 21
2.4.6 Video, keyboard, mouse option . . . . . . . . . . 24
2.4.7 Rack cabinet . . . . . . . . . . 25
2.4.8 Available software for Power Systems compute nodes . . . . . . . . . . 25
2.4.9 Available software for x86-based compute nodes . . . . . . . . . . 26
2.5 IBM PureFlex System Enterprise . . . . . . . . . . 27
2.5.1 Enterprise configurations . . . . . . . . . . 27
2.5.2 Chassis . . . . . . . . . . 30
2.5.3 Top-of-rack switches . . . . . . . . . . 30
2.5.4 Compute nodes . . . . . . . . . . 31
2.5.5 IBM Flex System Manager . . . . . . . . . . 31
2.5.6 PureFlex Enterprise storage options . . . . . . . . . . 32

2.5.7 Video, keyboard, and mouse option . . . . . . . . . . 34
2.5.8 Rack cabinet . . . . . . . . . . 35
2.5.9 Available software for Power Systems compute node . . . . . . . . . . 35
2.5.10 Available software for x86-based compute nodes . . . . . . . . . . 35
2.6 Services for IBM PureFlex System Express and Enterprise . . . . . . . . . . 36
2.6.1 PureFlex FCoE Customization Service . . . . . . . . . . 37
2.6.2 PureFlex Services for IBM i . . . . . . . . . . 38
2.6.3 Software and hardware maintenance . . . . . . . . . . 38
2.7 IBM SmartCloud Entry for Flex System . . . . . . . . . . 39

Chapter 3. Systems management . . . . . . . . . . 41
3.1 Management network . . . . . . . . . . 42
3.2 Chassis Management Module . . . . . . . . . . 43
3.2.1 Overview . . . . . . . . . . 43
3.2.2 Interfaces . . . . . . . . . . 44
3.3 Security . . . . . . . . . . 46
3.4 Compute node management . . . . . . . . . . 47
3.4.1 Integrated Management Module II . . . . . . . . . . 47
3.4.2 Flexible service processor . . . . . . . . . . 48
3.4.3 I/O modules . . . . . . . . . . 49
3.5 IBM Flex System Manager . . . . . . . . . . 50
3.5.1 IBM Flex System Manager functions . . . . . . . . . . 50
3.5.2 Hardware overview . . . . . . . . . . 54
3.5.3 Software features . . . . . . . . . . 58
3.5.4 User interfaces . . . . . . . . . . 66
3.5.5 Mobile System Management application . . . . . . . . . . 66
3.5.6 Flex System Manager CLI . . . . . . . . . . 68

Chapter 4. Chassis and infrastructure configuration . . . . . . . . . . 69
4.1 Overview . . . . . . . . . . 70
4.1.1 Front of the chassis . . . . . . . . . . 73
4.1.2 Midplane . . . . . . . . . . 74
4.1.3 Rear of the chassis . . . . . . . . . . 75
4.1.4 Specifications . . . . . . . . . . 75
4.1.5 Air filter . . . . . . . . . . 77
4.1.6 Compute node shelves . . . . . . . . . . 77
4.1.7 Hot plug and hot swap components . . . . . . . . . . 78
4.2 Power supplies . . . . . . . . . . 79
4.3 Fan modules . . . . . . . . . . 82
4.4 Fan logic module . . . . . . . . . . 85
4.5 Front information panel . . . . . . . . . . 86
4.6 Cooling . . . . . . . . . . 87
4.7 Power supply selection . . . . . . . . . . 92
4.7.1 Power policies . . . . . . . . . . 94
4.7.2 Number of power supplies required for N+N and N+1 . . . . . . . . . . 95
4.8 Fan module population . . . . . . . . . . 99
4.9 Chassis Management Module . . . . . . . . . . 101
4.10 I/O architecture . . . . . . . . . . 104
4.11 I/O modules . . . . . . . . . . 112
4.11.1 I/O module LEDs . . . . . . . . . . 112
4.11.2 Serial access cable . . . . . . . . . . 113
4.11.3 I/O module naming scheme . . . . . . . . . . 114
4.11.4 Switch to adapter compatibility . . . . . . . . . . 115


4.11.5 IBM Flex System EN6131 40Gb Ethernet Switch . . . . . . . . . . 117
4.11.6 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch . . . . . . . . . . 121
4.11.7 IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switch . . . . . . . . . . 129
4.11.8 IBM Flex System Fabric SI4093 System Interconnect Module . . . . . . . . . . 136
4.11.9 IBM Flex System EN4091 10Gb Ethernet Pass-thru Module . . . . . . . . . . 142
4.11.10 IBM Flex System EN2092 1Gb Ethernet Scalable Switch . . . . . . . . . . 144
4.11.11 IBM Flex System FC5022 16Gb SAN Scalable Switch . . . . . . . . . . 148
4.11.12 IBM Flex System FC3171 8Gb SAN Switch . . . . . . . . . . 155
4.11.13 IBM Flex System FC3171 8Gb SAN Pass-thru . . . . . . . . . . 158
4.11.14 IBM Flex System IB6131 InfiniBand Switch . . . . . . . . . . 160
4.12 Infrastructure planning . . . . . . . . . . 161
4.12.1 Supported power cords . . . . . . . . . . 161
4.12.2 Supported PDUs and UPS units . . . . . . . . . . 162
4.12.3 Power planning . . . . . . . . . . 162
4.12.4 UPS planning . . . . . . . . . . 168
4.12.5 Console planning . . . . . . . . . . 169
4.12.6 Cooling planning . . . . . . . . . . 169
4.12.7 Chassis-rack cabinet compatibility . . . . . . . . . . 170
4.13 IBM 42U 1100mm Enterprise V2 Dynamic Rack . . . . . . . . . . 172
4.14 IBM PureFlex System 42U Rack and 42U Expansion Rack . . . . . . . . . . 178
4.15 IBM Rear Door Heat eXchanger V2 Type 1756 . . . . . . . . . . 180

Chapter 5. Compute nodes . . . . . . . . . . 185
5.1 IBM Flex System Manager . . . . . . . . . . 186
5.2 IBM Flex System x220 Compute Node . . . . . . . . . . 186
5.2.1 Introduction . . . . . . . . . . 186
5.2.2 Models . . . . . . . . . . 190
5.2.3 Chassis support . . . . . . . . . . 190
5.2.4 System architecture . . . . . . . . . . 191
5.2.5 Processor options . . . . . . . . . . 193
5.2.6 Memory options . . . . . . . . . . 193
5.2.7 Internal disk storage controllers . . . . . . . . . . 201
5.2.8 Supported internal drives . . . . . . . . . . 206
5.2.9 Embedded 1 Gb Ethernet controller . . . . . . . . . . 209
5.2.10 I/O expansion . . . . . . . . . . 209
5.2.11 Integrated virtualization . . . . . . . . . . 211
5.2.12 Systems management . . . . . . . . . . 211
5.2.13 Operating system support . . . . . . . . . . 215
5.3 IBM Flex System x222 Compute Node . . . . . . . . . . 216
5.3.1 Introduction . . . . . . . . . . 216
5.3.2 Models . . . . . . . . . . 219
5.3.3 Chassis support . . . . . . . . . . 219
5.3.4 System architecture . . . . . . . . . . 220
5.3.5 Processor options . . . . . . . . . . 222
5.3.6 Memory options . . . . . . . . . . 223
5.3.7 Supported internal drives . . . . . . . . . . 225
5.3.8 Expansion Node support . . . . . . . . . . 226
5.3.9 Embedded 10Gb Virtual Fabric adapter . . . . . . . . . . 226
5.3.10 Mid-mezzanine I/O adapters . . . . . . . . . . 228
5.3.11 Integrated virtualization . . . . . . . . . . 231
5.3.12 Systems management . . . . . . . . . . 232
5.3.13 Operating system support . . . . . . . . . . 234
5.4 IBM Flex System x240 Compute Node . . . . . . . . . . 234


5.4.1 Introduction . . . . . . . . . . 235
5.4.2 Features and specifications . . . . . . . . . . 237
5.4.3 Models . . . . . . . . . . 239
5.4.4 Chassis support . . . . . . . . . . 239
5.4.5 System architecture . . . . . . . . . . 240
5.4.6 Processor . . . . . . . . . . 242
5.4.7 Memory . . . . . . . . . . 245
5.4.8 Standard onboard features . . . . . . . . . . 258
5.4.9 Local storage . . . . . . . . . . 259
5.4.10 Integrated virtualization . . . . . . . . . . 266
5.4.11 Embedded 10 Gb Virtual Fabric adapter . . . . . . . . . . 268
5.4.12 I/O expansion . . . . . . . . . . 269
5.4.13 Systems management . . . . . . . . . . 271
5.4.14 Operating system support . . . . . . . . . . 274
5.5 IBM Flex System x440 Compute Node . . . . . . . . . . 275
5.5.1 Introduction . . . . . . . . . . 275
5.5.2 Models . . . . . . . . . . 278
5.5.3 Chassis support . . . . . . . . . . 279
5.5.4 System architecture . . . . . . . . . . 280
5.5.5 Processor options . . . . . . . . . . 281
5.5.6 Memory options . . . . . . . . . . 282
5.5.7 Internal disk storage . . . . . . . . . . 284
5.5.8 Embedded 10Gb Virtual Fabric . . . . . . . . . . 290
5.5.9 I/O expansion options . . . . . . . . . . 291
5.5.10 Network adapters . . . . . . . . . . 294
5.5.11 Storage host bus adapters . . . . . . . . . . 295
5.5.12 Integrated virtualization . . . . . . . . . . 295
5.5.13 Light path diagnostics panel . . . . . . . . . . 296
5.5.14 Operating systems support . . . . . . . . . . 297
5.6 IBM Flex System p260 and p24L Compute Nodes . . . . . . . . . . 298
5.6.1 Specifications . . . . . . . . . . 298
5.6.2 System board layout . . . . . . . . . . 301
5.6.3 IBM Flex System p24L Compute Node . . . . . . . . . . 301
5.6.4 Front panel . . . . . . . . . . 302
5.6.5 Chassis support . . . . . . . . . . 304
5.6.6 System architecture . . . . . . . . . . 304
5.6.7 Processor . . . . . . . . . . 305
5.6.8 Memory . . . . . . . . . . 308
5.6.9 Active Memory Expansion . . . . . . . . . . 310
5.6.10 Storage . . . . . . . . . . 313
5.6.11 I/O expansion . . . . . . . . . . 315
5.6.12 System management . . . . . . . . . . 316
5.6.13 Operating system support . . . . . . . . . . 317
5.7 IBM Flex System p270 Compute Node . . . . . . . . . . 318
5.7.1 Specifications . . . . . . . . . . 319
5.7.2 System board layout . . . . . . . . . . 320
5.7.3 Comparing the p260 and p270 . . . . . . . . . . 321
5.7.4 Front panel . . . . . . . . . . 322
5.7.5 Chassis support . . . . . . . . . . 323
5.7.6 System architecture . . . . . . . . . . 324
5.7.7 IBM POWER7+ processor . . . . . . . . . . 325
5.7.8 Memory subsystem . . . . . . . . . . 327
5.7.9 Active Memory Expansion feature . . . . . . . . . . 329

5.7.10 Storage . . . . . . . . . . 329
5.7.11 I/O expansion . . . . . . . . . . 333
5.7.12 System management . . . . . . . . . . 333
5.7.13 Operating system support . . . . . . . . . . 334
5.8 IBM Flex System p460 Compute Node . . . . . . . . . . 335
5.8.1 Overview . . . . . . . . . . 335
5.8.2 System board layout . . . . . . . . . . 338
5.8.3 Front panel . . . . . . . . . . 338
5.8.4 Chassis support . . . . . . . . . . 340
5.8.5 System architecture . . . . . . . . . . 341
5.8.6 Processor . . . . . . . . . . 342
5.8.7 Memory . . . . . . . . . . 345
5.8.8 Active Memory Expansion feature . . . . . . . . . . 349
5.8.9 Storage . . . . . . . . . . 350
5.8.10 Local storage and cover options . . . . . . . . . . 351
5.8.11 Hardware RAID capabilities . . . . . . . . . . 353
5.8.12 I/O expansion . . . . . . . . . . 353
5.8.13 System management . . . . . . . . . . 354
5.8.14 Integrated features . . . . . . . . . . 355
5.8.15 Operating system support . . . . . . . . . . 355
5.9 IBM Flex System PCIe Expansion Node . . . . . . . . . . 356
5.9.1 Features . . . . . . . . . . 357
5.9.2 Architecture . . . . . . . . . . 359
5.9.3 Supported PCIe adapters . . . . . . . . . . 361
5.9.4 Supported I/O expansion cards . . . . . . . . . . 362
5.10 IBM Flex System Storage Expansion Node . . . . . . . . . . 363
5.10.1 Supported nodes . . . . . . . . . . 364
5.10.2 Features on Demand upgrades . . . . . . . . . . 366
5.10.3 Cache upgrades . . . . . . . . . . 367
5.10.4 Supported HDD and SSD . . . . . . . . . . 368
5.11 I/O adapters . . . . . . . . . . 370
5.11.1 Form factor . . . . . . . . . . 371
5.11.2 Naming structure . . . . . . . . . . 372
5.11.3 Supported compute nodes . . . . . . . . . . 373
5.11.4 Supported switches . . . . . . . . . . 374
5.11.5 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter . . . . . . . . . . 376
5.11.6 IBM Flex System EN4132 2-port 10Gb Ethernet Adapter . . . . . . . . . . 377
5.11.7 IBM Flex System EN4054 4-port 10Gb Ethernet Adapter . . . . . . . . . . 378
5.11.8 IBM Flex System EN6132 2-port 40Gb Ethernet Adapter . . . . . . . . . . 380
5.11.9 IBM Flex System CN4054 10Gb Virtual Fabric Adapter . . . . . . . . . . 381
5.11.10 IBM Flex System CN4058 8-port 10Gb Converged Adapter . . . . . . . . . . 384
5.11.11 IBM Flex System EN4132 2-port 10Gb RoCE Adapter . . . . . . . . . . 387
5.11.12 IBM Flex System FC3172 2-port 8Gb FC Adapter . . . . . . . . . . 389
5.11.13 IBM Flex System FC3052 2-port 8Gb FC Adapter . . . . . . . . . . 391
5.11.14 IBM Flex System FC5022 2-port 16Gb FC Adapter . . . . . . . . . . 393
5.11.15 IBM Flex System FC5024D 4-port 16Gb FC Adapter . . . . . . . . . . 394
5.11.16 IBM Flex System FC5052 2-port and FC5054 4-port 16Gb FC Adapters . . . . . . . . . . 396
5.11.17 IBM Flex System FC5172 2-port 16Gb FC Adapter . . . . . . . . . . 398
5.11.18 IBM Flex System IB6132 2-port FDR InfiniBand Adapter . . . . . . . . . . 400
5.11.19 IBM Flex System IB6132 2-port QDR InfiniBand Adapter . . . . . . . . . . 401
5.11.20 IBM Flex System IB6132D 2-port FDR InfiniBand Adapter . . . . . . . . . . 403

Chapter 6. Network integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405


6.1 Choosing the Ethernet switch I/O module . . . . . . . . . .
6.2 Virtual local area networks . . . . . . . . . .
6.3 Scalability and performance . . . . . . . . . .
6.4 High Availability . . . . . . . . . .
6.4.1 Highly available topologies . . . . . . . . . .
6.4.2 Spanning Tree . . . . . . . . . .
6.4.3 Link aggregation . . . . . . . . . .
6.4.4 NIC teaming . . . . . . . . . .
6.4.5 Trunk failover . . . . . . . . . .
6.4.6 Virtual Router Redundancy Protocol . . . . . . . . . .
6.5 FCoE capabilities . . . . . . . . . .
6.6 Virtual Fabric vNIC solution capabilities . . . . . . . . . .
6.6.1 Virtual Fabric mode vNIC . . . . . . . . . .
6.6.2 Switch-independent mode vNIC . . . . . . . . . .
6.7 Unified Fabric Port feature . . . . . . . . . .
6.8 Easy Connect concept . . . . . . . . . .
6.9 Stacking feature . . . . . . . . . .
6.10 Openflow support . . . . . . . . . .
6.11 802.1Qbg Edge Virtual Bridge support . . . . . . . . . .
6.12 SPAR feature . . . . . . . . . .
6.13 Management . . . . . . . . . .
6.13.1 Management tools and their capabilities . . . . . . . . . .
6.14 Summary and conclusions . . . . . . . . . .

Chapter 7. Storage integration . . . . . . . . . .
7.1 IBM Flex System V7000 Storage Node . . . . . . . . . .
7.1.1 V7000 Storage Node types . . . . . . . . . .
7.1.2 Controller Modules . . . . . . . . . .
7.1.3 Expansion Modules . . . . . . . . . .
7.1.4 SAS cabling . . . . . . . . . .
7.1.5 Host interface cards . . . . . . . . . .
7.1.6 Fibre Channel over Ethernet with a V7000 Storage Node . . . . . . . . . .
7.1.7 V7000 Storage Node drive options . . . . . . . . . .
7.1.8 Features and functions . . . . . . . . . .
7.1.9 Licenses . . . . . . . . . .
7.1.10 Configuration restrictions . . . . . . . . . .
7.2 External storage . . . . . . . . . .
7.2.1 IBM Storwize V7000 . . . . . . . . . .
7.2.2 IBM XIV Storage System series . . . . . . . . . .
7.2.3 IBM System Storage DS8000 series . . . . . . . . . .
7.2.4 IBM System Storage DS5000 series . . . . . . . . . .
7.2.5 IBM System Storage V3700 . . . . . . . . . .
7.2.6 IBM System Storage DS3500 series . . . . . . . . . .
7.2.7 IBM network-attached storage products . . . . . . . . . .
7.2.8 IBM FlashSystem . . . . . . . . . .
7.2.9 IBM System Storage TS3500 Tape Library . . . . . . . . . .
7.2.10 IBM System Storage TS3310 series . . . . . . . . . .
7.2.11 IBM System Storage TS3200 Tape Library . . . . . . . . . .
7.2.12 IBM System Storage TS3100 Tape Library . . . . . . . . . .
7.3 Fibre Channel . . . . . . . . . .
7.3.1 FC requirements . . . . . . . . . .
7.3.2 FC switch selection and fabric interoperability rules . . . . . . . . . .
7.4 FCoE . . . . . . . . . .
. . . . . . . .

6.1 6.2 6.3 6.4

406 408 409 411 413 416 417 419 420 421 422 423 424 426 427 429 430 432 433 433 434 436 437 439 440 444 445 450 452 454 454 455 455 457 458 459 460 462 463 463 464 464 465 465 466 467 467 468 468 468 469 473


7.5 iSCSI . . . 475
7.6 HA and redundancy . . . 476
7.7 Performance . . . 478
7.8 Backup solutions . . . 478
7.8.1 Dedicated server for centralized LAN backup . . . 479
7.8.2 LAN-free backup for nodes . . . 480
7.9 Boot from SAN . . . 481
7.9.1 Implementing Boot from SAN . . . 481
7.9.2 iSCSI SAN Boot specific considerations . . . 481

Abbreviations and acronyms . . . 483

Related publications and education . . . 485
IBM Redbooks . . . 485
IBM education . . . 486
Online resources . . . 486
Help from IBM . . . 487


Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Active Cloud Engine Active Memory AIX AIX 5L AS/400 BladeCenter DB2 DS4000 DS8000 Easy Tier EnergyScale eServer FICON FlashCopy FlashSystem IBM IBM FlashSystem IBM Flex System IBM Flex System Manager IBM SmartCloud iDataPlex Linear Tape File System Netfinity POWER Power Systems POWER6 POWER6+ POWER7 POWER7+ PowerPC PowerVM PureApplication PureData PureFlex PureSystems Real-time Compression Redbooks Redbooks (logo) ServerProven ServicePac Storwize System Storage System Storage DS System x Tivoli Tivoli Storage Manager FastBack VMready X-Architecture XIV

The following terms are trademarks of other companies: Intel, Intel Xeon, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Linear Tape-Open, LTO, Ultrium, the LTO Logo and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries. Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, or service names may be trademarks or service marks of others.


Preface
To meet today's complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

The IBM PureFlex System combines no-compromise system designs along with built-in expertise and integrates them into complete, optimized solutions. At the heart of PureFlex System is the IBM Flex System Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications. The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy now, and to scale to meet your needs in the future.

This IBM Redbooks publication describes IBM PureFlex System and IBM Flex System. It highlights the technology and features of the chassis, compute nodes, management features, and connectivity options. Guidance is provided about every major component, and about networking and storage connectivity.

This book is intended for customers, Business Partners, and IBM employees who want to know the details about the new family of products. It assumes that you have a basic understanding of blade server concepts and general IT knowledge.


Authors
This book was produced by a team of specialists from around the world working at the International Technical Support Organization, Raleigh Center.

David Watts is a Consulting IT Specialist at the IBM ITSO Center in Raleigh. He manages residencies and produces IBM Redbooks publications on hardware and software topics that are related to IBM Flex System, IBM System x, and BladeCenter servers and associated client platforms. He has authored over 200 books, papers, and Product Guides. He holds a Bachelor of Engineering degree from the University of Queensland (Australia), and has worked for IBM in the United States and Australia since 1989. David is an IBM Certified IT Specialist and a member of the IT Specialist Certification Review Board.

Randall Davis is a Senior IT Specialist working in the System x pre-sales team for IBM Australia as a Field Technical Sales Support (FTSS) specialist. He regularly performs System x, BladeCenter, and Storage demonstrations for customers at the IBM Demonstration Centre in Melbourne, Australia. He also helps instruct Business Partners and customers on how to configure and install the BladeCenter. His areas of expertise are the IBM BladeCenter, System x servers, VMware, and Linux. Randall started at IBM as a System 36 and AS/400 Engineer in 1989.

Dave Ridley is the PureFlex and Flex System Technical Product Manager for IBM in the United Kingdom and Ireland. His role includes product transition planning, supporting marketing events, press briefings, managing the UK loan pool, running early ship programs, and supporting the local sales and technical teams. He is based in Horsham in the United Kingdom, and has worked for IBM since 1998. In addition, he has been involved with IBM x86 products for 27 years.

Thanks to the authors of the previous editions of this book. Authors of the second edition, IBM PureFlex System and IBM Flex System Products and Technology, published in February 2013, were: David Watts Dave Ridley Authors of the first edition, IBM PureFlex System and IBM Flex System Products and Technology, published in July 2012, were: David Watts Randall Davis Richard French Lu Han Dave Ridley Cristian Rojas


Thanks to the following people for their contributions to this project: From IBM marketing: TJ Aspden Michael Bacon John Biebelhausen Mark Cadiz Bruce Corregan Mary Beth Daughtry Meleata Pinto Mike Easterly Diana Cunniffe Kyle Hampton From IBM development: Mike Anderson Sumanta Bahali Wayne Banks Barry Barnett Keith Cramer Mustafa Dahnoun Dean Duff Royce Espey Kaena Freitas Jim Gallagher Dottie Gardner Sam Gaver Phil Godbolt Mike Goodman John Gossett Tim Hiteshew Andy Huryn Bill Ilas Don Keener Caroline Metry Meg McColgan Mark McCool Rob Ord Greg Pruett Mike Solheim Fang Su Vic Stankevich Tan Trinh Rochelle White Dale Weiler Mark Welch Al Willard Botond Kiss Shekhar Mishra Sander Kim Dean Parker Hector Sanchez David Tareen David Walker Randi Wood Bob Zuber

From the International Technical Support Organization: Kevin Barnes Tamikia Barrow Mary Comianos Deana Coble Others from IBM around the world: Kerry Anders Simon Casey Bill Champion Jonathan A Tyrrell Others from other companies: Tom Boucher, Emulex Brad Buland, Intel Jeff Lin, Emulex Chris Mojica, QLogic Brent Mosbrook, Emulex Jimmy Myers, Brocade Haithuy Nguyen, Mellanox Brian Sparks, Mellanox Matt Wineberg, Brocade Michael L. Nelson Kiron Rakkar Matt Slavin Fabien Willmann Shari Deiana Cheryl Gera Ilya Krutov Karen Lawrence


Now you can become a published author, too!


Here's an opportunity to spotlight your skills, grow your career, and become a published author, all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at: ibm.com/redbooks
- Send your comments in an email to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD, Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks


- Find us on Facebook: http://www.facebook.com/IBMRedbooks
- Follow us on Twitter: http://twitter.com/ibmredbooks
- Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806
- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
- Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html


Summary of changes
This section describes the technical changes that were made in this edition of the book and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-7984-03 for IBM PureFlex System and IBM Flex System Products and Technology as created or updated on October 15, 2013 3:46 pm.

October 2013, Fourth Edition


This revision reflects the addition, deletion, or modification of new and changed information that is described here.

New information
The following new products were added to the book:
- IBM PureFlex System Express
- IBM PureFlex System Enterprise
- IBM SmartCloud Entry 3.2

These products are described in Chapter 2, IBM PureFlex System on page 11.

Important: The Flex System components that were announced in October 2013 will be covered in the next edition of this book.

August 2013, Third Edition


This revision reflects the addition, deletion, or modification of new and changed information that is described below.

New information
The following new products and options were added to the book:
- IBM Flex System x222 Compute Node
- IBM Flex System p260 Compute Node (POWER7+ SCM)
- IBM Flex System p270 Compute Node (POWER7+ DCM)
- IBM Flex System p460 Compute Node (POWER7+ SCM)
- IBM Flex System EN6132 2-port 40Gb Ethernet Adapter
- IBM Flex System FC5052 2-port 16Gb FC Adapter
- IBM Flex System FC5054 4-port 16Gb FC Adapter
- IBM Flex System FC5172 2-port 16Gb FC Adapter
- IBM Flex System FC5024D 4-port 16Gb FC Adapter
- IBM Flex System IB6132D 2-port FDR InfiniBand Adapter
- IBM Flex System Fabric SI4093 System Interconnect Module
- IBM Flex System EN6131 40Gb Ethernet Switch


February 2013, Second Edition


This revision reflects the addition, deletion, or modification of new and changed information that is described below.

New information
The following new products and options were added to the book:
- IBM SmartCloud Entry V2.4
- IBM Flex System Manager V1.2
- IBM Flex System Fabric EN4093R 10Gb Scalable Switch
- IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
- FoD license upgrades for the IBM Flex System FC5022 16Gb SAN Scalable Switch
- IBM PureFlex System 42U Rack
- 2100-W power supply option for the Enterprise Chassis
- New options and models of the IBM Flex System x220 Compute Node
- IBM Flex System x440 Compute Node
- Additional solid-state drive options for all x86 compute nodes
- IBM Flex System p260 Compute Node, model 23X with IBM POWER7+ processors
- New memory options for the IBM Power Systems compute nodes
- IBM Flex System Storage Expansion Node
- IBM Flex System PCIe Expansion Node
- IBM Flex System CN4058 8-port 10Gb Converged Adapter
- IBM Flex System EN4132 2-port 10Gb RoCE Adapter
- IBM Flex System V7000 Storage Node

Changed information
The following updates were made to existing product information:
- Updated the configurations of IBM PureFlex System Express, Standard, and Enterprise
- Switch stacking feature of Ethernet switches
- FCoE and iSCSI support


Chapter 1. Introduction
During the last 100 years, information technology moved from a specialized tool to a pervasive influence on nearly every aspect of life. From tabulating machines that counted with mechanical switches or vacuum tubes to the first programmable computers, IBM has been a part of this growth. The goal has always been to help customers to solve problems. IT is a constant part of business and of general life. The expertise of IBM in delivering IT solutions has helped the planet become more efficient. As organizational leaders seek to extract more real value from their data, business processes, and other key investments, IT is moving to the strategic center of business.

To meet these business demands, IBM has introduced a new category of systems. These systems combine the flexibility of general-purpose systems, the elasticity of cloud computing, and the simplicity of an appliance that is tuned to the workload. Expert integrated systems are essentially the building blocks of capability. This new category of systems represents the collective knowledge of thousands of deployments, established guidelines, innovative thinking, IT leadership, and distilled expertise.

The offerings are designed to deliver value in the following ways:
- Built-in expertise helps you to address complex business and operational tasks automatically.
- Integration by design helps you to tune systems for optimal performance and efficiency.
- Simplified experience, from design to purchase to maintenance, creates efficiencies quickly.

These offerings are optimized for performance and virtualized for efficiency. These systems offer a no-compromise design with system-level upgradeability. The capability is built for cloud, containing built-in flexibility and simplicity.


IBM PureFlex System is an expert integrated system. It is an infrastructure system with built-in expertise that deeply integrates with the complex IT elements of an infrastructure. This chapter describes the IBM PureFlex System and the components that make up this compelling offering, and includes the following topics:
- 1.1, IBM PureFlex System on page 3
- 1.2, IBM Flex System overview on page 6
- 1.3, This book on page 10


1.1 IBM PureFlex System


To meet today's complex and ever-changing business demands, you need a solid foundation of server, storage, networking, and software resources. Furthermore, it must be simple to deploy, and able to quickly and automatically adapt to changing conditions. You also need access to, and the ability to take advantage of, broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

IBM PureFlex System is a comprehensive infrastructure system that provides an expert integrated computing system. It combines servers, enterprise storage, networking, virtualization, and management into a single structure. Its built-in expertise enables organizations to manage and flexibly deploy integrated patterns of virtual and hardware resources through unified management.

These systems are ideally suited for customers who want a system that delivers the simplicity of an integrated solution while still being able to tune middleware and the runtime environment. IBM PureFlex System uses workload placement that is based on virtual machine compatibility and resource availability. By using built-in virtualization across servers, storage, and networking, the infrastructure system enables automated scaling of resources and true workload mobility.

IBM PureFlex System has undergone significant testing and experimentation so that it can mitigate IT complexity without compromising the flexibility to tune systems to the tasks businesses demand. By providing flexibility and simplicity, IBM PureFlex System can provide extraordinary levels of IT control, efficiency, and operating agility. This combination enables businesses to rapidly deploy IT services at a reduced cost.

Moreover, the system is built on decades of expertise. This expertise enables deep integration and central management of the comprehensive, open-choice infrastructure system. It also dramatically cuts down on the skills and training that is required for managing and deploying the system.
IBM PureFlex System combines advanced IBM hardware and software along with patterns of expertise. It integrates them into three optimized configurations that are simple to acquire and deploy so you get fast time to value. IBM PureFlex System is built and integrated before shipment so it can be quickly deployed into the data center. PureFlex System is shipped complete, integrated within a rack that incorporates all the required power, networking, and SAN cabling, together with all the associated switches, compute nodes, and storage.

Figure 1-1 on page 4 shows an IBM PureFlex System 42U rack, complete with its distinctive PureFlex door.


Figure 1-1 IBM PureFlex System

The PureFlex System includes the following configurations:
- IBM PureFlex System Express, which is designed for small and medium businesses and is the most affordable entry point for PureFlex System.
- IBM PureFlex System Standard, which is optimized for application servers with supporting storage and networking, and is designed to support your key ISV solutions.
- IBM PureFlex System Enterprise, which is optimized for transactional and database systems. It has built-in redundancy for highly reliable and resilient operation to support your most critical workloads.


These configurations are summarized in Table 1-1.


Table 1-1 IBM PureFlex System configurations

- IBM PureFlex System 42U Rack: Express 1; Standard 1; Enterprise 1
- IBM Flex System Enterprise Chassis: Express 1; Standard 1; Enterprise 1
- IBM Flex System Fabric EN4093 10Gb Scalable Switch: Express 1; Standard 1; Enterprise 2 with both port-count upgrades
- IBM Flex System FC3171 8Gb SAN Switch (a): Express 1; Standard 2; Enterprise 2
- IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch (a): Express 1; Standard 2; Enterprise 2
- IBM Flex System Manager Node: Express 1; Standard 1; Enterprise 1
- IBM Flex System Manager software license: Express, IBM Flex System Manager with 1-year service and support; Standard and Enterprise, IBM Flex System Manager Advanced with 3-year service and support
- Chassis Management Module: Express 2; Standard 2; Enterprise 2
- Chassis power supplies (std/max): Express 2/6; Standard 4/6; Enterprise 6/6
- Chassis 80 mm fan modules (std/max): Express 4/8; Standard 6/8; Enterprise 8/8
- IBM Flex System V7000 Storage Node (b): Yes (redundant controller) in all three configurations
- IBM Storwize V7000 Disk System (b): Yes (redundant controller) in all three configurations
- IBM Storwize V7000 Software: Express, Base with 1-year software maintenance agreement and optional Real Time Compression; Standard and Enterprise, Base with 3-year software maintenance agreement and Real Time Compression

a. Select the IBM Flex System FC3171 8Gb SAN Switch or IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch module.
b. Select the IBM Flex System V7000 Storage Node that is installed inside the Enterprise Chassis or the external IBM Storwize V7000 Disk System.

The fundamental building blocks of the three IBM PureFlex System solutions are the compute nodes, storage nodes, and networking of the IBM Flex System Enterprise Chassis.
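For comparison purposes, the redundancy-related rows of Table 1-1 can be captured as plain data. The dictionary layout and field names in the sketch below are this example's own, not an IBM format, but the counts come from the table:

```python
# Illustrative only: selected rows of Table 1-1 as data.
# Field names are this sketch's own; values are taken from the table.
# psu and fans hold (standard, maximum) counts per chassis.
CONFIGS = {
    "Express":    {"en4093": 1, "san_switches": 1, "cmm": 2, "psu": (2, 6), "fans": (4, 8)},
    "Standard":   {"en4093": 1, "san_switches": 2, "cmm": 2, "psu": (4, 6), "fans": (6, 8)},
    "Enterprise": {"en4093": 2, "san_switches": 2, "cmm": 2, "psu": (6, 6), "fans": (8, 8)},
}

def fully_populated(name: str) -> bool:
    """True when power supplies and fan modules ship at their maximum counts."""
    cfg = CONFIGS[name]
    return cfg["psu"][0] == cfg["psu"][1] and cfg["fans"][0] == cfg["fans"][1]

for name in CONFIGS:
    print(f"{name}: fully populated = {fully_populated(name)}")
```

Only Enterprise ships fully populated (6 of 6 power supplies, 8 of 8 fan modules), which matches its positioning in Table 1-1 as the configuration with built-in redundancy for the most critical workloads.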


1.2 IBM Flex System overview


IBM Flex System is a full system of hardware that forms the underlying strategic basis of IBM PureFlex System and IBM PureApplication System and forms the underlying hardware basis of other IBM PureSystems offerings. IBM Flex System optionally includes a management appliance, known as Flex System Manager. IBM Flex System is the next generation blade chassis offering from IBM, which features the latest innovations and advanced technologies. The major components of the IBM Flex System are described next.

1.2.1 IBM Flex System Manager


IBM Flex System Manager (FSM) is a high-performance, scalable systems management appliance with a preinstalled software stack. It is designed to optimize the physical and virtual resources of the Flex System infrastructure while simplifying and automating repetitive tasks. Flex System Manager provides easy system setup procedures with wizards and built-in expertise, and consolidated monitoring for all of your resources, including compute, storage, networking, and virtualization. It is an ideal solution that allows you to reduce administrative expense and focus your efforts on business innovation.

A single user interface controls the following features:
- Intelligent automation
- Resource pooling
- Improved resource usage
- Complete management integration
- Simplified setup

As an appliance, Flex System Manager is delivered preinstalled onto a dedicated compute node platform, which is designed to provide a specific purpose. It is intended to configure, monitor, and manage IBM Flex System resources in up to 16 IBM Flex System Enterprise Chassis, which optimizes time-to-value. FSM provides an instant resource-oriented view of the Enterprise Chassis and its components, which provides vital information for real-time monitoring.

An increased focus on optimizing time-to-value is evident in the following features:
- Setup wizards, including initial setup wizards, provide intuitive and quick setup of the Flex System Manager.
- The Chassis Map provides multiple view overlays to track health, firmware inventory, and environmental metrics.
- Configuration management for repeatable setup of compute, network, and storage devices.
- Remote presence application for remote access to compute nodes with single sign-on.
- Quick search provides results as you type.
Beyond the physical world of inventory, configuration, and monitoring, IBM Flex System Manager enables virtualization and workload optimization for a new class of computing:
- Resource usage: Detects congestion, notification policies, and relocation of physical and virtual machines that include storage and network configurations within the network fabric.


- Resource pooling: Pooled network switching, with placement advisors that consider virtual machine (VM) compatibility, processor, availability, and energy.
- Intelligent automation: Automated and dynamic VM placement that is based on usage, hardware predictive failure alerts, and host failures.

Figure 1-2 shows the IBM Flex System Manager appliance.

Figure 1-2 IBM Flex System Manager

1.2.2 IBM Flex System Enterprise Chassis


The IBM Flex System Enterprise Chassis is the foundation of the Flex System offering. It features 14 standard (half-width) Flex System form factor compute node bays in a 10U chassis that delivers high-performance connectivity for your integrated compute, storage, networking, and management resources. Up to 28 independent servers can be accommodated in each Enterprise Chassis if double-dense x222 compute nodes are deployed.

The chassis is designed to support multiple generations of technology, and offers independently scalable resource pools for higher usage and lower cost per workload. With the ability to handle up to 14 nodes, supporting the intermixing of IBM Power Systems and Intel x86, the Enterprise Chassis provides flexibility and tremendous compute capacity in a 10U package. Additionally, the rear of the chassis accommodates four high-speed I/O bays that can accommodate up to 40 GbE high-speed networking, 16 Gb Fibre Channel, or 56 Gb InfiniBand. By interconnecting compute nodes, networking, and storage through a high-performance and scalable midplane, the Enterprise Chassis can support the latest high-speed networking technologies.

The ground-up design of the Enterprise Chassis reaches new levels of energy efficiency through innovations in power, cooling, and air flow. Simpler controls and futuristic designs allow the Enterprise Chassis to break free of one-size-fits-all energy schemes. The ability to support the workload demands of tomorrow is built in with a new I/O architecture, which provides choice and flexibility in fabric and speed. With the ability to use Ethernet, InfiniBand, Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI, the Enterprise Chassis is uniquely positioned to meet the growing and future I/O needs of large and small businesses.
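The density arithmetic above is simple; the following minimal sketch illustrates it (a hypothetical helper, not an IBM tool — the 14-bay count and the two-servers-per-bay x222 packaging are the only figures taken from the text):

```python
# Sketch: maximum number of independent servers in one Enterprise Chassis.
HALF_WIDTH_BAYS = 14  # standard (half-width) node bays per 10U chassis

def max_servers(double_dense: bool = False) -> int:
    """One server per half-width bay, or two per bay when every bay
    holds a double-dense x222 compute node (two servers per node)."""
    servers_per_bay = 2 if double_dense else 1
    return HALF_WIDTH_BAYS * servers_per_bay

print(max_servers())      # 14 standard compute nodes
print(max_servers(True))  # 28 servers with x222 double-dense nodes
```

Mixed configurations fall between these two bounds, because each bay contributes one or two servers depending on the node it holds.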

Chapter 1. Introduction

Figure 1-3 shows the IBM Flex System Enterprise Chassis.

Figure 1-3 The IBM Flex System Enterprise Chassis

1.2.3 Compute nodes


IBM Flex System offers compute nodes that vary in architecture, dimensions, and capabilities. Optimized for efficiency, density, performance, reliability, and security, the portfolio includes a range of IBM POWER and Intel Xeon based nodes that are designed to make full use of the capabilities of these processors, and that can be mixed within the same Enterprise Chassis.

Power Systems nodes are available in two-socket and four-socket varieties that use the IBM POWER7 and IBM POWER7+ processors. Also available is a POWER7 node that is optimized for cost-effective deployment of Linux. Compute nodes that use Intel processors range from the two-socket Intel Xeon E5-2400 and E5-2600 product families to the four-socket Intel Xeon E5-4600 product family. Up to 28 two-socket Intel Xeon E5-2400 servers can be deployed in a single Enterprise Chassis where high-density cloud, virtual desktop, or server virtualization is wanted.


Figure 1-4 shows a four socket IBM POWER7 compute node, the p460.

Figure 1-4 IBM Flex System p460 Compute Node

The nodes are complemented with leadership I/O capabilities of up to 16 channels of high-speed I/O lanes per standard wide node bay and 32 lanes per full wide node bay. Various I/O adapters and matching I/O Modules are available.

1.2.4 Expansion nodes


Expansion nodes can be attached to certain standard form factor (half-width) Flex System compute nodes, which allows the expansion of the node's capabilities with locally attached storage or PCIe adapters. The IBM Flex System Storage Expansion Node provides locally attached disk expansion to the x240 and x220; SAS and SATA disks are supported. With the attachment of the IBM Flex System PCIe Expansion Node, an x220 or x240 can have up to four PCIe adapters attached. High-performance GPUs from companies such as Intel and NVIDIA can also be installed within the PCIe Expansion Node.

1.2.5 Storage nodes


The storage capabilities of IBM Flex System give you advanced functionality with storage nodes in your system, and make full use of your existing storage infrastructure through advanced virtualization. Storage is available within the chassis by using the IBM Flex System V7000 Storage Node, which integrates with the Flex System chassis, or externally with the IBM Storwize V7000.

IBM Flex System simplifies storage administration with a single user interface for all your storage. The management console is integrated with the comprehensive management system. These management and storage capabilities allow you to virtualize third-party storage with nondisruptive migration of your current storage infrastructure. You can also make use of intelligent tiering so that you can balance performance and cost for your storage needs. The solution also supports local and remote replication and snapshots for flexible business continuity and disaster recovery capabilities. Flex System can also be connected to various external storage systems.


1.2.6 I/O modules


The range of available modules and switches to support key network protocols allows you to configure IBM Flex System to fit your infrastructure without sacrificing readiness for the future. The networking resources in IBM Flex System are standards-based, flexible, and fully integrated into the system, which gives you no-compromise networking for your solution. Network resources are virtualized and managed by workload. These capabilities are automated and optimized to make your network more reliable and simpler to manage.

IBM Flex System gives you the following key networking capabilities:
- Support for the networking infrastructure that you have today, including Ethernet, FC, FCoE, and InfiniBand
- Industry-leading performance with 1 Gb, 10 Gb, and 40 Gb Ethernet; 8 Gb and 16 Gb Fibre Channel; and QDR and FDR InfiniBand
- Pay-as-you-grow scalability so that you can add ports and bandwidth when needed

Networking in data centers is undergoing a transition from a discrete traditional model to a more flexible, optimized model. The network architecture in IBM Flex System was designed to address the key challenges that customers face today in their data centers. The key focus areas of the network architecture on this platform are unified network management, optimized and automated network virtualization, and simplified network infrastructure. By providing innovation, leadership, and choice in the I/O module portfolio, IBM Flex System is uniquely positioned to provide meaningful solutions that address customer needs. Figure 1-5 shows the IBM Flex System Fabric EN4093R 10Gb Scalable Switch.

Figure 1-5 IBM Flex System Fabric EN4093R 10Gb Scalable Switch

1.3 This book


This book describes the IBM Flex System components in detail. It addresses the technology and features of the chassis, compute nodes, management features, connectivity and storage options. It starts with a description of the systems management features of the product portfolio.


Chapter 2. IBM PureFlex System


IBM PureFlex System is one member of the IBM PureSystems range of expert integrated systems. PureSystems deliver Application as a Service (AaaS), such as the PureApplication System and PureData System, and Infrastructure as a Service (IaaS), which can be enabled with IBM PureFlex System.

This chapter includes the following topics:
- 2.1, Introduction on page 12
- 2.2, Components on page 13
- 2.3, PureFlex solutions on page 15
- 2.4, IBM PureFlex System Express on page 17
- 2.5, IBM PureFlex System Enterprise on page 27
- 2.6, Services for IBM PureFlex System Express and Enterprise on page 36
- 2.7, IBM SmartCloud Entry for Flex system on page 39

Copyright IBM Corp. 2012, 2013. All rights reserved.


2.1 Introduction
IBM PureFlex System provides an integrated computing system that combines servers, enterprise storage, networking, virtualization, and management into a single structure. You can use its built-in expertise to manage and flexibly deploy integrated patterns of virtual and hardware resources through unified management.

PureFlex System includes the following features:
- Configurations that ease the acquisition experience and match your needs
- Optimized to align with targeted workloads and environments
- Designed for cloud with the SmartCloud Entry option
- Choice of architecture, operating system, and virtualization engine
- Designed for simplicity with integrated, single-system management across physical and virtual resources
- Ships as a single integrated entity directly to you
- Includes factory integration and lab services optimization

Revised in the fourth quarter of 2013, IBM PureFlex System now consolidates the three previous offerings (Express, Standard, and Enterprise) into two simplified pre-integrated offerings (Express and Enterprise) that support the latest compute, storage, and networking requirements. Clients can select from either of these offerings, which helps simplify ordering and configuration. As a result, PureFlex System helps cut the cost, time, and complexity of system deployments, which reduces the time to gain real value. The latest enhancements include support for the latest compute nodes, I/O modules, and I/O adapters with the latest release of software, such as IBM SmartCloud Entry and the latest Flex System Manager release.
PureFlex 4Q 2013 includes the following enhancements:
- New PureFlex Express
- New PureFlex Enterprise
- New rack offerings for Express: 25U, 42U (or none)
- New compute nodes: x222, p270, p460
- New networking support: 10 GbE converged
- New SmartCloud Entry V3.2 offering

The IBM PureFlex System includes the following offerings:
- Express: An infrastructure system for small-sized and midsized businesses; the most cost-effective entry point with choice and flexibility to upgrade to higher function. For more information, see 2.4, IBM PureFlex System Express on page 17.
- Enterprise: An infrastructure system that is optimized for scalable cloud deployments with built-in redundancy for highly reliable and resilient operation to support critical applications and cloud services. For more information, see 2.5, IBM PureFlex System Enterprise on page 27.


2.2 Components
A PureFlex System configuration features the following main components:
- A preinstalled and configured IBM Flex System Enterprise Chassis
- Choice of compute nodes with IBM POWER7, POWER7+, or Intel Xeon E5-2400 and E5-2600 processors
- IBM Flex System Manager, preinstalled with management software and licenses for software activation
- IBM Flex System V7000 Storage Node or IBM Storwize V7000 external storage system
- The following hardware components preinstalled in the IBM PureFlex System rack:
  - Express: 25U rack, 42U rack, or no rack configured
  - Enterprise: 42U rack only
- Choice of software:
  - Operating system: IBM AIX, IBM i, Microsoft Windows, Red Hat Enterprise Linux, or SUSE Linux Enterprise Server
  - Virtualization software: IBM PowerVM, KVM, VMware vSphere, or Microsoft Hyper-V
  - SmartCloud Entry 3.2 (for more information, see 2.7, IBM SmartCloud Entry for Flex system on page 39)
- Complete pre-integrated software and hardware
- Optional onsite services to get you up and running and provide skills transfer

The hardware differences between Express and Enterprise are summarized in Table 2-1, which shows the base configuration of the two offerings. Both offerings can be further customized within the IBM configuration tools.
Table 2-1 PureFlex System hardware overview configurations

Component                                   PureFlex Express                           PureFlex Enterprise
PureFlex rack                               Optional: 42U, 25U, or no rack             Required: 42U rack
Flex System Enterprise Chassis              Required: single chassis only              Required: 1, 2, or 3 chassis
Chassis power supplies / fans               2 / 6                                      4 / 8
Flex System Manager                         Required                                   Required
Compute nodes (one minimum),                p260, p270, p460, x220,                    p260, p270, p460, x220,
POWER or x86 based                          x222, x240, x440                           x222, x240, x440
VMware ESXi USB key                         Selectable on x86 nodes                    Selectable on x86 nodes
Top-of-rack switches                        Optional: integrated by client             Integrated by IBM
Integrated 1 GbE switch                     Selectable (redundant)                     Selectable (redundant)
Integrated 10 GbE switch                    Selectable (redundant)                     Selectable (redundant)
Integrated 16 Gb Fibre Channel              Selectable (redundant)                     Selectable (redundant)
Converged 10 GbE switch (FCoE)              Selectable (redundant or non-redundant)    Selectable (redundant)
IBM Storwize V7000 or V7000 Storage Node    Required and selectable                    Required and selectable


Media enclosure                             Selectable: DVD, or DVD and tape           Selectable: DVD, or DVD and tape

PureFlex System software can also be customized in a similar manner to the hardware components of the two offerings. Enterprise has a slightly different composition of software defaults than Express, which are summarized in Table 2-2.
Table 2-2 PureFlex software defaults overview

Software                              Express                                    Enterprise
Storage                               Storwize V7000 or Flex System V7000 base; Real-time Compression (optional)
Flex System Manager (FSM)             FSM Standard,                              FSM Advanced,
                                      upgradeable to Advanced                    selectable to Standard (a)
IBM virtualization                    PowerVM Standard,                          PowerVM Enterprise,
                                      upgradeable to Enterprise                  selectable to Standard
Virtualization, customer installed    VMware, Microsoft Hyper-V, KVM, Red Hat, and SUSE Linux
Operating systems                     AIX Standard (V6 and V7); IBM i (7.1, 6.1); RHEL (6); SUSE (SLES 11);
                                      customer installed: Windows Server, RHEL, SLES
Security                              Power SC Standard (AIX only)
Cloud                                 Tivoli Provisioning Manager (x86 only); SmartCloud Entry (optional)
Software maintenance                  Standard 1 year, upgradeable to three years

a. FSM Advanced is required for Power Systems

2.2.1 Configurators for IBM PureFlex System


The following configurators can be used to configure a PureFlex System:
- x-config: For configurations that comprise only x86 compute nodes, use x-config, which is available at this website:
  http://www.ibm.com/systems/x/hardware/configtools.html
- e-config: For configurations that include both x86 and Power Systems compute nodes, or only Power Systems compute nodes, use e-config, which is available at this website:
  http://www.ibm.com/services/econfig/announce/
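The selection rule above reduces to a single check, sketched here with hypothetical helper and list names (the real tools are the x-config and e-config applications at the URLs above):

```python
# Sketch of the configurator selection rule: e-config whenever any
# Power Systems compute node is present, x-config for x86-only builds.
def pick_configurator(compute_nodes):
    """Return 'e-config' if the list contains any Power node
    (p24L, p260, p270, p460); otherwise return 'x-config'."""
    power_models = {"p24L", "p260", "p270", "p460"}
    if any(node in power_models for node in compute_nodes):
        return "e-config"
    return "x-config"

print(pick_configurator(["x240", "x440"]))  # x-config
print(pick_configurator(["x240", "p460"]))  # e-config
```

A hybrid (Power plus x86) configuration therefore always goes through e-config, which matches the rule as stated.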


2.3 PureFlex solutions


To enhance the integrated offerings that are available from IBM, two new PureFlex based solutions are available. One is focused on IBM i and the other on Virtual Desktop. These solutions, which can be selected within the IBM configurators for ease of ordering, are integrated at the IBM factory before they are delivered to the client. Services are also available to complement these PureFlex Solutions offerings.

2.3.1 PureFlex Solution for IBM i


The IBM PureFlex System Solution for IBM i is a combination of IBM i and an IBM PureFlex System with POWER and x86 processor-based compute nodes that provides a completely integrated business system. By consolidating their IBM i and x86 based applications onto a single platform, the solution offers an attractive alternative for small and midsized clients who want to reduce IT costs and complexity in a mixed environment.

The PureFlex Solution for IBM i is based on the PureFlex Express offering and includes the following features:
- Complete integrated hardware and software solution:
  - Simple, one-button ordering that is fully enabled in the configurator
  - All hardware is pre-configured, integrated, and cabled
  - Software preinstallation of the IBM i operating system, PowerVM, Flex System Manager, and V7000 storage software
- Reliability and redundancy that IBM i clients demand:
  - Redundant switches and I/O
  - Pre-configured dual VIOS servers
  - Internal storage with pre-configured drives, RAID, and mirroring
- Optimally sized to get started quickly:
  - p260 compute node that is configured for IBM i
  - x86 compute node that is configured for x86 workloads
  - Ideal for infrastructure consolidation of multiple workloads
- Management integration across all resources: Flex System Manager simplifies management of all resources within PureFlex
- IBM Lab Services (optional) to accelerate deployment: Skilled PureFlex and IBM i experts perform integration, deployment, and migration services onsite; these services can be delivered by IBM or by a Business Partner


2.3.2 PureFlex Solution for SmartCloud Desktop Infrastructure


The IBM PureFlex System Solution for SmartCloud Desktop Infrastructure (SDI) lowers the cost and complexity of existing desktop environments while securely managing a growing mobile workforce. This integrated infrastructure solution is available for clients who want to deploy desktop virtualization. It is optimized to deliver performance, fast time to value, and security for Virtual Desktop Infrastructure (VDI) environments. The solution uses IBM's breadth of hardware offerings, software, and services to complete successful VDI deployments. It contains predefined configurations that are highlighted in the reference architectures, which include integrated systems management and VDI management nodes.

The PureFlex Solution for SDI provides performance and flexibility for VDI and includes the following features:
- Choice of compute nodes for specific client requirements, including the x222 high-density node
- Windows Storage Server and the Flex System V7000 Storage Node provide block and file storage for non-persistent and persistent VDI deployments
- Flex System Manager and virtual desktop management servers easily and efficiently manage virtual desktops and the VDI infrastructure
- Converged FCoE offers clients superior networking performance
- Windows Server 2012 and VMware View are available
- New reference architectures for Citrix XenDesktop and VMware View are available

For more information about these and other VDI offerings, see the IBM SmartCloud Desktop Infrastructure page at this website:
http://ibm.com/systems/virtualization/desktop-virtualization/


2.4 IBM PureFlex System Express


The tables in this section describe the hardware, software, and services that make up an IBM PureFlex System Express offering. The following items are described:
- 2.4.1, Available Express configurations
- 2.4.2, Chassis on page 20
- 2.4.3, Compute nodes on page 20
- 2.4.4, IBM Flex System Manager on page 21
- 2.4.5, PureFlex Express storage requirements and options on page 21
- 2.4.6, Video, keyboard, mouse option on page 24
- 2.4.7, Rack cabinet on page 25
- 2.4.8, Available software for Power Systems compute nodes on page 25
- 2.4.9, Available software for x86-based compute nodes on page 26

To specify IBM PureFlex System Express in the IBM ordering system, specify the indicator feature code that is listed in Table 2-3 for each machine type.
Table 2-3 Express indicator feature code

AAS feature code   XCC feature code   Description
EFDA               EFDA               IBM PureFlex System Express indicator feature code
EBM1               Not applicable     IBM PureFlex System Express with PureFlex Solution for IBM i indicator feature code

2.4.1 Available Express configurations


The PureFlex Express configuration is available in a single chassis as a traditional Ethernet and Fibre Channel combination, or in converged networking configurations that use Fibre Channel over Ethernet (FCoE) or Internet Small Computer System Interface (iSCSI). The required storage in these configurations can be an IBM Storwize V7000 or an IBM Flex System V7000 Storage Node. Compute nodes can be Power based, x86 based, or a combination of both. The IBM Flex System Manager provides the system management for the PureFlex environment.

Ethernet and Fibre Channel combinations have the following characteristics:
- Power, x86, or hybrid combinations of compute nodes
- 1 Gb or 10 Gb Ethernet adapters, or LAN on Motherboard (LOM, x86 only)
- 1 Gb or 10 Gb Ethernet switches
- 16 Gb (or 8 Gb for x86 only) Fibre Channel adapters
- 16 Gb (or 8 Gb for x86 only) Fibre Channel switches

FCoE configurations have the following characteristics:
- Power, x86, or hybrid combinations of compute nodes
- 10 Gb Converged Network Adapters (CNA) or LOM (x86 only)
- 10 Gb converged network switch or switches

Configurations
There are seven different configurations that are orderable within the PureFlex Express offering. These configurations cover various redundant and non-redundant setups with different types of protocol and storage controller.


Table 2-4 summarizes the PureFlex Express offerings. The seven configurations differ in their networking protocols and storage controller.

Table 2-4 PureFlex Express offerings

Configuration   Ethernet networking   Fibre Channel networking   Storage system
1A              10 GbE                FCoE                       IBM Flex System V7000 Storage Node
2A              10 GbE                FCoE                       IBM Flex System V7000 Storage Node
2B              10 GbE                FCoE                       IBM Storwize V7000
3A              1 GbE                 16 Gb                      IBM Flex System V7000 Storage Node
3B              1 GbE                 16 Gb                      IBM Storwize V7000
4A              10 GbE                16 Gb                      IBM Flex System V7000 Storage Node
4B              10 GbE                16 Gb                      IBM Storwize V7000

Each configuration supports up to 16 switches; the switches that are required are determined by the total number of racks.

The following elements are common to all configurations:
- Chassis: One chassis with two Chassis Management Modules, fans, and power supply units (PSUs)
- Rack: None, 42U, or 25U (plus PDUs)
- TF3 KVM tray: Optional
- Media enclosure: DVD only, or DVD and tape
- V7000 storage options: 24 HDD; 22 HDD + 2 SSD; 20 HDD + 4 SSD; or custom. Storwize expansion enclosures (limited to a single rack in Express; overflow storage rack in Enterprise), nine units per controller. Up to two Storwize V7000 controllers and up to nine IBM Flex System V7000 Storage Nodes.
- V7000 content: VIOS, AIX, IBM i, and SmartCloud Entry on the first controller
- Compute nodes: p260, p270, p460, x220, x222, x240, x440
- ESXi USB key: Optional with x86 nodes
- Port Feature on Demand (FoD) activations: Ports are computed during configuration based on the chassis switch, node type, and the I/O adapter selection

The I/O adapters depend on the networking choice:
- POWER nodes, Ethernet: CN4058 8-port 10Gb Converged Adapter (FCoE configurations); EN2024 4-port 1Gb Ethernet Adapter (1 GbE configurations); EN4054 4-port 10Gb Ethernet Adapter (10 GbE configurations with Fibre Channel)
- POWER nodes, Fibre Channel: Not applicable in FCoE configurations; FC5054 4-port 16Gb FC Adapter in Fibre Channel configurations
- x86 nodes, Ethernet: CN4054 10Gb Virtual Fabric Adapter (FCoE configurations); EN2024 4-port 1Gb Ethernet Adapter or LAN on Motherboard (2-port 10 GbE) in 1 GbE configurations; EN4054 4-port 10Gb Ethernet Adapter or LAN on Motherboard (2-port 10 GbE) in 10 GbE configurations with Fibre Channel
- x86 nodes, Fibre Channel: Not applicable in FCoE configurations; FC5022 16Gb 2-port Fibre Channel adapter, FC3052 8Gb 2-port Fibre Channel adapter, or FC5024D 4-port Fibre Channel adapter in Fibre Channel configurations

Solution support:
- IBM i PureFlex Solution: Available in configurations 2A and 2B; not configurable in the other configurations
- VDI PureFlex Solution: Not configurable


Example configuration
There are seven configurations for PureFlex Express, as described in Table 2-4 on page 18. Configuration 2B features a single chassis with an external IBM Storwize V7000 controller. This solution uses FCoE and includes the CN4093 converged switch module, which provides a Fibre Channel Forwarder. This means that only converged adapters must be installed on the nodes, and the CN4093 breaks out Ethernet and Fibre Channel externally from the chassis.

Figure 2-1 shows the connections, including the Fibre Channel and Ethernet data networks and the management network that is presented to the access points within the PureFlex rack. The green box signifies the chassis and its components with the inter-switch link (ISL) between the two switches. Because this is an Express solution, it is an entry configuration.

Figure 2-1 PureFlex Express with FCoE and external Storwize V7000 (diagram: node bays 1 to 14, two CN4093 switches joined by an ISL, two CMMs, the Storwize V7000, and midplane connections for 1 GbE management, 10 GbE and 40 GbE data, and 8 Gb FC)


2.4.2 Chassis
The IBM Flex System Enterprise Chassis contains all the components of the PureFlex Express configuration, except the IBM Storwize V7000 and any expansion enclosures. The chassis is installed in a 25U or 42U rack. The compute nodes, storage nodes, switch modules, and IBM Flex System Manager are installed in the chassis. When the V7000 Storage Node is chosen as the storage type, a no-rack option is also available. Table 2-5 lists the major components of the Enterprise Chassis, including the switches and options.

Feature codes: The tables in this section do not list all feature codes. Some features are not listed here for brevity.
Table 2-5 Components of the chassis and switches

AAS feature code   XCC feature code   Description
7893-92X           8721-HC1           IBM Flex System Enterprise Chassis
7955-01M           8731-AC1           IBM Flex System Manager
A0TF               3598               IBM Flex System EN2092 1Gb Ethernet Scalable Switch
ESW7               A3J6               IBM Flex System Fabric EN4093R 10Gb Scalable Switch
ESW2               A3HH               IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
EB28               5053               IBM SFP+ SR Transceiver
EB29               3268               IBM SFP RJ45 Transceiver
3286               5075               IBM 8Gb SFP+ Software Optical Transceiver
3771               A2RQ               IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch
5370               5084               Brocade 8Gb SFP+ Software Optical Transceiver
9039               A0TM               Base Chassis Management Module
3592               A0UE               Additional Chassis Management Module

2.4.3 Compute nodes


The PureFlex System Express requires at least one of the following compute nodes:
- IBM Flex System p24L, p260, p270, or p460 Compute Nodes: IBM POWER7 or POWER7+ based (see Table 2-6)
- IBM Flex System x220, x222, x240, or x440 Compute Nodes: x86 based (see Table 2-7 on page 21)
Table 2-6 Power based compute nodes

AAS feature code   MTM        Description
0497               1457-7FL   IBM Flex System p24L Compute Node
0437               7895-22X   IBM Flex System p260 Compute Node
ECSD               7895-23A   IBM Flex System p260 Compute Node (POWER7+, 4 cores only)


ECS3               7895-23X   IBM Flex System p260 Compute Node (POWER7+)
0438               7895-42X   IBM Flex System p460 Compute Node
ECS9               7895-43X   IBM Flex System p460 Compute Node (POWER7+)
ECS4               7954-24X   IBM Flex System p270 Compute Node (POWER7+)

Table 2-7 x86 based compute nodes

AAS feature code   MTM        Description
ECS7               7906-25X   IBM Flex System x220 Compute Node
ECSB               7916-27X   IBM Flex System x222 Compute Node
0457               7863-10X   IBM Flex System x240 Compute Node
ECSB               7917-45X   IBM Flex System x440 Compute Node

2.4.4 IBM Flex System Manager


The IBM Flex System Manager (FSM) is a high-performance, scalable system management appliance. It is based on the IBM Flex System x240 Compute Node. The FSM hardware comes preinstalled with systems management software that you can use to configure, monitor, and manage IBM PureFlex Systems.

The IBM Flex System Manager 7955-01M includes the following features:
- Intel Xeon E5-2650 8C 2.0 GHz 20 MB 1600 MHz 95 W processor
- 32 GB of 1333 MHz RDIMM memory
- Two 200 GB 1.8-inch SATA MLC SSDs in a RAID 1 configuration
- 1 TB 2.5-inch SATA 7.2K RPM hot-swap 6 Gbps HDD
- IBM Open Fabric Manager
- Optional FSM Advanced, which adds the VMControl Enterprise license

2.4.5 PureFlex Express storage requirements and options


The PureFlex Express configuration requires a SAN-attached storage system. The following storage options are available:
- IBM Storwize V7000
- IBM Flex System V7000 Storage Node

The required number of drives depends on the drive size and compute node type. All storage is configured with RAID-5 and a single hot spare that is included in the total number of drives. The following configurations are available:
- Power Systems compute nodes only: 16x 300 GB or 8x 600 GB drives
- Hybrid (Power and x86): 16x 300 GB or 8x 600 GB drives
- Multi-chassis configurations: 24x 300 GB drives

SmartCloud Entry is optional with Express; if it is selected, the following drive configurations are available:
- x86 based nodes only, including SmartCloud Entry: 8x 300 GB or 8x 600 GB drives
- Hybrid (Power and x86) with SmartCloud Entry: 16x 300 GB or 16x 600 GB drives

Solid-state drives (SSDs) are optional. However, if they are added to the configuration, they are normally used for the V7000 Easy Tier function, which improves system performance.
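For a back-of-envelope sense of the usable capacity that these drive counts yield, the RAID-5 arithmetic can be sketched as follows (a hypothetical helper; actual Storwize V7000 usable capacity differs because of drive formatting, extent size, and array-layout overheads):

```python
# Rough usable capacity of a RAID-5 array whose drive count includes
# hot spares (as in the PureFlex default drive configurations).
def raid5_usable_gb(total_drives: int, drive_gb: int, spares: int = 1) -> int:
    """RAID-5 consumes one drive's worth of parity; spares hold no data."""
    data_drives = total_drives - spares - 1  # subtract spare(s) and parity
    return data_drives * drive_gb

print(raid5_usable_gb(16, 300))  # 16x 300 GB drives -> 4200 GB usable
print(raid5_usable_gb(8, 600))   # 8x 600 GB drives  -> 3600 GB usable
```

Note that the 16x 300 GB and 8x 600 GB options have the same raw capacity (4800 GB), but the larger spindle count leaves more usable space after the spare and parity overheads.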

IBM Storwize V7000


The IBM Storwize V7000 that is shown in Figure 2-2 is one of the two storage options that is available in a PureFlex Express configuration. This option is installed in the same rack as the chassis. Other expansion units can be added in the same rack or an adjoining rack, depending on the quantity that is ordered.

Figure 2-2 IBM Storwize V7000

The IBM Storwize V7000 consists of the following components, disk, and software options:
- IBM Storwize V7000 Controller (2076-124)
- SSDs:
  - 200 GB 2.5-inch
  - 400 GB 2.5-inch
- Hard disk drives (HDDs):
  - 300 GB 2.5-inch 10K
  - 300 GB 2.5-inch 15K
  - 600 GB 2.5-inch 10K
  - 800 GB 2.5-inch 10K
  - 900 GB 2.5-inch 10K
  - 1 TB 2.5-inch 7.2K
  - 1.2 TB 2.5-inch 10K

- Expansion unit (2076-224): IBM Storwize V7000 Expansion Enclosure (24 disk slots), up to nine per V7000 controller
- Optional software:
  - IBM Storwize V7000 Remote Mirroring
  - IBM Storwize V7000 External Virtualization
  - IBM Storwize V7000 Real-time Compression

IBM Flex System V7000 Storage Node


IBM Flex System V7000 Storage Node (as shown in Figure 2-3 on page 23) is one of the two storage options that is available in a PureFlex Express configuration. This option uses four compute node bays (2 wide x 2 high) in the Flex chassis. Up to two expansion units can also be in the Flex chassis, each using four compute node bays. External expansion units are also supported.


Figure 2-3 IBM Flex System V7000 Storage Node

The IBM Flex System V7000 Storage Node consists of the following components, disk, and software options:
- IBM Flex System V7000 Control Enclosure (4939-A49)
- SSDs:
  - 200 GB 2.5-inch
  - 400 GB 2.5-inch
  - 800 GB 2.5-inch
- HDDs:
  - 300 GB 2.5-inch 10K
  - 300 GB 2.5-inch 15K
  - 600 GB 2.5-inch 10K
  - 800 GB 2.5-inch 10K
  - 900 GB 2.5-inch 10K
  - 1 TB 2.5-inch 7.2K
  - 1.2 TB 2.5-inch 10K

- Expansion unit (4939-A29): IBM Flex System V7000 Expansion Enclosure (24 disk slots)
- Optional software:
  - IBM Storwize V7000 Remote Mirroring
  - IBM Storwize V7000 External Virtualization
  - IBM Storwize V7000 Real-time Compression

7226 Multi-Media Enclosure


The 7226 system (as shown in Figure 2-4) is a rack-mounted enclosure that can be added to any PureFlex Express configuration. It features two drive bays that can hold one or two tape drives, and up to four slim-design DVD-RAM drives. These drives can be mixed in any combination of any available drive technology or electronic interface in a single 7226 Multi-Media Storage Enclosure.

Figure 2-4 7226 Multi-Media Enclosure


The 7226 enclosure media devices offer support for SAS, USB, and Fibre Channel connectivity, depending on the drive. Support in a PureFlex configuration includes the external USB and Fibre Channel connections. Table 2-8 shows the Multi-Media Enclosure and the available PureFlex options.
Table 2-8 Multi-Media Enclosure and options

Machine type   Model / feature code   Description
7226           Model 1U3              Multi-Media Enclosure
7226-1U3       5763                   DVD Sled with DVD-RAM USB Drive
7226-1U3       8248                   Half-high LTO Ultrium 5 FC Tape Drive
7226-1U3       8348                   Half-high LTO Ultrium 6 FC Tape Drive

2.4.6 Video, keyboard, mouse option


The IBM 7316 Flat Panel Console Kit that is shown in Figure 2-5 is an option for any PureFlex Express configuration. It provides local console support for the FSM and x86 based compute nodes.

Figure 2-5 IBM 7316 Flat Panel Console

The console is a 19-inch, rack-mounted 1U unit that includes a language-specific IBM Travel Keyboard. The console kit is used with the Console Breakout cable that is shown in Figure 2-6. This cable provides serial, video, and two USB ports. The Console Breakout cable can be attached to the keyboard, video, and mouse (KVM) connector on the front panel of x86 based compute nodes, including the FSM.

Figure 2-6 Console Breakout cable


The CMM in the chassis also allows direct connection to nodes via the internal chassis management network, which communicates with the FSP or IMM2 on the node to allow remote out-of-band management.

2.4.7 Rack cabinet


The Express configuration can be shipped with or without a rack. Rack options include 25U and 42U sizes. Table 2-9 lists the major components of the rack and options.
Table 2-9 Components of the rack

Rack      AAS feature code   XCC feature code   Description
42U       7953-94X           93634AX            IBM 42U 1100mm Enterprise V2 Dynamic Rack
42U       EU21               None               PureFlex door
42U       EC01               None               Gray Door
42U       EC03               None               Side Cover Kit (Black)
42U       EC02               None               Rear Door (Black/flat)
25U       7014-S25           93072RX            IBM S2 25U Standard Rack
25U       ERGA               None               PureFlex door
25U                          None               Gray Door
No Rack   4650               None               No Rack specify

2.4.8 Available software for Power Systems compute nodes


In this section, we describe the software that is available for Power Systems compute nodes.

VIOS, AIX and IBM i


VIOS is preinstalled on each Power Systems compute node, with a primary operating system on the primary node of the PureFlex Express configuration. The primary OS can be one of the following options:
- AIX v6.1
- AIX v7.1
- IBM i v7.1


RHEL and SUSE Linux on Power


VIOS is preinstalled on each selected Linux on Power compute node for the virtualization layer. Client operating systems, such as Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES), can be ordered with the PureFlex Express configuration, but they are not preinstalled. The following Linux on Power versions are available:
- RHEL v5U9 (POWER7)
- RHEL v6U4 (POWER7 or POWER7+)
- SLES v11SP2

2.4.9 Available software for x86-based compute nodes


x86-based compute nodes can be ordered with the VMware ESXi 5.1 hypervisor preinstalled to an internal USB key. Operating systems that are ordered with x86 based nodes are not preinstalled. The following operating systems are available for x86 based nodes:
- Microsoft Windows Server 2008 Release 2
- Microsoft Windows Server Standard 2012
- Microsoft Windows Server Datacenter 2012
- Microsoft Windows Server Storage 2012
- RHEL
- SLES


2.5 IBM PureFlex System Enterprise


The tables in this section represent the hardware, software, and services that make up IBM PureFlex System Enterprise. We describe the following items:
- 2.5.1, Enterprise configurations
- 2.5.2, Chassis on page 30
- 2.5.3, Top-of-rack switches on page 30
- 2.5.4, Compute nodes on page 31
- 2.5.5, IBM Flex System Manager on page 31
- 2.5.6, PureFlex Enterprise storage options on page 32
- 2.5.7, Video, keyboard, and mouse option on page 34
- 2.5.8, Rack cabinet on page 35
- 2.5.9, Available software for Power Systems compute node on page 35
- 2.5.10, Available software for x86-based compute nodes on page 35

To specify IBM PureFlex System Enterprise in the IBM ordering system, specify the indicator feature code that is listed in Table 2-10 for each machine type.
Table 2-10 Enterprise indicator feature code

AAS feature code   XCC feature code   Description
EFDC               EFDC               IBM PureFlex System Enterprise Indicator Feature Code
EVD1               EVD1               IBM PureFlex System Enterprise with PureFlex Solution for SmartCloud Desktop Infrastructure

2.5.1 Enterprise configurations


PureFlex Enterprise is available in a single or multiple chassis (up to three chassis per rack) configuration, as a traditional Ethernet and Fibre Channel combination or as a converged solution that uses Converged Network Adapters and FCoE. All chassis in the configuration must use the same connection technology. The required storage in these configurations can be an IBM Storwize V7000 or an IBM Flex System V7000 Storage Node. Compute nodes can be Power or x86 based, or a hybrid combination that includes both. The IBM Flex System Manager provides the system management.

Ethernet and Fibre Channel combinations have the following characteristics:
- Power, x86, or hybrid combinations of compute nodes
- 1Gb or 10Gb Ethernet adapters or LAN on Motherboard (LOM, x86 only)
- 10Gb Ethernet switches
- 16Gb (or 8Gb for x86 only) Fibre Channel adapters
- 16Gb (or 8Gb for x86 only) Fibre Channel switches

CNA configurations have the following characteristics:
- Power, x86, or hybrid combinations of compute nodes
- 10Gb Converged Network Adapters (CNA) or LOM (x86 only)
- 10Gb Converged Network switch or switches
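The chassis rules above (one to three chassis per rack, and a single connection technology across all chassis in a configuration) can be expressed as a simple validation. The sketch below is illustrative only; the function and the technology labels are our own, not part of any IBM configuration tool.

```python
def validate_enterprise_config(chassis_techs):
    """Check PureFlex Enterprise chassis rules.

    chassis_techs: one entry per chassis, each either
    "ethernet_fc" (traditional Ethernet plus Fibre Channel) or
    "fcoe" (converged). These labels are hypothetical.
    """
    if not 1 <= len(chassis_techs) <= 3:
        return False  # one to three chassis per rack
    # All chassis must use the same connection technology
    return len(set(chassis_techs)) == 1

print(validate_enterprise_config(["fcoe", "fcoe"]))         # True
print(validate_enterprise_config(["fcoe", "ethernet_fc"]))  # False
```

A real configuration check would also validate adapter and switch choices per node type; this sketch covers only the chassis-level rules stated here.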

Configurations
There are eight different orderable configurations within the PureFlex Enterprise offering. These offerings cover various redundant and non-redundant configurations, along with different protocol types and storage controllers.


Table 2-11 summarizes the PureFlex Enterprise offerings that are fully configurable within the IBM configuration tools.
Table 2-11 PureFlex Enterprise offerings

- Ethernet networking: 10 GbE for all configurations (5A, 5B, 6A, 6B, 7A, 7B, 8A, 8B)
- Fibre Channel networking: FCoE (5A, 5B, 6A, 6B); 16 Gb (7A, 7B, 8A, 8B)
- Number of switches, up to 18 maximum (a): 5A, 5B: 1x: 2/8; 2x: 10; 3x: 12. 6A, 6B, 7A, 7B: 1x: 4/10; 2x: 14; 3x: 18
- Storage (V7000 Storage Node or Storwize V7000): 5A, 6A, 7A, 8A: V7000 Storage Node; 5B, 6B, 7B, 8B: Storwize V7000
- Chassis: 1, 2, or 3x chassis with two Chassis Management Modules, fans, and PSUs
- Rack: 42U rack mandatory
- TF3 KVM tray: Optional
- Media enclosure: DVD only, or DVD and tape
- V7000 options: Storage options (24 HDD; 22 HDD + 2 SSD; 20 HDD + 4 SSD; or custom); Storwize expansion (limited to a single rack in Express, overflow storage rack in Enterprise): nine units per controller; up to two Storwize V7000 controllers; up to nine IBM Flex System V7000 Storage Nodes
- V7000 content: VIOS, AIX, IBM i, and SCE on first controller
- Nodes: p260, p270, p460, x220, x222, x240, x440
- POWER node Ethernet I/O adapters: CN4058 8-port 10Gb Converged Adapter (FCoE configurations); EN4054 4-port 10Gb Ethernet Adapter (16 Gb configurations)
- POWER node Fibre Channel I/O adapters: Not applicable (FCoE configurations); FC5054 4-port 16Gb FC Adapter (16 Gb configurations)
- x86 node Ethernet I/O adapters: CN4054 10Gb Virtual Fabric Adapter (FCoE configurations); EN4054 4-port 10Gb Ethernet Adapter or LAN on Motherboard (2-port 10GbE) (16 Gb configurations)
- x86 node Fibre Channel I/O adapters: Not applicable (FCoE configurations); FC5022 2-port 16Gb FC Adapter, FC3052 2-port 8Gb FC Adapter, or FC5024D 4-port 16Gb FC Adapter (16 Gb configurations)
- ESXi USB key: Optional; for x86 compute nodes only
- Port FoD activations: Ports are computed during configuration based on the chassis switch, node type, and the I/O adapter selection
- IBM i PureFlex Solution: Supported on specific configurations only; not configurable on the others
- VDI PureFlex Solution: Supported on specific configurations only; not configurable on the others

a. 1x = 1 chassis, 2x = 2 chassis, 3x = 3 chassis
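The table notes that Feature on Demand (FoD) port activations are computed during configuration from the chassis switch, node type, and I/O adapter selection. As a rough illustration only (this is our simplification, not the IBM configurator's actual algorithm), the internal switch ports a chassis consumes scale with the node count times the adapter ports presented to each switch:

```python
def internal_ports_needed(nodes, ports_per_adapter_per_switch=1):
    """Rough estimate of internal switch ports consumed per switch.

    Simplified model: each node's adapter presents a fixed number
    of ports to each switch. The real configurator also accounts
    for switch partitioning and port-upgrade (FoD) tiers.
    """
    return nodes * ports_per_adapter_per_switch

# A full chassis of 14 nodes, each adapter presenting one port per switch:
print(internal_ports_needed(14, 1))  # 14 internal ports per switch
```

The point of the sketch is only that port activations grow with node and adapter counts; the actual FoD entitlements must come from the IBM configuration tools.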


Example configuration
There are eight different configuration starting points for PureFlex Enterprise, as described in Table 2-11 on page 28. These configurations can be enhanced further with multi-chassis and other storage configurations. Figure 2-7 shows an example of the wiring for base configuration 6B, which is an Enterprise PureFlex configuration that uses an external Storwize V7000 enclosure and CN4093 10Gb Converged Scalable Switch converged infrastructure switches. Also included are external SAN B24 switches and Top-of-Rack (TOR) G8264 Ethernet switches. The TOR switches extend the data networks so that other chassis can be configured into this solution (not shown).
Figure 2-7 PureFlex Enterprise with External V7000 and FCoE


A management network is also included in this configuration, which is composed of 1GbE G8062 network switches. The access points within the PureFlex chassis provide connections from the client's network into the internal networking infrastructure of the PureFlex system and into the management network.

2.5.2 Chassis
Table 2-12 lists the major components of the IBM Flex System Enterprise Chassis, including the switches. Feature codes: The tables in this section do not list all feature codes. Some features are not listed here for brevity.
Table 2-12 Components of the chassis and switches AAS feature code 7893-92X 7955-01M A0TF ESW2 ESW7 EB28 EB29 3286 3771 5370 9039 3592 XCC feature code 8721-HC1 8731-AC1 3598 A3HH A3J6 5053 3268 5075 A2RQ 5084 A0TM A0UE Description IBM Flex System Enterprise Chassis IBM Flex System Manager IBM Flex System EN2092 1Gb Ethernet Scalable Switch IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch IBM Flex System Fabric EN4093R 10Gb Scalable Switch IBM SFP+ SR Transceiver IBM SFP RJ45 Transceiver IBM 8Gb SFP+ Software Optical Transceiver IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch Brocade 8Gb SFP+ Software Optical Transceiver Base Chassis Management Module Other Chassis Management Module

2.5.3 Top-of-rack switches


The PureFlex Enterprise configuration can consist of a complement of six TOR switches: two IBM System Networking RackSwitch G8052, two IBM System Networking RackSwitch G8264, and two IBM System Storage SAN24B-4 Express switches. These switches are required in a multi-chassis configuration and are optional in a single chassis configuration. The TOR switch infrastructure is in place for aggregation purposes, which consolidates the integration point of a multi-chassis system to core networks.


Table 2-13 lists the switch components.


Table 2-13 Components of the Top-of-Rack Ethernet switches

AAS feature code   XCC feature code   Description
1455-48E           7309-G52           IBM System Networking RackSwitch G8052R
1455-64C           7309-HC3           IBM System Networking RackSwitch G8264R
2498-B24           2498-24E           IBM System Storage SAN24B-4 Express

2.5.4 Compute nodes


The PureFlex System Enterprise requires one or more of the following compute nodes:
- IBM Flex System p24L, p260, p270, or p460 Compute Nodes, IBM POWER7 or POWER7+ based (see Table 2-14)
- IBM Flex System x220, x222, x240, or x440 Compute Nodes, x86 based (see Table 2-15)

Table 2-14 Power Systems compute nodes

AAS feature code   MTM        Description
0497               1457-7FL   IBM Flex System p24L Compute Node
0437               7895-22X   IBM Flex System p260 Compute Node
ECSD               7895-23A   IBM Flex System p260 Compute Node (POWER7+ 4 core only)
ECS3               7895-23X   IBM Flex System p260 Compute Node (POWER7+)
0438               7895-42X   IBM Flex System p460 Compute Node
ECS9               7895-43X   IBM Flex System p460 Compute Node (POWER7+)
ECS4               7954-24X   IBM Flex System p270 Compute Node (POWER7+)

Table 2-15 x86 based compute nodes

AAS feature code   MTM        Description
ECS7               7906-25X   IBM Flex System x220 Compute Node
ECSB               7916-27X   IBM Flex System x222 Compute Node
0457               7863-10X   IBM Flex System x240 Compute Node
ECS8               7917-45X   IBM Flex System x440 Compute Node

2.5.5 IBM Flex System Manager


The IBM Flex System Manager (FSM) is a high-performance, scalable system management appliance. It is based on the IBM Flex System x240 Compute Node. The FSM hardware comes preinstalled with Systems Management software that you can use to configure, monitor, and manage IBM PureFlex Systems. The FSM is based on the following components:
- Intel Xeon E5-2650 8C 2.0GHz 20MB 1600MHz 95W processor
- 32GB of 1333 MHz RDIMM memory
- Two 200GB 1.8-inch SATA MLC SSDs in a RAID 1 configuration
- 1TB 2.5-inch SATA 7.2K RPM hot-swap 6 Gbps HDD
- IBM Open Fabric Manager
- Optional FSM Advanced, which adds the VMControl Enterprise license

2.5.6 PureFlex Enterprise storage options


Any PureFlex Enterprise configuration requires a SAN-attached storage system. Two storage options are available: an integrated storage node or an external Storwize unit:
- IBM Storwize V7000
- IBM Flex System V7000 Storage Node

The required number of drives depends on the drive size and compute node type. All storage is configured with RAID 5 and a single hot spare that is included in the total number of drives. The following configurations are available:
- Power based nodes only: 16 x 300GB or 8 x 600GB drives
- Hybrid (both Power and x86): 16 x 300GB or 8 x 600GB drives
- x86 based nodes only, including SmartCloud Entry: 8 x 300GB or 8 x 600GB drives
- Hybrid (both Power and x86) with SmartCloud Entry: 16 x 300GB or 600GB drives
- Multi-chassis configurations: 24 x 300GB drives

SSDs are optional; however, if they are added to the configuration, they are normally used for the V7000 Easy Tier function, improving system performance.
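The drive counts above can be sanity-checked with a little arithmetic. The following sketch (our own helper, not an IBM tool) computes the usable capacity of a RAID 5 array with a single hot spare, as used in these configurations: the spare holds no data, and RAID 5 consumes one drive's worth of capacity for parity.

```python
def usable_gb(total_drives, drive_gb, hot_spares=1):
    """Usable capacity of a RAID 5 array with hot spares.

    Hot spares hold no data, and RAID 5 stores one drive's worth
    of parity, so usable space is (N - spares - 1) * drive size.
    """
    array_drives = total_drives - hot_spares
    if array_drives < 3:  # RAID 5 needs at least 3 member drives
        raise ValueError("not enough drives for RAID 5")
    return (array_drives - 1) * drive_gb

# Power-only configuration: 16 x 300GB drives, one hot spare
print(usable_gb(16, 300))  # 14 data drives -> 4200 GB usable

# x86-only configuration: 8 x 300GB drives, one hot spare
print(usable_gb(8, 300))   # 6 data drives -> 1800 GB usable
```

The same formula applies to the multi-chassis case (24 x 300GB drives), assuming the drives form a single array with one spare.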

IBM Storwize V7000


The IBM Storwize V7000 is one of the two storage options that is available in a PureFlex Enterprise configuration. This option can be rack mounted in the same rack as the Enterprise chassis. Other expansion units can be added in the same rack or a second rack, depending on the quantity ordered. The IBM Storwize V7000 consists of the following components, disk, and software options:
- IBM Storwize V7000 Controller (2076-124)
- SSDs: 200 GB 2.5-inch, 400 GB 2.5-inch
- HDDs: 300 GB 2.5-inch 10K, 300 GB 2.5-inch 15K, 600 GB 2.5-inch 10K, 800 GB 2.5-inch 10K, 900 GB 2.5-inch 10K, 1 TB 2.5-inch 7.2K, 1.2 TB 2.5-inch 10K
- Expansion Unit (2076-224), up to nine per V7000 Controller: IBM Storwize V7000 Expansion Enclosure (24 disk slots)
- Optional software: IBM Storwize V7000 Remote Mirroring, IBM Storwize V7000 External Virtualization, IBM Storwize V7000 Real-time Compression

IBM Flex System V7000 Storage Node


IBM Flex System V7000 Storage Node is one of the two storage options that is available in a PureFlex Enterprise configuration. This option uses four compute node bays (2 wide x 2 high) in the Flex chassis. Up to two expansion units also can be in the Flex chassis, each using four compute node bays. External expansion units are also supported. The IBM Flex System V7000 Storage Node consists of the following components, disk, and software options:
- SSDs: 200 GB 2.5-inch, 400 GB 2.5-inch, 800 GB 2.5-inch
- HDDs: 300 GB 2.5-inch 10K, 300 GB 2.5-inch 15K, 600 GB 2.5-inch 10K, 800 GB 2.5-inch 10K, 900 GB 2.5-inch 10K, 1 TB 2.5-inch 7.2K, 1.2 TB 2.5-inch 10K
- Expansion Unit (4939-A29): IBM Storwize V7000 Expansion Enclosure (24 disk slots)
- Optional software: IBM Storwize V7000 Remote Mirroring, IBM Storwize V7000 External Virtualization, IBM Storwize V7000 Real-time Compression

7226 Multi-Media Enclosure


The 7226 system that is shown in Figure 2-8 is a rack-mounted enclosure that can be added to any PureFlex Enterprise configuration and features two drive bays that can hold one or two tape drives, one or two RDX removable disk drives, and up to four slim-design DVD-RAM drives. These drives can be mixed in any combination of any available drive technology or electronic interface in a single 7226 Multimedia Storage Enclosure.

Figure 2-8 7226 Multi-Media Enclosure


The 7226 enclosure media devices offer support for SAS, USB, and Fibre Channel connectivity, depending on the drive. Support in a PureFlex configuration includes the external USB and Fibre Channel connections. Table 2-16 shows the Multi-Media Enclosure and available PureFlex options.
Table 2-16 Multi-Media Enclosure and options

Machine type   Feature Code   Description
7226           Model 1U3      Multi-Media Enclosure
7226-1U3       5763           DVD Sled with DVD-RAM USB Drive
7226-1U3       8248           Half-high LTO Ultrium 5 FC Tape Drive
7226-1U3       8348           Half-high LTO Ultrium 6 FC Tape Drive

2.5.7 Video, keyboard, and mouse option


The IBM 7316 Flat Panel Console Kit that is shown in Figure 2-9 is an option to any PureFlex Enterprise configuration that can provide local console support for the FSM and x86 based compute nodes.

Figure 2-9 IBM 7316 Flat Panel Console

The console is a 19-inch, rack-mounted 1U unit that includes a language-specific IBM Travel Keyboard. The console kit is used with the Console Breakout cable that is shown in Figure 2-10. This cable provides serial, video, and two USB ports. The Console Breakout cable can be attached to the KVM connector on the front panel of x86 based compute nodes, including the FSM.

Figure 2-10 Console Breakout cable


The CMM in the chassis also allows direct connection to nodes via the internal chassis management network that communicates to the FSP or IMM2 on the node, which allows remote out-of-band management.

2.5.8 Rack cabinet


The Enterprise configuration includes an IBM PureFlex System 42U Rack. Table 2-17 lists the major components of the rack and options.
Table 2-17 Components of the rack

AAS feature code   XCC feature code   Description
7953-94X           93634AX            IBM 42U 1100mm Enterprise V2 Dynamic Rack
EU21               None               PureFlex Door
EC01               None               Gray Door (selectable in place of EU21)
EC03               None               Side Cover Kit (Black)
EC02               None               Rear Door (Black/flat)

2.5.9 Available software for Power Systems compute node


In this section, we describe the software that is available for the Power Systems compute node.

Virtual I/O Server, AIX and IBM i


VIOS is preinstalled on each Power Systems compute node, with a primary operating system on the primary node of the PureFlex Enterprise configuration. The primary OS can be one of the following options:
- AIX v6.1
- AIX v7.1
- IBM i v7.1

RHEL and SUSE Linux on Power


VIOS is preinstalled on each Linux on Power compute node for the virtualization layer. Client operating systems (such as RHEL and SLES) can be ordered with the PureFlex Enterprise configuration, but they are not preinstalled. The following Linux on Power versions are available:
- RHEL v5U9 (POWER7)
- RHEL v6U4 (POWER7 or POWER7+)
- SLES v11SP2

2.5.10 Available software for x86-based compute nodes


x86 based compute nodes can be ordered with the VMware ESXi 5.1 hypervisor preinstalled to an internal USB key. Operating systems that are ordered with x86 based nodes are not preinstalled. The following operating systems are available for x86 based nodes:
- Microsoft Windows Server 2008 Release 2
- Microsoft Windows Server Standard 2012
- Microsoft Windows Server Datacenter 2012
- Microsoft Windows Server Storage 2012
- RHEL
- SLES

2.6 Services for IBM PureFlex System Express and Enterprise


Services are recommended, but can be decoupled from a PureFlex configuration. The following offerings are available and can be added to either PureFlex offering:
- PureFlex Introduction: This three-day offering provides IBM Flex System Manager and storage functions, but does not include external integration, virtualization, or cloud. It covers the setup of one node.
- PureFlex Virtualized: This five-day Standard services offering includes all tasks of the PureFlex Introduction and expands the scope to include virtualization, another FC switch, and up to four nodes in total.
- PureFlex Enterprise: This offering provides advanced virtualization (including VMware clustering), but does not include external integration or cloud. It covers up to four nodes in total.
- PureFlex Cloud: This pre-packaged offering adds, in addition to all the tasks that are included in the PureFlex Virtualized offering, the configuration of the SmartCloud Entry environment, basic network integration, and implementation of up to 13 nodes in the first chassis.
- PureFlex Extra Chassis Add-on: This services offering extends the implementation to another chassis (up to 14 nodes) and up to two virtualization engines (for example, VMware ESXi, KVM, or PowerVM VIOS).

As shown in Table 2-18, the four main offerings are cumulative; for example, Enterprise takes seven days in total and includes the scope of the Virtualized and Introduction services offerings. PureFlex Extra Chassis is priced per chassis.
Table 2-18 PureFlex Service offerings

- One node FSM configuration; discovery and inventory review; internal storage configuration; basic network integration using pre-configured switches (factory default); no external SAN integration; no FCoE changes; no virtualization; no cloud; skills transfer:
  PureFlex Intro (3 days): Included. Virtualized (5 days): Included. Enterprise (7 days): Included. Cloud (10 days): Included. Extra Chassis Add-on (5 days): No add-on.
- Basic virtualization (VMware, KVM, and VMControl); no external SAN integration; no cloud; up to four nodes:
  Intro: Not included. Virtualized: Included. Enterprise: Included. Cloud: Included. Extra Chassis Add-on: Configure up to 14 nodes within one chassis; up to two virtualization engines (ESXi, KVM, or PowerVM).
- Advanced virtualization; server pools or VMware cluster configured (VMware or VMControl); no external SAN integration; no FCoE configuration changes; no cloud:
  Intro: Not included. Virtualized: Not included. Enterprise: Included. Cloud: Included. Extra Chassis Add-on: Configure up to 14 nodes within one chassis; up to two virtualization engines (ESXi, KVM, or PowerVM).
- Configure SmartCloud Entry; basic external network integration; no FCoE configuration changes; no external SAN integration; first chassis is configured with 13 nodes:
  Intro: Not included. Virtualized: Not included. Enterprise: Not included. Cloud: Included. Extra Chassis Add-on: Configure up to 14 nodes within one chassis; up to two virtualization engines (ESXi, KVM, or PowerVM).
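Because the main offerings are cumulative, total delivery time is easy to estimate. The small helper below (our own, not an IBM estimating tool) encodes the day counts from Table 2-18 and adds five days per extra chassis.

```python
# Total days for each cumulative offering, from Table 2-18
OFFERING_DAYS = {"intro": 3, "virtualized": 5, "enterprise": 7, "cloud": 10}
EXTRA_CHASSIS_DAYS = 5  # PureFlex Extra Chassis Add-on, per chassis

def delivery_days(offering, extra_chassis=0):
    """Estimated services duration for an offering plus chassis add-ons."""
    return OFFERING_DAYS[offering] + extra_chassis * EXTRA_CHASSIS_DAYS

print(delivery_days("enterprise"))              # 7
print(delivery_days("cloud", extra_chassis=2))  # 10 + 2*5 = 20
```

Actual statements of work may differ; the numbers here are just the nominal day counts from the table.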

In addition to the offerings that are listed in Table 2-18 on page 36, two other services offerings are now available for PureFlex System and PureFlex IBM i Solution: PureFlex FCoE Customization Service and PureFlex Services for IBM i.

2.6.1 PureFlex FCoE Customization Service


This new one-day services customization provides the following features:
- Design a new FCoE solution to meet customer requirements
- Change the FCoE VLAN from the default
- Modify internal FCoE ports
- Change FCoE modes and zoning

The prerequisite for the FCoE customization service is the PureFlex Intro, Virtualized, or Cloud Service, and that FCoE is on the system. The service is limited to the two pre-configured switches in the single chassis; no external SAN configurations, other chassis, or switches are included.


2.6.2 PureFlex Services for IBM i


This package offers five days of support for the IBM i PureFlex Solution. IBM performs the following PureFlex Virtualized services for a single Power node:
- Provisioning of a virtual server through VMControl basic provisioning for the Power node: prepare, capture, and deploy an IBM i virtual server.
- Perform system health and monitoring with basic automation plans.
- Review security and role-based access.

Services on a single x86 node:
- Verify the VMware ESXi installation, create a virtual machine (VM), and install a Windows Server operating system on the VM.
- Install and configure vCenter on the VM.

This service includes the following prerequisites:
- One p460 Power compute node
- Two IBM Flex System Fabric EN2092 10Gb Scalable Ethernet switch modules
- Two IBM Flex System 16Gb FC5022 chassis SAN scalable switches
- One IBM Flex System V7000 Storage Node

This service does not include the following features:
- External SAN integration
- FCoE configuration changes
- Other chassis or switches

2.6.3 Software and hardware maintenance


The following service and support offerings can be selected to enhance the standard support that is available with IBM PureFlex System:
- Software maintenance: 1-year 9x5 (9 hours per day, 5 days per week)
- Hardware maintenance: 3-year 9x5 Next Business Day service
- 24x7 Warranty Service Upgrade
- Maintenance and Technical Support (MTS): three years with one microcode analysis per year


2.7 IBM SmartCloud Entry for Flex system


IBM SmartCloud Entry is an easy to deploy, simple to use software offering that features a self-service portal for workload provisioning, virtualized image management, and monitoring. It is an innovative, cost-effective approach that also includes security, automation, basic metering, and integrated platform management. IBM SmartCloud Entry is the first tier in a three-tier family of cloud offerings that is based on the Common Cloud Stack (CCS) foundation. The following offerings form the CCS:
- SmartCloud Entry
- SmartCloud Provisioning
- SmartCloud Orchestrator

IBM SmartCloud Entry is an ideal choice to get started with a private cloud solution that can scale and expand the number of cloud users and workloads. More importantly, SmartCloud Entry delivers a single, consistent cloud experience that spans multiple hardware platforms and virtualization technologies, which makes it a unique solution for enterprises with heterogeneous IT infrastructure and a diverse range of applications. SmartCloud Entry provides clients with comprehensive IaaS capabilities. For enterprise clients who are seeking advanced cloud benefits, such as deployment of multi-workload patterns and Platform as a Service (PaaS) capabilities, IBM offers various advanced cloud solutions. Because IBM's cloud portfolio is built on a common foundation, clients can purchase SmartCloud Entry initially and migrate to an advanced cloud solution in the future. This standardized architecture facilitates client migrations to the advanced SmartCloud portfolio solutions. SmartCloud Entry offers simplified cloud administration with an intuitive interface that lowers administrative overhead and improves operations productivity with an easy self-service user interface. It is open and extensible for easy customization to help tailor it to unique business environments. The ability to standardize virtual machines and images reduces management costs and accelerates responsiveness to changing business needs.
Extensive virtualization engine support includes the following hypervisors:
- PowerVM
- VMware vSphere 5
- KVM
- Microsoft Hyper-V

The latest release of PureFlex (announced October 2013) allows the selection of SmartCloud Entry 3.2, which now supports Microsoft Hyper-V and Linux KVM by using OpenStack. The product also allows the use of OpenStack APIs. Also included is the IBM Image Construction and Composition Tool (ICCT). ICCT on SmartCloud is a web-based application that simplifies and automates virtual machine image creation. ICCT is provided as an image that can be provisioned on SmartCloud. You can simplify the creation and management of system images with the following capabilities:
- Create golden master images and software appliances by using corporate-standard operating systems.
- Convert images from physical systems or between various x86 hypervisors.
- Reliably track images to ensure compliance and minimize security risks.
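Because SmartCloud Entry 3.2 exposes OpenStack APIs, standard OpenStack tooling can drive it. As a small illustration, the sketch below builds a Keystone v2.0-style token request body (the standard OpenStack identity call); the user, password, and tenant values are placeholders, and the exact API versions that SmartCloud Entry supports should be confirmed against its documentation.

```python
import json

def keystone_token_request(username, password, tenant):
    """Build the body of a Keystone v2.0 POST /v2.0/tokens request."""
    return {
        "auth": {
            "tenantName": tenant,
            "passwordCredentials": {
                "username": username,
                "password": password,
            },
        }
    }

# Placeholder credentials; a real client would POST this JSON to the
# identity endpoint and reuse the returned token for later API calls.
body = keystone_token_request("admin", "secret", "demo")
print(json.dumps(body, indent=2))
```

The same request shape is what the OpenStack command-line clients send under the covers, which is why they can be pointed at an OpenStack-compatible endpoint.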

- Optimize resources, which reduces the number of virtualized images and the storage that is required for them.

Reduce time to value for new workloads with the following simple VM management options:
- Deploy application images across compute and storage resources.
- Offer users self-service for improved responsiveness.
- Enable security through VM isolation and project-level user access controls.
- Simplify deployment; there is no need to know all the details of the infrastructure.
- Protect your investment with support for existing virtualized environments.
- Optimize performance on IBM systems with dynamic scaling, expansive capacity, and continuous operation.

Improve efficiency with a private cloud that includes the following capabilities:
- Delegate provisioning to authorized users to improve productivity.
- Implement pay-per-use with built-in workload metering.
- Standardize deployment to improve compliance and reduce errors with policies and templates.
- Simplify management of projects, billing, approvals, and metering with an intuitive user interface.
- Ease maintenance and problem diagnosis with integrated views of both physical and virtual resources.

For more information about IBM SmartCloud Entry on Flex System, see this website:
http://www.ibm.com/systems/flex/smartcloud/bto/entry/


Chapter 3.

Systems management
IBM Flex System Manager (the management component of IBM Flex System Enterprise Chassis) and compute nodes are designed to help you get the most out of your IBM Flex System installation. They also allow you to automate repetitive tasks. These management interfaces can significantly reduce the number of manual navigational steps for typical management tasks. They offer simplified system setup procedures by using wizards and built-in expertise to consolidate monitoring for physical and virtual resources. Note that from August 2013, Power Systems nodes that are installed within a Flex System chassis can alternatively be managed by the Hardware Management Console (HMC) and Integrated Virtualization Manager (IVM). This allows clients with existing rack-based Power Systems servers to use a single management tool to manage their rack and Flex System nodes. However, systems management that is implemented in this way means none of the cross-element management functions that are available with FSM (such as management of x86 nodes, storage, networking, system pooling, or advanced virtualization functions) are available. For the most complete and sophisticated broad management of a Flex System environment, the FSM is recommended. This chapter includes the following topics:
- 3.1, Management network on page 42
- 3.2, Chassis Management Module on page 43
- 3.3, Security on page 46
- 3.4, Compute node management on page 47
- 3.5, IBM Flex System Manager on page 50


3.1 Management network


In an IBM Flex System Enterprise Chassis, you can configure separate management and data networks. The management network is a private and secure Gigabit Ethernet network. It is used to complete management-related functions throughout the chassis, including management tasks that are related to the compute nodes, switches, storage, and the chassis. The management network is shown in Figure 3-1 as the blue line. It connects the Chassis Management Module (CMM) to the compute nodes (and storage node, which is not shown), the switches in the I/O bays, and the Flex System Manager (FSM). The FSM connection to the management network is through a special Broadcom 5718-based management network adapter (Eth0). The management networks in multiple chassis can be connected through the external ports of the CMMs in each chassis through a GbE top-of-rack switch. The yellow line in Figure 3-1 shows the production data network. The FSM also connects to the production network (Eth1) so that it can access the Internet for product updates and other related information.

Figure 3-1 Separate management and production data networks


Tip: The management node console can be connected to the data network for convenient access. One of the key functions that the data network supports is the discovery of operating systems on the various network endpoints. Discovery of operating systems by the FSM is required to support software updates on an endpoint, such as a compute node. The FSM Checking and Updating Compute Nodes wizard assists you in discovering operating systems as part of the initial setup.

3.2 Chassis Management Module


The CMM provides single-chassis management and is used to communicate with the management controller in each compute node. It provides system monitoring, event recording, and alerts. It also manages the chassis, its devices, and the compute nodes. The chassis supports up to two Chassis Management Modules. If one CMM fails, the second CMM can detect its inactivity, self-activate, and take control of the system without any disruption.

The CMM is central to the management of the chassis, and is required in the Enterprise Chassis. The following section describes the usage models of the CMM and its features. For more information, see 4.9, Chassis Management Module on page 101.

3.2.1 Overview
The CMM is a hot-swap module that provides basic system management functions for all devices that are installed in the Enterprise Chassis. An Enterprise Chassis comes with at least one CMM and supports CMM redundancy. The CMM is shown in Figure 3-2.

Figure 3-2 Chassis Management Module


Through an embedded firmware stack, the CMM implements functions to monitor, control, and provide external user interfaces to manage all chassis resources. You can use the CMM to perform the following functions, among others:
- Define login IDs and passwords.
- Configure security settings such as data encryption and user account security. The CMM contains an LDAP client that can be configured to provide user authentication through one or more LDAP servers. The LDAP server (or servers) to be used for authentication can be discovered dynamically or manually preconfigured.
- Select recipients for alert notification of specific events.
- Monitor the status of the compute nodes and other components.
- Find chassis component information.
- Discover other chassis in the network and enable access to them.
- Control the chassis, compute nodes, and other components.
- Access the I/O modules to configure them.
- Change the startup sequence in a compute node.
- Set the date and time.
- Use a remote console for the compute nodes.
- Enable multi-chassis monitoring.
- Set power policies and view power consumption history for chassis components.

3.2.2 Interfaces
The CMM supports a web-based graphical user interface that provides a way to perform chassis management functions within a supported web browser. You can also perform management functions through the CMM command-line interface (CLI). Both the web-based and CLI interfaces are accessible through the single RJ45 Ethernet connector on the CMM, or from any system that is connected to the same network.

The CMM has the following default IPv4 settings:
- IP address: 192.168.70.100
- Subnet: 255.255.255.0
- User ID: USERID (all capital letters)
- Password: PASSW0RD (all capital letters, with a zero instead of the letter O)

The CMM does not have a fixed static IPv6 IP address by default. Initial access to the CMM in an IPv6 environment can be done by using the IPv4 IP address or the IPv6 link-local address. The IPv6 link-local address is automatically generated based on the MAC address of the CMM.

By default, the CMM is configured to respond to DHCP first before it uses its static IPv4 address. If you do not want this operation to take place, connect locally to the CMM (for example, by using a notebook) and change the default IP settings.

The web-based GUI brings together all the functionality that is needed to manage the chassis elements in an easy-to-use fashion consistently across all System x IMM2-based platforms.
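Because the CMM's default IPv6 link-local address is derived from its MAC address, it can be predicted before first contact. The sketch below derives a link-local address by using the standard modified EUI-64 method, which is the usual way such addresses are generated; the MAC value in the usage note is made up for illustration.

```python
# Illustrative sketch: derive an IPv6 link-local address from a 48-bit MAC
# address by using the modified EUI-64 method (flip the universal/local bit,
# insert FF:FE in the middle, prefix with fe80::).

def ipv6_link_local(mac: str) -> str:
    """Return the fe80:: link-local address for a MAC such as 00:25:03:01:02:03."""
    octets = [int(b, 16) for b in mac.replace("-", ":").split(":")]
    if len(octets) != 6:
        raise ValueError("expected a 48-bit MAC address")
    octets[0] ^= 0x02                       # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Pack the eight bytes into four 16-bit groups, dropping leading zeros.
    groups = ["%x" % ((eui64[i] << 8) | eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)
```

For example, `ipv6_link_local("00:25:03:01:02:03")` yields `fe80::225:3ff:fe01:203`. The address the CMM actually reports can be confirmed from its web interface or CLI.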


Figure 3-3 shows the Chassis Management Module login window.

Figure 3-3 CMM login window

Figure 3-4 shows an example of the Chassis Management Module front page after login.

Figure 3-4 Initial view of CMM after login


3.3 Security
The focus of IBM on smarter computing is evident in the improved security measures that are implemented in the IBM Flex System Enterprise Chassis. Today's world of computing demands tighter security standards and native integration with computing platforms. For example, the push towards virtualization increased the need for more security as more mission-critical workloads are consolidated onto fewer, more powerful servers. The IBM Flex System Enterprise Chassis takes a new approach to security with a ground-up chassis management design that meets new security standards.

The following security enhancements and features are provided in the chassis:
- Single sign-on (central user management)
- End-to-end audit logs
- Secure boot: IBM Tivoli Provisioning Manager and CRTM
- Intel TXT technology (Intel Xeon-based compute nodes)
- Signed firmware updates to ensure authenticity
- Secure communications
- Certificate authority and management
- Chassis and compute node detection and provisioning
- Role-based access control
- Security policy management
- Same management protocols that are supported on BladeCenter AMM for compatibility with earlier versions
- Insecure protocols disabled by default in the CMM, with lock settings to prevent users from inadvertently or maliciously enabling them
- Support for up to 84 local CMM user accounts
- Support for up to 32 simultaneous sessions
- Planned support for DRTM
- LDAP authentication support in the CMM

The Enterprise Chassis ships in the Secure state, and supports the following security policy settings:
- Secure: The default setting, which ensures a secure chassis infrastructure and includes the following features:
  - Strong password policies with automatic validation and verification checks
  - Updated passwords that replace the manufacturing default passwords after the initial setup
  - Only secure communication protocols, such as Secure Shell (SSH) and Secure Sockets Layer (SSL)
  - Certificates to establish secure, trusted connections for applications that run on the management processors
- Legacy: Flexibility in chassis security, which includes the following features:
  - Weak password policies with minimal controls
  - Manufacturing default passwords that do not have to be changed


  - Unencrypted communication protocols, such as Telnet, SNMPv1, TCP Command Mode, FTP Server, and TFTP Server

The centralized security policy makes the Enterprise Chassis easy to configure. In essence, all components run with the same security policy that is provided by the CMM. This consistency ensures that all I/O modules run with a hardened attack surface.

The CMM and the IBM Flex System Manager management node each have their own independent security policies that control, audit, and enforce the security settings. The security settings include the network settings and protocols, password and firmware update controls, and trusted computing properties such as secure boot. The security policy is distributed to the chassis devices during the provisioning process.
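The policy behavior described above can be modeled as a simple mapping from the chosen policy to the set of enabled management protocols. This is an illustrative model only, not an actual CMM API; the function name and protocol sets are this sketch's own, built from the protocol lists in the text.

```python
# Illustrative model (not a real CMM interface): which management protocols
# would be enabled under each CMM security policy setting, based on the
# protocol lists described in the text above.

SECURE_PROTOCOLS = {"ssh", "https", "sftp", "snmpv3"}     # always available
LEGACY_EXTRAS = {"telnet", "http", "ftp", "tftp", "snmpv1"}  # Legacy only

def enabled_protocols(policy: str) -> set:
    """Return the protocol set for the 'Secure' or 'Legacy' policy."""
    if policy == "Secure":
        return set(SECURE_PROTOCOLS)
    if policy == "Legacy":
        return SECURE_PROTOCOLS | LEGACY_EXTRAS
    raise ValueError("unknown security policy: " + policy)
```

In this model, switching a chassis from Legacy to Secure removes every unencrypted protocol in one step, which mirrors how the centralized policy hardens all components at once.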

3.4 Compute node management


Each node in the Enterprise Chassis has a management controller that communicates upstream with the CMM through the private 1 GbE management network to enable management capability. Different chassis components that are supported in the Enterprise Chassis can implement different management controllers. Table 3-1 shows the management controllers that are implemented in the chassis components.
Table 3-1 Chassis components and their respective management controllers

  Chassis component                          Management controller
  Intel Xeon processor-based compute nodes   Integrated Management Module II (IMM2)
  Power Systems compute nodes                Flexible service processor (FSP)
  Chassis Management Module                  Integrated Management Module II (IMM2)

The management controllers for the various Enterprise Chassis components have the following default IPv4 addresses:
- CMM: 192.168.70.100
- Compute nodes: 192.168.70.101 - 192.168.70.114 (corresponding to slots 1 - 14 in the chassis)
- I/O modules: 192.168.70.120 - 192.168.70.123 (sequentially corresponding to chassis bay numbering)

In addition to the IPv4 address, all I/O modules support link-local IPv6 addresses and configurable external IPv6 addresses.
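The default addressing scheme above is regular enough to compute. The helper below is a small sketch of that scheme; the component names passed to it ("cmm", "node", "iomodule") are this example's own convention.

```python
# Sketch of the default management-network IPv4 addressing scheme described
# above: CMM at .100, node bays 1-14 at .101-.114, I/O bays 1-4 at .120-.123.

def default_mgmt_ip(component: str, bay: int = 0) -> str:
    """Return the factory-default management IPv4 address for a component."""
    base = "192.168.70."
    if component == "cmm":
        return base + "100"
    if component == "node" and 1 <= bay <= 14:
        return base + str(100 + bay)        # slot 1 -> .101, slot 14 -> .114
    if component == "iomodule" and 1 <= bay <= 4:
        return base + str(119 + bay)        # bay 1 -> .120, bay 4 -> .123
    raise ValueError("unknown component or bay out of range")
```

For example, the compute node in slot 14 defaults to 192.168.70.114 and the switch in I/O bay 1 defaults to 192.168.70.120.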

3.4.1 Integrated Management Module II


The Integrated Management Module II (IMM2) is the next generation of the IMMv1, which was first released in the Intel Xeon Nehalem-EP-based servers. The IMM2 is present on all IBM systems that are based on the Intel Xeon Romley platform with Sandy Bridge processors, and features a complete rework of hardware and firmware. The IMM2 enhancements include a more responsive user interface, faster power-on, and increased remote presence performance.

The IMM2 incorporates a new web-based user interface that provides a common look and feel across all IBM System x software products. In addition to the new interface, the following other major enhancements from IMMv1 are included:
- Faster processor and more memory
- IMM2 manageable northbound from outside the chassis, which enables consistent management and scripting with System x rack servers

- Remote presence:
  - Increased color depth and resolution for more detailed server video
  - ActiveX client in addition to Java client
  - Increased memory capacity (~50 MB) provides convenience for remote software installations
- No IMM2 reset is required on configuration changes because they become effective immediately without reboot
- Hardware management of non-volatile storage
- Faster Ethernet over USB
- 1 Gb Ethernet management capability
- Improved system power-on and boot time
- More detailed information for UEFI-detected events enables easier problem determination and fault isolation
- User interface meets accessibility standards (CI-162 compliant)
- Separate audit and event logs
- Trusted IMM with significant security enhancements (CRTM/TPM, signed updates, authentication policies, and so on)
- Simplified update and flashing mechanism
- Addition of a Syslog alerting mechanism provides you with an alternative to email and SNMP traps
- Support for Features on Demand (FoD) enablement of server functions, option card features, and System x solutions and applications
- First Failure Data Capture: One-button web press starts data collection and download

For more information about IMM2, see Chapter 5, Compute nodes on page 185. For more information, see the following publications:
- Integrated Management Module II Users Guide:
  http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5086346
- IMM and IMM2 Support on IBM System x and BladeCenter Servers, TIPS0849:
  http://www.redbooks.ibm.com/abstracts/tips0849.html

3.4.2 Flexible service processor


Several advanced system management capabilities are built into Power Systems compute nodes (p24L, p260, p270, and p460). An FSP handles most of the server-level system management. The FSP that is used in Power Systems compute nodes is the same service processor that is used on POWER rack servers. It has system alerts and Serial over LAN (SOL) capability.

The FSP provides out-of-band system management capabilities, such as system control, runtime error detection, configuration, and diagnostic procedures. Generally, you do not interact with the FSP directly. Rather, you interact by using tools such as IBM Flex System Manager and the Chassis Management Module. Each Power Systems compute node has one FSP.


The FSP provides an SOL interface, which is available by using the CMM and the console command. The Power Systems compute nodes do not have an on-board video chip, and do not support keyboard, video, and mouse (KVM) connections. Server console access is obtained by an SOL connection only.

SOL provides a means to manage servers remotely by using a CLI over a Telnet or SSH connection. SOL is required to manage servers that do not have KVM support or that are attached to the FSM. SOL provides console redirection for Software Management Services (SMS) and the server operating system. The SOL feature redirects server serial-connection data over a LAN without requiring special cabling by routing the data through the CMM network interface. The SOL connection enables Power Systems compute nodes to be managed from any remote location with network access to the CMM.

SOL offers the following functions:
- Remote administration without KVM
- Reduced cabling and no requirement for a serial concentrator
- Standard Telnet/SSH interface, which eliminates the requirement for special client software

The CMM CLI provides access to the text-console command prompt on each server through an SOL connection. This configuration allows the Power Systems compute nodes to be managed from a remote location.
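As a small sketch of scripting the SOL workflow, the helper below builds the CMM CLI command that opens a console to a given node bay. The `console -T blade[N]` target syntax shown is an assumption typical of CMM-style CLIs; verify the exact syntax against the CMM CLI reference before relying on it.

```python
# Hedged sketch: construct the CMM CLI command that opens an SOL console to
# a compute node. The "console -T blade[N]" target syntax is an assumption
# and should be checked against the CMM command-line interface reference.

def sol_console_command(node_bay: int) -> str:
    """Return the CMM CLI command string for an SOL console to a node bay."""
    if not 1 <= node_bay <= 14:
        raise ValueError("node bay must be 1 - 14")
    return "console -T blade[%d]" % node_bay
```

The returned string would be issued inside an SSH or Telnet session to the CMM, which then redirects the node's serial console over the management network.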

3.4.3 I/O modules


The I/O modules have the following base functions:
- Initialization
- Configuration
- Diagnostic tests (both power-on and concurrent)
- Status reporting

In addition, the following set of protocols and software features are supported on the I/O modules:
- A configuration method over the Ethernet management port.
- A scriptable SSH CLI, a web server with SSL support, Simple Network Management Protocol v3 (SNMPv3) agent with alerts, and an sFTP client. Server ports that are used for Telnet, HTTP, SNMPv1 agents, TFTP, FTP, and other insecure protocols are disabled by default.
- LDAP authentication protocol support for user authentication.
- For Ethernet I/O modules, 802.1x enabled with policy enforcement point (PEP) capability to allow support of Trusted Network Connect (TNC).
- The ability to capture and apply a switch configuration file, and the ability to capture a first failure data capture (FFDC) data file.
- Ability to transfer files by using URL update methods (HTTP, HTTPS, FTP, TFTP, sFTP).
- Various methods for firmware updates, including FTP, sFTP, and TFTP. In addition, firmware updates by using a URL that includes protocol support for HTTP, HTTPS, FTP, sFTP, and TFTP.
- SLP discovery and SNMPv3.

- Ability to detect firmware/hardware hangs, and ability to pull a crash-failure memory dump file to an FTP (sFTP) server.
- Selectable primary and backup firmware banks as the current operational firmware.
- Ability to send events, SNMP traps, and event logs to the CMM, including security audit logs.
- IPv4 and IPv6 on by default. The CMM management port supports IPv4 and IPv6 (IPv6 support includes the use of link-local addresses).
- Port mirroring capabilities: Port mirroring of CMM ports to internal and external ports. For security reasons, the ability to mirror the CMM traffic is hidden and is available only to development and service personnel.
- Management virtual local area network (VLAN) for Ethernet switches: A configurable management 802.1q tagged VLAN in the standard VLAN range of 1 - 4094. It includes the CMM's internal management ports and the I/O modules' internal ports that are connected to the nodes.
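Because firmware updates can be pulled from a URL over a fixed set of transfer protocols, a management script might want to validate an update URL before handing it to the module. The helper below is an illustrative sketch of that check; the function name and validation policy are this example's own, not part of any I/O module interface.

```python
# Illustrative helper: validate a firmware-update URL against the transfer
# protocols the I/O modules are described as supporting (HTTP, HTTPS, FTP,
# sFTP, TFTP). Uses only the Python standard library.

from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https", "ftp", "sftp", "tftp"}

def is_supported_update_url(url: str) -> bool:
    """Return True if the URL uses a supported scheme and names a host."""
    parsed = urlparse(url)
    return parsed.scheme.lower() in ALLOWED_SCHEMES and bool(parsed.netloc)
```

For example, `tftp://192.168.70.200/fw.img` passes the check, while a local `file://` path does not, because the module pulls the image over the network.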

3.5 IBM Flex System Manager


The FSM is a high-performance, scalable system management appliance. It is based on the IBM Flex System x240 Compute Node. For more information about the x240, see 5.4, IBM Flex System x240 Compute Node on page 234. The FSM hardware comes preinstalled with systems management software that you can use to configure, monitor, and manage IBM Flex System resources in up to sixteen chassis.

3.5.1 IBM Flex System Manager functions


The IBM Flex System Manager includes the following high-level features and functions:
- Supports a comprehensive, pre-integrated system that is configured to optimize performance and efficiency
- Automated processes that are triggered by events simplify management and reduce manual administrative tasks
- Centralized management reduces the skills and the number of steps it takes to manage and deploy a system
- Enables comprehensive management and control of energy usage and costs
- Automates responses for a reduced need for manual tasks, such as custom actions and filters, configure, edit, relocate, and automation plans
- Storage device discovery and coverage in integrated physical and logical topology views
- Full integration with server views, including virtual server views, enables efficient management of resources

The preinstall contains a set of software components that are responsible for running management functions. These components are activated by using the available IBM Features on Demand (FoD) software entitlement licenses. They are licensed on a per-chassis basis, so you need one license for each chassis you plan to manage. The management node comes without any entitlement licenses, so you must purchase a license to enable the required FSM functions. The part numbers are listed later in this section.

The preinstalled IBM Flex System Manager base feature set offers the following functions:
- Support for up to 16 managed chassis
- Support for up to 224 nodes
- Support for up to 5,000 managed elements
- Auto-discovery of managed elements
- Overall health status
- Monitoring and availability
- Hardware management
- Security management
- Administration
- Network management (Network Control)
- Storage management (Storage Control)
- Virtual machine lifecycle management (VMControl Express)

The IBM Flex System Manager advanced feature set upgrade offers the following advanced features:
- Image management (VMControl Standard)
- Pool management (VMControl Enterprise)
- Advanced network monitoring and quality of service (QoS) configuration (Service Fabric Provisioning)

Fabric provisioning functionality is included in the advanced feature set. It is also available as a separate Fabric Provisioning feature upgrade for the base feature set, which can be ordered for the Flex System Manager node through the HVEC order route.

Upgrade licenses: The Advanced Upgrade and the Fabric Provisioning feature upgrade are mutually exclusive: either one can be applied on top of the base feature set license, but not both. The Service Fabric Provisioning upgrade is not selectable in AAS.

The part number to order the management node is shown in Table 3-2.
Table 3-2 Ordering information for IBM Flex System Manager node

  HVEC         AAS          Description
  8731-A1x a   7955-01M b   IBM Flex System Manager node

a. x in the part number represents a country-specific letter (for example, the EMEA part number is 8731A1G, and the US part number is 8731A1U). Ask your local IBM representative for specifics.
b. This part number is ordered as part of the IBM PureFlex System.

The part numbers to order FoD software entitlement licenses are shown in the following tables. The part numbers for the same features are different in different countries; ask your local IBM representative for specifics. Table 3-3 on page 52 shows the following sets of part numbers:
- Column 1: For Latin America and Europe/Middle East/Africa
- Column 2: For US, Canada, Asia Pacific, and Japan


Table 3-3 HVEC ordering information for FoD licenses

  LA & EMEA   US, CAN, AP, JPN   Description

  Base feature set
  95Y1174     90Y4217            IBM Flex System Manager Per Managed Chassis with 1-Year Software Support and Subscription (software S&S)
  95Y1179     90Y4222            IBM Flex System Manager Per Managed Chassis with 3-Year software S&S

  Advanced feature set upgrade a
  94Y9219     90Y4249            IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis with 1-Year software S&S
  94Y9220     00D7554            IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis with 3-Year software S&S

  Fabric Provisioning feature upgrade a
  95Y1178     90Y4221            IBM Flex System Manager Service Fabric Provisioning with 1-Year S&S
  95Y1183     90Y4226            IBM Flex System Manager Service Fabric Provisioning with 3-Year S&S

a. The Advanced Upgrade and Fabric Provisioning licenses are applied on top of the IBM FSM base license.

Table 3-4 shows the indicator codes that are selected when configuring the Flex System Manager in AAS by using e-config. This selection also adds the relevant options for one or three years of S&S to the configurator output.

Table 3-4 7955-01M Flex System Manager feature codes

  Feature code   Description

  Advanced feature set upgrade a
  EB31           FSM Platform Software Bundle Pre-load Indicator
  EB32           FSM Platform Virtualization Software Bundle Pre-load Indicator

a. The FSM Platform Virtualization Software Bundle Pre-load Indicator is applied on top of the FSM Platform Software Bundle Pre-load Indicator.

Flex System Manager licensing examples


To help explain the required part numbers, in this section we describe two examples of Flex System Manager licensing. Included in each example are the part numbers that are required for Latin America, Europe/Middle East/Africa and then for US, Canada, Asia Pacific, and Japan.

Example 1
A client wants to manage four Flex System chassis with one FSM, no advanced license function, and three years of support and subscription (S&S). The client purchases the following products:
- One Flex System Manager node
- Four IBM Flex System Manager Per Managed Chassis licenses with 3-Year software S&S


Table 3-5 shows the part numbers and quantities that are required. The following sets of part numbers are shown:
- Column 1: For Latin America and Europe/Middle East/Africa
- Column 2: For US, Canada, Asia Pacific, and Japan

Table 3-5 Example 1 part numbers

  Qty   LA & EMEA    US, CAN, AP, JPN   Description
  1     8731-A1x a   8731-A1x a         IBM Flex System Manager node
  4     95Y1179      90Y4222            IBM Flex System Manager Per Managed Chassis with three-year software S&S

a. x in the part number represents a country-specific letter (for example, the EMEA part number is 8731A1G, and the US part number is 8731A1U). Ask your local IBM representative for specifics.

Example 2
The client wants to manage four Flex System chassis in total: two chassis at one site and two at another, with a local FSM installed in a chassis at each site. The client requires advanced functionality with three-year S&S, and purchases the following products:
- Two Flex System Manager nodes
- Four IBM Flex System Manager Per Managed Chassis licenses with three-year software S&S
- Four IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis licenses with three-year software S&S

Table 3-6 shows the part numbers and quantities that are required. The following sets of part numbers are shown:
- Column 1: For Latin America and Europe/Middle East/Africa
- Column 2: For US, Canada, Asia Pacific, and Japan

Table 3-6 Example 2 part numbers

  Qty   LA & EMEA    US, CAN, AP, JPN   Description
  2     8731-A1x a   8731-A1x a         IBM Flex System Manager node
  4     95Y1179      90Y4222            IBM Flex System Manager Per Managed Chassis with three-year software S&S
  4     94Y9220      00D7554            IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis with three-year software S&S

a. x in the part number represents a country-specific letter (for example, the EMEA part number is 8731A1G, and the US part number is 8731A1U). Ask your local IBM representative for specifics.
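The arithmetic behind both examples can be captured in a small sketch: one base license per managed chassis, one Advanced Upgrade per chassis when the advanced feature set is needed, and one FSM node per site. The function name and return structure below are this example's own.

```python
# Sketch of the per-chassis FSM licensing arithmetic from the two examples:
# one base license per managed chassis, one Advanced Upgrade per chassis if
# the advanced feature set is required, and one FSM node per site.

def fsm_order(chassis: int, sites: int = 1, advanced: bool = False) -> dict:
    """Return the quantities to order for a given deployment."""
    return {
        "fsm_nodes": sites,
        "base_licenses": chassis,
        "advanced_upgrades": chassis if advanced else 0,
    }
```

Example 1 corresponds to `fsm_order(4)` (one node, four base licenses) and Example 2 to `fsm_order(4, sites=2, advanced=True)` (two nodes, four base licenses, four Advanced Upgrades).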


3.5.2 Hardware overview


Fundamentally, from a hardware point of view, the FSM is a locked-down compute node with a specific hardware configuration. This configuration is designed for optimal performance of the preinstalled software stack. The FSM looks similar to the Intel-based x240. However, there are slight differences between the system board designs, so these two hardware nodes are not interchangeable. Figure 3-5 shows a front view of the FSM.

Figure 3-5 IBM Flex System Manager


Figure 3-6 shows the internal layout and major components of the FSM.

[Figure 3-6 identifies the following major components: cover, air baffles, heat sink, microprocessor, microprocessor heat sink filler, SSD and HDD backplane, hot-swap storage cage, SSD interposer, SSD drives, SSD mounting insert, hot-swap storage drive, storage drive filler, I/O expansion adapter, ETE adapter, DIMM, and DIMM filler.]

Figure 3-6 Exploded view of the IBM Flex System Manager node, showing major components


The FSM comes preconfigured with the components that are described in Table 3-7.
Table 3-7 Features of the IBM Flex System Manager node

  Feature              Description
  Model numbers        8731-A1x (XCC, x-config); 7955-01M (AAS, e-config)
  Processor            1x Intel Xeon processor E5-2650 8C 2.0 GHz 20 MB cache 1600 MHz 95 W
  Memory               8x 4 GB (1x4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
  SAS controller       One LSI 2004 SAS controller
  Disk                 1x IBM 1 TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD; 2x IBM 200 GB SATA 1.8" MLC SSD (configured as a RAID-1 pair)
  Integrated NIC       Embedded dual-port 10 Gb Virtual Fabric Ethernet controller (Emulex BE3); dual-port 1 GbE Ethernet controller on a management adapter (Broadcom 5718)
  Systems management   Integrated Management Module II (IMM2); management network adapter

Figure 3-7 shows the internal layout of the FSM.

[Figure 3-7 identifies the following components: Processor 1, the filler slot for Processor 2, the drive bays, and the management network adapter.]

Figure 3-7 Internal view that shows the major components of IBM Flex System Manager


Front controls
The FSM has similar controls and LEDs as the IBM Flex System x240 Compute Node. Figure 3-8 shows the front of an FSM with the location of the control and LEDs highlighted.
[Figure 3-8 identifies the following controls and LEDs: solid-state drive LEDs, power button/LED, identify LED, USB connector, KVM connector, hard disk drive activity LED, hard disk drive status LED, fault LED, and check log LED.]

Figure 3-8 FSM front panel showing controls and LEDs

Storage
The FSM ships with 2x IBM 200 GB SATA 1.8" MLC SSDs and 1x IBM 1 TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD. The 200 GB SSDs are configured as a RAID-1 pair that provides roughly 200 GB of usable space. The 1 TB SATA drive is not part of a RAID group.


The partitioning of the disks is listed in Table 3-8.


Table 3-8 Detailed SSD and HDD disk partitioning

  Physical disk   Virtual disk size   Description
  SSD             50 MB               Boot disk
  SSD             60 GB               OS/Application disk
  SSD             80 GB               Database disk
  HDD             40 GB               Update repository
  HDD             40 GB               Dump space
  HDD             60 GB               Spare disk for OS/Application
  HDD             80 GB               Spare disk for database
  HDD             30 GB               Service partition
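A quick arithmetic check confirms that the layout in Table 3-8 fits the drives described above: the three SSD virtual disks total about 140 GB against roughly 200 GB of usable RAID-1 SSD space, and the five HDD virtual disks total 250 GB against the 1 TB drive. The sketch below expresses that check; the capacity figures are taken from the text.

```python
# Arithmetic check on the Table 3-8 partition layout. Sizes are in GB, with
# the 50 MB boot partition rounded to 0.05 GB for simplicity.

SSD_PARTS = [0.05, 60, 80]           # boot, OS/application, database
HDD_PARTS = [40, 40, 60, 80, 30]     # repository, dump, spares, service

def fits(parts, capacity_gb):
    """Return True if the virtual disks fit within the physical capacity."""
    return sum(parts) <= capacity_gb
```

Both checks pass, which also shows there is headroom on each drive (about 60 GB on the SSD pair and 750 GB on the HDD) beyond the listed virtual disks.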

Management network adapter


The management network adapter is a standard feature of the FSM and provides a physical connection into the private management network of the chassis. The adapter is shown in Figure 3-6 on page 55 as the everything-to-everything (ETE) adapter.

The management network adapter contains a Broadcom 5718 dual 1 GbE adapter and a Broadcom 5389 8-port L2 switch. This card is one of the features that makes the FSM unique when compared to all other nodes that are supported by the Enterprise Chassis. The connection into the private management network allows the software stack to have visibility into both the data and management networks. The L2 switch on this card is automatically set up by the IMM2 and connects the FSM and the onboard IMM2 into the same internal private network.

3.5.3 Software features


The IBM Flex System Manager management software includes the following main features:
- Monitoring and problem determination:
  - A real-time multichassis view of hardware components with overlays for more information
  - Automatic detection of issues in your environment through event setup that triggers alerts and actions
  - Identification of changes that might affect availability
  - Server resource usage by virtual machine or across a rack of systems
- Hardware management:
  - Automated discovery of physical and virtual servers and interconnections, applications, and supported third-party networking
  - Configuration profiles that integrate device configuration and update steps into a single interface, which dramatically improves the initial configuration experience
  - Inventory of hardware components
  - Chassis and hardware component views:
    - Hardware properties


    - Component names and hardware identification numbers
    - Firmware levels
    - Usage rates

- Network management:
  - Management of network switches from various vendors
  - Discovery, inventory, and status monitoring of switches
  - Graphical network topology views
  - Support for KVM, pHyp, VMware virtual switches, and physical switches
  - VLAN configuration of switches
  - Integration with server management
  - Per-virtual machine network usage and performance statistics that are provided to VMControl
  - Logical views of servers and network devices that are grouped by subnet and VLAN
- Network management (advanced feature set or fabric provisioning feature):
  - Defines QoS settings for logical networks
  - Configures QoS parameters on network devices
  - Provides advanced network monitors for network system pools, logical networks, and virtual systems
- Storage management:
  - Discovery of physical and virtual storage devices
  - Physical and logical topology views
  - Support for virtual images on local storage across multiple chassis
  - Inventory of the physical storage configuration
  - Health status and alerts
  - Storage pool configuration
  - Disk sparing and redundancy management
  - Virtual volume management
  - Support for virtual volume discovery, inventory, creation, modification, and deletion

- Virtualization management (base feature set):
  - Support for VMware, Hyper-V, KVM, and IBM PowerVM
  - Create virtual servers
  - Edit virtual servers
  - Manage virtual servers
  - Relocate virtual servers
  - Discover virtual server, storage, and network resources, and visualize the physical-to-virtual relationships
- Virtualization management (advanced feature set):
  - Create new image repositories for storing virtual appliances and discover existing image repositories in your environment
  - Import external, standards-based virtual appliance packages into your image repositories as virtual appliances
  - Capture a running virtual server that is configured the way you want, complete with guest operating system, running applications, and virtual server definition

Chapter 3. Systems management

59

Import virtual appliance packages that exist in the Open Virtual Machine Format (OVF) from the Internet or other external sources Deploy virtual appliances quickly to create virtual servers that meet the demands of your ever-changing business needs Create, capture, and manage workloads Create server system pools, which enable you to consolidate your resources and workloads into distinct and manageable groups Deploy virtual appliances into server system pools Manage server system pools, including adding hosts or more storage space, and monitoring the health of the resources and the status of the workloads in them Group storage systems together by using storage system pools to increase resource usage and automation Manage storage system pools by adding storage, editing the storage system pool policy, and monitoring the health of the storage resources I/O address management: Manages assignments of Ethernet MAC and Fibre Channel WWN addresses. Monitors the health of compute nodes, and automatically, without user intervention, replaces a failed compute node from a designated pool of spare compute nodes by reassigning MAC and WWN addresses. Preassigns MAC addresses, WWN addresses, and storage boot targets for the compute nodes. Creates addresses for compute nodes, saves the address profiles, and deploys the addresses to the slots in the same or different chassis. 
Other features:
- Resource-oriented chassis map provides an instant graphical view of chassis resources, including nodes and I/O modules:
  - Fly-over provides an instant view of individual server (node) status and inventory
  - Chassis map provides an inventory view of chassis components, a view of active statuses that require administrative attention, and a compliance view of server (node) firmware
  - Actions can be taken on nodes, such as working with server-related resources, showing and installing updates, submitting service requests, and starting the remote access tools
- Resources can be monitored remotely from mobile devices, including Apple iOS-based, Google Android-based, and RIM BlackBerry-based devices. Flex System Manager Mobile applications are separately available under their own terms and conditions, as outlined by the respective mobile markets.
- Remote console:
  - Ability to open video sessions and mount media (such as DVDs with software updates) on servers from a local workstation
  - Remote KVM connections
  - Remote Virtual Media connections (mount CD/DVD/ISO/USB media)
  - Power operations against servers (power on/off/restart)
- Hardware detection and inventory creation
- Firmware compliance and updates

60


IBM PureFlex System and IBM Flex System Products and Technology

- Health status (such as processor usage) on all hardware devices from a single chassis view
- Automatic detection of hardware failures:
  - Provides alerts
  - Takes corrective action
  - Notifies IBM of problems to escalate problem determination

- Administrative capabilities, such as setting up users within profile groups, assigning security levels, and security governance
- Bare-metal deployment of hypervisors (VMware ESXi, KVM) through centralized images

New function for Flex System Manager release 1.3


Announced on 6 August 2013, FSM V1.3 includes the following enhancements, which support new hardware and incorporate more function based on client feedback:

- Support for newly announced hardware: support for the p270 and x222 nodes, I/O modules, and new options.
- Enhanced chassis support: support for managing 16 chassis, 224 nodes, and 5000 endpoints from one FSM.
- PowerVM management enhancements:
  - Remote restart for Power Systems, which provides the capability to activate a partition on any appropriately configured running server in the unlikely event that the partition's original server and any associated service partitions or management entities become unavailable
  - Create and manage shared storage pools
  - Resize disk during deploy
  - Relocate to a specific target and pin a VM to a specific host
  - Set priority for relocate order
- FSM capacity usage: This tool within FSM allows the system administrator to monitor the overall resource usage of the FSM. It also provides recommendations on how to manage capacity, with thresholds that are presented in different colors (green, yellow, or red) in the window. The default view shows a quick view of FSM capacity and indicates the following metrics:
  - Average active users
  - Current number of managed endpoints
  - Average CPU usage
  - Average disk I/O usage
  - Average memory usage
  - Current disk space usage

Also, warnings are generated if metrics exceed a set of predefined thresholds. The warning includes full details of the specific condition and recommendations to help rectify the situation. Thresholds are presented as green, yellow, or red, and it is also possible to configure the thresholds. Further, a capacity usage report can be generated that shows overall usage, the current status of key parameters, and a list of historical data.

- Deploy compute node image enhancements:
  - Increased supported OS images in the repository from two to five

  - Improved MAC address support: pNIC, vNIC, and virtual addresses
  - Improved OS support: ESXi image 5.1.1, RHEL 6.4, RHEL KVM platform agent for VMControl, and ESXi 5000V Agent V1.1.0
  - Bare-metal deployment patterns included for the new x222 nodes
- Configuration Patterns enhancements:
  - Patterns stored in LDAP
  - New path options for initial setup
  - New I/O adapter options
  - Unique x222 configuration pattern support and independent node failover support
  - New Keep settings option for boot order configuration
  - Improved guidance for deployment of a pattern
  - Increased supported OS images in the repository from two to five
  - Improved MAC address support
  - Improved dialogs
  - Ability to edit and deploy patterns during initial FSM setup
  - Multiple improvements in usability of patterns
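The green, yellow, and red capacity thresholds described for the FSM capacity usage tool can be modeled as a simple classification. The sketch below is illustrative only; the metric names and threshold values are assumptions, not the actual FSM defaults.

```python
# Illustrative sketch of color-coded capacity thresholds (values are
# assumptions, not the actual FSM defaults). A metric is green below the
# warning threshold, yellow between warning and critical, and red at or
# above critical.
THRESHOLDS = {
    "cpu_usage_pct": (70, 90),          # (warning, critical) -- hypothetical
    "memory_usage_pct": (75, 90),
    "disk_space_usage_pct": (80, 95),
}

def classify(metric: str, value: float) -> str:
    """Return the status color for a metric value."""
    warning, critical = THRESHOLDS[metric]
    if value >= critical:
        return "red"
    if value >= warning:
        return "yellow"
    return "green"

print(classify("cpu_usage_pct", 45))          # green
print(classify("memory_usage_pct", 80))       # yellow
print(classify("disk_space_usage_pct", 97))   # red
```

Configurable thresholds in FSM behave analogously: changing a pair of boundary values changes the point at which a metric turns yellow or red.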

- Enhancements to the console:
  - View scheduled jobs in mouse fly-over
  - Compliance issues show in the console scoreboard and on the chassis map compliance overlay
  - Flex firmware views and compliance issue views; compliance is automatically marked when new updates are imported
  - More backup and restore context-sensitive help
- IEEE 802.1Qbg support added for Power nodes
- Performance enhancement for inventory export (much faster export)
- Compare installed fixes for IBM i between one installed system and another
- Smart Zoning enhancements:
  - Simplified interactions between storage and server; no need to pre-zone
  - Create storage volume enhancements: the host and storage can be zoned automatically when zoning was not previously configured. Only the needed zoning operations are performed to ensure that host and storage can communicate with each other:
    - If zoning is not enabled, it is enabled
    - If a zone set is not created, it is created
    - If a zone does not exist for host and storage, one is created

- Management extended to support the System x3950 SAP HANA appliance:
  - Manual discovery and inventory
  - Power control
  - Remote access
  - System configuration
  - System health and status
  - Release management (firmware and software installation and update)
  - Service and support


Supported agents, hardware, operating systems, and tasks


IBM Flex System Manager provides four tiers of agents for managed systems. For each managed system, you must choose the tier that provides the amount and level of capabilities that you need for that system. Select the level of agent capabilities that best fits the type of managed system and the management tasks that you must perform. IBM Flex System Manager features the following agent tiers:

- Agentless in-band: Managed systems without any FSM client software installed. FSM communicates with the managed system through the operating system.
- Agentless out-of-band: Managed systems without any FSM client software installed. FSM communicates with the managed system through something other than the operating system, such as a service processor or an HMC.
- Platform Agent: Managed systems with Platform Agent installed. FSM communicates with the managed system through the Platform Agent.
- Common Agent: Managed systems with Common Agent installed. FSM communicates with the managed system through the Common Agent.

Table 3-9 lists the agent tier support for the IBM Flex System managed compute nodes. Managed nodes include x86 compute nodes, which support Windows, Linux, and VMware, and Power Systems compute nodes, which support IBM AIX, IBM i, and Linux.
Table 3-9 Agent tier support by managed system type

Managed system type                                    Agentless  Agentless    Platform  Common
                                                       in-band    out-of-band  Agent     Agent
Compute nodes that run AIX                             Yes        Yes          No        Yes
Compute nodes that run IBM i                           Yes        Yes          Yes       Yes
Compute nodes that run Linux                           No         Yes          Yes       Yes
Compute nodes that run Linux and support SSH           Yes        Yes          Yes       Yes
Compute nodes that run Windows                         No         Yes          Yes       Yes
Compute nodes that run Windows and support SSH or
  distributed component object model (DCOM)            Yes        Yes          Yes       Yes
Compute nodes that run VMware                          Yes        Yes          Yes       Yes
Other managed resources that support SSH or SNMP       Yes        Yes          No        No

Table 3-10 on page 64 summarizes the management tasks that are supported for the compute nodes, depending on the agent tier.


Table 3-10 Compute node management tasks that are supported by the agent tier

Management task               Agentless  Agentless    Platform  Common
                              in-band    out-of-band  Agent     Agent
Command automation            No         No           No        Yes
Hardware alerts               No         Yes          Yes       Yes
Platform alerts               No         No           Yes       Yes
Health and status monitoring  No         No           Yes       Yes
File transfer                 No         No           No        Yes
Inventory (hardware)          No         Yes          Yes       Yes
Inventory (software)          Yes        No           Yes       Yes
Problems (hardware status)    No         Yes          Yes       Yes
Process management            No         No           No        Yes
Power management              No         Yes          No        Yes
Remote control                No         Yes          No        No
Remote command line           Yes        No           Yes       Yes
Resource monitors             No         No           Yes       Yes
Update manager                No         No           Yes       Yes
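The tier-to-task mapping in Table 3-10 can be queried programmatically when transcribed into a lookup structure. The dictionary below lists the Yes cells from the table; the helper function is simply an illustrative way to express the lookup, not an FSM API.

```python
# Management tasks that each agent tier supports, transcribed from Table 3-10
# (only the "Yes" cells are listed for each tier).
TIER_TASKS = {
    "Agentless in-band": {
        "Inventory (software)", "Remote command line",
    },
    "Agentless out-of-band": {
        "Hardware alerts", "Inventory (hardware)", "Problems (hardware status)",
        "Power management", "Remote control",
    },
    "Platform Agent": {
        "Hardware alerts", "Platform alerts", "Health and status monitoring",
        "Inventory (hardware)", "Inventory (software)",
        "Problems (hardware status)", "Remote command line",
        "Resource monitors", "Update manager",
    },
    "Common Agent": {
        "Command automation", "Hardware alerts", "Platform alerts",
        "Health and status monitoring", "File transfer", "Inventory (hardware)",
        "Inventory (software)", "Problems (hardware status)",
        "Process management", "Power management", "Remote command line",
        "Resource monitors", "Update manager",
    },
}

def supports(tier: str, task: str) -> bool:
    """Return True if the named agent tier supports the management task."""
    return task in TIER_TASKS[tier]

print(supports("Agentless in-band", "Inventory (software)"))  # True
print(supports("Common Agent", "Remote control"))             # False
```

Note that remote control is the one task the Common Agent does not provide; it requires the agentless out-of-band path through the service processor.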

Table 3-11 shows the supported virtualization environments and their management tasks.
Table 3-11 Supported virtualization environments and management tasks

Management task                    AIX and   IBM i  VMware   Microsoft  Linux
                                   Linux(a)         vSphere  Hyper-V    KVM
Deploy virtual servers             Yes       Yes    Yes      Yes        Yes
Deploy virtual farms               No        No     Yes      No         Yes
Relocate virtual servers           Yes       No     Yes      No         Yes
Import virtual appliance packages  Yes       Yes    No       No         Yes
Capture virtual servers            Yes       Yes    No       No         Yes
Capture workloads                  Yes       Yes    No       No         Yes
Deploy virtual appliances          Yes       Yes    No       No         Yes
Deploy workloads                   Yes       Yes    No       No         Yes
Deploy server system pools         Yes       No     No       No         Yes
Deploy storage system pools        Yes       No     No       No         No

a. Linux on Power Systems compute nodes


Table 3-12 shows the supported I/O switches and their management tasks.
Table 3-12 Supported I/O switches and management tasks

Management task                EN2092    EN4093 and  CN4093     FC3171  FC5022
                               1 Gb      EN4093R     10 Gb      8 Gb    16 Gb
                               Ethernet  10 Gb Eth   Converged  FC      FC
Discovery                      Yes       Yes         Yes        Yes     Yes
Inventory                      Yes       Yes         Yes        Yes     Yes
Monitoring                     Yes       Yes         Yes        Yes     Yes
Alerts                         Yes       Yes         Yes        Yes     Yes
Configuration management       Yes       Yes         Yes        Yes     No
Automated logical network
  provisioning (ALNP)          Yes       Yes         Yes        Yes     No
Stacked switch                 No        Yes         No         No      No

Table 3-13 shows the supported virtual switches and their management tasks.
Table 3-13 Supported virtual switches and management tasks

Virtualization environment:    Linux KVM       VMware vSphere        PowerVM  Hyper-V
Virtual switch:                Platform Agent  VMware    IBM 5000V   PowerVM  Hyper-V
Discovery                      Yes             Yes       Yes         Yes      No
Inventory                      Yes             Yes       Yes         Yes      No
Configuration management       Yes             Yes       Yes         Yes      No
Automated logical network
  provisioning (ALNP)          Yes             Yes       Yes         Yes      No

Table 3-14 shows the supported storage systems and their management tasks.
Table 3-14 Supported storage systems and management tasks

Management task                                          V7000         IBM Storwize
                                                         Storage Node  V7000
Storage device discovery                                 Yes           Yes
Inventory collection                                     Yes           Yes
Monitoring (alerts and status)                           Yes           Yes
Integrated physical and logical topology views           Yes           No
Show relationships between storage and server resources  Yes           Yes
Perform logical and physical configuration               Yes           Yes
View and manage attached devices                         Yes           No
VMControl provisioning                                   Yes           Yes


3.5.4 User interfaces


IBM Flex System Manager supports the following management interfaces:
- Web interface
- IBM FSM Explorer console
- Mobile System Management application
- Command-line interface

Web interface
The following browsers are supported by the management software web interface:
- Mozilla Firefox versions 3.5.x, 3.6.x, 7.0, and Extended Support Release (ESR) 10.0.x
- Microsoft Internet Explorer versions 7.0, 8.0, and 9.0

IBM FSM Explorer console


The IBM FSM Explorer console provides an alternative, resource-based view of your resources and helps you manage your Flex System environment with intuitive navigation of those resources. You can perform the following tasks in IBM FSM Explorer:
- Configure local storage, network adapters, boot order, and Integrated Management Module (IMM) and Unified Extensible Firmware Interface (UEFI) settings for one or more compute nodes before you deploy operating system or virtual images to them.
- Install operating system images on IBM X-Architecture compute nodes.
- Browse resources, view the properties of resources, and perform basic management tasks, such as powering on and off, collecting inventory, and working with LEDs.
- Use the Chassis Map to edit compute node details, view server properties, and manage compute node actions.
- Work with resource views, such as All Systems, Chassis and Members, Hosts, Virtual Servers, Network, Storage, and Favorites.
- Perform visual monitoring of status and events.
- View event history and active status.
- View inventory.
- Perform visual monitoring of job status.

For other tasks, IBM FSM Explorer starts IBM Flex System Manager in a separate browser window or tab. You can return to the IBM FSM Explorer tab when you complete those tasks.

3.5.5 Mobile System Management application


The Mobile System Management application is a simple, no-cost tool that you can download for a mobile device that runs the Android, Apple iOS, or BlackBerry operating system. You can use the Mobile System Management application to monitor your IBM Flex System hardware remotely. The application provides access to the following types of IBM Flex System information:
- Health and status: Monitor health problems and check the status of managed resources.
- Event log: View the event history for chassis, compute nodes, and network devices.


- Chassis map (hardware view): Check the front and rear graphical hardware views of a chassis.
- Chassis list (components view): View a list of the hardware components that are installed in a chassis.
- Inventory management: See the vital product data (VPD) for a managed resource (for example, serial number or IP address).
- Multiple chassis management: Manage multiple chassis and multiple management nodes from a single application.
- Authentication and security: Secure all connections by using encrypted protocols (for example, SSL), and secure persistent credentials on your mobile device.

You can download the Mobile System Management application for your mobile device from one of the following app stores:
- Google Play for the Android operating system
- iTunes for the Apple iOS
- BlackBerry App World

New in Flex System Manager Mobile 1.2.0


With the latest release of the Mobile System Management application, the following enhancements were added:
- Power actions: Perform the following actions on compute nodes:
  - Power on
  - Power off
  - Restart
  - Shut down and power off
  - LED flash
  - LED on
  - LED off
- Perform actions on the CMM, such as virtual reseat and restart primary CMM
- Recent jobs: View a list of the recent jobs (last 24 hours) that were run from mobile or desktop
- Event log: Easier to toggle between event log and status
- Chassis map (hardware view):
  - Check the front and rear graphical hardware views for a chassis
  - Overlay the graphical views with power and error LEDs
- Inventory management: See the vital product data (VPD) and firmware levels for managed resources
- Authentication and security:
  - Simpler connection menu
  - Accept unsigned certificates

For more information about the application, see the Mobile System Management application page at this website:
http://www.ibm.com/systems/flex/fsm/mobile/


3.5.6 Flex System Manager CLI


The CLI is an important interface for the IBM Flex System Manager management software. You can use it to accomplish simple tasks directly, or as a scriptable framework for automating functions that are not easily accomplished from a GUI. The IBM Flex System Manager management software includes a library of commands that you can use to configure the management software or perform many of the systems management operations that can be accomplished from the management software web-based interface.

For more information, see the IBM Flex System Manager product publications, available from the IBM Flex System Information Center at this website:
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp

At the Information Center, search for the following publications:
- Installation and User's Guide
- Systems Management Guide
- Commands Reference Guide
- Management Software Troubleshooting Guide


Chapter 4.

Chassis and infrastructure configuration


The IBM Flex System Enterprise Chassis (machine type 8721) is a 10U next-generation server platform with integrated chassis management. It is a compact, high-density, high-performance, rack-mount, and scalable platform system. It supports up to 14 standard (half-wide) compute nodes that share common resources, such as power, cooling, management, and I/O resources, within a single Enterprise Chassis. In addition, it can support up to seven full-wide nodes or three four-bay (full-wide and double-high) nodes when the shelves are removed. You can mix and match standard, two-bay, and four-bay nodes to meet your specific hardware needs.

This chapter includes the following topics:
- 4.1, Overview on page 70
- 4.2, Power supplies on page 79
- 4.3, Fan modules on page 82
- 4.4, Fan logic module on page 85
- 4.5, Front information panel on page 86
- 4.6, Cooling on page 87
- 4.7, Power supply selection on page 92
- 4.8, Fan module population on page 99
- 4.9, Chassis Management Module on page 101
- 4.10, I/O architecture on page 104
- 4.11, I/O modules on page 112
- 4.12, Infrastructure planning on page 161
- 4.13, IBM 42U 1100mm Enterprise V2 Dynamic Rack on page 172
- 4.14, IBM PureFlex System 42U Rack and 42U Expansion Rack on page 178
- 4.15, IBM Rear Door Heat eXchanger V2 Type 1756 on page 180

Copyright IBM Corp. 2012, 2013. All rights reserved.

69

4.1 Overview
Figure 4-1 shows the Enterprise Chassis as seen from the front. The front of the chassis includes 14 horizontal bays with robust removable dividers that allow nodes and future elements to be installed within the chassis. Nodes can be of the compute, storage, or expansion type, and can be installed while the chassis is powered. The chassis uses a die-cast mechanical bezel for rigidity, which allows shipment of the chassis with nodes installed. This chassis construction allows for tight tolerances between nodes, shelves, and the chassis bezel. These tolerances ensure accurate location and mating of connectors to the midplane.

Figure 4-1 IBM Flex System Enterprise Chassis

The Enterprise Chassis includes the following major components:
- Fourteen standard (half-wide) node bays. With the shelves removed, the chassis can instead support seven two-bay nodes or three four-bay nodes.
- 8721-A1x: Up to six 2500W power modules that provide N+N or N+1 redundant power.
- 8721-LRx: Up to six 2100W power modules that provide N+N or N+1 redundant power.
- Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules).
- Four physical I/O modules.
- An I/O architectural design capable of providing the following features:
  - Up to eight lanes of I/O to an I/O adapter, with each lane capable of up to 16 Gbps
  - A maximum of 16 lanes of I/O to a half-wide node with two adapters
  - Various networking solutions that include Ethernet, Fibre Channel, FCoE, and InfiniBand
- Two IBM Chassis Management Modules (CMMs). The CMM provides single-chassis management support.
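As a rough illustration of the N+N and N+1 redundancy policies that the power modules support, the following sketch computes the usable output of a fully populated chassis. The helper function and its simple policy model are assumptions for illustration only; actual chassis power budgeting is handled by the CMM and also accounts for configured power management policies.

```python
# Illustrative calculation of usable chassis power under the two redundancy
# policies. Assumes six installed power supplies of equal rating; under N+N,
# half of the supplies are held in reserve, and under N+1, one supply is.
def usable_power_w(psu_watts: int, installed: int = 6, policy: str = "N+N") -> int:
    if policy == "N+N":
        active = installed // 2      # e.g., 3 active + 3 redundant
    elif policy == "N+1":
        active = installed - 1       # e.g., 5 active + 1 redundant
    else:
        raise ValueError(f"unknown redundancy policy: {policy}")
    return active * psu_watts

print(usable_power_w(2500, policy="N+N"))   # 7500 W  (8721-A1x supplies)
print(usable_power_w(2500, policy="N+1"))   # 12500 W
print(usable_power_w(2100, policy="N+1"))   # 10500 W (8721-LRx supplies)
```

With 2500W supplies, even the N+1 figure (12,500 W) stays within the 12,900 W maximum power consumption quoted in the chassis specifications.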


Table 4-1 lists the quantities of components that make up the 8721 machine type:
Table 4-1 8721 Enterprise Chassis configuration

Description                          8721-A1x  8721-LRx
IBM Flex System Enterprise Chassis   1         1
Chassis Management Module            1         1
2500W power supply unit              2         0
2100W power supply unit(a)           0         2
80 mm fan modules                    4         4
40 mm fan modules                    2         2
Console breakout cable               1         1
C19 to C20 2 m power cables          2         2
Rack mount kit                       1         1

a. 2100W power supply units are also available through the CTO process

More Console Breakout Cables can be ordered if required (see Table 4-2). The console breakout cable connects to the front of an x86 node and allows keyboard, video, USB, and serial devices to be attached locally to that node. For more information about alternative methods, see 4.12.5, Console planning on page 169. The Chassis Management Module (CMM) includes built-in console redirection via the CMM Ethernet port.
Table 4-2 Ordering part number and feature code

Part number  Feature code  Description
81Y5286      A1NF          IBM Flex System Console Breakout Cable

Figure 4-2 on page 72 shows the component parts of the chassis with the shuttle removed. The shuttle forms the rear of the chassis, where the I/O modules, power supplies, fan modules, and CMMs are installed. The shuttle is removed only to gain access to the midplane or fan distribution cards in the rare event of a service action.


Figure 4-2 Enterprise Chassis component parts (labels: chassis, Chassis Management Module, CMM filler, 40 mm fan module, fan logic module, power supply, power supply filler, I/O module, 80 mm fan module, 80 mm fan filler, fan distribution cards, midplane, rear LED card, shuttle)

Within the chassis, a personality card holds vital product data (VPD) and other information that is relevant to the particular chassis. This card can be replaced only under service action, and is not normally accessible. The personality card is attached to the midplane, as shown in Figure 4-4 on page 74.


4.1.1 Front of the chassis


Figure 4-3 shows the bay numbers and air apertures on the front of the Enterprise Chassis.
Figure 4-3 Front view of the Enterprise Chassis (labels: upper airflow inlets, bays 1 - 14, front information panel, lower airflow inlets)

The chassis includes the following features on the front:
- The front information panel, on the lower left of the chassis
- Bays 1 - 14, which support nodes and the FSM
- Lower airflow inlet apertures that provide air cooling for switches, CMMs, and power supplies
- Upper airflow inlet apertures that provide cooling for power supplies

For efficient cooling, each bay in the front or rear of the chassis must contain a device or a filler. The Enterprise Chassis provides several LEDs on the front information panel that can be used to obtain the status of the chassis. The Identify, Check log, and Fault LEDs are also on the rear of the chassis for ease of use.


4.1.2 Midplane
The midplane is the circuit board that connects to the compute nodes from the front of the chassis. It also connects to I/O modules, fan modules, and power supplies from the rear of the chassis. The midplane is located within the chassis and can be accessed by removing the shuttle assembly. Removing the midplane is rare and necessary only in the case of a service action. The midplane is passive; that is, there are no electronic components on it. The midplane has apertures to allow air to pass through. When no node is installed in a standard node bay, the air damper is completely closed for that bay, which gives highly efficient scale-up cooling. The midplane has reliable, industry-standard connectors on both sides for power supplies, fan distribution cards, switches, I/O modules, and nodes. The chassis design allows for highly accurate placement and mating of connectors from the nodes, I/O modules, and power supplies to the midplane, as shown in Figure 4-4.

Figure 4-4 Connectors on the midplane (front view: node power connectors, management connectors, I/O module connectors, I/O adapter connectors; rear view: power supply connectors, CMM connectors, fan power and signal connectors, personality card connector)

The midplane uses a single power domain within the design. This is a cost-effective overall solution and optimizes the design for the preferred 10U height. Within the midplane, there are five separate power and ground planes for distribution of the main 12.2 V power domain through the chassis.


The midplane also distributes I2C management signals and some 3.3 V power for management circuits. The power supplies source their fan power from the midplane. Figure 4-4 on page 74 shows the connectors on both sides of the midplane.

4.1.3 Rear of the chassis


Figure 4-5 shows the rear view of the chassis.

Figure 4-5 Rear view of Enterprise Chassis

The following components can be installed into the rear of the chassis:
- Up to two CMMs
- Up to six 2500W or 2100W power supply modules
- Six fan modules, consisting of four 80 mm fan modules and two 40 mm fan modules; more fan modules can be installed for a total of 10 modules
- Up to four I/O modules

4.1.4 Specifications
Table 4-3 shows the specifications of the Enterprise Chassis 8721-A1x.
Table 4-3 Enterprise Chassis specifications

Machine type-model:
  System x ordering sales channel: 8721-A1x or 8721-LRx
  Power Systems sales channel: 7893-92X(a)
Form factor:
  10U rack-mounted unit
Maximum number of compute nodes supported:
  14 half-wide (single bay), 7 full-wide (two bays), or 3 double-height full-wide (four bays). Mixing is supported.

Chapter 4. Chassis and infrastructure configuration

75

Chassis per 42U rack: 4
Nodes per 42U rack: 56 half-wide, or 28 full-wide
Management: One or two Chassis Management Modules for basic chassis management. Two CMMs form a redundant pair. One CMM is standard in 8721-A1x and 8721-LRx. The CMM interfaces with the Integrated Management Module II (IMM2) or flexible service processor (FSP) that is integrated in each compute node in the chassis, and also with the integrated storage node. An optional IBM Flex System Managera management appliance provides comprehensive management that includes virtualization, networking, and storage management.
I/O architecture: Up to eight lanes of I/O to an I/O adapter, with each lane capable of up to 16 Gbps bandwidth. Up to 16 lanes of I/O to a half-wide node with two adapters. Various networking solutions include Ethernet, Fibre Channel, FCoE, and InfiniBand.
Power supplies: 8721-A1x: Six 2500W power modules that can provide N+N or N+1 redundant power; two are standard in this model. 8721-LRx: Six 2100W power modules that can provide N+N or N+1 redundant power; two are standard in this model. Power supplies are 80 PLUS Platinum certified and provide over 94% efficiency at 50% load and 20% load. Power capacity of 2500 watts output rated at 200 VAC. Each power supply contains two independently powered 40 mm cooling fan modules.
Fan modules: Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules). Four 80 mm and two 40 mm fan modules are standard in models 8721-A1x and 8721-LRx.
Dimensions: Height: 440 mm (17.3 in.). Width: 447 mm (17.6 in.). Depth, measured from front bezel to rear of chassis: 800 mm (31.5 in.). Depth, measured from node latch handle to the power supply handle: 840 mm (33.1 in.)
Weight: Minimum configuration: 96.62 kg (213 lb). Maximum configuration: 220.45 kg (486 lb)
Declared sound level: 6.3 to 6.8 bels
Temperature: Operating air temperature 5°C to 40°C
Electrical power: Input power: 200 - 240 VAC (nominal), 50 or 60 Hz. Minimum configuration: 0.51 kVA (two power supplies). Maximum configuration: 13 kVA (six 2500W power supplies)
Power consumption: 12,900 watts maximum

a. When you order the IBM Flex System Enterprise Chassis through the Power Systems sales channel, the IBM Flex System Manager is required if PowerVM software is selected on a Power node.

For data center planning, the chassis is rated to a maximum operating temperature of 40°C. For comparison, the IBM BladeCenter H chassis is rated to 35°C. 110 V operation is not supported; the AC operating range is 200 - 240 VAC.


4.1.5 Air filter


There is an optional airborne contaminant filter that can be fitted to the front of the chassis, as listed in Table 4-4.
Table 4-4 IBM Flex System Enterprise Chassis airborne contaminant filter ordering information
Part number   Description
43W9055       IBM Flex System Enterprise Chassis airborne contaminant filter
43W9057       IBM Flex System Enterprise Chassis airborne contaminant filter replacement pack

The filter is attached to and removed from the chassis, as shown in Figure 4-6.

Figure 4-6 Dust filter

4.1.6 Compute node shelves


A shelf is required for standard (half-wide) bays. The chassis ships with these shelves in place. To allow for the installation of full-wide or larger nodes, the shelves must be removed from the chassis. Remove a shelf by sliding the two blue latches on the shelf towards the center and then sliding the shelf out of the chassis.


Figure 4-7 shows removal of a shelf from Enterprise Chassis.

Figure 4-7 Shelf removal

4.1.7 Hot plug and hot swap components


The chassis follows the standard color-coding scheme that is used by IBM for touch points and hot swap components. Touch points are blue, and are found in the following locations:
- Fillers that cover empty fan and power supply bays
- Handles of nodes
- Other removable items that cannot be hot-swapped

Hot swap components have orange touch points. Orange tabs are found on fan modules, fan logic modules, power supplies, and I/O module handles. The orange designates that the items are hot swap, and can be removed and replaced while the chassis is powered. Table 4-5 shows which components are hot plug and which are hot swap.
Table 4-5 Hot plug and hot swap components
Component          Hot plug   Hot swap
Node               Yes        Noa
I/O Module         Yes        Yesb
40 mm Fan Pack     Yes        Yes
80 mm Fan Pack     Yes        Yes
Power Supply       Yes        Yes
Fan logic module   Yes        Yes

a. Node must be powered off, in standby before removal. b. I/O Module might require reconfiguration, and removal is disruptive to any communications that are taking place.

Nodes can be plugged into the chassis while the chassis is powered. The node can then be powered on. Power the node off before removal.


4.2 Power supplies


Power supplies (or power modules) are available with a 2500W or 2100W rating. Power supplies are hot pluggable and are installed at the rear of the chassis. The standard chassis models ship with two 2500W power supplies or two 2100W power supplies, depending on the model. For more information, see Table 4-1 on page 71.

The 2100W power supplies provide a more cost-effective solution for deployments with lower power demands. They also have the advantage that they draw a maximum of 11.8A, as opposed to the 13.8A of the 2500W power supply. This means that on a 30A circuit, which is UL derated to 24A when a PDU is used, two 2100W supplies can be connected to the same PDU with 0.4A remaining. Thus, for the 30A UL derated PDU deployments that are common in North America, the 2100W power supply can be advantageous. For more information, see 4.12.3, Power planning on page 162.

Population information for the 2100W and 2500W power supplies can be found in 4.7, Power supply selection on page 92, which describes planning information for the nodes that are being installed. A maximum of six power supplies can be installed within the Enterprise Chassis.

Support of power supplies: Mixing of 2100W and 2500W power supplies is not supported in the same chassis.

The 2500W supplies are rated at 2500 W output at 200 - 208 VAC (nominal), and 2750 W at 220 - 240 VAC (nominal). The power supply has an oversubscription rating of up to 3538 W output at 200 VAC. The power supply operating range is 200 - 240 VAC. The power supplies also contain two independently powered 40 mm cooling fans that draw power not from the power supply, but from the chassis midplane. The fans are variable speed and are controlled by the chassis fan logic.

The 2100W power supplies are rated at 2100 W output at 200 - 240 VAC. Similar to the 2500W unit, this power supply also supports oversubscription; the 2100W unit can run up to 2895 W for a short duration.

As with the 2500W units, the 2100W supplies include two independently powered 40 mm cooling fans within the power supply assembly, which also draw their power from the midplane. Table 4-6 shows the ordering information for the Enterprise Chassis power supplies.
Table 4-6 Power supply module option part numbers
Part number   Feature codesa   Description                                             Chassis models where standard
43W9049       A0UC / 3590      IBM Flex System Enterprise Chassis 2500W Power Module   8721-A1x (x-config), 7893-92X (e-config)
47C7633       A3JH / 3666      IBM Flex System Enterprise Chassis 2100W Power Module   8721-LRx

a. The first feature code listed is for configurations that are ordered through System x sales channels (HVEC) that use x-config. The second feature code is for configurations that are ordered through the IBM Power Systems channel (AAS) that use e-config.


Table 4-7 shows the Feature Codes that are used when you are ordering through the Power Systems channel route (AAS) via e-config.
Table 4-7 Power supply feature codes AAS (Power brand)
Description   Feature code for base power supplies   Feature code for additional power supplies
              (quantity must be 2)                   (quantity must be 0, 2, or 4)
2100 Wa       9036                                   3666
2500 W        9059                                   3590

a. IBM Flex Systems only, not supported in PureFlex configurations

For power supply population, Table 4-11 on page 93 lists details of the compute nodes that are supported, based on the type and number of power supplies that are installed in the chassis and the power policy that is enabled (N+N or N+1).

Both the 2500W and 2100W power supplies are 80 PLUS Platinum certified. The 80 PLUS certification is a performance specification for power supplies that are used within servers and computers. The standard has several ratings, such as Bronze, Silver, Gold, and Platinum. To meet the 80 PLUS Platinum standard, the power supply must have a power factor (PF) of 0.95 or greater at 50% rated load, and efficiency equal to or greater than the following values:
- 90% at 20% of rated load
- 94% at 50% of rated load
- 91% at 100% of rated load

For more information about 80 PLUS certification, see this website:
http://www.plugloadsolutions.com

Table 4-8 lists the efficiency of the 2500W Enterprise Chassis power supplies at various percentage loads at different input voltages.
Table 4-8 2500W power supply efficiency at different loads for 200 - 208 VAC and 220 - 240 VAC
Load        Input voltage   Output power   Efficiency
10% load    200 - 208 V     250 W          93.2%
            220 - 240 V     275 W          93.5%
20% load    200 - 208 V     500 W          94.2%
            220 - 240 V     550 W          94.4%
50% load    200 - 208 V     1250 W         94.5%
            220 - 240 V     1375 W         92.2%
100% load   200 - 208 V     2500 W         91.8%
            220 - 240 V     2750 W         91.4%

Table 4-9 lists the efficiency of the 2100W Enterprise Chassis power supplies at various percentage loads at 230 VAC nominal voltage.
Table 4-9 2100W power supply efficiency at different loads for 230 VAC
Load @ 230 VAC   Output power   Efficiency
10% load         210 W          92.8%
20% load         420 W          94.1%
50% load         1050 W         94.2%
100% load        2100 W         91.8%
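The 80 PLUS Platinum thresholds that are described earlier can be expressed as a simple check against measured efficiency figures such as those in the efficiency tables. This is an illustrative sketch only; the function name and structure are assumptions for this example, not part of any IBM or 80 PLUS tool:

```python
# 80 PLUS Platinum efficiency thresholds for server power supplies, as listed
# in the text: 90% at 20% load, 94% at 50% load, 91% at 100% load.
PLATINUM_THRESHOLDS = {0.20: 0.90, 0.50: 0.94, 1.00: 0.91}

def meets_platinum(load_fraction, efficiency):
    """Return True if the measured efficiency at the given load fraction
    meets or exceeds the 80 PLUS Platinum threshold for that load point."""
    threshold = PLATINUM_THRESHOLDS.get(load_fraction)
    if threshold is None:
        raise ValueError("Platinum is specified only at 20%, 50%, and 100% load")
    return efficiency >= threshold

# Example: the 2500W supply at 50% load (200 - 208 V input) is 94.5% efficient.
print(meets_platinum(0.50, 0.945))  # True
```

Note that the PF requirement (0.95 or greater at 50% rated load) is a separate criterion that this sketch does not check.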

Figure 4-8 on page 81 shows the location of the power supplies within the Enterprise Chassis, where two power supplies are installed into bay 1 and bay 4. Four power supply bays are shown with fillers, which must be removed to install power supplies into those bays. Similar to the fan bay fillers, the fillers have blue touch points with circular finger hold apertures below them, which makes the filler removal process easy and intuitive.


Population information for the 2100W and 2500W power supplies can be found in Table 4-11 on page 93, which describes the number of power supplies that are required dependent on the nodes being deployed.
Figure 4-8 Power supply locations (power supply bays 1 - 6)

With 2500W power supplies, the chassis allows power configurations to have N+N redundancy with most node types. Table 4-11 on page 93 shows the support matrix. Alternatively, a chassis can operate in N+1, where N can equal 3, 4, or 5.

All power supplies are combined into a single 12.2 V DC power domain within the chassis. This domain distributes power to each of the compute nodes, I/O modules, and ancillary components through the Enterprise Chassis midplane. The midplane is a highly reliable design with no active components. Each power supply is designed to provide fault isolation and is hot swappable. Power monitoring of the DC and AC signals allows the CMM to accurately monitor the power supplies. The integral power supply fans are not dependent upon the power supply being functional because they are powered independently from the chassis midplane.

Power supplies are added as required to meet the load requirements of the Enterprise Chassis configuration. There is no need to over-provision a chassis: power supplies can be added as the nodes are installed. For more information about power-supply unit planning, see Table 4-11 on page 93.

Figure 4-9 on page 82 shows the power supply rear view and highlights the LEDs. There is a handle for removal and insertion of the power supply, and a removal latch that is operated by thumb, so the PSU can easily be unlatched and removed with one hand.


Figure 4-9 2500W power supply (removal latch, pull handle, and LEDs, left to right: AC power, DC power, Fault)

The rear of the power supply has a C20 inlet socket for connection to power cables. You can use a C19-C20 power cable, which can connect to a suitable IBM DPI rack power distribution unit (PDU). The power supply options that are shown in Table 4-6 on page 79 ship with a 2.5 m intra-rack power cable (C19 to C20).

The rear LEDs indicate the following conditions:
- AC power: When lit green, AC power is being supplied to the PSU inlet.
- DC power: When lit green, DC power is being supplied to the chassis midplane.
- Fault: When lit amber, there is a fault with the PSU.

Before you remove any power supplies, ensure that the remaining power supplies have sufficient capacity to power the Enterprise Chassis. Power usage information can be found in the CMM web interface.

4.3 Fan modules


The Enterprise Chassis supports up to 10 hot pluggable fan modules that consist of two 40 mm fan modules and eight 80 mm fan modules. A chassis can operate with a minimum of six hot-swap fan modules installed, which consist of four 80 mm fan modules and two 40 mm fan modules. The fan modules plug into the chassis and connect to the fan distribution cards. More 80 mm fan modules can be added as required to support chassis cooling requirements.


Figure 4-10 shows the fan bays in the back of the Enterprise Chassis.
Figure 4-10 Fan bays in the Enterprise Chassis (fan bays 1 - 10)

For more information about how to populate the fan modules, see 4.6, Cooling on page 87. Figure 4-11 shows a 40 mm fan module.

Figure 4-11 40 mm fan module (removal latch, pull handle, power-on LED, fault LED)

The two 40 mm fan modules in fan bays 5 and 10 distribute airflow to the I/O modules and Chassis Management Modules. These modules ship preinstalled in the chassis. Each 40 mm fan module contains a side-by-side pair of counter-rotating 40 mm fans.

The 80 mm fan modules distribute airflow to the compute nodes through the chassis from front to rear. Each 80 mm fan module contains two 80 mm fans, back-to-back within the module, which are counter rotating.

Both fan module types have an electromagnetic compatibility (EMC) mesh screen on the rear internal face of the module. This design also provides a laminar flow through the screen. Laminar flow is a smooth flow of air, sometimes called streamline flow. This flow reduces turbulence of the exhaust air and improves the efficiency of the overall fan assembly.

The following factors combine to form a highly efficient fan design that provides the best cooling for the lowest energy input:
- Design of the entire fan assembly
- Fan blade design
- Distance between and size of the fan modules
- EMC mesh screen

Figure 4-12 shows an 80 mm fan module.

Figure 4-12 80 mm fan module (removal latch, pull handle, power-on LED, fault LED)

The minimum number of 80 mm fan modules is four. The maximum number of individual 80 mm fan modules that can be installed is eight. Both fan module types have two LED indicators: a green power-on indicator and an amber fault indicator. The power indicator lights when the fan module has power, and flashes when the module is in the power save state. Table 4-10 lists the specifications of the 80 mm Fan Module Pair option.

Pairs and singles: When the modules are ordered as an option, they are supplied as a pair. When the modules are configured by using feature codes, they are supplied as single fans.
Table 4-10 80 mm Fan Module Pair option part number
Part number          Feature codea           Description
43W9078 (two fans)   A0UA / 7805 (one fan)   IBM Flex System Enterprise Chassis 80 mm Fan Module

a. The first feature code listed is for configurations that are ordered through System x sales channels (HVEC) by using x-config. The second feature code is for configurations that are ordered through the IBM Power Systems channel (AAS) by using e-config.


For more information about airflow and cooling, see 4.6, Cooling on page 87.

4.4 Fan logic module


There are two fan logic modules included within the chassis, as shown in Figure 4-13.

Figure 4-13 Fan logic modules on the rear of the chassis (fan logic bays 1 and 2)

Fan logic modules are multiplexers for the internal I2C bus, which is used for communication between hardware components within the chassis. Each fan pack is accessed from each CMM through a dedicated I2C bus that is switched by the fan multiplexer (Fan Mux) card. The fan logic module switches the I2C bus to each individual fan pack, which allows the Chassis Management Module to determine multiple parameters, such as fan RPM.

There is a fan logic module for each side of the chassis: the left fan logic module accesses the left fan modules, and the right fan logic module accesses the right fan modules. Fan presence indication for each fan pack is read by the fan logic module. Power and fault LEDs are also controlled by the fan logic module.


Figure 4-14 shows a fan logic module and its LEDs.

Figure 4-14 Fan logic module

As shown in Figure 4-14, there are two LEDs on the fan logic module. The power-on LED is green when the fan logic module is powered. The amber fault LED flashes to indicate a faulty fan logic module. Fan logic modules are hot swappable. For more information about airflow and cooling, see 4.6, Cooling on page 87.

4.5 Front information panel


Figure 4-15 shows the front information panel.

Figure 4-15 Front information panel (white backlit IBM logo, identify LED, check log LED, fault LED)

The following items are shown on the front information panel:
- White backlit IBM logo: When lit, this logo indicates that the chassis is powered.
- Identify LED: When lit (blue) solid, this LED indicates the location of the chassis. When flashing, it indicates that a condition occurred that caused the CMM to indicate that the chassis needs attention.
- Check log LED: When lit (amber), this LED indicates that a noncritical event occurred. This event might be an incorrect I/O module that is inserted into a bay, or a power requirement that exceeds the capacity of the installed power modules.
- Fault LED: When lit (amber), this LED indicates that a critical system error occurred. This error can be an error in a power module or a system error in a node.


Figure 4-16 shows the LEDs that are on the rear of the chassis.

Figure 4-16 Chassis LEDs on the rear of the unit, lower right (identify LED, check log LED, fault LED)

4.6 Cooling
This section describes the Enterprise Chassis cooling system. The flow of air within the Enterprise Chassis follows a front-to-back cooling path. Cool air is drawn in at the front of the chassis and warm air is exhausted to the rear. Air is drawn in through the front node bays and the front airflow inlet apertures at the top and bottom of the chassis. There are two cooling zones for the nodes: a left zone and a right zone. The cooling process can be scaled up as required, based on which node bays are populated. For more information about the number of fan modules that are required for nodes, see 4.8, Fan module population on page 99. When a node is removed from a bay, an airflow damper closes in the midplane. Therefore, no air is drawn in through an unpopulated bay. When a node is inserted into a bay, the damper is opened by the node insertion, which allows for cooling of the node in that bay.


Figure 4-17 shows the upper and lower cooling apertures.

Figure 4-17 Enterprise Chassis lower and upper cooling apertures

Various fan modules are present in the chassis to assist with efficient cooling. Fan modules consist of 40 mm and 80 mm types, and are contained within hot pluggable fan modules. The power supplies also have two integrated, independently powered 40 mm fan modules. The cooling path for the nodes begins when air is drawn in from the front of the chassis. The airflow intensity is controlled by the 80 mm fan modules in the rear. Air passes from the front of the chassis, through the node, through openings in the Midplane, and then into a plenum chamber. Each plenum is isolated from the other, providing separate left and right cooling zones. The 80 mm fan packs on each zone then move the warm air from the plenum to the rear of the chassis. In a two-bay wide node, the air flow within the node is not segregated because it spans both airflow zones.


Figure 4-18 shows a chassis with the outer casing removed for clarity, to show the airflow path through the chassis. There is no airflow through the chassis midplane where a node is not installed. The air damper is opened only when a node is inserted in that bay.

Figure 4-18 Airflow into the chassis through the nodes and exhaust through the 80 mm fan packs


Figure 4-19 shows the path of air from the upper and lower airflow inlet apertures to the power supplies.

Figure 4-19 Airflow path to the power supplies


Figure 4-20 shows the airflow from the lower inlet aperture to the 40 mm fan modules. This airflow provides cooling for the switch modules and CMM installed in the rear of the chassis.

Figure 4-20 40 mm fan module airflow

The right-side 40 mm fan module cools the right switches, and the left 40 mm fan module cools the left pair of switches. Each 40 mm fan module has a pair of counter rotating fans for redundancy.

Cool air flows in from the lower inlet aperture at the front of the chassis. It is drawn into the lower openings in the CMM and I/O modules, where it provides cooling for these components. It passes through, and is drawn out of, the top of the CMM and I/O modules. The warm air is expelled to the rear of the chassis by the 40 mm fan assembly, as shown by the red airflow arrows in Figure 4-20.

The removal of a fan pack exposes an opening in the bay to the 80 mm fan packs that are located below, and a back flow damper within the fan bay then closes. The backflow damper prevents hot air from reentering the system from the rear of the chassis. The 80 mm fan packs cool the switch modules and the CMM while the fan pack is being replaced.

Chassis cooling is implemented as a function of the following components:
- Node configurations
- Power monitor circuits
- Component temperatures
- Ambient temperature


This results in lower airflow volume (measured in cubic feet per minute, or CFM) and lower cooling energy that is spent at the chassis level. This system also maximizes the temperature difference across the chassis (generally known as the Delta T) for more efficient room integration. Chassis-level airflow usage is monitored and displayed, which enables airflow planning and monitoring for hot air recirculation. Five acoustic optimization states can be selected; use the one that best balances performance requirements with the noise level of the fans. Chassis-level CFM usage is available to you for planning purposes. In addition, ambient health awareness can detect potential hot air recirculation to the chassis.
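The relationship between chassis power, airflow (CFM), and the Delta T across the chassis that underlies this airflow planning can be sketched with standard sensible-heat arithmetic. This is a rough planning aid only, not an IBM formula; the sea-level air density and specific heat values are assumptions:

```python
def required_airflow_cfm(power_watts, delta_t_celsius):
    """Estimate the airflow needed to remove a given heat load.

    Uses the sensible-heat relation P = rho * Q * cp * dT with assumed
    sea-level air (rho ~= 1.2 kg/m^3, cp ~= 1005 J/(kg*K)), then converts
    m^3/s to cubic feet per minute (1 m^3/s ~= 2118.88 CFM).
    """
    q_m3_per_s = power_watts / (1.2 * 1005.0 * delta_t_celsius)
    return q_m3_per_s * 2118.88

# Example: a 12,900 W chassis (the maximum rated consumption) with a 20 C
# rise across the chassis needs roughly 1133 CFM.
print(round(required_airflow_cfm(12900, 20)))  # 1133
```

The inverse relationship is why maximizing the Delta T matters: for a fixed heat load, a larger temperature rise across the chassis means less airflow, and therefore less fan energy, is needed.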

4.7 Power supply selection


The chassis power supplies that are needed to power the installed compute nodes and other chassis components depend on a number of power-related selections, including the wattage of the power supplies (2100W or 2500W). The 2100W power supplies might offer a lower-cost alternative to the 2500W power supplies for deployments with lower power demands, where the nodes can be deployed within the 2100W power envelope.

The 2100W power supplies also have the advantage that they draw a maximum of 11.8A, as opposed to the 13.8A of the 2500W power supply. This means that on a 30A circuit, which is UL derated to 24A when a PDU is used, two 2100W supplies can be connected to the same PDU with 0.4A remaining. Thus, for the 30A UL derated PDU deployments that are common in North America, the 2100W power supply might be advantageous. For more information, see 4.12.3, Power planning on page 162.

Support of power supplies: Mixing of 2100W and 2500W power supplies is not supported in the same chassis.

As the number of nodes in a chassis is expanded, more power supplies can be added as required. This chassis design allows cost-effective scaling of power configurations. If there is not enough DC power available to meet the load demand, the Chassis Management Module automatically powers down devices to reduce the load demand.
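The PDU amperage arithmetic described above can be expressed as a short calculation. This is an illustrative sketch; the function name is hypothetical, and the 80% derating factor is an assumption that matches the 30A-to-24A UL derating figure in the text:

```python
def supplies_per_pdu(psu_max_amps, circuit_amps=30, derating=0.8):
    """How many power supplies fit on one PDU circuit after derating.

    A 30A circuit is derated to 30 * 0.8 = 24A of continuous load; each
    supply is budgeted at its maximum input current draw. Returns the
    supply count and the amps left over on the circuit.
    """
    usable = circuit_amps * derating
    count = int(usable // psu_max_amps)
    remaining = usable - count * psu_max_amps
    return count, round(remaining, 1)

# 2100W supplies draw up to 11.8A: two fit on a 24A circuit with 0.4A spare.
print(supplies_per_pdu(11.8))  # (2, 0.4)
# 2500W supplies draw up to 13.8A: only one fits within the same 24A budget.
print(supplies_per_pdu(13.8))  # (1, 10.2)
```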
Table 4-11 on page 93 shows the number of compute nodes that can be installed based on the following factors:
- The model of compute node that is installed
- The capacity of the power supply that is installed (2100W or 2500W)
- The power policy that is enabled (N+N or N+1)
- The number of power supplies that are installed (4, 5, or 6)
- For x86 compute nodes, the thermal design power (TDP) rating of the processors

For power policies, N+N means a fully redundant configuration where there is a duplicate power supply for each supply that is needed for full operation. N+1 means that there is only one redundant power supply and all other supplies are needed for full operation.

92

IBM PureFlex System and IBM Flex System Products and Technology

In Table 4-11, a cell that shows the physical maximum for that node type (14 half-wide nodes, or 7 full-wide nodes) indicates a configuration that is supported with no limitations. A lower value indicates that the configuration is supported, but with a limit on the number of compute nodes that can be installed. As the table shows, a full complement of any compute node at all TDP ratings is supported if all six power supplies are installed and an N+1 power policy is selected.
Table 4-11 Specific number of compute nodes supported based on installed power supplies
Compute  CPU TDP  2100W power supplies                   2500W power supplies
node     rating   N+1,N=5 N+1,N=4 N+1,N=3 N+N,N=3        N+1,N=5 N+1,N=4 N+1,N=3 N+N,N=3
                  6 total 5 total 4 total 6 total        6 total 5 total 4 total 6 total
x220     50 W     14      14      14      14             14      14      14      14
x220     60 W     14      14      14      14             14      14      14      14
x220     70 W     14      14      14      14             14      14      14      14
x220     80 W     14      14      14      14             14      14      14      14
x220     95 W     14      14      14      14             14      14      14      14
x222     50 W     14      14      13      14             14      14      14      14
x222     60 W     14      14      12      13             14      14      14      14
x222     70 W     14      14      11      12             14      14      14      14
x222     80 W     14      14      10      11             14      14      13      14
x222     95 W     14      13      9       10             14      14      12      13
x240     60 W     14      14      14      14             14      14      14      14
x240     70 W     14      14      13      14             14      14      14      14
x240     80 W     14      14      13      13             14      14      14      14
x240     95 W     14      14      12      12             14      14      14      14
x240     115 W    14      14      11      12             14      14      14      14
x240     130 W    14      14      11      11             14      14      13      14
x240     135 W    14      14      10      11             14      14      13      14
x440     95 W     7       7       6       6              7       7       7       7
x440     115 W    7       7       5       6              7       7       7       7
x440     130 W    7       7       5       5              7       7       6       7
p24L     All      14      12      9       10             14      14      12      13
p260     All      14      12      9       10             14      14      12      13
p270     All      14      12      9       9              14      14      12      12
p460     All      7       6       4       5              7       7       6       6
FSM      95 W     2       2       2       2              2       2       2       2
V7000    N/A      3       3       3       3              3       3       3       3

Chapter 4. Chassis and infrastructure configuration

93

The following assumptions are made:
- All compute nodes are fully configured.
- Throttling and oversubscription are enabled.

Tip: For more information about exact configuration support, see the Power Configurator at this website:
http://ibm.com/systems/bladecenter/resources/powerconfig.html

4.7.1 Power policies


The following power management policies can be selected to dictate how the chassis is protected in the case of potential power module or supply failures. These policies are configured by using the Chassis Management Module graphical interface:

- AC Power source redundancy: Power is allocated under the assumption that no throttling of the nodes is allowed if a power supply fault occurs. This is an N+N configuration.

- AC Power source redundancy with compute node throttling allowed: Power is allocated under the assumption that throttling of the nodes is allowed if a power supply fault occurs. This is an N+N configuration.

- Power Module Redundancy: Maximum input power is limited to one less than the number of power modules when more than one power module is present. One power module can fail without affecting compute node operation. Multiple power module failures can cause the chassis to power off. Some compute nodes might not be able to power on if doing so exceeds the power policy limit.

- Power Module Redundancy with compute node throttling allowed: This mode can be described as oversubscription mode. Operation in this mode assumes that a node's load can be reduced (or throttled) to the continuous load rating within a specified time following the loss of one or more power supplies. The power supplies can exceed their continuous rating of 2500W for short periods. This is an N+1 configuration.

- Basic Power Management: This allows the total output power of all power supplies to be used. When operating in this mode, there is no power redundancy. If a power supply fails, or an AC feed to one or more supplies is lost, the entire chassis might shut down. There is no power throttling.

The chassis is run by using one of these power capping policies:

- No Power Capping: Maximum input power is determined by the active power redundancy policy.

- Static Capping: This sets an overall chassis limit on the maximum input power. In a situation where powering on a component would cause the limit to be exceeded, the component is prevented from powering on.
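The usable DC capacity that each redundancy approach leaves for the chassis can be sketched as follows. The policy names mirror the CMM choices above, but the arithmetic is the generic N+N / N+1 rule under the assumption of identical supplies, not the CMM firmware's exact allocation logic.

```python
# Sketch: usable DC capacity under each redundancy approach, assuming
# `installed` identical supplies of the same wattage (illustrative only).

def usable_capacity(installed: int, psu_watts: int, policy: str) -> int:
    """Watts available while still surviving the failures the policy guards against."""
    if policy == "N+N":    # half the supplies (or one AC feed) may fail
        return (installed // 2) * psu_watts
    if policy == "N+1":    # one supply may fail
        return (installed - 1) * psu_watts
    if policy == "basic":  # no redundancy: all output power is usable
        return installed * psu_watts
    raise ValueError(f"unknown policy: {policy}")

print(usable_capacity(6, 2500, "N+N"))    # 7500
print(usable_capacity(6, 2500, "N+1"))    # 12500
print(usable_capacity(6, 2500, "basic"))  # 15000
```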

4.7.2 Number of power supplies required for N+N and N+1


A total of six power supplies can be installed. Therefore, in an N+N configuration, the options available are two, four, or six power supplies. For N+1, the total number can be anywhere between two and six.

Depending on the node type, see Table 4-12 if 2500W power supplies are used, or Table 4-13 on page 96 if 2100W power supplies are used. For example, if eight x222 nodes are to be installed with N+1 redundancy by using 2500W power supplies, Table 4-12 shows that a minimum of three power supplies is required.

Table 4-12 and Table 4-13 on page 96 assume the highest TDP rating of processors for each node type, and that the same type of node is configured throughout the chassis. Refer to the Power Configurator for mixed configurations of different node types within a chassis. In some configurations, the power supplies cannot power the listed quantity of nodes, which is indicated in the tables as NS (not sufficient). It is not physically possible to install more than seven full-wide compute nodes in a chassis, as shown in Figure 4-12 on page 84.
Table 4-12 Number of 2500W power supplies required for each node type

        x220 at 95Wa  x222 at 95Wa  x240 at 135Wa x440 at 130Wac p260          p270          p460c
Nodes   N+N   N+1     N+N   N+1     N+N   N+1     N+N   N+1      N+N   N+1     N+N   N+1     N+N   N+1
14      6     4       NSb   5       6     5       -     -        NSb   5       NSb   5       -     -
13      6     4       6     5       6     4       -     -        6     5       NSb   5       -     -
12      4     3       6     4       6     4       -     -        6     4       6     4       -     -
11      4     3       6     4       6     4       -     -        6     4       6     4       -     -
10      4     3       6     4       6     4       -     -        6     4       6     4       -     -
9       4     3       6     4       6     4       -     -        6     4       6     4       -     -
8       4     3       6     4       4     3       -     -        6     4       6     4       -     -
7       4     3       4     3       4     3       6     5        4     3       4     3       NSb   5
6       4     3       4     3       4     3       6     4        4     3       4     3       6     4
5       4     3       4     3       4     3       6     4        4     3       4     3       6     4
4       2     2       4     3       4     3       4     3        4     3       4     3       6     4
3       2     2       4     3       4     3       4     3        4     3       4     3       4     3
2       2     2       2     2       2     2       4     3        2     2       2     2       4     3
1       2     2       2     2       2     2       2     2        2     2       2     2       2     2

a. Number of power supplies is based on x86 compute nodes with processors of the highest TDP rating.
b. NS = Not supported. The number of nodes exceeds the capacity of the power supplies.
c. - = Not applicable. It is not physically possible to install more than seven full-wide (x440 or p460) compute nodes in a chassis.

Table 4-13 Number of 2100W power supplies required for each node type

        x220 at 95Wa  x222 at 95Wa  x240 at 135Wa x440 at 130Wac p260          p270          p460c
Nodes   N+N   N+1     N+N   N+1     N+N   N+1     N+N   N+1      N+N   N+1     N+N   N+1     N+N   N+1
14      6     4       NSb   6       NSb   5       -     -        NSb   6       NSb   6       -     -
13      6     4       NSb   5       NSb   5       -     -        NSb   6       NSb   6       -     -
12      6     4       NSb   5       NSb   5       -     -        NSb   5       NSb   5       -     -
11      6     4       NSb   5       6     5       -     -        NSb   5       NSb   5       -     -
10      6     4       6     5       6     4       -     -        6     5       NSb   5       -     -
9       4     3       6     4       6     4       -     -        6     4       6     4       -     -
8       4     3       6     4       6     4       -     -        6     4       6     4       -     -
7       4     3       6     4       6     4       NSb   5        6     4       6     4       NSb   6
6       4     3       6     4       4     3       NSb   5        6     4       6     4       NSb   5
5       4     3       4     3       4     3       6     4        4     3       4     3       6     5
4       4     3       4     3       4     3       6     4        4     3       4     3       6     4
3       4     3       4     3       4     3       4     3        4     3       4     3       6     4
2       2     2       4     3       2     3       4     3        4     3       4     3       4     3
1       2     2       2     2       2     2       4     3        2     2       2     2       4     3

a. Number of power supplies is based on x86 compute nodes with processors of the highest TDP rating.
b. NS = Not supported. The number of nodes exceeds the capacity of the power supplies.
c. - = Not applicable. It is not physically possible to install more than seven full-wide (x440 or p460) compute nodes in a chassis.

Tip: For more information about the exact configuration, see the Power configurator at this website: http://ibm.com/systems/bladecenter/resources/powerconfig.html

Power supplies selected for an N+N configuration

The chassis ships with power supplies preinstalled in bays 1 and 4, as shown in Figure 4-21. In an N+N configuration with 2500W power supplies, these two supplies can power four x220 nodes, according to Table 4-12 on page 95.

Figure 4-21 Two power supplies installed with four x220 nodes in N+N

An eight-node x220 2500W N+N configuration is shown in Figure 4-22, where another pair of power supplies is installed in bays 2 and 5 of the Enterprise Chassis.

Figure 4-22 Four power supplies installed with eight x220 nodes in N+N

Figure 4-23 shows the full six power supplies installed with 14 x220 nodes that use 2500W power supplies. This is a supported N+N configuration according to Table 4-12 on page 95.

Figure 4-23 Six power supply configuration with fourteen x220 nodes in N+N

Power supplies selected for an N+1 configuration


The chassis ships with two power supplies installed. As shown in Table 4-12 on page 95, 2500W power supplies allow up to four x220 nodes to be installed with N+1 redundancy.

Figure 4-24 Two power supplies installed with four x220 nodes in N+1

When eight x220 nodes are installed and N+1 redundancy with 2500W power supplies is required, Table 4-12 on page 95 shows that three power supplies are sufficient, as shown in Figure 4-25.

Figure 4-25 Eight x220 nodes with three 2500W power supplies in N+1 configuration

When 14 x220 nodes and N+1 redundancy with 2500W power supplies are required, four power supplies are needed according to Table 4-12 on page 95. Figure 4-26 shows this N+1 configuration, where in this case N=3.

Figure 4-26 Fourteen x220 nodes with four 2500W power supplies in N+1 configuration

4.8 Fan module population


The fan modules are populated depending on the nodes that are installed. To support the base configuration and up to four nodes, a chassis ships with four 80 mm fan modules and two 40 mm fan modules preinstalled. When you install more nodes, install the nodes, fan modules, and power supplies from the bottom upwards.

The minimum configuration of 80 mm fan modules is four, which provides cooling for a maximum of four nodes. This configuration is shown in Figure 4-27 and is the base configuration.

Figure 4-27 Four 80 mm fan modules allow a maximum of four nodes installed

Installing six 80 mm fan modules allows another four nodes to be supported within the chassis. Therefore, the maximum is eight, as shown in Figure 4-28.

Figure 4-28 Six 80 mm fan modules allow for a maximum of eight nodes

To cool more than eight nodes, all fan modules must be installed as shown in Figure 4-29.

Figure 4-29 Eight 80 mm fan modules support 9 - 14 nodes

If there are insufficient fan modules for the number of nodes that are installed, the nodes might be throttled.
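The fan module population rule described in this section (four modules for up to four nodes, six for up to eight, and all eight for 9 - 14 nodes) can be sketched as a simple lookup. This is an illustrative helper, not IBM code.

```python
# Sketch of the 80 mm fan module population rule for the Enterprise Chassis.

def fan_modules_required(nodes: int) -> int:
    """Number of 80 mm fan modules needed for a given half-wide node count."""
    if not 0 <= nodes <= 14:
        raise ValueError("the chassis holds at most 14 half-wide nodes")
    if nodes <= 4:
        return 4   # base configuration as shipped
    if nodes <= 8:
        return 6
    return 8       # 9 - 14 nodes need all eight modules

print(fan_modules_required(4))   # 4
print(fan_modules_required(8))   # 6
print(fan_modules_required(14))  # 8
```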

4.9 Chassis Management Module


The CMM provides single chassis management and the networking path for remote keyboard, video, mouse (KVM) capability for compute nodes within the chassis. The chassis can accommodate one or two CMMs. The first is installed into CMM Bay 1, the second into CMM bay 2. Installing two provides CMM redundancy. Table 4-14 lists the ordering information for the second CMM.
Table 4-14 Chassis Management Module ordering information

Part number   Feature codea   Description
68Y7030       A0UE / 3592     IBM Flex System Chassis Management Module

a. The first feature code listed is for configurations that are ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations that are ordered through the IBM Power Systems channel (AAS) by using e-config.

Figure 4-30 shows the location of the CMM bays on the back of the Enterprise Chassis.

Figure 4-30 CMM Bay 1 and Bay 2

The CMM provides the following functions:
- Power control
- Fan management
- Chassis and compute node initialization
- Switch management
- Diagnostics
- Resource discovery and inventory management
- Resource alerts and monitoring management
- Chassis and compute node power management
- Network management

The CMM includes the following connectors:
- USB connection: Can be used for insertion of a USB media key for tasks such as firmware updates.
- 10/100/1000 Mbps RJ45 Ethernet connection: For connection to a management network. The CMM can be managed through this Ethernet port.
- Serial port (mini-USB): For local serial (CLI) access to the CMM. Use the cable kit that is listed in Table 4-15 for connectivity.

Table 4-15 Serial cable specifications

Part number   Feature codea   Description
90Y9338       A2RR            IBM Flex System Management Serial Access Cable
                              Contains two cables:
                              - Mini-USB-to-RJ45 serial cable
                              - Mini-USB-to-DB9 serial cable

a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.

The CMM includes the following LEDs that provide status information:
- Power-on LED
- Activity LED
- Error LED
- Ethernet port link and port activity LEDs

Figure 4-31 shows the CMM connectors and LEDs.

Figure 4-31 Chassis Management Module

The CMM also incorporates a reset button, which features the following functions (depending upon how long the button is held in):
- When pressed for less than 5 seconds, the CMM restarts.
- When pressed for more than 5 seconds (for example, 10 - 15 seconds), the CMM configuration is reset to manufacturing defaults, and the CMM then restarts.

For more information about how the CMM integrates into the Systems Management architecture, see 3.2, Chassis Management Module on page 43.

4.10 I/O architecture


The Enterprise Chassis can accommodate four I/O modules that are installed in vertical orientation into the rear of the chassis, as shown in Figure 4-32.
Figure 4-32 Rear view that shows the I/O Module bays 1 - 4

If a node has a two-port integrated LAN on Motherboard (LOM) as standard, modules 1 and 2 are connected to this LOM. If an I/O adapter is installed in the nodes I/O expansion slot 1, modules 1 and 2 are connected to this adapter. Modules 3 and 4 connect to the I/O adapter that is installed within I/O expansion bay 2 on the node. These I/O modules provide external connectivity, and connect internally to each of the nodes within the chassis. They can be Switch or Pass-thru modules, with a potential to support other types in the future.

Figure 4-33 shows the connections from the nodes to the switch modules.
Figure 4-33 LOM, I/O adapter, and switch module connections

The node in bay 1 in Figure 4-33 shows that, when a node ships with a LOM, the LOM connector provides the link from the node system board to the midplane. Some nodes do not ship with a LOM. If required, the LOM connector can be removed and an I/O expansion adapter installed in its place. This configuration is shown on the node in bay 2 in Figure 4-33.

Figure 4-34 shows the electrical connections from the LOM and I/O adapters to the I/O modules, which all take place across the chassis midplane.

Figure 4-34 Logical layout of node to switch interconnects

A total of two I/O expansion adapters (designated M1 and M2 in Figure 4-34) can be plugged into a half-wide node. Up to four I/O adapters can be plugged into a full-wide node. Each I/O adapter has two connectors. One connects to the compute node's system board (a PCI Express connection). The second connector is a high-speed interface that mates to the midplane when the node is installed into a bay within the chassis. As shown in Figure 4-34, each of the links to the midplane from the I/O adapter is four lanes wide. Exactly how many lanes are used on each I/O adapter depends on the design of the adapter and the number of ports that are wired. Therefore, a half-wide node can have a maximum of 16 I/O lanes and a full-wide node can have 32 lanes.

Figure 4-35 shows an I/O expansion adapter.

Figure 4-35 I/O expansion adapter

Adapters share a common size (100 mm x 80 mm)

Each of these individual I/O links, or lanes, can be wired for 1 Gb or 10 Gb Ethernet, or for 8 Gbps or 16 Gbps Fibre Channel. The application-specific integrated circuit (ASIC) type on the I/O expansion adapter dictates the number of links that are enabled. Some ASICs are two-port, some are four-port, and some I/O expansion adapters contain two ASICs. For a two-port ASIC, one port can go to one switch and one port to the other. This configuration is shown in Figure 4-36 on page 108. Other combinations can be implemented in the future.

In an Ethernet I/O adapter, the wiring of the links follows the IEEE 802.3ap standard, which is also known as the Backplane Ethernet standard. Backplane Ethernet has different implementations at 10 Gbps: 10GBASE-KX4 and 10GBASE-KR. The I/O architecture of the Enterprise Chassis supports both KX4 and KR. 10GBASE-KX4 uses the same physical layer coding (IEEE 802.3 clause 48) as 10GBASE-CX4, where each individual lane (SERDES = Serializer/Deserializer) carries 3.125 Gbaud of signaling bandwidth. 10GBASE-KR uses the same coding (IEEE 802.3 clause 49) as 10GBASE-LR/ER/SR, where the SERDES lane operates at 10.3125 Gbps. Each of the links between an I/O expansion adapter and an I/O module can therefore be four 3.125 Gbaud lanes per port (KX4) or four 10 Gbps lanes (KR), depending on the expansion adapter and I/O module implementation.
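The lane arithmetic above can be made concrete with a short sketch. The per-lane rates (3.125 Gbaud for KX-4, 10.3125 Gbps for KR) and the 16-lane half-wide node maximum come from the text; the helper itself is illustrative.

```python
# Sketch: aggregate raw signaling bandwidth across a node's midplane lanes.
LANE_GBPS = {"KX-4": 3.125, "KR": 10.3125}  # raw signaling rate per SERDES lane

def node_signaling_gbps(lanes: int, standard: str) -> float:
    """Aggregate raw signaling bandwidth for the given number of lanes."""
    return lanes * LANE_GBPS[standard]

# A half-wide node with all 16 lanes wired as KR:
print(node_signaling_gbps(16, "KR"))    # 165.0
# One KX-4 port (four 3.125 Gbaud lanes):
print(node_signaling_gbps(4, "KX-4"))   # 12.5
```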

Figure 4-36 shows how the integrated two-port 10 Gb LOM connects through a LOM connector to switch 1. This implementation provides a pair of 10 Gb lanes. Each lane connects to a 10 Gb switch or 10 Gb pass-through module that is installed in I/O module bays in the rear of the chassis. The LOM connector is sometimes referred to as a periscope connector because of its shape.
Figure 4-36 LOM implementation: Emulex 10 Gb Virtual Fabric onboard LOM to I/O Module

A half-wide compute node with two standard I/O adapter sockets and an I/O adapter with two ports is shown in Figure 4-37. Port 1 connects to one switch in the chassis and Port 2 connects to another switch in the chassis. With 14 compute nodes of this configuration installed in the chassis, each switch requires 14 internal ports for connectivity to the compute nodes.

Figure 4-37 I/O adapter with a two-port ASIC

Another possible implementation of the I/O adapter is the four-port. Figure 4-38 shows the interconnection to the I/O module bays for such I/O adapters that uses a single four-port ASIC.

Figure 4-38 I/O adapter with a four-port single ASIC

In this case, with each node having a four-port I/O adapter in I/O adapter slot 1, each I/O module requires 28 internal ports enabled. This configuration highlights another key feature of the I/O architecture: scalable on-demand port enablement. Sets of ports are enabled by using IBM Features on Demand (FoD) activation licenses to allow a greater number of connections between nodes and a switch. With two lanes per node to each switch and 14 nodes, each requiring four connected ports, each switch must have 28 internal ports enabled. You also need sufficient uplink ports enabled to support the wanted bandwidth. FoD feature upgrades enable these ports. Finally, Figure 4-39 on page 110 shows an eight-port I/O adapter that uses two four-port ASICs.

Figure 4-39 I/O adapter with eight-port dual ASIC implementation

Six ports active: In the case of the CN4058 8-port 10Gb Converged Adapter, although this is an eight-port adapter, the currently available switches support only up to six of those ports (three ports to each of two installed switches). With these switches, three of the four lanes per module can be enabled.

The architecture allows for a total of eight lanes per I/O adapter, as shown in Figure 4-40. Therefore, a total of 16 I/O lanes per half-wide node is possible. Each I/O module requires the matching number of internal ports to be enabled.

Figure 4-40 Full chassis connectivity: Eight ports per adapter

For more information about port enablement by using FoD, see 4.11, I/O modules on page 112. For more information about I/O expansion adapters that install on the nodes, see 5.8.1, Overview on page 335.
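The internal port counts that drive the FoD licensing described in this section can be sketched as follows. The assumption that an adapter's ports are split evenly between the two switch bays follows the wiring shown in the figures above; the helper itself is illustrative, not an IBM tool.

```python
# Sketch: internal switch ports that must be enabled (via Features on
# Demand licenses) to connect every node's adapter ports.

def internal_ports_per_switch(nodes: int, adapter_ports: int,
                              switches_per_adapter: int = 2) -> int:
    """Ports each switch must have enabled, assuming adapter ports are
    split evenly between the switch bays the adapter connects to."""
    ports_to_each_switch = adapter_ports // switches_per_adapter
    return nodes * ports_to_each_switch

print(internal_ports_per_switch(14, 2))  # 14: two-port adapters, 14 nodes
print(internal_ports_per_switch(14, 4))  # 28: four-port adapters, 14 nodes
```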

4.11 I/O modules


I/O modules are inserted into the rear of the Enterprise Chassis to provide interconnectivity within the chassis and external to the chassis. This section describes the I/O and Switch module naming scheme. There are four I/O Module bays at the rear of the chassis. To insert an I/O module into a bay, first remove the I/O filler. Figure 4-41 shows how to remove an I/O filler and insert an I/O module into the chassis by using the two handles.

Figure 4-41 Removing an I/O filler and installing an I/O module

4.11.1 I/O module LEDs


I/O Module Status LEDs are at the bottom of the module when inserted into the chassis. All modules share three status LEDs, as shown in Figure 4-42.

Figure 4-42 Example of I/O module status LEDs

The LEDs indicate the following conditions:
- OK (power): When this LED is lit, it indicates that the switch is on. When it is not lit and the amber switch error LED is lit, it indicates a critical alert. If the amber LED is also not lit, it indicates that the switch is off.
- Identify: You can physically identify a switch by making this blue LED light up by using the management software.
- Switch error: When this LED is lit, it indicates a POST failure or critical alert. When this LED is lit, the system-error LED on the chassis is also lit. When this LED is not lit and the green LED is lit, it indicates that the switch is working correctly. If the green LED is also not lit, it indicates that the switch is off.

4.11.2 Serial access cable


The switches (and CMM) support local command-line interface (CLI) access through a USB serial cable. The mini-USB port on the switch is near the LEDs, as shown in Figure 4-42 on page 112. A cable kit with supported serial cables can be ordered as listed in Table 4-16.
Table 4-16 Serial cable

Part number   Feature codea   Description
90Y9338       A2RR            IBM Flex System Management Serial Access Cable

a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.

The part number 90Y9338 includes the following cables:
- Mini-USB-to-RJ45 serial cable
- Mini-USB-to-DB9 serial cable

4.11.3 I/O module naming scheme


The I/O module naming scheme follows a logical structure, similar to that of the I/O adapters. Figure 4-43 shows the I/O module naming scheme. This scheme might be expanded to support future technology.

Example: IBM Flex System EN2092 1 Gb Ethernet Scalable Switch

The module name EN2092 decodes as follows:
- Fabric type (first two letters): EN = Ethernet, FC = Fibre Channel, CN = Converged Network, IB = InfiniBand, SI = System Interconnect
- Series (first digit): 2 for 1 Gb, 3 for 8 Gb, 4 for 10 Gb, 5 for 16 Gb, 6 for 56 Gb & 40 Gb
- Vendor name, where A=01 (second and third digits): 02 = Brocade, 09 = IBM, 13 = Mellanox, 17 = QLogic
- Maximum number of ports available to each node (last digit): 1 = One, 2 = Two, 3 = Three

Figure 4-43 IBM Flex System I/O Module naming scheme
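The naming scheme lends itself to a small decoder. The field mappings are copied from the scheme above; the parser itself is an illustrative sketch, not an IBM tool.

```python
# Sketch: decoding an I/O module name per the naming scheme, for example
# EN2092 -> Ethernet, 1 Gb series, vendor IBM, two ports per node.
import re

FABRIC = {"EN": "Ethernet", "FC": "Fibre Channel", "CN": "Converged Network",
          "IB": "InfiniBand", "SI": "System Interconnect"}
SERIES = {"2": "1 Gb", "3": "8 Gb", "4": "10 Gb", "5": "16 Gb", "6": "56 Gb & 40 Gb"}
VENDOR = {"02": "Brocade", "09": "IBM", "13": "Mellanox", "17": "QLogic"}

def decode_module(name: str) -> dict:
    """Split a module name like 'EN2092' into its scheme fields."""
    m = re.fullmatch(r"([A-Z]{2})(\d)(\d{2})(\d)", name)
    if not m:
        raise ValueError(f"not a recognized module name: {name}")
    fabric, series, vendor, ports = m.groups()
    return {"fabric": FABRIC[fabric], "series": SERIES[series],
            "vendor": VENDOR[vendor], "ports_per_node": int(ports)}

print(decode_module("EN2092"))
# {'fabric': 'Ethernet', 'series': '1 Gb', 'vendor': 'IBM', 'ports_per_node': 2}
```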

4.11.4 Switch to adapter compatibility


This section lists switch to adapter interoperability.

Ethernet switches and adapters


Table 4-17 lists Ethernet switch-to-card compatibility. Switch upgrades: To maximize the usable port count on the adapters, the switches might need more license upgrades.
Table 4-17 Ethernet switch to card compatibility

Switch columns: EN2092 1Gb Switch (49Y4294, A0TF / 3598); CN4093 10Gb Switch (00D5823, A3HH / ESW2); EN4093R 10Gb Switch (95Y3309, A3J6 / ESW7); EN4093 10Gb Switch (49Y4270, A0TB / 3593); EN4091 10Gb Pass-thru (88Y6043, A1QV / 3700); SI4093 10Gb SIM (95Y3313, A45T / ESWA); EN6131 40Gb Switch (90Y9346, A3HJ / ESW6)

Adapter (part number, feature codesa)              EN2092  CN4093  EN4093R  EN4093  EN4091  SI4093  EN6131
x220 Onboard 1Gb (None)                            Yes     Yesb    Yes      Yes     Yes     Yes     No
x222 Onboard 10Gb (None)                           Yesc    Yesc    Yesc     Yesc    No      Yesc    No
x240 Onboard 10Gb (None)                           Yes     Yes     Yes      Yes     Yes     Yes     Yes
x440 Onboard 10Gb (None)                           Yes     Yes     Yes      Yes     Yes     Yes     Yes
EN2024 4-port 1Gb Ethernet (49Y7900, A10Y / 1763)  Yes     Yes     Yes      Yes     Yesd    Yes     No
EN4132 2-port 10Gb Ethernet (90Y3466, A1QY / EC2D) No      No      Yes      Yes     Yes     Yes     Yes
EN4054 4-port 10Gb Ethernet (None, None / 1762)    Yes     Yes     Yes      Yes     Yesd    Yes     Yes
CN4054 10Gb Virtual Fabric (90Y3554, A1R1 / 1759)  Yes     Yes     Yes      Yes     Yesd    Yes     Yes
CN4058 8-port 10Gb Converged (None, None / EC24)   Yese    Yesf    Yesf     Yesf    Yesd    Yes     No
EN4132 2-port 10Gb RoCE (None, None / EC26)        No      No      Yes      Yes     Yes     Yes     Yes
EN6132 2-port 40Gb Ethernet (90Y3482, A3HK / A3HK) No      No      No       No      No      No      Yes

a. The first feature code that is listed is for configurations that are ordered through System x sales channels (XCC by using x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (AAS by using e-config).
b. 1 Gb is supported on the CN4093's two external 10 Gb SFP+ ports only. The 12 external Omni Ports do not support 1 GbE speeds.
c. Either Upgrade 1 or Upgrade 2 is required to enable enough internal switch ports to connect to both servers in the x222.
d. Only two of the ports of this adapter are connected when used with the EN4091 10Gb Pass-thru.
e. Only four of the eight ports of the CN4058 adapter are connected with the EN2092 switch.
f. Only six of the eight ports of the CN4058 adapter are connected with the CN4093, EN4093, and EN4093R switches.

Fibre Channel switches and adapters


Table 4-18 lists Fibre Channel switch-to-card compatibility.
Table 4-18 Fibre Channel switch to card compatibility

Switch columns: FC5022 16Gb 12-port (88Y6374, A1EH / 3770); FC5022 16Gb 24-port (00Y3324, A3DP / ESW5); FC5022 16Gb 24-port ESB (90Y9356, A2RQ / 3771); FC3171 8Gb switch (69Y1930, A0TD / 3595); FC3171 8Gb Pass-thru (69Y1934, A0TJ / 3591)

Adapter (part number, feature codesa)              FC5022   FC5022   FC5022       FC3171   FC3171
                                                   12-port  24-port  24-port ESB  switch   Pass-thru
FC3172 2-port 8Gb FC (69Y1938, A1BM / 1764)        Yes      Yes      Yes          Yes      Yes
FC3052 2-port 8Gb FC (95Y2375, A2N5 / EC25)        Yes      Yes      Yes          Yes      Yes
FC5022 2-port 16Gb FC (88Y6370, A1BP / EC2B)       Yes      Yes      Yes          No       No
FC5052 2-port 16Gb FC (95Y2386, A45R / EC23)       Yes      Yes      Yes          No       No
FC5054 4-port 16Gb FC (95Y2391, A45S / EC2E)       Yes      Yes      Yes          No       No
FC5172 2-port 16Gb FC (69Y1942, A1BQ / A1BQ)       Yes      Yes      Yes          Yes      Yes
FC5024D 4-port 16Gb FC (95Y2379, A3HU / A3HU)      Yes      Yes      Yes          No       No

a. The first feature code that is listed is for configurations that are ordered through System x sales channels (XCC by using x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (AAS by using e-config).

InfiniBand switches and adapters


Table 4-19 lists InfiniBand switch to card compatibility.
Table 4-19 InfiniBand switch to card compatibility

                                                              IB6131 InfiniBand Switch
Adapter (part number, feature codesa)                         (90Y3450, A1EK / 3699)
IB6132 2-port FDR InfiniBand Adapter (90Y3454, A1QZ / EC2C)   Yesb
IB6132 2-port QDR InfiniBand Adapter (None, None / 1761)      Yes
IB6132D 2-port FDR InfiniBand Adapter (90Y3486, A365 / A365)  Yesb

a. The first feature code that is listed is for configurations that are ordered through System x sales channels (XCC by using x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (AAS by using e-config).
b. To operate at FDR speeds, the IB6131 switch needs the FDR upgrade. For more information, see Table 4-44 on page 160.

4.11.5 IBM Flex System EN6131 40Gb Ethernet Switch


The IBM Flex System EN6131 40Gb Ethernet Switch, together with the EN6132 40Gb Ethernet Adapter, offers the performance that you need to support clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications, which reduces task completion time and lowers the cost per operation. This switch offers 14 internal and 18 external 40 Gb Ethernet ports that enable a non-blocking network design. It supports all Layer 2 functions, so servers can communicate within the chassis without going to a top-of-rack (ToR) switch, which helps improve performance and latency.

Figure 4-44 IBM Flex System EN6131 40Gb Ethernet Switch

With this 40 Gb Ethernet solution, you can deploy more workloads per server without running into I/O bottlenecks. During failures or server maintenance, clients can also move their virtual machines much faster by using 40 Gb interconnects within the chassis. The 40 GbE switch and adapter are designed for low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. They provide extreme scalability for low-latency clustered solutions with reduced packet hops. The IBM Flex System 40 GbE solution offers the highest bandwidth without adding any significant power impact to the chassis. It can also help increase system usage and decrease the number of network ports for further cost savings.

Figure 4-45 External ports of the IBM Flex System EN6131 40Gb Ethernet Switch (labeled components: 18x QSFP+ ports (up to 40 Gbps), switch release handles (one each side), RS-232 serial port, Gigabit Ethernet management port, and switch LEDs)

The front panel contains the following components:
- LEDs that show the following statuses of the module and the network:
  - The green power LED indicates that the module passed the power-on self-test (POST) with no critical faults and is operational.
  - The blue identify LED can be used to physically identify the module by illuminating it through the management software.
  - The fault LED (switch error) indicates that the module failed the POST or detected an operational fault.
- Eighteen external QSFP+ ports for 10 Gbps, 20 Gbps, or 40 Gbps connections to the external network devices.
- An Ethernet physical link LED and an Ethernet Tx/Rx LED for each external port on the module.

Chapter 4. Chassis and infrastructure configuration

117

- One mini-USB RS-232 console port that provides another means to configure the switch module. This mini-USB-style connector enables the connection of a special serial cable (the cable is optional and it is not included with the switch). For more information, see Table 4-21.

Table 4-20 shows the part number and feature codes that are used to order the EN6131 40Gb Ethernet Switch.
Table 4-20 Part number and feature code for ordering

Description                                    Part number   Feature code (x-config / e-config)
IBM Flex System EN6131 40Gb Ethernet Switch    90Y9346       A3HJ / ESW6

QSFP+ transceivers ordering: No QSFP+ (quad small form-factor pluggable plus) transceivers or cables are included with the switch. They must be ordered separately.

The switch does not include a serial management cable. However, IBM Flex System Management Serial Access Cable 90Y9338 is supported and contains two cables, a mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable, either of which can be used to connect to the switch module locally for configuration tasks and firmware updates. Table 4-21 lists the supported cables and transceivers.
Table 4-21 Supported transceivers and direct attach cables

Description                                                       Part number   Feature code (x-config / e-config)
Serial console cables
IBM Flex System Management Serial Access Cable Kit                90Y9338       A2RR / A2RR
QSFP+ transceiver and optical cables - 40 GbE
IBM QSFP+ 40GBASE-SR Transceiver
  (requires either cable 90Y3519 or cable 90Y3521)                49Y7884       A1DR / EB27
10m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884)    90Y3519       A1MM / EB2J
30m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884)    90Y3521       A1MN / EB2K
QSFP+ direct-attach cables - 40 GbE
3m FDR InfiniBand Cable                                           90Y3470       A227 / None
3m IBM QSFP+ to QSFP+ Cable                                       49Y7891       A1DQ / EB2H
5m IBM QSFP+ to QSFP+ Cable                                       00D5810       A2X8 / ECBN
7m IBM QSFP+ to QSFP+ Cable                                       00D5813       A2X9 / ECBP

The EN6131 40Gb Ethernet Switch has the following features and specifications:
- MLNX-OS operating system.
- Internal ports:
  - A total of 14 internal full-duplex 40 Gigabit ports (10, 20, or 40 Gbps auto-negotiation).
  - One internal full-duplex 1 GbE port that is connected to the chassis management module.
- External ports:
  - A total of 18 ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs (10, 20, or 40 Gbps auto-negotiation). QSFP+ modules and DACs are not included and must be purchased separately.
  - One external 1 GbE port with RJ-45 connector for switch configuration and management.
  - One RS-232 serial port (mini-USB connector) that provides another means to configure the switch module.
- Scalability and performance:
  - 40 Gb Ethernet ports for extreme bandwidth and performance.
  - Non-blocking architecture with wire-speed forwarding of traffic and an aggregated throughput of 1.44 Tbps.
  - Support for up to 48,000 unicast and up to 16,000 multicast media access control (MAC) addresses per subnet.
  - Static and LACP (IEEE 802.3ad) link aggregation, up to 720 Gb of total uplink bandwidth per switch, up to 36 link aggregation groups (LAGs), and up to 16 ports per LAG.
  - Support for jumbo frames (up to 9,216 bytes).
  - Broadcast/multicast storm control.
  - IGMP snooping to limit flooding of IP multicast traffic.
  - Fast port forwarding and fast uplink convergence for rapid STP convergence.
- Availability and redundancy:
  - IEEE 802.1D STP for providing L2 redundancy.
  - IEEE 802.1w Rapid STP (RSTP) provides rapid STP convergence for critical delay-sensitive traffic such as voice or video.
- VLAN support:
  - Up to 4094 VLANs are supported per switch, with VLAN numbers 1 - 4094.
  - 802.1Q VLAN tagging support on all ports.
- Security:
  - Up to 24,000 rules with VLAN-based, MAC-based, protocol-based, and IP-based access control lists (ACLs).
  - User access control (multiple user IDs and passwords).
  - RADIUS, TACACS+, and LDAP authentication and authorization.
- Quality of service (QoS):
  - Support for IEEE 802.1p traffic processing.
  - Traffic shaping that is based on defined policies.
  - Four Weighted Round Robin (WRR) priority queues per port for processing qualified traffic.
  - Priority-Based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow control to allow the switch to pause traffic based on the 802.1p priority value in each packet's VLAN tag.
  - Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for allocating link bandwidth based on the 802.1p priority value in each packet's VLAN tag.


- Manageability:
  - IPv4 and IPv6 host management.
  - Simple Network Management Protocol (SNMP V1, V2, and V3).
  - Web-based GUI.
  - Industry standard CLI (IS-CLI) through Telnet, SSH, and serial port.
  - Link Layer Discovery Protocol (LLDP) to advertise the device's identity, capabilities, and neighbors.
  - Firmware image update (TFTP, FTP, and SCP).
  - Network Time Protocol (NTP) for clock synchronization.
- Monitoring:
  - Switch LEDs for external port status and switch module status indication.
  - Port mirroring for analyzing network traffic passing through the switch.
  - Change tracking and remote logging with the syslog feature.
  - Support for sFLOW agent for monitoring traffic in data networks (a separate sFLOW collector/analyzer is required elsewhere).
  - POST diagnostic tests.

The switch supports the following Ethernet standards:
- IEEE 802.1AB Link Layer Discovery Protocol
- IEEE 802.1D Spanning Tree Protocol (STP)
- IEEE 802.1p Class of Service (CoS) prioritization
- IEEE 802.1Q Tagged VLAN (frame tagging on all ports when VLANs are enabled)
- IEEE 802.1Qbb Priority-Based Flow Control (PFC)
- IEEE 802.1Qaz Enhanced Transmission Selection (ETS)
- IEEE 802.1w Rapid STP (RSTP)
- IEEE 802.3ab 1000BASE-T copper twisted pair Gigabit Ethernet
- IEEE 802.3ad Link Aggregation Control Protocol
- IEEE 802.3ba 40GBASE-SR4 short range fiber optics 40 Gb Ethernet
- IEEE 802.3ba 40GBASE-CR4 copper 40 Gb Ethernet
- IEEE 802.3u 100BASE-TX Fast Ethernet
- IEEE 802.3x Full-duplex Flow Control

The EN6131 40Gb Ethernet Switch can be installed in bays 1, 2, 3, and 4 of the Enterprise Chassis. A supported Ethernet adapter must be installed in the corresponding slot of the compute node (slot A1 when I/O modules are installed in bays 1 and 2, or slot A2 when I/O modules are installed in bays 3 and 4). If a four-port 10 GbE adapter is used, only up to two adapter ports can be used with the EN6131 40Gb Ethernet Switch (one port per switch).
For more information including example configurations, see the IBM Redbooks Product Guide IBM Flex System EN6131 40Gb Ethernet Switch, TIPS0911, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0911.html?Open
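As a back-of-envelope check of the EN6131 figures quoted above, the following sketch reproduces the uplink bandwidth and the internal-to-external port ratio. The helper names are illustrative only (they are not from any IBM tool); the port counts and speeds come from the specifications in this section.

```python
# Illustrative arithmetic for the EN6131 figures quoted in the text.
EXTERNAL_QSFP_PORTS = 18   # external QSFP+ uplink ports
INTERNAL_PORTS = 14        # one internal port per compute node bay
PORT_SPEED_GBPS = 40       # maximum auto-negotiated port speed

def total_uplink_gbps(ports=EXTERNAL_QSFP_PORTS, speed=PORT_SPEED_GBPS):
    """Aggregate uplink bandwidth if every external port joins a LAG."""
    return ports * speed

def internal_to_external_ratio(internal=INTERNAL_PORTS,
                               external=EXTERNAL_QSFP_PORTS):
    """A ratio <= 1.0 means the external side can absorb all internal
    traffic, consistent with the non-blocking design described above."""
    return internal / external

print(total_uplink_gbps())  # 720, matching the 720 Gb uplink figure
print(round(internal_to_external_ratio(), 2))
```

With 18 external and 14 internal ports at the same speed, the switch has more external than internal capacity, which is why a non-blocking design is possible without oversubscription.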


4.11.6 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
The IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch provides unmatched scalability, performance, convergence, and network virtualization. It also delivers innovations to help address a number of networking concerns and provides capabilities that help you prepare for the future. The switch offers full Layer 2/3 switching and FCoE Full Fabric and Fibre Channel NPV Gateway operations to deliver a converged and integrated solution. It is installed within the I/O module bays of the IBM Flex System Enterprise Chassis. The switch can help you migrate to a 10 Gb or 40 Gb converged Ethernet infrastructure and offers virtualization features such as Virtual Fabric and IBM VMready, and the ability to work with IBM Distributed Virtual Switch 5000V. Figure 4-46 shows the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch.

Figure 4-46 IBM Flex System Fabric CN4093 10 Gb Converged Scalable Switch

The CN4093 switch is initially licensed with 14 internal 10-GbE ports, two external 10-GbE SFP+ ports, and six external Omni Ports enabled. The following ports can be enabled:
- A total of 14 more internal ports and two external 40 GbE QSFP+ uplink ports with Upgrade 1.
- A total of 14 more internal ports and six more external Omni Ports with Upgrade 2.
Upgrade 1 and Upgrade 2 can be applied on the switch independently of each other, or in combination for full feature capability. Table 4-22 shows the part numbers for ordering the switches and the upgrades.
Table 4-22 Part numbers and feature codes for ordering

Description                                                            Part number   Feature code (x-config / e-config)
Switch module
IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch           00D5823       A3HH / ESW2
Features on Demand upgrades
IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 1)    00D5845       A3HL / ESU1
IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 2)    00D5847       A3HM / ESU2


Management cable
IBM Flex System Management Serial Access Cable                         90Y9338       A2RR / A2RR

Neither QSFP+ nor SFP+ transceivers or cables are included with the switch. They must be ordered separately (see Table 4-24 on page 124). The switch does not include a serial management cable. However, IBM Flex System Management Serial Access Cable 90Y9338 is supported and contains two cables, a mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable, either of which can be used to connect to the switch locally for configuration tasks and firmware updates.

The following base switch and upgrades are available:
- Part number 00D5823 is the physical device, which comes with 14 internal 10 GbE ports enabled (one to each node bay), two external 10 GbE SFP+ ports enabled to connect to a top-of-rack switch or other devices (identified as EXT1 and EXT2), and six Omni Ports enabled to connect to Ethernet or Fibre Channel networking infrastructure, depending on the SFP+ cable or transceiver that is used. The six enabled Omni Ports are from the 12 labeled on the switch as EXT11 through EXT22.
- Part number 00D5845 (Upgrade 1) can be applied on the base switch when you need more uplink bandwidth. It enables two 40 GbE QSFP+ ports, which can be converted into 4x 10 GbE SFP+ DAC links with the optional break-out cables. These ports are labeled EXT3 and EXT7, or EXT3-EXT6 and EXT7-EXT10 if converted. This upgrade also enables 14 more internal ports, for a total of 28 ports, to provide more bandwidth to compute nodes that use four-port expansion cards.
- Part number 00D5847 (Upgrade 2) can be applied on the base switch when you need more external Omni Ports on the switch or more internal bandwidth to the node bays. The upgrade enables the remaining six external Omni Ports from the range EXT11 through EXT22, plus 14 more internal 10 Gb ports, for a total of 28 internal ports, to provide more bandwidth to compute nodes that use four-port expansion cards.

Both 00D5845 (Upgrade 1) and 00D5847 (Upgrade 2) can be applied on the switch at the same time so that you can use six ports on an eight-port expansion card and use all the external ports on the switch. Table 4-23 shows the switch upgrades and the ports that they enable.
Table 4-23 CN4093 10 Gb Converged Scalable Switch part numbers and port upgrades

                                                                      Total ports that are enabled
Part number          Feature code a   Description                     Internal   External      External     External
                                                                      10Gb       10Gb SFP+     10Gb Omni    40Gb QSFP+
00D5823              A3HH / ESW2      Base switch (no upgrades)       14         2             6            0
00D5845              A3HL / ESU1      Add Upgrade 1                   28         2             6            2
00D5847              A3HM / ESU2      Add Upgrade 2                   28         2             12           0
00D5845 + 00D5847    A3HL / ESU1      Add both Upgrade 1              42         2             12           2
                     A3HM / ESU2      and Upgrade 2

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.


Each upgrade license enables more internal ports. To make full use of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches).
- Adding Upgrade 1 or Upgrade 2 requires a four-port Ethernet adapter (two ports of the adapter to each switch) to use all the internal ports.
- Adding both Upgrade 1 and Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch) to use all the internal ports.
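The licensing rules above can be modeled as a small lookup. The sketch below is illustrative only (the function names are ours, not from any IBM configuration tool); it reproduces the per-upgrade port counts and the matching adapter requirement described in this section.

```python
def cn4093_ports(upgrade1=False, upgrade2=False):
    """Enabled CN4093 ports for a combination of FoD upgrades.

    Base switch: 14 internal 10 GbE, 2 external SFP+, 6 Omni Ports.
    Upgrade 1 adds 14 internal ports and 2 external 40 GbE QSFP+ ports.
    Upgrade 2 adds 14 internal ports and 6 more Omni Ports.
    """
    return {
        "internal_10gb": 14 + 14 * upgrade1 + 14 * upgrade2,
        "external_sfp": 2,
        "omni": 6 + 6 * upgrade2,
        "qsfp_40gb": 2 * upgrade1,
    }

def adapter_ports_needed(upgrade1=False, upgrade2=False):
    """Ports per compute node adapter to use every internal switch port
    (each node connects the same number of ports to each of two switches)."""
    per_switch = cn4093_ports(upgrade1, upgrade2)["internal_10gb"] // 14
    return per_switch * 2

print(cn4093_ports(upgrade1=True, upgrade2=True))
print(adapter_ports_needed(upgrade1=True, upgrade2=True))  # 6
```

With both upgrades applied, 42 internal ports divided across 14 node bays gives three ports per node per switch, hence the six-port adapter requirement.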

Front panel
Figure 4-47 shows the main components of the CN4093 switch.
Figure 4-47 IBM Flex System Fabric CN4093 10 Gb Converged Scalable Switch (labeled components: 2x 10 Gb SFP+ ports (standard), 2x 40 Gb QSFP+ uplink ports (enabled with Upgrade 1), 12x Omni Ports (6 standard, 6 with Upgrade 2), switch release handles (one each side), management ports, and switch LEDs)

The front panel contains the following components:
- LEDs that show the status of the switch module and the network:
  - The OK LED indicates that the switch module passed the power-on self-test (POST) with no critical faults and is operational.
  - The blue identify LED can be used to physically identify the switch by illuminating it through the management software.
  - The error LED (switch module error) indicates that the switch module failed the POST or detected an operational fault.
- One mini-USB RS-232 console port that provides another means to configure the switch module. This mini-USB-style connector enables connection of a special serial cable. (The cable is optional and it is not included with the switch. For more information, see Table 4-24 on page 124.)
- Two external SFP+ ports for 1 Gb or 10 Gb connections to external Ethernet devices.
- Twelve external SFP+ Omni Ports for 10 Gb connections to external Ethernet devices or 4/8 Gb FC connections to external SAN devices. (1 Gb operation is not supported on Omni Ports.)
- Two external QSFP+ port connectors to attach QSFP+ modules or cables for a single 40 Gb uplink per port, or for splitting a single port into 4x 10 Gb connections to external Ethernet devices.
- A link OK LED and a Tx/Rx LED for each external port on the switch module.
- A mode LED for each pair of Omni Ports indicating the operating mode. (OFF indicates that the port pair is configured for Ethernet operation, and ON indicates that the port pair is configured for Fibre Channel operation.)


Cables and transceivers


Table 4-24 lists the supported cables and transceivers.
Table 4-24 Supported transceivers and direct-attach cables

Description                                                  Part number   Feature code (x-config / e-config)
Serial console cables
IBM Flex System Management Serial Access Cable Kit           90Y9338       A2RR / A2RR
SFP transceivers - 1 GbE (supported on two dedicated SFP+ ports)
IBM SFP RJ-45 Transceiver (does not support 10/100 Mbps)     81Y1618       3268 / EB29
IBM SFP SX Transceiver                                       81Y1622       3269 / EB2A
IBM SFP LX Transceiver                                       90Y9424       A1PN / ECB8
SFP+ transceivers - 10 GbE (supported on SFP+ ports and Omni Ports)
IBM SFP+ SR Transceiver                                      46C3447       5053 / EB28
IBM SFP+ LR Transceiver                                      90Y9412       A1PM / ECB9
10GBase-SR SFP+ (MMFiber) transceiver                        44W4408       4942 / 3382
SFP+ direct-attach cables - 10 GbE (supported on SFP+ ports and Omni Ports)
1m IBM Passive DAC SFP+                                      90Y9427       A1PH / ECB4
3m IBM Passive DAC SFP+                                      90Y9430       A1PJ / ECB5
5m IBM Passive DAC SFP+                                      90Y9433       A1PK / ECB6
QSFP+ transceiver and cables - 40 GbE (supported on QSFP+ ports)
IBM QSFP+ 40GBASE-SR Transceiver
  (requires either cable 90Y3519 or cable 90Y3521)           49Y7884       A1DR / EB27
10m IBM MTP Fiber Optical Cable (requires 49Y7884)           90Y3519       A1MM / EB2J
30m IBM MTP Fiber Optical Cable (requires 49Y7884)           90Y3521       A1MN / EB2K
QSFP+ breakout cables - 40 GbE to 4 x 10 GbE (supported on QSFP+ ports)
1m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable                         49Y7886       A1DL / EB24
3m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable                         49Y7887       A1DM / EB25
5m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable                         49Y7888       A1DN / EB26
QSFP+ direct-attach cables - 40 GbE (supported on QSFP+ ports)
1m QSFP+ to QSFP+ DAC                                        49Y7890       A1DP / EB2B
3m QSFP+ to QSFP+ DAC                                        49Y7891       A1DQ / EB2H
SFP+ transceivers - 8 Gb FC (supported on Omni Ports)
IBM 8Gb SFP+ SW Optical Transceiver                          44X1964       5075 / 3286


Features and specifications


The IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch has the following features and specifications:
- Internal ports:
  - A total of 42 internal full-duplex 10 Gigabit ports. (A total of 14 ports are enabled by default. Optional FoD licenses are required to activate the remaining 28 ports.)
  - Two internal full-duplex 1 GbE ports that are connected to the CMM.
- External ports:
  - Two ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, 10GBASE-LR, or SFP+ copper direct-attach cables (DACs)). These two ports are enabled by default. SFP+ modules and DACs are not included and must be purchased separately.
  - A total of 12 IBM Omni Ports. Each of them can operate as 10 Gb Ethernet (support for 10GBASE-SR, 10GBASE-LR, or 10 GbE SFP+ DACs) or auto-negotiate as 4/8 Gb Fibre Channel, depending on the SFP+ transceiver that is installed in the port. The first six ports are enabled by default. An optional FoD license is required to activate the remaining six ports. SFP+ modules and DACs are not included and must be purchased separately. Note: Omni Ports do not support 1 Gb Ethernet operations.
  - Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs. (Ports are disabled by default. An optional FoD license is required to activate them.) Also, you can use break-out cables to break out each 40 GbE port into four 10 GbE SFP+ connections. QSFP+ modules and DACs are not included and must be purchased separately.
  - One RS-232 serial port (mini-USB connector) that provides another means to configure the switch module.
- Scalability and performance:
  - 40 Gb Ethernet ports for extreme uplink bandwidth and performance.
  - Fixed-speed external 10 Gb Ethernet ports to use the 10 Gb core infrastructure.
  - Non-blocking architecture with wire-speed forwarding of traffic and aggregated throughput of 1.28 Tbps on Ethernet ports.
  - Media access control (MAC) address learning: automatic update, and support for up to 128,000 MAC addresses.
  - Up to 128 IP interfaces per switch.
  - Static and LACP (IEEE 802.3ad) link aggregation, up to 220 Gb of total uplink bandwidth per switch, up to 64 trunk groups, and up to 16 ports per group.
  - Support for jumbo frames (up to 9,216 bytes).
  - Broadcast/multicast storm control.
  - IGMP snooping to limit flooding of IP multicast traffic.
  - IGMP filtering to control multicast traffic for hosts that participate in multicast groups.
  - Configurable traffic distribution schemes over trunk links that are based on source/destination IP or MAC addresses, or both.
  - Fast port forwarding and fast uplink convergence for rapid STP convergence.


- Availability and redundancy:
  - Virtual Router Redundancy Protocol (VRRP) for Layer 3 router redundancy.
  - IEEE 802.1D STP for providing L2 redundancy.
  - IEEE 802.1s Multiple STP (MSTP) for topology optimization. Up to 32 STP instances are supported by a single switch.
  - IEEE 802.1w Rapid STP (RSTP) provides rapid STP convergence for critical delay-sensitive traffic, such as voice or video.
  - Per-VLAN Rapid STP (PVRST) enhancements.
  - Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes.
  - Hot Links provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off.
- VLAN support:
  - Up to 1024 VLANs supported per switch, with VLAN numbers from 1 - 4095 (4095 is used for the management module's connection only).
  - 802.1Q VLAN tagging support on all ports.
  - Private VLANs.
- Security:
  - VLAN-based, MAC-based, and IP-based access control lists (ACLs).
  - 802.1x port-based authentication.
  - Multiple user IDs and passwords.
  - User access control.
  - RADIUS, TACACS+, and LDAP authentication and authorization.
- Quality of service (QoS):
  - Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing.
  - Traffic shaping and re-marking based on defined policies.
  - Eight Weighted Round Robin (WRR) priority queues per port for processing qualified traffic.
- IPv4 Layer 3 functions:
  - Host management.
  - IP forwarding.
  - IP filtering with ACLs, with up to 896 ACLs supported.
  - VRRP for router redundancy.
  - Support for up to 128 static routes.
  - Routing protocol support (RIP v1, RIP v2, OSPF v2, and BGP-4), for up to 2048 entries in a routing table.
  - Support for DHCP Relay.
  - Support for IGMP snooping and IGMP relay.
  - Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM).


- IPv6 Layer 3 functions:
  - IPv6 host management (except for a default switch management IP address).
  - IPv6 forwarding.
  - Up to 128 static routes.
  - Support for OSPF v3 routing protocol.
  - IPv6 filtering with ACLs.
- Virtualization:
  - Virtual NICs (vNICs): Ethernet, iSCSI, or FCoE traffic is supported on vNICs.
  - Unified fabric ports (UFPs): Ethernet or FCoE traffic is supported on UFPs.
  - 802.1Qbg Edge Virtual Bridging (EVB), an emerging IEEE standard for allowing networks to become virtual machine (VM)-aware:
    - Virtual Ethernet Bridging (VEB) and Virtual Ethernet Port Aggregator (VEPA) are mechanisms for switching between VMs on the same hypervisor.
    - Edge Control Protocol (ECP) is a transport protocol that operates between two peers over an IEEE 802 LAN, providing reliable and in-order delivery of upper layer protocol data units.
    - Virtual Station Interface (VSI) Discovery and Configuration Protocol (VDP) allows centralized configuration of network policies that persist with the VM, independent of its location.
    - EVB Type-Length-Value (TLV) is used to discover and configure VEPA, ECP, and VDP.
  - VMready.
- Converged Enhanced Ethernet:
  - Priority-Based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow control to allow the switch to pause traffic that is based on the 802.1p priority value in each packet's VLAN tag.
  - Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for allocating link bandwidth that is based on the 802.1p priority value in each packet's VLAN tag.
  - Data Center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows neighboring network devices to exchange information about their capabilities.
- Fibre Channel over Ethernet (FCoE):
  - FC-BB5 FCoE specification compliant.
  - Native FC Forwarder switch operations.
  - End-to-end FCoE support (initiator to target).
  - FCoE Initialization Protocol (FIP) support.
- Fibre Channel:
  - Omni Ports support 4/8 Gb FC when FC SFP+ modules are installed in these ports.
  - Full Fabric mode for end-to-end FCoE or NPV Gateway mode for external FC SAN attachments (support for IBM B-type, Brocade, and Cisco MDS external SANs).
  - Fabric services in Full Fabric mode: Name Server, Registered State Change Notification (RSCN), login services, and zoning.

- Stacking:
  - Hybrid stacking support (from two to six EN4093/EN4093R switches with two CN4093 switches).
  - FCoE support.
  - vNIC support.
  - 802.1Qbg support.
- Manageability:
  - Simple Network Management Protocol (SNMP V1, V2, and V3).
  - HTTP browser GUI.
  - Telnet interface for CLI.
  - SSH.
  - Secure FTP (sFTP).
  - Service Location Protocol (SLP).
  - Serial interface for CLI.
  - Scriptable CLI.
  - Firmware image update (TFTP and FTP).
  - Network Time Protocol (NTP) for switch clock synchronization.
- Monitoring:
  - Switch LEDs for external port status and switch module status indication.
  - Remote Monitoring (RMON) agent to collect statistics and proactively monitor switch performance.
  - Port mirroring for analyzing network traffic that passes through a switch.
  - Change tracking and remote logging with the syslog feature.
  - Support for sFLOW agent for monitoring traffic in data networks (a separate sFLOW analyzer is required elsewhere).
  - POST diagnostic tests.

The following features are not supported by IPv6:
- Default switch management IP address
- SNMP trap host destination IP address
- Bootstrap Protocol (BOOTP) and DHCP
- RADIUS, TACACS+, and LDAP
- QoS metering and re-marking ACLs for out-profile traffic
- VMware Virtual Center (vCenter) for VMready
- Routing Information Protocol (RIP)
- Internet Group Management Protocol (IGMP)
- Border Gateway Protocol (BGP)
- Virtual Router Redundancy Protocol (VRRP)
- sFLOW

Standards supported
The switches support the following standards:
- IEEE 802.1AB Data Center Bridging Capability Exchange Protocol (DCBX)
- IEEE 802.1D Spanning Tree Protocol (STP)
- IEEE 802.1p Class of Service (CoS) prioritization
- IEEE 802.1s Multiple STP (MSTP)
- IEEE 802.1Q Tagged VLAN (frame tagging on all ports when VLANs are enabled)
- IEEE 802.1Qbg Edge Virtual Bridging
- IEEE 802.1Qbb Priority-Based Flow Control (PFC)

- IEEE 802.1Qaz Enhanced Transmission Selection (ETS)
- IEEE 802.1x port-based authentication
- IEEE 802.1w Rapid STP (RSTP)
- IEEE 802.2 Logical Link Control
- IEEE 802.3 10BASE-T Ethernet
- IEEE 802.3ab 1000BASE-T copper twisted pair Gigabit Ethernet
- IEEE 802.3ad Link Aggregation Control Protocol
- IEEE 802.3ae 10GBASE-SR short range fiber optics 10 Gb Ethernet
- IEEE 802.3ae 10GBASE-LR long range fiber optics 10 Gb Ethernet
- IEEE 802.3ba 40GBASE-SR4 short range fiber optics 40 Gb Ethernet
- IEEE 802.3ba 40GBASE-CR4 copper 40 Gb Ethernet
- IEEE 802.3u 100BASE-TX Fast Ethernet
- IEEE 802.3x Full-duplex Flow Control
- IEEE 802.3z 1000BASE-SX short range fiber optics Gigabit Ethernet
- IEEE 802.3z 1000BASE-LX long range fiber optics Gigabit Ethernet
- SFF-8431 10GSFP+Cu SFP+ Direct Attach Cable
- FC-BB-5 FCoE

For more information, see the IBM Redbooks Product Guide IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch, TIPS0910, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0910.html?Open
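The 1.28 Tbps aggregated-throughput figure quoted for the CN4093 can be reproduced from its port inventory. The sketch below is a back-of-envelope check (illustrative names, not an IBM tool); full duplex counts each direction once.

```python
# Illustrative check of the CN4093 aggregate throughput figure.
PORTS_10GBE = 42 + 2 + 12   # internal ports + external SFP+ + Omni Ports
PORTS_40GBE = 2             # external QSFP+ uplinks

def aggregate_throughput_tbps():
    """Sum of all Ethernet port speeds, doubled for full-duplex operation."""
    one_way_gbps = PORTS_10GBE * 10 + PORTS_40GBE * 40
    return one_way_gbps * 2 / 1000

print(aggregate_throughput_tbps())  # 1.28
```

That is, 56 ports at 10 Gbps plus two at 40 Gbps give 640 Gbps in each direction, or 1.28 Tbps full duplex, matching the stated non-blocking figure.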

4.11.7 IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switch
The IBM Flex System EN4093 and IBM Flex System EN4093R 10Gb Scalable Switches are 10 Gb 64-port upgradeable midrange to high-end switch modules. They offer Layer 2/3 switching designed for installation within the I/O module bays of the Enterprise Chassis. The newer EN4093R switch adds capabilities to the EN4093, namely Virtual NIC (Stacking), Unified fabric port (Stacking), Edge virtual bridging (Stacking), and CEE/FCoE (Stacking), so it is ideal for clients that are looking to implement a converged infrastructure with NAS, iSCSI, or FCoE. For FCoE implementations, the EN4093R acts as a transit switch that forwards FCoE traffic upstream to other devices, such as the Brocade VDX or Cisco Nexus 5548/5596, where the FC traffic is broken out. For a detailed function comparison, see Table 4-27 on page 135.

Each switch contains the following ports:
- Up to 42 internal 10 Gb ports
- Up to 14 external 10 Gb uplink ports (enhanced small form-factor pluggable (SFP+) connectors)
- Up to two external 40 Gb uplink ports (quad small form-factor pluggable (QSFP+) connectors)

These switches are suitable for clients with the following requirements:
- Building a 10 Gb infrastructure
- Implementing a virtualized environment
- Requiring investment protection for 40 Gb uplinks
- Wanting to reduce total cost of ownership (TCO) and improve performance while maintaining high levels of availability and security
- Wanting to avoid oversubscription (traffic from multiple internal ports that attempts to pass through a smaller number of external ports, leading to congestion and performance impact)

The EN4093 and 4093R 10Gb Scalable Switches are shown in Figure 4-48.

Figure 4-48 IBM Flex System EN4093/4093R 10 Gb Scalable Switch

As listed in Table 4-25, the switch is initially licensed with 14 internal 10 Gb ports and 10 external 10 Gb uplink ports enabled. Further ports can be enabled, including the two external 40 Gb uplink ports with Upgrade 1, and four more external SFP+ 10 Gb ports with Upgrade 2. Upgrade 1 must be applied before Upgrade 2 can be applied.
Table 4-25 IBM Flex System Fabric EN4093 10Gb Scalable Switch part numbers and port upgrades

                                                                                    Total ports that are enabled
Part number   Feature code a   Product description                                  Internal   10 Gb uplink   40 Gb uplink
49Y4270       A0TB / 3593      IBM Flex System Fabric EN4093 10Gb Scalable
                               Switch (10x external 10 Gb uplinks,
                               14x internal 10 Gb ports)                            14         10             0
05Y3309       A3J6 / ESW7      IBM Flex System Fabric EN4093R 10Gb Scalable
                               Switch (10x external 10 Gb uplinks,
                               14x internal 10 Gb ports)                            14         10             0
49Y4798       A1EL / 3596      IBM Flex System Fabric EN4093 10Gb Scalable
                               Switch (Upgrade 1) (adds 2x external 40 Gb
                               uplinks, adds 14x internal 10 Gb ports)              28         10             2
88Y6037       A1EM / 3597      IBM Flex System Fabric EN4093 10Gb Scalable
                               Switch (Upgrade 2) (requires Upgrade 1; adds 4x
                               external 10 Gb uplinks, adds 14x internal
                               10 Gb ports)                                         42         14             2

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
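Unlike the CN4093 upgrades, Upgrade 2 here requires Upgrade 1 first. A small sketch (hypothetical helper, not an IBM tool) encodes that ordering rule together with the port counts from Table 4-25:

```python
def en4093_ports(upgrade1=False, upgrade2=False):
    """Enabled EN4093/EN4093R ports for a given upgrade combination.

    The base switch ships with 14 internal and 10 external 10 Gb ports.
    Upgrade 1 adds 14 internal ports and 2 external 40 Gb uplinks;
    Upgrade 2 adds 14 internal ports and 4 external 10 Gb uplinks,
    and cannot be applied without Upgrade 1.
    """
    if upgrade2 and not upgrade1:
        raise ValueError("Upgrade 2 requires Upgrade 1 on the EN4093")
    return {
        "internal_10gb": 14 + 14 * upgrade1 + 14 * upgrade2,
        "uplink_10gb": 10 + 4 * upgrade2,
        "uplink_40gb": 2 * upgrade1,
    }

print(en4093_ports(upgrade1=True))                 # 28 internal ports
print(en4093_ports(upgrade1=True, upgrade2=True))  # fully enabled switch
```

Attempting `en4093_ports(upgrade2=True)` alone raises an error, mirroring the prerequisite stated above.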


The key components on the front of the switch are shown in Figure 4-49.
Figure 4-49 IBM Flex System EN4093/4093R 10 Gb Scalable Switch (labeled components: 14x 10 Gb SFP+ uplink ports (10 standard, 4 with Upgrade 2), 2x 40 Gb QSFP+ uplink ports (enabled with Upgrade 1), switch release handles (one either side), management ports, and switch LEDs)

Each upgrade license enables more internal ports. To make full use of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches).
- Upgrade 1 requires a four-port Ethernet adapter (two ports of the adapter to each switch).
- Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch).

Consideration: Adding Upgrade 2 enables another 14 internal ports, for a total of 42 internal ports, with three ports connected to each of the 14 compute nodes in the chassis. To make full use of all 42 internal ports, a six-port adapter is required, such as the CN4058 Adapter. Upgrade 2 still provides a benefit with a four-port adapter because this upgrade also enables an extra four external 10 Gb uplinks.

The rear of the switch has 14 SFP+ module ports and two QSFP+ module ports. The QSFP+ ports can be used to provide two 40 Gb uplinks or eight 10 Gb ports. Use one of the supported QSFP+ to 4x 10 Gb SFP+ cables that are listed in Table 4-26. This cable splits a single 40 Gb QSFP+ port into four SFP+ 10 Gb ports. The switch is designed to function with nodes that contain a 1 Gb LOM, such as the IBM Flex System x220 Compute Node. To manage the switch, a mini-USB port and an Ethernet management port are provided. The supported SFP+ and QSFP+ modules and cables for the switch are listed in Table 4-26.
Table 4-26 Supported SFP+ modules and cables

Part number   Feature code a   Description
Serial console cables
90Y9338       A2RR / A2RR      IBM Flex System Management Serial Access Cable Kit
Small form-factor pluggable (SFP) transceivers - 1 GbE
81Y1618       3268 / EB29      IBM SFP RJ-45 Transceiver (does not support 10/100 Mbps)
81Y1622       3269 / EB2A      IBM SFP SX Transceiver
90Y9424       A1PN / ECB8      IBM SFP LX Transceiver
SFP+ transceivers - 10 GbE
46C3447       5053 / None      IBM SFP+ SR Transceiver
90Y9412       A1PM / ECB9      IBM SFP+ LR Transceiver
44W4408       4942 / 3382      10GBase-SR SFP+ (MMFiber) transceiver
SFP+ Direct Attach Copper (DAC) cables - 10 GbE
90Y9427       A1PH / ECB4      1m IBM Passive DAC SFP+
90Y9430       A1PJ / ECB5      3m IBM Passive DAC SFP+
90Y9433       A1PK / ECB6      5m IBM Passive DAC SFP+
QSFP+ transceiver and cables - 40 GbE
49Y7884       A1DR / EB27      IBM QSFP+ 40GBASE-SR Transceiver (requires either cable 90Y3519 or cable 90Y3521)
90Y3519       A1MM / EB2J      10m IBM MTP Fiberoptic Cable (requires transceiver 49Y7884)
90Y3521       A1MN / EB2K      30m IBM MTP Fiberoptic Cable (requires transceiver 49Y7884)
QSFP+ breakout cables - 40 GbE to 4x10 GbE
49Y7886       A1DL / EB24      1m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable
49Y7887       A1DM / EB25      3m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable
49Y7888       A1DN / EB26      5m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable
QSFP+ Direct Attach Copper (DAC) cables - 40 GbE
49Y7890       A1DP / EB2B      1m QSFP+ to QSFP+ DAC
49Y7891       A1DQ / EB2H      3m QSFP+ to QSFP+ DAC

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

The EN4093/4093R 10Gb Scalable Switch has the following features and specifications:
- Internal ports:
  - A total of 42 internal full-duplex 10 Gigabit ports (14 ports are enabled by default). Optional FoD licenses are required to activate the remaining 28 ports.
  - Two internal full-duplex 1 GbE ports that are connected to the chassis management module.
- External ports:
  - A total of 14 ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC cables. A total of 10 ports are enabled by default. An optional FoD license is required to activate the remaining four ports. SFP+ modules and DAC cables are not included and must be purchased separately.
  - Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs (ports are disabled by default; an optional FoD license is required to activate them). QSFP+ modules and DAC cables are not included and must be purchased separately.

  - One RS-232 serial port (mini-USB connector) that provides another means to configure the switch module.
- Scalability and performance:
  - 40 Gb Ethernet ports for extreme uplink bandwidth and performance.
  - Fixed-speed external 10 Gb Ethernet ports to take advantage of 10 Gb core infrastructure.
  - Autosensing 10/100/1000 external Gigabit Ethernet ports for bandwidth optimization.
  - Non-blocking architecture with wire-speed forwarding of traffic and aggregated throughput of 1.28 Tbps.
  - Media Access Control (MAC) address learning: automatic update, support of up to 128,000 MAC addresses.
  - Up to 128 IP interfaces per switch.
  - Static and Link Aggregation Control Protocol (LACP) (IEEE 802.3ad) link aggregation: up to 220 Gb of total uplink bandwidth per switch, up to 64 trunk groups, up to 16 ports per group.
  - Support for jumbo frames (up to 9,216 bytes).
  - Broadcast/multicast storm control.
  - Internet Group Management Protocol (IGMP) snooping to limit flooding of IP multicast traffic.
  - IGMP filtering to control multicast traffic for hosts that participate in multicast groups.
  - Configurable traffic distribution schemes over trunk links that are based on source/destination IP or MAC addresses, or both.
  - Fast port forwarding and fast uplink convergence for rapid STP convergence.
- Availability and redundancy:
  - Virtual Router Redundancy Protocol (VRRP) for Layer 3 router redundancy.
  - IEEE 802.1D Spanning Tree Protocol (STP) for providing L2 redundancy.
  - IEEE 802.1s Multiple STP (MSTP) for topology optimization; up to 32 STP instances are supported by a single switch.
  - IEEE 802.1w Rapid STP (RSTP), which provides rapid STP convergence for critical delay-sensitive traffic such as voice or video.
  - Rapid Per-VLAN STP (RPVST) enhancements.
  - Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes.
  - Hot Links, which provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off.
- Virtual local area network (VLAN) support:
  - Up to 1024 VLANs supported per switch, with VLAN numbers that range from 1 to 4095 (4095 is used for the management module's connection only).
  - 802.1Q VLAN tagging support on all ports.
  - Private VLANs.
- Security:
  - VLAN-based, MAC-based, and IP-based access control lists (ACLs)
  - 802.1x port-based authentication
  - Multiple user IDs and passwords
  - User access control
  - RADIUS, TACACS+, and LDAP authentication and authorization
- Quality of service (QoS):
  - Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing.
  - Traffic shaping and remarking based on defined policies.
  - Eight weighted round robin (WRR) priority queues per port for processing qualified traffic.
- IP v4 Layer 3 functions:
  - Host management
  - IP forwarding
  - IP filtering with ACLs; up to 896 ACLs supported
  - VRRP for router redundancy
  - Support for up to 128 static routes
  - Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4); up to 2048 entries in a routing table
  - Support for Dynamic Host Configuration Protocol (DHCP) Relay
  - Support for IGMP snooping and IGMP relay
  - Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM)
  - 802.1Qbg support
- IP v6 Layer 3 functions:
  - IPv6 host management (except default switch management IP address)
  - IPv6 forwarding
  - Up to 128 static routes
  - Support for OSPF v3 routing protocol
  - IPv6 filtering with ACLs

- Virtualization:
  - Virtual Fabric with virtual network interface card (vNIC)
  - 802.1Qbg Edge Virtual Bridging (EVB)
  - IBM VMready
- Converged Enhanced Ethernet:
  - Priority-based Flow Control (PFC) (IEEE 802.1Qbb) extends the 802.3x standard flow control to allow the switch to pause traffic. This function is based on the 802.1p priority value in each packet's VLAN tag.
  - Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for allocating link bandwidth that is based on the 802.1p priority value in each packet's VLAN tag.
  - Data Center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows neighboring network devices to exchange information about their capabilities.
- Manageability:
  - Simple Network Management Protocol (SNMP V1, V2, and V3)
  - HTTP browser GUI
  - Telnet interface for CLI
  - Secure Shell (SSH)
  - Serial interface for CLI
  - Scriptable CLI
  - Firmware image update: Trivial File Transfer Protocol (TFTP) and File Transfer Protocol (FTP)
  - Network Time Protocol (NTP) for switch clock synchronization
- Monitoring:
  - Switch LEDs for external port status and switch module status indication.
  - Remote monitoring (RMON) agent to collect statistics and proactively monitor switch performance.
  - Port mirroring for analyzing network traffic that passes through the switch.
  - Change tracking and remote logging with the syslog feature.
  - Support for the sFlow agent for monitoring traffic in data networks (a separate sFlow analyzer is required elsewhere).
  - POST diagnostic procedures.
- Stacking:
  - Up to eight switches in a stack
  - FCoE support (EN4093R only)
  - vNIC support (support for FCoE on vNICs)

Table 4-27 compares the EN4093 to the EN4093R.
Table 4-27 EN4093 and EN4093R supported features

Feature                              EN4093   EN4093R
Layer 2 switching                    Yes      Yes
Layer 3 switching                    Yes      Yes
Switch stacking                      Yes      Yes
Virtual NIC (stand-alone)            Yes      Yes
Virtual NIC (stacking)               Yes      Yes
Unified Fabric Port (stand-alone)    Yes      Yes
Unified Fabric Port (stacking)       No       No
Edge virtual bridging (stand-alone)  Yes      Yes
Edge virtual bridging (stacking)     Yes      Yes
CEE/FCoE (stand-alone)               Yes      Yes
CEE/FCoE (stacking)                  No       Yes

Both the EN4093 and EN4093R support vNIC + FCoE and 802.1Qbg + FCoE in stand-alone mode (without stacking). The EN4093R also supports vNIC + FCoE with stacking, or 802.1Qbg + FCoE with stacking. For more information, see the IBM Redbooks Product Guide IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switches, TIPS0864, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0864.html?Open
4.11.8 IBM Flex System Fabric SI4093 System Interconnect Module


The IBM Flex System Fabric SI4093 System Interconnect Module enables simplified integration of IBM Flex System into your existing networking infrastructure. The SI4093 System Interconnect Module requires no management for most data center environments, which eliminates the need to configure each networking device or individual ports, thus reducing the number of management points. It provides a low latency, loop-free interface that does not rely upon spanning tree protocols, thus removing one of the greatest deployment and management complexities of a traditional switch. The SI4093 System Interconnect Module offers administrators a simplified deployment experience while maintaining the performance of intra-chassis connectivity. The SI4093 System Interconnect Module is shown in Figure 4-50.

Figure 4-50 IBM Flex System Fabric SI4093 System Interconnect Module

The SI4093 System Interconnect Module ships with 14 internal 10 Gb ports and 10 external 10 Gb SFP+ uplink ports enabled. More ports can be enabled with license upgrades: Upgrade 1 adds 14 internal ports and two external 40 Gb QSFP+ uplink ports, and Upgrade 2 adds another 14 internal ports and four external 10 Gb SFP+ ports. Upgrade 1 must be applied before Upgrade 2 can be applied. The key components on the front of the module are shown in Figure 4-51.
Figure 4-51 IBM Flex System Fabric SI4093 System Interconnect Module. Callouts: 14x 10 Gb SFP+ uplink ports (10 standard, 4 more with Upgrade 2); 2x 40 Gb QSFP+ uplink ports (enabled with Upgrade 1); module release handles (one on either side); management ports; module LEDs.

Table 4-28 shows the part numbers for ordering the switches and the upgrades.
Table 4-28 Ordering information

Description                                                Part number   Feature code (x-config / e-config)

Interconnect module
IBM Flex System Fabric SI4093 System Interconnect Module   95Y3313       A45T / ESWA

Features on Demand upgrades
SI4093 System Interconnect Module (Upgrade 1)              95Y3318       A45U / ESW8
SI4093 System Interconnect Module (Upgrade 2)              95Y3320       A45V / ESW9

Important: SFP and SFP+ (small form-factor pluggable plus) transceivers and cables are not included with the module. They must be ordered separately. For more information, see Table 4-30 on page 138.

The following base module and upgrades are available:
- Part number 95Y3313 is the physical device. It includes 14 internal 10 Gb ports enabled (one to each node bay) and 10 external 10 Gb ports enabled for connectivity to an upstream network, plus external servers and storage. All external 10 Gb ports are SFP+ based connections.
- Part number 95Y3318 (Upgrade 1) can be applied on the base interconnect module to make full use of four-port adapters that are installed in each compute node. This upgrade enables 14 more internal ports, for a total of 28 ports. The upgrade also enables two 40 Gb uplinks with QSFP+ connectors. These QSFP+ ports can also be converted to four 10 Gb SFP+ DAC connections by using the appropriate fan-out cable. This upgrade requires the base interconnect module.
- Part number 95Y3320 (Upgrade 2) can be applied on top of Upgrade 1 when you want more uplink bandwidth on the interconnect module, or more internal bandwidth to the compute nodes with adapters capable of supporting six ports (such as the CN4058). The upgrade enables the remaining four external 10 Gb uplinks with SFP+ connectors, plus 14 more internal 10 Gb ports, for a total of 42 ports (three to each compute node).

Table 4-29 lists the supported port combinations on the interconnect module and the required upgrades.
Table 4-29 Supported port combinations (quantity of each part required)

Supported port combination                                          95Y3313 (base)   95Y3318 (Upgrade 1)   95Y3320 (Upgrade 2)
14x internal 10 GbE, 10x external 10 GbE                            1                0                     0
28x internal 10 GbE, 10x external 10 GbE, 2x external 40 GbE        1                1                     0
42x internal 10 GbE (a), 14x external 10 GbE, 2x external 40 GbE    1                1                     1

a. This configuration uses six of the eight ports on the CN4058 adapter that are available for IBM Power Systems compute nodes.

Supported cables and transceivers


Table 4-30 lists the supported cables and transceivers.
Table 4-30 Supported transceivers and direct-attach cables

Description                                                Part number   Feature code (x-config / e-config)

Serial console cables
IBM Flex System Management Serial Access Cable Kit         90Y9338       A2RR / None

SFP transceivers - 1 GbE
IBM SFP RJ-45 Transceiver (does not support 10/100 Mbps)   81Y1618       3268 / EB29
IBM SFP SX Transceiver                                     81Y1622       3269 / EB2A
IBM SFP LX Transceiver                                     90Y9424       A1PN / ECB8

SFP+ transceivers - 10 GbE
IBM SFP+ SR Transceiver                                    46C3447       5053 / EB28
IBM SFP+ LR Transceiver                                    90Y9412       A1PM / ECB9
10GBase-SR SFP+ (MMFiber) transceiver                      44W4408       4942 / 3282

SFP+ direct-attach cables - 10 GbE
1m IBM Passive DAC SFP+                                    90Y9427       A1PH / ECB4
3m IBM Passive DAC SFP+                                    90Y9430       A1PJ / ECB5
5m IBM Passive DAC SFP+                                    90Y9433       A1PK / ECB6

QSFP+ transceiver and cables - 40 GbE
IBM QSFP+ 40GBASE-SR Transceiver (requires cable 90Y3519 or 90Y3521)   49Y7884   A1DR / EB27
10m IBM MTP Fiber Optic Cable (requires transceiver 49Y7884)           90Y3519   A1MM / EB2J
30m IBM MTP Fiber Optic Cable (requires transceiver 49Y7884)           90Y3521   A1MN / EB2K

QSFP+ breakout cables - 40 GbE to 4x10 GbE
1m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable                       49Y7886       A1DL / EB24
3m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable                       49Y7887       A1DM / EB25
5m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable                       49Y7888       A1DN / EB26

QSFP+ direct-attach cables - 40 GbE
1m QSFP+ to QSFP+ DAC                                      49Y7890       A1DP / EB2B
3m QSFP+ to QSFP+ DAC                                      49Y7891       A1DQ / EB2H

With the flexibility of the interconnect module, you can make full use of the technologies that are required for the following environments:
- For 1 GbE links, you can use SFP transceivers plus RJ-45 cables or LC-to-LC fiber cables, depending on the transceiver.
- For 10 GbE, you can use direct-attach cables (DAC, also known as Twinax), which come in lengths of 1 - 5 m. These DACs are a cost-effective and low-power alternative to transceivers, and are ideal for all 10 Gb Ethernet connectivity within the rack, or even connecting to an adjacent rack. For longer distances, there is a choice of SFP+ transceivers (SR or LR) plus LC-to-LC fiber optic cables.
- For 40 Gb links, you can use QSFP+ to QSFP+ cables up to 3 m, or QSFP+ transceivers and MTP cables for longer distances. You also can break out the 40 Gb ports into four 10 GbE SFP+ DAC connections by using breakout cables.
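As a rough decision aid, the media guidance above can be captured in a small helper. This is an illustrative sketch, not an IBM tool; the length cutoffs simply reflect the cable options listed in Table 4-30.

```python
def suggest_media(speed_gb: int, distance_m: float) -> str:
    """Pick a link option per the guidance above (illustrative only)."""
    if speed_gb == 1:
        # SFP transceivers: RJ-45 for copper, SX/LX for fiber.
        return "SFP transceiver + RJ-45 or LC-to-LC fiber cable"
    if speed_gb == 10:
        # Passive DACs are listed in 1 m, 3 m, and 5 m lengths.
        if distance_m <= 5:
            return "SFP+ direct-attach cable (DAC)"
        return "SFP+ SR/LR transceiver + LC-to-LC fiber cable"
    if speed_gb == 40:
        # QSFP+-to-QSFP+ DACs are listed up to 3 m.
        if distance_m <= 3:
            return "QSFP+ to QSFP+ DAC"
        return "QSFP+ SR transceiver + MTP fiber cable"
    raise ValueError("unsupported link speed")
```

For example, `suggest_media(10, 3)` recommends a passive DAC for an in-rack 10 GbE run, while a 10 m 40 GbE link falls back to a QSFP+ transceiver with MTP fiber.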

Features and specifications


The SI4093 System Interconnect Module includes the following features and specifications:
- Modes of operation:
  - Transparent (or VLAN-agnostic) mode: In VLAN-agnostic mode (the default configuration), the SI4093 transparently forwards VLAN-tagged frames without filtering on the customer VLAN tag, which provides an end-host view to the upstream network. The interconnect module provides traffic consolidation in the chassis to minimize top-of-rack (TOR) port usage, and it enables server-to-server communication for optimum performance (for example, vMotion). It can be connected to an FCoE transit switch or FCoE gateway (FC Forwarder) device.
  - Local Domain (or VLAN-aware) mode: In VLAN-aware mode (an optional configuration), the SI4093 provides more security for multi-tenant environments by extending client VLAN traffic isolation to the interconnect module and its uplinks. VLAN-based access control lists (ACLs) can be configured on the SI4093. When FCoE is used, the SI4093 operates as an FCoE transit switch, and it must be connected to the FCF device.

- Internal ports:
  - A total of 42 internal full-duplex 10 Gigabit ports (14 ports are enabled by default; optional FoD licenses are required to activate the remaining 28 ports).
  - Two internal full-duplex 1 GbE ports that are connected to the chassis management module.
- External ports:
  - A total of 14 ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ copper direct-attach cables (DACs). A total of 10 ports are enabled by default. An optional FoD license is required to activate the remaining four ports. SFP+ modules and DACs are not included and must be purchased separately.
  - Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs. (Ports are disabled by default. An optional FoD license is required to activate them.) QSFP+ modules and DACs are not included and must be purchased separately.
  - One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module.
- Scalability and performance:
  - 40 Gb Ethernet ports for extreme uplink bandwidth and performance.
  - External 10 Gb Ethernet ports to use 10 Gb upstream infrastructure.
  - Non-blocking architecture with wire-speed forwarding of traffic and aggregated throughput of 1.28 Tbps.
  - Media access control (MAC) address learning: automatic update, support for up to 128,000 MAC addresses.
  - Static and LACP (IEEE 802.3ad) link aggregation, up to 220 Gb of total uplink bandwidth per interconnect module.
  - Support for jumbo frames (up to 9,216 bytes).
- Availability and redundancy:
  - Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes.
  - Built-in link redundancy with loop prevention, without a need for the Spanning Tree protocol.
- VLAN support:
  - Up to 32 VLANs supported per interconnect module SPAR partition, with VLAN numbers 1 - 4095 (4095 is used for the management module's connection only).
  - 802.1Q VLAN tagging support on all ports.
- Security:
  - VLAN-based access control lists (ACLs) (VLAN-aware mode)
  - Multiple user IDs and passwords
  - User access control
  - RADIUS, TACACS+, and LDAP authentication and authorization
- Quality of service (QoS):
  - Support for IEEE 802.1p traffic classification and processing.
- Virtualization:
  - Switch Independent Virtual NIC (vNIC2): Ethernet, iSCSI, or FCoE traffic is supported on vNICs.

- Switch partitioning (SPAR):
  - SPAR forms separate virtual switching contexts by segmenting the data plane of the switch. Data plane traffic is not shared between SPARs on the same switch.
  - SPAR operates as a Layer 2 broadcast network. Hosts on the same VLAN that are attached to a SPAR can communicate with each other and with the upstream switch. Hosts on the same VLAN but attached to different SPARs communicate through the upstream switch.
  - SPAR is implemented as a dedicated VLAN with a set of internal server ports and a single uplink port or link aggregation (LAG). Multiple uplink ports or LAGs are not allowed in a SPAR. A port can be a member of only one SPAR.
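The SPAR membership rules can be expressed as a short validation sketch. The function and data layout are hypothetical (our own, not part of any IBM interface); the rules encoded are the two above: exactly one uplink port or LAG per SPAR, and no port in more than one SPAR.

```python
# Checks the SPAR rules described above. Illustrative only.

def validate_spars(spars):
    """spars: iterable of dicts, e.g.
    {"name": "SPAR1", "server_ports": ["INTA1"], "uplinks": ["EXT1"]}"""
    owner = {}  # port -> SPAR that already claimed it
    for spar in spars:
        # Rule 1: a SPAR has exactly one uplink port or LAG.
        if len(spar["uplinks"]) != 1:
            raise ValueError(f"{spar['name']}: exactly one uplink port or LAG is required")
        # Rule 2: a port belongs to at most one SPAR.
        for port in list(spar["server_ports"]) + list(spar["uplinks"]):
            if port in owner:
                raise ValueError(f"port {port} is already a member of {owner[port]}")
            owner[port] = spar["name"]
    return True
```

A configuration that assigns the same internal port to two SPARs, or gives one SPAR two uplinks, raises an error; a clean configuration returns True.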

- Converged Enhanced Ethernet:
  - Priority-Based Flow Control (PFC) (IEEE 802.1Qbb) extends the 802.3x standard flow control to allow the switch to pause traffic based on the 802.1p priority value in each packet's VLAN tag.
  - Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for allocating link bandwidth based on the 802.1p priority value in each packet's VLAN tag.
  - Data Center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows neighboring network devices to exchange information about their capabilities.
- Fibre Channel over Ethernet (FCoE):
  - FC-BB5 FCoE specification compliant.
  - FCoE transit switch operations.
  - FCoE Initialization Protocol (FIP) support.
- Manageability:
  - IPv4 and IPv6 host management.
  - Simple Network Management Protocol (SNMP V1, V2, and V3).
  - Industry standard command-line interface (IS-CLI) through Telnet, SSH, and serial port.
  - Secure FTP (sFTP).
  - Service Location Protocol (SLP).
  - Firmware image update (TFTP and FTP/sFTP).
  - Network Time Protocol (NTP) for clock synchronization.
  - IBM System Networking Switch Center (SNSC) support.
- Monitoring:
  - Switch LEDs for external port status and switch module status indication.
  - Change tracking and remote logging with the syslog feature.
  - POST diagnostic tests.

Supported standards
The switches support the following standards:
- IEEE 802.1AB Data Center Bridging Capability Exchange Protocol (DCBX)
- IEEE 802.1p Class of Service (CoS) prioritization
- IEEE 802.1Q Tagged VLAN (frame tagging on all ports when VLANs are enabled)
- IEEE 802.1Qbb Priority-Based Flow Control (PFC)
- IEEE 802.1Qaz Enhanced Transmission Selection (ETS)
- IEEE 802.3 10BASE-T Ethernet
- IEEE 802.3ab 1000BASE-T copper twisted pair Gigabit Ethernet
- IEEE 802.3ad Link Aggregation Control Protocol
- IEEE 802.3ae 10GBASE-SR short range fiber optics 10 Gb Ethernet
- IEEE 802.3ae 10GBASE-LR long range fiber optics 10 Gb Ethernet
- IEEE 802.3ba 40GBASE-SR4 short range fiber optics 40 Gb Ethernet
- IEEE 802.3ba 40GBASE-CR4 copper 40 Gb Ethernet
- IEEE 802.3u 100BASE-TX Fast Ethernet
- IEEE 802.3x Full-duplex Flow Control
- IEEE 802.3z 1000BASE-SX short range fiber optics Gigabit Ethernet
- IEEE 802.3z 1000BASE-LX long range fiber optics Gigabit Ethernet
- SFF-8431 10GSFP+Cu SFP+ Direct Attach Cable

For more information, see the IBM Redbooks Product Guide IBM Flex System Fabric SI4093 System Interconnect Module, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0864.html?Open

4.11.9 IBM Flex System EN4091 10Gb Ethernet Pass-thru Module


The EN4091 10Gb Ethernet Pass-thru Module offers a one-for-one connection between a single node bay and an I/O module uplink. It has no management interface and can support both 1 Gb and 10 Gb dual-port adapters that are installed in the compute nodes. If quad-port adapters are installed in the compute nodes, only the first two ports have access to the pass-through module's ports. The necessary 1 GbE or 10 GbE module (SFP, SFP+, or DAC) must also be installed in the external ports of the pass-through. This configuration supports the speed (1 Gb or 10 Gb) and medium (fiber optic or copper) for adapter ports on the compute nodes. The IBM Flex System EN4091 10Gb Ethernet Pass-thru Module is shown in Figure 4-52.

Figure 4-52 IBM Flex System EN4091 10Gb Ethernet Pass-thru Module
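The one-for-one mapping described above can be sketched in a few lines. This is our own illustrative model (the numbering convention is hypothetical), but it makes the adapter-port limitation concrete: with at most two pass-thru modules per chassis, only the first two ports of any adapter can reach an external port.

```python
# Sketch of EN4091 pass-thru connectivity: each of the 14 internal ports maps
# one-for-one to an external port, and a node's adapter port N is carried only
# by I/O module N. Illustrative model, not an IBM tool.

def usable_adapter_ports(adapter_ports: int, passthru_modules: int = 2) -> list:
    """Adapter ports that have a path to an external port through pass-thru modules."""
    return list(range(1, min(adapter_ports, passthru_modules) + 1))


print(usable_adapter_ports(4))  # prints [1, 2]: ports 3 and 4 have no path
```

This mirrors the Consideration later in this section: a four-port adapter paired with pass-thru modules leaves ports 3 and 4 unconnected.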

The ordering part number and feature codes are listed in Table 4-31.
Table 4-31 EN4091 10Gb Ethernet Pass-thru Module part number and feature codes

Part number   Feature code (a)   Product name
88Y6043       A1QV / 3700        IBM Flex System EN4091 10Gb Ethernet Pass-thru

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

The EN4091 10Gb Ethernet Pass-thru Module includes the following specifications:
- Internal ports: 14 internal full-duplex Ethernet ports that can operate at 1 Gb or 10 Gb speeds.
- External ports: 14 ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DACs. SFP+ modules and DAC cables are not included, and must be purchased separately.
- Unmanaged device that has no internal Ethernet management port. However, it can provide its VPD to the secure management network in the CMM.
- Supports 10 Gb Ethernet signaling for CEE, FCoE, and other Ethernet-based transport protocols.
- Allows direct connection from the 10 Gb Ethernet adapters that are installed in compute nodes in a chassis to an externally located top-of-rack switch or other external device.

Consideration: The EN4091 10Gb Ethernet Pass-thru Module has only 14 internal ports. As a result, only two ports on each compute node are enabled, one for each of two pass-through modules that are installed in the chassis. If four-port adapters are installed in the compute nodes, ports 3 and 4 on those adapters are not enabled.

There are three standard I/O module status LEDs, as shown in Figure 4-42 on page 112. Each port has link and activity LEDs. Table 4-32 lists the supported transceivers and DAC cables.
Table 4-32 IBM Flex System EN4091 10Gb Ethernet Pass-thru Module supported transceivers and DAC cables

Part number   Feature codes (a)   Description

SFP+ transceivers - 10 GbE
44W4408       4942 / 3282         10 GbE 850 nm Fibre Channel SFP+ Transceiver (SR)
46C3447       5053 / None         IBM SFP+ SR Transceiver
90Y9412       A1PM / None         IBM SFP+ LR Transceiver

SFP transceivers - 1 GbE
81Y1622       3269 / EB2A         IBM SFP SX Transceiver
81Y1618       3268 / EB29         IBM SFP RJ45 Transceiver
90Y9424       A1PN / None         IBM SFP LX Transceiver

Direct-attach copper (DAC) cables
81Y8295       A18M / EN01         1m 10GE Twinax Act Copper SFP+ DAC (active)
81Y8296       A18N / EN02         3m 10GE Twinax Act Copper SFP+ DAC (active)
81Y8297       A18P / EN03         5m 10GE Twinax Act Copper SFP+ DAC (active)
95Y0323       A25A / None         1m IBM Active DAC SFP+ Cable
95Y0326       A25B / None         3m IBM Active DAC SFP+ Cable
95Y0329       A25C / None         5m IBM Active DAC SFP+ Cable

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

For more information, see the IBM Redbooks Product Guide IBM Flex System EN4091 10Gb Ethernet Pass-thru Module, TIPS0865, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0865.html?Open

4.11.10 IBM Flex System EN2092 1Gb Ethernet Scalable Switch


The EN2092 1Gb Ethernet Switch provides support for L2/L3 switching and routing. The switch includes the following ports:
- Up to 28 internal 1 Gb ports
- Up to 20 external 1 Gb ports (RJ45 connectors)
- Up to four external 10 Gb uplink ports (SFP+ connectors)

The switch is shown in Figure 4-53.

Figure 4-53 IBM Flex System EN2092 1Gb Ethernet Scalable Switch

As listed in Table 4-33, the switch comes standard with 14 internal and 10 external Gigabit Ethernet ports enabled. Further ports can be enabled, including the four external 10 Gb uplink ports. Upgrade 1 and the 10 Gb Uplinks upgrade can be applied in either order.
Table 4-33 IBM Flex System EN2092 1Gb Ethernet Scalable Switch part numbers and port upgrades

Part number   Feature code (a)   Product description
49Y4294       A0TF / 3598        IBM Flex System EN2092 1Gb Ethernet Scalable Switch: 14 internal 1 Gb ports; 10 external 1 Gb ports
90Y3562       A1QW / 3594        IBM Flex System EN2092 1Gb Ethernet Scalable Switch (Upgrade 1): adds 14 internal 1 Gb ports; adds 10 external 1 Gb ports
49Y4298       A1EN / 3599        IBM Flex System EN2092 1Gb Ethernet Scalable Switch (10 Gb Uplinks): adds 4 external 10 Gb uplinks

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

The key components on the front of the switch are shown in Figure 4-54.
Figure 4-54 IBM Flex System EN2092 1Gb Ethernet Scalable Switch. Callouts: 20x external 1 Gb RJ45 ports (10 standard, 10 more with Upgrade 1); 4x 10 Gb SFP+ uplink ports (enabled with the Uplinks upgrade); management port; switch LEDs.
The standard switch has 14 internal ports, and the Upgrade 1 license enables 14 more internal ports. To make full use of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter in each compute node (one port of the adapter goes to each of two switches).
- Upgrade 1 requires a four-port Ethernet adapter in each compute node (two ports of the adapter go to each switch).

The standard switch also has 10 external ports enabled. More external ports are enabled with the following license upgrades:
- Upgrade 1 enables 10 more ports, for a total of 20 ports.
- The Uplinks Upgrade enables the four 10 Gb SFP+ ports.

These upgrades can be installed in either order.

This switch is considered ideal for clients with the following characteristics:
- Still use 1 Gb as their networking infrastructure.
- Are deploying virtualization and require multiple 1 Gb ports.
- Want investment protection for 10 Gb uplinks.
- Are looking to reduce TCO and improve performance, while maintaining high levels of availability and security.
- Are looking to avoid oversubscription (multiple internal ports that attempt to pass through a lower quantity of external ports, which leads to congestion and affects performance).

The switch has three switch status LEDs (see Figure 4-42 on page 112) and one mini-USB serial port connector for console management. Ports 1 - 20 are RJ45, and the four 10 Gb uplink ports are SFP+. The switch supports either SFP+ modules or DAC cables. The supported SFP+ modules and DAC cables for the switch are listed in Table 4-34.
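The oversubscription point above can be quantified by comparing aggregate internal bandwidth to aggregate uplink bandwidth. The calculation below is illustrative (our own helper, using the EN2092 port counts from this section):

```python
def oversubscription_ratio(internal_gbps: float, uplink_gbps: float) -> float:
    """Internal-to-uplink bandwidth ratio; values above 1.0 indicate oversubscription."""
    return internal_gbps / uplink_gbps


# Fully upgraded EN2092: 28 internal 1 Gb ports versus
# 20 external 1 Gb ports plus four 10 Gb uplinks.
ratio = oversubscription_ratio(28 * 1, 20 * 1 + 4 * 10)
print(f"{ratio:.2f}")  # prints 0.47
```

With all upgrades applied, the external bandwidth (60 Gb) exceeds the internal bandwidth (28 Gb), so the switch can run without oversubscription.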
Table 4-34 IBM Flex System EN2092 1Gb Ethernet Scalable Switch SFP+ and DAC cables

Part number   Feature code (a)   Description

SFP transceivers
81Y1622       3269 / EB2A        IBM SFP SX Transceiver
81Y1618       3268 / EB29        IBM SFP RJ45 Transceiver
90Y9424       A1PN / None        IBM SFP LX Transceiver

SFP+ transceivers
44W4408       4942 / 3282        10 GbE 850 nm Fibre Channel SFP+ Transceiver (SR)
46C3447       5053 / None        IBM SFP+ SR Transceiver
90Y9412       A1PM / None        IBM SFP+ LR Transceiver

DAC cables
90Y9427       A1PH / None        1m IBM Passive DAC SFP+
90Y9430       A1PJ / ECB5        3m IBM Passive DAC SFP+
90Y9433       A1PK / None        5m IBM Passive DAC SFP+

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

The EN2092 1 Gb Ethernet Scalable Switch includes the following features and specifications:
- Internal ports:
  - A total of 28 internal full-duplex Gigabit ports; 14 ports are enabled by default. An optional FoD license is required to activate another 14 ports.
  - Two internal full-duplex 1 GbE ports that are connected to the chassis management module.
- External ports:
  - Four ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DACs. These ports are disabled by default. An optional FoD license is required to activate them. SFP+ modules are not included and must be purchased separately.
  - A total of 20 external 10/100/1000 1000BASE-T Gigabit Ethernet ports with RJ-45 connectors; 10 ports are enabled by default. An optional FoD license is required to activate another 10 ports.
  - One RS-232 serial port (mini-USB connector) that provides another means to configure the switch module.
- Scalability and performance:
  - Fixed-speed external 10 Gb Ethernet ports for maximum uplink bandwidth
  - Autosensing 10/100/1000 external Gigabit Ethernet ports for bandwidth optimization
  - Non-blocking architecture with wire-speed forwarding of traffic
  - MAC address learning: automatic update, support of up to 32,000 MAC addresses
  - Up to 128 IP interfaces per switch
  - Static and LACP (IEEE 802.3ad) link aggregation, up to 60 Gb of total uplink bandwidth per switch, up to 64 trunk groups, up to 16 ports per group
  - Support for jumbo frames (up to 9,216 bytes)

146

IBM PureFlex System and IBM Flex System Products and Technology

- Broadcast/multicast storm control
- IGMP snooping to limit flooding of IP multicast traffic
- IGMP filtering to control multicast traffic for hosts that participate in multicast groups
- Configurable traffic distribution schemes over trunk links that are based on source/destination IP or MAC addresses, or both
- Fast port forwarding and fast uplink convergence for rapid STP convergence

Availability and redundancy:
- VRRP for Layer 3 router redundancy
- IEEE 802.1D STP for providing L2 redundancy
- IEEE 802.1s MSTP for topology optimization, up to 32 STP instances supported by a single switch
- IEEE 802.1w RSTP (provides rapid STP convergence for critical delay-sensitive traffic such as voice or video)
- RPVST enhancements
- Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes
- Hot Links provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off

VLAN support:
- Up to 1024 VLANs supported per switch, with VLAN numbers that range from 1 to 4095 (4095 is used for the management module's connection only)
- 802.1Q VLAN tagging support on all ports
- Private VLANs

Security:
- VLAN-based, MAC-based, and IP-based ACLs
- 802.1x port-based authentication
- Multiple user IDs and passwords
- User access control
- RADIUS, TACACS+, and Lightweight Directory Access Protocol (LDAP) authentication and authorization

QoS:
- Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing
- Traffic shaping and remarking based on defined policies
- Eight WRR priority queues per port for processing qualified traffic

IPv4 Layer 3 functions:
- Host management
- IP forwarding
- IP filtering with ACLs, up to 896 ACLs supported
- VRRP for router redundancy
- Support for up to 128 static routes

Chapter 4. Chassis and infrastructure configuration

147

- Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4), up to 2048 entries in a routing table
- Support for DHCP Relay
- Support for IGMP snooping and IGMP relay
- Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM)

IPv6 Layer 3 functions:
- IPv6 host management (except default switch management IP address)
- IPv6 forwarding
- Up to 128 static routes
- Support for OSPF v3 routing protocol
- IPv6 filtering with ACLs

Virtualization:
- VMready

Manageability:
- Simple Network Management Protocol (SNMP V1, V2, and V3)
- HTTP browser GUI
- Telnet interface for CLI
- SSH
- Serial interface for CLI
- Scriptable CLI
- Firmware image update (TFTP and FTP)
- NTP for switch clock synchronization

Monitoring:
- Switch LEDs for external port status and switch module status indication
- RMON agent to collect statistics and proactively monitor switch performance
- Port mirroring for analyzing network traffic that passes through the switch
- Change tracking and remote logging with the syslog feature
- Support for the sFLOW agent for monitoring traffic in data networks (a separate sFLOW analyzer is required elsewhere)
- POST diagnostic functions

For more information, see the IBM Redbooks Product Guide IBM Flex System EN2092 1Gb Ethernet Scalable Switch, TIPS0861, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0861.html?Open

4.11.11 IBM Flex System FC5022 16Gb SAN Scalable Switch


The IBM Flex System FC5022 16Gb SAN Scalable Switch is a high-density, 48-port 16 Gbps Fibre Channel switch that is used in the Enterprise Chassis. The switch provides 28 internal ports to compute nodes by way of the midplane, and 20 external SFP+ ports. These system area network (SAN) switch modules deliver an embedded option for IBM Flex System users who deploy storage area networks in their enterprise. They offer end-to-end 16 Gb and 8 Gb connectivity. The N_Port Virtualization mode streamlines the infrastructure by reducing the number of domains to manage. It allows you to add or move servers without impact to the SAN. Monitoring is simplified by using an integrated management appliance. Clients who use an end-to-end Brocade SAN can make use of the Brocade management tools.


Figure 4-55 shows the IBM Flex System FC5022 16Gb SAN Scalable Switch.

Figure 4-55 IBM Flex System FC5022 16Gb SAN Scalable Switch

Three versions are available, as listed in Table 4-35: 12-port and 24-port switch modules and a 24-port switch with the Enterprise Switch Bundle (ESB) software. The port count can be applied to internal or external ports by using a feature that is called Dynamic Ports on Demand (DPOD). Port counts can be increased with license upgrades, as described in Port and feature upgrades on page 150.
Table 4-35 IBM Flex System FC5022 16Gb SAN Scalable Switch part numbers

Part number  Feature codes (a)  Description                                                  Ports enabled by default
88Y6374      A1EH / 3770        IBM Flex System FC5022 16Gb SAN Scalable Switch              12
00Y3324      A3DP / ESW5        IBM Flex System FC5022 24-port 16Gb SAN Scalable Switch      24
90Y9356      A1EJ / 3771        IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch  24

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

Table 4-36 provides a feature comparison between the FC5022 switch models.
Table 4-36 Feature comparison by model

Feature                          90Y9356 (24-port ESB)  00Y3324 (24-port)  88Y6374 (base)
Number of active ports           24                     24                 12
Number of SFP+ included          None                   2x 16 Gb SFP+      None
Full fabric                      Included               Included           Included
Access Gateway                   Included               Included           Included
Advanced zoning                  Included               Included           Included
Enhanced Group Management        Included               Included           Included
ISL Trunking                     Included               Optional           Not available
Adaptive Networking              Included               Not available      Not available
Advanced Performance Monitoring  Included               Not available      Not available
Fabric Watch                     Included               Optional           Not available
Extended Fabrics                 Included               Not available      Not available
Server Application Optimization  Included               Not available      Not available

The part number for the switch includes the following items:
- One IBM Flex System FC5022 16Gb SAN Scalable Switch or IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch
- Important Notices Flyer
- Warranty Flyer
- Documentation CD-ROM

The switch does not include a serial management cable. However, IBM Flex System Management Serial Access Cable 90Y9338 is supported and contains two cables: a mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable. Either cable can be used to connect to the switch locally for configuration tasks and firmware updates.

Port and feature upgrades


Table 4-37 lists the available port and feature upgrades. These are all IBM Features on Demand license upgrades.
Table 4-37 FC5022 switch upgrades

Part number  Feature codes (a)  Description                         90Y9356 (24-port ESB)  00Y3324 (24-port)  88Y6374 (base)
88Y6382      A1EP / 3772        FC5022 16Gb SAN Switch (Upgrade 1)  No                     No                 Yes
88Y6386      A1EQ / 3773        FC5022 16Gb SAN Switch (Upgrade 2)  Yes                    Yes                Yes
00Y3320      A3HN / ESW3        FC5022 16Gb Fabric Watch Upgrade    No                     Yes                Yes
00Y3322      A3HP / ESW4        FC5022 16Gb ISL/Trunking Upgrade    No                     Yes                Yes

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

With DPOD, ports are licensed as they come online. With the FC5022 16Gb SAN Scalable Switch, the first 12 ports that report (on a first-come, first-served basis) on boot are assigned licenses. These 12 ports can be any combination of external or internal Fibre Channel ports. After all the licenses are assigned, you can manually move those licenses from one port to another port. Because this process is dynamic, no defined ports are reserved except ports 0 and 29. The FC5022 16Gb ESB Switch has the same behavior. The only difference is the number of ports.
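The first-come, first-served rule above can be illustrated with a small model. This is only a toy sketch (the function and variable names are ours; the real assignment is done by switch firmware), and it assumes the two reserved ports count toward the license total:

```python
# Illustrative model of Dynamic Ports on Demand (DPOD) license assignment.
# Ports 0 and 29 are always licensed, per the text above; whether they
# count against the 12 base licenses is our assumption for this sketch.

RESERVED_PORTS = {0, 29}

def assign_dpod_licenses(online_order, license_count=12):
    """Grant licenses to the reserved ports first, then to the remaining
    ports in the order they come online, until licenses run out."""
    licensed = set(RESERVED_PORTS)
    for port in online_order:
        if len(licensed) >= license_count:
            break
        licensed.add(port)
    return licensed

# Ports coming online in some boot order on a base (12-license) switch:
boot_order = [5, 17, 3, 42, 8, 29, 11, 20, 33, 7, 14, 2, 40]
active = assign_dpod_licenses(boot_order)
print(sorted(active))   # 12 ports in total, always including 0 and 29
```

After boot, an administrator can move these licenses between ports manually, which this static model does not attempt to capture.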


Table 4-38 shows the total number of active ports on the switch after you apply compatible port upgrades.
Table 4-38 Total port counts after you apply upgrades

Ports on Demand upgrade             90Y9356 (24-port ESB)  00Y3324 (24-port)  88Y6374 (base)
Included with base switch           24                     24                 12
Upgrade 1, 88Y6382 (adds 12 ports)  Not supported          Not supported      24
Upgrade 2, 88Y6386 (adds 24 ports)  48                     48                 48
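The upgrade rules in Tables 4-37 and 4-38 can be encoded in a small helper. This is a hedged sketch (names are ours; the data comes from the tables), and it assumes that on the base switch Upgrade 2 is applied on top of Upgrade 1 to reach the 48-port total shown:

```python
# Hedged helper encoding the upgrade tables: total active ports per
# switch model after Ports on Demand upgrades.

BASE_PORTS = {"88Y6374": 12, "00Y3324": 24, "90Y9356": 24}
# Upgrade 1 (88Y6382, +12 ports) applies only to the base 88Y6374 switch;
# Upgrade 2 (88Y6386, +24 ports) applies to all three models.
UPGRADE_PORTS = {"88Y6382": 12, "88Y6386": 24}
UPGRADE1_SUPPORTED = {"88Y6374"}

def active_ports(model, upgrades=()):
    total = BASE_PORTS[model]
    for up in upgrades:
        if up == "88Y6382" and model not in UPGRADE1_SUPPORTED:
            raise ValueError("Upgrade 1 is not supported on " + model)
        total += UPGRADE_PORTS[up]
    return total

print(active_ports("88Y6374"))                          # 12
print(active_ports("88Y6374", ["88Y6382", "88Y6386"]))  # 48
print(active_ports("00Y3324", ["88Y6386"]))             # 48
```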

Transceivers
The FC5022 12-port and 24-port ESB SAN switches ship without SFP+ transceivers, which must be ordered separately to provide external connectivity. The FC5022 24-port SAN switch comes standard with two Brocade 16 Gb SFP+ transceivers; more can be ordered if required. Table 4-39 lists the supported SFP+ options.
Table 4-39 Supported SFP+ transceivers

Part number  Feature codes (a)  Description
88Y6416      5084 / 5370        Brocade 8 Gb SFP+ SW Optical Transceiver
88Y6393      A22R / 5371        Brocade 16 Gb SFP+ Optical Transceiver

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

Benefits
The switches offer the following key benefits:

Exceptional price and performance for growing SAN workloads
The FC5022 16Gb SAN Scalable Switch delivers exceptional price and performance for growing SAN workloads. It achieves this through a combination of market-leading 1,600 MBps throughput per port and an affordable high-density form factor. The 48 FC ports produce an aggregate 768 Gbps full-duplex throughput, and any eight external ports can be trunked for 128 Gbps inter-switch links (ISLs). Because 16 Gbps port technology dramatically reduces the number of ports and the associated optics and cabling required through 8/4 Gbps consolidation, the cost savings and simplification benefits are substantial.

Accelerating fabric deployment and serviceability with diagnostic ports
Diagnostic Ports (D_Ports) are a new port type that is supported by the FC5022 16Gb SAN Scalable Switch. They enable administrators to quickly identify and isolate 16 Gbps optics, port, and cable problems, which reduces fabric deployment and diagnostic times. If the optical media is found to be the source of the problem, it can be replaced transparently because 16 Gbps optics are hot-pluggable.
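The quoted throughput figures follow from simple arithmetic on the 16 Gbps line rate. A quick check (a sketch; port counts come from the text above):

```python
# Quick arithmetic behind the quoted FC5022 throughput figures
# (16 Gbps line rate per port; counts from the text above).
PORT_GBPS = 16
TOTAL_PORTS = 48
TRUNK_PORTS = 8

aggregate_gbps = PORT_GBPS * TOTAL_PORTS   # 768 Gbps full duplex
trunk_isl_gbps = PORT_GBPS * TRUNK_PORTS   # 128 Gbps per trunked ISL
print(aggregate_gbps, trunk_isl_gbps)      # 768 128
```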


A building block for virtualized, private cloud storage
The FC5022 16Gb SAN Scalable Switch supports multi-tenancy in cloud environments through VM-aware end-to-end visibility and monitoring, QoS, and fabric-based advanced zoning features. It enables secure distance extension to virtual private or hybrid clouds with dark fiber support, and offers in-flight encryption and data compression. Internal fault-tolerant and enterprise-class reliability, availability, and serviceability (RAS) features help minimize downtime to support mission-critical cloud environments.

Simplified and optimized interconnect with Brocade Access Gateway
The FC5022 16Gb SAN Scalable Switch can be deployed as a full-fabric switch or as a Brocade Access Gateway, which simplifies fabric topologies and heterogeneous fabric connectivity. Access Gateway mode uses N_Port ID Virtualization (NPIV) switch standards to present physical and virtual servers directly to the core of SAN fabrics. This configuration makes the switch transparent to the SAN fabric, which greatly reduces management of the network edge.

Maximizing investments
To help optimize technology investments, IBM offers a single point of serviceability that is backed by industry-renowned education, support, and training. In addition, the IBM 16/8 Gbps SAN Scalable Switch is in the IBM ServerProven program, which enables compatibility among various IBM and partner products. IBM recognizes that customers deserve the most innovative, expert integrated systems solutions.

Features and specifications


FC5022 16Gb SAN Scalable Switches have the following features and specifications:

Internal ports:
- 28 internal full-duplex 16 Gb FC ports (up to 14 internal ports can be activated with the Port-on-Demand feature; the remaining ports are reserved for future use)
- Internal ports operate as F_ports (fabric ports) in native mode or in access gateway mode
- Two internal full-duplex 1 GbE ports connect to the chassis management module

External ports:
- Twenty external ports for 16 Gb SFP+ or 8 Gb SFP+ transceivers that support 4 Gb, 8 Gb, and 16 Gb port speeds. SFP+ modules are not included and must be purchased separately. Ports are activated with the Port-on-Demand feature.
- External ports can operate as F_ports, FL_ports (fabric loop ports), or E_ports (expansion ports) in native mode. They can operate as N_ports (node ports) in access gateway mode.
- One external 1 GbE port (1000BASE-T) with RJ-45 connector for switch configuration and management.
- One RS-232 serial port (mini-USB connector) that provides another means to configure the switch module.

- Access gateway mode (N_Port ID Virtualization - NPIV) support.
- Power-on self-test diagnostics and status reporting.
- ISL Trunking (licensable) allows up to eight ports (at 16, 8, or 4 Gbps speeds) to be combined. These ports form a single, logical ISL with a speed of up to 128 Gbps (256 Gbps full duplex). This configuration allows for optimal bandwidth usage, automatic path failover, and load balancing.


- Brocade Fabric OS delivers distributed intelligence throughout the network and enables a wide range of value-added applications, including Brocade Advanced Web Tools and Brocade Advanced Fabric Services (on certain models).
- Supports up to 768 Gbps I/O bandwidth.
- A total of 420 million frames switched per second, 0.7 microseconds latency.
- 8,192 buffers for up to 3,750 km extended distance at 4 Gbps FC (Extended Fabrics license required).
- In-flight 64 Gbps Fibre Channel compression and decompression support on up to two external ports (no license required).
- In-flight 32 Gbps encryption and decryption on up to two external ports (no license required).
- A total of 48 Virtual Channels per port.
- Port mirroring to monitor ingress or egress traffic from any port within the switch.
- Two I2C connections able to interface with redundant management modules.
- Hot pluggable, up to four hot-pluggable switches per chassis.
- Single fuse circuit.
- Four temperature sensors.
- Managed with Brocade Web Tools.
- Supports a minimum of 128 domains in Native mode and Interoperability mode.
- Nondisruptive code load in Native mode and Access Gateway mode.
- 255 N_port logins per physical port.
- D_port support on external ports.
- Class 2 and Class 3 frames.
- SNMP v1 and v3 support.
- SSH v2 support.
- Secure Sockets Layer (SSL) support.
- NTP client support (NTP V3).
- FTP support for firmware upgrades.
- SNMP/Management Information Base (MIB) monitoring functionality that is contained within the Ethernet Control MIB-II (RFC1213-MIB).
- End-to-end optics and link validation.
- Sends switch events and syslogs to the CMM.
- Traps identify cold start, warm start, link up/link down, and authentication failure events.
- Support for IPv4 and IPv6 on the management ports.

The FC5022 16Gb SAN Scalable Switches come standard with the following software features:
- Brocade Full Fabric mode: Enables high-performance 16 Gb or 8 Gb fabric switching.
- Brocade Access Gateway mode: Uses NPIV to connect to any fabric without adding switch domains, to reduce management complexity.
- Dynamic Path Selection: Enables exchange-based load balancing across multiple Inter-Switch Links for superior performance.


- Brocade Advanced Zoning: Segments a SAN into virtual private SANs to increase security and availability.
- Brocade Enhanced Group Management: Enables centralized and simplified management of Brocade fabrics through IBM Network Advisor.

Enterprise Switch Bundle software licenses


The IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch includes a complete set of licensed features. These features maximize performance, ensure availability, and simplify management for the most demanding applications and expanding virtualization environments. This switch comes with 24 port licenses that can be applied to internal or external links. It also includes the following ESB software licenses:

Brocade Extended Fabrics
Provides up to 1000 km of switched fabric connectivity over long distances.

Brocade ISL Trunking
Allows you to aggregate multiple physical links into one logical link for enhanced network performance and fault tolerance.

Brocade Advanced Performance Monitoring
Enables performance monitoring of networked storage resources. This license includes the TopTalkers feature.

Brocade Fabric Watch
Monitors mission-critical switch operations. Fabric Watch now includes the new Port Fencing capabilities.

Adaptive Networking
Provides a rich set of capabilities to the data center or virtual server environments. It ensures that high-priority connections obtain the bandwidth necessary for optimum performance, even in congested environments. It optimizes data traffic movement within the fabric by using Ingress Rate Limiting, QoS, and Traffic Isolation Zones.

Server Application Optimization (SAO)
This license optimizes overall application performance for physical servers and virtual machines. When it is deployed with Brocade Fibre Channel host bus adapters (HBAs), SAO extends Brocade Virtual Channel technology from the fabric to the server infrastructure. This license delivers application-level, fine-grained QoS management to the HBAs and related server applications.

Supported Fibre Channel standards


The switches support the following Fibre Channel standards:
- FC-AL-2 INCITS 332: 1999
- FC-GS-5 ANSI INCITS 427 (includes FC-GS-4 ANSI INCITS 387: 2004)
- FC-IFR INCITS 1745-D, revision 1.03 (under development)
- FC-SW-4 INCITS 418: 2006
- FC-SW-3 INCITS 384: 2004
- FC-VI INCITS 357: 2002

- FC-TAPE INCITS TR-24: 1999
- FC-DA INCITS TR-36: 2004, which includes the following standards:
  - FC-FLA INCITS TR-20: 1998
  - FC-PLDA INCITS TR-19: 1998
- FC-MI-2 ANSI/INCITS TR-39-2005
- FC-PI INCITS 352: 2002
- FC-PI-2 INCITS 404: 2005
- FC-PI-4 INCITS 1647-D, revision 7.1 (under development)
- FC-PI-5 INCITS 479: 2011
- FC-FS-2 ANSI/INCITS 424: 2006 (includes FC-FS INCITS 373: 2003)
- FC-LS INCITS 433: 2007
- FC-BB-3 INCITS 414: 2006
- FC-BB-2 INCITS 372: 2003
- FC-SB-3 INCITS 374: 2003 (replaces FC-SB ANSI X3.271: 1996 and FC-SB-2 INCITS 374: 2001)
- RFC 2625 IP and ARP over FC
- RFC 2837 Fabric Element MIB
- MIB-FA INCITS TR-32: 2003
- FCP-2 INCITS 350: 2003 (replaces FCP ANSI X3.269: 1996)
- SNIA Storage Management Initiative Specification (SMI-S) Version 1.2, which includes the following standards:
  - SNIA Storage Management Initiative Specification (SMI-S) Version 1.03 ISO standard IS24775-2006 (replaces ANSI INCITS 388: 2004)
  - SNIA Storage Management Initiative Specification (SMI-S) Version 1.1.0
  - SNIA Storage Management Initiative Specification (SMI-S) Version 1.2.0

For more information, see the IBM Redbooks Product Guide IBM Flex System FC5022 16Gb SAN Scalable Switches, TIPS0870, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0870.html?Open

4.11.12 IBM Flex System FC3171 8Gb SAN Switch


The IBM Flex System FC3171 8Gb SAN Switch is a full-fabric Fibre Channel switch module. It can be converted to a pass-through module when configured in transparent mode. Figure 4-56 shows the IBM Flex System FC3171 8Gb SAN Switch.

Figure 4-56 IBM Flex System FC3171 8Gb SAN Switch


The I/O module has 14 internal ports and 6 external ports. All ports are enabled; there are no port licensing requirements. Ordering information is listed in Table 4-40.
Table 4-40 FC3171 8Gb SAN Switch

Part number  Feature codes (a)  Product name
69Y1930      A0TD / 3595        IBM Flex System FC3171 8Gb SAN Switch

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

No SFP modules and cables are supplied as standard. The ones that are listed in Table 4-41 are supported.
Table 4-41 FC3171 8Gb SAN Switch supported SFP modules and cables

Part number  Feature codes (a)  Description
44X1964      5075 / 3286        IBM 8 Gb SFP+ SW Optical Transceiver
39R6475      4804 / 3238        4 Gb SFP Transceiver Option

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

You can reconfigure the FC3171 8Gb SAN Switch to become a pass-through module by using the switch GUI or CLI. The module can then be converted back to a full-function SAN switch at some future date. The switch requires a reset when you turn transparent mode on or off.

The switch can be configured by using the following methods:

Command line
Access the switch by using the console port through the CMM or through the Ethernet port. This method requires a basic understanding of the CLI commands.

QuickTools
Requires a current version of the Java runtime environment on your workstation before you point a web browser to the switch's IP address. The IP address of the switch must be configured. QuickTools does not require a license, and the code is included.

When in Full Fabric mode, this switch provides access to all of the Fibre Channel security features, including the additional services of SSL and SSH. In addition, RADIUS servers can be used for device and user authentication. After SSL or SSH is enabled, the security features are available to be configured. Configuring security features allows the SAN administrator to control which devices are allowed to log on to the Full Fabric switch module. This process is done by creating security sets with security groups, which are configured on a per-switch basis. The security features are not available in pass-through mode.

The FC3171 8Gb SAN Switch includes the following specifications and standards:

Fibre Channel standards:
- FC-PH version 4.3
- FC-PH-2
- FC-PH-3
- FC-AL version 4.5


- FC-AL-2 Rev 7.0
- FC-FLA
- FC-GS-3
- FC-FG
- FC-PLDA
- FC-Tape
- FC-VI
- FC-SW-2
- Fibre Channel Element MIB RFC 2837
- Fibre Alliance MIB version 4.0

Fibre Channel protocols:
- Fibre Channel service classes: Class 2 and class 3
- Operation modes: Fibre Channel class 2 and class 3, connectionless

External port type:
- Full fabric mode: Generic loop port
- Transparent mode: Transparent fabric port

Internal port type:
- Full fabric mode: F_port
- Transparent mode: Transparent host port/NPIV mode
- Support for up to 44 host NPIV logins

Port characteristics:
- External ports are automatically detected and self-configuring
- Port LEDs illuminate at startup
- Number of Fibre Channel ports: 6 external ports and 14 internal ports
- Scalability: Up to 239 switches maximum, depending on your configuration
- Buffer credits: 16 buffer credits per port
- Maximum frame size: 2148 bytes (2112-byte payload)
- Standards-based FC, FC-SW2 interoperability
- Support for up to a 255-to-1 port-mapping ratio
- Media type: SFP+ module

2 Gb specifications:
- 2 Gb fabric port speed: 1.0625 or 2.125 Gbps (gigabits per second)
- 2 Gb fabric latency: Less than 0.4 msec
- 2 Gb fabric aggregate bandwidth: 80 Gbps at full duplex

4 Gb specifications:
- 4 Gb switch speed: 4.250 Gbps
- 4 Gb switch fabric point-to-point: 4 Gbps at full duplex
- 4 Gb switch fabric aggregate bandwidth: 160 Gbps at full duplex

8 Gb specifications:
- 8 Gb switch speed: 8.5 Gbps
- 8 Gb switch fabric point-to-point: 8 Gbps at full duplex
- 8 Gb switch fabric aggregate bandwidth: 320 Gbps at full duplex

Nonblocking architecture to prevent latency

System processor: IBM PowerPC

For more information, see the IBM Redbooks Product Guide IBM Flex System FC3171 8Gb SAN Switch and Pass-thru, TIPS0866, which is available at:
http://www.redbooks.ibm.com/abstracts/tips0866.html?Open
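The aggregate bandwidth figures above are consistent with the module's 20 ports (14 internal plus 6 external) counted at full duplex. A quick check (a sketch; the assumption that full duplex counts both directions is ours):

```python
# Sanity check of the FC3171 aggregate bandwidth figures, assuming
# all 20 ports (14 internal + 6 external) counted at full duplex
# (both transmit and receive directions).
PORTS = 14 + 6

def aggregate_full_duplex_gbps(port_speed_gbps):
    return PORTS * port_speed_gbps * 2

for speed in (2, 4, 8):
    print(speed, "Gb:", aggregate_full_duplex_gbps(speed), "Gbps")
# 2 Gb: 80, 4 Gb: 160, 8 Gb: 320 -- matching the specifications above
```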


4.11.13 IBM Flex System FC3171 8Gb SAN Pass-thru


The IBM Flex System FC3171 8Gb SAN Pass-thru I/O module is an 8 Gbps Fibre Channel Pass-thru SAN module. It has 14 internal ports and six external ports and is shipped with all ports enabled. Figure 4-57 shows the IBM Flex System FC3171 8 Gb SAN Pass-thru module.

Figure 4-57 IBM Flex System FC3171 8Gb SAN Pass-thru

Ordering information is listed in Table 4-42.


Table 4-42 FC3171 8Gb SAN Pass-thru part number

Part number  Feature codes (a)  Description
69Y1934      A0TJ / 3591        IBM Flex System FC3171 8Gb SAN Pass-thru

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

Exception: If you must enable full fabric capability later, do not purchase this switch. Instead, purchase the FC3171 8Gb SAN Switch.

No SFPs are supplied with the switch; they must be ordered separately. Supported transceivers and fiber optic cables are listed in Table 4-43.
Table 4-43 FC3171 8Gb SAN Pass-thru supported modules and cables

Part number  Feature codes (a)  Description
44X1964      5075 / 3286        IBM 8 Gb SFP+ SW Optical Transceiver
39R6475      4804 / 3238        4 Gb SFP Transceiver Option

The FC3171 8Gb SAN Pass-thru can be configured by using the following methods:

Command line
Access the module by using the console port through the Chassis Management Module or through the Ethernet port. This method requires a basic understanding of the CLI commands.

QuickTools
Requires a current version of the JRE on your workstation before you point a web browser to the module's IP address. The IP address of the module must be configured. QuickTools does not require a license, and the code is included.


The pass-through module supports the following standards:

Fibre Channel standards:
- FC-PH version 4.3
- FC-PH-2
- FC-PH-3
- FC-AL version 4.5
- FC-AL-2 Rev 7.0
- FC-FLA
- FC-GS-3
- FC-FG
- FC-PLDA
- FC-Tape
- FC-VI
- FC-SW-2
- Fibre Channel Element MIB RFC 2837
- Fibre Alliance MIB version 4.0

Fibre Channel protocols:
- Fibre Channel service classes: Class 2 and class 3
- Operation modes: Fibre Channel class 2 and class 3, connectionless

External port type: Transparent fabric port
Internal port type: Transparent host port/NPIV mode
- Support for up to 44 host NPIV logins

Port characteristics:
- External ports are automatically detected and self-configuring
- Port LEDs illuminate at startup
- Number of Fibre Channel ports: 6 external ports and 14 internal ports
- Scalability: Up to 239 switches maximum, depending on your configuration
- Buffer credits: 16 buffer credits per port
- Maximum frame size: 2148 bytes (2112-byte payload)
- Standards-based FC, FC-SW2 interoperability
- Support for up to a 255-to-1 port-mapping ratio
- Media type: SFP+ module

Fabric point-to-point bandwidth: 2 Gbps or 8 Gbps at full duplex

2 Gb specifications:
- 2 Gb fabric port speed: 1.0625 or 2.125 Gbps (gigabits per second)
- 2 Gb fabric latency: Less than 0.4 msec
- 2 Gb fabric aggregate bandwidth: 80 Gbps at full duplex

4 Gb specifications:
- 4 Gb switch speed: 4.250 Gbps
- 4 Gb switch fabric point-to-point: 4 Gbps at full duplex
- 4 Gb switch fabric aggregate bandwidth: 160 Gbps at full duplex

8 Gb specifications:
- 8 Gb switch speed: 8.5 Gbps
- 8 Gb switch fabric point-to-point: 8 Gbps at full duplex
- 8 Gb switch fabric aggregate bandwidth: 320 Gbps at full duplex

System processor: PowerPC
Maximum frame size: 2148 bytes (2112-byte payload)


Nonblocking architecture to prevent latency

For more information, see the IBM Redbooks Product Guide IBM Flex System FC3171 8Gb SAN Switch and Pass-thru, TIPS0866, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0866.html?Open

4.11.14 IBM Flex System IB6131 InfiniBand Switch


IBM Flex System IB6131 InfiniBand Switch is a 32-port InfiniBand switch. It has 18 FDR/QDR (56/40 Gbps) external ports and 14 FDR/QDR (56/40 Gbps) internal ports for connections to nodes. This switch ships standard with quad data rate (QDR) support and can be upgraded to fourteen data rate (FDR). Figure 4-58 shows the IBM Flex System IB6131 InfiniBand Switch.

Figure 4-58 IBM Flex System IB6131 InfiniBand Switch

Ordering information is listed in Table 4-44.


Table 4-44 IBM Flex System IB6131 InfiniBand Switch part number and upgrade option

Part number  Feature codes (a)  Product name
90Y3450      A1EK / 3699        IBM Flex System IB6131 InfiniBand Switch (18 external QDR ports, 14 internal QDR ports)
90Y3462      A1QX / ESW1        IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade): upgrades all ports to FDR speeds

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.

Running MLNX-OS, this switch has one external 1 Gb management port and a mini-USB serial port for updating software and for debug use. These ports are in addition to the internal and external InfiniBand ports. The switch has 14 internal QDR links and 18 external uplink ports, and all ports are enabled. The switch can be upgraded to FDR speed (56 Gbps) by using the FoD process with part number 90Y3462, as listed in Table 4-44. No InfiniBand cables ship as standard with this switch; they must be purchased separately. Supported cables are listed in Table 4-45.
Table 4-45 IB6131 InfiniBand Switch supported cables

Part number  Feature codes (a)  Description
49Y9980      3866 / 3249        IB QDR 3m QSFP Cable Option (passive)
90Y3470      A227 / ECB1        3m FDR InfiniBand Cable (passive)

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.


The switch includes the following specifications:
- IBTA 1.3 and 1.2.1 compliance
- Congestion control
- Adaptive routing
- Port mirroring
- Auto-negotiation of 10 Gbps, 20 Gbps, 40 Gbps, or 56 Gbps
- Measured node-to-node latency of less than 170 nanoseconds
- Mellanox QoS: 9 InfiniBand virtual lanes for all ports, eight data transport lanes, and one management lane
- High switching performance: Simultaneous wire-speed any port to any port
- Addressing: 48K unicast addresses maximum per subnet, 16K multicast addresses per subnet
- Switch throughput capability of 1.8 Tb/s

For more information, see the IBM Redbooks Product Guide IBM Flex System IB6131 InfiniBand Switch, TIPS0871, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0871.html?Open
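The quoted 1.8 Tb/s switching capacity is consistent with all 32 ports running at the FDR rate. A quick check (a sketch; the rounding to 1.8 Tb/s is our reading of the quoted figure):

```python
# Sanity check of the IB6131 switching capacity: 32 ports at FDR (56 Gbps)
PORTS = 18 + 14            # external + internal InfiniBand ports
FDR_GBPS = 56

throughput_tbps = PORTS * FDR_GBPS / 1000
print(round(throughput_tbps, 2))   # ~1.79 Tb/s, quoted as 1.8 Tb/s
```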

4.12 Infrastructure planning


This section addresses the key infrastructure planning areas of power, uninterruptible power supply (UPS), cooling, and console management that must be considered when you deploy the IBM Flex System Enterprise Chassis.

For more information about planning your IBM Flex System power infrastructure, see IBM Flex System Enterprise Chassis Power Guide, WP102111, which is available at this website:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102111

4.12.1 Supported power cords


The Enterprise Chassis supports the power cords that are listed in Table 4-46. One power cord (feature 6292) is shipped with each power supply option, or comes standard with the server (one per standard power supply).
Table 4-46 Supported power cords

Part number   Feature code   Description
40K9772       6275           4.3 m, 16A/208V, C19 to NEMA L6-20P (US) power cord
39Y7916       6252           2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable
None          6292           2 m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
00D7192       A2Y3           4.3 m, US/CAN, NEMA L15-30P (3P+Gnd) to 3X IEC 320 C19
00D7193       A2Y4           4.3 m, EMEA/AP, IEC 309 32A (3P+N+Gnd) to 3X IEC 320 C19
00D7194       A2Y5           4.3 m, A/NZ, (PDL/Clipsal) 32A (3P+N+Gnd) to 3X IEC 320 C19

Chapter 4. Chassis and infrastructure configuration

161

4.12.2 Supported PDUs and UPS units


Table 4-47 lists the supported PDUs.
Table 4-47 Supported power distribution units

Part number   Description
39Y8923       DPI 60A 3-Phase C19 Enterprise PDU w/ IEC309 3P+G (208V) fixed power cords
39Y8938       30amp/125V Front-end PDU with NEMA L5-30P connector
39Y8939       30amp/250V Front-end PDU with NEMA L6-30P connector
39Y8940       60amp/250V Front-end PDU with IEC 309 60A 2P+N+Gnd connector
39Y8948       DPI Single Phase C19 Enterprise PDU w/o power cords
46M4002       IBM 1U 9 C19/3 C13 Active Energy Manager DPI PDU
46M4003       IBM 1U 9 C19/3 C13 Active Energy Manager 60A 3-Phase PDU
46M4140       IBM 0U 12 C19/12 C13 50A 3-Phase PDU
46M4134       IBM 0U 12 C19/12 C13 Switched and Monitored 50A 3-Phase PDU
46M4167       IBM 1U 9 C19/3 C13 Switched and Monitored 30A 3-Phase PDU
71762MX       IBM Ultra Density Enterprise PDU C19 PDU+ (WW)
71762NX       IBM Ultra Density Enterprise PDU C19 PDU (WW)
71763MU       IBM Ultra Density Enterprise PDU C19 3-Phase 60A PDU+ (NA)
71763NU       IBM Ultra Density Enterprise PDU C19 3-Phase 60A PDU (NA)

Table 4-48 lists the supported UPS units.


Table 4-48 Supported uninterruptible power supply units

Part number   Description
21303RX       IBM UPS 7500XHV
21304RX       IBM UPS 10000XHV
53956AX       IBM 6000VA LCD 4U Rack UPS (200V/208V)
53959KX       IBM 11000VA LCD 5U Rack UPS (230V)

4.12.3 Power planning


The Enterprise Chassis can have a maximum of six power supplies installed, so consider how best to provide an optimized power source. Both N+N and N+1 configurations are supported for maximum flexibility in power redundancy, which allows balanced 3-phase power input into a single chassis or a group of chassis. Consider the nodes that are being installed within the chassis to ensure that sufficient power supplies are installed to deliver the required redundancy. For more information, see 4.7, Power supply selection on page 92. Each power supply in the chassis has a 16A C20 three-pin socket and can be fed by a C19 power cable from a suitable supply.

The chassis power system is designed for efficiency by using data center power that consists of 3-phase, 60A delta 200 VAC (North America) or 3-phase, 32A wye 380-415 VAC (international). The chassis can also be fed from single-phase 200-240 VAC supplies if required. The power is scaled as required: as more nodes are added, the power and cooling increase accordingly. For power planning, Table 4-11 on page 93 shows the number of power supplies that are needed for N+N or N+1, which is node dependent.

This section explains single-phase and 3-phase example configurations for North America and worldwide, starting with 3-phase, and assumes that your configuration has the power budget to deliver N+N or N+1, given your particular node configuration.

The 2100W power modules have the advantage in North America that they draw a maximum of 11.8A, as opposed to the 13.8A of the 2500W power modules. This means that when you are using a 30A supply, which is derated to 24A with a PDU, up to two 2100W power modules can be connected to the same PDU with 0.4A remaining. With 2500W power modules, only one power module can be connected to a 30A PDU at the maximum (label) rating. Thus, for North America, the 2100W power module is advantageous for 30A supply PDU deployments.

Figure 4-59 shows two chassis, each populated with six 2100W power supplies. Six 30A PDUs are configured to supply power to the two chassis.

Figure 4-59 2100W power supplies optimized for use with 30A UL-derated PDUs. Six 71762NX Ultra Density Enterprise PDUs, each with a 40K9614 IBM DPI 30A single-phase cord with NEMA L6-30P connector (71762NX + 40K9614 = FC 6500), feed the six 2100W power supplies in each chassis. Each 30A 200-240V PDU can provide up to 24A after derating and carries up to 22.8A (label rating) from two 2100W chassis power supplies (2 x 11.8A), leaving 0.4A of capacity.
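The derating arithmetic behind this layout can be sketched in a few lines of Python. This is a hedged illustration: the function name is ours, and the 80% UL derating factor and per-module draws are taken from the text above.

```python
# Sketch of the 30A PDU loading arithmetic described above.
# Values from the text: a 30A supply is derated to 24A by the PDU,
# 2100W modules draw 11.8A maximum, 2500W modules draw 13.8A maximum.
def modules_per_pdu(pdu_label_amps, module_amps, derate=0.8):
    usable = pdu_label_amps * derate            # UL derating: 30A -> 24A
    count = int(usable // module_amps)          # whole modules that fit
    return count, usable - count * module_amps  # modules, remaining amps

print(modules_per_pdu(30, 11.8))  # 2100W: two modules, ~0.4A headroom
print(modules_per_pdu(30, 13.8))  # 2500W: only one module fits
```

This makes the trade-off explicit: at the label rating, a 30A PDU carries two 2100W modules but only one 2500W module.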

Power cabling: 32 A at 380 - 415 V 3-phase (International)


Figure 4-60 shows one 3-phase, 32A wye PDU (worldwide, WW) that provides power feeds for two chassis. In this case, an appropriate 3-phase power cable is selected for the Ultra-Dense Enterprise PDU+. This cable then splits the phases and supplies one phase to each of the three power supplies within each chassis. One 3-phase 32A wye PDU can power two fully populated chassis within a rack. A second PDU can be added for power redundancy from an alternative power source, if the chassis is configured for N+N and meets the requirements for this as shown in Table 4-11 on page 93. Figure 4-60 shows a typical configuration given a 32A 3-phase wye supply at 380-415VAC (often termed WW or International) for N+N. Ensure the node deployment meets the requirements that are shown in Table 4-11 on page 93.

Figure 4-60 Example power cabling, 32A at 380-415V 3-phase (international). A 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU with a 40K9611 IBM DPI 32A cord (IEC 309 3P+N+G) splits the three phases (L1/L2/L3, N, G) across the chassis power supplies; IEC320 16A C19-C20 3m power cables connect the PDU outlets to the power supplies.

The maximum number of Enterprise Chassis that can be installed within a 42U rack is four. A full rack therefore requires a total of four 32A 3-phase wye feeds to provide a redundant N+N configuration.
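The feed count follows from the arrangement above, as a quick sanity check shows (variable names are illustrative; the assumptions, from the text, are one 32A wye PDU per two chassis and doubled feeds for N+N):

```python
# Feed count for a full rack: one 32A 3-phase wye PDU powers two chassis,
# and N+N redundancy requires a second, independent feed per PDU.
chassis_per_rack = 4
chassis_per_pdu = 2
feeds = (chassis_per_rack // chassis_per_pdu) * 2  # x2 for N+N
print(feeds)  # 4 feeds for a fully populated rack
```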

Power cabling: 60 A at 208 V 3-phase (North America)


In North America, the chassis requires four 60A 3-phase delta supplies at 200 - 208 VAC. A configuration that is optimized for 3-phase configuration is shown in Figure 4-61.

Figure 4-61 Example of power cabling, 60A at 208V 3-phase (North America). A 46M4003 1U 9 C19/3 C13 switched and monitored DPI PDU, which includes a fixed IEC60309 3P+G 60A line cord, feeds the chassis power supplies through IEC320 16A C19-C20 3m power cables.

Power cabling: Single Phase 63 A (International)


Figure 4-62 shows an example of an international 63A single-phase supply feed. This example uses the switched and monitored PDU+ with an appropriate power cord. Each 2500W PSU can draw up to 13.85A from its supply. Therefore, a single chassis can easily be fed from a 63A single-phase supply, leaving 18.45A of available capacity. This capacity can feed a single power supply on a second chassis (13.85A), or it can be used by the PDU to supply further items in the rack, such as servers or storage devices.

Figure 4-62 Single-phase 63A supply (international). A 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU with a 40K9613 IBM DPI 63A cord (IEC 309 P+N+G) feeds the chassis power supplies.

Power cabling: 60 A 200 VAC single phase supply (North America)


In North America, UL derating means that a 60A PDU supplies only 48 amps. At 200 VAC, the 2500W power supplies in the Enterprise Chassis draw a maximum of 13.85 amps each. Therefore, a single-phase 60A supply can power a fully configured chassis. A further 6.8A is available from the PDU to power other items within the rack, such as servers or storage, as shown in Figure 4-63.

Figure 4-63 60A 200 VAC single-phase supply (North America). A 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU with a 40K9615 IBM DPI 60A cord (IEC 309 2P+G). Building power is 200 VAC, 60A, single phase; 48A is supplied by the PDU after UL derating.
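The remaining-capacity arithmetic can be sketched as follows. This is a rough check using figures quoted in this section (the straight subtraction gives about 6.4A, close to the 6.8A quoted above; the small difference presumably reflects rounding in the source):

```python
# North American single-phase budget: a 60A supply, UL-derated to 48A by
# the PDU; three 2500W supplies on one feed draw up to 13.85A each at 200VAC.
pdu_capacity = 60 * 0.8    # 48A after UL derating
chassis_feed = 3 * 13.85   # 41.55A for three power supplies on one feed
remaining = pdu_capacity - chassis_feed
print(remaining)           # roughly 6.4A left for other rack equipment
```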

For more information about planning your IBM Flex System power infrastructure, see IBM Flex System Enterprise Chassis Power Requirements Guide, WP102111, which is available at this website: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102111

4.12.4 UPS planning


It is possible to power the Enterprise Chassis with a UPS, which provides protection in case of power failure or interruption. IBM does not offer a 3-phase UPS; however, the single-phase UPS units available from IBM can be used to supply power to a chassis at 200 VAC and 220 VAC. An alternative is to use a third-party UPS product if 3-phase power is required. At international voltages, the 11000VA UPS is ideal for powering a fully loaded chassis. Figure 4-64 shows how each power feed can be connected to one of the four 20A outlets on the rear of the UPS. This UPS requires hard wiring to a suitable supply by a qualified electrician.

Figure 4-64 Two UPS11000 (53959KX, 5U) international single-phase (208 - 230 VAC) units powering the chassis
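A rough sizing check for this arrangement can be sketched as follows. The assumptions are ours: the 13.85A maximum draw of a 2500W power supply quoted elsewhere in this section, one supply per UPS outlet, and 200 VAC for the VA estimate.

```python
# One UPS11000 carries half of an N+N chassis: three power supplies,
# one per 20A outlet, each drawing at most 13.85A (assumed from the
# 2500W PSU figures in this chapter). 200VAC is assumed for VA.
psu_amps, volts = 13.85, 200
fits_outlet = psu_amps <= 20     # each supply fits a 20A outlet
total_va = 3 * psu_amps * volts  # ~8310 VA, within the 11000VA rating
print(fits_outlet, total_va)
```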

In North America, the available UPS at 200-208VAC is the UPS6000. This UPS has two outlets that can be used to power two of the power supplies within the chassis. In a fully loaded chassis, the third pair of power supplies must be connected to another UPS. Figure 4-65 shows this UPS configuration.

Figure 4-65 Two UPS6000 (53956AX, 4U) North American (200 - 208 VAC) units powering the chassis

For more information, see IBM 11000VA LCD 5U Rack Uninterruptible Power Supply, TIPS0814, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0814.html?Open

4.12.5 Console planning


The Enterprise Chassis is a lights-out system and can be managed remotely with ease. However, the following methods can be used to access an individual node's console:

- Each x86 node can be connected to individually by physically plugging a console breakout cable into the front of the node. (One console breakout cable is supplied with each chassis.) This cable presents a 15-pin video connector, two USB sockets, and a serial port at the front. Connecting a portable screen and a USB keyboard and mouse near the front of the chassis enables quick connection into the console breakout cable and direct access into the node. This configuration is often called crash cart management capability.
- Connect an SCO, VCO2, or UCO (Conversion Option), attached to the front of each x86 node via a local console cable, to a Global or Local Console Switch. Although supported, this is not a particularly elegant method because a significant number of cables must be routed from the front of a chassis in the case of 28 servers (14 x222 Compute Nodes).
- Connect to the FSM management interface by browser, which allows remote presence to each node within the chassis.
- Connect remotely into the Ethernet management port of the CMM by using a browser, which allows remote presence to each node within the chassis.
- Connect directly to each IMM2 on a node and start a remote console session to that node through the IMM.

Local KVM, such as was possible with the BladeCenter Advanced Management Module, is not possible with Flex System. The CMM does not present a KVM port externally. The ordering part number and feature code are shown in Table 4-49.
Table 4-49 Ordering part number and feature code

Part number   Feature code (a)   Description
81Y5286       A1NF               IBM Flex System Console Breakout Cable

a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.

4.12.6 Cooling planning


The chassis is designed to operate in ASHRAE class A3 operating environments, which means temperatures up to 40°C (104°F) or altitudes up to 10,000 ft (3,000 m). The airflow requirements for the Enterprise Chassis are from 270 CFM (cubic feet per minute) to a maximum of 1020 CFM. The Enterprise Chassis includes the following environmental specifications:

- Humidity, non-condensing: -12°C dew point (10.4°F) and 8% - 85% relative humidity
- Maximum dew point: 24°C (75°F)
- Maximum elevation: 3050 m (10,006 ft)
- Maximum rate of temperature change: 5°C/hr (9°F/hr)
- Heat output (approximate): maximum configuration potentially 12.9 kW

The 12.9 kW figure is only a potential maximum, where the most power-hungry configuration is chosen and all power envelopes are at maximum. For a more realistic figure, use the IBM Power Configurator tool to establish the specific power requirements for your configuration, which is available at this website: http://www.ibm.com/systems/x/hardware/configtools.html

Data centers operating at environmental temperatures above 35°C are generally free air cooling environments, where outside air is filtered and then used to ventilate the data center. This is the definition of ASHRAE class A3 (and also the A4 class, which raises the upper limit to 45°C). A conventional data center does not normally run with computer room air conditioning (CRAC) units up to 40°C, because the risk of failure of a CRAC, or of power to the CRACs, gives limited time for shutdown before over-temperature events occur.

IBM Flex System Enterprise Chassis is suitable for installation in an ASHRAE class A3 environment, in both operating and non-operating mode. Information about ASHRAE 2011 thermal guidelines, data center classes, and white papers can be found at the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) website: http://www.ashrae.org

The chassis can be installed within IBM or non-IBM racks. However, the IBM 42U 1100mm Enterprise V2 Dynamic Rack offers, in North America, a footprint of a single floor tile wide and two tiles deep. For more information about this sizing, see 4.13, IBM 42U 1100mm Enterprise V2 Dynamic Rack on page 172.

If the chassis is installed within a non-IBM rack, the vertical rails must have clearances to EIA-310-D. There must be sufficient room in front of the vertical front rack-mounted rail to provide a minimum bezel clearance of 70 mm (2.76 inches) depth. The rack must be able to support the weight of the chassis, cables, power supplies, and other items that are installed within.

There must be sufficient room behind the rear rack rails to provide for cable management and routing. Ensure the stability of any non-IBM rack by using stabilization feet or baying kits so that it does not become unstable when it is fully populated.

Finally, ensure that sufficient airflow is available to the Enterprise Chassis. Racks with glass fronts do not normally allow sufficient airflow into the chassis, unless they are specialized racks that are specifically designed for forced air cooling. Airflow information in CFM is available from the IBM Power Configurator tool.

4.12.7 Chassis-rack cabinet compatibility


IBM offers an extensive range of industry-standard, EIA-compatible rack enclosures and expansion units. The flexible rack solutions help you consolidate servers and save space, while allowing easy access to crucial components and cable management.

Table 4-50 lists the IBM Flex System Enterprise Chassis supported in each rack cabinet.
Table 4-50 The chassis that is supported in each rack cabinet

Part number   Feature code   Rack cabinet                                          Supports Enterprise Chassis
93634PX       A1RC           IBM 42U 1100 mm Enterprise V2 Deep Dynamic Rack       Yes (a)
93634EX       A1RD           IBM 42U 1100 mm Dynamic Enterprise V2 Expansion Rack  Yes (a)
93634CX       A3GR           IBM PureFlex System 42U Rack                          Yes (b)
93634DX       A3GS           IBM PureFlex System 42U Expansion Rack                Yes (b)
93634AX       A31F           IBM PureFlex System 42U Rack                          Yes (c)
93634BX       A31G           IBM PureFlex System 42U Expansion Rack                Yes (c)
201886X       2731           IBM 11U Office Enablement Kit                         Yes (d)
93072PX       6690           IBM S2 25U Static Standard Rack                       Yes
93072RX       1042           IBM S2 25U Dynamic Standard Rack                      Yes
93074RX       1043           IBM S2 42U Standard Rack                              Yes
99564RX       5629           IBM S2 42U Dynamic Standard Rack                      Yes
99564XX       5631           IBM S2 42U Dynamic Standard Expansion Rack            Yes
93084PX       5621           IBM 42U Enterprise Rack                               Yes
93084EX       5622           IBM 42U Enterprise Expansion Rack                     Yes
93604PX       7649           IBM 42U 1200 mm Deep Dynamic Rack                     Yes
93604EX       7650           IBM 42U 1200 mm Deep Dynamic Expansion Rack           Yes
93614PX       7651           IBM 42U 1200 mm Deep Static Rack                      Yes
93614EX       7652           IBM 42U 1200 mm Deep Static Expansion Rack            Yes
93624PX       7653           IBM 47U 1200 mm Deep Static Rack                      Yes
93624EX       7654           IBM 47U 1200 mm Deep Static Expansion Rack            Yes
14102RX       1047           IBM eServer Cluster 25U Rack                          Yes
14104RX       1048           IBM Linux Cluster 42U Rack                            Yes
9306-900      None           IBM Netfinity Rack                                    No
9306-910      None           IBM Netfinity Rack                                    No
9306-42P      None           IBM Netfinity Enterprise Rack                         No
9306-42X      None           IBM Netfinity Enterprise Rack Expansion Cabinet       No
9306-200      None           IBM Netfinity NetBAY 22                               No

a. This rack cabinet is optimized for IBM Flex System Enterprise Chassis, including dedicated front to back cable raceways. For more information, including images, see 4.13, IBM 42U 1100mm Enterprise V2 Dynamic Rack on page 172.

b. This rack cabinet is optimized for IBM Flex System Enterprise Chassis, including dedicated front-to-back cable raceways, and includes a unique PureFlex door. For more information, including images of the door, see 4.14, IBM PureFlex System 42U Rack and 42U Expansion Rack on page 178.
c. This rack cabinet is optimized for IBM Flex System Enterprise Chassis, including dedicated front-to-back cable raceways, and includes the original square blue design of the unique PureFlex logo door, which shipped between Q2 and Q4 2012.
d. This Office Enablement Kit is specifically designed for the IBM BladeCenter S chassis. The Flex System Enterprise Chassis can be installed within the 11U Office Enablement Kit with 1U of space remaining; however, the acoustic footprint of a configuration might not be acceptable for office use. We recommend that an evaluation be performed before deployment in an office environment.

Racks that have glass-fronted doors do not allow sufficient airflow for the Enterprise Chassis, such as the Netfinity racks that are shown in Table 4-50 on page 171. In some cases with the older Netfinity racks, the chassis depth is such that the Enterprise Chassis cannot be accommodated within the dimensions of the rack.

4.13 IBM 42U 1100mm Enterprise V2 Dynamic Rack


The IBM 42U 1100mm Enterprise V2 Dynamic Rack is an industry-standard 24-inch rack that supports the Enterprise Chassis, BladeCenter, System x servers, and options. It is available in Primary or Expansion form. The expansion rack is designed for baying and has no side panels. It ships with a baying kit. After it is attached to the side of a primary rack, the side panel that is removed from the primary rack is attached to the side of the expansion rack. The available configurations are shown in Table 4-51.
Table 4-51 Rack options and part numbers

Model      Description                                          Details
9363-4PX   IBM 42U 1100mm Enterprise V2 Dynamic Rack            Rack ships with side panels and is stand-alone.
9363-4EX   IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack  Rack ships with no side panels and is designed to attach to a primary rack.

This 42U rack conforms to the EIA-310-D industry standard for a 24-inch, type A rack cabinet. The dimensions are listed in Table 4-52.

Table 4-52 Dimensions of IBM 42U 1100mm Enterprise V2 Dynamic Rack, 9363-4PX

Dimension   Value
Height      2009 mm (79.1 in.)
Width       600 mm (23.6 in.)
Depth       1100 mm (43.3 in.)
Weight      174 kg (384 lb), including outriggers

The rack features outriggers (stabilizers) allowing for movement while populated.

Figure 4-66 shows the 9363-4PX rack.

Figure 4-66 9363-4PX Rack (note tile width relative to rack)

The IBM 42U 1100mm Enterprise V2 Dynamic Rack includes the following features:

- A perforated front door allows for improved airflow.
- Square EIA rail mount points.
- Six side-wall compartments support 1U-high PDUs and switches without taking up valuable rack space.
- Cable management rings are included to help with cable management.
- Easy-to-install and easy-to-remove side panels are a standard feature.
- The front door can be hinged on either side, which provides flexibility to open in either direction.
- Front and rear doors and side panels include locks and keys to help secure servers.
- Heavy-duty casters with outriggers (stabilizers) come with the 42U Dynamic racks for added stability, which allows movement of the rack while loaded.
- Tool-less 0U PDU rear channel mounting reduces installation time and increases accessibility.
- A 1U PDU can be mounted to present power outlets to the rear of the chassis in side pocket openings.
- Removable top and bottom cable access panels in both front and rear.

IBM is a leading vendor with specific ship-loadable designs. These kinds of racks are called dynamic racks. The IBM 42U 1100mm Enterprise V2 Dynamic Rack and IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack are dynamic racks.

A dynamic rack has extra heavy-duty construction and sturdy packaging that can be reused for shipping a fully loaded rack. It also has outrigger casters for secure movement and tilt stability. Dynamic racks also include a heavy-duty shipping pallet with a ramp for easy on and off maneuvering. Dynamic racks undergo additional shock and vibration testing, and all IBM racks are of welded construction rather than the flimsier bolted construction. Figure 4-67 shows the rear view of the 42U 1100mm Flex System Dynamic Rack.

Figure 4-67 42U 1100mm Flex System Dynamic Rack rear view, with doors and side panels removed (callouts: mountings for IBM 0U PDUs, cable raceway, outriggers)

The IBM 42U 1100mm Enterprise V2 Dynamic Rack also provides more space than previous rack designs for front cable management of SAS cables exiting the V7000 Storage Node and the PCIe Expansion Node. There are four cable raceways on each rack, two on each side. The raceways allow cables to be routed from the front of the rack, through the raceway, and out to the rear of the rack, which is required when connecting an externally mounted Storwize expansion unit to an integrated V7000 Storage Node.

174

IBM PureFlex System and IBM Flex System Products and Technology

Figure 4-68 shows the cable raceways.

Cable raceway
Figure 4-68 Cable raceway (as viewed from rear of rack)

Figure 4-69 shows a cable raceway when viewed inside the rack looking down. Cables can enter the side bays of the rack from the raceway, or pass from one side bay to the other, passing vertically through the raceway. These openings are at the front and rear of each raceway.

Figure 4-69 Cable raceway at front of rack, viewed from above (callouts: cable raceway, cable raceway vertical apertures, front vertical mounting rail)

The 1U rack PDUs can also be accommodated in the side bays. In these bays, the PDU is mounted vertically in the rear of the side bay and presents its outlets to the rear of the rack. Four 0U PDUs can also be vertically mounted in the rear of the rack.


Rear vertical aperture blocked by a PDU: When a PDU is installed in a rear side pocket bay, it is not possible to use the cable raceway vertical apertures at the rear.

The rack width is 600 mm (the standard width of a floor tile in many locations) to complement current raised floor data center designs. The dimensions of the rack base are shown in Figure 4-70.

Figure 4-70 Rack base dimensions, viewed relative to the front of the rack (overall 600 mm width x 1100 mm depth; internal dimensions of 46 mm, 199 mm, 65 mm, 65 mm, and 458 mm are shown)

The rack has square mounting holes that are common in the industry, onto which the Enterprise Chassis and other server and storage products can be mounted. For implementations where the front anti-tip plate is not required, an air baffle/air recirculation prevention plate is supplied with the rack. You might not want to use the plate when an airflow tile must be positioned directly in front of the rack.

This air baffle, which is shown in Figure 4-71, can be installed to the lower front of the rack. It helps prevent warm air from the rear of the rack from circulating underneath the rack to the front, which improves the cooling efficiency of the entire rack solution.

Figure 4-71 Recirculation prevention plate

4.14 IBM PureFlex System 42U Rack and 42U Expansion Rack
The IBM PureFlex System 42U Rack and IBM PureFlex System 42U Expansion Rack are optimized for use with IBM Flex System components, IBM System x servers, and BladeCenter systems. Their robust design allows them to be shipped with equipment already installed. The rack footprint is 600 mm x 1100 mm. The IBM PureFlex System 42U Rack is shown in Figure 4-72.

Figure 4-72 IBM PureFlex System 42U Rack

These racks are usually shipped as standard with a PureFlex system, but they are available for ordering by clients who want to deploy rack solutions with a similar design across their data center. The door design also can be fitted to existing deployed PureFlex System racks that have the original solid blue door design that shipped from Q2 2012 onwards. Table 4-53 shows the available options and associated part numbers for the two PureFlex racks and the PureFlex door.
Table 4-53 PureFlex System racks and rack door

Model / Feature   Description                              Details
9363-4CX / A3GR   IBM PureFlex System 42U Rack             Primary rack. Ships with side doors.
9363-4DX / A3GS   IBM PureFlex System 42U Expansion Rack   Expansion rack. Ships with no side doors, but with a baying kit to join onto a primary rack.
44X3132 / EU21    IBM PureFlex System Rack Door            Front door for rack, embellished with the PureFlex design.

These racks share the rack frame design of the IBM 42U 1100mm Enterprise V2 Dynamic Rack, but ship with a PureFlex branded door. The door can be ordered separately. These IBM PureFlex System 42U racks are industry-standard 19-inch racks that support IBM PureFlex System and Flex System chassis, IBM System x servers, and BladeCenter chassis.

The racks conform to the EIA-310-D industry standard for 19-inch, type A rack cabinets, and have outriggers (stabilizers), which allow for movement of large loads. The optional IBM Rear Door Heat eXchanger can be installed into this rack to provide a superior cooling solution, and the entire cabinet still fits on a standard data center floor tile (width). For more information, see 4.15, IBM Rear Door Heat eXchanger V2 Type 1756 on page 180.

The front door is hinged on one side only. The rear door can be hinged on either side and can be removed for ease of access when cabling or servicing systems within the rack. The front door is a unique PureFlex-branded front door that allows for excellent airflow into the rack.

The rack includes the following features:

- Six side-wall compartments support 1U-high power distribution units (PDUs) and switches without taking up valuable rack space.
- Cable management slots are provided to route hook-and-loop fasteners around cables.
- Side panels are a standard feature and are easy to install and remove.
- Front and rear doors and side panels include locks and keys to help secure servers.
- Horizontal and vertical cable channels are built into the frame.
- Heavy-duty casters with outriggers (stabilizers) come with the 42U rack for added stability, which allows for movement of large loads.
- Tool-less 0U PDU rear channel mounting is provided.
- A 600 mm standard width to complement current raised-floor data center designs.
- An increase in depth from 1,000 mm to 1,100 mm to improve cable management.
- An increase in door perforation to maximize airflow.
- Support for tool-less mounting of 0U PDUs and easy installation of 1U PDUs.
- Front-to-back cable raceways for easy routing of cables, such as Fibre Channel or SAS.
- Support for shipping of fully integrated solutions.
- Lockable doors and side panels.
- Front stabilizer plate.

The door can be ordered as a separate part number for attaching to existing PureFlex racks. Rack specifications for the two IBM PureFlex System racks and the PureFlex rack door are shown in Table 4-54.
Table 4-54 IBM PureFlex System rack specifications

Model      Description                          Dimension   Value
9363-4CX   PureFlex System 42U Rack             Height      2009 mm (79.1 in.)
                                                Width       604 mm (23.8 in.)
                                                Depth       1100 mm (43.3 in.)
                                                Weight      179 kg (394 lb), including outriggers
9363-4DX   PureFlex System 42U Expansion Rack   Height      2009 mm (79.1 in.)
                                                Width       604 mm (23.8 in.)
                                                Depth       1100 mm (43.3 in.)
                                                Weight      142 kg (314 lb), including outriggers
44X3132    IBM PureFlex System Rack Door kit    Height      1924 mm (75.8 in.)
                                                Width       597 mm (23.5 in.)
                                                Depth       90 mm (3.6 in.)
                                                Weight      19.5 kg (43 lb)
4.15 IBM Rear Door Heat eXchanger V2 Type 1756


The IBM Rear Door Heat eXchanger V2 is designed to attach to the rear of the following racks:

- IBM 42U 1100mm Enterprise V2 Dynamic Rack
- IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack

It provides effective cooling for the warm air exhausts of equipment that is mounted within the rack. The heat exchanger has no moving parts to fail, and no power is required. The rear door heat exchanger can be used to improve cooling and reduce cooling costs in a high-density HPC Enterprise Chassis environment. The physical design of the door is slightly different from that of the existing Rear Door Heat eXchanger (32R0712) that is marketed by IBM System x. This door has a wider rear aperture, as shown in Figure 4-73. It is designed for attachment specifically to the rear of an IBM 42U 1100mm Enterprise V2 Dynamic Rack or IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack.

Figure 4-73 Rear Door Heat Exchanger

Attaching a rear door heat exchanger to the rear of a rack allows up to 100,000 BTU/hr, or 30 kW, of heat to be removed at the rack level. As the warm air passes through the heat exchanger, it is cooled with water and exits the rear of the rack cabinet into the data center. The door is designed to provide an overall air temperature drop of up to 25°C, measured between the air that enters the exchanger and the air that exits the rear. Figure 4-74 shows the internal workings of the IBM Rear Door Heat eXchanger V2.

Figure 4-74 IBM Rear Door Heat eXchanger V2
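The equivalence quoted above (100,000 BTU/hr and 30 kW) can be checked with a simple unit conversion (1 kW is approximately 3412.14 BTU/hr):

```python
# Convert the 30kW rack-level heat removal figure to BTU/hr.
KW_TO_BTU_HR = 3412.14
print(30 * KW_TO_BTU_HR)  # about 102,000 BTU/hr, near the 100,000 rating
```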

The supply inlet hose provides an inlet for chilled, conditioned water. A return hose delivers the warmed water back to the water pump or chiller in the cooling loop. The water supply must meet the requirements for secondary loops.
Table 4-55 Rear door heat exchanger

Model      Description                                      Details
1756-42X   IBM Rear Door Heat eXchanger V2 for 9363 Racks   Rear door heat exchanger that can be installed on the rear of the 9363 rack

Figure 4-75 shows the percentage of heat that is removed from a 30 kW heat load as a function of water temperature and water flow rate. With 18°C water at 10 gallons per minute (gpm), 90% of the 30 kW heat load is removed by the door.
Figure 4-75 Heat removal by Rear Door Heat eXchanger V2 at 30 kW of heat: % heat removal as a function of water temperature (12 - 24°C) and water flow rate (4 - 14 gpm), for a rack power of 30000 W, rack air inlet temperature of 27°C, and airflow of 2500 CFM

For efficient cooling, water pressure and water temperature must be delivered in accordance with the specifications listed in Table 4-56. The temperature must be maintained above the dew point to prevent condensation from forming.
Table 4-56 1756 RDHX specifications
Depth: 129 mm (5.0 in)
Width: 600 mm (23.6 in)
Height: 1950 mm (76.8 in)
Empty weight: 39 kg (85 lb)
Filled weight: 48 kg (105 lb)
Temperature drop: Up to 25°C (45°F) between air exiting and entering the RDHX
Water temperature: Above dew point; 18°C ± 1°C (64.4°F ± 1.8°F) for an ASHRAE Class 1 environment; 22°C ± 1°C (71.6°F ± 1.8°F) for an ASHRAE Class 2 environment
Required water flow rate (as measured at the supply entrance to the heat exchanger): Minimum 22.7 liters (6 gallons) per minute; maximum 56.8 liters (15 gallons) per minute
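Because the supply water must stay above the dew point, it can be useful to estimate the dew point of the data center air. The sketch below uses the Magnus approximation (coefficients and the example conditions are illustrative, not taken from the IBM specification):

```python
import math

# Magnus approximation for dew point; the water supply temperature must stay
# above this value to avoid condensation on the heat exchanger.
A, B = 17.62, 243.12  # Magnus coefficients, valid roughly for 0-50 deg C

def dew_point(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) from air temperature and relative humidity."""
    gamma = (A * temp_c) / (B + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (B * gamma) / (A - gamma)

if __name__ == "__main__":
    # Example: data center air at 27 deg C and 50% relative humidity
    dp = dew_point(27.0, 50.0)
    print(f"Dew point: {dp:.1f} deg C")  # about 15.7 deg C; 18 deg C water stays above it
```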

The installation and planning guide lists suppliers that can provide coolant distribution unit solutions, flexible hose assemblies, and water treatment that meet the suggested water quality requirements.

Three people are required to install the rear door heat exchanger, and a non-conductive step ladder is needed to attach the upper hinge assembly. Consult the planning and implementation guides before you proceed. The installation and planning guides can be found at this website:
http://www.ibm.com/support/entry/portal/

Chapter 5. Compute nodes

This chapter describes the IBM Flex System servers, or compute nodes. The applications that are installed on the compute nodes can run natively on a dedicated physical server, or they can be virtualized in a virtual machine that is managed by a hypervisor layer. The IBM Flex System portfolio of compute nodes includes Intel Xeon processors and IBM POWER7 processors. Depending on the compute node design, nodes come in one of the following form factors:
- Half-wide node: Occupies one chassis bay, half the width of the chassis (approximately 215 mm or 8.5 in.). An example is the IBM Flex System x240 Compute Node.
- Full-wide node: Occupies two chassis bays side-by-side, the full width of the chassis (approximately 435 mm or 17 in.). An example is the IBM Flex System p460 Compute Node.

This chapter includes the following topics:
- 5.1, IBM Flex System Manager on page 186
- 5.2, IBM Flex System x220 Compute Node on page 186
- 5.3, IBM Flex System x222 Compute Node on page 216
- 5.4, IBM Flex System x240 Compute Node on page 234
- 5.5, IBM Flex System x440 Compute Node on page 275
- 5.6, IBM Flex System p260 and p24L Compute Nodes on page 298
- 5.7, IBM Flex System p270 Compute Node on page 318
- 5.8, IBM Flex System p460 Compute Node on page 335
- 5.9, IBM Flex System PCIe Expansion Node on page 356
- 5.10, IBM Flex System Storage Expansion Node on page 363
- 5.11, I/O adapters on page 370

Copyright IBM Corp. 2012, 2013. All rights reserved.

185

5.1 IBM Flex System Manager


The IBM Flex System Manager (FSM) is a high-performance, scalable system management appliance that is based on the IBM Flex System x240 Compute Node. The FSM hardware comes preinstalled with systems management software that you can use to configure, monitor, and manage IBM Flex System resources in up to four chassis. For more information about the hardware and software of the FSM, see 3.5, IBM Flex System Manager on page 50.

5.2 IBM Flex System x220 Compute Node


The IBM Flex System x220 Compute Node, machine type 7906, is the next-generation cost-optimized compute node that is designed for less demanding workloads and low-density virtualization. The x220 is efficient and equipped with flexible configuration options and advanced management to run a broad range of workloads.

This section includes the following topics:
- 5.2.1, Introduction on page 186
- 5.2.2, Models on page 190
- 5.2.3, Chassis support on page 190
- 5.2.4, System architecture on page 191
- 5.2.5, Processor options on page 193
- 5.2.6, Memory options on page 193
- 5.2.7, Internal disk storage controllers on page 201
- 5.2.8, Supported internal drives on page 206
- 5.2.9, Embedded 1 Gb Ethernet controller on page 209
- 5.2.10, I/O expansion on page 209
- 5.2.11, Integrated virtualization on page 211
- 5.2.12, Systems management on page 211
- 5.2.13, Operating system support on page 215

5.2.1 Introduction
The IBM Flex System x220 Compute Node is a high-availability, scalable compute node that is optimized to support the next-generation microprocessor technology. With a balance of cost and system features, the x220 is an ideal platform for general business workloads. This section describes the key features of the server.

Figure 5-1 shows the front of the compute node and highlights the location of the controls, LEDs, and connectors.
[Figure callouts: two 2.5-inch hot-swap drive bays, light path diagnostics panel, USB port, console breakout cable port, power button, and LED panel]
Figure 5-1 IBM Flex System x220 Compute Node

Figure 5-2 shows the internal layout and major components of the x220.

[Figure callouts: cover, left and right air baffles, heat sink, microprocessor heat sink filler, I/O expansion adapter, microprocessor, hard disk drive backplane, hard disk drive cage, hot-swap hard disk drive, hard disk drive bay filler, and DIMM]
Figure 5-2 Exploded view of the x220, showing the major components

Table 5-1 lists the features of the x220.

Table 5-1 IBM Flex System x220 Compute Node specifications

Form factor: Half-wide compute node.
Chassis support: IBM Flex System Enterprise Chassis.
Processor: Up to two Intel Xeon Processor E5-2400 product family processors. These processors can be eight-core (up to 2.3 GHz), six-core (up to 2.4 GHz), or quad-core (up to 2.2 GHz). There is one QPI link that runs at 8.0 GTps, L3 cache up to 20 MB, and memory speeds up to 1600 MHz. The server also supports one Intel Pentium Processor 1400 product family processor with two cores, up to 2.8 GHz, 5 MB L3 cache, and 1066 MHz memory speeds.
Chipset: Intel C600 series.
Memory: Up to 12 DIMM sockets (six DIMMs per processor) using LP DDR3 DIMMs. RDIMMs and UDIMMs are supported. 1.5 V and low-voltage 1.35 V DIMMs are supported. Support for up to 1600 MHz memory speed, depending on the processor. Three memory channels per processor (two DIMMs per channel). Supports two DIMMs per channel operating at 1600 MHz (2 DPC @ 1600 MHz) with single- and dual-rank RDIMMs.
Memory maximums: With LRDIMMs: up to 384 GB with 12x 32 GB LRDIMMs and two E5-2400 processors. With RDIMMs: up to 192 GB with 12x 16 GB RDIMMs and two E5-2400 processors. With UDIMMs: up to 48 GB with 12x 4 GB UDIMMs and two E5-2400 processors. Half of these maximums and DIMM counts apply with one processor installed.
Memory protection: ECC, Chipkill (for x4-based memory DIMMs). Optional memory mirroring and memory rank sparing.
Disk drive bays: Two 2.5-inch hot-swap serial-attached SCSI (SAS)/Serial Advanced Technology Attachment (SATA) drive bays that support SAS, SATA, and SSD drives. Optional support for up to eight 1.8-inch SSDs. The onboard ServeRAID C105 supports SATA drives only.
Maximum internal storage (raw): With two 2.5-inch hot-swap drives: up to 2 TB with 1 TB 2.5-inch NL SAS HDDs; up to 2.4 TB with 1.2 TB 2.5-inch SAS HDDs; up to 2 TB with 1 TB 2.5-inch NL SATA HDDs; up to 1 TB with 512 GB 2.5-inch SATA SSDs. An intermix of SAS and SATA HDDs and SSDs is supported. With 1.8-inch SSDs and the ServeRAID M5115 RAID adapter, up to 4 TB with eight 512 GB 1.8-inch SSDs.
RAID support: Software RAID 0 and 1 with the integrated LSI-based 3 Gbps ServeRAID C105 controller; supports SATA drives only. Non-RAID is not supported. Optional ServeRAID H1135 RAID adapter with LSI SAS2004 controller supports SAS/SATA drives with hardware-based RAID 0 and 1. An H1135 adapter is installed in a dedicated PCIe 2.0 x4 connector and does not use either I/O adapter slot (see Figure 5-3 on page 189). Optional ServeRAID M5115 RAID adapter with RAID 0, 1, 10, 5, 50 support and 1 GB cache. The M5115 uses I/O adapter slot 1 and can be installed in all models, including models with an embedded 1 GbE Fabric Connector. Supports up to eight 1.8-inch SSDs with expansion kits. Optional flash backup for cache, RAID 6/60, and SSD performance enabler.
Network interfaces: Some models (see Table 5-2 on page 190): embedded dual-port Broadcom BCM5718 Ethernet controller that supports Wake on LAN, Serial over LAN, and IPv6. TCP/IP offload engine (TOE) is not supported. Routes to chassis I/O module bays 1 and 2 through a Fabric Connector to the chassis midplane. The Fabric Connector precludes the use of I/O adapter slot 1, except that the M5115 can be installed in slot 1 while the Fabric Connector is installed. Remaining models: no network interface standard; optional 1 Gb or 10 Gb Ethernet adapters.
PCI expansion slots: Two connectors for I/O adapters; each connector has PCIe x8+x4 interfaces. Includes an expansion connector (PCIe 3.0 x16) for future use to connect a compute node expansion unit. Dedicated PCIe 2.0 x4 interface for the ServeRAID H1135 adapter only.
Ports: USB ports: one external and two internal ports for an embedded hypervisor. A console breakout cable port on the front of the server provides local KVM and serial ports (cable standard with chassis; additional cables are optional).
Systems management: UEFI, IBM IMM2 with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, and remote presence. Support for IBM Flex System Manager, IBM Systems Director, and IBM ServerGuide.
Security features: Power-on password, administrator's password, and Trusted Platform Module V1.2.
Video: Matrox G200eR2 video core with 16 MB video memory that is integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16M colors.
Limited warranty: Three-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Operating systems supported: Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, VMware vSphere. For more information, see 5.2.13, Operating system support on page 215.
Service and support: Optional service upgrades are available through IBM ServicePac offerings: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width: 217 mm (8.6 in.), height: 56 mm (2.2 in.), depth: 492 mm (19.4 in.)
Weight: Maximum configuration: 6.4 kg (14.11 lb).
Figure 5-3 shows the components on the system board of the x220.
[Figure callouts: light path diagnostics panel, hot-swap drive bay backplane, processor 2 with six memory DIMMs, USB port 2, Broadcom Ethernet controller, I/O connector 1, Fabric Connector, optional ServeRAID H1135, USB port 1, processor 1 with six memory DIMMs, I/O connector 2, and expansion connector]
Figure 5-3 Layout of the IBM Flex System x220 Compute Node system board

5.2.2 Models
The current x220 models are shown in Table 5-2. All models include 4 GB of memory (one 4 GB DIMM) running at either 1333 MHz or 1066 MHz (depending on model).
Table 5-2 Models of the IBM Flex System x220 Compute Node, type 7906
Columns: Model / Processor (E5-2400: 2 maximum; Pentium 1400: 1 maximum) / Memory / RAID adapter / Disk bays(a) / Disks / Embedded 1 GbE(b) / I/O slots (used/max)

7906-A2x: 1x Intel Pentium 1403 2C 2.6 GHz 5 MB 1066 MHz 80 W / 1x 4 GB UDIMM (1066 MHz)(c) / ServeRAID C105 / 2x 2.5-inch hot-swap / Open / Standard / 1 / 2(b)
7906-B2x: 1x Intel Xeon E5-2430L 6C 2.0 GHz 15 MB 1333 MHz 60 W / 1x 4 GB UDIMM 1333 MHz / ServeRAID C105 / 2x 2.5-inch hot-swap / Open / Standard / 1 / 2(b)
7906-C2x: 1x Intel Xeon E5-2403 4C 1.8 GHz 10 MB 1066 MHz 80 W / 1x 4 GB RDIMM (1066 MHz)(c) / ServeRAID C105 / 2x 2.5-inch hot-swap / Open / Standard / 1 / 2(b)
7906-D2x: 1x Intel Xeon E5-2420 6C 1.9 GHz 15 MB 1333 MHz 95 W / 1x 4 GB RDIMM 1333 MHz / ServeRAID C105 / 2x 2.5-inch hot-swap / Open / Standard / 1 / 2(b)
7906-F2x: 1x Intel Xeon E5-2418L 4C 2.0 GHz 10 MB 1333 MHz 50 W / 1x 4 GB RDIMM 1333 MHz / ServeRAID C105 / 2x 2.5-inch hot-swap / Open / Standard / 1 / 2(b)
7906-G2x: 1x Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W / 1x 4 GB RDIMM 1333 MHz / ServeRAID C105 / 2x 2.5-inch hot-swap / Open / No / 0 / 2
7906-G4x: 1x Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W / 1x 4 GB RDIMM 1333 MHz / ServeRAID C105 / 2x 2.5-inch hot-swap / Open / Standard / 1 / 2(b)
7906-H2x: 1x Intel Xeon E5-2440 6C 2.4 GHz 15 MB 1333 MHz 95 W / 1x 4 GB RDIMM 1333 MHz / ServeRAID C105 / 2x 2.5-inch hot-swap / Open / Standard / 1 / 2(b)
7906-J2x: 1x Intel Xeon E5-2450 8C 2.1 GHz 20 MB 1600 MHz 95 W / 1x 4 GB RDIMM 1333 MHz(c) / ServeRAID C105 / 2x 2.5-inch hot-swap / Open / No / 0 / 2
7906-L2x: 1x Intel Xeon E5-2470 8C 2.3 GHz 20 MB 1600 MHz 95 W / 1x 4 GB RDIMM 1333 MHz(c) / ServeRAID C105 / 2x 2.5-inch hot-swap / Open / No / 0 / 2

a. The 2.5-inch drive bays can be replaced and expanded with 1.8-inch bays and a ServeRAID M5115 RAID controller. This configuration supports up to eight 1.8-inch SSDs.
b. These models include an embedded 1 Gb Ethernet controller. Connections are routed to the chassis midplane by using a Fabric Connector, which precludes the use of I/O connector 1 (except by the ServeRAID M5115).
c. For A2x and C2x, the memory operates at 1066 MHz, the memory speed of the processor. For J2x and L2x, memory operates at 1333 MHz to match the installed DIMM, rather than 1600 MHz.

5.2.3 Chassis support


The x220 type 7906 is supported in the IBM Flex System Enterprise Chassis as listed in Table 5-3.
Table 5-3 x220 chassis support
Server: x220. BladeCenter chassis (all): No. IBM Flex System Enterprise Chassis: Yes.

Up to 14 x220 Compute Nodes can be installed in the chassis in 10U of rack space. The actual number of x220 systems that can be powered on in a chassis depends on the following factors:
- The TDP power rating for the processors that are installed in the x220
- The number of power supplies installed in the chassis
- The capacity of the power supplies installed in the chassis (2100 W or 2500 W)
- The power redundancy policy used in the chassis (N+1 or N+N)

Table 4-11 on page 93 provides guidelines about the number of x220 systems that can be powered on in the IBM Flex System Enterprise Chassis, based on the type and number of power supplies installed. The x220 is a half-wide compute node and requires that the chassis shelf is installed in the IBM Flex System Enterprise Chassis. Figure 5-4 shows the chassis shelf in the chassis.

Figure 5-4 The IBM Flex System Enterprise Chassis showing the chassis shelf
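The interaction of supply count, supply capacity, and redundancy policy described above can be sketched as simple budget arithmetic. This is illustrative only; the actual chassis power management and the guidance in Table 4-11 are more detailed, and the per-node wattage used here is a hypothetical value:

```python
# Illustrative power-budget arithmetic for estimating how many nodes can
# power on. Real chassis power management is more sophisticated; consult
# Table 4-11 in the source document for actual guidance.

def usable_chassis_power(n_psu: int, psu_watts: int, policy: str) -> int:
    """Power available to nodes after reserving redundant supplies."""
    if policy == "N+1":
        reserved = 1            # one supply held in reserve
    elif policy == "N+N":
        reserved = n_psu // 2   # half the supplies mirror the other half
    else:
        reserved = 0
    return (n_psu - reserved) * psu_watts

def max_nodes(node_watts: int, n_psu: int, psu_watts: int, policy: str) -> int:
    """How many nodes of a given draw fit in the remaining budget."""
    return usable_chassis_power(n_psu, psu_watts, policy) // node_watts

if __name__ == "__main__":
    # Six 2500 W supplies, N+N redundancy, hypothetical 400 W per x220 node
    print(max_nodes(400, 6, 2500, "N+N"))  # 18 by this arithmetic;
    # the chassis itself holds at most 14 half-wide nodes
```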

The shelf is required for half-wide compute nodes. To allow for the installation of full-wide or larger nodes, the shelves must be removed from the chassis. Remove a shelf by sliding its two latches toward the center, and then sliding the shelf out of the chassis.

5.2.4 System architecture


The IBM Flex System x220 Compute Node features the Intel Xeon E5-2400 series processors. The Xeon E5-2400 series processor has models with four, six, or eight cores per processor and up to 16 threads per socket. The processors have the following features:
- Up to 20 MB of shared L3 cache
- Hyper-Threading
- Turbo Boost Technology 2.0 (depending on processor model)
- One QPI link that runs at up to 8 GT/s
- One integrated memory controller
- Three memory channels that support up to two DIMMs each

The x220 also supports an Intel Pentium 1403 or 1407 dual-core processor for entry-level server applications. Only one Pentium processor is supported in the x220. CPU socket 2 must be left unused, and only six DIMM sockets are available. Figure 5-5 shows the system architecture of the x220 system.
[Block diagram summary: Intel Xeon processors 1 and 2 are linked by QPI (up to 8 GT/s), each with three DDR3 memory channels (two DIMMs per channel). Processor 1 connects to the Intel C600 PCH over an x4 ESI link and drives the 1 GbE LOM and I/O connector 1 (PCIe 3.0 x8+x4); processor 2 drives I/O connector 2 (PCIe 3.0 x8+x4) and the PCIe 3.0 x16 sidecar connector. The optional ServeRAID H1135 attaches over PCIe 2.0 x4 to the HDDs or SSDs. The IMM2 provides USB, video and serial, the front KVM port, and management connectivity to the midplane.]
Figure 5-5 IBM Flex System x220 Compute Node system board block diagram

The IBM Flex System x220 Compute Node has the following system architecture features as standard:
- Two 1356-pin, Socket B2 (LGA-1356) processor sockets
- An Intel C600 PCH
- Three memory channels per socket
- Up to two DIMMs per memory channel
- 12 DDR3 DIMM sockets
- Support for UDIMMs and RDIMMs
- One integrated 1 Gb Ethernet controller (1 GbE LOM in the diagram)
- One LSI 2004 SAS controller
- Integrated software RAID 0 and 1, with support for the optional LSI-based ServeRAID H1135 RAID controller
- One IMM2
- Two PCIe 3.0 I/O adapter connectors, each with one x8 and one x4 host connection (12 lanes total)
- One internal and one external USB connector

5.2.5 Processor options


The x220 supports the processor options that are listed in Table 5-4. The server supports one or two Intel Xeon E5-2400 processors, but supports only one Intel Pentium 1403 or 1407 processor. The table also shows which server models have each processor standard. If no corresponding model for a particular processor is listed, the processor is available only through the configure-to-order (CTO) process.
Table 5-4 Supported processors for the x220
Columns: Part number / Feature codes(a) / Processor description / Models where used

Intel Pentium processors:
None     A1VZ / None   Intel Pentium 1403 2C 2.6 GHz 5 MB 1066 MHz 80 W     A2x
None(b)  A1W0 / None   Intel Pentium 1407 2C 2.8 GHz 5 MB 1066 MHz 80 W     -

Intel Xeon processors:
None(b)  A3C4 / None   Intel Xeon E5-1410 4C 2.8 GHz 10 MB 1333 MHz 80 W    -
90Y4801  A1VY / A1WC   Intel Xeon E5-2403 4C 1.8 GHz 10 MB 1066 MHz 80 W    C2x
90Y4800  A1VX / A1WB   Intel Xeon E5-2407 4C 2.2 GHz 10 MB 1066 MHz 80 W    -
90Y4799  A1VW / A1WA   Intel Xeon E5-2420 6C 1.9 GHz 15 MB 1333 MHz 95 W    D2x
90Y4797  A1VU / A1W8   Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W    G2x, G4x
90Y4796  A1VT / A1W7   Intel Xeon E5-2440 6C 2.4 GHz 15 MB 1333 MHz 95 W    H2x
90Y4795  A1VS / A1W6   Intel Xeon E5-2450 8C 2.1 GHz 20 MB 1600 MHz 95 W    J2x
90Y4793  A1VQ / A1W4   Intel Xeon E5-2470 8C 2.3 GHz 20 MB 1600 MHz 95 W    L2x

Intel Xeon processors - Low power:
00D9528  A3C7 / A3CA   Intel Xeon E5-2418L 4C 2.0 GHz 10 MB 1333 MHz 50 W   F2x
00D9527  A3C6 / A3C9   Intel Xeon E5-2428L 6C 1.8 GHz 15 MB 1333 MHz 60 W   -
90Y4805  A1W2 / A1WE   Intel Xeon E5-2430L 6C 2.0 GHz 15 MB 1333 MHz 60 W   B2x
00D9526  A3C5 / A3C8   Intel Xeon E5-2448L 8C 1.8 GHz 20 MB 1600 MHz 70 W   -
90Y4804  A1W1 / A1WD   Intel Xeon E5-2450L 8C 1.8 GHz 20 MB 1600 MHz 70 W   -

a. The first feature code is for processor 1 and the second feature code is for processor 2.
b. The Intel Pentium 1407 and Intel Xeon E5-1410 are available through CTO or special bid only.

5.2.6 Memory options


IBM DDR3 memory is compatibility tested and tuned for optimal performance and throughput. IBM memory specifications are integrated into the light path diagnostic procedures for immediate system performance feedback and optimum system uptime. From a service and support standpoint, IBM memory automatically assumes the IBM system warranty, and IBM provides service and support worldwide.

The x220 supports LP DDR3 LRDIMMs, RDIMMs, and UDIMMs. The server supports up to six DIMMs when one processor is installed, and up to 12 DIMMs when two processors are installed. Each processor has three memory channels, with two DIMMs per channel. The following rules apply when you select the memory configuration:
- Mixing 1.5 V and 1.35 V DIMMs in the same server is supported. In such a case, all DIMMs operate at 1.5 V.
- The maximum number of ranks that is supported per channel is eight.
- The maximum quantity of DIMMs that can be installed in the server depends on the number of processors. For more information, see the maximum quantity row in Table 5-5 and Table 5-6 on page 195.
- All DIMMs in all processor memory channels operate at the same speed, which is determined as the lowest of the following values:
  - The memory speed that is supported by the specific processor.
  - The lowest maximum operating speed for the selected memory configuration, which depends on the rated speed. For more information, see the maximum operating speed rows in Table 5-5 and Table 5-6 on page 195. Those rows show, for each combination of DIMM voltage and number of DIMMs per channel, whether the DIMMs can still operate at their rated speed.
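The lowest-common-speed rule above can be expressed directly. The helper below is an illustrative sketch (function and parameter names are not from the source); the example speeds are taken from the tables that follow:

```python
# "Everything runs at the lowest speed": the effective DIMM speed is the
# minimum of the processor's supported memory speed and the slowest maximum
# operating speed among the installed DIMM configurations.

def effective_memory_speed(cpu_max_mhz: int, dimm_config_speeds: list) -> int:
    """Effective memory speed (MHz) for a given CPU and DIMM population."""
    return min([cpu_max_mhz] + list(dimm_config_speeds))

if __name__ == "__main__":
    # E5-2440 (1333 MHz) with 1333 MHz RDIMM channels
    print(effective_memory_speed(1333, [1333, 1333]))  # 1333
    # E5-2450 (1600 MHz) with one quad-rank RDIMM channel limited to 800 MHz
    print(effective_memory_speed(1600, [1600, 800]))   # 800
```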
Table 5-5 Maximum memory speeds (Part 1 - UDIMMs and LRDIMMs)

UDIMMs, single rank, 49Y1403 (2 GB), rated 1333 MHz, rated voltage 1.35 V:
- At 1.35 V or 1.5 V: maximum quantity 12(a); largest DIMM 2 GB; maximum memory capacity 24 GB; maximum memory at rated speed 12 GB; maximum operating speed 1333 MHz at 1 DIMM per channel, 1066 MHz at 2 DIMMs per channel

UDIMMs, dual rank, 49Y1404 (4 GB), rated 1333 MHz, rated voltage 1.35 V:
- At 1.35 V or 1.5 V: maximum quantity 12(a); largest DIMM 4 GB; maximum memory capacity 48 GB; maximum memory at rated speed 24 GB; maximum operating speed 1333 MHz at 1 DIMM per channel, 1066 MHz at 2 DIMMs per channel

LRDIMMs, quad rank, 90Y3105 (32 GB), rated 1333 MHz, rated voltage 1.35 V:
- At 1.35 V: maximum quantity 12(a); largest DIMM 32 GB; maximum memory capacity 384 GB; maximum memory at rated speed N/A; maximum operating speed 1066 MHz at 1 or 2 DIMMs per channel
- At 1.5 V: maximum quantity 12(a); largest DIMM 32 GB; maximum memory capacity 384 GB; maximum memory at rated speed 192 GB; maximum operating speed 1333 MHz at 1 DIMM per channel, 1066 MHz at 2 DIMMs per channel

a. The maximum quantity that is supported is shown for two processors installed. When one processor is installed, the maximum quantity that is supported is half of that shown.

Table 5-6 Maximum memory speeds (Part 2 - RDIMMs)

Single rank, 49Y1406 (4 GB), rated 1333 MHz, rated voltage 1.35 V:
- At 1.35 V or 1.5 V: maximum quantity 12(a); largest DIMM 4 GB; maximum memory capacity 48 GB; maximum memory at rated speed 48 GB; maximum operating speed 1333 MHz at 1 or 2 DIMMs per channel

Dual rank, 49Y1407 (4 GB) and 49Y1397 (8 GB), rated 1333 MHz, rated voltage 1.35 V:
- At 1.35 V or 1.5 V: maximum quantity 12(a); largest DIMM 8 GB; maximum memory capacity 96 GB; maximum memory at rated speed 96 GB; maximum operating speed 1333 MHz at 1 or 2 DIMMs per channel

Dual rank, 90Y3109 (4 GB), rated 1600 MHz, rated voltage 1.5 V:
- At 1.5 V: maximum quantity 12(a); largest DIMM 4 GB; maximum memory capacity 48 GB; maximum memory at rated speed 48 GB; maximum operating speed 1600 MHz at 1 or 2 DIMMs per channel

Quad rank, 49Y1400 (16 GB), rated 1066 MHz, rated voltage 1.35 V:
- At 1.35 V or 1.5 V: maximum quantity 12(a); largest DIMM 16 GB; maximum memory capacity 192 GB; maximum memory at rated speed N/A; maximum operating speed 800 MHz at 1 or 2 DIMMs per channel

a. The maximum quantity that is supported is shown for two processors installed. When one processor is installed, the maximum quantity that is supported is half of that shown.

The following memory protection technologies are supported:
- ECC
- Chipkill (for x4-based memory DIMMs; look for "x4" in the DIMM description)
- Memory mirroring
- Memory rank sparing

If memory mirroring is used, DIMMs must be installed in pairs (minimum of one pair per processor). Both DIMMs in a pair must be identical in type and size.

If memory rank sparing is used, a minimum of one quad-rank DIMM or two single-rank or dual-rank DIMMs must be installed per populated channel. These DIMMs do not need to be identical. In rank-sparing mode, one rank of a DIMM in each populated channel is reserved as spare memory. The size of a rank varies depending on the DIMMs that are installed.

Table 5-7 lists the memory options that are available for the x220 server. DIMMs can be installed one at a time, but for performance reasons, install them in sets of three (one for each of the three memory channels) if possible.
Table 5-7 Supported memory DIMMs (part number / feature code(a) / description)

Unbuffered DIMM (UDIMM) modules:
49Y1403  A0QS  2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM
49Y1404  8648  4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM


Registered DIMMs (RDIMMs) - 1333 MHz and 1066 MHz:
49Y1406  8941  4GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM
49Y1407  8942  4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM
49Y1397  8923  8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM
49Y1563  A1QT  16GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM
49Y1400  8939  16GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM

Registered DIMMs (RDIMMs) - 1600 MHz:
49Y1559  A28Z  4GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM
90Y3178  A24L  4GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM
90Y3109  A292  8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM
00D4968  A2U5  16GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM

Load-reduced DIMMs (LRDIMMs):
90Y3105  A291  32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM

a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.

DIMM installation order


This section describes the recommended order in which DIMMs should be installed, based on the memory mode that is used. The x220 boots with just one memory DIMM installed per processor. However, the suggested memory configuration balances the memory across all the memory channels on each processor to use the available memory bandwidth. Use one of the following suggested memory configurations where possible:
- Three or six memory DIMMs in a single-processor x220 server
- Six or 12 memory DIMMs in a dual-processor x220 server

This sequence spreads the DIMMs across as many memory channels as possible. For best performance, and to ensure a working memory configuration, install the DIMMs in the sockets as shown in the following sections for these supported modes:
- Independent channel mode
- Rank-sparing mode
- Mirrored-channel mode
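The balance-across-channels guideline amounts to round-robin placement over the three channels of each processor. A minimal sketch (the helper name is illustrative, not from the source):

```python
# Spread DIMMs across as many memory channels as possible: place them
# round-robin over the three channels of one processor so the channel
# populations stay balanced.

CHANNELS_PER_CPU = 3
SLOTS_PER_CHANNEL = 2

def balanced_population(num_dimms: int) -> list:
    """Return the DIMM count per channel for one processor."""
    counts = [0] * CHANNELS_PER_CPU
    for i in range(min(num_dimms, CHANNELS_PER_CPU * SLOTS_PER_CHANNEL)):
        counts[i % CHANNELS_PER_CPU] += 1
    return counts

if __name__ == "__main__":
    print(balanced_population(3))  # [1, 1, 1] - one DIMM per channel
    print(balanced_population(4))  # [2, 1, 1] - one channel doubles up
```

Multiples of three keep the channels even, which is why the suggested configurations are three or six DIMMs per processor.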

Memory DIMM installation: Independent channel mode


The following guidelines are only for when the processors are operating in Independent channel mode. Independent channel mode provides a maximum of 96 GB of usable memory with one installed microprocessor, and 192 GB of usable memory with two installed microprocessors (using 16 GB DIMMs).

Table 5-8 shows DIMM installation if you have one processor installed.
Table 5-8 Suggested DIMM installation with one processor installed (independent channel mode) Optimal memory configa Processor 1 Channel 1 Channel 2 Channel 3 Channel 1 Processor 2 Channel 2 Channel 3

Number of processors

Number of DIMMs

DIMM 11

DIMM 12

DIMM 10

DIMM 1

DIMM 2

DIMM 3

DIMM 4

DIMM 5

DIMM 6

DIMM 9

DIMM 7 DIMM 7 x x x x x

1 1 x 1 1 1 x 1

1 2 3 4 5 6 x x x x x x x x x x x x

x x x x x x x x x

a. For optimal memory performance, populate all memory channels equally.

Table 5-9 shows DIMM installation if you have two processors installed.
Table 5-9 Suggested DIMM installation with two processors installed (independent channel mode) Optimal memory configa Processor 1 Channel 1 Channel 2 Channel 3 Channel 1 Processor 2 Channel 2 Channel 3

Number of processors

Number of DIMMs

DIMM 11

DIMM 12

DIMM 10

DIMM 1

DIMM 2

DIMM 3

DIMM 4

DIMM 5

DIMM 6

DIMM 9

2 2 2 2 x 2 2 2 2 2 2 x 1

2 3 4 5 6 7 8 9 10 11 12 x x x x x x x x x x x x x x x x x x x x x x x x

x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x

a. For optimal memory performance, populate all memory channels equally.


Memory DIMM installation: Rank-sparing mode


The following guidelines apply only when the processors are operating in rank-sparing mode. In rank-sparing mode, one rank is held in reserve as a spare for the other ranks in the same channel. If the error threshold is passed in an active rank, the contents of that rank are copied to the spare rank in the same channel. The failed rank is taken offline, and the spare rank becomes active. Rank sparing in one channel is independent of rank sparing in other channels. If a channel contains only one DIMM and the DIMM is single- or dual-ranked, do not use rank sparing.

The x220 boots with one memory DIMM installed per processor. However, in rank-sparing mode, if you use all quad-rank DIMMs, use the tables for independent channel mode for a single processor (see Table 5-8 on page 197) or for two processors (see Table 5-9 on page 197). At least one DIMM pair must be installed for each processor. This sequence spreads the DIMMs across as many memory channels as possible.

For best performance, and to ensure a working memory configuration in rank-sparing mode with single- or dual-rank DIMMs, install the DIMMs in the sockets as shown in the following tables. Table 5-10 shows DIMM installation if you have one processor installed with rank-sparing mode enabled by using single- or dual-rank DIMMs.
Table 5-10 Suggested DIMM installation with one processor in rank-sparing mode Optimal memory configa Processor 1 Channel 1 Channel 2 Channel 3 Channel 1 Processor 2 Channel 2 Channel 3

Number of processors

Number of DIMMs

DIMM 11

DIMM 12

DIMM 10

DIMM 1

DIMM 2

DIMM 3

DIMM 4

DIMM 5

DIMM 6

DIMM 9

DIMM 7

1 1 x 1

2 4 6 x x x x x x

x x x

x x x

a. For optimal memory performance, populate all memory channels equally
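The capacity cost of rank sparing (one rank reserved per populated channel) can be modeled as follows. This is a simplified sketch under stated assumptions: the per-channel DIMM lists and the choice to spare the largest rank are illustrative, not from the source:

```python
# Rank sparing reserves one rank per populated channel, so usable capacity
# is installed capacity minus one rank's worth of memory in each channel.
# Simplified model: assume the largest rank in the channel is spared.

def usable_capacity_rank_sparing(channels: list) -> int:
    """channels: per channel, a list of (dimm_size_gb, ranks) tuples."""
    usable = 0
    for dimms in channels:
        if not dimms:
            continue  # unpopulated channel contributes nothing
        total = sum(size for size, _ in dimms)
        # one rank of a DIMM in the channel is held as the spare
        spare_rank = max(size // ranks for size, ranks in dimms)
        usable += total - spare_rank
    return usable

if __name__ == "__main__":
    # Two 8 GB dual-rank RDIMMs per channel, three channels populated
    print(usable_capacity_rank_sparing([[(8, 2), (8, 2)]] * 3))  # 36 (of 48 GB)
```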


Table 5-11 shows DIMM installation if you have two processors installed with rank-sparing mode enabled, by using single- or dual-rank DIMMs.
Table 5-11 Suggested DIMM installation with 2 processors, rank-sparing mode, single or dual ranked Optimal memory configa Processor 1 Channel 1 Channel 2 Channel 3 Channel 1 Processor 2 Channel 2 Channel 3

Number of processors

Number of DIMMs

DIMM 11

DIMM 12

DIMM 10

DIMM 1

DIMM 2

DIMM 3

DIMM 4

DIMM 5

DIMM 6

DIMM 9

DIMM 7 x x x x x

2 2 x 2 2 x 2

4 6 8 10 12 x x x x x x x x x x x x

x x x x x

x x x x x x x x x x x x x

a. For optimal memory performance, populate all memory channels equally

Memory DIMM installation: Mirrored-channel mode


Table 5-12 lists the memory DIMM installation order for the x220, with one or two processors that are installed when mirrored-channel mode is used. In mirrored-channel mode, the channels are paired and both channels in a pair store the same data. For each microprocessor, DIMM channels 2 and 3 form one redundant pair, and channel 1 is unused. Because of the redundancy, the effective memory capacity of the compute node is half the installed memory capacity. The maximum memory is limited because one channel remains unused.
Table 5-12 The DIMM installation order for mirrored-channel mode

DIMM pair(a)   One processor installed   Two processors installed
1st            3 and 5                   3 and 5, and 8 and 10
2nd            4 and 6                   4 and 6
3rd            -                         7 and 9

a. The pair of DIMMs must be identical in capacity, type, and rank count.
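The halving of effective capacity in mirrored-channel mode is simple arithmetic; a minimal sketch for completeness (the function name is illustrative, not from the source):

```python
# In mirrored-channel mode, channels 2 and 3 mirror each other and channel 1
# is unused, so the effective capacity is half the installed (mirrored) total.

def mirrored_effective_capacity(installed_gb: int) -> int:
    """Effective capacity when every installed DIMM is part of a mirror pair."""
    return installed_gb // 2

if __name__ == "__main__":
    # Four 8 GB DIMMs in slots 3/5 and 4/6 (two mirror pairs)
    print(mirrored_effective_capacity(32))  # 16
```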


Table 5-13 and Table 5-14 show the suggested DIMM installation in mirrored channel mode for one or two processors.
Table 5-13 Suggested DIMM installation with one processor - mirrored channel mode Optimal memory configa Processor 1 Channel 1 Channel 2 Channel 3 Channel 1 Processor 2 Channel 2 Channel 3

Number of processors

Number of DIMMsb

DIMM 11

DIMM 12

DIMM 10

DIMM 1

DIMM 2

DIMM 3

DIMM 4

DIMM 5

DIMM 6

DIMM 9

DIMM 7 DIMM 7 x

1 x 1

4 6

x x x

x x x

a. For optimal memory performance, populate all memory channels equally.
b. The pair of DIMMs must be identical in capacity, type, and rank count.

Table 5-14 Suggested DIMM installation with two processors - mirrored channel mode
Optimal memory configa Processor 1 Channel 1 Channel 2 Channel 3 Channel 1 Processor 2 Channel 2 Channel 3

Number of processors

Number of DIMMsb

DIMM 11

DIMM 12

DIMM 10

DIMM 1

DIMM 2

DIMM 3

DIMM 4

DIMM 5

DIMM 6

DIMM 9

2 2

4 6 8

x x x x x

x x x x x x

x x x

a. For optimal memory performance, populate all memory channels equally. b. The pair of DIMMs must be identical in capacity, type, and rank count.

Memory installation considerations for IBM Flex System x220 Compute Node
Use the following general guidelines when you determine the memory configuration of your IBM Flex System x220 Compute Node:
- All memory installation considerations apply equally to one- and two-processor systems.
- All DIMMs must be DDR3 DIMMs.
- Memory of different types (RDIMMs and UDIMMs) cannot be mixed in the system.
- If you mix DIMMs with 1.35 V and 1.5 V, the system runs all of them at 1.5 V and you lose the energy advantage.
- If you mix DIMMs with different memory speeds, all DIMMs in the system run at the lowest speed.
- You cannot mix non-mirrored channel and mirrored channel modes.


IBM PureFlex System and IBM Flex System Products and Technology


- Install memory DIMMs in order of their size, with the largest DIMM first. The correct installation order is the DIMM slot farthest from the processor first (DIMM slots 5, 8, 3, 10, 1, and 12).
- Install memory DIMMs in order of their rank, with the largest DIMM in the DIMM slot farthest from the processor. Start with DIMM slots 5 and 8 and work inwards.
- Memory DIMMs can be installed one DIMM at a time. However, avoid this configuration because it can affect performance.
- For maximum memory bandwidth, install one DIMM in each of the three memory channels (three DIMMs at a time).
- Populate equivalent ranks per channel.
- Physically, DIMM slots 2, 4, 6, 7, 9, and 11 must be populated (actual DIMM or DIMM filler). DIMM slots 1, 3, 5, 8, 10, and 12 do not require a DIMM filler.
- Different memory modes require a different population order (see Table 5-12 on page 199, Table 5-13 on page 200, and Table 5-14 on page 200).
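The capacity arithmetic behind mirrored-channel mode lends itself to a quick sanity check. The following Python sketch is illustrative only (the function and data layout are our own, not part of any IBM tool); it validates that mirrored DIMM pairs are identical in size and computes the effective capacity, which is half the installed capacity in mirrored-channel mode:

```python
# Illustrative sketch: validate mirrored-channel DIMM pairs and compute
# effective capacity for one processor. Slot pairings follow Table 5-12.
MIRROR_PAIRS = [(3, 5), (4, 6)]  # install order: slots 3 and 5 first, then 4 and 6

def effective_capacity_gb(dimms, mirrored=True):
    """dimms: dict mapping slot number -> DIMM size in GB."""
    if not mirrored:
        return sum(dimms.values())
    total = 0
    for a, b in MIRROR_PAIRS:
        if a in dimms or b in dimms:
            # Both DIMMs of a pair must be present and identical in size
            # (the book also requires identical type and rank count).
            if dimms.get(a) != dimms.get(b):
                raise ValueError(f"Slots {a} and {b} must hold identical DIMMs")
            total += dimms[a] + dimms[b]
    # Channels 2 and 3 mirror each other, so usable capacity is half.
    return total // 2

print(effective_capacity_gb({3: 8, 5: 8}))                  # 8
print(effective_capacity_gb({3: 8, 5: 8, 4: 8, 6: 8}))      # 16
print(effective_capacity_gb({3: 8, 5: 8}, mirrored=False))  # 16
```

For example, four 8 GB DIMMs that are installed in mirrored-channel mode yield 16 GB of effective capacity instead of 32 GB.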

5.2.7 Internal disk storage controllers


The x220 server has two 2.5-inch hot-swap drive bays that are accessible from the front of the blade server, as shown in Figure 5-1 on page 187. The server optionally supports 1.8-inch solid-state drives (SSDs), as described in ServeRAID M5115 configurations and options on page 203. The x220 supports the following disk controllers:
- ServeRAID C105: An onboard SATA controller with software RAID capabilities
- ServeRAID H1135: An entry-level hardware RAID controller
- ServeRAID M5115: An advanced RAID controller with cache, backup, and RAID options
These three controllers are mutually exclusive. Table 5-15 lists the ordering information.
Table 5-15 Internal storage controller ordering information

Part number    Feature code    Description                                                          Maximum quantity
Integrated     None            ServeRAID C105                                                       1
90Y4750        A1XJ            ServeRAID H1135 Controller for IBM Flex System and IBM BladeCenter   1
90Y4390        A2XW            ServeRAID M5115 SAS/SATA Controller                                  1

ServeRAID C105 controller


On standard models, the two 2.5-inch drive bays are connected to a ServeRAID C105 onboard SATA controller with software RAID capabilities. The C105 function is embedded in the Intel C600 chipset. The C105 has the following features:
- Support for SATA drives (SAS is not supported)
- Support for RAID 0 and RAID 1 (non-RAID is not supported)
- 6 Gbps throughput per port
- Support for up to two volumes
- Support for virtual drive sizes greater than 2 TB


- Fixed stripe unit size of 64 KB
- Support for MegaRAID Storage Manager management software

Consideration: There is no native (in-box) driver for Windows and Linux; the drivers must be downloaded separately. In addition, there is no support for VMware, Hyper-V, Xen, or SSDs.

ServeRAID H1135
The x220 also supports an entry-level hardware RAID solution with the addition of the ServeRAID H1135 Controller for IBM Flex System and BladeCenter. The H1135 is installed in a dedicated slot, as shown in Figure 5-3 on page 189. When the H1135 adapter is installed, the C105 controller is disabled. The H1135 has the following features:
- Based on the LSI SAS2004 6 Gbps SAS 4-port controller
- PCIe 2.0 x4 host interface
- CIOv form factor (supported in the x220 and BladeCenter HS23E)
- Support for SAS, SATA, and SSD drives
- Support for RAID 0, RAID 1, and non-RAID
- 6 Gbps throughput per port
- Support for up to two volumes
- Fixed stripe size of 64 KB
- Native driver support in Windows, Linux, and VMware
- S.M.A.R.T. support
- Support for MegaRAID Storage Manager management software

ServeRAID M5115
The ServeRAID M5115 SAS/SATA Controller (90Y4390) is an advanced RAID controller that supports RAID 0, 1, 10, 5, and 50, and optionally RAID 6 and 60. It includes 1 GB of cache, which can be backed up to flash memory when it is attached to an optional supercapacitor. The M5115 attaches to the I/O adapter 1 connector. It can be attached even if the Fabric Connector is installed (used to route the embedded Gb Ethernet to chassis bays 1 and 2). The ServeRAID M5115 cannot be installed if an adapter is installed in I/O adapter slot 1. When the M5115 adapter is installed, the C105 controller is disabled.

The ServeRAID M5115 supports the following combinations of 2.5-inch drives and 1.8-inch SSDs:
- Up to two 2.5-inch drives only
- Up to four 1.8-inch SSDs only
- Up to two 2.5-inch drives, plus up to four 1.8-inch SSDs
- Up to eight 1.8-inch SSDs

For more information about these configurations, see ServeRAID M5115 configurations and options on page 203.

The ServeRAID M5115 controller has the following specifications:
- Eight internal 6 Gbps SAS/SATA ports.
- PCI Express 3.0 x8 host interface.
- 6 Gbps throughput per port.
- 800 MHz dual-core IBM PowerPC processor with an LSI SAS2208 6 Gbps ROC controller.
- Support for RAID levels 0, 1, 10, 5, and 50 standard; support for RAID 6 and 60 with an optional upgrade using 90Y4411.

- Optional onboard 1 GB data cache (DDR3 running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342.
- Support for SAS and SATA HDDs and SSDs.
- Support for intermixing SAS and SATA HDDs and SSDs. Mixing different types of drives in the same array (drive group) is not recommended.
- Support for self-encrypting drives (SEDs) with MegaRAID SafeStore.
- Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447).
- Support for up to 64 virtual drives, up to 128 drive groups, and up to 16 virtual drives per drive group. Also supports up to 32 physical drives per drive group.
- Support for LUN sizes up to 64 TB.
- Configurable stripe size up to 1 MB.
- Compliant with Disk Data Format (DDF) configuration on disk (CoD).
- S.M.A.R.T. support.
- MegaRAID Storage Manager management software.
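The RAID levels that the M5115 supports trade raw capacity for redundancy in standard ways. The following back-of-the-envelope Python sketch (our own helper, not an IBM tool; it ignores metadata overhead and controller-specific limits) estimates the usable capacity of an array of identical drives:

```python
def usable_capacity(n_drives, drive_gb, level, span_size=None):
    """Rough usable capacity in GB for an array of identical drives.

    level: one of 0, 1, 5, 6, 10, 50, or 60 (the levels the M5115 offers,
    with 6 and 60 as an optional upgrade). Formatting overhead is ignored.
    """
    if level == 0:
        return n_drives * drive_gb                 # striping, no redundancy
    if level == 1:
        assert n_drives == 2
        return drive_gb                            # full mirror
    if level == 10:
        assert n_drives % 2 == 0
        return (n_drives // 2) * drive_gb          # striped mirrors
    if level == 5:
        assert n_drives >= 3
        return (n_drives - 1) * drive_gb           # one drive's worth of parity
    if level == 6:
        assert n_drives >= 4
        return (n_drives - 2) * drive_gb           # two drives' worth of parity
    if level in (50, 60):
        # Striped spans of RAID 5 (one parity) or RAID 6 (two parity) arrays.
        parity = 1 if level == 50 else 2
        assert span_size and n_drives % span_size == 0
        spans = n_drives // span_size
        return spans * (span_size - parity) * drive_gb
    raise ValueError("unsupported RAID level")

print(usable_capacity(8, 200, 5))   # 1400
```

For example, eight 200 GB SSDs yield roughly 1400 GB of usable space in RAID 5, and 800 GB in RAID 60 (two spans of four drives).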

ServeRAID M5115 configurations and options


The x220 with the addition of the M5115 controller supports 2.5-inch drives or 1.8-inch SSDs, or combinations of the two. Table 5-16 lists the ServeRAID M5115 and associated hardware kits.
Table 5-16 ServeRAID M5115 and supported hardware kits for the x220

Part number    Feature code    Description                                                          Maximum supported
90Y4390        A2XW            ServeRAID M5115 SAS/SATA Controller                                  1
90Y4424        A35L            ServeRAID M5100 Series Enablement Kit for IBM Flex System x220       1
90Y4425        A35M            ServeRAID M5100 Series IBM Flex System Flash Kit for x220            1
90Y4426        A35N            ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220    1

At least one hardware kit is required with the ServeRAID M5115 controller. The following hardware kits enable specific drive support:
- ServeRAID M5100 Series Enablement Kit for IBM Flex System x220 (90Y4424) enables support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection. This enablement kit replaces the standard two-bay backplane that is attached through the system board to an onboard controller. The new backplane attaches with an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment for the CacheVault unit.

MegaRAID CacheVault flash cache protection uses NAND flash memory that is powered by a supercapacitor to protect data that is stored in the controller cache. This module eliminates the need for the lithium-ion battery that is commonly used to protect DRAM cache memory on PCI RAID controllers.


To avoid data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash. This process uses power from the supercapacitor. After power is restored to the RAID controller, the saved data is transferred from the NAND flash back to the DRAM cache. The DRAM cache can then be flushed to disk.

Tip: The Enablement Kit is only required if 2.5-inch drives are to be used. This kit is not required if you plan to install four or eight 1.8-inch SSDs only.

- ServeRAID M5100 Series IBM Flex System Flash Kit for x220 (90Y4425) enables support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay SSD backplane that attaches with an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, so this kit does not have a supercapacitor.
- ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220 (90Y4426) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles (left and right), each with attachment locations for two 1.8-inch SSDs. It also contains flex cables for the attachment of up to four 1.8-inch SSDs.

Table 5-17 shows the kits that are required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the Flash Kit, and the SSD Expansion Kit.
Table 5-17 ServeRAID M5115 hardware kits

Drive support required                                         Components required
Max 2.5-inch drives    Max 1.8-inch SSDs
2                      0                       =>              M5115 (90Y4390) + Enablement Kit (90Y4424)
0                      4 (front)               =>              M5115 (90Y4390) + Flash Kit (90Y4425)
2                      4 (internal)            =>              M5115 (90Y4390) + Enablement Kit (90Y4424) + SSD Expansion Kit (90Y4426)
0                      8 (front and internal)  =>              M5115 (90Y4390) + Flash Kit (90Y4425) + SSD Expansion Kit (90Y4426)
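The kit selection in Table 5-17 reduces to a simple rule: the M5115 controller is always required, 2.5-inch front drives add the Enablement Kit, front 1.8-inch SSDs add the Flash Kit, and internal 1.8-inch SSDs add the SSD Expansion Kit. A hypothetical helper (our own sketch, not IBM ordering logic) can encode this:

```python
def required_kits(front_25_drives=0, front_ssds=0, internal_ssds=0):
    """Return the part numbers needed for a given drive mix (per Table 5-17)."""
    assert front_25_drives <= 2 and front_ssds <= 4 and internal_ssds <= 4
    # The 2.5-inch front bays and the front 1.8-inch SSD bays occupy the
    # same physical space, so they are mutually exclusive.
    assert not (front_25_drives and front_ssds)
    kits = ["90Y4390"]              # ServeRAID M5115 controller, always required
    if front_25_drives:
        kits.append("90Y4424")      # Enablement Kit (2.5-inch backplane)
    if front_ssds:
        kits.append("90Y4425")      # Flash Kit (4-bay front SSD backplane)
    if internal_ssds:
        kits.append("90Y4426")      # SSD Expansion Kit (internal SSDs)
    return kits

print(required_kits(front_ssds=4, internal_ssds=4))
# ['90Y4390', '90Y4425', '90Y4426'] -- the 8-SSD configuration
```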


Figure 5-6 shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (see row 1 of Table 5-17 on page 204).
Figure 5-6 The ServeRAID M5115 controller (90Y4390) and the ServeRAID M5100 Series Enablement Kit for x220 (90Y4424) installed, showing the M5115 controller, the MegaRAID CacheVault flash cache protection module, and the replacement two-drive backplane

Figure 5-7 shows how the ServeRAID M5115 and Flash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (see row 4 of Table 5-17 on page 204).
Figure 5-7 The ServeRAID M5115 controller (90Y4390) with the Flash Kit (90Y4425) and SSD Expansion Kit (90Y4426) installed, showing the replacement four-drive SSD backplane and drive bays, and four SSDs on special air baffles above the DIMMs (no CacheVault flash protection); eight drives are supported in total (four internal and four front-accessible)

The eight SSDs are installed in the following locations:
- Four in the front of the system in place of the two 2.5-inch drive bays
- Two in a tray above the memory banks for processor 1
- Two in a tray above the memory banks for processor 2


Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, SSD performance accelerator, and SSD caching enabler. The FoD license upgrades are listed in Table 5-18.
Table 5-18 Supported upgrade features

Part number    Feature code    Description                                                                                    Maximum supported
90Y4410        A2Y1            ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System                                      1
90Y4412        A2Y2            ServeRAID M5100 Series Performance Accelerator for IBM Flex System (MegaRAID FastPath)         1
90Y4447        A36G            ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System (MegaRAID CacheCade Pro 2.0)    1

The following features are included:
- RAID 6 Upgrade (90Y4410): Adds support for RAID 6 and RAID 60. This license is an FoD license.
- Performance Accelerator (90Y4412): The Performance Accelerator for IBM Flex System is implemented by using the LSI MegaRAID FastPath software and provides high-performance I/O acceleration for SSD-based virtual drives. It uses a low-latency I/O path to increase the maximum IOPS capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is an FoD license.
- SSD Caching Enabler for traditional hard disk drives (90Y4447): The SSD Caching Enabler for IBM Flex System is implemented by using the LSI MegaRAID CacheCade Pro 2.0 software and is designed to accelerate the performance of HDD arrays with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache to help maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is an FoD license. This feature requires that at least one SSD drive is installed.
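The internals of MegaRAID CacheCade Pro 2.0 are proprietary, but the general technique that this description outlines (tracking access frequency and promoting hot blocks to a fixed-size SSD cache) can be illustrated with a toy model. All class and parameter names here are our own, not from any LSI or IBM software:

```python
from collections import Counter

class HotDataCache:
    """Toy model of SSD caching for HDD arrays: blocks that are read often
    are promoted to a fixed-size SSD cache (illustrative only)."""

    def __init__(self, ssd_blocks, promote_after=3):
        self.ssd_blocks = ssd_blocks        # capacity of the SSD cache pool
        self.promote_after = promote_after  # reads before a block is "hot"
        self.hits = Counter()               # access-frequency tracking
        self.cached = set()                 # blocks currently on SSD

    def read(self, block):
        self.hits[block] += 1
        if block in self.cached:
            return "ssd"                    # fast path: served from cache
        if self.hits[block] >= self.promote_after:
            if len(self.cached) >= self.ssd_blocks:
                # Evict the least frequently accessed cached block.
                coldest = min(self.cached, key=self.hits.__getitem__)
                self.cached.remove(coldest)
            self.cached.add(block)          # promote the hot block
        return "hdd"                        # slow path: served from disk

cache = HotDataCache(ssd_blocks=2)
for _ in range(3):
    cache.read(42)      # the third read promotes block 42 to the SSD cache
print(cache.read(42))   # ssd
```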

5.2.8 Supported internal drives


The x220 supports 1.8-inch and 2.5-inch drives.

Supported 1.8-inch drives


The 1.8-inch solid-state drives that are supported by the ServeRAID M5115 are listed in Table 5-19.
Table 5-19 Supported 1.8-inch solid-state drives

Part number    Feature code    Description                                            Maximum supported
49Y6124        A3AP            IBM 400GB SATA 1.8" MLC Enterprise SSD                 8
43W7746        5420            IBM 200 GB SATA 1.8-inch MLC SSD                       8
49Y6119        A3AN            IBM 200GB SATA 1.8" MLC Enterprise SSD                 8
00W1120        A3HQ            IBM 100GB SATA 1.8" MLC Enterprise SSD                 8
43W7726        5428            IBM 50 GB SATA 1.8-inch MLC SSD                        8
49Y5993        A3AR            IBM 512 GB SATA 1.8-inch MLC Enterprise Value SSD      8
49Y5834        A3AQ            IBM 64 GB SATA 1.8-inch MLC Enterprise Value SSD       8
00W1222        A3TG            IBM 128GB SATA 1.8" MLC Enterprise Value SSD           8
00W1227        A3TH            IBM 256GB SATA 1.8" MLC Enterprise Value SSD           8

Supported 2.5-inch drives


The 2.5-inch drive bays support SAS or SATA HDDs or SATA SSDs. Table 5-20 lists the supported 2.5-inch drive options. The maximum quantity that is supported is two.
Table 5-20 2.5-inch drive options for internal disk storage

                                                                                          Supported by ServeRAID controller
Part number    Feature code    Description                                                C105         H1135        M5115
10K SAS hard disk drives
42D0637        5599            IBM 300 GB 10K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD         No           Supported    Supported
49Y2003        5433            IBM 600 GB 10K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD         No           Supported    Supported
81Y9650        A282            IBM 900 GB 10K 6 Gbps SAS 2.5-inch SFF HS HDD              No           Supported    Supported
00AD075        A48S            IBM 1.2TB 10K 6Gbps SAS 2.5'' G2HS HDD                     No           Supported    Supported
15K SAS hard disk drives
42D0677        5536            IBM 146 GB 15K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD         No           Supported    Supported
90Y8926        A2XB            IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD                  No           Supported    Supported
81Y9670        A283            IBM 300 GB 15K 6 Gbps SAS 2.5-inch SFF HS HDD              No           Supported    Supported
10K and 15K self-encrypting drives (SED)
90Y8944        A2ZK            IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS SED                  No           Supported    Supported
90Y8913        A2XF            IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS SED                  No           Supported    Supported
90Y8908        A3EF            IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS SED                  No           Supported    Supported
81Y9662        A3EG            IBM 900GB 10K 6Gbps SAS 2.5" SFF G2HS SED                  No           Supported    Supported
00AD085        A48T            IBM 1.2TB 10K 6Gbps SAS 2.5'' G2HS SED                     No           Supported    Supported
SAS-SSD hybrid drive
00AD102        A4G7            IBM 600GB 10K 6Gbps SAS 2.5'' G2HS Hybrid                  No           Supported    Supported
NL SATA hard disk drives
81Y9722        A1NX            IBM 250 GB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD        Supported    Supported    Supported
81Y9726        A1NZ            IBM 500 GB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD        Supported    Supported    Supported
81Y9730        A1AV            IBM 1 TB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD          Supported    Supported    Supported
NL SAS hard disk drives
42D0707        5409            IBM 500 GB 7200 6 Gbps NL SAS 2.5-inch SFF Slim-HS HDD     No           Supported    Supported
90Y8953        A2XE            IBM 500GB 7.2K 6Gbps NL SAS 2.5" SFF G2HS HDD              No           Supported    Supported
81Y9690        A1P3            IBM 1 TB 7.2 K 6 Gbps NL SAS 2.5-inch SFF HS HDD           No           Supported    Supported
Solid-state drives - Enterprise
41Y8331        A4FL            S3700 200GB SATA 2.5" MLC HS Enterprise SSD                No           Supported    Supported
41Y8336        A4FN            S3700 400GB SATA 2.5" MLC HS Enterprise SSD                No           Supported    Supported
41Y8341        A4FQ            S3700 800GB SATA 2.5" MLC HS Enterprise SSD                No           Supported    Supported
00W1125        A3HR            IBM 100GB SATA 2.5" MLC HS Enterprise SSD                  No           Supported    Supported
43W7718        A2FN            IBM 200 GB SATA 2.5-inch MLC HS SSD (a)                    No           Supported    Supported
49Y6129        A3EW            IBM 200GB SAS 2.5" MLC HS Enterprise SSD                   No           Supported    Supported
49Y6134        A3EY            IBM 400GB SAS 2.5" MLC HS Enterprise SSD                   No           Supported    Supported
49Y6139        A3F0            IBM 800GB SAS 2.5" MLC HS Enterprise SSD                   No           Supported    Supported
49Y6195        A4GH            IBM 1.6TB SAS 2.5" MLC HS Enterprise SSD                   No           Supported    Supported
Solid-state drives - Enterprise Value
49Y5839        A3AS            IBM 64 GB SATA 2.5-inch MLC HS Enterprise Value SSD        No           Supported    Supported
90Y8648        A2U4            IBM 128GB SATA 2.5" MLC HS Enterprise Value SSD            No           Supported    Supported
90Y8643        A2U3            IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD            No           Supported    Supported
49Y5844        A3AU            IBM 512 GB SATA 2.5-inch MLC HS Enterprise Value SSD       No           Supported    Supported

a. Withdrawn from marketing.

IBM Flex System Storage Expansion Node


The x220 also supports the IBM Flex System Storage Expansion Node, which provides another 12 drive bays. For more information, see 5.10, IBM Flex System Storage Expansion Node on page 363.


5.2.9 Embedded 1 Gb Ethernet controller


Some models of the x220 include an Embedded 1 Gb Ethernet controller (also known as LOM) built into the system board. Table 5-2 on page 190 lists which models of the x220 include the controller. Each x220 model that includes the controller also has the Compute Node Fabric Connector that is installed in I/O connector 1 and physically screwed onto the system board. The Compute Node Fabric Connector provides connectivity to the Enterprise Chassis midplane. Figure 5-3 on page 189 shows the location of the Fabric Connector.

The Fabric Connector enables port 1 on the controller to be routed to I/O module bay 1. Similarly, port 2 is routed to I/O module bay 2. The Fabric Connector can be unscrewed and removed, if required, to allow the installation of an I/O adapter on I/O connector 1.

The Embedded 1 Gb Ethernet controller has the following features:
- Broadcom BCM5718 based
- Dual-port Gigabit Ethernet controller
- PCIe 2.0 x2 host bus interface
- Supports Wake on LAN
- Supports Serial over LAN
- Supports IPv6

Consideration: TCP/IP offload engine (TOE) is not supported.

5.2.10 I/O expansion


Like other IBM Flex System compute nodes, the x220 has two PCIe 3.0 I/O expansion connectors for attaching I/O adapters. On the x220, each of these connectors has 12 PCIe lanes. These lanes are implemented as one x8 link (connected to the first application-specific integrated circuit (ASIC) on the installed adapter) and one x4 link (connected to the second ASIC on the installed adapter). The I/O expansion connectors are high-density 216-pin PCIe connectors. Installing I/O adapters allows the x220 to connect to switch modules in the IBM Flex System Enterprise Chassis. The x220 also supports the IBM Flex System PCIe Expansion Node, which provides up to another six adapter slots: two Flex System I/O adapter slots and up to four standard PCIe slots. For more information, see 5.9, IBM Flex System PCIe Expansion Node on page 356.
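Each PCIe 3.0 lane signals at 8 GT/s with 128b/130b encoding, so the theoretical one-direction bandwidth of the x8 and x4 links can be estimated with simple arithmetic. The following sketch is a back-of-the-envelope calculation that ignores protocol overhead:

```python
def pcie3_bandwidth_gbs(lanes):
    """Theoretical one-direction PCIe 3.0 bandwidth in GB/s.

    Each lane runs at 8 GT/s with 128b/130b encoding (128 payload bits
    for every 130 transferred bits), and 8 bits make one byte.
    """
    return lanes * 8e9 * (128 / 130) / 8 / 1e9

print(round(pcie3_bandwidth_gbs(8), 2))   # 7.88 -- the x8 link
print(round(pcie3_bandwidth_gbs(4), 2))   # 3.94 -- the x4 link
```

Together, the x8 and x4 links of one I/O expansion connector therefore offer roughly 11.8 GB/s of theoretical bandwidth in each direction.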


Figure 5-8 shows the rear of the x220 compute node and the locations of the I/O connectors.

Figure 5-8 Rear of the x220 compute node showing the locations of I/O connector 1 and I/O connector 2

Table 5-21 lists the I/O adapters that are supported in the x220.
Table 5-21 Supported I/O adapters for the x220 compute node

Part number    Feature code    Ports    Description
Ethernet adapters
49Y7900        A10Y            4        IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
90Y3466        A1QY            2        IBM Flex System EN4132 2-port 10Gb Ethernet Adapter
90Y3554        A1R1            4        IBM Flex System CN4054 10Gb Virtual Fabric Adapter
90Y3482        A3HK            2        IBM Flex System EN6132 2-port 40Gb Ethernet Adapter
Fibre Channel adapters
69Y1938        A1BM            2        IBM Flex System FC3172 2-port 8Gb FC Adapter
95Y2375        A2N5            2        IBM Flex System FC3052 2-port 8Gb FC Adapter
88Y6370        A1BP            2        IBM Flex System FC5022 2-port 16Gb FC Adapter
95Y2386        A45R            2        IBM Flex System FC5052 2-port 16Gb FC Adapter
95Y2391        A45S            4        IBM Flex System FC5054 4-port 16Gb FC Adapter
69Y1942        A1BQ            2        IBM Flex System FC5172 2-port 16Gb FC Adapter
InfiniBand adapters
90Y3454        A1QZ            2        IBM Flex System IB6132 2-port FDR InfiniBand Adapter

Consideration: Any supported I/O adapter can be installed in either I/O connector. However, you must be consistent across the chassis and all compute nodes.


5.2.11 Integrated virtualization


The x220 offers USB flash drive options that are preinstalled with versions of VMware ESXi. This software is an embedded version of VMware ESXi and is fully contained on the flash drive without requiring any disk space. The USB memory key plugs into one of the two internal USB ports on the x220 system board, as shown in Figure 5-3 on page 189. If you install USB keys in both USB ports, both devices are listed in the boot menu. You can use this configuration to boot from either device, or set one as a backup in case the first gets corrupted. The supported USB memory keys are listed in Table 5-22.
Table 5-22 Virtualization options

Part number    Feature code    Description                                              Maximum supported
41Y8300        A2VC            IBM USB Memory Key for VMware ESXi 5.0                   1
41Y8307        A383            IBM USB Memory Key for VMware ESXi 5.0 Update 1          1
41Y8311        A2R3            IBM USB Memory Key for VMware ESXi 5.1                   1
41Y8298        A2G0            IBM Blank USB Memory Key for VMware ESXi Downloads (a)   2

a. The Blank USB Memory Key requires the download of the VMware vSphere (ESXi) Hypervisor with IBM Customization image, which is available at this website: http://ibm.com/systems/x/os/vmware/

There are two types of USB keys: preloaded keys and blank keys. Blank keys allow you to download an IBM customized version of ESXi and load it onto the key. The x220 supports one or two keys installed, but only in certain combinations. The following combinations are supported:
- One preload key
- One blank key
- One preload key and one blank key
- Two blank keys

Two preload keys is an unsupported combination. Installing two preloaded keys prevents ESXi from booting, as described at this website: http://kb.vmware.com/kb/1035107

Having two keys installed provides a backup boot device. Both devices are listed in the boot menu, which allows you to boot from either device or to set one as a backup in case the first one gets corrupted.
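These combination rules are simple enough to encode as a check. The following is a hypothetical validation helper (our own sketch, not IBM software):

```python
def valid_key_combination(keys):
    """keys: list of 'preload' or 'blank', one entry per installed USB key
    (the x220 has two internal USB ports). Two preloaded keys prevent
    ESXi from booting, so that combination is rejected."""
    assert 1 <= len(keys) <= 2
    assert all(k in ("preload", "blank") for k in keys)
    return keys.count("preload") < 2

print(valid_key_combination(["preload", "blank"]))    # True
print(valid_key_combination(["preload", "preload"]))  # False
```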

5.2.12 Systems management


The following section describes some of the systems management features that are available with the x220.

Front panel LEDs and controls


The front of the x220 includes several LEDs and controls that help with systems management. They include a hard disk drive (HDD) activity LED, status LEDs, and power, identify, check log, fault, and light path diagnostic LEDs.


Figure 5-9 shows the location of the LEDs and controls on the front of the x220.
Figure 5-9 The front of the x220 with the front panel LEDs and controls shown: hard disk drive activity and status LEDs, USB port, identify LED, fault LED, NMI control, console breakout cable port, power button/LED, and check log LED

Table 5-23 describes the front panel LEDs.


Table 5-23 x220 front panel LED information

Power (Green): This LED lights solid when the system is powered up. When the compute node is initially plugged into a chassis, this LED is off. If the power-on button is pressed, the IMM flashes this LED until it determines that the compute node can power up. If the compute node can power up, the IMM powers the compute node on and turns on this LED solid. If the compute node cannot power up, the IMM turns off this LED and turns on the information LED. When this button is pressed with the server out of the chassis, the light path LEDs are lit.

Location (Blue): A user can use this LED to locate the compute node in the chassis by requesting it to flash from the Chassis Management Module console. The IMM flashes this LED when instructed to by the Chassis Management Module. This LED functions only when the server is powered on.

Check error log (Yellow): The IMM turns on this LED when a condition occurs that prompts the user to check the system error log in the Chassis Management Module.

Fault (Yellow): This LED lights solid when a fault is detected somewhere on the compute node. If this indicator is on, the general fault indicator on the chassis front panel should also be on.

Hard disk drive activity LED (Green): Each hot-swap hard disk drive has an activity LED. When this LED is flashing, it indicates that the drive is in use.

Hard disk drive status LED (Yellow): When this LED is lit, it indicates that the drive failed. If an optional IBM ServeRAID controller is installed in the server, when this LED is flashing slowly (one flash per second), it indicates that the drive is being rebuilt. When the LED is flashing rapidly (three flashes per second), it indicates that the controller is identifying the drive.


Table 5-24 describes the x220 front panel controls.


Table 5-24 x220 front panel control information

Power on/off button (recessed, grouped with the power LED): If the server is off, pressing this button causes the server to power up and start loading. When the server is on, pressing this button causes a graceful shutdown of the individual server so that it is safe to remove. This process includes shutting down the operating system (if possible) and removing power from the server. If an operating system is running, you might have to hold the button for approximately 4 seconds to initiate the shutdown. The recessed design protects the button from accidental activation.

NMI control (recessed; it can be accessed only by using a small pointed object): Causes an NMI for debugging purposes.

Power LED
The status of the power LED of the x220 shows the power status of the compute node. It also indicates the discovery status of the node by the Chassis Management Module. The power LED states are listed in Table 5-25.
Table 5-25 The power LED states of the x220 compute node

Power LED state       Status of compute node
Off                   No power to the compute node
On; fast flash mode   The compute node has power; the Chassis Management Module is in discovery mode (handshake)
On; slow flash mode   The compute node has power and is in stand-by mode
On; solid             The compute node has power and is operational

Exception: The power button does not operate when the power LED is in fast flash mode.

Light path diagnostic procedures


For quick problem determination when you are physically located at the server, the x220 offers the following three-step guided path:
1. The fault LED on the front panel.
2. The light path diagnostics panel, which is shown in Figure 5-10 on page 214.
3. LEDs next to key components on the system board.


The x220 light path diagnostics panel is visible when you remove the server from the chassis. The panel is on the upper right of the compute node as shown in Figure 5-10.

Figure 5-10 Location of x220 light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis. The meaning of each LED in the light path diagnostics panel is listed in Table 5-26.
Table 5-26 Light path panel LED definitions

LP (Green): The light path diagnostics panel is operational.
S BRD (Yellow): A system board error is detected.
MIS (Yellow): A mismatch has occurred between the processors, DIMMs, or HDDs within the configuration, as reported by POST.
NMI (Yellow): An NMI has occurred.
TEMP (Yellow): An over-temperature condition has occurred that was critical enough to shut down the server.
MEM (Yellow): A memory fault has occurred. The corresponding DIMM error LEDs on the system board should also be lit.
ADJ (Yellow): A fault is detected in the adjacent expansion unit (if installed).

Integrated Management Module II


Each x220 compute node has an IMM2 onboard and uses the UEFI to replace the older BIOS interface. The IMM2 provides the following major features as standard:
- IPMI v2.0 compliance
- Remote configuration of IMM2 and UEFI settings without the need to power on the server


- Remote access to system fan, voltage, and temperature values
- Remote IMM and UEFI update
- UEFI update when the server is powered off
- Remote console by way of Serial over LAN
- Remote access to the system event log
- Predictive failure analysis and integrated alerting features; for example, by using Simple Network Management Protocol (SNMP)
- Remote presence, including remote control of the server by using a Java or ActiveX client
- Operating system failure window (blue screen) capture and display through the web interface
- Virtual media that allows the attachment of a diskette drive, CD/DVD drive, USB flash drive, or disk image to a server

Remember: Unlike IBM BladeCenter, the assigned TCP/IP address of the IMM is available on the local network. You can use this address to remotely manage the x220 by connecting directly to the IMM, independent of the IBM Flex System Manager or Chassis Management Module.

For more information about the IMM, see 3.4.1, Integrated Management Module II on page 47.
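Because the IMM2 is IPMI v2.0-compliant and reachable directly over the network, standard utilities such as the open-source ipmitool can query it. The sketch below only constructs the command line rather than invoking it; the host name and credentials are placeholders:

```python
def imm_power_status_cmd(host, user, password):
    """Build an ipmitool command to query chassis power state over the
    network. The IMM2 is IPMI v2.0-compliant, so ipmitool's lanplus
    (RMCP+) interface applies. Host and credentials are placeholders."""
    return [
        "ipmitool",
        "-I", "lanplus",   # IPMI v2.0 session over the LAN
        "-H", host,        # IMM2 network address (assumed, per your setup)
        "-U", user,
        "-P", password,
        "chassis", "power", "status",
    ]

cmd = imm_power_status_cmd("imm2.example.com", "USERID", "********")
print(" ".join(cmd[:3]))   # ipmitool -I lanplus
```

The same pattern extends to other standard ipmitool subcommands, such as reading the sensor data repository for the fan, voltage, and temperature values mentioned above.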

5.2.13 Operating system support


The following operating systems are supported by the x220:
- Microsoft Windows Server 2008 HPC Edition
- Microsoft Windows Server 2008 R2
- Microsoft Windows Server 2008, Datacenter x64 Edition
- Microsoft Windows Server 2008, Enterprise x64 Edition
- Microsoft Windows Server 2008, Standard x64 Edition
- Microsoft Windows Server 2008, Web x64 Edition
- Microsoft Windows Server 2012
- Red Hat Enterprise Linux 5 Server with Xen x64 Edition
- Red Hat Enterprise Linux 5 Server x64 Edition
- Red Hat Enterprise Linux 6 Server x64 Edition
- SUSE Linux Enterprise Server 10 for AMD64/EM64T
- SUSE Linux Enterprise Server 11 for AMD64/EM64T
- SUSE Linux Enterprise Server 11 with Xen for AMD64/EM64T
- VMware ESX 4.1
- VMware ESXi 4.1
- VMware vSphere 5
- VMware vSphere 5.1

ServeRAID C105: There is no native (in-box) driver for the ServeRAID C105 controller for Windows and Linux; the drivers must be downloaded separately. The ServeRAID C105 controller does not support VMware, Hyper-V, Xen, or solid-state drives (SSDs).

For more information about the latest list of supported operating systems, see the IBM ServerProven page at this website: http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml


5.3 IBM Flex System x222 Compute Node


The IBM Flex System x222 Compute Node is a high-density dual-server offering that is designed for virtualization, dense cloud deployments, and hosted clients. The x222 has two independent compute nodes in one mechanical package, which means that the x222 has a dense design that allows up to 28 servers to be housed in a single 10U Flex System Enterprise Chassis.

Compute Node versus server: In this section, the term Compute Node refers to the entire x222. The term server refers to each independent half of the x222.

This section includes the following topics:
- 5.3.1, Introduction on page 216
- 5.3.2, Models on page 219
- 5.3.3, Chassis support on page 219
- 5.3.4, System architecture on page 220
- 5.3.5, Processor options on page 222
- 5.3.6, Memory options on page 223
- 5.3.7, Supported internal drives on page 225
- 5.3.8, Expansion Node support on page 226
- 5.3.9, Embedded 10Gb Virtual Fabric adapter on page 226
- 5.3.10, Mid-mezzanine I/O adapters on page 228
- 5.3.11, Integrated virtualization on page 231
- 5.3.12, Systems management on page 232
- 5.3.13, Operating system support on page 234

5.3.1 Introduction
The IBM Flex System x222 Compute Node is a high-density offering that is designed to maximize the computing power that is available in the data center. With a balance between cost and system features, the x222 is an ideal platform for dense workloads, such as virtualization. This section describes the key features of the server. Figure 5-11 shows the front of the x222 Compute Node and the location of its controls, LEDs, and connectors.

Figure 5-11 The IBM Flex System x222 Compute Node (front view). Each server (upper and lower) has a USB port, a power button, a light path diagnostics LED panel, a console breakout cable port, and a 2.5-inch simple-swap HDD bay (or two 1.8-inch hot-swap SSD bays).

216

IBM PureFlex System and IBM Flex System Products and Technology

Figure 5-12 shows the internal layout and major components of the x222.

Figure 5-12 Exploded view of the x222, showing the major components: the upper and lower system-board assemblies, upper and lower air baffles, fabric connector, DIMMs, I/O expansion adapter, microprocessors, heat sinks and heat sink filler, simple-swap hard disk drive, drive bay filler, and solid-state drive mounting sleeve.

Table 5-27 lists the features of the x222.

Table 5-27 IBM Flex System x222 Compute Node specifications

Form factor: Standard Flex System form factor with two independent servers.
Chassis support: IBM Flex System Enterprise Chassis.
Processor: Up to four processors in a standard (half-width) Flex System form factor. Each separate server: Up to two Intel Xeon processor E5-2400 product family CPUs with eight cores (up to 2.3 GHz), six cores (up to 2.4 GHz), or four cores (up to 2.2 GHz), one QPI link that runs at 8.0 GTps, L3 cache up to 20 MB, and memory speeds up to 1600 MHz. The two separate servers are independent and cannot be combined to form a single, four-socket system.
Chipset: Intel C600 series.
Memory: Up to 24 DIMM sockets in a standard (half-width) Flex System form factor. Each separate server: Up to 12 DIMM sockets (six DIMMs per processor) by using Low Profile (LP) DDR3 DIMMs. RDIMMs and LRDIMMs are supported. 1.5 V and low-voltage 1.35 V DIMMs are supported. There is support for up to 1600 MHz memory speed, depending on the processor. There are three memory channels per processor (two DIMMs per channel). Supports two DIMMs per channel operating at 1600 MHz (2 DPC @ 1600 MHz) with single-rank and dual-rank RDIMMs.
Memory maximums: Each separate server: With LRDIMMs: Up to 384 GB with 12x 32 GB LRDIMMs and two processors. With RDIMMs: Up to 192 GB with 12x 16 GB RDIMMs and two processors.
Memory protection: ECC, Chipkill, optional memory mirroring, and memory rank sparing.
Disk drive bays: Each separate server: One 2.5-inch simple-swap SATA drive bay supporting SATA and SSD drives. Optional SSD mounting kit to convert the 2.5-inch simple-swap bay into two 1.8-inch hot-swap SSD bays.
Maximum internal storage (raw): Each separate server: Up to 1 TB using a 2.5-inch SATA simple-swap drive, or up to 512 GB using two 1.8-inch SSDs and the SSD Expansion Kit.
RAID support: None.
Network interfaces: Each separate server: Two 10 Gb Ethernet ports with Embedded 10Gb Virtual Fabric Ethernet LAN on motherboard (LOM) controller; Emulex BE3 based. Routes to chassis bays 1 and 2 through a Fabric Connector to the midplane. Features on Demand upgrade to FCoE and iSCSI. Usage of both ports on both servers requires two scalable Ethernet switches in the chassis, each upgraded to enable 28 internal switch ports.
PCI Expansion slots: Each separate server: One connector for an I/O adapter; PCI Express 3.0 x16 interface. Supports special mid-mezzanine I/O cards that are shared by both servers. Only one card is needed to connect both servers.
Ports: Each separate server: One external and two internal USB ports for an embedded hypervisor. A console breakout cable port on the front of the server provides local KVM and serial ports (one cable is provided as standard with the chassis; more cables are optional).
Systems management: Each separate server: UEFI, IBM Integrated Management Module II (IMM2) with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, and remote presence. Support for IBM Flex System Manager, IBM Systems Director, and IBM ServerGuide.
Security features: Power-on password and admin password, Trusted Platform Module (TPM) 1.2.
Video: Each separate server: Matrox G200eR2 video core with 16 MB video memory that is integrated into the IMM2. The maximum resolution is 1600x1200 at 75 Hz with 16M colors.
Limited warranty: Three-year, customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Operating systems supported: Microsoft Windows Server 2008 R2 and 2012, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, and VMware ESXi 4.1, 5.0, and 5.1. For more information, see 5.3.13, Operating system support on page 234.
Service and support: Optional country-specific service upgrades are available through IBM ServicePacs: 6, 4, or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width: 217 mm (8.6 in.), height: 56 mm (2.2 in.), depth: 492 mm (19.4 in.).
Weight: Maximum configuration: 8.2 kg (18 lb).


5.3.2 Models
The current x222 models are shown in Table 5-28. All models include 2x 8 GB of memory (one 8 GB DIMM per server).

Table 5-28 Standard models. In all models, each server has a SATA (non-RAID) disk adapter and one 2.5-inch simple-swap drive bay (2x 2.5 SS per node), no drives are included as standard (bays open), and each server has two 10 GbE ports (4x 10 GbE per node). I/O slots used/max is 0/1, except for model H6x.

7916-A2x: 2x Intel Xeon E5-2418L 4C 2.0GHz 10MB 1333MHz 50W
7916-B2x: 2x Intel Xeon E5-2430L 6C 2.0GHz 15MB 1333MHz 60W
7916-C2x: 2x Intel Xeon E5-2450L 8C 1.8GHz 20MB 1600MHz 70W
7916-D2x: 2x Intel Xeon E5-2403 4C 1.8GHz 10MB 1066MHz 80W
7916-F2x: 2x Intel Xeon E5-2407 4C 2.2GHz 10MB 1066MHz 80W
7916-G2x: 2x Intel Xeon E5-2420 6C 1.9GHz 15MB 1333MHz 95W
7916-H2x: 2x Intel Xeon E5-2430 6C 2.2GHz 15MB 1333MHz 95W
7916-H6x: 2x Intel Xeon E5-2430 6C 2.2GHz 15MB 1333MHz 95W; 4x 10 GbE plus 2x InfiniBand (a); I/O slots used/max 1/1
7916-J2x: 2x Intel Xeon E5-2440 6C 2.4GHz 15MB 1333MHz 95W
7916-M2x: 2x Intel Xeon E5-2450 8C 2.1GHz 20MB 1600MHz 95W
7916-N2x: 2x Intel Xeon E5-2470 8C 2.3GHz 20MB 1600MHz 95W

a. Model H6x includes the IBM Flex System IB6132D 2-port FDR InfiniBand Adapter.

5.3.3 Chassis support


The x222 type 7916 is supported in the IBM Flex System Enterprise Chassis, as shown in Table 5-29.

Table 5-29 x222 chassis support
x222: BladeCenter chassis (all): No. IBM Flex System Enterprise Chassis: Yes.


Up to 14 x222 Compute Nodes (up to 28 separate servers) can be installed in the chassis in 10U of rack space. The actual number of x222 systems that can be powered on in a chassis depends on the following factors:
The TDP power rating for the processors that are installed in the x222
The number of power supplies that are installed in the chassis
The capacity of the power supplies that are installed in the chassis (2100 W or 2500 W)
The power redundancy policy that is used in the chassis (N+1 or N+N)
Table 4-11 on page 93 provides guidelines about the number of x222 systems that can be powered on in the IBM Flex System Enterprise Chassis, based on the type and number of power supplies that are installed.
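The dependency on the power configuration can be illustrated with a simple budget calculation. The supply capacities (2100 W or 2500 W) and the N+1/N+N policies come from the text above; the budgeting rule below (reserve one supply for N+1, half of the supplies for N+N) is a simplifying assumption for illustration, not the exact algorithm that the chassis power management uses.

```python
# Hedged sketch: estimate the usable chassis power budget under a
# redundancy policy. The reserve rule is a simplifying assumption,
# not IBM's exact power-management algorithm.
def usable_power_watts(num_supplies: int, capacity_w: int, policy: str) -> int:
    """Return the power budget left after reserving redundant supplies."""
    if policy == "N+1":
        return (num_supplies - 1) * capacity_w   # one supply held in reserve
    if policy == "N+N":
        return (num_supplies // 2) * capacity_w  # half of the supplies in reserve
    raise ValueError("policy must be 'N+1' or 'N+N'")

# Six 2500 W supplies: N+1 leaves a larger budget than N+N
print(usable_power_watts(6, 2500, "N+1"))  # 12500
print(usable_power_watts(6, 2500, "N+N"))  # 7500
```

The remaining budget, divided by the worst-case draw of one configured x222, bounds how many nodes can be powered on at full performance.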

5.3.4 System architecture


The x222 Compute Node contains two individual and independent servers. The servers share power and network connections to the IBM Flex System Enterprise Chassis, but they operate as two separate servers. It is not possible to combine the servers to form a single four-socket server. Figure 5-13 shows the x222 open and the two separate servers, upper and lower.

Figure 5-13 The x222 open, showing the two servers. The servers occupy the top and bottom halves of the node and are joined by a power and signal interconnect. I/O connector 1 is shared by both servers for 10 GbE (two ports for each server); I/O connector 2 is on the shared adapter (InfiniBand or FC), with connections top and bottom to each server. The I/O adapter connector for the upper server is also visible.

Each server within the IBM Flex System x222 Compute Node has the following system architecture features as standard:
Two 1356-pin, Socket B2 (LGA-1356) processor sockets
An Intel C600 series Platform Controller Hub
Three memory channels per socket
Up to two DIMMs per memory channel
12 DDR3 DIMM sockets
Support for RDIMMs and LRDIMMs
One integrated 10 Gb Ethernet controller (10 GbE LOM in Figure 5-14)
One IMM2
One connector for attaching to a mid-mezzanine I/O adapter
One SATA connector for one 2.5-inch simple-swap HDD or SSD (or two 1.8-inch SSDs with the optional 1.8-inch enablement kit)
Two internal and one external USB connectors

Figure 5-14 shows the system architecture of the x222 system.

Figure 5-14 IBM Flex System x222 Compute Node block diagram. In each server, two Intel Xeon processors are joined by a QPI link (up to 8 GT/s), each with three DDR3 memory channels and two DIMMs per channel. Processor 0 attaches through an x4 ESI link to the Intel C600 PCH (USB ports and the 2.5-inch HDD or SSD) and to the IMM2 (video, serial, and front KVM port). The 10 GbE LOM on each server connects over PCIe 3.0 x8 and routes through the shared Fabric Connector to the chassis midplane; a PCIe 3.0 x16 I/O connector on each server attaches to the shared mid-mezzanine adapter. The IMM2 management interfaces route through a 1 Gb switch to the management connector.

5.3.5 Processor options


Each server within the IBM Flex System x222 Compute Node features the Intel Xeon E5-2400 series processors. The Xeon E5-2400 series is available with four, six, or eight cores per processor and up to 16 threads per socket. The processors include the following features:
Up to 20 MB of shared L3 cache
Hyper-Threading
Turbo Boost Technology 2.0 (depending on processor model)
One QPI link that runs at up to 8 GT/s
One integrated memory controller
Three memory channels that support up to two DIMMs each

The x222 supports the processor options that are listed in Table 5-30. The x222 supports up to four Intel Xeon E5-2400 processors, one or two in each independent server. All four processors that are used in an x222 must be identical. The table also shows which server models have each processor as standard. If no corresponding model for a particular processor is listed, the processor is available only through the configure-to-order (CTO) process.

Important: It is not possible to combine the servers to form a single four-socket server. Each of the two-socket servers is independent of the other, with the exception of shared power, a shared dual-ASIC I/O adapter, and a shared fabric connector to the midplane.

Table 5-30 Supported processors for the x222 (part number, feature codes (a), description, models where used)

Intel Xeon processors
00D1266, A35X / A370: Intel Xeon E5-2403 4C 1.8GHz 10MB 1066MHz 80W (D2x)
00D1265, A35W / A36Z: Intel Xeon E5-2407 4C 2.2GHz 10MB 1066MHz 80W (F2x)
00D1264, A35V / A36Y: Intel Xeon E5-2420 6C 1.9GHz 15MB 1333MHz 95W (G2x)
00D1263, A35U / A36X: Intel Xeon E5-2430 6C 2.2GHz 15MB 1333MHz 95W (H2x, H6x)
00D1262, A35T / A36W: Intel Xeon E5-2440 6C 2.4GHz 15MB 1333MHz 95W (J2x)
00D1261, A35S / A36V: Intel Xeon E5-2450 8C 2.1GHz 20MB 1600MHz 95W (M2x)
00D1260, A35R / A36U: Intel Xeon E5-2470 8C 2.3GHz 20MB 1600MHz 95W (N2x)

Intel Xeon processors - Low power
00D1269, A360 / A373: Intel Xeon E5-2418L 4C 2.0GHz 10MB 1333MHz 50W (A2x)
00D1271, A362 / A375: Intel Xeon E5-2428L 6C 1.8GHz 15MB 1333MHz 60W (CTO only)
00D1268, A35Z / A372: Intel Xeon E5-2430L 6C 2.0GHz 15MB 1333MHz 60W (B2x)
00D1270, A361 / A374: Intel Xeon E5-2448L 8C 1.8GHz 20MB 1333MHz 70W (CTO only)
00D1267, A35Y / A371: Intel Xeon E5-2450L 8C 1.8GHz 20MB 1600MHz 70W (C2x)

a. The first feature code is for processor 1 and the second feature code is for processor 2.


5.3.6 Memory options


IBM DDR3 memory is compatibility tested and tuned for optimal performance and throughput. IBM memory specifications are integrated into the light path diagnostics panel for immediate system performance feedback and optimum system uptime. From a service and support standpoint, IBM memory automatically assumes the IBM system warranty, and IBM provides service and support worldwide.

The servers in the x222 support Low Profile (LP) DDR3 memory RDIMMs and LRDIMMs. UDIMMs are not supported. Each of the two servers in the x222 has 12 DIMM sockets. Each server supports up to six DIMMs when one processor is installed and up to 12 DIMMs when two processors are installed. Each processor has three memory channels, and there are two DIMMs per channel.

The following rules apply when you select the memory configuration:
Mixing 1.5 V and 1.35 V DIMMs in the same server is supported. In such a case, all DIMMs operate at 1.5 V.
The maximum number of ranks that are supported per channel is eight.
The maximum quantity of DIMMs that can be installed in each server in the x222 depends on the number of processors, as shown in the maximum quantity rows in Table 5-31 on page 224 and Table 5-32 on page 224.
All DIMMs in all processor memory channels operate at the same speed, which is determined as the lowest value of the following items:
- The memory speed that is supported by the specific processor.
- The lowest maximum operating speed for the selected memory configuration, which depends on the rated speed, as shown under the maximum operating speed entries in Table 5-31 on page 224.

Table 5-31 on page 224 and Table 5-32 on page 224 show the maximum memory speeds that are achievable based on the installed DIMMs and the number of DIMMs per channel. They also show the maximum memory capacity at any speed that is supported by the DIMM, and the maximum memory capacity at the rated DIMM speed, and they indicate the combinations of DIMM voltage and DIMMs per channel at which the DIMMs operate at their rated speed.

Important: The quantities and capacities are for one server within the x222 (that is, half of the x222). The maximums for the entire x222 (both servers) are twice these numbers.
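The speed rule can be sketched in a few lines: every DIMM runs at the lower of the processor's supported memory speed and the maximum operating speed of the installed configuration. The lookup values below are copied from Table 5-31 and Table 5-32 for a few representative configurations at two DIMMs per channel; the function is illustrative only, not firmware logic.

```python
# Maximum operating speed (MHz) for selected configurations, copied from
# Tables 5-31 and 5-32: (DIMM type, rated MHz, operating voltage) -> MHz
# at 2 DIMMs per channel.
MAX_OPERATING_SPEED_2DPC = {
    ("RDIMM", 1333, "1.35V"): 1333,
    ("RDIMM", 1600, "1.5V"): 1600,
    ("LRDIMM", 1333, "1.5V"): 1066,
}

def effective_memory_speed(cpu_mem_mhz, dimm_type, rated_mhz, voltage):
    """All DIMMs run at the lowest of the CPU limit and the config limit."""
    config_limit = MAX_OPERATING_SPEED_2DPC[(dimm_type, rated_mhz, voltage)]
    return min(cpu_mem_mhz, config_limit)

# An E5-2470 (1600 MHz memory) with 32 GB LRDIMMs at 1.5 V runs at 1066 MHz:
print(effective_memory_speed(1600, "LRDIMM", 1333, "1.5V"))  # 1066
# An E5-2440 (1333 MHz memory) with 1600 MHz RDIMMs is held to 1333 MHz:
print(effective_memory_speed(1333, "RDIMM", 1600, "1.5V"))  # 1333
```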

Chapter 5. Compute nodes

223

Table 5-31 Maximum memory speeds: RDIMMs

Single-rank RDIMM 49Y1406 (4 GB), rated 1333 MHz, 1.35 V:
  Operating at 1.35 V: maximum quantity (a) 12, largest DIMM 4 GB, maximum memory 48 GB, maximum memory at rated speed 48 GB. Maximum operating speed: 1333 MHz at 1 or 2 DIMMs per channel (rated speed).
  Operating at 1.5 V: maximum quantity 12, largest DIMM 4 GB, maximum memory 48 GB, maximum memory at rated speed 48 GB. Maximum operating speed: 1333 MHz at 1 or 2 DIMMs per channel (rated speed).
Single-rank RDIMM 49Y1559 (4 GB), rated 1600 MHz, 1.5 V:
  Operating at 1.5 V: maximum quantity 12, largest DIMM 4 GB, maximum memory 48 GB, maximum memory at rated speed 48 GB. Maximum operating speed: 1600 MHz at 1 or 2 DIMMs per channel (rated speed).
Dual-rank RDIMMs 49Y1407 (4 GB), 49Y1397 (8 GB), and 49Y1563 (16 GB), rated 1333 MHz, 1.35 V:
  Operating at 1.35 V: maximum quantity 12, largest DIMM 16 GB, maximum memory 192 GB, maximum memory at rated speed 192 GB. Maximum operating speed: 1333 MHz at 1 or 2 DIMMs per channel (rated speed).
  Operating at 1.5 V: maximum quantity 12, largest DIMM 16 GB, maximum memory 192 GB, maximum memory at rated speed 192 GB. Maximum operating speed: 1333 MHz at 1 or 2 DIMMs per channel (rated speed).
Dual-rank RDIMMs 90Y3178 (4 GB), 90Y3109 (8 GB), and 00D4968 (16 GB), rated 1600 MHz, 1.5 V:
  Operating at 1.5 V: maximum quantity 12, largest DIMM 16 GB, maximum memory 192 GB, maximum memory at rated speed 192 GB. Maximum operating speed: 1600 MHz at 1 or 2 DIMMs per channel (rated speed).

a. The maximum quantity that is supported is shown for two installed processors. When one processor is installed, the maximum quantity that is supported is half of that shown.

Table 5-32 Maximum memory speeds: LRDIMMs

Quad-rank LRDIMM 90Y3105 (32 GB), rated 1333 MHz, 1.35 V:
  Operating at 1.35 V: maximum quantity (a) 12, largest DIMM 32 GB, maximum memory 384 GB, maximum memory at rated speed not applicable. Maximum operating speed: 1066 MHz at 1 or 2 DIMMs per channel.
  Operating at 1.5 V: maximum quantity 12, largest DIMM 32 GB, maximum memory 384 GB, maximum memory at rated speed 192 GB. Maximum operating speed: 1333 MHz at 1 DIMM per channel (rated speed), 1066 MHz at 2 DIMMs per channel.

a. The maximum quantity that is supported is shown for two installed processors. When one processor is installed, the maximum quantity that is supported is half of that shown.


The following memory protection technologies are supported:
ECC
Chipkill (for x4-based memory DIMMs; look for x4 in the DIMM description)
Memory mirroring
Memory rank sparing

If memory mirroring is used, the DIMMs must be installed in pairs (minimum of one pair per processor), and both DIMMs in a pair must be identical in type and size. If memory rank sparing is used, a minimum of one quad-rank DIMM or two single-rank or dual-rank DIMMs must be installed per populated channel (the DIMMs do not need to be identical). In rank sparing mode, one rank of a DIMM in each populated channel is reserved as spare memory. The size of a rank varies depending on the DIMMs that are installed.

Table 5-33 lists the memory options that are available for the x222. DIMMs can be installed one at a time in each server, but for performance reasons, install them in sets of three (one for each of the three memory channels).

Table 5-33 Memory options for the x222 (part number, feature code, description, models where used)

Registered DIMMs (RDIMMs) - 1333 MHz
49Y1406, 8941: 4GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM (-)
49Y1407, 8942: 4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM (-)
49Y1397, 8923: 8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM (-)
49Y1563, A1QT: 16GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM (-)

Registered DIMMs (RDIMMs) - 1600 MHz
49Y1559, A28Z: 4GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM (-)
90Y3178, A24L: 4GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM (-)
90Y3109, A292: 8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM (All)
00D4968, A2U5: 16GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM (-)

Load-reduced DIMMs (LRDIMMs)
90Y3105, A291: 32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM (-)
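A minimal sketch of the mirroring population rules above: with mirroring enabled, DIMMs go in identical pairs with at least one pair per installed processor. The check is illustrative only; the real validation is performed by UEFI at boot.

```python
# Hedged sketch: validate a memory-mirroring configuration per the rules
# above. Each pair is a tuple of two DIMM descriptions (type, size_gb).
def mirroring_config_valid(dimm_pairs, num_processors):
    if len(dimm_pairs) < num_processors:
        return False  # at least one pair per installed processor
    # Both DIMMs in a pair must be identical in type and size.
    return all(a == b for a, b in dimm_pairs)

pairs = [(("RDIMM", 8), ("RDIMM", 8)), (("RDIMM", 8), ("RDIMM", 8))]
print(mirroring_config_valid(pairs, 2))  # True
print(mirroring_config_valid([(("RDIMM", 8), ("RDIMM", 16))], 1))  # False
```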

5.3.7 Supported internal drives


Each of the two servers in the x222 has one 2.5-inch simple-swap drive bay that is accessible from the front of the unit (as shown in Figure 5-11 on page 216). Each server offers a 6 Gbps SATA controller that is implemented by the Intel C600 series chipset. Each 2.5-inch drive bay supports a SATA HDD or SATA SSD. The 2.5-inch drive bay can be replaced with two 1.8-inch hot-swap bays for SSDs by first installing the Flex System SSD Expansion Kit into the 2.5-inch bay. RAID functionality is not provided by the chipset and, if required, must be implemented by the operating system.


Table 5-34 lists the supported drives in the x222.

Table 5-34 Supported drives (part number, feature code, description, maximum supported per server (a))

1.8-inch drives and expansion kit
00W0366, A3HV: IBM Flex System SSD Expansion Kit (used to convert the 2.5-inch bay into two 1.8-inch bays); maximum 1
00W1120, A3HQ: IBM 100GB SATA 1.8" MLC Enterprise SSD; maximum 2
49Y6119, A3AN: IBM 200GB SATA 1.8" MLC Enterprise SSD; maximum 2

2.5-inch drives
90Y8974, A369: IBM 500GB 7.2K 6Gbps SATA 2.5" G2 SS HDD; maximum 1
90Y8979, A36A: IBM 1TB 7.2K 6Gbps SATA 2.5" G2 SS HDD; maximum 1
90Y8984, A36B: IBM 128GB SATA 2.5" MLC Enterprise Value SSD for Flex System x222; maximum 1
90Y8989, A36C: IBM 256GB SATA 2.5" MLC Enterprise Value SSD for Flex System x222; maximum 1
90Y8994, A36D: IBM 100GB SATA 2.5" MLC Enterprise SSD for Flex System x222; maximum 1

a. The quantities that are listed here are for each of the separate servers within the x222 node.

5.3.8 Expansion Node support


The x222 does not support the IBM Flex System Storage Expansion Node or the IBM Flex System PCIe Expansion Node.

5.3.9 Embedded 10Gb Virtual Fabric adapter


Each server in the x222 Compute Node includes an Embedded 10Gb Virtual Fabric adapter (also known as LAN on Motherboard, or LOM) that is built into the system board. The x222 has one Fabric Connector (which is physically on the lower server), and the Ethernet connections from both Embedded 10Gb VFAs are routed through it. Figure 5-15 on page 227 shows the internal connections between the Embedded 10Gb VFAs and the switches in chassis bays 1 and 2.


Figure 5-15 Embedded 10 Gb VFA connectivity to the switches. Through the shared fabric connector for the embedded 10 GbE, the two ports of the upper server route to the Upgrade 1 ports on the Ethernet switches in bays 1 and 2, and the two ports of the lower server route to the Base ports on both switches.

The following connections are shown in Figure 5-15:
The blue lines show that the two Ethernet ports in the upper server route to the switches in bay 1 and bay 2. These connections require that each switch have Upgrade 1 enabled to activate the second bank of internal ports, ports 15 - 28.
The red lines show that the two Ethernet ports in the lower server also route to the switches in bay 1 and bay 2. These connections go to the base ports of each switch, ports 1 - 14.

Switch Upgrade 1 required: You must have Upgrade 1 enabled in the two switches. Without this feature upgrade, the upper server does not have any Ethernet connectivity.

For more information about supported Ethernet switches, see 4.11.4, Switch to adapter compatibility on page 115.

The Embedded 10Gb VFA is based on the Emulex BladeEngine 3 (BE3), a single-chip, dual-port 10 Gigabit Ethernet (10GbE) controller. The Embedded 10Gb VFA includes the following features:
PCI Express Gen2 x8 host bus interface
Support for multiple virtual NIC (vNIC) functions
TCP/IP Offload Engine (TOE enabled)
SR-IOV capable
RDMA over TCP/IP capable
iSCSI and FCoE upgrade offering through FoD

Table 5-35 lists the ordering information for the IBM Flex System Embedded 10Gb Virtual Fabric Upgrade, which enables the iSCSI and FCoE support on the Embedded 10Gb Virtual Fabric adapter.

Two licenses required: To enable the FCoE/iSCSI upgrade for both servers in the x222 Compute Node, two licenses are required.
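The base/Upgrade 1 split described above can be expressed as a small mapping from chassis node bay and server position to an internal switch port. The assumption that internal port numbers correspond one-to-one to node bay numbers (with the upper server offset by 14) is for illustration only; consult the switch documentation for the authoritative numbering.

```python
# Hedged sketch: map a node bay plus server position to an internal
# Ethernet switch port. The bay-to-port numbering is an assumption.
def internal_switch_port(node_bay: int, server: str) -> int:
    if not 1 <= node_bay <= 14:
        raise ValueError("the Enterprise Chassis has node bays 1-14")
    if server == "lower":
        return node_bay        # base ports 1-14, always enabled
    if server == "upper":
        return node_bay + 14   # Upgrade 1 ports 15-28, require the upgrade
    raise ValueError("server must be 'upper' or 'lower'")

print(internal_switch_port(3, "lower"))  # 3
print(internal_switch_port(3, "upper"))  # 17
```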


Table 5-35 Feature on Demand upgrade for FCoE and iSCSI support
Part number 90Y9310, feature code A2TD: IBM Virtual Fabric Advanced Software Upgrade (LOM). Maximum supported: 1 per server, 2 per x222 Compute Node (a).

a. To enable the FCoE/iSCSI upgrade for both servers in the x222 Compute Node, two licenses are required.

5.3.10 Mid-mezzanine I/O adapters


In addition to the Embedded 10GbE VFAs on each server, the x222 supports one I/O adapter that is shared between the two servers and is routed to the I/O Modules that are installed in bays 3 and 4 of the chassis. The shared I/O adapter is mounted in the lower server, as shown in Figure 5-16. The adapter has two host interfaces, one on either side, for connecting to the servers. Each host interface is PCI Express 3.0 x16.
Figure 5-16 Location of the I/O adapter. The I/O expansion adapter is installed in the lower server, with one I/O adapter connector facing the upper server, another on its underside connecting to the lower server, and the midplane interface at the rear.

Table 5-36 lists the supported adapters. Adapters are shared between the two servers, with half of the ports routing to each server.

Table 5-36 Network adapters
90Y3486, feature code A365: IBM Flex System IB6132D 2-port FDR InfiniBand adapter; 2 ports; maximum 1 (a)
95Y2379, feature code A3HU: IBM Flex System FC5024D 4-port 16Gb FC adapter; 4 ports; maximum 1 (a)

a. One adapter is supported per x222 Compute Node. The adapter is shared between the two servers within the x222.


A compatible I/O module must be installed in the corresponding I/O bays in the chassis, as shown in Table 5-37.

Table 5-37 Adapter to I/O bay correspondence

Upper server, Embedded 10 GbE Virtual Fabric adapter:
  Port 1: module bay 1 (Upgrade 1 (a))
  Port 2: module bay 2 (Upgrade 1 (a))
Lower server, Embedded 10 GbE Virtual Fabric adapter:
  Port 1: module bay 1 (Base)
  Port 2: module bay 2 (Base)
Upper server, I/O expansion adapter:
  With FC5024D 4-port 16Gb FC: Port 1 to module bay 3, Port 2 to module bay 4
  With IB6132D 2-port FDR InfiniBand: Port 1 to module bay 4 (module bay 3 not used)
Lower server, I/O expansion adapter:
  With FC5024D 4-port 16Gb FC: Port 1 to module bay 3, Port 2 to module bay 4
  With IB6132D 2-port FDR InfiniBand: Port 1 to module bay 3 (module bay 4 not used)

a. Requires a scalable switch with 28 or more internal ports enabled. For the EN2092, EN4093, EN4093R, and SI4093 switches, this means Upgrade 1 is required. For the CN4093, Upgrade 1 or Upgrade 2 is required.

For more information about the supported switches, see 4.11.4, Switch to adapter compatibility on page 115. The FC5024D is a four-port adapter where two ports are routed to each server. Port 1 of each server is connected to the switch in bay 3 and Port 2 of each server is connected to the switch in bay 4. To make full use of all four ports, you must install a supported Fibre Channel switch in both switch bays.


Figure 5-17 shows how the FC5024D 4-port 16 Gb FC adapter and the Embedded 10Gb VFAs are connected to the Ethernet and Fibre Channel switches installed in the chassis.
Figure 5-17 Logical layout of the interconnects: Ethernet and Fibre Channel. Ethernet traffic from the upper server routes to the Upgrade 1 ports and from the lower server to the Base ports of the Ethernet switches in bays 1 and 2; Fibre Channel traffic from the FC5024D 4-port 16 Gb FC adapter routes to the Fibre Channel switches in bays 3 and 4.

The FC5024D 4-port 16Gb FC Adapter is supported by the following switches:
IBM Flex System FC5022 16Gb SAN Scalable Switch
IBM Flex System FC5022 24-port 16Gb SAN Scalable Switch
IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch

Fibre Channel switch ports: The Fibre Channel switches in bays 3 and 4 use Ports on Demand to enable both internal and external ports. Ensure that enough ports are licensed to activate all internal ports and all needed external ports. For more information, see 4.11.11, IBM Flex System FC5022 16Gb SAN Scalable Switch on page 148.

For more information about this adapter, see 5.11.15, IBM Flex System FC5024D 4-port 16Gb FC Adapter on page 394.

The IB6132D is a two-port adapter that has one port routed to each server. One port of the adapter connects to the InfiniBand switch in switch bay 3, and the other port connects to the InfiniBand switch in switch bay 4 in the chassis. The IB6132D requires that two InfiniBand switches be installed in the chassis.


Figure 5-18 shows how the IB6132D 2-port FDR InfiniBand adapter and the four ports of the two Embedded 10 GbE VFAs are connected to the Ethernet and InfiniBand switches that are installed in the chassis.
Figure 5-18 Logical layout of the interconnects: Ethernet and InfiniBand. Ethernet traffic from the upper server routes to the Upgrade 1 ports and from the lower server to the Base ports of the Ethernet switches in bays 1 and 2; InfiniBand traffic from the IB6132D 2-port FDR InfiniBand adapter routes to the InfiniBand switches in bays 3 and 4.

The IB6132D 2-port FDR InfiniBand Adapter is supported by the IBM Flex System IB6131 InfiniBand Switch. To use the adapter at FDR speeds, the switch needs the FDR upgrade. For more information, see 4.11.14, IBM Flex System IB6131 InfiniBand Switch on page 160. For more information about this adapter, see 5.11.20, IBM Flex System IB6132D 2-port FDR InfiniBand Adapter on page 403.

5.3.11 Integrated virtualization


Each server in the x222 supports the ESXi hypervisor on a USB memory key through two internal USB ports. The supported USB memory keys are listed in Table 5-38.

Table 5-38 Virtualization options
41Y8298, feature code A2G0: IBM Blank USB Memory Key for VMware ESXi Downloads (a); maximum 2
41Y8307, feature code A383: IBM USB Memory Key for VMware ESXi 5.0 Update 1; maximum 1
41Y8311, feature code A2R3: IBM USB Memory Key for VMware ESXi 5.1; maximum 1


a. The Blank USB Memory Key requires the download of the VMware vSphere (ESXi) Hypervisor with IBM Customization image, which is available at this website: http://ibm.com/systems/x/os/vmware/

There are two types of USB keys: preload keys and blank keys. Blank keys allow you to download an IBM customized version of ESXi and load it onto the key. Each server supports one or two keys, but only in the following combinations:
One preload key (a key that is preloaded at the factory)
One blank key (a key to which you download the customized image)
One preload key and one blank key
Two blank keys

Two preload keys is an unsupported combination. Installing two preload keys prevents ESXi from booting, with an error similar to the one that is described at this website:
http://kb.vmware.com/kb/1035107

Having two keys installed provides a backup boot device. Both devices are listed in the boot menu, which allows you to boot from either device or to set one as a backup in case the first one becomes corrupted.

5.3.12 Systems management


Each server in the x222 Compute Node contains an IBM Integrated Management Module II (IMM2), which interfaces with the Chassis Management Module (CMM) in the chassis. The combination of these features provides advanced service-processor control, monitoring, and alerting functions. If an environmental condition exceeds a threshold or if a system component fails, LEDs on the system board are lit to help you diagnose the problem, the error is recorded in the event log, and you are alerted to the problem.

Remote management
A virtual presence capability comes standard for remote server management. Remote server management is provided through the following industry-standard interfaces:
Intelligent Platform Management Interface (IPMI) Version 2.0
SNMP Version 3
Common Information Model (CIM)
Web browser

The server supports virtual media and remote control features, which provide the following functions:
Remotely viewing video with graphics resolutions up to 1600 x 1200 at 75 Hz with up to 23 bits per pixel, regardless of the system state
Remotely accessing the server by using the keyboard and mouse from a remote client
Mapping the CD or DVD drive, diskette drive, and USB flash drive on a remote client, and mapping ISO and diskette image files as virtual drives that are available for use by the server
Uploading a diskette image to the IMM2 memory and mapping it to the server as a virtual drive
Capturing blue-screen errors
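Because each IMM2 exposes standard IPMI 2.0 over LAN, a generic tool such as ipmitool can query it. The sketch below only builds the command line; the host name and credentials are hypothetical placeholders, and actually running the command requires network access to the IMM2.

```python
# Hedged sketch: construct a standard ipmitool invocation against an IMM2.
# Host, user, and password are placeholders, not real credentials.
def build_ipmi_command(host, user, password, subcommand=("chassis", "status")):
    return ["ipmitool", "-I", "lanplus",   # IPMI 2.0 RMCP+ session over LAN
            "-H", host, "-U", user, "-P", password,
            *subcommand]

cmd = build_ipmi_command("imm2-host.example.com", "USERID", "PASSW0RD")
print(" ".join(cmd))
# To execute it for real:
#   import subprocess
#   subprocess.run(cmd, check=True)
```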


Light path diagnostics


For quick problem determination when you are physically at the server, the x222 offers the following three-step guided path:
1. The Fault LED on the front panel.
2. The light path diagnostics panel (see Figure 5-19).
3. LEDs that are next to key components on the system board.

The light path diagnostics panel is visible when you remove the x222 Compute Node from the chassis. The panel for each server is on the right side, as shown in Figure 5-19.

Figure 5-19 Location of the light path diagnostics panel on each server in the x222 Compute Node

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button on the specific server showing the error. The power button on each server doubles as the light path diagnostics remind button when the server is removed from the chassis.

Chapter 5. Compute nodes

233

The meanings of the LEDs in the light path diagnostics panel are listed in Table 5-39.
Table 5-39 Light path diagnostic panel LEDs

LED       Meaning
LP        The light path diagnostics panel is operational.
S BRD     A system board error is detected.
MIS       A mismatch has occurred between the processors, DIMMs, or HDDs within the configuration (as reported by POST).
NMI       A non-maskable interrupt (NMI) has occurred.
TEMP      An over-temperature condition occurred that was critical enough to shut down the server.
MEM       A memory fault has occurred. The corresponding DIMM error LEDs on the system board are also lit.

5.3.13 Operating system support


Each server in the x222 Compute Node supports the following operating systems:
- Microsoft Windows Server 2008 R2 with Service Pack 1
- Microsoft Windows Server 2008, Datacenter x64 Edition with Service Pack 2
- Microsoft Windows Server 2008, Enterprise x64 Edition with Service Pack 2
- Microsoft Windows Server 2008, Standard x64 Edition with Service Pack 2
- Microsoft Windows Server 2008, Web x64 Edition with Service Pack 2
- Microsoft Windows Server 2012
- Novell SUSE Linux Enterprise Server 11 for AMD64/EM64T, Service Pack 2
- Novell SUSE Linux Enterprise Server 11 with Xen for AMD64/EM64T, Service Pack 2
- Red Hat Enterprise Linux 5 Server x64 Edition, U9
- Red Hat Enterprise Linux 5 Server with Xen x64 Edition, U9
- Red Hat Enterprise Linux 6 Server x64 Edition, U4
- VMware ESX 4.1, U3
- VMware ESXi 4.1, U3
- VMware vSphere 5, U2
- VMware vSphere 5.1, U1

For the latest list of supported operating systems, see this website:
http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml

5.4 IBM Flex System x240 Compute Node


The IBM Flex System x240 Compute Node, available as machine type 8737 with a three-year warranty, is a half-wide, two-socket server. It runs Intel Xeon E5-2600 family processors (formerly code named Sandy Bridge-EP). It is ideal for infrastructure, virtualization, and enterprise business applications, and is compatible with the IBM Flex System Enterprise Chassis.

This section includes the following topics:
- 5.4.1, Introduction on page 235
- 5.4.2, Features and specifications on page 237
- 5.4.3, Models on page 239


- 5.4.4, Chassis support on page 239
- 5.4.5, System architecture on page 240
- 5.4.6, Processor on page 242
- 5.4.7, Memory on page 245
- 5.4.8, Standard onboard features on page 258
- 5.4.9, Local storage on page 259
- 5.4.10, Integrated virtualization on page 266
- 5.4.11, Embedded 10 Gb Virtual Fabric adapter on page 268
- 5.4.12, I/O expansion on page 269
- 5.4.13, Systems management on page 271
- 5.4.14, Operating system support on page 274

5.4.1 Introduction
The x240 supports the following equipment:
- Up to two Intel Xeon E5-2600 series multi-core processors
- Twenty-four memory DIMMs
- Two hot-swap drives
- Two PCI Express I/O adapters
- Two optional internal USB connectors

Figure 5-20 shows the x240.

Figure 5-20 The x240


Figure 5-21 shows the location of the controls, LEDs, and connectors on the front of the x240.
Figure 5-21 The front of the x240 showing the location of the controls, LEDs, and connectors (callouts: hard disk drive activity LED, hard disk drive status LED, USB port, NMI control, Console Breakout Cable port, power button/LED, LED panel)

Figure 5-22 shows the internal layout and major components of the x240.

Figure 5-22 Exploded view of the x240 showing the major components (callouts: cover, air baffles, heat sink, microprocessor heat sink filler, I/O expansion adapter, microprocessor, DIMM, hot-swap storage backplane, hot-swap storage cage, hot-swap storage drive, storage drive filler)


5.4.2 Features and specifications


Table 5-40 lists the features of the x240.
Table 5-40 Features of the x240

Component                    Specification
Machine types                8737 (x-config); 8737-15X and 7863-10X (e-config)
Form factor                  Half-wide compute node
Chassis support              IBM Flex System Enterprise Chassis
Processor                    Up to two Intel Xeon Processor E5-2600 product family processors. These processors can be eight-core (up to 2.9 GHz), six-core (up to 2.9 GHz), quad-core (up to 3.3 GHz), or dual-core (up to 3.0 GHz). Two QPI links, up to 8.0 GT/s each. Up to 1600 MHz memory speed. Up to 20 MB L3 cache.
Chipset                      Intel C600 series
Memory                       Up to 24 DIMM sockets (12 DIMMs per processor) using Low Profile (LP) DDR3 DIMMs. RDIMMs, UDIMMs, and LRDIMMs supported. 1.5 V and low-voltage 1.35 V DIMMs supported. Support for up to 1600 MHz memory speed, depending on the processor. Four memory channels per processor, with three DIMMs per channel.
Memory maximums              With LRDIMMs: up to 768 GB with 24x 32 GB LRDIMMs and two processors. With RDIMMs: up to 512 GB with 16x 32 GB RDIMMs and two processors. With UDIMMs: up to 64 GB with 16x 4 GB UDIMMs and two processors.
Memory protection            ECC, optional memory mirroring, and memory rank sparing
Disk drive bays              Two 2.5-inch hot-swap SAS/SATA drive bays that support SAS, SATA, and SSD drives. Optional support for up to eight 1.8-inch SSDs.
Maximum internal storage     With two 2.5-inch hot-swap drives: up to 2 TB with 1 TB 2.5-inch NL SAS HDDs; up to 2.4 TB with 1.2 TB 2.5-inch SAS HDDs; up to 2 TB with 1 TB 2.5-inch SATA HDDs; up to 3.2 TB with 1.6 TB 2.5-inch SATA SSDs. An intermix of SAS and SATA HDDs and SSDs is supported. Alternatively, with 1.8-inch SSDs and the ServeRAID M5115 RAID adapter, up to 1.6 TB with eight 200 GB 1.8-inch SSDs. Additional storage is available with an attached Flex System Storage Expansion Node.
RAID support                 RAID 0, 1, 1E, and 10 with the integrated LSI SAS2004 controller. Optional ServeRAID M5115 RAID controller with RAID 0, 1, 10, 5, or 50 support and 1 GB cache; supports up to eight 1.8-inch SSDs with expansion kits. Optional flash backup for cache, RAID 6/60, and SSD performance enabler.
Network interfaces           x2x models: two 10 Gb Ethernet ports with Embedded 10 Gb Virtual Fabric Ethernet LAN on motherboard (LOM) controller; Emulex BladeEngine 3 based. x1x models: none standard; optional 1 Gb or 10 Gb Ethernet adapters.
PCI Expansion slots          Two I/O connectors for adapters. PCI Express 3.0 x16 interface.
Ports                        USB ports: one external; two internal for embedded hypervisor with optional USB Enablement Kit. Console breakout cable port that provides local keyboard, video, mouse (KVM) and serial ports (cable standard with chassis; additional cables are optional).
Systems management           UEFI, IBM Integrated Management Module II (IMM2) with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, remote presence. Support for IBM Flex System Manager, IBM Systems Director, and IBM ServerGuide.
Security features            Power-on password, administrator's password, Trusted Platform Module 1.2
Video                        Matrox G200eR2 video core with 16 MB video memory that is integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16M colors.
Limited warranty             3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD
Operating systems supported  Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, VMware vSphere. For more information, see 5.4.14, Operating system support on page 274.
Service and support          Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software
Dimensions                   Width 215 mm (8.5 in.), height 51 mm (2.0 in.), depth 493 mm (19.4 in.)
Weight                       Maximum configuration: 6.98 kg (15.4 lb)

Figure 5-23 shows the components on the system board of the x240.
Figure 5-23 Layout of the x240 system board (callouts: hot-swap drive bay backplane, processor 1 and 12 memory DIMMs, processor 2 and 12 memory DIMMs, I/O connector 1, I/O connector 2, Fabric Connector, Expansion Connector, light path diagnostics)


5.4.3 Models
The current x240 models are shown in Table 5-41. All models include 8 GB of memory (2x 4 GB DIMMs) running at either 1600 MHz or 1333 MHz (depending on the model).
Table 5-41 Models of the x240 type 8737

Model(a)    Intel processor (model, cores, core speed, L3 cache, memory speed, TDP power; two maximum)   Standard memory(b)   Available drive bays   Available I/O slots(c)   10 GbE embedded(d)
8737-A1x    1x Xeon E5-2630L 6C 2.0 GHz 15 MB 1333 MHz 60 W     2x 4 GB   Two (open)   2   No
8737-D2x    1x Xeon E5-2609 4C 2.40 GHz 10 MB 1066 MHz 80 W     2x 4 GB   Two (open)   1   Yes
8737-F2x    1x Xeon E5-2620 6C 2.0 GHz 15 MB 1333 MHz 95 W      2x 4 GB   Two (open)   1   Yes
8737-G2x    1x Xeon E5-2630 6C 2.3 GHz 15 MB 1333 MHz 95 W      2x 4 GB   Two (open)   1   Yes
8737-H1x    1x Xeon E5-2640 6C 2.5 GHz 15 MB 1333 MHz 95 W      2x 4 GB   Two (open)   2   No
8737-H2x    1x Xeon E5-2640 6C 2.5 GHz 15 MB 1333 MHz 95 W      2x 4 GB   Two (open)   1   Yes
8737-J1x    1x Xeon E5-2670 8C 2.6 GHz 20 MB 1600 MHz 115 W     2x 4 GB   Two (open)   2   No
8737-L2x    1x Xeon E5-2660 8C 2.2 GHz 20 MB 1600 MHz 95 W      2x 4 GB   Two (open)   1   Yes
8737-M1x    1x Xeon E5-2680 8C 2.7 GHz 20 MB 1600 MHz 130 W     2x 4 GB   Two (open)   2   No
8737-M2x    1x Xeon E5-2680 8C 2.7 GHz 20 MB 1600 MHz 130 W     2x 4 GB   Two (open)   1   Yes
8737-N2x    1x Xeon E5-2643 4C 3.3 GHz 10 MB 1600 MHz 130 W     2x 4 GB   Two (open)   1   Yes
8737-Q2x    1x Xeon E5-2667 6C 2.9 GHz 15 MB 1600 MHz 130 W     2x 4 GB   Two (open)   1   Yes
8737-R2x    1x Xeon E5-2690 8C 2.9 GHz 20 MB 1600 MHz 135 W     2x 4 GB   Two (open)   1   Yes

a. The model numbers that are provided are worldwide generally available variant (GAV) model numbers that are not orderable as listed. They must be modified by country. The US GAV model numbers use the following nomenclature: xxU. For example, the US orderable part number for 8737-A2x is 8737-A2U. See the product-specific official IBM announcement letter for other country-specific GAV model numbers.
b. The maximum system memory capacity is 768 GB when you use 24x 32 GB DIMMs.
c. Some models include an Embedded 10 Gb Virtual Fabric Ethernet LOM controller as standard. This embedded controller precludes the use of an I/O adapter in I/O connector 1, as shown in Figure 5-23 on page 238. For more information, see 5.4.11, Embedded 10 Gb Virtual Fabric adapter on page 268.
d. Model numbers in the form x2x (for example, 8737-L2x) include an Embedded 10 Gb Virtual Fabric Ethernet LOM controller as standard. Model numbers in the form x1x (for example, 8737-A1x) do not include this embedded controller.

5.4.4 Chassis support


The x240 type 8737 is supported in the IBM Flex System Enterprise Chassis as listed in Table 5-42.
Table 5-42 x240 chassis support

Server   BladeCenter chassis (all)   IBM Flex System Enterprise Chassis
x240     No                          Yes


Up to 14 x240 Compute Nodes can be installed in the chassis in 10U of rack space. The actual number of x240 systems that can be powered on in a chassis depends on the following factors:
- The TDP power rating for the processors that are installed in the x240
- The number of power supplies installed in the chassis
- The capacity of the power supplies installed in the chassis (2100 W or 2500 W)
- The power redundancy policy used in the chassis (N+1 or N+N)

Table 4-11 on page 93 provides guidelines about the number of x240 systems that can be powered on in the IBM Flex System Enterprise Chassis, based on the type and number of power supplies installed.

The x240 is a half-wide compute node. The chassis shelf must be installed in the IBM Flex System Enterprise Chassis. Figure 5-24 shows the chassis shelf in the chassis.

Figure 5-24 The IBM Flex System Enterprise Chassis showing the chassis shelf
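The power-on factors listed above reduce to a simple power budget. The following sketch is illustrative only: the per-node wattage and the redundancy arithmetic are assumptions, and the authoritative limits (which also account for fans, I/O modules, and management hardware) are in Table 4-11 on page 93.

```python
def max_nodes_powered_on(n_supplies, supply_watts, policy, node_watts):
    """Rough estimate of how many half-wide nodes a chassis can power.

    Assumption for this sketch: under N+1 redundancy one supply is
    held in reserve; under N+N, half of them are.
    """
    if policy == "N+1":
        usable = (n_supplies - 1) * supply_watts
    elif policy == "N+N":
        usable = (n_supplies // 2) * supply_watts
    else:
        raise ValueError("policy must be 'N+1' or 'N+N'")
    # The Enterprise Chassis holds at most 14 half-wide nodes.
    return min(14, usable // node_watts)

# Six 2500 W supplies, N+N redundancy, nodes budgeted at 500 W each:
print(max_nodes_powered_on(6, 2500, "N+N", 500))  # → 14
```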

The shelf is required for half-wide compute nodes. To install full-wide or larger compute nodes, the shelves must be removed from the chassis: slide the two latches on the shelf towards the center, and then slide the shelf out of the chassis.

5.4.5 System architecture


The IBM Flex System x240 Compute Node features the Intel Xeon E5-2600 series processors. The Xeon E5-2600 series processor has models with two, four, six, and eight cores per processor with up to 16 threads per socket. The processors have the following features:
- Up to 20 MB of shared L3 cache
- Hyper-Threading
- Turbo Boost Technology 2.0 (depending on processor model)
- Two QuickPath Interconnect (QPI) links that run at up to 8 GTps
- One integrated memory controller
- Four memory channels that support up to three DIMMs each


The Xeon E5-2600 series processor implements the second generation of Intel Core microarchitecture (Sandy Bridge) by using a 32 nm manufacturing process. It requires a new socket type, the LGA-2011, which has 2011 pins that touch contact points on the underside of the processor. The architecture also includes the Intel C600 (Patsburg B) Platform Controller Hub (PCH). Figure 5-25 shows the system architecture of the x240 system.

Figure 5-25 IBM Flex System x240 Compute Node system board block diagram (showing the two Xeon E5-2600 processors, each with four DDR3 memory channels of three DIMMs, connected by 8 GT/s QPI links; processor 1 attaches the Intel C600 PCH over a x4 ESI link, the 10GbE LOM over PCIe x8 Gen2, and I/O connector 1 over PCIe x16 Gen3; processor 2 serves I/O connector 2 and the sidecar connector over PCIe Gen3; the PCH attaches the LSI2004 SAS controller (PCIe x4 Gen2), internal and front USB, and the IMM2, which provides the front KVM port, video and serial, and the management link to the midplane)

The IBM Flex System x240 Compute Node has the following system architecture features as standard:
- Two 2011-pin type R (LGA-2011) processor sockets
- An Intel C600 PCH
- Four memory channels per socket
- Up to three DIMMs per memory channel
- Twenty-four DDR3 DIMM sockets
- Support for UDIMMs, RDIMMs, and new LRDIMMs
- One integrated 10 Gb Virtual Fabric Ethernet controller (10GbE LOM in Figure 5-25)
- One LSI 2004 SAS controller
- Integrated hardware RAID 0 and 1
- One Integrated Management Module II
- Two PCIe x16 Gen3 I/O adapter connectors
- Two Trusted Platform Module (TPM) 1.2 controllers
- One internal USB connector


The new architecture allows the sharing of data on-chip through a high-speed ring interconnect between all processor cores, the last level cache (LLC), and the system agent. The system agent houses the memory controller and a PCI Express root complex that provides 40 PCIe 3.0 lanes. This ring interconnect and LLC architecture are shown in Figure 5-26.

Figure 5-26 Intel Xeon E5-2600 basic architecture (processor cores, each with L1/L2 caches and an LLC slice, share a ring interconnect; the system agent contains the integrated memory controller with four channels of three DIMMs per channel, a 40-lane PCIe 3.0 root complex, the QPI links, and the connection to the chipset)

The two Xeon E5-2600 series processors in the x240 are connected through two QuickPath Interconnect (QPI) links. Each QPI link is capable of up to eight giga-transfers per second (GTps) depending on the processor model installed. Table 5-43 shows the QPI bandwidth of the Intel Xeon E5-2600 series processors.
Table 5-43 QuickPath Interconnect bandwidth

Intel Xeon E5-2600 series processor   QuickPath Interconnect speed (GTps)   QuickPath Interconnect bandwidth (GBps) in each direction
Advanced                              8.0 GTps                              32.0 GBps
Standard                              7.25 GTps                             29.0 GBps
Basic                                 6.4 GTps                              25.6 GBps
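The bandwidth column in Table 5-43 follows from the link speed by a constant factor: bandwidth (GBps) = speed (GTps) x 4, which corresponds to 2 bytes of data per transfer counted across the link's two directions. A minimal sketch under that assumption (the function name is illustrative):

```python
def qpi_bandwidth_gbps(gtps):
    """Reproduce the QPI bandwidth figures of Table 5-43:
    GT/s x 2 bytes per transfer x 2 link directions."""
    return gtps * 2 * 2

assert qpi_bandwidth_gbps(8.0) == 32.0   # Advanced
assert qpi_bandwidth_gbps(7.25) == 29.0  # Standard
assert qpi_bandwidth_gbps(6.4) == 25.6   # Basic
```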

5.4.6 Processor
The Intel Xeon E5-2600 series is available with up to eight cores and 20 MB of last-level cache. It features an enhanced instruction set called Intel Advanced Vector Extensions (AVX). This set doubles the operand size for vector instructions (such as floating-point) to 256 bits and boosts selected applications by up to a factor of two.

The new architecture also introduces Intel Turbo Boost Technology 2.0 and improved power management capabilities. Turbo Boost automatically turns off unused processor cores and increases the clock speed of the cores in use if thermal requirements are still met. Turbo Boost Technology 2.0 makes use of the new integrated design. It also implements more granular overclocking in 100 MHz steps, instead of the 133 MHz steps on former Nehalem-based and Westmere-based microprocessors.


As listed in Table 5-41 on page 239, standard models come with one processor that is installed in processor socket 1. In a two-processor system, both processors communicate with each other through two QPI links. I/O is served through 40 PCIe Gen3 lanes per processor and through a x4 Direct Media Interface (DMI) link to the Intel C600 PCH. Processor 1 has direct access to 12 DIMM slots. Adding the second processor enables access to the remaining 12 DIMM slots. The second processor also enables access to the sidecar connector, which enables the use of mezzanine expansion units. Table 5-44 shows a comparison between the features of the Intel Xeon 5600 series processor and the new Intel Xeon E5-2600 series processor that is installed in the x240.
Table 5-44 Comparison of Xeon 5600 series and Xeon E5-2600 series processor features

Specification                Xeon 5600                                            Xeon E5-2600
Cores                        Up to six cores / 12 threads                         Up to eight cores / 16 threads
Physical addressing          40-bit (Uncore(a) limited)                           46-bit (Core and Uncore(a))
Cache size                   12 MB                                                Up to 20 MB
Memory channels per socket   3                                                    4
Max memory speed             1333 MHz                                             1600 MHz
Virtualization technology    Real Mode support and transition latency reduction   Adds Large VT pages
New instructions             AES-NI                                               Adds AVX
QPI frequency                6.4 GTps                                             8.0 GTps
Inter-socket QPI links       1                                                    2
PCI Express                  36 lanes PCIe on chipset                             40 lanes/socket integrated PCIe

a. Uncore is the term that Intel uses to describe the parts of a processor that are not the core.

Table 5-45 lists the features for the different Intel Xeon E5-2600 series processor types.
Table 5-45 Intel Xeon E5-2600 series processor features

Processor model   Processor frequency   Turbo   HT    L3 cache   Cores   Power TDP   QPI link speed(a)   Max DDR3 speed
Advanced
Xeon E5-2650      2.0 GHz               Yes     Yes   20 MB      8       95 W        8 GT/s              1600 MHz
Xeon E5-2658      2.1 GHz               Yes     Yes   20 MB      8       95 W        8 GT/s              1600 MHz
Xeon E5-2660      2.2 GHz               Yes     Yes   20 MB      8       95 W        8 GT/s              1600 MHz
Xeon E5-2665      2.4 GHz               Yes     Yes   20 MB      8       115 W       8 GT/s              1600 MHz
Xeon E5-2670      2.6 GHz               Yes     Yes   20 MB      8       115 W       8 GT/s              1600 MHz
Xeon E5-2680      2.7 GHz               Yes     Yes   20 MB      8       130 W       8 GT/s              1600 MHz
Xeon E5-2690      2.9 GHz               Yes     Yes   20 MB      8       135 W       8 GT/s              1600 MHz
Standard
Xeon E5-2620      2.0 GHz               Yes     Yes   15 MB      6       95 W        7.2 GT/s            1333 MHz
Xeon E5-2630      2.3 GHz               Yes     Yes   15 MB      6       95 W        7.2 GT/s            1333 MHz
Xeon E5-2640      2.5 GHz               Yes     Yes   15 MB      6       95 W        7.2 GT/s            1333 MHz
Basic
Xeon E5-2603      1.8 GHz               No      No    10 MB      4       80 W        6.4 GT/s            1066 MHz
Xeon E5-2609      2.4 GHz               No      No    10 MB      4       80 W        6.4 GT/s            1066 MHz
Low power
Xeon E5-2650L     1.8 GHz               Yes     Yes   20 MB      8       70 W        8 GT/s              1600 MHz
Xeon E5-2648L     1.8 GHz               Yes     Yes   20 MB      8       70 W        8 GT/s              1600 MHz
Xeon E5-2630L     2.0 GHz               Yes     Yes   15 MB      6       60 W        7.2 GT/s            1333 MHz
Special Purpose
Xeon E5-2667      2.9 GHz               Yes     Yes   15 MB      6       130 W       8 GT/s              1600 MHz
Xeon E5-2643      3.3 GHz               No      No    10 MB      4       130 W       6.4 GT/s            1600 MHz
Xeon E5-2637      3.0 GHz               No      No    5 MB       2       80 W        8 GT/s              1600 MHz

a. GTps = giga-transfers per second.

Table 5-46 lists the processor options for the x240.


Table 5-46 Processors for the x240 type 8737

Part number   Feature   Description                                                          Where used
81Y5180       A1CQ      Intel Xeon Processor E5-2603 4C 1.8 GHz 10 MB Cache 1066 MHz 80 W
81Y5182       A1CS      Intel Xeon Processor E5-2609 4C 2.40 GHz 10 MB Cache 1066 MHz 80 W   D2x
81Y5183       A1CT      Intel Xeon Processor E5-2620 6C 2.0 GHz 15 MB Cache 1333 MHz 95 W    F2x
81Y5184       A1CU      Intel Xeon Processor E5-2630 6C 2.3 GHz 15 MB Cache 1333 MHz 95 W    G2x
81Y5206       A1ER      Intel Xeon Processor E5-2630L 6C 2.0 GHz 15 MB Cache 1333 MHz 60 W   A1x
49Y8125       A2EP      Intel Xeon Processor E5-2637 2C 3.0 GHz 5 MB Cache 1600 MHz 80 W
81Y5185       A1CV      Intel Xeon Processor E5-2640 6C 2.5 GHz 15 MB Cache 1333 MHz 95 W    H1x, H2x
81Y5190       A1CY      Intel Xeon Processor E5-2643 4C 3.3 GHz 10 MB Cache 1600 MHz 130 W   N2x
95Y4670       A31A      Intel Xeon Processor E5-2648L 8C 1.8 GHz 20 MB Cache 1600 MHz 70 W
81Y5186       A1CW      Intel Xeon Processor E5-2650 8C 2.0 GHz 20 MB Cache 1600 MHz 95 W
81Y5179       A1ES      Intel Xeon Processor E5-2650L 8C 1.8 GHz 20 MB Cache 1600 MHz 70 W
95Y4675       A319      Intel Xeon Processor E5-2658 8C 2.1 GHz 20 MB Cache 1600 MHz 95 W
81Y5187       A1CX      Intel Xeon Processor E5-2660 8C 2.2 GHz 20 MB Cache 1600 MHz 95 W    L2x
49Y8144       A2ET      Intel Xeon Processor E5-2665 8C 2.4 GHz 20 MB Cache 1600 MHz 115 W
81Y5189       A1CZ      Intel Xeon Processor E5-2667 6C 2.9 GHz 15 MB Cache 1600 MHz 130 W   Q2x
81Y9418       A1SX      Intel Xeon Processor E5-2670 8C 2.6 GHz 20 MB Cache 1600 MHz 115 W   J1x
81Y5188       A1D9      Intel Xeon Processor E5-2680 8C 2.7 GHz 20 MB Cache 1600 MHz 130 W   M1x, M2x
49Y8116       A2ER      Intel Xeon Processor E5-2690 8C 2.9 GHz 20 MB Cache 1600 MHz 135 W   R2x

For more information about the Intel Xeon E5-2600 series processors, see this website: http://intel.com/content/www/us/en/processors/xeon/xeon-processor-5000-sequence.html

5.4.7 Memory
The x240 has 12 DIMM sockets per processor (24 DIMMs in total) running at 800, 1066, 1333, or 1600 MHz. It supports 2 GB, 4 GB, 8 GB, 16 GB, and 32 GB memory modules, as shown in Table 5-49 on page 250. With the Intel Xeon E5-2600 series processors, the x240 can support up to 768 GB of memory in total when you use 32 GB LRDIMMs with both processors installed.

The x240 uses double data rate type 3 (DDR3) LP DIMMs. You can use registered DIMMs (RDIMMs), unbuffered DIMMs (UDIMMs), or load-reduced DIMMs (LRDIMMs). However, mixing the different memory DIMM types is not supported.

The E5-2600 series processor has four memory channels, and each memory channel can have up to three DIMMs. Figure 5-27 shows the E5-2600 series processor and the four memory channels.

Figure 5-27 The Intel Xeon E5-2600 series processor and the four memory channels (channels 0 through 3, each with three DIMM slots)

Memory subsystem overview


Table 5-47 summarizes some of the characteristics of the x240 memory subsystem. All of these characteristics are described in the following sections.
Table 5-47 Memory subsystem characteristics of the x240

Memory subsystem characteristic                    IBM Flex System x240 Compute Node
Number of memory channels per processor            4
Supported DIMM voltages                            Low voltage (1.35 V) and standard voltage (1.5 V)
Maximum number of DIMMs per channel (DPC)          3 (using 1.5 V DIMMs); 2 (using 1.35 V DIMMs)
DIMM slot maximum                                  One processor: 12; two processors: 24
Mixing of memory types (RDIMMs, UDIMMs, LRDIMMs)   Not supported in any configuration
Mixing of memory speeds                            Supported; lowest common speed for all installed DIMMs
Mixing of DIMM voltage ratings                     Supported; all 1.35 V DIMMs run at 1.5 V

Registered DIMM (RDIMM) modules
Supported memory sizes                             32, 16, 8, 4, and 2 GB
Supported memory speeds                            1600, 1333, 1066, and 800 MHz
Maximum system capacity                            512 GB (16 x 32 GB)
Maximum memory speed                               1.35 V @ 2 DPC: 1333 MHz; 1.5 V @ 2 DPC: 1600 MHz; 1.5 V @ 3 DPC: 1066 MHz
Maximum ranks per channel (any memory voltage)     8
Maximum number of DIMMs                            One processor: 12; two processors: 24

Unbuffered DIMM (UDIMM) modules
Supported memory sizes                             4 GB
Supported memory speeds                            1333 MHz
Maximum system capacity                            64 GB (16 x 4 GB)
Maximum memory speed                               1.35 V @ 2 DPC: 1333 MHz; 1.5 V @ 2 DPC: 1333 MHz; 1.35 V or 1.5 V @ 3 DPC: not supported
Maximum ranks per channel (any memory voltage)     8
Maximum number of DIMMs                            One processor: 8; two processors: 16

Load-reduced (LRDIMM) modules
Supported sizes                                    32 and 16 GB
Maximum capacity                                   768 GB (24 x 32 GB)
Supported speeds                                   1333 and 1066 MHz
Maximum memory speed                               1.35 V @ 2 DPC: 1066 MHz; 1.5 V @ 2 DPC: 1333 MHz; 1.35 V or 1.5 V @ 3 DPC: 1066 MHz
Maximum ranks per channel (any memory voltage)     8(a)
Maximum number of DIMMs                            One processor: 12; two processors: 24

a. Because of reduced electrical loading, a 4R (four-rank) LRDIMM has the equivalent load of a two-rank RDIMM. This reduced load allows the x240 to support three 4R LRDIMMs per channel (instead of two as with UDIMMs and RDIMMs). For more information, see page 247.
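The capacity maximums in Table 5-47 are simply the product of the channel topology: sockets x channels per socket x DIMMs per channel x DIMM size. A quick check (the helper function is illustrative):

```python
def max_memory_gb(processors, channels_per_cpu, dimms_per_channel, dimm_gb):
    """Total capacity = sockets x channels x DIMMs-per-channel x DIMM size."""
    return processors * channels_per_cpu * dimms_per_channel * dimm_gb

# 768 GB maximum: two E5-2600 processors, four channels each,
# three 32 GB LRDIMMs per channel (24 DIMMs in total).
assert max_memory_gb(2, 4, 3, 32) == 768
# One processor exposes only half the slots: 12 DIMMs, 384 GB.
assert max_memory_gb(1, 4, 3, 32) == 384
```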

Figure 5-28 shows the location of the 24 memory DIMM sockets on the x240 system board and other components.
Figure 5-28 DIMM layout on the x240 system board (callouts: DIMMs 1-6, DIMMs 7-12, microprocessor 1, DIMMs 13-18, DIMMs 19-24, microprocessor 2, I/O expansion 1, I/O expansion 2, LOM connector (some models only))

Table 5-48 lists which DIMM connectors belong to which processor memory channel.
Table 5-48 The DIMM connectors for each processor memory channel

Processor     Memory channel   DIMM connector
Processor 1   Channel 0        4, 5, and 6
              Channel 1        1, 2, and 3
              Channel 2        7, 8, and 9
              Channel 3        10, 11, and 12
Processor 2   Channel 0        16, 17, and 18
              Channel 1        13, 14, and 15
              Channel 2        19, 20, and 21
              Channel 3        22, 23, and 24


Memory types
The x240 supports the following types of DIMM memory:

- RDIMM modules: Registered DIMMs are the mainstream module solution for servers or any application that demands heavy data throughput, high density, and high reliability. RDIMMs use registers to isolate the memory controller address, command, and clock signals from the dynamic random-access memory (DRAM). This results in a lighter electrical load, so more DIMMs can be interconnected and larger memory capacity is possible. However, the register often imposes a clock or more of delay, meaning that registered DIMMs often have slightly longer access times than their unbuffered counterparts. In general, RDIMMs have the best balance of capacity, reliability, and workload performance, with a maximum performance of 1600 MHz (at 2 DPC). For more information about supported x240 RDIMM memory options, see Table 5-49 on page 250.

- UDIMM modules: In contrast to RDIMMs, which use registers to isolate the memory controller from the DRAMs, UDIMMs attach directly to the memory controller. Therefore, they do not introduce a delay, which creates better performance. The disadvantage is limited drive capability: the number of DIMMs that can be connected together on the same memory channel remains small because of electrical loading. This leads to fewer DIMMs per channel (DPC) and overall lower total system memory capacity than RDIMM systems. UDIMMs have the lowest latency and lowest power usage, but also the lowest overall capacity. For more information about supported x240 UDIMM memory options, see Table 5-49 on page 250.

- LRDIMM modules: Load-reduced DIMMs are similar to RDIMMs in that they use memory buffers to isolate the memory controller address, command, and clock signals from the individual DRAMs on the DIMM. Load-reduced DIMMs take the buffering a step further by buffering the memory controller data lines from the DRAMs as well.


Figure 5-29 shows a comparison of RDIMM and LRDIMM memory types.


Figure 5-29 Comparing RDIMM buffering and LRDIMM buffering (in an RDIMM, a register buffers only the command, address, and clock signals between the memory controller and the DRAMs, while the data lines connect directly; in an LRDIMM, a memory buffer also buffers the data lines)

In essence, all signaling between the memory controller and the LRDIMM is intercepted by the memory buffers on the LRDIMM module. This allows more ranks to be added to each LRDIMM module without sacrificing signal integrity. It also means that the memory controller sees fewer ranks than are physically present (for example, a 4R LRDIMM appears to the memory controller as a 2R RDIMM).

The added buffering that the LRDIMMs provide greatly reduces the electrical load on the system. This reduction allows the system to operate at a higher overall memory speed for a certain capacity. Conversely, it can operate at a higher overall memory capacity at a certain memory speed.

LRDIMMs allow maximum system memory capacity and the highest performance for system memory capacities above 384 GB. They are suited to system workloads that require maximum memory, such as virtualization and databases. For more information about supported x240 LRDIMM memory options, see Table 5-49 on page 250.

The memory type that is installed in the x240 combines with other factors to determine the ultimate performance of the x240 memory subsystem. For a list of rules to follow when populating the memory subsystem, see Memory installation considerations on page 257.


Memory options
Table 5-49 lists the memory DIMM options for the x240.
Table 5-49 Memory DIMMs for the x240

Part number   FC     Description                                                                  Where used
Registered DIMM (RDIMM) modules - 1066 MHz and 1333 MHz
49Y1405       8940   2 GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1406       8941   4 GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM          H1x, H2x, G2x, F2x, D2x, A1x
49Y1407       8942   4 GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1397       8923   8 GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1563       A1QT   16 GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1400       8939   16 GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066 MHz LP RDIMM
Registered DIMM (RDIMM) modules - 1600 MHz
49Y1559       A28Z   4 GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM           R2x, Q2x, N2x, M2x, M1x, L2x, J1x
90Y3178       A24L   4 GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
90Y3109       A292   8 GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
00D4968       A2U5   16 GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
Unbuffered DIMM (UDIMM) modules
49Y1404       8648   4 GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP UDIMM
Load-reduced (LRDIMM) modules
49Y1567       A290   16 GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM
90Y3105       A291   32 GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM

Memory channel performance considerations


The memory that is installed in the x240 can be clocked at 1600 MHz, 1333 MHz, 1066 MHz, or 800 MHz. The speed is determined by the type of memory, the memory population, the processor model, and several other factors. Use the following items to determine the ultimate performance of the x240 memory subsystem:

- Model of Intel Xeon E5-2600 series processor installed. As described in 5.4.5, System architecture on page 240, each Intel Xeon E5-2600 series processor includes one integrated memory controller. The model of processor that is installed determines the maximum speed at which the integrated memory controller clocks the installed memory. Table 5-45 on page 243 lists the maximum DDR3 speed that each processor model supports. This maximum speed might not be the ultimate speed of the memory subsystem.


- Speed of DDR3 DIMMs installed. For maximum performance, the speed rating of each DIMM module must match the maximum memory clock speed of the Xeon E5-2600 processor. Remember the following rules when you match processors and DIMM modules: the processor never over-clocks the memory in any configuration, and the processor clocks all the installed memory at either the rated speed of the processor or the speed of the slowest DIMM installed in the system. For example, an Intel Xeon E5-2640 processor clocks all installed memory at a maximum speed of 1333 MHz. If any 1600 MHz DIMM modules are installed, they are clocked at 1333 MHz. However, if any 1066 MHz or 800 MHz DIMM modules are installed, all installed DIMM modules are clocked at the slowest speed (800 MHz).

- Number of DIMMs per channel (DPC). Generally, the Xeon E5-2600 processor series clocks up to 2 DPC at the maximum rated speed of the processor. However, if any channel is fully populated (3 DPC), the processor slows all the installed memory down. For example, an Intel Xeon E5-2690 processor clocks all installed memory at a maximum speed of 1600 MHz up to 2 DPC. However, if any one channel is populated with 3 DPC, all memory channels are clocked at 1066 MHz.

- DIMM voltage rating. The Xeon E5-2600 processor series supports both low-voltage (1.35 V) and standard-voltage (1.5 V) DIMMs. As Table 5-49 on page 250 shows, the maximum clock speed for supported low-voltage DIMMs is 1333 MHz, and the maximum clock speed for supported standard-voltage DIMMs is 1600 MHz.

Table 5-50 and Table 5-51 on page 252 list the memory DIMM types that are available for the x240 and show the maximum memory speed, based on the number of DIMMs per channel, ranks per DIMM, and DIMM voltage rating.
Table 5-50 Maximum memory speeds (Part 1 - UDIMMs, LRDIMMs, and quad-rank RDIMMs)

Spec                        UDIMMs, dual rank      LRDIMMs, quad rank     RDIMMs, quad rank
Part numbers                49Y1404 (4 GB)         49Y1567 (16 GB)        49Y1400 (16 GB)
                                                   90Y3105 (32 GB)        90Y3102 (32 GB)
Rated speed                 1333 MHz               1333 MHz               1066 MHz
Rated voltage               1.35 V                 1.35 V                 1.35 V
Operating voltage           1.35 V     1.5 V       1.35 V     1.5 V       1.35 V     1.5 V
Maximum quantity (a)        16         16          24         24          8          16
Largest DIMM                4 GB       4 GB        32 GB      32 GB       32 GB      32 GB
Max memory capacity         48 GB      48 GB       768 GB     768 GB      256 GB     512 GB
Max memory at rated speed   48 GB      48 GB       N/A        512 GB      N/A        256 GB
Maximum operating speed:
  1 DIMM per channel        1333 MHz   1333 MHz    1066 MHz   1333 MHz    800 MHz    1066 MHz
  2 DIMMs per channel       1333 MHz   1333 MHz    1066 MHz   1333 MHz    NS (b)     800 MHz
  3 DIMMs per channel       NS (c)     NS (c)      1066 MHz   1066 MHz    NS (d)     NS (d)

Chapter 5. Compute nodes

251

a. The maximum quantity that is supported is shown for two processors installed. When one processor is installed, the maximum quantity that is supported is half of that shown.
b. NS = Not supported at 1.35 V. The DIMM operates at 1.5 V instead.
c. NS = Not supported. UDIMMs support only up to 2 DIMMs per channel.
d. NS = Not supported. RDIMMs support up to 8 ranks per channel.

Table 5-51 Maximum memory speeds (Part 2 - Single-rank and dual-rank RDIMMs)

Spec                        RDIMMs, single rank                    RDIMMs, dual rank
Part numbers                49Y1405 (2 GB)       49Y1559 (4 GB)    49Y1407 (4 GB)       90Y3178 (4 GB)
                            49Y1406 (4 GB)                         49Y1397 (8 GB)       90Y3109 (8 GB)
                                                                   49Y1563 (16 GB)      00D4968 (16 GB)
Rated speed                 1333 MHz             1600 MHz          1333 MHz             1600 MHz
Rated voltage               1.35 V               1.5 V             1.35 V               1.5 V
Operating voltage           1.35 V    1.5 V      1.5 V             1.35 V    1.5 V      1.5 V
Max quantity (a)            16        24         24                16        24         24
Largest DIMM                4 GB      4 GB       4 GB              16 GB     16 GB      16 GB
Max memory capacity         64 GB     96 GB      96 GB             256 GB    384 GB     384 GB
Max memory at rated speed   N/A       64 GB      64 GB             N/A       256 GB     256 GB
Maximum operating speed:
  1 DIMM per channel        1333 MHz  1333 MHz   1600 MHz          1333 MHz  1333 MHz   1600 MHz
  2 DIMMs per channel       1333 MHz  1333 MHz   1600 MHz          1333 MHz  1333 MHz   1600 MHz
  3 DIMMs per channel       NS (b)    1066 MHz   1066 MHz          NS (b)    1066 MHz   1066 MHz

a. The maximum quantity that is supported is shown for two processors installed. When one processor is installed, the maximum quantity that is supported is half of that shown.
b. NS = Not supported at 1.35 V. The DIMM operates at 1.5 V instead.
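The clock-speed rules that these tables summarize can be reduced to a short helper. The following Python sketch is illustrative only; the function name is ours, and the flat 1066 MHz 3DPC derating matches the E5-2690 example in the text but, as the tables show, the actual derating varies by DIMM type.

```python
# Illustrative sketch of the memory clock rules described in this section.
# Assumption (ours, not IBM's): 3DPC is modeled as a flat 1066 MHz cap.
def effective_memory_speed(cpu_max_mhz, dimm_speeds_mhz, dimms_per_channel):
    # The processor never over-clocks memory: the slowest component wins.
    speed = min([cpu_max_mhz] + list(dimm_speeds_mhz))
    # Any fully populated channel (3 DIMMs per channel) slows all memory.
    if dimms_per_channel >= 3:
        speed = min(speed, 1066)
    return speed

# E5-2640 (1333 MHz) with 1600 MHz DIMMs at 2DPC: memory runs at 1333 MHz.
print(effective_memory_speed(1333, [1600, 1600], 2))
```

The sketch encodes only the general rules from the text; consult the tables for the exact speed of a specific DIMM type and voltage.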

Tip: When an unsupported memory configuration is detected, the IMM illuminates the DIMM mismatch light path error LED and the system does not boot. A DIMM mismatch error includes the following examples:
- Mixing of RDIMMs, UDIMMs, or LRDIMMs in the system
- Not adhering to the DIMM population rules
In some cases, the error log points to the DIMM slots that are mismatched.

Memory modes
The x240 type 8737 supports the following memory modes:
- Independent channel mode
- Rank-sparing mode
- Mirrored-channel mode
These modes can be selected in the Unified Extensible Firmware Interface (UEFI) setup. For more information, see 5.4.13, Systems management on page 271.


Independent channel mode


This mode is the default mode for DIMM population. DIMMs are populated in the last DIMM connector on the channel first, and then installed one DIMM per channel, distributed equally between channels and processors. In this memory mode, the operating system uses the full amount of memory that is installed and no redundancy is provided.

An IBM Flex System x240 Compute Node that is configured in independent channel mode yields a maximum of 192 GB of usable memory with one processor installed, and a maximum of 384 GB of usable memory with two processors installed, when 16 GB DIMMs are used.

Memory DIMMs must be installed in the correct order, starting with the last physical DIMM socket of each channel first. The DIMMs can be installed without matching sizes, but avoid this configuration because it might affect optimal memory performance. For more information about the memory DIMM installation sequence when you use independent channel mode, see Memory DIMM installation: Independent channel and rank-sparing modes on page 254.

Rank-sparing mode
In rank-sparing mode, one memory DIMM rank serves as a spare of the other ranks on the same channel. The spare rank is held in reserve and is not used as active memory. The spare rank must have an identical or larger memory capacity than all the other active memory ranks on the same channel. After an error threshold is surpassed, the contents of that rank are copied to the spare rank. The failed rank of memory is taken offline, and the spare rank is put online and used as active memory in place of the failed rank. The memory DIMM installation sequence when using rank-sparing mode is identical to independent channel mode, as described in Memory DIMM installation: Independent channel and rank-sparing modes on page 254.

Mirrored-channel mode
In mirrored-channel mode, memory is installed in pairs. Each DIMM in a pair must be identical in capacity, type, and rank count. The channels are grouped in pairs. Each channel in the group receives the same data. One channel is used as a backup of the other, which provides redundancy. The memory contents on channel 0 are duplicated in channel 1, and the memory contents of channel 2 are duplicated in channel 3. The DIMMs in channel 0 and channel 1 must be the same size and type. The DIMMs in channel 2 and channel 3 must be the same size and type. The effective memory that is available to the system is only half of what is installed. Because memory mirroring is handled in hardware, it is operating system-independent. Consideration: In a two processor configuration, memory must be identical across the two processors to enable the memory mirroring feature.
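The effect of each memory mode on usable capacity can be captured in a small sketch. This is our illustration, not IBM-provided code; the mode names are ours, and the fraction of memory reserved by rank sparing depends on how many spare ranks each channel holds.

```python
# Hedged sketch of usable memory per memory mode, per the descriptions above.
def usable_memory_gb(installed_gb, mode, spare_fraction=0.0):
    if mode == "independent":
        # The operating system uses the full amount of installed memory.
        return installed_gb
    if mode == "rank-sparing":
        # Spare ranks are held in reserve; spare_fraction is configuration-
        # dependent (our parameter, not an IBM-defined value).
        return installed_gb * (1 - spare_fraction)
    if mode == "mirrored-channel":
        # Each channel pair mirrors its partner: half the memory is usable.
        return installed_gb / 2
    raise ValueError("unknown memory mode: %s" % mode)

# 384 GB installed in mirrored-channel mode leaves 192 GB usable.
print(usable_memory_gb(384, "mirrored-channel"))
```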


Figure 5-30 shows the E5-2600 series processor with the four memory channels and which channels are mirrored when operating in mirrored-channel mode.

Figure 5-30 Showing the mirrored channels and DIMM pairs when in mirrored-channel mode

For more information about the memory DIMM installation sequence when mirrored channel mode is used, see Memory DIMM installation: Mirrored-channel on page 257.

DIMM installation order


This section describes the preferred order in which DIMMs should be installed, based on the memory mode that is used.

Memory DIMM installation: Independent channel and rank-sparing modes


The following guidelines apply only when the processors are operating in independent channel mode or rank-sparing mode.

The x240 boots with one memory DIMM installed per processor. However, the suggested memory configuration balances the memory across all the memory channels on each processor to use the available memory bandwidth. Use one of the following suggested memory configurations:
- Four, eight, or 12 memory DIMMs in a single-processor x240 server
- Eight, 16, or 24 memory DIMMs in a dual-processor x240 server

This sequence spreads the DIMMs across as many memory channels as possible. For best performance and to ensure a working memory configuration, install the DIMMs in the sockets as shown in Table 5-52 on page 255 and Table 5-53 on page 256.



Table 5-52 shows DIMM installation if you have one processor installed.
Table 5-52 Suggested DIMM installation for the x240 with one processor installed

(Population matrix for 1 to 12 DIMMs not reproduced legibly here. Populate the DIMM sockets farthest from the processor first, sockets 1, 4, 9, and 12, and work inward, spreading the DIMMs equally across all four memory channels of processor 1.)

a. For optimal memory performance, populate all the memory channels equally.


Table 5-53 shows DIMM installation if you have two processors installed.
Table 5-53 Suggested DIMM installation for the x240 with two processors installed

(Population matrix for 1 to 24 DIMMs not reproduced legibly here. Populate the DIMM sockets farthest from each processor first and work inward, spreading the DIMMs equally across the memory channels of both processors.)

a. For optimal memory performance, populate all the memory channels equally.



Memory DIMM installation: Mirrored-channel


Table 5-54 lists the memory DIMM installation order for the x240, with one or two processors that are installed when operating in mirrored-channel mode.
Table 5-54 The DIMM installation order for mirrored-channel mode

DIMM pair (a)   One processor installed   Two processors installed
1st             1 and 4                   1 and 4
2nd             9 and 12                  13 and 16
3rd             2 and 5                   9 and 12
4th             8 and 11                  21 and 24
5th             3 and 6                   2 and 5
6th             7 and 10                  14 and 17
7th             -                         8 and 11
8th             -                         20 and 23
9th             -                         3 and 6
10th            -                         15 and 18
11th            -                         7 and 10
12th            -                         19 and 22

a. The pair of DIMMs must be identical in capacity, type, and rank count.

Memory installation considerations


Use the following general guidelines when you decide on the memory configuration of your IBM Flex System x240 Compute Node:
- All memory installation considerations apply equally to one- and two-processor systems.
- All DIMMs must be DDR3 DIMMs.
- Memory of different types (RDIMMs, UDIMMs, and LRDIMMs) cannot be mixed in the system.
- If you mix 1.35 V and 1.5 V DIMMs, the system runs all of them at 1.5 V and you lose the low-voltage energy advantage.
- If you mix DIMMs with different memory speeds, all DIMMs in the system run at the lowest speed.
- Install memory DIMMs in order of their size, with the largest DIMM first, in the order described in Table 5-52 on page 255 and Table 5-53 on page 256. The correct installation order is the DIMM slot farthest from the processor first (DIMM slots 1, 4, 9, and 12), working inward.
- Install memory DIMMs in order of their rank, with the largest DIMM in the DIMM slot farthest from the processor. Start with DIMM slots 1, 4, 9, and 12, and work inward.
- Memory DIMMs can be installed one DIMM at a time. However, avoid this configuration because it can affect performance.
- For maximum memory bandwidth, install one DIMM in each of the four memory channels, that is, in matched quads (four DIMMs at a time).
- Populate equivalent ranks per channel.
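The mixing rules in these guidelines can be sketched as a small configuration check. The code below is our illustration only; the function and field names are assumptions, not an IBM tool.

```python
# Hedged sketch of the DIMM mixing rules listed above: DIMM types cannot be
# mixed; mixed 1.35 V and 1.5 V DIMMs all run at 1.5 V; mixed speeds all run
# at the slowest DIMM's speed.
def plan_memory(dimms):
    """dimms: list of (dimm_type, voltage, speed_mhz) tuples."""
    types = {dimm_type for dimm_type, _, _ in dimms}
    if len(types) > 1:
        raise ValueError("DIMM mismatch: cannot mix " + ", ".join(sorted(types)))
    operating_voltage = max(voltage for _, voltage, _ in dimms)  # 1.5 V wins
    operating_speed = min(speed for _, _, speed in dimms)        # slowest wins
    return operating_voltage, operating_speed

# A 1.35 V 1333 MHz RDIMM mixed with a 1.5 V 1600 MHz RDIMM:
# everything runs at 1.5 V and 1333 MHz.
print(plan_memory([("RDIMM", 1.35, 1333), ("RDIMM", 1.5, 1600)]))
```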


5.4.8 Standard onboard features


This section describes the standard onboard features of the IBM Flex System x240 Compute Node.

USB ports
The x240 has one external USB port on the front of the compute node. Figure 5-31 shows the location of the external USB connector on the x240.


Figure 5-31 The front USB connector on the x240 Compute Node

The x240 also supports an option that provides two internal USB ports (x240 USB Enablement Kit) that are primarily used for attaching USB hypervisor keys. For more information, see 5.4.10, Integrated virtualization on page 266.

Console breakout cable


The x240 connects to local video, USB keyboard, and USB mouse devices through a console breakout cable, which attaches to a connector on the front bezel of the x240 compute node. The console breakout cable also provides a serial connector. Figure 5-32 shows the console breakout cable.



Figure 5-32 Console breakout cable that connects to the x240


Table 5-55 lists the ordering part number and feature code of the console breakout cable. One console breakout cable ships with the IBM Flex System Enterprise Chassis.
Table 5-55 Ordering part number and feature code

Part number   Feature code   Description
81Y5286       A1NF           IBM Flex System Console Breakout Cable

Trusted Platform Module


Trusted computing is an industry initiative that provides a combination of secure software and secure hardware to create a trusted platform. It is a specification that increases network security by building unique hardware IDs into computing devices. The x240 implements Trusted Platform Module (TPM) Version 1.2 support. The TPM in the x240 is one of the three layers of the trusted computing initiative, as shown in Table 5-56.
Table 5-56 Trusted computing layers

Layer                                                      Implementation
Level 1: Tamper-proof hardware, used to generate           Trusted Platform Module
  trustable keys
Level 2: Trustable platform                                UEFI or BIOS; Intel processor
Level 3: Trustable execution                               Operating system; drivers

5.4.9 Local storage


The x240 compute node features an onboard LSI SAS2004 controller with two 2.5-inch small form factor (SFF) hot-swap drive bays.

Integrated SAS controller


The 2.5-inch internal drive bays are accessible from the front of the compute node. An onboard LSI SAS2004 controller provides RAID 0, RAID 1, or RAID 10 capability. It supports up to two SFF hot-swap SAS or SATA HDDs or two SFF hot-swap solid-state drives. Figure 5-33 shows how the LSI2004 SAS controller and hot-swap storage devices connect to the internal HDD interface.


Figure 5-33 The LSI2004 SAS controller connections to the HDD interface


Figure 5-34 shows the front of the x240, including the two hot-swap drive bays.

Figure 5-34 The x240 showing the front hot-swap disk drive bays

Supported 2.5-inch drives


The x240 type 8737 has support for up to two hot-swap SFF SAS or SATA HDDs or up to two hot-swap SFF SSDs. These two hot-swap components are accessible from the front of the compute node without removing the compute node from the chassis. Table 5-57 shows a list of supported SAS and SATA HDDs and SSDs.
Table 5-57 Supported SAS and SATA HDDs and SSDs

Part number   Feature code   Description

10K SAS hard disk drives
42D0637       5599           IBM 300 GB 10K 6 Gbps SAS 2.5" SFF Slim-HS HDD
90Y8877       A2XC           IBM 300 GB 10K 6 Gbps SAS 2.5" SFF G2HS HDD
49Y2003       5433           IBM 600 GB 10K 6 Gbps SAS 2.5" SFF Slim-HS HDD
90Y8872       A2XD           IBM 600 GB 10K 6 Gbps SAS 2.5" SFF G2HS HDD
81Y9650       A282           IBM 900 GB 10K 6 Gbps SAS 2.5" SFF HS HDD
00AD075       A48S           IBM 1.2 TB 10K 6 Gbps SAS 2.5" G2HS HDD

15K SAS hard disk drives
42D0677       5536           IBM 146 GB 15K 6 Gbps SAS 2.5" SFF Slim-HS HDD
90Y8926       A2XB           IBM 146 GB 15K 6 Gbps SAS 2.5" SFF G2HS HDD
81Y9670       A283           IBM 300 GB 15K 6 Gbps SAS 2.5" SFF HS HDD

NL SATA hard disk drives
81Y9722       A1NX           IBM 250 GB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
81Y9726       A1NZ           IBM 500 GB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
81Y9730       A1AV           IBM 1 TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD

NL SAS hard disk drives
42D0707       5409           IBM 500 GB 7.2K 6 Gbps NL SAS 2.5" SFF Slim-HS HDD
90Y8953       A2XE           IBM 500 GB 7.2K 6 Gbps NL SAS 2.5" SFF G2HS HDD
81Y9690       A1P3           IBM 1 TB 7.2K 6 Gbps NL SAS 2.5" SFF HS HDD

10K and 15K self-encrypting drives (SED)
90Y8944       A2ZK           IBM 146 GB 15K 6 Gbps SAS 2.5" SFF G2HS SED
90Y8913       A2XF           IBM 300 GB 10K 6 Gbps SAS 2.5" SFF G2HS SED


90Y8908       A3EF           IBM 600 GB 10K 6 Gbps SAS 2.5" SFF G2HS SED
81Y9662       A3EG           IBM 900 GB 10K 6 Gbps SAS 2.5" SFF G2HS SED
00AD085       A48T           IBM 1.2 TB 10K 6 Gbps SAS 2.5" G2HS SED

SAS-SSD hybrid drive
00AD102       A4G7           IBM 600 GB 10K 6 Gbps SAS 2.5" G2HS Hybrid

Solid-state drives - Enterprise
41Y8331       A4FL           S3700 200 GB SATA 2.5" MLC HS Enterprise SSD
41Y8336       A4FN           S3700 400 GB SATA 2.5" MLC HS Enterprise SSD
41Y8341       A4FQ           S3700 800 GB SATA 2.5" MLC HS Enterprise SSD
00W1125       A3HR           IBM 100 GB SATA 2.5" MLC HS Enterprise SSD
49Y6129       A3EW           IBM 200 GB SAS 2.5" MLC HS Enterprise SSD
49Y6134       A3EY           IBM 400 GB SAS 2.5" MLC HS Enterprise SSD
49Y6139       A3F0           IBM 800 GB SAS 2.5" MLC HS Enterprise SSD
49Y6195       A4GH           IBM 1.6 TB SAS 2.5" MLC HS Enterprise SSD

Solid-state drives - Enterprise Value
49Y5844       A3AU           IBM 512 GB SATA 2.5" MLC HS Enterprise Value SSD
49Y5839       A3AS           IBM 64 GB SATA 2.5" MLC HS Enterprise Value SSD
90Y8643       A2U3           IBM 256 GB SATA 2.5" MLC HS Enterprise Value SSD
90Y8648       A2U4           IBM 128 GB SATA 2.5" MLC HS Enterprise Value SSD

ServeRAID M5115 SAS/SATA controller


In addition, the x240 supports up to eight 1.8-inch solid-state drives that are combined with a ServeRAID M5115 SAS/SATA controller (90Y4390). The M5115 attaches to the I/O adapter 1 connector. It can be attached even if the Compute Node Fabric Connector is installed. The Compute Node Fabric Connector is used to route the Embedded 10 Gb Virtual Fabric adapter to bays 1 and 2. For more information, see 5.4.12, I/O expansion on page 269. The ServeRAID M5115 cannot be installed if an adapter is installed in I/O adapter slot 1. Table 5-58 lists the ServeRAID M5115 and associated hardware kits.
Table 5-58 ServeRAID M5115 and supported hardware kits for the x240 Part number 90Y4390 90Y4342 90Y4341 90Y4391 Feature code A2XW A2XX A2XY A2XZ Description ServeRAID M5115 SAS/SATA Controller ServeRAID M5100 Series Enablement Kit for IBM Flex System x240 ServeRAID M5100 Series IBM Flex System Flash Kit for x240 ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240 Maximum supported 1 1 1 1


The ServeRAID M5115 supports the following combinations of 2.5-inch drives and 1.8-inch solid-state drives:
- Up to two 2.5-inch drives only
- Up to four 1.8-inch drives only
- Up to two 2.5-inch drives, plus up to four 1.8-inch solid-state drives
- Up to eight 1.8-inch solid-state drives

The ServeRAID M5115 SAS/SATA Controller (90Y4390) provides an advanced RAID controller that supports RAID 0, 1, 10, 5, 50, and, optionally, 6 and 60. It includes 1 GB of cache, which can be backed up to flash when the controller is attached to the supercapacitor included with the optional ServeRAID M5100 Series Enablement Kit (90Y4342).

At least one of the following hardware kits is required with the ServeRAID M5115 controller to enable specific drive support:

- ServeRAID M5100 Series Enablement Kit for IBM Flex System x240 (90Y4342) enables support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection. This enablement kit replaces the standard two-bay backplane (which is attached through the system board to an onboard controller) with a new backplane that attaches with an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment for the CacheVault unit.

  MegaRAID CacheVault flash cache protection uses NAND flash memory that is powered by a supercapacitor to protect data that is stored in the controller cache. This module eliminates the need for the lithium-ion battery that is commonly used to protect DRAM cache memory on Peripheral Component Interconnect (PCI) RAID controllers. To avoid data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash by using power from the supercapacitor. After power is restored to the RAID controller, the saved data is transferred from the NAND flash back to the DRAM cache, which can then be flushed to disk.

  Tip: The Enablement Kit is only required if 2.5-inch drives are used. If you plan to install four or eight 1.8-inch SSDs, this kit is not required.

- ServeRAID M5100 Series IBM Flex System Flash Kit for x240 (90Y4341) enables support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay SSD backplane that attaches with an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, so this kit does not include a supercapacitor.

- ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240 (90Y4391) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles that replace the existing baffles, and each baffle has mounts for two SSDs. Included flexible cables connect the drives to the controller.

Table 5-59 on page 263 shows the kits that are required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the Flash Kit, and the SSD Expansion Kit.

Tip: If the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is installed, the x240 USB Enablement Kit (49Y8119, which is described in 5.2.11, Integrated virtualization on page 211) cannot be installed. The x240 USB Enablement Kit and the SSD Expansion Kit both include special air baffles that cannot be installed at the same time.


Table 5-59 ServeRAID M5115 hardware kits

Required drive support                    Components required
2.5-inch   1.8-inch                       M5115 controller   Enablement Kit   Flash Kit   SSD Expansion Kit
drives     SSDs                           90Y4390            90Y4342          90Y4341     90Y4391 (a)
2          0                              Required           Required         -           -
0          4 (front)                      Required           -                Required    -
2          4 (internal)                   Required           Required         -           Required
0          8 (front and internal)         Required           -                Required    Required

a. If you install the SSD Expansion Kit, you cannot install the x240 USB Enablement Kit (49Y8119) at the same time.
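The kit requirements in Table 5-59 can be expressed as a small selection helper. The following sketch is ours, not IBM-provided; it encodes only the four supported rows of the table, and the function and argument names are assumptions.

```python
# Hedged sketch of the M5115 kit rules from Table 5-59. The front bays hold
# either up to two 2.5-inch drives or up to four 1.8-inch SSDs (Flash Kit);
# up to four more 1.8-inch SSDs mount internally (SSD Expansion Kit).
def m5115_kits(drives_25=0, front_ssds=0, internal_ssds=0):
    if drives_25 and front_ssds:
        raise ValueError("front bays take 2.5-inch drives or 1.8-inch SSDs, not both")
    if drives_25 > 2 or front_ssds > 4 or internal_ssds > 4:
        raise ValueError("unsupported drive count")
    kits = ["90Y4390 M5115 controller"]           # always required
    if drives_25:
        kits.append("90Y4342 Enablement Kit")     # 2.5-inch front bays
    if front_ssds:
        kits.append("90Y4341 Flash Kit")          # 1.8-inch front bays
    if internal_ssds:
        kits.append("90Y4391 SSD Expansion Kit")  # internal 1.8-inch SSDs
    return kits

# Eight 1.8-inch SSDs (four front, four internal) need the controller,
# Flash Kit, and SSD Expansion Kit, matching the last row of Table 5-59.
print(m5115_kits(front_ssds=4, internal_ssds=4))
```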

Figure 5-35 shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (as shown in row 1 of Table 5-59).

Figure 5-35 The ServeRAID M5115 and the Enablement Kit installed


Figure 5-36 shows how the ServeRAID M5115 and Flash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (as shown in row 4 of Table 5-59 on page 263).


Figure 5-36 ServeRAID M5115 with Flash and SSD Expansion Kits installed

The eight SSDs are installed in the following locations:
- Four in the front of the system in place of the two 2.5-inch drive bays
- Two in a tray above the memory banks for CPU 1
- Two in a tray above the memory banks for CPU 2

The ServeRAID M5115 controller has the following specifications:
- Eight internal 6 Gbps SAS/SATA ports
- PCI Express 3.0 x8 host interface
- 6 Gbps throughput per port
- 800 MHz dual-core IBM PowerPC processor with LSI SAS2208 6 Gbps RAID-on-Chip (ROC) controller
- Support for RAID levels 0, 1, 10, 5, and 50 standard; support for RAID 6 and 60 with the optional upgrade 90Y4410
- Optional onboard 1 GB data cache (DDR3 running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342
- Support for SAS and SATA HDDs and SSDs
- Support for intermixing SAS and SATA HDDs and SSDs; mixing different types of drives in the same array (drive group) is not recommended
- Support for self-encrypting drives (SEDs) with MegaRAID SafeStore
- Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447)
- Support for up to 64 virtual drives, up to 128 drive groups, up to 16 virtual drives per drive group, and up to 32 physical drives per drive group
- Support for logical unit number (LUN) sizes up to 64 TB
- Configurable stripe size up to 1 MB
- Compliant with Disk Data Format (DDF) configuration on disk (CoD)
- S.M.A.R.T. support
- MegaRAID Storage Manager management software

Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, SSD performance accelerator, and SSD caching enabler. Table 5-60 lists all Feature on Demand (FoD) license upgrades.
Table 5-60 Supported upgrade features

Part number   Feature code   Description                                             Maximum supported
90Y4410       A2Y1           ServeRAID M5100 Series RAID 6 Upgrade for               1
                             IBM Flex System
90Y4412       A2Y2           ServeRAID M5100 Series Performance Accelerator for      1
                             IBM Flex System (MegaRAID FastPath)
90Y4447       A36G           ServeRAID M5100 Series SSD Caching Enabler for          1
                             IBM Flex System (MegaRAID CacheCade Pro 2.0)

These features have the following characteristics:

- RAID 6 Upgrade (90Y4410): Adds support for RAID 6 and RAID 60. This license is a Feature on Demand license.

- Performance Accelerator (90Y4412): The Performance Accelerator for IBM Flex System is implemented by using the LSI MegaRAID FastPath software. It provides high-performance I/O acceleration for SSD-based virtual drives by using a low-latency I/O path to increase the maximum input/output operations per second (IOPS) capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is a Feature on Demand license.

- SSD Caching Enabler for traditional hard disk drives (90Y4447): The SSD Caching Enabler for IBM Flex System is implemented by using the LSI MegaRAID CacheCade Pro 2.0. It is designed to accelerate the performance of HDD arrays with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache, which helps maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is a Feature on Demand license. This feature requires at least one SSD drive be installed.


Supported 1.8-inch solid-state drives


The 1.8-inch solid-state drives that are supported by the ServeRAID M5115 controller are listed in Table 5-61.
Table 5-61 Supported 1.8-inch solid-state drives

Part number   Feature code   Description                    Maximum supported
43W7746       5420           IBM 200 GB SATA 1.8" MLC SSD   8
43W7726       5428           IBM 50 GB SATA 1.8" MLC SSD    8

Storage Expansion Node


Further internal storage is supported if an IBM Flex System Storage Expansion Node is attached. For more information, see 5.10, IBM Flex System Storage Expansion Node on page 363.

5.4.10 Integrated virtualization


The x240 offers an IBM standard USB flash drive option that is preinstalled with VMware ESXi, which is an embedded version of VMware ESXi. It is fully contained on the flash drive, so it does not require any local disk space. The IBM USB Memory Key for VMware Hypervisor plugs into the USB ports on the optional x240 USB Enablement Kit, as shown in Figure 5-37 on page 267. Table 5-62 lists the ordering information for the VMware hypervisor options.
Table 5-62 IBM USB Memory Key for VMware Hypervisor

Part number   Feature code   Description
41Y8300       A2VC           IBM USB Memory Key for VMware ESXi 5.0
41Y8307       A383           IBM USB Memory Key for VMware ESXi 5.0 Update 1
41Y8311       A2R3           IBM USB Memory Key for VMware ESXi 5.1
41Y8298       A2G0           IBM Blank USB Memory Key for VMware ESXi Downloads (a)

a. The Blank USB Memory Key requires the download of the VMware vSphere (ESXi) Hypervisor with IBM Customization image, which is available at this website: http://ibm.com/systems/x/os/vmware/

The USB memory keys connect to the internal x240 USB Enablement Kit. Table 5-63 lists the ordering information for the internal x240 USB Enablement Kit.
Table 5-63 Internal USB port option

Part number   Feature code   Description
49Y8119       A3A3           x240 USB Enablement Kit


The x240 USB Enablement Kit connects to the system board of the server, as shown in Figure 5-37. The kit offers two ports and enables you to install two memory keys. If you install both keys, both devices are listed in the boot menu. With this setup, you can boot from either device, or set one as a backup in case the first one becomes corrupted.

Figure 5-37 The x240 compute node showing the location of the internal x240 USB Enablement Kit

There are two types of USB keys: preloaded keys and blank keys. Blank keys allow you to download an IBM-customized version of ESXi and load it onto the key. The x240 supports one or two keys installed, but only in the following combinations:
- One preloaded key
- One blank key
- One preloaded key and one blank key
- Two blank keys

Two preloaded keys is an unsupported combination because it prevents ESXi from booting, as described at this website: http://kb.vmware.com/kb/1035107

Consideration: The x240 USB Enablement Kit and USB memory keys are not supported if the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is already installed because these kits occupy the same location in the server.
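The supported key combinations can be captured in a one-line check. The helper below is our illustration, not an IBM tool; the function name is an assumption.

```python
# Hedged sketch: is a given mix of preloaded and blank USB keys supported?
# One or two keys are allowed, but at most one may be preloaded, because two
# preloaded keys prevent ESXi from booting (see the VMware KB article above).
def usb_key_combo_supported(preloaded, blank):
    total = preloaded + blank
    return 1 <= total <= 2 and preloaded <= 1

print(usb_key_combo_supported(preloaded=2, blank=0))  # False: unsupported
```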


For a complete description of the features and capabilities of VMware ESX Server, see this website: http://www.vmware.com/products/vi/esx/

5.4.11 Embedded 10 Gb Virtual Fabric adapter


Some models of the x240 include an Embedded 10 Gb Virtual Fabric adapter that is built into the system board. Table 5-41 on page 239 lists what models of the x240 include the Embedded 10 Gb Virtual Fabric adapter. Each x240 model that includes the embedded 10 Gb Virtual Fabric adapter also has the Compute Node Fabric Connector that is installed in I/O connector 1. The Compute Node Fabric Connector is physically screwed onto the system board, and provides connectivity to the Enterprise Chassis midplane. Models without the Embedded 10 Gb Virtual Fabric adapter do not include any other Ethernet connections to the Enterprise Chassis midplane. For those models, an I/O adapter must be installed in I/O connector 1 or I/O connector 2. This adapter provides network connectivity between the server and the chassis midplane, and ultimately to the network switches. Figure 5-38 shows the Compute Node Fabric Connector.

Figure 5-38 The Compute Node Fabric Connector

The Compute Node Fabric Connector enables port 1 on the Embedded 10 Gb Virtual Fabric adapter to be routed to I/O module bay 1. Similarly, port 2 can be routed to I/O module bay 2. The Compute Node Fabric Connector can be unscrewed and removed, if required, to allow the installation of an I/O adapter on I/O connector 1.

Consideration: If the Compute Node Fabric Connector for the Embedded 10 Gb Virtual Fabric adapter occupies I/O connector 1, only I/O connector 2 is available for the installation of additional I/O adapters. (An exception is that the ServeRAID Controller can coexist in slot 1 with an Embedded adapter.)

The Embedded 10 Gb Virtual Fabric adapter is based on the Emulex BladeEngine 3, which is a single-chip, dual-port 10 Gigabit Ethernet (10 GbE) controller. The Embedded 10 Gb Virtual Fabric adapter includes the following features:
- PCI Express Gen2 x8 host bus interface
- Support for multiple Virtual Network Interface Card (vNIC) functions
- TCP/IP offload engine (TOE enabled)
- SR-IOV capable
- RDMA over TCP/IP capable
- iSCSI and FCoE upgrade offering by using FoD

268

IBM PureFlex System and IBM Flex System Products and Technology

Table 5-64 lists the ordering information for the IBM Flex System Embedded 10 Gb Virtual Fabric Upgrade. This upgrade enables the iSCSI and FCoE support on the Embedded 10 Gb Virtual Fabric adapter.
Table 5-64 Feature on Demand upgrade for FCoE and iSCSI support

Part number  Feature code  Description
90Y9310      A2TD          IBM Virtual Fabric Advanced Software Upgrade (LOM)

Figure 5-39 shows the x240 and the location of the Compute Node Fabric Connector on the system board.

Figure 5-39 The x240 showing the location of the Compute Node Fabric Connector

5.4.12 I/O expansion


The x240 has two PCIe 3.0 x16 I/O expansion connectors for attaching I/O adapters. There is also another expansion connector that is designed for future expansion options. The I/O expansion connectors are high-density 216-pin PCIe connectors. Installing I/O adapters allows the x240 to connect to switch modules in the IBM Flex System Enterprise Chassis.

Chapter 5. Compute nodes


Figure 5-40 shows the rear of the x240 compute node and the locations of the I/O connectors.

Figure 5-40 Rear of the x240 compute node showing the locations of the I/O connectors

Table 5-65 lists the I/O adapters that are supported in the x240.
Table 5-65 Supported I/O adapters for the x240 compute node

Part number  Feature code  Ports  Description
Ethernet adapters
49Y7900      A10Y          4      IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
90Y3466      A1QY          2      IBM Flex System EN4132 2-port 10Gb Ethernet Adapter
90Y3554      A1R1          4      IBM Flex System CN4054 10Gb Virtual Fabric Adapter
90Y3482      A3HK          2      IBM Flex System EN6132 2-port 40Gb Ethernet Adapter
Fibre Channel adapters
69Y1938      A1BM          2      IBM Flex System FC3172 2-port 8Gb FC Adapter
95Y2375      A2N5          2      IBM Flex System FC3052 2-port 8Gb FC Adapter
88Y6370      A1BP          2      IBM Flex System FC5022 2-port 16Gb FC Adapter
95Y2386      A45R          2      IBM Flex System FC5052 2-port 16Gb FC Adapter
95Y2391      A45S          4      IBM Flex System FC5054 4-port 16Gb FC Adapter
69Y1942      A1BQ          2      IBM Flex System FC5172 2-port 16Gb FC Adapter
InfiniBand adapters
90Y3454      A1QZ          2      IBM Flex System IB6132 2-port FDR InfiniBand Adapter

Requirement: Any supported I/O adapter can be installed in either I/O connector. However, you must be consistent not only across chassis, but across all compute nodes.


The x240 also supports adapters that are installed in an attached Flex System PCIe Expansion Node. For more information, see 5.9, IBM Flex System PCIe Expansion Node on page 356.

5.4.13 Systems management


The following section describes some of the systems management features that are available with the x240.

Front panel LEDs and controls


The front of the x240 includes several LEDs and controls that help with systems management. They include an HDD activity LED, status LEDs, and power, identify, check log, fault, and light path diagnostic LEDs. Figure 5-41 shows the location of the LEDs and controls on the front of the x240.
Figure 5-41 The front of the x240 with the front panel LEDs and controls shown

Table 5-66 describes the front panel LEDs.


Table 5-66 x240 front panel LED information

Power (Green): This LED lights solid when the system is powered up. When the compute node is initially plugged into a chassis, this LED is off. If the power-on button is pressed, the integrated management module (IMM) flashes this LED until it determines that the compute node is able to power up. If the compute node is able to power up, the IMM powers the compute node on and turns on this LED solid. If the compute node is not able to power up, the IMM turns off this LED and turns on the information LED. When this button is pressed with the x240 out of the chassis, the light path LEDs are lit.

Location (Blue): You can use this LED to locate the compute node in the chassis by requesting it to flash from the Chassis Management Module console. The IMM flashes this LED when instructed to by the Chassis Management Module. This LED functions only when the x240 is powered on.

Check error log (Yellow): The IMM turns on this LED when a condition occurs that prompts the user to check the system error log in the Chassis Management Module.

Fault (Yellow): This LED lights solid when a fault is detected somewhere on the compute node. If this indicator is on, the general fault indicator on the chassis front panel should also be on.

Hard disk drive activity LED (Green): Each hot-swap hard disk drive has an activity LED. When this LED is flashing, it indicates that the drive is in use.

Hard disk drive status LED (Yellow): When this LED is lit, it indicates that the drive failed. If an optional IBM ServeRAID controller is installed in the server, a slow flash (one flash per second) indicates that the drive is being rebuilt. A rapid flash (three flashes per second) indicates that the controller is identifying the drive.

Table 5-67 describes the x240 front panel controls.


Table 5-67 x240 front panel control information

Power on/off button (recessed, with power LED): If the x240 is off, pressing this button causes the x240 to power up and start loading. When the x240 is on, pressing this button causes a graceful shutdown of the individual x240 so that it is safe to remove. This process includes shutting down the operating system (if possible) and removing power from the x240. If an operating system is running, you might need to hold the button for approximately 4 seconds to initiate the shutdown. The button is recessed and grouped with the power LED to protect it from accidental activation.

NMI (recessed; it can be accessed only by using a small pointed object): Causes a non-maskable interrupt (NMI) for debugging purposes.

Power LED
The status of the power LED of the x240 shows the power status of the x240 compute node. It also indicates the discovery status of the node by the Chassis Management Module. The power LED states are listed in Table 5-68.
Table 5-68 The power LED states of the x240 compute node

Power LED state      Status of compute node
Off                  No power to the compute node.
On; fast flash mode  The compute node has power. The Chassis Management Module is in discovery mode (handshake).
On; slow flash mode  The compute node has power. Power is in stand-by mode.
On; solid            The compute node has power. The compute node is operational.

Consideration: The power button does not operate when the power LED is in fast flash mode.
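As an illustration only (this is not IBM-provided code, and the names are ours), the LED states in Table 5-68 and the power-button restriction can be modeled as a small lookup:

```python
# Illustrative model of the x240 power LED states (Table 5-68).
POWER_LED_STATES = {
    "off": "No power to the compute node.",
    "fast flash": "Node has power; CMM is in discovery mode (handshake).",
    "slow flash": "Node has power; power is in stand-by mode.",
    "solid": "Node has power; the compute node is operational.",
}

def power_button_active(led_state: str) -> bool:
    """The power button does not operate while the LED is in fast flash mode."""
    return led_state != "fast flash"
```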

Light path diagnostics panel


For quick problem determination when you are physically at the server, the x240 offers the following three-step guided path:
1. The Fault LED on the front panel.
2. The light path diagnostics panel, which is shown in Figure 5-42 on page 273.
3. LEDs next to key components on the system board.

The x240 light path diagnostics panel is visible when you remove the server from the chassis. The panel is on the upper right of the compute node, as shown in Figure 5-42.

Figure 5-42 Location of x240 light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis. The meaning of each LED in the light path diagnostics panel is listed in Table 5-69.
Table 5-69 Light path panel LED definitions

LED     Color   Meaning
LP      Green   The light path diagnostics panel is operational.
S BRD   Yellow  A system board error is detected.
MIS     Yellow  A mismatch occurred between the processors, DIMMs, or HDDs within the configuration as reported by POST.
NMI     Yellow  A non-maskable interrupt (NMI) occurred.
TEMP    Yellow  An over-temperature condition occurred that was critical enough to shut down the server.
MEM     Yellow  A memory fault occurred. The corresponding DIMM error LEDs on the system board are also lit.
ADJ     Yellow  A fault is detected in the adjacent expansion unit (if installed).

Integrated Management Module II


Each x240 server has an IMM2 onboard, and uses the Unified Extensible Firmware Interface (UEFI) to replace the older BIOS interface. The IMM2 provides the following major features as standard:
- IPMI v2.0 compliance
- Remote configuration of IMM2 and UEFI settings without the need to power on the server
- Remote access to system fan, voltage, and temperature values
- Remote IMM and UEFI update
- UEFI update when the server is powered off
- Remote console by way of Serial over LAN
- Remote access to the system event log
- Predictive failure analysis and integrated alerting features; for example, by using SNMP
- Remote presence, including remote control of the server by using a Java or ActiveX client
- Operating system failure window (blue screen) capture and display through the web interface
- Virtual media that allow the attachment of a diskette drive, CD/DVD drive, USB flash drive, or disk image to a server

Remember: Unlike IBM BladeCenter, the assigned TCP/IP address of the IMM is available on the local network. You can use this address to remotely manage the x240 by connecting directly to the IMM independent of the Flex System Manager (FSM) or Chassis Management Module (CMM).

For more information about the IMM, see 3.4.1, Integrated Management Module II on page 47.

5.4.14 Operating system support


The following operating systems are supported by the x240:
- Microsoft Windows Server 2008 HPC Edition
- Microsoft Windows Server 2008 R2
- Microsoft Windows Server 2008, Datacenter x64 Edition
- Microsoft Windows Server 2008, Enterprise x64 Edition
- Microsoft Windows Server 2008, Standard x64 Edition
- Microsoft Windows Server 2008, Web x64 Edition
- Red Hat Enterprise Linux 5 Server with Xen x64 Edition
- Red Hat Enterprise Linux 5 Server x64 Edition
- Red Hat Enterprise Linux 6 Server x64 Edition
- SUSE Linux Enterprise Server 10 for AMD64/EM64T
- SUSE Linux Enterprise Server 11 for AMD64/EM64T
- SUSE Linux Enterprise Server 11 with Xen for AMD64/EM64T
- VMware ESX 4.1
- VMware ESXi 4.1
- VMware vSphere 5
- VMware vSphere 5.1

For the latest list of supported operating systems, see IBM ServerProven at this website:
http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml


5.5 IBM Flex System x440 Compute Node


The IBM Flex System x440 Compute Node, machine type 7917, is a high-density, four-socket server that is optimized for high-end virtualization, mainstream database deployments, and memory-intensive, high-performance environments.

This section includes the following topics:
- 5.5.1, Introduction on page 275
- 5.5.2, Models on page 278
- 5.5.3, Chassis support on page 279
- 5.5.4, System architecture on page 280
- 5.5.5, Processor options on page 281
- 5.5.6, Memory options on page 282
- 5.5.7, Internal disk storage on page 284
- 5.5.8, Embedded 10Gb Virtual Fabric on page 290
- 5.5.9, I/O expansion options on page 291
- 5.5.10, Network adapters on page 294
- 5.5.11, Storage host bus adapters on page 295
- 5.5.12, Integrated virtualization on page 295
- 5.5.13, Light path diagnostics panel on page 296
- 5.5.14, Operating systems support on page 297

5.5.1 Introduction
The IBM Flex System x440 Compute Node is a double-wide compute node that provides scalability to support up to four Intel Xeon E5-4600 processors. The node's width allows for significant I/O capability. The server is ideal for virtualization, database, and memory-intensive high-performance computing environments.

Figure 5-43 shows the front of the compute node, which includes the location of the controls, LEDs, and connectors. The light path diagnostics panel is on the upper edge of the front panel bezel, in the same place as on the x220 and x240.
Figure 5-43 The IBM Flex System x440 Compute Node


Figure 5-44 shows the internal layout and major components of the x440.

Figure 5-44 Exploded view of the x440 showing the major components

Table 5-70 lists the features of the x440.


Table 5-70 IBM Flex System x440 Compute Node specifications

Form factor: Full-wide compute node.

Chassis support: IBM Flex System Enterprise Chassis.

Processor: Up to four Intel Xeon processor E5-4600 product family processors, each with eight cores (up to 2.7 GHz), six cores (up to 2.9 GHz), or four cores (up to 2.0 GHz). Two QPI links, up to 8.0 GTps each. Up to 1600 MHz memory speed. Up to 20 MB L3 cache per processor.

Chipset: Intel C600 series.

Memory: Up to 48 DIMM sockets (12 DIMMs per processor) using Low Profile (LP) DDR3 DIMMs. RDIMMs and LRDIMMs are supported. 1.5 V and low-voltage 1.35 V DIMMs are supported. Support for up to 1600 MHz memory speed, depending on the processor. Four memory channels per processor (three DIMMs per channel). Supports two DIMMs per channel operating at 1600 MHz (2 DPC @ 1600 MHz) with single-rank and dual-rank RDIMMs. Supports three DIMMs per channel at 1066 MHz with single-rank and dual-rank RDIMMs.

Memory maximums: With LRDIMMs: up to 1.5 TB with 48x 32 GB LRDIMMs and four processors. With RDIMMs: up to 768 GB with 48x 16 GB RDIMMs and four processors.

Memory protection: ECC, Chipkill (for x4-based memory DIMMs), memory mirroring, and memory rank sparing.

Disk drive bays: Two 2.5-inch hot-swap SAS/SATA drive bays that support SAS, SATA, and SSD drives. Optional Flash Kit support for up to eight 1.8-inch SSDs.

Maximum internal storage: With two 2.5-inch hot-swap drives: up to 2 TB with 1 TB 2.5" NL SAS HDDs, up to 2.4 TB with 1.2 TB 2.5" SAS HDDs, up to 2 TB with 1 TB 2.5" NL SATA HDDs, or up to 3.2 TB with 1.6 TB 2.5" SAS SSDs. Intermix of SAS and SATA HDDs and SSDs is supported. With 1.8-inch SSDs and a ServeRAID M5115 RAID adapter: up to 1.6 TB with eight 200 GB 1.8-inch SSDs.

RAID support: RAID 0 and 1 with the integrated LSI SAS2004 controller. Optional ServeRAID M5115 RAID controller with RAID 0, 1, 10, 5, and 50 support and 1 GB cache. Supports up to eight 1.8-inch SSDs with expansion kits. Optional flash backup for cache, RAID 6/60, and SSD performance enabler.

Network interfaces: x4x models: four 10 Gb Ethernet ports with two dual-port Embedded 10Gb Virtual Fabric Ethernet LAN-on-motherboard (LOM) controllers; Emulex BE3 based. Upgradeable to FCoE and iSCSI by using IBM Feature on Demand license activation. x2x models: none standard; optional 1 Gb or 10 Gb Ethernet adapters.

PCI Expansion slots: Four I/O connectors for adapters. PCI Express 3.0 x16 interface.

Ports: USB ports: one external; two internal for embedded hypervisor. Console breakout cable port that provides local KVM and serial ports (cable standard with chassis; additional cables are optional).

Systems management: UEFI, IBM Integrated Management Module 2 (IMM2) with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, and remote presence. Support for IBM Flex System Manager, IBM Systems Director, and IBM ServerGuide.

Security features: Power-on password, administrator's password, and Trusted Platform Module V1.2.

Video: Matrox G200eR2 video core with 16 MB video memory that is integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16M colors.

Limited warranty: Three-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.

Operating systems supported: Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, VMware ESX 4, and vSphere 5. For details, see 5.5.14, Operating systems support on page 297.

Service and support: Optional service upgrades are available through IBM ServicePac offerings: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.

Dimensions: Width 437 mm (17.2 in.), height 51 mm (2.0 in.), depth 493 mm (19.4 in.).

Weight: Maximum 12.25 kg (27 lb).


Figure 5-45 shows the components on the system board of the x440.

Figure 5-45 Layout of the IBM Flex System x440 Compute Node system board

5.5.2 Models
The current x440 models, with processor, memory, and other embedded options that are shipped as standard with each model type, are shown in Table 5-71.
Table 5-71 Standard models of the IBM Flex System x440 Compute Node, type 7917

All models include a SAS/SATA RAID controller and two 2.5-inch hot-swap drive bays (0 of 2 used), with no drives as standard. The 2.5-inch drive bays can be replaced and expanded with more internal bays to support up to eight 1.8-inch SSDs. For more information, see 5.5.7, Internal disk storage on page 284.

Model     Intel Xeon processor E5-4600 (4 maximum)a     Memory             Embedded 10GbE Virtual Fabric  I/O slots (used/max)
7917-A2x  Xeon E5-4603 4C 2.0 GHz 10 MB 1066 MHz 95W    1x 8 GB 1066 MHzb  No                             0 / 4
7917-A4x  Xeon E5-4603 4C 2.0 GHz 10 MB 1066 MHz 95W    1x 8 GB 1066 MHzb  Standard                       2 / 4c
7917-B2x  Xeon E5-4607 6C 2.2 GHz 12 MB 1066 MHz 95W    1x 8 GB 1066 MHzb  No                             0 / 4
7917-B4x  Xeon E5-4607 6C 2.2 GHz 12 MB 1066 MHz 95W    1x 8 GB 1066 MHzb  Standard                       2 / 4c
7917-C2x  Xeon E5-4610 6C 2.4 GHz 15 MB 1333 MHz 95W    1x 8 GB 1333 MHz   No                             0 / 4
7917-C4x  Xeon E5-4610 6C 2.4 GHz 15 MB 1333 MHz 95W    1x 8 GB 1333 MHz   Standard                       2 / 4c
7917-D2x  Xeon E5-4620 8C 2.2 GHz 16 MB 1333 MHz 95W    1x 8 GB 1333 MHz   No                             0 / 4
7917-D4x  Xeon E5-4620 8C 2.2 GHz 16 MB 1333 MHz 95W    1x 8 GB 1333 MHz   Standard                       2 / 4c
7917-F2x  Xeon E5-4650 8C 2.7 GHz 20 MB 1600 MHz 130W   1x 8 GB 1600 MHz   No                             0 / 4
7917-F4x  Xeon E5-4650 8C 2.7 GHz 20 MB 1600 MHz 130W   1x 8 GB 1600 MHz   Standard                       2 / 4c

a. Processor detail: processor quantity and model, cores, core speed, L3 cache, memory speed, and power consumption.
b. For models Axx and Bxx, the standard DIMM is rated at 1333 MHz, but operates at up to 1066 MHz to match the processor memory speed.
c. The x4x models include two Embedded 10Gb Virtual Fabric Ethernet controllers. Connections are routed by using a Fabric Connector. The Fabric Connectors preclude the use of an I/O adapter in I/O connectors 1 and 3, except the ServeRAID M5115 controller, which can be installed in slot 1.

5.5.3 Chassis support


The x440 type 7917 is supported in the IBM Flex System Enterprise Chassis, as listed in Table 5-72.
Table 5-72 x440 chassis support

Server          BladeCenter chassis (all)  IBM Flex System Enterprise Chassis
x440 type 7917  No                         Yes

Up to seven x440 Compute Nodes can be installed in the chassis in 10U of rack space. The actual number of x440 systems that can be powered on in a chassis depends on the following factors:
- The TDP power rating for the processors that are installed in the x440
- The number of power supplies installed in the chassis
- The capacity of the power supplies installed in the chassis (2100 W or 2500 W)
- The power redundancy policy used in the chassis (N+1 or N+N)

Table 4-11 on page 93 provides guidelines about the number of x440 systems that can be powered on in the IBM Flex System Enterprise Chassis, based on the type and number of power supplies installed.


5.5.4 System architecture


The IBM Flex System x440 Compute Node features the Intel Xeon E5-4600 series processors. The Xeon E5-4600 series processor has models with four, six, or eight cores per processor, with up to 16 threads per socket. The E5-4600 processors have the following features:
- Up to 20 MB of shared L3 cache
- Hyper-Threading
- Turbo Boost Technology 2.0
- Two QPI links that run at up to 8 GTps
- One integrated memory controller
- Four memory channels that support up to three DIMMs each

Figure 5-46 shows the system architecture of the x440 system.
Figure 5-46 System Architecture of the x440

The IBM Flex System x440 Compute Node has the following system architecture features as standard:
- Four 2011-pin type R (LGA-2011) processor sockets
- An Intel C600 PCIe Controller Hub (PCH)
- Four memory channels per socket
- Up to three DIMMs per memory channel
- A total of 48 DDR3 DIMM sockets
- Support for LRDIMMs and RDIMMs
- Two dual-port integrated 10Gb Virtual Fabric Ethernet controllers that are based on Emulex BE3, upgradeable to FCoE and iSCSI through IBM Features on Demand (FoD)
- One LSI 2004 SAS controller with integrated RAID 0 and 1 for the two internal drive bays
- Support for the ServeRAID M5115 controller for RAID 5 and other levels on up to eight 1.8-inch bays
- Integrated Management Module II (IMMv2) for systems management
- Four PCIe 3.0 x16 I/O adapter connectors
- Two internal and one external USB connectors

Important: A second processor must be installed to use I/O adapter slots 3 and 4 in the x440 compute node. This configuration is necessary because the PCIe lanes that are used to drive I/O slots 3 and 4 are routed to processors 2 and 4, as shown in Figure 5-46 on page 280.

5.5.5 Processor options


The x440 supports the processor options that are listed in Table 5-73. The server supports one, two, or four Intel Xeon E5-4600 processors. The table also shows which server models have each processor standard. If no corresponding model for a particular processor is listed, the processor is available only through the configure-to-order (CTO) process.

For a given processor model (for example, the Xeon E5-4603), there are two part numbers: the first is for the rear two processors (CPUs 1 and 2) and includes taller heat sinks; the second is for the front two processors (CPUs 3 and 4) and includes shorter heat sinks.
Table 5-73 Supported processors for the x440

Part number  Feature codea  Intel Xeon processor description           CPUs 1&2  CPUs 3&4  Models where used
90Y9060      A2C0 / A2C2    Xeon E5-4603 4C 2.0GHz 10MB 1066MHz 95W    Yes       No        A2x and A4x
88Y6263      A2C1           Xeon E5-4603 4C 2.0GHz 10MB 1066MHz 95W    No        Yes       -
90Y9062      A2C3 / A2C5    Xeon E5-4607 6C 2.2GHz 12MB 1066MHz 95W    Yes       No        B2x and B4x
69Y3100      A2C4           Xeon E5-4607 6C 2.2GHz 12MB 1066MHz 95W    No        Yes       -
90Y9064      A2C6 / A2C8    Xeon E5-4610 6C 2.4GHz 15MB 1333MHz 95W    Yes       No        C2x and C4x
69Y3106      A2C7           Xeon E5-4610 6C 2.4GHz 15MB 1333MHz 95W    No        Yes       -
90Y9066      A2C9 / A2CB    Xeon E5-4617 6C 2.9GHz 15MB 1600MHz 130W   Yes       No        -
90Y9049      A2CA           Xeon E5-4617 6C 2.9GHz 15MB 1600MHz 130W   No        Yes       -
90Y9070      A2CC / A2CH    Xeon E5-4620 8C 2.2GHz 16MB 1333MHz 95W    Yes       No        D2x and D4x
69Y3112      A2CG           Xeon E5-4620 8C 2.2GHz 16MB 1333MHz 95W    No        Yes       -
90Y9068      A2CF / A2CE    Xeon E5-4640 8C 2.4GHz 20MB 1600MHz 95W    Yes       No        -
90Y9055      A2CD           Xeon E5-4640 8C 2.4GHz 20MB 1600MHz 95W    No        Yes       -
90Y9072      A2CJ / A2CL    Xeon E5-4650 8C 2.7GHz 20MB 1600MHz 130W   Yes       No        F2x and F4x
69Y3118      A2CK           Xeon E5-4650 8C 2.7GHz 20MB 1600MHz 130W   No        Yes       -
90Y9186      A2QU / A2QW    Xeon E5-4650L 8C 2.6GHz 20MB 1600MHz 115W  Yes       No        -
90Y9185      A2QV           Xeon E5-4650L 8C 2.6GHz 20MB 1600MHz 115W  No        Yes       -


a. When two feature codes are specified, the first feature code is for CPU 1 and the second feature code is for CPU 2. When only one feature code is specified, this is the feature code that is used for CPU 3 and CPU 4.

5.5.6 Memory options


IBM DDR3 memory is compatibility tested and tuned for optimal performance and throughput. IBM memory specifications are integrated into the light path diagnostic procedures for immediate system performance feedback and optimum system uptime. From a service and support standpoint, IBM memory automatically assumes the IBM system warranty, and IBM provides service and support worldwide.

The x440 supports two types of low-profile DDR3 memory: RDIMMs and LRDIMMs. The server supports up to 12 DIMMs when one processor is installed, and up to 48 DIMMs when four processors are installed. Each processor has four memory channels, with three DIMMs per channel.

The following rules apply when you select the memory configuration:
- The x440 supports RDIMMs and LRDIMMs, but UDIMMs are not supported.
- Mixing of RDIMMs and LRDIMMs is not supported.
- Mixing 1.5 V and 1.35 V DIMMs in the same server is supported. In such a case, all DIMMs operate at 1.5 V.
- The maximum number of ranks that is supported per channel is eight. Load Reduced DIMMs are an exception: more than eight ranks are supported, because one quad-rank LRDIMM provides the same electrical load on the memory bus as a single-rank RDIMM.
- The maximum quantity of DIMMs that can be installed in the server depends on the number of processors. For more information, see the Maximum quantity row in Table 5-74.
- All DIMMs in all processor memory channels operate at the same speed, which is determined as the lowest value of:
  - The memory speed that is supported by the specific processor.
  - The lowest maximum operating speed for the selected memory configuration, which depends on the rated speed. For more information, see the Maximum operating speed section in Table 5-74.

Table 5-74 shows the maximum memory speeds that are achievable based on the installed DIMMs and the number of DIMMs per channel. The table also shows the maximum memory capacity at any speed that is supported by the DIMM and the maximum memory capacity at the rated DIMM speed. The Maximum operating speed rows indicate which combinations of DIMM voltage and number of DIMMs per channel still allow the DIMMs to operate at the rated speed.
Table 5-74 Maximum memory speeds

                           RDIMMs, single rank                RDIMMs, dual rank                  LRDIMMs, quad rank
Part numbers               49Y1406 (4 GB)  49Y1559 (4 GB)     49Y1407 (4 GB)    90Y3109 (4 GB)   49Y1567 (16 GB)
                                                              49Y1397 (8 GB)    00D4968 (16 GB)  90Y3105 (32 GB)
                                                              49Y1563 (16 GB)
Rated speed                1333 MHz        1600 MHz           1333 MHz          1600 MHz         1333 MHz
Rated voltage              1.35 V          1.5 V              1.35 V            1.5 V            1.35 V
Maximum quantitya          48              48                 48                48               48
Largest DIMM               4 GB            4 GB               16 GB             16 GB            32 GB
Max memory capacity        192 GB          192 GB             768 GB            768 GB           1.5 TB
Max memory at rated speed  192 GB          128 GB             512 GB            512 GB           1.0 TB

Maximum operating speed (MHz):
1 DIMM per channel         1333 MHz        1600 MHz           1333 MHz          1600 MHz         1333 MHz (1.5 V)
2 DIMMs per channel        1333 MHz        1600 MHz           1333 MHz          1600 MHz         1333 MHz (1.5 V)
3 DIMMs per channel        1066 MHz (1.5 V) 1066 MHz          1066 MHz (1.5 V)  1066 MHz         1066 MHz

a. The maximum quantity that is supported is shown for four processors installed. When two processors are installed, the maximum quantity that is supported is half of the quantity that is shown. When one processor is installed, the quantity is one quarter of that shown.
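The speed selection rules can be expressed as simple arithmetic. The following sketch is our own simplification of the Maximum operating speed rows in Table 5-74, not IBM configurator logic: three DIMMs per channel caps the channel at 1066 MHz, and otherwise the DIMMs run at the lower of their rated speed and the processor-supported memory speed.

```python
def effective_memory_speed(cpu_max_mhz: int, dimm_rated_mhz: int,
                           dimms_per_channel: int) -> int:
    """Simplified model of the Table 5-74 speed rules.

    All DIMMs run at the lowest of the processor-supported speed and
    the maximum operating speed for the population; 3 DIMMs per
    channel drops the channel to 1066 MHz for every DIMM type listed.
    """
    if not 1 <= dimms_per_channel <= 3:
        raise ValueError("the x440 supports 1-3 DIMMs per channel")
    population_cap = 1066 if dimms_per_channel == 3 else dimm_rated_mhz
    return min(cpu_max_mhz, population_cap)
```

For example, an E5-4650 (1600 MHz memory) with 1600 MHz RDIMMs at two DIMMs per channel runs at 1600 MHz, while an E5-4610 (1333 MHz memory) limits the same DIMMs to 1333 MHz.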

The x440 supports the following memory protection technologies:
- ECC
- Chipkill (for x4-based memory DIMMs; look for x4 in the DIMM description)
- Memory mirroring
- Memory rank sparing

If memory mirroring is used, DIMMs must be installed in pairs (minimum of one pair per processor). Both DIMMs in a pair must be identical in type and size.

If memory rank sparing is used, a minimum of one quad-rank DIMM or two single-rank or dual-rank DIMMs must be installed per populated channel. These DIMMs do not need to be identical. In rank sparing mode, one rank of a DIMM in each populated channel is reserved as spare memory. The size of a rank varies depending on the DIMMs that are installed.

Table 5-75 lists the memory options that are available for the x440 server. DIMMs can be installed one at a time, but for performance reasons, install them in sets of four (one for each of the memory channels). A total of 48 DIMMs is the maximum number supported.
Table 5-75 Memory options for the x440

Part number  Feature code  Description                                                               Models where used
Registered DIMM (RDIMM) modules
49Y1406      8941          4 GB (1x 4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM    -
49Y1407      8947          4 GB (1x 4 GB, 2Rx8, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM    -
49Y1559      A28Z          4 GB (1x 4 GB, 1Rx4, 1.5 V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM     -
90Y3109      A292          8 GB (1x 8 GB, 2Rx4, 1.5 V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM     F2x and F4x
49Y1397      8923          8 GB (1x 8 GB, 2Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM    All other models
49Y1563      A1QT          16 GB (1x 16 GB, 2Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM  -
00D4968      A2U5          16 GB (1x 16 GB, 2Rx4, 1.5 V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM   -
Load Reduced DIMM (LRDIMM) modules
49Y1567      A290          16 GB (1x 16 GB, 4Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM -
90Y3105      A291          32 GB (1x 32 GB, 4Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM -
5.5.7 Internal disk storage


The x440 server has two 2.5-inch hot-swap drive bays that are accessible from the front of the server, as shown in Figure 5-43 on page 275. These bays are connected to the integrated 4-port LSI SAS2004 6 Gbps SAS/SATA RAID-on-Chip (ROC) controller. The integrated LSI SAS2004 ROC includes the following features:
- Four-port controller with 6 Gbps throughput per port
- PCIe x4 Gen 2 host interface
- Two SAS ports that are routed internally to the two hot-swap drive bays
- Support for RAID levels 0, 1, 10, and 1E (the x440 implements only RAID 0 and 1 with the two internal drive bays)

The 2.5-inch drive bays support SAS or SATA HDDs or SATA SSDs. Table 5-76 lists the supported 2.5-inch drive options.
Table 5-76 2.5-inch internal disk options

Part number  Feature code  Description                                       Maximum supported
10K SAS hard disk drives
00AD075      A48S          IBM 1.2TB 10K 6Gbps SAS 2.5'' G2HS HDD            2
81Y9650      A282          IBM 900GB 10K 6Gbps SAS 2.5" SFF HS HDD           2
90Y8872      A2XD          IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD         2
90Y8877      A2XC          IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD         2
10K and 15K SAS self-encrypting drives (SEDs)a
90Y8944      A2ZK          IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS SED         2
00AD085      A48T          IBM 1.2TB 10K 6Gbps SAS 2.5'' G2HS SED            2
81Y9662      A3EG          IBM 900GB 10K 6Gbps SAS 2.5" SFF G2HS SED         2
90Y8908      A3EF          IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS SED         2
90Y8913      A2XF          IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS SED         2
44W2264      5413          IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS SED      2
15K SAS hard disk drives
81Y9670      A283          IBM 300GB 15K 6Gbps SAS 2.5" SFF HS HDD           2
90Y8926      A2XB          IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD         2
NL SATA drives
81Y9730      A1AV          IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD        2
81Y9726      A1NZ          IBM 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD      2
81Y9722      A1NX          IBM 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD      2
NL SAS drives
81Y9690      A1P3          IBM 1TB 7.2K 6Gbps NL SAS 2.5" SFF HS HDD         2
90Y8953      A2XE          IBM 500GB 7.2K 6Gbps NL SAS 2.5" SFF G2HS HDD     2
SAS-SSD Hybrid drive
00AD102      A4G7          IBM 600GB 10K 6Gbps SAS 2.5'' G2HS Hybrid         2
Enterprise SSDs
49Y6195      A4GH          IBM 1.6TB SAS 2.5" MLC HS Enterprise SSD          2
49Y6139      A3F0          IBM 800GB SAS 2.5" MLC HS Enterprise SSD          2
49Y6134      A3EY          IBM 400GB SAS 2.5" MLC HS Enterprise SSD          2
49Y6129      A3EW          IBM 200GB SAS 2.5" MLC HS Enterprise SSD          2
41Y8331      A4FL          S3700 200GB SATA 2.5" MLC HS Enterprise SSD       2
41Y8336      A4FN          S3700 400GB SATA 2.5" MLC HS Enterprise SSD       2
41Y8341      A4FQ          S3700 800GB SATA 2.5" MLC HS Enterprise SSD       2
43W7718      A2FN          IBM 200GB SATA 2.5" MLC HS SSD                    2
00W1125      A3HR          IBM 100GB SATA 2.5" MLC HS Enterprise SSD         2
Enterprise Value SSDs
90Y8643      A2U3          IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD   2
90Y8648      A2U4          IBM 128GB SATA 2.5" MLC HS Enterprise Value SSD   2

a. Supports self-encrypting drive (SED) technology. For more information, see Self-Encrypting Drives for IBM System x at this website: http://www.redbooks.ibm.com/abstracts/tips0761.html?Open

Chapter 5. Compute nodes

285

Support for 1.8-inch SSDs


In addition, the x440 supports up to eight 1.8-inch SSDs combined with a ServeRAID M5115 SAS/SATA controller (90Y4390). The M5115 attaches to the I/O adapter 1 connector and can be attached even if the Compute Node Fabric Connector is installed (which is used to route the Embedded 10Gb Virtual Fabric adapter to bays 1 and 2, as described in 5.5.9, I/O expansion options on page 291). The ServeRAID M5115 cannot be installed if an adapter is installed in I/O adapter slot 1.

The ServeRAID M5115 supports the following combinations of 2.5-inch drives and 1.8-inch SSDs:
- Up to two 2.5-inch drives only
- Up to four 1.8-inch SSDs only
- Up to two 2.5-inch drives, plus up to four 1.8-inch SSDs
- Up to eight 1.8-inch SSDs

The ServeRAID M5115 SAS/SATA Controller (90Y4390) is an advanced RAID controller that supports RAID 0, 1, 10, 5, 50, and, optionally, 6 and 60. It includes 1 GB of cache, which can be backed up to flash memory when attached to the supercapacitor that is included with the optional ServeRAID M5100 Series Enablement Kit (46C9030). At least one hardware kit is required with the ServeRAID M5115 controller. Three hardware kits are supported, each of which enables specific drive support, as listed in Table 5-77.
Table 5-77 ServeRAID M5115 and supported hardware kits for the x440

Part number  Feature code  Description                                                         Maximum supported
90Y4390      A2XW          ServeRAID M5115 SAS/SATA Controller                                 1
46C9030      A3DS          ServeRAID M5100 Series Enablement Kit for IBM Flex System x440      1
46C9031      A3DT          ServeRAID M5100 Series IBM Flex System Flash Kit for x440           1
46C9032      A3DU          ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x440   1

The following hardware kits are available:

- ServeRAID M5100 Series Enablement Kit for IBM Flex System x440 (46C9030) enables support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection. This enablement kit replaces the two standard 1-bay backplanes (which are attached through the system board to an onboard controller) with new 1-bay backplanes that attach through an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment point for the CacheVault unit.

  MegaRAID CacheVault flash cache protection uses NAND flash memory that is powered by a supercapacitor to protect data that is stored in the controller cache. This module eliminates the need for the lithium-ion battery that is commonly used to protect DRAM cache memory on PCI RAID controllers. To avoid the possibility of data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash memory by using power from the supercapacitor. After power is restored to the RAID controller, the saved data is transferred from the NAND flash memory back to the DRAM cache, which can then be flushed to disk.

  Tip: The Enablement Kit is required only if 2.5-inch drives are to be used. This kit is not required if you plan to install four or eight 1.8-inch SSDs only.

- ServeRAID M5100 Series IBM Flex System Flash Kit for x440 (46C9031) enables support for up to four 1.8-inch SSDs. This kit replaces the two standard 1-bay backplanes with two 2-bay backplanes that attach through an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, so this kit does not include a supercapacitor.

- ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x440 (46C9032) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles, each with two 1.8-inch SSD attachment locations, and flex cables for the attachment of up to four 1.8-inch SSDs.

Product-specific kits: These kits are specific to the x440 and cannot be used with the x240 or x220.

Table 5-78 shows the kits that are required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the Flash Kit, and the SSD Expansion Kit.

Table 5-78 ServeRAID M5115 hardware kits

Wanted drive support                           Components required
2.5-inch drives   1.8-inch SSDs
2                 0              =>  ServeRAID M5115 (90Y4390) + Enablement Kit (46C9030)
0                 4 (front)      =>  ServeRAID M5115 (90Y4390) + Flash Kit (46C9031)
2                 4 (internal)   =>  ServeRAID M5115 (90Y4390) + Enablement Kit (46C9030) + SSD Expansion Kit (46C9032)
0                 8 (both)       =>  ServeRAID M5115 (90Y4390) + Flash Kit (46C9031) + SSD Expansion Kit (46C9032)
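As a quick way to read Table 5-78, the kit selection can be expressed as a small lookup. The following sketch is illustrative only: the part numbers come from Table 5-77, but the function `required_kits` and its selection logic are an assumption distilled from the table, not IBM configurator software.

```python
# Illustrative sketch (not IBM software): map a wanted drive mix to the
# ServeRAID M5115 option kits required, following Table 5-78.

KITS = {
    "controller":    "90Y4390",  # ServeRAID M5115 SAS/SATA Controller
    "enablement":    "46C9030",  # needed whenever 2.5-inch drives are used
    "flash":         "46C9031",  # front 1.8-inch SSD backplanes
    "ssd_expansion": "46C9032",  # internal 1.8-inch SSD trays
}

def required_kits(drives_25: int, ssds_18: int) -> list:
    """Return the part numbers needed for a given drive combination."""
    parts = [KITS["controller"]]          # the M5115 is always required
    if drives_25:
        parts.append(KITS["enablement"])  # 2.5-inch bays need the Enablement Kit
        if ssds_18:                       # plus internal SSD trays for 1.8-inch SSDs
            parts.append(KITS["ssd_expansion"])
    elif ssds_18:
        parts.append(KITS["flash"])       # front SSD bays replace the 2.5-inch bays
        if ssds_18 > 4:                   # more than four SSDs need the internal trays
            parts.append(KITS["ssd_expansion"])
    return parts
```

For example, `required_kits(0, 8)` returns the controller, the Flash Kit, and the SSD Expansion Kit, matching the eight-SSD row of the table.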

Figure 5-47 shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (as shown in row 1 of Table 5-78).
Figure 5-47 The ServeRAID M5115 and the Enablement Kit installed (shown: the M5115 controller with the 46C9030 kit, the MegaRAID CacheVault flash cache protection module, and the replacement 1-drive backplanes)


Figure 5-48 shows how the ServeRAID M5115 and Flash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (as shown in row 4 in Table 5-78 on page 287).
Figure 5-48 ServeRAID M5115 with Flash and SSD Expansion Kits installed (shown: the M5115 controller with the 46C9031 and 46C9032 kits, the Flash Kit replacement SSD backplanes with four front-accessible SSDs, and the SSD Expansion Kit with four internal SSD connectors on special air baffles above the DIMMs; no CacheVault flash protection)

The eight SSDs are installed in the following locations:
- Four in the front of the system in place of the two 2.5-inch drive bays
- Four on trays above the memory banks

The ServeRAID M5115 controller, 90Y4390, includes the following specifications:
- Eight internal 6 Gbps SAS/SATA ports
- PCI Express 3.0 x8 host interface
- 6 Gbps throughput per port
- 800 MHz dual-core IBM PowerPC processor with LSI SAS2208 6 Gbps RAID on Chip (ROC) controller
- Support for RAID levels 0, 1, 10, 5, and 50 standard; support for RAID 6 and 60 with the optional upgrade 90Y4410
- Onboard 1 GB data cache (DDR3 running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit 46C9030
- Support for SAS and SATA HDDs and SSDs
- Support for intermixing SAS and SATA HDDs and SSDs; mixing different types of drives in the same array (drive group) is not recommended
- Support for self-encrypting drives (SEDs) with MegaRAID SafeStore
- Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447)
- Support for up to 64 virtual drives, up to 128 drive groups, up to 16 virtual drives per drive group, and up to 32 physical drives per drive group
- Support for logical unit number (LUN) sizes up to 64 TB
- Configurable stripe size up to 1 MB

- Compliance with Disk Data Format (DDF) configuration on disk (COD)
- S.M.A.R.T. support
- MegaRAID Storage Manager management software

Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, an SSD performance upgrade, and an SSD caching enabler. The feature upgrades are listed in Table 5-79. These upgrades are all Feature on Demand (FoD) license upgrades.

Table 5-79 Supported ServeRAID M5115 upgrade features

Part number  Feature code  Description                                                                                    Maximum supported
90Y4410      A2Y1          ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System                                      1
90Y4412      A2Y2          ServeRAID M5100 Series Performance Upgrade for IBM Flex System (MegaRAID FastPath)             1
90Y4447      A36G          ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System (MegaRAID CacheCade Pro 2.0)    1

The following features are available:

- RAID 6 Upgrade (90Y4410): Adds support for RAID 6 and RAID 60. This is a Feature on Demand license.

- Performance Upgrade (90Y4412): The Performance Upgrade for IBM Flex System (which is implemented by using the LSI MegaRAID FastPath software) provides high-performance I/O acceleration for SSD-based virtual drives by using a low-latency I/O path to increase the maximum I/O per second (IOPS) capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is a Feature on Demand license.

- SSD Caching Enabler for traditional hard disk drives (90Y4447): The SSD Caching Enabler for IBM Flex System (which is implemented by using LSI MegaRAID CacheCade Pro 2.0) is designed to accelerate the performance of HDD arrays with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache to help maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is a Feature on Demand license. This feature requires at least one SSD to be installed.
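The virtual-drive and drive-group limits in the M5115 specification list lend themselves to a quick sanity check before you lay out arrays. The following sketch is an illustrative assumption (the function `within_limits` and its input shape are not IBM software); the numeric limits are taken from the specification above.

```python
# Illustrative sketch (not IBM software): sanity-check a proposed ServeRAID
# M5115 array layout against the controller limits listed above.

M5115_LIMITS = {
    "virtual_drives": 64,            # total virtual drives per controller
    "drive_groups": 128,             # total drive groups per controller
    "virtual_drives_per_group": 16,  # virtual drives per drive group
    "drives_per_group": 32,          # physical drives per drive group
}

def within_limits(groups) -> bool:
    """groups: list of (physical_drives, virtual_drives) tuples, one per drive group."""
    if len(groups) > M5115_LIMITS["drive_groups"]:
        return False
    if sum(v for _, v in groups) > M5115_LIMITS["virtual_drives"]:
        return False
    return all(
        p <= M5115_LIMITS["drives_per_group"]
        and v <= M5115_LIMITS["virtual_drives_per_group"]
        for p, v in groups
    )
```

For example, two drive groups of eight drives each, carved into four and two virtual drives, pass the check; a single group of 33 drives does not.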


The 1.8-inch SSDs that are supported with the ServeRAID M5115 controller are listed in Table 5-80.

Table 5-80 Supported 1.8-inch SSDs

Part number  Feature code  Description                    Maximum supported
43W7746      5420          IBM 200GB SATA 1.8" MLC SSD    8
43W7726      5428          IBM 50GB SATA 1.8" MLC SSD     8

5.5.8 Embedded 10Gb Virtual Fabric


Some models of the x440 include two Embedded 10Gb Virtual Fabric controllers (also known as Virtual Fabric Adapters, or VFAs, and as LAN on Motherboard, or LOM) built into the system board. Table 5-71 on page 278 lists which models of the x440 include the Embedded 10Gb Virtual Fabric controllers. Each x440 model that includes the embedded 10 Gb controllers also has a Compute Node Fabric Connector installed in each of I/O connectors 1 and 3 (physically screwed onto the system board) to provide connectivity to the Enterprise Chassis midplane. Figure 5-49 shows the Compute Node Fabric Connector.

Figure 5-49 The Compute Node Fabric Connector

The Fabric Connector enables port 1 of each embedded 10 Gb controller to be routed to I/O module bay 1 and port 2 of each controller to be routed to I/O module bay 2. The Fabric Connectors can be unscrewed and removed, if required, to allow the installation of an I/O adapter on I/O connectors 1 and 3.

The Embedded 10Gb controllers are based on the Emulex BladeEngine 3 (BE3), which is a single-chip, dual-port 10 Gigabit Ethernet (10GbE) controller. The Embedded 10Gb controller includes the following features:
- PCI Express Gen2 x8 host bus interface
- Support for multiple virtual NIC (vNIC) functions
- TCP/IP Offload Engine (TOE enabled)
- SR-IOV capable
- RDMA over TCP/IP capable
- iSCSI and FCoE upgrade offering through FoD

Table 5-81 on page 291 lists the ordering information for the IBM Flex System Embedded 10Gb Virtual Fabric Upgrade, which enables the iSCSI and FCoE support on the Embedded 10Gb Virtual Fabric controller. To upgrade both controllers, you need two FoD licenses.


Table 5-81 Feature on Demand upgrade for FCoE and iSCSI support

Part number  Feature code  Description                                          Maximum supported
90Y9310      A2TD          IBM Virtual Fabric Advanced Software Upgrade (LOM)   2

5.5.9 I/O expansion options


The x440 has four I/O expansion connectors for attaching I/O adapters, as shown in Figure 5-50. There is a fifth expansion connector (under I/O adapter 4) that is designed for future expansion options.

Expansion Nodes: The x440 does not support the PCIe Expansion Node or the Storage Expansion Node.

The I/O expansion connector is a high-density 216-pin PCIe connector. Installing I/O adapters allows the server to connect to switch modules in the IBM Flex System Enterprise Chassis. Each slot has a PCI Express 3.0 x16 host interface, and all slots support the same form-factor adapters. The four adapters provide substantial I/O capability for this server.

Figure 5-50 Location of the I/O adapters (1 through 4) in the IBM Flex System x440 Compute Node

Important: A second processor must be installed before I/O adapter slots 3 and 4 in the x440 compute node can be used, because the PCIe lanes that drive I/O slots 3 and 4 are connected to processors 2 and 4, as shown in Figure 5-46 on page 280.


All I/O adapters are the same shape and can be used in any available slot. A compatible switch or pass-through module must be installed in the corresponding I/O bays in the chassis, as indicated in Table 5-82. Installing two switches means that all ports of the adapter are enabled, which improves performance and network availability.

Table 5-82 Adapter to I/O bay correspondence

I/O adapter slot in the x440   Port on the adapter          Corresponding I/O module bay in the chassis
Slot 1                         Port 1                       Module bay 1
                               Port 2                       Module bay 2
                               Port 3 (for 4-port cards)    Module bay 1
                               Port 4 (for 4-port cards)    Module bay 2
Slot 2                         Port 1                       Module bay 3
                               Port 2                       Module bay 4
                               Port 3 (for 4-port cards)    Module bay 3
                               Port 4 (for 4-port cards)    Module bay 4
Slot 3                         Port 1                       Module bay 1
                               Port 2                       Module bay 2
                               Port 3 (for 4-port cards)    Module bay 1
                               Port 4 (for 4-port cards)    Module bay 2
Slot 4                         Port 1                       Module bay 3
                               Port 2                       Module bay 4
                               Port 3 (for 4-port cards)    Module bay 3
                               Port 4 (for 4-port cards)    Module bay 4

Figure 5-51 shows the location of the switch bays in the rear of the Enterprise Chassis.
Figure 5-51 Locations of the I/O modules (I/O module bays 1 through 4)

Figure 5-52 shows how two-port adapters are connected to switches that are installed in the I/O module bays in an Enterprise Chassis.

Figure 5-52 Logical layout of the interconnects between I/O adapters and I/O modules (adapters A1 through A4 in two x440 nodes connecting to switch bays 1 through 4)


5.5.10 Network adapters


As described in 5.5.8, Embedded 10Gb Virtual Fabric on page 290, certain models (those with a model number of the form x4x) have two 10 Gb Ethernet controllers on the system board. Their ports are routed to the midplane and to switches that are installed in the chassis through two Compute Node Fabric Connectors that take the place of adapters in I/O slots 1 and 3.

Models without the Embedded 10Gb Virtual Fabric controller (those with a model number of the form x2x) do not include any other Ethernet connections to the Enterprise Chassis midplane as standard. Therefore, for those models, an I/O adapter must be installed to provide network connectivity between the server and the chassis midplane, and ultimately to the network switches.

Table 5-83 lists the supported network adapters and upgrades. Adapters can be installed in any slot. However, compatible switches must be installed in the corresponding bays of the chassis.
Table 5-83 Network adapters

Part number  Feature code  Description                                                        Number of ports   Maximum supported (a)

40Gb Ethernet
90Y3482      A3HK          IBM Flex System EN6132 2-port 40Gb Ethernet adapter                2                 4

10Gb Ethernet
90Y3554      A1R1          IBM Flex System CN4054 10Gb Virtual Fabric adapter                 4                 4
90Y3558      A1R0          IBM Flex System CN4054 Virtual Fabric adapter (Software Upgrade)   License           4
                           (Feature on Demand to provide FCoE and iSCSI support)
90Y3466      A1QY          IBM Flex System EN4132 2-port 10Gb Ethernet adapter                2                 4

1Gb Ethernet
49Y7900      A10Y          IBM Flex System EN2024 4-port 1Gb Ethernet adapter                 4                 4

InfiniBand
90Y3454      A1QZ          IBM Flex System IB6132 2-port FDR InfiniBand adapter               2                 4

a. For x4x models with two Embedded 10Gb Virtual Fabric controllers standard, the Compute Node Fabric Connectors occupy the same space as the I/O adapters in I/O slots 1 and 3, so you must remove the Fabric Connectors if you plan to install adapters in those I/O slots.


5.5.11 Storage host bus adapters


Table 5-84 lists storage host bus adapters (HBAs) that are supported by the x440 server.
Table 5-84 Storage adapters

Part number  Feature code  Description                                     Number of ports   Maximum supported (a)
69Y1942      A1BQ          IBM Flex System FC5172 2-port 16Gb FC adapter   2                 2
95Y2391      A45S          IBM Flex System FC5054 4-port 16Gb FC adapter   4                 2
95Y2386      A45R          IBM Flex System FC5052 2-port 16Gb FC adapter   2                 2
88Y6370      A1BP          IBM Flex System FC5022 2-port 16Gb FC adapter   2                 2
69Y1938      A1BM          IBM Flex System FC3172 2-port 8Gb FC adapter    2                 2
95Y2375      A2N5          IBM Flex System FC3052 2-port 8Gb FC adapter    2                 2

a. For x4x models with two Embedded 10Gb Virtual Fabric controllers standard, the Compute Node Fabric Connectors occupy the same space as the I/O adapters in I/O slots 1 and 3, so you must remove the Fabric Connectors if you plan to install adapters in those I/O slots.

5.5.12 Integrated virtualization


The x440 offers USB flash drive options that are preinstalled with versions of VMware ESXi. This software is an embedded version of VMware ESXi and is contained on the flash drive, without requiring any disk space. The USB memory key plugs into one of the two internal USB ports on the system board (see Figure 5-45 on page 278). The supported USB memory keys are listed in Table 5-85.

Table 5-85 Virtualization options

Part number  Feature code  Description                                             Maximum supported
41Y8300      A2VC          IBM USB Memory Key for VMware ESXi 5.0                  1
41Y8307      A383          IBM USB Memory Key for VMware ESXi 5.0 Update 1         1
41Y8311      A2R3          IBM USB Memory Key for VMware ESXi 5.1                  1
41Y8298      A2G0          IBM Blank USB Memory Key for VMware ESXi Downloads (a)  2

a. The Blank USB Memory Key requires the download of the VMware vSphere (ESXi) Hypervisor with IBM Customization image, which is available at this website: http://ibm.com/systems/x/os/vmware/

There are two types of USB keys: preload keys and blank keys. Blank keys allow you to download an IBM customized version of ESXi and load it onto the key. Each server supports one or two keys, but only in the following combinations:
- One preload key (a key that is preloaded at the factory)
- One blank key (a key to which you download the customized image)
- One preload key and one blank key
- Two blank keys


Two preload keys is an unsupported combination. Installing two preload keys prevents ESXi from booting; the failure is similar to the error that is described at this website:

http://kb.vmware.com/kb/1035107

Having two keys installed provides a backup boot device. Both devices are listed in the boot menu, which allows you to boot from either device or to set one as a backup in case the first one becomes corrupted.
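The supported key combinations reduce to a simple rule: one or two keys per server, and at most one of them a factory preload. The following sketch is an illustrative assumption only (the function `keys_supported` is not IBM or VMware software); it merely restates the combinations listed above:

```python
# Illustrative sketch (not IBM or VMware software): check a set of ESXi USB
# keys against the supported combinations described above.

def keys_supported(keys) -> bool:
    """keys: iterable of "preload" and/or "blank" entries for one server."""
    keys = list(keys)
    if not 1 <= len(keys) <= 2:
        return False                      # each server takes one or two keys
    # two factory-preloaded keys is the one unsupported pairing
    return keys.count("preload") < 2
```

For example, a preload key paired with a blank key passes, while two preload keys do not.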

5.5.13 Light path diagnostics panel


For quick problem determination when you are physically at the server, the x440 offers the following three-step guided path:
1. The Fault LED on the front panel
2. The light path diagnostics panel
3. LEDs next to key components on the system board

The x440 light path diagnostics panel is visible when you remove the server from the chassis. The panel is at the upper right side of the compute node, as shown in Figure 5-53.

Figure 5-53 Location of x440 light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis. The meanings of the LEDs in the light path diagnostics panel are listed in Table 5-86.
Table 5-86 Light path diagnostic panel LEDs LED LP S BRD Meaning The light path diagnostics panel is operational. A system board error is detected.

296

IBM PureFlex System and IBM Flex System Products and Technology

LED MIS NMI TEMP MEM ADJ

Meaning A mismatch occurred between the processors, DIMMs, or HDDs within the configuration reported by POST. A non-maskable interrupt (NMI) occurred. An over-temperature condition occurred that was critical enough to shut down the server. A memory fault occurred. The corresponding DIMM error LEDs on the system board are also lit. A fault is detected in the adjacent expansion unit (if installed).

Remote management
The server contains an IBM Integrated Management Module II (IMM2), which interfaces with the Chassis Management Module. The combination of these two components provides advanced service-processor control, monitoring, and alerting functions. If an environmental condition exceeds a threshold or if a system component fails, LEDs on the system board are lit to help you diagnose the problem, the error is recorded in the event log, and you are alerted to the problem. A virtual presence capability comes standard for remote server management.

Remote server management is provided through the following industry-standard interfaces:
- Intelligent Platform Management Interface (IPMI) Version 2.0
- Simple Network Management Protocol (SNMP) Version 3
- Common Information Model (CIM)
- Web browser

The server also supports virtual media and remote control features, which provide the following functions:
- Remotely viewing video with graphics resolutions up to 1600 x 1200 at 75 Hz with up to 23 bits per pixel, regardless of the system state
- Remotely accessing the server by using the keyboard and mouse from a remote client
- Mapping the CD or DVD drive, diskette drive, and USB flash drive on a remote client, and mapping ISO and diskette image files as virtual drives that are available for use by the server
- Uploading a diskette image to the IMM2 memory and mapping it to the server as a virtual drive
- Capturing blue-screen errors

5.5.14 Operating systems support


The x440 supports the following operating systems:
- Microsoft Windows Server 2008 R2
- Microsoft Windows Server 2008, Datacenter x64 Edition
- Microsoft Windows Server 2008, Enterprise x64 Edition
- Microsoft Windows Server 2008, Standard x64 Edition
- Microsoft Windows Server 2008, Web x64 Edition
- Microsoft Windows Server 2012
- Red Hat Enterprise Linux 5 Server x64 Edition
- Red Hat Enterprise Linux 5 Server with Xen x64 Edition


- Red Hat Enterprise Linux 6 Server x64 Edition
- SUSE Linux Enterprise Server 10 for AMD64/EM64T
- SUSE Linux Enterprise Server 11 for AMD64/EM64T
- SUSE Linux Enterprise Server 11 with Xen for AMD64/EM64T
- VMware ESX 4.1
- VMware ESXi 4.1
- VMware vSphere 5
- VMware vSphere 5.1

Support for some of these operating system versions begins after the date of initial availability. For the latest information about the specific versions and service levels that are supported, and any other prerequisites, check the IBM ServerProven website:

http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/matrix.shtml

5.6 IBM Flex System p260 and p24L Compute Nodes


The IBM Flex System p260 Compute Node and IBM Flex System p24L Compute Node are based on IBM POWER architecture technologies. These compute nodes run in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment by using advanced processing technology. This section describes the server offerings and the technology that is used in their implementation.

This section includes the following topics:
- 5.6.1, Specifications on page 298
- 5.6.2, System board layout on page 301
- 5.6.3, IBM Flex System p24L Compute Node on page 301
- 5.6.4, Front panel on page 302
- 5.6.5, Chassis support on page 304
- 5.6.6, System architecture on page 304
- 5.6.7, Processor on page 305
- 5.6.8, Memory on page 308
- 5.6.9, Active Memory Expansion on page 310
- 5.6.10, Storage on page 313
- 5.6.11, I/O expansion on page 315
- 5.6.12, System management on page 316
- 5.6.13, Operating system support on page 317

5.6.1 Specifications
The IBM Flex System p260 Compute Node is a half-wide, Power Systems compute node with the following characteristics:
- Two POWER7 or POWER7+ processor sockets
- Sixteen memory slots
- Two I/O adapter slots
- An option for up to two internal drives for local storage


The IBM Flex System p260 Compute Node includes the specifications that are shown in Table 5-87.
Table 5-87 IBM Flex System p260 Compute Node specifications

Model numbers: IBM Flex System p24L Compute Node: 1457-7FL. IBM Flex System p260 Compute Node: 7895-22X, 7895-23X, and 7895-23A.

Form factor: Half-wide compute node.

Chassis support: IBM Flex System Enterprise Chassis.

Processor: p24L: Two IBM POWER7 processors. p260: Two IBM POWER7 (model 22X) or POWER7+ (models 23A and 23X) processors.
- POWER7 processors: Each processor is a single-chip module (SCM) that contains either eight cores (up to 3.55 GHz and 32 MB L3 cache) or four cores (3.3 GHz and 16 MB L3 cache). Each processor has 4 MB of L3 cache per core. There is an integrated memory controller in each processor, each with four memory channels; each memory channel operates at 6.4 Gbps. There is one GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core. Uses 45 nm fabrication technology.
- POWER7+ processors: Each processor is a single-chip module (SCM) that contains either eight cores (up to 4.1 GHz or 3.6 GHz and 80 MB L3 cache), four cores (4.0 GHz and 40 MB L3 cache), or two cores (4.0 GHz and 20 MB L3 cache). Each processor has 10 MB of L3 cache per core. There is an integrated memory controller in each processor, each with four memory channels; each memory channel operates at 6.4 Gbps. There is one GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core. Uses 32 nm fabrication technology.

Chipset: IBM P7IOC I/O hub.

Memory: 16 DIMM sockets. RDIMM DDR3 memory supported. Integrated memory controller in each processor, each with four memory channels. Supports IBM Active Memory Expansion with AIX 6.1 or later. All DIMMs operate at 1066 MHz. Both LP (low profile) and VLP (very low profile) DIMMs are supported, although only VLP DIMMs are supported if internal HDDs are configured. The use of 1.8-inch solid-state drives allows the use of LP and VLP DIMMs.

Memory maximums: 512 GB using 16x 32 GB DIMMs.

Memory protection: ECC, Chipkill.

Disk drive bays: Two 2.5-inch non-hot-swap drive bays that support 2.5-inch SAS HDDs or 1.8-inch SATA SSDs. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.

Maximum internal storage: 1.8 TB using two 900 GB SAS HDDs, or 354 GB using two 177 GB SSDs.

RAID support: RAID support by using the operating system.

Network interfaces: None standard. Optional 1 Gb or 10 Gb Ethernet adapters.


PCI expansion slots: Two I/O connectors for adapters. PCI Express 2.0 x16 interface.

Ports: One external USB port.

Systems management: FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager and IBM Systems Director.

Security features: Power-on password, selectable boot sequence.

Video: None. Remote management by using Serial over LAN and IBM Flex System Manager.

Limited warranty: 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.

Operating systems supported: IBM AIX, IBM i, and Linux.

Service and support: Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and selected IBM and OEM software.

Dimensions: Width: 215 mm (8.5 in.), height: 51 mm (2.0 in.), depth: 493 mm (19.4 in.).

Weight: Maximum configuration: 7.0 kg (15.4 lb).


5.6.2 System board layout


Figure 5-54 shows the system board layout of the IBM Flex System p260 Compute Node.

Figure 5-54 Layout of the IBM Flex System p260 Compute Node (shown: two POWER7 processors, 16 DIMM slots, two I/O adapter connectors, two I/O hubs, and a connector for future expansion; HDDs are mounted on the cover, located over the memory DIMMs)

5.6.3 IBM Flex System p24L Compute Node


The IBM Flex System p24L Compute Node shares several similarities with the IBM Flex System p260 Compute Node. It is a half-wide, Power Systems compute node with two POWER7 processor sockets, 16 memory slots, and two I/O adapter slots. This compute node has an option for up to two internal drives for local storage. The IBM Flex System p24L Compute Node is optimized for lower-cost Linux installations.

The IBM Flex System p24L Compute Node includes the following features:
- Up to 16 POWER7 processing cores, with up to eight per processor
- Sixteen DDR3 memory DIMM slots that support Active Memory Expansion
- Support for VLP and LP DIMMs
- Two P7IOC I/O hubs
- RAID-compatible SAS controller that supports up to two SSDs or HDDs
- Two I/O adapter slots
- Flexible service processor (FSP)
- System management alerts
- IBM Light Path Diagnostics
- USB 2.0 port
- IBM EnergyScale technology

The system board layout for the IBM Flex System p24L Compute Node is identical to that of the IBM Flex System p260 Compute Node, as shown in Figure 5-54.


5.6.4 Front panel


The front panel of Power Systems compute nodes includes the following common elements, as shown in Figure 5-55:
- USB 2.0 port
- Power control button and light path LED (green)
- Location LED (blue)
- Information LED (amber)
- Fault LED (amber)

Figure 5-55 Front panel of the IBM Flex System p260 Compute Node (shown: the USB 2.0 port, the power button, and the LEDs, left to right: location, information, and fault)

The USB port on the front of the Power Systems compute nodes is useful for various tasks. These tasks include out-of-band diagnostic procedures, hardware RAID setup, operating system access to data on removable media, and local OS installation. It might be helpful to obtain a USB optical (CD or DVD) drive for these purposes, in case the need arises. Tip: There is no optical drive in the IBM Flex System Enterprise Chassis.

302

IBM PureFlex System and IBM Flex System Products and Technology

The power-control button on the front of the server (see Figure 5-55 on page 302) has the following functions: When the system is fully installed in the chassis: Use this button to power the system on and off. When the system is removed from the chassis: Use this button to illuminate the light path diagnostic panel on the top of the front bezel, as shown in Figure 5-56.

Figure 5-56 Light path diagnostic panel

The LEDs on the light path panel indicate the status of the following devices:
- LP: Light path panel power indicator
- S BRD: System board LED (might also indicate a processor or memory problem)
- MGMT: Flexible Support Processor (management card) LED
- D BRD: Drive or direct access storage device (DASD) board LED
- DRV 1: Drive 1 LED (SSD 1 or HDD 1)
- DRV 2: Drive 2 LED (SSD 2 or HDD 2)

If problems occur, the light path diagnostics LEDs help identify the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. Pressing this button temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts. Typically, you can obtain this information from the IBM Flex System Manager or Chassis Management Module before you remove the node. However, having the LEDs helps with repairs and troubleshooting if onsite assistance is needed.

For more information about the front panel and LEDs, see IBM Flex System p260 and p460 Compute Node Installation and Service Guide, which is available at this website:
http://www.ibm.com/support


5.6.5 Chassis support


The Power Systems compute nodes can be used only in the IBM Flex System Enterprise Chassis. They do not fit in the previous IBM modular systems, such as IBM iDataPlex or IBM BladeCenter. There is no onboard video capability in the Power Systems compute nodes. The systems are accessed by using Serial over LAN (SOL) or the IBM Flex System Manager.

5.6.6 System architecture


This section describes the system architecture and layout of the p260 and p24L Power Systems compute nodes. The overall system architecture for the p260 and p24L is shown in Figure 5-57.

Figure 5-57 IBM Flex System p260 Compute Node and IBM Flex System p24L Compute Node block diagram (showing the two POWER7 processors with their SMI-attached DIMMs, the two P7IOC I/O hubs, the SAS and USB controllers, the two I/O connectors, the ETE connector, and the FSP subsystem with its Ethernet connectivity to the systems management connector)

This diagram shows the two CPU slots, with eight memory slots for each processor. Each processor is connected to a P7IOC I/O hub, which connects to the I/O subsystem (I/O adapters, local storage). At the bottom, you can see a representation of the service processor (FSP) architecture.


5.6.7 Processor
The IBM POWER7 processor represents a leap forward in technology and associated computing capability. The multi-core architecture of the POWER7 processor is matched with a wide range of related technologies to deliver leading throughput, efficiency, scalability, and reliability, availability, and serviceability (RAS). Although the processor is an important component in servers, many elements and facilities must be balanced across a server to deliver maximum throughput. As with previous generations, the design philosophy for POWER7 processor-based systems is system-wide balance. The POWER7 processor plays an important role in this balancing.

Processor options for the p260 and p24L


Table 5-88 defines the processor options for the p260 and p24L compute nodes.
Table 5-88 p260 and p24L processor options

IBM Flex System p260 Compute Node - 7895-23X
  Feature  Cores per         Number of   Total  Core       L3 cache per
  code     POWER7 processor  processors  cores  frequency  POWER7 processor
  EPRD     4                 2           8      4.0 GHz    40 MB (10 MB per core)
  EPRB     8                 2           16     3.6 GHz    80 MB (10 MB per core)
  EPRA     8                 2           16     4.1 GHz    80 MB (10 MB per core)

IBM Flex System p260 Compute Node - 7895-22X
  EPR1     4                 2           8      3.3 GHz    16 MB (4 MB per core)
  EPR3     8                 2           16     3.2 GHz    32 MB (4 MB per core)
  EPR5     8                 2           16     3.55 GHz   32 MB (4 MB per core)

IBM Flex System p260 Compute Node - 7895-23A
  EPRC     2                 2           4      4.0 GHz    20 MB (10 MB per core)

IBM Flex System p24L Compute Node
  EPR8     8                 2           16     3.2 GHz    32 MB (4 MB per core)
  EPR9     8                 2           16     3.55 GHz   32 MB (4 MB per core)
  EPR7     6                 2           12     3.7 GHz    24 MB (4 MB per core)
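The totals in Table 5-88 follow directly from the per-processor figures: total cores are the cores per processor times the number of processors, and the per-processor L3 cache is the per-core L3 times the core count. A minimal, purely illustrative sketch of that arithmetic (the feature codes below are a subset of the table):

```python
# Illustrative check of the Table 5-88 arithmetic:
# total cores = cores per processor x number of processors,
# L3 per processor = per-core L3 x cores per processor.
options = {
    # feature code: (cores_per_processor, num_processors, l3_mb_per_core)
    "EPRD": (4, 2, 10),
    "EPRA": (8, 2, 10),
    "EPR1": (4, 2, 4),
    "EPR7": (6, 2, 4),
}

def totals(cores_per_proc, num_procs, l3_per_core):
    return cores_per_proc * num_procs, cores_per_proc * l3_per_core

for feat, (c, n, l3) in options.items():
    total_cores, l3_per_chip = totals(c, n, l3)
    print(feat, total_cores, "cores,", l3_per_chip, "MB L3 per processor")
```

For example, feature EPR7 gives 6 x 2 = 12 cores and 6 x 4 MB = 24 MB of L3 per processor, matching the table.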

To optimize software licensing, you can deconfigure or disable one or more cores. The feature is listed in Table 5-89.
Table 5-89 Unconfiguration of cores for p260 and p24L
  Feature code: 2319
  Description:  Factory deconfiguration of one core
  Minimum:      0
  Maximum:      One less than the total number of cores (for EPR5, the maximum is 7)


POWER7 architecture
IBM uses innovative methods to achieve the required levels of throughput and bandwidth. Areas of innovation for the POWER7 processor and POWER7 processor-based systems include (but are not limited to) the following elements:
- On-chip L3 cache that is implemented in embedded dynamic random-access memory (eDRAM)
- Cache hierarchy and component innovation
- Advances in the memory subsystem
- Advances in off-chip signaling

The superscalar POWER7 processor design also provides the following capabilities:
- Binary compatibility with the prior generation of POWER processors
- Support for PowerVM virtualization capabilities, including PowerVM Live Partition Mobility to and from IBM POWER6 and IBM POWER6+ processor-based systems

Figure 5-58 shows the POWER7 processor die layout with major areas identified: Eight POWER7 processor cores, L2 cache, L3 cache and chip power bus interconnect, SMP links, GX++ interface, and integrated memory controller.

Figure 5-58 POWER7 processor architecture (die layout showing the eight cores with their L2 caches, the 4 MB L3 cache regions, the memory controller and memory buffers, the GX++ bridge, and the SMP links)

POWER7+ architecture
The POWER7+ architecture builds on the POWER7 architecture. IBM uses innovative methods to achieve the required levels of throughput and bandwidth. Areas of innovation for the POWER7+ processor and POWER7+ processor-based systems include (but are not limited to) the following elements: On-chip L3 cache implemented in embedded dynamic random access memory (eDRAM) Cache hierarchy and component innovation Advances in memory subsystem Advances in off-chip signaling Advances in RAS features such as power-on reset and L3 cache dynamic column repair



The superscalar POWER7+ processor design also provides the following capabilities:
- Binary compatibility with the prior generation of POWER processors
- Support for PowerVM virtualization capabilities, including PowerVM Live Partition Mobility to and from POWER6, POWER6+, and POWER7 processor-based systems

Figure 5-59 shows the POWER7+ processor die layout with the following major areas identified:
- Eight POWER7+ processor cores
- L2 cache
- L3 cache
- Chip power bus interconnect
- SMP links
- GX++ interface
- Memory controllers
- I/O links

Figure 5-59 POWER7+ processor architecture

POWER7+ processor overview


The POWER7+ processor chip is fabricated with IBM 32 nm silicon-on-insulator (SOI) technology that uses copper interconnects and implements an on-chip L3 cache that uses eDRAM. The POWER7+ processor chip is 567 mm2 and is built by using 2,100,000,000 components (transistors). Eight processor cores are on the chip, each with 12 execution units, 256 KB of L2 cache per core, and access to up to 80 MB of shared on-chip L3 cache. For memory access, the POWER7+ processor includes a double data rate 3 (DDR3) memory controller with four memory channels. To scale effectively, the POWER7+ processor uses a combination of local and global high-bandwidth SMP links.


Table 5-90 summarizes the technology characteristics of the POWER7+ processor.

Table 5-90 Summary of POWER7+ processor technology
  Die size:                            567 mm2
  Fabrication technology:              32 nm lithography, copper interconnect, silicon-on-insulator, eDRAM
  Components:                          2,100,000,000 components (transistors), offering the equivalent function of 2,700,000,000
  Processor cores:                     8
  Max execution threads (core/chip):   4/32
  L2 cache (per core/per chip):        256 KB / 2 MB
  On-chip L3 cache (per core/per chip): 10 MB / 80 MB
  DDR3 memory controllers:             Two per processor
  Compatibility:                       Compatible with prior generations of the POWER processor
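The per-chip cache figures in Table 5-90 are simply the per-core figures scaled by the eight cores on the chip, as this small illustrative sketch confirms:

```python
# Illustrative check: POWER7+ per-chip cache = per-core cache x 8 cores.
CORES_PER_CHIP = 8
l2_per_core_kb = 256   # 256 KB L2 per core
l3_per_core_mb = 10    # 10 MB on-chip L3 per core

l2_per_chip_mb = l2_per_core_kb * CORES_PER_CHIP / 1024  # 2 MB per chip
l3_per_chip_mb = l3_per_core_mb * CORES_PER_CHIP         # 80 MB per chip
print(l2_per_chip_mb, "MB L2,", l3_per_chip_mb, "MB L3 per chip")
```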

5.6.8 Memory
Each POWER7 processor has an integrated memory controller. Industry-standard DDR3 RDIMM technology is used to increase the reliability, speed, and density of the memory subsystems.

Memory placement rules


The preferred memory minimum and maximum for the p260 and p24L are listed in Table 5-91.
Table 5-91 Preferred memory limits for p260 and p24L
  IBM Flex System p260 Compute Node: minimum 8 GB, maximum 512 GB (16x 32 GB DIMMs)
  IBM Flex System p24L Compute Node: minimum 24 GB, maximum 512 GB (16x 32 GB DIMMs)

Generally, use a minimum of 2 GB of RAM per core. The functional minimum memory configuration for the system is 4 GB (2x2 GB). However, this configuration is not sufficient for reasonable production use of the system.
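That sizing guidance can be captured as a small, purely illustrative helper (the 2 GB-per-core figure is the general guideline quoted above, and 4 GB is the stated functional minimum):

```python
# Illustrative sizing helper based on the guideline above:
# at least 2 GB of RAM per core, never below the 4 GB functional minimum.
FUNCTIONAL_MINIMUM_GB = 4
GB_PER_CORE = 2

def recommended_minimum_gb(total_cores):
    return max(FUNCTIONAL_MINIMUM_GB, GB_PER_CORE * total_cores)

print(recommended_minimum_gb(16))  # 16-core configuration -> 32 GB
```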

LP and VLP form factors


One benefit of deploying IBM Flex System systems is the ability to use LP memory DIMMs. This design allows for more choices to configure the system to match your needs.


Table 5-92 lists the available memory options for the p260 and p24L.

Table 5-92 Supported memory DIMMs - Power Systems compute nodes
  Part number  Feature  Description                   Form factor  p24L  p260 22X  p260 23X  p260 23A
  78P1011      EM04     2x 2 GB DDR3 RDIMM 1066 MHz   LP (a)       Yes   Yes       No        No
  78P0501      8196     2x 4 GB DDR3 RDIMM 1066 MHz   VLP          Yes   Yes       Yes       Yes
  78P0502      8199     2x 8 GB DDR3 RDIMM 1066 MHz   VLP          Yes   Yes       No        No
  78P1917      EEMD     2x 8 GB DDR3 RDIMM 1066 MHz   VLP          Yes   Yes       Yes       Yes
  78P0639      8145     2x 16 GB DDR3 RDIMM 1066 MHz  LP (a)       Yes   Yes       No        No
  78P1915      EEME     2x 16 GB DDR3 RDIMM 1066 MHz  LP (a)       Yes   Yes       Yes       Yes
  78P1539      EEMF     2x 32 GB DDR3 RDIMM 1066 MHz  LP (a)       Yes   Yes       Yes       Yes

  a. If 2.5-inch HDDs are installed, low-profile DIMM features cannot be used (EM04, 8145, EEME, and EEMF cannot be used).

Requirement: Because of the design of the on-cover storage connections, if you want to use 2.5-inch HDDs, you must use VLP DIMMs (4 GB or 8 GB). The cover cannot close properly if LP DIMMs and SAS HDDs are configured in the same system. This mixture physically obstructs the cover. However, SSDs and LP DIMMs can be used together. For more information, see 5.6.10, Storage on page 313.

There are 16 buffered DIMM slots on the p260 and the p24L, as shown in Figure 5-60.

Figure 5-60 Memory DIMM topology (IBM Flex System p260 Compute Node), showing the 16 DIMM slots (P1-C1 to P1-C16) connected through SMI buffers to the two POWER7 processors


The following memory-placement rules must be adhered to:
- Install DIMM fillers in unused DIMM slots to ensure effective cooling.
- Install DIMMs in pairs.
- Both DIMMs in a pair must be the same size, speed, type, and technology. Otherwise, you can mix compatible DIMMs from multiple manufacturers.
- Install only supported DIMMs, as described on the IBM ServerProven website:
  http://www.ibm.com/servers/eserver/serverproven/compat/us/

Table 5-93 shows the required placement of memory DIMMs for the p260 and the p24L, depending on the number of DIMMs installed.
Table 5-93 DIMM placement - p260 and p24L (the table maps each supported configuration of 2, 4, 6, 8, 10, 12, 14, or 16 DIMMs to the specific slots, DIMM 1 through DIMM 16, that must be populated across processor 0 and processor 1)

Usage of mixed DIMM sizes


All installed memory DIMMs do not have to be the same size. However, keep the following groups of DIMMs the same size:
- Slots 1-4
- Slots 5-8
- Slots 9-12
- Slots 13-16
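That grouping rule is small enough to express as a sketch (illustrative only; slot numbers 1 through 16 follow the numbering used in Table 5-93):

```python
# Illustrative check of the grouping rule above: DIMM sizes may differ
# between groups, but within slots 1-4, 5-8, 9-12, and 13-16 they must match.
GROUPS = [range(1, 5), range(5, 9), range(9, 13), range(13, 17)]

def groups_ok(sizes_by_slot):
    """sizes_by_slot maps slot number (1-16) to DIMM size in GB; empty slots omitted."""
    for group in GROUPS:
        sizes = {sizes_by_slot[s] for s in group if s in sizes_by_slot}
        if len(sizes) > 1:  # mixed sizes inside one group
            return False
    return True

# 8 GB DIMMs in slots 1-4 combined with 4 GB DIMMs in slots 5-8 is acceptable:
print(groups_ok({1: 8, 2: 8, 3: 8, 4: 8, 5: 4, 6: 4, 7: 4, 8: 4}))  # True
```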

5.6.9 Active Memory Expansion


The optional Active Memory Expansion feature is a POWER7 technology that allows the effective maximum memory capacity to be much larger than the true physical memory. Applicable to AIX 6.1 or later, this feature compresses and decompresses memory content by using processor cycles, and allows memory expansion of up to 100%. This memory expansion allows an AIX 6.1 or later partition to perform more work with the same physical amount of memory. Conversely, a server can run more partitions and perform more work with the same physical amount of memory.


Active Memory Expansion uses processor resources to compress and extract memory contents. The trade-off of memory capacity for processor cycles can be an excellent choice. However, the degree of expansion varies based on how compressible the memory content is. Have adequate spare processor capacity available for the compression and decompression. Tests in IBM laboratories that used sample workloads showed excellent results for many workloads in terms of memory expansion per added processor that was used. Other test workloads had more modest results.

You have a great deal of control over Active Memory Expansion usage. Each individual AIX partition can turn Active Memory Expansion on or off. Control parameters set the amount of expansion that is wanted in each partition to help control the amount of processor capacity that is used by the Active Memory Expansion function. An initial program load (IPL) is required for the specific partition that turns memory expansion on or off. After the memory expansion is turned on, there are monitoring capabilities in standard AIX performance tools, such as lparstat, vmstat, topas, and svmon.

Figure 5-61 represents the percentage of processor that is used to compress memory for two partitions with various profiles. The green curve corresponds to a partition that has spare processing power capacity. The blue curve corresponds to a partition that is constrained in processing power.
Figure 5-61 Processor usage versus memory expansion effectiveness (curve 1: plenty of spare CPU resource available; curve 2: CPU resource constrained and already running at significant utilization)

Both cases show the following knee-of-the-curve relationships for the processor resources that are required for memory expansion:
- Busy processor cores do not have resources to spare for expansion.
- The more memory expansion that is done, the more processor resources are required.
- The knee varies, depending on how compressible the memory contents are.

This variability demonstrates the need for a case-by-case study to determine whether memory expansion can provide a positive return on investment. To help you perform this study, a planning tool is included with AIX 6.1 Technology Level 4 or later. You can use the tool to sample actual workloads and estimate both how expandable the partition memory is and how much processor resource is needed. The planning tool runs on any Power Systems model.


Figure 5-62 shows an example of the output that is returned by this planning tool. The tool outputs various real memory and processor resource combinations to achieve the wanted effective memory and proposes one particular combination. In this example, the tool proposes to allocate 58% of a processor core, to benefit from 45% extra memory capacity.
Active Memory Expansion Modeled Statistics:
-------------------------------------------
Modeled Expanded Memory Size : 8.00 GB

Expansion    True Memory     Modeled Memory    CPU Usage
Factor       Modeled Size    Gain              Estimate
---------    ------------    --------------    ---------
1.21         6.75 GB         1.25 GB [ 19%]    0.00
1.31         6.25 GB         1.75 GB [ 28%]    0.20
1.41         5.75 GB         2.25 GB [ 39%]    0.35
1.51         5.50 GB         2.50 GB [ 45%]    0.58
1.61         5.00 GB         3.00 GB [ 60%]    1.46

Active Memory Expansion Recommendation:
---------------------------------------
The recommended AME configuration for this workload is to configure the
LPAR with a memory size of 5.50 GB and to configure a memory expansion
factor of 1.51. This will result in a memory expansion of 45% from the
LPAR's current memory size. With this configuration, the estimated CPU
usage due to Active Memory Expansion is approximately 0.58 physical
processors, and the estimated overall peak CPU resource required for
the LPAR is 3.72 physical processors.

Figure 5-62 Output from the AIX Active Memory Expansion planning tool
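The Modeled Memory Gain percentages in the Figure 5-62 output are simply the gain (expanded size minus true size) expressed relative to the true physical memory size. A quick illustrative check of that arithmetic:

```python
# Illustrative check of the planning-tool arithmetic in Figure 5-62:
# gain = expanded size - true size; percentage = gain / true size.
EXPANDED_GB = 8.00

def modeled_gain(true_gb):
    gain_gb = EXPANDED_GB - true_gb
    return gain_gb, round(100 * gain_gb / true_gb)

for true_gb in (6.75, 6.25, 5.75, 5.50, 5.00):
    gain_gb, pct = modeled_gain(true_gb)
    print(f"{true_gb:.2f} GB true -> {gain_gb:.2f} GB gain [{pct}%]")
```

For the recommended configuration, 8.00 GB effective minus 5.50 GB true gives a 2.50 GB gain, which is the 45% figure in the output.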

For more information about this topic, see the white paper, Active Memory Expansion: Overview and Usage Guide, which is available at this website: http://www.ibm.com/systems/power/hardware/whitepapers/am_exp.html


5.6.10 Storage
The p260 and p24L have an onboard SAS controller that can manage up to two non-hot-pluggable internal drives. Both 2.5-inch HDDs and 1.8-inch SSDs are supported. The drives attach to the cover of the server, as shown in Figure 5-63.

Figure 5-63 The IBM Flex System p260 Compute Node showing HDD location on top cover

Storage configuration impact to memory configuration


The type of local drive that is used affects the form factor of your memory DIMMs:
- If HDDs are chosen, only VLP DIMMs can be used because of internal spacing. There is not enough room for the 2.5-inch drives to be used with LP DIMMs (currently the 2 GB and 16 GB sizes). Verify your memory choice to make sure that it is compatible with the local storage configuration.
- The use of SSDs does not have the same limitation, and both LP and VLP DIMMs can be used with SSDs.
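This compatibility rule is small enough to capture as a sketch (illustrative only; the drive and DIMM types are the ones named above):

```python
# Illustrative check of the rule above: 2.5-inch HDDs require VLP DIMMs,
# while 1.8-inch SSDs (or no drives) work with both LP and VLP DIMMs.
def storage_memory_compatible(drive_type, dimm_form_factor):
    if drive_type == "HDD":
        return dimm_form_factor == "VLP"
    return drive_type in ("SSD", None)  # SSDs or no drives: LP or VLP is fine

print(storage_memory_compatible("HDD", "LP"))   # False: the cover cannot close
print(storage_memory_compatible("SSD", "LP"))   # True
```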

Local storage and cover options


Local storage options are shown in Table 5-94. None of the available drives are hot-swappable. If you use local drives, you must order the appropriate cover with connections for your drive type. The maximum number of drives that can be installed in the p260 or p24L is two. SSD and HDD drives cannot be mixed.
Table 5-94 Local storage options
  Feature code  Part number  Description

  2.5-inch SAS HDDs
  7069          None         Top cover with HDD connectors for the p260 and p24L
  8274          42D0627      300 GB 10K RPM non-hot-swap 6 Gbps SAS
  8276          49Y2022      600 GB 10K RPM non-hot-swap 6 Gbps SAS
  8311          81Y9654      900 GB 10K RPM non-hot-swap 6 Gbps SAS

  1.8-inch SSDs
  7068          None         Top cover with SSD connectors for the p260 and p24L
  8207          74Y9114      177 GB SATA non-hot-swap SSD

  No drives
  7067          None         Top cover for no drives on the p260 and p24L

As shown in Figure 5-63 on page 313, the local drives (HDD or SSD) are mounted to the top cover of the system. When you order your p260 or p24L, select the cover that is appropriate for your system (SSD, HDD, or no drives).

Local drive connection


On covers that accommodate drives, the drives attach to an interposer that connects to the system board when the cover is properly installed. This connection is shown in Figure 5-64.

Figure 5-64 Connector on drive interposer card mounted to server cover


The connection for the cover's drive interposer on the system board is shown in Figure 5-65.

Figure 5-65 Connection for drive interposer card mounted to the system cover

RAID capabilities
Disk drives and SSDs in the p260 and p24L can be used to implement and manage various types of RAID arrays. They can do so in operating systems that are on the ServerProven list. For the compute node, you must configure the RAID array through the smit sasdam command, which is the SAS RAID Disk Array Manager for AIX. The AIX Disk Array Manager is packaged with the Diagnostics utilities on the Diagnostics CD. Use the smit sasdam command to configure the disk drives for use with the SAS controller. The diagnostics CD can be downloaded in ISO file format from this website:
http://www14.software.ibm.com/webapp/set2/sas/f/diags/download/

For more information, see Using the Disk Array Manager in the Systems Hardware Information Center at this website:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/p7ebj/sasusingthesasdiskarraymanager.htm

Tip: Depending on your RAID configuration, you might have to create the array before you install the operating system in the compute node. Before you can create a RAID array, reformat the drives so that the sector size of the drives changes from 512 bytes to 528 bytes. If you decide later to remove the drives, delete the RAID array before you remove the drives. If you decide to delete the RAID array and reuse the drives, you might need to reformat the drives to change the sector size from 528 bytes back to 512 bytes.

5.6.11 I/O expansion


There are two I/O adapter slots on the p260 and the p24L. The I/O adapter slots on IBM Flex System nodes are identical in shape (form factor). There is no onboard network capability in the Power Systems compute nodes other than the Flexible Service Processor (FSP) NIC interface, so an Ethernet adapter must be installed to provide network connectivity.


Slot 1 requirements: You must have one of the following I/O adapters installed in slot 1 of the Power Systems compute nodes:
- EN4054 4-port 10Gb Ethernet Adapter (feature code 1762)
- EN2024 4-port 1Gb Ethernet Adapter (feature code 1763)
- IBM Flex System CN4058 8-port 10Gb Converged Adapter (feature code EC24)

In the p260 and p24L, the I/O is controlled by two P7IOC I/O controller hub chips. This configuration provides additional flexibility when assigning resources within Virtual I/O Server (VIOS) to specific virtual machines or LPARs. Table 5-95 shows the available I/O adapter cards.

Table 5-95 Supported I/O adapters for the p260 and p24L
  Feature code  Description                                           Ports
  1762 (a)      IBM Flex System EN4054 4-port 10Gb Ethernet Adapter   4
  1763 (a)      IBM Flex System EN2024 4-port 1Gb Ethernet Adapter    4
  EC24 (a)      IBM Flex System CN4058 8-port 10Gb Converged Adapter  8
  EC26          IBM Flex System EN4132 2-port 10Gb RoCE Adapter       2
  1764          IBM Flex System FC3172 2-port 8Gb FC Adapter          2
  EC23          IBM Flex System FC5052 2-port 16Gb FC Adapter         2
  EC2E          IBM Flex System FC5054 4-port 16Gb FC Adapter         4
  1761          IBM Flex System IB6132 2-port QDR InfiniBand Adapter  2

  a. At least one 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter must be configured in each server.
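The slot 1 requirement described above can be expressed as a sketch (illustrative only; the feature codes are the ones listed in Table 5-95):

```python
# Illustrative check of the slot 1 rule: one of the Ethernet or converged
# adapters (feature codes 1762, 1763, or EC24) must occupy slot 1.
SLOT1_ALLOWED = {"1762", "1763", "EC24"}

def slot1_valid(slot1_feature_code):
    return slot1_feature_code in SLOT1_ALLOWED

print(slot1_valid("1762"))  # True: EN4054 4-port 10Gb Ethernet Adapter
print(slot1_valid("1764"))  # False: FC3172 is Fibre Channel only
```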

5.6.12 System management


There are several advanced system management capabilities that are built into the p260 and p24L. A Flexible Support Processor handles most of the server-level system management. It has features, such as system alerts and SOL capability, which are described in this section.

Flexible Support Processor


An FSP provides out-of-band system management capabilities. These capabilities include system control, runtime error detection, configuration, and diagnostic procedures. Generally, you do not interact with the Flexible Support Processor directly. Rather, you use tools such as IBM Flex System Manager, Chassis Management Module, and external IBM Systems Director Management Console. The Flexible Support Processor provides an SOL interface, which is available by using the Chassis Management Module and the console command.


Serial over LAN


The p260 and p24L do not have an on-board video chip and do not support KVM connections. Server console access is obtained by a SOL connection only. SOL provides a means to manage servers remotely by using a command-line interface (CLI) over a Telnet or Secure Shell (SSH) connection. SOL is required to manage servers that do not have KVM support or that are attached to the IBM Flex System Manager. SOL provides console redirection for both Software Management Services (SMS) and the server operating system. The SOL feature redirects server serial-connection data over a local area network (LAN) without requiring special cabling. It does so by routing the data by using the Chassis Management Module (CMM) network interface. The SOL connection enables Power Systems compute nodes to be managed from any remote location with network access to the CMM.

SOL offers the following advantages:
- Remote administration without KVM (headless servers)
- Reduced cabling and no requirement for a serial concentrator
- Standard Telnet/SSH interface, which eliminates the requirement for special client software

The CMM CLI provides access to the text-console command prompt on each server through a SOL connection. This configuration enables the p260 and p24L to be managed from a remote location.

Anchor card
As shown in Figure 5-66, the anchor card contains the vital product data chip that stores system-specific information. The pluggable anchor card provides a means for this information to be transferable from a faulty system board to the replacement system board. Before the service processor knows what system it is on, it reads the vital product data chip to obtain system information. The vital product data chip includes information such as system type, model, and serial number.

Figure 5-66 Anchor card

5.6.13 Operating system support


The IBM Flex System p24L Compute Node is designed to run Linux only.

The IBM Flex System p260 Compute Node (model 22X) supports the following configurations:
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later


- AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with APAR IV14283
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later
- AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later
- AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later (1)
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1, or later
- Novell SUSE Linux Enterprise Server 11 Service Pack 2 for POWER
- Red Hat Enterprise Linux 5.7, for POWER, or later
- Red Hat Enterprise Linux 6.2, for POWER, or later
- VIOS 2.2.1.4, or later

The IBM Flex System p260 Compute Node (model 23X) supports the following operating systems:
- IBM i 6.1 with i 6.1.1 machine code or later
- IBM i 7.1 or later
- VIOS 2.2.2.0 or later
- AIX V7.1 with the 7100-02 Technology Level or later
- AIX V6.1 with the 6100-08 Technology Level or later
- Novell SUSE Linux Enterprise Server 11 Service Pack 2 for POWER
- Red Hat Enterprise Linux 5.7, for POWER, or later
- Red Hat Enterprise Linux 6.2, for POWER, or later

The IBM Flex System p260 Compute Node (model 23A) supports the following operating systems:
- AIX V7.1 with the 7100-02 Technology Level with Service Pack 3 or later
- AIX V6.1 with the 6100-08 Technology Level with Service Pack 3 or later
- VIOS 2.2.2.3 or later
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1 TR3 or later
- SUSE Linux Enterprise Server 11 Service Pack (SP) 2 for POWER
- Red Hat Enterprise Linux 6.4 for POWER

OS support: Support for some of these operating system versions is post general availability. For more information about the specific versions and service levels supported and any other prerequisites, see this website:
http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/matrix.shtml

5.7 IBM Flex System p270 Compute Node


The IBM Flex System p270 Compute Node is based on IBM POWER architecture technologies and uses the new POWER7+ dual-chip module (DCM) processors. These compute nodes run in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment by using advanced processing technology. This section covers the following topics:
- 5.7.1, Specifications on page 319
- 5.7.2, System board layout on page 320
- 5.7.3, Comparing the p260 and p270 on page 321
- 5.7.4, Front panel on page 322
- 5.7.5, Chassis support on page 323
(1) IBM AIX 5L V5.3 Service Extension is required.


- 5.7.6, System architecture on page 324
- 5.7.7, IBM POWER7+ processor on page 325
- 5.7.8, Memory subsystem on page 327
- 5.7.9, Active Memory Expansion feature on page 329
- 5.7.10, Storage on page 329
- 5.7.11, I/O expansion on page 333
- 5.7.12, System management on page 333
- 5.7.13, Operating system support on page 334

5.7.1 Specifications
The IBM Flex System p270 Compute Node is a half-wide, Power Systems compute node with the following characteristics:
- Two POWER7+ dual-chip module (DCM) processor sockets
- Sixteen memory slots
- Two I/O adapter slots, plus support for the IBM Flex System Dual VIOS Adapter
- An option for up to two internal drives for local storage

The p270 has the specifications that are shown in Table 5-96.

Table 5-96 Specifications for p270
  Model number: 7954-24X
  Form factor: Standard-width compute node
  Chassis support: IBM Flex System Enterprise Chassis
  Processor: Two IBM POWER7+ dual-chip modules (DCMs). Each DCM contains two processor chips, each with six cores (24 cores total). Cores have a frequency of 3.1 or 3.4 GHz, and each core has 10 MB of L3 cache (240 MB L3 cache total). Integrated memory controllers with four memory channels from each DCM. Each memory channel operates at 6.4 Gbps. One GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core. Uses 32 nm fabrication technology.
  Chipset: IBM P7IOC I/O hub.
  Memory: 16 DIMM sockets. RDIMM DDR3 memory is supported. Integrated memory controller in each processor, each with four memory channels. Supports Active Memory Expansion with AIX V6.1 or later. All DIMMs operate at 1066 MHz. Both LP (low profile) and VLP (very low profile) DIMMs are supported, although only VLP DIMMs are supported if internal HDDs are configured. The usage of 1.8-inch solid-state drives allows the use of LP and VLP DIMMs.
  Memory maximums: 512 GB using 16x 32 GB DIMMs.
  Memory protection: ECC, Chipkill.
  Disk drive bays: Two 2.5-inch non-hot-swap drive bays supporting 2.5-inch SAS HDD or 1.8-inch SATA SSD drives. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.
  Maximum internal storage: 1.8 TB using two 900 GB SAS HDD drives, or 354 GB using two 177 GB SSD drives.
  SAS controller: IBM Obsidian-E SAS controller embedded on the system board connects to the two local drive bays. Supports 3 Gbps SAS with a PCIe 2.0 x8 host interface. Supports RAID 0 and RAID 10 with two drives. A second Obsidian SAS controller is available through the optional IBM Flex System Dual VIOS Adapter. When the Dual VIOS Adapter is installed, each SAS controller controls one drive.
  RAID support: Without the Dual VIOS Adapter installed: RAID 0 and RAID 10 (two drives). With the Dual VIOS Adapter installed: RAID 0 (one drive to each SAS controller).
  Network interfaces: None standard. Optional 1Gb or 10Gb Ethernet adapters.
  PCI expansion slots: Two I/O connectors for adapters. PCIe 2.0 x16 interface.
  Ports: One external USB port.
  Systems management: FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager and IBM Systems Director. Optional support for a Hardware Management Console (HMC) or an Integrated Virtualization Manager (IVM) console.
  Security features: FSP password, selectable boot sequence.
  Video: None. Remote management through Serial over LAN and IBM Flex System Manager.
  Limited warranty: 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
  Operating systems supported: IBM AIX, IBM i, and Linux. See 5.7.13, Operating system support on page 334 for details.
  Service and support: Optional service upgrades are available through IBM ServicePac offerings: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and selected IBM and OEM software.
  Dimensions: Width: 215 mm (8.5 in.), height: 51 mm (2.0 in.), depth: 493 mm (19.4 in.).
  Weight: Maximum configuration: 7.7 kg (17.0 lb).

5.7.2 System board layout


IBM Flex System p270 Compute Node (7954-24X) is a standard-wide Power Systems compute node with two POWER7+ processor sockets, 16 memory slots, two I/O adapter slots, and options for up to two internal drives for local storage and another SAS controller. The IBM Flex System p270 Compute Node includes the following features:
Two dual-chip modules (DCMs), each consisting of two POWER7+ chips, to provide a total of 24 POWER7+ processing cores
16 DDR3 memory DIMM slots that support Very Low Profile (VLP) and Low Profile (LP) DIMMs
Two P7IOC I/O hubs
A RAID-capable SAS controller that supports up to two SSDs or HDDs
Optional second SAS controller on the IBM Flex System Dual VIOS Adapter to support dual VIO servers on internal drives
Two I/O adapter slots
Flexible Service Processor (FSP)

320

IBM PureFlex System and IBM Flex System Products and Technology

IBM light path diagnostics
USB 2.0 port

Figure 5-67 shows the system board layout of the IBM Flex System p270 Compute Node. The drives are mounted on the cover, located over the memory DIMMs, and the optional SAS controller card (the IBM Flex System Dual VIOS Adapter) is installed under the I/O adapter in connector 2.

Figure 5-67 System board layout of the IBM Flex System p270 Compute Node

5.7.3 Comparing the p260 and p270


Table 5-97 compares the p270 with the p260.
Table 5-97 p260 and p270 comparison table

                         p260 (machine type 7895)                                 p270 (7954)
Model number             22X                       23A           23X             24X
Chip                     POWER7                    POWER7+       POWER7+         POWER7+
Processor packaging      Single-chip module (SCM)  SCM           SCM             Dual-chip module (DCM)
Total cores per system   8      16     16          4      8      16     16       24       24
Clock speed (GHz)        3.3    3.22   3.55        4.08   4.08   3.6    4.1      3.1      3.4
L2 cache per chip        2 MB   4 MB   4 MB        2 MB   2 MB   4 MB   4 MB     2 MB (4 MB per DCM)
L3 cache per core        4 MB   4 MB   4 MB        10 MB  10 MB  10 MB  10 MB    10 MB    10 MB
L3 cache per chip        16 MB  32 MB  32 MB       20 MB  40 MB  80 MB  80 MB    60 MB    60 MB


                         p260 (machine type 7895)                                 p270 (7954)
Model number             22X                       23A           23X             24X
L3 cache per system      32 MB  64 MB  64 MB       40 MB  80 MB  160 MB 160 MB   240 MB   240 MB
Memory min               8 GB per server
Memory max               512 GB per server

5.7.4 Front panel


The front panel of Power Systems compute nodes has the following common elements, as shown in Figure 5-68:
One USB 2.0 port
Power button and light path, light-emitting diode (LED) (green)
Location LED (blue)
Information LED (amber)
Fault LED (amber)

Figure 5-68 Front panel of the IBM Flex System p270 Compute Node

The USB port on the front of the Power Systems compute nodes is useful for various tasks, including out-of-band diagnostic tests, hardware RAID setup, operating system access to data on removable media, and local OS installation. It might be helpful to obtain a USB optical (CD or DVD) drive for these purposes, in case the need arises. Tip: There is no optical drive in the IBM Flex System Enterprise Chassis.

Light path diagnostic LED panel


The power button on the front of the server (see Figure 5-55 on page 302) has the following functions: When the system is fully installed in the chassis: Use this button to power the system on and off.


When the system is removed from the chassis: Use this button to illuminate the light path diagnostic panel on the top of the front bezel, as shown in Figure 5-69.

Figure 5-69 Light path diagnostic panel

The LEDs on the light path panel indicate the status of the following components:
LP: Light path panel power indicator
S BRD: System board LED (can indicate trouble with a processor or memory)
MGMT: Anchor card (also referred to as the management card) LED
D BRD: Drive or DASD board LED
DRV 1: Drive 1 LED (SSD 1 or HDD 1)
DRV 2: Drive 2 LED (SSD 2 or HDD 2)
ETE: Expansion connector LED
If problems occur, you can use the light path diagnostics LEDs to identify the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. This action temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts towards a resolution. Typically, an administrator already obtained this information from the IBM Flex System Manager or Chassis Management Module before removing the node. However, the LEDs help with repairs and troubleshooting if onsite assistance is needed. For more information about the front panel and LEDs, see IBM Flex System p270 Compute Node Installation and Service Guide, which is available at this website: http://publib.boulder.ibm.com/infocenter/flexsys/information

5.7.5 Chassis support


The Power Systems compute nodes can be used only in the IBM Flex System Enterprise Chassis. They do not fit in the previous IBM modular systems, such as IBM iDataPlex or IBM BladeCenter. There is no onboard video capability in the Power Systems compute nodes. The machines are designed to use SOL with IVM or the IBM Flex System Manager (FSM) or HMC when SOL is disabled.


Up to 14 p270 Compute Nodes can be installed in the chassis in 10U of rack space. The actual number of systems that can be powered on in a chassis depends on the following factors: Number of power supplies that are installed in the chassis Capacity of the power supplies installed in the chassis (2100 W or 2500 W) Power redundancy policy used in the chassis (N+1 or N+N) Table 4-11 on page 93 provides guidelines about what number of p270 systems can be powered on in the IBM Flex System Enterprise Chassis, based on the type and number of power supplies installed.

5.7.6 System architecture


This section covers the system architecture and layout of Power Systems compute nodes. The overall system architecture for the p270 is shown in Figure 5-70.

Figure 5-70 IBM Flex System p270 Compute Node block diagram

The p270 compute node has the POWER7+ processors packaged as dual-chip modules (DCMs). Each DCM consists of two POWER7+ processors; the DCMs installed consist of two six-core chips. In Figure 5-70, you can see the two DCMs, with eight memory slots for each module. Each module is connected to a P7IOC I/O hub, which connects to the I/O subsystem (I/O adapters and local storage). At the bottom of Figure 5-70, you can see a representation of the flexible service processor (FSP) architecture.

Introduced in this generation of Power Systems compute nodes is a secondary SAS controller card, which is inserted in the ETE connector. This secondary SAS controller allows independent assignment of the internal drives to separate partitions.

5.7.7 IBM POWER7+ processor


The IBM POWER7+ processor is an evolution of the POWER7 architecture and represents an improvement in technology and associated computing capability over the POWER7. The multi-core architecture of the POWER7+ processor is matched with a wide range of related technologies to deliver leading throughput, efficiency, scalability, and RAS. Although the processor is an important component in servers, many elements and facilities must be balanced across a server to deliver maximum throughput. As with previous generations of systems based on POWER processors, the design philosophy for POWER7+ processor-based systems is one of system-wide balance in which the POWER7+ processor plays an important role.

Processor options
Table 5-98 defines the processor options for the p270 Compute Node.
Table 5-98 Processor options

Feature code  Sockets  POWER7+ chips per socket  Cores per POWER7+ chip  Total cores  Core frequency  L3 cache per POWER7+ chip
EPRF          2        2 (dual-chip module)      6                       24           3.1 GHz         60 MB
EPRE          2        2 (dual-chip module)      6                       24           3.4 GHz         60 MB

To optimize software licensing, you can deconfigure or disable one or more cores. The feature is listed in Table 5-99.
Table 5-99 Deconfiguration of cores

Feature code  Description                          Minimum  Maximum
2319          Factory deconfiguration of one core  0        23

This core deconfiguration feature can also be updated after installation by using the field core override option. One core must remain enabled, hence the maximum number of 23 features.
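The relationship between feature 2319 quantity and active cores can be sketched in a few lines of Python (an illustration only; the function name and validation are ours, not an IBM tool):

```python
def enabled_cores(deconfigured, total_cores=24):
    """Active cores on a p270 after applying `deconfigured` units of
    feature code 2319 (factory deconfiguration of one core).
    At least one core must remain enabled, so the maximum quantity is 23."""
    if not 0 <= deconfigured <= total_cores - 1:
        raise ValueError("quantity must be between 0 and %d" % (total_cores - 1))
    return total_cores - deconfigured

print(enabled_cores(4))  # a 24-core p270 with four cores deconfigured -> 20
```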

Architecture
IBM uses innovative methods to achieve the required levels of throughput and bandwidth. Areas of innovation for the POWER7+ processor and POWER7+ processor-based systems include (but are not limited to) the following elements: On-chip L3 cache implemented in embedded dynamic random access memory (eDRAM) Cache hierarchy and component innovation Advances in memory subsystem Advances in off-chip signaling Advances in RAS features such as power-on reset and L3 cache dynamic column repair Figure 5-71 shows the POWER7+ processor die layout with the following major areas identified: Eight POWER7+ processor cores (six are enabled in the p270) L2 cache


L3 cache Chip power bus interconnect SMP links GX++ interface Memory controllers I/O links

Figure 5-71 POWER7+ processor architecture (6 cores are enabled in the p270)

Table 5-100 shows comparable characteristics between the POWER7+ and POWER7 processor generations.

Table 5-100 Comparing the technology of the POWER7+ and POWER7 processors

Characteristic                POWER7                              POWER7+
Technology                    45 nm                               32 nm
Die size                      567 mm2                             567 mm2
Maximum cores                 8                                   8
Maximum SMT threads per core  4                                   4
Maximum frequency             4.25 GHz                            4.3 GHz
L2 cache                      256 KB per core                     256 KB per core
L3 cache                      4 MB or 8 MB of FLR-L3 cache per    10 MB of FLR-L3 cache per core,
                              core, with each core having access  with each core having access to
                              to the full 32 MB of L3 cache;      the full 80 MB of L3 cache;
                              on-chip eDRAM                       on-chip eDRAM
Memory support                DDR3                                DDR3
I/O bus                       Two GX++                            Two GX++


5.7.8 Memory subsystem


Each POWER7+ processor that is used in the compute nodes has an integrated memory controller. Industry-standard DDR3 Registered DIMM (RDIMM) technology is used to increase reliability, speed, and density of memory subsystems. The minimum and maximum configurable memory is listed in Table 5-101.
Table 5-101 Configurable memory limits for the p270

Minimum memory  Recommended minimum    Maximum memory
8 GB            48 GB (2 GB per core)  512 GB (16x 32 GB DIMMs)

Table 5-102 lists the available memory options.


Table 5-102 Memory options for the p270

Feature code  Description         Speed     Form factor
8196          2x 4 GB DDR3 DIMM   1066 MHz  VLP
EEMD          2x 8 GB DDR3 DIMM   1066 MHz  VLP
EEME          2x 16 GB DDR3 DIMM  1066 MHz  LP
EEMF          2x 32 GB DDR3 DIMM  1066 MHz  LP
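The limits in Table 5-101 follow directly from the core count and the DIMM options; a quick sanity check in Python (figures taken from the tables above; the variable names are ours):

```python
CORES = 24            # two POWER7+ DCMs, two six-core chips per DCM
DIMM_SLOTS = 16
LARGEST_DIMM_GB = 32  # feature code EEMF supplies 32 GB DIMMs

recommended_min_gb = 2 * CORES             # "2 GB per core"
maximum_gb = DIMM_SLOTS * LARGEST_DIMM_GB  # "16x 32 GB DIMMs"

print(recommended_min_gb, maximum_gb)  # 48 512
```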

DASD/local storage option dependency on memory form factor: Because of the design of the on-cover storage connections, clients that seek to use SAS HDDs must use VLP DIMMs (4 GB or 8 GB). The cover cannot be closed properly if LP DIMMs and SAS HDDs are configured in the same system. However, SSDs and LP DIMMs can be used together. For more information, see 5.6.10, Storage on page 313.


There are 16 buffered DIMM slots on the p270, eight per processor module, as shown in Figure 5-72.
Figure 5-72 Memory DIMM topology (IBM Flex System p270 Compute Node)

The following memory-placement rules must be considered:
Install DIMM fillers in unused DIMM slots to ensure proper cooling.
Install DIMMs in pairs. Both DIMMs in a pair must be the same size, speed, type, and technology. You can mix compatible DIMMs from multiple manufacturers.
Install only supported DIMMs, as described at the IBM ServerProven website: http://www.ibm.com/servers/eserver/serverproven/compat/us/
Table 5-103 shows the required placement of memory DIMMs, depending on the number of DIMMs installed.
Table 5-103 DIMM placement: p270 (for configurations of 2, 4, 6, 8, 10, 12, 14, or 16 DIMMs, DIMMs are installed in pairs and distributed across the eight slots of each of the two processor modules)


All installed memory DIMMs do not have to be the same size, but it is a preferred practice that the following groups of DIMMs be kept the same size: Slots 1 - 4 Slots 5 - 8 Slots 9 - 12 Slots 13 - 16
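The pairing rules above can be expressed as a small validation sketch (assumptions: slots numbered 1 - 16, with pairs taken as adjacent odd/even slots for illustration only; the official placement order is the one in Table 5-103):

```python
def check_dimm_pairs(installed):
    """installed: dict of slot number (1-16) -> DIMM size in GB.
    Returns a list of rule violations; an empty list means the
    configuration satisfies the pairing rules from the text."""
    errors = []
    if len(installed) % 2 != 0:
        errors.append("DIMMs must be installed in pairs")
    for low in range(1, 17, 2):
        a, b = installed.get(low), installed.get(low + 1)
        if (a is None) != (b is None):
            errors.append("slots %d/%d: incomplete pair" % (low, low + 1))
        elif a is not None and a != b:
            errors.append("slots %d/%d: pair sizes differ" % (low, low + 1))
    return errors

# A matched 8 GB pair passes; a mixed 8 GB / 4 GB pair is flagged.
print(check_dimm_pairs({1: 8, 2: 8}))  # []
```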

5.7.9 Active Memory Expansion feature


Like the p260, the p270 supports the optional Active Memory Expansion feature that allows the effective maximum memory capacity to be much larger than the true physical memory. Applicable to AIX V6.1 Technology Level 4 (TL4) or later, this innovative compression and decompression of memory content that uses processor cycles allows memory expansion of up to 100%. For more information, see 5.6.9, Active Memory Expansion on page 310. Important: Active Memory Expansion is only available for the AIX operating system.
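The effect of Active Memory Expansion on effective capacity is simple arithmetic (a sketch; the achievable expansion percentage is workload-dependent because the expansion comes from compressing memory content with processor cycles):

```python
def effective_memory_gb(physical_gb, expansion_pct):
    """Effective memory with Active Memory Expansion enabled.
    The text cites expansion of up to 100%; real-world values depend
    on how compressible the workload's in-memory data is."""
    if not 0 <= expansion_pct <= 100:
        raise ValueError("expansion is described as up to 100%")
    return physical_gb * (1 + expansion_pct / 100.0)

print(effective_memory_gb(256, 100))  # 512.0 -- doubling at the 100% limit
```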

5.7.10 Storage
The Power Systems compute nodes have an onboard SAS controller that can manage one or two non-hot-pluggable internal drives. Both 2.5-inch HDDs and 1.8-inch SSDs are supported; however, the use of 2.5-inch drives imposes restrictions on the DIMMs that are used, as described in the next section. The drives attach to the cover of the server, as shown in Figure 5-73. The IBM Flex System Dual VIOS Adapter sits below the I/O adapter that is installed in I/O connector 2.

Figure 5-73 The p270 showing the hard disk drive locations on the top cover


Storage configuration impact to memory configuration


The type of local drive that is used (2.5-inch HDD or 1.8-inch SSD) affects the form factor of your memory DIMMs:
If 2.5-inch HDDs are chosen, only Very Low Profile (VLP) DIMMs can be used because of internal space requirements (currently 4 GB and 8 GB sizes). There is not enough room for 2.5-inch drives to be used with Low Profile (LP) DIMMs. Verify your memory requirements to make sure that they are compatible with the local storage configuration.
The use of 1.8-inch SSDs provides more clearance for the DIMMs and therefore does not impose the same limitation. LP or VLP DIMMs can be used with SSDs, giving all available memory options.

Local storage and cover options


Local storage options are shown in Table 5-104. None of the available drives are hot-swappable. The maximum number of drives that can be installed in any Power Systems compute node is two. SSDs and HDDs cannot be mixed.
Table 5-104 Drive options for internal disk storage

Feature code  Description                             Maximum supported
Optional second SAS adapter, installed in expansion port
EC2F          IBM Flex System Dual VIOS Adapter       1
2.5-inch SAS HDDs
8274          300 GB 10K RPM non-hot-swap 6 Gbps SAS  2
8276          600 GB 10K RPM non-hot-swap 6 Gbps SAS  2
8311          900 GB 10K RPM non-hot-swap 6 Gbps SAS  2
1.8-inch SSDs
8207          177 GB SATA non-hot-swap SSD            2

If you use local drives, you must order the appropriate cover with connections for your drive type. As shown in Figure 5-73 on page 329, the local drives (HDD or SSD) are mounted to the top cover of the system. Table 5-105 lists the top cover options; you must select the cover feature that matches the drives you want to install: 2.5-inch drives, 1.8-inch drives, or no drives.

Table 5-105 Top cover options for the p270

Feature code  Description
7069          Top cover with connectors for 2.5-inch drives for the p270
7068          Top cover with connectors for 1.8-inch drives for the p270
7067          Top cover for no drives on the p270


Local drive connection


On covers that accommodate drives, the drives attach to an interposer that connects to the system board when the cover is properly installed. This connection is shown in Figure 5-74.

Figure 5-74 Connector on drive interposer card mounted to server cover

On the system board, the connection for the cover's drive interposer is shown in Figure 5-75.

Figure 5-75 Connection for drive interposer card mounted to the system cover

IBM Flex System Dual VIOS Adapter


If the optional IBM Flex System Dual VIOS Adapter, #EC2F, is installed, the two drives are controlled independently. One drive is controlled by the onboard SAS controller and the other drive is controlled by the SAS controller on the Dual VIOS Adapter. Such a configuration is suitable for a Dual VIOS configuration. Ordering information for the Dual VIOS Adapter is shown in Table 5-106.
Table 5-106 Dual VIOS Adapter ordering information

Feature code  Description
EC2F          IBM Flex System Dual VIOS Adapter


The IBM Flex System Dual VIOS Adapter is shown in Figure 5-76. The adapter attaches via the expansion (ETE) connector. Even with the Dual VIOS Adapter installed, an I/O adapter can be installed in slot 2.

Figure 5-76 IBM Flex System Dual VIOS Adapter in the p270

RAID capabilities
When two internal drives are installed in the p270 but without the Dual VIOS Adapter installed, RAID-0 or RAID-10 can be configured. Configure the RAID array by running the smit sasdam command, which starts the SAS RAID Disk Array Manager for AIX. The AIX Disk Array Manager is packaged with the Diagnostics utilities on the Diagnostics CD. Run the smit sasdam command to configure the disk drives for use with the SAS controller. The diagnostics CD can be downloaded in ISO file format from the following website: http://www14.software.ibm.com/webapp/set2/sas/f/diags/download/ For more information, see Using the Disk Array Manager in the Systems Hardware Information Center at this website: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/p7ebj/s asusingthesasdiskarraymanager.htm Tip: Depending on your RAID configuration, you might need to create the array before you install the operating system in the compute node. Before you can create a RAID array, you must reformat the drives so that the sector size of the drives changes from 512 bytes to 528 bytes. If you later decide to remove the drives, delete the RAID array before you remove the drives. If you decide to delete the RAID array and reuse the drives, you might need to reformat the drives so that the sector size of the drives changes from 528 bytes to 512 bytes.
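The 512-to-528-byte reformat described in the tip adds 16 bytes of protection information to each sector, which trims the capacity available for data. Rough arithmetic (our approximation; it assumes the sector count scales inversely with sector size and ignores per-drive rounding and metadata):

```python
DATA_BYTES = 512  # user data per sector, unchanged by the reformat
RAW_BYTES = 528   # reformatted sector: 512 data bytes + 16 protection bytes

def usable_after_reformat(capacity_bytes):
    """Approximate usable data capacity after reformatting a drive
    to 528-byte sectors for use in a SAS RAID array."""
    return (capacity_bytes // RAW_BYTES) * DATA_BYTES

# Roughly 97% of the original capacity remains available for data.
print(usable_after_reformat(900 * 10**9) / (900 * 10**9))
```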


5.7.11 I/O expansion


There are two I/O adapter slots on the p270. The I/O adapter slots on IBM Flex System nodes are identical in shape (form factor). There is no onboard network capability in the Power Systems compute nodes other than the Flexible Service Processor (FSP) NIC interface. Therefore, an Ethernet adapter must be installed to provide network connectivity. Slot 1 requirements: You must have one of the following I/O adapters that are installed in slot 1 of the Power Systems compute nodes: EN4054 4-port 10Gb Ethernet Adapter (Feature Code #1762) EN2024 4-port 1Gb Ethernet Adapter (Feature Code #1763) IBM Flex System CN4058 8-port 10Gb Converged Adapter (#EC24) In the p270, the I/O is controlled by two P7-IOC I/O controller hub chips. This configuration provides more flexibility when resources are assigned within Virtual I/O Server (VIOS) to specific Virtual Machine/LPARs. Table 5-107 shows the available I/O adapter cards for p270.
Table 5-107 Supported I/O adapters for the p270

Feature code  Description
Ethernet I/O adapters
1763          IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
1762          IBM Flex System EN4054 4-port 10Gb Ethernet Adapter
EC26          IBM Flex System EN4132 2-port 10Gb RoCE Adapter
Converged Ethernet adapters
EC24          IBM Flex System CN4058 8-port 10Gb Converged Adapter
Fibre Channel I/O adapters
1764          IBM Flex System FC3172 2-port 8Gb FC Adapter
EC23          IBM Flex System FC5052 2-port 16Gb FC Adapter
EC2E          IBM Flex System FC5054 4-port 16Gb FC Adapter
InfiniBand I/O adapters
1761          IBM Flex System IB6132 2-port QDR InfiniBand Adapter

5.7.12 System management


There are several advanced system management capabilities that are built into Power Systems compute nodes. A Flexible Support Processor handles most of the server-level system management. It has features, such as system alerts and Serial-over-LAN capability, that we describe in this section.

Flexible Support Processor


A Flexible Support Processor (FSP) provides out-of-band system management capabilities, such as system control, runtime error detection, configuration, and diagnostic tests.

Generally, you do not interact with the Flexible Support Processor directly, but by using tools, such as IBM Flex System Manager, Chassis Management Module, the IBM Hardware Management Console (HMC) and the Integrated Virtualization Manager (IVM). The Flexible Support Processor provides a Serial-over-LAN interface, which is available by using the Chassis Management Module and the console command.

Serial over LAN


The Power Systems compute nodes do not have an on-board video chip and do not support KVM connections. Server console access is obtained by a SOL connection only. SOL provides a means to manage servers remotely by using a CLI over a Telnet or SSH connection. SOL is required to manage Power Systems compute nodes that do not have KVM support or that are managed by IVM. SOL provides console redirection for both System Management Services (SMS) and the server operating system. The SOL feature redirects server serial-connection data over a LAN without requiring special cabling by routing the data through the CMM network interface. The SOL connection enables Power Systems compute nodes to be managed from any remote location with network access to the Chassis Management Module. SOL offers the following advantages: Remote administration without KVM (headless servers) Reduced cabling and no requirement for a serial concentrator Standard Telnet/SSH interface, eliminating the requirement for special client software The CMM CLI provides access to the text-console command prompt on each server through a SOL connection, which enables the Power Systems compute nodes to be managed from a remote location.
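Opening a SOL session through the CMM CLI might look like the following session (illustrative only: the CMM address, user ID, and bay number are placeholders, and the exact target syntax of the console command should be checked against the CMM CLI reference):

```
$ ssh USERID@cmm-hostname           # log in to the Chassis Management Module CLI
system> console -T system:blade[3]  # open a SOL console to the compute node in bay 3
```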

5.7.13 Operating system support


The p270 Compute Node supports the following operating systems: AIX V7.1 with the 7100-02 Technology Level with Service Pack 3 or later AIX V6.1 with the 6100-08 Technology Level with Service Pack 3 or later IBM i 7.1 or later IBM i 6.1 or later IBM VIOS 2.2.2.3 or later RHEL 6 for IBM POWER Update 4 or later SLES 11 for IBM POWER Service Pack 2 or later


5.8 IBM Flex System p460 Compute Node


The IBM Flex System p460 Compute Node is based on IBM POWER architecture technologies. This compute node runs in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment by using advanced processing technology. This section describes the server offerings and the technology that is used in their implementation. The section includes the following topics:
5.8.1, Overview on page 335
5.8.2, System board layout on page 338
5.8.3, Front panel on page 338
5.8.4, Chassis support on page 340
5.8.5, System architecture on page 341
5.8.6, Processor on page 342
5.8.7, Memory on page 345
5.8.8 on page 349
5.8.9, Storage on page 350
5.8.10, Local storage and cover options on page 351
5.8.11, Hardware RAID capabilities on page 353
5.8.12, I/O expansion on page 353
5.8.13, System management on page 354
5.8.14, Integrated features on page 355
5.8.15, Operating system support on page 355

5.8.1 Overview
The IBM Flex System p460 Compute Node is a full-wide, Power Systems compute node. It has four POWER7 processor sockets, 32 memory slots, four I/O adapter slots, and an option for up to two internal drives for local storage. The IBM Flex System p460 Compute Node has the specifications that are shown in Table 5-108.
Table 5-108 IBM Flex System p460 Compute Node specifications

Model numbers: 7895-42X and 7895-43X
Form factor: Full-wide compute node
Chassis support: IBM Flex System Enterprise Chassis


Processor: Four IBM POWER7 (model 42X) or POWER7+ (model 43X) processors.
POWER7 processors (42X): Each processor is a single-chip module (SCM) that contains eight cores (up to 3.55 GHz and 32 MB L3 cache) or four cores (3.3 GHz and 16 MB L3 cache); each processor has 4 MB of L3 cache per core. Integrated memory controller in each processor, each with four memory channels; each memory channel operates at 6.4 Gbps. One GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core. Uses 45 nm fabrication technology.
POWER7+ processors (43X): Each processor is a single-chip module (SCM) that contains eight cores (up to 4.1 GHz or 3.6 GHz and 80 MB L3 cache) or four cores (4.0 GHz and 40 MB L3 cache); each processor has 10 MB of L3 cache per core, so 8-core processors have 80 MB of L3 cache total. Integrated memory controller in each processor, each with four memory channels; each memory channel operates at 6.4 Gbps. One GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core. Uses 32 nm fabrication technology.
Chipset: IBM P7IOC I/O hub
Memory: 32 DIMM sockets. RDIMM DDR3 memory is supported. Integrated memory controller in each processor, each with four memory channels. Supports Active Memory Expansion with AIX 6.1 or later. All DIMMs operate at 1066 MHz. Both LP and VLP DIMMs are supported, although only VLP DIMMs are supported if internal HDDs are configured. The use of 1.8-inch SSDs allows the use of LP and VLP DIMMs.
Memory maximums: 1 TB using 32x 32 GB DIMMs
Memory protection: ECC, Chipkill
Disk drive bays: Two 2.5-inch non-hot-swap drive bays that support 2.5-inch SAS HDDs or 1.8-inch SATA SSDs. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.
Maximum internal storage: 1.8 TB using two 900 GB SAS HDDs, or 354 GB using two 177 GB SSDs
RAID support: By using the operating system
Network interfaces: None standard. Optional 1 Gb or 10 Gb Ethernet adapters.
PCI expansion slots: Four I/O connectors for adapters. PCI Express 2.0 x16 interface.
Ports: One external USB port
Systems management: FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager and IBM Systems Director.
Security features: Power-on password, selectable boot sequence
Video: None. Remote management by using Serial over LAN and IBM Flex System Manager.


Limited warranty: 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD
Operating systems supported: IBM AIX, IBM i, and Linux
Service and support: Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software
Dimensions: Width 437 mm (17.2 in), height 51 mm (2.0 in), depth 493 mm (19.4 in)
Weight: Maximum configuration 14.0 kg (30.6 lb)


5.8.2 System board layout


Figure 5-77 shows the system board layout of the IBM Flex System p460 Compute Node.

Figure 5-77 Layout of the IBM Flex System p460 Compute Node

5.8.3 Front panel


The front panel of Power Systems compute nodes has the following common elements, as shown in Figure 5-78 on page 339:
USB 2.0 port
Power control button and light path LED (green)
Location LED (blue)
Information LED (amber)
Fault LED (amber)


Figure 5-78 Front panel of the IBM Flex System p460 Compute Node

The USB port on the front of the Power Systems compute nodes is useful for various tasks. These tasks include out-of-band diagnostic procedures, hardware RAID setup, operating system access to data on removable media, and local OS installation. It might be helpful to obtain a USB optical (CD or DVD) drive for these purposes, in case the need arises. Tip: There is no optical drive in the IBM Flex System Enterprise Chassis. The power-control button on the front of the server (see Figure 5-55 on page 302) has the following functions: When the system is fully installed in the chassis: Use this button to power the system on and off. When the system is removed from the chassis: Use this button to illuminate the light path diagnostic panel on the top of the front bezel, as shown in Figure 5-79.

Figure 5-79 Light path diagnostic panel


The LEDs on the light path panel indicate the status of the following devices:
LP: Light path panel power indicator
S BRD: System board LED (might indicate trouble with a processor or memory)
MGMT: Flexible Support Processor (or management card) LED
D BRD: Drive or DASD board LED
DRV 1: Drive 1 LED (SSD 1 or HDD 1)
DRV 2: Drive 2 LED (SSD 2 or HDD 2)
ETE: Sidecar connector LED (not present on the IBM Flex System p460 Compute Node)
If problems occur, the light path diagnostics LEDs help with identifying the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. Pressing the button temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts. You usually obtain this information from the IBM Flex System Manager or Chassis Management Module before you remove the node. However, having the LEDs helps with repairs and troubleshooting if onsite assistance is needed. For more information about the front panel and LEDs, see IBM Flex System p260 and p460 Compute Node Installation and Service Guide, which is available at this website: http://www.ibm.com/support

5.8.4 Chassis support


The p460 can be used only in the IBM Flex System Enterprise Chassis. It does not fit in the previous IBM modular systems, such as IBM iDataPlex or IBM BladeCenter. There is no onboard video capability in the Power Systems compute nodes. The systems are accessed by using SOL or the IBM Flex System Manager.


5.8.5 System architecture


The IBM Flex System p460 Compute Node shares many of the same components as the IBM Flex System p260 Compute Node. The IBM Flex System p460 Compute Node is a full-wide node, and adds processors and memory along with two more adapter slots. It has the same local storage options as the IBM Flex System p260 Compute Node. The IBM Flex System p460 Compute Node system architecture is shown in Figure 5-80.

Figure 5-80 IBM Flex System p460 Compute Node block diagram


The four processors in the IBM Flex System p460 Compute Node are connected in a cross-bar formation, as shown in Figure 5-81.

Figure 5-81 IBM Flex System p460 Compute Node processor connectivity

5.8.6 Processor
The IBM POWER7 processor represents a leap forward in technology and associated computing capability. The multi-core architecture of the POWER7 processor is matched with a wide range of related technologies to deliver leading throughput, efficiency, scalability, and RAS. Although the processor is an important component in servers, many elements and facilities must be balanced across a server to deliver maximum throughput. The design philosophy for POWER7 processor-based systems is system-wide balance, in which the POWER7 processor plays an important role. Table 5-109 defines the processor options for the p460.
Table 5-109 Processor options for the p460

Feature code   Cores per POWER7   Number of POWER7   Total   Core        L3 cache size per
               processor          processors         cores   frequency   POWER7 processor
POWER7
EPR2           4                  4                  16      3.3 GHz     16 MB
EPR4           8                  4                  32      3.2 GHz     32 MB
EPR6           8                  4                  32      3.55 GHz    32 MB
POWER7+
EPRH           4                  4                  16      4.0 GHz     40 MB
EPRJ           8                  4                  32      3.6 GHz     80 MB
EPRK           8                  4                  32      4.1 GHz     80 MB


To optimize software licensing, you can deconfigure or disable one or more cores. The feature is listed in Table 5-110.
Table 5-110 Deconfiguration of cores

Feature code   Description                         Minimum   Maximum
2319           Factory Deconfiguration of 1-core   0         1 less than the total number of cores (for EPR5, the maximum is 7)

POWER7 architecture
IBM uses innovative methods to achieve the required levels of throughput and bandwidth. Areas of innovation for the POWER7 processor and POWER7 processor-based systems include (but are not limited to) the following elements:
- On-chip L3 cache that is implemented in embedded dynamic random-access memory (eDRAM)
- Cache hierarchy and component innovation
- Advances in memory subsystem
- Advances in off-chip signaling

The superscalar POWER7 processor design also provides the following capabilities:
- Binary compatibility with the prior generation of POWER processors
- Support for PowerVM virtualization capabilities, including PowerVM Live Partition Mobility to and from IBM POWER6 and IBM POWER6+ processor-based systems

Figure 5-82 shows the POWER7 processor die layout with major areas identified: eight POWER7 processor cores, L2 cache, L3 cache and chip power bus interconnect, SMP links, GX++ interface, and integrated memory controller.

Figure 5-82 POWER7 processor architecture


POWER7+ architecture
The POWER7+ architecture builds on the POWER7 architecture. IBM uses innovative methods to achieve the required levels of throughput and bandwidth. Areas of innovation for the POWER7+ processor and POWER7+ processor-based systems include (but are not limited to) the following elements:
- On-chip L3 cache implemented in embedded dynamic random access memory (eDRAM)
- Cache hierarchy and component innovation
- Advances in memory subsystem
- Advances in off-chip signaling
- Advances in RAS features, such as power-on reset and L3 cache dynamic column repair

The superscalar POWER7+ processor design also provides the following capabilities:
- Binary compatibility with the prior generation of POWER processors
- Support for PowerVM virtualization capabilities, including PowerVM Live Partition Mobility to and from POWER6, POWER6+, and POWER7 processor-based systems

Figure 5-83 shows the POWER7+ processor die layout with major areas identified:
- Eight POWER7+ processor cores
- L2 cache
- L3 cache
- Chip power bus interconnect
- SMP links
- GX++ interface
- Memory controllers
- I/O links

Figure 5-83 POWER7+ processor architecture


POWER7+ processor overview


The POWER7+ processor chip is fabricated with IBM 32 nm silicon-on-insulator (SOI) technology that uses copper interconnects, and implements an on-chip L3 cache using eDRAM. The POWER7+ processor chip is 567 mm2 and is built using 2,100,000,000 components (transistors). Eight processor cores are on the chip, each with 12 execution units, 256 KB of L2 cache per core, and access to up to 80 MB of shared on-chip L3 cache. For memory access, the POWER7+ processor includes a double data rate 3 (DDR3) memory controller with four memory channels. To scale effectively, the POWER7+ processor uses a combination of local and global high-bandwidth SMP links. Table 5-111 summarizes the technology characteristics of the POWER7+ processor.
Table 5-111 Summary of POWER7+ processor technology

Technology                           POWER7+ processor
Die size                             567 mm2
Fabrication technology               32 nm lithography, copper interconnect, silicon-on-insulator, eDRAM
Components                           2,100,000,000 components (transistors), offering the equivalent function of 2,700,000,000
Processor cores                      8
Max execution threads core/chip      4/32
L2 cache per core/per chip           256 KB / 2 MB
On-chip L3 cache per core/per chip   10 MB / 80 MB
DDR3 memory controllers              Two per processor
Compatibility                        Compatible with prior generations of the POWER processor

5.8.7 Memory
Each POWER7 processor has two integrated memory controllers in the chip. Industry standard DDR3 RDIMM technology is used to increase reliability, speed, and density of memory subsystems.

Memory placement rules


The preferred memory minimums and maximums for the p460 are shown in Table 5-112.

Table 5-112 Preferred memory limits for the p460

Model                               Minimum memory   Maximum memory
IBM Flex System p460 Compute Node   32 GB            1 TB (32x 32 GB DIMMs)

Use a minimum of 2 GB of RAM per core. The functional minimum memory configuration for the system is 4 GB (2x2 GB) but that is not sufficient for reasonable production use of the system.

LP and VLP form factors


One benefit of deploying IBM Flex System systems is the ability to use LP memory DIMMs. This design allows for more choices to configure the system to match your needs. Table 5-113 lists the available memory options for the p460.
Table 5-113 Supported memory DIMMs - Power Systems compute nodes

Part number   e-config feature   Description                    Form factor   42X   43X
78P1011       EM04               2x 2 GB DDR3 RDIMM 1066 MHz    LP (a)        Yes   No
78P0501       8196               2x 4 GB DDR3 RDIMM 1066 MHz    VLP           Yes   Yes
78P0502       8199               2x 8 GB DDR3 RDIMM 1066 MHz    VLP           Yes   No
78P1917       EEMD               2x 8 GB DDR3 RDIMM 1066 MHz    VLP           Yes   Yes
78P0639       8145               2x 16 GB DDR3 RDIMM 1066 MHz   LP (a)        Yes   No
78P1915       EEME               2x 16 GB DDR3 RDIMM 1066 MHz   LP (a)        Yes   Yes
78P1539       EEMF               2x 32 GB DDR3 RDIMM 1066 MHz   LP (a)        Yes   Yes

a. If 2.5-inch HDDs are installed, low-profile DIMM features cannot be used (EM04, 8145, EEME, and EEMF cannot be used).

Requirement: Because of the design of the on-cover storage connections, if you use SAS HDDs, you must use VLP DIMMs (4 GB or 8 GB). The cover cannot close properly if LP DIMMs and SAS HDDs are configured in the same system. Combining the two physically obstructs the cover from closing. For more information, see 5.6.10, Storage on page 313.


There are 16 buffered DIMM slots on the p260 and the p24L, as shown in Figure 5-84. The IBM Flex System p460 Compute Node adds two more processors and 16 more DIMM slots, which are divided evenly (eight memory slots) per processor.

Figure 5-84 Memory DIMM topology (Processors 0 and 1 shown)

The following memory-placement rules must be considered:
- Install DIMM fillers in unused DIMM slots to ensure efficient cooling.
- Install DIMMs in pairs. Both DIMMs in a pair must be the same size, speed, type, and technology. You can mix compatible DIMMs from multiple manufacturers.
- Install only supported DIMMs, as described on the IBM ServerProven website:
  http://www.ibm.com/servers/eserver/serverproven/compat/us/


For the IBM Flex System p460 Compute Node, Table 5-114 shows the required placement of memory DIMMs, depending on the number of DIMMs installed.
Table 5-114 DIMM placement on IBM Flex System p460 Compute Node (for each supported DIMM count from 2 - 32, in increments of two, the table identifies which of the 32 DIMM slots across the four processors must be populated)

Use of mixed DIMM sizes


All installed memory DIMMs do not have to be the same size. However, for best results, keep the following groups of DIMMs the same size:
- Slots 1 - 4
- Slots 5 - 8
- Slots 9 - 12
- Slots 13 - 16
- Slots 17 - 20
- Slots 21 - 24
- Slots 25 - 28
- Slots 29 - 32
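The pairing and grouping rules above can be expressed as a quick configuration check. The following is a minimal sketch (the function and data shapes are illustrative assumptions, not part of any IBM tool):

```python
# Hypothetical checker for the p460 DIMM rules: DIMMs are installed in
# pairs, and for best results each group of four slots (1-4, 5-8, ...,
# 29-32) holds DIMMs of a single size.
def check_dimm_config(dimms):
    """dimms maps slot number (1-32) to DIMM size in GB.

    Returns a list of rule violations (empty means the layout is clean).
    """
    problems = []
    if len(dimms) % 2 != 0:
        problems.append("DIMMs must be installed in pairs")
    for start in range(1, 33, 4):                   # groups 1-4, 5-8, ...
        sizes = {dimms[s] for s in range(start, start + 4) if s in dimms}
        if len(sizes) > 1:
            problems.append(
                f"slots {start}-{start + 3} mix DIMM sizes {sorted(sizes)}")
    return problems

# 8x 8 GB DIMMs in slots 1-8 is clean:
print(check_dimm_config({s: 8 for s in range(1, 9)}))    # -> []
# Mixing 8 GB and 16 GB DIMMs inside slots 1-4 is flagged:
print(check_dimm_config({1: 8, 2: 8, 3: 16, 4: 16}))     # -> one violation
```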


5.8.8 Active Memory Expansion feature


The optional Active Memory Expansion feature is a POWER7 technology that allows the effective maximum memory capacity to be much larger than the true physical memory. Applicable to AIX 6.1 or later, this innovative compression and decompression of memory content using processor cycles allows memory expansion of up to 100%. This efficiency allows an AIX 6.1 or later partition to do more work with the same physical amount of memory. Conversely, a server can run more partitions and do more work with the same physical amount of memory.

Active Memory Expansion uses processor resources to compress and extract memory contents. The trade-off of memory capacity for processor cycles can be an excellent choice. However, the degree of expansion varies based on how compressible the memory content is. Have adequate spare processor capacity available for the compression and decompression. Tests in IBM laboratories using sample workloads showed excellent results for many workloads in terms of memory expansion per additional processor used. Other test workloads had more modest results.

You have a great deal of control over Active Memory Expansion usage. Each individual AIX partition can turn Active Memory Expansion on or off. Control parameters set the amount of expansion wanted in each partition to help control the amount of processor that is used by the Active Memory Expansion function. An IPL is required for the specific partition that is turning memory expansion on or off. After the expansion is turned on, there are monitoring capabilities in standard AIX performance tools, such as lparstat, vmstat, topas, and svmon.

Figure 5-85 shows the percentage of processor that is used to compress memory for two partitions with different profiles. The green curve corresponds to a partition that has spare processing power capacity. The blue curve corresponds to a partition constrained in processing power.
Figure 5-85 Processor usage versus memory expansion effectiveness (curve 1: plenty of spare CPU resource available; curve 2: constrained CPU resource already running at significant utilization)

Both cases show the following knee of the curve relationships for processor resources that are required for memory expansion:
- Busy processor cores do not have resources to spare for expansion.
- The more memory expansion that is done, the more processor resources are required.


The knee varies, depending on how compressible the memory contents are. This variation demonstrates the need for a case-by-case study to determine whether memory expansion can provide a positive return on investment. To help you perform this study, a planning tool is included with AIX 6.1 Technology Level 4 or later. You can use this tool to sample actual workloads and estimate both how expandable the partition memory is and how much processor resource is needed. Any Power System model runs the planning tool. Figure 5-86 shows an example of the output that is returned by this planning tool. The tool outputs various real memory and processor resource combinations to achieve the required effective memory, and proposes one particular combination. In this example, the tool proposes to allocate 58% of a processor core, to benefit from 45% extra memory capacity.
Active Memory Expansion Modeled Statistics:
-------------------------------------------
Modeled Expanded Memory Size : 8.00 GB

Expansion   True Memory    Modeled Memory    CPU Usage
Factor      Modeled Size   Gain              Estimate
---------   ------------   ---------------   ---------
1.21        6.75 GB        1.25 GB [ 19%]    0.00
1.31        6.25 GB        1.75 GB [ 28%]    0.20
1.41        5.75 GB        2.25 GB [ 39%]    0.35
1.51        5.50 GB        2.50 GB [ 45%]    0.58
1.61        5.00 GB        3.00 GB [ 60%]    1.46

Active Memory Expansion Recommendation:
---------------------------------------
The recommended AME configuration for this workload is to configure the LPAR with a memory size of 5.50 GB and to configure a memory expansion factor of 1.51. This will result in a memory expansion of 45% from the LPAR's current memory size. With this configuration, the estimated CPU usage due to Active Memory Expansion is approximately 0.58 physical processors, and the estimated overall peak CPU resource required for the LPAR is 3.72 physical processors.

Figure 5-86 Output from the AIX Active Memory Expansion planning tool
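The figures in this sample output follow simple arithmetic: the true memory size is the expanded size divided by the expansion factor (rounded up to the LPAR memory granularity), and the modeled gain is the difference. The following is an illustrative sketch of that relationship only; the real sizing comes from the AIX planning tool, and the 0.25 GB granularity is an assumption inferred from the sample output:

```python
import math

def ame_model(expanded_gb, factor, granularity_gb=0.25):
    """Model true-memory size and gain for one expansion factor.

    expanded_gb: target effective (expanded) memory size
    factor:      Active Memory Expansion factor (expanded / true)
    Returns (true_gb, gain_gb, gain_pct). Illustrative arithmetic only,
    not the planning tool itself.
    """
    # True memory needed, rounded up to the LPAR memory granularity
    true_gb = math.ceil(expanded_gb / factor / granularity_gb) * granularity_gb
    gain_gb = expanded_gb - true_gb
    gain_pct = round(100 * gain_gb / true_gb)
    return true_gb, gain_gb, gain_pct

# Reproduce the 1.51 row of Figure 5-86: 5.50 GB true memory, 2.50 GB (45%) gain
print(ame_model(8.00, 1.51))  # -> (5.5, 2.5, 45)
```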

For more information about this topic, see the white paper Active Memory Expansion: Overview and Usage Guide, which is available at this website: http://www.ibm.com/systems/power/hardware/whitepapers/am_exp.html

5.8.9 Storage
The p460 has an onboard SAS controller that can manage up to two, non-hot-pluggable internal drives. The drives attach to the cover of the server, as shown in Figure 5-87 on page 351. Even though the p460 is a full-wide server, it has the same storage options as the p260 and the p24L.


The type of local drives that are used affects the form factor of your memory DIMMs. If HDDs are chosen, only VLP DIMMs can be used because of internal spacing. There is not enough room for the 2.5-inch drives to be used with LP DIMMs (currently the 2 GB, 16 GB, and 32 GB sizes). Verify your memory choice to make sure that it is compatible with the local storage configuration. The use of SSDs does not have the same limitation, so LP DIMMs can be used with SSDs.

Figure 5-87 The IBM Flex System p260 Compute Node showing hard disk drive location

5.8.10 Local storage and cover options


Local storage options are shown in Table 5-115. None of the available drives are hot-swappable. If you use local drives, you must order the appropriate cover with connections for your drive type. The maximum number of drives that can be installed in any Power Systems compute node is two. SSDs and HDDs cannot be mixed. As shown in Figure 5-87, the local drives (HDD or SSD) are mounted to the top cover of the system. When you order your p460, select the cover that is appropriate for your system (SSD, HDD, or no drives), as shown in Table 5-115.
Table 5-115 Local storage options

Feature code   Part number   Description

2.5-inch SAS HDDs
7066           None          Top cover with HDD connectors for the IBM Flex System p460 Compute Node (full-wide)
8274           42D0627       300 GB 10K RPM non-hot-swap 6 Gbps SAS
8276           49Y2022       600 GB 10K RPM non-hot-swap 6 Gbps SAS
8311           81Y9654       900 GB 10K RPM non-hot-swap 6 Gbps SAS

1.8-inch SSDs
7065           None          Top cover with SSD connectors for the IBM Flex System p460 Compute Node (full-wide)
8207           74Y9114       177 GB SATA non-hot-swap SSD

No drives
7005           None          Top cover for no drives on the IBM Flex System p460 Compute Node (full-wide)

On covers that accommodate drives, the drives attach to an interposer that connects to the system board when the cover is properly installed, as shown in Figure 5-88.

Figure 5-88 Connector on drive interposer card mounted to server cover

The connection for the cover's drive interposer on the system board is shown in Figure 5-89.

Figure 5-89 Connection for drive interposer card mounted to the system cover


5.8.11 Hardware RAID capabilities


Disk drives and SSDs in the Power Systems compute nodes can be used to implement and manage various types of RAID arrays in operating systems. These operating systems must be on the ServerProven list. For the compute node, you must configure the RAID array through the smit sasdam command, which is the SAS RAID Disk Array Manager for AIX.

The AIX Disk Array Manager is packaged with the Diagnostics utilities on the Diagnostics CD. Use the smit sasdam command to configure the disk drives for use with the SAS controller. The diagnostics CD can be downloaded in ISO file format at this website:

http://www14.software.ibm.com/webapp/set2/sas/f/diags/download/

For more information, see Using the Disk Array Manager in the Systems Hardware Information Center at this website:

http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/p7ebj/sasusingthesasdiskarraymanager.htm

Tip: Depending on your RAID configuration, you might have to create the array before you install the operating system in the compute node. Before you create a RAID array, reformat the drives so that the sector size of the drives changes from 512 bytes to 528 bytes.

If you later decide to remove the drives, delete the RAID array before you remove the drives. If you decide to delete the RAID array and reuse the drives, you might need to reformat the drives to change the sector size of the drives from 528 bytes to 512 bytes.

5.8.12 I/O expansion


There are four I/O adapter slots on the p460. The I/O adapter slots are identical in shape (form factor). There is no onboard network capability in the Power Systems compute nodes other than the Flexible Service Processor (FSP) NIC interface, so an Ethernet adapter must be installed to provide network connectivity.

Slot 1 requirements: You must have one of the following I/O adapters installed in slot 1 of the Power Systems compute nodes:
- EN4054 4-port 10Gb Ethernet Adapter (Feature Code #1762)
- EN2024 4-port 1Gb Ethernet Adapter (Feature Code #1763)
- IBM Flex System CN4058 8-port 10Gb Converged Adapter (#EC24)

In the p460, the I/O is controlled by four P7-IOC I/O controller hub chips. This configuration provides more flexibility when resources are assigned within Virtual I/O Server (VIOS) to specific virtual machines or LPARs. Table 5-116 shows the available I/O adapters.
Table 5-116 Supported I/O adapters for the p460

Feature code   Description                                            Number of ports
1762 (a)       IBM Flex System EN4054 4-port 10Gb Ethernet Adapter    4
1763 (a)       IBM Flex System EN2024 4-port 1Gb Ethernet Adapter     4
EC24 (a)       IBM Flex System CN4058 8-port 10Gb Converged Adapter   8
EC26           IBM Flex System EN4132 2-port 10Gb RoCE Adapter        2
1764           IBM Flex System FC3172 2-port 8Gb FC Adapter           2
EC23           IBM Flex System FC5052 2-port 16Gb FC Adapter          2
EC2E           IBM Flex System FC5054 4-port 16Gb FC Adapter          4
1761           IBM Flex System IB6132 2-port QDR InfiniBand Adapter   2

a. At least one 10 Gb (#1762 or #EC24) or 1 Gb (#1763) Ethernet adapter must be configured in each server.

5.8.13 System management


There are several advanced system management capabilities that are built into the p460. A Flexible Support Processor handles most of the server-level system management. It has features, such as system alerts and Serial-over-LAN capability that are described in this section.

Flexible Support Processor


An FSP provides out-of-band system management capabilities, such as system control, runtime error detection, configuration, and diagnostic procedures. Generally, you do not interact with the Flexible Support Processor directly. Rather, you use tools, such as IBM Flex System Manager, Chassis Management Module, and external IBM Systems Director Management Console. The Flexible Support Processor provides a SOL interface, which is available by using the CMM and the console command. The IBM Flex System p460 Compute Node, even though it is a full-wide system, has only one Flexible Support Processor.

Serial over LAN


The Power Systems compute nodes do not have an onboard video chip and do not support KVM connections. Server console access is obtained by a SOL connection only. SOL provides a means to manage servers remotely by using a CLI over a Telnet or SSH connection. SOL is required to manage servers that do not have KVM support or that are attached to the IBM Flex System Manager. SOL provides console redirection for SMS and the server operating system.

The SOL feature redirects server serial-connection data over a LAN without requiring special cabling by routing the data through the CMM network interface. The SOL connection enables Power Systems compute nodes to be managed from any remote location with network access to the CMM.

SOL offers the following advantages:
- Remote administration without KVM (headless servers)
- Reduced cabling and no requirement for a serial concentrator
- Standard Telnet/SSH interface, eliminating the requirement for special client software

The CMM CLI provides access to the text-console command prompt on each server through a SOL connection. You can use this configuration to manage the Power Systems compute nodes from a remote location.

Anchor card
The anchor card, which is shown in Figure 5-90, contains the vital product data chip that stores system-specific information. The pluggable anchor card provides a means for this information to be transferred from a faulty system board to the replacement system board. Before the service processor knows what system it is on, it reads the vital product data chip to obtain system information. The vital product data chip includes information such as system type, model, and serial number.

Figure 5-90 Anchor card

5.8.14 Integrated features


As described earlier in this chapter (page 335), the IBM Flex System p460 Compute Node includes the following integrated features:
- Flexible Support Processor
- IBM POWER7 processors
- SAS RAID-capable controller
- USB port

5.8.15 Operating system support


The p460 model 42X supports the following operating systems:
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later
- AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with APAR IV14283
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later
- AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later
- AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later (AIX 5L V5.3 Service Extension is required)
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1, or later
- Novell SUSE Linux Enterprise Server 11 Service Pack 2 for POWER
- Red Hat Enterprise Linux 5.7, for POWER, or later
- Red Hat Enterprise Linux 6.2, for POWER, or later
- VIOS 2.2.1.4, or later


The p460 model 43X supports the following operating systems:
- AIX V7.1 with the 7100-02 Technology Level with Service Pack 3, or later
- AIX V6.1 with the 6100-08 Technology Level with Service Pack 3, or later
- VIOS 2.2.2.3, or later
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1 TR3, or later
- SUSE Linux Enterprise Server 11 Service Pack (SP) 2 for POWER
- Red Hat Enterprise Linux 6.4 for POWER

Important: Support by some of these operating system versions is post general availability. See the following IBM ServerProven website for the latest information about the specific versions and service levels supported and any other prerequisites:

http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/matrix.shtml

5.9 IBM Flex System PCIe Expansion Node


You can use the IBM Flex System PCIe Expansion Node to attach more PCI Express cards, such as High IOPS SSD adapters, I/O adapters, and next-generation graphics processing units (GPU), to supported IBM Flex System compute nodes. This capability is ideal for many applications that require high performance I/O, special telecommunications network interfaces, or hardware acceleration using a PCI Express GPU card. The PCIe Expansion Node supports up to four PCIe adapters and two other Flex System I/O expansion adapters. Figure 5-91 shows the PCIe Expansion Node that is attached to a compute node.

Figure 5-91 IBM Flex System PCIe Expansion Node attached to a compute node

The ordering information for the PCIe Expansion Node is listed in Table 5-117.
Table 5-117 PCIe Expansion Node ordering number and feature code

Part number   Feature code (a)   Description
81Y8983       A1BV               IBM Flex System PCIe Expansion Node

a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.


The part number includes the following items:
- IBM Flex System PCIe Expansion Node
- Two riser assemblies
- Interposer cable assembly
- Double-wide shelf
- Two auxiliary power cables (for adapters that require additional +12 V power)
- Four removable PCIe slot air flow baffles
- Documentation CD that contains the Installation and Service Guide
- Warranty information, Safety flyer, and Important Notices document

The PCIe Expansion Node is supported when it is attached to the compute nodes that are listed in Table 5-118.

Table 5-118 Supported compute nodes

Part number   Expansion Node                        p24L   p260   p270   p460   x220    x222   x240    x440
81Y8983       IBM Flex System PCIe Expansion Node   N      N      N      N      Y (a)   N      Y (a)   N

a. Both processors must be installed in the x220 and x240.

5.9.1 Features
The PCIe Expansion Node has the following features:
- Support for up to four standard PCIe 2.0 adapters:
  - Two PCIe 2.0 x16 slots that support full-length, full-height adapters (1x, 2x, 4x, 8x, and 16x adapters supported)
  - Two PCIe 2.0 x8 slots that support low-profile adapters (1x, 2x, 4x, and 8x adapters supported)
- Support for PCIe 3.0 adapters by operating them in PCIe 2.0 mode
- Support for one full-length, full-height double-wide adapter (using the space of the two full-length, full-height adapter slots)
- Support for PCIe cards with higher power requirements. The Expansion Node provides two auxiliary power connections, up to 75 W each, for a total of 150 W of additional power, by using standard 2x3 +12 V six-pin power connectors. These connectors are placed on the base system board so that they can both provide power to a single adapter (up to 225 W), or to two adapters (up to 150 W each). Power cables are used to connect from these connectors to the PCIe adapters and are included with the PCIe Expansion Node.
- Two Flex System I/O expansion connectors. The I/O expansion connectors are labeled I/O expansion 3 connector and I/O expansion 4 connector in Figure 5-95 on page 360. These I/O connectors expand the I/O capability of the attached compute node.
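As a quick sanity check of those power figures: each auxiliary connector adds up to 75 W on top of the power delivered by the PCIe slot itself. The following sketch assumes the slot supplies 75 W (a figure taken from the PCIe specification, not from this book); the function name is illustrative:

```python
# Hypothetical helper: watts available to one adapter in the PCIe
# Expansion Node, depending on how many auxiliary +12 V cables it uses.
SLOT_W = 75   # power from the PCIe slot edge connector (assumption, per PCIe spec)
AUX_W = 75    # power per auxiliary six-pin connector (from the text)

def adapter_power_budget(aux_connectors):
    """Return the watts available to one adapter (0, 1, or 2 aux cables)."""
    if not 0 <= aux_connectors <= 2:
        raise ValueError("the Expansion Node has only two auxiliary connectors")
    return SLOT_W + aux_connectors * AUX_W

print(adapter_power_budget(2))  # both aux cables to a single adapter -> 225
print(adapter_power_budget(1))  # one aux cable each for two adapters -> 150
```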


Figure 5-92 shows the locations of the PCIe slots.


Figure 5-92 PCIe Expansion Node attached to a node showing the four PCIe slots

A double wide shelf is included with the PCIe Expansion Node. The compute node and the expansion node must be attached to the shelf, and then the interposer cable is attached, which links the two electronically. Figure 5-93 shows installation of the compute node and the PCIe Expansion Node on the shelf.

Figure 5-93 Installation of a compute node and PCIe Expansion Node on to the tray


After the compute node and PCIe Expansion Node are installed onto the shelf, an interposer cable is connected between them. This cable provides the link for the PCIe bus between the two components (this cable is shown in Figure 5-94). The cable consists of a ribbon cable with a circuit board at each end.

Figure 5-94 Top view with compute node (upper) and PCIe Expansion Node (lower) covers removed

5.9.2 Architecture
The architecture diagram is shown in Figure 5-95 on page 360.

PCIe version: All PCIe bays on the expansion node operate at PCIe 2.0.

The interposer link is a PCIe 2.0 x16 link, which is connected to the switch on the main board of the PCIe Expansion Node. This PCIe switch provides two PCIe connections for bays 1 and 2 (the full-length, full-height adapter slots) and two PCIe connections for bays 3 and 4 (the low-profile adapter slots).


There are two other I/O adapter bays (x16) available that connect into the midplane of the Enterprise Chassis. You can use these bays so that a single-wide node can make use of a double-wide node's I/O bandwidth to the midplane.
Figure 5-95 Architecture diagram

Number of installed processors: Two processors must be installed in the compute node because the expansion connector is routed from processor 2. Table 5-119 shows the adapter to I/O bay mapping.
Table 5-119 Adapter to I/O bay mapping I/O expansion slot Slot 1 (Compute Node) Port on the adapter Port 1 Port 2 Port 3a Port 4a Slot 2 (Compute Node) Port 1 Port 2 Port 3a Port 4a Corresponding I/O module bay in the chassis Module bay 1 Module bay 2 Module bay 1b Module bay 2b Module bay 3 Module bay 4 Module bay 3b Module bay 4b

360

IBM PureFlex System and IBM Flex System Products and Technology

I/O expansion slot            Port on the adapter   Corresponding I/O module bay in the chassis
Slot 3 (PCIe Expansion Node)  Port 1                Module bay 1
                              Port 2                Module bay 2
                              Port 3 (a)            Module bay 1 (b)
                              Port 4 (a)            Module bay 2 (b)
Slot 4 (PCIe Expansion Node)  Port 1                Module bay 3
                              Port 2                Module bay 4
                              Port 3 (a)            Module bay 3 (b)
                              Port 4 (a)            Module bay 4 (b)

a. Ports 3 and 4 require that a four-port card be installed in the expansion slot.
b. Might require one or more port upgrades to be installed in the I/O module.
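The regular pattern in the adapter-to-bay mapping can be captured in a few lines of code. This is an illustrative sketch only (not an IBM tool): odd-numbered slots route to module bays 1 and 2, even-numbered slots route to bays 3 and 4, and the ports alternate between the two bays.

```python
# Illustrative sketch only: reproduces the slot/port to module bay
# pattern of Table 5-119.
def module_bay(slot, port):
    """Return the chassis I/O module bay for an expansion slot (1-4) and port (1-4).

    Odd slots (1 and 3) route to module bays 1 and 2; even slots (2 and 4)
    route to module bays 3 and 4. Ports alternate between the two bays, so
    ports 3 and 4 of a four-port adapter share bays with ports 1 and 2.
    """
    base = 1 if slot % 2 == 1 else 3
    return base + (port - 1) % 2
```

For example, port 3 of an adapter in slot 1 routes to module bay 1, and port 2 of an adapter in slot 4 routes to module bay 4, matching the table above.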

5.9.3 Supported PCIe adapters


The Expansion Node supports the following general adapter characteristics:
- Full-height cards, 4.2 in. (107 mm)
- Low-profile cards, 2.5 in. (64 mm)
- Half-length cards, 6.6 in. (168 mm)
- Full-length cards, 12.3 in. (312 mm)
- Support for up to four low-profile PCIe cards
- Support for up to two full-height PCIe cards
- Support for up to one full-height double-wide PCIe card
- Support for PCIe standards 1.1 and 2.0 (PCIe 3.0 adapters are supported as PCIe 2.0)

The front-facing bezel of the Expansion Node is inset from the normal face of the compute nodes. This inset facilitates the use of cables that are connected to PCIe adapters that support external connectivity. The Expansion Node provides up to 80 mm of space in front of the PCIe adapters to allow for the bend radius of these cables.

Table 5-120 lists the PCIe adapters that are supported in the Expansion Node. Some adapters must be installed in one of the full-height slots, as noted. If the NVIDIA Tesla M2090 is installed in the Expansion Node, an adapter cannot be installed in the other full-height slot. The low-profile slots and Flex System I/O expansion slots can still be used.

Table 5-120 Supported adapters

Part number  Feature code  Description                                                   Maximum supported
46C9078      A3J3          IBM 365GB High IOPS MLC Mono Adapter (low-profile adapter)    4
46C9081      A3J4          IBM 785GB High IOPS MLC Mono Adapter (low-profile adapter)    4
81Y4519 (a)  5985          640GB High IOPS MLC Duo Adapter (full-height adapter)         2
81Y4527 (a)  A1NB          1.28TB High IOPS MLC Duo Adapter (full-height adapter)        2
90Y4377      A3DY          IBM 1.2TB High IOPS MLC Mono Adapter (low-profile adapter)    4
90Y4397      A3DZ          IBM 2.4TB High IOPS MLC Duo Adapter (full-height adapter)     2


Part number  Feature code  Description                                                    Maximum supported
94Y5960      A1R4          NVIDIA Tesla M2090 (full-height adapter)                       1 (b)
47C2120      A4F1          NVIDIA GRID K1 for IBM Flex System PCIe Expansion Node         1 (c)
47C2121      A4F2          NVIDIA GRID K2 for IBM Flex System PCIe Expansion Node         1 (c)
47C2119      A4F3          NVIDIA Tesla K20 for IBM Flex System PCIe Expansion Node       1 (c)
47C2122      A4F4          Intel Xeon Phi 5110P for IBM Flex System PCIe Expansion Node   1 (c)
None         4809 (d)      IBM 4765 Crypto Card                                           1 (c)

a. Withdrawn from marketing.
b. If the NVIDIA Tesla M2090 is installed in the Expansion Node, an adapter cannot be installed in the other full-height slot. The low-profile slots and Flex System I/O expansion slots can still be used.
c. If installed, only this adapter is supported in the system. No other PCIe adapters can be installed.
d. Orderable as separate MTM 4765-001 feature 4809. Available via AAS (e-config) only.

For the current list of adapters that are supported in the Expansion Node, see the IBM ServerProven site at:
http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html

For information about the IBM High IOPS adapters, the following IBM Redbooks Product Guides are available:
- IBM High IOPS MLC Adapters: http://www.redbooks.ibm.com/abstracts/tips0907.html
- IBM High IOPS Modular Adapters: http://www.redbooks.ibm.com/abstracts/tips0937.html
- IBM High IOPS SSD PCIe Adapters: http://www.redbooks.ibm.com/abstracts/tips0729.html

Although the design of the Expansion Node accommodates a much broader set of standard PCIe adapters, only the adapters listed in Table 5-120 on page 361 are supported. If the PCI Express adapter that you require is not on the ServerProven website, use the IBM ServerProven Opportunity Request for Evaluation (SPORE) process to confirm compatibility with the configuration.

5.9.4 Supported I/O expansion cards


Table 5-121 lists the Flex System I/O expansion cards that are supported in the PCIe Expansion Node.
Table 5-121 Supported I/O adapters

Part number  Feature code  Description
90Y3554      A1R1          IBM Flex System CN4054 10Gb Virtual Fabric Adapter
90Y3558      A1R0          IBM Flex System CN4054 Virtual Fabric Adapter (Software Upgrade)
49Y7900      A10Y          IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
90Y3466      A1QY          IBM Flex System EN4132 2-port 10Gb Ethernet Adapter


Part number  Feature code  Description
90Y3454      A1QZ          IBM Flex System IB6132 2-port FDR InfiniBand Adapter
88Y6370      A1BP          IBM Flex System FC5022 2-port 16Gb FC Adapter
69Y1938      A1BM          IBM Flex System FC3172 2-port 8Gb FC Adapter
95Y2375      A2N5          IBM Flex System FC3052 2-port 8Gb FC Adapter
95Y2386      A45R          IBM Flex System FC5052 2-port 16Gb FC Adapter
95Y2391      A45S          IBM Flex System FC5054 4-port 16Gb FC Adapter
69Y1942      A1BQ          IBM Flex System FC5172 2-port 16Gb FC Adapter

Not supported: At the time of writing, the IBM Flex System EN6132 2-port 40Gb Ethernet Adapter was not supported in the IBM Flex System PCIe Expansion Node.

For the current list of adapters that are supported in the Expansion Node, see the IBM ServerProven site at:
http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html

For more information about these adapters, see the IBM Redbooks Product Guides for Flex System in the Adapters category:
http://www.redbooks.ibm.com/portals/puresystems?Open&page=pg&cat=adapters

5.10 IBM Flex System Storage Expansion Node


The IBM Flex System Storage Expansion Node is a locally attached storage node that is dedicated and directly attached to a single half-wide compute node. The Storage Expansion Node provides storage capacity for Network Attached Storage (NAS) workloads, with flexible storage to match capacity, performance, and reliability needs. Ideal workloads include distributed databases, transactional databases, NAS infrastructure, video surveillance, and streaming solutions. Figure 5-96 shows the IBM Flex System Storage Expansion Node connected to the IBM Flex System x240 Compute Node.

Figure 5-96 IBM Flex System Storage Expansion Node

Table 5-122 specifies the ordering information.


Table 5-122 IBM Flex System Storage Expansion Node ordering number and feature code

Part number  Feature code (a)  Description
68Y8588      A3JF              IBM Flex System Storage Expansion Node

a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.

The part number includes the following items:
- The IBM Flex System Storage Expansion Node
- Expansion shelf, onto which you install the compute node and Storage Expansion Node
- IBM Warranty information booklet
- Product documentation CD that includes an installation and service guide

The following features are included:
- Sliding tray to allow access to up to 12 SAS/SATA or SSD storage drives
- Hot-swappable drives
- Support for RAID 0, 1, 5, 6, 10, 50, and 60
- 512 MB or 1 GB cache with cache-to-flash super capacitor offload
- An expansion shelf to physically support the Storage Expansion Node and its compute node
- Light path diagnostic lights to aid in problem determination
- Feature on Demand upgrades to add advanced features

5.10.1 Supported nodes


The IBM Flex System Storage Expansion Node is supported when it is attached to the nodes listed in Table 5-123.
Table 5-123 Supported compute nodes

Part number  Expansion Node                          x220   x222  x240   x440  p24L  p260  p270  p460
68Y8588      IBM Flex System Storage Expansion Node  Y (a)  N     Y (a)  N     N     N     N     N

a. Both processors must be installed in the x220 and x240.

Two processors: Two processors must be installed in the x220 or x240 compute node because the expansion connector used to connect to the Storage Expansion Node is routed from processor 2.

364

IBM PureFlex System and IBM Flex System Products and Technology

Figure 5-97 shows the Storage Expansion Node front view when it is attached to an x240 compute node.

Figure 5-97 Storage Expansion Node front view: Attached to an x240 compute node

The Storage Expansion Node is a PCIe Generation 3 and SAS 2.1 compliant enclosure that supports up to twelve 2.5-inch drives. The drives can be HDDs or SSDs, with either SAS or SATA interfaces. The supported drive modes are JBOD and RAID 0, 1, 5, 6, 10, 50, and 60. The drives are accessed by opening the handle on the front of the Storage Expansion Node and sliding out the drive tray, which can be done while the unit is operational (hence the terracotta touch point on the front of the unit). The drive tray extended part way out, while connected to an x240 compute node, is shown in Figure 5-98. With the drive tray extended, all 12 hot-swap drives can be accessed on the left side of the tray.

Do not keep the drawer open: Depending on your operating environment, the expansion node might power off if the drawer is open for too long, and the chassis fans might increase in speed. The drawer should be closed fully for proper cooling and to protect system data integrity. An LED indicates that the drawer is not closed, that the drawer has been open too long, and that thermal thresholds are reached.

Figure 5-98 Storage Expansion Node with drive tray part way extended, showing the attached compute node, the twelve 2.5-inch hot-swap drive bays, the pull-handle, and the LED panel
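As a rough planning aid for the 12-drive tray, the usable capacity under the supported RAID levels can be estimated as follows. This is a hypothetical sketch, not an IBM tool: it ignores controller metadata, hot spares, and formatted-capacity overhead, and `spans` is an assumed number of sub-arrays for the nested RAID 50/60 layouts.

```python
# Hypothetical planning sketch: estimates usable capacity of an array
# built from the drives in the 12-bay tray. Ignores controller metadata,
# hot spares, and formatting overhead.
def usable_capacity(drives, drive_gb, level, spans=2):
    if level == "0":
        data = drives                 # striping only, no redundancy
    elif level in ("1", "10"):
        data = drives // 2            # mirrored pairs
    elif level == "5":
        data = drives - 1             # one drive's worth of parity
    elif level == "6":
        data = drives - 2             # two drives' worth of parity
    elif level == "50":
        data = drives - spans         # one parity drive per span
    elif level == "60":
        data = drives - 2 * spans     # two parity drives per span
    else:
        raise ValueError("unsupported RAID level: " + level)
    return data * drive_gb
```

For example, twelve 900 GB drives yield roughly 9900 GB usable in RAID 5 but only 5400 GB in RAID 10, which illustrates the capacity trade-off between parity and mirroring.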


The Storage Expansion Node is connected to the compute node through its expansion connector. Management and PCIe connections are provided by this expansion connector, as shown in Figure 5-99. Power is obtained from the enterprise chassis midplane directly, not through the compute node.
Figure 5-99 Storage Expansion Node architecture (PCIe 3.0 links from the compute node's processors, through the expansion connector, to the LSI RAID controller and its cache; a SAS expander fans out six SAS lanes to the 12-drive tray and drives the external drive LEDs)

The LSI SAS controller in the expansion node is connected directly to the PCIe bus of Processor 2 of the compute node. The result is that the compute node sees the disks in the expansion node as locally attached. Management of the Storage Expansion Node is through the IMM2 on the compute node.

5.10.2 Features on Demand upgrades


The LSI RAID controller in the Storage Expansion Node has several options that are enabled through IBM Features on Demand (FoD) and are listed in Table 5-124.
Table 5-124 FoD options available for the Storage Expansion Node

Part number  Feature code (a)  Description
90Y4410      A2Y1              ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System
90Y4447      A36G              ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System
90Y4412      A2Y2              ServeRAID M5100 Series Performance Accelerator for IBM Flex System

a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.


FoD upgrades are system-wide: The FoD upgrades are the same ones that are used with the ServeRAID M5115 available for use internally in the x220 and x240 compute nodes. If you have an M5115 installed in the attached compute node and installed any of these upgrades, those upgrades are automatically activated on the LSI controller in the expansion node. You do not need to purchase the FoD upgrades separately for the expansion node.

RAID 6 Upgrade (90Y4410)
Adds support for RAID 6 and RAID 60. This is an FoD license.

Performance Upgrade (90Y4412)
The Performance Upgrade for IBM Flex System (implemented by using the LSI MegaRAID FastPath software) provides high-performance I/O acceleration for SSD-based virtual drives by using a low-latency I/O path to increase the maximum I/O per second (IOPS) capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is an FoD license.

SSD Caching Enabler for traditional hard disk drives (90Y4447)
The SSD Caching Enabler for IBM Flex System (implemented by using the LSI MegaRAID CacheCade Pro 2.0) accelerates the performance of HDD arrays with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache to help maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is an FoD license. This feature requires that at least one SSD drive be installed.

5.10.3 Cache upgrades


Cache upgrades are available in two different sizes, either 1 GB or 512 MB. These upgrades enable the RAID 5 function of the controller. Table 5-125 lists the part numbers for these upgrades.
Table 5-125 ServeRAID M5100 cache upgrades

Part number  Feature code (a)  Description
81Y4559      A1WY              ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade for IBM System x
81Y4487      A1J4              ServeRAID M5100 Series 512MB Flash/RAID 5 Upgrade for IBM System x

a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.

No support for expansion cards: Unlike the PCIe Expansion Node, the Storage Expansion Node cannot accommodate additional I/O expansion cards.


5.10.4 Supported HDD and SSD


Table 5-126 shows the HDDs and SSDs that are supported within the Storage Expansion Node. SSDs and HDDs can be installed inside the unit at the same time. However, as a preferred practice, create each logical drive from drives of a similar type; for example, for a RAID 1 pair, choose two identical drives (both SSDs or both HDDs).
Table 5-126 HDDs and SSDs supported in Storage Expansion Node

Part number  Feature code (a)  Description

10K SAS hard disk drives
90Y8877      A2XC              IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD
90Y8872      A2XD              IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD
81Y9650      A282              IBM 900 GB 10K 6 Gbps SAS 2.5" SFF HS HDD
00AD075      A48S              IBM 1.2TB 10K 6Gbps SAS 2.5'' G2HS HDD

NL SATA
81Y9722      A1NX              IBM 250 GB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
81Y9726      A1NZ              IBM 500 GB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
81Y9730      A1AV              IBM 1TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD

10K and 15K self-encrypting drives (SED)
00AD085      A48T              IBM 1.2TB 10K 6Gbps SAS 2.5'' G2HS SED

SAS-SSD Hybrid drive
00AD102      A4G7              IBM 600GB 10K 6Gbps SAS 2.5'' G2HS Hybrid

Solid-state drives - Enterprise
41Y8331      A4FL              S3700 200GB SATA 2.5" MLC HS Enterprise SSD
41Y8336      A4FN              S3700 400GB SATA 2.5" MLC HS Enterprise SSD
41Y8341      A4FQ              S3700 800GB SATA 2.5" MLC HS Enterprise SSD
49Y6129      A3EW              IBM 200GB SAS 2.5" MLC HS Enterprise SSD
49Y6134      A3EY              IBM 400GB SAS 2.5" MLC HS Enterprise SSD
49Y6139      A3F0              IBM 800GB SAS 2.5" MLC HS Enterprise SSD
49Y6195      A4GH              IBM 1.6TB SAS 2.5" MLC HS Enterprise SSD

Solid-state drives - Enterprise Value
90Y8643      A2U3              IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD
00AJ000      A4KM              S3500 120GB SATA 2.5" MLC HS Enterprise Value SSD
00AJ005      A4KN              S3500 240GB SATA 2.5" MLC HS Enterprise Value SSD
00AJ010      A4KP              S3500 480GB SATA 2.5" MLC HS Enterprise Value SSD
00AJ015      A4KQ              S3500 800GB SATA 2.5" MLC HS Enterprise Value SSD

a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.


The front of the Storage Expansion Node has a number of LEDs on the lower right for identification and status purposes, as shown in Figure 5-100; the Fault LED is used for indicating a light path fault. Internally, there are a number of light path diagnostic LEDs that are used for fault identification.

Figure 5-100 LEDs on the front of the Storage Expansion Node (drive activity LEDs for bays 1 - 12, drive fault, and tray open)

Table 5-127 describes the statuses of the external LEDs.


Table 5-127 External LED status

LED                              Color   Meaning
Activity light (each drive bay)  Green   Flashes when there is activity on the numbered drive.
Fault/Locate                     Amber   Solid: Indicates a fault on one of the drives.
                                         Flashing: One of the drives is set to identify.
Tray Open                        Amber   Flash/beep at 15-second intervals: Drawer is not fully closed.
                                         Flash/beep at 5-second intervals: Drawer has been open too long. Close the drawer immediately.
                                         Flash/beep at 0.25-second intervals: The expansion node has reached its thermal threshold. Close the drawer immediately to avoid drive damage.

In addition to the lights that are described in Table 5-127, there are LEDs on each of the drive trays: a green LED indicates disk activity, and an amber LED indicates a drive fault. These LEDs can be observed when the drive tray is extended and the unit is operational. With the Storage Expansion Node removed from a chassis and its cover removed, there are internal LEDs located below the segmented cable track. A light path button can be pressed so that any light path indications can be observed. This button operates even when the unit is not powered because a capacitor provides the power source that illuminates the light path LEDs.


When the light path diagnostics button is pressed, the light path LED is illuminated, which shows that the button is functional. If a fault is detected, the relevant LED also lights. Figure 5-101 and Table 5-128 show the various LEDs and their statuses.

Figure 5-101 Light path LEDs located below the segmented cable track (next to the capacitor)

Table 5-128 Internal light path LED status

LED                 Meaning
Flash/RAID adapter  There is a RAID cache card fault.
Control panel       The LED panel card is not present.
Temperature         A temperature event occurred.
Storage expansion   There is a fault on the storage expansion unit.
Light path          Verify that the light path diagnostic function, including the battery, is operating properly.

External SAS connector: There is no external SAS connector on the IBM Flex System Storage Expansion Node. The storage is internal only.

5.11 I/O adapters


Each compute node can optionally accommodate one or more I/O adapters to provide connections to the chassis switch modules. The ports of the I/O adapters are routed through the chassis midplane to the I/O modules. The I/O adapters allow the compute nodes to connect, through the switch modules or pass-through modules in the chassis, to different LAN or SAN fabric types.

As described in 5.4.12, I/O expansion on page 269, any supported I/O adapter can be installed in either I/O connector. On servers with the embedded 10 Gb Ethernet controller, the LOM connector must be unscrewed and removed. After it is installed, the I/O adapter on I/O connector 1 is routed to I/O module bays 1 and 2 of the chassis. The I/O adapter that is installed on I/O connector 2 is routed to I/O module bays 3 and 4 of the chassis. For more information about specific port routing, see 4.10, I/O architecture on page 104.

This section includes the following topics:
- 5.11.1, Form factor on page 371
- 5.11.2, Naming structure on page 372
- 5.11.3, Supported compute nodes on page 373
- 5.11.4, Supported switches on page 374
- 5.11.5, IBM Flex System EN2024 4-port 1Gb Ethernet Adapter on page 376
- 5.11.6, IBM Flex System EN4132 2-port 10Gb Ethernet Adapter on page 377
- 5.11.7, IBM Flex System EN4054 4-port 10Gb Ethernet Adapter on page 378
- 5.11.8, IBM Flex System EN6132 2-port 40Gb Ethernet Adapter on page 380
- 5.11.9, IBM Flex System CN4054 10Gb Virtual Fabric Adapter on page 381
- 5.11.10, IBM Flex System CN4058 8-port 10Gb Converged Adapter on page 384
- 5.11.11, IBM Flex System EN4132 2-port 10Gb RoCE Adapter on page 387
- 5.11.12, IBM Flex System FC3172 2-port 8Gb FC Adapter on page 389
- 5.11.13, IBM Flex System FC3052 2-port 8Gb FC Adapter on page 391
- 5.11.14, IBM Flex System FC5022 2-port 16Gb FC Adapter on page 393
- 5.11.15, IBM Flex System FC5024D 4-port 16Gb FC Adapter on page 394
- 5.11.16, IBM Flex System FC5052 2-port and FC5054 4-port 16Gb FC Adapters on page 396
- 5.11.17, IBM Flex System FC5172 2-port 16Gb FC Adapter on page 398
- 5.11.18, IBM Flex System IB6132 2-port FDR InfiniBand Adapter on page 400
- 5.11.19, IBM Flex System IB6132 2-port QDR InfiniBand Adapter on page 401
- 5.11.20, IBM Flex System IB6132D 2-port FDR InfiniBand Adapter on page 403

5.11.1 Form factor


The following I/O adapter form factors for the IBM Flex System compute nodes are available:
- Standard form factor: Installed in all compute nodes, except the x222
- Mid-mezzanine (mid-mezz) form factor: For use in the x222 Compute Node

A typical standard form factor I/O adapter is shown in Figure 5-102.

Figure 5-102 I/O adapter (showing the PCIe connector, the midplane connector, and the guide block that ensures correct installation)

Standard adapters share a common size (96.7 mm x 84.8 mm).


The standard I/O adapters attach to a compute node through a high-density 216-pin PCIe connector. A typical mid-mezzanine I/O adapter is shown in Figure 5-103.

Figure 5-103 Bottom (left) and top (right) of a mid-mezzanine I/O adapter (showing the midplane connector and the connectors to the lower and upper nodes)

5.11.2 Naming structure


Figure 5-104 shows the naming structure for the I/O adapters. For example, a name such as EN2024D for the IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter decomposes as follows:
- Fabric type (first two letters): EN = Ethernet, FC = Fibre Channel, CN = Converged Network, IB = InfiniBand, SI = Systems Interconnect
- Series (first digit): 2 for 1 Gb, 3 for 8 Gb, 4 for 10 Gb, 5 for 16 Gb, 6 for InfiniBand and 40 Gb
- Vendor (next two digits): 02 = Broadcom or Brocade, 05 = Emulex, 09 = IBM, 13 = Mellanox, 17 = QLogic
- Maximum number of ports (last digit): 2, 4, 6, or 8 ports
- Adapter type (suffix): blank = standard, D = dense

Figure 5-104 The naming structure for the I/O adapters
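The naming scheme can also be expressed as a small parser. This is a hypothetical illustration, not an IBM tool; the lookup tables simply repeat the codes shown in Figure 5-104.

```python
# Hypothetical illustration of the adapter naming scheme; the lookup
# tables repeat the codes shown in Figure 5-104.
FABRICS = {"EN": "Ethernet", "FC": "Fibre Channel", "CN": "Converged Network",
           "IB": "InfiniBand", "SI": "Systems Interconnect"}
SERIES = {"2": "1 Gb", "3": "8 Gb", "4": "10 Gb", "5": "16 Gb",
          "6": "InfiniBand / 40 Gb"}
VENDORS = {"02": "Broadcom/Brocade", "05": "Emulex", "09": "IBM",
           "13": "Mellanox", "17": "QLogic"}

def parse_adapter_name(name):
    """Decompose a model name such as 'EN2024D' into its naming components."""
    return {
        "fabric": FABRICS[name[:2]],      # two-letter fabric type
        "series": SERIES[name[2]],        # first digit: link-speed series
        "vendor": VENDORS[name[3:5]],     # next two digits: vendor code
        "max_ports": int(name[5]),        # last digit: maximum number of ports
        "dense": name.endswith("D"),      # trailing D marks the dense form factor
    }
```

Applied to "EN2024D", the parser reports an Ethernet adapter in the 1 Gb series from vendor code 02 (Broadcom/Brocade) with four ports in the dense form factor, matching the decomposition in the figure.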


5.11.3 Supported compute nodes


Table 5-129 lists the available I/O adapters and their compatibility with x86 and Power Systems based compute nodes.

Table 5-129 I/O adapter compatibility matrix: Compute nodes

Part number / Feature codes (a) (x86 nodes, POWER nodes, 7863-10X only) / I/O adapter / Supported servers (x220, x222, x240, x440 (b), p24L, p260/p460, p270)

Ethernet adapters
49Y7900  A10Y, 1763, 1763   EN2024 4-port 1Gb Ethernet Adapter        Y, N, Y, Y, Y, Y, Y
90Y3466  A1QY, None, EC2D   EN4132 2-port 10 Gb Ethernet Adapter      Y, N, Y, Y, N, N, N
None     None, 1762, None   EN4054 4-port 10Gb Ethernet Adapter       N, N, N, N, Y, Y, Y
90Y3554  A1R1, None, 1759   CN4054 10Gb Virtual Fabric Adapter        Y, N, Y, Y, N, N, N
90Y3558  A1R0, None, 1760   CN4054 Virtual Fabric Adapter Upgrade (c) Y, N, Y, Y, N, N, N
None     None, EC24, None   CN4058 8-port 10Gb Converged Adapter      N, N, N, N, Y, Y, Y
None     None, EC26, None   EN4132 2-port 10Gb RoCE Adapter           N, N, N, N, Y, Y, Y
90Y3482  A3HK, None, EC31   EN6132 2-port 40Gb Ethernet Adapter       Y, N, Y, Y, N, N, N

Fibre Channel adapters
69Y1938  A1BM, 1764, 1764   FC3172 2-port 8Gb FC Adapter              Y, N, Y, Y, Y, Y, Y
95Y2375  A2N5, None, EC25   FC3052 2-port 8Gb FC Adapter              Y, N, Y, Y, N, N, N
88Y6370  A1BP, None, EC2B   FC5022 2-port 16Gb FC Adapter             Y, N, Y, Y, N, N, N
95Y2386  A45R, EC23, None   FC5052 2-port 16Gb FC Adapter             Y, N, Y, Y, Y, Y, Y
95Y2391  A45S, EC2E, None   FC5054 4-port 16Gb FC Adapter             Y, N, Y, Y, Y, Y, Y
69Y1942  A1BQ, None, None   FC5172 2-port 16Gb FC Adapter             Y, N, Y, Y, N, N, N
95Y2379  A3HU, None, None   FC5024D 4-port 16Gb FC Adapter            N, Y, N, N, N, N, N

InfiniBand adapters
90Y3454  A1QZ, None, EC2C   IB6132 2-port FDR InfiniBand Adapter      Y, N, Y, Y, N, N, N
None     None, 1761, None   IB6132 2-port QDR InfiniBand Adapter      N, N, N, N, Y, Y, Y
90Y3486  A365, None, None   IB6132D 2-port FDR InfiniBand Adapter     N, Y, N, N, N, N, N

SAS
90Y4390  A2XW, None, None   ServeRAID M5115 SAS/SATA Controller (d)   Y, N, Y, Y (b), N, N, N

a. The three feature code columns are as follows: x86 nodes: For all x86 compute nodes in both XCC (x-config) and AAS (e-config), except for x240 7863-10X. POWER nodes: For all Power Systems compute nodes in AAS (e-config). 7863-10X only: For x240 model 7863-10X in AAS (e-config) only.
b. For compatibility as listed here, ensure that the x440 is running IMM2 firmware Build 40a or later.
c. Features on Demand (software) upgrade to enable FCoE and iSCSI on the CN4054. One upgrade is needed per adapter.
d. Various enablement kits and Features on Demand upgrades are available for the ServeRAID M5115. For more information, see the ServeRAID M5115 Product Guide, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0884.html?Open

5.11.4 Supported switches


In this section, we describe switch to adapter interoperability.

Ethernet switches and adapters


Table 5-130 lists Ethernet switch-to-card compatibility.

Switch upgrades: To maximize the usable port count on the adapters, the switches might need more license upgrades. For more information, see 4.11, I/O modules on page 112.

Table 5-130 Ethernet switch to card compatibility

Switch columns (part number, feature codes XCC / AAS (a)):
- EN2092 1Gb Switch: 49Y4294, A0TF / 3598
- CN4093 10Gb Switch: 00D5823, A3HH / ESW2
- EN4093R 10Gb Switch: 95Y3309, A3J6 / ESW7
- EN4093 10Gb Switch: 49Y4270, A0TB / 3593
- EN4091 10Gb Pass-thru: 88Y6043, A1QV / 3700
- SI4093 10Gb SIM: 95Y3313, A45T / ESWA
- EN6131 40Gb Switch: 90Y9346, A3HJ / ESW6

Adapter (part number, feature codes XCC / AAS (a))           EN2092   CN4093   EN4093R  EN4093   EN4091   SI4093   EN6131
x220 Onboard 1Gb (None)                                      Yes      Yes (b)  Yes      Yes      Yes      Yes      No
x222 Onboard 10Gb (None)                                     Yes (c)  Yes (c)  Yes (c)  Yes (c)  No       Yes (c)  No
x240 Onboard 10Gb (None)                                     Yes      Yes      Yes      Yes      Yes      Yes      Yes
x440 Onboard 10Gb (None)                                     Yes      Yes      Yes      Yes      Yes      Yes      Yes
EN2024 4-port 1Gb Ethernet Adapter (49Y7900, A10Y / 1763)    Yes      Yes      Yes      Yes      Yes (d)  Yes      No
EN4132 2-port 10 Gb Ethernet Adapter (90Y3466, A1QY / EC2D)  No       No       Yes      Yes      Yes      Yes      Yes
EN4054 4-port 10Gb Ethernet Adapter (None, None / 1762)      Yes      Yes      Yes      Yes      Yes (d)  Yes      Yes
CN4054 10Gb Virtual Fabric Adapter (90Y3554, A1R1 / 1759)    Yes      Yes      Yes      Yes      Yes (d)  Yes      Yes
CN4058 8-port 10Gb Converged Adapter (None, None / EC24)     Yes (e)  Yes (f)  Yes (f)  Yes (f)  Yes (d)  Yes      No
EN4132 2-port 10Gb RoCE Adapter (None, None / EC26)          No       No       Yes      Yes      Yes      Yes      Yes
EN6132 2-port 40Gb Ethernet Adapter (90Y3482, A3HK / A3HK)   No       No       No       No       No       No       Yes

a. The first feature code that is listed is for configurations that are ordered through System x sales channels (XCC by using x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (AAS by using e-config).
b. 1 Gb is supported on the CN4093's two external 10 Gb SFP+ ports only. The 12 external Omni Ports do not support 1 GbE speeds.
c. Upgrade 1 required to enable enough internal switch ports to connect to both servers in the x222.
d. Only two of the ports of this adapter are connected when used with the EN4091 10Gb Pass-thru.
e. Only four of the eight ports of the CN4058 adapter are connected with the EN2092 switch.
f. Only six of the eight ports of the CN4058 adapter are connected with the CN4093, EN4093, and EN4093R switches.

Fibre Channel switches and adapters


Table 5-131 lists Fibre Channel switch-to-card compatibility.
Table 5-131 Fibre Channel switch to card compatibility

Switch columns (part number, feature codes XCC / AAS (a)):
- FC5022 16Gb 12-port: 88Y6374, A1EH / 3770
- FC5022 16Gb 24-port: 00Y3324, A3DP / ESW5
- FC5022 16Gb 24-port ESB: 90Y9356, A2RQ / 3771
- FC3171 8Gb switch: 69Y1930, A0TD / 3595
- FC3171 8Gb Pass-thru: 69Y1934, A0TJ / 3591

Adapter (part number, feature codes XCC / AAS (a))      FC5022 12-port  FC5022 24-port  FC5022 24-port ESB  FC3171 switch  FC3171 Pass-thru
FC3172 2-port 8Gb FC Adapter (69Y1938, A1BM / 1764)     Yes             Yes             Yes                 Yes            Yes
FC3052 2-port 8Gb FC Adapter (95Y2375, A2N5 / EC25)     Yes             Yes             Yes                 Yes            Yes
FC5022 2-port 16Gb FC Adapter (88Y6370, A1BP / EC2B)    Yes             Yes             Yes                 No             No
FC5052 2-port 16Gb FC Adapter (95Y2386, A45R / EC23)    Yes             Yes             Yes                 No             No
FC5054 4-port 16Gb FC Adapter (95Y2391, A45S / EC2E)    Yes             Yes             Yes                 No             No
FC5172 2-port 16Gb FC Adapter (69Y1942, A1BQ / A1BQ)    Yes             Yes             Yes                 Yes            Yes
FC5024D 4-port 16Gb FC Adapter (95Y2379, A3HU / A3HU)   Yes             Yes             Yes                 No             No

a. The first feature code that is listed is for configurations that are ordered through System x sales channels (XCC by using x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (AAS by using e-config).

InfiniBand switches and adapters


Table 5-132 lists InfiniBand switch-to-card compatibility.
Table 5-132 InfiniBand switch to card compatibility

Adapter (part number, feature codes XCC / AAS (a))             IB6131 InfiniBand Switch (90Y3450, A1EK / 3699)
IB6132 2-port FDR InfiniBand Adapter (90Y3454, A1QZ / EC2C)    Yes (b)
IB6132 2-port QDR InfiniBand Adapter (None, None / 1761)       Yes
IB6132D 2-port FDR InfiniBand Adapter (90Y3486, A365 / A365)   Yes (b)

a. The first feature code that is listed is for configurations that are ordered through System x sales channels (XCC by using x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (AAS by using e-config)


b. To operate at FDR speeds, the IB6131 switch will need the FDR upgrade, as listed in Table 4-44 on page 160.

5.11.5 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter


The IBM Flex System EN2024 4-port 1Gb Ethernet Adapter is a quad-port network adapter. It provides 1 Gb per second, full duplex, Ethernet links between a compute node and Ethernet switch modules that are installed in the chassis. The adapter interfaces to the compute node by using the Peripheral Component Interconnect Express (PCIe) bus. Table 5-133 lists the ordering part number and feature code.
Table 5-133 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter ordering information

Part number  HVEC feature code (x-config)  AAS feature code (e-config) (a)  Description
49Y7900      A10Y                          1763 / A10Y                      EN2024 4-port 1Gb Ethernet Adapter

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.

The following compute nodes and switches are supported:
- Compute nodes: For more information, see 5.11.3, Supported compute nodes on page 373.
- Switches: For more information, see 5.11.4, Supported switches on page 374.

The EN2024 4-port 1Gb Ethernet Adapter has the following features:
- Dual Broadcom BCM5718 ASICs
- Quad-port Gigabit 1000BASE-X interface
- Two PCI Express 2.0 x1 host interfaces, one per ASIC
- Full duplex (FDX) capability, enabling simultaneous transmission and reception of data on the Ethernet network
- MSI and MSI-X capabilities, with up to 17 MSI-X vectors
- I/O virtualization support for VMware NetQueue and Microsoft VMQ
- Seventeen receive queues and 16 transmit queues
- Seventeen MSI-X vectors supporting per-queue interrupt to host
- Function Level Reset (FLR)
- ECC error detection and correction on internal static random-access memory (SRAM)
- TCP, IP, and UDP checksum offload
- Large Send offload and TCP segmentation offload
- Receive-side scaling
- Virtual LANs (VLANs): IEEE 802.1q VLAN tagging
- Jumbo frames (9 KB)
- IEEE 802.3x flow control
- Statistic gathering (SNMP MIB II and Ethernet-like MIB [IEEE 802.3x, Clause 30])
- Comprehensive diagnostic and configuration software suite


- Advanced Configuration and Power Interface (ACPI) 1.1a-compliant: multiple power modes
- Wake-on-LAN (WOL) support
- Preboot Execution Environment (PXE) support
- RoHS-compliant

Figure 5-105 shows the IBM Flex System EN2024 4-port 1Gb Ethernet Adapter.

Figure 5-105 The EN2024 4-port 1Gb Ethernet Adapter for IBM Flex System

For more information, see the IBM Redbooks Product Guide IBM Flex System EN2024 4-port 1Gb Ethernet Adapter, TIPS0845, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0845.html?Open

5.11.6 IBM Flex System EN4132 2-port 10Gb Ethernet Adapter


The IBM Flex System EN4132 2-port 10Gb Ethernet Adapter from Mellanox provides the highest performing and most flexible interconnect solution for servers that are used in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Table 5-134 lists the ordering information for this adapter.
Table 5-134 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter ordering information

Part number  x86 nodes feature (a)  POWER nodes feature  7863-10X feature  Description
90Y3466      A1QY                   None                 EC2D              EN4132 2-port 10 Gb Ethernet Adapter

a. For all x86 compute nodes in XCC (x-config) and AAS (e-config), except for x240 7863-10X

The following compute nodes and switches are supported:
- x86 compute nodes: For more information, see 5.11.3, "Supported compute nodes" on page 373.
- Switches: For more information, see 5.11.4, "Supported switches" on page 374.

Chapter 5. Compute nodes

377

The IBM Flex System EN4132 2-port 10Gb Ethernet Adapter has the following features:
- Based on Mellanox ConnectX-3 technology
- IEEE Std. 802.3 compliant
- PCI Express 3.0 (1.1 and 2.0 compatible) through an x8 edge connector, up to 8 GTps
- 10 Gbps Ethernet
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- RDMA over Converged Ethernet (RoCE)
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Ethernet encapsulation using Ethernet over InfiniBand (EoIB)
- RoHS-6 compliant

Figure 5-106 shows the IBM Flex System EN4132 2-port 10Gb Ethernet Adapter.

Figure 5-106 The EN4132 2-port 10Gb Ethernet Adapter for IBM Flex System

For more information, see the IBM Redbooks Product Guide IBM Flex System EN4132 2-port 10Gb Ethernet Adapter, TIPS0873, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0873.html?Open
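As a back-of-the-envelope check of the host interface described above (a sketch using publicly known PCIe line rates and encodings, not vendor-published figures): PCIe 3.0 signals at 8 GTps per lane with 128b/130b encoding, so an x8 connector offers roughly 63 Gbps of raw payload bandwidth in each direction, comfortably more than the 20 Gbps that the adapter's two 10 Gb Ethernet ports can carry.

```python
def pcie_payload_gbps(gtps: float, lanes: int,
                      enc_payload: int, enc_total: int) -> float:
    """Raw per-direction payload bandwidth in Gbps: transfer rate times
    lane count, scaled by line-encoding efficiency (protocol framing
    overhead is ignored)."""
    return gtps * lanes * enc_payload / enc_total

# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
gen3_x8 = pcie_payload_gbps(8.0, 8, 128, 130)   # about 63 Gbps
# PCIe 2.0 for comparison: 5 GT/s per lane, 8b/10b encoding
gen2_x8 = pcie_payload_gbps(5.0, 8, 8, 10)      # 32 Gbps

print(f"PCIe 3.0 x8: {gen3_x8:.1f} Gbps vs 2 x 10 GbE = 20 Gbps")
```

The same arithmetic explains why the adapter remains usable, with less headroom, in PCIe 2.0 slots.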

5.11.7 IBM Flex System EN4054 4-port 10Gb Ethernet Adapter


The IBM Flex System EN4054 4-port 10Gb Ethernet Adapter from Emulex enables the installation of four 10 Gb ports of high-speed Ethernet into an IBM Power Systems compute node. These ports interface to chassis switches or pass-through modules, which enables connections within and external to the IBM Flex System Enterprise Chassis. The firmware for this 4-port adapter is provided by Emulex, while the AIX driver and AIX tool support are provided by IBM.


Table 5-135 lists the ordering information.


Table 5-135 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter ordering information
- Part number: None
- x86 nodes feature: None
- POWER nodes feature: 1762
- 7863-10X feature: None
- Description: EN4054 4-port 10Gb Ethernet Adapter

The following compute nodes and switches are supported:
- Power Systems compute nodes: For more information, see 5.11.3, "Supported compute nodes" on page 373.
- Switches: For more information, see 5.11.4, "Supported switches" on page 374.

The IBM Flex System EN4054 4-port 10Gb Ethernet Adapter has the following features and specifications:
- Four-port 10 Gb Ethernet adapter
- Dual-ASIC Emulex BladeEngine 3 controller
- Connection to either 1 Gb or 10 Gb data center infrastructure (1 Gb and 10 Gb auto-negotiation)
- PCI Express 3.0 x8 host interface (the p260 and p460 support PCI Express 2.0 x8)
- Full-duplex capability
- Bus-mastering support
- Direct memory access (DMA) support
- PXE support
- IPv4/IPv6 TCP and UDP checksum offload:
  - Large send offload
  - Large receive offload
  - Receive-Side Scaling (RSS)
  - IPv4 TCP Chimney offload
  - TCP segmentation offload

- VLAN insertion and extraction
- Jumbo frames up to 9000 bytes
- Load balancing and failover support, including adapter fault tolerance (AFT), switch fault tolerance (SFT), adaptive load balancing (ALB), teaming support, and IEEE 802.3ad
- Enhanced Ethernet (draft):
  - Enhanced Transmission Selection (ETS) (P802.1Qaz)
  - Priority-based Flow Control (PFC) (P802.1Qbb)
  - Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX, and CEE-DCBX (P802.1Qaz)
- Supports Serial over LAN (SoL)
- Total Max Power: 23.1 W


Figure 5-107 shows the IBM Flex System EN4054 4-port 10Gb Ethernet Adapter.

Figure 5-107 IBM Flex System EN4054 4-port 10Gb Ethernet Adapter

For more information, see the IBM Redbooks Product Guide IBM Flex System CN4054 10Gb Virtual Fabric Adapter and EN4054 4-port 10Gb Ethernet Adapter, TIPS0868, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0868.html?Open

5.11.8 IBM Flex System EN6132 2-port 40Gb Ethernet Adapter


The IBM Flex System EN6132 2-port 40Gb Ethernet Adapter provides a high-performance, flexible interconnect solution for servers that are used in the enterprise data center, high-performance computing, and embedded environments. The adapter is based on the Mellanox ConnectX-3 ASIC. It includes other features like RDMA and RoCE technologies that help provide acceleration and low latency for specialized applications. This adapter works with the IBM Flex System 40Gb Ethernet Switch to deliver industry-leading Ethernet bandwidth that is ideal for high-performance computing. Table 5-136 lists the ordering part number and feature codes.
Table 5-136 IBM Flex System EN6132 2-port 40Gb Ethernet Adapter ordering information
- Part number: 90Y3482
- x86 nodes feature (a): A3HK
- POWER nodes feature: None
- 7863-10X feature: EC31
- Description: EN6132 2-port 40Gb Ethernet Adapter

a. For all x86 compute nodes in XCC (x-config) and AAS (e-config), except for x240 7863-10X

The following compute nodes and switches are supported:
- x86 compute nodes: For more information, see 5.11.3, "Supported compute nodes" on page 373.
- Switches: For more information, see 5.11.4, "Supported switches" on page 374.


The IBM Flex System EN6132 2-port 40Gb Ethernet Adapter has the following features and specifications:
- PCI Express 3.0 (1.1 and 2.0 compatible) through an x8 edge connector, up to 8 GTps
- 40 Gbps Ethernet
- CPU offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- Unified Extensible Firmware Interface (UEFI)
- Wake on LAN (WoL)
- RDMA over Converged Ethernet (RoCE)
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Ethernet encapsulation (EoIB)
- Data rate: 1/10/40 Gbps Ethernet
- RoHS-6 compliant

Figure 5-108 shows the IBM Flex System EN6132 2-port 40Gb Ethernet Adapter.

Figure 5-108 The EN6132 2-port 40Gb Ethernet Adapter for IBM Flex System

For more information, see the IBM Redbooks Product Guide IBM Flex System EN6132 2-port 40Gb Ethernet Adapter, TIPS0912, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0912.html?Open

5.11.9 IBM Flex System CN4054 10Gb Virtual Fabric Adapter


The IBM Flex System CN4054 10Gb Virtual Fabric Adapter from Emulex is a 4-port 10 Gb converged network adapter. It can scale to up to 16 virtual ports and support multiple protocols such as Ethernet, iSCSI, and FCoE. Table 5-137 on page 382 lists the ordering part numbers and feature codes.


Table 5-137 IBM Flex System CN4054 10Gb Virtual Fabric Adapter ordering information
Part number   x86 nodes feature (a)   POWER nodes feature   7863-10X feature   Description
90Y3554       A1R1                    None                  1759               CN4054 10Gb Virtual Fabric Adapter
90Y3558       A1R0                    None                  1760               CN4054 Virtual Fabric Adapter Upgrade

a. For all x86 compute nodes in XCC (x-config) and AAS (e-config), except for x240 7863-10X

The following compute nodes and switches are supported:
- x86 compute nodes: For more information, see 5.11.3, "Supported compute nodes" on page 373.
- Switches: For more information, see 5.11.4, "Supported switches" on page 374.

The IBM Flex System CN4054 10Gb Virtual Fabric Adapter has the following features and specifications:
- Dual-ASIC Emulex BladeEngine 3 controller.
- Operates as a 4-port 1/10 Gb Ethernet adapter or supports up to 16 virtual network interface cards (vNICs).
- In virtual NIC (vNIC) mode, it supports:
  - Virtual port bandwidth allocation in 100 Mbps increments.
  - Up to 16 virtual ports per adapter (four per port).
  - With the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, four of the 16 vNICs (one per port) support iSCSI or FCoE.
- Support for two vNIC modes: IBM Virtual Fabric Mode and Switch Independent Mode.
- Wake On LAN support.
- With the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, the adapter adds FCoE and iSCSI hardware initiator support. iSCSI support is implemented as a full offload and presents an iSCSI adapter to the operating system.
- TCP Offload Engine (TOE) support with Windows Server 2003, 2008, and 2008 R2 (TCP Chimney) and Linux. The connection and its state are passed to the TCP offload engine. Data transmit and receive is handled by the adapter. Supported by iSCSI.
- Connection to either 1 Gb or 10 Gb data center infrastructure (1 Gb and 10 Gb auto-negotiation).
- PCI Express 3.0 x8 host interface.
- Full-duplex capability.
- Bus-mastering support.
- DMA support.
- PXE support.
- IPv4/IPv6 TCP and UDP checksum offload:
  - Large send offload
  - Large receive offload
  - RSS
  - IPv4 TCP Chimney offload
  - TCP segmentation offload


- VLAN insertion and extraction.
- Jumbo frames up to 9000 bytes.
- Load balancing and failover support, including AFT, SFT, ALB, teaming support, and IEEE 802.3ad.
- Enhanced Ethernet (draft):
  - Enhanced Transmission Selection (ETS) (P802.1Qaz)
  - Priority-based Flow Control (PFC) (P802.1Qbb)
  - Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX, and CEE-DCBX (P802.1Qaz)
- Supports Serial over LAN (SoL).
- Total Max Power: 23.1 W.

The IBM Flex System CN4054 10Gb Virtual Fabric Adapter supports the following modes of operation:

IBM Virtual Fabric Mode
This mode works only with an IBM Flex System Fabric EN4093 10Gb Scalable Switch installed in the chassis. In this mode, the adapter communicates with the switch module to obtain vNIC parameters by using Data Center Bridging Exchange (DCBX). A special tag within each data packet is added and later removed by the NIC and switch for each vNIC group. This tag helps maintain separation of the virtual channels.

In IBM Virtual Fabric Mode, each physical port is divided into four virtual ports, which provides a total of 16 virtual NICs per adapter. The default bandwidth for each vNIC is 2.5 Gbps. Bandwidth for each vNIC can be configured at the EN4093 switch from 100 Mbps to 10 Gbps, up to a total of 10 Gb per physical port. The vNICs can also be configured to have no bandwidth if you must allocate the available bandwidth to fewer than eight vNICs. In IBM Virtual Fabric Mode, you can change the bandwidth allocations through the EN4093 switch user interfaces without having to reboot the server.

When storage protocols are enabled on the adapter by using the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, six ports are Ethernet, and two ports are either iSCSI or FCoE.
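The IBM Virtual Fabric Mode bandwidth rules above (four vNICs per physical port, allocations in 100 Mbps increments from 100 Mbps to 10 Gbps or zero, and no more than 10 Gbps total per physical port) can be captured in a small validation sketch. This is an illustration of the stated constraints, not a configuration tool for the EN4093 switch:

```python
def validate_vnic_plan(allocations_mbps: list) -> None:
    """Check one physical port's vNIC bandwidth plan against the rules
    stated for IBM Virtual Fabric Mode. Raises ValueError if invalid."""
    if len(allocations_mbps) > 4:
        raise ValueError("at most four vNICs per physical port")
    for bw in allocations_mbps:
        if bw % 100 != 0:
            raise ValueError("allocations are made in 100 Mbps increments")
        if not (bw == 0 or 100 <= bw <= 10_000):
            raise ValueError("each vNIC is 0 (unallocated) or 100 Mbps to 10 Gbps")
    if sum(allocations_mbps) > 10_000:
        raise ValueError("total cannot exceed 10 Gbps per physical port")

# Default plan: four vNICs at 2.5 Gbps each
validate_vnic_plan([2500, 2500, 2500, 2500])
# Uneven plan within the rules: one large vNIC, two small, one unallocated
validate_vnic_plan([8000, 1000, 1000, 0])
```

Because allocations are enforced at the switch, a plan like this can be applied without rebooting the server, as noted above.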
Switch Independent vNIC Mode
This vNIC mode is supported by the following switches:
- IBM Flex System Fabric EN4093 10Gb Scalable Switch
- IBM Flex System EN4091 10Gb Ethernet Pass-thru and a top-of-rack switch

Switch Independent Mode offers the same capabilities as IBM Virtual Fabric Mode in terms of the number of vNICs and bandwidth that each can have. However, Switch Independent Mode extends the existing customer VLANs to the virtual NIC interfaces. The IEEE 802.1Q VLAN tag is essential to the separation of the vNIC groups by the NIC adapter or driver and the switch. The VLAN tags are added to the packet by the applications or drivers at each endstation rather than by the switch.

Physical NIC (pNIC) mode
In pNIC mode, the expansion card can operate as a standard 10 Gbps or 1 Gbps 4-port Ethernet expansion card. When in pNIC mode, the expansion card functions with any of the following I/O modules:
- IBM Flex System Fabric EN4093 10Gb Scalable Switch
- IBM Flex System EN4091 10Gb Ethernet Pass-thru and a top-of-rack switch
- IBM Flex System EN2092 1Gb Ethernet Scalable Switch


In pNIC mode, the adapter with the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, applied operates in traditional converged network adapter (CNA) mode. It operates with four ports of Ethernet and four ports of storage (iSCSI or FCoE) available to the operating system. Figure 5-109 shows the IBM Flex System CN4054 10Gb Virtual Fabric Adapter.

Figure 5-109 The CN4054 10Gb Virtual Fabric Adapter for IBM Flex System

The CN4054 supports FCoE to both FC and FCoE targets. For more information, see 7.4, "FCoE" on page 473.

For more information, see IBM Flex System CN4054 10Gb Virtual Fabric Adapter and EN4054 4-port 10Gb Ethernet Adapter, TIPS0868, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0868.html?Open

5.11.10 IBM Flex System CN4058 8-port 10Gb Converged Adapter


The IBM Flex System CN4058 8-port 10Gb Converged Adapter is an 8-port 10Gb converged network adapter (CNA) for Power Systems compute nodes that supports 10 Gb Ethernet and FCoE. With hardware protocol offloads for TCP/IP and FCoE standard, the CN4058 8-port 10Gb Converged Adapter provides maximum bandwidth with minimal usage of processor resources. This situation is key in IBM Virtual I/O Server (VIOS) environments because it enables more VMs per server, which provides greater cost savings to optimize return on investment (ROI). With eight ports, the adapter makes full use of the capabilities of all Ethernet switches in the IBM Flex System portfolio.


Table 5-138 lists the ordering information.


Table 5-138 IBM Flex System CN4058 8-port 10 Gb Converged Adapter
- Part number: None
- x86 nodes feature: None
- POWER nodes feature: EC24
- 7863-10X feature: None
- Description: CN4058 8-port 10Gb Converged Adapter

The following compute nodes and switches are supported:
- Power Systems compute nodes: For more information, see 5.11.3, "Supported compute nodes" on page 373.
- Switches: For more information, see 5.11.4, "Supported switches" on page 374.

Figure 5-110 shows the CN4058 8-port 10Gb Converged Adapter.

Figure 5-110 The CN4058 8-port 10Gb Converged Adapter for IBM Flex System

The IBM Flex System CN4058 8-port 10Gb Converged Adapter has the following features:
- Eight-port 10 Gb Ethernet adapter
- Dual-ASIC controller using the Emulex XE201 (Lancer) design
- PCI Express 2.0 x8 host interface (5 GTps)
- MSI-X support
- IBM Fabric Manager support

The adapter has the following Ethernet features:
- IPv4/IPv6 TCP and UDP checksum offload, Large Send Offload (LSO), Large Receive Offload, Receive Side Scaling (RSS), and TCP Segmentation Offload (TSO)
- VLAN insertion and extraction
- Jumbo frames up to 9000 bytes
- Priority Flow Control (PFC) for Ethernet traffic
- Network boot
- Interrupt coalescing


- Load balancing and failover support, including adapter fault tolerance (AFT), switch fault tolerance (SFT), adaptive load balancing (ALB), link aggregation, and IEEE 802.1AX

The adapter has the following FCoE features:
- Common driver for CNAs and HBAs
- 3,500 N_Port ID Virtualization (NPIV) interfaces (total for adapter)
- Support for FIP and FCoE Ether Types
- Fabric Provided MAC Addressing (FPMA) support
- 2048 concurrent port logins (RPIs) per port
- 1024 active exchanges (XRIs) per port

iSCSI support: The CN4058 does not support iSCSI hardware offload.

The adapter supports the following IEEE standards:
- PCI Express base spec 2.0, PCI Bus Power Management Interface rev. 1.2, and Advanced Error Reporting (AER)
- IEEE 802.3ap (Ethernet over Backplane)
- IEEE 802.1q (VLAN)
- IEEE 802.1p (QoS/CoS)
- IEEE 802.1AX (Link Aggregation)
- IEEE 802.3x (Flow Control)
- Enhanced I/O Error Handling (EEH)
- Enhanced Transmission Selection (ETS) (P802.1Qaz)
- Priority-based Flow Control (PFC) (P802.1Qbb)
- Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX, and CEE-DCBX (P802.1Qaz)

Supported switches are listed in 5.11.4, "Supported switches" on page 374. One or two compatible 1 Gb or 10 Gb I/O modules must be installed in the corresponding I/O bays in the chassis. When connected to the 1 Gb switch, the adapter operates at 1 Gb speeds. To maximize the number of usable adapter ports, switch upgrades must also be ordered, as shown in Table 5-139. The table also specifies how many ports of the CN4058 adapter are supported after all the indicated upgrades are applied. Switches should be installed in pairs to maximize the number of ports that are enabled and to provide redundant network connections.

Tip: With the switches currently available for Flex System, at most six of the eight ports of the CN4058 adapter are connected. For more information, see the Port count column in Table 5-139.
Table 5-139 I/O modules and upgrades for use with the CN4058 adapter

Switches and switch upgrades                                       Port count (per pair of switches) (a)
IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch #ESW2     6
+ CN4093 10Gb Converged Scalable Switch (Upgrade 1) #ESU1
+ CN4093 10Gb Converged Scalable Switch (Upgrade 2) #ESU2
IBM Flex System Fabric EN4093R 10Gb Scalable Switch #ESW7              6
+ EN4093 10Gb Scalable Switch (Upgrade 1) #3596
+ EN4093 10Gb Scalable Switch (Upgrade 2) #3597
IBM Flex System Fabric EN4093 10Gb Scalable Switch #3593               6
+ EN4093 10Gb Scalable Switch (Upgrade 1) #3596
+ EN4093 10Gb Scalable Switch (Upgrade 2) #3597
IBM Flex System EN4091 10Gb Ethernet Pass-thru #3700                   2
IBM Flex System EN2092 1Gb Ethernet Scalable Switch #3598              4
+ EN2092 1Gb Ethernet Scalable Switch (Upgrade 1) #3594

a. This column indicates the number of adapter ports that are active if all the upgrades are installed. See the following list for details.

To make full use of the capabilities of the CN4058 adapter, the following I/O modules should be upgraded to maximize the number of active internal ports:
- For CN4093, EN4093, and EN4093R switches: Upgrade 1 and Upgrade 2 are both required, as indicated in Table 5-139 on page 386, for the CN4093, EN4093, and EN4093R to use six ports on the adapter. If only Upgrade 1 is applied, only four ports per adapter are connected. If neither upgrade is applied, only two ports per adapter are connected.
- For the EN4091 Pass-thru: The EN4091 Pass-thru has only 14 internal ports and therefore supports only ports 1 and 2 of the adapter.
- For the EN2092: Upgrade 1 of the EN2092 is required, as indicated in Table 5-139 on page 386, to use four ports of the adapter. If Upgrade 1 is not applied, only two ports per adapter are connected.

The CN4058 supports FCoE to both FC and FCoE targets. For more information, see 7.4, "FCoE" on page 473.

The IBM Flex System CN4058 8-port 10Gb Converged Adapter supports the following operating systems:
- VIOS 2.2.2.0 or later is required to assign the adapter to a VIOS partition
- AIX Version 6.1 with the 6100-08 Technology Level Service Pack 3
- AIX Version 7.1 with the 7100-02 Technology Level Service Pack 3
- IBM i 6.1, supported as a VIOS client
- IBM i 7.1, supported as a VIOS client
- Red Hat Enterprise Linux 6.3 for POWER, or later, with current maintenance updates available from Red Hat
- SUSE Linux Enterprise Server 11 Service Pack 2, with additional driver updates provided by SUSE
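The CN4058 port-count rules above can be summarized in a small lookup sketch. This is an illustration of the text; the function and dictionary names are made up for the example and are not IBM tooling:

```python
# Active CN4058 adapter ports by switch model and number of port
# upgrades applied, per the rules described above.
ACTIVE_PORTS = {
    "CN4093":  {0: 2, 1: 4, 2: 6},
    "EN4093":  {0: 2, 1: 4, 2: 6},
    "EN4093R": {0: 2, 1: 4, 2: 6},
    "EN4091":  {0: 2},              # pass-thru: 14 internal ports, ports 1-2 only
    "EN2092":  {0: 2, 1: 4},        # Upgrade 1 enables four ports
}

def cn4058_active_ports(switch: str, upgrades: int) -> int:
    """Return how many CN4058 adapter ports are connected for a given
    switch model and number of installed upgrades (extra upgrades
    beyond what the switch supports add no ports)."""
    counts = ACTIVE_PORTS[switch]
    return counts[min(upgrades, max(counts))]

print(cn4058_active_ports("EN4093", 2))  # 6
print(cn4058_active_ports("EN2092", 0))  # 2
```

As the Tip above notes, no currently available switch pair connects more than six of the adapter's eight ports.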

5.11.11 IBM Flex System EN4132 2-port 10Gb RoCE Adapter


The IBM Flex System EN4132 2-port 10Gb RoCE Adapter for Power Systems compute nodes delivers high bandwidth and provides RDMA over Converged Ethernet (RoCE) for low latency application requirements.


Clustered IBM DB2 databases, web infrastructure, and high frequency trading are just a few applications that achieve significant throughput and latency improvements, resulting in faster access, real-time response, and more users per server. This adapter improves network performance by increasing available bandwidth while it decreases the associated transport load on the processor. Table 5-140 lists the ordering part number and feature code.
Table 5-140 Ordering information
- Part number: None
- x86 nodes feature: None
- POWER nodes feature: EC26
- 7863-10X feature: None
- Description: EN4132 2-port 10Gb RoCE Adapter

The following compute nodes and switches are supported:
- Power Systems compute nodes: For more information, see 5.11.3, "Supported compute nodes" on page 373.
- Switches: For more information, see 5.11.4, "Supported switches" on page 374.

Figure 5-111 shows the EN4132 2-port 10Gb RoCE Adapter.

Figure 5-111 IBM Flex System EN4132 2-port 10 Gb RoCE Adapter

The IBM Flex System EN4132 2-port 10Gb RoCE Adapter has the following features:

RDMA over Converged Ethernet (RoCE)
The EN4132 2-port 10Gb RoCE Adapter, which is based on Mellanox ConnectX-2 technology, uses the InfiniBand Trade Association's RDMA over Converged Ethernet (RoCE) technology to deliver similar low latency and high performance over Ethernet networks. By using Data Center Bridging capabilities, RoCE provides efficient low-latency RDMA services over Layer 2 Ethernet. The RoCE software stack maintains existing and future compatibility with bandwidth and latency-sensitive applications. With link-level interoperability in the existing Ethernet infrastructure, network administrators can use existing data center fabric management solutions.


Sockets acceleration
Applications that use TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10 GbE adapters. The hardware-based stateless offload engines in ConnectX-2 reduce the processor impact of IP packet transport, allowing more processor cycles to work on the application.

I/O virtualization
ConnectX-2 with Virtual Intelligent Queuing (Virtual-IQ) technology provides dedicated adapter resources and ensured isolation and protection for virtual machines within the server. I/O virtualization with ConnectX-2 gives data center managers better server usage while it reduces cost, power, and cable complexity.

The IBM Flex System EN4132 2-port 10Gb RoCE Adapter has the following specifications (based on Mellanox ConnectX-2 technology):
- PCI Express 2.0 (1.1 compatible) through an x8 edge connector with up to 5 GTps
- 10 Gbps Ethernet
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- RDMA over Converged Ethernet (RoCE)
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Ethernet encapsulation (EoIB)
- 128 MAC/VLAN addresses per port
- RoHS-6 compliant

The adapter meets the following IEEE specifications:
- IEEE 802.3ae 10 Gigabit Ethernet
- IEEE 802.3ad Link Aggregation and Failover
- IEEE 802.3az Energy Efficient Ethernet
- IEEE 802.1Q, .1p VLAN tags and priority
- IEEE 802.1Qau Congestion Notification
- IEEE P802.1Qbb D1.0 Priority-based Flow Control
- IEEE 1588 Precision Clock Synchronization
- Jumbo frame support (10 KB)

The EN4132 2-port 10Gb RoCE Adapter supports the following operating systems:
- AIX V7.1 with the 7100-02 Technology Level, or later
- AIX V6.1 with the 6100-08 Technology Level, or later
- SUSE Linux Enterprise Server 11 Service Pack 2 for POWER, with current maintenance updates available from SUSE to enable all planned functionality
- Red Hat Enterprise Linux 6.3, or later

5.11.12 IBM Flex System FC3172 2-port 8Gb FC Adapter


The IBM Flex System FC3172 2-port 8Gb FC Adapter from QLogic enables high-speed access for IBM Flex System Enterprise Chassis compute nodes to connect to a Fibre Channel SAN. This adapter is based on the proven QLogic 2532 8 Gb ASIC design. It works with any of the 8 Gb or 16 Gb IBM Flex System Fibre Channel switch modules.


Table 5-141 lists the ordering part number and feature code.
Table 5-141 IBM Flex System FC3172 2-port 8 Gb FC Adapter ordering information
- Part number: 69Y1938
- x86 nodes feature (a): A1BM
- POWER nodes feature: 1764
- 7863-10X feature: 1764
- Description: FC3172 2-port 8Gb FC Adapter

a. For all x86 compute nodes in XCC (x-config) and AAS (e-config), except for x240 7863-10X

The following compute nodes and switches are supported:
- Compute nodes: For more information, see 5.11.3, "Supported compute nodes" on page 373.
- Switches: For more information, see 5.11.4, "Supported switches" on page 374.

The IBM Flex System FC3172 2-port 8Gb FC Adapter has the following features:
- QLogic ISP2532 controller
- PCI Express 2.0 x4 host interface
- Bandwidth: 8 Gb per second maximum at half-duplex and 16 Gb per second maximum at full-duplex per port
- 8/4/2 Gbps auto-negotiation
- Support for FCP SCSI initiator and target operation
- Support for full-duplex operation
- Support for Fibre Channel protocol SCSI (FCP-SCSI) and Fibre Channel Internet Protocol (FCP-IP)
- Support for point-to-point fabric connection (F-port fabric login)
- Support for Fibre Channel Arbitrated Loop (FC-AL) public loop profile: Fibre Loop (FL-Port) login
- Support for Fibre Channel services class 2 and 3
- Configuration and boot support in UEFI
- Power usage: 3.7 W typical
- RoHS 6 compliant


Figure 5-112 shows the IBM Flex System FC3172 2-port 8Gb FC Adapter.

Figure 5-112 The IBM Flex System FC3172 2-port 8Gb FC Adapter

For more information, see IBM Flex System FC3172 2-port 8Gb FC Adapter, TIPS0867, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0867.html?Open

5.11.13 IBM Flex System FC3052 2-port 8Gb FC Adapter


The IBM Flex System FC3052 2-port 8Gb FC Adapter from Emulex provides compute nodes with high-speed access to a Fibre Channel SAN. This 2-port 8 Gb adapter is based on the Emulex 8 Gb Fibre Channel application-specific integrated circuit (ASIC). It uses industry-proven technology to provide high-speed and reliable access to SAN-connected storage. The two ports enable redundant connections to the SAN, which can increase reliability and reduce downtime. Table 5-142 lists the ordering part number and feature codes.
Table 5-142 IBM Flex System FC3052 2-port 8 Gb FC Adapter ordering information
- Part number: 95Y2375
- x86 nodes feature (a): A2N5
- POWER nodes feature: None
- 7863-10X feature: EC25
- Description: FC3052 2-port 8Gb FC Adapter

a. For all x86 compute nodes in XCC (x-config) and AAS (e-config), except for x240 7863-10X

The following compute nodes and switches are supported:
- x86 compute nodes: For more information, see 5.11.3, "Supported compute nodes" on page 373.
- Switches: For more information, see 5.11.4, "Supported switches" on page 374.

The IBM Flex System FC3052 2-port 8Gb FC Adapter has the following features and specifications:
- Uses the Emulex Saturn 8 Gb Fibre Channel I/O Controller chip


- Multifunction PCIe 2.0 device with two independent FC ports
- Auto-negotiation between 2-Gbps, 4-Gbps, and 8-Gbps FC link attachments
- Complies with the PCIe base and CEM 2.0 specifications
- Enablement of high-speed and dual-port connection to a Fibre Channel SAN
- Comprehensive virtualization capabilities with support for N_Port ID Virtualization (NPIV) and Virtual Fabric
- Simplified installation and configuration by using common HBA drivers
- Common driver model that eases management and enables upgrades independent of HBA firmware
- Fibre Channel specifications:
  - Bandwidth: Burst transfer rate of up to 1600 MBps full-duplex per port
  - Support for point-to-point fabric connection: F-Port Fabric Login
  - Support for FC-AL and FC-AL-2 FL-Port Login
  - Support for Fibre Channel services class 2 and 3
- Single-chip design with two independent 8 Gbps serial Fibre Channel ports, each of which provides these features:
  - Reduced instruction set computer (RISC) processor
  - Integrated serializer/deserializer
  - Receive DMA sequencer
  - Frame buffer
- Onboard DMA: DMA controller for each port (transmit and receive)
- Frame buffer first in, first out (FIFO): Integrated transmit and receive frame buffer for each data channel

Figure 5-113 shows the IBM Flex System FC3052 2-port 8Gb FC Adapter.

Figure 5-113 IBM Flex System FC3052 2-port 8Gb FC Adapter


For more information, see IBM Flex System FC3052 2-port 8Gb FC Adapter, TIPS0869, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0869.html?Open
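The 1600 MBps full-duplex burst figure quoted for the FC3052 follows from the Fibre Channel line rate: 8GFC signals at 8.5 GBaud with 8b/10b encoding, which works out to 850 MBps of raw bytes per direction; vendors conventionally quote about 800 MBps of payload per direction after framing and protocol overhead, or roughly 1600 MBps full duplex. A quick sketch of that arithmetic (standard Fibre Channel rates, not vendor data):

```python
def fc_raw_mbps(gbaud: float, enc_payload: int = 8, enc_total: int = 10) -> float:
    """Raw per-direction byte rate for a Fibre Channel link in MBps:
    baud rate x 8b/10b encoding efficiency, divided by 8 bits per byte.
    Frame and protocol overhead are not subtracted."""
    return gbaud * 1e9 * enc_payload / enc_total / 8 / 1e6

raw = fc_raw_mbps(8.5)  # 8GFC line rate is 8.5 GBaud
print(f"8GFC raw: {raw:.0f} MBps per direction, {2 * raw:.0f} MBps full duplex")
```

The same calculation for 4GFC (4.25 GBaud) halves each figure, matching the adapter's auto-negotiated lower speeds.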

5.11.14 IBM Flex System FC5022 2-port 16Gb FC Adapter


The network architecture on the IBM Flex System platform addresses network challenges. It provides a scalable way to integrate, optimize, and automate your data center. The IBM Flex System FC5022 2-port 16Gb FC Adapter enables high-speed access to external SANs. This adapter is based on the Brocade architecture, and offers end-to-end 16 Gb connectivity to a SAN. It can auto-negotiate, and also works at 8 Gb and 4 Gb speeds. It has enhanced features such as N_Port trunking and increased encryption for security. Table 5-143 lists the ordering part number and feature code.
Table 5-143 IBM Flex System FC5022 2-port 16 Gb FC Adapter ordering information
- Part number: 88Y6370
- x86 nodes feature (a): A1BP
- POWER nodes feature: None
- 7863-10X feature: EC2B
- Description: FC5022 2-port 16Gb FC Adapter

a. For all x86 compute nodes in XCC (x-config) and AAS (e-config), except for x240 7863-10X

The following compute nodes and switches are supported:
- x86 compute nodes: For more information, see 5.11.3, "Supported compute nodes" on page 373.
- Switches: For more information, see 5.11.4, "Supported switches" on page 374.

The IBM Flex System FC5022 2-port 16Gb FC Adapter has the following features:
- 16 Gbps Fibre Channel:
  - Uses 16 Gbps bandwidth to eliminate internal oversubscription
  - Investment protection with the latest Fibre Channel technologies
  - Reduces the number of ISL external switch ports, optics, cables, and power
- Over 500,000 IOPS per port, which maximizes transaction performance and the density of VMs per compute node. Achieves performance of 315,000 IOPS for email exchange and 205,000 IOPS for SQL Database.
- Boot from SAN allows automated SAN boot LUN discovery to simplify boot from SAN and reduce image management complexity.
- Brocade Server Application Optimization (SAO) provides QoS levels assignable to VM applications.
- Direct I/O enables native (direct) I/O performance by allowing VMs to bypass the hypervisor and communicate directly with the adapter.
- Brocade Network Advisor simplifies and unifies the management of Brocade adapter, SAN, and LAN resources through a single user interface.
- LUN Masking, which is an initiator-based LUN masking for storage traffic isolation.
- NPIV allows multiple host initiator N_Ports to share a single physical N_Port, dramatically reducing SAN hardware requirements.


- Target Rate Limiting (TRL) throttles data traffic when accessing slower speed storage targets to avoid back pressure problems.
- RoHS-6 compliant.

Figure 5-114 shows the IBM Flex System FC5022 2-port 16Gb FC Adapter.

Figure 5-114 IBM Flex System FC5022 2-port 16Gb FC Adapter

For more information, see IBM Flex System FC5022 2-port 16Gb FC Adapter, TIPS0891, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0891.html?Open
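NPIV, listed among the FC5022 features above, lets one physical N_Port present multiple virtual worldwide port names (WWPNs) to the fabric, for example one per virtual machine. On Linux hosts, virtual ports are typically created through the Fibre Channel transport sysfs interface by writing a "wwpn:wwnn" pair. The sketch below only formats names for that interface; the base WWN values and the derive-by-index scheme are hypothetical examples, not an IBM or Brocade convention:

```python
def format_wwn(wwn: int) -> str:
    """Render a 64-bit worldwide name in the colon-separated form
    commonly shown by SAN tools, e.g. 10:00:00:05:1e:00:00:01."""
    raw = f"{wwn:016x}"
    return ":".join(raw[i:i + 2] for i in range(0, 16, 2))

def vport_create_string(base_wwpn: int, base_wwnn: int, index: int) -> str:
    """Build a 'wwpn:wwnn' hex string of the kind written to
    /sys/class/fc_host/hostN/vport_create on Linux. Deriving each
    virtual WWPN by adding an index to the base is a made-up scheme
    for illustration; real deployments assign WWPNs administratively."""
    vwwpn = base_wwpn + index
    return f"{vwwpn:016x}:{base_wwnn:016x}"

# Hypothetical base names for illustration only:
base_wwpn = 0x100000051E000001
base_wwnn = 0x200000051E000001
print(format_wwn(base_wwpn))
print(vport_create_string(base_wwpn, base_wwnn, 2))
```

Whether a given virtual port can actually log in depends on the adapter's NPIV support and on the fabric switch accepting NPIV logins.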

5.11.15 IBM Flex System FC5024D 4-port 16Gb FC Adapter


Important: The IBM Flex System FC5024D 4-port 16Gb FC Adapter is only supported in the x222 Compute Node.

The IBM Flex System FC5024D 4-port 16Gb FC Adapter is a quad-port mid-mezzanine card for the IBM Flex System x222 Compute Node, with two ports routed to each server in the x222. This adapter is based on Brocade architecture, and offers end-to-end 16 Gb connectivity to a SAN. It has enhanced features such as N_Port trunking, N_Port ID Virtualization (NPIV), and boot from SAN with automatic LUN discovery and end-to-end SAO. Table 5-144 lists the ordering part number and feature code.
Table 5-144 IBM Flex System FC5024D 4-port 16 Gb FC Adapter ordering information
- Part number: 95Y2379
- Feature code (a): A3HU
- Description: IBM Flex System FC5024D 4-port 16Gb FC Adapter

a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.


The FC5024D is designed to work best with the IBM Flex System FC5022 16Gb SAN Scalable Switch. Working together, these products deliver considerable value by simplifying the deployment of server and SAN resources, reducing infrastructure and operational costs, and maximizing server and SAN reliability, availability, and resiliency.

The following compute nodes and switches are supported:
- x86 compute nodes: For more information, see 5.11.3, "Supported compute nodes" on page 373.
- Switches: For more information, see 5.11.4, "Supported switches" on page 374.

The IBM Flex System FC5024D 4-port 16Gb FC Adapter has the following features:
- Supported in the dual-server x222 Compute Node, where two ports of the adapter are routed to each of the servers
- Dual-ASIC design
- Supports high-performance 16 Gbps Fibre Channel:
  - Uses 16 Gbps bandwidth to eliminate internal oversubscription
  - Investment protection with the latest Fibre Channel technologies
  - Reduces the number of ISL external switch ports, optics, cables, and power
- RoHS-6 compliant adapter

Each ASIC connects to one of the two servers in the x222; the two ASICs act as two independent 2-port adapters, each with the following features and functions:
- Based on the Brocade Catapult2 ASIC
- Over 500,000 IOPS per port, which maximizes transaction performance and density of VMs per compute node; achieves performance of 330,000 IOPS for email exchange and 205,000 IOPS for SQL Database
- Boot from SAN allows automated SAN boot LUN discovery to simplify boot from SAN and reduce image management complexity
- Brocade SAO provides QoS levels assignable to VM applications
- Direct I/O enables native (direct) I/O performance by allowing VMs to bypass the hypervisor and communicate directly with the adapter
- Brocade Network Advisor simplifies and unifies the management of Brocade adapter, SAN, and LAN resources through a single pane of glass
- LUN Masking, an initiator-based LUN masking for storage traffic isolation
- N_Port ID Virtualization (NPIV) allows multiple host initiator N_Ports to share a single physical N_Port, dramatically reducing SAN hardware requirements
- Target Rate Limiting (TRL) throttles data traffic when accessing slower speed storage targets to avoid back pressure problems
- Unified driver across all Brocade-based IBM adapter products with automated version synchronization capability
- FEC provides a method to recover from errors caused on links during data transmission
- Buffer-to-Buffer (BB) Credit Recovery enables ports to recover lost BB credits
- FCP-IM I/O Profiling allows users to analyze traffic patterns and helps fine-tune Fibre Channel adapter ports, fabrics, and targets for better performance

Chapter 5. Compute nodes

395

Figure 5-115 shows the IBM Flex System FC5024D 4-port 16Gb FC Adapter.

Figure 5-115 IBM Flex System FC5024D 4-port 16Gb FC Adapter

For more information, see IBM Flex System FC5024D 4-port 16Gb FC Adapter, TIPS1047, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips1047.html?Open

5.11.16 IBM Flex System FC5052 2-port and FC5054 4-port 16Gb FC Adapters
The network architecture of the IBM Flex System platform is specifically designed to address network challenges and give a scalable way to integrate, optimize, and automate the data center. The IBM Flex System FC5052 2-port and FC5054 4-port 16Gb FC Adapters enable high-speed access for Flex System compute nodes to an external SAN. These adapters are based on the proven Emulex Fibre Channel stack and work with the 16 Gb Flex System Fibre Channel switch modules. The FC5054 adapter is based on a two-ASIC design, which allows for logical partitioning on Power Systems compute nodes.

When compared to the previous generation of 8 Gb adapters, the new generation of 16 Gb adapters doubles throughput speeds for Fibre Channel traffic. As a result, it is possible to manage increased amounts of data.

Table 5-145 lists the ordering part numbers and feature codes.
Table 5-145 Ordering information

Part number   x86 nodes feature (a)   POWER nodes feature   7863-10X feature   Description
95Y2386       A45R                    EC23                  None               FC5052 2-port 16Gb FC Adapter
95Y2391       A45S                    EC2E                  None               FC5054 4-port 16Gb FC Adapter

a. For all x86 compute nodes in XCC (x-config) and AAS (e-config), except for x240 7863-10X

The following compute nodes and switches are supported:
- x86 compute nodes: For more information, see 5.11.3, Supported compute nodes on page 373.
- Switches: For more information, see 5.11.4, Supported switches on page 374.


Both adapters offer the following features:
- Fibre Channel protocol SCSI (FCP-SCSI) and Fibre Channel Internet protocol (FCP-IP)
- Point-to-point fabric connection: F-Port Fabric Login
- Fibre Channel Arbitrated Loop (FC-AL) and FC-AL-2 FL-Port Login
- Fibre Channel services class 2 and 3
- LUN masking, an initiator-based LUN masking for storage traffic isolation
- N_Port ID Virtualization (NPIV), which allows multiple host initiator N_Ports to share a single physical N_Port, dramatically reducing SAN hardware requirements
- FCP SCSI initiator and target operation
- Full-duplex operation

The IBM Flex System FC5052 2-port 16Gb FC Adapter has the following features:
- 2-port 16 Gb Fibre Channel adapter
- Single-ASIC controller that uses the Emulex XE201 (Lancer) design
- Auto-negotiation to 16 Gb, 8 Gb, or 4 Gb
- PCI Express 2.0 x8 host interface (5 GT/s)
- MSI-X support
- Common driver model with the CN4054 10Gb Ethernet, EN4054 10Gb Ethernet, and FC3052 8Gb FC adapters
- IBM Fabric Manager support

Figure 5-116 shows the IBM Flex System FC5052 2-port 16Gb FC Adapter.

Figure 5-116 IBM Flex System FC5052 2-port 16Gb FC Adapter

The IBM Flex System FC5054 4-port 16Gb FC Adapter has the following features:
- 4-port 16 Gb Fibre Channel adapter
- Dual-ASIC controller that uses the Emulex XE201 (Lancer) design, which allows for logical partitioning on Power Systems compute nodes


- Auto-negotiation to 16 Gb, 8 Gb, or 4 Gb
- Two PCI Express 2.0 x8 host interfaces (each 5 GT/s), one for each ASIC
- ASICs treated as separate devices by the driver: there are no shared resources (that is, no PCIe bridge) between the ASICs, and each ASIC has its own firmware chip
- MSI-X support
- Common driver model with the CN4054 10Gb Ethernet, EN4054 10Gb Ethernet, and FC3052 8Gb FC adapters
- IBM Fabric Manager support

Figure 5-117 shows the IBM Flex System FC5054 4-port 16Gb FC Adapter.

Figure 5-117 IBM Flex System FC5054 4-port 16Gb FC Adapter

For more information, see the IBM Flex System FC5052 2-port and FC5054 4-port 16Gb FC Adapter, TIPS1044, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips1044.html?Open

5.11.17 IBM Flex System FC5172 2-port 16Gb FC Adapter


The IBM Flex System FC5172 2-port 16Gb FC Adapter from QLogic enables high-speed access for IBM Flex System Enterprise Chassis compute nodes to connect to a Fibre Channel SAN. It works with the 8 Gb or 16 Gb IBM Flex System Fibre Channel switch modules. Table 5-146 lists the ordering part number and feature codes.
Table 5-146 IBM Flex System FC5172 2-port 16Gb FC Adapter ordering information

Part number   x86 nodes feature (a)   POWER nodes feature   7863-10X feature   Description
69Y1942       A1BQ                    None                  None               FC5172 2-port 16Gb FC Adapter

a. For all x86 compute nodes in XCC (x-config) and AAS (e-config), except for x240 7863-10X


The following compute nodes and switches are supported:
- Compute nodes: For more information, see 5.11.3, Supported compute nodes on page 373.
- Switches: For more information, see 5.11.4, Supported switches on page 374.

The IBM Flex System FC5172 2-port 16Gb FC Adapter has the following features:
- QLogic ISP8324 controller
- PCI Express 3.0 x4 host interface
- Bandwidth: 16 Gbps maximum at half-duplex and 32 Gbps maximum at full-duplex per port
- Support for FCP SCSI initiator and target operation
- 16/8/4/2 Gbps auto-negotiation
- Support for full-duplex operation
- Support for Fibre Channel protocol SCSI (FCP-SCSI) and Fibre Channel Internet Protocol (FCP-IP)
- Support for point-to-point fabric connection (F-port fabric login)
- Support for Fibre Channel Arbitrated Loop (FC-AL) public loop profile: FL-Port Login
- Support for Fibre Channel services class 2 and 3
- Configuration and boot support in UEFI
- Approximate power usage: 16 W
- RoHS 6 compliant

Figure 5-118 shows the IBM Flex System FC5172 2-port 16Gb FC Adapter.

Figure 5-118 The IBM Flex System FC5172 2-port 16Gb FC Adapter

For more information, see IBM Flex System FC5172 2-port 16Gb FC Adapter, TIPS1043, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips1043.html?Open


5.11.18 IBM Flex System IB6132 2-port FDR InfiniBand Adapter


InfiniBand is a high-speed server-interconnect technology that is ideal for access layer and storage components. It is designed for application and back-end interprocess communication (IPC) workloads, for connectivity between application and back-end layers, and for connectivity from back-end to storage layers. Through the use of host channel adapters (HCAs) and switches, InfiniBand technology is used to connect servers with remote storage and networking devices and other servers. It can also be used inside servers for IPC in parallel clusters.

The IBM Flex System IB6132 2-port FDR InfiniBand Adapter delivers low latency and high bandwidth for performance-driven server and storage clustering applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications can achieve significant performance improvements. These improvements in turn help reduce the completion time and lower the cost per operation. The IB6132 2-port FDR InfiniBand Adapter simplifies network deployment by consolidating clustering, communications, and management I/O, and helps provide enhanced performance in virtualized server environments.

Table 5-147 lists the ordering part number and feature codes.
Table 5-147 IBM Flex System IB6132 2-port FDR InfiniBand Adapter ordering information

Part number   x86 nodes feature (a)   POWER nodes feature   7863-10X feature   Description
90Y3454       A1QZ                    None                  EC2C               IB6132 2-port FDR InfiniBand Adapter

a. For all x86 compute nodes in XCC (x-config) and AAS (e-config), except for x240 7863-10X

The following compute nodes and switches are supported:
- x86 compute nodes: For more information, see 5.11.3, Supported compute nodes on page 373.
- Switches: For more information, see 5.11.4, Supported switches on page 374.

The IB6132 2-port FDR InfiniBand Adapter has the following features and specifications:
- Based on Mellanox ConnectX-3 technology
- Virtual Protocol Interconnect (VPI)
- InfiniBand Architecture Specification v1.2.1 compliant
- Supported InfiniBand speeds (auto-negotiated):
  - 1X/2X/4X SDR (2.5 Gbps per lane)
  - DDR (5 Gbps per lane)
  - QDR (10 Gbps per lane)
  - FDR10 (40 Gbps, 10 Gbps per lane)
  - FDR (56 Gbps, 14 Gbps per lane)

- IEEE Std 802.3 compliant
- PCI Express 3.0 x8 host interface, up to 8 GT/s
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload


- Unified Extensible Firmware Interface (UEFI)
- Wake on LAN (WoL)
- RDMA over Converged Ethernet (RoCE)
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Ethernet encapsulation (EoIB)
- RoHS-6 compliant
- Power consumption: typical 9.01 W, maximum 10.78 W

Figure 5-119 shows the IBM Flex System IB6132 2-port FDR InfiniBand Adapter.

Figure 5-119 IBM Flex System IB6132 2-port FDR InfiniBand Adapter

For more information, see IBM Flex System IB6132 2-port FDR InfiniBand Adapter, TIPS0872, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0872.html?Open

5.11.19 IBM Flex System IB6132 2-port QDR InfiniBand Adapter


The IBM Flex System IB6132 2-port QDR InfiniBand Adapter for Power Systems provides a high-performing and flexible interconnect solution for servers that are used in Enterprise Data Centers, High-Performance Computing, and Embedded environments. The adapter is based on Mellanox ConnectX-2 EN technology, which improves network performance by increasing available bandwidth to the processor, especially in virtualized server environments. Table 5-148 lists the ordering part number and feature code.
Table 5-148 IBM Flex System IB6132 2-port QDR InfiniBand Adapter ordering information

Part number   x86 nodes feature   POWER nodes feature   7863-10X feature   Description
None          None                1761                  None               IB6132 2-port QDR InfiniBand Adapter


The following compute nodes and switches are supported:
- Power Systems compute nodes: For more information, see 5.11.3, Supported compute nodes on page 373.
- Switches: For more information, see 5.11.4, Supported switches on page 374.

The IBM Flex System IB6132 2-port QDR InfiniBand Adapter has the following features and specifications:
- ConnectX-2 based adapter
- Virtual Protocol Interconnect (VPI)
- InfiniBand Architecture Specification v1.2.1 compliant
- IEEE Std 802.3 compliant
- PCI Express 2.0 (1.1 compatible) through an x8 edge connector, up to 5 GT/s
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- UEFI
- Wake on LAN (WoL)
- RDMA over Converged Ethernet (RoCE)
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- RoHS-6 compliant

Figure 5-120 shows the IBM Flex System IB6132 2-port QDR InfiniBand Adapter.

Figure 5-120 IBM Flex System IB6132 2-port QDR InfiniBand Adapter

For more information, see IBM Flex System IB6132 2-port QDR InfiniBand Adapter, TIPS0890, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0890.html?Open


5.11.20 IBM Flex System IB6132D 2-port FDR InfiniBand Adapter


Important: The IBM Flex System IB6132D 2-port FDR InfiniBand Adapter is only supported in the x222 Compute Node.

The IBM Flex System IB6132D 2-port FDR InfiniBand Adapter delivers low latency and high bandwidth for performance-driven server and storage clustering applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications can achieve significant performance improvements. These improvements in turn help reduce the completion time and lower the cost per operation. The IB6132D 2-port FDR InfiniBand Adapter simplifies network deployment by consolidating clustering, communications, and management I/O, and helps provide enhanced performance in virtualized server environments.

The IB6132D 2-port FDR InfiniBand Adapter is a mid-mezzanine form factor adapter that is supported only in the x222 Compute Node. The adapter has two ASICs that operate independently, one for the upper node and one for the lower node of the x222. Each ASIC provides one FDR port. The port for the lower node is connected through the chassis midplane to switch bay 3, and the port for the upper node is connected through the chassis midplane to switch bay 4.

Table 5-149 lists the ordering part number and feature code.
Table 5-149 IBM Flex System IB6132D 2-port FDR InfiniBand Adapter ordering information

Part number   Feature code   Description
90Y3486       A365           IB6132D 2-port FDR InfiniBand Adapter

The following compute nodes and switches are supported:
- x86 compute nodes: For more information, see 5.11.3, Supported compute nodes on page 373.
- Switches: For more information, see 5.11.4, Supported switches on page 374.

Important: The attached switch might require a license to run at FDR speeds.

The IB6132D 2-port FDR InfiniBand Adapter has the following features and specifications:
- Based on Mellanox ConnectX-3 technology
- Two independent Mellanox ASICs, one port per ASIC
- Two-port card, with one port routed to each of the independent servers in the x222 Compute Node
- Each port operates at up to 56 Gbps
- InfiniBand Architecture Specification v1.2.1 compliant
- Supported InfiniBand speeds (auto-negotiated):
  - 1X/2X/4X SDR (2.5 Gbps per lane)
  - DDR (5 Gbps per lane)
  - QDR (10 Gbps per lane)
  - FDR10 (40 Gbps, 10 Gbps per lane)
  - FDR (56 Gbps, 14 Gbps per lane)

- PCI Express 3.0 x8 host interface, up to 8 GT/s
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- Unified Extensible Firmware Interface (UEFI)
- Wake on LAN (WoL)
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Ethernet encapsulation (EoIB)
- RoHS-6 compliant

Figure 5-121 shows the IBM Flex System IB6132D 2-port FDR InfiniBand Adapter.

Figure 5-121 IBM Flex System IB6132D 2-port FDR InfiniBand Adapter

For more information, see IBM Flex System IB6132D 2-port FDR InfiniBand Adapter, TIPS1056, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips1056.html?Open


Chapter 6.

Network integration
This chapter covers basic and advanced networking techniques that can be deployed with the IBM Flex System platform in a data center to meet availability, performance, scalability, and systems management goals. This chapter includes the following topics:
- 6.1, Choosing the Ethernet switch I/O module on page 406
- 6.2, Virtual local area networks on page 408
- 6.3, Scalability and performance on page 409
- 6.4, High Availability on page 411
- 6.5, FCoE capabilities on page 422
- 6.6, Virtual Fabric vNIC solution capabilities on page 423
- 6.7, Unified Fabric Port feature on page 427
- 6.8, Easy Connect concept on page 429
- 6.9, Stacking feature on page 430
- 6.10, Openflow support on page 432
- 6.11, 802.1Qbg Edge Virtual Bridge support on page 433
- 6.12, SPAR feature on page 433
- 6.13, Management on page 434
- 6.14, Summary and conclusions on page 437

Copyright IBM Corp. 2012, 2013. All rights reserved.


6.1 Choosing the Ethernet switch I/O module


Selecting the Ethernet I/O module that is best for an environment is a process unique to each client. The following factors should be considered when you are deciding which Ethernet module is right for a specific environment.

The first decision regards speed requirements: do you need only 1 Gb connectivity to the servers, or is 10 Gb to the servers a requirement? Consider the following factors:
- If there is no immediate need for 10 Gb to the servers, there are no plans to upgrade to 10 Gb in the foreseeable future, and you have no need for any of the advanced features that are offered in the 10 Gb products, the EN2092 1Gb Ethernet Switch is a possible solution.
- If you need a solution that provides 10 Gb to the server, is transparent to the network, has only a single link per compute node per I/O module, and requires direct connections from the compute node to the external ToR switch, the EN4091 10Gb Ethernet Pass-thru is a viable option.
- If you need 10 Gb today or know that you will need 10 Gb in the near future, need more than one 10 Gb link from each switch bay to each compute node, or need any of the features that are associated with 10 Gb server interfaces, such as FCoE and switch-based vNIC support, you have a choice of the EN4093R 10Gb Scalable Switch, the CN4093 10Gb Converged Scalable Switch, or the SI4093 System Interconnect Module.

The following considerations are important when you are selecting between the EN4093R 10Gb Scalable Switch, the CN4093 10Gb Converged Scalable Switch, and the SI4093 System Interconnect Module:
- If you require Fibre Channel Forwarder (FCF) services within the Enterprise Chassis, or native Fibre Channel uplinks from the 10 Gb switch, the CN4093 10Gb Converged Scalable Switch is the correct choice.
- If you do not require FCF services or native Fibre Channel ports on the 10 Gb switch, but need the maximum number of 10 Gb uplinks without purchasing an extra license, support for FCoE transit capabilities, and the most feature-rich solution, the EN4093R 10Gb Scalable Switch is a good choice.
- If you require transparent operation out of the box (minimal to no configuration on the switch), and do not need L3 support or other advanced features (and know that there will be no need for more advanced functions), the SI4093 System Interconnect Module is a potential choice.


There are more criteria involved because each environment has its own unique attributes. However, the criteria that are reviewed in this section are a good starting point in the decision-making process. Some of the Ethernet I/O module selection criteria are summarized in Table 6-1.
Table 6-1 Switch module selection criteria

Columns: EN2092 = EN2092 1Gb Ethernet Switch; SI4093 = SI4093 System Interconnect Module; EN4093R = EN4093R 10Gb Scalable Switch; CN4093 = CN4093 10Gb Converged Scalable Switch.

Requirement                                                   EN2092   SI4093   EN4093R   CN4093
Gigabit Ethernet to nodes                                     Yes      Yes      Yes       Yes
10 Gb Ethernet to nodes                                       No       Yes      Yes       Yes
10 Gb Ethernet uplinks                                        Yes      Yes      Yes       Yes
40 Gb Ethernet uplinks                                        No       Yes      Yes       Yes
Basic Layer 2 switching                                       Yes      Yes      Yes       Yes
Advanced Layer 2 switching: IEEE features (STP, QoS)          Yes      No       Yes       Yes
Layer 3 IPv4 switching (forwarding, routing, ACL filtering)   Yes      No       Yes       Yes
Layer 3 IPv6 switching (forwarding, routing, ACL filtering)   Yes      No       Yes       Yes
10 Gb Ethernet CEE                                            No       Yes      Yes       Yes
FCoE FIP Snooping Bridge support                              No       Yes      Yes       Yes
FCF support                                                   No       No       No        Yes
Native FC port support                                        No       No       No        Yes
Switch stacking                                               No       No (a)   Yes       Yes
802.1Qbg Edge Virtual Bridge support                          No       No (a)   Yes       Yes
vLAG support                                                  No       No       Yes       Yes
UFP support                                                   No       No (a)   Yes       Yes
Virtual Fabric mode vNIC support                              No       No       Yes       Yes
Switch independent mode vNIC support                          No       Yes      Yes       Yes
SPAR support                                                  No (a)   Yes      Yes       Yes
Openflow support                                              No       No       Yes       No

a. Planned support in a later release
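The first-pass selection logic that is described in this section can be sketched as a small helper. This is an illustrative simplification only (the function name and parameters are invented here); it captures the top-level decision points, not every criterion in Table 6-1:

```python
def choose_switch(needs_10g, needs_fcf_or_native_fc=False,
                  needs_transparent_minimal_config=False):
    """Simplified first-pass mapping of requirements to an Ethernet I/O module.

    The rules mirror the decision points in section 6.1: 1 Gb-only needs point
    to the EN2092; FCF or native FC points to the CN4093; a transparent,
    minimal-configuration module is the SI4093; otherwise the EN4093R is the
    most feature-rich 10 Gb choice.
    """
    if not needs_10g:
        return "EN2092 1Gb Ethernet Switch"
    if needs_fcf_or_native_fc:
        return "CN4093 10Gb Converged Scalable Switch"
    if needs_transparent_minimal_config:
        return "SI4093 System Interconnect Module"
    return "EN4093R 10Gb Scalable Switch"

print(choose_switch(needs_10g=False))
# EN2092 1Gb Ethernet Switch
print(choose_switch(needs_10g=True, needs_fcf_or_native_fc=True))
# CN4093 10Gb Converged Scalable Switch
```

A real selection also weighs uplink counts, licensing, and the advanced features in Table 6-1, so treat this only as a starting point.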

Chapter 6. Network integration


6.2 Virtual local area networks


Virtual local area networks (VLANs) are commonly used in a Layer 2 network to split groups of networked systems into manageable broadcast domains, create logical segmentation of workgroups, and enforce security policies among logical segments. Primary VLAN considerations include the number and types of supported VLANs and VLAN tagging protocols.

The EN4093R 10Gb Scalable Switch, CN4093 10Gb Converged Scalable Switch, and EN2092 1Gb Ethernet Switch all have the following VLAN-related features (unless otherwise noted):

Important: Under certain configurations (for example, Easy Connect mode), the EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch are transparent to VLAN tags and act as a VLAN tag pass-through, so the limitations that are described next do not apply in those modes.

- Support for 4094 active VLANs, out of the range 1 - 4094. Some VLANs might be reserved when certain features (for example, stacking or UFP) are enabled.
- IEEE 802.1Q for VLAN tagging on links (also called trunking by some vendors)
- Support for a tagged or untagged native VLAN
- Port-based VLANs
- Protocol-based VLANs
- Spanning tree per VLAN (Per-VLAN Rapid Spanning Tree). This is the default spanning tree mode for the EN2092 1Gb Ethernet Switch, EN4093R 10Gb Scalable Switch, and CN4093 10Gb Converged Scalable Switch. The SI4093 System Interconnect Module does not support spanning tree. Limited to 127 instances of spanning tree; VLANs that are added after 127 instances are operational are placed into spanning tree instance 1.
- 802.1x Guest VLANs
- VLAN Maps for ACLs
- VLAN-based port mirroring

The SI4093 System Interconnect Module by default is VLAN transparent and passes packets through the switch whether they are tagged or untagged, so the number of VLANs that are supported is limited only by what the compute node operating system and the upstream network support.
When it is changed from its default mode to SPAR local domain mode, the SI4093 supports up to 250 VLANs, but it does not support spanning tree because SPAR prevents a user from creating a loop.

Specific to 802.1Q VLAN tagging, this feature is critical to maintaining VLAN separation when packets in multiple VLANs must traverse a common link between devices. Without a tagging protocol such as 802.1Q, maintaining VLAN separation between devices can be accomplished only through a separate link for each VLAN, which is a less than optimal solution.

Important: In rare cases, there are some older, non-standards-based tagging protocols that are used by vendors. These protocols are not compatible with 802.1Q or the Enterprise Chassis switching products.


The need for 802.1Q VLAN tagging is not limited to networking devices. It is also supported and frequently used on end nodes, and it is implemented differently by various operating systems. For example, for Windows Server 2008 and earlier, a vendor driver was needed to subdivide the physical interface into logical NICs, with each logical NIC set for a specific VLAN. Typically, this setup is part of the teaming software from the NIC vendor. Windows Server 2012 has the tagging option natively available. For Linux, tagging is done by creating subinterfaces of a physical or logical NIC, such as eth0.10 for VLAN 10 on physical interface eth0. For VMware ESX, tagging can be done within the vSwitch through port group tag settings (known as Virtual Switch Tagging). Tagging also can be done in the OS within the guest VM itself (called Virtual Guest Tagging).

From an OS perspective, having several logical interfaces can be useful when an application requires more than two separate interfaces and you do not want to dedicate an entire physical interface. It might also help to implement strict security policies for separating network traffic that uses VLANs and having access to server resources from different VLANs, without adding more physical network adapters. Review the documentation of the application to ensure that the application that is deployed on the system supports the use of logical interfaces that are often associated with VLAN tagging.

For more information about Ethernet switch modules that are available with the Enterprise Chassis, see 4.11, I/O modules on page 112.
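To make the 802.1Q mechanics concrete, the following minimal Python sketch builds and parses the 4-byte VLAN tag (the 0x8100 TPID followed by the priority/VID tag control field) that 802.1Q inserts after the source MAC address of an Ethernet frame. The helper names are invented here for illustration:

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that identifies an 802.1Q-tagged frame

def tag_frame(dst, src, vlan_id, priority, ethertype, payload):
    """Insert an 802.1Q tag (TPID + PCP/DEI/VID) after the source MAC."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VID must be in the range 1 - 4094")
    tci = (priority << 13) | vlan_id  # 3-bit PCP, 1-bit DEI (left 0), 12-bit VID
    return (dst + src
            + struct.pack("!HH", TPID_8021Q, tci)
            + struct.pack("!H", ethertype)
            + payload)

def parse_tag(frame):
    """Return (vlan_id, priority, inner_ethertype) for a tagged frame, else None."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    if tpid != TPID_8021Q:
        return None  # untagged frame
    return tci & 0x0FFF, tci >> 13, struct.unpack("!H", frame[16:18])[0]

dst, src = bytes(6), bytes(6)  # placeholder MAC addresses
frame = tag_frame(dst, src, vlan_id=10, priority=5, ethertype=0x0800,
                  payload=b"\x00" * 46)
print(parse_tag(frame))  # (10, 5, 2048)
```

This is how a single link carries traffic for many VLANs: each frame carries its VID, and the receiving switch or OS subinterface (such as eth0.10) demultiplexes on it.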

6.3 Scalability and performance


Each Enterprise Chassis has four I/O bays. Depending on the Ethernet switch module that is installed in the I/O bay, the license that is installed on that switch, and the adapters that are installed in the nodes, each bay can support many connections, both down toward the nodes and up toward the external network.

The I/O switch modules that are available for the Enterprise Chassis are a scalable class of switch. This means that more banks of ports, which are enabled by using Feature on Demand (FoD) licensing, can be enabled as needed, thus scaling the switch to meet a particular requirement. The architecture allows up to potentially three FoD licenses in each I/O module, but current products are limited to a maximum of two FoD expansions. The number and type of ports that are available for use in these FoD licenses depend on the following factors:
- The I/O module that is installed
- The FoD upgrades that are activated on the I/O module
- The I/O adapters that are installed in the nodes

The Ethernet I/O switch modules include an enabled base set of ports and require upgrades to enable the extra ports. Not all Ethernet I/O modules support the same number or types of ports. A cross-reference of the number of FoD expansion licenses that are supported on each of the available I/O modules is shown in Table 6-2 on page 410. The EN4091 10Gb Ethernet Pass-thru is a fixed-function device and, as such, has no concept of port expansion.


Table 6-2 Module names and the number of FoD expansions allowed

Module name                             Number of FoD licenses supported
EN2092 1Gb Ethernet Switch              2
SI4093 System Interconnect Module       2
EN4093R 10Gb Scalable Switch            2
CN4093 10Gb Converged Scalable Switch   2
EN4091 10Gb Ethernet Pass-thru          0

As shipped, all I/O modules support a base set of ports, which includes 14 internal ports (one to each of the compute node bays up front) and some number of uplinks (for more information, see 4.11, I/O modules on page 112). As noted, upgrades to the scalable switches to enable other sets of ports are added as part of the FoD licensing process. Because of these upgrades, it is possible to increase ports without hardware changes. As each FoD is enabled, the ports that are controlled by the upgrade are activated. If the compute node has a suitable I/O adapter, the server-facing ports are available for use by the node.

In general, the act of enabling a bank of ports by applying the FoD merely enables more ports for the switch to use. There is no logical or physical separation of these new ports from a networking perspective, only from a licensing perspective. One exception to this rule is the SI4093 System Interconnect Module. When FoDs are applied to the SI4093 System Interconnect Module, they are applied by using the Switch Partitioning (SPAR) feature, which automatically puts each new set of ports that is added by the FoD process into its own grouping with no interaction with ports in other partitions. This can be adjusted after the FoD is applied to allow ports to be part of different or the same partitions, if wanted.

As an example of how this licensing works, the EN4093R 10Gb Scalable Switch, by default, includes 14 internal ports and 10 uplink SFP+ ports. More ports can be enabled with an FoD upgrade, which provides a second or third set of 14 internal ports and some number of 10 Gb and 40 Gb uplinks, as shown in Figure 6-1 on page 411.

410

IBM PureFlex System and IBM Flex System Products and Technology

The figure summarizes the port enablement for this switch:
- Base switch: enables fourteen internal 10 Gb ports (one to each server) and ten external 10 Gb ports. Supports the 2-port 10 Gb LOM and Virtual Fabric capability.
- First upgrade via FoD (Upgrade 1): enables a second set of fourteen internal 10 Gb ports (one to each server) and two external 40 Gb ports. Each 40 Gb port can be used as four 10 Gb ports. Supports the 4-port Virtual Fabric adapter.
- Second upgrade via FoD (Upgrade 2): enables a third set of fourteen internal 10 Gb ports (one to each server) and four external 10 Gb ports.

All uplinks form a common pool of uplink ports. Use of the enabled internal-facing ports requires an appropriate compute node NIC/CNA.
Figure 6-1 Port upgrade layout for EN4093R 10Gb Scalable Switch

The ability to add ports and bandwidth as needed is a critical element of a scalable platform.
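The scaling model above can be illustrated with a short sketch. The port counts come from the EN4093R upgrade description in this section; the data structure and function are invented here purely for illustration:

```python
# Port counts per enablement step for the EN4093R 10Gb Scalable Switch,
# as described in the text: base plus up to two FoD upgrades.
UPGRADES = [
    {"name": "Base",      "internal_10g": 14, "uplink_10g": 10, "uplink_40g": 0},
    {"name": "Upgrade 1", "internal_10g": 14, "uplink_10g": 0,  "uplink_40g": 2},
    {"name": "Upgrade 2", "internal_10g": 14, "uplink_10g": 4,  "uplink_40g": 0},
]

def enabled_ports(licenses):
    """Sum the ports enabled by the base switch plus `licenses` FoD upgrades (0-2)."""
    active = UPGRADES[: 1 + licenses]
    return {
        "internal_10g": sum(u["internal_10g"] for u in active),
        "uplink_10g": sum(u["uplink_10g"] for u in active),
        "uplink_40g": sum(u["uplink_40g"] for u in active),
    }

print(enabled_ports(0))  # {'internal_10g': 14, 'uplink_10g': 10, 'uplink_40g': 0}
print(enabled_ports(2))  # {'internal_10g': 42, 'uplink_10g': 14, 'uplink_40g': 2}
```

With both upgrades applied, the switch reaches 42 internal 10 Gb ports, which is why a fully licensed module can serve multi-port adapters in every node bay without any hardware change.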

6.4 High Availability


Clients might require continuous access to their network-based resources and applications. Providing High Availability (HA) for client network-attached resources can be a complex task that involves fitting multiple pieces together on a hardware and software level. One key to system HA is to provide HA access to the network infrastructure.

Network infrastructure availability can be achieved by using certain techniques and technologies. Most techniques and technologies are widely used standards, but some are specific to the Enterprise Chassis. In this section, we review the most common technologies that can be implemented in an Enterprise Chassis environment to provide HA to the network infrastructure.

A typical LAN infrastructure consists of server network interface controllers (NICs), client NICs, and network devices, such as Ethernet switches and the cables that connect them. Specific to the Enterprise Chassis, the potential failure areas for node network access include port failures (on switches and the node adapters), the midplane, and the I/O modules.

The first step in achieving HA is to provide physical redundancy of the components that are connected to the infrastructure. Providing this redundancy typically means that the following measures are taken:
- Deploy node NICs in pairs
- Deploy switch modules in pairs
- Connect the pair of node NICs to separate I/O modules in the Enterprise Chassis
- Provide connections from each I/O module to a redundant upstream infrastructure


Shown in Figure 6-2 is an example of a node with a dual port adapter in adapter slot 1 and a quad port adapter in adapter slot 2. The associated lanes the adapters take to the respective I/O modules in the rear also are shown. To ensure redundancy, when NICs are selected for a team, use NICs that connect to different physical I/O modules. For example, if you were to select the first two NICs shown coming off the top of the quad port adapter, you realize twice the bandwidth and compute node redundancy. However, the I/O module in I/O bay 3 can become a single point of failure, making this configuration a poor design for HA.

Figure 6-2 Active lanes shown in red based on adapter installed and FoD enabled

After the physical redundancy requirements are met, it is necessary to consider the logical elements that use this physical redundancy. The following logical features aid in HA:
- NIC teaming/bonding on the compute node
- Layer 2 (L2) failover (also known as Trunk Failover) on the I/O modules
- Rapid Spanning Tree Protocol for looped environments
- Virtual Link Aggregation on upstream devices that are connected to the I/O modules
- Virtual Router Redundancy Protocol for a redundant upstream default gateway
- Routing protocols (such as RIP or OSPF) on the I/O modules, if L2 adjacency is not a concern

We describe several of these features next.


6.4.1 Highly available topologies


The Enterprise Chassis can be connected to the upstream infrastructure in various combinations. Some examples of potential L2 designs are included here. Important: There are many design options that are available to the network architect. This section shows a small subset based on some useful L2 technologies. With the large feature set and high port densities, the I/O modules of the Enterprise Chassis can also be used to implement much more advanced designs, including L3 routing within the enclosure. However, L3 within the chassis is beyond the scope of this document and is thus not covered here. One of the traditional designs for chassis server-based deployments is the looped and blocking design, as shown in Figure 6-3.

Figure 6-3 Topology 1: Typical looped and blocking topology

Topology 1 in Figure 6-3 features each I/O module in the Enterprise Chassis with two direct aggregations to a pair of top-of-rack (ToR) switches. The specific number and speed of the external ports used for link aggregation in this and the other designs shown in this section depend on the redundancy and bandwidth requirements of the client. This topology is somewhat complicated and is considered dated with regard to modern network designs, but it is a proven solution. Although it offers complete network-attached redundancy out of the chassis, the potential exists to lose half of the available bandwidth to Spanning Tree blocking because of loops in the design, so it is recommended only if the customer specifically requires it.

Important: Because of possible issues with looped designs in general, a good rule of L2 design is to build loop-free if you can do so while still offering nodes HA access to the upstream infrastructure.

Chapter 6. Network integration

413

Topology 2 in Figure 6-4 features each switch module in the Enterprise Chassis directly connected to its own ToR switch through aggregated links. This topology is suitable when compute nodes use some form of NIC teaming that is not aggregation-related. To ensure that the nodes correctly detect uplink failures from the I/O modules, trunk failover (as described in 6.4.5, Trunk failover on page 420) must be enabled and configured on the I/O modules. With failover, if the uplinks go down, the ports to the nodes are shut down, and NIC teaming or bonding fails the traffic over to the other NIC in the team. The combination of this architecture, NIC teaming on the node, and trunk failover on the I/O modules provides a highly available environment with no loops and thus no bandwidth wasted on spanning-tree blocked links.

Figure 6-4 Topology 2: Non-looped HA design

Topology 3, as shown in Figure 6-5, starts to bring the best of both topology 1 and 2 together in a robust design, which is suitable for use with nodes that run teamed or non-teamed NICs.
Figure 6-5 Topology 3: Non-looped design using multi-chassis aggregation

Offering a potential improvement in HA, this design requires that the ToR switches provide a form of multi-chassis aggregation (see Virtual link aggregations on page 418), which allows an aggregation to be split between two physical switches. The design requires the ToR switches to appear as a single logical switch to each I/O module in the Enterprise Chassis. At the time of this writing, this functionality is vendor-specific; however, the products of most major vendors, including IBM ToR products, support this type of function.


The I/O modules do not need any special aggregation feature to make full use of this design. Instead, normal static or LACP aggregation support is all that is needed because the I/O modules see this as a simple point-to-point aggregation to a single upstream device. To further enhance the design that is shown in Figure 6-5 on page 414, enable the uplink failover feature (see 6.4.5, Trunk failover on page 420) on the Enterprise Chassis I/O modules, which ensures the most robust design possible.

One potential drawback to these first three designs arises when a node in the Enterprise Chassis is sending traffic into one I/O module, but the receiving device in the same Enterprise Chassis happens to be hashing to the other I/O module (for example, two VMs, one on each compute node, where one VM uses the NIC toward I/O bay 1 and the other uses the NIC toward I/O bay 2). With the first three designs, this communication must be carried up to the ToR and back down, which uses extra bandwidth on the uplinks, increases latency, and sends traffic outside the Enterprise Chassis when there is no need.

Topology 4, as shown in Figure 6-6, takes the design to its natural conclusion by having multi-chassis aggregation on both sides, in what is ultimately the most robust and scalable design recommended.

Figure 6-6 Topology 4: Non-looped design by using multi-chassis aggregation on both sides

Topology 4 is considered optimal, but not all I/O module configuration options (for example, Virtual Fabric vNIC mode) support the topology 4 design. In those cases, topology 3 or 2 is the recommended design. The designs reviewed in this section all assume that the L2/L3 boundary for the network is at or above the ToR switches in the diagrams. We touched on only a few of the many possible ways to interconnect the Enterprise Chassis to the network infrastructure. Ultimately, each environment must be analyzed to understand all of its requirements to ensure that the best design is selected and deployed.


6.4.2 Spanning Tree


Spanning Tree is defined in the IEEE 802.1D specification. Its primary goal is to ensure a loop-free design in an L2 network. Loops cannot be allowed to exist in an L2 network because there is no mechanism in an L2 frame to aid in the detection and prevention of looping packets, such as a time-to-live field or a hop count (these are part of the L3 header portion of some packets, and are not seen by L2 switching devices). Packets might loop indefinitely and consume bandwidth that could be used for other purposes. Ultimately, an L2-looped network eventually fails as broadcast and multicast packets rapidly multiply through the loop.

The entire process used by Spanning Tree to control loops is beyond the scope of this document. In its simplest terms, Spanning Tree controls loops by exchanging Bridge Protocol Data Units (BPDUs) and building a tree that blocks redundant paths until they might be needed; for example, if the path currently selected for forwarding went down.

The Spanning Tree specification has evolved considerably since its original release. Other standards, such as 802.1w (Rapid Spanning Tree) and 802.1s (Multi-instance Spanning Tree), are now included in the current Spanning Tree specification, 802.1D-2004. As some features were added, other features, such as the original non-rapid Spanning Tree, were removed from the specification.

The EN2092 1Gb Ethernet Switch, EN4093R 10Gb Scalable Switch, and CN4093 10Gb Converged Scalable Switch all support the 802.1D specification. They also support a Cisco proprietary version of Spanning Tree called Per VLAN Rapid Spanning Tree (PVRST). The following Spanning Tree modes are currently supported on these modules:
- Rapid Spanning Tree (RSTP), also known as mono-instance Spanning Tree
- Multi-instance Spanning Tree (MSTP)
- Per VLAN Rapid Spanning Tree (PVRST)
- Disabled (turns off Spanning Tree on the switch)

Important: The SI4093 System Interconnect Module does not support Spanning Tree. It prevents loops by restricting uplinks out of a switch partition to a single path, which makes it impossible to create a loop.

The default Spanning Tree mode for the Enterprise Chassis I/O modules is PVRST. This mode allows seamless integration into the largest and most commonly deployed infrastructures in use today. It also allows better potential load balancing of redundant links than RSTP (because blocking and forwarding is determined per VLAN rather than per physical port), without some of the configuration complexities involved in implementing an MSTP environment. With PVRST, as VLANs are created or deleted, an instance of Spanning Tree is automatically created or deleted for each VLAN. Other supported forms of Spanning Tree can be enabled and configured if required, which allows the Enterprise Chassis to be readily deployed into the most varied environments.
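The per-VLAN load-balancing benefit can be illustrated with a minimal sketch of root-bridge election. This is a simplification (the bridge priorities and MAC addresses are invented for illustration; real PVRST derives this from BPDU exchanges), but it shows why staggering priorities per VLAN lets each redundant link forward for some traffic instead of one link blocking everything.

```python
def elect_root(bridge_ids):
    """The bridge with the numerically lowest (priority, MAC) ID wins."""
    return min(bridge_ids)

# Two upstream switches with priorities staggered per VLAN so that each
# VLAN roots at a different switch (a common PVRST load-sharing setup).
bridges = {
    10: [(4096, "00:00:00:aa:aa:01"), (8192, "00:00:00:aa:aa:02")],
    20: [(8192, "00:00:00:aa:aa:01"), (4096, "00:00:00:aa:aa:02")],
}
roots = {vlan: elect_root(ids) for vlan, ids in bridges.items()}
# VLAN 10 and VLAN 20 elect different roots, so each redundant link
# forwards for one VLAN rather than one link blocking for all traffic.
```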


6.4.3 Link aggregation


Sometimes referred to as trunking, port channel, or Etherchannel, link aggregation involves taking multiple physical links and binding them into a single common link for use between two devices. The primary purposes of aggregation are to improve HA and increase bandwidth.

Bundling the links


Although there are several different kinds of aggregation, the two most common, and the ones supported by the Enterprise Chassis I/O modules, are static aggregation and Link Aggregation Control Protocol (LACP).

Important: In rare cases, some older non-standards-based aggregation protocols, such as Port Aggregation Protocol (PAgP), are still in use by some vendors. These protocols are not compatible with static or LACP aggregations.

Static aggregation does not use any protocol to create the aggregation. Instead, it combines the ports based on the aggregation configuration applied to the ports and assumes that the other side of the connection does the same.

Important: In some cases, static aggregation is referred to as static LACP. This term is contradictory: an aggregation cannot be both static and governed by a control protocol.

LACP is an IEEE standard that was originally defined in 802.3ad. The standard was later included in the mainline 802.3 standard and then pulled out into the current standard, 802.1AX-2008. LACP is a dynamic way of determining whether both sides of the link agree that they should be aggregating.

The decision to use static or LACP is usually a question of what a client already uses in their network. If there is no preference, the following considerations can aid the decision-making process.

Static aggregation is the quickest and easiest way to build an aggregated link. This method is also the most stable in high-bandwidth usage environments, particularly if pause frames are exchanged. The use of static aggregation can be advantageous in mixed-vendor environments because it can help prevent possible interoperability issues. Because settings in the LACP standard do not have a recommended default, vendors are allowed to use different defaults, which can lead to unexpected interoperation. For example, the LACP Data Unit (LACPDU) timers can be set to be exchanged every 1 second or every 30 seconds. If one side is set to 1 second and the other side is set to 30 seconds, the LACP aggregation can be unstable. This is not an issue with static aggregations.

Important: Most vendors, including IBM, default to the 30-second exchange of LACPDUs. If you encounter a vendor that defaults to 1-second timers (for example, Juniper), we advise changing that vendor's setting to 30-second timers rather than setting both sides to 1 second. The 30-second setting tends to produce a more robust aggregation than 1-second timers.


One of the downsides to static aggregation is that it lacks a mechanism to detect whether the other side is correctly configured for aggregation. So, if one side is static and the other side is not configured, is configured incorrectly, or is not connected to the correct ports, it is possible to cause a network outage by bringing up the links. If you are sure that your links are connected to the correct ports and that both sides are configured correctly for static aggregation, static aggregation is a solid choice.

LACP has the inherent safety that a protocol brings to this process. At link-up, LACPDUs are exchanged and both sides must agree that they are using LACP before the links are bundled. So, in the case of misconfiguration or incorrect connections, LACP helps protect the network from an unplanned outage.

IBM has also enhanced LACP to support a feature known as suspend-port. By definition of the IEEE standard, if ports cannot bundle because the other side does not understand LACP (for example, is not configured for LACP), the ports should be treated as individual ports and remain operational. This can lead to potential issues under certain circumstances (such as when Spanning Tree is disabled). To prevent accidental loops, the suspend-port feature can hold the ports down until proper LACPDUs are exchanged and the links can be bundled. This feature also protects against certain miscabling or misconfiguration that might split the aggregation into multiple smaller aggregations. For more information about this feature, see the Application Guide provided for the product.

The disadvantages of LACP are that it takes a small amount of time to negotiate the aggregation and form an aggregating link (usually under a second), and it can become unstable and unexpectedly fail in environments with heavy and continued pause frame activity.
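The difference between the standard fallback behavior and the suspend-port enhancement can be sketched as follows. This is a deliberate simplification (real LACP runs a much richer state machine per port); the single boolean stands in for "valid LACPDUs have been exchanged with the partner."

```python
def port_state(lacpdu_received, suspend_port):
    """Decide what happens to a port whose partner may not speak LACP.

    Standard 802.1AX behavior: if no LACPDUs arrive, the port falls
    back to an individual, operational link (a loop risk if Spanning
    Tree is disabled). With the suspend-port enhancement, the port is
    held down until proper LACPDUs arrive, preventing accidental loops.
    """
    if lacpdu_received:
        return "bundled"
    return "suspended" if suspend_port else "individual"

print(port_state(False, suspend_port=False))  # individual (standard)
print(port_state(False, suspend_port=True))   # suspended (enhancement)
```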
Another factor to consider about aggregation is whether it is better to aggregate multiple low-speed links into a high-speed aggregation, or use a single high-speed link with a similar speed to all of the links in the aggregation. If your primary goal is HA, aggregations can offer a no-single-point-of-failure situation that a single high-speed link cannot offer. If maximum performance and lowest possible latency are the primary goals, often a single high-speed link makes more sense. Another factor is cost. Often, one high-speed link can cost more to implement than a link that consists of an aggregation of multiple slower links.

Virtual link aggregations


Aside from the standard point-to-point aggregations that are covered in this section, there is a technology that provides multi-chassis aggregation, sometimes called distributed aggregation or virtual link aggregation. Under the latest IEEE specifications, an aggregation is still defined as a bundle between only two devices. By this definition, you cannot create an aggregation on one device and have the links of that aggregation connect to more than a single device on the other side of the aggregation. The use of only two devices limits the ability to offer certain robust designs. Although the standards bodies are working on a solution that provides split aggregations across devices, most vendors devised their own version of multi-chassis aggregation. For example, Cisco has virtual Port Channel (vPC) on Nexus products, and Virtual Switch System (VSS) on the 6500 line. IBM offers virtual Link Aggregation (vLAG) on many of our ToR solutions, and on the EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch.


The primary goals of virtual link aggregation are to overcome the limits imposed by current standards-based aggregation and to provide a distributed aggregation across a pair of switches instead of a single switch. The decisions about whether to aggregate, and which method of aggregation is most suitable to a specific environment, are not always straightforward. But if the decision is made to aggregate, the I/O modules for the Enterprise Chassis offer the features necessary to integrate into the aggregated infrastructure.

6.4.4 NIC teaming


NIC teaming, also known as bonding, is a solution that is used on servers to logically bond two or more NICs to form one or more logical interfaces for purposes of HA, increased performance, or both. While teaming or bonding is not a switch-based technology, it is a critical component of a highly available environment, and is described here for reference purposes.

There are many forms of NIC teaming, and the types available for a server are tied to the OS that is installed on the server.

For Microsoft Windows, the teaming software traditionally was provided by the NIC vendor and was installed as an add-on to the OS. This software often included the elements necessary to enable VLAN tagging on the logical NICs created by the teaming software. These logical NICs are seen by the OS as physical NICs and are treated as such when they are configured. Depending on the NIC vendor, the teaming software might offer several different types of failover, including simple Active/Standby, static aggregation, dynamic aggregation (LACP), and vendor-specific load balancing schemes. Starting with Windows Server 2012, NIC teaming (along with VLAN tagging) is native to the OS and no longer requires a third-party application.

For Linux-based systems, the bonding module is used to implement NIC teaming. There are a number of bonding modes available, most commonly mode 1 (Active/Standby) and mode 4 (LACP aggregation). Like Windows teaming, Linux bonding also offers logical interfaces to the OS that can be used as wanted. Unlike Windows teaming, VLAN tagging is controlled by different software in Linux and can create sub-interfaces for VLANs off physical and logical entities; for example, eth0.10 for VLAN 10 on physical eth0, or bond0:20, for VLAN 20 on a logical NIC bond pair 0.
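The transmit-NIC choice that mode 1 (active-backup) bonding makes can be sketched as follows. This is a simplified illustration, not the bonding driver's actual code: prefer the configured primary while its link is up, otherwise fall back to the first healthy slave (the interface names are examples).

```python
def active_slave(link_up, primary=None, order=("eth0", "eth1")):
    """Pick the NIC that carries traffic in an active-backup bond:
    the primary if its link is up, otherwise the first healthy slave."""
    if primary and link_up.get(primary):
        return primary
    for nic in order:
        if link_up.get(nic):
            return nic
    return None  # bond has no usable path

print(active_slave({"eth0": True, "eth1": True}, primary="eth0"))   # eth0
print(active_slave({"eth0": False, "eth1": True}, primary="eth0"))  # eth1
```

Note that this selection keys only on local link state, which is exactly the limitation trunk failover (6.4.5) addresses: an I/O module whose uplinks failed still presents a healthy link to the NIC.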
Another common server OS, VMware ESX, also has built-in teaming in the form of assigning multiple NICs to a common vSwitch (a logical switch that runs within an ESX host, shared by the VMs that require network access). VMware has several teaming modes, with the default option called Route based on the originating virtual port ID. This default mode provides a per-VM load balance of the physical NICs assigned to the vSwitch and does not require any form of aggregation configured on the upstream switches. Another mode, Route based on IP hash, equates to a static aggregation. If configured, it requires the upstream switch connections to be configured for static aggregation.

The teaming method that is best for a specific environment is unique to each situation. However, the following common elements might help in the decision-making process:
- Do not select a mode that requires some form of aggregation (static/LACP) on the switch side unless the NICs in the team go to the same physical switch, or to a logical switch created by a technology such as virtual link aggregation or stacking.
- If a mode that uses some form of aggregation is used, you must also perform the proper configuration on the upstream switches to complete the aggregation on that side.


- The most stable solution is often Active/Standby, but this solution has the disadvantage of losing any bandwidth on a NIC that is in standby mode.
- Most teaming software also offers proprietary forms of load balancing. These modes must be thoroughly tested for suitability to the task in an environment.
- Most teaming software incorporates the concept of auto failback, which means that if a NIC went down and then came back up, traffic automatically fails back to the original NIC. Although this function helps ensure good load balancing, each time a NIC fails some small packet loss might occur, which can lead to unexpected instabilities. When a flapping link occurs, a severe disruption to the network connection of the servers results, as the connection path goes back and forth between NICs. One way to mitigate this situation is to disable the auto failback feature. After a NIC fails, the traffic falls back only if the original link is restored and something happens to the current link that requires a switchover.

It is your responsibility to understand your goals and the tools available to achieve those goals. NIC teaming is one tool for users that need HA connections for their compute nodes.
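The effect of disabling auto failback on a flapping link can be sketched as follows. This is a hypothetical helper for illustration (real teaming software implements this inside the driver): with failback disabled, the team stays put when the original NIC recovers, so a flapping link causes at most one switchover instead of constant churn.

```python
def next_active(current, original, original_up, current_up, auto_failback):
    """Choose the active NIC after a link event.

    With auto failback enabled, traffic returns to the original NIC as
    soon as its link is restored (risking churn on a flapping link).
    With it disabled, the team stays on the current NIC unless that
    NIC itself fails.
    """
    if not current_up:
        return original if original_up else None
    if auto_failback and original_up:
        return original
    return current

# The original NIC recovers while the standby is still healthy:
print(next_active("nic2", "nic1", True, True, auto_failback=True))   # nic1
print(next_active("nic2", "nic1", True, True, auto_failback=False))  # nic2
```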

6.4.5 Trunk failover


Trunk failover, also known as failover or link state tracking, is an important feature for ensuring high availability in chassis-based computing. This feature is used with NIC teaming to ensure that the compute nodes can detect an uplink failure from the I/O modules.

With traditional NIC teaming and bonding, the decision process used by the teaming software to use a NIC is based on whether the link to the NIC is up or down. In a chassis-based environment, the link between the NIC and the internal I/O module rarely goes down unexpectedly. A more common occurrence is that the uplinks from the I/O module go down; for example, an upstream switch crashed or cables were disconnected. In this situation, although the I/O module no longer has a path on which to send packets because of the upstream fault, the actual link to the internal server NIC is still up. The server might continue to send traffic to this unusable I/O module, which leads to a black hole condition.

To prevent this black hole condition and to ensure continued connection to the upstream network, trunk failover can be configured on the I/O modules. Depending on the configuration, trunk failover monitors a set of uplinks. In the event that these uplinks go down, trunk failover takes down the configured server-facing links. This action alerts the server that this path is not available, and NIC teaming can take over and redirect traffic to the other NIC.

Trunk failover offers the following features:
- Besides triggering on link up/down, trunk failover also operates on the Spanning Tree blocking and discarding states. From a data packet perspective, a blocked link is no better than a down link.
- Trunk failover can be configured to fail over if the number of links in a monitored aggregation falls below a certain number.
- Trunk failover can be configured to trigger on VLAN failure.
- When a monitored uplink comes back up, trunk failover automatically brings the downstream links back up, if Spanning Tree is not blocking and other attributes, such as the minimum number of links, are met for the trigger.
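The trigger logic described above can be sketched as a simple check. This is a simplification with invented names (real trunk failover is configured as failover triggers on the I/O module), but it captures the two key rules: a blocked uplink counts as unusable, and server-facing links stay up only while enough monitored uplinks remain usable.

```python
def server_links_up(uplinks, min_links=1):
    """Trunk failover: keep server-facing links up only while enough
    monitored uplinks are usable. A spanning-tree blocked or discarding
    uplink counts as unusable, the same as a physically down link."""
    usable = sum(1 for u in uplinks if u["up"] and u["forwarding"])
    return usable >= min_links

uplinks = [{"up": True, "forwarding": False},   # up but STP-blocked
           {"up": False, "forwarding": False}]  # link down
print(server_links_up(uplinks))  # False: nodes fail over to the other NIC
```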


For trunk failover to work properly, there must be an L2 path between the uplinks of the two I/O modules, external to the chassis. This path is most commonly found at the switches just above the chassis in the design, but it can be higher in the network, as long as an external L2 path exists between the Enterprise Chassis I/O modules.

Important: Other solutions to detect an indirect path failure exist, such as the VMware beacon probing feature. Although these solutions might (or might not) offer advantages, trunk failover is the simplest and least intrusive way to provide this functionality.

The trunk failover feature is shown in Figure 6-7.


The figure illustrates the following sequence:
1. All uplinks out of the I/O module have gone down (could be a link failure or failure of ToR 1, and so forth).
2. Trunk failover takes down the link to NIC 1 to notify the node that the path out of I/O module 1 is gone.
3. NIC teaming on the compute node begins to send all traffic toward the still-functioning NIC.
Figure 6-7 Trunk failover in action

The use of trunk failover with NIC teaming is a critical element in most topologies for nodes that require a highly available path from the Enterprise Chassis. One exception is topology 4, as shown in Figure 6-6 on page 415. With this multi-chassis aggregation design, failover is not needed because all NICs have access to all uplinks on either switch. If all uplinks were to go down, there would be no failover path remaining anyway.

6.4.6 Virtual Router Redundancy Protocol


Rather than having every server make its own routing decisions (which does not scale), most servers implement a default gateway. In this configuration, if the server sends a packet to a device on a subnet other than its own, the server sends the packet to the default gateway and allows the default gateway to determine where to send it. If this default gateway is a stand-alone router and it goes down, the servers that point their default gateway setting at the router cannot route off their own subnet. To prevent this type of single point of failure, most data center routers that offer a default gateway service implement a redundancy protocol so that one router can take over when the other fails.


Although there are nonstandard solutions to this issue, for example, Hot Standby Router Protocol (HSRP), most routers now implement the standards-based Virtual Router Redundancy Protocol (VRRP).

Important: Although they offer similar services, HSRP and VRRP are not compatible with each other.

In its simplest form, two routers that run VRRP share a common IP address (called the virtual IP address). One router traditionally acts as master and the other as backup in case the master goes down. Information is constantly exchanged between the routers to ensure that one of them can provide the services of the default gateway to the devices that point at its virtual IP address. Servers that require a default gateway service point that service at the virtual IP address, and redundancy is provided by the pair of routers that run VRRP.

The EN2092 1Gb Ethernet Switch, EN4093R 10Gb Scalable Switch, and CN4093 10Gb Converged Scalable Switch offer support for VRRP directly within the Enterprise Chassis, but most common data center designs place this function in the routing devices above the chassis (or even higher). The design depends on how important it is to have a common L2 network between nodes in different chassis. If needed, this function can be moved within the Enterprise Chassis as networking requirements dictate.
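The master election at the heart of VRRP can be sketched as follows. This is a simplification of the election rules in RFC 5798 (highest configured priority wins, with the higher primary IP address as a tie-breaker); the priorities and addresses are illustrative only.

```python
import ipaddress

def vrrp_master(routers):
    """routers: list of (priority, primary_ip) tuples.
    Highest priority wins; ties go to the higher primary IP address."""
    return max(routers,
               key=lambda r: (r[0], int(ipaddress.ip_address(r[1]))))

pair = [(100, "192.0.2.1"), (200, "192.0.2.2")]
print(vrrp_master(pair))  # (200, '192.0.2.2') becomes master
```

Servers simply point their default gateway at the shared virtual IP address; whichever router the election selects answers for it.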

6.5 FCoE capabilities


One common way to reduce management points and networking elements in an environment is to converge technologies that were traditionally implemented on separate physical infrastructures. Just as office phone systems have been collapsed from a separate cabling plant and components into a common IP infrastructure, Fibre Channel networks are experiencing the same type of convergence. Like phone systems, Fibre Channel is moving to Ethernet.

Fibre Channel over Ethernet (FCoE) removes the need for separate HBAs on the servers and separate Fibre Channel cables out of the back of the server or chassis. Instead, a Converged Network Adapter (CNA) is installed in the server. The CNA presents what appears to be both an NIC and an HBA to the OS, but the output from the server is only 10 Gb Ethernet.

The IBM Flex System Enterprise Chassis provides multiple I/O modules that support FCoE. The EN4093R 10Gb Scalable Switch, CN4093 10Gb Converged Scalable Switch, and SI4093 System Interconnect Module all support FCoE, with the CN4093 10Gb Converged Scalable Switch also supporting the Fibre Channel Forwarder (FCF) function, which supports NPV, full-fabric FC, and native FC ports. This FCoE function also requires the correct components on the Compute Nodes in the form of the proper CNA and licensing. No special license is needed on any of the I/O modules to support FCoE because support comes as part of the base product. The EN4091 10Gb Ethernet Pass-thru can also provide support for FCoE, assuming that the proper CNA and license are on the Compute Node and the upstream connection supports FCoE traffic.

The EN4093R 10Gb Scalable Switch and SI4093 System Interconnect Module are FIP Snooping Bridges (FSBs) and thus provide FCoE transit services between the Compute Node and an upstream Fibre Channel Forwarder (FCF) device. A typical design requires an upstream device, such as an IBM G8264CS switch, that breaks the FC portion of the FCoE out to the necessary FC format.

Important: In its default mode, the SI4093 System Interconnect Module supports passing FCoE traffic up to the FCF, but offers no FSB support. If FIP snooping is required on the SI4093 System Interconnect Module, it must be placed into local domain SPAR mode.

The CN4093 10Gb Converged Scalable Switch can also act as an FSB, but if wanted, it can operate as an FCF, which allows the switch to support a full fabric mode for direct storage attachment, or N Port Virtualizer (NPV) mode for connection to a non-IBM SAN fabric. The CN4093 10Gb Converged Scalable Switch also supports native FC ports for directly connecting FC devices.

Because the Enterprise Chassis also supports native Fibre Channel modules and various FCoE technologies, it can provide a storage connection solution that meets virtually any goal for remote storage access.

6.6 Virtual Fabric vNIC solution capabilities


Virtual Network Interface Controller (vNIC) is a way to divide a physical NIC into smaller logical NICs (or partition them) so that the OS has more ways to logically connect to the infrastructure. The vNIC feature is supported only on 10 Gb ports that face the compute nodes within the chassis, and only on certain Ethernet I/O modules. These currently include the EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch. vNIC also requires a node adapter that supports this functionality.

As of this writing, there are two primary forms of vNIC available: Virtual Fabric mode (or switch-dependent mode) and switch-independent mode. The Virtual Fabric mode is subdivided into two submodes: dedicated uplink vNIC mode and shared uplink vNIC mode.

All vNIC modes share the following elements:
- They are supported only on 10 Gb connections.
- Each vNIC mode allows a NIC to be divided into up to four vNICs per physical NIC (there can be fewer than four, but not more).
- They all require an adapter that has support for one or more of the vNIC modes.
- When vNICs are created, the default bandwidth is 2.5 Gb for each vNIC, but it can be configured to be anywhere from 100 Mb up to the full bandwidth of the NIC.
- The bandwidth of all configured vNICs on a physical NIC cannot exceed 10 Gb.
- All modes support FCoE.
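The bandwidth rules listed above can be captured in a small validity check. This is illustrative only (the actual limits are enforced by the switch and adapter, not by user code): at most four vNICs per physical NIC, each between 100 Mb and 10 Gb, with a total no greater than the 10 Gb physical port.

```python
def valid_vnic_split(bandwidths_gb):
    """Validate a proposed split of one physical 10 Gb NIC into vNICs:
    at most 4 vNICs, each between 100 Mb (0.1 Gb) and 10 Gb, and the
    total must not exceed the 10 Gb physical NIC."""
    return (1 <= len(bandwidths_gb) <= 4
            and all(0.1 <= b <= 10 for b in bandwidths_gb)
            and sum(bandwidths_gb) <= 10)

print(valid_vnic_split([2.5, 2.5, 2.5, 2.5]))  # True: the default split
print(valid_vnic_split([4, 4, 4]))             # False: 12 Gb exceeds 10 Gb
```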


A summary of the differences and similarities of these modes is shown in Table 6-3. These differences and similarities are described next.

Table 6-3 Attributes of vNIC modes

                                                   IBM Virtual Fabric mode
Capability                                         Dedicated uplink   Shared uplink   Switch independent mode
Requires support in the I/O module                 Yes                Yes             No
Requires support in the NIC/CNA                    Yes                Yes             Yes
Supports adapter transmit rate control             Yes                Yes             Yes
Supports I/O module transmit rate control          Yes                Yes             No
Supports changing rate without restart of node     Yes                Yes             No
Requires a dedicated uplink per vNIC group         Yes                No              No
Support for node OS-based tagging                  Yes                No              Yes
Support for failover per vNIC group                Yes                Yes             N/A
Support for more than one uplink path per vNIC     No                 No              Yes

6.6.1 Virtual Fabric mode vNIC


Virtual Fabric mode vNIC depends on the switch in the I/O module bay to participate in the vNIC process. Specifically, the IBM Flex System Fabric EN4093R 10Gb Scalable Switch and the CN4093 10Gb Converged Scalable Switch support this mode. It also requires an adapter on the compute node that supports the vNIC Virtual Fabric mode feature.

In Virtual Fabric mode vNIC, configuration is performed on the switch. The configuration information is communicated between the switch and the adapter so that both sides agree on and enforce bandwidth controls. The configured speeds can be changed at any time without reloading the OS or the I/O module.

There are two types of Virtual Fabric vNIC modes: dedicated uplink mode and shared uplink mode. Both modes incorporate the concept of a vNIC group on the switch, which is used to associate vNICs and physical ports into virtual switches within the chassis. How these vNIC groups are used is the primary difference between dedicated uplink mode and shared uplink mode.

Virtual Fabric vNIC modes share the following attributes:
- A vNIC group must be created on the I/O module. Similar vNICs are bundled into common vNIC groups.
- Each vNIC group is treated as a virtual switch within the I/O module. Packets in one vNIC group can reach a different vNIC group only by going through an external switch or router.
- For the purposes of Spanning Tree and packet flow, each vNIC group is treated as a unique switch by upstream connecting switches and routers.
- Both modes support the addition of physical NICs (pNICs, the NICs of nodes not using vNIC) to vNIC groups for internal communication with other pNICs and vNICs in that vNIC group, and these pNICs share any uplink associated with that vNIC group.

424

IBM PureFlex System and IBM Flex System Products and Technology

Dedicated uplink mode


Dedicated uplink mode is the default mode when vNIC is enabled on the I/O module. In dedicated uplink mode, each vNIC group must have its own dedicated physical or logical (aggregated) uplink. In this mode, no more than one physical or logical uplink can be assigned to a vNIC group, and it is assumed that high availability is achieved by some combination of aggregation on the uplink or NIC teaming on the server.

In dedicated uplink mode, vNIC groups are VLAN independent to the nodes and the rest of the network, which means that you do not need to create on the switch each VLAN that is used by the nodes. The vNIC group takes each packet (tagged or untagged) and moves it through the switch. This is accomplished by a form of Q-in-Q tagging. Each vNIC group is assigned a VLAN that is unique to that vNIC group. Any packet (tagged or untagged) that comes in on a downstream or upstream port in that vNIC group has a tag placed on it equal to the vNIC group VLAN. As the packet leaves the vNIC group toward the node or out an uplink, that tag is removed and the original tag (or no tag, depending on the original packet) is revealed.
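The Q-in-Q behavior described above can be sketched as follows. This is an illustrative model of the outer-tag push and pop, not switch firmware; the function and field names are assumptions for the example.

```python
# Hypothetical model of Q-in-Q handling in dedicated uplink mode: the
# vNIC group VLAN is pushed as an outer tag on any frame entering the
# group, and popped on egress, so the original customer tag (or lack
# of one) is preserved end to end.

def ingress(frame, group_vlan):
    """Push the vNIC group VLAN as an outer tag on an arriving frame."""
    return {"outer_vlan": group_vlan, "inner": frame}

def egress(tunneled):
    """Pop the outer tag, revealing the original frame unchanged."""
    return tunneled["inner"]

# An untagged frame and a customer-tagged frame both pass through intact.
untagged = {"vlan": None, "payload": "arp-request"}
tagged = {"vlan": 30, "payload": "app-data"}

for frame in (untagged, tagged):
    in_switch = ingress(frame, group_vlan=4091)  # group VLAN is unique per vNIC group
    assert in_switch["outer_vlan"] == 4091       # switching uses the group VLAN only
    assert egress(in_switch) == frame            # original tag (or none) is restored
```

Because the switch forwards on the outer tag only, the customer VLANs never need to be defined on the switch, which is the point of the mode.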

Shared uplink mode


Shared uplink mode is a version of vNIC that is currently slated to be available in the latter half of 2013 for I/O modules that support vNIC (check the Application Guide or Release Notes for the device to confirm support for this feature). It is a global option that can be enabled on an I/O module that has the vNIC feature enabled. As the name suggests, it allows an uplink to be shared by more than one vNIC group, which reduces the number of uplinks that are required.

Shared uplink mode also changes the way that the vNIC groups process packets for tagging. In shared uplink mode, it is expected that the servers no longer use tags. Instead, the vNIC group VLAN acts as the tag that is placed on the packet. When a server sends a packet into the vNIC group, a tag equal to the vNIC group VLAN is placed on it, and the packet is sent out the uplink tagged with that VLAN.

Chapter 6. Network integration

425

Virtual Fabric vNIC shared uplink mode is shown in Figure 6-8.

Figure 6-8 IBM Virtual Fabric vNIC shared uplink mode (the original figure shows a compute node running VMware ESX with a 10 Gb NIC divided into vNICs 1.1 - 1.4, tagged with VLANs 100, 200, 300, and 400, which map through internal port INT-1 to vNIC groups 1 - 4 on the EN4093 10Gb Scalable Switch and out external uplinks)

6.6.2 Switch-independent mode vNIC


Switch-independent mode vNIC is configured only on the node; the I/O module is unaware of this virtualization. The I/O module acts as a normal switch in all ways: any VLAN that must be carried through the switch must be created on the switch and allowed on the wanted ports. This mode is enabled at the node directly (through F1 setup at boot time, through Emulex OneCommand Manager, or possibly through FSM configuration pattern controls), and it has similar rules to dedicated vNIC mode regarding how you can divide the vNIC. However, any bandwidth settings are limited to how the node sends traffic, not how the I/O module sends traffic back to the node, because the I/O module is unaware of the vNIC virtualization taking place on the compute node. Also, the bandwidth settings cannot be changed in real time because a reload is required for any speed change to take effect.

Switch independent mode requires setting an LPVID value in the compute node NIC configuration, which acts as a catch-all VLAN for the vNIC to which it is assigned. Any untagged packet from the OS that is sent to the vNIC is sent to the switch with the tag of the LPVID for that vNIC. Any tagged packet that is sent from the OS to the vNIC is sent to the switch with the tag set by the OS (the LPVID is ignored). Owing to this interaction, most users set the LPVID to some unused VLAN and then tag all packets in the OS. One exception is a compute node that needs PXE to boot the base OS. In that case, the LPVID for the vNIC that provides the PXE service must be set to the wanted PXE VLAN.
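The LPVID decision described above reduces to a simple rule, sketched here as an illustration (the function name and values are assumptions, not vendor code): the OS tag always wins, and the LPVID applies only to untagged traffic.

```python
# Illustrative sketch of switch independent mode tagging: untagged frames
# from the OS leave the vNIC tagged with the LPVID; frames the OS already
# tagged keep the OS tag, and the LPVID is ignored.

def wire_vlan(os_vlan_tag, lpvid):
    """Return the VLAN tag a frame carries on the wire toward the switch."""
    if os_vlan_tag is None:      # untagged by the OS
        return lpvid             # the catch-all LPVID is applied
    return os_vlan_tag           # the OS tag wins; the LPVID is ignored

# Common practice: set the LPVID to an unused VLAN and tag everything in
# the OS. A PXE-booting vNIC is the exception: its LPVID must be the PXE VLAN.
assert wire_vlan(None, 4094) == 4094   # untagged traffic -> LPVID
assert wire_vlan(100, 4094) == 100     # OS tag passes through unchanged
```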

426

IBM PureFlex System and IBM Flex System Products and Technology

Because all packets that come into the switch from a NIC configured for switch independent mode vNIC are always tagged (by the OS, or by the LPVID setting if the OS is not tagging), all VLANs that are allowed on the switch side of the port should also be tagged. That is, set the PVID/Native VLAN on the switch port to some unused VLAN, or set it to one that is used and enable PVID tagging to ensure that the port sends and receives PVID/Native VLAN packets as tagged.

In most operating systems, switch independent mode vNIC supports as many VLANs as the OS supports. One exception is bare metal Windows installations, where in switch independent mode only a limited number of VLANs are supported per vNIC (a maximum of 63 VLANs, but less in some cases, depending on the version of Windows and which driver is in use). See the documentation for your NIC for details about any limitations for Windows and switch independent mode vNIC.

In this section, we described the various modes of vNIC. The mode that is best suited for a user depends on the user's requirements. Virtual Fabric dedicated uplink mode offers the most control, and shared uplink mode and switch-independent mode offer the most flexibility with uplink connectivity.

6.7 Unified Fabric Port feature


Unified Fabric Port (UFP) is another approach to NIC virtualization, similar to vNIC but with enhanced flexibility, and it should be considered the direction for future development in the virtual NIC area for IBM switching solutions. UFP is supported today on the EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch. UFP and vNIC are mutually exclusive in that you cannot enable both at the same time on the same switch.

If a comparison were to be made between UFP and vNIC, UFP is most closely related to vNIC Virtual Fabric mode, in that both sides (the switch and the NIC/CNA) share in controlling bandwidth usage, but there are significant differences. Compared to vNIC, UFP supports the following modes of operation per virtual NIC (vPort):
Access: The vPort allows only the default VLAN, which is similar to a physical port in access mode.
Trunk: The vPort permits host-side tagging and supports up to 32 customer-defined VLANs on each vPort (4000 total across all vPorts).
Tunnel: Q-in-Q mode, where the vPort is customer VLAN independent (this is the closest to vNIC Virtual Fabric dedicated uplink mode). Tunnel mode is the default mode for a vPort.
FCoE: Dedicates the specific vPort to FCoE traffic.

The following rules and attributes must be considered regarding UFP vPorts:
They are supported only on 10 Gb internal interfaces.
UFP allows a NIC to be divided into up to four virtual NICs, called vPorts, per physical NIC (fewer than four is possible, but not more than four).
Each vPort can be set to a different mode or the same mode (except FCoE mode, which is limited to a single vPort on a UFP port, and specifically vPort 2).
UFP requires the proper support in the compute node for any port that uses UFP.


By default, each vPort is guaranteed 2.5 Gb and can burst up to the full 10 Gb if other vPorts do not need the bandwidth. The guaranteed minimum bandwidth and maximum bandwidth for each vPort are configurable. The minimum bandwidth settings of all configured vPorts on a physical NIC cannot exceed 10 Gb.
Each vPort must have a default VLAN assigned. This default VLAN is used for different purposes in different modes and must be unique across the other three vPorts of the physical port. In other words, vPort 1.1 must have a different default VLAN assigned than vPort 1.2, 1.3, or 1.4.
In trunk or access mode, this default VLAN is untagged by default, but it can be configured for tagging if wanted, which is similar to tagging the native/PVID VLAN on a physical port. In tunnel mode, the default VLAN is the outer tag for the Q-in-Q tunnel through the switch and is not seen by the end hosts and upstream network.
vPort 2 is the only vPort that supports the FCoE setting. vPort 2 can also be used for the other modes (access, trunk, or tunnel). However, if you want the physical port to support FCoE, this function can be defined only on vPort 2.

Table 6-4 offers some check points to help select a wanted UFP mode.
Table 6-4   Attributes of UFP vPort modes

  Capability                                              Access  Trunk  Tunnel  FCoE
  Support for a single untagged VLAN on the vPort (a)     Yes     Yes    Yes     No
  Support for VLAN restrictions on the vPort (b)          Yes     Yes    No      Yes
  VLAN independent pass-through for customer VLANs        No      No     Yes     No
  Support for FCoE on the vPort                           No      No     No      Yes
  Support to carry more than 32 VLANs on a vPort          No      No     Yes     No

a. Typically, a user sets the vPort to access mode if the OS uses the vPort as a simple untagged link. Trunk and tunnel mode can also carry a single untagged VLAN, but they are not necessary for that purpose.
b. Access and FCoE modes restrict VLANs to only the default VLAN set on the vPort. Trunk mode restricts VLANs to the ones that are specifically allowed on the switch (up to 32).
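The vPort rules listed above can be expressed as a small validation sketch. This is illustrative only (the function and field names are assumptions, not IBM code): at most four vPorts per physical NIC, guaranteed minimums summing to no more than 10 Gb, a unique default VLAN per vPort, and FCoE allowed only on vPort 2.

```python
# Sketch of the UFP vPort configuration constraints described in the text.

def validate_vports(vports):
    """vports: list of dicts with 'id', 'min_gb', 'default_vlan', 'mode'."""
    if len(vports) > 4:
        return False                                   # at most four vPorts per NIC
    if sum(v["min_gb"] for v in vports) > 10:
        return False                                   # minimums exceed the 10 Gb link
    vlans = [v["default_vlan"] for v in vports]
    if len(vlans) != len(set(vlans)):
        return False                                   # default VLANs must be unique
    if any(v["mode"] == "fcoe" and v["id"] != 2 for v in vports):
        return False                                   # FCoE is allowed only on vPort 2
    return True

# The default split: four vPorts, each guaranteed 2.5 Gb, tunnel mode.
default = [{"id": i, "min_gb": 2.5, "default_vlan": 4000 + i, "mode": "tunnel"}
           for i in (1, 2, 3, 4)]
assert validate_vports(default)
assert not validate_vports([{"id": 1, "min_gb": 6, "default_vlan": 10, "mode": "fcoe"}])
```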

What criteria decide whether a UFP or vNIC solution should be implemented to provide the virtual NIC capability? In an environment that has not standardized on a specific virtual NIC technology and does not need per-logical-NIC failover today, UFP is the way to go. As noted, all future virtual NIC development is on UFP, and the per-logical-NIC failover function will be available in a coming release. UFP has the advantage of being able to emulate the vNIC Virtual Fabric modes (through tunnel mode for dedicated uplink vNIC and access mode for shared uplink vNIC), but it can also offer virtual NIC support with customer VLAN awareness (trunk mode) and shared virtual group uplinks for access and trunk mode vPorts.

If an environment has already standardized on Virtual Fabric mode vNIC and plans to stay with it, or requires the ability of failover per logical group today, Virtual Fabric mode vNIC is recommended.


Switch independent mode vNIC is actually outside this decision-making process. Switch independent mode has its own unique attributes, one being that it is truly switch independent, which allows you to configure the switch without restrictions related to the virtual NIC technology, other than allowing the proper VLANs. UFP and Virtual Fabric mode vNIC each have a number of unique switch-side requirements and configurations. The downside of switch independent mode vNIC is the inability to make changes without reloading the server, and the lack of bidirectional bandwidth allocation.

6.8 Easy Connect concept


The Easy Connect concept (sometimes called Easy Connect mode) is not a specific feature, but a way of using several different features to minimize ongoing switch management requirements. Some customers want the potential uplink cable reduction or increased compute-node-facing ports that are offered by a switch-based solution, but prefer the ease of use of a pass-through-based solution, which avoids the increase in management effort that each new edge switch brings. The Easy Connect concept offers this reduction in management in a fully scalable switch-based solution. A number of features can be used to accomplish an Easy Connect solution; we describe a few of them here.

Easy Connect takes a switch module and makes it transparent to the upstream network and the compute nodes. It does this by pre-creating a large aggregation of the uplinks (so there is no chance for loops), disabling spanning tree (so the upstream does not receive any spanning tree BPDUs), and then using a form of Q-in-Q to mask user VLAN tagging as the packets travel through the switch (to remove the need to configure each VLAN the compute nodes might need). After it is configured, a switch in Easy Connect mode does not require any configuration changes as a customer adds and removes VLANs. In essence, Easy Connect turns the switch into a VLAN independent port aggregator, with support for growing up to the maximum bandwidth of the product (for example, adding upgrade FoDs to increase the 10 Gb links to compute nodes and the number and types of uplinks that are available for connection to the upstream network).

To accomplish an Easy Connect mode, customers have the following options:
For customers that want an Easy Connect type of solution ready for use (zero-touch switch deployment), the SI4093 System Interconnect Module provides this by default. The SI4093 System Interconnect Module accomplishes this with the following factory default configuration:
- All default internal and external ports are placed into a single SPAR.
- All uplinks are placed into a common LACP aggregation, with the LACP suspend-port feature enabled.
- The failover feature is enabled on the common LACP key.
- There is no spanning tree support (the SI4093 is designed to never permit more than a single uplink path per SPAR, so it does not support spanning tree).
For customers that want the option of using advanced features but also want an Easy Connect mode solution, the EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch offer configurable options that can make them transparent to the attaching compute nodes and upstream network switches, with the option of changing to more advanced modes of configuration when wanted.


As noted, the SI4093 System Interconnect Module accomplishes this by defaulting to the SPAR feature in pass-through mode, which puts all compute node ports and all uplinks into a common Q-in-Q group that transparently moves any user packets (tagged or untagged) between the compute nodes and the upstream network.

For the EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch, a number of features can be used to accomplish this; a few of them are described in this publication. The primary difference between these switches and the SI4093 System Interconnect Module is that on these models, you must first perform a small set of configuration steps to set up this transparent mode, after which no more management of the switches is required.

One common element of all Easy Connect modes is the use of a Q-in-Q type operation to hide user VLANs from the switch fabric in the I/O module, so that the switch acts more as a port aggregator and is user VLAN independent. This Easy Connect mode configuration can be accomplished with any of the following features:
Use of the tagpvid-ingress option
The vNIC Virtual Fabric dedicated uplink mode feature
The UFP vPort tunnel mode feature
The SPAR pass-through domain feature

In general, all of these features can provide the Easy Connect functionality, with each having some pros and cons. For example, if you want to use Easy Connect with vLAG, you use the tagpvid-ingress mode (the other modes do not permit the vLAG ISL). But if you want to use Easy Connect with FCoE today, you cannot use tagpvid-ingress and must switch to something such as the vNIC Virtual Fabric dedicated uplink mode or UFP tunnel mode (SPAR pass-through mode allows FCoE but does not support FIP snooping, which might or might not be a concern for some customers).

As an example of how tagpvid-ingress works (and in essence each of these modes), consider the tagpvid-ingress operation. When all internal ports and the wanted uplink ports are placed into a common PVID/Native VLAN, and tagpvid-ingress is then enabled on these ports (along with any wanted aggregation protocol on the uplinks that is required to match the other end of the links), all ports with this Native/PVID setting become part of a Q-in-Q tunnel, with the Native/PVID VLAN acting as the outer tag (and traffic switched on this VLAN) and the inner customer tag simply riding through the fabric on the Native/PVID VLAN to the wanted port (or ports) in this tunnel.

In all modes of Easy Connect, local switching is still supported, but if any packet must get to a different subnet or VLAN, it must go to an external Layer 3 routing device. We recommend that you contact your local IBM networking resource if you want to implement Easy Connect on the EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch.

6.9 Stacking feature


Stacking is supported on the EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch modules. It is provided by reserving a group of uplinks as stacking links and creating a ring of links, which ensures that the loss of a single link or a single switch in the stack does not disrupt the stack.


Stacking provides the ability to take up to eight switches and treat them as a single switch from a port usage and management perspective. Ports on different switches in the stack can be aggregated upstream and downstream, and you log in to only a single IP address to manage all switches in the stack. For devices attaching to the stack, the stack looks and acts like a single large switch.

Important: Setting a switch to stacking mode requires a reload of the switch. Upon coming up in stacking mode, the switch is reset to factory defaults and generates a new set of port numbers on that switch. Where the ports in a non-stacked switch are denoted with a simple number or a name (that is, INTA1, EXT4, and so on), ports in a stacked switch use numbering such as X:Y, where X is the number of the switch in the stack, and Y is the physical port number on that stack member.

Before the v7.7 releases of code, it was possible only to stack the EN4093R 10Gb Scalable Switch into a common stack. In v7.7 and later code, support was added to stack a pair of CN4093 10Gb Converged Scalable Switch modules into a stack of EN4093R 10Gb Scalable Switch modules to add FCF capability to the stack. The limit for this hybrid stacking is a maximum of six EN4093R 10Gb Scalable Switch and two CN4093 10Gb Converged Scalable Switch modules in a common stack.

Stacking the Enterprise Chassis I/O modules directly to the IBM Top of Rack switches is not supported. Connections between a stack of Enterprise Chassis I/O modules and upstream switches can be made with standard single or aggregated connections, including the use of vLAG/vPC on the upstream switches to connect links across stack members into a common non-blocking fabric between the stack and the Top of Rack switches. An example of four I/O modules in a highly available stacking design is shown in Figure 6-9.

Figure 6-9 A highly available stacking design with four I/O modules (the original figure shows two chassis, each with two I/O modules connected in a single stacking ring, with NIC 1 and NIC 2 of each compute node attached to different I/O modules, and uplinks to two Top of Rack switches joined by multi-chassis aggregation (vLAG, vPC, mLAG, and so on) toward the upstream network)

This example shows a design with no single point of failure, which uses four I/O modules in a single stack.
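The port renumbering described in the Important note above can be sketched as follows. The helper names are illustrative assumptions, not switch code; they simply show the X:Y scheme for a stack of up to eight members.

```python
# Sketch of stacked port numbering: a stand-alone switch names ports
# INTA1, EXT4, and so on, while a stack member's ports become "X:Y"
# (X = switch number in the stack, Y = port number on that member).

def stack_ports(members, ports_per_member):
    """Enumerate every port name in a stack of up to eight members."""
    assert members <= 8                      # stacking supports up to eight switches
    return [f"{m}:{p}" for m in range(1, members + 1)
                       for p in range(1, ports_per_member + 1)]

# Two ports per member across a four-switch stack:
names = stack_ports(members=4, ports_per_member=2)
assert names == ["1:1", "1:2", "2:1", "2:2", "3:1", "3:2", "4:1", "4:2"]
```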


One limitation of the current implementation of stacking is that a code upgrade requires a reload of the entire stack. Because upgrades are uncommon and should be scheduled for non-production hours, a single stack design is efficient and clean. However, some customers do not want any downtime (scheduled or otherwise), so a single stack design is unsuitable for them.

For users that still want to use stacking, a two-stack design might be an option, in which the switches in bay 1 form one stack and the switches in bay 2 form a second stack. The primary advantage of a two-stack design is that each stack can be upgraded one at a time, with the running stack maintaining connectivity for the compute nodes during the upgrade or reload. The downside is that traffic on one stack that must reach switches in the other stack must go through the upstream network.

As you can see, stacking might or might not be suitable for all customers. However, if you want to use it, it is another tool that is available for building a robust infrastructure by using the Enterprise Chassis I/O modules.

6.10 OpenFlow support


As of v7.7 code, the EN4093R 10Gb Scalable Switch supports an OpenFlow option. OpenFlow is an open, standards-based approach to network switching that separates networking into a local data plane (on the switch) and a control plane that is external to the network switch (usually on a server). Instead of using the normal learning mechanisms to build up tables of where packets must go, a switch that is running OpenFlow delegates the decision-making process to the external server. That server tells the switch to establish flows for the sessions that must traverse the switch.

The initial release of support for OpenFlow on the EN4093R 10Gb Scalable Switch is based on the OpenFlow 1.0.0 standard and supports the following modes of operation:
Switch/Hybrid mode: Defaults to all ports as normal switch ports, but can be enabled for OpenFlow Hybrid mode without a reload, such that some ports can then be enabled for OpenFlow while others still run normal switching.
Dedicated OpenFlow mode: Requires a reload to go into effect. All ports on the switch are OpenFlow ports.

By default, the switch is a normal network switch that can be dynamically enabled for OpenFlow. In this default mode, you can issue a simple operational command to put the switch into Hybrid mode and start to configure ports as OpenFlow or normal switch ports. Inside the switch, ports that are configured into OpenFlow mode are isolated from ports in normal mode; any communication between OpenFlow and normal ports must occur outside of the switch. Hybrid mode is suitable for users who want to experiment with OpenFlow on some ports while still using the other ports for regular switch traffic. Dedicated OpenFlow mode is for a customer that plans to run the entire switch in OpenFlow mode. It has the benefit of allowing a user to guarantee the number of a certain type of flows, known as FDB flows, which Hybrid mode does not. IBM also offers an OpenFlow controller to manage ports in OpenFlow mode.
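The control-plane/data-plane split described above can be illustrated with a minimal sketch (assumed semantics, loosely modeled on the OpenFlow 1.0 idea; the names are not from any real controller API): the switch forwards only packets that match a flow the external controller has installed, and everything else is referred back to the controller.

```python
# Minimal illustration of the OpenFlow split: flow decisions live on an
# external controller; the switch only applies installed flow entries.

flow_table = {}                                    # data plane state on the switch

def controller_install(match, out_port):
    """The external controller pushes a flow entry down to the switch."""
    flow_table[match] = out_port

def switch_forward(packet):
    """The switch forwards only what matches an installed flow."""
    match = (packet["src"], packet["dst"])
    return flow_table.get(match, "send-to-controller")

controller_install(("10.0.0.1", "10.0.0.2"), out_port=7)
assert switch_forward({"src": "10.0.0.1", "dst": "10.0.0.2"}) == 7
assert switch_forward({"src": "10.0.0.9", "dst": "10.0.0.2"}) == "send-to-controller"
```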
For more information about configuring OpenFlow on the EN4093R 10Gb Scalable Switch, see the appropriate Application Guide for the product. For more information about OpenFlow, see this website:
http://www.openflow.org

6.11 802.1Qbg Edge Virtual Bridge support


802.1Qbg, also known as Edge Virtual Bridging (EVB) and Virtual Ethernet Port Aggregation (VEPA), is an IEEE standard that is targeted at bringing better network visibility and control into virtualized server environments. It does this by moving the control of packet flows between VMs up from the virtual switch in the hypervisor into the attaching physical switch, which allows the physical switch to provide granular control of the flows between VMs. It also supports the virtualization of the physical NICs into virtual NICs through protocols that are part of the 802.1Qbg specification. 802.1Qbg is currently supported on the EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch modules.

The IBM implementation of 802.1Qbg supports the following features:
Virtual Ethernet Bridging (VEB) and VEPA: Provide support for switching between VMs on a common hypervisor.
Edge Control Protocol (ECP): Provides reliable delivery of upper-layer protocol data units (PDUs).
Virtual Station Interface (VSI) Discovery and Configuration Protocol (VDP): Provides support for advertising VSIs to the network and centralized configuration of policies for the VM, regardless of its location in the network.
EVB Type-Length-Value (TLV): A component of Link Layer Discovery Protocol (LLDP) that is used to aid in the discovery and configuration of VEPA, ECP, and VDP.

The current IBM implementation for these products is based on the 802.1Qbg draft, which has some variations from the final standard. For more information about IBM's implementation and operation of 802.1Qbg, see the appropriate Application Guide for the switch. For more information about this standard, see the IEEE documents at this website:
http://standards.ieee.org/about/get/802/802.1.html

6.12 SPAR feature


SPAR is a feature that allows a physical switch to be carved into multiple logical switches. After the switch is carved up, ports within a SPAR session can talk only to each other. Ports that do not belong to a specific SPAR cannot communicate with ports in that SPAR without going outside the switch. Currently, the EN4093R 10Gb Scalable Switch, the CN4093 10Gb Converged Scalable Switch, and the SI4093 System Interconnect Module support SPAR.

SPAR includes the following primary modes of operation:
Pass-through domain mode: This mode is the default when SPAR is enabled. It is VLAN independent: it passes tagged and untagged packets through the SPAR session without looking at the customer tag.


On the SI4093 System Interconnect Module, SPAR supports passing FCoE packets to an upstream FCF, but without the benefit of FIP snooping within the SPAR. The EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch do not support FCoE traffic in pass-through domain mode today.
Local domain mode: This mode is not VLAN independent and requires a user to create any wanted VLANs on the switch. Currently, there is a limit of 256 VLANs in local domain mode. It provides support for FIP snooping on FCoE sessions. Unlike pass-through domain mode, local domain mode provides strict control of end-host VLAN usage.

The following points should be considered with regard to SPAR:
SPAR is disabled by default on the EN4093R 10Gb Scalable Switch and CN4093 10Gb Converged Scalable Switch.
SPAR is enabled by default on the SI4093 System Interconnect Module, with all ports defaulting to a single pass-through SPAR group. This configuration can be changed if wanted.
Any port can be a member of only a single SPAR group at one time.
Only a single uplink path is allowed per SPAR group (a single link, a static aggregation, or an LACP aggregation). This limitation ensures that no loops are possible with ports in a SPAR group.
Currently, SPAR cannot be used with UFP or Virtual Fabric vNIC. Switch independent mode vNIC is supported with SPAR. UFP support is slated for a future release.
Currently, up to eight SPAR sessions per switch are supported. This number might be increased in a future release.

As you can see, SPAR is another tool in the user's toolkit for deploying the Enterprise Chassis Ethernet switching solutions in unique ways.
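The SPAR constraints listed above lend themselves to a short validation sketch. This is purely illustrative (the function and field names are assumptions): a port belongs to at most one SPAR group, each group has exactly one uplink path, and at most eight SPAR sessions exist per switch.

```python
# Sketch of the SPAR configuration rules described in the text.

def validate_spar(groups):
    """groups: list of dicts with 'ports' (a set) and 'uplinks' (a list)."""
    if len(groups) > 8:
        return False                               # up to eight SPAR sessions per switch
    seen = set()
    for g in groups:
        if g["ports"] & seen:
            return False                           # a port cannot be in two SPAR groups
        seen |= g["ports"]
        if len(g["uplinks"]) != 1:
            return False                           # exactly one uplink path per group
    return True

# A single aggregation counts as one logical uplink path.
ok = [{"ports": {"INTA1", "INTA2"}, "uplinks": ["EXT1"]},
      {"ports": {"INTA3"}, "uplinks": ["LACP-key-100"]}]
bad = [{"ports": {"INTA1"}, "uplinks": ["EXT1", "EXT2"]}]   # two separate uplink paths
assert validate_spar(ok)
assert not validate_spar(bad)
```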

6.13 Management
The Enterprise Chassis is managed as an integrated solution, but it also offers the ability to manage each element as an individual product. From an I/O module perspective, the Ethernet switch modules can be managed through the IBM Flex System Manager (FSM), an integrated management appliance for all IBM Flex System solution components.


Network Control, a component of FSM, provides advanced network management functions for IBM Flex System Enterprise Chassis network devices. The following functions are included in Network Control:
Discovery
Inventory
Network topology
Health and status monitoring
Configuring network devices

Network Control is a preinstalled plug-in that builds on the base management software capabilities by integrating the launch of vendor-based device management tools, topology views of network connectivity, and subnet-based views of servers and network devices. Network Control offers the following network-management capabilities:
Discover network devices in your environment
Review network device inventory in tables or a network topology view
Monitor the health and status of network devices
Manage devices by groups: Ethernet switches, Fibre Channel over Ethernet, or subnet
View network device configuration settings and apply templates to configure devices, including Converged Enhanced Ethernet quality of service (QoS), VLANs, and Link Layer Discovery Protocol (LLDP)
View systems according to VLAN and subnet
Run network diagnostic tools, such as ping and trace route
Create logical network profiles to quickly establish VLAN connectivity
Simplify VM connection management by configuring multiple characteristics of a network when virtual machines are part of a network system pool
With the VMControl management software, maintain network state (VLANs and ACLs) as a virtual machine is migrated (KVM)
Manage virtual switches, including virtual Ethernet bridges
Configure port profiles, a collection of network settings associated with a virtual system
Automatically configure devices in network system pools

Ethernet I/O modules can also be managed by the command-line interface (CLI), web interface, IBM System Networking Switch Center, or any third-party SNMP-based management tool.
The EN4093R 10Gb Scalable Switch, CN4093 10Gb Converged Scalable Switch, and EN2092 1Gb Ethernet Switch modules all offer two CLI options (because it is a non-managed device, the pass-through module has no user interface). Currently, the default CLI for these Ethernet switch modules is the IBM Networking OS CLI, which is a menu-driven interface. A user also can enable an optional CLI known as the industry standard CLI (isCLI), which more closely resembles the Cisco IOS CLI. The SI4093 System Interconnect Module supports only the isCLI option for CLI access.

For more information about how to configure the various features and the operation of the various user interfaces, see the Application and Command Reference guides, which are available at this website:
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp


6.13.1 Management tools and their capabilities


The various user interfaces that are available for the I/O modules, whether the CLI or the web-based GUI, offer the ability to fully configure and manage all features that are available to the switches. Some elements of the modules can also be configured from the Chassis Management Module (CMM) user interface.

The best tool for a user often depends on that user's experience with different interfaces and their knowledge of networking features. Most commonly, the CLI is used by those who work with networks as part of their day-to-day jobs. The CLI offers the quickest way to accomplish tasks, such as scripting an entire configuration. The downside of the CLI is that it tends to be more cryptic to those who do not use it every day. For users that do not need the power of the CLI, the web-based GUI permits the configuration and management of all switch features.

IBM System Networking Switch Center


Aside from the tools that run directly on the modules, IBM also offers IBM System Networking Switch Center (SNSC), a tool that provides the following functions:
Improves network visibility and drives availability, reliability, and performance
Simplifies management of large groups of switches with automatic discovery of switches in the network
Automates and integrates management, deployment, and monitoring
Simple Network Management Protocol (SNMP)-based configuration and management
Support of network policies for virtualization
Authentication and authorization
Fault and performance management
Integration with IBM Systems Director and the VMware Virtual Center and vSphere clients

For more information about IBM SNSC, see this website:
http://www-03.ibm.com/systems/networking/software/snsc/index.html

Any third-party management platform that supports SNMP also can be used to configure and manage the modules.

IBM Fabric Manager


By using IBM Fabric Manager (IFM), you can quickly replace and recover compute nodes in your environment. IFM assigns Ethernet MAC, Fibre Channel worldwide name (WWN), and serial-attached SCSI (SAS) WWN addresses so that any compute nodes that are plugged into those bays take on the assigned addresses. These assignments enable the Ethernet and Fibre Channel infrastructure to be configured before and after any compute nodes are connected to the chassis. With IFM, you can monitor the health of compute nodes and without user intervention automatically replace a failed compute node from a designated pool of spare compute nodes. After receiving a failure alert, IFM attempts to power off the failing compute node, read the IFM virtualized addresses and boot target parameters, apply these parameters to the next compute node in the standby pool, and power on the standby compute node.

436

IBM PureFlex System and IBM Flex System Products and Technology

You can also pre-assign MAC and WWN addresses and storage boot targets for up to 256 chassis or 3584 compute nodes. By using an enhanced GUI, you can create addresses for compute nodes and save the address profiles. You then can deploy the addresses to the bays in the same chassis or in up to 256 different chassis without any compute nodes installed in the chassis. Additionally, you can create profiles for chassis not installed in the environment by associating an IP address to the future chassis. IFM is available as a Feature on Demand (FoD) through the IBM Flex System Manager management software.
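The pool-based behavior described above can be sketched as a toy Python model. The class and attribute names here are hypothetical illustrations of the concept (address profiles bound to chassis bays, with failover to a standby pool) and are not IFM's actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class BayProfile:
    """Virtualized identity bound to a chassis bay (hypothetical model)."""
    mac: str
    fc_wwn: str
    boot_target: str

class ChassisAddressManager:
    """Toy model of IFM-style address virtualization: identities belong
    to bays, not to node hardware, so a standby node powered on in a
    spare bay inherits the failed node's addresses."""

    BAYS_PER_CHASSIS = 14   # a Flex System Enterprise Chassis has 14 node bays
    MAX_CHASSIS = 256       # 256 chassis x 14 bays = 3584 compute nodes

    def __init__(self):
        self.profiles = {}  # bay number -> BayProfile
        self.standby = []   # bay numbers that hold spare compute nodes

    def assign(self, bay, profile):
        self.profiles[bay] = profile

    def fail_over(self, failed_bay):
        """Move the failed bay's identity to the next bay in the standby pool."""
        if not self.standby:
            raise RuntimeError("no spare compute node available")
        spare_bay = self.standby.pop(0)
        self.profiles[spare_bay] = self.profiles.pop(failed_bay)
        return spare_bay

mgr = ChassisAddressManager()
mgr.assign(3, BayProfile("00:1A:64:00:00:03", "50:05:07:68:00:00:00:03", "LUN 0"))
mgr.standby.append(7)
new_bay = mgr.fail_over(3)
print(new_bay, mgr.profiles[new_bay].mac)  # 7 00:1A:64:00:00:03
```

Because the boot target travels with the profile, the standby node boots from the same SAN volume as the node it replaces, which is the essence of the IFM recovery flow.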

6.14 Summary and conclusions


The IBM Flex System platform provides a unique set of features that enable the integration of leading-edge technologies and transformation approaches into the data center. These features help ensure that the availability, performance, scalability, security, and manageability goals of the data center networking design are met as efficiently as possible.

The key data center technology implementation trends include the virtualization of servers, storage, and networks, and steps toward infrastructure convergence that are based on mature 10 Gb Ethernet technology. In addition, the data center network is being flattened, and the logical overlay network becomes important in overall network design. These approaches and directions are fully supported by IBM Flex System offerings.

IBM Flex System data center networking capabilities provide solutions to many issues that arise in data centers where new technologies and approaches are being adopted:
- Network administrator responsibilities can no longer stop at the NIC level. Administrators must consider the server platforms' network-specific features and requirements, such as vSwitches. IBM offers the Distributed Switch 5000V, which provides standard functional capabilities and management interfaces to ensure smooth integration into a data center network management framework.
- Now that 10 Gb Ethernet networks have reached maturity and price attractiveness, they can provide sufficient bandwidth for virtual machines in virtualized server environments and become the foundation of a unified converged infrastructure. IBM Flex System offers 10 Gb Ethernet scalable switches and pass-through modules that can be used to build a unified converged fabric.
- Although 10 Gb Ethernet is becoming a prevalent server network connectivity technology, there is a need to go beyond 10 Gb to avoid oversubscription in switch-to-switch connectivity, thus freeing room for emerging technologies, such as 40 Gb Ethernet. IBM Flex System offers the industry's first 40 Gb Ethernet-capable switch, the EN4093, to ensure that sufficient bandwidth is available for inter-switch links.
- Network infrastructure must be VM-aware to ensure end-to-end QoS and security policy enforcement. IBM Flex System network switches offer VMready capability, which provides VM visibility to the network and ensures that network policies are implemented and enforced end-to-end.
- Pay-as-you-grow scalability becomes an essential approach as increasing network bandwidth demands must be satisfied in a cost-efficient way with no disruption in network services. IBM Flex System offers scalable switches that enable ports when required by purchasing and activating simple software FoD upgrades, without the need to buy and install additional hardware.


- Infrastructure management integration becomes more important because the interrelations between appliances and functions are difficult to control and manage. Without integrated tools that simplify data center operations, managing the infrastructure box-by-box becomes cumbersome. IBM Flex System offers centralized systems management with the integrated management appliance, IBM Flex System Manager, which integrates network management functions into a common data center management framework from a single pane of glass.


Chapter 7.

Storage integration
IBM Flex System Enterprise Chassis offers several possibilities for integration into storage infrastructures, such as Fibre Channel (FC), iSCSI, and Converged Enhanced Ethernet. This chapter addresses major considerations to take into account during IBM Flex System Enterprise Chassis storage infrastructure planning. These considerations include storage system interoperability, I/O module selection and interoperability rules, performance, high availability and redundancy, backup, and boot from storage area network (SAN). This chapter covers internal and external storage.

This chapter includes the following topics:
- 7.1, IBM Flex System V7000 Storage Node on page 440
- 7.2, External storage on page 459
- 7.3, Fibre Channel on page 468
- 7.4, FCoE on page 473
- 7.5, iSCSI on page 475
- 7.6, HA and redundancy on page 476
- 7.7, Performance on page 478
- 7.8, Backup solutions on page 478
- 7.9, Boot from SAN on page 481

Copyright IBM Corp. 2012, 2013. All rights reserved.


7.1 IBM Flex System V7000 Storage Node


The IBM Flex System V7000 Storage Node and V7000 Expansion Node are designed specifically to be installed within the IBM Flex System Enterprise Chassis. Figure 7-1 shows the unit.

Figure 7-1 IBM Flex System V7000 Storage Node

Figure 7-2 shows a V7000 Storage Node installed within an Enterprise Chassis. Power, management, and I/O connectors are provided by the chassis midplane.

Figure 7-2 Enterprise Chassis containing a V7000 Storage Node


The V7000 Storage Node offers the following features:
- Physical chassis Plug and Play integration
- Automated deployment and discovery
- Integration into the Flex System Manager Chassis map
- Fibre Channel over Ethernet (FCoE) optimized offering (plus FC and iSCSI)
- Advanced storage efficiency capabilities: thin provisioning, IBM FlashCopy, IBM Easy Tier, IBM Real-time Compression, and nondisruptive migration
- External virtualization for rapid data center integration
- Metro and Global Mirror for multi-site recovery
- Scalable up to 240 SFF drives (HDD and SSD); clustered systems support up to 960 SFF drives
- Support for Flex System compute nodes across multiple chassis

The functionality is somewhat comparable to the external Storwize V7000 product. Table 7-1 compares the two products.
Table 7-1 IBM Storwize V7000 versus IBM Flex System V7000 Storage Node function

Management software
- IBM Storwize V7000: Storwize V7000 and Storwize V7000 Unified
- IBM Flex System V7000 Storage Node: Flex System Manager (integrated server, storage, and networking management) and the Flex System V7000 management GUI (detailed storage setup)

GUI
- Both: Graphical user interface (GUI)

Capacity
- Both: 240 drives per Control Enclosure; 960 per clustered system

Mechanical
- IBM Storwize V7000: Storwize V7000 and Storwize V7000 Unified enclosures
- IBM Flex System V7000 Storage Node: Physically integrated into the IBM Flex System chassis

Host interfaces
- IBM Storwize V7000: SAN-attached 8 Gbps FC, 1 Gbps iSCSI, and optional iSCSI/FCoE; NAS-attached 1 Gbps Ethernet (Storwize V7000 Unified)
- IBM Flex System V7000 Storage Node: SAN-attached 8 Gbps FC, 10 Gbps iSCSI/FCoE

Cache per controller/enclosure/clustered system
- Both: 8 GB/16 GB/64 GB

Integrated features
- Both: IBM System Storage Easy Tier, FlashCopy, and thin provisioning

Mirroring
- Both: Metro Mirror and Global Mirror

Virtualization (internal and external) and data migration
- Both: Yes

Compression
- Both: Yes

Unified support
- IBM Storwize V7000: NAS connectivity that is supported by Storwize V7000 Unified; IBM Active Cloud Engine integrated
- IBM Flex System V7000 Storage Node: No

IBM Tivoli FlashCopy Manager support
- Both: Yes

Tivoli support
- IBM Storwize V7000: IBM Tivoli Storage Productivity Center Select, IBM Tivoli Storage Manager, and IBM Tivoli Storage Manager FastBack
- IBM Flex System V7000 Storage Node: IBM Tivoli Storage Productivity Center Select integrated into Flex System Manager; Tivoli Storage Productivity Center, Tivoli Storage Manager, and IBM Tivoli Storage Manager FastBack supported
When it is installed within the Enterprise Chassis, the V7000 Storage Node occupies a total of four standard node bays because it is a double-wide and double-high unit. A total of three V7000 Storage Nodes can be installed within a single Enterprise Chassis.

Chassis Management Module requirements: For redundancy, two CMMs must be installed in the chassis when a V7000 Storage Node is installed.

For more information about the requirements and limitations for the management by IBM Flex System Manager of Flex System V7000 Storage Node, Storwize V7000, and SAN Volume Controller, see this website:
http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.commontasks.doc/flex_storage_management.pdf

Installation of the V7000 Storage Node might require removal of the following items from the chassis:
- Up to four front filler panels
- Up to two compute node shelves

After the fillers and the compute node shelves are removed, two chassis rails must be removed from the chassis. Compute node shelf removal is shown in Figure 7-3.


Figure 7-3 Compute node shelf removal


After the compute node shelf is removed, the two compute node rails (left and right) must be removed from within the chassis by reaching inside and sliding up the blue touchpoint, as shown in Figure 7-4.

Figure 7-4 Removal of node slide rails

The V7000 Storage Node is slid into the double high chassis opening and the two locking levers closed, as shown in Figure 7-5.

Figure 7-5 Insertion of storage node into chassis


When the levers are closed and the unit is installed within the Enterprise Chassis, the V7000 Storage Node connects physically and electrically to the chassis midplane, which provides the following connections:
- Power
- Management
- The I/O connections between the storage node Host Interface Cards (HICs) and the I/O modules that are installed within the chassis

7.1.1 V7000 Storage Node types


The V7000 Storage Node is available in two forms: a dual-controller storage node, or a storage expansion node (a JBOD expansion) that is connected by serial-attached SCSI (SAS) cables to the controller storage node. The two units look similar, but have different modules installed within them. Table 7-2 shows the two models and descriptions.
Table 7-2 V7000 Storage Node models

Part number 4939H49; MTM 4939-X49 / 4939-A49 (a): IBM Flex System V7000 Control Enclosure
Part number 4939H29; MTM 4939-X29 / 4939-A29 (a): IBM Flex System V7000 Expansion Enclosure

a. The first Machine Type Model (MTM) number that is listed is for the IBM System x sales channel, and the second MTM is for the Power Systems channel.

The IBM Flex System V7000 Control Enclosure has the following components:
- An enclosure for 24 disks
- Two Controller Modules
- Up to 24 SFF drives
- A battery inside each node canister

Each Control Enclosure supports up to nine Expansion Enclosures that are attached in a single SAS chain. Up to two Expansion Enclosures can be attached to each Control Enclosure within the Enterprise Chassis. Figure 7-6 shows the V7000 Control Enclosure front view.

Figure 7-6 V7000 Control Enclosure front view


The IBM Flex System V7000 Expansion Enclosure has the following components:
- An enclosure for up to 24 disks with two Expansion Modules installed
- Two SAS ports on each Expansion Module

Figure 7-7 shows the front view of the V7000 Expansion Enclosure.

Figure 7-7 V7000 Expansion Enclosure front view

Figure 7-8 shows the layout of the enclosure with the outer and Controller Modules covers removed. The HICs can be seen at the rear of the enclosure, where they connect to the midplane of the Enterprise Chassis.

Figure 7-8 V7000 Storage Enclosure with the covers removed

7.1.2 Controller Modules


The Controller Enclosure has space for two Controller Modules (also known as Node canisters), into which two HICs can be installed.


Figure 7-9 shows the V7000 Storage Node with Controller Modules.

Figure 7-9 V7000 Storage Node with Controller Modules

The parts that are highlighted in Figure 7-9 are described in Table 7-3.

Table 7-3 Part descriptions
1: SAS port, node canister 1
2: Node canister 1
3: Node canister 2
4: SAS port, node canister 2

The Controller Module features the following components:
- One or two HICs that are installed in the rear. The first HIC must always be the two-port 10 Gb Ethernet card (FCoE and iSCSI). The second HIC can be four 2/4/8 Gb FC ports or two 10 Gb Ethernet ports (FCoE or iSCSI).
- One internal 10/100/1000 Mbps Ethernet port for management (no iSCSI)
- One external 6 Gbps SAS port (four lanes); usage is optional
- Two external USB ports (not used for normal operation)
- One battery

Each Controller Module has a single SAS connector for the interconnection of expansion units, along with two USB ports. The USB ports are used when servicing the system. When a USB flash drive is inserted into one of the USB ports on a node canister in a Control Enclosure, the node canister searches for a control file on the USB flash drive and runs the command that is specified in the file.


Figure 7-10 shows the Controller Module front view with the LEDs highlighted.

Figure 7-10 Controller Module front view with LEDs highlighted

Table 7-4 explains the LEDs.


Table 7-4 LED descriptions

1. SAS Port Status (amber)
   Off: There are no faults or conditions detected by the canister on the SAS port or the downstream device that is connected to the port.
   On solid: There is a fault condition that is isolated by the canister on the external SAS port.
   Slow flashing: The port is disabled and does not service SAS traffic.
   Flashing: One or more of the narrow ports of the SAS links on the wide SAS port failed, and the port is not operating as a full wide port.

2. SAS Port Activity (green)
   Off: Power is not present or there is no SAS link connectivity established.
   On solid: There is at least one active SAS link in the wide port and there is no external port activity.
   Flashing: The port activity LED flashes at a rate proportional to the level of SAS port interface activity as determined by the canister. The port also flashes when routing updates or configuration changes are being performed on the port.

3. Canister Fault (amber)
   Off: There are no isolated FRU failures in the canister.
   On solid: Replace the canister.

4. Internal Fault (amber)
   Off: There are no failures that are isolated to the internal components of the canister.
   On solid: Replace the failing HIC.
   Flashing: An internal component is being identified on this canister.

5. Battery in Use (green)
   Off: The battery is not in use.
   Fast flashing: The system is saving cache and system state data to a storage device.

6. Battery Fault (amber)
   Off: No faults are detected with the battery.
   On solid: A fault was detected with the battery.

7. Battery Status (amber)
   Off: The battery is not in a state where it can support a save of cache and system state data.
   On solid: The battery is fully charged and can support a save of cache and system state data.
   Flashing: The battery is charging and can support at least one save of cache and system state data.
   Fast flashing: The battery is charging, but cannot yet support a save of cache and system state data.

8. Power (green)
   Off: There is no power to the canister. Make sure that the CMM powered on the storage node. Try reseating the canister. If the state persists, follow the hardware replacement procedures for the parts in the following order: node canister, then Control Enclosure.
   On solid: The canister is powered on.
   Flashing: The canister is in a powered-down state. Use the CMM to power on the canister.
   Fast flashing: The management controller is communicating with the CMM during the initial insertion of the canister. If the canister remains in this state for more than 10 minutes, try reseating the canister. If the state persists, follow the hardware replacement procedure for the node canister.

9. Canister Status (green)
   Off: The canister is not operational.
   On solid: The canister is active. Do not power off or remove a node canister whose status LED is on solid; you might lose access to data or corrupt volume data. Follow the procedures to shut down a node so that access to data is not compromised.
   Flashing: The canister is in the candidate or service state.

10. Canister Activity (green)
    Off: There is no host I/O activity.
    Flashing: The canister is actively processing input/output (I/O) traffic.

11. Enclosure Fault (amber)
    Off: There are no isolated failures on the storage enclosure.
    On solid: There are one or more isolated failures in the storage enclosure that require service or replacement.

12. Check Log (amber)
    Off: There are no conditions that require the user to log in to the management interface and review the error logs.
    On solid: The system requires the attention of the user through one of the management interfaces. There are multiple reasons that the Check Log LED can be illuminated.

13. Canister or Control Enclosure Identify (blue)
    Off: The canister is not identified by the canister management system.
    On solid: The canister is identified by the canister management system.
    Flashing: Occurs during power-on and power-on self-test (POST) activities.

Figure 7-11 shows a Controller Module with its cover removed. With the cover removed, the HIC can be removed or replaced as needed. Figure 7-11 shows two HICs that are installed in the Controller Modules (1) and the direction of removal of a HIC (2).

Figure 7-11 Controller Module


The battery within the Controller Module contains enough capacity to shut down the node canister twice from fully charged. The batteries do not provide any brownout protection or ride-through timers; when AC power is lost, the node canister shuts down. Ride-through behavior is provided by the Enterprise Chassis. The batteries need only one second of testing every three months, rather than the full discharge and recharge cycle that is needed for the Storwize V7000 batteries. The battery test is performed while the node is online, and only if the other node in the Control Enclosure is online. If the battery fails the test, the node goes offline immediately. The battery is also automatically tested every time that the controller's operating system is powered up.

Special battery shutdown mode: If (and only if) you are shutting down the node canister and are going to remove the battery, you must run the following shutdown command:

satask stopnode poweroff battery

This command puts the battery into a mode where it can safely be removed from the node canister after the power is off. The principal (and probably only) use case for this shutdown is a node canister replacement where you must swap the battery from the old node canister to the new node canister.

Removing the canister without shutdown: If a node canister is removed from the enclosure without shutting it down, the battery keeps the node canister powered while the node canister performs a shutdown.


Figure 7-12 shows the internal V7000 Control Enclosure architecture. The HICs provide the I/O to the I/O module bays, where switches are generally installed. The management network and power are also connected.

Figure 7-12 V7000 Control Enclosure schematic

7.1.3 Expansion Modules


Expansion Modules are installed within the Storage Expansion Enclosure. They are distinct from Controller Modules because they have no USB ports and have two SAS ports. Figure 7-13 shows an Expansion Module.

Figure 7-13 Expansion Module


Table 7-5 explains the meanings of the numbers in Figure 7-13 on page 450.
Table 7-5 Expansion Module LEDs

1. SAS Port 1 Status (amber)
   Off: There are no faults or conditions detected by the expansion canister on the SAS port or the downstream device that is connected to the port.
   On solid: There is a fault condition that is isolated by the expansion canister on the external SAS port.
   Slow flashing: The port is disabled and does not service SAS traffic.
   Flashing: One or more of the narrow ports of the SAS links on the wide SAS port failed, and the port is not operating as a full wide port.

2. SAS Port 1 Activity (green)
   Off: Power is not present or there is no SAS link connectivity established.
   On solid: There is at least one active SAS link in the wide port and there is no external port activity.
   Flashing: The expansion port activity LED flashes at a rate proportional to the level of SAS port interface activity as determined by the expansion canister. The port also flashes when routing updates or configuration changes are being performed on the port.

3. SAS Port 2 Status (amber)
   Same states as LED 1.

4. SAS Port 2 Activity (green)
   Same states as LED 2.

5. Expansion Canister Fault (amber)
   Off: There are no isolated FRU failures on the expansion canister.
   On solid: There are one or more isolated FRU failures in the expansion canister that require service or replacement.

6. Expansion Canister Internal Fault (amber)
   Off: There are no failures that are isolated to the internal components of the expansion canister.
   On solid: An internal component requires service or replacement.
   Flashing: An internal component is being identified on this expansion canister.

7. Power (green)
   Off: There is no power to the expansion canister.
   On solid: The expansion canister is powered on.
   Flashing: The expansion canister is in a powered-down state.
   Fast flashing: The management controller is communicating with the CMM during the initial insertion of the expansion canister.

8. Identify (blue)
   Off: The expansion canister is not identified by the controller management system.
   On solid: The expansion canister is identified by the controller management system.
   Flashing: Occurs during power-on and power-on self-test (POST) activities.

9. Expansion Enclosure Fault (amber)
   Off: There are no isolated failures on the expansion enclosure.
   On solid: There are one or more isolated failures in the expansion enclosure that require service or replacement.

The Expansion Module has two 6 Gbps SAS ports at the front of the unit. Usage of port 1 is mandatory; usage of port 2 is optional. These ports are used to connect to the storage Controller Modules.

Mini SAS ports: The SAS ports on the Flex System V7000 expansion canisters are HD Mini SAS ports. IBM Storwize V7000 canister SAS ports are Mini SAS.

7.1.4 SAS cabling


The V7000 Control Enclosure can be cabled to internal V7000 Expansion Enclosures or to external Storwize V7000 2076 enclosures. A total of nine Expansion Enclosures is supported; they can be Flex System V7000 Expansion Enclosures (internal) or Storwize V7000 Expansion Enclosures (external) if they meet the following criteria:
- Internal Expansion Enclosures can never exceed two and must be in the same Flex System chassis as the controller.
- The total number of Expansion Enclosures cannot exceed nine.
- The left side canister of the Flex System V7000 Control Enclosure must always be cabled to one of the following canisters:
  - The left canister of a Flex System V7000 Expansion Enclosure
  - The top canister of a Storwize V7000 external enclosure
- The right side canister of the Flex System V7000 Control Enclosure must always be cabled to one of the following canisters:
  - The right canister of a Flex System V7000 Expansion Enclosure
  - The bottom canister of a Storwize V7000 external enclosure
- The cabling order must be preserved between the two node canisters. For example, if the enclosures A, B, C, and D are attached to the left node canister in the order A B C D, then they must be attached to the right node canister in the order A B C D.

Storwize V7000 Expansion Enclosures are cabled in the usual manner.
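The cabling rules above lend themselves to a simple planning check. The following Python sketch is a hypothetical helper (not an IBM tool) that validates a proposed expansion chain against those rules:

```python
def validate_sas_chain(enclosures):
    """Check one node canister's expansion chain against the cabling
    rules: at most nine Expansion Enclosures in total, at most two of
    them internal. `enclosures` is an ordered list of "internal" or
    "external" entries, cabled outward from the Control Enclosure."""
    if len(enclosures) > 9:
        return False, "a Control Enclosure supports at most nine Expansion Enclosures"
    if enclosures.count("internal") > 2:
        return False, "at most two internal Expansion Enclosures per Control Enclosure"
    return True, "ok"

def chains_match(left_chain, right_chain):
    """The two node canisters must cable the same enclosures in the
    same order (A B C D on the left implies A B C D on the right)."""
    return left_chain == right_chain

ok, reason = validate_sas_chain(["internal", "internal", "external", "external"])
print(ok, reason)  # True ok
```

Running the check on a chain with three internal enclosures, or more than nine enclosures in total, returns False with the violated rule.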


Figure 7-14 shows an example of the use of both the V7000 internal and external Expansion Enclosures, with one Control Enclosure. The initial connections are made to the internal Expansion Enclosures within the Flex System Chassis. The SAS cables are then chained to the external Expansion Enclosures. The internal management connections also are shown in Figure 7-14.


Figure 7-14 Cables

The cables that are used for linking to the Flex System V7000 Control and Expansion Enclosures are different from the cables that are used to link externally attached enclosures. A pair of internal expansion cables is shipped as standard with the Expansion Unit. The cables for internal connection are of the HD SAS to HD SAS type.


The cables that are used to link an internal Controller or Expansion unit to an externally attached enclosure are of a different type and must be ordered separately. These cables are HD SAS to Mini SAS and are supplied in a package of two. The cable is described in Table 7-6.

Table 7-6 Cable part number and feature code
Part number 90Y7682; feature code ADA6: External Expansion Cable Pack (Dual 6M SAS Cables - HD SAS to Mini SAS)

7.1.5 Host interface cards


HICs are installed within the control canister of the V7000 Control Enclosure. The enclosure can accommodate the following types of HIC:
- 10Gb Ethernet 2-Port host interface card
- 8Gb Fibre Channel 4-port host interface card

If required, the second HIC is selected to match the I/O modules that are installed in the Enterprise Chassis. HIC slot 1 in each node canister connects to I/O modules 1 and 2, and HIC slot 2 in each node canister connects to I/O modules 3 and 4. The host interface card in slot 1 (port 1) is on the left side when you are facing the front of the canister; the host interface card in slot 2 (port 2) is on the right side.

HIC locations: The first HIC location can be populated only by a 10Gb Ethernet HIC; the second location can be populated by a 10Gb Ethernet HIC or an 8Gb Fibre Channel HIC. HICs must be in identical population order on each control canister pair.
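The slot-to-bay wiring and population rules above can be captured in a couple of small helper functions. This is a toy Python sketch with hypothetical function names, useful only as a planning aid:

```python
def hic_to_io_bays(hic_slot):
    """Map a node canister HIC slot to the chassis I/O module bays it
    reaches through the midplane: slot 1 -> bays 1 and 2,
    slot 2 -> bays 3 and 4."""
    mapping = {1: (1, 2), 2: (3, 4)}
    return mapping[hic_slot]

def valid_hic_population(slot1, slot2):
    """Slot 1 accepts only the 10Gb Ethernet HIC; slot 2 accepts a 10Gb
    Ethernet HIC, an 8Gb FC HIC, or may be left empty (None)."""
    return slot1 == "10GbE" and slot2 in ("10GbE", "8GbFC", None)

print(hic_to_io_bays(2))                       # (3, 4)
print(valid_hic_population("10GbE", "8GbFC"))  # True
```

A configuration with an FC card in slot 1, for example, is rejected, which matches the population rule stated in the note above.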

7.1.6 Fibre Channel over Ethernet with a V7000 Storage Node


Integration of compute nodes, networking, and storage is one of the key advantages of the Enterprise Chassis. When it is combined with the CN4093 10Gb Converged Scalable Switch that is installed within the Enterprise Chassis, the IBM Flex System V7000 Storage Node can be directly connected through the midplane in Fibre Channel over Ethernet mode.

With an external Storwize V7000, this function can be delivered by using the EN4093 connected to a converged top-of-rack switch, such as the G8264CS. This configuration breaks out the FC to an existing SAN, such as Cisco or Brocade, to which the Storwize V7000 is then connected.

The CN4093 converged switch acts as a full-fabric FC/FCoE switch for end-to-end FCoE configurations, or as an integrated Fibre Channel Forwarder (FCF) NPV gateway that breaks out FC traffic within the chassis for native Fibre Channel SAN connectivity. The CN4093 offers Ethernet and Fibre Channel ports on the same switch. A number of external ports can be configured as 10 GbE or 4/8 Gb FC ports (Omni Ports), which offers flexible configuration options. For a complete description of the CN4093, see 4.11.5, IBM Flex System EN6131 40Gb Ethernet Switch on page 117.


Consideration: It is not possible to connect to the V7000 Storage Node over the Chassis Midplane in FCoE mode without the use of the CN4093 Converged Scalable Switch. For the latest support matrixes for storage products, see the storage vendor interoperability guides. IBM storage products can be referenced in the System Storage Interoperability Center (SSIC), which are available at this website: http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

7.1.7 V7000 Storage Node drive options


A selection of drives is available in hard disk drive (HDD) and solid-state disk (SSD) formats. The part numbers are shown in Table 7-7.
Table 7-7 V7000 Storage Node drive options

Part number / Feature code / Product name
90Y7642 / AD11 / 500 GB 7.2K 2.5-inch HDD
90Y7642 / AD12 / 1 TB 7.2K 2.5-inch HDD
90Y7647 / AD21 / 300 GB 10K 2.5-inch HDD
90Y7652 / AD23 / 600 GB 10K 2.5-inch HDD
90Y7657 / AD24 / 900 GB 10K 2.5-inch HDD
90Y7662 / AD31 / 146 GB 15K 2.5-inch HDD
90Y7667 / AD32 / 300 GB 15K 2.5-inch HDD
90Y7672 / AD41 / 200 GB 2.5-inch SSD
90Y7676 / AD43 / 400 GB 2.5-inch SSD
90Y7682 / ADA6 / External Expansion Cable Pack (Dual 6M SAS Cables - HD SAS to Mini SAS)
90Y7683 / ADB1 / 10Gb CNA 2 Port Card
90Y7684 / ADB2 / 8Gb FC 4 Port Card

7.1.8 Features and functions


The following functions are available with the IBM Flex System V7000 Storage Node:

Thin provisioning (no license required)
Traditional fully allocated volumes allocate real physical disk capacity for an entire volume, even if that capacity is never used. Thin-provisioned volumes allocate real physical disk capacity only when data is written to the logical volume.

Volume mirroring (no license required)
Provides a single volume image to the attached host systems while maintaining pointers to two copies of data in separate storage pools. Copies can be on separate disk storage systems that are being virtualized. If one copy fails, IBM Flex System V7000 Storage Node provides continuous data access by redirecting I/O to the remaining copy. When the copy becomes available, automatic resynchronization occurs.
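The difference between fully allocated and thin-provisioned volumes can be sketched in a few lines. This is a minimal illustrative model, not the V7000 implementation; the extent size and class names are assumptions.

```python
# Minimal sketch (not the V7000 implementation) of thin provisioning:
# physical extents are consumed only when a logical region is first written.
EXTENT_MB = 256  # assumed extent size for this sketch

class ThinVolume:
    def __init__(self, virtual_size_mb):
        self.virtual_size_mb = virtual_size_mb  # size presented to hosts
        self.allocated_extents = set()          # extents backed by real disk

    def write(self, offset_mb):
        """Writing to a region allocates its extent on first touch."""
        self.allocated_extents.add(offset_mb // EXTENT_MB)

    @property
    def physical_mb(self):
        return len(self.allocated_extents) * EXTENT_MB

vol = ThinVolume(virtual_size_mb=1024 * 100)  # 100 GB presented to hosts
vol.write(0)
vol.write(10)     # lands in the same extent as offset 0
vol.write(4096)   # a different extent
print(vol.physical_mb)  # 512 -> only two extents consume real capacity
```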

Chapter 7. Storage integration

455

FlashCopy (included with the base IBM Flex System V7000 Storage Node license)
Provides a volume-level point-in-time copy function for any storage that is being virtualized by IBM Flex System V7000 Storage Node. This function is designed to create copies for backup, parallel processing, testing, and development, and have the copies available almost immediately. IBM Flex System V7000 Storage Node includes the following FlashCopy functions:

Full/Incremental copy
This function copies only the changes from the source or target data since the last FlashCopy operation, and enables completion of point-in-time online backups more quickly than the use of traditional FlashCopy.

Multitarget FlashCopy
IBM Flex System V7000 Storage Node supports copying of up to 256 target volumes from a single source volume. Each copy is managed by a unique mapping and, in general, each mapping acts independently and is not affected by other mappings that share the source volume.

Cascaded FlashCopy
This function is used to create copies of copies and supports full, incremental, or nocopy operations.

Reverse FlashCopy
This function allows data from an earlier point-in-time copy to be restored with minimal disruption to the host.

FlashCopy nocopy with thin provisioning
This function combines thin-provisioned volumes and FlashCopy to reduce disk space requirements when copies are made. The following variations of this option are available:

Space-efficient source and target with background copy: Copies only the allocated space.

Space-efficient target with no background copy: Copies only the space that is used for changes between the source and target, and is generally referred to as snapshots. This function can be used with multitarget, cascaded, and incremental FlashCopy.

Consistency groups
Consistency groups address the issue where application data is on multiple volumes. By placing the FlashCopy relationships into a consistency group, commands can be issued against all of the volumes in the group. This action enables a consistent point-in-time copy of all of the data, even if it is on physically separate volumes. FlashCopy mappings can be members of a consistency group, or they can be operated in a stand-alone manner, that is, not as part of a consistency group. FlashCopy commands can be issued to a FlashCopy consistency group, which affects all FlashCopy mappings in the consistency group, or to a single FlashCopy mapping if it is not part of a defined FlashCopy consistency group.

Remote Copy feature
Remote Copy is a licensed feature that is based on the number of enclosures that are being used at the smallest configuration location. Remote Copy provides the capability to perform Metro Mirror or Global Mirror operations.
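The value of consistency groups described above (one command captures many volumes at one point in time) can be sketched as follows. The class and method names are hypothetical illustrations, not the V7000 CLI.

```python
# Illustrative sketch (hypothetical names, not the V7000 CLI) of why
# consistency groups matter: one start command acts on every FlashCopy
# mapping in the group, so multi-volume application data is captured
# at a single point in time.
class FlashCopyMapping:
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.started_at = None  # point-in-time timestamp once started

class ConsistencyGroup:
    def __init__(self):
        self.mappings = []

    def add(self, mapping):
        self.mappings.append(mapping)

    def start(self, timestamp):
        # Every member mapping shares the same point in time.
        for m in self.mappings:
            m.started_at = timestamp

group = ConsistencyGroup()
group.add(FlashCopyMapping("db_data", "db_data_copy"))
group.add(FlashCopyMapping("db_log", "db_log_copy"))
group.start(timestamp=1700000000)
print({m.source: m.started_at for m in group.mappings})
```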


Metro Mirror
Provides a synchronous remote mirroring function up to approximately 300 km (186.41 miles) between sites. Because the host I/O completes only after the data is cached at both locations, performance requirements might limit the practical distance. Metro Mirror provides fully synchronized copies at both sites with zero data loss after the initial copy is completed. Metro Mirror can operate between multiple IBM Flex System V7000 Storage Node systems.

Global Mirror
Provides a long-distance asynchronous remote mirroring function up to approximately 8,000 km (4970.97 miles) between sites. With Global Mirror, the host I/O completes locally and the changed data is sent to the remote site later. This function is designed to maintain a consistent recoverable copy of data at the remote site, which lags behind the local site. Global Mirror can operate between multiple IBM Flex System V7000 Storage Node systems.

Data Migration (no charge for temporary usage)
IBM Flex System V7000 Storage Node provides a data migration function that can be used to import external storage systems into the IBM Flex System V7000 Storage Node system. You can use these functions to perform the following actions:

Move volumes nondisruptively onto a newly installed storage system
Move volumes to rebalance a changed workload
Migrate data from other back-end storage to IBM Flex System V7000 Storage Node managed storage

IBM System Storage Easy Tier (no charge)
Provides a mechanism to seamlessly migrate hot spots to the most appropriate tier within the IBM Flex System V7000 Storage Node solution. This migration can be to internal drives within IBM Flex System V7000 Storage Node or to external storage systems that are virtualized by IBM Flex System V7000 Storage Node.

Real Time Compression (RTC)
Provides data compression by using the IBM Random-Access Compression Engine (RACE), which can be performed on a per-volume basis in real time on active primary workloads. RTC can provide as much as a 50% compression rate for data that is not already compressed. This function can reduce the amount of capacity that is needed for storage, which can delay further growth purchases. RTC supports all storage that is attached to the IBM Flex System V7000 Storage Node, whether it is internal, external, or external virtualized storage.

A compression evaluation tool that is called the IBM Comprestimator Utility can be used to determine the value of the use of compression on a specific workload for your environment. The tool is available at this website:
http://ibm.com/support/docview.wss?uid=ssg1S4001012
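The Metro Mirror distance guidance above can be sanity-checked with a back-of-envelope propagation estimate. The sketch assumes a signal speed of roughly 200,000 km/s in optical fiber and ignores switch and protocol overhead, so real added latency per mirrored write is somewhat higher.

```python
# Back-of-envelope estimate of the round-trip delay that synchronous
# (Metro Mirror) replication adds per write. Assumption: ~200,000 km/s
# signal speed in fiber (~200 km per millisecond); switch and protocol
# overhead are ignored.
FIBER_KM_PER_MS = 200.0

def mirror_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a given distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

# At the ~300 km Metro Mirror guideline, every mirrored write waits for
# about 3 ms of propagation alone, which is why distance limits matter
# for synchronous replication.
print(mirror_rtt_ms(300))  # 3.0
```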

7.1.9 Licenses
IBM Flex System V7000 Storage Node requires licenses for the following features:

Enclosure
External Virtualization

Remote Copy (Advanced Copy Services: Metro Mirror/Global Mirror)
RTC

A summary of the licenses is shown in Table 7-8.
Table 7-8   Licenses

License type              Unit                                            License number   License required?
Enclosure                 Base+expansion physical enclosure number        5639-VM1         Yes
External Virtualization   Physical enclosure number of external storage   5639-EV1         Optional add-on feature
Remote Copy               Physical enclosure number                       5639-RM1         Optional add-on feature
Real Time Compression     Physical enclosure number                       5639-CP1         Optional add-on feature
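Table 7-8 can be transcribed into a small lookup to answer the common question of which program numbers a given feature set requires. This is a reference aid built from the table, not an IBM ordering tool.

```python
# Transcription of Table 7-8 into a lookup (program numbers as listed in
# the table; this is a reference aid, not an ordering tool).
V7000_LICENSES = {
    "Enclosure":               {"number": "5639-VM1", "required": True},
    "External Virtualization": {"number": "5639-EV1", "required": False},
    "Remote Copy":             {"number": "5639-RM1", "required": False},
    "Real Time Compression":   {"number": "5639-CP1", "required": False},
}

def licenses_for(features):
    """Return the program numbers needed for a set of optional features."""
    needed = {"5639-VM1"}  # the base Enclosure license is always required
    needed.update(V7000_LICENSES[f]["number"] for f in features)
    return sorted(needed)

print(licenses_for(["Remote Copy"]))  # ['5639-RM1', '5639-VM1']
```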

The following functions do not need a license:

FlashCopy
Volume Mirroring
Thin Provisioning
Volume Migration
Easy Tier

For the latest support matrixes for storage products, see the storage vendor interoperability guides. IBM storage products can be referenced in the System Storage Interoperability Center (SSIC), which is available at this website:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

7.1.10 Configuration restrictions


When a Flex System V7000 control enclosure that is running IBM Flex System V7000 software version 7.1 is configured, the following restrictions apply to combinations of internal (Flex System V7000) and external (Storwize V7000) expansion enclosures:

If no Flex System V7000 expansion enclosures are attached to a Flex System V7000 control enclosure, no more than nine Storwize V7000 expansion enclosures can be attached to that control enclosure.

If one Flex System V7000 expansion enclosure is attached to a Flex System V7000 control enclosure, no more than eight Storwize V7000 expansion enclosures can be attached to that control enclosure.

If two Flex System V7000 expansion enclosures are attached to a Flex System V7000 control enclosure, no more than seven Storwize V7000 expansion enclosures can be attached to that control enclosure.

No more than two Flex System V7000 expansion enclosures can be attached to the same Flex System V7000 control enclosure.

Chassis Management Module requirements: For redundancy, two CMMs must be installed in the chassis when a V7000 Storage Node is installed.
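The enclosure-mix rules above reduce to a pair of constraints: no more than two internal expansion enclosures, and no more than nine expansion enclosures in total. A hypothetical validator (not IBM-provided) might check a proposed configuration like this:

```python
# Hypothetical check of the version 7.1 enclosure-mix rules: at most two
# internal (Flex System V7000) expansion enclosures, and internal plus
# external (Storwize V7000) expansion enclosures may not exceed nine.
def expansion_mix_is_valid(internal: int, external: int) -> bool:
    if internal < 0 or external < 0:
        return False
    return internal <= 2 and internal + external <= 9

print(expansion_mix_is_valid(1, 8))  # True  (one internal allows up to eight external)
print(expansion_mix_is_valid(2, 8))  # False (would exceed nine in total)
```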


For more information and requirements for the management of Flex System V7000 Storage Node, Storwize V7000, and SAN Volume Controller by IBM Flex System Manager, see V7.1 Configuration Limits and Restrictions for IBM Flex System V7000, S1004369, which is available at this website: http://ibm.com/support/docview.wss?uid=ssg1S1004369

7.2 External storage


The following options are available for attaching external storage systems to the Enterprise Chassis:

SANs that are based on Fibre Channel (FC) technologies
SANs that are based on iSCSI
Converged networks that are based on 10 Gb Converged Enhanced Ethernet (CEE)

Traditionally, FC-based SANs are the most common and advanced design of external storage infrastructure. They provide high levels of performance, availability, redundancy, and scalability. However, the cost of implementing FC SANs is higher when compared with CEE or iSCSI. Almost every FC SAN includes the following major components:

Host bus adapters (HBAs)
FC switches
FC storage servers
FC tape devices
Optical cables for connecting these devices to each other

iSCSI-based SANs provide all the benefits of centralized shared storage in terms of storage consolidation and adequate levels of performance. However, they use traditional IP-based Ethernet networks instead of expensive optical cabling. iSCSI SANs consist of the following components:

Server hardware
iSCSI adapters or software iSCSI initiators
Traditional network components, such as switches and routers
Storage servers with an iSCSI interface, such as IBM System Storage DS3500 or IBM N series

Converged networks can carry SAN and LAN types of traffic over the same physical infrastructure. You can use consolidation to decrease costs and increase efficiency in building, maintaining, operating, and managing the networking infrastructure. iSCSI, FC-based SANs, and converged networks can be used for diskless solutions to provide greater levels of usage, availability, and cost effectiveness.

The following IBM storage products are supported by the Enterprise Chassis and are described later in this section:

IBM Storwize V7000
IBM XIV Storage System series
IBM System Storage DS8000 series
IBM System Storage DS5000 series
IBM System Storage V3700
IBM System Storage DS3500 series
IBM FlashSystem 820 and 720
IBM System Storage N series
IBM System Storage TS3500 Tape Library

IBM System Storage TS3310 Tape Library
IBM System Storage TS3100 Tape Library

The System Storage Interoperability Center (SSIC) provides information that relates to end-to-end support of IBM storage when it is connected to IBM Flex System. The SSIC website allows the selection of many items of an end-to-end solution, for example:

Storage family and model
Storage code version
Connection protocol, such as FCoE or FC
Flex System node model type
I/O adapter type, such as a specific HBA or LOM
Flex System switches, transit switches, and top-of-rack switches

For the latest support matrixes for storage products, see the storage vendor interoperability guides. IBM storage products can be referenced in the System Storage Interoperability Center (SSIC), which is available at this website:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

Although the SSIC details support for IBM storage that is attached to an Enterprise Chassis, it does not necessarily follow that the Flex System Manager fully supports and manages the attached storage or allows all tasks to be completed with that external storage. FSM supports the following storage devices:

Flex System V7000 Storage Node
IBM Storwize V7000
IBM Storwize V3700
IBM Storwize V3500
IBM SAN Volume Controller
IBM System Storage DS8000
IBM XIV Storage System Gen3

Listings of the storage subsystems and the tasks that are supported can be found in the IBM Flex System Information Center at this website:
https://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp?topic=%2Fcom.ibm.acc.8731.doc%2Ftask_support_for_storage_products_2013.html

7.2.1 IBM Storwize V7000


IBM Storwize V7000 is an innovative storage offering that delivers essential storage efficiency technologies and exceptional ease of use and performance. It is integrated into a compact, modular design. Scalable solutions require highly flexible systems. In a truly virtualized environment, you need virtualized storage. All Storwize V7000 storage is virtualized.

The Storwize V7000 offers the following features:

Enables rapid, flexible provisioning and simple configuration changes
Enables nondisruptive movement of data among tiers of storage, including IBM Easy Tier
Enables data placement optimization to improve performance


The most important aspect of the Storwize V7000 and its use with the IBM Flex System Enterprise Chassis is that Storwize V7000 can virtualize external storage. In addition, Storwize V7000 has the following features:

Capacity from existing storage systems becomes part of the IBM storage system
Single user interface to manage all storage, regardless of vendor
Designed to significantly improve productivity
Virtualized storage inherits all the rich base system functions, including IBM FlashCopy, Easy Tier, and thin provisioning
Moves data transparently between external storage and the IBM storage system
Extends life and enhances value of existing storage assets

Storwize V7000 offers thin provisioning, FlashCopy, Easy Tier, performance management, and optimization. External virtualization allows for rapid data center integration into existing IT infrastructures. The Metro/Global Mirroring option provides support for multi-site recovery. Figure 7-15 shows the IBM Storwize V7000.

Figure 7-15 IBM Storwize V7000


The levels of integration of Storwize V7000 with IBM Flex System provide the following features:

Starting Level: IBM Flex System single point of management
Higher Level: Data center management with IBM Flex System Manager Storage Control
Detailed Level: Data management with the Storwize V7000 Storage User GUI
Upgrade Level: Data center productivity with Tivoli Storage Productivity Center for Replication and Storage Productivity Center

IBM Storwize V7000 provides a number of configuration options that simplify the implementation process. It also provides automated wizards, called directed maintenance procedures (DMP), to help resolve any events. IBM Storwize V7000 is a clustered, scalable, midrange storage system and an external virtualization device.

IBM Storwize V7000 Unified is the latest release of the product family. This virtualized storage system is designed to consolidate block and file workloads into a single storage system. This consolidation provides simplicity of management, reduced cost, highly scalable capacity, performance, and high availability. IBM Storwize V7000 Unified Storage also offers improved efficiency and flexibility through built-in SSD optimization, thin provisioning, and nondisruptive migration of data from existing storage. The system can virtualize and reuse existing disk systems, which provides a greater potential return on investment. For more information about IBM Storwize V7000, see this website:
http://www.ibm.com/systems/storage/disk/storwize_v7000/overview.html

7.2.2 IBM XIV Storage System series


The IBM XIV Storage System is a proven, high-end disk storage series that is designed to address storage challenges across the application spectrum. It addresses challenges in virtualization, email, database, analytics, and data protection solutions. The XIV series delivers consistent high performance and high reliability at tier 2 costs for even the most demanding workloads. It uses massive parallelism to allocate system resources evenly at all times, and can scale seamlessly without manual tuning. Its virtualized design and customer-acclaimed ease of management dramatically reduce administrative costs and bring optimization to virtualized server and cloud environments.

The XIV Storage System series has the following key features:

A revolutionary high-end disk system for UNIX and Intel processor-based environments that is designed to reduce the complexity of storage management
Provides even and consistent performance for a broad array of applications; no tuning is required
XIV Gen3 is suitable for demanding workloads
Scales up to 360 TB of physical capacity, 161 TB of usable capacity
Thousands of instantaneous and highly space-efficient snapshots enable point-in-time copies of data
Built-in thin provisioning can help reduce direct and indirect costs


Synchronous and asynchronous remote mirroring provides protection against primary site outages, disasters, and site failures
Offers FC and iSCSI attachment for flexibility in server connectivity

For more information about the XIV, see this website:
http://www.ibm.com/systems/storage/disk/xiv/index.html

7.2.3 IBM System Storage DS8000 series


The IBM System Storage DS8000 series helps users maintain control of their storage environments so they can focus on using timely data to grow their businesses. Quick and reliable data access is the driving force behind real-time business analytics, and this flagship IBM disk system sets the standard for what an enterprise disk system should be, with extraordinary performance, reliability, and security. Its scalable capacity ranges from 5 TB with the entry-level Business Class option to more than 2 PB for a full system.

The DS8870 includes the following key features:

IBM Power-based symmetric multiprocessing (SMP) controllers
Host adapters: 4- and 8-port 8 Gbps FC/IBM FICON, scalable from 2 to 32 adapters and up to 128 ports
Drive adapters: Up to 16 4-port, 8 Gbps Fibre Channel adapters
Drive options: SSDs; enterprise 15k rpm and 10k rpm drives; 7.2k rpm nearline drives; scalable from 16 to 1,536 drives
All-flash option for ultra-fast data access
Processor memory for cache and nonvolatile storage, scalable from 16 GB to more than 1 TB
System capacity scalable from 1 TB to more than 2 PB of physical capacity

For more information about the DS8000 series, see this website:
http://www.ibm.com/systems/storage/disk/ds8000/

7.2.4 IBM System Storage DS5000 series


The IBM System Storage DS5000 series is designed to meet the demanding open-systems requirements of today and tomorrow while establishing a new standard for lifecycle longevity. Building on many decades of design expertise, the DS5000 storage system architecture delivers industry-leading performance, real reliability, multidimensional scalability, and unprecedented investment protection. DS5000 supports the IBM AIX and IBM Power Systems T10-PI Data Integrity Initiative.

The DS5000 series has the following key features:

Efficient, compact 4U packaging that is designed for 19-inch racks
Pay-as-you-grow design that supports nondisruptive growth
Easy-to-use, easy-to-configure management interface that is able to manage IBM System Storage DS3000, IBM System Storage DS4000, and IBM System Storage DS5000 series storage systems
Concurrent hardware upgrades and firmware loads that support a high-availability design
Connectivity to IBM SAN switches, directors, and routers


Heterogeneous support for the most common operating systems, including Microsoft Windows, UNIX, Linux, and Apple Macintosh

For more information about the DS5000 series, see this website:
http://www.ibm.com/systems/storage/disk/ds5000/

7.2.5 IBM System Storage V3700


IBM Storwize V3700, the entry-level system of the Storwize family, delivers efficient, entry-level configurations that are specifically designed to meet the needs of small and midsize businesses. Designed to provide organizations with the ability to consolidate and share data at an affordable price, Storwize V3700 offers the following advanced software capabilities that are usually found in more expensive systems:

Easily manage and deploy the system by using the embedded graphical user interface that is based on the IBM Storwize interface design
Experience rapid, flexible provisioning and simple configuration changes with internal virtualization and thin provisioning
Have continuous access to data with integrated nondisruptive migration
Protect data with sophisticated remote mirroring and integrated IBM FlashCopy technology
Optimize costs for mixed workloads, with up to three times performance improvement with only five percent flash memory capacity by using IBM System Storage Easy Tier
Benefit from advanced functionality and reliability usually found only in more expensive systems
Scale up to 120 2.5-inch disk drives or 60 3.5-inch disk drives with four expansion units
Provide host attachment through 6 Gb SAS and 1 Gb iSCSI ports (standard)
Help reduce power consumption with energy-saving features

7.2.6 IBM System Storage DS3500 series


IBM combines best-of-type development with leading host interface and drive technology in the IBM System Storage DS3500 Express. With next-generation 6 Gbps SAS back-end and host technology, you have a seamless path to consolidated and efficient storage. This configuration improves performance, flexibility, scalability, and data security, and delivers ultra-low power consumption without sacrificing simplicity, affordability, or availability.

DS3500 includes the following features:

Next-generation 6 Gbps SAS systems that deliver midrange performance and scalability at an entry-level price
Data consolidation to ensure data availability and efficiencies across the organization
Energy-saving implementations for cost savings today and tomorrow; DS3500 Express meets the Network Equipment Building System (NEBS) telecommunications specification, which requires robust abilities and support for -48 V DC power supplies
Management expertise that is built into intuitive and powerful storage management software
Investment protection and cost-effective backup and recovery with support for 16 Remote Mirrors over Fibre Channel connections and 32 Global Mirrors across IP or Fibre Channel host ports

Mixed host interface support that enables DAS and SAN tiering, which reduces overall operation and acquisition costs
Relentless data security with local key management of full disk encryption drives
Drive and expansion enclosure intermix that cost-effectively meets all application, rack, and energy-efficiency requirements
Support for SSDs, high-performance SAS drives, nearline SAS drives, and self-encrypting disk (SED) drives
IBM System Storage DS Storage Manager software
Optional premium features that deliver enhanced capabilities for the DS3500 system

For more information about the DS3500, see this website:
http://www.ibm.com/systems/storage/disk/ds3500

7.2.7 IBM network-attached storage products


IBM network-attached storage (NAS) products provide a wide range of network attachment capabilities to a broad range of host and client systems. Offerings range from the entry-level N3000 Express series through the midrange System Storage N6000 series to the enterprise N7000 series. These solutions use Data ONTAP, a scalable and flexible operating system that provides the following features:

More efficient use of your storage resources
High system availability to meet internal and external service level agreements
Reduced storage management complexity and associated storage IT costs
A single, scalable platform that can simultaneously support NAS, iSCSI, and FC SAN deployments
Integrated application manageability for SAP, Microsoft Exchange, Microsoft SharePoint, Oracle, and more

Data ONTAP enables you to store more data in less disk space with integrated data deduplication and thin provisioning. FlexVol technology ensures that you use your storage systems at maximum efficiency, which minimizes your hardware investments. Not only can you reduce the amount of physical storage, you can also see significant savings in power, cooling, and data center space costs. For more information about the IBM N series, see this website:
http://www.ibm.com/systems/storage/network/

7.2.8 IBM FlashSystem


Businesses are moving to all-flash systems to boost critical application performance, gain efficiencies, and strategically deploy resources for data management. IBM leads the industry with flash optimization in storage, systems, and software. With the announcement of the IBM FlashSystem family, IBM offers the most comprehensive flash portfolio to help your business compete, innovate, and grow.

IBM FlashSystem 820 and IBM FlashSystem 720 are designed to speed up the performance of multiple enterprise-class applications, including OLTP and OLAP databases, virtual desktop infrastructures, technical computing applications, and cloud-scale infrastructures.

These IBM systems deliver extreme performance per gigabyte, so organizations can quickly uncover business insights by using traditional data analytics and new big data technologies. In addition, FlashSystem 820 and FlashSystem 720 eliminate storage bottlenecks with IBM MicroLatency (that is, less than 100-microsecond access times) to enable faster decision making. With these low latencies, the storage layer can operate at speeds that are comparable to those of the CPUs, DRAM, networks, and buses in the I/O data path.

IBM FlashSystem can be connected to the Flex System chassis; consult the SSIC for supported configurations. For more information about the IBM FlashSystem offerings, see this website:
http://www.ibm.com/systems/storage/flash/720-820/

7.2.9 IBM System Storage TS3500 Tape Library


The IBM System Storage TS3500 Tape Library is designed to provide a highly scalable, automated tape library for mainframe and open systems backup and archive. This system can scale from midrange to enterprise environments. The TS3500 Tape Library continues to lead the industry in tape drive integration with the following features:

Massive scalability of cartridges and drives with the shuttle connector
Maximized sharing of library resources with IBM Multipath architecture
Ability to dynamically partition cartridge slots and drives with the advanced library management system
Maximum availability with path failover features
Support for multiple simultaneous, heterogeneous server attachments
Remote reporting of status by using Simple Network Management Protocol (SNMP)
Preservation of tape drive names during storage area network changes
Built-in diagnostic drive and media exception reporting
Simultaneous support for TS1130, TS1140, and LTO Ultrium 6, 5, and 4 tape drive encryption
Remote management through a web browser
One base frame and up to 15 expansion frames per library; up to 15 libraries interconnected per complex
Up to 12 drives per frame (up to 192 per library, up to 2,700 per complex)
Up to 224 I/O slots (16 I/O slots standard)
IBM 3592 write-once-read-many (WORM) cartridges or LTO Ultrium 6, 5, and 4 cartridges
Up to 125 PB compressed with LTO Ultrium 6 cartridges per library, up to 1.875 EB compressed per complex
Up to 180 PB compressed with 3592 extended capacity cartridges per library, up to 2.7 EB compressed per complex
LTO Fibre Channel interface for server attachment

For more information about the TS3500, see this website:
http://www.ibm.com/systems/storage/tape/ts3500


7.2.10 IBM System Storage TS3310 series


If you have rapidly growing data backup needs and limited physical space for a tape library, the IBM System Storage TS3310 offers simple, rapid expansion as your processing needs grow. You can start with a single library that is five EIA rack units (5U) tall. As your need for tape backup expands, you can add expansion modules (9U each), each of which contains space for more cartridges, tape drives, and a redundant power supply. The entire system grows vertically. Currently available configurations include the 5U base library module alone, or a 5U base with up to four 9U expansion modules.

The TS3310 includes the following features:

Provides a modular, scalable tape library that is designed to grow as your needs grow
Features desktop, desk-side, and rack-mounted configurations
Delivers optimal data storage efficiency with high cartridge density that uses standard or write-once-read-many (WORM) Linear Tape-Open (LTO) Ultrium data cartridges
Can simplify user access to data that is stored on LTO Ultrium 6 and 5 cartridges through the use of IBM Linear Tape File System software
Doubles the compressed cartridge capacity and provides over 40 percent better performance compared to fifth-generation LTO Ultrium drives

For more information about the TS3310, see this website:
http://www.ibm.com/systems/storage/tape/ts3310
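The vertical growth described above is easy to quantify: a 5U base plus up to four 9U expansion modules. The sketch below (assuming a standard 42U rack; this is a back-of-envelope aid, not an IBM sizing tool) confirms that a maximum stack still fits:

```python
# Back-of-envelope rack-height check for the TS3310: one 5U base module
# plus up to four 9U expansion modules. Assumption: a standard 42U rack.
BASE_U = 5
EXPANSION_U = 9

def ts3310_height_u(expansion_modules: int) -> int:
    """Total rack units for a base module plus N expansion modules."""
    if not 0 <= expansion_modules <= 4:
        raise ValueError("TS3310 supports at most four 9U expansion modules")
    return BASE_U + EXPANSION_U * expansion_modules

print(ts3310_height_u(4))  # 41 -> a full stack still fits in a 42U rack
```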

7.2.11 IBM System Storage TS3200 Tape Library


The TS3200 and its storage management applications are designed to address capacity, performance, data protection, reliability, affordability, and application requirements. It is designed as a functionally rich, high-capacity, entry-level tape storage solution that incorporates the latest LTO Ultrium tape technology. The TS3200 is an excellent solution for large-capacity or high-performance tape backup with or without random access, and is an excellent choice for tape automation for Flex System.

The TS3200 includes the following features:

Designed to support LTO Ultrium 6, 5, or 4 tape drives for increased capacity and performance, including Low Voltage Differential (LVD) SCSI, FC, and SAS attachments
Designed to offer outstanding capacity, performance, and reliability in a 4U form factor with 48 data cartridge slots and a mail slot for midrange storage environments
Designed to support cost-effective backup, save, restore, and archival storage in sequential or random-access mode with a standard bar code reader
Remote library management through a standard web interface that is designed to offer flexibility and greater administrative control of storage operations
8 Gb Fibre Channel or 6 Gb SAS interfaces; LVD SCSI drive, 4 Gb Fibre Channel, and 3 Gb SAS interfaces in the LTO Ultrium 4 full-height drive
Use of up to four LTO Ultrium half-height tape drives
Stand-alone or rack-mount option

For more information about the TS3200 tape unit, see this website:
http://www.ibm.com/systems/storage/tape/ts3200/


7.2.12 IBM System Storage TS3100 Tape Library


The TS3100 is well suited to handle the backup, restore, and archive data-storage needs of small to midsize environments. With the use of one LTO full-height tape drive or up to two LTO half-height tape drives, and with a 24-cartridge capacity, the TS3100 uses LTO technology to cost-effectively handle growing storage requirements. The TS3100 is configured with two removable cartridge magazines: one on the left side (12 data cartridge slots) and one on the right (12 data cartridge slots). Additionally, the left magazine includes a single mail slot to help support continuous library operation while importing and exporting media. A bar code reader is standard in the library and supports the library's operation in sequential or random-access mode.

The TS3100 includes the following features:

Designed to support LTO Ultrium 6, 5, or 4 tape drives for increased capacity and performance, including LVD SCSI, FC, and SAS attachments
Designed to offer outstanding capacity, performance, and reliability in a 2U form factor with 24 data cartridge slots and a mail slot for midrange storage environments
Designed to support cost-effective backup, save, restore, and archival storage in sequential or random-access mode with a standard bar code reader
Remote library management through a standard web interface that is designed to offer flexibility and greater administrative control of storage operations
8 Gb Fibre Channel or 6 Gb SAS interfaces; LVD SCSI drive, 4 Gb Fibre Channel, and 3 Gb SAS interfaces in the LTO Ultrium 4 full-height drive
Use of up to two LTO Ultrium half-height tape drives
Stand-alone or rack-mount option

For more information about the TS3100, see this website:
http://www.ibm.com/systems/storage/tape/ts3100

7.3 Fibre Channel


FC is a proven and reliable network for storage interconnect. The IBM Flex System Enterprise Chassis FC portfolio offers various choices to meet your needs and interoperate with existing SAN infrastructure.

7.3.1 FC requirements
In general, if the Enterprise Chassis is integrated into an FC storage fabric, ensure that the following requirements are met. Check the compatibility guides from your storage system vendor for confirmation:
- Enterprise Chassis server hardware and HBA are supported by the storage system. See the IBM System Storage Interoperation Center (SSIC) or the third-party storage system vendor's support matrixes for this information.
- The FC fabric that is used or proposed for use is supported by the storage system.
- The operating systems that are deployed are supported by IBM server technologies and the storage system.

468

IBM PureFlex System and IBM Flex System Products and Technology

- Multipath drivers exist and are supported by the operating system and storage system (if you plan for redundancy).
- Clustering software is supported by the storage system (if you plan to implement clustering technologies).

If any of these requirements are not met, consider another solution that is supported. Almost every vendor of storage systems or storage fabrics has extensive compatibility matrixes that include supported HBAs, SAN switches, and operating systems. For more information about IBM System Storage compatibility, see the IBM System Storage Interoperation Center at this website:
http://www.ibm.com/systems/support/storage/config/ssic

7.3.2 FC switch selection and fabric interoperability rules


IBM Flex System Enterprise Chassis provides integrated FC switching functions by using the following switch options:
- IBM Flex System FC3171 8Gb SAN Switch
- IBM Flex System FC3171 8Gb SAN Pass-thru
- IBM Flex System FC5022 16Gb SAN Scalable Switch

Considerations for the FC5022 16Gb SAN Scalable Switch


The module can function in Fabric OS Native mode or Brocade Access Gateway mode. The switch ships with Fabric OS mode as the default. The mode can be changed by using OS commands or web tools. Access Gateway simplifies SAN deployment by using N_Port ID Virtualization (NPIV). NPIV provides FC switch functions that improve switch scalability, manageability, and interoperability. The default configuration for Access Gateway is that all N_Ports have failover and failback enabled. In Access Gateway mode, the external ports can be N_Ports, and the internal ports (1-28) can be F_Ports, as shown in Table 7-9.
Table 7-9 Default configuration

F_Port(s)   N_Port       F_Port   N_Port
1, 21       0            11       38
2, 22       29           12       39
3, 23       30           13       40
4, 24       31           14       41
5, 25       32           15       42
6, 26       33           16       43
7, 27       34           17       44
8, 28       35           18       45
9           36           19       46
10          37           20       47


For more information, see the Brocade Access Gateway Administrator's Guide.

Considerations for the FC3171 8Gb SAN Pass-thru and FC3171 8Gb SAN Switch
These I/O modules provide seamless integration of the IBM Flex System Enterprise Chassis into an existing Fibre Channel fabric. They avoid any multivendor interoperability issues by using NPIV technology. All ports are licensed on both of these switches (there are no port licensing requirements). The I/O module has 14 internal ports and six external ports that are presented at the rear of the chassis.

Attention: If you need Full Fabric capabilities at any time in the future, purchase the Full Fabric Switch Module (FC3171 8Gb SAN Switch) instead of the Pass-thru module (FC3171 8Gb SAN Pass-thru). The Pass-thru module can never be upgraded.

You can reconfigure the FC3171 8Gb SAN Switch to become a Pass-thru module by using the switch GUI or command-line interface (CLI). The module can be converted back to a full-function SAN switch at any time. The switch requires a reset when you turn transparent mode on or off. Operating in pass-through mode adds ports to the fabric, not Domain IDs as switches do. This process is not apparent to the switches in the fabric.

This section describes how the NPIV concept works for the Intelligent Pass-thru Module (and the Brocade Access Gateway). The following basic types of ports are used in Fibre Channel fabrics:
- N_Ports (node ports) represent an end-point FC device (such as a host, storage system, or tape drive) that is connected to the FC fabric.
- F_Ports (fabric ports) are used to connect N_Ports to the FC switch (that is, the host HBA's N_Port is connected to an F_Port on the switch).
- E_Ports (expansion ports) provide interswitch connections. If you must connect one switch to another, E_Ports are used. The E_Port on one switch is connected to the E_Port on another switch.

When one switch is connected to another switch in the existing FC fabric, it uses the Domain ID to uniquely identify itself in the SAN (like a switch address).
Because every switch in the fabric requires a Domain ID and this ID must be unique in the SAN, the number of switches and number of ports is limited, which in turn limits SAN scalability. For example, QLogic theoretically supports up to 239 switches, and McDATA supports up to 31 switches. Another concern with E_Ports is interoperability issues between switches from different vendors. In many cases, only the so-called interoperability mode can be used in these fabrics, thus disabling most of the vendors' advanced features. Each switch also requires some management tasks to be performed on it. Therefore, an increased number of switches increases the complexity of the management solution, especially in heterogeneous SANs that consist of multivendor fabrics. NPIV technology helps to address these issues.


Initially, NPIV technology was used in virtualization environments to share one HBA with multiple virtual machines, and to assign unique port IDs to each of them. You can use this configuration to separate traffic between virtual machines (VMs), and manage VMs in the same way as physical hosts: by zoning the fabric or partitioning storage. For example, if NPIV is not used, every virtual machine shares one HBA with one worldwide name (WWN). This restriction means that you cannot separate traffic between these systems and isolate logical unit numbers (LUNs) because all of them use the same ID. In contrast, when NPIV is used, every VM has its own port ID, and these port IDs are treated as N_Ports by the FC fabric. You can perform storage partitioning or zoning based on the port ID of the VM. The switch that the virtualized HBAs are connected to must support NPIV as well. For more information, see the documentation that comes with the FC switch.

The IBM Flex System FC3171 8Gb SAN Switch in pass-through mode, the IBM Flex System FC3171 8Gb SAN Pass-thru, and the Brocade Access Gateway use the NPIV technique. The technique presents the nodes' port IDs as N_Ports to the external fabric switches. This process eliminates the need for E_Port connections between the Enterprise Chassis and external switches. In this way, all 14 internal node FC ports are multiplexed and distributed across the external FC links and presented to the external fabric as N_Ports. This configuration means that external switches that are connected to a chassis that is configured for Fibre pass-through do not see the pass-through module. Instead, they see only N_Ports connected to their F_Ports. This configuration can help to achieve a higher port count for better scalability without the use of Domain IDs, and avoids multivendor interoperability issues. However, modules that operate in pass-through mode cannot be directly attached to the storage system. They must be attached to an external NPIV-capable FC switch.
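The multiplexing described above can be sketched in a few lines. This is an illustrative model only, not switch firmware logic; the port names and the round-robin distribution policy are assumptions for illustration (the six-uplink count matches the module's six external ports):

```python
# Illustrative model of NPIV pass-through: each internal node FC port keeps
# its own identity (its own N_Port ID on the external fabric), while the
# module distributes the 14 internal ports across the external links.
# The round-robin policy and port names here are assumptions.

def distribute_node_ports(internal_ports, external_links):
    """Map each internal node FC port to an external uplink, round-robin."""
    mapping = {}
    for i, port in enumerate(internal_ports):
        mapping[port] = external_links[i % len(external_links)]
    return mapping

internal = [f"node{n}-fc0" for n in range(1, 15)]   # 14 internal node ports
uplinks = [f"ext{n}" for n in range(1, 7)]          # 6 external links

mapping = distribute_node_ports(internal, uplinks)

# Every internal port remains visible to the fabric as a separate N_Port;
# the module itself consumes no Domain ID.
assert len(mapping) == 14
print(mapping["node1-fc0"], mapping["node7-fc0"])   # ext1 ext1
```

Note how ports node1 and node7 share uplink ext1: the external switch sees two N_Ports logged in through one of its F_Ports, which is exactly what NPIV permits.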
For more information, see the switch documentation about NPIV support. Select a SAN module that can provide the required functionality with seamless integration into the existing storage infrastructure, as shown in Table 7-10. There are no strict rules to follow during integration planning. However, several considerations must be taken into account.
Table 7-10 SAN module feature comparison and interoperability

Columns: (1) FC5022 16Gb SAN Scalable Switch; (2) FC3171 8Gb SAN Switch; (3) FC5022 16Gb SAN Scalable Switch in Brocade Access Gateway mode; (4) FC3171 8Gb SAN Pass-thru (and FC3171 8Gb SAN Switch in pass-through mode)

Basic FC connectivity:
- FC-SW-2 interoperability: (1) Yes(a); (2) Yes; (3) Not applicable; (4) Not applicable
- Zoning: (1) Yes; (2) Yes; (3) Not applicable; (4) Not applicable
- Maximum number of Domain IDs: (1) 239; (2) 239; (3) Not applicable; (4) Not applicable

Advanced FC connectivity:
- Port aggregation: (1) Yes; (2) No; (3) Not applicable; (4) Not applicable
- Advanced fabric security: (1) Yes; (2) Yes; (3) Not applicable; (4) Not applicable

Interoperability (existing fabric):
- Brocade fabric interoperability: (1) Yes; (2) No(b); (3) Yes; (4) Yes
- QLogic fabric interoperability: (1) No; (2) Yes; (3) Yes; (4) Yes
- Cisco fabric interoperability: (1) No; (2) No; (3) Yes; (4) Yes


a. Indicates that a feature is supported without any restrictions for existing fabric, but with restrictions for added fabric, and vice versa. b. Does not necessarily mean that a feature is not supported. Instead, it means that severe restrictions apply to the existing fabric. Some functions of the existing fabric potentially must be disabled (if used).

Almost all switches support interoperability standards, which means that almost any switch can be integrated into an existing fabric by using interoperability mode. Interoperability mode is a special mode that is used for integration of different vendors' FC fabrics into one. However, only standards-based functionality is available in the interoperability mode. Advanced features of a storage fabric vendor might not be available. Brocade, McDATA, and Cisco have interoperability modes on their fabric switches. Check the compatibility matrixes for a list of supported and unsupported features in the interoperability mode. Table 7-10 on page 471 provides a high-level overview of standard and advanced functions available for particular Enterprise Chassis SAN switches. It lists how these switches might be used for designing new storage networks or integrating with existing storage networks.

Remember: Advanced (proprietary) FC connectivity features from different vendors might be incompatible with each other, even those features that provide almost the same function. For example, both Brocade and Cisco support port aggregation. However, Brocade uses ISL trunking and Cisco uses PortChannels, and they are incompatible with each other.

For example, if you integrate the FC3052 2-port 8Gb FC Adapter (Brocade) into a QLogic fabric, you cannot use Brocade proprietary features such as ISL trunking. However, the QLogic fabric does not lose functionality. Conversely, if you integrate a QLogic fabric into an existing Brocade fabric, placing all Brocade switches in interoperability mode loses Advanced Fabric Services functions. If you plan to integrate Enterprise Chassis into an FC fabric that is not listed here, QLogic might be a good choice. However, this configuration is possible with interoperability mode only, so extended functions are not supported. A better way is to use the FC3171 8Gb SAN Pass-thru or Brocade Access Gateway.
Switch selection and interoperability have the following rules:
- The FC3171 8Gb SAN Switch is used when the Enterprise Chassis is integrated into an existing QLogic fabric, or when basic FC functionality is required; that is, one Enterprise Chassis with a direct-connected storage server.
- The FC5022 16Gb SAN Scalable Switch is used when the Enterprise Chassis is integrated into an existing Brocade fabric, or when advanced FC connectivity is required. You might use this switch when several Enterprise Chassis are connected to high-performance storage systems. If you plan to use advanced features such as ISL trunking, you might need to acquire specific licenses for these features.

Tip: The use of an FC storage fabric from the same vendor often avoids possible operational, management, and troubleshooting issues.

If the Enterprise Chassis is attached to a non-IBM storage system, support is provided by the storage system's vendor. Even if non-IBM storage is listed on IBM ServerProven, it means only that the configuration was tested. It does not mean that IBM provides support for it. See the vendor compatibility information for supported configurations.


For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at this website: http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

7.4 FCoE
One common way to reduce administration costs is by converging technologies that are implemented on separate infrastructures. FCoE removes the need for separate Ethernet and FC HBAs on the servers. Instead, a Converged Network Adapter (CNA) is installed in the server. Although IBM does not mandate the use of FCoE, the choice of using separate Ethernet and SAN switches inside the chassis or choosing a converged FCoE solution is left up to the client. IBM Flex System offers both connectivity solutions.

A CNA presents what appears to be both a NIC and an HBA to the operating system, but the output from the node is 10 Gb Ethernet. The adapter can be the integrated 10Gb LOM with the FCoE upgrade applied, or a 10Gb converged adapter such as the CN4054 10Gb Virtual Fabric Adapter or CN4058 8-port 10Gb Converged Adapter that includes FCoE. The CNA is then connected through the chassis midplane to an internal switch that passes these FCoE packets onwards to an external switch that contains a Fibre Channel Forwarder (FCF), where the FC traffic is broken out (such as with the EN4093R). Alternatively, a switch that is integrated inside the chassis can include the FC Forwarder. Such a switch is the CN4093 10Gb Converged Scalable Switch, which can break out FC and Ethernet at the rear of the Flex System chassis. The CN4093 10Gb Converged Scalable Switch has external Omni ports that can be configured as FC or Ethernet ports.

This section lists FCoE support. Table 7-11 on page 474 lists FCoE support that uses FC targets. Table 7-12 on page 474 lists FCoE support that uses native FCoE targets (that is, end-to-end FCoE).

Tip: Use these tables only as a starting point. Configuration support must be verified through the IBM SSIC website:
http://ibm.com/systems/support/storage/ssic/interoperability.wss
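At the frame level, the encapsulation works roughly as follows: a complete FC frame rides as the payload of an Ethernet frame tagged with the FCoE EtherType (0x8906). The sketch below builds such a frame in deliberately simplified form; the MAC addresses and FC payload are placeholders, and the real protocol adds version bits, SOF/EOF delimiters, and the FIP login exchange:

```python
import struct

# Simplified, illustrative FCoE encapsulation: an Ethernet header carrying
# EtherType 0x8906 followed by the FC frame as payload. Real FCoE frames
# also carry a version field, SOF/EOF delimiters, and padding; those are
# omitted here for clarity. All addresses and payloads are placeholders.
ETHERTYPE_FCOE = 0x8906

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an FC frame in a minimal Ethernet header for FCoE transport."""
    eth_header = dst_mac + src_mac + struct.pack("!H", ETHERTYPE_FCOE)
    return eth_header + fc_frame

fcf_mac = bytes.fromhex("0efc00010203")   # placeholder FCF MAC address
cna_mac = bytes.fromhex("0efc000a0b0c")   # placeholder CNA MAC address
fc_payload = b"\x00" * 36                 # placeholder FC frame bytes

frame = fcoe_frame(fcf_mac, cna_mac, fc_payload)
assert frame[12:14] == b"\x89\x06"        # the FCoE EtherType
```

The point of the sketch is that, on the wire, the FC frame is untouched; the internal switch can forward it as ordinary lossless Ethernet until an FCF (external, or integrated as in the CN4093) de-encapsulates it back into native FC.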


Table 7-11 FCoE support using FC targets

Ethernet adapters (x86 compute nodes): 10Gb onboard LOM (x240) + FCoE upgrade, 90Y9310; 10Gb onboard LOM (x440) + FCoE upgrade, 90Y9310; CN4054 10Gb Adapter, 90Y3554 + FCoE upgrade, 90Y3558
Operating systems: Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX 4.1, vSphere 5.0
- EN4091 10Gb Ethernet Pass-thru (vNIC2 and pNIC); FC Forwarder (FCF): Cisco Nexus 5010 or Cisco Nexus 5020; supported SAN fabric: Cisco MDS 9124, Cisco MDS 9148, or Cisco MDS 9513; storage targets: DS8000, IBM SAN Volume Controller, IBM Storwize V7000, V7000 Storage Node (FC), TS3200, TS3310, TS3500
- EN4093 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC) or EN4093R 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC); FCF: Brocade VDX 6730 (IBM B-type fabric) or Cisco Nexus 5548 or 5596 (Cisco MDS fabric); storage targets: DS8000, SAN Volume Controller, Storwize V7000, V7000 Storage Node (FC), IBM XIV
- CN4093 10Gb Converged Switch (vNIC1, vNIC2, and pNIC) with its integrated FCF; supported SAN fabric: IBM B-type or Cisco MDS; storage targets: DS8000, SAN Volume Controller, Storwize V7000, V7000 Storage Node (FC), IBM XIV

Ethernet adapter (Power Systems compute nodes): CN4058 8-port 10Gb Converged Adapter, EC24
Operating systems: AIX V6.1, AIX V7.1, VIOS 2.2, SLES 11.2, RHEL 6.3
- EN4093 10Gb Switch (pNIC only) or EN4093R 10Gb Switch (pNIC only); FCF: Brocade VDX 6730 (IBM B-type fabric) or Cisco Nexus 5548 or 5596 (Cisco MDS fabric); storage targets: DS8000, SAN Volume Controller, Storwize V7000, V7000 Storage Node (FC), IBM XIV
- CN4093 10Gb Converged Switch (pNIC only) with its integrated FCF; supported SAN fabric: IBM B-type or Cisco MDS; storage targets: DS8000, SAN Volume Controller, Storwize V7000, V7000 Storage Node (FC), IBM XIV

Table 7-12 FCoE support using FCoE targets (end-to-end FCoE)

- Ethernet adapters: 10Gb onboard LOM (x240) + FCoE upgrade, 90Y9310; 10Gb onboard LOM (x440) + FCoE upgrade, 90Y9310; CN4054 10Gb Adapter, 90Y3554 + FCoE upgrade, 90Y3558. I/O module: CN4093 10Gb Converged Switch (vNIC1, vNIC2, and pNIC). Operating systems: Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX V4.1, vSphere 5.0. Storage target: V7000 Storage Node (FCoE)
- Ethernet adapter: CN4058 8-port 10Gb Converged Adapter, EC24. I/O module: CN4093 10Gb Converged Switch (pNIC only). Operating systems: AIX V6.1, AIX V7.1, VIOS 2.2, SLES 11.2, RHEL 6.3. Storage target: V7000 Storage Node (FCoE)


7.5 iSCSI
iSCSI uses a traditional Ethernet network for block I/O between a storage system and servers. Servers and storage systems are connected to the LAN and use iSCSI to communicate with each other. Because iSCSI uses a standard TCP/IP stack, you can use iSCSI connections across LAN or wide area network (WAN) connections. iSCSI targets include the IBM System Storage DS3500 iSCSI models; a solution can also include an optional DHCP server and a management station with iSCSI Configuration Manager.

The software iSCSI initiator is specialized software that uses a server's processor for iSCSI protocol processing. A hardware iSCSI initiator exists as microcode that is built in to the LAN on Motherboard (LOM) on the node or on the I/O adapter, provided it is supported. Both software and hardware initiator implementations provide iSCSI capabilities for Ethernet NICs. However, an operating system driver can be used only after the locally installed operating system is turned on and running. In contrast, the NIC built-in microcode is used for boot-from-SAN implementations, but cannot be used for storage access when the operating system is already running.

Table 7-13 lists iSCSI support that uses a hardware-based iSCSI initiator. The IBM System Storage Interoperation Center normally lists support only for iSCSI storage that is attached by using hardware iSCSI offload adapters in the servers. Flex System compute nodes support any type of iSCSI (1Gb or 10Gb) storage with the software iSCSI initiator device drivers if the storage requirements for operating system and device driver levels are met.

Tip: Use these tables only as a starting point. Configuration support must be verified through the IBM SSIC website:
http://ibm.com/systems/support/storage/ssic/interoperability.wss

Table 7-13 Hardware-based iSCSI support

Ethernet adapters: 10Gb onboard LOM (x240)a; 10Gb onboard LOM (x440)a; CN4054 10Gb Virtual Fabric Adapter, 90Y3554b
Flex System I/O modules: EN4093 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC); EN4093R 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC)
Operating systems: Windows Server 2008 R2, SLES 10 and 11, RHEL 5 and 6, ESX 4.1, vSphere 5.0
Storage targets: SAN Volume Controller, Storwize V7000, V7000 Storage Node (iSCSI), IBM XIV

a. FCoE upgrade is required: IBM Virtual Fabric Advanced Software Upgrade (LOM), 90Y9310
b. FCoE upgrade is required: IBM Flex System CN4054 Virtual Fabric Adapter Upgrade, 90Y3558

iSCSI on Enterprise Chassis nodes can be implemented on the CN4054 10Gb Virtual Fabric Adapter and the embedded 10 Gb Virtual Fabric LOM.

Remember: Both of these NIC solutions require a Feature on Demand (FoD) upgrade, which enables the iSCSI initiator function.

Chapter 7. Storage integration

475

Software initiators can be obtained from the operating system vendor. For example, Microsoft offers a software iSCSI initiator for download. They also can be obtained as a part of an NIC firmware upgrade (if supported by NIC). For more information about IBM System Storage compatibility, see the IBM System Storage Interoperability Center at this website: http://www.ibm.com/systems/support/storage/config/ssic
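As a concrete illustration of how iSCSI endpoints are addressed, an initiator and a target each carry an iSCSI Qualified Name (IQN, in the form iqn.yyyy-mm.reversed-domain:identifier, per RFC 3720), and a target listens on a TCP portal, port 3260 by default. The names below are invented examples, not real devices:

```python
import re

# Illustrative sketch of iSCSI naming and addressing. The IQN format and
# default port 3260 come from the iSCSI standard (RFC 3720); the specific
# names and address below are invented examples.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")

def portal(address: str, port: int = 3260) -> str:
    """Build the target portal string an initiator would log in to."""
    return f"{address}:{port}"

initiator_iqn = "iqn.2013-10.com.example:node01-lom0"        # invented
target_iqn = "iqn.2013-10.com.example.storage:v7000.node1"   # invented

assert IQN_RE.match(initiator_iqn)
assert IQN_RE.match(target_iqn)
print(portal("192.168.70.20"))   # 192.168.70.20:3260
```

Whether the initiator is software (operating system driver) or hardware (LOM or adapter microcode), the same IQN and portal identify the session; only where the protocol processing happens differs.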

Tip: Consider the use of a separate network segment for iSCSI traffic. That is, isolate NICs, switches or virtual local area networks (VLANs), and storage system ports that participate in iSCSI communications from other traffic.

If you plan for redundancy, you must use multipath drivers. Generally, they are provided by the operating system vendor for iSCSI implementations, even if you plan to use hardware initiators. It is possible to implement HA clustering solutions by using iSCSI, but certain restrictions might apply. For more information, see the storage system vendor compatibility guides.

When you plan your iSCSI solution, consider the following items:
- IBM Flex System Enterprise Chassis nodes, the initiators, and the operating system are supported by the iSCSI storage system. For more information, see the compatibility guides from the storage vendor.
- Multipath drivers exist and are supported by the operating system and the storage system (when redundancy is planned). For more information, see the compatibility guides from the operating system vendor and storage vendor.

For more information, see the following resources:
- IBM SSIC: http://www.ibm.com/systems/support/storage/config/ssic
- IBM System Storage N series Interoperability Matrix: http://ibm.com/support/docview.wss?uid=ssg1S7003897
- Microsoft Support for iSCSI: http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/msfiscsi.mspx

7.6 HA and redundancy


The Enterprise Chassis has built-in network redundancy. All server I/O adapters are dual-port. I/O modules can be installed as a pair in the Enterprise Chassis to avoid possible single points of failure in the storage infrastructure. All major vendors, including IBM, use dual-controller storage systems to provide redundancy.


A typical topology for integrating Enterprise Chassis into an FC infrastructure is shown in Figure 7-16.

Figure 7-16 IBM Enterprise Chassis LAN infrastructure topology

This topology includes a dual-port FC I/O adapter that is installed in the node. A pair of FC I/O modules is installed into bays 3 and 4 of the Enterprise Chassis. In a failure, the specific operating system driver that is provided by the storage system manufacturer is responsible for the automatic failover process. This process is also known as multipathing capability.

If you plan to use redundancy and HA for the storage fabric, ensure that the failover drivers satisfy the following requirements:
- They are available from the vendor of the storage system.
- They come with the system or can be ordered separately (remember to order them in such cases).
- They support the node operating system.
- They support the redundant multipath fabric that you plan to implement (that is, they support the required number of redundant paths).

For more information, see the storage system documentation from the vendor.
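Conceptually, what a failover driver does can be sketched in a few lines: I/O is retried on the surviving path when the preferred path fails. The path names and the failure simulation below are invented for illustration and are not any vendor's driver logic:

```python
# Conceptual sketch of multipath failover: try each redundant path in
# order and fall back to the next one on failure. Path names and the
# "alive" state are invented placeholders.

class PathFailed(Exception):
    pass

def read_lun(lun, paths, path_ok):
    """Attempt a read over redundant paths; fail over on path error."""
    for path in paths:
        if path_ok(path):
            return f"read lun{lun} via {path}"
    raise PathFailed(f"all paths to lun{lun} failed")

# Two redundant paths through the FC I/O modules in chassis bays 3 and 4.
paths = ["fc-module-bay3", "fc-module-bay4"]
alive = {"fc-module-bay3": False, "fc-module-bay4": True}  # bay 3 has failed

print(read_lun(0, paths, lambda p: alive[p]))
```

A real multipath driver also rechecks failed paths and can fail back when the preferred path returns, which is why the failover/failback settings mentioned for Access Gateway mode matter.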


7.7 Performance
Performance is an important consideration during storage infrastructure planning. The required end-to-end performance for your SAN can be provided in several ways.

First, the storage system's failover driver can provide load balancing across redundant paths in addition to HA. When used with the DS8000, the IBM System Storage Multi-path Subsystem Device Driver (SDD) provides this function. If you plan to use such drivers, ensure that they satisfy the following requirements:
- They are available from the storage system vendor.
- They come with the system, or can be ordered separately.
- They support the node operating system.
- They support the multipath fabric that you plan to implement. That is, they support the required number of paths implemented.

Also, you can use static LUN distribution between the two storage controllers in the storage system: some LUNs are served by controller 1 and others are served by controller 2. A zoning technique can also be used with static LUN distribution if you have redundant connections between the FC switches and the storage system controllers.

Trunking or PortChannels between FC or Ethernet switches can be used to increase network bandwidth, which increases performance. Trunks in the FC network use the same concept as in standard Ethernet networks: several physical links between switches are grouped into one logical link with increased bandwidth. This configuration is typically used when an Enterprise Chassis is integrated into existing advanced FC infrastructures. However, remember that only the FC5022 16Gb SAN Scalable Switch supports trunking, and this optional feature requires the purchase of an additional license.

For more information, see the storage system vendor documentation and the switch vendor documentation.

7.8 Backup solutions


Backup is an important consideration when you deploy infrastructure systems. First, you must decide which tape backup solution to implement. Data can be backed up by using the following methods:
- Centralized local area network (LAN) backup with a dedicated backup server (compute node in the chassis) with an FC-attached tape autoloader or tape library
- Centralized LAN backup with a dedicated backup server (server external to the chassis) with an FC-attached tape autoloader or tape library
- LAN-free backup with an FC-attached tape autoloader or library (for more information, see 7.8.2, "LAN-free backup for nodes" on page 480)

If you plan to use a node as a dedicated backup server or LAN-free backup for nodes, use only certified tape autoloaders and tape libraries. If you plan to use a dedicated backup server on a non-Enterprise Chassis system, use tape devices that are certified for that server. Also, verify that the tape device and type of backup you select are supported by the backup software you plan to use.


For more information about supported tape devices and interconnectivity, see the IBM SSIC: http://www.ibm.com/systems/support/storage/config/ssic

7.8.1 Dedicated server for centralized LAN backup


The simplest way to provide backup for the Enterprise Chassis is to use a compute node or external server with a SAS-attached or FC-attached tape unit. In this case, all nodes that require backup have backup agents, and backup traffic from these agents to the backup server uses standard LAN paths. If you use an FC-attached tape drive, connect it to an FC fabric (or at least to an HBA) that is dedicated for backup. Do not connect it to the FC fabric that carries the disk traffic. If you cannot use dedicated switches, use zoning techniques on the FC switches to separate these two fabrics.

Consideration: Avoid mixing disk storage and tape storage on the same FC HBA. If you experience issues with your SAN because tape and disk are on the same HBA, IBM Support requests that you separate these devices.

If you plan to use a node as a dedicated backup server with FC-attached tape, use one port of the I/O adapter for tape and the other for disk. There is no redundancy in this case. Figure 7-17 shows possible topologies and traffic flows for LAN backups and FC-attached storage devices.

Figure 7-17 LAN backup topology and traffic flow

The topology that is shown in Figure 7-17 has the following characteristics:
- Each node that participates in backup (except the actual backup server) has dual connections to the disk storage system. The backup server has only one disk storage connection (shown in red).

- The other port of the FC HBA is dedicated for tape storage.
- A backup agent is installed on each node that requires backup.
- The backup traffic flow starts with the backup agent transferring backup data from the disk storage to the backup server through the LAN. The backup server stores this data on its disk storage; for example, on the same storage system. Then, the backup server transfers the data from its storage directly to the tape device.
- Zoning is implemented on the FC Switch Module to separate the disk and tape data flows. In this respect, zoning resembles VLANs in Ethernet networks.
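The zoning behavior that separates the disk and tape flows can be modeled in a few lines: two ports can communicate only if at least one zone contains both of their WWPNs. The WWPNs and zone names below are invented examples:

```python
# Sketch of FC zoning semantics (which the text compares to VLANs): a port
# pair can communicate only if some zone contains both WWPNs. All WWPNs
# and zone names here are invented placeholders.

zones = {
    "disk_zone": {"10:00:00:00:c9:aa:00:01",   # node HBA port (disk)
                  "50:05:07:68:01:00:00:01"},  # storage controller port
    "tape_zone": {"10:00:00:00:c9:aa:00:02",   # backup server HBA port
                  "50:01:10:a0:00:00:00:01"},  # tape autoloader port
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """True if any zone contains both ports."""
    return any(wwpn_a in members and wwpn_b in members
               for members in zones.values())

# Disk and tape traffic are isolated: the tape-zone HBA port cannot
# reach the disk controller port.
assert can_communicate("10:00:00:00:c9:aa:00:01", "50:05:07:68:01:00:00:01")
assert not can_communicate("10:00:00:00:c9:aa:00:02", "50:05:07:68:01:00:00:01")
```

This is exactly the separation the Consideration box asks for: even on shared switches, the disk fabric and the backup fabric never see each other's ports.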

7.8.2 LAN-free backup for nodes


LAN-free backup means that the SAN fabric is used for the backup data flow instead of LAN. LAN is used only for passing control information between the backup server and agents. LAN-free backup can save network bandwidth for network applications, which provides better network performance. The backup agent transfers backup data from the disk storage directly to the tape storage during LAN-free backup. Figure 7-18 shows this process.

Figure 7-18 LAN-free backup without disk storage redundancy

Figure 7-18 shows the simplest topology for LAN-free backup. With this topology, the backup server controls the backup process and the backup agent moves the backup data from the disk storage directly to the tape storage. In this case, there is no redundancy that is provided for the disk storage and tape storage. Zones are not required because the second Fibre Channel Switching Module (FCSM) is exclusively used for the backup fabric. Backup software vendors can use other (or more) topologies and protocols for backup operations. Consult the backup software vendor documentation for a list of supported topologies and features, and more information.


7.9 Boot from SAN


Boot from SAN (or SAN Boot) is a technique that is used when the node in the chassis has no local disk drives. It uses an external storage system LUN to boot the operating system, so the operating system and data are on the SAN. This technique is commonly used to provide higher availability and better usage of the system's storage (where the operating system is located). Hot-spare nodes or rip-and-replace techniques can also be easily implemented by using Boot from SAN.

7.9.1 Implementing Boot from SAN


To successfully implement SAN Boot, the following conditions must be met. Check the respective storage system compatibility guides for more information:
- The storage system supports SAN Boot.
- The operating system supports SAN Boot.
- The FC HBAs or iSCSI initiators support SAN Boot.

Also check the documentation for the operating system that is used, and the storage vendors' documentation, for Boot from SAN support and requirements. See the following sources for more SAN Boot-related information:
- Windows Boot from Fibre Channel SAN Overview and Detailed Technical Instructions for the System Administrator: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=2815
- SAN Configuration Guide (from VMware): http://www.vmware.com/pdf/vi3_esx_san_cfg.pdf
- For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at this website: http://www.ibm.com/systems/support/storage/config/ssic

7.9.2 iSCSI SAN Boot specific considerations


iSCSI SAN Boot enables a diskless node to be started from an external iSCSI storage system. You can use the onboard 10 Gb Virtual Fabric LOM on the node itself or an I/O adapter. Specifically, the IBM Flex System CN4054 10Gb Virtual Fabric Adapter supports iSCSI with the IBM Flex System CN4054 Virtual Fabric Adapter Upgrade, part 90Y3558. For the latest compatibility information, see the storage vendor compatibility guides. For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at this website: http://www.ibm.com/systems/support/storage/config/ssic



Abbreviations and acronyms


AC  alternating current
ACL  access control list
AES-NI  Advanced Encryption Standard New Instructions
AMM  advanced management module
AMP  Apache, MySQL, and PHP/Perl
ANS  Advanced Network Services
API  application programming interface
AS  Australian Standards
ASIC  application-specific integrated circuit
ASU  Advanced Settings Utility
AVX  Advanced Vector Extensions
BACS  Broadcom Advanced Control Suite
BASP  Broadcom Advanced Server Program
BE  Broadband Engine
BGP  Border Gateway Protocol
BIOS  basic input/output system
BOFM  BladeCenter Open Fabric Manager
CEE  Converged Enhanced Ethernet
CFM  cubic feet per minute
CLI  command-line interface
CMM  Chassis Management Module
CPM  Copper Pass-thru Module
CPU  central processing unit
CRTM  Core Root of Trusted Measurements
DC  domain controller
DHCP  Dynamic Host Configuration Protocol
DIMM  dual inline memory module
DMI  Desktop Management Interface
DRAM  dynamic random-access memory
DRTM  Dynamic Root of Trust Measurement
DSA  Dynamic System Analysis
ECC  error checking and correcting
EIA  Electronic Industries Alliance
ESB  Enterprise Switch Bundle
ETE  everything-to-everything
FC  Fibre Channel
FC-AL  Fibre Channel Arbitrated Loop
FDR  fourteen data rate
FSM  Flex System Manager
FSP  flexible service processor
FTP  File Transfer Protocol
FTSS  Field Technical Sales Support
GAV  generally available variant
GB  gigabyte
GT  gigatransfers
HA  high availability
HBA  host bus adapter
HDD  hard disk drive
HPC  high-performance computing
HS  hot swap
HT  Hyper-Threading
HW  hardware
I/O  input/output
IB  InfiniBand
IBM  International Business Machines
ID  identifier
IEEE  Institute of Electrical and Electronics Engineers
IGMP  Internet Group Management Protocol
IMM  integrated management module
IP  Internet Protocol
IS  information store
ISP  Internet service provider
IT  information technology
ITE  IT Element
ITSO  International Technical Support Organization
KB  kilobyte
KVM  keyboard video mouse
LACP  Link Aggregation Control Protocol
LAN  local area network
LDAP  Lightweight Directory Access Protocol
LED  light emitting diode
LOM  LAN on Motherboard
LP  low profile


LPC  Local Procedure Call
LR  long range
LR-DIMM  load-reduced DIMM
MAC  media access control
MB  megabyte
MSTP  Multiple Spanning Tree Protocol
NIC  network interface card
NL  nearline
NS  not supported
NTP  Network Time Protocol
OPM  Optical Pass-Thru Module
OSPF  Open Shortest Path First
PCI  Peripheral Component Interconnect
PCIe  PCI Express
PDU  power distribution unit
PF  power factor
PSU  power supply unit
QDR  quad data rate
QPI  QuickPath Interconnect
RAID  redundant array of independent disks
RAM  random access memory
RAS  remote access services; row address strobe
RDIMM  registered DIMM
RFC  request for comments
RHEL  Red Hat Enterprise Linux
RIP  Routing Information Protocol
ROC  RAID-on-Chip
ROM  read-only memory
RPM  revolutions per minute
RSS  Receive-Side Scaling
SAN  storage area network
SAS  Serial Attached SCSI
SATA  Serial ATA
SDMC  Systems Director Management Console
SerDes  Serializer-Deserializer
SFF  small form factor
SLC  Single-Level Cell
SLES  SUSE Linux Enterprise Server
SLP  Service Location Protocol
SNMP  Simple Network Management Protocol
SSD  solid-state drive
SSH  Secure Shell
SSL  Secure Sockets Layer
STP  Spanning Tree Protocol
TCG  Trusted Computing Group
TCP  Transmission Control Protocol
TDP  thermal design power
TFTP  Trivial File Transfer Protocol
TPM  Trusted Platform Module
TXT  text
UDIMM  unbuffered DIMM
UDLD  Unidirectional link detection
UEFI  Unified Extensible Firmware Interface
UI  user interface
UL  Underwriters Laboratories
UPS  uninterruptible power supply
URL  Uniform Resource Locator
USB  universal serial bus
VE  Virtualization Engine
VIOS  Virtual I/O Server
VLAG  Virtual Link Aggregation Groups
VLAN  virtual LAN
VM  virtual machine
VPD  vital product data
VRRP  Virtual Router Redundancy Protocol
VT  Virtualization Technology
WW  worldwide
WWN  Worldwide Name


Related publications and education


The publications that are listed in this section are considered suitable for a more detailed discussion of the topics that are covered in this book.

IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this book. They are available from the following website:
http://www.redbooks.ibm.com/portals/puresystems

IBM Flex System:
- IBM Flex System p270 Compute Node Planning and Implementation Guide, SG24-8166
- IBM Flex System p260 and p460 Planning and Implementation Guide, SG24-7989
- IBM Flex System Networking in an Enterprise Data Center, REDP-4834
- Moving to IBM PureFlex System: x86-to-x86 Migration, REDP-4887

Chassis, Compute Nodes, and Expansion Nodes:
- IBM Flex System Enterprise Chassis, TIPS0863
- IBM Flex System Manager, TIPS0862
- IBM Flex System p24L, p260 and p460 Compute Nodes, TIPS0880
- IBM Flex System p270 Compute Node, TIPS1018
- IBM Flex System PCIe Expansion Node, TIPS0906
- IBM Flex System Storage Expansion Node, TIPS0914
- IBM Flex System x220 Compute Node, TIPS0885
- IBM Flex System x222 Compute Node, TIPS1036
- IBM Flex System x240 Compute Node, TIPS0860
- IBM Flex System x440 Compute Node, TIPS0886

Switches:
- IBM Flex System EN2092 1Gb Ethernet Scalable Switch, TIPS0861
- IBM Flex System EN4091 10Gb Ethernet Pass-thru Module, TIPS0865
- IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switches, TIPS0864
- IBM Flex System FC3171 8Gb SAN Switch and Pass-thru, TIPS0866
- IBM Flex System FC5022 16Gb SAN Scalable Switches, TIPS0870
- IBM Flex System IB6131 InfiniBand Switch, TIPS0871
- IBM Flex System Fabric SI4093 System Interconnect Module, TIPS1045
- IBM Flex System EN6131 40Gb Ethernet Switch, TIPS0911

Adapters:
- IBM Flex System EN6132 2-port 40Gb Ethernet Adapter, TIPS0912
- IBM Flex System CN4054 10Gb Virtual Fabric Adapter and EN4054 4-port 10Gb Ethernet Adapter, TIPS0868
- IBM Flex System CN4058 8-port 10Gb Converged Adapter, TIPS0909
- IBM Flex System EN2024 4-port 1Gb Ethernet Adapter, TIPS0845
- IBM Flex System EN4132 2-port 10Gb Ethernet Adapter, TIPS0873
- IBM Flex System EN4132 2-port 10Gb RoCE Adapter, TIPS0913


- IBM Flex System FC3052 2-port 8Gb FC Adapter, TIPS0869
- IBM Flex System FC3172 2-port 8Gb FC Adapter, TIPS0867
- IBM Flex System FC5022 2-port 16Gb FC Adapter, TIPS0891
- IBM Flex System FC5172 2-port 16Gb FC Adapter, TIPS1043
- IBM Flex System FC5052 2-port and FC5054 4-port 16Gb FC Adapters, TIPS1044
- IBM Flex System FC5024D 4-port 16Gb FC Adapter, TIPS1047
- IBM Flex System IB6132D 2-port FDR InfiniBand Adapter, TIPS1056
- IBM Flex System IB6132 2-port FDR InfiniBand Adapter, TIPS0872
- IBM Flex System IB6132 2-port QDR InfiniBand Adapter, TIPS0890
- ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884

Other relevant document:
- IMM and IMM2 Support on IBM System x and BladeCenter Servers, TIPS0849

You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, and draft publications at this website:
http://www.ibm.com/redbooks

IBM education
The following IBM educational offerings are available for IBM Flex System. Some course numbers and titles might have changed after publication.

Important: IBM courses that are prefixed with NGTxx are traditional, face-to-face classroom offerings. Courses that are prefixed with NGVxx are Instructor Led Online (ILO) offerings. Courses that are prefixed with NGPxx are Self-paced Virtual Class (SPVC) offerings.

- NGT10/NGV10/NGP10, IBM Flex System - Introduction
- NGT20/NGV20/NGP20, IBM Flex System x240 Compute Node
- NGT30/NGV30/NGP30, IBM Flex System p260 and p460 Compute Nodes
- NGT40/NGV40/NGP40, IBM Flex System Manager Node
- NGT50/NGV50/NGP50, IBM Flex System Scalable Networking

For more information about these and many other IBM System x educational offerings, visit the global IBM Training website at:
http://www.ibm.com/training

Online resources
The following websites are also relevant as further information sources:

- IBM Flex System Interoperability Guide:
  http://www.redbooks.ibm.com/fsig
- Configuration and Option Guide:
  http://www.ibm.com/systems/xbc/cog/


- IBM Flex System Enterprise Chassis Power Requirements Guide:
  http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401
- IBM Flex System Information Center:
  http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp
- IBM System Storage Interoperation Center:
  http://www.ibm.com/systems/support/storage/ssic
- Integrated Management Module II User's Guide:
  http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5086346
- ServerProven compatibility page for operating system support:
  http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml
- ServerProven for IBM Flex System:
  http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html
- xREF - IBM x86 Server Reference:
  http://www.redbooks.ibm.com/xref

Help from IBM

- IBM Support and downloads:
  http://www.ibm.com/support
- IBM Global Services:
  http://www.ibm.com/services


Back cover

IBM PureFlex System and IBM Flex System Products and Technology
Describes the IBM Flex System Enterprise Chassis and compute node technology
Provides details about available I/O modules and expansion options
Explains networking and storage configurations
To meet today's complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

The IBM PureFlex System combines no-compromise system designs along with built-in expertise and integrates them into complete, optimized solutions. At the heart of PureFlex System is the IBM Flex System Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications.

The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy now, and to scale to meet your needs in the future.

This IBM Redbooks publication describes IBM PureFlex System and IBM Flex System. It highlights the technology and features of the chassis, compute nodes, management features, and connectivity options. Guidance is provided about every major component, and about networking and storage connectivity.

This book is intended for customers, Business Partners, and IBM employees who want to know the details about the new family of products.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE


IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers, and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks

SG24-7984-03

ISBN 0738438898
