
Front cover

IBM PureFlex System and IBM Flex System Products and Technology
Describes the IBM Flex System Enterprise Chassis and compute node technology

Provides details of available I/O modules and expansion options

Explains networking and storage configurations

David Watts
Randall Davis
Richard French
Lu Han
Dave Ridley
Cristian Rojas

ibm.com/redbooks

International Technical Support Organization

IBM PureFlex System and IBM Flex System Products and Technology

July 2012

SG24-7984-00

Note: Before using this information and the product it supports, read the information in "Notices" on page ix.

First Edition (July 2012)

This edition applies to:
IBM PureFlex System
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System x220 Compute Node
IBM Flex System x240 Compute Node
IBM Flex System p260 Compute Node
IBM Flex System p24L Compute Node
IBM Flex System p460 Compute Node
IBM 42U 1100 mm Enterprise V2 Dynamic Rack

Copyright International Business Machines Corporation 2012. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices . . . . . . . . . . ix
Trademarks . . . . . . . . . . x

Preface . . . . . . . . . . xi
The team who wrote this book . . . . . . . . . . xi
Now you can become a published author, too! . . . . . . . . . . xiv
Comments welcome . . . . . . . . . . xiv
Stay connected to IBM Redbooks . . . . . . . . . . xiv

Chapter 1. IBM PureSystems . . . . . . . . . . 1
1.1 IBM PureFlex System . . . . . . . . . . 2
1.2 IBM PureApplication System . . . . . . . . . . 3
1.3 IBM Flex System: The building blocks . . . . . . . . . . 5
1.3.1 Management . . . . . . . . . . 5
1.3.2 Compute nodes . . . . . . . . . . 5
1.3.3 Storage . . . . . . . . . . 6
1.3.4 Networking . . . . . . . . . . 6
1.3.5 Infrastructure . . . . . . . . . . 6
1.4 IBM Flex System overview . . . . . . . . . . 7
1.4.1 IBM Flex System Manager . . . . . . . . . . 7
1.4.2 IBM Flex System Enterprise Chassis . . . . . . . . . . 8
1.4.3 Compute nodes . . . . . . . . . . 8
1.4.4 I/O modules . . . . . . . . . . 9
1.5 This book . . . . . . . . . . 10

Chapter 2. IBM PureFlex System . . . . . . . . . . 11
2.1 IBM PureFlex System capabilities . . . . . . . . . . 12
2.2 IBM PureFlex System Express . . . . . . . . . . 13
2.2.1 Chassis . . . . . . . . . . 13
2.2.2 Top of rack Ethernet switch . . . . . . . . . . 14
2.2.3 Top of rack SAN switch . . . . . . . . . . 14
2.2.4 Compute nodes . . . . . . . . . . 14
2.2.5 IBM Flex System Manager . . . . . . . . . . 16
2.2.6 IBM Storwize V7000 . . . . . . . . . . 16
2.2.7 Rack cabinet . . . . . . . . . . 17
2.2.8 Software . . . . . . . . . . 17
2.2.9 Services . . . . . . . . . . 20
2.3 IBM PureFlex System Standard . . . . . . . . . . 20
2.3.1 Chassis . . . . . . . . . . 21
2.3.2 Top of rack Ethernet switch . . . . . . . . . . 21
2.3.3 Top of rack SAN switch . . . . . . . . . . 22
2.3.4 Compute nodes . . . . . . . . . . 22
2.3.5 IBM Flex System Manager . . . . . . . . . . 23
2.3.6 IBM Storwize V7000 . . . . . . . . . . 23
2.3.7 Rack cabinet . . . . . . . . . . 24
2.3.8 Software . . . . . . . . . . 24
2.3.9 Services . . . . . . . . . . 26
2.4 IBM PureFlex System Enterprise . . . . . . . . . . 27
2.4.1 Chassis . . . . . . . . . . 27
2.4.2 Top of rack Ethernet switch . . . . . . . . . . 28
2.4.3 Top of rack SAN switch . . . . . . . . . . 28
2.4.4 Compute nodes . . . . . . . . . . 29
2.4.5 IBM Flex System Manager . . . . . . . . . . 30
2.4.6 IBM Storwize V7000 . . . . . . . . . . 30
2.4.7 Rack cabinet . . . . . . . . . . 31
2.4.8 Software . . . . . . . . . . 31
2.4.9 Services . . . . . . . . . . 33
2.5 IBM SmartCloud Entry . . . . . . . . . . 34

Chapter 3. Systems management . . . . . . . . . . 37
3.1 Management network . . . . . . . . . . 38
3.2 Chassis Management Module . . . . . . . . . . 39
3.2.1 Overview . . . . . . . . . . 39
3.2.2 Interfaces . . . . . . . . . . 40
3.3 Security . . . . . . . . . . 41
3.4 Compute node management . . . . . . . . . . 43
3.4.1 Integrated Management Module II . . . . . . . . . . 43
3.4.2 Flexible service processor . . . . . . . . . . 44
3.4.3 I/O modules . . . . . . . . . . 45
3.5 IBM Flex System Manager . . . . . . . . . . 46
3.5.1 Hardware overview . . . . . . . . . . 48
3.5.2 Software features . . . . . . . . . . 51
3.5.3 Supported agents, hardware, operating systems, and tasks . . . . . . . . . . 53

Chapter 4. Chassis and infrastructure configuration . . . . . . . . . . 57
4.1 Overview . . . . . . . . . . 58
4.1.1 Front of the chassis . . . . . . . . . . 60
4.1.2 Midplane . . . . . . . . . . 61
4.1.3 Rear of the chassis . . . . . . . . . . 62
4.1.4 Specifications . . . . . . . . . . 62
4.1.5 Air filter . . . . . . . . . . 63
4.1.6 Compute node shelves . . . . . . . . . . 64
4.1.7 Hot plug and hot swap components . . . . . . . . . . 65
4.2 Power supplies . . . . . . . . . . 65
4.3 Fan modules . . . . . . . . . . 68
4.4 Fan logic module . . . . . . . . . . 70
4.5 Front information panel . . . . . . . . . . 71
4.6 Cooling . . . . . . . . . . 72
4.7 Power supply and fan module requirements . . . . . . . . . . 77
4.7.1 Fan module population . . . . . . . . . . 77
4.7.2 Power supply population . . . . . . . . . . 78
4.8 Chassis Management Module . . . . . . . . . . 82
4.9 I/O architecture . . . . . . . . . . 85
4.10 I/O modules . . . . . . . . . . 92
4.10.1 I/O module LEDs . . . . . . . . . . 93
4.10.2 Serial access cable . . . . . . . . . . 93
4.10.3 I/O module naming scheme . . . . . . . . . . 94
4.10.4 IBM Flex System Fabric EN4093 10 Gb Scalable Switch . . . . . . . . . . 94
4.10.5 IBM Flex System EN4091 10 Gb Ethernet Pass-thru . . . . . . . . . . 100
4.10.6 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch . . . . . . . . . . 102
4.10.7 IBM Flex System FC5022 16 Gb SAN Scalable Switch . . . . . . . . . . 107
4.10.8 IBM Flex System FC3171 8 Gb SAN Switch . . . . . . . . . . 113
4.10.9 IBM Flex System FC3171 8 Gb SAN Pass-thru . . . . . . . . . . 116
4.10.10 IBM Flex System IB6131 InfiniBand Switch . . . . . . . . . . 118
4.11 Infrastructure planning . . . . . . . . . . 119
4.11.1 Supported power cords . . . . . . . . . . 119
4.11.2 Supported PDUs and UPS units . . . . . . . . . . 120
4.11.3 Power planning . . . . . . . . . . 120
4.11.4 UPS planning . . . . . . . . . . 124
4.11.5 Console planning . . . . . . . . . . 125
4.11.6 Cooling planning . . . . . . . . . . 126
4.11.7 Chassis-rack cabinet compatibility . . . . . . . . . . 127
4.12 IBM 42U 1100 mm Enterprise V2 Dynamic Rack . . . . . . . . . . 128
4.13 IBM Rear Door Heat eXchanger V2 Type 1756 . . . . . . . . . . 134

Chapter 5. Compute nodes . . . . . . . . . . 139
5.1 IBM Flex System Manager . . . . . . . . . . 140
5.2 IBM Flex System x240 Compute Node . . . . . . . . . . 140
5.2.1 Introduction . . . . . . . . . . 140
5.2.2 Models . . . . . . . . . . 144
5.2.3 Chassis support . . . . . . . . . . 144
5.2.4 System architecture . . . . . . . . . . 145
5.2.5 Processor . . . . . . . . . . 147
5.2.6 Memory . . . . . . . . . . 150
5.2.7 Standard onboard features . . . . . . . . . . 162
5.2.8 Local storage . . . . . . . . . . 163
5.2.9 Integrated virtualization . . . . . . . . . . 169
5.2.10 Embedded 10 Gb Virtual Fabric Adapter . . . . . . . . . . 170
5.2.11 I/O expansion . . . . . . . . . . 171
5.2.12 Systems management . . . . . . . . . . 172
5.2.13 Operating system support . . . . . . . . . . 176
5.3 IBM Flex System x220 Compute Node . . . . . . . . . . 177
5.3.1 Introduction . . . . . . . . . . 177
5.3.2 Models . . . . . . . . . . 180
5.3.3 Chassis support . . . . . . . . . . 181
5.3.4 System architecture . . . . . . . . . . 182
5.3.5 Processor options . . . . . . . . . . 184
5.3.6 Memory options . . . . . . . . . . 184
5.3.7 Internal disk storage controllers . . . . . . . . . . 186
5.3.8 Supported internal drives . . . . . . . . . . 191
5.3.9 Embedded 1 Gb Ethernet controller . . . . . . . . . . 192
5.3.10 I/O expansion . . . . . . . . . . 192
5.3.11 Integrated virtualization . . . . . . . . . . 193
5.3.12 Systems management . . . . . . . . . . 194
5.3.13 Operating system support . . . . . . . . . . 197
5.4 IBM Flex System p260 and p24L Compute Nodes . . . . . . . . . . 198
5.4.1 Specifications . . . . . . . . . . 198
5.4.2 System board layout . . . . . . . . . . 200
5.4.3 IBM Flex System p24L Compute Node . . . . . . . . . . 200
5.4.4 Front panel . . . . . . . . . . 201
5.4.5 Chassis support . . . . . . . . . . 203
5.4.6 System architecture . . . . . . . . . . 203
5.4.7 Processor . . . . . . . . . . 203
5.4.8 Memory . . . . . . . . . . 205
5.4.9 Active Memory Expansion . . . . . . . . . . 207
5.4.10 Storage . . . . . . . . . . 209
5.4.11 I/O expansion . . . . . . . . . . 212
5.4.12 System management . . . . . . . . . . 214
5.4.13 Integrated features . . . . . . . . . . 215
5.4.14 Operating system support . . . . . . . . . . 215
5.5 IBM Flex System p460 Compute Node . . . . . . . . . . 216
5.5.1 Overview . . . . . . . . . . 216
5.5.2 System board layout . . . . . . . . . . 218
5.5.3 Front panel . . . . . . . . . . 218
5.5.4 Chassis support . . . . . . . . . . 220
5.5.5 System architecture . . . . . . . . . . 221
5.5.6 Processor . . . . . . . . . . 222
5.5.7 Memory . . . . . . . . . . 223
5.5.8 Active Memory Expansion . . . . . . . . . . 226
5.5.9 Storage . . . . . . . . . . 227
5.5.10 Local storage and cover options . . . . . . . . . . 228
5.5.11 Hardware RAID capabilities . . . . . . . . . . 229
5.5.12 I/O expansion . . . . . . . . . . 230
5.5.13 System management . . . . . . . . . . 232
5.5.14 Integrated features . . . . . . . . . . 233
5.5.15 Operating system support . . . . . . . . . . 233
5.6 I/O adapters . . . . . . . . . . 234
5.6.1 Form factor . . . . . . . . . . 234
5.6.2 Naming structure . . . . . . . . . . 235
5.6.3 Supported compute nodes . . . . . . . . . . 235
5.6.4 Supported switches . . . . . . . . . . 236
5.6.5 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter . . . . . . . . . . 236
5.6.6 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter . . . . . . . . . . 238
5.6.7 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter . . . . . . . . . . 240
5.6.8 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter . . . . . . . . . . 242
5.6.9 IBM Flex System FC3172 2-port 8 Gb FC Adapter . . . . . . . . . . 246
5.6.10 IBM Flex System FC3052 2-port 8 Gb FC Adapter . . . . . . . . . . 247
5.6.11 IBM Flex System FC5022 2-port 16 Gb FC Adapter . . . . . . . . . . 249
5.6.12 IBM Flex System IB6132 2-port FDR InfiniBand Adapter . . . . . . . . . . 251
5.6.13 IBM Flex System IB6132 2-port QDR InfiniBand Adapter . . . . . . . . . . 253

Chapter 6. Network integration . . . . . . . . . . 257
6.1 Ethernet switch module selection . . . . . . . . . . 258
6.2 Scalable switches . . . . . . . . . . 258
6.3 VLAN . . . . . . . . . . 260
6.4 High availability and redundancy . . . . . . . . . . 261
6.4.1 Redundant network topologies . . . . . . . . . . 262
6.4.2 Spanning Tree Protocol . . . . . . . . . . 262
6.4.3 Layer 2 failover . . . . . . . . . . 263
6.4.4 Virtual Link Aggregation Groups . . . . . . . . . . 264
6.4.5 Virtual Router Redundancy Protocol . . . . . . . . . . 265
6.4.6 Routing protocols . . . . . . . . . . 266
6.5 Performance . . . . . . . . . . 266
6.5.1 Trunking . . . . . . . . . . 266
6.5.2 Jumbo frames . . . . . . . . . . 266
6.5.3 NIC teaming . . . . . . . . . . 267
6.5.4 Server Load Balancing . . . . . . . . . . 267
6.6 IBM Virtual Fabric Solution . . . . . . . . . . 267
6.6.1 Virtual Fabric mode vNIC . . . . . . . . . . 269
6.6.2 Switch independent mode vNIC . . . . . . . . . . 270
6.7 VMready . . . . . . . . . . 270

Chapter 7. Storage integration . . . . . . . . . . 273
7.1 External storage . . . . . . . . . . 274
7.1.1 IBM Storwize V7000 . . . . . . . . . . 275
7.1.2 IBM XIV Storage System series . . . . . . . . . . 276
7.1.3 IBM System Storage DS8000 series . . . . . . . . . . 277
7.1.4 IBM System Storage DS5000 series . . . . . . . . . . 278
7.1.5 IBM System Storage DS3000 series . . . . . . . . . . 278
7.1.6 IBM System Storage N series . . . . . . . . . . 278
7.1.7 IBM System Storage TS3500 Tape Library . . . . . . . . . . 280
7.1.8 IBM System Storage TS3310 series . . . . . . . . . . 280
7.1.9 IBM System Storage TS3100 Tape Library . . . . . . . . . . 281
7.2 Fibre Channel . . . . . . . . . . 281
7.2.1 Fibre Channel requirements . . . . . . . . . . 281
7.2.2 FC switch selection and fabric interoperability rules . . . . . . . . . . 282
7.3 iSCSI . . . . . . . . . . 286
7.4 High availability and redundancy . . . . . . . . . . 287
7.5 Performance . . . . . . . . . . 288
7.6 Backup solutions . . . . . . . . . . 289
7.6.1 Dedicated server for centralized LAN backup . . . . . . . . . . 289
7.6.2 LAN-free backup for nodes . . . . . . . . . . 290
7.7 Boot from SAN . . . . . . . . . . 291
7.7.1 Implementing Boot from SAN . . . . . . . . . . 291
7.7.2 iSCSI SAN Boot specific considerations . . . . . . . . . . 292
7.8 Converged networks . . . . . . . . . . 292

Abbreviations and acronyms . . . . . . . . . . 293

Related publications and education . . . . . . . . . . 295
IBM Redbooks . . . . . . . . . . 295
IBM education . . . . . . . . . . 296
Online resources . . . . . . . . . . 296
Help from IBM . . . . . . . . . . 297

Index . . . . . . . . . . 299

Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information about the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

Copyright IBM Corp. 2012. All rights reserved.


Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Active Memory, AIX, AS/400, BladeCenter, BNT, DS8000, Easy Tier, EnergyScale, FlashCopy, IBM Flex System, IBM SmartCloud, IBM, iDataPlex, Netfinity, NMotion, Power Systems, POWER6+, POWER6, POWER7, PowerPC, PowerVM, POWER, PureApplication, PureFlex, PureSystems, Redbooks, Redbooks (logo), ServerProven, ServicePac, Storwize, System Storage, System x, VMready, WebSphere, XIV

The following terms are trademarks of other companies:

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Linear Tape-Open, LTO, Ultrium, the LTO Logo and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

SnapMirror, SnapManager, NearStore, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other countries.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

BNT, NMotion, VMready, and Server Mobility are trademarks or registered trademarks of Blade Network Technologies, Inc., an IBM Company.

Other company, product, or service names may be trademarks or service marks of others.

IBM PureFlex System and IBM Flex System Products and Technology

Preface
To meet today's complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

The IBM PureFlex System combines no-compromise system designs along with built-in expertise and integrates them into complete, optimized solutions. At the heart of PureFlex System is the IBM Flex System Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications. The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The 14 node, 10U chassis delivers high speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy now, and to scale to meet your needs in the future.

This IBM Redbooks publication describes IBM PureFlex System and IBM Flex System. It highlights the technology and features of the chassis, compute nodes, management features, and connectivity options. Guidance is provided about every major component, and about networking and storage connectivity.

This book is intended for customers, Business Partners, and IBM employees who want to know the details about the new family of products. It assumes that you have a basic understanding of blade server concepts and general IT knowledge.

The team who wrote this book


This book was produced by a team of specialists from around the world working at the International Technical Support Organization, Raleigh Center.

David Watts is a Consulting IT Specialist at the IBM ITSO Center in Raleigh. He manages residencies and produces IBM Redbooks publications on hardware and software topics related to IBM Flex System, IBM System x, and BladeCenter servers and associated client platforms. He has authored over 200 books, papers, and Product Guides. He holds a Bachelor of Engineering degree from the University of Queensland (Australia), and has worked for IBM in both the United States and Australia since 1989. David is an IBM Certified IT Specialist, and a member of the IT Specialist Certification Review Board.

Randall Davis is a Senior IT Specialist working in the System x pre-sales team for IBM Australia as a Field Technical Sales Support (FTSS) specialist. He regularly performs System x, BladeCenter, and Storage demonstrations for customers at the IBM Demonstration Centre in Melbourne, Australia. He also helps instruct Business Partners and customers on how to configure and install the BladeCenter. His areas of expertise are the IBM BladeCenter, System x servers, VMware, and Linux. Randall started at IBM as a System 36 and AS/400 Engineer in 1989.


Richard French has worked for over 30 years at IBM, with the last nine in System x pre-sales Technical Education as a Senior Instructional Designer / Instructor. He is the course author and instructor of many System x and BladeCenter classroom courses, and leads the technical team that develops the IBM Flex System course curriculum. He holds numerous certifications from IBM, Microsoft, and CompTIA, and has assisted in developing IBM certification exams. He is based in Raleigh, NC.

Lu Han is a Senior IT Specialist working in the Advanced Technical Skills team for the IBM Growth Markets Unit. Before this, he was a member of the Greater China Group ATS team. He started at IBM as a System x and BladeCenter Techline engineer in 2002, and has about 10 years of wide technical experience in IBM x86 products. He is familiar with System x and BladeCenter solutions that cover system management, virtualization, networking, and cloud. He also focuses on IBM iDataPlex products and HPC solutions.

Dave Ridley is the System x, BladeCenter, iDataPlex, and IBM Flex System Product Manager for IBM in the United Kingdom and Ireland. His role includes product transition planning, supporting marketing events, press briefings, management of the UK loan pool, running early ship programs, and supporting the local sales and technical teams. He is based in Horsham in the United Kingdom, and has been working for IBM since 1998. In addition, he has been involved with IBM x86 products for some 27 years.

Cristian Rojas is a Senior IT Specialist working as a Client Technical Sales Specialist for IBM. He supports all System x and BladeCenter products, and is a technical focal point for IBM PureFlex System in the Northeast United States. Before this role, he worked as a member of the System x Performance team in Raleigh, NC, where his main focus was future server performance optimization. His other areas of expertise include installing and configuring IBM System x and BladeCenter servers, and virtualization including Red Hat and VMware. He has worked for IBM since 2005, and is based in Allentown, PA.

The team (l-r): Richard, David, Lu, Cristian, Dave, and Randall


Thanks to the following people for their contributions to this project:

From IBM marketing: TJ Aspden, Michael Bacon, John Biebelhausen, Bruce Corregan, Mary Beth Daughtry, Mike Easterly, Diana Cunniffe, Kyle Hampton, Botond Kiss, Shekhar Mishra, Justin Nguyen, Sander Kim, Dean Parker, Hector Sanchez, David Tareen, David Walker, Randi Wood, Bob Zuber

From IBM development: Mike Anderson, Sumanta Bahali, Wayne Banks, Keith Cramer, Mustafa Dahnoun, Dean Duff, Royce Espey, Kaena Freitas, Dottie Gardner, Sam Gaver, Phil Godbolt, Mike Goodman, John Gossett, Tim Hiteshew, Andy Huryn, Bill Ilas, Don Keener, Caroline Metry, Meg McColgan, Mark McCool, Rob Ord, Greg Pruett, Mike Solheim, Fang Su, Vic Stankevich, Tan Trinh, Rochelle White, Dale Weiler, Mark Welch, Al Willard

From the International Technical Support Organization: Kevin Barnes, Tamikia Barrow, Mary Comianos, Shari Deiana, Cheryl Gera, Ilya Krutov, Karen Lawrence, Julie O'Shea, Linda Robinson

Others from IBM around the world: Kerry Anders, Bill Champion, Michael L. Nelson, Matt Slavin

Others from other companies: Tom Boucher, Emulex; Brad Buland, Intel; Jeff Lin, Emulex; Chris Mojica, QLogic; Brent Mosbrook, Emulex; Jimmy Myers, Brocade; Haithuy Nguyen, Mellanox; Brian Sparks, Mellanox; Matt Wineberg, Brocade


Now you can become a published author, too!


Here's an opportunity to spotlight your skills, grow your career, and become a published author, all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:

Use the online Contact us review Redbooks form found at:
ibm.com/redbooks

Send your comments in an email to:
redbooks@us.ibm.com

Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks


Find us on Facebook:
http://www.facebook.com/IBMRedbooks

Follow us on Twitter:
http://twitter.com/ibmredbooks

Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806

Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm

Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html


Chapter 1. IBM PureSystems
During the last 100 years, information technology has moved from a specialized tool to a pervasive influence on nearly every aspect of life. From tabulating machines that counted with mechanical switches or vacuum tubes to the first programmable computers, IBM has been a part of this growth. The goal has always been to help customers to solve problems. IT is a constant part of business and of general life. The expertise of IBM in delivering IT solutions has helped the planet become more efficient. As organizational leaders seek to extract more real value from their data, business processes, and other key investments, IT is moving to the strategic center of business.

To meet these business demands, IBM is introducing a new category of systems. These systems combine the flexibility of general-purpose systems, the elasticity of cloud computing, and the simplicity of an appliance that is tuned to the workload. Expert integrated systems are essentially the building blocks of capability. This new category of systems represents the collective knowledge of thousands of deployments, established guidelines, innovative thinking, IT leadership, and distilled expertise.

The offerings in IBM PureSystems are designed to deliver value in the following ways:
Built-in expertise helps you to address complex business and operational tasks automatically.
Integration by design helps you to tune systems for optimal performance and efficiency.
Simplified experience, from design to purchase to maintenance, creates efficiencies quickly.

The IBM PureSystems offerings are optimized for performance and virtualized for efficiency. These systems offer a no-compromise design with system-level upgradeability. IBM PureSystems is built for cloud, with built-in flexibility and simplicity.

At IBM, expert integrated systems come in two types:
IBM PureFlex System: Infrastructure systems that deeply integrate the IT elements and expertise of your system infrastructure.
IBM PureApplication System: Platform systems that include middleware and expertise for deploying and managing your application platforms.


1.1 IBM PureFlex System


To meet today's complex and ever-changing business demands, you need a solid foundation of server, storage, networking, and software resources. Furthermore, it needs to be simple to deploy, and able to quickly and automatically adapt to changing conditions. You also need access to, and the ability to take advantage of, broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

IBM PureFlex System is a comprehensive infrastructure system that provides an expert integrated computing system. It combines servers, enterprise storage, networking, virtualization, and management into a single structure. Its built-in expertise enables organizations to manage and flexibly deploy integrated patterns of virtual and hardware resources through unified management. These systems are ideally suited for customers who want a system that delivers the simplicity of an integrated solution while still being able to tune middleware and the runtime environment.

IBM PureFlex System uses workload placement based on virtual machine compatibility and resource availability. Using built-in virtualization across servers, storage, and networking, the infrastructure system enables automated scaling of resources and true workload mobility.

IBM PureFlex System has undergone significant testing and experimentation so that it can mitigate IT complexity without compromising the flexibility to tune systems to the tasks businesses demand. By providing both flexibility and simplicity, IBM PureFlex System can provide extraordinary levels of IT control, efficiency, and operating agility. This combination enables businesses to rapidly deploy IT services at a reduced cost.

Moreover, the system is built on decades of expertise. This expertise enables deep integration and central management of the comprehensive, open-choice infrastructure system. It also dramatically cuts down on the skills and training required for managing and deploying the system.
IBM PureFlex System combines advanced IBM hardware and software along with patterns of expertise. It integrates them into three optimized configurations that are simple to acquire and deploy so you get fast time to value. The PureFlex System has the following configurations:

IBM PureFlex System Express, which is designed for small and medium businesses and is the most affordable entry point for PureFlex System.

IBM PureFlex System Standard, which is optimized for application servers with supporting storage and networking, and is designed to support your key ISV solutions.

IBM PureFlex System Enterprise, which is optimized for transactional and database systems. It has built-in redundancy for highly reliable and resilient operation to support your most critical workloads.


These configurations are summarized in Table 1-1.


Table 1-1   IBM PureFlex System configurations

Component                                           | IBM PureFlex System Express | IBM PureFlex System Standard | IBM PureFlex System Enterprise
IBM PureFlex System 42U Rack                        | 1 | 1 | 1
IBM Flex System Enterprise Chassis                  | 1 | 1 | 1
IBM Flex System Fabric EN4093 10 Gb Scalable Switch | 1 | 1 | 2 with both port-count upgrades
IBM Flex System FC3171 8 Gb SAN Switch              | 1 | 2 | 2
IBM Flex System Manager Node                        | 1 | 1 | 1
IBM Flex System Manager software license            | IBM Flex System Manager with 1-year service and support | IBM Flex System Manager Advanced with 3-year service and support | IBM Flex System Manager Advanced with 3-year service and support
Chassis Management Module                           | 2 | 2 | 2
Chassis power supplies (std/max)                    | 2/6 | 4/6 | 6/6
Chassis 80 mm fan modules (std/max)                 | 4/8 | 6/8 | 8/8
IBM Storwize V7000 Disk System                      | Yes (redundant controller) | Yes (redundant controller) | Yes (redundant controller)
IBM Storwize V7000 Software                         | Base with 1-year software maintenance agreement | Base with 3-year software maintenance agreement | Base with 3-year software maintenance agreement

The fundamental building blocks of IBM PureFlex System solutions are the IBM Flex System Enterprise Chassis complete with compute nodes, networking, and storage. For more information about IBM PureFlex System, see Chapter 2, IBM PureFlex System on page 11.

1.2 IBM PureApplication System


IBM PureApplication System is a platform system that includes a full application platform set of middleware and expertise along with the IBM PureFlex System. It can be controlled with a single management console. This workload-aware, flexible platform is easy to deploy, customize, safeguard, and manage in a traditional or private cloud environment. IBM PureApplication System ultimately provides superior IT economics.

With the IBM PureApplication System, you can provision your own patterns of software, middleware, and virtual system resources. You can provision these patterns within a unique framework that is shaped by IT guidelines and industry standards. These standards have been culled from many years of IBM experience with clients and a deep understanding of computing. These IT guidelines and standards are infused throughout the system.

IBM PureApplication System provides the following advantages:
IBM builds expertise into preintegrated deployment patterns, which can speed the development and delivery of new services.
By automating key processes such as application deployment, PureApplication System built-in expertise capabilities can reduce the cost and time required to manage an infrastructure.
Built-in application optimization expertise reduces the number of unplanned outages through guidelines and automation of the manual processes identified as sources of those outages.
Administrators can use built-in application elasticity to scale up or to scale down automatically. Systems can use data replication to increase availability.

Patterns of expertise can automatically balance, manage, and optimize the elements necessary, from the underlying hardware resources up through the middleware and software. These patterns of expertise help deliver and manage business processes, services, and applications by encapsulating guidelines and expertise into a repeatable and deployable form. This knowledge and expertise has been gained from decades of optimizing the deployment and management of data centers, software infrastructures, and applications around the world.

These patterns help you achieve the following types of value:
Agility: As you seek to innovate to bring products and services to market faster, you need fast time-to-value. Expertise built into a solution can eliminate manual steps, automate delivery, and support innovation.
Efficiency: To reduce costs and conserve valuable resources, get the most from your systems in terms of energy efficiency, simple management, and fast, automated response to problems. With built-in expertise, you can optimize your critical business applications and get the most out of your investments.
Increased simplicity: You need a less complex environment. Patterns of expertise can help you easily consolidate diverse servers, storage, and applications onto an easier-to-manage, integrated system.
Control: With optimized patterns of expertise, you can accelerate cloud implementations to lower risk by improving security and reducing human error.

IBM PureApplication System is available in four configurations. These configuration options enable you to choose the size and compute power that meets your needs for application infrastructure. You can upgrade to the next size when your organization requires more capacity, and in most cases, you can do so without taking application downtime.


Table 1-2 provides a high-level overview of the configurations.


Table 1-2   IBM PureApplication System configurations

Component                        | W1500-96 | W1500-192 | W1500-384 | W1500-608
Cores                            | 96       | 192       | 384       | 608
RAM                              | 1.5 TB   | 3.1 TB    | 6.1 TB    | 9.7 TB
SSD Storage                      | 6.4 TB for all configurations
HDD Storage                      | 48.0 TB for all configurations
Application Services Entitlement | Included for all configurations

IBM PureApplication System is outside the scope of this book. For more information, see: http://ibm.com/expert

1.3 IBM Flex System: The building blocks


IBM PureFlex System and IBM PureApplication System are built from reliable IBM technology that supports open standards and offers a confident road map: IBM Flex System. IBM Flex System is designed for multiple generations of technology, supporting your workload today while being ready for the future demands of your business.

1.3.1 Management
IBM Flex System Manager is designed to optimize the physical and virtual resources of the IBM Flex System infrastructure while simplifying and automating repetitive tasks. It provides easy system set-up procedures with wizards and built-in expertise, and consolidated monitoring for all of your resources, including compute, storage, networking, virtualization, and energy. IBM Flex System Manager provides core management functionality along with automation. It is an ideal solution that allows you to reduce administrative expense and focus your efforts on business innovation.

A single user interface controls these features:
Intelligent automation
Resource pooling
Improved resource utilization
Complete management integration
Simplified setup

1.3.2 Compute nodes


The compute nodes are designed to take advantage of the full capabilities of IBM POWER7 and Intel Xeon processors. This configuration offers the performance you need for your critical applications.

With support for a range of hypervisors, operating systems, and virtualization environments, the compute nodes provide the foundation for these applications:
Virtualization solutions
Database applications
Infrastructure support
Line of business applications

1.3.3 Storage
The storage capabilities of IBM Flex System give you advanced functionality with storage nodes in your system, and take advantage of your existing storage infrastructure through advanced virtualization. IBM Flex System simplifies storage administration with a single user interface for all your storage. The management console is integrated with the comprehensive management system. These management and storage capabilities allow you to virtualize third-party storage with nondisruptive migration of your current storage infrastructure. You can also take advantage of intelligent tiering so you can balance performance and cost for your storage needs. The solution also supports local and remote replication, and snapshots for flexible business continuity and disaster recovery capabilities.

1.3.4 Networking
The range of available adapters and switches to support key network protocols allows you to configure IBM Flex System to fit in your infrastructure. However, you can do so without sacrificing readiness for the future. The networking resources in IBM Flex System are standards-based, flexible, and fully integrated into the system. This combination gives you no-compromise networking for your solution. Network resources are virtualized and managed by workload. These capabilities are automated and optimized to make your network more reliable and simpler to manage.

IBM Flex System gives you these key networking capabilities:
Supports the networking infrastructure you have today, including Ethernet, Fibre Channel, and InfiniBand
Offers industry-leading performance with 1 Gb, 10 Gb, and 40 Gb Ethernet; 8 Gb and 16 Gb Fibre Channel; and FDR InfiniBand
Provides pay-as-you-grow scalability so you can add ports and bandwidth when needed

1.3.5 Infrastructure
The IBM Flex System Enterprise Chassis is the foundation of the offering, supporting intelligent workload deployment and management for maximum business agility. The 14-node, 10U chassis delivers high-performance connectivity for your integrated compute, storage, networking, and management resources. The chassis is designed to support multiple generations of technology, and offers independently scalable resource pools for higher utilization and lower cost per workload.


1.4 IBM Flex System overview


The expert integrated systems of IBM PureSystems are based on a new hardware and software platform: IBM Flex System.

1.4.1 IBM Flex System Manager


The IBM Flex System Manager (FSM) is a high-performance, scalable systems management appliance with a preloaded software stack. As an appliance, the hardware is closed, runs on a dedicated compute node platform, and is designed for a specific purpose. It is intended to configure, monitor, and manage IBM Flex System resources in multiple IBM Flex System Enterprise Chassis (Enterprise Chassis), optimizing time-to-value. The FSM provides an instant resource-oriented view of the Enterprise Chassis and its components, providing vital information for real-time monitoring.

An increased focus on optimizing time-to-value is evident in these features:
Setup wizards, including initial setup wizards, provide intuitive and quick setup of the FSM
The Chassis Map provides multiple view overlays to track health, firmware inventory, and environmental metrics
Configuration management for repeatable setup of compute, network, and storage devices
Remote presence application for remote access to compute nodes with single sign-on
Quick search provides results as you type

Beyond the physical world of inventory, configuration, and monitoring, IBM Flex System Manager enables virtualization and workload optimization for a new class of computing:
Resource utilization: Detects congestion, notification policies, and relocation of physical and virtual machines that include storage and network configurations within the network fabric
Resource pooling: Pooled network switching, with placement advisors that consider VM compatibility, processor, availability, and energy
Intelligent automation: Automated and dynamic VM placement based on utilization, energy, hardware predictive failure alerts, and host failures

Figure 1-1 shows the IBM Flex System Manager.

Figure 1-1 IBM Flex System Manager

1.4.2 IBM Flex System Enterprise Chassis


The IBM Flex System Enterprise Chassis offers compute, networking, and storage capabilities far exceeding those currently available. With the ability to handle up to 14 compute nodes, intermixing POWER7 and Intel x86, the Enterprise Chassis provides flexibility and tremendous compute capacity in a 10U package. Additionally, the rear of the chassis accommodates four high-speed networking switches. By interconnecting compute nodes, networking, and storage through a high-performance and scalable midplane, the Enterprise Chassis can support 40 Gb speeds.

The ground-up design of the Enterprise Chassis reaches new levels of energy efficiency through innovations in power, cooling, and air flow. Simpler controls and futuristic designs allow the Enterprise Chassis to break free of one-size-fits-all energy schemes. The ability to support the demands of tomorrow's workloads is built in with a new I/O architecture, which provides choice and flexibility in fabric and speed. With the ability to use Ethernet, InfiniBand, FC, FCoE, and iSCSI, the Enterprise Chassis is uniquely positioned to meet the growing I/O needs of the IT industry.

Figure 1-2 shows the IBM Flex System Enterprise Chassis.

Figure 1-2 The IBM Flex System Enterprise Chassis

1.4.3 Compute nodes


IBM Flex System offers compute nodes that vary in architecture, dimension, and capabilities. Optimized for efficiency, density, performance, reliability, and security, the portfolio includes the following IBM POWER7-based and Intel Xeon-based nodes:
IBM Flex System x240 Compute Node, a two-socket Intel Xeon-based compute node
IBM Flex System x220 Compute Node, a cost-optimized two-socket Intel Xeon-based compute node
IBM Flex System p260 Compute Node, a two-socket IBM POWER7-based compute node
IBM Flex System p24L Compute Node, a two-socket IBM POWER7-based compute node optimized for Linux
IBM Flex System p460 Compute Node, a four-socket IBM POWER7-based compute node

Figure 1-3 shows an IBM Flex System p460 Compute Node.

Figure 1-3 IBM Flex System p460 Compute Node

The nodes are complemented with leadership I/O capabilities of up to 16 x 10 Gb lanes per node. The following I/O adapters are offered:
IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter
IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter
IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter
IBM Flex System CN4054 10 Gb Virtual Fabric Adapter
IBM Flex System FC3052 2-port 8 Gb FC Adapter
IBM Flex System FC3172 2-port 8 Gb FC Adapter
IBM Flex System FC5022 2-port 16 Gb FC Adapter
IBM Flex System IB6132 2-port FDR InfiniBand Adapter
IBM Flex System IB6132 2-port QDR InfiniBand Adapter

1.4.4 I/O Modules


Networking in data centers is undergoing a transition from a discrete traditional model to a more flexible, optimized model. The network architecture in IBM Flex System has been designed to address the key challenges customers are facing today in their data centers. The key focus areas of the network architecture on this platform are unified network management, optimized and automated network virtualization, and simplified network infrastructure.

Providing innovation, leadership, and choice in the I/O module portfolio uniquely positions IBM Flex System to provide meaningful solutions that address customer needs. The following I/O modules are offered with IBM Flex System:
IBM Flex System Fabric EN4093 10 Gb Scalable Switch
IBM Flex System EN2092 1 Gb Ethernet Scalable Switch
IBM Flex System EN4091 10 Gb Ethernet Pass-thru
IBM Flex System FC3171 8 Gb SAN Switch
IBM Flex System FC3171 8 Gb SAN Pass-thru
IBM Flex System FC5022 16 Gb SAN Scalable Switch
IBM Flex System FC5022 24-port 16 Gb ESB SAN Scalable Switch
IBM Flex System IB6131 InfiniBand Switch

Figure 1-4 shows the IBM Flex System Fabric EN4093 10 Gb Scalable Switch.

Figure 1-4 IBM Flex System Fabric EN4093 10 Gb Scalable Switch

1.5 This book


This book describes the IBM Flex System components in detail. It addresses the technology and features of the chassis, compute nodes, management features, and connectivity and storage options. It starts with a discussion of the systems management features of the product portfolio.


Chapter 2. IBM PureFlex System


IBM PureFlex System provides an integrated computing system that combines servers, enterprise storage, networking, virtualization, and management into a single structure. Its built-in expertise allows you to manage and flexibly deploy integrated patterns of virtual and hardware resources through unified management.

This chapter includes the following sections:
2.1, IBM PureFlex System capabilities on page 12
2.2, IBM PureFlex System Express on page 13
2.3, IBM PureFlex System Standard on page 20
2.4, IBM PureFlex System Enterprise on page 27
2.5, IBM SmartCloud Entry on page 34

Copyright IBM Corp. 2012. All rights reserved.


2.1 IBM PureFlex System capabilities


The PureFlex System offers these advantages:
- Configurations that ease the acquisition experience and match your needs
- Optimized to align with targeted workloads and environments
- Designed for cloud, with SmartCloud Entry included on Standard and Enterprise
- Choice of architecture, operating system, and virtualization engine
- Designed for simplicity, with integrated, single-system management across physical and virtual resources
- Simplified ordering that accelerates deployment into your environments
- Ships as a single integrated entity directly to you
- Includes factory integration and lab services optimization

IBM PureFlex System has three preintegrated offerings that support compute, storage, and networking requirements. You can select from these offerings, which are designed for key client initiatives and help simplify ordering and configuration. As a result, PureFlex System reduces the cost, time, and complexity of system deployments. The IBM PureFlex System is offered in these configurations:
- Express: The infrastructure system for small and midsized businesses, and the most cost-effective entry point (2.2, IBM PureFlex System Express on page 13).
- Standard: The infrastructure system for application servers with supporting storage and networking (2.3, IBM PureFlex System Standard on page 20).
- Enterprise: The infrastructure system optimized for scalable cloud deployments. It has built-in redundancy for highly reliable and resilient operation to support critical applications and cloud services (2.4, IBM PureFlex System Enterprise on page 27).
A PureFlex System configuration has these main components:
- Preinstalled and configured IBM Flex System Enterprise Chassis
- Compute nodes with either IBM POWER or Intel Xeon processors
- IBM Flex System Manager, preinstalled with management software and licenses for software activation
- IBM Storwize V7000 external storage unit
- All hardware components preinstalled in an IBM PureFlex System 42U rack
- Choice of:
  - Operating system: IBM AIX, IBM i, Microsoft Windows, Red Hat Enterprise Linux, or SUSE Linux Enterprise Server
  - Virtualization software: IBM PowerVM, KVM, VMware vSphere, or Microsoft Hyper-V
  - SmartCloud Entry (see 2.5, IBM SmartCloud Entry on page 34)
- Complete pre-integrated software and hardware
- On-site services included to get you up and running quickly

Restriction: Orders for Power Systems compute nodes must be one of the three IBM PureFlex System configurations. Build-to-order configurations are not available.


2.2 IBM PureFlex System Express


The tables in this section show the hardware, software, and services that make up IBM PureFlex System Express:
- 2.2.1, Chassis
- 2.2.2, Top of rack Ethernet switch
- 2.2.3, Top of rack SAN switch on page 14
- 2.2.4, Compute nodes on page 14
- 2.2.5, IBM Flex System Manager on page 16
- 2.2.6, IBM Storwize V7000 on page 16
- 2.2.7, Rack cabinet on page 17
- 2.2.8, Software on page 17
- 2.2.9, Services on page 20

To specify IBM PureFlex System Express in the IBM ordering system, specify the indicator feature code listed in Table 2-1 for each system type.
Table 2-1   Express indicator feature code

AAS feature code | XCC feature code | Description
EFD1 | A2VS | IBM PureFlex System Express Indicator Feature Code

2.2.1 Chassis
Table 2-2 lists the major components of the IBM Flex System Enterprise Chassis, including the switches and options.

Remember: The tables in this section do not list all feature codes. Some features are not listed here for brevity.

Table 2-2   Components of the chassis and switches

AAS feature code | XCC feature code | Description | Minimum quantity
7893-92X | 8721-HC1 | IBM Flex System Enterprise Chassis | 1
3593 | A0TB | IBM Flex System Fabric EN4093 10 Gb Scalable Switch | 1
3282 | 5053 | 10 GbE 850 nm Fiber SFP+ Transceiver (SR) | 2
EB29 | 3268 | IBM BNT SFP RJ45 Transceiver | 5
3595 | A0TD | IBM Flex System FC3171 8 Gb SAN Switch | 1
3286 | 5075 | IBM 8 GB SFP+ Short-Wave Optical Transceiver | 2
3590 | A0UD | Additional PSU 2500 W | 0
4558 | 6252 | 2.5 m, 16A/100-240V, C19 to IEC 320-C20 power cord | 2
9039 | A0TM | Base Chassis Management Module | 1
3592 | A0UE | Additional Chassis Management Module | 1
9038 | None | Base Fan Modules (four) | 1
7805 | A0UA | Additional Fan Modules (two) | 0

2.2.2 Top of rack Ethernet switch


If more than one chassis is configured, a top of rack Ethernet switch must be added to the configuration. If only one chassis is configured, the switch is optional. Table 2-3 lists the switch components for an Ethernet switch.

Table 2-3   Components of the top of rack Ethernet switch

AAS feature code | XCC feature code | Description | Minimum quantity
7309-HC3 | 1455-64C | IBM System Networking RackSwitch G8264 | 0 (a)
7309-G52 | 1455-48E | IBM System Networking RackSwitch G8052 | 0 (a)
ECB5 | A1PJ | 3m IBM Passive DAC SFP+ Cable | 1 per EN4093 switch
EB25 | A1PJ | 3m IBM QSFP+ DAC Break Out Cable | 0

a. One required when two or more Enterprise Chassis are configured.
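The chassis-count rule above can be expressed as a simple check. This is an illustrative sketch only; the function and its name are hypothetical and are not part of any IBM ordering tool.

```python
def tor_ethernet_switches_required(chassis_count: int) -> int:
    """Minimum number of top of rack Ethernet switches for an IBM
    PureFlex System Express order (hypothetical helper).

    With one chassis the switch is optional (minimum 0); with two or
    more chassis at least one switch must be added.
    """
    if chassis_count < 1:
        raise ValueError("a configuration needs at least one chassis")
    return 0 if chassis_count == 1 else 1

# One chassis: top of rack switch optional; two chassis: one required.
print(tor_ethernet_switches_required(1))  # 0
print(tor_ethernet_switches_required(2))  # 1
```

The same rule applies to the top of rack SAN switch described in the next section.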

2.2.3 Top of rack SAN switch


If more than one chassis is configured, a top of rack SAN switch must be added to the configuration. If only one chassis is configured, the switch is optional. Table 2-4 lists the switch components for a SAN switch.

Table 2-4   Components of the top of rack SAN switch

AAS feature code | XCC feature code | Description | Minimum quantity
2498-B24 | 2498-B24 | 24-port SAN Switch | 0
5605 | 5605 | 5m optic cable | 1
2808 | 2808 | 8 Gb SFP transceivers (eight pack) | 1

2.2.4 Compute nodes


The PureFlex System Express requires one of the following compute nodes:
- IBM Flex System p260 Compute Node (IBM POWER7 based)
- IBM Flex System p24L Compute Node (IBM POWER7 based)
- IBM Flex System x240 Compute Node (Intel Xeon based)


Table 2-5 lists the major components of the IBM Flex System p260 Compute Node.
Table 2-5   Components of IBM Flex System p260 Compute Node

AAS feature code | Description | Minimum quantity
7895-22x | IBM Flex System p260 Compute Node | 1
1764 | IBM Flex System FC3172 2-port 8 Gb FC Adapter | 1
1762 | IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter | 1

Base processor (required; select only one, minimum 1, maximum 1):
EPR1 | 8 cores, 2 x 4-core, 3.3 GHz + 2-socket system board
EPR3 | 16 cores, 2 x 8-core, 3.2 GHz + 2-socket system board
EPR5 | 16 cores, 2 x 8-core, 3.55 GHz + 2-socket system board

Memory (8 GB per core minimum, with all dual inline memory module (DIMM) slots filled with the same memory type):
8145 | 32 GB (2x 16 GB), 1066 MHz, LP RDIMMs (1.35V)
8199 | 16 GB (2x 8 GB), 1066 MHz, VLP RDIMMs (1.35V)

Table 2-6 lists the major components of the IBM Flex System p24L Compute Node.
Table 2-6   Components of IBM Flex System p24L Compute Node

AAS feature code | Description | Minimum quantity
1457-7FL | IBM Flex System p24L Compute Node | 1
1764 | IBM Flex System FC3172 2-port 8 Gb FC Adapter | 1
1762 | IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter | 1

Base processor (required; select only one, minimum 1, maximum 1):
EPR7 | 12 cores, 2x 6-core, 3.7 GHz + 2-socket system board
EPR8 | 16 cores, 2x 8-core, 3.2 GHz + 2-socket system board
EPR9 | 16 cores, 2x 8-core, 3.55 GHz + 2-socket system board

Memory (2 GB per core minimum, with all DIMM slots filled with the same memory type):
8145 | 32 GB (2x 16 GB), 1066 MHz, LP RDIMMs (1.35V)
8199 | 16 GB (2x 8 GB), 1066 MHz, VLP RDIMMs (1.35V)
8196 | 8 GB (2x 4 GB), 1066 MHz, DDR3, VLP RDIMMs (1.35V)
EM04 | 4 GB (2x 2 GB), 1066 MHz, DDR3 DRAM, RDIMM (1Rx8)


Table 2-7 lists the major components of the IBM Flex System x240 Compute Node.
Table 2-7   Components of IBM Flex System x240 Compute Node

AAS feature code | XCC feature code | Description | Minimum quantity
7863-10X | 8737AC1 | IBM Flex System x240 Compute Node | 1
EN20 | A1BC | x240 with embedded 10 Gb Virtual Fabric (select one of the two base features) | 1
EN21 | A1BD | x240 without embedded 10 Gb Virtual Fabric (select one of the two base features) |
1764 | A2N5 | IBM Flex System FC3052 2-port 8 Gb FC Adapter | 1
1759 | A1R1 | IBM Flex System CN4054 10 Gb Virtual Fabric Adapter (select if x240 without embedded 10 Gb Virtual Fabric is selected: EN21/A1BD) | 1
EBK2 | 49Y8119 | IBM Flex System x240 USB Enablement Kit |
EBK3 | 41Y8300 | 2 GB USB Hypervisor Key (VMware 5.0) |

2.2.5 IBM Flex System Manager


Table 2-8 lists the major components of the IBM Flex System Manager.
Table 2-8   Components of the IBM Flex System Manager

AAS feature code | XCC feature code | Description | Minimum quantity
7955-01M | 8731AC1 | IBM Flex System Manager | 1
EB31 | 9220 | Platform Bundle preload indicator | 1
EM09 | None | 8 GB (2x 4 GB) 1333 MHz RDIMMs (1.35V) | 4 (a)
None | 8941 | 4 GB (1x 4 GB) 1333 MHz RDIMMs (1.35V) | 8
None | A1CW | Intel Xeon E5-2650 8C 2.0 GHz 20 MB 1600 MHz 95W | 1
1771 | 5420 | 200 GB, 1.8-inch, SATA MLC SSD | 2
3767 | A1AV | 1 TB 2.5-inch SATA 7.2K RPM hot-swap 6 Gbps HDD | 1

a. In the AAS system, FC EM09 is pairs of DIMMs. In the XCC system, FC 8941 is single DIMMs. The DIMMs are otherwise identical.

2.2.6 IBM Storwize V7000


Table 2-9 lists the major components of the IBM Storwize V7000 storage server.
Table 2-9   Components of the IBM Storwize V7000 storage server

AAS feature code | XCC feature code | Description | Minimum quantity
2076-124 | 2076-124 | IBM Storwize V7000 Controller | 1
5305 | 5305 | 5m Fiber-optic Cable | 2
3512 or 3514 | 3512 or 3514 | 200 GB 2.5-inch SSD or 400 GB 2.5-inch SSD | 2 (a)
0010 | 0010 | Storwize V7000 Software Preload | 1
6008 | 6008 | 8 GB Cache | 2
9730 | 9730 | Power cord to PDU (includes two power cords) | 1
9801 | 9801 | Power supplies | 2

a. If a Power Systems compute node is selected, at least eight drives must be installed in the Storwize V7000. If an Intel Xeon-based compute node is selected with SmartCloud Entry, four drives must be installed in the Storwize V7000.
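The drive minimums in the footnote can be sketched as a small helper. This is a hypothetical illustration of the rule as stated, not part of any IBM configurator; the function name and the `node_family` values are assumptions.

```python
def v7000_minimum_drives(node_family: str, smartcloud_entry: bool = False) -> int:
    """Minimum IBM Storwize V7000 drive count for a PureFlex order
    (hypothetical helper based on the footnote to Table 2-9).

    Power Systems compute nodes require at least eight drives; Intel
    Xeon-based nodes with SmartCloud Entry require four. Otherwise the
    base configuration's minimum of two drives (feature 3512/3514)
    applies.
    """
    if node_family == "power":
        return 8
    if node_family == "x86" and smartcloud_entry:
        return 4
    return 2

print(v7000_minimum_drives("power"))                       # 8
print(v7000_minimum_drives("x86", smartcloud_entry=True))  # 4
```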

2.2.7 Rack cabinet


Table 2-10 lists the major components and options of the rack.
Table 2-10   Components of the rack

AAS feature code | XCC feature code | Description | Minimum quantity
7953-94X | 93634AX | IBM 42U 1100 mm Enterprise V2 Dynamic Rack | 1
EC06 | None | Gray Door | 1
EC03 | None | Side Cover Kit (Black) | 1
EC02 | None | Rear Door (Black/flat) | 1
7196 | 5897 | Combo PDU C19/C13 3-Phase 60A | 2 (a)
7189+6492 | 5902 | Combo PDU C19/C13 1-Phase 60A | 2 (a)
7189+6491 | 5904 | Combo PDU C19/C13 1-Phase 63A International | 2 (a)
7189+6489 | 5903 | Combo PDU C19/C13 3-Phase 32A International | 2 (a)
7189+6667 | 5906 | Combo PDU C19/C13 1-Phase 32A Australia and NZ | 2 (a)
7189+6653 | None | Combo PDU C19/C13 3-Phase 16A International | 4 (a)

a. Select one PDU line item from this list. They are mutually exclusive. Most are quantity = 2, except for the 16A PDU, which is quantity = 4. The selection depends on the customer's country and utility power requirements.
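The PDU selection rule in the footnote can be sketched as a small helper. The function name and the dictionary are illustrative only, not part of any IBM configurator; the feature codes and descriptions are taken from Table 2-10.

```python
# Hypothetical sketch of the PDU selection rule from Table 2-10:
# exactly one PDU type is chosen, and the 3-phase 16A PDU ships as
# four units while every other option ships as two.
PDU_OPTIONS = {
    "7196": "Combo PDU C19/C13 3-Phase 60A",
    "7189+6492": "Combo PDU C19/C13 1-Phase 60A",
    "7189+6491": "Combo PDU C19/C13 1-Phase 63A International",
    "7189+6489": "Combo PDU C19/C13 3-Phase 32A International",
    "7189+6667": "Combo PDU C19/C13 1-Phase 32A Australia and NZ",
    "7189+6653": "Combo PDU C19/C13 3-Phase 16A International",
}

def pdu_quantity(aas_feature_code: str) -> int:
    """Number of PDUs ordered for the selected, mutually exclusive
    PDU feature code (hypothetical helper)."""
    if aas_feature_code not in PDU_OPTIONS:
        raise ValueError("unknown PDU feature code")
    return 4 if aas_feature_code == "7189+6653" else 2

print(pdu_quantity("7196"))       # 2
print(pdu_quantity("7189+6653"))  # 4
```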

2.2.8 Software
This section lists the software features of IBM PureFlex System Express.

AIX and IBM i


Table 2-11 lists the software features included with the Express configuration on POWER7 processor-based compute nodes for AIX and IBM i.
Table 2-11   Software features for IBM PureFlex System Express with AIX and IBM i on Power

Standard components - Express:
- IBM Storwize V7000 Software (all operating systems): 5639-VM1 V7000 Base PID; 5639-SM1 one-year Software Maintenance (SWMA)
- IBM Flex System Manager (all): 5765-FMX IBM Flex System Manager Standard; 5660-FMX 1-year SWMA
- Operating system:
  - AIX 6: 5765-G62 AIX Standard 6; 5771-SWM 1-year SWMA
  - AIX 7: 5765-G98 AIX Standard 7; 5771-SWM 1-year SWMA
  - IBM i 6.1: 5761-SS1 IBM i 6.1; 5733-SSP 1-year SWMA
  - IBM i 7.1: 5770-SS1 IBM i 7.1; 5733-SSP 1-year SWMA
- Virtualization (all): 5765-PVS PowerVM Standard; 5771-PVS 1-year SWMA
- Security (PowerSC), AIX only: 5765-PSE PowerSC Standard; 5660-PSE 1-year SWMA (not applicable to IBM i)
- Cloud software: none standard in Express configurations; optional (see the optional components below)

Optional components - Express Expansion:
- IBM Storwize V7000 Software: 5639-EV1 V7000 External Virtualization software; 5639-RM1 V7000 Remote Mirroring
- IBM Flex System Manager: 5765-FMS IBM Flex System Manager Advanced
- Operating system: 5765-AEZ AIX 6 Enterprise; 5765-G99 AIX 7 Enterprise
- Virtualization: 5765-PVE PowerVM Enterprise
- Security (PowerSC): not applicable
- Cloud software, AIX only: 5765-SCP SmartCloud Entry; 5660-SCP 1-year SWMA; requires upgrade to 5765-FMS IBM Flex System Manager Advanced (not applicable to IBM i)

RHEL and SUSE Linux on Power


Table 2-12 lists the software features included with the Express configuration on POWER7 processor-based compute nodes for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES).
Table 2-12   Software features for IBM PureFlex System Express with RHEL and SLES on Power

Standard components - Express:
- IBM Storwize V7000 Software: 5639-VM1 V7000 Base PID; 5639-SM1 1-year SWMA
- IBM Flex System Manager: 5765-FMX IBM Flex System Manager Standard; 5660-FMX 1-year SWMA
- Operating system: RHEL: 5639-RHP RHEL 5 & 6; SLES: 5639-S11 SLES 11
- Virtualization: 5765-PVS PowerVM Standard; 5771-PVS 1-year SWMA
- Cloud software (optional): 5765-SCP SmartCloud Entry; 5660-SCP 1-year SWMA; requires upgrade to 5765-FMS IBM Flex System Manager Advanced

Optional components - Express Expansion:
- IBM Storwize V7000 Software: 5639-EV1 V7000 External Virtualization software; 5639-RM1 V7000 Remote Mirroring
- IBM Flex System Manager: 5765-FMS IBM Flex System Manager Advanced
- Virtualization: 5765-PVE PowerVM Enterprise

Intel Xeon-based compute nodes


Table 2-13 lists the software features included with the Express configuration on Intel Xeon-based compute nodes.
Table 2-13   Software features for IBM PureFlex System Express on Intel Xeon-based compute nodes

Standard components - Express:
- IBM Storwize V7000 Software (AAS): 5639-VM1 V7000 Base PID; 5639-SM1 1-year SWMA
- IBM Flex System Manager: AAS: 5765-FMX IBM Flex System Manager Standard; 5660-FMX 1-year SWMA. HVEC: 94Y9782 IBM Flex System Manager Standard, 1-year SWMA
- Operating system: varies
- Virtualization: VMware ESXi, selectable in the hardware configuration
- Cloud software (optional): AAS: 5765-SCP SmartCloud Entry; 5660-SCP 1-year SWMA. HVEC: 5641-SC1 SmartCloud Entry with 1-year SWMA

Optional components - Express Expansion:
- IBM Storwize V7000 Software: 5639-EV1 V7000 External Virtualization software; 5639-RM1 V7000 Remote Mirroring
- IBM Flex System Manager: AAS: 5765-FMS IBM Flex System Manager Advanced. HVEC: 94Y9783 IBM Flex System Manager Advanced
- Operating system: AAS: 5639-OSX RHEL for x86; 5639-W28 Windows 2008 R2; 5639-CAL Windows 2008 Client Access. HVEC: 5731RSI RHEL for x86 (L3 support only); 5731RSR RHEL for x86 (L1-L3 support); 5731W28 Windows 2008 R2; 5731CAL Windows 2008 Client Access
- Virtualization and Cloud software: not applicable


2.2.9 Services
IBM PureFlex System Express includes the following services:
- Service and Support offerings:
  - Software Maintenance: 1 year of 9x5 (9 hours per day, 5 days per week) coverage
  - Hardware Maintenance: 3 years of 9x5 Next Business Day service
- Technical Support Services (TSS). Essential minimum service level offering for every IBM PureFlex System Express configuration:
  - Three years with one microcode analysis per year
- Optional TSS offerings for IBM PureFlex System Express:
  - Three years of Warranty Service upgrade to 24x7x4 service
  - Three years of SWMA on applicable products
  - Three years of Software Support on Windows Server / Linux and VMware environments
  - Three years of Enhanced Technical Support
- Lab Services: Three days of on-site Lab Services
  - If the first compute node is a p260 or p460, 6911-300 is specified
  - If the first compute node is an x240, 6911-100 is specified

2.3 IBM PureFlex System Standard


The tables in this section show the hardware, software, and services that make up IBM PureFlex System Standard:
- 2.3.1, Chassis on page 21
- 2.3.2, Top of rack Ethernet switch on page 21
- 2.3.3, Top of rack SAN switch on page 22
- 2.3.4, Compute nodes on page 22
- 2.3.5, IBM Flex System Manager on page 23
- 2.3.6, IBM Storwize V7000 on page 23
- 2.3.7, Rack cabinet on page 24
- 2.3.8, Software on page 24
- 2.3.9, Services on page 26

To specify IBM PureFlex System Standard in the IBM ordering system, specify the indicator feature code listed in Table 2-14 for each system type.
Table 2-14   Standard indicator feature code

AAS feature code | XCC feature code | Description
EFD2 | A2VT | IBM PureFlex System Standard Indicator Feature Code: First of each MTM (for example, first compute node)

Remember: The tables in this section do not list all feature codes. Some features are not listed here for brevity.


2.3.1 Chassis
Table 2-15 lists the major components of the IBM Flex System Enterprise Chassis, including the switches and options.

Table 2-15   Components of the chassis and switches

AAS feature code | XCC feature code | Description | Minimum quantity
7893-92X | 8721-HC1 | IBM Flex System Enterprise Chassis | 1
3593 | A0TB | IBM Flex System Fabric EN4093 10 Gb Scalable Switch | 1
3282 | 5053 | 10 GbE 850 nm Fiber SFP+ Transceiver (SR) | 4
EB29 | 3268 | IBM BNT SFP RJ45 Transceiver | 5
3595 | A0TD | IBM Flex System FC3171 8 Gb SAN Switch | 2
3286 | 5075 | IBM 8 GB SFP+ Short-Wave Optical Transceiver | 4
3590 | A0UD | Additional PSU 2500 W | 2
4558 | 6252 | 2.5 m, 16A/100-240V, C19 to IEC 320-C20 Power Cord | 4
9039 | A0TM | Base Chassis Management Module | 1
3592 | A0UE | Additional Chassis Management Module | 1
9038 | None | Base Fan Modules (four) | 1
7805 | A0UA | Additional Fan Modules (two) | 1

2.3.2 Top of rack Ethernet switch


If more than one chassis is configured, a top of rack Ethernet switch must be added to the configuration. If only one chassis is configured, the top of rack switch is optional. Table 2-16 lists the switch components for an Ethernet switch.
Table 2-16   Components of the top of rack Ethernet switch

AAS feature code | XCC feature code | Description | Minimum quantity
7309-HC3 | 1455-64C | IBM System Networking RackSwitch G8264 | 0 (a)
7309-G52 | 1455-48E | IBM System Networking RackSwitch G8052 | 0 (a)
ECB5 | A1PJ | 3m IBM Passive DAC SFP+ Cable | 1 per EN4093 switch
EB25 | A1PJ | 3m IBM QSFP+ DAC Break Out Cable | 0

a. One required when two or more Enterprise Chassis are configured.


2.3.3 Top of rack SAN switch


If more than one chassis is configured, a top of rack SAN switch must be added to the configuration. If only one chassis is configured, the top of rack switch is optional. Table 2-17 lists the switch components for a SAN switch.

Table 2-17   Components of the top of rack SAN switch

AAS feature code | XCC feature code | Description | Minimum quantity
2498-B24 | 2498-B24 | 24-port SAN Switch | 0
5605 | 5605 | 5m optic cable | 1
2808 | 2808 | 8 Gb SFP transceivers (eight pack) | 1

2.3.4 Compute nodes


The PureFlex System Standard requires one of the following compute nodes:
- IBM Flex System p460 Compute Node (IBM POWER7 based)
- IBM Flex System x240 Compute Node (Intel Xeon based)

Table 2-18 lists the major components of the IBM Flex System p460 Compute Node.

Table 2-18   Components of IBM Flex System p460 Compute Node

AAS feature code | Description | Minimum quantity
7895-42x | IBM Flex System p460 Compute Node | 1
1764 | IBM Flex System FC3172 2-port 8 Gb FC Adapter | 2
1762 | IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter | 2

Base processor (required; select only one, minimum 1, maximum 1):
EPR2 | 16 cores (4 x 4-core), 3.3 GHz + 4-socket system board
EPR4 | 32 cores (4 x 8-core), 3.2 GHz + 4-socket system board
EPR6 | 32 cores (4 x 8-core), 3.55 GHz + 4-socket system board

Memory (8 GB per core minimum, with all DIMM slots filled with the same memory type):
8145 | 32 GB (2 x 16 GB), 1066 MHz, LP RDIMMs (1.35V)
8199 | 16 GB (2 x 8 GB), 1066 MHz, VLP RDIMMs (1.35V)

Table 2-19 lists the major components of the IBM Flex System x240 Compute Node.
Table 2-19   Components of IBM Flex System x240 Compute Node

AAS feature code | XCC feature code | Description | Minimum quantity
7863-10X | 8737AC1 | IBM Flex System x240 Compute Node | 1
EN20 | A1BC | x240 with embedded 10 Gb Virtual Fabric (select one of the two base features) | 1
EN21 | A1BD | x240 without embedded 10 Gb Virtual Fabric (select one of the two base features) |
1764 | A2N5 | IBM Flex System FC3052 2-port 8 Gb FC Adapter | 1
1759 | A1R1 | IBM Flex System CN4054 10 Gb Virtual Fabric Adapter (select if x240 without embedded 10 Gb Virtual Fabric is selected: EN21/A1BD) | 1
EBK2 | 49Y8119 | IBM Flex System x240 USB Enablement Kit |
EBK3 | 41Y8300 | 2 GB USB Hypervisor Key (VMware 5.0) |

2.3.5 IBM Flex System Manager


Table 2-20 lists the major components of the IBM Flex System Manager.
Table 2-20   Components of the IBM Flex System Manager

AAS feature code | XCC feature code | Description | Minimum quantity
7955-01M | 8731AC1 | IBM Flex System Manager | 1
EB31 | 9220 | Platform Bundle preload indicator | 1
EM09 | None | 8 GB (2x 4 GB) 1333 MHz RDIMMs (1.35V) | 4 (a)
None | 8941 | 4 GB (1x 4 GB) 1333 MHz RDIMMs (1.35V) | 8
None | A1CW | Intel Xeon E5-2650 8C 2.0 GHz 20 MB 1600 MHz 95W | 1
1771 | 5420 | 200 GB, 1.8-inch, SATA MLC SSD | 2
3767 | A1AV | 1 TB 2.5-inch SATA 7.2K RPM hot-swap 6 Gbps HDD | 1

a. In the AAS system, FC EM09 is pairs of DIMMs. In the XCC system, FC 8941 is single DIMMs. The DIMMs are otherwise identical.

2.3.6 IBM Storwize V7000


Table 2-21 lists the major components of the IBM Storwize V7000 storage server.
Table 2-21   Components of the IBM Storwize V7000 storage server

AAS feature code | XCC feature code | Description | Minimum quantity
2076-124 | 2076-124 | IBM Storwize V7000 Controller | 1
5305 | 5305 | 5m Fiber-optic Cable | 2
3512 or 3514 | 3512 or 3514 | 200 GB 2.5-inch SSD or 400 GB 2.5-inch SSD | 2 (a)
0010 | 0010 | Storwize V7000 Software Preload | 1
6008 | 6008 | 8 GB Cache | 2
9730 | 9730 | Power cord to PDU (includes two power cords) | 1
9801 | 9801 | Power supplies | 2

a. If a Power Systems compute node is selected, at least eight drives must be installed in the Storwize V7000. If an Intel Xeon-based compute node is selected with SmartCloud Entry, four drives must be installed in the Storwize V7000.

2.3.7 Rack cabinet


Table 2-22 lists the major components and options of the rack.
Table 2-22   Components of the rack

AAS feature code | XCC feature code | Description | Minimum quantity
7953-94X | 93634AX | IBM 42U 1100 mm Enterprise V2 Dynamic Rack | 1
EC06 | None | Gray Door | 1
EC03 | None | Side Cover Kit (Black) | 1
EC02 | None | Rear Door (Black/flat) | 1
7196 | 5897 | Combo PDU C19/C13 3-Phase 60A | 2 (a)
7189+6492 | 5902 | Combo PDU C19/C13 1-Phase 60A | 2 (a)
7189+6491 | 5904 | Combo PDU C19/C13 1-Phase 63A International | 2 (a)
7189+6489 | 5903 | Combo PDU C19/C13 3-Phase 32A International | 2 (a)
7189+6667 | 5906 | Combo PDU C19/C13 1-Phase 32A Australia and NZ | 2 (a)
7189+6653 | None | Combo PDU C19/C13 3-Phase 16A International | 4 (a)

a. Select one PDU line item from this list. They are mutually exclusive. Most are quantity = 2, except for the 16A PDU, which is quantity = 4. The selection depends on your country and utility power requirements.

2.3.8 Software
This section lists the software features of IBM PureFlex System Standard.

AIX and IBM i


Table 2-23 lists the software features included with the Standard configuration on POWER7 processor-based compute nodes for AIX and IBM i.
Table 2-23   Software features for IBM PureFlex System Standard with AIX and IBM i on Power

Standard components - Standard:
- IBM Storwize V7000 Software (all operating systems): 5639-VM1 V7000 Base PID; 5639-SM3 3-year SWMA
- IBM Flex System Manager (all): 5765-FMS IBM Flex System Manager Advanced; 5662-FMS 3-year SWMA
- Operating system:
  - AIX 6: 5765-G62 AIX Standard 6; 5773-SWM 3-year SWMA
  - AIX 7: 5765-G98 AIX Standard 7; 5773-SWM 3-year SWMA
  - IBM i 6.1: 5761-SS1 IBM i 6.1; 5773-SWM 3-year SWMA
  - IBM i 7.1: 5770-SS1 IBM i 7.1; 5773-SWM 3-year SWMA
- Virtualization (all): 5765-PVE PowerVM Enterprise; 5773-PVE 3-year SWMA
- Security (PowerSC), AIX only: 5765-PSE PowerSC Standard; 5662-PSE 3-year SWMA (not applicable to IBM i)
- Cloud software (default, but optional), AIX only: 5765-SCP SmartCloud Entry; 5662-SCP 3-year SWMA (not applicable to IBM i)

Optional components - Standard Expansion:
- IBM Storwize V7000 Software: 5639-EV1 V7000 External Virtualization software; 5639-RM1 V7000 Remote Mirroring
- IBM Flex System Manager: not applicable
- Operating system: 5765-AEZ AIX 6 Enterprise; 5765-G99 AIX 7 Enterprise
- Virtualization: 5765-PVE PowerVM Enterprise
- Security (PowerSC) and Cloud software: not applicable

RHEL and SUSE Linux on Power


Table 2-24 lists the software features included with the Standard configuration on POWER7 processor-based compute nodes for RHEL and SLES.
Table 2-24   Software features for IBM PureFlex System Standard with RHEL and SLES on Power

Standard components - Standard:
- IBM Storwize V7000 Software: 5639-VM1 V7000 Base PID; 5639-SM3 3-year SWMA
- IBM Flex System Manager: 5765-FMS IBM Flex System Manager Advanced; 5662-FMS 3-year SWMA
- Operating system: RHEL: 5639-RHP RHEL 5 & 6; SLES: 5639-S11 SLES 11
- Virtualization: 5765-PVE PowerVM Enterprise; 5773-PVE 3-year SWMA
- Cloud software (optional): 5765-SCP SmartCloud Entry; 5662-SCP 3-year SWMA

Optional components - Standard Expansion:
- IBM Storwize V7000 Software: 5639-EV1 V7000 External Virtualization software; 5639-RM1 V7000 Remote Mirroring
- IBM Flex System Manager: not applicable
- Virtualization: not applicable


Intel Xeon-based compute nodes


Table 2-25 lists the software features included with the Standard configuration on Intel Xeon-based compute nodes.
Table 2-25   Software features for IBM PureFlex System Standard on Intel Xeon-based compute nodes

Standard components - Standard:
- IBM Storwize V7000 Software (AAS): 5639-VM1 V7000 Base PID; 5639-SM3 3-year SWMA
- IBM Flex System Manager: AAS: 5765-FMX IBM Flex System Manager Standard; 5662-FMX 3-year SWMA. HVEC: 94Y9787 IBM Flex System Manager Standard, 3-year SWMA
- Operating system: varies
- Virtualization: VMware ESXi, selectable in the hardware configuration
- Cloud software (optional; Windows and RHEL only): AAS: 5765-SCP SmartCloud Entry; 5662-SCP 3-year SWMA. HVEC: 5641-SC3 SmartCloud Entry, 3-year SWMA

Optional components - Standard Expansion:
- IBM Storwize V7000 Software: 5639-EV1 V7000 External Virtualization software; 5639-RM1 V7000 Remote Mirroring
- IBM Flex System Manager: AAS: 5765-FMS IBM Flex System Manager Advanced. HVEC: 94Y9783 IBM Flex System Manager Advanced
- Operating system: AAS: 5639-OSX RHEL for x86; 5639-W28 Windows 2008 R2; 5639-CAL Windows 2008 Client Access. HVEC: 5731RSI RHEL for x86 (L3 support only); 5731RSR RHEL for x86 (L1-L3 support); 5731W28 Windows 2008 R2; 5731CAL Windows 2008 Client Access
- Virtualization: VMware ESXi, selectable in the hardware configuration
- Cloud software: not applicable

2.3.9 Services
IBM PureFlex System Standard includes the following services:
- Service and Support offerings:
  - Software Maintenance: 1 year of 9x5 (9 hours per day, 5 days per week) coverage
  - Hardware Maintenance: 3 years of 9x5 Next Business Day service
- Technical Support Services. Essential minimum service level offering for every IBM PureFlex System Standard configuration:
  - Three years with one microcode analysis per year
  - Three years of Warranty Service upgrade to 24x7x4 service
  - Three years of Account Advocate or Enhanced Technical Support (9x5) and software support prerequisites


- Lab Services: Five days of on-site Lab Services
  - If the first compute node is a p260 or p460, 6911-300 is specified
  - If the first compute node is an x240, 6911-100 is specified

2.4 IBM PureFlex System Enterprise


The tables in this section show the hardware, software, and services that make up IBM PureFlex System Enterprise:
- 2.4.1, Chassis
- 2.4.2, Top of rack Ethernet switch on page 28
- 2.4.3, Top of rack SAN switch on page 28
- 2.4.4, Compute nodes on page 29
- 2.4.5, IBM Flex System Manager on page 30
- 2.4.6, IBM Storwize V7000 on page 30
- 2.4.7, Rack cabinet on page 31
- 2.4.8, Software on page 31
- 2.4.9, Services on page 33

To specify IBM PureFlex System Enterprise in the IBM ordering system, specify the indicator feature code listed in Table 2-26 for each system type.
Table 2-26   Enterprise indicator feature code

AAS feature code | XCC feature code | Description
EFD3 | A2VU | IBM PureFlex System Enterprise Indicator Feature Code: First of each MTM (for example, first compute node)

Remember: The tables in this section do not list all feature codes. Some features are not listed here for brevity.

2.4.1 Chassis
Table 2-27 lists the major components of the IBM Flex System Enterprise Chassis, including the switches and options.

Table 2-27   Components of the chassis and switches

AAS feature code | XCC feature code | Description | Minimum quantity
7893-92X | 8721-HC1 | IBM Flex System Enterprise Chassis | 1
3593 | A0TB | IBM Flex System Fabric EN4093 10 Gb Scalable Switch | 2
3596 | A1EL | IBM Flex System Fabric EN4093 10 Gb Scalable Switch Upgrade 1 | 2
3597 | A1EM | IBM Flex System Fabric EN4093 10 Gb Scalable Switch Upgrade 2 | 2
3282 | 5053 | 10 GbE 850 nm Fiber SFP+ Transceiver (SR) | 4
EB29 | 3268 | IBM BNT SFP RJ45 Transceiver | 6
3595 | A0TD | IBM Flex System FC3171 8 Gb SAN Switch | 2
3286 | 5075 | IBM 8 GB SFP+ Short-Wave Optical Transceiver | 8
3590 | A0UD | Additional PSU 2500 W | 4
4558 | 6252 | 2.5 m, 16A/100-240V, C19 to IEC 320-C20 Power Cord | 6
9039 | A0TM | Base Chassis Management Module | 1
3592 | A0UE | Additional Chassis Management Module | 1
9038 | None | Base Fan Modules (four) | 1
7805 | A0UA | Additional Fan Modules (two) | 2

2.4.2 Top of rack Ethernet switch


A minimum of two top of rack Ethernet switches are required in the Enterprise configuration. Table 2-28 lists the switch components for an Ethernet switch.
Table 2-28   Components of the top of rack Ethernet switch

AAS feature code | XCC feature code | Description | Minimum quantity
7309-HC3 | 1455-64C | IBM System Networking RackSwitch G8264 | 2 (a)
7309-G52 | 1455-48E | IBM System Networking RackSwitch G8052 | 2 (a)
ECB5 | A1PJ | 3m IBM Passive DAC SFP+ Cable | 1 per EN4093 switch
EB25 | A1PJ | 3m IBM QSFP+ DAC Break Out Cable | 1

a. For IBM Power Systems configurations, two are required. For System x configurations, two are required when two or more Enterprise Chassis are configured.

2.4.3 Top of rack SAN switch


A minimum of two top of rack SAN switches are required in the Enterprise configuration. Table 2-29 lists the switch components for a SAN switch.
Table 2-29   Components of the top of rack SAN switch

AAS feature code | XCC feature code | Description | Minimum quantity
2498-B24 | 2498-B24 | 24-port SAN Switch | 0
5605 | 5605 | 5m optic cable | 1
2808 | 2808 | 8 Gb SFP transceivers (eight pack) | 1


2.4.4 Compute nodes


The PureFlex System Enterprise requires one of the following compute nodes:
- IBM Flex System p460 Compute Node (IBM POWER7 based)
- IBM Flex System x240 Compute Node (Intel Xeon based)

Table 2-30 lists the major components of the IBM Flex System p460 Compute Node.

Table 2-30   Components of IBM Flex System p460 Compute Node

AAS feature code | Description | Minimum quantity
7895-42x | IBM Flex System p460 Compute Node | 2
1764 | IBM Flex System FC3172 2-port 8 Gb FC Adapter | 2
1762 | IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter | 2

Base processor (required; select only one, minimum 1, maximum 1):
EPR2 | 16 cores (4 x 4-core), 3.3 GHz + 4-socket system board
EPR4 | 32 cores (4 x 8-core), 3.2 GHz + 4-socket system board
EPR6 | 32 cores (4 x 8-core), 3.55 GHz + 4-socket system board

Memory (8 GB per core minimum, with all DIMM slots filled with the same memory type):
8145 | 32 GB (2 x 16 GB), 1066 MHz, LP RDIMMs (1.35V)
8199 | 16 GB (2 x 8 GB), 1066 MHz, VLP RDIMMs (1.35V)

Table 2-31 lists the major components of the IBM Flex System x240 Compute Node.
Table 2-31   Components of IBM Flex System x240 Compute Node

  AAS feature   XCC feature   Description                                            Minimum quantity
  7863-10X      8737AC1       IBM Flex System x240 Compute Node                      2
  EN20          A1BC          x240 with embedded 10 Gb Virtual Fabric                1 per (select one of
  EN21          A1BD          x240 without embedded 10 Gb Virtual Fabric             these base features)
  1764          A2N5          IBM Flex System FC3052 2-port 8 Gb FC Adapter          1 per
  1759          A1R1          IBM Flex System CN4054 10 Gb Virtual Fabric Adapter    1 per (select if x240 without
                                                                                    embedded 10 Gb Virtual Fabric
                                                                                    is selected: EN21/A1BD)
  EBK2          49Y8119       IBM Flex System x240 USB Enablement Kit
  EBK3          41Y8300       2 GB USB Hypervisor Key (VMware 5.0)


2.4.5 IBM Flex System Manager


Table 2-32 lists the major components of the IBM Flex System Manager.
Table 2-32   Components of the IBM Flex System Manager

  AAS feature   XCC feature   Description                                          Minimum quantity
  7955-01M      8731AC1       IBM Flex System Manager                              1
  EB31          9220          Platform Bundle preload indicator                    1
  EM09          None          8 GB (2 x 4 GB) 1333 MHz RDIMMs (1.35V)              4 (a)
  None          8941          4 GB (1 x 4 GB) 1333 MHz RDIMMs (1.35V)              8
  None          A1CW          Intel Xeon E5-2650 8C 2.0 GHz 20 MB 1600 MHz 95W     1
  1771          5420          200 GB, 1.8", SATA MLC SSD                           2
  3767          A1AV          1TB 2.5" SATA 7.2K RPM hot-swap 6 Gbps HDD           1

  a. In the AAS system, FC EM09 is pairs of DIMMs. In the XCC system, FC 8941 is single DIMMs. The DIMMs are otherwise identical.

2.4.6 IBM Storwize V7000


Table 2-33 lists the major components of the IBM Storwize V7000 storage server.
Table 2-33   Components of the IBM Storwize V7000 storage server (AAS and XCC feature codes are identical)

  Feature code   Description                                      Minimum quantity
  2076-124       IBM Storwize V7000 Controller                    1
  5305           5m Fiber-optic Cable                             4
  3512           200 GB 2.5-inch SSD                              2 (a)
  3514           400 GB 2.5-inch SSD (alternative to FC 3512)
  0010           Storwize V7000 Software Preload                  1
  6008           8 GB Cache                                       2
  9730           Power cord to PDU (includes two power cords)     1
  9801           Power supplies                                   2

  a. If a Power Systems compute node is selected, at least eight drives must be installed in the Storwize V7000. If an Intel Xeon-based compute node is selected with SmartCloud Entry, four drives must be installed in the Storwize V7000.


2.4.7 Rack cabinet


Table 2-34 lists the major components of the rack and options.
Table 2-34   Components of the rack

  AAS feature   XCC feature   Description                                        Minimum quantity
  7953-94X      93634AX       IBM 42U 1100 mm Enterprise V2 Dynamic Rack         1
  EC06          None          Gray Door                                          1
  EC03          None          Side Cover Kit (Black)                             1
  EC02          None          Rear Door (Black/flat)                             1
  7196          5897          Combo PDU C19/C13 3-Phase 60A                      2 (a)
  7189+6492     5902          Combo PDU C19/C13 1-Phase 60A                      2 (a)
  7189+6491     5904          Combo PDU C19/C13 1-Phase 63A International        2 (a)
  7189+6489     5903          Combo PDU C19/C13 3-Phase 32A International        2 (a)
  7189+6667     5906          Combo PDU C19/C13 1-Phase 32A Australia and NZ     2 (a)
  7189+6653     None          Combo PDU C19/C13 3-Phase 16A International        4 (a)

  a. Select one PDU line item from this list; they are mutually exclusive. Most are quantity 2, except for the 16A PDU, which is quantity 4. The selection depends on your country and utility power requirements.

2.4.8 Software
This section lists the software features of IBM PureFlex System Enterprise.

AIX and IBM i


Table 2-35 lists the software features included with the Enterprise configuration on POWER7 processor-based compute nodes for AIX and IBM i.
Table 2-35   Software features for IBM PureFlex System Enterprise with AIX and IBM i on Power

  Standard components - Standard
  IBM Storwize V7000 Software (all):  5639-VM1 V7000 Base PID; 5639-SM3 3-year SWMA
  IBM Flex System Manager (all):      5765-FMS IBM Flex System Manager Advanced; 5662-FMS 3-year SWMA
  Operating system:
    AIX 6:      5765-G62 AIX Standard 6; 5773-SWM 3-year SWMA
    AIX 7:      5765-G98 AIX Standard 7; 5773-SWM 3-year SWMA
    IBM i 6.1:  5761-SS1 IBM i 6.1; 5773-SWM 3-year SWMA
    IBM i 7.1:  5770-SS1 IBM i 7.1; 5773-SWM 3-year SWMA
  Virtualization (all):  5765-PVE PowerVM Enterprise; 5773-PVE 3-year SWMA
  Security (PowerSC):
    AIX 6 and AIX 7:  5765-PSE PowerSC Standard; 5662-PSE 3-year SWMA
    IBM i:            Not applicable
  Cloud Software (default but optional):
    AIX 6 and AIX 7:  5765-SCP SmartCloud Entry; 5662-SCP 3-year SWMA
    IBM i:            Not applicable

  Optional components - Standard Expansion
  IBM Storwize V7000 Software:  5639-EV1 V7000 External Virtualization software; 5639-RM1 V7000 Remote Mirroring
  IBM Flex System Manager:      Not applicable
  Operating system:
    AIX 6:   5765-AEZ AIX 6 Enterprise
    AIX 7:   5765-G99 AIX 7 Enterprise
    IBM i:   Not applicable
  Virtualization:  5765-PVE PowerVM Enterprise (IBM i: Not applicable)
  Security (PowerSC):  Not applicable
  Cloud Software (optional):  Not applicable

RHEL and SUSE Linux on Power


Table 2-36 lists the software features included with the Enterprise configuration on POWER7 processor-based compute nodes for RHEL and SLES.
Table 2-36   Software features for IBM PureFlex System Enterprise with RHEL and SLES on Power

  Standard components - Standard
  IBM Storwize V7000 Software:  5639-VM1 V7000 Base PID; 5639-SM3 3-year SWMA
  IBM Flex System Manager:      5765-FMS IBM Flex System Manager Advanced; 5662-FMS 3-year SWMA
  Operating system:
    Red Hat Enterprise Linux (RHEL):     5639-RHP RHEL 5 & 6
    SUSE Linux Enterprise Server (SLES): 5639-S11 SLES 11
  Virtualization:  5765-PVE PowerVM Enterprise; 5773-PVE 3-year SWMA
  Cloud Software (optional):  5765-SCP SmartCloud Entry; 5662-SCP 3-year SWMA

  Optional components - Standard Expansion
  IBM Storwize V7000 Software:  5639-EV1 V7000 External Virtualization software; 5639-RM1 V7000 Remote Mirroring
  IBM Flex System Manager:  Not applicable
  Virtualization:           Not applicable


Intel Xeon-based compute nodes


Table 2-37 lists the software features included with the Enterprise configuration on Intel Xeon-based compute nodes.
Table 2-37   Software features for IBM PureFlex System Enterprise on Intel Xeon-based compute nodes

  Standard components - Enterprise
  IBM Storwize V7000 Software (AAS):  5639-VM1 V7000 Base PID; 5639-SM3 3-year SWMA
  IBM Flex System Manager:
    AAS:   5765-FMX IBM Flex System Manager Standard; 5662-FMX 3-year SWMA
    HVEC:  94Y9787 IBM Flex System Manager Standard, 3-year SWMA
  Operating system:  Varies
  Virtualization:    VMware ESXi, selectable in the hardware configuration
  Cloud Software (optional):
    AAS:   5765-SCP SmartCloud Entry; 5662-SCP 3-year SWMA
    HVEC:  5641-SC3 SmartCloud Entry, 3-year SWMA

  Optional components - Enterprise Expansion
  IBM Storwize V7000 Software (AAS):  5639-EV1 V7000 External Virtualization software; 5639-RM1 V7000 Remote Mirroring
  IBM Flex System Manager:
    AAS:   5765-FMS IBM Flex System Manager Advanced
    HVEC:  94Y9783 IBM Flex System Manager Advanced
  Operating system:
    AAS:   5639-OSX RHEL for x86; 5639-W28 Windows 2008 R2; 5639-CAL Windows 2008 Client Access
    HVEC:  5731RSI RHEL for x86 (L3 support only); 5731RSR RHEL for x86 (L1-L3 support); 5731W28 Windows 2008 R2; 5731CAL Windows 2008 Client Access
  Virtualization:  VMware ESXi, selectable in the hardware configuration
  Cloud Software:  Not applicable

2.4.9 Services
IBM PureFlex System Enterprise includes the following services:

Service & Support offerings:
  Software Maintenance: 1 year of 9x5 (9 hours per day, 5 days per week)
  Hardware maintenance: 3 years of 9x5 Next Business Day service

Technical Support Services (essential minimum service level offering for every IBM PureFlex System Standard configuration):
  Three years with two microcode analyses per year
  Three years of Warranty Service upgrade to 24x7x4 service
  Three years of Account Advocate or Enhanced Technical Support (24x7) and software support prerequisites

Lab Services:
  Seven days of on-site Lab services
  If the first compute node is a p260 or p460, 6911-300 is specified
  If the first compute node is an x240, 6911-100 is specified

2.5 IBM SmartCloud Entry


Delivering new capabilities is a challenge as your data, applications, and physical hardware (servers, storage, and networks) all grow. The traditional means of deploying, provisioning, managing, and maintaining physical and virtual resources can no longer meet the demands on IT infrastructure. Virtualization simplifies management, improves efficiency and utilization, and helps manage growth beyond physical resource boundaries. With SmartCloud Entry, you can build on your current virtualization strategies to continue to gain IT efficiency, flexibility, and control.

Adopting cloud in IT environments has the following advantages:
  Reduced data center footprint and management cost
  Automated server request and provisioning
  Improved utilization, workload management, and capability to deliver new services
  Rapid service deployment, improving from several weeks to just days or hours
  Built-in metering
  Improved IT governance and risk management

IBM simplifies the customer journey from server consolidation to cloud management. IBM provides complete cloud solutions that include hardware, software technologies, and services for implementing a private cloud. They add value on top of virtualized infrastructure with IBM SmartCloud Entry for Cloud offerings. The product provides a comprehensive cloud software stack with capabilities that you otherwise get only with multiple products from other providers, such as VMware. It enables you to quickly deploy your cloud environment, and IBM also offers advanced cloud capabilities when those features are required. You can take advantage of existing IBM server investments and virtualized environments to deploy IBM SmartCloud Entry with the essential cloud infrastructure capabilities:

Create images: Simplify storage of thousands of images.
  Easily create new golden master images and software appliances by using corporate standard operating systems
  Convert images from physical systems or between various x86 hypervisors
  Reliably track images to ensure compliance and minimize security risks
  Optimize resources, reducing the number of virtualized images and the storage required for them

Deploy VMs: Reduce time to value for new workloads from months to a few days.
  Deploy application images across compute and storage resources
  User self-service for improved responsiveness
  Ensure security through VM isolation and project-level user access controls
  Easy to use: you do not need to know all the details of the infrastructure
  Investment protection from full support of existing virtualized environments
  Optimize performance on IBM systems with dynamic scaling, expansive capacity, and continuous operation


Operate a private cloud: Cut costs with efficient operations.
  Delegate provisioning to authorized users to improve productivity
  Maintain full oversight to ensure an optimally running and safe cloud through automated approval/rejection
  Standardize deployment and configuration to improve compliance and reduce errors by setting policies, defaults, and templates
  Simplify administration with an intuitive interface for managing projects, users, workloads, resources, billing, approvals, and metering

IBM Cloud and virtualization solutions offer flexible approaches to cloud. Where you start your journey depends on your business needs. For more information about IBM SmartCloud Entry, see:
http://ibm.com/systems/cloud


Chapter 3.

Systems management

IBM Flex System Manager, the management component of the IBM Flex System Enterprise Chassis and compute nodes, is designed to help you get the most out of your IBM Flex System installation, and to allow you to automate repetitive tasks. These management interfaces can significantly reduce the number of manual navigational steps for typical management tasks. They offer simplified system set-up procedures by using wizards and built-in expertise, and consolidated monitoring for physical and virtual resources.

This chapter contains the following sections:
  3.1, Management network on page 38
  3.2, Chassis Management Module on page 39
  3.3, Security on page 41
  3.4, Compute node management on page 43
  3.5, IBM Flex System Manager on page 46

Copyright IBM Corp. 2012. All rights reserved.


3.1 Management network


In an IBM Flex System Enterprise Chassis, you can configure separate management and data networks. The management network is a private and secure Gigabit Ethernet network. It is used to complete management-related functions throughout the chassis, including management tasks related to the compute nodes, switches, and the chassis itself. The management network is shown in Figure 3-1 as the blue line. It connects the Chassis Management Module (CMM) to the compute nodes, the switches in the I/O bays, and the Flex System Manager (FSM). The FSM connection to the management network is through a special Broadcom 5718-based management network adapter (Eth0). The management networks in multiple chassis can be connected together through the external ports of the CMMs in each chassis through a GbE top-of-rack switch. The yellow line in Figure 3-1 shows the production data network. The FSM also connects to the production network (Eth1) so that it can access the Internet for product updates and other related information.
(Figure legend: Eth0 = special GbE management network adapter; Eth1 = embedded 2-port 10 GbE controller with Virtual Fabric Connector)

Figure 3-1 Separate management and production data networks


Tip: If you want, the management node console can be connected to the data network for convenient access. One of the key functions that the data network supports is discovery of operating systems on the various network endpoints. Discovery of operating systems by the FSM is required to support software updates on an endpoint such as a compute node. The FSM Checking and Updating Compute Nodes wizard assists you in discovering operating systems as part of the initial setup.

3.2 Chassis Management Module


The CMM provides single-chassis management, and is used to communicate with the management controller in each compute node. It provides system monitoring, event recording, and alerts, and manages the chassis, its devices, and the compute nodes. The chassis supports up to two Chassis Management Modules. If one CMM fails, the second CMM can detect its inactivity, activate itself, and take control of the system without any disruption. The CMM is central to the management of the chassis, and is required in the Enterprise Chassis. The following sections describe the usage models of the CMM and its features. For more information, see 4.8, Chassis Management Module on page 82.

3.2.1 Overview
The CMM is a hot-swap module that provides basic system management functions for all devices installed in the Enterprise Chassis. An Enterprise Chassis comes with at least one CMM, and supports CMM redundancy. The CMM is shown in Figure 3-2.

Figure 3-2 Chassis Management Module


Through an embedded firmware stack, the CMM implements functions to monitor, control, and provide external user interfaces to manage all chassis resources. The CMM allows you to perform these functions, among others:
  Define login IDs and passwords
  Configure security settings such as data encryption and user account security
  Select recipients for alert notification of specific events
  Monitor the status of the compute nodes and other components
  Find chassis component information
  Discover other chassis in the network and enable access to them
  Control the chassis, compute nodes, and other components
  Access the I/O modules to configure them
  Change the startup sequence in a compute node
  Set the date and time
  Use a remote console for the compute nodes
  Enable multi-chassis monitoring
  Set power policies and view power consumption history for chassis components

3.2.2 Interfaces
The CMM supports a web-based graphical user interface that provides a way to perform chassis management functions within a supported web browser. You can also perform management functions through the CMM command-line interface (CLI). Both the web-based and CLI interfaces are accessible through the single RJ45 Ethernet connector on the CMM, or from any system connected to the same network.

The CMM has the following default IPv4 settings:
  IP address: 192.168.70.100
  Subnet: 255.255.255.0
  User ID: USERID (all capital letters)
  Password: PASSW0RD (all capital letters, with a zero instead of the letter O)

The CMM does not have a fixed static IPv6 IP address by default. Initial access to the CMM in an IPv6 environment can be done by either using the IPv4 IP address or the IPv6 link-local address. The IPv6 link-local address is automatically generated based on the MAC address of the CMM. By default, the CMM is configured to respond to DHCP first before using its static IPv4 address. If you do not want this operation to take place, connect locally to the CMM and change the default IP settings. You can connect locally, for example, by using a mobile computer.

The web-based GUI brings together all the functionality needed to manage the chassis elements in an easy-to-use fashion consistently across all System x IMM2 based platforms.
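For first-time setup against these factory defaults, the connecting workstation must have an address on the CMM's default subnet. The check can be expressed as a small sketch with Python's standard ipaddress module (illustrative only, not an IBM tool):

```python
import ipaddress

# CMM factory defaults as listed above
CMM_DEFAULT_IP = "192.168.70.100"
CMM_DEFAULT_MASK = "255.255.255.0"

def can_reach_cmm_directly(workstation_ip: str) -> bool:
    """True if the workstation sits on the CMM's default /24 subnet."""
    cmm_net = ipaddress.ip_network(f"{CMM_DEFAULT_IP}/{CMM_DEFAULT_MASK}",
                                   strict=False)
    return ipaddress.ip_address(workstation_ip) in cmm_net
```

A workstation configured as, say, 192.168.70.50/24 passes this check and can open the CMM web GUI or SSH CLI directly.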


Figure 3-3 shows the Chassis Management Module login window.

Figure 3-3 CMM login pane

Figure 3-4 shows an example of the Chassis Management Module front page after login.

Figure 3-4 Initial view of CMM after login

3.3 Security
The focus of IBM on smarter computing is evident in the improved security measures implemented in the IBM Flex System Enterprise Chassis. Today's world of computing demands tighter security standards and native integration with computing platforms. For example, the push towards virtualization has increased the need for more security as more mission-critical workloads are consolidated on to fewer and more powerful servers. The IBM Flex System Enterprise Chassis takes a new approach to security with a ground-up chassis management design to meet new security standards.

These security enhancements and features are provided in the chassis:
  Single sign-on (central user management)
  End-to-end audit logs
  Secure boot: TPM and CRTM
  Intel TXT technology (Intel Xeon-based compute nodes)
  Signed firmware updates to ensure authenticity
  Secure communications
  Certificate authority and management
  Chassis and compute node detection and provisioning
  Role-based access control
  Security policy management
  Same management protocols supported on BladeCenter AMM for compatibility with earlier versions
  Insecure protocols disabled by default in the CMM, with lock settings to prevent users from inadvertently or maliciously enabling them
  Support for up to 84 local CMM user accounts
  Support for up to 32 simultaneous sessions
  Planned support for DRTM

The Enterprise Chassis ships Secure, and supports two security policy settings:

Secure: Default setting to ensure a secure chassis infrastructure
  Strong password policies with automatic validation and verification checks
  Updated passwords that replace the manufacturing default passwords after the initial setup
  Only secure communication protocols, such as Secure Shell (SSH) and Secure Sockets Layer (SSL)
  Certificates to establish secure, trusted connections for applications that run on the management processors

Legacy: Flexibility in chassis security
  Weak password policies with minimal controls
  Manufacturing default passwords that do not have to be changed
  Unencrypted communication protocols such as Telnet, SNMPv1, TCP Command Mode, CIM-XML, FTP Server, and TFTP Server

The centralized security policy makes the Enterprise Chassis easy to configure. In essence, all components run with the same security policy provided by the CMM. This consistency ensures that all I/O modules run with a hardened attack surface.


3.4 Compute node management


Each node in the Enterprise Chassis has a management controller that communicates upstream to the CMM over the private 1 GbE management network. Different chassis components supported in the Enterprise Chassis can implement different management controllers. Table 3-1 details the different management controllers implemented in the chassis components.

Table 3-1   Chassis components and their respective management controllers

  Chassis component                          Management controller
  Intel Xeon processor-based compute nodes   Integrated Management Module II (IMM2)
  Power Systems compute nodes                Flexible service processor (FSP)
  Chassis Management Module                  Integrated Management Module II (IMM2)

The management controllers for the various Enterprise Chassis components have the following default IPv4 addresses:
  CMM: 192.168.70.100
  Compute nodes: 192.168.70.101-114 (corresponding to the slots 1-14 in the chassis)
  I/O modules: 192.168.70.120-123 (sequentially corresponding to chassis bay numbering)

In addition to the IPv4 address, all I/O modules also support link-local IPv6 addresses and configurable external IPv6 addresses.
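This numbering scheme can be captured in a small lookup helper. The sketch below encodes only the defaults documented above; it is not an IBM API:

```python
def default_mgmt_ip(component: str, index: int = 1) -> str:
    """Default IPv4 address on the private management network,
    following the default numbering scheme described above."""
    if component == "cmm":
        return "192.168.70.100"
    if component == "node":                  # chassis node bays 1-14
        if not 1 <= index <= 14:
            raise ValueError("node bays are numbered 1-14")
        return f"192.168.70.{100 + index}"   # .101 through .114
    if component == "io":                    # I/O module bays 1-4
        if not 1 <= index <= 4:
            raise ValueError("I/O bays are numbered 1-4")
        return f"192.168.70.{119 + index}"   # .120 through .123
    raise ValueError(f"unknown component: {component}")
```

For example, the node in bay 3 defaults to 192.168.70.103, and the switch in I/O bay 2 defaults to 192.168.70.121.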

3.4.1 Integrated Management Module II


The Integrated Management Module II (IMM2) is the next generation of the IMMv1, first released in the Intel Xeon Nehalem-EP-based servers. It is present on all Intel Xeon Romley-based platforms, and features a complete rework of hardware and firmware. The IMM2 enhancements include a more responsive user interface, faster power on, and increased remote presence performance. The IMM2 incorporates a new web user interface that provides a common look and feel across all IBM System x software products. In addition to the new interface, other major enhancements over IMMv1 include:
  Faster processor and more memory
  IMM2 manageable northbound from outside the chassis, which enables consistent management and scripting with System x rack servers
  Remote presence:
    Increased color depth and resolution for more detailed server video
    ActiveX client in addition to Java client
    Increased memory capacity (~50 MB) provides convenience for remote software installations
  No IMM2 reset required on configuration changes because they become effective immediately without reboot
  Hardware management of non-volatile storage
  Faster Ethernet over USB


  1 Gb Ethernet management capability
  Improved system power-on and boot time
  More detailed information for UEFI-detected events enables easier problem determination and fault isolation
  User interface meets accessibility standards (CI-162 compliant)
  Separate audit and event logs
  Trusted IMM with significant security enhancements (CRTM/TPM, signed updates, authentication policies, and so on)
  Simplified update/flashing mechanism
  Addition of a Syslog alerting mechanism provides you with an alternative to email and SNMP traps
  Support for Features on Demand (FoD) enablement of server functions, option card features, and System x solutions and applications
  First Failure Data Capture: one button web press initiates data collection and download

For more information about IMM2, see Chapter 5, Compute nodes on page 139. For more detailed information, see:
  Integrated Management Module II User's Guide:
  http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5086346
  IMM and IMM2 Support on IBM System x and BladeCenter Servers, TIPS0849:
  http://www.redbooks.ibm.com/abstracts/tips0849.html

3.4.2 Flexible service processor


Several advanced system management capabilities are built into POWER7-based compute nodes. An FSP handles most of the server-level system management. The FSP used in Enterprise Chassis-compatible POWER-based nodes is the same service processor used on POWER rack servers. It has system alerts and Serial over LAN (SOL) capability. The FSP provides out-of-band system management capabilities, such as system control, runtime error detection, configuration, and diagnostic procedures. Generally, you do not interact with the FSP directly. Rather, you interact by using tools such as IBM Flex System Manager and the Chassis Management Module. Both the p260 and p460 have one FSP each. The Flexible Service Processor provides an SOL interface, which is available by using the CMM and the console command. The POWER7-based compute nodes do not have an on-board video chip, and do not support keyboard, video, and mouse (KVM) connections. Server console access is obtained by a SOL connection only. SOL provides a means to manage servers remotely by using a CLI over a Telnet or SSH connection. SOL is required to manage servers that do not have KVM support or that are attached to the FSM. SOL provides console redirection for both Software Management Services (SMS) and the server operating system. The SOL feature redirects server serial-connection data over a LAN without requiring special cabling by routing the data through the CMM network interface. The SOL connection enables POWER7-based compute nodes to be managed from any remote location with network access to the CMM.


SOL offers the following functions:
  Remote administration without KVM
  Reduced cabling and no requirement for a serial concentrator
  Standard Telnet/SSH interface, eliminating the requirement for special client software

The Chassis Management Module CLI provides access to the text-console command prompt on each server through a SOL connection. This configuration allows the POWER7-based compute nodes to be managed from a remote location.

3.4.3 I/O modules


The I/O modules have the following base functions:
  Initialization
  Configuration
  Diagnostics (both power-on and concurrent)
  Status reporting

In addition, the following set of protocols and software features are supported on the I/O modules:
  Configuration over the Ethernet management port
  A scriptable SSH CLI, a web server with SSL support, a Simple Network Management Protocol v3 (SNMPv3) agent with alerts, and an sFTP client
  Server ports used for Telnet, HTTP, SNMPv1 agents, TFTP, FTP, and other insecure protocols are disabled by default
  LDAP authentication protocol support for user authentication
  For Ethernet I/O modules, 802.1x enabled with policy enforcement point (PEP) capability to allow support of Trusted Network Connect (TNC)
  The ability to capture and apply a switch configuration file, and the ability to capture a first failure data capture (FFDC) data file
  Ability to transfer files by using URL update methods (HTTP, HTTPS, FTP, TFTP, sFTP)
  Various methods for firmware updates, including FTP, sFTP, and TFTP. In addition, firmware updates by using a URL that includes protocol support for HTTP, HTTPS, FTP, sFTP, and TFTP are supported
  SLP discovery in addition to SNMPv3
  Ability to detect firmware/hardware hangs, and ability to pull a crash-failure memory dump file to an FTP (sFTP) server
  Selectable primary and backup firmware banks as the current operational firmware
  Ability to send events, SNMP traps, and event logs to the CMM, including security audit logs
  IPv4 and IPv6 on by default. The CMM management port supports IPv4 and IPv6 (IPv6 support includes the use of link-local addresses)
  Port mirroring capabilities: port mirroring of CMM ports to both internal and external ports


  For security reasons, the ability to mirror the CMM traffic is hidden and is available only to development and service personnel.
  Management virtual local area network (VLAN) for Ethernet switches: a configurable management 802.1q tagged VLAN in the standard VLAN range of 1 - 4094. It includes the CMM's internal management ports and the I/O modules' internal ports that are connected to the nodes.
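Because firmware images can be fetched by URL, an update script would typically validate the URL scheme against the supported set before handing it to the switch. The sketch below uses only the protocols listed above; the function name is illustrative:

```python
from urllib.parse import urlparse

# URL schemes the text lists for URL-based firmware updates
SUPPORTED_SCHEMES = {"http", "https", "ftp", "sftp", "tftp"}

def firmware_url_supported(url: str) -> bool:
    """True if the URL uses one of the update protocols named above."""
    return urlparse(url).scheme.lower() in SUPPORTED_SCHEMES
```

An sftp:// or https:// image location passes; an unsupported scheme such as smb:// is rejected before any transfer is attempted.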

3.5 IBM Flex System Manager


The FSM is a high-performance, scalable system management appliance based on the IBM Flex System x240 Compute Node. The x240 is described in more detail in 5.2, IBM Flex System x240 Compute Node on page 140. The FSM hardware comes preinstalled with systems management software that enables you to configure, monitor, and manage IBM Flex System resources in up to four chassis.

Remember: Support for management of more than four chassis with a single FSM can be added at a later date.

The IBM Flex System Manager has these high-level features and functions:
  Supports a comprehensive, pre-integrated system that is configured to optimize performance and efficiency
  Automated processes triggered by events simplify management and reduce manual administrative tasks
  Centralized management reduces the skills and the number of steps it takes to manage and deploy a system
  Enables comprehensive management and control of energy utilization and costs
  Automates responses for a reduced need for manual tasks, such as custom actions and filters, configure, edit, relocate, and automation plans
  Full integration with server views, including virtual server views, enables efficient management of resources

The preload contains a set of software components that are responsible for running management functions. These components must be activated by using the available IBM FoD software entitlement licenses. They are licensed on a per-chassis basis, so you need one license for each chassis you plan to manage. The management node comes without any entitlement licenses, so you must purchase a license to enable the required FSM functions. The part number to order the management node is shown in Table 3-2.
Table 3-2   Ordering information for the IBM Flex System Manager node

  Part number   Description
  8731A1x (a)   IBM Flex System Manager node

  a. The x in the part number represents a country-specific letter (for example, the EMEA part number is 8731A1G, and the US part number is 8731A1U). Ask your local IBM representative for specifics.


The part numbers to order FoD software entitlement licenses are shown in the following tables. The part numbers for the same features are different in different countries. Ask your local IBM representative for specifics. Table 3-3 shows the information for the United States, Canada, Asia Pacific, and Japan.
Table 3-3   Ordering information for FoD licenses (United States, Canada, Asia Pacific, Japan)

  Part number   Description

  Base feature set
  90Y4217       IBM Flex System Manager, per managed chassis with 1-Year SW S&S
  90Y4222       IBM Flex System Manager, per managed chassis with 3-Year SW S&S

  Advanced feature set
  90Y4249       IBM Flex System Manager, Advanced Upgrade, per managed chassis with 1-Year SW S&S
  00D7554       IBM Flex System Manager, Advanced Upgrade, per managed chassis with 3-Year SW S&S

  Fabric Manager
  00D7550       IBM Fabric Manager, per managed chassis with 1-Year SW S&S
  00D7551       IBM Fabric Manager, per managed chassis with 3-Year SW S&S

Table 3-4 shows the ordering information for Latin America and Europe/Middle East/Africa.
Table 3-4   Ordering information for FoD licenses (Latin America and Europe/Middle East/Africa)

  Part number   Description

  Base feature set
  95Y1174       IBM Flex System Manager, per managed chassis with 1-Year SW S&S
  95Y1179       IBM Flex System Manager, per managed chassis with 3-Year SW S&S

  Advanced feature set
  94Y9219       IBM Flex System Manager, Advanced Upgrade, per managed chassis with 1-Year SW S&S
  94Y9220       IBM Flex System Manager, Advanced Upgrade, per managed chassis with 3-Year SW S&S

  Fabric Manager
  00D4692       IBM Fabric Manager, per managed chassis with 1-Year SW S&S
  00D4693       IBM Fabric Manager, per managed chassis with 3-Year SW S&S

The IBM Flex System Manager base feature set offers the following functions:
  Support of up to four managed chassis
  Support of up to 5,000 managed elements
  Auto-discovery of managed elements
  Overall health status
  Monitoring and availability
  Hardware management
  Security management
  Administration
  Network management (Network Control)
  Storage management (Storage Control)
  Virtual machine lifecycle management (VMControl Express)

The IBM Flex System Manager advanced feature set offers all capabilities of the base feature set plus:
  Image management (VMControl Standard)
  Pool management (VMControl Enterprise)

IBM Fabric Manager offers the following features:
  Manage assignments of Ethernet MAC and Fibre Channel WWN addresses
  Monitor the health of compute nodes, and automatically replace a failed compute node from a designated pool of spare compute nodes
  Preassign MAC and WWN addresses, as well as storage boot targets, for up to 256 chassis or 3584 compute nodes

Using an enhanced GUI, you can perform these tasks:
  Create addresses for compute nodes
  Save the address profiles
  Deploy the addresses to slots in the same chassis, or in up to 256 different chassis

3.5.1 Hardware overview


From a hardware point of view, the FSM is fundamentally a locked-down compute node with a specific hardware configuration. This configuration is designed for optimal performance of the preinstalled software stack. The FSM looks similar to the Intel-based x240. However, there are slight differences between the system board designs, so the two hardware nodes are not interchangeable. Figure 3-5 shows a front view of the FSM.

Figure 3-5 IBM Flex System Manager


Figure 3-6 shows the internal layout and major components of the FSM.

[Figure content: exploded view with the following components labeled: cover, air baffles, heat sink, microprocessor, microprocessor heat sink filler, DIMM and DIMM filler, SSD and HDD backplane, hot-swap storage cage, hot-swap storage drive, storage drive filler, SSD interposer, SSD drives, SSD mounting insert, I/O expansion adapter, and ETE adapter]

Figure 3-6 Exploded view of the IBM Flex System Manager node, showing major components

Additionally, the FSM comes preconfigured with the components described in Table 3-5.
Table 3-5   Features of the IBM Flex System Manager node (8731)

  Feature              Description
  Processor            1x Intel Xeon processor E5-2650 8C, 2.0 GHz, 20 MB cache, 1600 MHz, 95 W
  Memory               8x 4 GB (1x4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
  SAS controller       One LSI 2004 SAS controller
  Disk                 1x IBM 1 TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
                       2x IBM 200 GB SATA 1.8" MLC SSD (configured as a RAID-1 pair)
  Integrated NIC       Embedded dual-port 10 Gb Virtual Fabric Ethernet controller (Emulex BE3)
                       Dual-port 1 GbE Ethernet controller on a management network adapter (Broadcom 5718)
  Systems management   Integrated Management Module II (IMM2)
                       Management network adapter

Figure 3-7 shows the internal layout of the FSM.

[Figure content: internal view with processor 1, the filler slot for processor 2, the drive bays, and the management network adapter labeled]

Figure 3-7 Internal view that shows the major components of IBM Flex System Manager

Front controls
The FSM has controls and LEDs similar to those of the IBM Flex System x240 Compute Node. Figure 3-8 shows the front of an FSM with the locations of the controls and LEDs.
[Figure content: front panel with the solid-state drive LEDs, power button/LED, identify LED, fault LED, check log LED, hard disk drive activity LED, hard disk drive status LED, USB connector, and KVM connector labeled]

Figure 3-8 FSM front panel showing controls and LEDs

Storage
The FSM ships with 2x IBM 200 GB SATA 1.8" MLC SSD drives and 1x IBM 1 TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD. The 200 GB SSD drives are configured as a RAID-1 pair that provides roughly 200 GB of usable space. The 1 TB SATA drive is not part of a RAID group.


The partitioning of the disks is listed in Table 3-6.


Table 3-6   Detailed SSD and HDD disk partitioning

  Physical disk   Virtual disk size   Description
  SSD             50 MB               Boot disk
  SSD             60 GB               OS/application disk
  SSD             80 GB               Database disk
  HDD             40 GB               Update repository
  HDD             40 GB               Dump space
  HDD             60 GB               Spare disk for OS/application
  HDD             80 GB               Spare disk for database
  HDD             30 GB               Service partition
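The capacity arithmetic behind this layout can be sketched in a few lines of Python. This is an illustrative check, not IBM software; the partition names and the assumption of roughly 200 GB of usable space on the mirrored SSD pair come from the text above.

```python
# Illustrative sketch (not an IBM tool): check that the FSM factory disk
# partitioning from Table 3-6 fits within each physical volume.
# Sizes in GB; the 50 MB boot partition is rounded to 0.05 GB.

RAID1_SSD_USABLE_GB = 200   # two 200 GB SSDs mirrored -> ~200 GB usable
HDD_USABLE_GB = 1000        # single 1 TB NL SATA drive, no RAID

ssd_partitions = {"boot": 0.05, "os_app": 60, "database": 80}
hdd_partitions = {"update_repo": 40, "dump": 40,
                  "os_app_spare": 60, "database_spare": 80, "service": 30}

def fits(partitions, capacity_gb):
    """Return True if the partitions sum to no more than the capacity."""
    return sum(partitions.values()) <= capacity_gb

print(fits(ssd_partitions, RAID1_SSD_USABLE_GB))  # True (~140 GB of 200 GB)
print(fits(hdd_partitions, HDD_USABLE_GB))        # True (250 GB of 1000 GB)
```

Note that mirroring (RAID-1) halves raw capacity, so the two 200 GB SSDs yield about 200 GB of usable space rather than 400 GB.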

Management network adapter


The management network adapter is a standard feature of the FSM that provides a physical connection into the private management network of the chassis. This connection allows the software stack to have visibility into both the data and management networks. The adapter is shown in Figure 3-6 on page 49 as the everything-to-everything (ETE) adapter. It contains a Broadcom 5718 dual-port 1 GbE controller and a Broadcom 5389 8-port L2 switch, and is one of the features that makes the FSM unique compared to all other nodes supported by the Enterprise Chassis. The L2 switch on this card is automatically set up by the IMM2, and connects the FSM and the onboard IMM2 to the same internal private network.

3.5.2 Software features


The IBM Flex System Manager management software has these main features:
- Monitoring and problem determination:
  - Real-time multichassis view of hardware components, with overlays for additional information
  - Automatic detection of issues in your environment through event setup that triggers alerts and actions
  - Identification of changes that might affect availability
  - Server resource utilization by virtual machine or across a rack of systems
- Hardware management:
  - Automated discovery of physical and virtual servers and interconnections, applications, and supported third-party networking
  - Inventory of hardware components
  - Chassis and hardware component views
  - Hardware properties
  - Component names and hardware identification numbers
  - Firmware levels
  - Utilization rates
- Network management:
  - Management of network switches from various vendors
  - Discovery, inventory, and status monitoring of switches
  - Graphical network topology views
  - Support for KVM, pHyp, VMware virtual switches, and physical switches
  - VLAN configuration of switches
  - Integration with server management
  - Per-virtual machine network usage and performance statistics provided to VMControl
  - Logical views of servers and network devices grouped by subnet and VLAN
- Storage management:
  - Discovery of physical and virtual storage devices
  - Support for virtual images on local storage across multiple chassis
  - Inventory of the physical storage configuration
  - Health status and alerts
  - Storage pool configuration
  - Disk sparing and redundancy management
  - Virtual volume management
  - Support for virtual volume discovery, inventory, creation, modification, and deletion
- Virtualization management (base feature set):
  - Support for VMware, Hyper-V, KVM, and IBM PowerVM
  - Create, edit, manage, and relocate virtual servers
  - Discover virtual server, storage, and network resources, and visualize the physical-to-virtual relationships
- Virtualization management (advanced feature set):
  - Create new image repositories for storing virtual appliances, and discover existing image repositories in your environment
  - Import external, standards-based virtual appliance packages into your image repositories as virtual appliances
  - Capture a running virtual server that is configured just the way you want, complete with guest operating system, running applications, and virtual server definition
  - Import virtual appliance packages that exist in the Open Virtualization Format (OVF) from the Internet or other external sources
  - Deploy virtual appliances quickly to create new virtual servers that meet the demands of your ever-changing business needs
  - Create, capture, and manage workloads


- Server and storage system pools (advanced feature set, continued):
  - Create server system pools, which enable you to consolidate your resources and workloads into distinct and manageable groups
  - Deploy virtual appliances into server system pools
  - Manage server system pools, including adding hosts or additional storage space, and monitoring the health of the resources and the status of the workloads in them
  - Group storage systems together by using storage system pools to increase resource utilization and automation
  - Manage storage system pools by adding storage, editing the storage system pool policy, and monitoring the health of the storage resources
- Additional features:
  - A resource-oriented chassis map provides an instant graphical view of chassis resources, including nodes and I/O modules:
    - A fly-over provides an instant view of individual server (node) status and inventory
    - The chassis map provides an inventory view of chassis components, a view of active statuses that require administrative attention, and a compliance view of server (node) firmware
    - Actions can be taken on nodes, such as working with server-related resources, showing and installing updates, submitting service requests, and starting the remote access tools
  - Remote console:
    - Open video sessions and mount media, such as DVDs with software updates, on servers from a local workstation
    - Remote KVM connections
    - Remote Virtual Media connections (mount CD/DVD/ISO/USB media)
    - Power operations against servers (power on, off, and restart)
  - Hardware detection and inventory creation
  - Firmware compliance and updates
  - Automatic detection of hardware failures:
    - Provides alerts
    - Takes corrective action
    - Notifies IBM of problems to escalate problem determination
  - Health status (such as processor utilization) on all hardware devices from a single chassis view
  - Administrative capabilities, such as setting up users within profile groups, assigning security levels, and security governance

3.5.3 Supported agents, hardware, operating systems, and tasks


IBM Flex System Manager provides four tiers of agents for managed systems. For each managed system, you need to choose the tier that provides the amount and level of capabilities that you need for that system. Select the level of agent capabilities that best fits the type of managed system and the management tasks you need to perform.


IBM Flex System Manager has these agent tiers:
- Agentless in-band: Managed systems without any FSM client software installed. The FSM communicates with the managed system through the operating system.
- Agentless out-of-band: Managed systems without any FSM client software installed. The FSM communicates with the managed system through something other than the operating system, such as a service processor or a hardware management console.
- Platform Agent: Managed systems with Platform Agent installed. The FSM communicates with the managed system through the Platform Agent.
- Common Agent: Managed systems with Common Agent installed. The FSM communicates with the managed system through the Common Agent.

Table 3-7 lists the agent tier support for the IBM Flex System managed compute nodes. Managed nodes include the x240 compute node, which supports Windows, Linux, and VMware, and the p260 and p460 compute nodes, which support IBM AIX, IBM i, and Linux.
Table 3-7   Agent tier support by managed system type

  Managed system type                            Agentless   Agentless     Platform   Common
                                                 in-band     out-of-band   Agent      Agent
  Compute nodes that run AIX                     Yes         Yes           No         Yes
  Compute nodes that run IBM i                   Yes         Yes           Yes        Yes
  Compute nodes that run Linux                   No          Yes           Yes        Yes
  Compute nodes that run Linux and support SSH   Yes         Yes           Yes        Yes
  Compute nodes that run Windows                 No          Yes           Yes        Yes
  Compute nodes that run Windows and support
  SSH or distributed component object model
  (DCOM)                                         Yes         Yes           Yes        Yes
  Compute nodes that run VMware                  Yes         Yes           Yes        Yes
  Other managed resources that support
  SSH or SNMP                                    Yes         Yes           No         No

Table 3-8 summarizes the management tasks that are supported for compute nodes, depending on the agent tier.

Table 3-8   Compute node management tasks supported by the agent tier

  Management task                Agentless   Agentless     Platform   Common
                                 in-band     out-of-band   Agent      Agent
  Command automation             No          No            No         Yes
  Hardware alerts                No          Yes           Yes        Yes
  Platform alerts                No          No            Yes        Yes
  Health and status monitoring   No          No            Yes        Yes
  File transfer                  No          No            No         Yes
  Inventory (hardware)           No          Yes           Yes        Yes
  Inventory (software)           Yes         No            Yes        Yes
  Problems (hardware status)     No          Yes           Yes        Yes
  Process management             No          No            No         Yes
  Power management               No          Yes           No         Yes
  Remote control                 No          Yes           No         No
  Remote command line            Yes         No            Yes        Yes
  Resource monitors              No          No            Yes        Yes
  Update manager                 No          No            Yes        Yes
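For automation that must pick a suitable agent tier, the capability matrix above can be encoded as plain data. The dictionary structure and task names below are illustrative assumptions for such tooling, not an FSM API:

```python
# Illustrative sketch (assumed data structure, not an IBM API): encode the
# agent-tier capability matrix from Table 3-8 so tooling can answer
# "which tiers support task X?".

TIER_TASKS = {
    "agentless-in-band":     {"inventory-software", "remote-command-line"},
    "agentless-out-of-band": {"hardware-alerts", "inventory-hardware",
                              "problems", "power-management", "remote-control"},
    "platform-agent":        {"hardware-alerts", "platform-alerts",
                              "health-monitoring", "inventory-hardware",
                              "inventory-software", "problems",
                              "remote-command-line", "resource-monitors",
                              "update-manager"},
    "common-agent":          {"command-automation", "hardware-alerts",
                              "platform-alerts", "health-monitoring",
                              "file-transfer", "inventory-hardware",
                              "inventory-software", "problems",
                              "process-management", "power-management",
                              "remote-command-line", "resource-monitors",
                              "update-manager"},
}

def tiers_supporting(task):
    """Return the agent tiers that support a given management task."""
    return sorted(t for t, tasks in TIER_TASKS.items() if task in tasks)

print(tiers_supporting("power-management"))
# ['agentless-out-of-band', 'common-agent']
```

Note, for example, that remote control is available only out of band, and command automation only through the Common Agent, which mirrors the table.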

Table 3-9 shows the supported virtualization environments and their management tasks.

Table 3-9   Supported virtualization environments and management tasks

  Management task                     AIX and     IBM i   VMware    Microsoft   Linux
                                      Linux (a)           vSphere   Hyper-V     KVM
  Deploy virtual servers              Yes         Yes     Yes       Yes         Yes
  Deploy virtual farms                No          No      Yes       No          Yes
  Relocate virtual servers            Yes         No      Yes       No          Yes
  Import virtual appliance packages   Yes         Yes     No        No          Yes
  Capture virtual servers             Yes         Yes     No        No          Yes
  Capture workloads                   Yes         Yes     No        No          Yes
  Deploy virtual appliances           Yes         Yes     No        No          Yes
  Deploy workloads                    Yes         Yes     No        No          Yes
  Deploy server system pools          Yes         No      No        No          Yes
  Deploy storage system pools         Yes         No      No        No          No

a. Linux on Power Systems compute nodes

Table 3-10 shows the supported I/O switches and their management tasks.

Table 3-10   Supported I/O switches and management tasks

  Management task   EN2092          EN4093           FC3171    FC5022
                    1 Gb Ethernet   10 Gb Ethernet   8 Gb FC   16 Gb FC
  Discovery         Yes             Yes              Yes       Yes
  Inventory         Yes             Yes              Yes       Yes
  Monitoring        Yes             Yes              Yes       Yes
  Alerts            Yes             Yes              Yes       Yes
  Configuration     Yes             Yes              Yes       No

Table 3-11 shows the supported storage systems and their management tasks.

Table 3-11   Supported storage systems and management tasks

  Management task                                                  V7000
  Storage device discovery                                         Yes
  Integrated physical and logical topology views                   Yes
  Show relationships between storage and server resources          Yes
  Perform logical and physical configuration                       Yes
  View controller and volume status and set notification alerts    Yes

For more information, see the IBM Flex System Manager product publications available from the IBM Flex System Information Center at: http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp


Chapter 4. Chassis and infrastructure configuration


The IBM Flex System Enterprise Chassis (machine type 8721) is a 10U next-generation server platform with integrated chassis management. It is a compact, high-density, high-performance, rack-mount, scalable server platform system. It supports up to 14 one-bay compute nodes that share common resources, such as power, cooling, management, and I/O resources, within a single Enterprise Chassis. It can also support up to seven 2-bay compute nodes or three 4-bay compute nodes when the shelves are removed. You can mix and match 1-bay, 2-bay, and 4-bay compute nodes to meet your specific hardware needs. This chapter includes the following sections:
- 4.1, "Overview" on page 58
- 4.2, "Power supplies" on page 65
- 4.3, "Fan modules" on page 68
- 4.4, "Fan logic module" on page 70
- 4.5, "Front information panel" on page 71
- 4.6, "Cooling" on page 72
- 4.7, "Power supply and fan module requirements" on page 77
- 4.8, "Chassis Management Module" on page 82
- 4.9, "I/O architecture" on page 85
- 4.10, "I/O modules" on page 92
- 4.11, "Infrastructure planning" on page 119
- 4.12, "IBM 42U 1100 mm Enterprise V2 Dynamic Rack" on page 128
- 4.13, "IBM Rear Door Heat eXchanger V2 Type 1756" on page 134

Copyright IBM Corp. 2012. All rights reserved.


4.1 Overview
Figure 4-1 shows the Enterprise Chassis as seen from the front. The front of the chassis has 14 horizontal bays with removable dividers that allow nodes and future elements to be installed within the chassis. The nodes can be installed when the chassis is powered. The chassis employs a die-cast mechanical bezel for rigidity. This chassis construction allows for tight tolerances between nodes, shelves, and the chassis bezel. These tolerances ensure accurate location and mating of connectors to the midplane.

Figure 4-1 IBM Flex System Enterprise Chassis

The major components of the Enterprise Chassis are:
- Fourteen 1-bay compute node bays (can also support seven 2-bay or three 4-bay compute nodes with the shelves removed)
- Six 2500 W power modules that provide N+N or N+1 redundant power
- Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules)
- Four physical I/O modules
- An I/O architectural design capable of providing:
  - Up to eight lanes of I/O to an I/O adapter, with each lane capable of up to 16 Gbps
  - A maximum of 16 lanes of I/O to a half-wide node with two adapters
  - A wide variety of networking solutions that include Ethernet, Fibre Channel, FCoE, and InfiniBand
- Two IBM Flex System Manager (FSM) management appliances for redundancy; the FSM provides multiple-chassis management support for up to four chassis
- Two IBM Chassis Management Modules (CMMs); the CMM provides single-chassis management support
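The bay arithmetic above (14 single bays, which 2-bay and 4-bay nodes consume in multiples) can be expressed as a small validation helper. This is an illustrative sketch, not IBM software:

```python
# Illustrative sketch (not an IBM tool): check whether a mix of node form
# factors fits the 14 node bays of a single Enterprise Chassis.

CHASSIS_BAYS = 14
BAYS_PER_NODE = {"half-wide": 1, "full-wide": 2, "double-height full-wide": 4}

def bays_used(nodes):
    """Total bays consumed by a list of node form factors."""
    return sum(BAYS_PER_NODE[n] for n in nodes)

def fits_in_chassis(nodes):
    return bays_used(nodes) <= CHASSIS_BAYS

# The three homogeneous maximums from the text:
assert fits_in_chassis(["half-wide"] * 14)
assert fits_in_chassis(["full-wide"] * 7)
assert fits_in_chassis(["double-height full-wide"] * 3)
# A mixed configuration: 2 full-wide + 10 half-wide = 14 bays
assert fits_in_chassis(["full-wide"] * 2 + ["half-wide"] * 10)
```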


Table 4-1 lists these components.

Table 4-1   8721-A1x chassis configuration

  Part number   Quantity   Description
  8721-A1x      1          IBM Flex System Enterprise Chassis
                1          Chassis Management Module
                2          2500 W power supply unit
                4          80 mm fan modules
                2          40 mm fan modules
                1          Console breakout cable
                2          C19 to C20 2 m power cables
                1          Rack mount kit

Figure 4-2 shows the component parts of the chassis with the shuttle removed. The shuttle forms the rear of the chassis, where the I/O modules, power supplies, fan modules, and Chassis Management Modules are installed. The shuttle is removed only to gain access to the midplane or fan distribution cards, in the rare event of a service action.

[Figure content: chassis with the shuttle assembly removed, showing the chassis, Chassis Management Module, CMM filler, 40 mm fan module, fan logic module, power supply and power supply filler, I/O module, 80 mm fan module and 80 mm fan filler, fan distribution cards, midplane, rear LED card, and shuttle]

Figure 4-2 Enterprise Chassis component parts

Within the chassis, a personality card holds vital product data (VPD) and other information relevant to the particular chassis. This card can be replaced only under a service action, and is not normally accessible. The personality card is attached to the midplane, as shown in Figure 4-4 on page 61.

4.1.1 Front of the chassis


Figure 4-3 shows the bay numbers and air apertures on the front of the Enterprise Chassis.
[Figure content: front of the chassis with the upper and lower airflow inlets, bays 1 - 14, and the front information panel labeled]

Figure 4-3 Front view of the Enterprise Chassis

The chassis has the following features on the front:
- The front information panel, on the lower left of the chassis
- Bays 1 - 14, supporting nodes and the FSM
- Lower airflow inlet apertures that provide air cooling for switches, CMMs, and power supplies
- Upper airflow inlet apertures that provide cooling for power supplies

For efficient cooling, each bay in the front or rear of the chassis must contain either a device or a filler. The Enterprise Chassis provides several LEDs on the front information panel that can be used to obtain the status of the chassis. The identify, check log, and fault LEDs are also on the rear of the chassis for ease of use.


4.1.2 Midplane
The midplane is the circuit board that connects to the compute nodes from the front of the chassis. It also connects to the I/O modules, fan modules, and power supplies from the rear of the chassis. The midplane is located within the chassis, and can be accessed by removing the shuttle assembly. Removing the midplane is necessary only in the case of a service action. The midplane is passive, which is to say that there are no electronic components on it. The midplane has apertures to allow air to pass through. It has connectors on both sides for power supplies, fan distribution cards, switches, I/O adapters, and nodes.

[Figure content: front view of the midplane with the node power connectors, management connectors, and I/O adapter connectors labeled; rear view with the I/O module connectors, power supply connectors, CMM connectors, fan power and signal connectors, and personality card connector labeled]

Figure 4-4 Connectors on the midplane


4.1.3 Rear of the chassis


Figure 4-5 shows the rear view of the chassis.

Figure 4-5 Rear view of Enterprise Chassis

The following components can be installed into the rear of the chassis:
- Up to two CMMs
- Up to six 2500 W power supply modules
- Up to 10 fan modules (a maximum of eight 80 mm fan modules and two 40 mm fan modules); six are standard, and additional 80 mm fan modules can be installed for a total of 10
- Up to four I/O modules

4.1.4 Specifications
Table 4-2 shows the specifications of the Enterprise Chassis 8721-A1x.
Table 4-2 Enterprise Chassis specifications Feature Machine type-model Form factor Maximum number of compute nodes supported Chassis per 42U rack Nodes per 42U rack Specifications System x ordering sales channel: 8721-A1x Power Systems sales channel: 7893-92Xa 10U rack mounted unit 14 half-wide (single bay), 7 full-wide (two bays), or 3 double-height full-wide (four bays). Mixing is supported. 4 56 half-wide, or 28 full-wide

62

IBM PureFlex System and IBM Flex System Products and Technology

Management: One or two Chassis Management Modules (CMMs) for basic chassis management. Two CMMs form a redundant pair. One CMM is standard in 8721-A1x. The CMM interfaces with the integrated management module (IMM) or flexible service processor (FSP) integrated in each compute node in the chassis. An optional IBM Flex System Manager (a) management appliance provides comprehensive management that includes virtualization, networking, and storage management.

I/O architecture: Up to eight lanes of I/O to an I/O adapter, with each lane capable of up to 16 Gbps bandwidth. Up to 16 lanes of I/O to a half-wide node with two adapters. A wide variety of networking solutions that include Ethernet, Fibre Channel, FCoE, and InfiniBand.

Power supplies: Six 2500-watt power modules that provide N+N or N+1 redundant power. Two are standard in model 8721-A1x. Power supplies are 80 PLUS Platinum certified and provide over 94% efficiency at both 50% load and 20% load. Power capacity of 2500 W output rated at 200 V AC. Each power supply contains two independently powered 40 mm cooling fan modules.

Fan modules: Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules). Four 80 mm and two 40 mm fan modules are standard in model 8721-A1x.

Dimensions: Height: 440 mm (17.3"). Width: 447 mm (17.6"). Depth, measured from front bezel to rear of chassis: 800 mm (31.5"). Depth, measured from node latch handle to the power supply handle: 840 mm (33.1").

Weight: Minimum configuration: 96.62 kg (213 lb). Maximum configuration: 220.45 kg (486 lb).

Declared sound level: 6.3 to 6.8 bels

Temperature: Operating air temperature 5°C to 40°C

Electrical power: Input power: 200 - 240 V AC (nominal), 50 or 60 Hz. Minimum configuration: 0.51 kVA (two power supplies). Maximum configuration: 13 kVA (six power supplies).

Power consumption: 12,900 watts maximum

a. When ordering the IBM Flex System Enterprise Chassis through the Power Systems sales channel, select one of the IBM PureFlex System offerings. These offerings are described in Chapter 2, IBM PureFlex System on page 11. In such offerings, the IBM Flex System Manager is a standard component and therefore is not optional.

For data center planning, the chassis is rated to a maximum operating temperature of 40°C. For comparison, the IBM BladeCenter H chassis is rated to 35°C. 110 V operation is not supported: the AC operating range is 200 - 240 V AC.

4.1.5 Air filter


There is an optional airborne contaminant filter that can be fitted to the front of the chassis, as listed in Table 4-3.
Table 4-3 IBM Flex System Enterprise Chassis airborne contaminant filter ordering information

Part number  Description
43W9055      IBM Flex System Enterprise Chassis airborne contaminant filter
43W9057      IBM Flex System Enterprise Chassis airborne contaminant filter replacement pack


The filter is attached to and removed from the chassis as shown in Figure 4-6.

Figure 4-6 Dust filter

4.1.6 Compute node shelves


A shelf is required for half-wide bays. The chassis ships with these shelves in place. To allow for installation of full-wide or larger nodes, shelves must be removed from the chassis. Remove a shelf by sliding the two blue latches on the shelf towards the center, and then sliding the shelf out of the chassis. Figure 4-7 shows removal of a shelf from the Enterprise Chassis.

Shelf

Tabs

Figure 4-7 Shelf removal


4.1.7 Hot plug and hot swap components


The chassis follows the standard color coding scheme used by IBM for touch points and hot swap components. Touch points are blue, and are found in these locations:
- The fillers that cover empty fan and power supply bays
- The handle of nodes
- Other removable items that cannot be hot swapped

Hot swap components have orange touch points. Orange tabs are found on fan modules, fan logic modules, power supplies, and I/O module handles. The orange designates that the items are hot swap, and can be both removed and replaced while the chassis is powered. Table 4-4 shows which components are hot swap and which are hot plug. Nodes can be plugged into the chassis while the chassis is powered. The node can then be powered on. Power the node off before removal.
Table 4-4 Hot plug and hot swap components

Component         Hot plug   Hot swap
Node              Yes        No (a)
I/O module        Yes        Yes (b)
40 mm fan pack    Yes        Yes
80 mm fan pack    Yes        Yes
Power supply      Yes        Yes
Fan logic module  Yes        Yes

a. The node must be powered off, in standby, before removal.
b. The I/O module might require reconfiguration, and removal is disruptive to any communications that are taking place.

4.2 Power supplies


A maximum of six power supplies can be installed within the Enterprise Chassis. The power supplies are 80 PLUS Platinum certified and are rated at 2500 W output at 200 - 208 V AC (nominal), and 2750 W at 220 - 240 V AC (nominal). The power supply has an oversubscription rating of up to 3538 W output at 200 V AC. The power supply operating range is 200 - 240 V AC. The power supplies also contain two independently powered 40 mm cooling fan modules that draw power from the midplane, not from the power supply.

80 PLUS is a performance specification for power supplies used within servers and computers. The standard has several ratings, such as Bronze, Silver, Gold, and Platinum. To meet the 80 PLUS Platinum standard, the power supply must have a power factor (PF) of 0.95 or greater at 50% rated load, and efficiency equal to or greater than the following:
- 90% at 20% of rated load
- 94% at 50% of rated load
- 91% at 100% of rated load

Further information about 80 PLUS can be found at http://www.plugloadsolutions.com
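The Platinum requirements amount to a simple threshold check. The following is a minimal sketch that tests measured efficiencies against those thresholds; the function and variable names are illustrative, not from any IBM tool.

```python
# Hedged sketch: check measured PSU efficiencies against the 80 PLUS
# Platinum thresholds quoted in the text.

PLATINUM_MIN_EFFICIENCY = {0.20: 0.90, 0.50: 0.94, 1.00: 0.91}

def meets_platinum(efficiency_by_load, pf_at_half_load):
    """efficiency_by_load maps load fraction -> measured efficiency (0-1)."""
    if pf_at_half_load < 0.95:          # power factor requirement at 50% load
        return False
    return all(efficiency_by_load.get(load, 0.0) >= minimum
               for load, minimum in PLATINUM_MIN_EFFICIENCY.items())

# Measured values for the 2500 W supply at 200 - 208 V (from Table 4-5):
measured = {0.20: 0.942, 0.50: 0.945, 1.00: 0.918}
print(meets_platinum(measured, pf_at_half_load=0.95))  # True
```

Plugging in the 200 - 208 V figures from Table 4-5 (94.2% at 20% load, 94.5% at 50%, 91.8% at 100%) passes the check, consistent with the Platinum certification stated above.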

Table 4-5 lists the efficiency of the Enterprise Chassis power supplies at various percentage loads.
Table 4-5 Power supply efficiency at different loads

Load    Input voltage   Output power   Efficiency
10%     200 - 208 V     250 W          93.2%
10%     220 - 240 V     275 W          93.5%
20%     200 - 208 V     500 W          94.2%
20%     220 - 240 V     550 W          94.4%
50%     200 - 208 V     1250 W         94.5%
50%     220 - 240 V     1375 W         92.2%
100%    200 - 208 V     2500 W         91.8%
100%    220 - 240 V     2750 W         91.4%
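Because efficiency is the ratio of output power to input power, these figures also imply how much power each load point draws from the AC feed and how much is lost as heat. A minimal sketch of that arithmetic, with illustrative function names:

```python
# Hedged sketch: input power drawn from the AC feed is output power
# divided by efficiency; the difference is dissipated as heat.

def input_power_w(output_w, efficiency):
    return output_w / efficiency

def heat_loss_w(output_w, efficiency):
    return input_power_w(output_w, efficiency) - output_w

# 2500 W supply at 50% load on a 200 - 208 V feed (Table 4-5):
# 1250 W output at 94.5% efficiency.
print(round(input_power_w(1250.0, 0.945), 1))  # 1322.8 W drawn at the wall
print(round(heat_loss_w(1250.0, 0.945), 1))    # 72.8 W lost as heat
```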

Figure 4-8 shows the location of the power supplies.


Figure 4-8 Power supply locations

The chassis allows configurations of power supplies to give N+N or N+1 redundancy. A fully configured chassis operates on just three 2500 W power supplies with no redundancy, but use N+1 or N+N redundancy to keep the chassis available. Three power supplies (or six with N+N redundancy) allow for a balanced 3-phase configuration. All power supply modules are combined into a single power domain within the chassis. This domain distributes power to each of the compute nodes, I/O modules, and ancillary components through the Enterprise Chassis midplane. The midplane is a highly reliable design with no active components. Each power supply is designed to provide fault isolation and is hot swappable. Power monitoring of both the DC and AC signals allows the Chassis Management Module to accurately monitor the power supplies. The integral power supply fans are not dependent upon the power supply being functional. They operate and are powered independently from the midplane.


Power supplies are added as required to meet the load requirements of the Enterprise Chassis configuration. There is no need to overprovision a chassis. For more information about power-supply unit (PSU) planning, see 4.11, Infrastructure planning on page 119. Figure 4-9 shows the power supply rear view and highlights the LEDs. There is a handle for removal and insertion of the power supply.

Removal latch Pull handle

LEDs (left to right): AC power, DC power, Fault

Figure 4-9 Power supply

The rear of the power supply has a C20 inlet socket for connection to power cables. You can use a C19-C20 power cable, which can connect to a suitable IBM DPI rack power distribution unit (PDU). The rear LEDs are:
- AC power: When lit green, this LED indicates that AC power is being supplied to the PSU inlet.
- DC power: When lit green, this LED indicates that DC power is being supplied to the chassis midplane.
- Fault: When lit amber, this LED indicates a fault with the PSU.

Table 4-6 shows the specifications for the Enterprise Chassis power supplies.
Table 4-6 2500 W Power Supply Module option part number

Part number: 43W9049
Feature codes (a): A0UC / 3590
Description: IBM Flex System Enterprise Chassis 2500W Power Module

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Before removing any power supplies, ensure that the remaining power supplies have sufficient capacity to power the Enterprise Chassis. Power usage information can be found in the Chassis Management Module web interface. For more information about oversubscription, see 4.7.2, Power supply population on page 78.


4.3 Fan modules


The Enterprise Chassis supports up to 10 hot pluggable fan modules that consist of two 40 mm fan modules and eight 80 mm fan modules. A chassis can operate with a minimum of six hot-swap fan modules installed, consisting of four 80 mm fan modules and two 40 mm fan modules. The fan modules plug into the chassis and connect to the fan distribution cards. More 80 mm fan modules can be added as required to support chassis cooling requirements. Figure 4-10 shows the fan bays in the back of the Enterprise Chassis.
Figure 4-10 Fan bays in the Enterprise Chassis

For more information about how to populate the fan modules, see 4.6, Cooling on page 72.


Figure 4-11 shows a 40 mm fan module.

Removal latch Pull handle

Power on LED

Fault LED

Figure 4-11 40 mm fan module

The two 40 mm fan modules in fan bays 5 and 10 distribute airflow to the I/O modules and Chassis Management Modules. These modules ship preinstalled in the chassis. Each 40 mm fan module contains two 40 mm fans internally, side by side. The 80 mm fan modules distribute airflow to the compute nodes through the chassis from front to rear. Each 80 mm fan module contains two 80 mm fans, back to back at each end of the module, which are counter-rotating. Both fan module types have an electromagnetic compatibility (EMC) mesh screen on the rear internal face of the module. This design provides a laminar flow through the screen. Laminar flow is a smooth flow of air, sometimes called streamline flow. This flow reduces turbulence of the exhaust air and improves the efficiency of the overall fan assembly. These factors combine to form a highly efficient fan design that provides the best cooling for the lowest energy input:
- The design of the whole fan assembly
- The fan blade design
- The distance between and size of the fan modules
- The EMC mesh screen

Figure 4-12 shows an 80 mm fan module.

Removal latch Pull handle

Power on LED

Fault LED

Figure 4-12 80 mm fan module


The minimum number of 80 mm fan modules is four. The maximum number of 80 mm fan modules that can be installed is eight. When the modules are ordered as an option, they are supplied as a pair. Both fan modules have two LED indicators, consisting of a green power-on indicator and an amber fault indicator. The power indicator lights when the fan module has power, and flashes when the module is in power save state. Table 4-7 lists the specifications on the 80 mm Fan Module Pair option.
Table 4-7 80 mm Fan Module Pair option part number

Part number: 43W9078
Feature codes (a): A0UA / 7805
Description: IBM Flex System Enterprise Chassis 80 mm Fan Module Pair

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

For more information about airflow and cooling, see 4.6, Cooling on page 72.

4.4 Fan logic module


There are two fan logic modules included within the chassis as shown in Figure 4-13.

Fan logic bay 2

Fan logic bay 1

Figure 4-13 Fan logic modules on the rear of the chassis

Fan logic modules are multiplexers for the internal I2C bus, which is used for communication between hardware components within the chassis. Each fan pack is accessed through a dedicated I2C bus, switched by the Fan Mux card, from each CMM. The fan logic module switches the I2C bus to each individual fan pack. This module can be used by the Chassis Management Module to determine multiple parameters, such as fan RPM.


There is a fan logic module for each side of the chassis. The left fan logic module accesses the left fan modules, and the right fan logic module accesses the right fan modules. Fan presence indication for each fan pack is read by the fan logic module. Power and fault LEDs are also controlled by the fan logic module. Figure 4-14 shows a fan logic module and its LEDs.

Figure 4-14 Fan logic module

As shown in Figure 4-14, there are two LEDs on the fan logic module. The power-on LED is green when the fan logic module is powered. The amber fault LED flashes to indicate a faulty fan logic module. Fan logic modules are hot swappable. For more information about airflow and cooling, see 4.6, Cooling on page 72.

4.5 Front information panel


Figure 4-15 shows the front information panel.

White backlit IBM logo Identify LED Check log LED Fault LED

Figure 4-15 Front information panel

The following items are displayed on the front information panel:
- White backlit IBM logo: When lit, this logo indicates that the chassis is powered.
- Identify LED: When lit (blue) solid, this LED indicates the location of the chassis. When flashing, this LED indicates that a condition has occurred that caused the CMM to indicate that the chassis needs attention.


- Check log LED: When lit (amber), this LED indicates that a noncritical event has occurred. This event might be a wrong I/O module inserted into a bay, or a power requirement that exceeds the capacity of the installed power modules.
- Fault LED: When lit (amber), this LED indicates that a critical system error has occurred. This can be an error in a power module or a system error in a node.

Figure 4-16 shows the LEDs on the rear of the chassis.

Identify LED

Check log LED

Fault LED

Figure 4-16 Chassis LEDs on the rear of the unit (lower right)

4.6 Cooling
This section addresses Enterprise Chassis cooling. The flow of air within the Enterprise Chassis follows a front-to-back cooling path. Cool air is drawn in at the front of the chassis and warm air is exhausted to the rear. Air is drawn in both through the front node bays and the front airflow inlet apertures at the top and bottom of the chassis. There are two cooling zones for the nodes: a left zone and a right zone. The cooling can be scaled up as required, based on which node bays are populated. The number of fan modules required for a certain number of nodes is described further in this section. When a node is not inserted in a bay, an airflow damper closes in the midplane, so no air is drawn in through an unpopulated bay. When a node is inserted into a bay, the damper is opened mechanically by the node insertion. This action allows for cooling of the node in that bay.


Figure 4-17 shows the upper and lower cooling apertures.

Figure 4-17 Enterprise Chassis lower and upper cooling apertures

Various fan modules are present in the chassis to assist with efficient cooling. Fan modules are of both 40 mm and 80 mm types, and all are hot pluggable. The power supplies also have two integrated, independently powered 40 mm fan modules. The cooling path for the nodes begins when air is drawn in from the front of the chassis. The airflow intensity is controlled by the 80 mm fan modules in the rear. Air passes from the front of the chassis, through the node, through openings in the midplane, and then into a plenum chamber. Each plenum is isolated from the other, providing separate left and right cooling zones. The 80 mm fan packs in each zone then move the warm air from the plenum to the rear of the chassis. In a two-bay wide node, the airflow within the node is not segregated because it spans both airflow zones.


Figure 4-18 shows a chassis with the outer casing removed for clarity, to show the airflow path through the chassis. There is no airflow through the chassis midplane where a node is not installed. The air damper is opened only when a node is inserted in that bay.

Figure 4-18 Airflow into the chassis through the nodes and exhaust through the 80 mm fan packs (chassis casing is removed for clarity)


Figure 4-19 shows the path of air from the upper and lower airflow inlet apertures to the power supplies.

Figure 4-19 Airflow path to the power supplies (chassis casing is removed for clarity)


Figure 4-20 shows the airflow from the lower inlet aperture to the 40 mm fan modules. This airflow provides cooling for the switch modules and CMM installed in the rear of the chassis.

Figure 4-20 40 mm fan module airflow (chassis casing is removed for clarity)

The right side 40 mm fan module cools the right switches, while the left 40 mm fan module cools the left pair of switches. Each 40 mm fan module has a pair of fans for redundancy. Cool air flows in from the lower inlet aperture at the front of the chassis. It is drawn into the lower openings in the CMM and I/O modules, where it provides cooling for these components. It passes through, and is drawn out the top of, the CMM and I/O modules. The warm air is expelled to the rear of the chassis by the 40 mm fan assembly. This expulsion is shown by the red airflow arrows in Figure 4-20. Removal of a fan pack exposes an opening in the bay to the 80 mm fan packs located below, and a back flow damper within the fan bay then closes. The backflow damper prevents hot air from re-entering the system from the rear of the chassis. The 80 mm fan packs cool the switch modules and the CMM while the fan pack is being replaced. Chassis cooling is implemented as a function of:
- Node configurations
- Power monitor circuits
- Component temperatures
- Ambient temperature

This design results in lower airflow volume (measured in cubic feet per minute, or CFM) and lower cooling energy spent at a chassis level. It also maximizes the temperature difference across the chassis (known generally as the Delta T) for more efficient room integration. Monitored chassis-level airflow usage is displayed to enable airflow planning and monitoring for hot air recirculation.

Five Acoustic Optimization states can be selected. Use the one that best balances performance requirements with the noise level of the fans. Chassis level CFM usage is available to you for planning purposes. In addition, ambient health awareness can detect potential hot air recirculation to the chassis.

4.7 Power supply and fan module requirements


The number of fan modules and power supplies required is dependent on the number of nodes installed within a chassis and the level of redundancy required. When installing additional nodes, install the nodes, fan modules, and power supplies from the bottom upwards.

4.7.1 Fan module population


The fan modules are populated depending on the number of nodes installed. To support the base configuration and up to four nodes, a chassis ships with four 80 mm fan modules and two 40 mm fan modules preinstalled. The minimum configuration of 80 mm fan modules is four, which provides cooling for a maximum of four nodes. This configuration is shown in Figure 4-21, and is the base configuration.

Figure 4-21 Four 80 mm fan modules allow a maximum of four nodes installed


Installing six 80 mm fan modules allows a further four nodes to be supported within the chassis, for a maximum of eight, as shown in Figure 4-22.

Figure 4-22 Six 80 mm fan modules allow for a maximum of eight nodes

To cool more than eight nodes, all fan modules must be installed, as shown in Figure 4-23.

Figure 4-23 Eight 80 mm fan modules support 9 to 14 nodes

If there are insufficient fan modules for the number of nodes installed, the nodes might be throttled.
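The population rule above (four 80 mm modules for up to four nodes, six for up to eight, and all eight for 9 to 14 nodes) reduces to a simple lookup. A minimal sketch, with a hypothetical function name:

```python
# Hedged sketch of the 80 mm fan module population rule in this section.
# Two 40 mm fan modules are always installed and are not counted here.

def fan_modules_80mm_required(node_count):
    if not 1 <= node_count <= 14:
        raise ValueError("the Enterprise Chassis holds 1 - 14 half-wide nodes")
    if node_count <= 4:
        return 4          # base configuration, as shipped
    if node_count <= 8:
        return 6
    return 8              # 9 - 14 nodes need all eight modules

print(fan_modules_80mm_required(4))   # 4
print(fan_modules_80mm_required(10))  # 8
```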

4.7.2 Power supply population


The power supplies can be installed in either an N+N or an N+1 configuration. N+N means a fully redundant configuration where there is a duplicate power supply for each supply needed for full operation. N+1 means there is only one redundant power supply, and all other supplies are needed for full operation. To support a full chassis of nodes, N (the number of power supplies) must equal 3 for N+N operation, and must be greater than or equal to 3 for N+1 operation. As the number of nodes in a chassis is expanded, more power supplies can be added as required. This system allows cost-effective scaling of power configurations.

If there is not enough DC power available to meet the load demand, the Chassis Management Module automatically powers down devices to reduce the load demand.

Power policies
There are five power management policies that can be selected to dictate how the chassis is protected in the case of potential power module or supply failures. These policies are configured by using the Chassis Management Module graphical interface.

- AC Power source redundancy: Power is allocated under the assumption that no throttling of the nodes is allowed if a power supply fault occurs. This is an N+N configuration.
- AC Power source redundancy with compute node throttling allowed: Power is allocated under the assumption that throttling of the nodes is allowed if a power supply fault occurs. This is an N+N configuration.
- Power module redundancy: Maximum input power is limited to one less than the number of power modules when more than one power module is present. One power module can fail without affecting compute node operation. Multiple power module failures can cause the chassis to power off. Some compute nodes might not be able to power on if doing so would exceed the power policy limit.
- Power module redundancy with compute node throttling allowed: This can be described as oversubscription mode. Operation in this mode assumes that a node's load can be reduced, or throttled, to the continuous load rating within a specified time following a loss of one or more power supplies. The power supplies can exceed their continuous rating of 2500 W for short periods. This is an N+1 configuration.
- Basic power management: This policy allows the total output power of all power supplies to be used. When operating in this mode, there is no power redundancy. If a power supply fails, or an AC feed to one or more supplies is lost, the entire chassis might shut down. There is no power throttling.

The chassis runs under one of these power capping policies:
- No power capping: Maximum input power is determined by the active power redundancy policy.
- Static capping: This policy sets an overall chassis limit on the maximum input power. In a situation where powering on a component would cause the limit to be exceeded, the component is prevented from powering on.
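As a rough illustration of the power module redundancy policy, the following sketch checks whether a node can power on without exceeding a budget of one fewer power module than is installed. The 2500 W continuous rating comes from the text; the function name and example node loads are hypothetical.

```python
# Hedged sketch of the power-budget check under Power Module Redundancy:
# usable input power is limited to one fewer power module than installed.

PSU_CONTINUOUS_RATING_W = 2500  # continuous rating quoted in the text

def can_power_on(installed_psus, current_load_w, node_demand_w):
    budget_w = (installed_psus - 1) * PSU_CONTINUOUS_RATING_W
    return current_load_w + node_demand_w <= budget_w

# Three supplies installed -> a 5000 W budget under this policy.
print(can_power_on(3, current_load_w=4400, node_demand_w=500))  # True
print(can_power_on(3, current_load_w=4800, node_demand_w=500))  # False
```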


Power supplies required in an N+N configuration


A total of six PSUs can be installed. In an N+N configuration, the options are two, four, or six power supplies installed. The chassis ships with power supplies preinstalled in bays 1 and 4. For N+N, this configuration allows up to four nodes to be populated into the chassis before any additional power supplies are required. Figure 4-24 shows this configuration.

Figure 4-24 N+N with four nodes installed

For up to eight nodes with N+N configuration, install a further pair of power supplies in bays 2 and 5 as shown in Figure 4-25.

Figure 4-25 N+N power supply requirements with up to eight nodes installed


To support more than eight nodes with N+N, install the remaining pair of power supplies (3 and 6) as shown in Figure 4-26.

Figure 4-26 N+N power supply requirements for nodes 9 - 14

Power supplies required in an N+1 configuration


The chassis ships with two power supplies installed. Therefore, you can install up to four nodes in an N+1 power configuration. Figure 4-27 shows an N+1 configuration.

Figure 4-27 N+1: Two PSUs support up to four nodes


With configurations of five to eight nodes, N+1 requires a total of three power supplies (Figure 4-28).

Figure 4-28 N+1: Up to eight nodes are supported with three power supplies

For configurations of nine or more nodes, a total of four power supplies is required, as shown in Figure 4-29.

Figure 4-29 N+1: A fully configured chassis requires four power supplies

A fully populated chassis can function on three power supplies. However, avoid this configuration because it has no power redundancy in the event of a power source or power supply failure.
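The population guidance in this section, for both N+N and N+1, reduces to a simple mapping from node count to power supply count. A minimal sketch, with a hypothetical function name:

```python
# Hedged sketch of the power supply population guidance above.
# N+N: two supplies for up to four nodes, four for up to eight,
# six for 9 - 14. N+1: two, three, or four supplies respectively.

def power_supplies_required(node_count, policy="N+N"):
    if not 1 <= node_count <= 14:
        raise ValueError("the Enterprise Chassis holds 1 - 14 half-wide nodes")
    if policy == "N+N":
        return 2 if node_count <= 4 else 4 if node_count <= 8 else 6
    if policy == "N+1":
        return 2 if node_count <= 4 else 3 if node_count <= 8 else 4
    raise ValueError("policy must be 'N+N' or 'N+1'")

print(power_supplies_required(8, "N+N"))   # 4
print(power_supplies_required(14, "N+1"))  # 4
```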

4.8 Chassis Management Module


The CMM provides single-chassis management and the networking path for remote keyboard, video, mouse (KVM) capability for compute nodes within the chassis. The chassis can accommodate one or two CMMs. The first is installed in CMM bay 1, the second in CMM bay 2. Installing two CMMs provides redundancy.


Table 4-8 lists the ordering information for the second CMM.
Table 4-8 Chassis Management Module ordering information Part number 68Y7030 Feature codea A0UE / 3592 Description IBM Flex System Chassis Management Module

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Figure 4-30 shows the location of the CMM bays on the back of the Enterprise Chassis.

Figure 4-30 CMM Bay 1 and Bay 2

The CMM provides these functions:
- Power control
- Fan management
- Chassis and compute node initialization
- Switch management
- Diagnostics
- Resource discovery and inventory management
- Resource alerts and monitoring management
- Chassis and compute node power management
- Network management

The CMM has the following connectors:
- USB connection: Can be used for insertion of a USB media key for tasks such as firmware updates.
- 10/100/1000 Mbps RJ45 Ethernet connection: For connection to a management network. The CMM can be managed through this Ethernet port.


Serial port (mini-USB): For local serial (command-line interface (CLI)) access to the CMM. Use the cable kit listed in Table 4-9 for connectivity.
Table 4-9 Serial cable specifications Part number 90Y9338 Feature codea A2RR / None Description IBM Flex System Management Serial Access Cable Contains two cables: Mini-USB-to-RJ45 serial cable Mini-USB-to-DB9 serial cable

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The CMM has the following LEDs that provide status information:
- Power-on LED
- Activity LED
- Error LED
- Ethernet port link and port activity LEDs

Figure 4-31 shows the CMM connectors and LEDs.

Figure 4-31 Chassis Management Module

The CMM also incorporates a reset button, which has two functions, depending on how long the button is held in:
- When pressed for less than 5 seconds, the CMM restarts.
- When pressed for more than 5 seconds (for example, 10 - 15 seconds), the CMM configuration is reset to manufacturing defaults and the CMM then restarts.

For more information about how the CMM integrates into the systems management architecture, see 3.2, Chassis Management Module on page 39.


4.9 I/O architecture


The Enterprise Chassis can accommodate four I/O modules installed in vertical orientation into the rear of the chassis, as shown in Figure 4-32.

I/O module bay 1

I/O module bay 3

I/O module bay 2

I/O module bay 4

Figure 4-32 Rear view that shows the I/O Module bays 1-4

If a node has a two-port integrated LAN on Motherboard (LOM) as standard, I/O modules 1 and 2 are connected to this LOM. If an I/O adapter is installed in the node's I/O expansion bay 1, I/O modules 1 and 2 are connected to that adapter instead. I/O modules 3 and 4 connect to the I/O adapter that is installed in I/O expansion bay 2 on the node. These I/O modules provide external connectivity, and connect internally to each of the nodes within the chassis. They can be either switch or pass-through modules, with the potential to support other types in the future.


Figure 4-33 shows the connections from the nodes to the switch modules.
Figure 4-33 LOM, I/O adapter, and switch module connections. The LOM connector is removed when an I/O expansion adapter is installed. Each link group carries four lanes (KX-4) or four 10 Gbps lanes (KR), and each I/O module has 14 internal groups (of four lanes each), one to each node.

The node in bay 1 in Figure 4-33 shows that, when a node is shipped with a LOM, the LOM connector provides the link from the node system board to the midplane. Some nodes do not ship with a LOM. If required, this LOM connector can be removed and an I/O expansion adapter installed in its place. This configuration is shown on the node in bay 2 in Figure 4-33.


Figure 4-34 shows the electrical connections from the LOMs and I/O adapters to the I/O modules, which all take place across the chassis midplane.

Figure 4-34 Logical layout of node to switch interconnects. Nodes 1 - 14 each have two adapter sites (M1 and M2) that connect across the midplane to switches 1 - 4; each line between an I/O adapter and a switch is four links.

A total of two I/O expansion adapters (designated M1 and M2 in Figure 4-34) can be installed in a half-wide node, and up to four I/O adapters in a full-wide node. Each I/O adapter has two connectors. One connects to the compute node's system board (a PCI Express connection). The second is a high-speed interface that mates with the midplane when the node is installed into a bay within the chassis. As shown in Figure 4-34, each of the links from an I/O adapter to the midplane is in fact four links wide. Exactly how many links are employed on each I/O adapter depends on the design of the adapter and the number of ports that are wired. Therefore, a half-wide node can have a maximum of 16 I/O links, and a full-wide node 32.


Figure 4-35 shows an I/O expansion adapter.

Figure 4-35 I/O expansion adapter, showing the PCIe connector, the midplane connector, and the guide block that ensures correct installation. Adapters share a common size (100 mm x 80 mm).

Each of these individual I/O links, or lanes, can be wired for 1 Gb or 10 Gb Ethernet, or for 8 or 16 Gbps Fibre Channel. You can enable any number of these links. The application-specific integrated circuit (ASIC) type on the I/O expansion adapter dictates the number of links that can be enabled. Some ASICs are two-port and some are four-port. For a two-port ASIC, one port can go to one switch and one port to the other. This configuration is shown in Figure 4-36 on page 89. Other combinations can be implemented in the future.

In an Ethernet I/O adapter, the links are wired to the IEEE 802.3ap standard, which is also known as the Backplane Ethernet standard. The Backplane Ethernet standard has different implementations at 10 Gbps: 10GBASE-KX4 and 10GBASE-KR. The I/O architecture of the Enterprise Chassis supports both KX4 and KR. 10GBASE-KX4 uses the same physical layer coding (IEEE 802.3 clause 48) as 10GBASE-CX4, where each individual lane (SERDES = Serializer/Deserializer) carries 3.125 Gbaud of signaling bandwidth. 10GBASE-KR uses the same coding (IEEE 802.3 clause 49) as 10GBASE-LR/ER/SR, where the SERDES lane operates at 10.3125 Gbps. Each of the links between an I/O expansion adapter and an I/O module can therefore be either 4x 3.125 Gbaud lanes (KX-4) or 4x 10 Gbps lanes (KR). This choice depends on the expansion adapter and I/O module implementation.
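As a worked check of these two lane rates: KX-4 carries four 3.125 Gbaud lanes with 8b/10b coding (clause 48), while KR carries a single 10.3125 Gbaud lane with 64b/66b coding (clause 49), so both deliver the same 10 Gbps of usable data per port. A minimal sketch, for illustration only:

```python
# Worked check of the Backplane Ethernet lane rates described above,
# using integer arithmetic in bits per second.
# 10GBASE-KX4: four lanes at 3.125 Gbaud, 8b/10b coding
#   (8 data bits for every 10 signaling bits).
# 10GBASE-KR: one lane at 10.3125 Gbaud, 64b/66b coding.

kx4_data_rate = 4 * 3_125_000_000 * 8 // 10    # four lanes, 8b/10b
kr_data_rate = 10_312_500_000 * 64 // 66       # one lane, 64b/66b

# Both resolve to the same 10 Gbps of usable data per port.
assert kx4_data_rate == kr_data_rate == 10_000_000_000
```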


Figure 4-36 shows how the integrated 2-port 10 Gb LOM connects through a LOM connector to the midplane on a compute node. This implementation provides a pair of 10 Gb lanes. Each lane connects to a 10 Gb switch or 10 Gb pass-through module installed in I/O module bays in the rear of the chassis.

Figure 4-36 LOM implementation: Emulex 10 Gb Virtual Fabric onboard LOM to I/O module, with one 10 Gbps KR lane per port (P1 and P2) through the LOM connector.


A half-wide compute node with two standard I/O adapter sockets and an I/O adapter with two ports is shown in Figure 4-37. Port 1 connects to one switch in the chassis and Port 2 connects to another switch in the chassis. With 14 compute nodes installed in the chassis, therefore, each switch has 14 internal ports for connectivity to the compute nodes.

Figure 4-37 I/O adapter with two-port ASIC: port P1 of the adapter in slot 1 connects to one I/O module and port P2 to the other.


Another implementation of the I/O adapter is the four port. Figure 4-38 shows the interconnection to the I/O module bays for such I/O adapters that uses a 4-port ASIC.

Figure 4-38 I/O adapter with four-port ASIC connections

In this case, with each node having a four-port I/O adapter in I/O slot 1, each I/O module requires 28 internal ports enabled. This configuration highlights another key feature of the I/O architecture: switch partitioning. With switch partitioning, sets of ports are enabled by Features on Demand (FoD) to allow a greater number of connections between nodes and a switch. With two lanes per node to each switch and 14 nodes requiring four ports connected, each switch needs 28 internal ports enabled. You also need sufficient uplink ports. The architecture allows for a total of eight lanes per I/O adapter, so a total of 16 I/O lanes per half-wide node is possible. Each I/O module requires the matching number of internal ports to be enabled. For more information about switch partitioning and port enablement by using FoD, see 4.10, I/O modules on page 92. For more information about I/O expansion adapters that install on the nodes, see 5.5.1, Overview on page 216.
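The internal-port arithmetic above can be sketched as follows. This is an illustrative Python sketch, not an IBM tool; it assumes an adapter's ports are split evenly between the two I/O modules that serve its slot, as described in this section.

```python
# Sketch of the internal-port math behind switch partitioning.
# An adapter's ports are split evenly between the two I/O modules
# that its slot connects to, so each module needs ports-per-adapter/2
# internal ports enabled (for example, via FoD upgrades) per node.

def internal_ports_per_module(nodes: int, adapter_ports: int) -> int:
    """Internal ports each of the two I/O modules for a slot must
    have enabled so that every adapter port is connected."""
    return nodes * (adapter_ports // 2)
```

With 14 nodes and four-port adapters this gives 28 internal ports per module, matching the Figure 4-38 discussion; with two-port adapters it gives the base 14.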


4.10 I/O modules


I/O modules are inserted into the rear of the Enterprise Chassis to provide interconnectivity both within the chassis and external to the chassis. This section covers the I/O modules and the switch module naming scheme. It contains the following subsections:
- 4.10.1, I/O module LEDs
- 4.10.2, Serial access cable on page 93
- 4.10.3, I/O module naming scheme on page 94
- 4.10.4, IBM Flex System Fabric EN4093 10 Gb Scalable Switch on page 94
- 4.10.5, IBM Flex System EN4091 10 Gb Ethernet Pass-thru on page 100
- 4.10.6, IBM Flex System EN2092 1 Gb Ethernet Scalable Switch on page 102
- 4.10.7, IBM Flex System FC5022 16 Gb SAN Scalable Switch on page 107
- 4.10.8, IBM Flex System FC3171 8 Gb SAN Switch on page 113
- 4.10.9, IBM Flex System FC3171 8 Gb SAN Pass-thru on page 116
- 4.10.10, IBM Flex System IB6131 InfiniBand Switch on page 118

There are four I/O module bays at the rear of the chassis. To insert an I/O module into a bay, the I/O filler must first be removed. Figure 4-39 shows how to remove an I/O filler and install an I/O module into the chassis by using the two handles.

Figure 4-39 Removing an I/O filler and installing an I/O module


4.10.1 I/O module LEDs


I/O module status LEDs are at the bottom of the module when it is inserted into the chassis. All modules share three status LEDs, as shown in Figure 4-40.

Figure 4-40 Example of the I/O module status LEDs (OK, Identify, and Switch error) and the serial port for local management

The LEDs are as follows:
- OK (power): When this LED is lit, it indicates that the switch is on. When it is not lit and the amber switch error LED is lit, it indicates a critical alert. If the amber LED is also not lit, it indicates that the switch is off.
- Identify: You can physically identify a switch by making this blue LED light up by using the management software.
- Switch error: When this LED is lit, it indicates a POST failure or critical alert. When this LED is lit, the system-error LED on the chassis is also lit. When this LED is not lit and the green LED is lit, it indicates that the switch is working correctly. If the green LED is also not lit, it indicates that the switch is off.

4.10.2 Serial access cable


The switches (and CMM) support local command-line interface (CLI) access through a USB serial cable. The mini-USB port on the switch is near the LEDs as shown in Figure 4-40. A cable kit with supported serial cables can be ordered as listed in Table 4-10.
Table 4-10 Serial cable Part number 90Y9338 Feature codea A2RR / None Description IBM Flex System Management Serial Access Cable

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Part number 90Y9338 contains two cables:
- Mini-USB-to-RJ45 serial cable
- Mini-USB-to-DB9 serial cable

4.10.3 I/O module naming scheme


The I/O module naming scheme follows a logical structure, similar to that of the I/O adapters. Figure 4-41 shows the I/O module naming scheme. As time progresses this scheme might be expanded to support future technology.

Using the IBM Flex System EN2092 1 Gb Ethernet Scalable Switch as an example, the name EN2092 decodes as follows:
- Fabric type: EN = Ethernet, FC = Fibre Channel, CN = Converged Network, IB = InfiniBand
- Series: 2 for 1 Gb, 3 for 8 Gb, 4 for 10 Gb, 5 for 16 Gb, 6 for InfiniBand
- Vendor code: 02 = Brocade, 09 = IBM, 13 = Mellanox, 17 = QLogic
- Maximum number of partitions: 2 = 2 partitions

Figure 4-41 IBM Flex System I/O Module naming scheme
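As an illustration of the scheme in Figure 4-41, a small decoder can split a module name into its fields. The function and table names here are hypothetical; the codes themselves are taken from the figure.

```python
# Hypothetical decoder for the I/O module naming scheme in Figure 4-41.
# Format: two-letter fabric type, one-digit speed series, two-digit
# vendor code, one-digit maximum number of partitions.

FABRIC = {"EN": "Ethernet", "FC": "Fibre Channel",
          "CN": "Converged Network", "IB": "InfiniBand"}
SERIES = {"2": "1 Gb", "3": "8 Gb", "4": "10 Gb",
          "5": "16 Gb", "6": "InfiniBand"}
VENDOR = {"02": "Brocade", "09": "IBM", "13": "Mellanox", "17": "QLogic"}

def decode_module_name(name: str) -> dict:
    """Split a module name such as 'EN2092' into its naming fields."""
    return {
        "fabric": FABRIC[name[0:2]],
        "series": SERIES[name[2]],
        "vendor": VENDOR[name[3:5]],
        "partitions": int(name[5]),
    }
```

For example, FC5022 decodes as a 16 Gb Fibre Channel module from Brocade with up to two partitions.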

4.10.4 IBM Flex System Fabric EN4093 10 Gb Scalable Switch


The IBM Flex System Fabric EN4093 10 Gb Scalable Switch is a 10 Gb 64-port upgradeable midrange to high-end switch module that offers Layer 2/3 switching. It is designed to be installed within the I/O module bays of the Enterprise Chassis. The switch contains the following ports:
- Up to 42 internal 10 Gb ports
- Up to 14 external 10 Gb uplink ports (enhanced small form-factor pluggable (SFP+) connectors)
- Up to 2 external 40 Gb uplink ports (quad small form-factor pluggable (QSFP+) connectors)

The switch is suited for clients with these needs:
- Building a 10 Gb infrastructure
- Implementing a virtualized environment
- Requiring investment protection for 40 Gb uplinks
- Wanting to reduce total cost of ownership (TCO) and improve performance, while maintaining high levels of availability and security
- Wanting to avoid oversubscription (traffic from multiple internal ports that attempts to pass through a lower quantity of external ports, leading to congestion and performance impact)


The EN4093 10Gb Scalable Switch is shown in Figure 4-42.

Figure 4-42 IBM Flex System Fabric EN4093 10 Gb Scalable Switch

As listed in Table 4-11, the switch is initially licensed with fourteen 10 Gb internal ports enabled and ten 10 Gb external uplink ports enabled. Further ports can be enabled, including the two 40 Gb external uplink ports with the Upgrade 1 and Upgrade 2 license options. Upgrade 1 must be applied before Upgrade 2 can be applied. Table 4-11 lists the available parts and upgrades.
Table 4-11 IBM Flex System Fabric EN4093 10 Gb Scalable Switch part numbers and port upgrades (part number / feature codea / product description / total ports enabled)
- 49Y4270 / A0TB / 3593: IBM Flex System Fabric EN4093 10 Gb Scalable Switch (10x external 10 Gb uplinks, 14x internal 10 Gb ports). Enables 14 internal ports, 10x 10 Gb uplinks, 0x 40 Gb uplinks.
- 49Y4798 / A1EL / 3596: IBM Flex System Fabric EN4093 10 Gb Scalable Switch (Upgrade 1). Adds 2x external 40 Gb uplinks and 14x internal 10 Gb ports. Enables 28 internal ports, 10x 10 Gb uplinks, 2x 40 Gb uplinks.
- 88Y6037 / A1EM / 3597: IBM Flex System Fabric EN4093 10 Gb Scalable Switch (Upgrade 2; requires Upgrade 1). Adds 4x external 10 Gb uplinks and 14x internal 10 Gb ports. Enables 42 internal ports, 14x 10 Gb uplinks, 2x 40 Gb uplinks.

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.
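The port counts in Table 4-11 determine the internal-to-uplink oversubscription ratio mentioned earlier. The following sketch is illustrative Python (the tier names are shorthand, not IBM feature names), modeling the three license tiers and the worst-case ratio of internal bandwidth to uplink bandwidth:

```python
# Sketch of the EN4093 port counts per license tier (Table 4-11) and
# the resulting worst-case internal-to-uplink oversubscription ratio.

TIERS = {
    "base":     {"internal": 14, "uplink_10g": 10, "uplink_40g": 0},
    "upgrade1": {"internal": 28, "uplink_10g": 10, "uplink_40g": 2},
    "upgrade2": {"internal": 42, "uplink_10g": 14, "uplink_40g": 2},
}

def oversubscription(tier: str) -> float:
    """Ratio of internal bandwidth to uplink bandwidth (Gbps)."""
    p = TIERS[tier]
    internal_bw = p["internal"] * 10
    uplink_bw = p["uplink_10g"] * 10 + p["uplink_40g"] * 40
    return internal_bw / uplink_bw
```

At the base tier the ratio is 1.4:1 (140 Gbps internal versus 100 Gbps uplink); fully upgraded it is about 1.9:1, with the 220 Gbps of uplink bandwidth matching the aggregation figure quoted in the specifications later in this section.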


The key components on the front of the switch are shown in Figure 4-43.
Figure 4-43 IBM Flex System Fabric EN4093 10 Gb Scalable Switch: 14x 10 Gb SFP+ uplink ports (10 standard, 4 with Upgrade 2), 2x 40 Gb QSFP+ uplink ports (enabled with Upgrade 1), management ports, switch LEDs, and a switch release handle on either side

Each upgrade license enables additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches)
- Upgrade 1 requires a four-port Ethernet adapter (two ports of the adapter to each switch)
- Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch)

Consideration: Adding Upgrade 2 enables an additional 14 internal ports, for a total of 42 internal ports, with three ports connected to each of the 14 compute nodes in the chassis. To take full advantage of all 42 internal ports, a six-port adapter is required, but this type of adapter is not currently available. Upgrade 2 still provides a benefit even with a four-port adapter because the upgrade also enables an extra four external 10 Gb uplinks.

The rear of the switch has 14 SFP+ module ports and two QSFP+ module ports. The QSFP+ ports can be used to provide either two 40 Gb uplinks or eight 10 Gb ports. Use one of the supported QSFP+ to 4x 10 Gb SFP+ cables listed in Table 4-12. This cable splits a single 40 Gb QSFP+ port into four 10 Gb SFP+ ports. For management of the switch, a mini-USB port and an Ethernet management port are provided. The supported SFP+ and QSFP+ modules and cables for the switch are listed in Table 4-12.
Table 4-12 Supported SFP+ modules and cables (part number / feature codea / description)

Serial console cables:
- 90Y9338 / A2RR / None: IBM Flex System Management Serial Access Cable Kit

Small form-factor pluggable (SFP) transceivers - 1 GbE:
- 81Y1618 / 3268 / EB29: IBM SFP RJ-45 Transceiver (does not support 10/100 Mbps)
- 81Y1622 / 3269 / EB2A: IBM SFP SX Transceiver
- 90Y9424 / A1PN / None: IBM SFP LX Transceiver

SFP+ transceivers - 10 GbE:
- 46C3447 / 5053 / None: IBM SFP+ SR Transceiver
- 90Y9412 / A1PM / None: IBM SFP+ LR Transceiver
- 44W4408 / 4942 / 3382: 10GBase-SR SFP+ (MMFiber) transceiver

SFP+ Direct Attach Copper (DAC) cables - 10 GbE:
- 90Y9427 / A1PH / ECB4: 1m IBM Passive DAC SFP+
- 90Y9430 / A1PJ / ECB5: 3m IBM Passive DAC SFP+
- 90Y9433 / A1PK / None: 5m IBM Passive DAC SFP+

QSFP+ transceiver and cables - 40 GbE:
- 49Y7884 / A1DR / EB27: IBM QSFP+ 40GBASE-SR Transceiver (requires either cable 90Y3519 or cable 90Y3521)
- 90Y3519 / A1MM / None: 10m IBM MTP Fiberoptic Cable (requires transceiver 49Y7884)
- 90Y3521 / A1MN / None: 30m IBM MTP Fiberoptic Cable (requires transceiver 49Y7884)

QSFP+ breakout cables - 40 GbE to 4x10 GbE:
- 49Y7886 / A1DL / EB24: 1m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable
- 49Y7887 / A1DM / EB25: 3m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable
- 49Y7888 / A1DN / EB26: 5m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable

QSFP+ Direct Attach Copper (DAC) cables - 40 GbE:
- 49Y7890 / A1DP / None: 1m QSFP+ to QSFP+ DAC
- 49Y7891 / A1DQ / None: 3m QSFP+ to QSFP+ DAC

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The EN4093 10 Gb Scalable Switch has the following features and specifications:

Internal ports:
- Forty-two internal full-duplex 10 Gigabit ports. Fourteen ports are enabled by default. Optional FoD licenses are required to activate the remaining 28 ports.
- Two internal full-duplex 1 GbE ports connected to the chassis management module

External ports:
- Fourteen ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC cables. Ten ports are enabled by default. An optional FoD license is required to activate the remaining four ports. SFP+ modules and DAC cables are not included and must be purchased separately.
- Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DAC cables. These ports are disabled by default; an optional FoD license is required to activate them. QSFP+ modules and DAC cables are not included and must be purchased separately.
- One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module.


Scalability and performance:
- 40 Gb Ethernet ports for extreme uplink bandwidth and performance
- Fixed-speed external 10 Gb Ethernet ports to take advantage of 10 Gb core infrastructure
- Autosensing 10/100/1000 external Gigabit Ethernet ports for bandwidth optimization
- Non-blocking architecture with wire-speed forwarding of traffic and aggregated throughput of 1.28 Tbps
- Media Access Control (MAC) address learning: Automatic update, support of up to 128,000 MAC addresses
- Up to 128 IP interfaces per switch
- Static and Link Aggregation Control Protocol (LACP) (IEEE 802.3ad) link aggregation: Up to 220 Gb of total uplink bandwidth per switch, up to 64 trunk groups, up to 16 ports per group
- Support for jumbo frames (up to 9,216 bytes)
- Broadcast/multicast storm control
- Internet Group Management Protocol (IGMP) snooping to limit flooding of IP multicast traffic
- IGMP filtering to control multicast traffic for hosts that participate in multicast groups
- Configurable traffic distribution schemes over trunk links based on source/destination IP or MAC addresses, or both
- Fast port forwarding and fast uplink convergence for rapid STP convergence

Availability and redundancy:
- Virtual Router Redundancy Protocol (VRRP) for Layer 3 router redundancy
- IEEE 802.1D Spanning Tree Protocol (STP) for providing L2 redundancy
- IEEE 802.1s Multiple STP (MSTP) for topology optimization; up to 32 STP instances are supported by a single switch
- IEEE 802.1w Rapid STP (RSTP), which provides rapid STP convergence for critical delay-sensitive traffic such as voice or video
- Rapid Per-VLAN STP (RPVST) enhancements
- Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes
- Hot Links, which provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off

Virtual local area network (VLAN) support:
- Up to 1024 VLANs supported per switch, with VLAN numbers that range from 1 to 4095 (4095 is used for the management module's connection only)
- 802.1Q VLAN tagging support on all ports
- Private VLANs

Security:
- VLAN-based, MAC-based, and IP-based access control lists (ACLs)
- 802.1x port-based authentication
- Multiple user IDs and passwords


User access control: RADIUS, TACACS+, and LDAP authentication and authorization

Quality of Service (QoS):
- Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing
- Traffic shaping and remarking based on defined policies
- Eight weighted round robin (WRR) priority queues per port for processing qualified traffic

IPv4 Layer 3 functions:
- Host management
- IP forwarding
- IP filtering with ACLs; up to 896 ACLs supported
- VRRP for router redundancy
- Support for up to 128 static routes
- Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4); up to 2048 entries in a routing table
- Support for Dynamic Host Configuration Protocol (DHCP) Relay
- Support for IGMP snooping and IGMP relay
- Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM)

IPv6 Layer 3 functions:
- IPv6 host management (except default switch management IP address)
- IPv6 forwarding
- Up to 128 static routes
- Support for OSPF v3 routing protocol
- IPv6 filtering with ACLs

Virtualization:
- Virtual Fabric with virtual network interface card (vNIC)
- 802.1Qbg Edge Virtual Bridging (EVB)
- IBM VMready

Converged Enhanced Ethernet:
- Priority-based Flow Control (PFC) (IEEE 802.1Qbb) extends the 802.3x standard flow control to allow the switch to pause traffic, based on the 802.1p priority value in each packet's VLAN tag.
- Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for allocating link bandwidth based on the 802.1p priority value in each packet's VLAN tag.
- Data Center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows neighboring network devices to exchange information about their capabilities.

Manageability:
- Simple Network Management Protocol (SNMP V1, V2, and V3)
- HTTP browser GUI


- Telnet interface for CLI
- Secure Shell (SSH)
- Serial interface for CLI
- Scriptable CLI
- Firmware image update: Trivial File Transfer Protocol (TFTP) and File Transfer Protocol (FTP)
- Network Time Protocol (NTP) for switch clock synchronization

Monitoring:
- Switch LEDs for external port status and switch module status indication
- Remote monitoring (RMON) agent to collect statistics and proactively monitor switch performance
- Port mirroring for analyzing network traffic that passes through the switch
- Change tracking and remote logging with the syslog feature
- Support for the sFlow agent for monitoring traffic in data networks (a separate sFlow analyzer is required elsewhere)
- POST diagnostic procedures

For more information, see the IBM Redbooks Product Guide for the IBM Flex System Fabric EN4093 10 Gb Scalable Switch, at:
http://www.redbooks.ibm.com/abstracts/tips0864.html?Open

4.10.5 IBM Flex System EN4091 10 Gb Ethernet Pass-thru


The EN4091 10 Gb Ethernet Pass-thru module offers a one-for-one connection between a single node bay and an I/O module uplink. It has no management interface, and can support both 1 Gb and 10 Gb dual-port adapters installed in the compute nodes. If quad-port adapters are installed in the compute nodes, only the first two ports have access to the pass-through module's ports. The necessary 1 GbE or 10 GbE module (SFP, SFP+, or DAC) must also be installed in the external ports of the pass-through module. This configuration supports the speed (1 Gb or 10 Gb) and medium (fiber optic or copper) for adapter ports on the compute nodes. The IBM Flex System EN4091 10 Gb Ethernet Pass-thru is shown in Figure 4-44.

Figure 4-44 IBM Flex System EN4091 10 Gb Ethernet Pass-thru


The ordering part number and feature codes are listed in Table 4-13.
Table 4-13 EN4091 10 Gb Ethernet Pass-thru part number and feature codes Part number 88Y6043 Feature codea A1QV / 3700 Product Name IBM Flex System EN4091 10 Gb Ethernet Pass-thru

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The EN4091 10 Gb Ethernet Pass-thru has the following specifications:
- Internal ports: 14 internal full-duplex Ethernet ports that can operate at 1 Gb or 10 Gb speeds
- External ports: Fourteen ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC. SFP+ modules and DAC cables are not included, and must be purchased separately.
- Unmanaged device that has no internal Ethernet management port. However, it is able to provide its VPD to the secure management network in the Chassis Management Module.
- Supports 10 Gb Ethernet signaling for CEE, FCoE, and other Ethernet-based transport protocols.
- Allows direct connection from the 10 Gb Ethernet adapters installed in compute nodes in a chassis to an externally located top-of-rack switch or other external device.

Restriction: The EN4091 10 Gb Ethernet Pass-thru has only 14 internal ports. As a result, only two ports on each compute node are enabled, one for each of two pass-through modules installed in the chassis. If four-port adapters are installed in the compute nodes, ports 3 and 4 on those adapters are not enabled.

There are three standard I/O module status LEDs, as shown in Figure 4-40 on page 93. Each port has link and activity LEDs. Table 4-14 lists the supported transceivers and DAC cables.
Table 4-14 IBM Flex System EN4091 10 Gb Ethernet Pass-thru part numbers and feature codes (part number / feature codesa / description)

SFP+ transceivers - 10 GbE:
- 44W4408 / 4942 / 3282: 10 GbE 850 nm Fibre Channel SFP+ Transceiver (SR)
- 46C3447 / 5053 / None: IBM SFP+ SR Transceiver
- 90Y9412 / A1PM / None: IBM SFP+ LR Transceiver

SFP transceivers - 1 GbE:
- 81Y1622 / 3269 / EB2A: IBM SFP SX Transceiver
- 81Y1618 / 3268 / EB29: IBM SFP RJ45 Transceiver
- 90Y9424 / A1PN / None: IBM SFP LX Transceiver

Direct-attach copper (DAC) cables:
- 81Y8295 / A18M / EN01: 1m 10GE Twinax Act Copper SFP+ DAC (active)
- 81Y8296 / A18N / EN02: 3m 10GE Twinax Act Copper SFP+ DAC (active)
- 81Y8297 / A18P / EN03: 5m 10GE Twinax Act Copper SFP+ DAC (active)
- 95Y0323 / A25A / None: 1m IBM Active DAC SFP+ Cable
- 95Y0326 / A25B / None: 3m IBM Active DAC SFP+ Cable
- 95Y0329 / A25C / None: 5m IBM Active DAC SFP+ Cable

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

For more information, see the IBM Redbooks Product Guide for the IBM Flex System EN4091 10 Gb Ethernet Pass-thru, at: http://www.redbooks.ibm.com/abstracts/tips0865.html?Open

4.10.6 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch


The EN2092 1 Gb Ethernet Scalable Switch provides support for L2/L3 switching and routing. The switch has these ports:
- Up to 28 internal 1 Gb ports
- Up to 20 external 1 Gb ports (RJ45 connectors)
- Up to 4 external 10 Gb uplink ports (SFP+ connectors)

The switch is shown in Figure 4-45.

Figure 4-45 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch


As listed in Table 4-15, the switch comes standard with 14 internal and 10 external Gigabit Ethernet ports enabled. Further ports can be enabled, including the four external 10 Gb uplink ports. Upgrade 1 and the 10 Gb Uplinks upgrade can be applied in either order.
Table 4-15 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch part numbers and port upgrades (part number / feature codea / product description)
- 49Y4294 / A0TF / 3598: IBM Flex System EN2092 1 Gb Ethernet Scalable Switch (14 internal 1 Gb ports, 10 external 1 Gb ports)
- 90Y3562 / A1QW / 3594: IBM Flex System EN2092 1 Gb Ethernet Scalable Switch (Upgrade 1). Adds 14 internal 1 Gb ports and 10 external 1 Gb ports.
- 49Y4298 / A1EN / 3599: IBM Flex System EN2092 1 Gb Ethernet Scalable Switch (10 Gb Uplinks). Adds 4 external 10 Gb uplinks.

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The key components on the front of the switch are shown in Figure 4-46.
Figure 4-46 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch: 20x external 1 Gb RJ45 ports (10 standard, 10 with Upgrade 1), 4x 10 Gb SFP+ uplink ports (enabled with the Uplinks upgrade), management port, and switch LEDs

The standard switch has 14 internal ports, and the Upgrade 1 license enables 14 additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter installed in each compute node (one port of the adapter goes to each of two switches)
- Upgrade 1 requires a four-port Ethernet adapter installed in each compute node (two ports of the adapter to each switch)

The standard switch has 10 external ports enabled. Additional external ports are enabled with license upgrades:
- Upgrade 1 enables 10 additional ports, for a total of 20 ports
- The Uplinks Upgrade enables the four 10 Gb SFP+ ports

These two upgrades can be installed in either order.
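The license combinations for the EN2092 can be modeled with a small sketch. This is illustrative Python only; the upgrade names used here are shorthand, not IBM feature names. Because the two licenses are independent, the resulting port counts do not depend on the order in which they are applied:

```python
# Sketch of EN2092 port enablement per Table 4-15. Upgrade 1 and the
# 10 Gb Uplinks upgrade are independent FoD licenses, so the enabled
# port counts depend only on which licenses are applied, not the order.

def en2092_ports(upgrades: set) -> dict:
    """Return enabled port counts for a set of applied upgrades."""
    ports = {"internal_1g": 14, "external_1g": 10, "uplink_10g": 0}
    if "upgrade1" in upgrades:
        ports["internal_1g"] += 14    # 28 internal 1 Gb ports total
        ports["external_1g"] += 10    # 20 external 1 Gb ports total
    if "10gb_uplinks" in upgrades:
        ports["uplink_10g"] = 4       # enables the four SFP+ uplinks
    return ports
```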


This switch is considered ideal for clients with these characteristics:
- Still use 1 Gb as their networking infrastructure
- Are deploying virtualization and require multiple 1 Gb ports
- Want investment protection for 10 Gb uplinks
- Are looking to reduce TCO and improve performance, while maintaining high levels of availability and security
- Are looking to avoid oversubscription (multiple internal ports that attempt to pass through a lower quantity of external ports, leading to congestion and performance impact)

The switch has three switch status LEDs (see Figure 4-40 on page 93) and one mini-USB serial port connector for console management. Ports 1 - 20 are RJ45, and the four 10 Gb uplink ports are SFP+. The switch supports either SFP+ modules or DAC cables. The supported SFP+ modules and DAC cables for the switch are listed in Table 4-16.
Table 4-16 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch SFP+ and DAC cables

Part number   Feature codea   Description
SFP transceivers
81Y1622       3269 / EB2A     IBM SFP SX Transceiver
81Y1618       3268 / EB29     IBM SFP RJ45 Transceiver
90Y9424       A1PN / None     IBM SFP LX Transceiver
SFP+ transceivers
44W4408       4942 / 3282     10 GbE 850 nm Fibre Channel SFP+ Transceiver (SR)
46C3447       5053 / None     IBM SFP+ SR Transceiver
90Y9412       A1PM / None     IBM SFP+ LR Transceiver
DAC cables
90Y9427       A1PH / None     1m IBM Passive DAC SFP+
90Y9430       A1PJ / ECB5     3m IBM Passive DAC SFP+
90Y9433       A1PK / None     5m IBM Passive DAC SFP+

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The EN2092 1 Gb Ethernet Scalable Switch has the following features and specifications:

Internal ports
- Twenty-eight internal full-duplex Gigabit ports. Fourteen ports are enabled by default. An optional FoD license is required to activate another 14 ports.
- Two internal full-duplex 1 GbE ports connected to the chassis management module

External ports
- Four ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC cables. These ports are disabled by default. An optional FoD license is required to activate them. SFP+ modules are not included and must be purchased separately.
IBM PureFlex System and IBM Flex System Products and Technology

- Twenty external 10/100/1000 1000BASE-T Gigabit Ethernet ports with RJ-45 connectors. Ten ports are enabled by default. An optional FoD license is required to activate another 10 ports.
- One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module

Scalability and performance
- Fixed-speed external 10 Gb Ethernet ports for maximum uplink bandwidth
- Autosensing 10/100/1000 external Gigabit Ethernet ports for bandwidth optimization
- Non-blocking architecture with wire-speed forwarding of traffic
- MAC address learning: Automatic update, support of up to 32,000 MAC addresses
- Up to 128 IP interfaces per switch
- Static and LACP (IEEE 802.3ad) link aggregation, up to 60 Gb of total uplink bandwidth per switch, up to 64 trunk groups, up to 16 ports per group
- Support for jumbo frames (up to 9,216 bytes)
- Broadcast/multicast storm control
- IGMP snooping to limit flooding of IP multicast traffic
- IGMP filtering to control multicast traffic for hosts that participate in multicast groups
- Configurable traffic distribution schemes over trunk links based on source/destination IP or MAC addresses, or both
- Fast port forwarding and fast uplink convergence for rapid STP convergence

Availability and redundancy
- VRRP for Layer 3 router redundancy
- IEEE 802.1D STP for providing L2 redundancy
- IEEE 802.1s MSTP for topology optimization; up to 32 STP instances supported by a single switch
- IEEE 802.1w RSTP (provides rapid STP convergence for critical delay-sensitive traffic such as voice or video)
- RPVST enhancements
- Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes
- Hot Links provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off

VLAN support
- Up to 1024 VLANs supported per switch, with VLAN numbers that range from 1 to 4095 (4095 is used for the management module's connection only)
- 802.1Q VLAN tagging support on all ports
- Private VLANs

Security
- VLAN-based, MAC-based, and IP-based ACLs
- 802.1x port-based authentication
- Multiple user IDs and passwords


User access control
- RADIUS, TACACS+, and Lightweight Directory Access Protocol (LDAP) authentication and authorization

QoS
- Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing
- Traffic shaping and remarking based on defined policies
- Eight WRR priority queues per port for processing qualified traffic

IPv4 Layer 3 functions
- Host management
- IP forwarding
- IP filtering with ACLs, up to 896 ACLs supported
- VRRP for router redundancy
- Support for up to 128 static routes
- Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4), up to 2048 entries in a routing table
- Support for DHCP Relay
- Support for IGMP snooping and IGMP relay
- Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM)

IPv6 Layer 3 functions
- IPv6 host management (except default switch management IP address)
- IPv6 forwarding
- Up to 128 static routes
- Support for OSPF v3 routing protocol
- IPv6 filtering with ACLs

Virtualization
- VMready

Manageability
- Simple Network Management Protocol (SNMP V1, V2, and V3)
- HTTP browser GUI
- Telnet interface for CLI
- SSH
- Serial interface for CLI
- Scriptable CLI
- Firmware image update (TFTP and FTP)
- NTP for switch clock synchronization

Monitoring
- Switch LEDs for external port status and switch module status indication
- RMON agent to collect statistics and proactively monitor switch performance



- Port mirroring for analyzing network traffic that passes through the switch
- Change tracking and remote logging with the syslog feature
- Support for the sFlow agent for monitoring traffic in data networks (a separate sFlow analyzer is required elsewhere)
- POST diagnostic functions

For more information, see the IBM Redbooks Product Guide for the IBM Flex System EN2092 1 Gb Ethernet Scalable Switch, at:
http://www.redbooks.ibm.com/abstracts/tips0861.html?Open

4.10.7 IBM Flex System FC5022 16 Gb SAN Scalable Switch


The IBM Flex System FC5022 16 Gb SAN Scalable Switch is a high-density, 48-port 16 Gbps Fibre Channel switch that is used in the Enterprise Chassis. The switch provides 28 internal ports to compute nodes by way of the midplane, and 20 external SFP+ ports. These storage area network (SAN) switch modules deliver an embedded option for IBM Flex System users who deploy SANs in their enterprise. They offer end-to-end 16 Gb and 8 Gb connectivity. The N_Port Virtualization mode streamlines the infrastructure by reducing the number of domains to manage. It allows you to add or move servers without impact to the SAN. Monitoring is simplified by using an integrated management appliance. Clients who use end-to-end Brocade SANs can take advantage of the Brocade management tools. Figure 4-47 shows the IBM Flex System FC5022 16 Gb SAN Scalable Switch.

Figure 4-47 IBM Flex System FC5022 16 Gb SAN Scalable Switch

Two versions are available, as listed in Table 4-17: a 12-port switch module and a 24-port switch with the Enterprise Switch Bundle (ESB) software. The port count can be applied to internal or external ports by using a feature called Dynamic Ports on Demand (DPOD).
Table 4-17 IBM Flex System FC5022 16 Gb SAN Scalable Switch part numbers

Part number   Feature codesa   Description                                                    Ports enabled
88Y6374       A1EH / 3770      IBM Flex System FC5022 16 Gb SAN Scalable Switch               12
90Y9356       A1EJ / 3771      IBM Flex System FC5022 24-port 16 Gb ESB SAN Scalable Switch   24

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.


Table 4-18 provides a feature comparison between the FC5022 switch models.
Table 4-18 Feature comparison by model

Feature                            FC5022 16 Gb ESB Switch   FC5022 16 Gb SAN Scalable
                                   (90Y9356)                 Switch (88Y6374)
Number of active ports             24                        12
Full fabric                        Included                  Included
Access Gateway                     Included                  Included
Advanced zoning                    Included                  Included
Enhanced Group Management          Included                  Included
ISL Trunking                       Included                  Not available
Adaptive Networking                Included                  Not available
Advanced Performance Monitoring    Included                  Not available
Fabric Watch                       Included                  Not available
Extended Fabrics                   Included                  Not available
Server Application Optimization    Included                  Not available

With DPOD, ports are licensed as they come online. With the FC5022 16 Gb SAN Scalable Switch, the first 12 ports that report (on a first-come, first-served basis) on boot-up are assigned licenses. These 12 ports can be any combination of external or internal Fibre Channel ports. After all licenses are assigned, you can manually move those licenses from one port to another. Because this process is dynamic, no defined ports are reserved except ports 0 and 29. The FC5022 16 Gb ESB Switch has the same behavior; the only difference is the number of ports.

The part number for the switch includes the following items:
- One IBM Flex System FC5022 16 Gb SAN Scalable Switch or IBM Flex System FC5022 24-port 16 Gb ESB SAN Scalable Switch
- Important Notices Flyer
- Warranty Flyer
- Documentation CD-ROM

The switch does not include a serial management cable. However, the IBM Flex System Management Serial Access Cable, 90Y9338, is supported and contains two cables: a mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable. Either cable can be used to connect to the switch locally for configuration tasks and firmware updates.
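The first-come, first-served behavior of DPOD can be sketched as a simple license pool. This is a minimal illustration of the allocation rule described above, not switch firmware; the port names are hypothetical, and the 12-license pool corresponds to the base FC5022 model:

```python
# Minimal sketch of Dynamic Ports on Demand (DPOD) license assignment:
# licenses go to the first ports that come online at boot, up to the
# size of the license pool (12 on the base FC5022 model).
def assign_dpod_licenses(ports_online_in_order, pool_size=12):
    """Return the ports that receive a license, in the order they reported."""
    return list(ports_online_in_order)[:pool_size]

# Any mix of internal and external FC ports can report in (names are illustrative).
boot_order = [f"int{n}" for n in range(1, 10)] + [f"ext{n}" for n in range(1, 6)]
licensed = assign_dpod_licenses(boot_order)
print(licensed)  # the first 12 ports to report; ext4 and ext5 miss out
```

After boot, an administrator can move a license from one port to another manually, which the static slice above deliberately does not model.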



Transceivers
The switch comes without SFP+ transceivers. They must be ordered separately to provide outside connectivity. Table 4-19 lists the supported SFP+ options.
Table 4-19 Supported SFP+ transceivers

Part number   Feature codea   Description
88Y6416       5084 / 5370     Brocade 8 Gb SFP+ SW Optical Transceiver
88Y6393       A22R / 5371     Brocade 16 Gb SFP+ Optical Transceiver

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Benefits
The switches offer the following key benefits:

Exceptional price/performance for growing SAN workloads
The FC5022 16 Gb SAN Scalable Switch delivers exceptional price/performance for growing SAN workloads. It achieves this through a combination of market-leading 1,600 MBps throughput per port and an affordable high-density form factor. The 48 FC ports produce an aggregate 768 Gbps full-duplex throughput, and any eight external ports can be trunked for 128 Gbps inter-switch links (ISLs). Because 16 Gbps port technology dramatically reduces the number of ports and associated optics and cabling required through 8/4 Gbps consolidation, the cost savings and simplification benefits are substantial.

Accelerating fabric deployment and serviceability with diagnostic ports
Diagnostic Ports (D_Ports) are a new port type supported by the FC5022 16 Gb SAN Scalable Switch. They enable administrators to quickly identify and isolate 16 Gbps optics, port, and cable problems, reducing fabric deployment and diagnostic times. If the optical media is found to be the source of the problem, it can be transparently replaced because 16 Gbps optics are hot-pluggable.

A building block for virtualized, private cloud storage
The FC5022 16 Gb SAN Scalable Switch supports multi-tenancy in cloud environments through VM-aware end-to-end visibility and monitoring, QoS, and fabric-based advanced zoning features. It enables secure distance extension to virtual private or hybrid clouds with dark fibre support, and provides in-flight encryption and data compression. Internal fault-tolerant and enterprise-class reliability, availability, and serviceability (RAS) features help minimize downtime to support mission-critical cloud environments.

Simplified and optimized interconnect with Brocade Access Gateway
The FC5022 16 Gb SAN Scalable Switch can be deployed as a full-fabric switch or as a Brocade Access Gateway. It simplifies fabric topologies and heterogeneous fabric connectivity. Access Gateway mode uses N_Port ID Virtualization (NPIV) switch standards to present physical and virtual servers directly to the core of SAN fabrics. This configuration makes the switch transparent to the SAN fabric, greatly reducing management of the network edge.

Maximizing investments
To help optimize technology investments, IBM offers a single point of serviceability backed by industry-renowned education, support, and training. In addition, the IBM 16/8 Gbps SAN Scalable Switch is in the IBM ServerProven program, enabling compatibility among various IBM and partner products. IBM recognizes that customers deserve the most innovative, expert integrated systems solutions.
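The headline throughput figures follow directly from the port count and per-port speed. As an illustrative cross-check of the numbers quoted above (not additional specification):

```python
# Arithmetic behind the FC5022 throughput claims quoted in the text above.
ports = 48
port_gbps = 16                              # 16 Gbps Fibre Channel per port
aggregate_gbps = ports * port_gbps          # quoted as 768 Gbps full-duplex aggregate

trunk_ports = 8
isl_trunk_gbps = trunk_ports * port_gbps    # eight trunked ports give a 128 Gbps ISL

print(f"Aggregate: {aggregate_gbps} Gbps, max ISL trunk: {isl_trunk_gbps} Gbps")
```

The 1,600 MBps per-port figure is simply the payload-level view of a 16 Gbps line rate after encoding overhead.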


Features and specifications


FC5022 16 Gb SAN Scalable Switches have the following features and specifications:

Internal ports
- 28 internal full-duplex 16 Gb FC ports (up to 14 internal ports can be activated with the Port-on-Demand feature; the remaining ports are reserved for future use)
- Internal ports operate as F_ports (fabric ports) in native mode or in access gateway mode
- Two internal full-duplex 1 GbE ports connect to the chassis management module

External ports
- Twenty external ports for 16 Gb SFP+ or 8 Gb SFP+ transceivers that support 4 Gb, 8 Gb, and 16 Gb port speeds. SFP+ modules are not included and must be purchased separately. Ports are activated with the Port-on-Demand feature.
- External ports can operate as F_ports, FL_ports (fabric loop ports), or E_ports (expansion ports) in native mode. They can operate as N_ports (node ports) in access gateway mode.
- One external 1 GbE port (1000BASE-T) with RJ-45 connector for switch configuration and management
- One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module

Other features
- Access gateway mode (N_Port ID Virtualization - NPIV) support
- Power-on self-test diagnostics and status reporting
- ISL Trunking (licensable) allows up to eight ports (at 16, 8, or 4 Gbps speeds) to be combined into a single, logical ISL with a speed of up to 128 Gbps (256 Gbps full duplex). This configuration allows for optimal bandwidth utilization, automatic path failover, and load balancing.
- Brocade Fabric OS delivers distributed intelligence throughout the network and enables a wide range of value-added applications, including Brocade Advanced Web Tools and Brocade Advanced Fabric Services (on certain models)
- Supports up to 768 Gbps I/O bandwidth
- 420 million frames switched per second, 0.7 microseconds latency
- 8,192 buffers for up to 3,750 km extended distance at 4 Gbps FC (Extended Fabrics license required)
- In-flight 64 Gbps Fibre Channel compression and decompression support on up to two external ports (no license required)
- In-flight 32 Gbps encryption and decryption on up to two external ports (no license required)
- 48 Virtual Channels per port
- Port mirroring to monitor ingress or egress traffic from any port within the switch
- Two I2C connections able to interface with redundant management modules
- Hot pluggable; up to four hot-pluggable switches per chassis
- Single fuse circuit
- Four temperature sensors
- Managed with Brocade Web Tools
- Supports a minimum of 128 domains in Native mode and Interoperability mode



- Nondisruptive code load in Native mode and Access Gateway mode
- 255 N_Port logins per physical port
- D_Port support on external ports
- Class 2 and Class 3 frames
- SNMP v1 and v3 support
- SSH v2 support
- Secure Sockets Layer (SSL) support
- NTP client support (NTP V3)
- FTP support for firmware upgrades
- SNMP/Management Information Base (MIB) monitoring functionality contained within the Ethernet Control MIB-II (RFC1213-MIB)
- End-to-end optics and link validation
- Sends switch events and syslogs to the CMM
- Traps identify cold start, warm start, link up/link down, and authentication failure events
- Support for IPv4 and IPv6 on the management ports

The FC5022 16 Gb SAN Scalable Switches come standard with the following software features:
- Brocade Full Fabric mode: Enables high-performance 16 Gb or 8 Gb fabric switching
- Brocade Access Gateway mode: Uses NPIV to connect to any fabric without adding switch domains, to reduce management complexity
- Dynamic Path Selection: Enables exchange-based load balancing across multiple Inter-Switch Links for superior performance
- Brocade Advanced Zoning: Segments a SAN into virtual private SANs to increase security and availability
- Brocade Enhanced Group Management: Enables centralized and simplified management of Brocade fabrics through IBM Network Advisor

Enterprise Switch Bundle software licenses


The IBM Flex System FC5022 24-port 16 Gb ESB SAN Scalable Switch includes a complete set of licensed features. These features maximize performance, ensure availability, and simplify management for the most demanding applications and expanding virtualization environments. This switch comes with 24 port licenses that can be applied to either internal or external links. This switch also includes the following ESB software licenses:

Brocade Extended Fabrics
Provides switched fabric connectivity over distances of up to 1,000 km.

Brocade ISL Trunking
Allows you to aggregate multiple physical links into one logical link for enhanced network performance and fault tolerance.


Brocade Advanced Performance Monitoring
Enables performance monitoring of networked storage resources. This license includes the TopTalkers feature.

Brocade Fabric Watch
Monitors mission-critical switch operations. Fabric Watch now includes the new Port Fencing capabilities.

Adaptive Networking
Provides a rich set of capabilities to the data center or virtual server environments. It ensures that high-priority connections obtain the bandwidth necessary for optimum performance, even in congested environments. It optimizes data traffic movement within the fabric by using Ingress Rate Limiting, Quality of Service, and Traffic Isolation Zones.

Server Application Optimization (SAO)
This license optimizes overall application performance for physical servers and virtual machines. When deployed with Brocade Fibre Channel host bus adapters (HBAs), SAO extends Brocade Virtual Channel technology from the fabric to the server infrastructure. This license delivers application-level, fine-grained QoS management to the HBAs and related server applications.

Supported Fibre Channel standards


The switches support the following Fibre Channel standards:
- FC-AL-2 INCITS 332: 1999
- FC-GS-5 ANSI INCITS 427 (includes FC-GS-4 ANSI INCITS 387: 2004)
- FC-IFR INCITS 1745-D, revision 1.03 (under development)
- FC-SW-4 INCITS 418: 2006 (includes FC-SW-3 INCITS 384: 2004)
- FC-VI INCITS 357: 2002
- FC-TAPE INCITS TR-24: 1999
- FC-DA INCITS TR-36: 2004 (includes FC-FLA INCITS TR-20: 1998 and FC-PLDA INCITS TR-19: 1998)
- FC-MI-2 ANSI/INCITS TR-39-2005
- FC-PI INCITS 352: 2002
- FC-PI-2 INCITS 404: 2005
- FC-PI-4 INCITS 1647-D, revision 7.1 (under development)
- FC-PI-5 INCITS 479: 2011
- FC-FS-2 ANSI/INCITS 424: 2006 (includes FC-FS INCITS 373: 2003)
- FC-LS INCITS 433: 2007
- FC-BB-3 INCITS 414: 2006 (includes FC-BB-2 INCITS 372: 2003)



- FC-SB-3 INCITS 374: 2003 (replaces FC-SB ANSI X3.271: 1996 and FC-SB-2 INCITS 374: 2001)
- RFC 2625 IP and ARP Over FC
- RFC 2837 Fabric Element MIB
- MIB-FA INCITS TR-32: 2003
- FCP-2 INCITS 350: 2003 (replaces FCP ANSI X3.269: 1996)
- SNIA Storage Management Initiative Specification (SMI-S) Version 1.2, which includes:
  - SMI-S Version 1.03 ISO standard IS24775-2006 (replaces ANSI INCITS 388: 2004)
  - SMI-S Version 1.1.0
  - SMI-S Version 1.2.0

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC5022 16 Gb SAN Scalable Switch, at:
http://www.redbooks.ibm.com/abstracts/tips0870.html?Open

4.10.8 IBM Flex System FC3171 8 Gb SAN Switch


The IBM Flex System FC3171 8 Gb SAN Switch is a full-fabric Fibre Channel switch module. It can be converted to a pass-through module when configured in transparent mode. Figure 4-48 shows the IBM Flex System FC3171 8 Gb SAN Switch.

Figure 4-48 IBM Flex System FC3171 8 Gb SAN Switch

The I/O module has 14 internal ports and 6 external ports. All ports are enabled on the switch; there are no port licensing requirements. Ordering information is listed in Table 4-20.
Table 4-20 FC3171 8 Gb SAN Switch ordering information

Part number   Feature codea   Product name
69Y1930       A0TD / 3595     IBM Flex System FC3171 8 Gb SAN Switch

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.


No SFP modules or cables are supplied as standard; those listed in Table 4-21 are supported.
Table 4-21 FC3171 8 Gb SAN Switch supported SFP modules and cables

Part number   Feature codesa   Description
44X1964       5075 / 3286      IBM 8 Gb SFP+ SW Optical Transceiver
39R6475       4804 / 3238      4 Gb SFP Transceiver Option

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

You can reconfigure the FC3171 8 Gb SAN Switch to become a pass-through module by using the switch GUI or CLI. The module can then be converted back to a full-function SAN switch at some future date. The switch requires a reset when turning transparent mode on or off.

The switch can be configured by using either the command line or QuickTools:
- Command line: Access the switch by using the console port through the Chassis Management Module or through the Ethernet port. This method requires a basic understanding of the CLI commands.
- QuickTools: Requires a current version of the Java runtime environment (JRE) on your workstation before pointing a web browser to the switch's IP address. The IP address of the switch must be configured. QuickTools does not require a license, and the code is included.

When in Full Fabric mode, this switch provides access to all of the Fibre Channel security features, including the additional services of SSL and SSH. In addition, RADIUS servers can be used for device and user authentication. After SSL/SSH is enabled, the security features are available to be configured. Configuring security features allows the SAN administrator to control which devices are allowed to log on to the Full Fabric switch module. This process is done by creating security sets with security groups, which are configured on a per-switch basis. The security features are not available in pass-through mode.

The FC3171 8 Gb SAN Switch has the following specifications and standards:

Fibre Channel standards:
- FC-PH version 4.3
- FC-PH-2
- FC-PH-3
- FC-AL version 4.5
- FC-AL-2 Rev 7.0
- FC-FLA
- FC-GS-3
- FC-FG
- FC-PLDA
- FC-Tape
- FC-VI
- FC-SW-2
- Fibre Channel Element MIB RFC 2837
- Fibre Alliance MIB version 4.0

Fibre Channel protocols: Fibre Channel service classes: Class 2 and class 3



- Operation modes: Fibre Channel Class 2 and Class 3, connectionless
- External port type: Full fabric mode: generic loop port; Transparent mode: transparent fabric port
- Internal port type: Full fabric mode: F_port; Transparent mode: transparent host port/NPIV mode; support for up to 44 host NPIV logins
- Port characteristics: External ports are automatically detected and self-configuring; port LEDs illuminate at startup
- Number of Fibre Channel ports: 6 external ports and 14 internal ports
- Scalability: Up to 239 switches maximum, depending on your configuration
- Buffer credits: 16 buffer credits per port
- Maximum frame size: 2148 bytes (2112-byte payload)
- Standards-based FC FC-SW2 interoperability
- Support for up to a 255:1 port-mapping ratio
- Media type: SFP+ module

2 Gb specifications:
- 2 Gb fabric port speed: 1.0625 or 2.125 Gbps (gigabits per second)
- 2 Gb fabric latency: Less than 0.4 msec
- 2 Gb fabric aggregate bandwidth: 80 Gbps at full duplex

4 Gb specifications:
- 4 Gb switch speed: 4.250 Gbps
- 4 Gb switch fabric point-to-point: 4 Gbps at full duplex
- 4 Gb switch fabric aggregate bandwidth: 160 Gbps at full duplex

8 Gb specifications:
- 8 Gb switch speed: 8.5 Gbps
- 8 Gb switch fabric point-to-point: 8 Gbps at full duplex
- 8 Gb switch fabric aggregate bandwidth: 320 Gbps at full duplex

Other:
- Nonblocking architecture to prevent latency
- System processor: IBM PowerPC

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC3171 8 Gb SAN Switch, at:
http://www.redbooks.ibm.com/abstracts/tips0866.html?Open
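The aggregate bandwidth figures for the FC3171 are consistent with its 20 ports (6 external plus 14 internal) when counted full duplex. The following is an illustrative arithmetic check of the numbers quoted above, not additional specification:

```python
# Cross-check of the FC3171 aggregate bandwidth figures quoted above.
# 6 external + 14 internal ports; aggregate counts full duplex (x2).
ports = 6 + 14
for speed_gbps, quoted_gbps in [(2, 80), (4, 160), (8, 320)]:
    aggregate = ports * speed_gbps * 2
    assert aggregate == quoted_gbps
    print(f"{speed_gbps} Gb fabric: {aggregate} Gbps aggregate at full duplex")
```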


4.10.9 IBM Flex System FC3171 8 Gb SAN Pass-thru


The IBM Flex System FC3171 8 Gb SAN Pass-thru I/O module is an 8 Gbps Fibre Channel Pass-thru SAN module. It has 14 internal ports and six external ports. It is shipped with all ports enabled. Figure 4-49 shows the IBM Flex System FC3171 8 Gb SAN Pass-thru module.

Figure 4-49 IBM Flex System FC3171 8 Gb SAN Pass-thru

Ordering information is listed in Table 4-22.


Table 4-22 FC3171 8 Gb SAN Pass-thru part number Part number 69Y1934 Feature codea A0TJ / 3591 Description IBM Flex System FC3171 8 Gb SAN Pass-thru

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Exception: If you need to enable full fabric capability later, do not purchase this module. Instead, purchase the FC3171 8 Gb SAN Switch. No SFPs are supplied with the module; they must be ordered separately. Supported transceivers and fiber optic cables are listed in Table 4-23.
Table 4-23 FC3171 8 Gb SAN Pass-thru supported modules and cables

Part number   Feature codea   Description
44X1964       5075 / 3286     IBM 8 Gb SFP+ SW Optical Transceiver
39R6475       4804 / 3238     4 Gb SFP Transceiver Option

The FC3171 8 Gb SAN Pass-thru can be configured by using either the command line or QuickTools:
- Command line: Access the module by using the console port through the Chassis Management Module or through the Ethernet port. This method requires a basic understanding of the CLI commands.
- QuickTools: Requires a current version of the JRE on your workstation before pointing a web browser to the module's IP address. The IP address of the module must be configured. QuickTools does not require a license, and the code is included.

The pass-through module supports the following standards:

Fibre Channel standards:
- FC-PH version 4.3
- FC-PH-2
- FC-PH-3


- FC-AL version 4.5
- FC-AL-2 Rev 7.0
- FC-FLA
- FC-GS-3
- FC-FG
- FC-PLDA
- FC-Tape
- FC-VI
- FC-SW-2
- Fibre Channel Element MIB RFC 2837
- Fibre Alliance MIB version 4.0

Fibre Channel protocols:
- Fibre Channel service classes: Class 2 and Class 3
- Operation modes: Fibre Channel Class 2 and Class 3, connectionless
- External port type: Transparent fabric port
- Internal port type: Transparent host port/NPIV mode; support for up to 44 host NPIV logins
- Port characteristics: External ports are automatically detected and self-configuring; port LEDs illuminate at startup
- Number of Fibre Channel ports: 6 external ports and 14 internal ports
- Scalability: Up to 239 switches maximum, depending on your configuration
- Buffer credits: 16 buffer credits per port
- Maximum frame size: 2148 bytes (2112-byte payload)
- Standards-based FC FC-SW2 interoperability
- Support for up to a 255:1 port-mapping ratio
- Media type: SFP+ module

- Fabric point-to-point bandwidth: 2 Gbps or 8 Gbps at full duplex

2 Gb specifications:
- 2 Gb fabric port speed: 1.0625 or 2.125 Gbps (gigabits per second)
- 2 Gb fabric latency: Less than 0.4 msec
- 2 Gb fabric aggregate bandwidth: 80 Gbps at full duplex

4 Gb specifications:
- 4 Gb switch speed: 4.250 Gbps
- 4 Gb switch fabric point-to-point: 4 Gbps at full duplex
- 4 Gb switch fabric aggregate bandwidth: 160 Gbps at full duplex

8 Gb specifications:
- 8 Gb switch speed: 8.5 Gbps
- 8 Gb switch fabric point-to-point: 8 Gbps at full duplex
- 8 Gb switch fabric aggregate bandwidth: 320 Gbps at full duplex

Other:
- System processor: PowerPC
- Maximum frame size: 2148 bytes (2112-byte payload)
- Nonblocking architecture to prevent latency

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC3171 8 Gb SAN Pass-thru, at:
http://www.redbooks.ibm.com/abstracts/tips0866.html?Open


4.10.10 IBM Flex System IB6131 InfiniBand Switch


The IBM Flex System IB6131 InfiniBand Switch is a 32-port InfiniBand switch. It has 18 FDR/QDR (56/40 Gbps) external ports and 14 FDR/QDR (56/40 Gbps) internal ports for connections to nodes. The switch ships standard with quad data rate (QDR) support and can be upgraded to fourteen data rate (FDR). Figure 4-50 shows the IBM Flex System IB6131 InfiniBand Switch.

Figure 4-50 IBM Flex System IB6131 InfiniBand Switch

Ordering information is listed in Table 4-24.


Table 4-24 IBM Flex System IB6131 InfiniBand Switch part number and upgrade option

Part number   Feature codesa   Product name
90Y3450       A1EK / 3699      IBM Flex System IB6131 InfiniBand Switch (18 external QDR ports, 14 internal QDR ports)
90Y3462       A1QX / ESW1      IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade): upgrades all ports to FDR speeds

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Running MLNX-OS, this switch has one external 1 Gb management port and a mini-USB serial port for software updates and debug use. These ports are in addition to the InfiniBand internal and external ports. The switch has 14 internal QDR links and 18 CX4 uplink ports, all of which are enabled. The switch can be upgraded to FDR speed (56 Gbps) by using the FoD process with part number 90Y3462, as listed in Table 4-24. No InfiniBand cables are shipped as standard with this switch; they must be purchased separately. Supported cables are listed in Table 4-25.
Table 4-25 IB6131 InfiniBand Switch supported cables

Part number   Feature codesa   Description
49Y9980       3866 / 3249      IB QDR 3m QSFP Cable Option (passive)
90Y3470       A227 / ECB1      3m FDR InfiniBand Cable (passive)

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The switch has the following specifications:
- IBTA 1.3 and 1.21 compliance
- Congestion control
- Adaptive routing
- Port mirroring
- Auto-negotiation of 10 Gbps, 20 Gbps, 40 Gbps, or 56 Gbps

- Mellanox QoS: 9 InfiniBand virtual lanes for all ports, eight data transport lanes, and one management lane
- High switching performance: Simultaneous wire-speed traffic from any port to any port
- Addressing: 48K unicast addresses maximum per subnet, 16K multicast addresses per subnet
- Switch throughput capability of 1.8 Tb/s

For more information, see the IBM Redbooks Product Guide for the IBM Flex System IB6131 InfiniBand Switch, at:
http://www.redbooks.ibm.com/abstracts/tips0871.html?Open
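The 1.8 Tb/s throughput figure matches the total port count multiplied by the FDR line rate. As an illustrative arithmetic check (not additional specification):

```python
# IB6131 switching capacity: 32 ports (18 external + 14 internal) at FDR speed.
ports = 18 + 14
fdr_gbps = 56
capacity_tbps = ports * fdr_gbps / 1000
print(f"{capacity_tbps:.3f} Tb/s")  # 1.792 Tb/s, quoted as roughly 1.8 Tb/s
```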

4.11 Infrastructure planning


This section addresses the key infrastructure planning areas of power, uninterruptible power supply (UPS), cooling, and console management that must be considered when you deploy the IBM Flex System Enterprise Chassis. This section contains these topics:
- 4.11.1, Supported power cords
- 4.11.2, Supported PDUs and UPS units
- 4.11.3, Power planning on page 120
- 4.11.4, UPS planning on page 124
- 4.11.5, Console planning on page 125
- 4.11.6, Cooling planning on page 126
- 4.11.7, Chassis-rack cabinet compatibility on page 127

For more information about planning your IBM Flex System power infrastructure, see the IBM Flex System Enterprise Chassis Power Requirements Guide at:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401

4.11.1 Supported power cords


The Enterprise Chassis supports the power cords listed in Table 4-26. One power cord, feature 6292, is shipped with each power supply option, or standard with the chassis (one per standard power supply).
Table 4-26 Supported power cords

Part number   Feature code   Description
00D7192       A2Y3           4.3 m, US/CAN, NEMA L15-30P (3P+Gnd) to 3X IEC 320 C19
00D7193       A2Y4           4.3 m, EMEA/AP, IEC 309 32A (3P+N+Gnd) to 3X IEC 320 C19
00D7194       A2Y5           4.3 m, A/NZ, (PDL/Clipsal) 32A (3P+N+Gnd) to 3X IEC 320 C19
39Y7916       6252           2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable
None          6292           2 m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
00D7195       6566           2.5 m, 15A/208V, C19 to NEMA 6-15P (US) power cord
00D7196       6537           1.8 m, 15A/208V, C19 to NEMA 6-15P (US) power cord
00D7197       A1NV           4.3 m, 15A/250V, C19 to NEMA 6-15P (US) power cord


4.11.2 Supported PDUs and UPS units


Table 4-27 lists the supported PDUs.
Table 4-27 Supported power distribution units

Part number   Description
39Y8923       DPI 60A 3-Phase C19 Enterprise PDU w/ IEC309 3P+G (208V) fixed power cords
39Y8938       30A/125V Front-end PDU with NEMA L5-30P connector
39Y8939       30A/250V Front-end PDU with NEMA L6-30P connector
39Y8940       60A/250V Front-end PDU with IEC 309 60A 2P+N+Gnd connector
39Y8948       DPI Single Phase C19 Enterprise PDU w/o power cords
46M4002       IBM 1U 9 C19/3 C13 Active Energy Manager DPI PDU
46M4003       IBM 1U 9 C19/3 C13 Active Energy Manager 60A 3-Phase PDU
46M4140       IBM 0U 12 C19/12 C13 50A 3-Phase PDU
46M4134       IBM 0U 12 C19/12 C13 Switched and Monitored 50A 3-Phase PDU
46M4167       IBM 1U 9 C19/3 C13 Switched and Monitored 30A 3-Phase PDU
71762MX       IBM Ultra Density Enterprise PDU C19 PDU+ (WW)
71762NX       IBM Ultra Density Enterprise PDU C19 PDU (WW)
71763MU       IBM Ultra Density Enterprise PDU C19 3-Phase 60A PDU+ (NA)
71763NU       IBM Ultra Density Enterprise PDU C19 3-Phase 60A PDU (NA)

Table 4-28 lists the supported UPS units.


Table 4-28 Supported uninterruptible power supply units

Part number   Description
21303RX       IBM UPS 7500XHV
21304RX       IBM UPS 10000XHV
53956AX       IBM 6000VA LCD 4U Rack UPS (200V/208V)
53959KX       IBM 11000VA LCD 5U Rack UPS (230V)

4.11.3 Power planning


The Enterprise Chassis accommodates a maximum of six power supplies, so consider how best to provide an optimized power source. Both N+N and N+1 configurations are supported for maximum flexibility in power redundancy. You can therefore balance a 3-phase power input into a single chassis or a group of chassis. Each power supply in the chassis has a 16A C20 3-pin socket and can be fed by a C19 power cable from a suitable supply.


The chassis power system is designed for efficiency using datacenter power that consists of 3-phase, 60A delta 200 VAC (North America) or 3-phase, 32A wye 380-415 VAC (international). The chassis can also be fed from single-phase 200-240 VAC supplies if required. Power is scaled as required: as additional nodes are added, the power and cooling draw increases accordingly. This section explains example single-phase and 3-phase configurations for North America and worldwide, starting with 3-phase.
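Balancing a 3-phase input across a chassis amounts to assigning each power supply to one phase and confirming that no phase exceeds the feed's per-phase rating. The sketch below illustrates that check; it is not an IBM tool, and the 13.85A per-PSU draw and 32A rating are taken from the examples later in this section.

```python
from collections import defaultdict

def phase_loads(assignments, psu_draw_amps):
    """Sum the current drawn on each phase, given a list of PSU-to-phase assignments."""
    loads = defaultdict(float)
    for phase in assignments:
        loads[phase] += psu_draw_amps
    return dict(loads)

# Three PSUs of one chassis, one assigned to each phase of a 32A wye feed
loads = phase_loads(["L1", "L2", "L3"], psu_draw_amps=13.85)
assert all(amps <= 32 for amps in loads.values())  # balanced and within the rating
```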

Power cabling: 32A at 380-415V 3-phase (International)


Figure 4-51 shows one 3-phase, 32A wye PDU (worldwide, WW) providing power feeds for two chassis. In this case, an appropriate 3-phase power cable is selected for the Ultra-Dense Enterprise PDU+. This cable then splits the phases, supplying one phase to each of the three power supplies within each chassis. One 3-phase 32A wye PDU can therefore power two fully populated chassis within a rack. If the chassis is configured N+N, a second PDU can be added for power redundancy from an alternative power source. The figure shows a typical N+N configuration given a 32A 3-phase wye supply at 380-415VAC (often termed WW or International).

[Figure 4-51 components: two chassis fed from 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDUs through IEC320 16A C19-C20 3 m power cables, with each PDU connected by a 40K9611 IBM DPI 32A cord (IEC 309 3P+N+G).]
Figure 4-51 Example power cabling 32A at 380-415V 3-phase: International

A maximum of four Enterprise Chassis can be installed within a 42U rack. A full rack therefore requires a total of four 32A 3-phase wye feeds to provide a fully redundant N+N configuration.


Power cabling: 60A at 208V 3-phase (North America)


In North America, the equivalent configuration requires four 60A 3-phase delta supplies at 200 - 208 VAC. A configuration optimized for 3-phase power is shown in Figure 4-52.
[Figure 4-52 components: two chassis fed from 46M4003 1U 9 C19/3 C13 switched and monitored DPI PDUs (each with a fixed IEC60309 3P+G 60A line cord) through IEC320 16A C19-C20 3 m power cables.]
Figure 4-52 Example of power cabling 60A at 208V 3-phase


Power cabling: 63A single phase (International)


Figure 4-53 shows an example of an international 63A single-phase supply feed. This example uses the switched and monitored PDU+ with an appropriate power cord. Each PSU can draw up to 13.85A from its supply, so a single chassis can easily be fed from a 63A single-phase supply, leaving 18.45A of available capacity. This capacity could feed a single power supply on a second chassis (13.85A), or it could be used by the PDU to supply further items in the rack, such as servers or storage devices.

[Figure 4-53 components: a chassis fed from a 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU, connected by a 40K9613 IBM DPI 63A cord (IEC 309 P+N+G).]

Figure 4-53 Single phase 63A supply


Power cabling: 60A 200VAC single phase (North America)


In North America, UL derating means that a 60A PDU supplies only 48A. At 200VAC, the power supplies in the Enterprise Chassis draw a maximum of 13.85A each. Therefore, a single-phase 60A supply can power a fully configured chassis, and a further 6.8A is available from the PDU to power additional items within the rack, such as servers or storage (Figure 4-54).
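The derating arithmetic can be checked in a few lines. The 80% continuous-load factor is the usual UL/NEC rule for branch circuits; the helper below is an illustrative sketch, not an IBM planning tool.

```python
def derated_amps(breaker_amps: float, factor: float = 0.8) -> float:
    """Continuous-load capacity of a branch circuit after UL/NEC derating."""
    return breaker_amps * factor

supply = derated_amps(60)          # 60A breaker -> 48A usable continuous load
chassis_draw = 3 * 13.85           # three PSUs on this feed, 13.85A each at 200VAC
assert chassis_draw <= supply      # the feed can power the chassis side of an N+N pair
```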

[Figure 4-54 components: a chassis fed from a 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU, connected by a 40K9615 IBM DPI 60A cord (IEC 309 2P+G) to a 200 VAC, 60A single-phase building supply (48A supplied by the PDU after UL derating).]

Figure 4-54 60A 200VAC single phase supply

For more information about planning your IBM Flex System power infrastructure, see the IBM Flex System Enterprise Chassis Power Requirements Guide at: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401

4.11.4 UPS planning


It is possible to power the Enterprise Chassis with a UPS, which provides protection in case of power failure or interruption. IBM does not offer a 3-phase UPS. However, the single-phase UPS units available from IBM can be used to supply power to a chassis at both 200VAC and 220VAC. If 3-phase power is required, an alternative is to use a third-party UPS product.


At international voltages, the 11000VA UPS is ideal for powering a fully loaded chassis. Figure 4-55 shows how each power feed can be connected to one of the four 20A outlets on the rear of the UPS. This UPS requires hard wiring to a suitable supply by a qualified electrician.

53959KX IBM UPS11000 5U


Figure 4-55 Two UPS11000 international single-phase (208-230VAC)

In North America the available UPS at 200-208VAC is the UPS6000. This UPS has two outlets that can be used to power two of the power supplies within the chassis. In a fully loaded chassis, the third pair of power supplies must be connected to another UPS. Figure 4-56 shows this UPS configuration.

53956AX IBM UPS6000 4U


Figure 4-56 Two UPS 6000 North American (200-208VAC)

For more information, see the at-a-glance guide for the IBM 11000VA LCD 5U Rack Uninterruptible Power Supply at: http://www.redbooks.ibm.com/abstracts/tips0814.html

4.11.5 Console planning


Although the Enterprise Chassis is a lights-out system and can be managed remotely with ease, there are other ways to access a node console:
- Each node can be connected to individually by physically plugging a console breakout cable into the front of the node. This cable presents a 15-pin video connector, two USB sockets, and a serial port. Connecting a portable screen and a USB keyboard and mouse near the front of the chassis enables quick connection to the console breakout cable and direct access to the node. This configuration is often called crash cart management capability.

- Connecting to the FSM management interface by browser allows remote presence to each node within the chassis.
- Connecting remotely to the Ethernet management port of the CMM by using a browser allows remote presence to each node within the chassis.
- You can also connect directly to each IMM2 on a node and start a remote console session to that node through the IMM2.

4.11.6 Cooling planning


The chassis is designed to operate in temperatures of up to 40°C (104°F), in ASHRAE class A3 operating environments. The airflow requirement for the Enterprise Chassis ranges from 270 CFM (cubic feet per minute) to a maximum of 1020 CFM. The Enterprise Chassis has these environmental specifications:
- Humidity, non-condensing: -12°C dew point (10.4°F) and 8% - 85% relative humidity
- Maximum dew point: 24°C (75°F)
- Maximum elevation: 3050 m (10,006 ft)
- Maximum rate of temperature change: 5°C/hr (9°F/hr)
- Heat output (approximate): maximum configuration, potentially 12.9 kW

The 12.9 kW figure is only a potential maximum, where the most power-hungry configuration is chosen and all power envelopes are at maximum. For a more realistic figure, use the IBM Power Configurator tool to establish the specific power requirements for a configuration. This tool can be found at:
http://www.ibm.com/systems/x/hardware/configtools.html

Datacenter operation at environmental temperatures above 35°C would generally be in a free air cooling environment. This is the expected definition of ASHRAE class A3 (and also of class A4, which raises the upper limit to 45°C). A conventional datacenter would not normally run its computer room air conditioning (CRAC) units up to 40°C: a failure of a CRAC unit, or of power to the CRAC units, leaves limited time for shutdown before over-temperature conditions occur. The IBM Flex System Enterprise Chassis is suitable for installation and use in an ASHRAE class A3 environment, in both operating and non-operating modes. Information about ASHRAE 2011 thermal guidelines, datacenter classes, and white papers can be found at the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) website at:
http://www.ashrae.org

The chassis can be installed within either IBM or non-IBM racks. However, the IBM 42U 1100 mm Enterprise V2 Dynamic Rack offers a footprint that, in North America, is a single floor tile wide and two tiles deep. More information about this sizing is detailed in 4.12, IBM 42U 1100 mm Enterprise V2 Dynamic Rack on page 128. If the chassis is installed within a non-IBM rack, the vertical rails must have clearances to EIA-310-D. There must be sufficient room in front of the vertical front rack-mounting rail to provide a minimum bezel clearance of 70 mm (2.76 inches) depth. The rack must be able to support the weight of the chassis, cables, power supplies, and other items installed within it. There must also be sufficient room behind the rear rack rails to provide for cable management and routing. Ensure the stability of any non-IBM rack by using stabilization feet or baying kits so that it does not become unstable when fully populated. Finally, ensure that sufficient airflow is available to the Enterprise Chassis. Racks with glass fronts do not normally allow sufficient airflow into the chassis.

4.11.7 Chassis-rack cabinet compatibility


IBM offers an extensive range of industry-standard, EIA-compatible rack enclosures and expansion units. The flexible rack solutions help you consolidate servers and save space, while allowing easy access to crucial components and cable management. Table 4-29 lists the IBM Flex System Enterprise Chassis supported in each rack cabinet.
Table 4-29 The chassis supported in each rack cabinet

Rack cabinet                           Part number   Feature code   Enterprise Chassis
IBM 11U Office Enablement Kit          201886X       2731           Yes
IBM S2 25U Static standard rack        93072PX       6690           Yes
IBM S2 25U Dynamic standard rack       93072RX       1042           Yes
IBM S2 42U standard rack               93074RX       1043           Yes
IBM S2 42U Dynamic standard rack       99564RX       5629           Yes
IBM 42U Enterprise rack                93084PX       5621           Yes
IBM 42U 1200 mm Deep Dynamic rack      93604PX       7649           Yes
IBM 42U 1200 mm Deep Static rack       93614PX       7651           Yes
IBM 47U 1200 mm Deep Static rack       93624PX       7653           Yes
IBM 42U 1100 mm Deep Dynamic rack (a)  93634PX       7953           Yes

a. This rack cabinet is optimized for the IBM Flex System Enterprise Chassis, including dedicated front-to-back cable raceways. For more information, see 4.12, IBM 42U 1100 mm Enterprise V2 Dynamic Rack on page 128.

The IBM Flex System Enterprise Chassis is not supported in the IBM Netfinity 42U racks 9306-900 and 9306-910, the Netfinity Enterprise racks 9308-42P and 9308-42X, or the NetBay 22U rack. These racks have glass-fronted doors that allow insufficient airflow for the IBM Flex System Enterprise Chassis, and in some cases the chassis depth is such that the chassis cannot be accommodated within the dimensions of the rack.


4.12 IBM 42U 1100 mm Enterprise V2 Dynamic Rack


The IBM 42U 1100 mm Enterprise V2 Dynamic Rack is an industry-standard 24-inch rack that supports the Enterprise Chassis, BladeCenter, System x servers, and options. It is available in either Primary or Expansion form. The expansion rack is designed for baying and has no side panels. It ships with a baying kit. After it is attached to the side of a primary rack, the side panel removed from the primary rack is attached to the side of the expansion rack. The available configurations are shown in Table 4-30.
Table 4-30 Rack options and part numbers

Model      Description                                            Details
9363-4PX   IBM 42U 1100 mm Enterprise V2 Dynamic Rack             Ships with side panels; stand-alone
9363-4EX   IBM 42U 1100 mm Enterprise V2 Dynamic Expansion Rack   Ships with no side panels; designed to attach to a primary rack

This 42U rack conforms to the EIA-310-D industry standard for a 24-inch, type A rack cabinet. The dimensions are listed in Table 4-31.

Table 4-31 Dimensions of IBM 42U 1100 mm Enterprise V2 Dynamic Rack, 9363-4PX

Dimension   Value
Height      2009 mm (79.1 in)
Width       600 mm (23.6 in)
Depth       1100 mm (43.3 in)
Weight      174 kg (384 lb), including outriggers

The rack features outriggers (stabilizers) allowing for movement while populated.


Figure 4-57 shows the 9363-4PX rack.

Figure 4-57 9363-4PX Rack (note tile width relative to rack)

Features of the IBM 42U 1100 mm Enterprise V2 Dynamic Rack are:
- A perforated front door allows for improved airflow.
- Square EIA rail mount points.
- Six side-wall compartments support 1U-high PDUs and switches without taking up valuable rack space.
- Cable management rings are included to assist in cable management.
- Easy-to-install and easy-to-remove side panels are a standard feature.
- The front door can be hinged on either side, providing flexibility to open in either direction.

- Front and rear doors and side panels include locks and keys to help secure servers.
- Heavy-duty casters and outriggers (stabilizers) come with the 42U Dynamic racks for added stability, allowing movement of the rack while loaded.
- Tool-less 0U PDU rear-channel mounting reduces installation time and increases accessibility.
- A 1U PDU can be mounted to present power outlets to the rear of the chassis in the side pocket openings.
- Removable top and bottom cable access panels in both front and rear.

Dynamic design
IBM is the only leading vendor with rack designs specifically engineered for shipping fully loaded. These kinds of racks are called dynamic racks. The IBM 42U 1100 mm Enterprise V2 Dynamic Rack and IBM 42U 1100 mm Enterprise V2 Dynamic Expansion Rack are dynamic racks. A dynamic rack has extra heavy-duty construction and sturdy packaging that can be reused for shipping a fully loaded rack. Dynamic racks also have outrigger casters for secure movement and tilt stability, and include a heavy-duty shipping pallet with a ramp for easy on and off maneuvering. Dynamic racks undergo additional shock and vibration testing, and all IBM racks are of welded rather than bolted construction.


Figure 4-58 shows the rear view of the 42U 1100 mm Flex System Dynamic Rack.

(Callouts: mountings for IBM 0U PDUs, cable raceway, outriggers)
Figure 4-58 42U 1100 mm Flex System Dynamic Rack rear view, with doors and sides panels removed

The IBM 42U 1100 mm Enterprise V2 Dynamic Rack also provides additional space for front cable management and front-to-back cable raceways. There are four cable raceways on each rack, two on each side. The raceways allow cables to be routed from the front of the rack, through the raceway, and out to the rear of the rack. The raceways also have openings into the side bays of the rack to allow connection into those bays.


Figure 4-59 shows the cable raceways.

Figure 4-59 Cable raceway (as viewed from rear of rack)

The 1U rack PDUs can also be accommodated in the side bays. In these bays, the PDU is mounted vertically in the rear of the side bay and presents its outlets to the rear of the rack. Four 0U PDUs can also be vertically mounted in the rear of the rack.


The rack width is 600 mm, the standard width of a floor tile in many locations, to complement current raised-floor datacenter designs. The dimensions of the rack base are shown in Figure 4-60.

(Dimension callouts: 600 mm overall width and 1100 mm overall depth, with 46 mm, 199 mm, 65 mm, and 458 mm detail dimensions; the front of the rack is marked.)
Figure 4-60 Rack dimensions

The rack has square mounting holes common in the industry, onto which the Enterprise Chassis and other server and storage products can be mounted.


For implementations where the front anti-tip plate is not required, an air baffle (air recirculation prevention plate) is supplied with the rack. You might not want to use the anti-tip plate when an airflow tile must be positioned directly in front of the rack. This air baffle, shown in Figure 4-61, can be installed at the lower front of the rack. It helps prevent warm air from the rear of the rack from circulating underneath the rack to the front, improving the cooling efficiency of the entire rack solution.

Recirculation prevention plate

Figure 4-61 Recirculation prevention plate

4.13 IBM Rear Door Heat eXchanger V2 Type 1756


The IBM Rear Door Heat eXchanger V2 is designed to attach to the rear of these racks:
- IBM 42U 1100 mm Enterprise V2 Dynamic Rack
- IBM 42U 1100 mm Enterprise V2 Dynamic Expansion Rack

It provides effective cooling for the warm air exhaust of equipment mounted within the rack. The heat exchanger has no moving parts to fail, and no power is required. The rear door heat exchanger can be used to improve cooling and reduce cooling costs in a high-density HPC Enterprise Chassis environment.


The physical design of the door is slightly different from that of the existing Rear Door Heat eXchanger (32R0712) marketed by IBM System x. This door has a wider rear aperture, as shown in Figure 4-62, and is designed for attachment specifically to the rear of either an IBM 42U 1100 mm Enterprise V2 Dynamic Rack or an IBM 42U 1100 mm Enterprise V2 Dynamic Expansion Rack.

Figure 4-62 Rear Door Heat Exchanger

Attaching a rear door heat exchanger to the rear of a rack allows up to 100,000 BTU/hr, or 30 kW, of heat to be removed at the rack level. As the warm air passes through the heat exchanger, it is cooled with water and exits the rear of the rack cabinet into the datacenter. The door is designed to provide an overall air temperature drop of up to 25°C, measured between the air that enters the exchanger and the air that exits it.


Figure 4-63 shows the internal workings of the IBM Rear Door Heat eXchanger V2.

Figure 4-63 IBM Rear Door Heat eXchanger V2

The supply inlet hose provides an inlet for chilled, conditioned water. A return hose delivers warmed water back to the water pump or chiller in the cooling loop. The water must meet the supply requirements for secondary loops.


Figure 4-64 shows the percentage of heat removed from a 30 kW heat load as a function of water temperature and water flow rate. With 18°C water at 10 gallons per minute (gpm), 90% of the 30 kW heat load is removed by the door.
[Chart: percentage heat removal versus water flow rate (4 - 14 gpm) for water temperatures from 12°C to 24°C, given a rack power of 30,000 W, rack inlet air of 27°C, and airflow of 2500 cfm.]
Figure 4-64 Heat removal by Rear Door Heat eXchanger V2 at 30 KW of heat

For efficient cooling, water pressure and water temperature must be delivered in accordance with the specifications listed in Table 4-32. The temperature must be maintained above the dew point to prevent condensation from forming.
Table 4-32 1756 RDHX specifications

Rear Door Heat eXchanger V2   Specification
Depth                         129 mm (5.0 in)
Width                         600 mm (23.6 in)
Height                        1950 mm (76.8 in)
Empty weight                  39 kg (85 lb)
Filled weight                 48 kg (105 lb)
Temperature drop              Up to 25°C (45°F) between air exiting and entering the RDHX
Water temperature             Above dew point: 18°C ±1°C (64.4°F ±1.8°F) for an ASHRAE Class 1 environment; 22°C ±1°C (71.6°F ±1.8°F) for an ASHRAE Class 2 environment
Required water flow rate      Minimum: 22.7 liters (6 gallons) per minute; maximum: 56.8 liters (15 gallons) per minute (as measured at the supply entrance to the heat exchanger)
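Whether a given water temperature stays above the dew point depends on room temperature and relative humidity. A common way to estimate the dew point is the Magnus approximation; the sketch below uses it to check the 18°C ASHRAE Class 1 water setting against example room conditions. This is an illustrative calculation, not part of IBM's planning tools.

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float,
                a: float = 17.62, b: float = 243.12) -> float:
    """Approximate dew point in deg C using the Magnus formula."""
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Example: 27 deg C room air at 50% relative humidity
dp = dew_point_c(27, 50)   # about 15.7 deg C
assert 18.0 > dp           # 18 deg C supply water stays above the dew point here
```

At higher humidity the margin shrinks; rerunning the check with the actual room conditions shows whether the Class 1 setting is safe.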


The installation and planning guides provide lists of suppliers that can provide coolant distribution unit solutions, flexible hose assemblies, and water treatment that meet the suggested water quality requirements. It takes three people to install the rear door heat exchanger, and a non-conductive step ladder is required for attachment of the upper hinge assembly. Consult the planning and implementation guides before proceeding. The installation and planning guides can be found at:
http://www.ibm.com/support/entry/portal/


Chapter 5. Compute nodes

This chapter describes the IBM Flex System servers, or compute nodes. The applications installed on the compute nodes can run natively on a dedicated physical server, or they can be virtualized in a virtual machine managed by a hypervisor layer. The IBM Flex System portfolio of compute nodes includes both Intel Xeon processors and IBM POWER7 processors. Depending on the compute node design, nodes come in one of these form factors:
- Half-wide node: Occupies one chassis bay, half the width of the chassis (approximately 215 mm or 8.5 in). An example is the IBM Flex System x240 Compute Node.
- Full-wide node: Occupies two chassis bays side-by-side, the full width of the chassis (approximately 435 mm or 17 in). An example is the IBM Flex System p460 Compute Node.

This chapter includes the following sections:
- 5.1, IBM Flex System Manager on page 140
- 5.2, IBM Flex System x240 Compute Node on page 140
- 5.3, IBM Flex System x220 Compute Node on page 177
- 5.4, IBM Flex System p260 and p24L Compute Nodes on page 198
- 5.5, IBM Flex System p460 Compute Node on page 216
- 5.6, I/O adapters on page 234

Copyright IBM Corp. 2012. All rights reserved.


5.1 IBM Flex System Manager


The IBM Flex System Manager (FSM) is a high-performance, scalable systems management appliance based on the IBM Flex System x240 Compute Node. The FSM hardware comes preinstalled with systems management software that enables you to configure, monitor, and manage IBM Flex System resources in up to four chassis. For more information about the hardware and software of the FSM, see 3.5, IBM Flex System Manager on page 46.

5.2 IBM Flex System x240 Compute Node


The IBM Flex System x240 Compute Node, available as machine type 8737 with a three-year warranty, is a half-wide, two-socket server. It runs the Intel Xeon processor E5-2600 family (formerly code-named Sandy Bridge-EP). It is ideal for infrastructure, virtualization, and enterprise business applications, and is compatible with the IBM Flex System Enterprise Chassis.

5.2.1 Introduction
The x240 supports the following equipment:
- Up to two Intel Xeon E5-2600 series multi-core processors
- 24 dual inline memory module (DIMM) sockets
- Two hot-swap drives
- Two PCI Express I/O adapters
- Two optional internal USB connectors

Figure 5-1 shows the x240.

Figure 5-1 The x240 type 8737


Figure 5-2 shows the location of the controls, LEDs, and connectors on the front of the x240.
(Callouts: hard disk drive activity LED, hard disk drive status LED, USB port, NMI control, console breakout cable port, power button/LED, LED panel)
Figure 5-2 The front of the x240 showing the location of the controls, LEDs, and connectors

Figure 5-3 shows the internal layout and major components of the x240.

(Components: cover, heat sink, microprocessor heat sink filler, I/O expansion adapter, air baffles, microprocessor, hot-swap storage backplane, hot-swap storage cage, hot-swap storage drive, storage drive filler, DIMMs)
Figure 5-3 Exploded view of the x240, showing the major components


Table 5-1 lists the features of the x240.


Table 5-1 Features of the x240 type 8737

Form factor: Half-wide compute node.
Chassis support: IBM Flex System Enterprise Chassis.
Processor: Up to two Intel Xeon processor E5-2600 product family processors. These processors can be eight-core (up to 2.9 GHz), six-core (up to 2.9 GHz), quad-core (up to 3.3 GHz), or dual-core (up to 3.0 GHz). Two QPI links up to 8.0 GT/s each. Up to 1600 MHz memory speed. Up to 20 MB L3 cache.
Chipset: Intel C600 series.
Memory: Up to 24 DIMM sockets (12 DIMMs per processor) using Low Profile (LP) DDR3 DIMMs. RDIMMs, UDIMMs, and LRDIMMs supported. 1.5V and low-voltage 1.35V DIMMs supported. Support for up to 1600 MHz memory speed, depending on the processor. Four memory channels per processor, with three DIMMs per channel.
Memory maximums: With LRDIMMs: up to 768 GB with 24x 32 GB LRDIMMs and two processors. With RDIMMs: up to 384 GB with 24x 16 GB RDIMMs and two processors. With UDIMMs: up to 64 GB with 16x 4 GB UDIMMs and two processors.
Memory protection: ECC, optional memory mirroring, and memory rank sparing.
Disk drive bays: Two 2.5-inch hot-swap SAS/SATA drive bays that support SAS, SATA, and SSD drives. Optional eXFlash support for up to eight 1.8-inch SSDs.
Maximum internal storage: With two 2.5-inch hot-swap drives: up to 2 TB with 1 TB NL SAS HDDs, up to 1.8 TB with 900 GB SAS HDDs, up to 2 TB with 1 TB SATA HDDs, or up to 512 GB with 256 GB SATA SSDs. An intermix of SAS and SATA HDDs and SSDs is supported. With eXFlash 1.8-inch SSDs and the ServeRAID M5115 RAID adapter: up to 1.6 TB with eight 200 GB SSDs.
RAID support: RAID 0, 1, 1E, and 10 with the integrated LSI SAS2004 controller. Optional ServeRAID M5115 RAID controller with RAID 0, 1, 10, 5, or 50 support and 1 GB cache; supports up to eight 1.8-inch SSDs with expansion kits. Optional flash backup for cache, RAID 6/60, and SSD performance enabler.
Network interfaces: x2x models: two 10 Gb Ethernet ports with Embedded 10 Gb Virtual Fabric Ethernet LAN on motherboard (LOM) controller, Emulex BladeEngine 3 based. x1x models: none standard; optional 1 Gb or 10 Gb Ethernet adapters.
PCI expansion slots: Two I/O connectors for adapters. PCI Express 3.0 x16 interface.
Ports: One external USB port; two internal USB ports for embedded hypervisor with optional USB Enablement Kit. Console breakout cable port that provides local keyboard, video, mouse (KVM) and serial ports (cable standard with chassis; additional cables optional).
Systems management: UEFI, IBM Integrated Management Module II (IMM2) with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, remote presence. Support for IBM Flex System Manager, IBM Systems Director, IBM Active Energy Manager, and IBM ServerGuide.
Security features: Power-on password, administrator's password, Trusted Platform Module 1.2.
Video: Matrox G200eR2 video core with 16 MB video memory integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16M colors.
Limited warranty: 3-year customer-replaceable unit and on-site limited warranty with 9x5/NBD response.
Operating systems supported: Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, VMware vSphere. For more information, see 5.2.13, Operating system support on page 176.
Service and support: Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width 215 mm (8.5 in), height 51 mm (2.0 in), depth 493 mm (19.4 in).
Weight: Maximum configuration: 6.98 kg (15.4 lb).

Figure 5-4 shows the components on the system board of the x240.
(Callouts: hot-swap drive bay backplane; processor 2 and its 12 memory DIMMs; I/O connector 1; fabric connector; light path diagnostics; processor 1 and its 12 memory DIMMs; I/O connector 2; expansion connector)
Figure 5-4 Layout of the x240 system board


5.2.2 Models
The current x240 models are shown in Table 5-2. All models include 8 GB of memory (2x 4 GB DIMMs) running at either 1600 MHz or 1333 MHz (depending on model).
Table 5-2 Models of the x240 type 8737

Model (a)   Intel processor (model, cores, core speed, L3 cache, memory speed, TDP; two maximum)   Standard memory (b)   Drive bays   I/O slots (c)   10 GbE embedded (d)
8737-A1x    1x Xeon E5-2630L 6C 2.0 GHz 15 MB 1333 MHz 60 W    2x 4 GB   Two (open)   2   No
8737-D2x    1x Xeon E5-2609 4C 2.40 GHz 10 MB 1066 MHz 80 W    2x 4 GB   Two (open)   1   Yes
8737-F2x    1x Xeon E5-2620 6C 2.0 GHz 15 MB 1333 MHz 95 W     2x 4 GB   Two (open)   1   Yes
8737-G2x    1x Xeon E5-2630 6C 2.3 GHz 15 MB 1333 MHz 95 W     2x 4 GB   Two (open)   1   Yes
8737-H1x    1x Xeon E5-2640 6C 2.5 GHz 15 MB 1333 MHz 95 W     2x 4 GB   Two (open)   2   No
8737-H2x    1x Xeon E5-2640 6C 2.5 GHz 15 MB 1333 MHz 95 W     2x 4 GB   Two (open)   1   Yes
8737-J1x    1x Xeon E5-2670 8C 2.6 GHz 20 MB 1600 MHz 115 W    2x 4 GB   Two (open)   2   No
8737-L2x    1x Xeon E5-2660 8C 2.2 GHz 20 MB 1600 MHz 95 W     2x 4 GB   Two (open)   1   Yes
8737-M1x    1x Xeon E5-2680 8C 2.7 GHz 20 MB 1600 MHz 130 W    2x 4 GB   Two (open)   2   No
8737-M2x    1x Xeon E5-2680 8C 2.7 GHz 20 MB 1600 MHz 130 W    2x 4 GB   Two (open)   1   Yes
8737-N2x    1x Xeon E5-2643 4C 3.3 GHz 10 MB 1600 MHz 130 W    2x 4 GB   Two (open)   1   Yes
8737-Q2x    1x Xeon E5-2667 6C 2.9 GHz 15 MB 1600 MHz 130 W    2x 4 GB   Two (open)   1   Yes
8737-R2x    1x Xeon E5-2690 8C 2.9 GHz 20 MB 1600 MHz 135 W    2x 4 GB   Two (open)   1   Yes

a. Model numbers provided are worldwide generally available variant (GAV) model numbers that are not orderable as listed. They must be modified by country. The US GAV model numbers use the following nomenclature: xxU. For example, the US orderable part number for 8737-A2x is 8737-A2U. See the product-specific official IBM announcement letter for other country-specific GAV model numbers.
b. Maximum system memory capacity of 768 GB is achieved by using 24x 32 GB DIMMs.
c. Some models include an Embedded 10 Gb Virtual Fabric Ethernet LOM controller as standard. This embedded controller precludes the use of an I/O adapter in I/O connector 1, as shown in Figure 5-4 on page 143. For more information, see 5.2.10, Embedded 10 Gb Virtual Fabric Adapter on page 170.
d. Model numbers in the form x2x (for example, 8737-L2x) include an Embedded 10 Gb Virtual Fabric Ethernet LOM controller as standard. Model numbers in the form x1x (for example, 8737-A1x) do not include this embedded controller.

5.2.3 Chassis support


The x240 type 8737 is supported in the IBM Flex System Enterprise Chassis as listed in Table 5-3.
Table 5-3 x240 chassis support

Server   BladeCenter chassis (all)   IBM Flex System Enterprise Chassis
x240     No                          Yes

144

IBM PureFlex System and IBM Flex System Products and Technology

The x240 is a half wide compute node and requires that the chassis shelf is installed in the IBM Flex System Enterprise Chassis. Figure 5-5 shows the chassis shelf in the chassis.

Figure 5-5 The IBM Flex System Enterprise Chassis showing the chassis shelf

The shelf is required for half-wide compute nodes. To install full-wide or larger compute nodes, the shelves must be removed from the chassis. To remove a shelf, slide the two latches on the shelf towards the center, and then slide the shelf out of the chassis.

5.2.4 System architecture


The IBM Flex System x240 Compute Node type 8737 features the Intel Xeon E5-2600 series processors. The Xeon E5-2600 series processor has models with two, four, six, and eight cores per processor with up to 16 threads per socket. The processors have the following features:

- Up to 20 MB of shared L3 cache
- Hyper-Threading
- Turbo Boost Technology 2.0 (depending on processor model)
- Two QuickPath Interconnect (QPI) links that run at up to 8 GT/s
- One integrated memory controller
- Four memory channels that support up to three DIMMs each

The Xeon E5-2600 series processor implements the second generation of Intel Core microarchitecture (Sandy Bridge) by using a 32nm manufacturing process. It requires a new socket type, the LGA-2011, which has 2011 pins that touch contact points on the underside of the processor. The architecture also includes the Intel C600 (Patsburg B) Platform Controller Hub (PCH).


Figure 5-6 shows the system architecture of the x240 system.

Figure 5-6 IBM Flex System x240 Compute Node system board block diagram

The IBM Flex System x240 Compute Node has the following system architecture features as standard:

- Two 2011-pin type R (LGA-2011) processor sockets
- An Intel C600 PCH
- Four memory channels per socket
- Up to three DIMMs per memory channel
- 24 DDR3 DIMM sockets
- Support for UDIMMs, RDIMMs, and new LRDIMMs
- One integrated 10 Gb Virtual Fabric Ethernet controller (10 GbE LOM in the block diagram)
- One LSI 2004 SAS controller
- Integrated hardware RAID 0 and 1
- One Integrated Management Module II
- Two PCIe x16 Gen3 I/O adapter connectors
- Two Trusted Platform Module (TPM) 1.2 controllers
- One internal USB connector


The new architecture allows the sharing of data on-chip through a high-speed ring interconnect between all processor cores, the last level cache (LLC), and the system agent. The system agent houses the memory controller and a PCI Express root complex that provides 40 PCIe 3.0 lanes. This ring interconnect and LLC architecture is shown in Figure 5-7.

Figure 5-7 Intel Xeon E5-2600 basic architecture

The two Xeon E5-2600 series processors in the x240 are connected through two QuickPath Interconnect (QPI) links. Each QPI link is capable of up to eight giga-transfers per second (GT/s) depending on the processor model installed. Table 5-4 shows the QPI bandwidth of the Intel Xeon E5-2600 series processors.
Table 5-4 QuickPath Interconnect bandwidth

Intel Xeon E5-2600 series processor   QPI speed (GT/s)   QPI bandwidth (GB/s), both directions combined
Advanced                              8.0                32.0
Standard                              7.25               29.0
Basic                                 6.4                25.6
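The bandwidth figures in Table 5-4 follow directly from the transfer rate: a QPI link carries 2 bytes per transfer in each direction, so the combined bandwidth is GT/s x 2 bytes x 2 directions. A minimal sketch (the function name is illustrative, not part of any IBM or Intel tool):

```python
# Combined (bidirectional) QPI link bandwidth from the transfer rate:
# 2 bytes per transfer per direction, 2 directions per link.
def qpi_bandwidth_gbs(transfer_rate_gts):
    return transfer_rate_gts * 2 * 2

print(qpi_bandwidth_gbs(8.0))   # Advanced models: 32.0 GB/s
print(qpi_bandwidth_gbs(7.25))  # Standard models: 29.0 GB/s
print(qpi_bandwidth_gbs(6.4))   # Basic models: 25.6 GB/s
```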

5.2.5 Processor
The Intel Xeon E5-2600 series is available with up to eight cores and 20 MB of last-level cache. It features an enhanced instruction set called Intel Advanced Vector Extensions (AVX). This set doubles the operand size for vector instructions (such as floating-point) to 256 bits and boosts selected applications by up to a factor of two. The new architecture also introduces Intel Turbo Boost Technology 2.0 and improved power management capabilities. Turbo Boost automatically turns off unused processor cores and increases the clock speed of the cores in use if thermal requirements are still met. Turbo Boost Technology 2.0 takes advantage of the new integrated design. It also implements a more granular overclocking in 100 MHz steps instead of 133 MHz steps on former Nehalem-based and Westmere-based microprocessors.


As listed in Table 5-2 on page 144, standard models come with one processor that is installed in processor socket 1. In a two-processor system, both processors communicate with each other through two QPI links. I/O is served through 40 PCIe 3.0 lanes per processor and through a x4 Direct Media Interface (DMI) link to the Intel C600 PCH. Processor 1 has direct access to 12 DIMM slots. By adding the second processor, you enable access to the remaining 12 DIMM slots. The second processor also enables access to the sidecar connector, which enables the use of mezzanine expansion units. Table 5-5 shows a comparison between the features of the Intel Xeon 5600 series processor and the new Intel Xeon E5-2600 series processor that is installed in the x240.
Table 5-5 Comparison of Xeon 5600 series and Xeon E5-2600 series processor features

Specification                Xeon 5600                                            Xeon E5-2600
Cores                        Up to six cores / 12 threads                         Up to eight cores / 16 threads
Physical addressing          40-bit (Uncorea limited)                             46-bit (Core and Uncorea)
Cache size                   12 MB                                                Up to 20 MB
Memory channels per socket   3                                                    4
Max memory speed             1333 MHz                                             1600 MHz
Virtualization technology    Real Mode support and transition latency reduction   Adds Large VT pages
New instructions             AES-NI                                               Adds AVX
QPI frequency                6.4 GT/s                                             8.0 GT/s
Inter-socket QPI links       1                                                    2
PCI Express                  36 lanes PCIe on chipset                             40 lanes/socket, integrated PCIe

a. Uncore is the Intel term for the parts of a processor that are not the core

Table 5-6 lists the features for the different Intel Xeon E5-2600 series processor types.
Table 5-6 Intel Xeon E5-2600 series processor features

Processor model   Frequency   Turbo   HT    L3 cache   Cores   Power TDP   QPI link speeda   Max DDR3 speed
Advanced
Xeon E5-2650      2.0 GHz     Yes     Yes   20 MB      8       95 W        8 GT/s            1600 MHz
Xeon E5-2658      2.1 GHz     Yes     Yes   20 MB      8       95 W        8 GT/s            1600 MHz
Xeon E5-2660      2.2 GHz     Yes     Yes   20 MB      8       95 W        8 GT/s            1600 MHz
Xeon E5-2665      2.4 GHz     Yes     Yes   20 MB      8       115 W       8 GT/s            1600 MHz
Xeon E5-2670      2.6 GHz     Yes     Yes   20 MB      8       115 W       8 GT/s            1600 MHz
Xeon E5-2680      2.7 GHz     Yes     Yes   20 MB      8       130 W       8 GT/s            1600 MHz
Xeon E5-2690      2.9 GHz     Yes     Yes   20 MB      8       135 W       8 GT/s            1600 MHz
Standard
Xeon E5-2620      2.0 GHz     Yes     Yes   15 MB      6       95 W        7.2 GT/s          1333 MHz
Xeon E5-2630      2.3 GHz     Yes     Yes   15 MB      6       95 W        7.2 GT/s          1333 MHz
Xeon E5-2640      2.5 GHz     Yes     Yes   15 MB      6       95 W        7.2 GT/s          1333 MHz
Basic
Xeon E5-2603      1.8 GHz     No      No    10 MB      4       80 W        6.4 GT/s          1066 MHz
Xeon E5-2609      2.4 GHz     No      No    10 MB      4       80 W        6.4 GT/s          1066 MHz
Low power
Xeon E5-2650L     1.8 GHz     Yes     Yes   20 MB      8       70 W        8 GT/s            1600 MHz
Xeon E5-2648L     1.8 GHz     Yes     Yes   20 MB      8       70 W        8 GT/s            1600 MHz
Xeon E5-2630L     2.0 GHz     Yes     Yes   15 MB      6       60 W        7.2 GT/s          1333 MHz
Special purpose
Xeon E5-2667      2.9 GHz     Yes     Yes   15 MB      6       130 W       8 GT/s            1600 MHz
Xeon E5-2643      3.3 GHz     No      No    10 MB      4       130 W       6.4 GT/s          1600 MHz
Xeon E5-2637      3.0 GHz     No      No    5 MB       2       80 W        8 GT/s            1600 MHz

a. GT/s = giga-transfers per second.

Table 5-7 lists the processor options for the x240.


Table 5-7 Processors for the x240 type 8737

Part number   Feature   Description                                                           Where used
81Y5180       A1CQ      Intel Xeon Processor E5-2603 4C 1.8 GHz 10 MB Cache 1066 MHz 80 W     -
81Y5182       A1CS      Intel Xeon Processor E5-2609 4C 2.40 GHz 10 MB Cache 1066 MHz 80 W    D2x
81Y5183       A1CT      Intel Xeon Processor E5-2620 6C 2.0 GHz 15 MB Cache 1333 MHz 95 W     F2x
81Y5184       A1CU      Intel Xeon Processor E5-2630 6C 2.3 GHz 15 MB Cache 1333 MHz 95 W     G2x
81Y5206       A1ER      Intel Xeon Processor E5-2630L 6C 2.0 GHz 15 MB Cache 1333 MHz 60 W    A1x
49Y8125       A2EP      Intel Xeon Processor E5-2637 2C 3.0 GHz 5 MB Cache 1600 MHz 80 W      -
81Y5185       A1CV      Intel Xeon Processor E5-2640 6C 2.5 GHz 15 MB Cache 1333 MHz 95 W     H1x, H2x
81Y5190       A1CY      Intel Xeon Processor E5-2643 4C 3.3 GHz 10 MB Cache 1600 MHz 130 W    N2x
95Y4670       A31A      Intel Xeon Processor E5-2648L 8C 1.8 GHz 20 MB Cache 1600 MHz 70 W    -
81Y5186       A1CW      Intel Xeon Processor E5-2650 8C 2.0 GHz 20 MB Cache 1600 MHz 95 W     -
81Y5179       A1ES      Intel Xeon Processor E5-2650L 8C 1.8 GHz 20 MB Cache 1600 MHz 70 W    -
95Y4675       A319      Intel Xeon Processor E5-2658 8C 2.1 GHz 20 MB Cache 1600 MHz 95 W     -
81Y5187       A1CX      Intel Xeon Processor E5-2660 8C 2.2 GHz 20 MB Cache 1600 MHz 95 W     L2x
49Y8144       A2ET      Intel Xeon Processor E5-2665 8C 2.4 GHz 20 MB Cache 1600 MHz 115 W    -
81Y5189       A1CZ      Intel Xeon Processor E5-2667 6C 2.9 GHz 15 MB Cache 1600 MHz 130 W    Q2x
81Y9418       A1SX      Intel Xeon Processor E5-2670 8C 2.6 GHz 20 MB Cache 1600 MHz 115 W    J1x
81Y5188       A1D9      Intel Xeon Processor E5-2680 8C 2.7 GHz 20 MB Cache 1600 MHz 130 W    M1x, M2x
49Y8116       A2ER      Intel Xeon Processor E5-2690 8C 2.9 GHz 20 MB Cache 1600 MHz 135 W    R2x

For more information about the Intel Xeon E5-2600 series processors, see: http://www.intel.com/content/www/us/en/processors/xeon/xeon-processor-5000-sequence.html

5.2.6 Memory
This section has the following topics:

- Memory subsystem overview
- Memory types on page 153
- Memory options on page 154
- Memory channel performance considerations on page 155
- Memory modes on page 157
- DIMM installation order on page 158
- Memory installation considerations on page 161

The x240 has 12 DIMM sockets per processor (24 DIMMs in total) running at 800, 1066, 1333, or 1600 MHz. It supports 2 GB, 4 GB, 8 GB, 16 GB, and 32 GB memory modules, as listed in Table 5-10 on page 154. With the Intel Xeon E5-2600 series processors, the x240 can support up to 768 GB of memory in total when using 32 GB LRDIMMs with both processors installed.

The x240 uses double data rate type 3 (DDR3) LP DIMMs. You can use registered DIMMs (RDIMMs), unbuffered DIMMs (UDIMMs), or load-reduced DIMMs (LRDIMMs). However, mixing the different memory DIMM types is not supported. The E5-2600 series processor has four memory channels, and each memory channel can have up to three DIMMs. Figure 5-8 shows the E5-2600 series processor and the four memory channels.

Figure 5-8 The Intel Xeon E5-2600 series processor and the four memory channels

Memory subsystem overview


Table 5-8 summarizes the characteristics of the x240 memory subsystem. These characteristics are explained in detail in the following sections.

Table 5-8 Memory subsystem characteristics of the x240

Memory subsystem characteristic                     IBM Flex System x240 Compute Node
Number of memory channels per processor             4
Supported DIMM voltages                             Low voltage (1.35V); standard voltage (1.5V)
Maximum number of DIMMs per channel (DPC)           3 (using 1.5V DIMMs); 2 (using 1.35V DIMMs)
DIMM slot maximum                                   One processor: 12; two processors: 24
Mixing of memory types (RDIMMs, UDIMMs, LRDIMMs)    Not supported in any configuration
Mixing of memory speeds                             Supported; lowest common speed for all installed DIMMs
Mixing of DIMM voltage ratings                      Supported; all 1.35V DIMMs will run at 1.5V

Registered DIMM (RDIMM) modules
Supported memory sizes                              16, 8, 4, and 2 GB
Supported memory speeds                             1600, 1333, 1066, and 800 MHz
Maximum system capacity                             384 GB (24x 16 GB)
Maximum memory speed                                1.35V @ 2DPC: 1333 MHz; 1.5V @ 2DPC: 1600 MHz; 1.5V @ 3DPC: 1066 MHz
Maximum ranks per channel (any memory voltage)      8
Maximum number of DIMMs                             One processor: 12; two processors: 24

Unbuffered DIMM (UDIMM) modules
Supported memory sizes                              4 GB
Supported memory speeds                             1333 MHz
Maximum system capacity                             64 GB (16x 4 GB)
Maximum memory speed                                1.35V @ 2DPC: 1333 MHz; 1.5V @ 2DPC: 1333 MHz; 1.35V or 1.5V @ 3DPC: Not supported
Maximum ranks per channel (any memory voltage)      8
Maximum number of DIMMs                             One processor: 8; two processors: 16

Load-reduced DIMM (LRDIMM) modules
Supported sizes                                     32 and 16 GB
Supported speeds                                    1333 and 1066 MHz
Maximum capacity                                    768 GB (24x 32 GB)
Maximum memory speed                                1.35V @ 2DPC: 1066 MHz; 1.5V @ 2DPC: 1333 MHz; 1.35V or 1.5V @ 3DPC: 1066 MHz
Maximum ranks per channel (any memory voltage)      8a
Maximum number of DIMMs                             One processor: 12; two processors: 24

a. Due to reduced electrical loading, a 4R (four-rank) LRDIMM has the equivalent load of a two-rank RDIMM. This reduced load allows the x240 to support three 4R LRDIMMs per channel (instead of two as with UDIMMs and RDIMMs). For more information, see Memory types on page 153.

Tip: When an unsupported memory configuration is detected, the IMM illuminates the DIMM mismatch light path error LED and the system does not boot. Examples of a DIMM mismatch error include:

- Mixing of RDIMMs, UDIMMs, or LRDIMMs in the system
- Not adhering to the DIMM population rules

In some cases, the error log points to the DIMM slots that are mismatched.

Figure 5-9 shows the location of the 24 memory DIMM sockets on the x240 system board and other components.
Figure 5-9 DIMM layout on the x240 system board

Table 5-9 lists which DIMM connectors belong to which processor memory channel.


Table 5-9 The DIMM connectors for each processor memory channel

Processor     Memory channel   DIMM connectors
Processor 1   Channel 0        4, 5, and 6
              Channel 1        1, 2, and 3
              Channel 2        7, 8, and 9
              Channel 3        10, 11, and 12
Processor 2   Channel 0        13, 14, and 15
              Channel 1        16, 17, and 18
              Channel 2        22, 23, and 24
              Channel 3        19, 20, and 21

Memory types
The x240 supports three types of DIMM memory:

- RDIMM modules. Registered DIMMs are the mainstream module solution for servers or any application that demands heavy data throughput, high density, and high reliability. RDIMMs use registers to isolate the memory controller address, command, and clock signals from the dynamic random-access memory (DRAM). This process results in a lighter electrical load, so more DIMMs can be interconnected and larger memory capacity is possible. The register does, however, typically impose a clock or more of delay, meaning that registered DIMMs often have slightly longer access times than their unbuffered counterparts. In general, RDIMMs have the best balance of capacity, reliability, and workload performance, with a maximum performance of 1600 MHz (at 2 DPC). For more information about supported x240 RDIMM memory options, see Table 5-10 on page 154.

- UDIMM modules. In contrast to RDIMMs, which use registers to isolate the memory controller from the DRAMs, UDIMMs attach directly to the memory controller. Therefore, they do not introduce a delay, which creates better performance. The disadvantage is limited drive capability, which means that the number of DIMMs that can be connected together on the same memory channel remains small due to electrical loading. This leads to fewer DIMMs per channel (DPC) and overall lower total system memory capacity than RDIMM systems. UDIMMs have the lowest latency and lowest power usage, but also the lowest overall capacity. For more information about supported x240 UDIMM memory options, see Table 5-10 on page 154.

- LRDIMM modules. Load-reduced DIMMs are similar to RDIMMs in that they use memory buffers to isolate the memory controller address, command, and clock signals from the individual DRAMs on the DIMM. Load-reduced DIMMs take the buffering a step further by also buffering the memory controller data lines from the DRAMs.


Figure 5-10 shows a comparison of RDIMM and LRDIMM memory types.



Figure 5-10 Comparing RDIMM buffering and LRDIMM buffering

In essence, all signaling between the memory controller and the LRDIMM is intercepted by the memory buffers on the LRDIMM module. This allows additional ranks to be added to each LRDIMM module without sacrificing signal integrity. It also means that fewer ranks are seen by the memory controller (for example, a 4R LRDIMM appears to the memory controller as a 2R RDIMM).

The additional buffering that LRDIMMs provide greatly reduces the electrical load on the system. This reduction allows the system to operate at a higher overall memory speed for a certain capacity, or, conversely, at a higher overall memory capacity for a certain memory speed. LRDIMMs allow maximum system memory capacity and the highest performance for system memory capacities above 384 GB. They are suited to system workloads that require maximum memory, such as virtualization and databases. For more information about supported x240 LRDIMM memory options, see Table 5-10.

The memory type installed in the x240 combines with other factors to determine the ultimate performance of the x240 memory subsystem. For a list of rules for populating the memory subsystem, see Memory installation considerations on page 161.
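The load reduction described above can be modeled simply. The following sketch (illustrative only, not IBM tooling; the halving of the rank load is a simplification of the buffering effect) shows why three 4R LRDIMMs fit within the same 8-rank channel limit that restricts 4R RDIMMs to two per channel:

```python
# Illustrative model of the LRDIMM effect: the memory buffer makes a
# four-rank module present the electrical load of a two-rank RDIMM.
def channel_rank_load(dimms_per_channel, ranks_per_dimm, lrdimm=False):
    effective_ranks = ranks_per_dimm // 2 if lrdimm else ranks_per_dimm
    return dimms_per_channel * effective_ranks

print(channel_rank_load(3, 4, lrdimm=True))   # 6 -> within the 8-rank limit
print(channel_rank_load(3, 4, lrdimm=False))  # 12 -> exceeds it (so 2 DPC max)
```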

Memory options
Table 5-10 lists the memory DIMM options for the x240.
Table 5-10 Memory DIMMs for the x240 type 8737

Part number   FC     Description                                                               Where used
Registered DIMM (RDIMM) modules
49Y1405       8940   2 GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1406       8941   4 GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM       H1x, H2x, G2x, F2x, D2x, A1x
49Y1407       8942   4 GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1559       A28Z   4 GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM        R2x, Q2x, N2x, M2x, M1x, L2x, J1x
90Y3178       A24L   4 GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
90Y3109       A292   8 GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
49Y1397       8923   8 GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1563       A1QT   16 GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1400       8939   16 GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066 MHz LP RDIMM
00D4968       A2U5   16 GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
Unbuffered DIMM (UDIMM) modules
49Y1404       8648   4 GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP UDIMM
Load-reduced DIMM (LRDIMM) modules
49Y1567       A290   16 GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM
90Y3105       A291   32 GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM
Memory channel performance considerations


The memory installed in the x240 can be clocked at 1600 MHz, 1333 MHz, 1066 MHz, or 800 MHz. The speed is determined by the type of memory, the memory population, the processor model, and several other factors. The following factors determine the ultimate performance of the x240 memory subsystem:

- Model of Intel Xeon E5-2600 series processor installed. As mentioned in 5.2.4, System architecture on page 145, the Intel Xeon E5-2600 series processor includes one integrated memory controller. The model of processor installed determines the maximum speed at which the integrated memory controller clocks the installed memory. Table 5-6 on page 148 lists the maximum DDR3 speed that each processor model supports. This maximum speed might not be the ultimate speed of the memory subsystem.

- Speed of the DDR3 DIMMs installed. For maximum performance, the speed rating of each DIMM module must match the maximum memory clock speed of the Xeon E5-2600 processor. Keep these rules in mind when matching processors and DIMM modules: the processor never over-clocks the memory in any configuration, and the processor clocks all the installed memory at either the rated speed of the processor or the speed of the slowest DIMM installed in the system. For example, an Intel Xeon E5-2640 series processor clocks all installed memory at a maximum speed of 1333 MHz. If any 1600 MHz DIMM modules are installed, they are clocked at 1333 MHz. However, if any 1066 MHz or 800 MHz DIMM modules are installed, all installed DIMM modules are clocked at the slowest speed (800 MHz).

- Number of DIMMs per channel (DPC). Generally, the Xeon E5-2600 processor series clocks up to 2DPC at the maximum rated speed of the processor. However, if any channel is fully populated (3DPC), the processor slows all the installed memory down. For example, an Intel Xeon E5-2690 series processor clocks all installed memory at a maximum speed of 1600 MHz up to 2DPC. However, if any one channel is populated with 3DPC, all memory channels are clocked at 1066 MHz.

- DIMM voltage rating. The Xeon E5-2600 processor series supports both low voltage (1.35V) and standard voltage (1.5V) DIMMs. Table 5-10 on page 154 shows that the maximum clock speed for supported low voltage DIMMs is 1333 MHz, and the maximum clock speed for supported standard voltage DIMMs is 1600 MHz.

Table 5-11 lists the memory DIMM options for the x240, including memory channel speed based on number of DIMMs per channel, ranks per DIMM, and DIMM voltage rating.
Table 5-11 x240 memory DIMM and memory channel speed support (speeds in MHz; NS = not supported)

Part number   Capacity   Ranks and data width   DRAM density   1DPC 1.35V   1DPC 1.5V   2DPC 1.35V   2DPC 1.5V   3DPC 1.35V   3DPC 1.5V
RDIMM
49Y1405       2 GB       1Rx8                   2 Gb           1333         1333        1333         1333        NS           1066
49Y1406       4 GB       1Rx4                   2 Gb           1333         1333        1333         1333        NS           1066
49Y1407       4 GB       2Rx8                   2 Gb           1333         1333        1333         1333        NS           1066
49Y1559       4 GB       1Rx4                   2 Gb           NS           1600        NS           1600        NS           1066
90Y3178       4 GB       2Rx8                   2 Gb           NS           1600        NS           1600        NS           1066
90Y3109       8 GB       2Rx4                   2 Gb           NS           1600        NS           1600        NS           1066
49Y1397       8 GB       2Rx4                   2 Gb           1333         1333        1333         1333        NS           1066
49Y1563       16 GB      2Rx4                   4 Gb           1333         1333        1333         1333        NS           1066
49Y1400       16 GB      4Rx4                   2 Gb           800          1066        NS           800         NS           NS
00D4968       16 GB      2Rx4                   4 Gb           NS           1600        NS           1600        NS           1066
UDIMM
49Y1404       4 GB       2Rx8                   2 Gb           1333         1333        1333         1333        NS           NS
LRDIMM
49Y1567       16 GB      4Rx4                   2 Gb           1066         1333        1066         1333        1066         1066
90Y3105       32 GB      4Rx4                   4 Gb           1066         1333        1066         1333        1066         1066
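The speed-selection rules described above amount to taking the minimum of the limits involved. The following sketch is illustrative only (the 1066 MHz clamp at 3 DPC reflects the 1.5V RDIMM case in Table 5-8; the actual clock also depends on DIMM voltage and type):

```python
# Illustrative sketch of the memory speed-selection rules (not IBM tooling).
def effective_memory_speed(cpu_max_mhz, dimm_speeds_mhz, dimms_per_channel):
    # Rule 1: the processor never over-clocks the memory
    # Rule 2: all DIMMs run at the speed of the slowest installed module
    speed = min(cpu_max_mhz, min(dimm_speeds_mhz))
    # Rule 3: a fully populated channel (3 DPC) forces a lower clock
    if dimms_per_channel >= 3:
        speed = min(speed, 1066)
    return speed

# E5-2640 (1333 MHz limit) with 1600 MHz DIMMs at 2 DPC:
print(effective_memory_speed(1333, [1600, 1600], 2))  # 1333
# E5-2690 (1600 MHz limit) with any channel at 3 DPC:
print(effective_memory_speed(1600, [1600] * 12, 3))   # 1066
```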


Memory modes
The x240 type 8737 supports three memory modes:

- Independent channel mode
- Rank-sparing mode
- Mirrored-channel mode

These modes can be selected in the Unified Extensible Firmware Interface (UEFI) setup. For more information, see 5.2.12, Systems management on page 172.

Independent channel mode


This is the default memory mode. DIMMs are populated starting with the last DIMM connector on each channel, then one DIMM per channel, distributed equally between channels and processors. In this memory mode, the operating system uses the full amount of installed memory and no redundancy is provided. Configured in independent channel mode, the IBM Flex System x240 Compute Node yields a maximum of 192 GB of usable memory with one processor installed, and 384 GB of usable memory with two processors installed, when using 16 GB DIMMs. Memory DIMMs must be installed in the correct order, starting with the last physical DIMM socket of each channel. The DIMMs can be installed without matching sizes, but avoid this configuration because it might affect optimal memory performance. For more information about the memory DIMM installation sequence when using independent channel mode, see Memory DIMM installation: Independent channel and rank-sparing modes on page 158.

Rank-sparing mode
In rank-sparing mode, one memory DIMM rank serves as a spare of the other ranks on the same channel. The spare rank is held in reserve and is not used as active memory. The spare rank must have identical or larger memory capacity than all the other active memory ranks on the same channel. After an error threshold is surpassed, the contents of that rank are copied to the spare rank. The failed rank of memory is taken offline, and the spare rank is put online and used as active memory in place of the failed rank. The memory DIMM installation sequence when using rank-sparing mode is identical to independent channel mode as described in Memory DIMM installation: Independent channel and rank-sparing modes on page 158.

Mirrored-channel mode
In mirrored-channel mode, memory is installed in pairs. Each DIMM in a pair must be identical in capacity, type, and rank count. The channels are grouped in pairs. Each channel in the group receives the same data. One channel is used as a backup of the other, which provides redundancy. The memory contents on channel 0 are duplicated in channel 1, and the memory contents of channel 2 are duplicated in channel 3. The DIMMs in channel 0 and channel 1 must be the same size and type. The DIMMs in channel 2 and channel 3 must be the same size and type. The effective memory that is available to the system is only half of what is installed. Because memory mirroring is handled in hardware, it is operating system-independent. Restriction: In a two processor configuration, memory must be identical across the two processors to enable the memory mirroring feature.
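The effect of each mode on usable capacity can be sketched as follows (illustrative only, not an IBM tool; for rank-sparing, the exact overhead depends on the capacity of the spare rank reserved on each channel):

```python
# Illustrative sketch: usable memory capacity by memory mode.
def usable_memory_gb(installed_gb, mode, spare_gb=0):
    if mode == "mirrored":
        # Each channel pair holds a redundant copy, so the OS sees half
        return installed_gb // 2
    if mode == "rank-sparing":
        # The reserved spare ranks (spare_gb in total) are held offline
        return installed_gb - spare_gb
    return installed_gb  # independent channel mode: full capacity, no redundancy

# 24x 16 GB RDIMMs installed (384 GB):
print(usable_memory_gb(384, "independent"))  # 384
print(usable_memory_gb(384, "mirrored"))     # 192
```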

157

Figure 5-11 shows the E5-2600 series processor with the four memory channels and which channels are mirrored when operating in mirrored-channel mode.

Figure 5-11 Showing the mirrored channels and DIMM pairs when in mirrored-channel mode

For more information about the memory DIMM installation sequence when using mirrored channel mode, see Memory DIMM installation: Mirrored-channel on page 161.

DIMM installation order


This section describes the recommended order in which DIMMs should be installed, based on the memory mode used.

Memory DIMM installation: Independent channel and rank-sparing modes


The following guidelines apply only when the processors are operating in independent channel mode or rank-sparing mode. The x240 boots with one memory DIMM installed per processor. However, the suggested memory configuration balances the memory across all the memory channels on each processor to use the available memory bandwidth. Use one of the following suggested memory configurations:

- Four, eight, or 12 memory DIMMs in a single processor x240 server
- Eight, 16, or 24 memory DIMMs in a dual processor x240 server

This sequence spreads the DIMMs across as many memory channels as possible. For best performance and to ensure a working memory configuration, install the DIMMs in the sockets as shown in the following tables.



Table 5-12 shows DIMM installation if you have one processor installed.
Table 5-12 Suggested DIMM installation for the x240 with one processor installed

With one processor, DIMM slots are populated starting with the slot farthest from the processor on each channel (slots 1, 4, 9, and 12), working inward:

Number of DIMMs   DIMM slots populateda
1                 1
2                 1, 4
3                 1, 4, 9
4                 1, 4, 9, 12
5                 1, 2, 4, 9, 12
6                 1, 2, 4, 5, 9, 12
7                 1, 2, 4, 5, 8, 9, 12
8                 1, 2, 4, 5, 8, 9, 11, 12
9                 1, 2, 3, 4, 5, 8, 9, 11, 12
10                1, 2, 3, 4, 5, 6, 8, 9, 11, 12
11                1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12
12                1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

a. For optimal memory performance, populate all memory channels equally

Table 5-13 shows DIMM installation if you have two processors installed.
Table 5-13 Suggested DIMM installation for the x240 with two processors installed

With two processors, DIMM slots are populated alternately between processor 1 and processor 2, following the same farthest-slot-first order on each processor as in Table 5-12. DIMMs are added in this sequence:

1, 13, 4, 16, 9, 21, 12, 24, 2, 14, 5, 17, 8, 20, 11, 23, 3, 15, 6, 18, 7, 19, 10, 22

For example, an eight-DIMM configuration populates slots 1, 4, 9, and 12 on processor 1 and slots 13, 16, 21, and 24 on processor 2, one DIMM in each memory channel.a

a. For optimal memory performance, populate all memory channels equally



Memory DIMM installation: Mirrored-channel


Table 5-14 lists the memory DIMM installation order for the x240, with one or two processors installed when operating in mirrored-channel mode.
Table 5-14 The DIMM installation order for mirrored-channel mode

DIMM paira   One processor installed   Two processors installed
1st          1 & 4                     1 & 4
2nd          9 & 12                    13 & 16
3rd          2 & 5                     9 & 12
4th          8 & 11                    21 & 24
5th          3 & 6                     2 & 5
6th          7 & 10                    14 & 17
7th          -                         8 & 11
8th          -                         20 & 23
9th          -                         3 & 6
10th         -                         15 & 18
11th         -                         7 & 10
12th         -                         19 & 22

a. The pair of DIMMs must be identical in capacity, type, and rank count

Memory installation considerations


Use the following general guidelines when deciding on the memory configuration of your IBM Flex System x240 Compute Node:

- All memory installation considerations apply equally to one- and two-processor systems.
- All DIMMs must be DDR3 DIMMs.
- Memory of different types (RDIMMs, UDIMMs, and LRDIMMs) cannot be mixed in the system.
- If you mix DIMMs with 1.35V and 1.5V ratings, the system runs all of them at 1.5V and you lose the low-voltage energy advantage.
- If you mix DIMMs with different memory speeds, all DIMMs in the system run at the lowest speed.
- Install memory DIMMs in order of their size, with the largest DIMM first, in the order described in Table 5-12 on page 159 and Table 5-13 on page 160. The correct installation order is the DIMM slot farthest from the processor first (DIMM slots 1, 4, 9, and 12), working inward.
- Install memory DIMMs in order of their rank, with the largest DIMM in the DIMM slot farthest from the processor. Start with DIMM slots 1, 4, 9, and 12, and work inward.
- Memory DIMMs can be installed one DIMM at a time. However, avoid this configuration because it can affect performance.
- For maximum memory bandwidth, install one DIMM in each of the four memory channels. In other words, install in matched quads (four DIMMs at a time).
- Populate equivalent ranks per channel.
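A configuration check that encodes three of these rules might look like the following sketch (the function name and dictionary keys are illustrative, not part of any IBM tool):

```python
# Illustrative check of the x240 DIMM mixing rules.
def check_dimm_population(dimms):
    # Rule: RDIMMs, UDIMMs, and LRDIMMs cannot be mixed in the system
    if len({d["type"] for d in dimms}) > 1:
        raise ValueError("RDIMMs, UDIMMs, and LRDIMMs cannot be mixed")
    # Rule: mixed speeds run at the lowest installed speed
    speed = min(d["mhz"] for d in dimms)
    # Rule: mixed voltage ratings run everything at 1.5V
    voltage = max(d["volts"] for d in dimms)
    return speed, voltage

print(check_dimm_population([
    {"type": "RDIMM", "mhz": 1600, "volts": 1.5},
    {"type": "RDIMM", "mhz": 1333, "volts": 1.35},
]))  # (1333, 1.5)
```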


5.2.7 Standard onboard features


This section describes the standard onboard features of the IBM Flex System x240 Compute Node.

USB ports
The x240 has one external USB port on the front of the compute node. Figure 5-12 shows the location of the external USB connector on the x240.

External USB connector

Figure 5-12 The front USB connector on the x240 compute node

The x240 also supports the x240 USB Enablement Kit, an option that provides two internal USB ports, used primarily for attaching USB hypervisor keys. For more information, see 5.2.9, Integrated virtualization on page 169.

Console breakout cable


The x240 connects to local video, USB keyboard, and USB mouse devices through the console breakout cable, which attaches to a connector on the front bezel of the x240 compute node. The console breakout cable also provides a serial connector. Figure 5-13 shows the console breakout cable.

Breakout cable connector

Serial connector 2-port USB Video connector


Figure 5-13 Console breakout cable connecting to the x240


IBM PureFlex System and IBM Flex System Products and Technology

Table 5-15 lists the ordering part number and feature code of the console breakout cable. One console breakout cable ships with the IBM Flex System Enterprise Chassis.
Table 5-15 Ordering part number and feature code

Part number   Feature code   Description
81Y5286       A1NF           IBM Flex System Console Breakout Cable

Trusted Platform Module


Trusted computing is an industry initiative that provides a combination of secure software and secure hardware to create a trusted platform. It is a specification that increases network security by building unique hardware IDs into computing devices. The x240 implements Trusted Platform Module (TPM) Version 1.2 support. The TPM in the x240 is one of the three layers of the trusted computing initiative, as shown in Table 5-16.
Table 5-16 Trusted computing layers

Layer                                                             Implementation
Level 1: Tamper-proof hardware, used to generate trustable keys   Trusted Platform Module
Level 2: Trustable platform                                       UEFI or BIOS, Intel processor
Level 3: Trustable execution                                      Operating system, Drivers

5.2.8 Local storage


The x240 compute node features an onboard LSI SAS2004 SAS controller with two small form factor (SFF) hot-swap drive bays that are accessible from the front of the compute node. The onboard LSI SAS2004 controller provides RAID 0, RAID 1, or RAID 10 capability. It supports up to two SFF hot-swap serial-attached SCSI (SAS) or Serial Advanced Technology Attachment (SATA) hard disk drives (HDDs) or two SFF hot-swap solid-state drives. Figure 5-14 shows how the LSI SAS2004 controller and hot-swap storage devices connect to the internal HDD interface.

SAS 0 LSI2004 SAS Controller SAS 0 SAS 1 SAS 1

Hot-Swap Storage Device 1 Hot-Swap Storage Device 2

Figure 5-14 The LSI2004 SAS controller connections to HDD interface


Figure 5-15 shows the front of the x240 including the two hot-swap drive bays.

Figure 5-15 The x240 showing the front hot-swap disk drive bays

Local SAS and SATA HDDs and SSDs


The x240 type 8737 supports up to two hot-swap SFF SAS or SATA HDDs or up to two hot-swap SFF solid-state drives (SSDs). These two hot-swap components are accessible from the front of the compute node without removing the compute node from the chassis. See Table 5-17 for a list of supported SAS and SATA HDDs and SSDs.
Table 5-17 Supported SAS and SATA HDDs and SSDs

Part number   Feature code   Description

10K SAS hard disk drives
42D0637       5599           IBM 300 GB 10K 6 Gbps SAS 2.5" SFF Slim-HS HDD
49Y2003       5433           IBM 600 GB 10K 6 Gbps SAS 2.5" SFF Slim-HS HDD
81Y9650       A282           IBM 900 GB 10K 6 Gbps SAS 2.5" SFF HS HDD

15K SAS hard disk drives
42D0677       5536           IBM 146 GB 15K 6 Gbps SAS 2.5" SFF Slim-HS HDD
81Y9670       A283           IBM 300 GB 15K 6 Gbps SAS 2.5" SFF HS HDD

NL SATA hard disk drives
81Y9722       A1NX           IBM 250 GB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
81Y9726       A1NZ           IBM 500 GB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
81Y9730       A1AV           IBM 1 TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD

NL SAS hard disk drives
42D0707       5409           IBM 500 GB 7200 6 Gbps NL SAS 2.5" SFF Slim-HS HDD
81Y9690       A1P3           IBM 1 TB 7.2K 6 Gbps NL SAS 2.5" SFF HS HDD

Solid-state drives
43W7718       A2FN           IBM 200 GB SATA 2.5" MLC HS SSD
90Y8643       A2U3           IBM 256 GB SATA 2.5" MLC HS Entry SSD
90Y8648       A2U4           IBM 128 GB SATA 2.5" MLC HS Entry SSD

eXFlash storage
eXFlash storage
In addition, the x240 supports eXFlash with up to eight 1.8-inch solid-state drives combined with a ServeRAID M5115 SAS/SATA controller (90Y4390). The M5115 attaches to the I/O adapter 1 connector. It can be attached even if the Compute Node Fabric Connector is installed. (The Compute Node Fabric Connector is used to route the Embedded 10 Gb Virtual Fabric Adapter to bays 1 and 2; for more information, see 5.2.11, I/O expansion on page 171.) The ServeRAID M5115 cannot be installed if an adapter is installed in I/O adapter slot 1.

The ServeRAID M5115 supports the following combinations of 2.5-inch drives and 1.8-inch solid-state drives:
- Up to two 2.5-inch drives only
- Up to four 1.8-inch drives only
- Up to two 2.5-inch drives, plus up to four 1.8-inch solid-state drives
- Up to eight 1.8-inch solid-state drives

The ServeRAID M5115 SAS/SATA Controller (90Y4390) is an advanced RAID controller that supports RAID 0, 1, 10, 5, and 50, and optionally 6 and 60. It includes 1 GB of cache, which can be backed up to flash when attached to the supercapacitor included with the optional ServeRAID M5100 Series Enablement Kit (90Y4342).

At least one hardware kit is required with the ServeRAID M5115 controller to enable specific drive support:

- ServeRAID M5100 Series Enablement Kit for IBM Flex System x240 (90Y4342) enables support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection. This enablement kit replaces the standard two-bay backplane (which is attached through the system board to an onboard controller) with a new backplane that attaches with an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment for the CacheVault unit.

  MegaRAID CacheVault flash cache protection uses NAND flash memory powered by a supercapacitor to protect data stored in the controller cache. This module eliminates the need for a lithium-ion battery commonly used to protect DRAM cache memory on Peripheral Component Interconnect (PCI) RAID controllers. To avoid data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash using power from the supercapacitor. After power is restored to the RAID controller, the saved data is transferred from the NAND flash back to the DRAM cache. The DRAM cache can then be flushed to disk.

  Tip: The Enablement Kit is only required if 2.5-inch drives are to be used. If you plan to install four or eight 1.8-inch SSDs, this kit is not required.

- ServeRAID M5100 Series IBM eXFlash Kit for IBM Flex System x240 (90Y4341) enables eXFlash support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay eXFlash backplane that attaches with an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, so this kit does not include a supercapacitor.

- ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240 (90Y4391) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles that replace the existing baffles, and each baffle has mounts for two SSDs. Included flexible cables connect the drives to the controller.


Table 5-18 shows the kits required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the eXFlash Kit, and the SSD Expansion Kit.

Tip: If the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is installed, the x240 USB Enablement Kit (49Y8119, described in 5.2.9, Integrated virtualization on page 169) cannot also be installed. The x240 USB Enablement Kit and the SSD Expansion Kit each include special air baffles that cannot be installed at the same time.
Table 5-18 ServeRAID M5115 hardware kits

Maximum number   Maximum number           Components required (in addition to the
of 2.5" drives   of 1.8" SSDs             ServeRAID M5115 controller, 90Y4390)
2                0                        Enablement Kit 90Y4342
0                4 (front)                eXFlash Kit 90Y4341
2                4 (internal)             Enablement Kit 90Y4342 and SSD Expansion Kit 90Y4391 (a)
0                8 (front and internal)   eXFlash Kit 90Y4341 and SSD Expansion Kit 90Y4391 (a)

a. If you install the SSD Expansion Kit, you cannot also install the x240 USB Enablement Kit (49Y8119).
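The kit matrix above can be expressed as a small configuration helper. This Python sketch is our illustration, not an IBM configurator; the function name is ours, while the part numbers come from Table 5-18:

```python
# Illustrative helper (not an IBM tool): maps a wanted drive mix to the
# ServeRAID M5115 hardware kits required, per Table 5-18.
KITS = {
    "enablement": "90Y4342",     # two 2.5" front bays plus CacheVault unit
    "exflash": "90Y4341",        # four front 1.8" SSD bays
    "ssd_expansion": "90Y4391",  # four internal 1.8" SSDs on air baffles
}

def required_kits(front_25_drives, ssds_18):
    """Return part numbers needed for a given 2.5"/1.8" drive combination."""
    configs = {
        (2, 0): ["enablement"],
        (0, 4): ["exflash"],
        (2, 4): ["enablement", "ssd_expansion"],
        (0, 8): ["exflash", "ssd_expansion"],
    }
    kits = configs.get((front_25_drives, ssds_18))
    if kits is None:
        raise ValueError("unsupported drive combination")
    # The M5115 controller itself (90Y4390) is always required
    return ["90Y4390"] + [KITS[k] for k in kits]

print(required_kits(0, 8))  # ['90Y4390', '90Y4341', '90Y4391']
```

The eight-SSD case returns the controller, the eXFlash Kit, and the SSD Expansion Kit, matching the worked example in the text.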

Figure 5-16 shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (row 1 of Table 5-18).
ServeRAID M5115 controller (90Y4390) with ServeRAID M5100 Series Enablement Kit (90Y4342)
ServeRAID M5115 controller

MegaRAID CacheVault flash cache protection

Replacement 2-drive backplane

Figure 5-16 The ServeRAID M5115 and the Enablement Kit installed


Figure 5-17 shows how the ServeRAID M5115 and eXFlash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (row 4 of Table 5-18 on page 166).
ServeRAID M5115 controller (90Y4390) with ServeRAID M5100 Series IBM eXFlash Kit (90Y4341) and ServeRAID M5100 Series SSD Expansion Kit (90Y4391)
ServeRAID M5115 controller

eXFlash Kit: Replacement 4-drive SSD backplane and drive bays

SSD Expansion Kit: Four SSDs on special air baffles above DIMMs (no CacheVault flash protection)

Eight drives supported: - Four internal drives - Four front-accessible drives

Figure 5-17 ServeRAID M5115 with eXFlash and SSD Expansion Kits installed

The eight SSDs are installed in the following locations:
- Four in the front of the system in place of the two 2.5-inch drive bays
- Two in a tray above the memory banks for CPU 1
- Two in a tray above the memory banks for CPU 2

The ServeRAID M5115 controller has the following specifications:
- Eight internal 6 Gbps SAS/SATA ports
- PCI Express 3.0 x8 host interface
- 6 Gbps throughput per port
- 800 MHz dual-core IBM PowerPC processor with LSI SAS2208 6 Gbps RAID-on-Chip (ROC) controller
- Support for RAID levels 0, 1, 10, 5, and 50 standard; support for RAID 6 and 60 with optional upgrade using 90Y4411
- Optional onboard 1 GB data cache (DDR3 running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342
- Support for SAS and SATA HDDs and SSDs
- Support for intermixing SAS and SATA HDDs and SSDs; mixing different types of drives in the same array (drive group) is not recommended
- Support for self-encrypting drives (SEDs) with MegaRAID SafeStore
- Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447)
- Support for up to 64 virtual drives, up to 128 drive groups, up to 16 virtual drives per drive group, and up to 32 physical drives per drive group
- Support for logical unit number (LUN) sizes up to 64 TB
- Configurable stripe size up to 1 MB
- Compliance with Disk Data Format (DDF) configuration on disk (CoD)
- S.M.A.R.T. support
- MegaRAID Storage Manager management software

Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, an SSD performance accelerator, and an SSD caching enabler. Table 5-19 lists all Feature on Demand (FoD) license upgrades.
Table 5-19 Supported upgrade features

Part number   Description                                                          Maximum quantity supported
90Y4410       ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System            1
90Y4412       ServeRAID M5100 Series Performance Accelerator for IBM Flex System   1
              (MegaRAID FastPath)
90Y4447       ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System       1
              (MegaRAID CacheCade Pro 2.0)

These features have the following characteristics:

- RAID 6 Upgrade (90Y4410): Adds support for RAID 6 and RAID 60. This is a Feature on Demand license.

- Performance Accelerator (90Y4412): The Performance Accelerator for IBM Flex System is implemented by using the LSI MegaRAID FastPath software. It provides high-performance I/O acceleration for SSD-based virtual drives by using a low-latency I/O path to increase the maximum input/output operations per second (IOPS) capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is a Feature on Demand license.

- SSD Caching Enabler for traditional hard drives (90Y4447): The SSD Caching Enabler for IBM Flex System is implemented by using LSI MegaRAID CacheCade Pro 2.0. It is designed to accelerate the performance of HDD arrays with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache, which helps maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is a Feature on Demand license, and it requires that at least one SSD be installed.

The 1.8-inch solid-state drives supported with the ServeRAID M5115 controller are listed in Table 5-20.
Table 5-20 Supported 1.8-inch solid-state drives

Part number   Description                     Maximum quantity supported
43W7746       IBM 200 GB SATA 1.8" MLC SSD    8
43W7726       IBM 50 GB SATA 1.8" MLC SSD     8
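The hot-data tracking that SSD caching performs can be illustrated conceptually. The following Python sketch is our simplified illustration of frequency-based cache promotion; the threshold and bookkeeping are invented for the example and do not reflect LSI's actual CacheCade algorithm:

```python
from collections import Counter

# Conceptual sketch of frequency-based hot-data promotion, the general idea
# behind SSD caching of HDD arrays. All details here are illustrative.
class HotDataCache:
    def __init__(self, promote_after=3):
        self.access_counts = Counter()
        self.ssd_cache = set()
        self.promote_after = promote_after

    def read(self, block):
        if block in self.ssd_cache:
            return "ssd"  # hot data is served from the SSD cache pool
        self.access_counts[block] += 1
        if self.access_counts[block] >= self.promote_after:
            self.ssd_cache.add(block)  # promote a frequently accessed block
        return "hdd"

cache = HotDataCache()
for _ in range(4):
    result = cache.read(42)
print(result)  # the fourth read of the same block is served from SSD: ssd
```

The point of the sketch is the workload property the feature exploits: repeated access to a small hot set lets subsequent reads bypass the slower HDD array.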


5.2.9 Integrated virtualization


The x240 offers an IBM standard USB flash drive option preinstalled with VMware ESXi. This is an embedded version of VMware ESXi. It is fully contained on the flash drive, and so does not require any disk space. The IBM USB Memory Key for VMware Hypervisor plugs into the USB ports on the optional x240 USB Enablement Kit (Figure 5-18). Table 5-21 lists the ordering information for the VMware hypervisor options.
Table 5-21 IBM USB Memory Key for VMware Hypervisor

Part number   Feature code   Description
41Y8300       A2VC           IBM USB Memory Key for VMware ESXi 5.0
41Y8298       A2G0           IBM Blank USB Memory Key for VMware ESXi Downloads

The USB memory keys connect to the internal x240 USB Enablement Kit. Table 5-22 lists the ordering information for the internal x240 USB Enablement Kit.
Table 5-22 Internal USB port option

Part number   Feature code   Description
49Y8119       A33M           x240 USB Enablement Kit

The x240 USB Enablement Kit connects to the system board of the server as shown in Figure 5-18. The kit offers two ports, enabling you to install two memory keys. If you install two keys, both devices are listed in the boot menu. This setup allows you to boot from either device, or to set one as a backup in case the first becomes corrupted.
USB flash key USB two-port assembly

Figure 5-18 The x240 compute node showing the location of the internal x240 USB Enablement Kit


Restriction: If the ServeRAID M5115 SAS/SATA Controller is installed, the IBM USB Memory Key for VMware Hypervisor cannot be installed. For a complete description of the features and capabilities of VMware ESX Server, see: http://www.vmware.com/products/vi/esx/

5.2.10 Embedded 10 Gb Virtual Fabric Adapter


Some models of the x240 include an Embedded 10 Gb Virtual Fabric Adapter built into the system board. Table 5-2 on page 144 lists what models of the x240 include the Embedded 10 Gb Virtual Fabric Adapter. Each x240 model that includes the embedded 10 Gb Virtual Fabric Adapter also has the Compute Node Fabric Connector installed in I/O connector 1. The Compute Node Fabric Connector is physically screwed onto the system board, and provides connectivity to the Enterprise Chassis midplane. Models without the Embedded 10 Gb Virtual Fabric Adapter do not include any other Ethernet connections to the Enterprise Chassis midplane. For those models, an I/O adapter must be installed in either I/O connector 1 or I/O connector 2. This adapter provides network connectivity between the server and the chassis midplane, and ultimately to the network switches. Figure 5-19 shows the Compute Node Fabric Connector.

Figure 5-19 The Compute Node Fabric Connector

The Compute Node Fabric Connector enables Port 1 on the Embedded 10 Gb Virtual Fabric Adapter to be routed to I/O module bay 1. Similarly, port 2 can be routed to I/O module bay 2. The Compute Node Fabric Connector can be unscrewed and removed, if required, to allow the installation of an I/O adapter on I/O connector 1.

Restriction: If I/O connector 1 has the Embedded 10 Gb Virtual Fabric Adapter installed, only I/O connector 2 is available for the installation of additional I/O adapters.

The Embedded 10 Gb Virtual Fabric Adapter is based on the Emulex BladeEngine 3, which is a single-chip, dual-port 10 Gigabit Ethernet (10 GbE) controller. The Embedded 10 Gb Virtual Fabric Adapter includes these features:
- PCI-Express Gen2 x8 host bus interface
- Support for multiple Virtual Network Interface Card (vNIC) functions
- TCP/IP Offload Engine (TOE enabled)
- SR-IOV capable
- RDMA over TCP/IP capable
- iSCSI and FCoE upgrade offering using FoD

Table 5-23 lists the ordering information for the IBM Flex System Embedded 10 Gb Virtual Fabric Upgrade. This upgrade enables the iSCSI and FCoE support on the Embedded 10 Gb Virtual Fabric Adapter.
Table 5-23 Feature on Demand upgrade for FCoE and iSCSI support

Part number   Feature code   Description
90Y9310       A2TD           IBM Flex System Embedded 10 Gb Virtual Fabric Upgrade

Figure 5-20 shows the x240 and the location of the Compute Node Fabric Connector on the system board.
Captive screws LOM connector

Figure 5-20 The x240 showing the location of the Compute Node Fabric Connector

5.2.11 I/O expansion


The x240 has two PCIe 3.0 x16 I/O expansion connectors for attaching I/O adapters. There is also another expansion connector designed for future expansion options. Each I/O expansion connector is a high-density 216-pin PCIe connector. Installing I/O adapters allows the x240 to connect to switch modules in the IBM Flex System Enterprise Chassis.


Figure 5-21 shows the rear of the x240 compute node and the locations of the I/O connectors.

I/O connector 1

I/O connector 2

Figure 5-21 Rear of the x240 compute node showing the locations of the I/O connectors

Table 5-24 lists the I/O adapters that are supported in the x240.
Table 5-24 Supported I/O adapters for the x240 compute node

Part number   Feature code   Ports   Description

Ethernet adapters
49Y7900       A1BR           4       IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter
90Y3466       A1QY           2       IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter
90Y3554       A1R1           4       IBM Flex System CN4054 10 Gb Virtual Fabric Adapter

Fibre Channel adapters
69Y1938       A1BM           2       IBM Flex System FC3172 2-port 8 Gb FC Adapter
95Y2375       A2N5           2       IBM Flex System FC3052 2-port 8 Gb FC Adapter
88Y6370       A1BP           2       IBM Flex System FC5022 2-port 16 Gb FC Adapter

InfiniBand adapters
90Y3454       A1QZ           2       IBM Flex System IB6132 2-port FDR InfiniBand Adapter

Requirement: Any supported I/O adapter can be installed in either I/O connector. However, you must be consistent not only across chassis, but across all compute nodes.

5.2.12 Systems management


The following section describes some of the systems management features that are available with the x240.


Front panel LEDs and controls


The front of the x240 includes several LEDs and controls that assist in systems management. They include a hard disk drive activity LED, status LEDs, and power, identify, check log, fault, and light path diagnostic LEDs. Figure 5-22 shows the location of the LEDs and controls on the front of the x240.
Hard disk drive activity LED Hard disk drive status LED

USB port

Identify LED

Fault LED

NMI control

Console Breakout Cable port

Power button / LED

Check log LED

Figure 5-22 The front of the x240 with the front panel LEDs and controls shown

Table 5-25 describes the front panel LEDs.


Table 5-25 x240 front panel LED information

Power (green): This LED lights solid when the system is powered up. When the compute node is initially plugged into a chassis, this LED is off. If the power-on button is pressed, the integrated management module (IMM) flashes this LED until it determines that the compute node is able to power up. If the compute node is able to power up, the IMM powers the compute node on and turns this LED on solid. If the compute node is not able to power up, the IMM turns off this LED and turns on the information LED. When the power button is pressed with the x240 out of the chassis, the light path LEDs are lit.

Location (blue): You can use this LED to locate the compute node in the chassis by requesting it to flash from the Chassis Management Module console. The IMM flashes this LED when instructed to by the Chassis Management Module. This LED functions only when the x240 is powered on.

Check error log (yellow): The IMM turns on this LED when a condition occurs that prompts the user to check the system error log in the Chassis Management Module.

Fault (yellow): This LED lights solid when a fault is detected somewhere on the compute node. If this indicator is on, the general fault indicator on the chassis front panel should also be on.

Hard disk drive activity LED (green): Each hot-swap hard disk drive has an activity LED. When this LED is flashing, it indicates that the drive is in use.

Hard disk drive status LED (yellow): When this LED is lit, it indicates that the drive has failed. If an optional IBM ServeRAID controller is installed in the server, a slowly flashing LED (one flash per second) indicates that the drive is being rebuilt. A rapidly flashing LED (three flashes per second) indicates that the controller is identifying the drive.

Table 5-26 describes the x240 front panel controls.


Table 5-26 x240 front panel control information

Power on/off button (recessed, with power LED): If the x240 is off, pressing this button causes the x240 to power up and start loading. When the x240 is on, pressing this button causes a graceful shutdown of the individual x240 so that it is safe to remove. This process includes shutting down the operating system (if possible) and removing power from the x240. If an operating system is running, the button might need to be held for approximately 4 seconds to initiate the shutdown. The button is recessed to protect it from accidental activation, and is grouped with the power LED.

NMI control (recessed; accessible only by using a small pointed object): Causes an NMI for debugging purposes.

Power LED
The power LED of the x240 indicates the power status of the compute node. It also indicates the discovery status of the node by the Chassis Management Module. The power LED states are listed in Table 5-27.
Table 5-27 The power LED states of the x240 compute node

Power LED state       Status of compute node
Off                   No power to compute node
On; fast flash mode   Compute node has power; Chassis Management Module is in discovery mode (handshake)
On; slow flash mode   Compute node has power; power is in stand-by mode
On; solid             Compute node has power; compute node is operational

Restriction: The power button does not operate when the power LED is in fast flash mode.
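The LED states in Table 5-27 map cleanly to a lookup. This small Python sketch is our illustration, not IBM firmware; it decodes a state and captures the power-button restriction noted above:

```python
# Illustrative decoder (ours, not IBM firmware) for the x240 power LED
# states listed in Table 5-27.
POWER_LED_STATES = {
    "off": "No power to compute node",
    "fast flash": "Has power; Chassis Management Module is in discovery mode (handshake)",
    "slow flash": "Has power; power is in stand-by mode",
    "solid": "Has power; compute node is operational",
}

def power_button_works(led_state):
    # The power button does not operate while the CMM handshake is in progress.
    return led_state != "fast flash"

print(POWER_LED_STATES["slow flash"])
print(power_button_works("fast flash"))  # False
```

Encoding the restriction next to the state table keeps the two facts from drifting apart, which is the usual argument for table-driven decoding.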

Light path diagnostics panel


For quick problem determination when physically located at the server, the x240 offers a three-step guided path:
1. The fault LED on the front panel
2. The light path diagnostics panel, shown in Figure 5-23 on page 175
3. LEDs next to key components on the system board


The x240 light path diagnostics panel is visible when you remove the server from the chassis. The panel is on the upper right of the compute node as shown in Figure 5-23.

Figure 5-23 Location of x240 light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis. The meaning of each LED in the light path diagnostics panel is listed in Table 5-28.
Table 5-28 Light path panel LED definitions

LED     Color    Meaning
LP      Green    The light path diagnostics panel is operational
S BRD   Yellow   A system board error is detected
MIS     Yellow   A mismatch has occurred between the processors, DIMMs, or HDDs within the configuration, as reported by POST
NMI     Yellow   A non-maskable interrupt (NMI) has occurred
TEMP    Yellow   An over-temperature condition has occurred that was critical enough to shut down the server
MEM     Yellow   A memory fault has occurred; the corresponding DIMM error LEDs on the system board are also lit
ADJ     Yellow   A fault is detected in the adjacent expansion unit (if installed)

Integrated Management Module II


Each x240 server has an IMM2 onboard and uses the Unified Extensible Firmware Interface (UEFI) to replace the older BIOS interface. The IMM2 provides the following major features as standard:
- IPMI v2.0 compliance
- Remote configuration of IMM2 and UEFI settings without the need to power on the server
- Remote access to system fan, voltage, and temperature values
- Remote IMM and UEFI update
- UEFI update when the server is powered off
- Remote console by way of Serial over LAN
- Remote access to the system event log
- Predictive failure analysis and integrated alerting features (for example, by using Simple Network Management Protocol (SNMP))
- Remote presence, including remote control of the server by using a Java or ActiveX client
- Operating system failure window (blue screen) capture and display through the web interface
- Virtual media that allow the attachment of a diskette drive, CD/DVD drive, USB flash drive, or disk image to a server

Remember: Unlike IBM BladeCenter, the assigned TCP/IP address of the IMM is available on the local network. This address allows you to remotely manage the x240 by connecting directly to the IMM, independent of the FSM or CMM.

For more information about the IMM, see 3.4.1, Integrated Management Module II on page 43.
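Because the IMM2 is IPMI v2.0-compliant and its TCP/IP address is reachable on the local network, a standard tool such as ipmitool can manage the node directly. The sketch below only builds the command line; the address and credentials are placeholders for your environment, and the helper function is ours:

```python
import subprocess

# Sketch: constructing an ipmitool invocation that targets an x240's IMM2
# over its LAN address (IPMI v2.0 "lanplus" interface). The host, user, and
# password below are placeholders, not values from this book.
IMM_HOST = "192.168.70.125"
IMM_USER = "USERID"
IMM_PASS = "PASSW0RD"

def imm_command(*ipmi_args):
    """Build an ipmitool command line targeting the IMM2 over LAN."""
    return ["ipmitool", "-I", "lanplus",
            "-H", IMM_HOST, "-U", IMM_USER, "-P", IMM_PASS] + list(ipmi_args)

# Query the power state; uncomment to run against a real IMM2:
# subprocess.run(imm_command("chassis", "power", "status"), check=True)
print(" ".join(imm_command("chassis", "power", "status")))
```

The same pattern works for other standard IPMI verbs, such as reading the system event log (`sel list`).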

5.2.13 Operating system support


The following operating systems are supported by the x240:
- Microsoft Windows Server 2008 HPC Edition
- Microsoft Windows Server 2008 R2
- Microsoft Windows Server 2008, Datacenter x64 Edition
- Microsoft Windows Server 2008, Enterprise x64 Edition
- Microsoft Windows Server 2008, Standard x64 Edition
- Microsoft Windows Server 2008, Web x64 Edition
- Red Hat Enterprise Linux 5 Server with Xen x64 Edition
- Red Hat Enterprise Linux 5 Server x64 Edition
- Red Hat Enterprise Linux 6 Server x64 Edition
- SUSE LINUX Enterprise Server 10 for AMD64/EM64T
- SUSE LINUX Enterprise Server 11 for AMD64/EM64T
- SUSE LINUX Enterprise Server 11 with Xen for AMD64/EM64T
- VMware ESX 4.1
- VMware ESXi 4.1
- VMware vSphere 5

For the latest list of supported operating systems, see IBM ServerProven at:
http://www-03.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml


5.3 IBM Flex System x220 Compute Node


The IBM Flex System x220 Compute Node, machine type 7906, is the next-generation cost-optimized compute node, designed for less demanding workloads and low-density virtualization. The x220 is efficient and equipped with flexible configuration options and advanced management to run a broad range of workloads.

5.3.1 Introduction
The IBM Flex System x220 Compute Node is a high-availability, scalable compute node optimized to support the next-generation microprocessor technology. With a balance of cost and system features, the x220 is an ideal platform for general business workloads. This section describes the key features of the server. Figure 5-24 shows the front of the compute node and the locations of the controls, LEDs, and connectors.
Two 2.5 HS drive bays

Light path diagnostics panel

USB port

Console breakout cable port

Power

LED panel

Figure 5-24 IBM Flex System x220 Compute Node


Figure 5-25 shows the internal layout and major components of the x220.

Cover

Heat sink Microprocessor heat sink filler I/O expansion adapter

Left air baffle

Microprocessor Hard disk drive backplane

Hard disk drive cage Hot-swap hard disk drive Right air baffle

Hard disk drive bay filler

DIMM

Figure 5-25 Exploded view of the x220, showing the major components

Table 5-29 lists the features of the x220.


Table 5-29 IBM Flex System x220 Compute Node specifications Components Form factor Chassis support Processor Specification Half-wide compute node. IBM Flex System Enterprise Chassis. Up to two Intel Xeon Processor E5-2400 product family processors. These processors can be eight-core (up to 2.3 GHz), six-core (up to 2.4 GHz), or quad-core (up to 2.2 GHz). There is one QPI link that runs at 8.0 GTps, L3 cache up to 20 MB, and memory speeds up to 1600 MHz. The server also supports one Intel Pentium Processor 1400 product family processor with two cores, up to 2.8 GHz, 5 MB L3 cache, and 1066 MHz memory speeds. Intel C600 series. Up to 12 DIMM sockets (six DIMMs per processor) using LP DDR3 DIMMs. RDIMMs and UDIMMs are supported. 1.5 V and low-voltage 1.35 V DIMMs are supported. Support for up to 1600 MHz memory speed, depending on the processor. Three memory channels per processor (two DIMMs per channel). Supports two DIMMs per channel operating at 1600 MHz (2 DPC @ 1600 MHz) with single and dual rank RDIMMs.

Chipset Memory

178

IBM PureFlex System and IBM Flex System Products and Technology

Components Memory maximums

Memory maximums: With RDIMMs: up to 192 GB with 12x 16 GB RDIMMs and two E5-2400 processors. With UDIMMs: up to 48 GB with 12x 4 GB UDIMMs and two E5-2400 processors. Halve these maximums and DIMM counts with one processor installed.

Memory protection: ECC, Chipkill (for x4-based memory DIMMs), and optional memory mirroring and memory rank sparing.

Disk drive bays: Two 2.5-inch hot-swap SAS/SATA drive bays that support SAS, SATA, and SSD drives. Optional eXFlash support for up to eight 1.8-inch SSDs. The onboard ServeRAID C105 supports SATA drives only.

Maximum internal storage: With two 2.5-inch hot-swap drives: up to 2 TB with 1 TB 2.5-inch NL SAS HDDs; up to 1.8 TB with 900 GB 2.5-inch SAS HDDs; up to 2 TB with 1 TB 2.5-inch SATA HDDs; up to 512 GB with 256 GB 2.5-inch SATA SSDs. An intermix of SAS and SATA HDDs and SSDs is supported. With eXFlash 1.8-inch SSDs and the ServeRAID M5115 RAID adapter: up to 1.6 TB with eight 200 GB 1.8-inch SSDs.

RAID support: Software RAID 0 and 1 with the integrated LSI-based 3 Gbps ServeRAID C105 controller; supports SATA drives only, and non-RAID is not supported. Optional ServeRAID H1135 RAID adapter with the LSI SAS2004 controller supports SAS/SATA drives with hardware-based RAID 0 and 1; the H1135 is installed in a dedicated PCIe 2.0 x4 connector and does not use either I/O adapter slot (see Figure 5-26 on page 180). Optional ServeRAID M5115 RAID adapter with RAID 0, 1, 10, 5, 50 support and 1 GB cache; the M5115 uses I/O adapter slot 1 and can be installed in all models, including models with an Embedded 1 GbE Fabric Connector. Supports up to eight 1.8-inch SSDs with expansion kits. Optional flash backup for cache, RAID 6/60, and an SSD performance enabler.

Network interfaces: Some models (see Table 5-30 on page 180): Embedded dual-port Broadcom BCM5718 Ethernet controller that supports Wake on LAN, Serial over LAN, and IPv6; TCP/IP offload engine (TOE) is not supported. Connections are routed to chassis I/O module bays 1 and 2 through a Fabric Connector to the chassis midplane. The Fabric Connector precludes the use of I/O adapter slot 1, except that the M5115 can be installed in slot 1 while the Fabric Connector is installed. Remaining models: no standard network interface; optional 1 Gb or 10 Gb Ethernet adapters.

PCI expansion slots: Two connectors for I/O adapters; each connector has PCIe x8+x4 interfaces. Includes an Expansion Connector (PCIe 3.0 x16) for future use to connect a compute node expansion unit. Dedicated PCIe 2.0 x4 interface for the ServeRAID H1135 adapter only.

Ports: USB: one external port and two internal ports for an embedded hypervisor. A console breakout cable port on the front of the server provides local KVM and serial ports (one cable is standard with the chassis; additional cables are optional).

Systems management: UEFI, IBM IMM2 with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, and remote presence. Support for IBM Flex System Manager, IBM Systems Director and Active Energy Manager, and IBM ServerGuide.

Security features: Power-on password, administrator's password, and Trusted Platform Module V1.2.

Video: Matrox G200eR2 video core with 16 MB of video memory integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16M colors.

Limited warranty: Three-year customer-replaceable unit and on-site limited warranty with 9x5/NBD response.
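The internal-storage maximums above are simple bay-count arithmetic. As an illustrative sketch (the helper name is ours, not an IBM tool; drive sizes are taken from the specification):

```python
# Illustrative check of the "Maximum internal storage" figures:
# raw capacity is (number of bays) x (capacity of the largest drive).

def max_internal_storage_gb(bays: int, drive_gb: int) -> int:
    """Total raw capacity for a fully populated set of drive bays."""
    return bays * drive_gb

# Two 2.5-inch bays with 1 TB NL SAS drives -> 2000 GB (2 TB)
print(max_internal_storage_gb(2, 1000))
# Eight 1.8-inch eXFlash bays with 200 GB SSDs -> 1600 GB (1.6 TB)
print(max_internal_storage_gb(8, 200))
```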


Operating systems supported: Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, and VMware vSphere. For more information, see 5.3.13, Operating system support on page 197.

Service and support: Optional service upgrades are available through IBM ServicePac offerings: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.

Dimensions: Width: 217 mm (8.6 in.), height: 56 mm (2.2 in.), depth: 492 mm (19.4 in.)

Weight: Maximum configuration: 6.4 kg (14.11 lb).

Figure 5-26 shows the components on the system board of the x220.
(Figure callouts: hot-swap drive bay backplane; processor 2 and six memory DIMMs; USB ports 1 and 2; Broadcom Ethernet; I/O connectors 1 and 2; Fabric Connector; light path diagnostics; optional ServeRAID H1135; processor 1 and six memory DIMMs; Expansion Connector)

Figure 5-26 Layout of the IBM Flex System x220 Compute Node system board

5.3.2 Models
The current x220 models are shown in Table 5-30. All models include 4 GB of memory (one 4 GB DIMM) running at either 1333 MHz or 1066 MHz (depending on model).
Table 5-30 Models of the IBM Flex System x220 Compute Node, type 7906 (Intel Xeon E5-2400: two maximum; Pentium 1400: one maximum)

| Model | Processor | Memory | RAID adapter | Disk bays (a) | Disks | Embedded 1 GbE (b) | I/O slots (used/max) |
|---|---|---|---|---|---|---|---|
| 7906-A2x | 1x Intel Pentium 1403 2C 2.6 GHz 5 MB 1066 MHz 80 W | 1x 4 GB UDIMM (1066 MHz) (c) | ServeRAID C105 | 2x 2.5-inch hot-swap | Open | Standard | 1 / 2 (b) |
| 7906-B2x | 1x Intel Xeon E5-2430L 6C 2.0 GHz 15 MB 1333 MHz 60 W | 1x 4 GB UDIMM 1333 MHz | ServeRAID C105 | 2x 2.5-inch hot-swap | Open | Standard | 1 / 2 (b) |


IBM PureFlex System and IBM Flex System Products and Technology

Table 5-30 Models of the IBM Flex System x220 Compute Node, type 7906 (continued)

| Model | Processor | Memory | RAID adapter | Disk bays (a) | Disks | Embedded 1 GbE (b) | I/O slots (used/max) |
|---|---|---|---|---|---|---|---|
| 7906-C2x | 1x Intel Xeon E5-2403 4C 1.8 GHz 10 MB 1066 MHz 80 W | 1x 4 GB RDIMM (1066 MHz) (c) | ServeRAID C105 | 2x 2.5-inch hot-swap | Open | Standard | 1 / 2 (b) |
| 7906-D2x | 1x Intel Xeon E5-2420 6C 1.9 GHz 15 MB 1333 MHz 95 W | 1x 4 GB RDIMM 1333 MHz | ServeRAID C105 | 2x 2.5-inch hot-swap | Open | Standard | 1 / 2 (b) |
| 7906-G2x | 1x Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W | 1x 4 GB RDIMM 1333 MHz | ServeRAID C105 | 2x 2.5-inch hot-swap | Open | No | 0 / 2 |
| 7906-G4x | 1x Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W | 1x 4 GB RDIMM 1333 MHz | ServeRAID C105 | 2x 2.5-inch hot-swap | Open | Standard | 1 / 2 (b) |
| 7906-H2x | 1x Intel Xeon E5-2440 6C 2.4 GHz 15 MB 1333 MHz 95 W | 1x 4 GB RDIMM 1333 MHz | ServeRAID C105 | 2x 2.5-inch hot-swap | Open | Standard | 1 / 2 (b) |
| 7906-J2x | 1x Intel Xeon E5-2450 8C 2.1 GHz 20 MB 1600 MHz 95 W | 1x 4 GB RDIMM 1333 MHz (c) | ServeRAID C105 | 2x 2.5-inch hot-swap | Open | No | 0 / 2 |
| 7906-L2x | 1x Intel Xeon E5-2470 8C 2.3 GHz 20 MB 1600 MHz 95 W | 1x 4 GB RDIMM 1333 MHz (c) | ServeRAID C105 | 2x 2.5-inch hot-swap | Open | No | 0 / 2 |

a. The 2.5-inch drive bays can be replaced and expanded with IBM eXFlash and a ServeRAID M5115 RAID controller. This configuration supports up to eight 1.8-inch SSDs.
b. These models include an embedded 1 Gb Ethernet controller. Connections are routed to the chassis midplane by using a Fabric Connector, which precludes the use of I/O connector 1 (except by the ServeRAID M5115).
c. For A2x and C2x, the memory operates at 1066 MHz, the memory speed of the processor. For J2x and L2x, memory operates at 1333 MHz to match the installed DIMM, rather than 1600 MHz.

5.3.3 Chassis support


The x220 type 7906 is supported in the IBM Flex System Enterprise Chassis as listed in Table 5-31.
Table 5-31 x220 chassis support

| Server | BladeCenter chassis (all) | IBM Flex System Enterprise Chassis |
|---|---|---|
| x220 | No | Yes |


The x220 is a half-wide compute node and requires that a chassis shelf is installed in the IBM Flex System Enterprise Chassis. Figure 5-27 shows the chassis shelf in the chassis.

Figure 5-27 The IBM Flex System Enterprise Chassis showing the chassis shelf

The shelf is required for half-wide compute nodes. To install full-wide or larger compute nodes, the shelves must first be removed from the chassis. Remove a shelf by sliding its two latches toward the center and then sliding the shelf out of the chassis.

5.3.4 System architecture


The IBM Flex System x220 Compute Node features the Intel Xeon E5-2400 series processors. The Xeon E5-2400 series is available with four, six, or eight cores per processor and up to 16 threads per socket. The processors have the following features:

- Up to 20 MB of shared L3 cache
- Hyper-Threading
- Turbo Boost Technology 2.0 (depending on processor model)
- One QPI link that runs at up to 8 GT/s
- One integrated memory controller
- Three memory channels that support up to two DIMMs each

The x220 also supports an Intel Pentium 1403 or 1407 dual-core processor for entry-level server applications. Only one Pentium processor is supported in the x220: CPU socket 2 must be left unused, and only six DIMM sockets are available.


Figure 5-28 shows the system architecture of the x220 system.


(Diagram elements: Intel Xeon processors 1 and 2, each with three DDR3 memory channels and two DIMMs per channel, linked by QPI at up to 8 GT/s; Intel C600 PCH on an x4 ESI link; optional ServeRAID H1135 on PCIe 2.0 x4; internal USB, front USB, and front KVM ports; HDDs or SSDs; IMM2 with video, serial, and management connections to the midplane; 1 GbE LOM on PCIe 2.0 x2; I/O connectors 1 and 2 on PCIe 3.0 x8+x4; sidecar connector on PCIe 3.0 x16)

Figure 5-28 IBM Flex System x220 Compute Node system board block diagram

The IBM Flex System x220 Compute Node has the following system architecture features as standard:

- Two 1356-pin (LGA-1356) processor sockets
- An Intel C600 PCH
- Three memory channels per socket
- Up to two DIMMs per memory channel
- 12 DDR3 DIMM sockets
- Support for UDIMMs and RDIMMs
- One integrated 1 Gb Ethernet controller (1 GbE LOM in the diagram)
- Integrated software RAID 0 and 1, with support for the optional H1135 RAID controller based on the LSI SAS2004 controller
- One IMM2
- Two PCIe 3.0 I/O adapter connectors, each with one x8 and one x4 host connection (12 lanes total)
- One internal and one external USB connector


5.3.5 Processor options


The x220 supports the processor options listed in Table 5-32. The server supports one or two Intel Xeon E5-2400 processors, but supports only one Intel Pentium 1403 or 1407 processor. The table also shows which server models have each processor standard. If no corresponding model for a particular processor is listed, the processor is available only through the configure to order process.
Table 5-32 Supported processors for the x220

| Part number | Intel processor description | Models where used |
|---|---|---|
| 90Y4793 | Intel Xeon E5-2470 8C 2.3 GHz 20 MB 1600 MHz 95 W | L2x |
| 90Y4795 | Intel Xeon E5-2450 8C 2.1 GHz 20 MB 1600 MHz 95 W | J2x |
| 90Y4796 | Intel Xeon E5-2440 6C 2.4 GHz 15 MB 1333 MHz 95 W | H2x |
| 90Y4797 | Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W | G2x, G4x |
| 90Y4799 | Intel Xeon E5-2420 6C 1.9 GHz 15 MB 1333 MHz 95 W | D2x |
| 90Y4800 | Intel Xeon E5-2407 4C 2.2 GHz 10 MB 1066 MHz 80 W | - |
| 90Y4801 | Intel Xeon E5-2403 4C 1.8 GHz 10 MB 1066 MHz 80 W | C2x |
| 90Y4804 | Intel Xeon E5-2450L 8C 1.8 GHz 20 MB 1600 MHz 70 W | - |
| 90Y4805 | Intel Xeon E5-2430L 6C 2.0 GHz 15 MB 1333 MHz 60 W | B2x |
| None | Intel Pentium 1403 2C 2.6 GHz 5 MB 1066 MHz 80 W | A2x |
| None (a) | Intel Pentium 1407 2C 2.8 GHz 5 MB 1066 MHz 80 W | - |

a. The Intel Pentium 1407 is available through configure to order or special bid only.

5.3.6 Memory options


IBM DDR3 memory is compatibility tested and tuned for optimal performance and throughput. IBM memory specifications are integrated into the light path diagnostic procedures for immediate system performance feedback and optimum system uptime. From a service and support standpoint, IBM memory automatically assumes the IBM system warranty, and IBM provides service and support worldwide.

The x220 supports LP DDR3 memory RDIMMs and UDIMMs. The server supports up to six DIMMs when one processor is installed, and up to 12 DIMMs when two processors are installed. Each processor has three memory channels, with two DIMMs per channel.

The following rules apply when selecting the memory configuration:

- Mixing 1.5 V and 1.35 V DIMMs in the same server is supported. In such a case, all DIMMs operate at 1.5 V.
- The maximum number of ranks supported per channel is eight.
- The maximum quantity of DIMMs that can be installed in the server depends on the number of processors. For more information, see the Max. qty supported row in Table 5-33 on page 185.


All DIMMs in all processor memory channels operate at the same speed, which is determined as the lowest of:

- The memory speed supported by the specific processor.
- The lowest maximum operating speed for the selected memory configuration, which depends on the rated speed, operating voltage, and number of DIMMs per channel. For more information, see the maximum operating speed rows in Table 5-33.

Table 5-33 Maximum memory speeds

| Spec | UDIMM, single rank, 49Y1403 (2 GB) | UDIMM, dual rank, 49Y1404 (4 GB) | RDIMM, single rank, 49Y1406 (2 GB) | RDIMM, dual rank, 49Y1407 (4 GB) / 49Y1397 (8 GB) | RDIMM, dual rank, 90Y3109 (4 GB) | RDIMM, quad rank, 49Y1400 (16 GB) |
|---|---|---|---|---|---|---|
| Rated speed | 1333 MHz | 1333 MHz | 1333 MHz | 1333 MHz | 1600 MHz | 1066 MHz |
| Rated voltage | 1.35 V | 1.35 V | 1.35 V | 1.35 V | 1.5 V | 1.35 V |
| Operating voltage | 1.35 V or 1.5 V | 1.35 V or 1.5 V | 1.35 V or 1.5 V | 1.35 V or 1.5 V | 1.5 V | 1.35 V or 1.5 V |
| Max quantity (a) | 12 | 12 | 12 | 12 | 12 | 12 |
| Largest DIMM | 2 GB | 4 GB | 2 GB | 8 GB | 4 GB | 16 GB |
| Max memory capacity | 24 GB | 48 GB | 24 GB | 96 GB | 48 GB | 192 GB |
| Max memory at max speed | 12 GB | 24 GB | 24 GB | 96 GB | 48 GB | 192 GB |
| Max operating speed, 1 DIMM per channel | 1333 MHz | 1333 MHz | 1333 MHz | 1333 MHz | 1600 MHz | 800 MHz |
| Max operating speed, 2 DIMMs per channel | 1066 MHz | 1066 MHz | 1333 MHz | 1333 MHz | 1600 MHz | 800 MHz |

a. The maximum quantity supported is shown for two processors installed. When one processor is installed, the maximum quantity supported is half of that shown.
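The speed rule above can be sketched in a few lines. The lookup values below are assumed samples in the spirit of Table 5-33, not a full transcription, and the function name is ours:

```python
# Sketch of the rule in 5.3.6: all DIMMs run at the lowest of the
# processor's rated memory speed and the slowest maximum operating speed
# in the installed configuration. Values are illustrative samples.

MAX_OPERATING_MHZ = {
    # (dimm_type, dimms_per_channel) -> maximum operating speed in MHz
    ("UDIMM-1333", 1): 1333,
    ("UDIMM-1333", 2): 1066,   # UDIMMs drop to 1066 MHz at 2 DIMMs/channel
    ("RDIMM-1333", 1): 1333,
    ("RDIMM-1333", 2): 1333,   # these RDIMMs hold 1333 MHz at 2 DIMMs/channel
}

def effective_memory_mhz(processor_mhz, populated):
    """populated: iterable of (dimm_type, dimms_per_channel) tuples."""
    slowest_dimm = min(MAX_OPERATING_MHZ[p] for p in populated)
    return min(processor_mhz, slowest_dimm)

# A 1333 MHz processor with two UDIMMs per channel runs memory at 1066 MHz:
print(effective_memory_mhz(1333, [("UDIMM-1333", 2)]))
```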

The following memory protection technologies are supported:

- ECC
- Chipkill (for x4-based memory DIMMs; look for x4 in the DIMM description)
- Memory mirroring
- Memory rank sparing

If memory mirroring is used, DIMMs must be installed in pairs (minimum of one pair per processor). Both DIMMs in a pair must be identical in type and size.

If memory rank sparing is used, a minimum of one quad-rank DIMM or two single-rank or dual-rank DIMMs must be installed per populated channel. These DIMMs do not need to be identical. In rank sparing mode, one rank of a DIMM in each populated channel is reserved as spare memory. The size of a rank varies depending on the DIMMs installed.

Table 5-34 lists the memory options available for the x220 server. DIMMs can be installed one at a time, but for performance reasons, install them in sets of three (one for each of the three memory channels).
Table 5-34 Memory options for the x220

| Part number | Description | Models where used |
|---|---|---|
| Registered DIMM (RDIMM) modules | | |
| 49Y1406 | 4 GB (1x 4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | J2x, L2x |
| 49Y1407 | 4 GB (1x 4 GB, 2Rx8, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | C2x, D2x, G2x, G4x, H2x |
| 90Y3109 | 8 GB (1x 8 GB, 2Rx4, 1.5 V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM | - |
| 49Y1397 | 8 GB (1x 8 GB, 2Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | - |
| 49Y1400 | 16 GB (1x 16 GB, 4Rx4, 1.35 V) PC3L-8500 CL7 ECC DDR3 1066 MHz LP RDIMM | - |
| Unbuffered DIMM (UDIMM) modules | | |
| 49Y1403 | 2 GB (1x 2 GB, 1Rx8, 1.35 V) PC3L-10600 ECC DDR3 1333 MHz LP UDIMM | - |
| 49Y1404 | 4 GB (1x 4 GB, 2Rx8, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP UDIMM | A2x, B2x |
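The protection modes described above trade capacity for resilience: mirroring halves usable memory, and rank sparing reserves one rank per populated channel. A rough sketch (the helper and its parameters are ours; rank size depends on the installed DIMMs, so it is passed in):

```python
# Illustrative usable-capacity arithmetic for the x220 memory-protection
# modes in 5.3.6. Not an IBM tool; numbers are examples only.

def usable_memory_gb(installed_gb, mode, channels=0, rank_gb=0):
    """Approximate usable memory for a given protection mode."""
    if mode == "mirroring":
        return installed_gb // 2          # identical DIMM pairs required
    if mode == "sparing":
        # one rank reserved in each populated channel
        return installed_gb - channels * rank_gb
    return installed_gb                   # plain ECC/Chipkill operation

# 48 GB installed and mirrored leaves 24 GB usable
print(usable_memory_gb(48, "mirroring"))
# 48 GB with three populated channels sparing a 2 GB rank each -> 42 GB
print(usable_memory_gb(48, "sparing", channels=3, rank_gb=2))
```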

5.3.7 Internal disk storage controllers


The x220 server has two 2.5-inch hot-swap drive bays accessible from the front of the blade server (Figure 5-24 on page 177). The server optionally supports 1.8-inch solid-state drives, as described in ServeRAID M5115 configurations and options on page 188. The x220 supports three disk controllers: ServeRAID C105: An onboard SATA controller with software RAID capabilities ServeRAID H1135: An entry level hardware RAID controller ServeRAID M5115: An advanced RAID controller with cache, backup, and RAID options These three controllers are mutually exclusive. Table 5-35 lists the ordering information.
Table 5-35 Internal storage controller ordering information

| Part number | Description | Maximum quantity |
|---|---|---|
| Integrated | ServeRAID C105 | 1 |
| 90Y4750 | ServeRAID H1135 Controller for IBM Flex System and IBM BladeCenter | 1 |
| 90Y4390 | ServeRAID M5115 SAS/SATA Controller | 1 |

ServeRAID C105 controller


On standard models, the two 2.5-inch drive bays are connected to a ServeRAID C105 onboard SATA controller with software RAID capabilities. The C105 function is embedded in the Intel C600 chipset.


The C105 has the following features:

- Support for SATA drives (SAS is not supported)
- Support for RAID 0 and RAID 1 (non-RAID is not supported)
- 6 Gbps throughput per port
- Support for up to two volumes
- Support for virtual drive sizes greater than 2 TB
- Fixed stripe unit size of 64 KB
- Support for MegaRAID Storage Manager management software

Restriction: There is no native (in-box) driver for Windows and Linux; the drivers must be downloaded separately. In addition, there is no support for VMware, Hyper-V, Xen, or SSDs.

ServeRAID H1135
The x220 also supports an entry-level hardware RAID solution with the addition of the ServeRAID H1135 Controller for IBM Flex System and BladeCenter. The H1135 is installed in a dedicated slot (Figure 5-26 on page 180). When the H1135 adapter is installed, the C105 controller is disabled.

The H1135 has the following features:

- Based on the LSI SAS2004 6 Gbps SAS 4-port controller
- PCIe 2.0 x4 host interface
- CIOv form factor (supported in the x220 and BladeCenter HS23E)
- Support for SAS, SATA, and SSD drives
- Support for RAID 0, RAID 1, and non-RAID
- 6 Gbps throughput per port
- Support for up to two volumes
- Fixed stripe size of 64 KB
- Native driver support in Windows, Linux, and VMware
- S.M.A.R.T. support
- Support for MegaRAID Storage Manager management software

ServeRAID M5115
The ServeRAID M5115 SAS/SATA Controller (90Y4390) is an advanced RAID controller that supports RAID 0, 1, 10, 5, and 50, plus optional RAID 6 and 60. It includes 1 GB of cache, which can be backed up to flash memory when attached to an optional supercapacitor. The M5115 attaches to the I/O adapter 1 connector. It can be attached even if the Fabric Connector is installed (used to route the Embedded 1 GbE controller to chassis bays 1 and 2). The ServeRAID M5115 cannot be installed if another adapter is installed in I/O adapter slot 1. When the M5115 adapter is installed, the C105 controller is disabled.

The ServeRAID M5115 supports the following combinations of 2.5-inch drives and 1.8-inch solid-state drives:

- Up to two 2.5-inch drives only
- Up to four 1.8-inch drives only
- Up to two 2.5-inch drives, plus up to four 1.8-inch SSDs
- Up to eight 1.8-inch SSDs

For more information about these configurations, see ServeRAID M5115 configurations and options on page 188.

187

The ServeRAID M5115 controller has the following specifications:

- Eight internal 6 Gbps SAS/SATA ports
- PCI Express 3.0 x8 host interface
- 6 Gbps throughput per port
- 800 MHz dual-core IBM PowerPC processor with an LSI SAS2208 6 Gbps RAID-on-chip (ROC) controller
- Support for RAID levels 0, 1, 10, 5, and 50 standard; support for RAID 6 and 60 with an optional upgrade using 90Y4411
- Optional onboard 1 GB data cache (DDR3 running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342
- Support for SAS and SATA HDDs and SSDs
- Support for intermixing SAS and SATA HDDs and SSDs; mixing different types of drives in the same array (drive group) is not recommended
- Support for self-encrypting drives (SEDs) with MegaRAID SafeStore
- Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447)
- Support for up to 64 virtual drives, up to 128 drive groups, and up to 16 virtual drives per drive group; also supports up to 32 physical drives per drive group
- Support for LUN sizes up to 64 TB
- Configurable stripe size up to 1 MB
- Compliant with the Disk Data Format (DDF) configuration on disk (CoD)
- S.M.A.R.T. support
- MegaRAID Storage Manager management software

ServeRAID M5115 configurations and options


The x220 with the addition of the M5115 controller supports 2.5-inch drives, 1.8-inch eXFlash SSDs, or combinations of the two. At least one hardware kit is required with the ServeRAID M5115 controller. These hardware kits enable specific drive support:

- ServeRAID M5100 Series Enablement Kit for IBM Flex System x220 (90Y4424) enables support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection. This enablement kit replaces the standard two-bay backplane that is attached through the system board to an onboard controller. The new backplane attaches with an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment for the CacheVault unit.

MegaRAID CacheVault flash cache protection uses NAND flash memory powered by a supercapacitor to protect data stored in the controller cache. This module eliminates the need for a lithium-ion battery commonly used to protect DRAM cache memory on PCI RAID controllers. To avoid data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash by using power from the supercapacitor. After power is restored to the RAID controller, the saved data is transferred from the NAND flash back to the DRAM cache, which can then be flushed to disk.

Tip: The Enablement Kit is required only if 2.5-inch drives are to be used. If you plan to install four or eight 1.8-inch SSDs only, this kit is not required.

- ServeRAID M5100 Series IBM eXFlash Kit for IBM Flex System x220 (90Y4425) enables eXFlash support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay eXFlash backplane that attaches with an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, so this kit does not include a supercapacitor.

- ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220 (90Y4426) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles, left and right, each with two 1.8-inch SSD attachment locations. It also contains flex cables for attachment of up to four 1.8-inch SSDs.

Table 5-36 shows the kits required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the eXFlash Kit, and the SSD Expansion Kit.
Table 5-36 ServeRAID M5115 hardware kits

| Maximum 2.5-inch drives | Maximum 1.8-inch SSDs | ServeRAID M5115 (90Y4390) | Enablement Kit (90Y4424) | eXFlash Kit (90Y4425) | SSD Expansion Kit (90Y4426) |
|---|---|---|---|---|---|
| 2 | 0 | Required | Required | - | - |
| 0 | 4 (front) | Required | - | Required | - |
| 2 | 4 (internal) | Required | Required | - | Required |
| 0 | 8 (front and internal) | Required | - | Required | Required |
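The kit-selection rule in Table 5-36 can be sketched as a small helper (illustrative only; the function and its parameter names are ours, while the part numbers come from the table):

```python
# Sketch of the Table 5-36 logic: which option part numbers must
# accompany the ServeRAID M5115 for a given drive combination.

def m5115_parts(front_25in_drives, front_18in_ssds, internal_18in_ssds):
    """Return the ordered part numbers needed for a drive mix."""
    parts = ["90Y4390"]                    # ServeRAID M5115 controller
    if front_25in_drives:
        parts.append("90Y4424")            # Enablement Kit (2.5-inch bays)
    if front_18in_ssds:
        parts.append("90Y4425")            # eXFlash Kit (front 1.8-inch bays)
    if internal_18in_ssds:
        parts.append("90Y4426")            # SSD Expansion Kit (internal SSDs)
    return parts

# Eight SSDs (four front, four internal): controller + eXFlash + Expansion
print(m5115_parts(0, 4, 4))
# Two 2.5-inch drives only: controller + Enablement Kit
print(m5115_parts(2, 0, 0))
```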

Figure 5-29 shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (row 1 of Table 5-36).
(Figure callouts: ServeRAID M5115 controller (90Y4390) with ServeRAID M5100 Series Enablement Kit for x220 (90Y4424); MegaRAID CacheVault flash cache protection; replacement 2-drive backplane)
Figure 5-29 The ServeRAID M5115 and the Enablement Kit installed


Figure 5-30 shows how the ServeRAID M5115 and eXFlash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (row 4 of Table 5-36 on page 189).
(Figure callouts: ServeRAID M5115 controller (90Y4390) with ServeRAID M5100 Series IBM eXFlash Kit for x220 (90Y4425) and ServeRAID M5100 Series SSD Expansion Kit for x220 (90Y4426); eXFlash Kit: replacement 4-drive SSD backplane and drive bays; SSD Expansion Kit: four SSDs on special air baffles above the DIMMs, without CacheVault flash protection; eight drives supported: four internal and four front-accessible)
Figure 5-30 ServeRAID M5115 with eXFlash and SSD Expansion Kits installed

The eight SSDs are installed in the following locations:

- Four in the front of the system in place of the two 2.5-inch drive bays
- Two in a tray above the memory banks for processor 1
- Two in a tray above the memory banks for processor 2

Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, an SSD performance accelerator, and an SSD caching enabler. The Features on Demand (FoD) license upgrades are listed in Table 5-37.
Table 5-37 Supported upgrade features

| Part number | Description | Maximum supported |
|---|---|---|
| 90Y4410 | ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System | 1 |
| 90Y4412 | ServeRAID M5100 Series Performance Accelerator for IBM Flex System (MegaRAID FastPath) | 1 |
| 90Y4447 | ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System (MegaRAID CacheCade Pro 2.0) | 1 |

These features are described as follows:

- RAID 6 Upgrade (90Y4410): Adds support for RAID 6 and RAID 60. This is an FoD license.

- Performance Accelerator (90Y4412): The Performance Accelerator for IBM Flex System, implemented by using the LSI MegaRAID FastPath software, provides high-performance I/O acceleration for SSD-based virtual drives. It uses an extremely low-latency I/O path to increase the maximum IOPS capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is an FoD license.

- SSD Caching Enabler for traditional hard drives (90Y4447): The SSD Caching Enabler for IBM Flex System, implemented by using LSI MegaRAID CacheCade Pro 2.0, is designed to accelerate the performance of HDD arrays with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache to help maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data; the hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is an FoD license, and this feature requires that at least one SSD drive is installed.

5.3.8 Supported internal drives


The x220 supports 1.8-inch and 2.5-inch drives.

Supported 1.8-inch drives


The 1.8-inch solid-state drives supported with the ServeRAID M5115 are listed in Table 5-38.
Table 5-38 Supported 1.8-inch solid-state drives

| Part number | Description | Maximum supported |
|---|---|---|
| 43W7746 | IBM 200 GB SATA 1.8-inch MLC SSD | 8 |
| 43W7726 | IBM 50 GB SATA 1.8-inch MLC SSD | 8 |

Supported 2.5-inch drives


The 2.5-inch drive bays support SAS or SATA HDDs or SATA SSDs. Table 5-39 lists the supported 2.5-inch drive options. The maximum quantity supported is two.
Table 5-39 2.5-inch drive options for internal disk storage

| Part number | Description | C105 | H1135 | M5115 |
|---|---|---|---|---|
| 10 K SAS hard disk drives | | | | |
| 42D0637 | IBM 300 GB 10 K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD | No | Supported | Supported |
| 49Y2003 | IBM 600 GB 10 K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD | No | Supported | Supported |
| 81Y9650 | IBM 900 GB 10 K 6 Gbps SAS 2.5-inch SFF HS HDD | No | Supported | Supported |
| 15 K SAS hard disk drives | | | | |
| 42D0677 | IBM 146 GB 15 K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD | No | Supported | Supported |
| 81Y9670 | IBM 300 GB 15 K 6 Gbps SAS 2.5-inch SFF HS HDD | No | Supported | Supported |
| NL SATA hard disk drives | | | | |
| 81Y9722 | IBM 250 GB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD | Supported | Supported | Supported |
| 81Y9726 | IBM 500 GB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD | Supported | Supported | Supported |
| 81Y9730 | IBM 1 TB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD | Supported | Supported | Supported |
| NL SAS hard disk drives | | | | |
| 42D0707 | IBM 500 GB 7.2 K 6 Gbps NL SAS 2.5-inch SFF Slim-HS HDD | No | Supported | Supported |
| 81Y9690 | IBM 1 TB 7.2 K 6 Gbps NL SAS 2.5-inch SFF HS HDD | No | Supported | Supported |
| Solid-state drives | | | | |
| 43W7718 | IBM 200 GB SATA 2.5-inch MLC HS SSD | No | No | Supported |
| 90Y8643 | IBM 256 GB SATA 2.5-inch MLC HS Entry SSD | No | No | Supported |
| 90Y8648 | IBM 128 GB SATA 2.5-inch MLC HS Entry SSD | No | No | Supported |
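The compatibility pattern behind Table 5-39 reduces to three rules: the onboard C105 is SATA-only, SAS drives need the H1135 or M5115, and the 2.5-inch SSD options attach only to the M5115. A sketch (the helper is ours, summarizing the table rather than quoting an IBM API):

```python
# Illustrative summary of the Table 5-39 support matrix for 2.5-inch
# drives on the x220's three internal storage controllers.

def drive_supported(controller, interface, is_ssd=False):
    """Return True if the controller supports this 2.5-inch drive type."""
    if controller not in ("C105", "H1135", "M5115"):
        raise ValueError(f"unknown controller: {controller}")
    if is_ssd:
        return controller == "M5115"       # SSD options: M5115 only
    if interface == "SATA":
        return True                        # SATA HDDs: all three controllers
    return controller in ("H1135", "M5115")  # SAS HDDs: not the C105

print(drive_supported("C105", "SAS"))          # False: the C105 is SATA-only
print(drive_supported("M5115", "SATA", True))  # True: 2.5-inch SATA SSDs
```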

5.3.9 Embedded 1 Gb Ethernet controller


Some models of the x220 include an Embedded 1 Gb Ethernet controller (also known as LOM) built into the system board. Table 5-30 on page 180 lists which models of the x220 include the controller. Each x220 model that includes the controller also has the Compute Node Fabric Connector installed in I/O connector 1 and physically screwed onto the system board. The Compute Node Fabric Connector provides connectivity to the Enterprise Chassis midplane. Figure 5-26 on page 180 shows the location of the Fabric Connector.

The Fabric Connector enables port 1 on the controller to be routed to I/O module bay 1. Similarly, port 2 is routed to I/O module bay 2. The Fabric Connector can be unscrewed and removed, if required, to allow the installation of an I/O adapter on I/O connector 1.

The Embedded 1 Gb Ethernet controller has the following features:

- Broadcom BCM5718 based
- Dual-port Gigabit Ethernet controller
- PCIe 2.0 x2 host bus interface
- Support for Wake on LAN
- Support for Serial over LAN
- Support for IPv6

Restriction: TCP/IP offload engine (TOE) is not supported.

5.3.10 I/O expansion


Like other IBM Flex System compute nodes, the x220 has two PCIe 3.0 I/O expansion connectors for attaching I/O adapters. On the x220, each of these connectors has 12 PCIe lanes. These lanes are implemented as one x8 link (connected to the first application-specific integrated circuit (ASIC) on the installed adapter) and one x4 link (connected to the second ASIC on the installed adapter). The I/O expansion connectors are high-density 216-pin PCIe connectors. Installing I/O adapters allows the x220 to connect to switch modules in the IBM Flex System Enterprise Chassis. The x220 also has a third expansion connector designed for future expansion options.


Figure 5-31 shows the rear of the x220 compute node and the locations of the I/O connectors.

I/O connector 1

I/O connector 2

Figure 5-31 Rear of the x220 compute node showing the locations of the I/O connectors

Table 5-40 lists the I/O adapters that are supported in the x220.
Table 5-40 Supported I/O adapters for the x220 compute node

| Part number | Feature code | Ports | Description |
|---|---|---|---|
| Ethernet adapters | | | |
| 49Y7900 | A1BR | 4 | IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter |
| 90Y3466 | A1QY | 2 | IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter |
| 90Y3554 | A1R1 | 4 | IBM Flex System CN4054 10 Gb Virtual Fabric Adapter |
| Fibre Channel adapters | | | |
| 69Y1938 | A1BM | 2 | IBM Flex System FC3172 2-port 8 Gb FC Adapter |
| 95Y2375 | A2N5 | 2 | IBM Flex System FC3052 2-port 8 Gb FC Adapter |
| 88Y6370 | A1BP | 2 | IBM Flex System FC5022 2-port 16 Gb FC Adapter |
| InfiniBand adapters | | | |
| 90Y3454 | A1QZ | 2 | IBM Flex System IB6132 2-port FDR InfiniBand Adapter |

Restriction: Any supported I/O adapter can be installed in either I/O connector. However, you must be consistent not only across chassis but across all compute nodes.

5.3.11 Integrated virtualization


The x220 offers USB flash drive options preinstalled with versions of VMware ESXi. This is an embedded version of VMware ESXi that is fully contained on the flash drive, without requiring any disk space. The USB memory key plugs into one of the two internal USB ports on the x220 system board (Figure 5-26 on page 180). If you install USB keys in both USB ports, both devices are listed in the boot menu. This configuration allows you to boot from either device, or to set one as a backup in case the first becomes corrupted. The supported USB memory keys are listed in Table 5-41.
Table 5-41 Virtualization options

| Part number | Description | Maximum supported |
|---|---|---|
| 41Y8300 | IBM USB Memory Key for VMware ESXi 5.0 | 2 |
| 41Y8298 | IBM Blank USB Memory Key for VMware ESXi Downloads | 2 |

5.3.12 Systems management


The following section describes some of the systems management features that are available with the x220.

Front panel LEDs and controls


The front of the x220 includes several LEDs and controls that assist in systems management. They include a hard disk drive activity LED, status LEDs, and power, identify, check log, fault, and light path diagnostic LEDs. Figure 5-32 shows the location of the LEDs and controls on the front of the x220.
(Figure callouts: hard disk drive activity LED; hard disk drive status LED; USB port; identify LED; fault LED; NMI control; console breakout cable port; power button / LED; check log LED)
Figure 5-32 The front of the x220 with the front panel LEDs and controls shown

Table 5-42 describes the front panel LEDs.


Table 5-42 x220 front panel LED information

| LED | Color | Description |
|---|---|---|
| Power | Green | Lights solid when the system is powered up. When the compute node is initially plugged into a chassis, this LED is off. If the power-on button is pressed, the IMM flashes this LED until it determines that the compute node is able to power up. If the compute node is able to power up, the IMM powers the compute node on and turns this LED on solid. If the compute node is not able to power up, the IMM turns off this LED and turns on the information LED. When the power button is pressed with the server out of the chassis, the light path LEDs are lit. |
| Location | Blue | A user can use this LED to locate the compute node in the chassis by requesting it to flash from the Chassis Management Module console. The IMM flashes this LED when instructed to by the Chassis Management Module. |
| Check error log | Yellow | This LED functions only when the server is powered on. The IMM turns on this LED when a condition occurs that prompts the user to check the system error log in the Chassis Management Module. |
| Fault | Yellow | Lights solid when a fault is detected somewhere on the compute node. If this indicator is on, the general fault indicator on the chassis front panel should also be on. |
| Hard disk drive activity | Green | Each hot-swap hard disk drive has an activity LED; when this LED is flashing, the drive is in use. |
| Hard disk drive status | Yellow | When this LED is lit, the drive has failed. If an optional IBM ServeRAID controller is installed in the server, slow flashing (one flash per second) indicates that the drive is being rebuilt, and rapid flashing (three flashes per second) indicates that the controller is identifying the drive. |

Table 5-43 describes the x220 front panel controls.


Table 5-43 x220 front panel control information

| Control | Characteristic | Description |
|---|---|---|
| Power on/off button | Recessed, with power LED | If the server is off, pressing this button causes the server to power up and start loading. When the server is on, pressing this button causes a graceful shutdown of the individual server so it is safe to remove. This process includes shutting down the operating system (if possible) and removing power from the server. If an operating system is running, the button might have to be held for approximately 4 seconds to initiate the shutdown. The button is recessed to protect it from accidental activation. |
| NMI control | Recessed; can be accessed only by using a small pointed object | Causes an NMI for debugging purposes. |

Power LED
The status of the power LED of the x220 shows the power status of the compute node. It also indicates the discovery status of the node by the Chassis Management Module. The power LED states are listed in Table 5-44.
Table 5-44 The power LED states of the x220 compute node

Off: No power to the compute node.
On; fast flash mode: The compute node has power; the Chassis Management Module is in discovery mode (handshake).
On; slow flash mode: The compute node has power and is in standby mode.
On; solid: The compute node has power and is operational.

Exception: The power button does not operate when the power LED is in fast flash mode.


Light path diagnostic procedures


For quick problem determination when located physically at the server, the x220 offers a three-step guided path:
1. The fault LED on the front panel
2. The light path diagnostics panel, shown in Figure 5-33
3. LEDs next to key components on the system board

The x220 light path diagnostics panel is visible when you remove the server from the chassis. The panel is on the upper right of the compute node, as shown in Figure 5-33.

Figure 5-33 Location of x220 light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis. The meaning of each LED in the light path diagnostics panel is listed in Table 5-45.
Table 5-45 Light path panel LED definitions

LP (green): The light path diagnostics panel is operational.
S BRD (yellow): A system board error is detected.
MIS (yellow): A mismatch has occurred between the processors, DIMMs, or HDDs within the configuration, as reported by POST.
NMI (yellow): An NMI has occurred.
TEMP (yellow): An over-temperature condition has occurred that was critical enough to shut down the server.
MEM (yellow): A memory fault has occurred. The corresponding DIMM error LEDs on the system board should also be lit.
ADJ (yellow): A fault is detected in the adjacent expansion unit (if installed).


Integrated Management Module II


Each x220 compute node has an IMM2 onboard and uses UEFI in place of the older BIOS interface. The IMM2 provides the following major features as standard:
- IPMI v2.0 compliance
- Remote configuration of IMM2 and UEFI settings without the need to power on the server
- Remote access to system fan, voltage, and temperature values
- Remote IMM and UEFI update
- UEFI update when the server is powered off
- Remote console by way of Serial over LAN
- Remote access to the system event log
- Predictive failure analysis and integrated alerting features (for example, by using SNMP)
- Remote presence, including remote control of the server by using a Java or ActiveX client
- Operating system failure window (blue screen) capture and display through the web interface
- Virtual media that allows the attachment of a diskette drive, CD/DVD drive, USB flash drive, or disk image to a server

Remember: Unlike IBM BladeCenter, the assigned TCP/IP address of the IMM is available on the local network. This address allows you to remotely manage the x220 by connecting directly to the IMM, independent of the IBM Flex System Manager or Chassis Management Module. For more information about the IMM, see 3.4.1, Integrated Management Module II on page 43.

5.3.13 Operating system support


The following operating systems are supported by the x220:
- Microsoft Windows Server 2008 HPC Edition
- Microsoft Windows Server 2008 R2
- Microsoft Windows Server 2008, Datacenter x64 Edition
- Microsoft Windows Server 2008, Enterprise x64 Edition
- Microsoft Windows Server 2008, Standard x64 Edition
- Microsoft Windows Server 2008, Web x64 Edition
- Red Hat Enterprise Linux 5 Server with Xen x64 Edition
- Red Hat Enterprise Linux 5 Server x64 Edition
- Red Hat Enterprise Linux 6 Server x64 Edition
- SUSE Linux Enterprise Server 10 for AMD64/EM64T
- SUSE Linux Enterprise Server 11 for AMD64/EM64T
- SUSE Linux Enterprise Server 11 with Xen for AMD64/EM64T
- VMware ESX 4.1
- VMware ESXi 4.1
- VMware vSphere 5


For the latest list of supported operating systems, see IBM ServerProven at:
http://www-03.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml

5.4 IBM Flex System p260 and p24L Compute Nodes


The IBM Flex System p260 Compute Node and IBM Flex System p24L Compute Node are based on IBM POWER architecture technologies. These compute nodes run in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment by using advanced processing technology. This section describes the server offerings and the technology used in their implementation.

Remember: The IBM Flex System p260 Compute Node can be ordered only as part of IBM PureFlex System, as described in Chapter 2, IBM PureFlex System on page 11.

5.4.1 Specifications
The IBM Flex System p260 Compute Node is a half-wide, Power Systems compute node with these characteristics:
- Two POWER7 processor sockets
- Sixteen memory slots
- Two I/O adapter slots
- An option for up to two internal drives for local storage

The IBM Flex System p260 Compute Node has the specifications shown in Table 5-46.
Table 5-46 IBM Flex System p260 Compute Node specifications

Model numbers: 7895-22X.
Form factor: Half-wide compute node.
Chassis support: IBM Flex System Enterprise Chassis.
Processor: Two IBM POWER7 processors. Each processor contains either eight cores (up to 3.55 GHz and 32 MB L3 cache) or four cores (3.3 GHz and 16 MB L3 cache). Each processor has 4 MB L3 cache per core. Integrated memory controller in each processor, each with four memory channels; each memory channel operates at 6.4 Gbps. One GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core.
Chipset: IBM P7IOC I/O hub.
Memory: 16 DIMM sockets. RDIMM DDR3 memory supported. Integrated memory controller in each processor, each with four memory channels. Supports IBM Active Memory Expansion with AIX 6.1 or later. All DIMMs operate at 1066 MHz. Both LP (low profile) and VLP (very low profile) DIMMs supported, although only VLP DIMMs are supported if internal HDDs are configured. The use of 1.8-inch solid-state drives allows the use of both LP and VLP DIMMs.
Memory maximums: 256 GB using 16x 16 GB DIMMs.
Memory protection: ECC, Chipkill.
Disk drive bays: Two 2.5-inch non-hot-swap drive bays that support 2.5-inch SAS HDDs or 1.8-inch SATA SSDs. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.
Maximum internal storage: 1.8 TB using two 900 GB SAS HDDs, or 354 GB using two 177 GB SSDs.
RAID support: RAID support by using the operating system.
Network interfaces: None standard. Optional 1 Gb or 10 Gb Ethernet adapters.
PCI Expansion slots: Two I/O connectors for adapters. PCI Express 2.0 x16 interface.
Ports: One external USB port.
Systems management: FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager, IBM Systems Director, and Active Energy Manager.
Security features: Power-on password, selectable boot sequence.
Video: None. Remote management by using Serial over LAN and IBM Flex System Manager.
Limited warranty: 3-year customer-replaceable unit and on-site limited warranty with 9x5/NBD.
Operating systems supported: IBM AIX, IBM i, and Linux.
Service and support: Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width 215 mm (8.5 in), height 51 mm (2.0 in), depth 493 mm (19.4 in).
Weight: Maximum configuration 7.0 kg (15.4 lb).


5.4.2 System board layout


Figure 5-34 shows the system board layout of the IBM Flex System p260 Compute Node: the two POWER7 processors, 16 DIMM slots, two I/O hubs, two I/O adapter connectors, and a connector for future expansion. (HDDs are mounted on the cover, located over the memory DIMMs.)

Figure 5-34 Layout of the IBM Flex System p260 Compute Node
5.4.3 IBM Flex System p24L Compute Node


The IBM Flex System p24L Compute Node shares several similarities with the IBM Flex System p260 Compute Node. It is a half-wide, Power Systems compute node with two POWER7 processor sockets, 16 memory slots, and two I/O adapter slots. This compute node has an option for up to two internal drives for local storage. The IBM Flex System p24L Compute Node is optimized for lower-cost Linux installations.

The IBM Flex System p24L Compute Node has the following features:
- Up to 16 POWER7 processing cores, with up to 8 per processor
- Sixteen DDR3 memory DIMM slots that support Active Memory Expansion
- Support for VLP and LP DIMMs
- Two P7IOC I/O hubs
- RAID-compatible SAS controller that supports up to two SSDs or HDDs
- Two I/O adapter slots
- Flexible Service Processor (FSP)
- System management alerts
- IBM Light Path Diagnostics
- USB 2.0 port
- IBM EnergyScale technology

The system board layout for the IBM Flex System p24L Compute Node is identical to the IBM Flex System p260 Compute Node, and is shown in Figure 5-34.


5.4.4 Front panel


The front panel of Power Systems compute nodes has the following common elements, as shown in Figure 5-35:
- USB 2.0 port
- Power control button and light path LED (green)
- Location LED (blue)
- Information LED (amber)
- Fault LED (amber)

Figure 5-35 Front panel of the IBM Flex System p260 Compute Node (left to right: USB 2.0 port, power button, then the location, information, and fault LEDs)

The USB port on the front of the Power Systems compute nodes is useful for various tasks. These tasks include out-of-band diagnostic procedures, hardware RAID setup, operating system access to data on removable media, and local OS installation. It might be helpful to obtain a USB optical (CD or DVD) drive for these purposes, in case the need arises. Tip: There is no optical drive in the IBM Flex System Enterprise Chassis.


The power-control button on the front of the server (Figure 5-35 on page 201) has two functions:
- When the system is fully installed in the chassis: Use this button to power the system on and off.
- When the system is removed from the chassis: Use this button to illuminate the light path diagnostics panel on the top of the front bezel, as shown in Figure 5-36.

Figure 5-36 Light path diagnostic panel

The LEDs on the light path panel indicate the status of the following devices:
- LP: Light path panel power indicator
- S BRD: System board LED (might indicate trouble with processor or memory, too)
- MGMT: Flexible Service Processor (or management card) LED
- D BRD: Drive (or direct access storage device (DASD)) board LED
- DRV 1: Drive 1 LED (SSD 1 or HDD 1)
- DRV 2: Drive 2 LED (SSD 2 or HDD 2)

If problems occur, the light path diagnostics LEDs assist in identifying the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. Pressing this button temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts. Typically, you can obtain this information from the IBM Flex System Manager or Chassis Management Module before removing the node. However, having the LEDs helps with repairs and troubleshooting if on-site assistance is needed.

For more information about the front panel and LEDs, see the IBM Flex System p260 and p460 Compute Node Installation and Service Guide, available at:
http://www.ibm.com/support


5.4.5 Chassis support


The Power Systems compute nodes can be used only in the IBM Flex System Enterprise Chassis. They do not fit in the previous IBM modular systems, such as IBM iDataPlex or IBM BladeCenter. There is no onboard video capability in the Power Systems compute nodes. The systems are accessed by using Serial over LAN (SOL) or the IBM Flex System Manager.

5.4.6 System architecture


This section covers the system architecture and layout of the p260 and p24L Power Systems compute node. The overall system architecture for the p260 and p24L is shown in Figure 5-37.

Figure 5-37 IBM Flex System p260 Compute Node and IBM Flex System p24L Compute Node block diagram

This diagram shows the two CPU slots, with eight memory slots for each processor. Each processor is connected to a P7IOC I/O hub, which connects to the I/O subsystem (I/O adapters, local storage). At the bottom, you can see a representation of the service processor (FSP) architecture.

5.4.7 Processor
The IBM POWER7 processor represents a leap forward in technology and associated computing capability. The multi-core architecture of the POWER7 processor is matched with


a wide range of related technologies to deliver leading throughput, efficiency, scalability, and reliability, availability, and serviceability (RAS). Although the processor is an important component in servers, many elements and facilities must be balanced across a server to deliver maximum throughput. As with previous generations, the design philosophy for POWER7 processor-based systems is system-wide balance. The POWER7 processor plays an important role in this balancing.

Processor options for the p260 and p24L


Table 5-47 defines the processor options for the p260 and p24L compute nodes.
Table 5-47 p260 and p24L processor options

IBM Flex System p260 Compute Node:
- EPR1: 4 cores per POWER7 processor, 2 processors, 8 total cores, 3.3 GHz, 16 MB L3 cache per POWER7 processor
- EPR3: 8 cores per POWER7 processor, 2 processors, 16 total cores, 3.2 GHz, 32 MB L3 cache per POWER7 processor
- EPR5: 8 cores per POWER7 processor, 2 processors, 16 total cores, 3.55 GHz, 32 MB L3 cache per POWER7 processor

IBM Flex System p24L Compute Node:
- EPR8: 8 cores per POWER7 processor, 2 processors, 16 total cores, 3.2 GHz, 32 MB L3 cache per POWER7 processor
- EPR9: 8 cores per POWER7 processor, 2 processors, 16 total cores, 3.55 GHz, 32 MB L3 cache per POWER7 processor
- EPR7: 6 cores per POWER7 processor, 2 processors, 12 total cores, 3.7 GHz, 24 MB L3 cache per POWER7 processor

To optimize software licensing, you can unconfigure or disable one or more cores. The feature is listed in Table 5-48.
Table 5-48 Unconfiguration of cores for p260 and p24L

Feature code 2319, Factory deconfiguration of 1 core: minimum 0, maximum one less than the total number of cores (for EPR5, the maximum is 7).

Architecture
IBM uses innovative methods to achieve the required levels of throughput and bandwidth. Areas of innovation for the POWER7 processor and POWER7 processor-based systems include (but are not limited to) the following elements:
- On-chip L3 cache implemented in embedded dynamic random-access memory (eDRAM)
- Cache hierarchy and component innovation
- Advances in the memory subsystem
- Advances in off-chip signaling

The superscalar POWER7 processor design also provides other capabilities:
- Binary compatibility with the prior generation of POWER processors
- Support for PowerVM virtualization capabilities, including PowerVM Live Partition Mobility to and from IBM POWER6 and IBM POWER6+ processor-based systems

Figure 5-38 shows the POWER7 processor die layout with major areas identified: Eight POWER7 processor cores, L2 cache, L3 cache and chip power bus interconnect, SMP links, GX++ interface, and integrated memory controller.

Figure 5-38 POWER7 processor architecture

5.4.8 Memory
Each POWER7 processor has an integrated memory controller. Industry standard DDR3 RDIMM technology is used to increase the reliability, speed, and density of the memory subsystems.

Memory placement rules


The preferred memory minimum and maximum for the p260 and p24L are listed in Table 5-49.
Table 5-49 Preferred memory limits for p260 and p24L

- IBM Flex System p260 Compute Node: minimum 8 GB, maximum 256 GB (16x 16 GB DIMMs)
- IBM Flex System p24L Compute Node: minimum 24 GB, maximum 256 GB (16x 16 GB DIMMs)

Generally, use a minimum of 2 GB of RAM per core. The functional minimum memory configuration for the system is 4 GB (2x2 GB). However, this configuration is not sufficient for reasonable production use of the system.
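The sizing and placement guidance in this section lends itself to a quick sanity check. The following Python sketch is illustrative only (it is not an IBM tool, and the function name is hypothetical); it encodes the 2 GB per core guideline, the install-in-matching-pairs rule, and the LP DIMM versus SAS HDD conflict described elsewhere in this section:

```python
# Illustrative sketch (not an IBM tool): check a proposed p260/p24L
# memory plan against the rules documented in this section.
# DIMM sizes are in GB; slots are assumed to be populated in pairs.

def check_memory_plan(dimms, cores, has_hdd=False, lp_sizes=(2, 16)):
    """Return a list of rule violations for a proposed DIMM layout.

    dimms    : list of per-slot DIMM sizes in GB, listed pair by pair
    cores    : total active POWER7 cores
    has_hdd  : True if 2.5-inch SAS HDDs are configured
    lp_sizes : DIMM sizes shipped only in the LP form factor
               (per the options table in this section: 2 GB and 16 GB)
    """
    problems = []
    if len(dimms) % 2 != 0:
        problems.append("DIMMs must be installed in pairs")
    # Both DIMMs in a pair must be the same size
    for i in range(0, len(dimms) - 1, 2):
        if dimms[i] != dimms[i + 1]:
            problems.append(f"pair {i // 2 + 1} sizes differ")
    # 2.5-inch HDDs physically conflict with LP DIMMs; only VLP fits
    if has_hdd and any(size in lp_sizes for size in dimms):
        problems.append("LP DIMMs cannot be combined with SAS HDDs")
    # Guideline: at least 2 GB of RAM per core
    if sum(dimms) < 2 * cores:
        problems.append("below the 2 GB per core guideline")
    return problems

# A 16-core node with 8x 8 GB VLP DIMMs and HDDs passes every rule
print(check_memory_plan([8] * 8, cores=16, has_hdd=True))  # []
```

For example, pairing two 16 GB (LP-only) DIMMs with SAS HDDs would be flagged, matching the physical-clearance restriction noted in 5.4.10.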

LP and VLP form factors


One benefit of deploying IBM Flex System systems is the ability to use LP memory DIMMs. This design allows for more choices to configure the system to match your needs.


Table 5-50 lists the available memory options for the p260 and p24L.
Table 5-50 Memory options for p260 and p24L

- Part number 78P1011, feature code EM04: 2x 2 GB DDR3 DIMMs, 1066 MHz, LP
- Part number 78P0501, feature code 8196: 2x 4 GB DDR3 DIMMs, 1066 MHz, VLP
- Part number 78P0502, feature code 8199: 2x 8 GB DDR3 DIMMs, 1066 MHz, VLP
- Part number 78P0639, feature code 8145: 2x 16 GB DDR3 DIMMs, 1066 MHz, LP

Requirement: Due to the design of the on-cover storage connections, clients who want to use SAS HDDs must use VLP DIMMs (4 GB or 8 GB). The cover cannot close properly if VLP DIMMs and SAS HDDs are configured in the same system. This mixture physically obstructs the cover. Solid-state drives (SSDs) and LP DIMMs can be used together, however. For more information, see 5.4.10, Storage on page 209. There are 16 buffered DIMM slots on the p260 and the p24L as shown in Figure 5-39.

Figure 5-39 Memory DIMM topology (IBM Flex System p260 Compute Node)

The memory-placement rules are as follows:
- Install DIMM fillers in unused DIMM slots to ensure effective cooling.
- Install DIMMs in pairs.
- Both DIMMs in a pair must be the same size, speed, type, and technology. Otherwise, you can mix compatible DIMMs from multiple manufacturers.
- Install only supported DIMMs, as described on the IBM ServerProven website: http://www.ibm.com/servers/eserver/serverproven/compat/us/

Table 5-51 shows the required placement of memory DIMMs for the p260 and the p24L, depending on the number of DIMMs installed.
Table 5-51 DIMM placement: p260 and p24L. For each supported DIMM count (2, 4, 6, 8, 10, 12, 14, or 16), the table marks which of DIMM slots 1-16, split across processor 0 and processor 1, must be populated.

Use of mixed DIMM sizes


Not all installed memory DIMMs have to be the same size. However, keep the following groups of DIMMs the same size:
- Slots 1-4
- Slots 5-8
- Slots 9-12
- Slots 13-16

5.4.9 Active Memory Expansion


The optional Active Memory Expansion feature is a POWER7 technology that allows the effective maximum memory capacity to be much larger than the true physical memory. Applicable to AIX 6.1 or later, this innovative compression and decompression of memory content using processor cycles allows memory expansion of up to 100%. This memory expansion allows an AIX 6.1 or later partition to do more work with the same physical amount of memory. Conversely, a server can run more partitions and do more work with the same physical amount of memory. Active Memory Expansion uses processor resources to compress and extract memory contents. The trade-off of memory capacity for processor cycles can be an excellent choice. However, the degree of expansion varies based on how compressible the memory content is. Have adequate spare processor capacity available for the compression and decompression. Tests in IBM laboratories using sample workloads showed excellent results for many workloads in terms of memory expansion per additional processor used. Other test workloads had more modest results.


You have a great deal of control over Active Memory Expansion usage. Each individual AIX partition can turn Active Memory Expansion on or off. Control parameters set the amount of expansion wanted in each partition to help control the amount of processor used by the Active Memory Expansion function. An initial program load (IPL) is required for the specific partition that turns memory expansion on or off. After it is turned on, there are monitoring capabilities in standard AIX performance tools, such as lparstat, vmstat, topas, and svmon. Figure 5-40 represents the percentage of processor used to compress memory for two partitions with various profiles. The green curve corresponds to a partition that has spare processing power capacity. The blue curve corresponds to a partition constrained in processing power.
Figure 5-40 Processor usage versus memory expansion effectiveness (1 = plenty of spare CPU resource available; 2 = constrained CPU resource, already running at significant utilization)

Both cases show a knee-of-the-curve relationship for the processor resources required for memory expansion:
- Busy processor cores do not have resources to spare for expansion.
- The more memory expansion that is done, the more processor resources are required.

The knee varies, depending on how compressible the memory contents are. This variability demonstrates the need for a case-by-case study to determine whether memory expansion can provide a positive return on investment. To help you perform this study, a planning tool is included with AIX 6.1 Technology Level 4 or later. The tool allows you to sample actual workloads and estimate both how expandable the partition memory is and how much processor resource is needed. The planning tool runs on any Power Systems model.


Figure 5-41 shows an example of the output returned by this planning tool. The tool outputs various real memory and processor resource combinations to achieve the wanted effective memory and proposes one particular combination. In this example, the tool proposes to allocate 58% of a processor core, to benefit from 45% extra memory capacity.
Active Memory Expansion Modeled Statistics:
-------------------------------------------
Modeled Expanded Memory Size : 8.00 GB

Expansion   True Memory    Modeled Memory    CPU Usage
Factor      Modeled Size   Gain              Estimate
---------   ------------   ---------------   ---------
1.21        6.75 GB        1.25 GB [ 19%]    0.00
1.31        6.25 GB        1.75 GB [ 28%]    0.20
1.41        5.75 GB        2.25 GB [ 39%]    0.35
1.51        5.50 GB        2.50 GB [ 45%]    0.58
1.61        5.00 GB        3.00 GB [ 60%]    1.46

Active Memory Expansion Recommendation:
---------------------------------------
The recommended AME configuration for this workload is to configure the LPAR with a memory size of 5.50 GB and to configure a memory expansion factor of 1.51. This will result in a memory expansion of 45% from the LPAR's current memory size. With this configuration, the estimated CPU usage due to Active Memory Expansion is approximately 0.58 physical processors, and the estimated overall peak CPU resource required for the LPAR is 3.72 physical processors.

Figure 5-41 Output from the AIX Active Memory Expansion planning tool

For more information about this topic, see the white paper, Active Memory Expansion: Overview and Usage Guide, available at: http://www.ibm.com/systems/power/hardware/whitepapers/am_exp.html
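The arithmetic behind the planning-tool rows is straightforward: the expanded (effective) memory is the true memory multiplied by the expansion factor, and the modeled gain is the difference, reported as a percentage of the true memory size. A minimal sketch, using the recommended row from the sample output (the tool itself also rounds true memory to valid sizes, so its rows are approximate):

```python
# Active Memory Expansion arithmetic, mirroring the planning-tool
# output shown in Figure 5-41: the gain is the expanded size minus
# the true size, expressed as a percentage of the true (physical)
# memory.

def ame_gain(true_gb, expanded_gb):
    """Return (memory gain in GB, gain as a whole-number % of true memory)."""
    gain = expanded_gb - true_gb
    return gain, round(100 * gain / true_gb)

# Recommended configuration from the sample: 5.50 GB true memory
# expanded to 8.00 GB effective memory
print(ame_gain(5.50, 8.00))  # (2.5, 45)
```

The same function reproduces the last row of the sample table: expanding 5.00 GB to 8.00 GB is a 3.00 GB gain, or 60%.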

5.4.10 Storage
The p260 and p24L have an onboard SAS controller that can manage up to two non-hot-pluggable internal drives. Both 2.5-inch HDDs and 1.8-inch SSDs are supported. The drives attach to the cover of the server, as shown in Figure 5-42 on page 210.

Storage configuration impact to memory configuration


The type of local drives used impacts the form factor of your memory DIMMs:
- If HDDs are chosen, only VLP DIMMs can be used because of internal spacing. There is not enough room for the 2.5-inch drives to be used with LP DIMMs (currently the 2 GB and 16 GB sizes). Verify your memory choice to make sure that it is compatible with the local storage configuration.
- The use of SSDs does not have the same limitation, and both LP and VLP DIMMs can be used with SSDs.


Figure 5-42 The IBM Flex System p260 Compute Node showing hard disk drive location on top cover

Local storage and cover options


Local storage options are shown in Table 5-52. None of the available drives are hot-swappable. If you use local drives, you need to order the appropriate cover with connections for your drive type. The maximum number of drives that can be installed in the p260 or p24L is two. SSDs and HDDs cannot be mixed. As shown in Figure 5-42, the local drives (HDD or SSD) are mounted to the top cover of the system. When ordering your p260 or p24L, select the cover that is appropriate for your system (SSD, HDD, or no drives).
Table 5-52 Local storage options

2.5-inch SAS HDDs:
- Feature code 7069 (no part number): Top cover with HDD connectors for the p260 and p24L
- Feature code 8274, part number 42D0627: 300 GB 10K RPM non-hot-swap 6 Gbps SAS
- Feature code 8276, part number 49Y2022: 600 GB 10K RPM non-hot-swap 6 Gbps SAS
- Feature code 8311, part number 81Y9654: 900 GB 10K RPM non-hot-swap 6 Gbps SAS

1.8-inch SSDs:
- Feature code 7068 (no part number): Top cover with SSD connectors for the p260 and p24L
- Feature code 8207, part number 74Y9114: 177 GB SATA non-hot-swap SSD

No drives:
- Feature code 7067 (no part number): Top cover for no drives on the p260 and p24L


Local drive connection


On covers that accommodate drives, the drives attach to an interposer that connects to the system board when the cover is properly installed. This connection is shown in Figure 5-43.

Figure 5-43 Connector on drive interposer card mounted to server cover

The connection for the cover's drive interposer on the system board is shown in Figure 5-44.

Figure 5-44 Connection for drive interposer card mounted to the system cover

RAID capabilities
Disk drives and solid-state drives in the p260 and p24L can be used to implement and manage various types of RAID arrays in operating systems that are on the ServerProven list. For the compute node, you must configure the RAID array through the smit sasdam command, which starts the SAS RAID Disk Array Manager for AIX. The AIX Disk Array Manager is packaged with the Diagnostics utilities on the Diagnostics CD. Use smit sasdam to configure the disk drives for use with the SAS controller. The diagnostics CD can be downloaded in ISO file format from:
http://www14.software.ibm.com/webapp/set2/sas/f/diags/download/


For more information, see Using the Disk Array Manager in the Systems Hardware Information Center at: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/p7ebj/s asusingthesasdiskarraymanager.htm Tip: Depending on your RAID configuration, you might have to create the array before you install the operating system in the compute node. Before you can create a RAID array, reformat the drives so that the sector size of the drives changes from 512 bytes to 528 bytes. If you later decide to remove the drives, delete the RAID array before you remove the drives. If you decide to delete the RAID array and reuse the drives, you might need to reformat the drives. Change the sector size of the drives from 528 bytes to 512 bytes.
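The 512-to-528-byte reformat mentioned in the tip also reduces the usable capacity of each drive. The sketch below assumes, as is common in SAS RAID implementations, that each 528-byte sector still carries 512 bytes of data while the extra 16 bytes hold controller integrity-checking metadata; the exact use of those bytes is controller-specific:

```python
# Rough sketch (an assumption, not IBM-documented arithmetic): after
# reformatting to 528-byte sectors, each sector is assumed to still
# carry 512 data bytes, with the extra 16 bytes consumed by RAID
# controller metadata, so fewer data bytes fit on the same media.

def usable_after_reformat(raw_bytes, sector_size=528, data_bytes=512):
    """Approximate usable data bytes once a drive uses 528-byte sectors."""
    return (raw_bytes // sector_size) * data_bytes

raw = 300 * 10**9  # a nominal 300 GB drive
print(round(usable_after_reformat(raw) / 10**9, 1))  # about 290.9 GB
```

In other words, under these assumptions roughly 3% of the raw capacity (16 of every 528 bytes) is given over to per-sector overhead after the reformat.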

5.4.11 I/O expansion


The networking subsystem of the IBM Flex System Enterprise Chassis is designed to provide increased bandwidth and flexibility. The new design also allows for more ports on the available expansion adapters, which allows for greater flexibility and efficiency with your system design.

I/O adapter slots


There are two I/O adapter slots on the p260 and the p24L. Unlike IBM BladeCenter, the I/O adapter slots on IBM Flex System nodes are identical in shape (form factor). Also different is that the I/O adapters for the Power Systems compute nodes have their own connector that plugs into the IBM Flex System Enterprise Chassis midplane. Restriction: There is no onboard network capability in the Power Systems compute nodes other than the FSP NIC interface. All p260, p24L, and p460 configurations must include a 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node.


A typical I/O adapter card is shown in Figure 5-45.

Figure 5-45 The underside of the IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter, showing the PCIe connector, the midplane connector, and the guide block that ensures correct installation. Adapters share a common size (100 mm x 80 mm).

Note the large connector, which plugs into one of the I/O adapter slots on the system board. Also, notice that it has its own connection to the midplane of the Enterprise Chassis. Several of the expansion cards connect directly to the midplane such as the CFFh and HSSF form factors. Others, such as the CIOv, CFFv, SFF, and StFF form factors, do not.

PCI hubs
The I/O is controlled by two P7-IOC I/O controller hub chips. These chips provide additional flexibility when assigning resources within Virtual I/O Server (VIOS) to specific Virtual Machine/logical partitions (LPARs).

Available adapters
Table 5-53 shows the available I/O adapter cards for the p260 and p24L. All p260, p24L, and p460 configurations must include a 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node.
Table 5-53 Supported I/O adapters for the p260 and p24L

- Feature code 1762 (see note a), part number 81Y3124: IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter (four ports)
- Feature code 1763 (see note a), part number 49Y7900: IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter (four ports)
- Feature code 1764, part number 69Y1938: IBM Flex System FC3172 2-port 8 Gb FC Adapter (two ports)
- Feature code 1761, part number 90Y0134: IBM Flex System IB6132 2-port QDR InfiniBand Adapter (two ports)

a. At least one 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter must be configured in each server.


5.4.12 System management


There are several advanced system management capabilities built into the p260 and p24L. A Flexible Support Processor handles most of the server-level system management. It has features, such as system alerts and SOL capability, which are described in this section.

Flexible Support Processor


An FSP provides out-of-band system management capabilities. These capabilities include system control, runtime error detection, configuration, and diagnostic procedures. Generally, you do not interact with the Flexible Support Processor directly. Rather, you use tools such as IBM Flex System Manager, Chassis Management Module, and external IBM Systems Director Management Console. The Flexible Support Processor provides an SOL interface, which is available by using the Chassis Management Module and the console command.

Serial over LAN


The p260 and p24L do not have an on-board video chip and do not support keyboard, video, and mouse (KVM) connections. Server console access is obtained by a SOL connection only. SOL provides a means to manage servers remotely by using a command-line interface (CLI) over a Telnet or Secure Shell (SSH) connection. SOL is required to manage servers that do not have KVM support or that are attached to the IBM Flex System Manager. SOL provides console redirection for both Software Management Services (SMS) and the server operating system. The SOL feature redirects server serial-connection data over a local area network (LAN) without requiring special cabling. It does so by routing the data by using the Chassis Management Module network interface. The SOL connection enables Power Systems compute nodes to be managed from any remote location with network access to the Chassis Management Module.

SOL offers the following advantages:
- Remote administration without KVM (headless servers)
- Reduced cabling and no requirement for a serial concentrator
- Standard Telnet/SSH interface, eliminating the requirement for special client software

The Chassis Management Module CLI provides access to the text-console command prompt on each server through a SOL connection. This configuration enables the p260 and p24L to be managed from a remote location.

Anchor card
The anchor card, shown in Figure 5-46 on page 215, contains the vital product data chip that stores system-specific information. The pluggable anchor card provides a means for this information to be transferable from a faulty system board to the replacement system board. Before the service processor knows what system it is on, it reads the vital product data chip to obtain system information. The vital product data chip includes information such as system type, model, and serial number.

214

IBM PureFlex System and IBM Flex System Products and Technology

Figure 5-46 Anchor card

5.4.13 Integrated features


As stated in 5.4.1, Specifications on page 198 and 5.4.3, IBM Flex System p24L Compute Node on page 200, the integrated features are as follows:
- Flexible Support Processor
- IBM POWER7 processors
- SAS RAID-capable controller
- USB port

In the p260 and p24L, there is a thermal sensor in the Light Path panel assembly.

5.4.14 Operating system support


The IBM Flex System p24L Compute Node is designed to run Linux only. The IBM Flex System p260 Compute Node supports the following configurations:
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later (planned availability: June 29, 2012)
- AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later (planned availability: June 29, 2012)
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with APAR IV14283
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later (planned availability: June 29, 2012)
- AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later (planned availability: June 29, 2012)
- AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later (planned availability: June 29, 2012). Remember: AIX 5.3 Service Extension is required.
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1, or later
- Novell SUSE Linux Enterprise Server 11 Service Pack 2 for POWER, with current maintenance updates available from Novell to enable all planned functionality
- Red Hat Enterprise Linux 5.7, for POWER, or later
- Red Hat Enterprise Linux 6.2, for POWER, or later
- VIOS 2.2.1.4, or later

5.5 IBM Flex System p460 Compute Node


The IBM Flex System p460 Compute Node is based on IBM POWER architecture technologies. This compute node runs in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment by using advanced processing technology. This section describes the server offerings and the technology used in their implementation. Remember: The IBM Flex System p460 Compute Node can be ordered only as part of IBM PureFlex System as described in Chapter 2, IBM PureFlex System on page 11.

5.5.1 Overview
The IBM Flex System p460 Compute Node is a full-wide, Power Systems compute node. It has four POWER7 processor sockets, 32 memory slots, four I/O adapter slots, and an option for up to two internal drives for local storage. The IBM Flex System p460 Compute Node has the specifications shown in Table 5-54.
Table 5-54 IBM Flex System p460 Compute Node specifications

Model numbers: 7895-42X
Form factor: Full-wide compute node.
Chassis support: IBM Flex System Enterprise Chassis.
Processor: Four IBM POWER7 processors. Each processor contains either eight cores (up to 3.55 GHz and 32 MB L3 cache) or four cores (3.3 GHz and 16 MB L3 cache); each processor has 4 MB L3 cache per core. Integrated memory controller in each processor, each with four memory channels. Each memory channel operates at 6.4 Gbps. One GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core.
Chipset: IBM P7IOC I/O hub.
Memory: 32 DIMM sockets. RDIMM DDR3 memory supported. Integrated memory controller in each processor, each with four memory channels. Supports Active Memory Expansion with AIX 6.1 or later. All DIMMs operate at 1066 MHz. Both LP and VLP DIMMs are supported, although only VLP DIMMs are supported if internal HDDs are configured. The use of 1.8-inch solid-state drives allows the use of LP and VLP DIMMs.
Memory maximums: 512 GB using 32x 16 GB DIMMs.
Memory protection: ECC, Chipkill.
Disk drive bays: Two 2.5-inch non-hot-swap drive bays that support 2.5-inch SAS HDD or 1.8-inch SATA SSD drives. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.
Maximum internal storage: 1.8 TB using two 900 GB SAS HDD drives, or 354 GB using two 177 GB SSD drives.
RAID support: RAID support by using the operating system.
Network interfaces: None standard. Optional 1 Gb or 10 Gb Ethernet adapters.
PCI Expansion slots: Four I/O connectors for adapters. PCI Express 2.0 x16 interface.
Ports: One external USB port.
Systems management: FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager, IBM Systems Director, and Active Energy Manager.
Security features: Power-on password, selectable boot sequence.
Video: None. Remote management by using Serial over LAN and IBM Flex System Manager.
Limited warranty: 3-year customer-replaceable unit and on-site limited warranty with 9x5/NBD.
Operating systems supported: IBM AIX, IBM i, and Linux.
Service and support: Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width: 437 mm (17.2 in), height: 51 mm (2.0 in), depth: 493 mm (19.4 in).
Weight: Maximum configuration: 14.0 kg (30.6 lb).


5.5.2 System board layout


Figure 5-47 shows the system board layout of the IBM Flex System p460 Compute Node, including the four POWER7 processors, the 32 DIMM slots, and the four I/O adapter connectors.

Figure 5-47 Layout of the IBM Flex System p460 Compute Node

5.5.3 Front panel


The front panel of Power Systems compute nodes has the following common elements, as shown in Figure 5-48 on page 219:
- USB 2.0 port
- Power control button and light path LED (green)
- Location LED (blue)
- Information LED (amber)
- Fault LED (amber)


Figure 5-48 Front panel of the IBM Flex System p460 Compute Node

The USB port on the front of the Power Systems compute nodes is useful for various tasks. These tasks include out-of-band diagnostic procedures, hardware RAID setup, operating system access to data on removable media, and local OS installation. It might be helpful to obtain a USB optical (CD or DVD) drive for these purposes, in case the need arises.

Tip: There is no optical drive in the IBM Flex System Enterprise Chassis.

The power-control button on the front of the server (Figure 5-35 on page 201) has these functions:
- When the system is fully installed in the chassis: Use this button to power the system on and off.
- When the system is removed from the chassis: Use this button to illuminate the light path diagnostic panel on the top of the front bezel, as shown in Figure 5-49.

Figure 5-49 Light path diagnostic panel


The LEDs on the light path panel indicate the status of the following devices:
- LP: Light path panel power indicator
- S BRD: System board LED (might indicate trouble with processor or memory)
- MGMT: Flexible Support Processor (or management card) LED
- D BRD: Drive (or DASD) board LED
- DRV 1: Drive 1 LED (SSD 1 or HDD 1)
- DRV 2: Drive 2 LED (SSD 2 or HDD 2)
- ETE: Sidecar connector LED (not present on the IBM Flex System p460 Compute Node)

If problems occur, the light path diagnostics LEDs assist in identifying the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. Pressing the button temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts. You usually obtain this information from the IBM Flex System Manager or Chassis Management Module before removing the node. However, having the LEDs helps with repairs and troubleshooting if on-site assistance is needed.

For more information about the front panel and LEDs, see the IBM Flex System p260 and p460 Compute Node Installation and Service Guide available at:
http://www.ibm.com/support

5.5.4 Chassis support


The p460 can be used only in the IBM Flex System Enterprise Chassis. It does not fit in any of the previous IBM modular systems, such as IBM iDataPlex or IBM BladeCenter. There is no onboard video capability in the Power Systems compute nodes; the systems are accessed by using SOL or the IBM Flex System Manager.


5.5.5 System architecture


The IBM Flex System p460 Compute Node shares many of the same components as the IBM Flex System p260 Compute Node. The IBM Flex System p460 Compute Node is a full-wide node, and adds additional processors and memory along with two more adapter slots. It has the same local storage options as the IBM Flex System p260 Compute Node. The IBM Flex System p460 Compute Node system architecture is shown in Figure 5-50.

Figure 5-50 IBM Flex System p460 Compute Node block diagram


The four processors in the IBM Flex System p460 Compute Node are connected in a cross-bar formation as shown in Figure 5-51.

Figure 5-51 IBM Flex System p460 Compute Node processor connectivity (each link is 4 bytes wide)

5.5.6 Processor
The IBM POWER7 processor represents a leap forward in technology and associated computing capability. The multi-core architecture of the POWER7 processor is matched with a wide range of related technologies to deliver leading throughput, efficiency, scalability, and RAS. Although the processor is an important component in servers, many elements and facilities must be balanced across a server to deliver maximum throughput. The design philosophy for POWER7 processor-based systems is system-wide balance, in which the POWER7 processor plays an important role. Table 5-55 defines the processor options for the p460.
Table 5-55 Processor options for the p460

Feature code  Cores per POWER7 processor  Number of POWER7 processors  Total cores  Core frequency  L3 cache size per POWER7 processor
EPR2          4                           4                            16           3.3 GHz         16 MB
EPR4          8                           4                            32           3.2 GHz         32 MB
EPR6          8                           4                            32           3.55 GHz        32 MB


To optimize software licensing, you can unconfigure or disable one or more cores. The feature is listed in Table 5-56.
Table 5-56 Unconfiguration of cores

Feature code: 2319
Description: Factory deconfiguration of one core
Minimum: 0
Maximum: 1 less than the total number of cores (for EPR5, the maximum is 7)

5.5.7 Memory
Each POWER7 processor has two integrated memory controllers in the chip. Industry standard DDR3 RDIMM technology is used to increase reliability, speed, and density of memory subsystems.

Memory placement rules


The preferred memory minimum and maximums for the p460 are shown in Table 5-57.
Table 5-57 Preferred memory limits for the p460

Model: IBM Flex System p460 Compute Node
Minimum memory: 32 GB
Maximum memory: 512 GB (32x 16 GB DIMMs)

Use a minimum of 2 GB of RAM per core. The functional minimum memory configuration for the system is 4 GB (2x 2 GB), but that is not sufficient for reasonable production use of the system.
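As a quick check of the sizing rule above, the short sketch below (the function name is illustrative, not from IBM documentation) computes the recommended minimum memory from the number of active cores, never going below the 4 GB functional floor:

```python
def min_recommended_memory_gb(active_cores):
    """At least 2 GB of RAM per core, with a 4 GB functional floor."""
    return max(4, 2 * active_cores)

print(min_recommended_memory_gb(16))  # 32, matching the preferred p460 minimum in Table 5-57
print(min_recommended_memory_gb(32))  # 64
```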

LP and VLP form factors


One benefit of deploying IBM Flex System systems is the ability to use LP memory DIMMs. This design allows for more choices to configure the system to match your needs. Table 5-58 lists the available memory options for the p460.
Table 5-58 Memory options for the p460

Part number  Feature code  Description      Speed     Form factor
78P1011      EM04          2 GB DDR3 DIMM   1066 MHz  LP
78P0501      8196          4 GB DDR3 DIMM   1066 MHz  VLP
78P0502      8199          8 GB DDR3 DIMM   1066 MHz  VLP
78P0639      8145          16 GB DDR3 DIMM  1066 MHz  LP

Requirement: Due to the design of the on-cover storage connections, if you use SAS HDDs, you must use VLP DIMMs (4 GB or 8 GB). The cover cannot close properly if VLP DIMMs and SAS hard disk drives are configured in the same system. Combining the two physically obstructs the cover from closing. For more information, see 5.4.10, Storage on page 209.


There are 16 buffered DIMM slots on the p260 and the p24L, as shown in Figure 5-52. The IBM Flex System p460 Compute Node adds two more processors and 16 additional DIMM slots, divided evenly, with eight DIMM slots per processor.

Figure 5-52 Memory DIMM topology (Processors 0 and 1 shown)

The memory-placement rules are as follows:
- Install DIMM fillers in unused DIMM slots to ensure efficient cooling.
- Install DIMMs in pairs. Both DIMMs in a pair must be the same size, speed, type, and technology. You can mix compatible DIMMs from multiple manufacturers.
- Install only supported DIMMs, as described on the IBM ServerProven website: http://www.ibm.com/servers/eserver/serverproven/compat/us/
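The pairing rules above can be sketched as a small validation helper. This is illustrative only: the assumption that slots pair as (1,2), (3,4), and so on is ours, not taken from the service documentation.

```python
def check_dimm_pairs(slots, total_slots=32):
    """slots maps a slot number (1..total_slots) to a (size_gb, speed_mhz,
    type) tuple, or is absent for an empty slot (which must hold a filler).
    Returns a list of rule violations; an empty list means the
    configuration passes."""
    errors = []
    for first in range(1, total_slots, 2):   # assumed pairs: (1,2), (3,4), ...
        a, b = slots.get(first), slots.get(first + 1)
        if (a is None) != (b is None):
            errors.append(f"slots {first}/{first + 1}: only one DIMM of the pair installed")
        elif a is not None and a != b:
            errors.append(f"slots {first}/{first + 1}: DIMMs in a pair must match")
    return errors

# A matched 8 GB pair passes; the lone DIMM in slot 3 is flagged.
config = {1: (8, 1066, "RDIMM"), 2: (8, 1066, "RDIMM"), 3: (4, 1066, "RDIMM")}
print(check_dimm_pairs(config))
```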


For the IBM Flex System p460 Compute Node, Table 5-59 shows the required placement of memory DIMMs, depending on the number of DIMMs installed.
Table 5-59 DIMM placement on IBM Flex System p460 Compute Node (required slot population, across DIMM slots 1-32 of CPUs 0-3, for each even number of DIMMs from 2 to 32)

Use of mixed DIMM sizes


Not all installed memory DIMMs have to be the same size. However, for best results, keep these groups of DIMMs the same size:
- Slots 1-4
- Slots 5-8
- Slots 9-12
- Slots 13-16
- Slots 17-20
- Slots 21-24
- Slots 25-28
- Slots 29-32


5.5.8 Active Memory Expansion


The optional Active Memory Expansion feature is a POWER7 technology that allows the effective maximum memory capacity to be much larger than the true physical memory. Applicable to AIX 6.1 or later, this innovative compression and decompression of memory content using processor cycles allows memory expansion of up to 100%. This efficiency allows an AIX 6.1 or later partition to do more work with the same physical amount of memory. Conversely, a server can run more partitions and do more work with the same physical amount of memory.

Active Memory Expansion uses processor resources to compress and extract memory contents. The trade-off of memory capacity for processor cycles can be an excellent choice. However, the degree of expansion varies based on how compressible the memory content is. Have adequate spare processor capacity available for the compression and decompression. Tests in IBM laboratories using sample workloads showed excellent results for many workloads in terms of memory expansion per additional processor used. Other test workloads had more modest results.

You have a great deal of control over Active Memory Expansion usage. Each individual AIX partition can turn Active Memory Expansion on or off. Control parameters set the amount of expansion wanted in each partition to help control the amount of processor used by the Active Memory Expansion function. An IPL is required for the specific partition that is turning memory expansion on or off. After being turned on, there are monitoring capabilities in standard AIX performance tools, such as lparstat, vmstat, topas, and svmon.

Figure 5-53 represents the percentage of processor used to compress memory for two partitions with different profiles. The green curve corresponds to a partition that has spare processing power capacity. The blue curve corresponds to a partition constrained in processing power.
Figure 5-53 Processor usage versus memory expansion effectiveness (curve 1: plenty of spare CPU resource available; curve 2: CPU resource constrained, already running at significant utilization)

Both cases show a knee-of-the-curve relationship for the processor resources required for memory expansion:
- Busy processor cores do not have resources to spare for expansion.
- The more memory expansion that is done, the more processor resources are required.

The knee varies, depending on how compressible the memory contents are. This variation demonstrates the need for a case-by-case study to determine whether memory expansion can provide a positive return on investment. To help you perform this study, a planning tool is included with AIX 6.1 Technology Level 4 or later. This tool allows you to sample actual workloads and estimate both how expandable the partition memory is and how much processor resource is needed. Any Power System model runs the planning tool.

Figure 5-54 shows an example of the output returned by this planning tool. The tool outputs various real memory and processor resource combinations to achieve the required effective memory, and proposes one particular combination. In this example, the tool proposes to allocate 58% of a processor core to benefit from 45% extra memory capacity.
Active Memory Expansion Modeled Statistics:
-------------------------------------------
Modeled Expanded Memory Size : 8.00 GB

Expansion    True Memory      Modeled Memory     CPU Usage
Factor       Modeled Size     Gain               Estimate
---------    -------------    ----------------   ---------
1.21         6.75 GB          1.25 GB [ 19%]     0.00
1.31         6.25 GB          1.75 GB [ 28%]     0.20
1.41         5.75 GB          2.25 GB [ 39%]     0.35
1.51         5.50 GB          2.50 GB [ 45%]     0.58
1.61         5.00 GB          3.00 GB [ 60%]     1.46

Active Memory Expansion Recommendation:
---------------------------------------
The recommended AME configuration for this workload is to configure the LPAR with a memory size of 5.50 GB and to configure a memory expansion factor of 1.51. This will result in a memory expansion of 45% from the LPAR's current memory size. With this configuration, the estimated CPU usage due to Active Memory Expansion is approximately 0.58 physical processors, and the estimated overall peak CPU resource required for the LPAR is 3.72 physical processors.

Figure 5-54 Output from the AIX Active Memory Expansion planning tool

For more information about this topic, see the white paper, Active Memory Expansion: Overview and Usage Guide, available at: http://www.ibm.com/systems/power/hardware/whitepapers/am_exp.html
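The modeled statistics in the sample output follow from the definition of the expansion factor (effective memory = true memory x expansion factor). The sketch below reproduces the true-memory and gain columns of Figure 5-54; the 0.25 GB rounding granularity is an assumption inferred from the sample output, not documented tool behavior.

```python
import math

def model_ame_row(expanded_gb, factor, granularity_gb=0.25):
    """Return (true memory, modeled gain, gain percent) for a target
    effective memory size and an expansion factor. True memory is
    rounded up to the assumed 0.25 GB granularity."""
    true_gb = math.ceil(expanded_gb / factor / granularity_gb) * granularity_gb
    gain_gb = expanded_gb - true_gb
    return true_gb, gain_gb, round(100 * gain_gb / true_gb)

# Reproduce the five rows of the sample output for an 8.00 GB target
for factor in (1.21, 1.31, 1.41, 1.51, 1.61):
    true_gb, gain_gb, pct = model_ame_row(8.00, factor)
    print(f"{factor:.2f}  {true_gb:.2f} GB  {gain_gb:.2f} GB [{pct}%]")
```

Run against the figure, the computed rows (6.75 GB/19%, 6.25 GB/28%, 5.75 GB/39%, 5.50 GB/45%, 5.00 GB/60%) match the tool's table, which supports the rounding assumption.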

5.5.9 Storage
The p460 has an onboard SAS controller that can manage up to two non-hot-pluggable internal drives. The drives attach to the cover of the server, as shown in Figure 5-55 on page 228. Even though the p460 is a full-wide server, it has the same storage options as the p260 and the p24L.

The type of local drives used impacts the form factor of your memory DIMMs. If HDDs are chosen, only VLP DIMMs can be used because of internal spacing: there is not enough room for the 2.5-inch drives to be used with LP DIMMs (currently the 2 GB and 16 GB sizes). Verify your memory choice to make sure that it is compatible with the local storage configuration. The use of SSDs does not have the same limitation, so LP DIMMs can be used with SSDs.


Figure 5-55 The IBM Flex System p260 Compute Node showing hard disk drive location

5.5.10 Local storage and cover options


Local storage options are shown in Table 5-60. None of the available drives are hot-swappable. If you use local drives, you must order the appropriate cover with connections for your drive type. The maximum number of drives that can be installed in any Power Systems compute node is two. SSDs and HDDs cannot be mixed.

As shown in Figure 5-55, the local drives (HDD or SSD) are mounted to the top cover of the system. When ordering your p460, select the cover that is appropriate for your system (SSD, HDD, or no drives), as shown in Table 5-60.
Table 5-60 Local storage options

Feature code  Part number  Description

2.5-inch SAS HDDs:
7066          None         Top cover with HDD connectors for the IBM Flex System p460 Compute Node (full-wide)
8274          42D0627      300 GB 10K RPM non-hot-swap 6 Gbps SAS
8276          49Y2022      600 GB 10K RPM non-hot-swap 6 Gbps SAS
8311          81Y9654      900 GB 10K RPM non-hot-swap 6 Gbps SAS

1.8-inch SSDs:
7065          None         Top cover with SSD connectors for the IBM Flex System p460 Compute Node (full-wide)
8207          74Y9114      177 GB SATA non-hot-swap SSD

No drives:
7005          None         Top cover for no drives on the IBM Flex System p460 Compute Node (full-wide)

On covers that accommodate drives, the drives attach to an interposer that connects to the system board when the cover is properly installed. This connection is shown in Figure 5-56.

Figure 5-56 Connector on drive interposer card mounted to server cover

The connection for the covers drive interposer on the system board is shown in Figure 5-57.

Figure 5-57 Connection for drive interposer card mounted to the system cover

5.5.11 Hardware RAID capabilities


Disk drives and solid-state drives in the Power Systems compute nodes can be used to implement and manage various types of RAID arrays in operating systems that are on the ServerProven list. For the compute node, you must configure the RAID array through the smit sasdam command, which is the SAS RAID Disk Array Manager for AIX. The AIX Disk Array Manager is packaged with the Diagnostics utilities on the Diagnostics CD. Use smit sasdam to configure the disk drives for use with the SAS controller. The diagnostics CD can be downloaded in ISO file format at:
http://www14.software.ibm.com/webapp/set2/sas/f/diags/download/

For more information, see "Using the Disk Array Manager" in the Systems Hardware Information Center at:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/p7ebj/sasusingthesasdiskarraymanager.htm

Tip: Depending on your RAID configuration, you might have to create the array before you install the operating system in the compute node. Before creating a RAID array, reformat the drives so that the sector size of the drives changes from 512 bytes to 528 bytes. If you later decide to remove the drives, delete the RAID array before you remove the drives. If you decide to delete the RAID array and reuse the drives, you might need to reformat the drives to change the sector size from 528 bytes back to 512 bytes.

5.5.12 I/O expansion


The networking subsystem of the IBM Flex System Enterprise Chassis is designed to provide increased bandwidth and flexibility. The design also allows for more ports on the available expansion adapters, enabling greater flexibility and efficiency in your system design.

I/O adapter slots


There are four I/O adapter slots on the IBM Flex System p460 Compute Node. Unlike IBM BladeCenter, the I/O adapter slots on IBM Flex System nodes are identical in shape (form factor). Also, the I/O adapters for the p460 have their own connector that plugs into the IBM Flex System Enterprise Chassis midplane. Restriction: There is no onboard network capability in the Power Systems compute nodes other than the FSP NIC interface. All p260, p24L, and p460 configurations must include a 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node.


A typical I/O adapter card is shown in Figure 5-58.

Figure 5-58 The underside of the IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter

Note the large connector, which plugs into one of the I/O adapter slots on the system board. Also, notice that it has its own connection to the midplane of the Enterprise Chassis. Several of the expansion cards connect directly to the midplane such as the CFFh and HSSF form factors. Others such as the CIOv, CFFv, SFF, and StFF form factors do not.

PCI hubs
The I/O is controlled by four P7-IOC I/O controller hub chips. This configuration provides additional flexibility when assigning resources within the VIOS to specific virtual machines (LPARs).

Available adapters
Table 5-61 shows the available I/O adapter cards for the p460. All p260, p24L, and p460 configurations must include a 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node.
Table 5-61 Supported I/O adapters for the p460

Feature code  Part number  Description
1762 (a)      81Y3124      IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter
1763 (a)      49Y7900      IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter
1764          69Y1938      IBM Flex System FC3172 2-port 8 Gb FC Adapter
1761          90Y0134      IBM Flex System IB6132 2-port QDR InfiniBand Adapter

a. At least one 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter must be configured in each server.


5.5.13 System management


There are several advanced system management capabilities built into the p460. A Flexible Support Processor handles most of the server-level system management. It has features, such as system alerts and Serial over LAN capability, which are described in this section.

Flexible Support Processor


An FSP provides out-of-band system management capabilities, such as system control, runtime error detection, configuration, and diagnostic procedures. Generally, you do not interact with the Flexible Support Processor directly. Rather, you use tools, such as IBM Flex System Manager, Chassis Management Module, and external IBM Systems Director Management Console. The Flexible Support Processor provides a Serial-over-LAN interface, which is available by using the Chassis Management Module and the console command. The IBM Flex System p460 Compute Node, even though it is a full-wide system, has only one Flexible Support Processor.

Serial over LAN


The Power Systems compute nodes do not have an on-board video chip and do not support KVM connections. Server console access is obtained through a SOL connection only. SOL provides a means to manage servers remotely by using a CLI over a Telnet or SSH connection. SOL is required to manage servers that do not have KVM support or that are attached to the IBM Flex System Manager. SOL provides console redirection for both SMS and the server operating system.

The SOL feature redirects server serial-connection data over a LAN without requiring special cabling by routing the data through the Chassis Management Module network interface. The SOL connection enables Power Systems compute nodes to be managed from any remote location with network access to the Chassis Management Module. SOL offers the following advantages:
- Remote administration without KVM (headless servers)
- Reduced cabling and no requirement for a serial concentrator
- Standard Telnet/SSH interface, eliminating the requirement for special client software

The Chassis Management Module CLI provides access to the text-console command prompt on each server through a SOL connection. This configuration allows you to manage the Power Systems compute nodes from a remote location.

Anchor card
The anchor card, shown in Figure 5-59 on page 233, contains the vital product data chip that stores system-specific information. The pluggable anchor card provides a means for this information to be transferred from a faulty system board to the replacement system board. Before the service processor knows what system it is on, it reads the vital product data chip to obtain system information. The vital product data chip includes information such as system type, model, and serial number.


Figure 5-59 Anchor card

5.5.14 Integrated features


As stated in 5.5.1, Overview on page 216, the IBM Flex System p460 Compute Node has these integrated features:
- Flexible Support Processor
- IBM POWER7 processors
- SAS RAID-capable controller
- USB port

5.5.15 Operating system support


The IBM Flex System p460 Compute Node supports the following configurations:
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later (planned availability: June 29, 2012)
- AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later (planned availability: June 29, 2012)
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with APAR IV14283
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later (planned availability: June 29, 2012)
- AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later (planned availability: June 29, 2012)
- AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later (planned availability: June 29, 2012). Remember: AIX 5.3 Service Extension is required.
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1, or later
- Novell SUSE Linux Enterprise Server 11 Service Pack 2 for POWER, with current maintenance updates available from Novell to enable all planned functionality
- Red Hat Enterprise Linux 5.7, for POWER, or later
- Red Hat Enterprise Linux 6.2, for POWER, or later
- VIOS 2.2.1.4, or later

5.6 I/O adapters


Each compute node can optionally accommodate one or more I/O adapters to provide connections to the chassis switch modules. The adapter ports are routed through the chassis midplane to the I/O modules. The I/O adapters allow the compute nodes to connect, through the switch modules or pass-through modules in the chassis, to different LAN or SAN fabric types.

As described in 5.2.11, I/O expansion on page 171, any supported I/O adapter can be installed in either I/O connector. On servers with the embedded 10 Gb Ethernet controller, the LOM connector must be unscrewed and removed. After installation, the I/O adapter on I/O connector 1 is routed to I/O module bays 1 and 2 of the chassis, and the I/O adapter on I/O connector 2 is routed to I/O module bays 3 and 4. For more information about specific port routing, see 4.9, I/O architecture on page 85.
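The connector-to-bay routing just described can be expressed as a one-line mapping (the function name is ours, for illustration):

```python
def io_bays_for_connector(connector):
    """I/O connector 1 routes to chassis I/O module bays 1 and 2;
    connector 2 routes to bays 3 and 4."""
    if connector not in (1, 2):
        raise ValueError("expected I/O connector 1 or 2")
    return (2 * connector - 1, 2 * connector)

print(io_bays_for_connector(1))  # (1, 2)
print(io_bays_for_connector(2))  # (3, 4)
```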

5.6.1 Form factor


The I/O adapters attach to a compute node through a high-density 216-pin Molex PCIe connector. Currently the IBM Flex System compute nodes support only one form factor for I/O adapters. A typical I/O adapter is shown in Figure 5-60.

Figure 5-60 I/O adapter (the callouts in the figure identify the PCIe connector, the midplane connector, and the guide block that ensures correct installation)

Adapters share a common size (96.7 mm x 84.8 mm).

234

IBM PureFlex System and IBM Flex System Products and Technology

5.6.2 Naming structure


Figure 5-61 shows the naming structure for the I/O adapters. Using the IBM Flex System EN2092 1 Gb Ethernet Scalable Switch as an example, the name EN2092 decodes as follows:
- Fabric type: EN = Ethernet, FC = Fibre Channel, CN = Converged Network, IB = InfiniBand
- Series: 2 for 1 Gb, 3 for 8 Gb, 4 for 10 Gb, 5 for 16 Gb, 6 for InfiniBand
- Vendor name where A=01: 02 = Brocade, 09 = IBM, 13 = Mellanox, 17 = QLogic
- Maximum number of partitions: 2 = 2 partitions

Figure 5-61 The naming structure for the I/O adapters
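As a worked example of the naming scheme, the following Python sketch (our own illustration, not an IBM utility) splits a name such as EN2092 into its four fields. The field tables are transcribed from the naming structure above:

```python
import re

# Field tables transcribed from the naming structure in Figure 5-61.
FABRICS = {"EN": "Ethernet", "FC": "Fibre Channel",
           "CN": "Converged Network", "IB": "InfiniBand"}
SERIES = {"2": "1 Gb", "3": "8 Gb", "4": "10 Gb",
          "5": "16 Gb", "6": "InfiniBand"}
VENDORS = {"02": "Brocade", "09": "IBM", "13": "Mellanox", "17": "QLogic"}

def decode(name):
    """Split a name such as 'EN2092' into fabric, series, vendor, partitions."""
    m = re.fullmatch(r"([A-Z]{2})(\d)(\d{2})(\d)", name)
    if m is None:
        raise ValueError("unexpected name format: " + name)
    fabric, series, vendor, partitions = m.groups()
    return (FABRICS[fabric], SERIES[series], VENDORS[vendor], int(partitions))

print(decode("EN2092"))  # ('Ethernet', '1 Gb', 'IBM', 2)
print(decode("FC3172"))  # ('Fibre Channel', '8 Gb', 'QLogic', 2)
```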

5.6.3 Supported compute nodes


Table 5-62 lists the available I/O adapters and their compatibility with compute nodes.
Table 5-62 I/O adapter compatibility matrix: compute nodes

System x     Power    I/O adapters                          x220  x240  p260  p460  Page
part number  feature
             code
Ethernet adapters
49Y7900      1763     EN2024 4-port 1Gb Ethernet Adapter    Yes   Yes   Yes   Yes   236
90Y3466      None     EN4132 2-port 10 Gb Ethernet Adapter  Yes   Yes   No    No    238
None         1762     EN4054 4-port 10Gb Ethernet Adapter   No    No    Yes   Yes   240
90Y3554      None     CN4054 10Gb Virtual Fabric Adapter    Yes   Yes   No    No    242
Fibre Channel adapters
69Y1938      1764     FC3172 2-port 8Gb FC Adapter          Yes   Yes   Yes   Yes   246
95Y2375      None     FC3052 2-port 8Gb FC Adapter          Yes   Yes   No    No    247
88Y6370      None     FC5022 2-port 16Gb FC Adapter         Yes   Yes   No    No    249
InfiniBand adapters
90Y3454      None     IB6132 2-port FDR InfiniBand Adapter  Yes   Yes   No    No    251
None         1761     IB6132 2-port QDR InfiniBand Adapter  No    No    Yes   Yes   253
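For scripted configuration checks, the matrix in Table 5-62 can be encoded directly as data. The following Python sketch (an illustration, not IBM software) transcribes the Ethernet adapter rows of the matrix above:

```python
# Ethernet adapter rows of Table 5-62, transcribed as a lookup table.
SUPPORTED_NODES = {
    "EN2024 4-port 1Gb Ethernet Adapter":   {"x220", "x240", "p260", "p460"},
    "EN4132 2-port 10 Gb Ethernet Adapter": {"x220", "x240"},
    "EN4054 4-port 10Gb Ethernet Adapter":  {"p260", "p460"},
    "CN4054 10Gb Virtual Fabric Adapter":   {"x220", "x240"},
}

def is_supported(adapter, node):
    """True if Table 5-62 lists the adapter as supported in that compute node."""
    return node in SUPPORTED_NODES[adapter]

print(is_supported("EN4054 4-port 10Gb Ethernet Adapter", "p460"))  # True
print(is_supported("CN4054 10Gb Virtual Fabric Adapter", "p260"))   # False
```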


5.6.4 Supported switches


Table 5-63 lists which switches support the available I/O adapters.
Table 5-63 I/O adapter compatibility matrix: switches

Ethernet adapters                            EN4093 10Gb       EN2092 1Gb        EN4091 10 Gb
(System x part number / Power FC)            Scalable Switch,  Ethernet Switch,  Ethernet Pass-thru,
                                             49Y4270           49Y4294           88Y6043
EN2024 4-port 1Gb (49Y7900 / 1763)           Yes               Yes               Yes
EN4132 2-port 10 Gb (90Y3466 / None)         Yes               No                Yes
EN4054 4-port 10Gb (None / 1762)             Yes               Yes               Yes
CN4054 10Gb Virtual Fabric (90Y3554 / None)  Yes               Yes               Yes

Fibre Channel adapters                       FC3171 8 Gb SAN   FC5022 16 Gb   FC5022 16Gb SAN   FC3171 8 Gb SAN
(System x part number / Power FC)            Pass-thru,        ESB Switch,    Scalable Switch   Switch,
                                             69Y1934           90Y9356                          69Y1930
FC3172 2-port 8Gb (69Y1938 / 1764)           Yes               Yes            Yes               Yes
FC3052 2-port 8Gb (95Y2375 / None)           Yes               Yes            Yes               Yes
FC5022 2-port 16Gb (88Y6370 / None)          No                Yes            Yes               No

InfiniBand adapters                          IB6131 InfiniBand Switch,
(System x part number / Power FC)            90Y3450
IB6132 2-port FDR (90Y3454 / None)           Yes
IB6132 2-port QDR (None / 1761)              Yes

5.6.5 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter


The IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter is a quad-port network adapter from Broadcom. It provides 1 Gb per second, full duplex, Ethernet links between a compute node and Ethernet switch modules installed in the chassis. The adapter interfaces to the compute node by using the Peripheral Component Interconnect Express (PCIe) bus. Table 5-64 lists the ordering part number and feature code.
Table 5-64 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter ordering information

Part number  System x feature code  Power feature code  Description
49Y7900      A1BR                   1763                EN2024 4-port 1Gb Ethernet Adapter



The adapter is supported in compute nodes as listed in Table 5-65.


Table 5-65 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter supported servers

System x     Power feature  I/O adapter                         x240  p260  p460
part number  code
49Y7900      1763           EN2024 4-port 1Gb Ethernet Adapter  Yes   Yes   Yes

The adapter supports the switches listed in Table 5-66.


Table 5-66 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter supported switches

System x     Power  I/O adapter                         EN4093 10Gb       EN2092 1Gb        EN4091 10 Gb
part number  FC                                         Scalable Switch,  Ethernet Switch,  Ethernet Pass-thru,
                                                        49Y4270           49Y4294           88Y6043
49Y7900      1763   EN2024 4-port 1Gb Ethernet Adapter  Yes               Yes               Yes

The EN2024 4-port 1Gb Ethernet Adapter has the following features:
- Dual Broadcom BCM5718 ASICs
- Quad-port Gigabit 1000BASE-X interface
- Two PCI Express 2.0 x1 host interfaces, one per ASIC
- Full duplex (FDX) capability, enabling simultaneous transmission and reception of data on the Ethernet network
- MSI and MSI-X capabilities, up to 17 MSI-X vectors
- I/O virtualization support for VMware NetQueue and Microsoft VMQ
- Seventeen receive queues and 16 transmit queues
- Seventeen MSI-X vectors supporting per-queue interrupt to host
- Function Level Reset (FLR)
- ECC error detection and correction on internal static random-access memory (SRAM)
- TCP, IP, and UDP checksum offload
- Large send offload, TCP segmentation offload
- Receive-side scaling
- Virtual LANs (VLANs): IEEE 802.1q VLAN tagging
- Jumbo frames (9 KB)
- IEEE 802.3x flow control
- Statistic gathering (SNMP MIB II, Ethernet-like MIB [IEEE 802.3x, Clause 30])
- Comprehensive diagnostic and configuration software suite
- Advanced Configuration and Power Interface (ACPI) 1.1a-compliant: multiple power modes
- Wake-on-LAN (WOL) support


- Preboot Execution Environment (PXE) support
- RoHS-compliant

Figure 5-62 shows the IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter.

Figure 5-62 The EN2024 4-port 1Gb Ethernet Adapter for IBM Flex System

For more information, see the IBM Redbooks Product Guide for the EN2024 4-port 1Gb Ethernet Adapter, available at: http://www.redbooks.ibm.com/abstracts/tips0845.html?Open

5.6.6 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter


The IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter from Mellanox provides the highest performing and most flexible interconnect solution for servers used in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Table 5-67 lists the ordering information.
Table 5-67 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter ordering information

Part number  System x feature code  Power feature code  Description
90Y3466      A1QY                   None                EN4132 2-port 10 Gb Ethernet Adapter

The adapter is supported in compute nodes as listed in Table 5-68.


Table 5-68 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter supported servers

System x     Power feature  I/O adapter                           x240  p260  p460
part number  code
90Y3466      None           EN4132 2-port 10 Gb Ethernet Adapter  Yes   No    No


Restriction: This I/O adapter is currently not supported on the p260 and p460. Use the IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter instead.

The adapter supports the switches listed in Table 5-69.

Table 5-69 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter supported switches

System x     Power  I/O adapter                           EN4093 10Gb       EN2092 1Gb        EN4091 10 Gb
part number  FC                                           Scalable Switch,  Ethernet Switch,  Ethernet Pass-thru,
                                                          49Y4270           49Y4294           88Y6043
90Y3466      None   EN4132 2-port 10 Gb Ethernet Adapter  Yes               No                Yes

The IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter has the following features:
- Based on Mellanox ConnectX-3 technology
- IEEE Std. 802.3 compliant
- PCI Express 3.0 (1.1 and 2.0 compatible) through an x8 edge connector, up to 8 GT/s
- 10 Gbps Ethernet
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- RDMA over Converged Ethernet (RoCE)
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Ethernet encapsulation using Ethernet over InfiniBand (EoIB)
- RoHS-6 compliant


Figure 5-63 shows the IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter.

Figure 5-63 The EN4132 2-port 10 Gb Ethernet Adapter for IBM Flex System

For more information, see the IBM Redbooks Product Guide for the EN4132 2-port 10 Gb Ethernet Adapter at: http://www.redbooks.ibm.com/abstracts/tips0873.html?Open

5.6.7 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter


The IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter from Emulex enables the installation of four 10 Gb ports of high-speed Ethernet into an IBM Power Systems compute node. These ports interface to chassis switches or pass-through modules, enabling connections within and external to the IBM Flex System Enterprise Chassis. The firmware for this four-port adapter is provided by Emulex, whereas the AIX driver and AIX tool support are provided by IBM. Table 5-70 lists the ordering information.
Table 5-70 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter ordering information

Part number  System x feature code  Power feature code  Description
None         None                   1762                EN4054 4-port 10Gb Ethernet Adapter

The adapter is supported in compute nodes as listed in Table 5-71.


Table 5-71 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter supported servers

System x     Power feature  I/O adapter                          x240  p260  p460
part number  code
None         1762           EN4054 4-port 10Gb Ethernet Adapter  No    Yes   Yes

Restriction: This I/O adapter is not supported on the x240. Use the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter instead.


The adapter supports the switches listed in Table 5-72.


Table 5-72 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter supported switches

System x     Power  I/O adapter                          EN4093 10Gb       EN2092 1Gb        EN4091 10 Gb
part number  FC                                          Scalable Switch,  Ethernet Switch,  Ethernet Pass-thru,
                                                         49Y4270           49Y4294           88Y6043
None         1762   EN4054 4-port 10Gb Ethernet Adapter  Yes               Yes               Yes

The IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter has the following features and specifications:
- Four-port 10 Gb Ethernet adapter
- Dual-ASIC Emulex BladeEngine 3 controller
- Connection to either 1 Gb or 10 Gb data center infrastructure (1 Gb and 10 Gb auto-negotiation)
- PCI Express 3.0 x8 host interface (the p260 and p460 support PCI Express 2.0 x8)
- Full-duplex capability
- Bus-mastering support
- Direct memory access (DMA) support
- PXE support
- IPv4/IPv6 TCP, UDP checksum offload:
  - Large send offload
  - Large receive offload
  - Receive-Side Scaling (RSS)
  - IPv4 TCP Chimney offload
  - TCP segmentation offload
- VLAN insertion and extraction
- Jumbo frames up to 9000 bytes
- Load balancing and failover support, including adapter fault tolerance (AFT), switch fault tolerance (SFT), adaptive load balancing (ALB), teaming support, and IEEE 802.3ad
- Enhanced Ethernet (draft):
  - Enhanced Transmission Selection (ETS) (P802.1Qaz)
  - Priority-based Flow Control (PFC) (P802.1Qbb)
  - Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX, and CEE-DCBX (P802.1Qaz)
- Serial over LAN (SoL) support
- Total maximum power: 23.1 W


Figure 5-64 shows the IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter.

Figure 5-64 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter, available at: http://www.redbooks.ibm.com/abstracts/tips0868.html?Open

5.6.8 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter


The IBM Flex System CN4054 10 Gb Virtual Fabric Adapter from Emulex is a 4-port 10 Gb converged network adapter. It can scale to up to 16 virtual ports and support multiple protocols like Ethernet, iSCSI, and FCoE. Table 5-73 lists the ordering part numbers and feature codes.
Table 5-73 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter ordering information

System x     System x      Power feature  Description
part number  feature code  code
90Y3554      A1R1          None           IBM Flex System CN4054 10 Gb Virtual Fabric Adapter
90Y3558      A1R0          None           IBM Flex System CN4054 Virtual Fabric Adapter Upgrade


The adapter and upgrade are supported in compute nodes as listed in Table 5-74.
Table 5-74 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter supported servers

System x     Power feature  I/O adapter                                            x240  p260  p460
part number  code
90Y3554      None           IBM Flex System CN4054 10 Gb Virtual Fabric Adapter    Yes   No    No
90Y3558      None           IBM Flex System CN4054 Virtual Fabric Adapter Upgrade  Yes   No    No

Note: This I/O adapter is not supported on the p260 and p460. Use the IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter instead.

The adapter supports the switches listed in Table 5-75.

Table 5-75 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter supported switches

System x     Power  I/O adapter                         EN4093 10Gb       EN2092 1Gb        EN4091 10 Gb
part number  FC                                         Scalable Switch,  Ethernet Switch,  Ethernet Pass-thru,
                                                        49Y4270           49Y4294           88Y6043
90Y3554      None   CN4054 10Gb Virtual Fabric Adapter  Yes               Yes               Yes

The IBM Flex System CN4054 10 Gb Virtual Fabric Adapter has the following features and specifications:
- Dual-ASIC Emulex BladeEngine 3 controller.
- Operates either as a 4-port 1/10 Gb Ethernet adapter, or supports up to 16 Virtual Network Interface Cards (vNICs).
- In virtual NIC (vNIC) mode, it supports:
  - Virtual port bandwidth allocation in 100 Mbps increments.
  - Up to 16 virtual ports per adapter (four per port).
  - With the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, four of the 16 vNICs (one per port) support iSCSI or FCoE.
  - Two vNIC modes: IBM Virtual Fabric Mode and Switch Independent Mode.
- Wake On LAN support.
- With the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, the adapter adds FCoE and iSCSI hardware initiator support. iSCSI support is implemented as a full offload and presents an iSCSI adapter to the operating system.
- TCP Offload Engine (TOE) support with Windows Server 2003, 2008, and 2008 R2 (TCP Chimney) and Linux. The connection and its state are passed to the TCP offload engine. Data transmit and receive are handled by the adapter.


  - Supported with iSCSI.
- Connection to either 1 Gb or 10 Gb data center infrastructure (1 Gb and 10 Gb auto-negotiation).
- PCI Express 3.0 x8 host interface.
- Full-duplex capability.
- Bus-mastering support.
- DMA support.
- PXE support.
- IPv4/IPv6 TCP, UDP checksum offload:
  - Large send offload
  - Large receive offload
  - RSS
  - IPv4 TCP Chimney offload
  - TCP segmentation offload

- VLAN insertion and extraction.
- Jumbo frames up to 9000 bytes.
- Load balancing and failover support, including AFT, SFT, ALB, teaming support, and IEEE 802.3ad.
- Enhanced Ethernet (draft):
  - Enhanced Transmission Selection (ETS) (P802.1Qaz)
  - Priority-based Flow Control (PFC) (P802.1Qbb)
  - Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX, and CEE-DCBX (P802.1Qaz)
- Serial over LAN (SoL) support.
- Total maximum power: 23.1 W.

The IBM Flex System CN4054 10 Gb Virtual Fabric Adapter supports the following modes of operation:

IBM Virtual Fabric Mode
This mode works only in conjunction with an IBM Flex System Fabric EN4093 10 Gb Scalable Switch installed in the chassis. In this mode, the adapter communicates with the switch module to obtain vNIC parameters by using Data Center Bridging Exchange (DCBX). A special tag within each data packet is added and later removed by the NIC and switch for each vNIC group. This tag helps maintain separation of the virtual channels.

In IBM Virtual Fabric Mode, each physical port is divided into four virtual ports, providing a total of 16 virtual NICs per adapter. The default bandwidth for each vNIC is 2.5 Gbps. Bandwidth for each vNIC can be configured at the EN4093 switch from 100 Mbps to 10 Gbps, up to a total of 10 Gb per physical port. The vNICs can also be configured to have 0 bandwidth if you must allocate the available bandwidth to fewer than eight vNICs. In IBM Virtual Fabric Mode, you can change the bandwidth allocations through the EN4093 switch user interfaces without having to reboot the server.

When storage protocols are enabled on the adapter by using the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, six ports are Ethernet, and two ports are either iSCSI or FCoE.
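The bandwidth rules described for IBM Virtual Fabric Mode lend themselves to a simple pre-deployment sanity check. The following Python sketch is our own illustration of those rules only (it is not IBM software; the actual enforcement happens in the EN4093 switch):

```python
# Per physical 10 Gb port in IBM Virtual Fabric Mode: up to four vNICs,
# each either 0 or between 100 Mbps and 10 Gbps in 100 Mbps increments,
# with the allocations summing to at most the 10 Gb physical port.
PORT_CAPACITY_MBPS = 10_000
INCREMENT_MBPS = 100
MAX_VNICS_PER_PORT = 4

def validate_port(vnic_mbps):
    """Check a list of per-vNIC bandwidths (in Mbps) for one physical port."""
    if len(vnic_mbps) > MAX_VNICS_PER_PORT:
        raise ValueError("a physical port carries at most four vNICs")
    for bw in vnic_mbps:
        if bw != 0 and not INCREMENT_MBPS <= bw <= PORT_CAPACITY_MBPS:
            raise ValueError("vNIC bandwidth must be 0, or 100 Mbps to 10 Gbps")
        if bw % INCREMENT_MBPS:
            raise ValueError("bandwidth is allocated in 100 Mbps increments")
    if sum(vnic_mbps) > PORT_CAPACITY_MBPS:
        raise ValueError("allocations exceed the 10 Gb physical port")
    return True

print(validate_port([2500, 2500, 2500, 2500]))  # default allocation -> True
```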


Switch Independent vNIC Mode
This vNIC mode is supported with the following switches:
- IBM Flex System Fabric EN4093 10 Gb Scalable Switch
- IBM Flex System EN4091 10 Gb Ethernet Pass-thru and a top-of-rack switch

Switch Independent Mode offers the same capabilities as IBM Virtual Fabric Mode in terms of the number of vNICs and the bandwidth that each can have. However, Switch Independent Mode extends the existing customer VLANs to the virtual NIC interfaces. The IEEE 802.1Q VLAN tag is essential to the separation of the vNIC groups by the NIC adapter or driver and the switch. The VLAN tags are added to the packet by the applications or drivers at each end station rather than by the switch.

Physical NIC (pNIC) mode
In pNIC mode, the expansion card can operate as a standard 10 Gbps or 1 Gbps 4-port Ethernet expansion card. When in pNIC mode, the expansion card functions with any of the following I/O modules:
- IBM Flex System Fabric EN4093 10 Gb Scalable Switch
- IBM Flex System EN4091 10 Gb Ethernet Pass-thru and a top-of-rack switch
- IBM Flex System EN2092 1 Gb Ethernet Scalable Switch

In pNIC mode, the adapter with the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, applied operates in traditional converged network adapter (CNA) mode. It operates with four ports of Ethernet and four ports of storage (iSCSI or FCoE) available to the operating system.

Figure 5-65 shows the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter.

Figure 5-65 The CN4054 10Gb Virtual Fabric Adapter for IBM Flex System

For more information, see the IBM Redbooks Product Guide for the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter, at: http://www.redbooks.ibm.com/abstracts/tips0868.html?Open


5.6.9 IBM Flex System FC3172 2-port 8 Gb FC Adapter


The IBM Flex System FC3172 2-port 8 Gb FC Adapter from QLogic enables high-speed access for IBM Flex System Enterprise Chassis compute nodes to connect to a Fibre Channel SAN. This adapter is based on the proven QLogic 2532 8 Gb ASIC design. It works with any of the 8 Gb or 16 Gb IBM Flex System Fibre Channel switch modules. Table 5-76 lists the ordering part number and feature code.
Table 5-76 IBM Flex System FC3172 2-port 8 Gb FC Adapter ordering information

Part number  Feature codes (a)  Description
69Y1938      A1BM / 1764        IBM Flex System FC3172 2-port 8 Gb FC Adapter

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The adapter is supported in compute nodes as listed in Table 5-77.


Table 5-77 IBM Flex System FC3172 2-port 8 Gb FC Adapter supported servers

System x     Power feature  I/O adapter                                    x240  p260  p460
part number  code
69Y1938      1764           IBM Flex System FC3172 2-port 8 Gb FC Adapter  Yes   Yes   Yes

The adapter supports the switches listed in Table 5-78.


Table 5-78 IBM Flex System FC3172 2-port 8 Gb FC Adapter supported switches

System x     Power  I/O adapter                   FC3171 8 Gb SAN  FC5022 16 Gb  FC5022 16Gb SAN  FC3171 8 Gb SAN
part number  FC                                   Pass-thru,       ESB Switch,   Scalable Switch  Switch,
                                                  69Y1934          90Y9356                        69Y1930
69Y1938      1764   FC3172 2-port 8Gb FC Adapter  Yes              Yes           Yes              Yes

The IBM Flex System FC3172 2-port 8 Gb FC Adapter has the following features:
- QLogic ISP2532 controller
- PCI Express 2.0 x4 host interface
- Bandwidth: 8 Gb per second maximum at half-duplex and 16 Gb per second maximum at full-duplex per port
- 8/4/2 Gbps auto-negotiation
- Support for FCP SCSI initiator and target operation
- Support for full-duplex operation
- Support for Fibre Channel protocol SCSI (FCP-SCSI) and Fibre Channel Internet protocol (FCP-IP)
- Support for point-to-point fabric connection (F-port fabric login)
- Support for Fibre Channel Arbitrated Loop (FC-AL) public loop profile (FL-Port login)
- Support for Fibre Channel services class 2 and 3


- Configuration and boot support in UEFI
- Power usage: 3.7 W typical
- RoHS 6 compliant

Figure 5-66 shows the IBM Flex System FC3172 2-port 8 Gb FC Adapter.

Figure 5-66 The IBM Flex System FC3172 2-port 8 Gb FC Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC3172 2-port 8 Gb FC Adapter, at: http://www.redbooks.ibm.com/abstracts/tips0867.html?Open

5.6.10 IBM Flex System FC3052 2-port 8 Gb FC Adapter


The IBM Flex System FC3052 2-port 8 Gb FC Adapter from Emulex provides compute nodes with high-speed access to a Fibre Channel SAN. This 2-port 8 Gb adapter is based on the Emulex 8 Gb Fibre Channel application-specific integrated circuits (ASIC). It uses industry-proven technology to provide high-speed, reliable access to SAN connected storage. The two ports enable redundant connections to the SAN, which can increase reliability and reduce downtime. Table 5-79 lists the ordering part number and feature code.
Table 5-79 IBM Flex System FC3052 2-port 8 Gb FC Adapter ordering information

System x     System x      Power feature  Description
part number  feature code  code
95Y2375      A2N5          None           IBM Flex System FC3052 2-port 8 Gb FC Adapter


The adapter is supported in compute nodes as listed in Table 5-80.


Table 5-80 IBM Flex System FC3052 2-port 8 Gb FC Adapter supported servers

System x     Power feature  I/O adapter                                    x240  p260  p460
part number  code
95Y2375      None           IBM Flex System FC3052 2-port 8 Gb FC Adapter  Yes   No    No

Restriction: This I/O adapter is not supported on the p260 and p460. Use the IBM Flex System FC3172 2-port 8 Gb FC Adapter instead.

The adapter supports the switches listed in Table 5-81.

Table 5-81 IBM Flex System FC3052 2-port 8 Gb FC Adapter supported switches

System x     Power  I/O adapter                   FC3171 8 Gb SAN  FC5022 16 Gb  FC5022 16Gb SAN  FC3171 8 Gb SAN
part number  FC                                   Pass-thru,       ESB Switch,   Scalable Switch  Switch,
                                                  69Y1934          90Y9356                        69Y1930
95Y2375      None   FC3052 2-port 8Gb FC Adapter  Yes              Yes           Yes              Yes

The IBM Flex System FC3052 2-port 8 Gb FC Adapter has the following features and specifications:
- Uses the Emulex Saturn 8 Gb Fibre Channel I/O Controller chip
- Multifunction PCIe 2.0 device with two independent FC ports
- Auto-negotiation between 2-Gbps, 4-Gbps, and 8-Gbps FC link attachments
- Complies with the PCIe base and CEM 2.0 specifications
- Enablement of high-speed and dual-port connection to a Fibre Channel SAN
- Comprehensive virtualization capabilities with support for N_Port ID Virtualization (NPIV) and Virtual Fabric
- Simplified installation and configuration by using common HBA drivers
- Common driver model that eases management and enables upgrades independent of HBA firmware
- Fibre Channel specifications:
  - Bandwidth: Burst transfer rate of up to 1600 MBps full-duplex per port
  - Support for point-to-point fabric connection: F-Port Fabric Login
  - Support for FC-AL and FC-AL-2 FL-Port Login
  - Support for Fibre Channel services class 2 and 3
- Single-chip design with two independent 8 Gbps serial Fibre Channel ports, each of which provides these features:
  - Reduced instruction set computer (RISC) processor
  - Integrated serializer/deserializer
  - Receive DMA sequencer
  - Frame buffer


- Onboard DMA: DMA controller for each port (transmit and receive)
- Frame buffer first in, first out (FIFO): Integrated transmit and receive frame buffer for each data channel

Figure 5-67 shows the IBM Flex System FC3052 2-port 8 Gb FC Adapter.

Figure 5-67 IBM Flex System FC3052 2-port 8 Gb FC Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC3052 2-port 8 Gb FC Adapter, at: http://www.redbooks.ibm.com/abstracts/tips0869.html?Open

5.6.11 IBM Flex System FC5022 2-port 16Gb FC Adapter


The network architecture on the IBM Flex System platform is designed to address network challenges. It gives you a scalable way to integrate, optimize, and automate your data center. The IBM Flex System FC5022 2-port 16Gb FC Adapter enables high-speed access to external SANs. This adapter is based on Brocade architecture, and offers end-to-end 16 Gb connectivity to SAN. It can auto-negotiate, and also work at 8 Gb and 4 Gb speeds. It has enhanced features like N-port trunking, and increased encryption for security. Table 5-82 lists the ordering part number and feature code.
Table 5-82 IBM Flex System FC5022 2-port 16Gb FC Adapter ordering information

System x     System x      Power feature  Description
part number  feature code  code
88Y6370      A1BP          None           IBM Flex System FC5022 2-port 16Gb FC Adapter


The adapter is supported in compute nodes as listed in Table 5-83.


Table 5-83 IBM Flex System FC5022 2-port 16Gb FC Adapter supported servers

System x     Power feature  I/O adapter                                    x240  p260  p460
part number  code
88Y6370      None           IBM Flex System FC5022 2-port 16Gb FC Adapter  Yes   No    No

Restriction: This I/O adapter is not supported on the p260 and p460. Use the IBM Flex System FC3172 2-port 8 Gb FC Adapter instead.

The adapter supports the switches listed in Table 5-84.

Table 5-84 IBM Flex System FC5022 2-port 16Gb FC Adapter supported switches

System x     Power  I/O adapter                    FC3171 8 Gb SAN  FC5022 16 Gb  FC5022 16Gb SAN  FC3171 8 Gb SAN
part number  FC                                    Pass-thru,       ESB Switch,   Scalable Switch  Switch,
                                                   69Y1934          90Y9356                        69Y1930
88Y6370      None   FC5022 2-port 16Gb FC Adapter  No               Yes           Yes              No

The IBM Flex System FC5022 2-port 16Gb FC Adapter has the following features:
- 16 Gbps Fibre Channel: use 16 Gbps bandwidth to eliminate internal oversubscription
- Investment protection with the latest Fibre Channel technologies
- Reduced numbers of ISL external switch ports, optics, cables, and power
- Over 500,000 IOPS per port, which maximizes transaction performance and density of VMs per compute node
- Achieves performance of 315,000 IOPS for Email Exchange and 205,000 IOPS for SQL Database
- Boot from SAN: automates SAN boot LUN discovery to simplify boot from SAN and reduce image management complexity
- Brocade Server Application Optimization (SAO): provides quality of service (QoS) levels assignable to VM applications
- Direct I/O: enables native (direct) I/O performance by allowing VMs to bypass the hypervisor and communicate directly with the adapter
- Brocade Network Advisor: simplifies and unifies the management of Brocade adapter, SAN, and LAN resources through a single pane of glass
- LUN masking: initiator-based LUN masking for storage traffic isolation
- NPIV: allows multiple host initiator N_Ports to share a single physical N_Port, dramatically reducing SAN hardware requirements
- Target Rate Limiting (TRL): throttles data traffic when accessing slower speed storage targets to avoid back-pressure problems
- RoHS-6 compliant


Figure 5-68 shows the IBM Flex System FC5022 2-port 16Gb FC Adapter.

Figure 5-68 IBM Flex System FC5022 2-port 16Gb FC Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC5022 2-port 16Gb FC Adapter, at: http://www.redbooks.ibm.com/abstracts/tips0891.html?Open

5.6.12 IBM Flex System IB6132 2-port FDR InfiniBand Adapter


InfiniBand is a high-speed server-interconnect technology that is ideally suited as the interconnect technology for access layer and storage components. It is designed for application and back-end IPC applications, for connectivity between application and back-end layers, and from back-end to storage layers. Through use of host channel adapters (HCAs) and switches, InfiniBand technology is used to connect servers with remote storage and networking devices, and other servers. It can also be used inside servers for interprocess communication (IPC) in parallel clusters.

The IBM Flex System IB6132 2-port FDR InfiniBand Adapter delivers low latency and high bandwidth for performance-driven server and storage clustering applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications can achieve significant performance improvements. These improvements in turn help reduce completion time and lower the cost per operation.

The IB6132 2-port FDR InfiniBand Adapter simplifies network deployment by consolidating clustering, communications, and management I/O, and helps provide enhanced performance in virtualized server environments. Table 5-85 lists the ordering part number and feature code.
Table 5-85 IBM Flex System IB6132 2-port FDR InfiniBand Adapter ordering information

System x     System x      Power feature  Description
part number  feature code  code
90Y3454      A1QZ          None           IBM Flex System IB6132 2-port FDR InfiniBand Adapter


The adapter is supported in compute nodes as listed in Table 5-86.


Table 5-86 IBM Flex System IB6132 2-port FDR InfiniBand Adapter supported servers

System x     Power feature  I/O adapter                                           x240  p260  p460
part number  code
90Y3454      None           IBM Flex System IB6132 2-port FDR InfiniBand Adapter  Yes   No    No

Restriction: This I/O adapter is not supported on the p260 and p460. Use the IBM Flex System IB6132 2-port QDR InfiniBand Adapter instead.

The adapter supports the switches listed in Table 5-87.

Table 5-87 IBM Flex System IB6132 2-port FDR InfiniBand Adapter supported switches

System x     Power  I/O adapter                           IB6131 InfiniBand Switch,
part number  FC                                           90Y3450
90Y3454      None   IB6132 2-port FDR InfiniBand Adapter  Yes

The IB6132 2-port FDR InfiniBand Adapter has the following features and specifications:
- Based on Mellanox ConnectX-3 technology
- Virtual Protocol Interconnect (VPI)
- InfiniBand Architecture Specification v1.2.1 compliant
- Supported InfiniBand speeds (auto-negotiated):
  - 1X/2X/4X SDR (2.5 Gbps per lane)
  - DDR (5 Gbps per lane)
  - QDR (10 Gbps per lane)
  - FDR10 (40 Gbps, 10 Gbps per lane)
  - FDR (56 Gbps, 14 Gbps per lane)
- IEEE Std. 802.3 compliant
- PCI Express 3.0 x8 host interface, up to 8 GT/s bandwidth
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- Unified Extensible Firmware Interface (UEFI)
- WoL
- RoCE
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Ethernet encapsulation (EoIB)



- RoHS-6 compliant
- Power consumption: typical 9.01 W, maximum 10.78 W

Figure 5-69 shows the IBM Flex System IB6132 2-port FDR InfiniBand Adapter.

Figure 5-69 IBM Flex System IB6132 2-port FDR InfiniBand Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System IB6132 2-port FDR InfiniBand Adapter, at: http://www.redbooks.ibm.com/abstracts/tips0872.html?Open

5.6.13 IBM Flex System IB6132 2-port QDR InfiniBand Adapter


The IBM Flex System IB6132 2-port QDR InfiniBand Adapter provides a high-performing and flexible interconnect solution for servers used in Enterprise Data Centers, High-Performance Computing, and Embedded environments. The adapter is based on Mellanox ConnectX-2 EN technology, which improves network performance by increasing available bandwidth to the processor, especially in virtualized server environments. Table 5-88 lists the ordering part number and feature code.
Table 5-88 IBM Flex System IB6132 2-port QDR InfiniBand Adapter ordering information

System x     System x      Power feature  Description
part number  feature code  code
None         None          1761           IB6132 2-port QDR InfiniBand Adapter

The adapter is supported in compute nodes as listed in Table 5-89.


Table 5-89 IBM Flex System IB6132 2-port QDR InfiniBand Adapter supported servers

System x     Power feature  I/O adapter                           x240  p260  p460
part number  code
None         1761           IB6132 2-port QDR InfiniBand Adapter  No    Yes   Yes


Restriction: This I/O adapter is not supported on the x240. Use the IBM Flex System IB6132 2-port FDR InfiniBand Adapter instead.

The adapter supports the switches listed in Table 5-90.

Table 5-90 IBM Flex System IB6132 2-port QDR InfiniBand Adapter supported switches

System x     Power  I/O adapter                           IB6131 InfiniBand Switch,
part number  FC                                           90Y3450
None         1761   IB6132 2-port QDR InfiniBand Adapter  Yes

The IBM Flex System IB6132 2-port QDR InfiniBand Adapter has the following features and specifications:
- ConnectX-2 based adapter
- Virtual Protocol Interconnect (VPI)
- InfiniBand Architecture Specification v1.2.1 compliant
- IEEE Std. 802.3 compliant
- PCI Express 2.0 (1.1 compatible) through an x8 edge connector, up to 5 GT/s
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- UEFI
- WoL
- RoCE
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- RoHS-6 compliant

254

IBM PureFlex System and IBM Flex System Products and Technology

IB6131 InfiniBand Switch, 90Y3450 Yes

Figure 5-70 shows the IBM Flex System IB6132 2-port QDR InfiniBand Adapter.

Figure 5-70 IBM Flex System IB6132 2-port QDR InfiniBand Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System IB6132 2-port QDR InfiniBand Adapter, at: http://www.redbooks.ibm.com/abstracts/tips0890.html?Open


Chapter 6. Network integration
This chapter describes different aspects of planning and implementing a network infrastructure for the IBM Flex System Enterprise Chassis. You need to take several factors into account to achieve a successful implementation. These factors include network management, performance, high availability and redundancy features, VLAN implementation, and interoperability.

This chapter includes the following sections:
- 6.1, Ethernet switch module selection on page 258
- 6.2, Scalable switches on page 258
- 6.3, VLAN on page 260
- 6.4, High availability and redundancy on page 261
- 6.5, Performance on page 266
- 6.6, IBM Virtual Fabric Solution on page 267
- 6.7, VMready on page 270

Copyright IBM Corp. 2012. All rights reserved.


6.1 Ethernet switch module selection


Several I/O modules can be used to provide network connectivity. They include Ethernet switch modules, which provide integrated switching capabilities, and pass-through modules, which make internal compute node ports available to the outside.

Plan to use the Ethernet switch modules whenever possible, because they often provide the required functions with simplified cabling. However, some circumstances, such as specific security policies or certain network requirements, prevent the use of integrated switching capabilities. In these cases, use pass-through modules. For more information about the Ethernet pass-through module for the Enterprise Chassis, see 4.10.5, IBM Flex System EN4091 10 Gb Ethernet Pass-thru on page 100.

Make sure that the external interface ports of the selected switches are compatible with the physical cabling that you use or plan to use in your data center. Also make sure that the features and functions required in the network are supported by the proposed switch modules.

Table 6-1 lists common considerations that are useful when selecting an appropriate switch module.
Table 6-1 Switch module selection criteria

Requirement                                                  EN2092 1Gb        EN4093 10Gb
                                                             Ethernet Switch   Scalable Switch
Gigabit Ethernet to nodes / 10 Gb Ethernet uplinks           Yes               Yes
10 Gb Ethernet to nodes / 10 Gb Ethernet uplinks             No                Yes
Basic Layer 2 switching (VLAN, port aggregation)             Yes               Yes
Advanced Layer 2 switching: IEEE features (Failover, QoS)    Yes               Yes
Layer 3 IPv4 switching (forwarding, routing, ACL filtering)  Yes               Yes
Layer 3 IPv6 switching (forwarding, routing, ACL filtering)  Yes               Yes
10 Gb Ethernet CEE/FCoE                                      No                Yes (a)
Switch stacking                                              No                Yes (a)
vNIC support                                                 No                Yes
VMready                                                      Yes               Yes

a. Support for Fibre Channel over Ethernet (FCoE) and switch stacking is planned for later in 2012.

6.2 Scalable switches


The switches that are installable within the Enterprise Chassis are scalable. Additional ports (or partitions) can be added as required, growing the switch to meet new requirements. The architecture allows for up to 16 scalable switch partitions within each chassis, with a total of four partitions per switch. The number of partitions is dictated by the specific I/O adapter and I/O module combination. The scalable switch module requires upgrades to enable partitioning.

Port upgrades to scalable switches are added through the Features on Demand (FoD) capability, so you can increase ports with no hardware changes. As each FoD upgrade is enabled, additional ports of the switch are activated. If the node has a suitable I/O adapter, the ports are available to the node. For more information about switch capability, see 4.10, I/O modules on page 92.

The example shown in Figure 6-1 is the EN4093 10Gb Scalable Switch. Fourteen internal ports are available in the base product together with 10 uplink ports. However, additional logical partitions can be enabled with FoD upgrades, each providing a further set of 14 internal ports.

Figure 6-1 Logical partitions for the IBM Flex System Fabric EN4093 10 Gb Scalable Switch. The figure shows three logical partitions of 14 internal ports each (42 10 Gb KR lanes in total) sharing a pool of uplink ports:
- Base switch: enables fourteen internal 10 Gb ports (one to each server) and ten external 10 Gb ports. Supports the 2-port 10 Gb LOM and Virtual Fabric capability.
- First upgrade via FoD: enables the second set of fourteen internal 10 Gb ports (one to each server) and two 40 Gb ports. Each 40 Gb port can be used as four 10 Gb ports. Supports the 4-port Virtual Fabric adapter.
- Second upgrade via FoD: enables the third set of fourteen internal 10 Gb ports (one to each server) and four external 10 Gb ports. Capable of supporting a six-port card in the future.
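The cumulative effect of the FoD upgrades can be modeled as a simple table. The following Python sketch is an illustration only, not IBM configuration tooling; the port counts are taken from the upgrade descriptions above.

```python
# Port counts for the EN4093 10Gb Scalable Switch at each Features on
# Demand (FoD) upgrade level, as described in the text. Illustrative
# model only, not an IBM management tool.
UPGRADES = [
    {"name": "Base switch",    "internal": 14, "external": 10},  # ten 10 Gb uplinks
    {"name": "First upgrade",  "internal": 14, "external": 2},   # two 40 Gb uplinks
    {"name": "Second upgrade", "internal": 14, "external": 4},   # four 10 Gb uplinks
]

def enabled_ports(level):
    """Return cumulative (internal, external) port counts for a given
    upgrade level: 0 = base only, 1 = first FoD upgrade, 2 = second."""
    internal = sum(u["internal"] for u in UPGRADES[: level + 1])
    external = sum(u["external"] for u in UPGRADES[: level + 1])
    return internal, external

for level in range(3):
    i, e = enabled_ports(level)
    print(f"{UPGRADES[level]['name']}: {i} internal, {e} external ports")
```

With both upgrades applied, all 42 internal 10 Gb KR lanes shown in the figure are active.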

Figure 6-2 shows a node that uses a two-port LAN on Motherboard (LOM). Port 1 is connected to the first switch, and the second port is connected to the second switch.

Figure 6-2 Switch to I/O module connections. The figure shows a node whose 10 Gb LOM connects port 1 to a 10 Gb Ethernet switch and port 2 to a second, optional 10 Gb Ethernet switch; a second I/O card can connect to switches 3 and 4.

Figure 6-3 shows a 4-port 10 Gb Ethernet adapter (IBM Flex System CN4054 10 Gb Virtual Fabric Adapter) and a 2-port Fibre Channel (FC) I/O Adapter (IBM Flex System FC5022 2-port 16Gb FC Adapter). These adapters deliver six fabrics to each node.

Figure 6-3 Six port connections to a six-fabric implementation of Ethernet combined with FC. The figure shows a node whose 4-port 10 Gb Ethernet card connects to two 10 Gb Ethernet switches (the second one optional) and whose 16 Gb FC card connects to two 16 Gb FC switches (the second one optional).
6.3 VLAN
VLANs are commonly used in a Layer 2 network to split groups of network users into manageable broadcast domains. They are also used to create logical segmentation of workgroups, and to enforce security policies among logical segments. VLAN considerations include the number and types of VLANs supported, the tagging protocols supported, and the configuration protocols implemented.

All switch modules for the Enterprise Chassis support the IEEE 802.1Q protocol for VLAN tagging.

Another use of 802.1Q VLAN tagging is to divide one physical Ethernet interface into several logical interfaces that belong to different VLANs. In other words, a compute node can send and receive tagged traffic from different VLANs on the same physical interface. This process can be done with network adapter management software, the same software that is used for network interface card (NIC) teaming, as described in 6.5.3, NIC teaming on page 267. Each logical interface displays as a separate network adapter in the operating system with its own set of characteristics, including IP addresses, protocols, and services.

Use several logical interfaces when an application requires more than two separate interfaces and you do not want to dedicate a whole physical interface to it. This might be the case if you do not have enough interfaces or the traffic is low. VLANs might also be helpful if you need to implement strict security policies for separating network traffic. Implementing such policies with VLANs might eliminate the need to implement Layer 3 routing in the network.

To ensure that the application supports logical interfaces, check the documentation for possible restrictions applied to NIC teaming configurations. Checking the documentation is especially important when implementing clustering solutions.
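The 802.1Q tagging mechanism itself is simple: a 4-byte tag (a 0x8100 EtherType followed by priority and VLAN ID fields) is inserted after the destination and source MAC addresses. The following Python sketch illustrates the frame format; it is a teaching example, not a driver implementation.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that identifies an 802.1Q tag

def add_vlan_tag(frame: bytes, vid: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination and source MAC
    addresses (the first 12 bytes of the Ethernet frame)."""
    if not 1 <= vid <= 4094:                 # VIDs 0 and 4095 are reserved
        raise ValueError("VLAN ID must be 1..4094")
    tci = (pcp << 13) | vid                  # 3-bit priority, 1-bit DEI, 12-bit VID
    tag = struct.pack("!HH", TPID_8021Q, tci)
    return frame[:12] + tag + frame[12:]

def strip_vlan_tag(frame: bytes) -> tuple:
    """Remove the 802.1Q tag, returning (vid, untagged_frame)."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == TPID_8021Q, "frame is not 802.1Q tagged"
    return tci & 0x0FFF, frame[:12] + frame[16:]

# A compute node sending tagged traffic for a VLAN on one physical port:
untagged = bytes(12) + b"\x08\x00" + b"payload"   # MACs + EtherType + data
tagged = add_vlan_tag(untagged, vid=100, pcp=3)
vid, restored = strip_vlan_tag(tagged)
print(vid, restored == untagged)                  # 100 True
```

Each logical interface in the operating system sends and receives frames carrying a different VID on the same physical port.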
For more information about Ethernet switch modules available with the Enterprise Chassis, see 4.10, I/O modules on page 92.


6.4 High availability and redundancy


You might need to have continuous access to your network services and applications. Providing high availability for client network resources is a complex task that involves fitting multiple pieces together on a hardware and a software level. One component of high availability is network infrastructure availability, which can be achieved by implementing certain techniques and technologies. Most of them are widely used standards, but some are specific to the Enterprise Chassis. This section addresses the most common technologies that can be implemented in an Enterprise Chassis environment to provide high availability for the network infrastructure.

In general, a typical LAN infrastructure consists of server NICs, client NICs, and network devices such as Ethernet switches and the cables that connect them. The potential failures in a network include port failures (both on switches and servers), cable failures, and network device failures.

To provide high availability and redundancy, avoid or minimize single points of failure. Provide redundancy for network equipment and communication links by using:
- Two Ethernet ports on each compute node (LOM-enabled node)
- Two or four I/O modules for each node (four on double-wide nodes)
- Two or four ports on I/O expansion cards on each compute node
- Two Ethernet switches per dual port for device redundancy

For more information about the connection topology between I/O adapters and I/O modules, see 4.10, I/O modules on page 92.

Implement technologies that provide automatic failover in case of any failure. Automatic failover can be configured by using certain feature protocols that are supported by the network devices, together with server-side software.

Consider implementing the following technologies, which can help achieve a higher level of availability in an Enterprise Chassis network solution (depending on your network architecture):
- Spanning Tree Protocol
- Layer 2 failover (also known as Trunk Failover)
- Virtual Link Aggregation Groups
- Virtual Router Redundancy Protocol
- A routing protocol such as Routing Information Protocol (RIP) or Open Shortest Path First (OSPF)


6.4.1 Redundant network topologies


The Enterprise Chassis can be connected to the enterprise network in several ways (Figure 6-4).

Figure 6-4 IBM redundant paths. The figure shows two topologies for connecting the chassis to a pair of top-of-rack (TOR) switches. In topology 1, chassis switch 1 connects through a trunk to TOR switch 1, and chassis switch 2 connects to TOR switch 2; each compute node NIC connects to a different chassis switch. In topology 2, each chassis switch connects to both TOR switches.
Topology 1 in Figure 6-4 has each switch module in the Enterprise Chassis directly connected to one of the top-of-rack switches through aggregated links that use some of the external ports on the switch. The specific number of external ports used for link aggregation depends on your redundancy requirements, performance considerations, and the actual network environment. This topology is the simplest way to integrate the Enterprise Chassis into an existing network, or to build a new one.

Topology 2 in Figure 6-4 has each switch module in the Enterprise Chassis with two direct connections to a pair of top-of-rack switches. This topology is more advanced, and has a higher level of redundancy. However, protocols such as Spanning Tree or Virtual Link Aggregation Groups must be implemented. Otherwise, network loops and broadcast storms might cause network failures.

6.4.2 Spanning Tree Protocol


Spanning Tree Protocol (STP) is an IEEE 802.1D standard protocol used in Layer 2 redundant network topologies. When multiple paths exist between two points on a network, STP or one of its enhanced variants can prevent broadcast loops. It can also ensure that the switch uses the most efficient network path, and enable automatic network reconfiguration in case of failure. For example, top-of-rack switches 1 and 2, together with switch 1 in the


Enterprise Chassis, create a loop in a Layer 2 network (see topology 2 in Figure 6-4 on page 262). In this case, use STP as a loop prevention mechanism because a Layer 2 network cannot operate with a loop. Assume that the link between TOR switch 2 and Enterprise Chassis switch 1 is disabled by STP to break the loop. Therefore, traffic goes through the link between TOR switch 1 and Enterprise Chassis switch 1. During a link failure, STP reconfigures the network and activates the previously disabled link. The reconfiguration can take tens of seconds, during which time the service is unavailable.

Whenever possible, plan to use trunking with VLAN tagging for interswitch connections. This configuration can help achieve higher performance by increasing interswitch bandwidth, and higher availability by providing redundancy for links in the aggregation bundle. For more information about trunking, see 6.5.1, Trunking on page 266.

STP modifications, such as Port Fast Forwarding or Uplink Fast, might help to improve STP convergence time and the performance of the network infrastructure. Additionally, several instances of STP can run on the same switch simultaneously, on a per-VLAN basis. That is, each VLAN has its own copy of STP to load balance traffic across uplinks more efficiently. For example, assume that a switch has two uplinks in a redundant loop topology and several VLANs are implemented. If a single STP instance is used, one of these uplinks is disabled and the other carries traffic from all VLANs. However, if two STP instances are running, one link is disabled for one set of VLANs while carrying traffic from another set of VLANs, and vice versa. In other words, both links are active, enabling more efficient use of available bandwidth.
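At the core of STP is the root bridge election: the bridge with the lowest bridge ID becomes the root, and ports are then blocked based on path cost toward it. The following Python sketch illustrates only the election step; real STP exchanges BPDUs and computes path costs, and the priorities and MAC addresses shown are hypothetical.

```python
# Illustrative sketch of the 802.1D root bridge election: the bridge
# with the lowest bridge ID (priority first, then MAC address as the
# tie-breaker) becomes the root of the spanning tree. Simplification
# only; real STP also computes port path costs from received BPDUs.

def elect_root(bridges):
    """bridges: list of (priority, mac) tuples; returns the root bridge.
    Lower priority wins; ties are broken by the lower MAC address."""
    return min(bridges, key=lambda b: (b[0], b[1]))

bridges = [
    (32768, "00:00:5e:00:01:10"),  # TOR switch 1 (default priority)
    (32768, "00:00:5e:00:01:20"),  # TOR switch 2
    (4096,  "00:00:5e:00:02:30"),  # switch with deliberately lowered priority
]
print(elect_root(bridges))  # the lowered-priority switch wins
```

Lowering the priority on the switch you want as root (rather than letting MAC addresses decide) is the usual way to make the blocked link deterministic.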

6.4.3 Layer 2 failover


Each compute node can have one IP address per Ethernet port, or one virtual NIC that consists of two or more physical interfaces with one IP address. This configuration is known as NIC teaming. From the Enterprise Chassis perspective, NIC teaming is useful when you plan to implement high availability configurations with automatic failover in case of internal or external uplink failures.

You can use only two ports on a compute node per virtual NIC for high availability configurations. One port is active, and the other is standby. One port is connected to a switch in I/O bay 1, and the other port to a switch in I/O bay 2. If you plan to use an Ethernet I/O adapter for high availability configurations, the same rules apply: connect the active and standby ports to switches in different bays.

During an internal port or link failure of the active NIC, the teaming driver switches the port roles: the standby port becomes active and the active port becomes standby. This process takes only a few seconds. After restoration of the failed link, the teaming driver can run a failback or do nothing, depending on the configuration.

Look at topology 1 in Figure 6-4 on page 262. Assume that NIC teaming is on, and that the compute node NIC port connected to switch 1 is active and the other is on standby. If something goes wrong with the internal link to switch 1, the teaming driver detects the NIC port failure and runs a failover. However, if external connections are lost, such as the connection from Enterprise Chassis switch 1 to top-of-rack switch 1, nothing happens: there is no failover, because the internal link is still up and the teaming driver does not detect any failure. Therefore, the network service becomes unavailable.

To address this issue, use the Layer 2 Failover technique. Layer 2 Failover can disable all internal ports on a switch module in the case of an upstream link failure. A disabled port

means no link, so the NIC teaming driver runs a failover. This process is a special feature supported on Enterprise Chassis switch modules. If Layer 2 Failover is enabled and you lose connectivity with top-of-rack switch 1, the NIC teaming driver runs a failover. Service is then available through top-of-rack switch 2 and Enterprise Chassis switch 2.

Use Layer 2 Failover with NIC active/standby teaming. Before using NIC teaming, verify whether it is supported by the operating system and the applications deployed.

Remember: Generally, do not use automatic failback for NIC teaming, to avoid issues when you replace a failed switch module. A newly installed switch module has no configuration data, and can cause service disruption.
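The interaction between trunk failover and the teaming driver can be made concrete with a small simulation. The following Python sketch is a hypothetical model, not vendor teaming software; it shows why the uplink failure goes undetected unless the switch takes its internal ports down.

```python
# Illustrative simulation of active/standby NIC teaming combined with
# the switch-side Layer 2 (trunk) failover feature described above:
# when a switch loses its uplinks, it disables its internal ports,
# which the teaming driver sees as a link failure on the active NIC.

class Switch:
    def __init__(self, name, uplink_up=True):
        self.name = name
        self.uplink_up = uplink_up

    def internal_port_up(self, trunk_failover=True):
        # With trunk failover enabled, losing the uplink takes the
        # internal (node-facing) ports down as well.
        return self.uplink_up or not trunk_failover

def active_nic(switch1, switch2, trunk_failover=True):
    """NIC 1 (to switch 1) is active; fail over to NIC 2 if its link drops."""
    if switch1.internal_port_up(trunk_failover):
        return "NIC1"
    if switch2.internal_port_up(trunk_failover):
        return "NIC2"
    return None

sw1, sw2 = Switch("bay1"), Switch("bay2")
print(active_nic(sw1, sw2))                        # NIC1: all links healthy

sw1.uplink_up = False                              # upstream failure at switch 1
print(active_nic(sw1, sw2, trunk_failover=True))   # NIC2: failover occurs
print(active_nic(sw1, sw2, trunk_failover=False))  # NIC1: failure goes undetected
```

The last line models the black-hole scenario in the text: without Layer 2 Failover, traffic keeps flowing to a switch that has lost its uplinks.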

6.4.4 Virtual Link Aggregation Groups


In many data center environments, downstream switches connect to upstream devices that consolidate traffic as shown in Figure 6-5.

Figure 6-5 Typical switching layers with STP and VLAG. The figure contrasts the two approaches at the aggregation layer: with STP, one of the redundant uplinks from each access layer switch is blocked to prevent implicit loops; with VLAG peers joined by an inter-switch link (ISL), both uplinks from the access layer remain active.

A switch in the access layer can be connected to more than one switch in the aggregation layer to provide network redundancy. Typically, STP is used to prevent broadcast loops, blocking redundant uplink paths. This protocol has the unwanted consequence of reducing the available bandwidth between the layers by as much as 50%. In addition, STP might be slow to resolve topology changes that occur during a link failure, and can result in considerable Media Access Control (MAC) address flooding. Using Virtual Link Aggregation Groups (VLAGs), the redundant uplinks remain active using all available bandwidth. Using the VLAG feature, the paired VLAG peers display to the downstream device as a single virtual entity for establishing a multi-port trunk. The VLAG-capable switches synchronize their logical view of the access layer port structure and internally prevent implicit loops. The VLAG topology also responds more quickly to link failure, and does not result in unnecessary MAC address flooding.


VLAGs are also useful in multi-layer environments for both uplink and downlink redundancy to any regular LAG-capable device as shown in Figure 6-6.

Figure 6-6 VLAG with multiple layers. The figure shows VLAG peer pairs, each joined by an ISL, at several levels: at the Layer 2/3 border facing LACP-capable routers (VLAGs 5 and 6), within a Layer 2/3 region with multiple levels (VLAGs 3 and 4), and at the access layer facing an LACP-capable switch and servers (VLAGs 1 and 2).

6.4.5 Virtual Router Redundancy Protocol


If you are integrating the Enterprise Chassis into a Layer 3 network with different subnets, routing, and routing protocols, some Layer 3 techniques can be used to provide highly available service to clients.

Traditionally, in multi-subnet IP networks, servers use IP default gateways to communicate with each other. In a redundant network, certain protocols are needed to keep the network available in case of a router failure. One of them is Virtual Router Redundancy Protocol (VRRP). VRRP enables redundant router configurations within a LAN, providing alternative router paths for a host to eliminate a single point of failure within the network. Each participating routing device with the VRRP function is configured with the same virtual router IPv4 address and ID number. One of the routing devices is elected as the master router and controls the shared virtual router IPv4 address. If the master fails, one of the backup routing devices takes control of the virtual router IPv4 address and actively processes the traffic addressed to it.

Currently, the switch modules use VRRP version 2, which supports only the IPv4 protocol. VRRP version 3, defined in RFC 5798, introduces support for IPv6 in addition to IPv4. However, the IPv6 implementation is not yet stable, so current switch operating systems do not support IPv6 for VRRP.

The IBM Flex System Fabric EN4093 10 Gb Scalable Switch and the IBM Flex System EN2092 1 Gb Ethernet Scalable Switch for the Enterprise Chassis both offer the VRRP function.
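The VRRP master election itself is a simple priority comparison. The following Python sketch models that one rule from VRRPv2 (RFC 3768): the highest priority wins, with ties broken by the higher primary IP address. The router addresses are hypothetical, and real VRRP involves advertisement timers and preemption rules not shown here.

```python
# Simplified sketch of VRRPv2 master election: the router with the
# highest priority becomes master (priority 255 is reserved for the
# owner of the virtual IPv4 address), and ties are broken by the
# higher primary IP address. Illustration only.

import ipaddress

def elect_master(routers):
    """routers: list of (priority, primary_ip) tuples, priority 1-254
    for backups or 255 for the virtual address owner."""
    return max(routers, key=lambda r: (r[0], ipaddress.IPv4Address(r[1])))

group = [
    (100, "192.0.2.1"),   # backup router
    (200, "192.0.2.2"),   # preferred master (higher priority)
]
print(elect_master(group))        # (200, '192.0.2.2')

group = [(100, "192.0.2.1"), (100, "192.0.2.9")]
print(elect_master(group))        # tie: the higher IP address wins
```

If the elected master stops sending advertisements, the election simply repeats among the remaining routers, which is how the backup takes over the virtual address.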


6.4.6 Routing protocols


A routing protocol is a protocol that specifies how routers communicate with each other. It disseminates information that enables them to select routes between any two nodes on a network. The choice of the route is done by routing algorithms. Typical standard routing protocols that exist in enterprise networks include Routing Information Protocol (RIP) and Open Shortest Path First (OSPF). Additionally, ISPs and other network service providers use Border Gateway Protocol (BGP).

6.5 Performance
Another major topic to be considered during network planning is network performance. Planning network performance is a complicated task, so the following sections provide guidance about the performance features of IBM Flex System network infrastructures. The commonly used features include link aggregation, jumbo frames, NIC Teaming, and network or server load balancing.

6.5.1 Trunking
Trunking (also commonly referred to as EtherChannel on Cisco switches) is a simple way to acquire more network bandwidth between switches. It is a technique that combines several physical links into one logical link to get more bandwidth. A trunk group also provides some level of redundancy for its physical links. That is, if one of the physical links in the trunk group fails, traffic is redistributed across the remaining functional links.

There are two main ways of establishing a trunk group: static and dynamic. Static trunk groups can mostly be used without any limitations, and they are simple and easy to manage. For dynamic trunk groups, the widely used protocol is Link Aggregation Control Protocol (LACP). This protocol is supported by the IBM Flex System EN2092 1 Gb Ethernet Scalable Switch and the IBM Flex System Fabric EN4093 10 Gb Scalable Switch.
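A trunk group spreads traffic by hashing frame fields so that each flow consistently uses one member link, which preserves frame ordering per flow. The following Python sketch illustrates the idea; the hash function and port names are hypothetical, and real switches use their own hashing algorithms (typically over MAC, IP, or port fields).

```python
# Illustrative sketch of flow distribution across a trunk (link
# aggregation) group: a hash of frame fields selects one physical
# link per flow, so a single flow stays in order while different
# flows spread across the members. Hypothetical hash, not a real
# switch algorithm.

def select_link(src_mac, dst_mac, active_links):
    """Pick a trunk member for a flow; all frames with the same
    src/dst pair always map to the same link."""
    if not active_links:
        raise RuntimeError("trunk group has no active links")
    h = hash((src_mac, dst_mac))
    return active_links[h % len(active_links)]

links = ["EXT-1", "EXT-2", "EXT-3", "EXT-4"]   # hypothetical uplink ports
flow = ("00:1a:64:aa:00:01", "00:1a:64:bb:00:02")

chosen = select_link(*flow, links)
assert select_link(*flow, links) == chosen     # same flow, same link

# If one member fails, the remaining links absorb its flows:
surviving = [l for l in links if l != chosen]
print(select_link(*flow, surviving))           # flow moves to a surviving link
```

This is also why a single flow never exceeds the speed of one member link: aggregation adds bandwidth across flows, not within one.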

6.5.2 Jumbo frames


Jumbo frames are used to speed up server network performance. Unlike the traditional Ethernet frame with a payload of up to 1.5 KB, Ethernet jumbo frames can carry up to 9 KB. The original 1.5 KB payload size was chosen because of the high error rates and low speed of early communications: if you receive a corrupted packet, only 1.5 KB must be resent to correct the error.

However, each frame requires that the network hardware and software process it. If the frame size is increased, the same amount of data can be transferred with less effort. This reduces processor utilization and increases throughput by allowing the system to concentrate on the data in the frames, instead of the frames around the data. Therefore, jumbo frames can speed up server network processing and provide better utilization of the network.

Jumbo frames must be supported by all network devices in the communication path. For example, if you plan to implement iSCSI storage with jumbo frames, all components, including server NICs, network switches, and storage system NICs, must support jumbo frames. The IBM Flex System EN2092 1 Gb Ethernet Scalable Switch and IBM Flex System Fabric EN4093 10 Gb Scalable Switch I/O modules support jumbo frames.
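The per-frame savings can be quantified with standard Ethernet overheads (not IBM-specific figures): every frame carries roughly 38 fixed bytes of preamble, header, FCS, and inter-frame gap regardless of payload size. The following Python sketch computes the resulting wire efficiency and frame counts.

```python
# Back-of-the-envelope calculation showing why jumbo frames improve
# efficiency: every Ethernet frame carries a fixed overhead of
# preamble + start-of-frame delimiter (8 bytes), Ethernet header (14),
# FCS (4), and the inter-frame gap (12) -- 38 bytes per frame.

PER_FRAME_OVERHEAD = 8 + 14 + 4 + 12  # bytes on the wire per frame

def wire_efficiency(payload_bytes):
    """Fraction of wire time spent carrying payload."""
    return payload_bytes / (payload_bytes + PER_FRAME_OVERHEAD)

def frames_needed(total_bytes, payload_bytes):
    """Frames (and thus per-frame processing events) to move total_bytes."""
    return -(-total_bytes // payload_bytes)  # ceiling division

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {wire_efficiency(mtu):.2%} efficient, "
          f"{frames_needed(1_000_000_000, mtu):,} frames per GB")
```

The efficiency gain is modest (roughly 97.5% versus 99.6%), but the six-fold reduction in frames per gigabyte is where the processor-utilization savings described above come from.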


6.5.3 NIC teaming


NIC teaming can be used for high availability purposes, but it can also be used to get more network bandwidth for specific servers by configuring separate network connections to act as a single high-bandwidth logical connection. The generic trunking and IEEE 802.3ad LACP modes of NIC teaming can both be used for interfaces connected to the same Ethernet switch module. When NICs are connected to different switch modules, you need to use different interfaces. For Windows, use the vendor-specific drivers and configuration tools. For Linux, use bonding modes 0 or 2.

For the Broadcom chip-based network adapter, the IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter, the teaming software for Windows operating systems is the Broadcom Advanced Server Program (BASP). BASP settings are configured with the Broadcom Advanced Control Suite (BACS) utility.

For the Emulex-based IBM Flex System CN4054 10 Gb Virtual Fabric Adapter and the LOM implementation of this network adapter, use the OneCommand Manager software to configure NIC teaming. OneCommand NIC Teaming (and Multiple VLAN Manager) is installed automatically when the Windows driver is installed.

For more information about each configuration tool, see the network adapter vendor's documentation.

6.5.4 Server Load Balancing


In a scale-out environment, the performance of network applications can be increased by implementing load balancing clusters. You can use the following methods:
- IP load balancing, such as Microsoft Network Load Balancing or Linux Virtual Server
- Application load balancing by using specific software features, such as IBM WebSphere Load Balancer
- Application load balancing by using network device hardware features, such as Server Load Balancing with third-party Layer 4 or Layer 7 Ethernet switches

Besides performance, Server Load Balancing also provides high availability by redistributing client requests to the operational servers in case of any server or application failure. Server Load Balancing uses a virtual server concept similar to the virtual router concept. Together with VRRP, it can provide an even higher level of availability for network applications. VRRP and Server Load Balancing can also be used for inter-chassis redundancy and even disaster recovery solutions.
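The two properties just described, distribution for performance and redistribution on failure, can be shown in a minimal sketch. The following Python example is a hypothetical illustration, not Microsoft Network Load Balancing or Linux Virtual Server; the server names are invented.

```python
# Minimal round-robin server load balancer sketch with health
# awareness: client requests are spread across a pool of real servers
# behind one virtual server, and flows are redistributed when a
# server fails a health check. Illustration only.

from itertools import count

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)   # the virtual server's real-server pool
        self.healthy = set(servers)
        self._turn = count()

    def mark_down(self, server):
        self.healthy.discard(server)   # e.g., a failed health check

    def route(self):
        """Pick the next healthy server, round-robin."""
        pool = [s for s in self.servers if s in self.healthy]
        if not pool:
            raise RuntimeError("no healthy servers in the pool")
        return pool[next(self._turn) % len(pool)]

lb = LoadBalancer(["web1", "web2", "web3"])
print([lb.route() for _ in range(3)])   # requests spread across all three

lb.mark_down("web2")                    # server or application failure
print([lb.route() for _ in range(4)])   # only web1 and web3 receive requests
```

Pairing such a balancer with VRRP, as the text suggests, protects the balancer itself as well as the servers behind it.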

6.6 IBM Virtual Fabric Solution


Currently, deployment of server virtualization technologies in data centers requires significant effort to provide sufficient network I/O bandwidth to satisfy the demands of virtualized applications and services. For example, every virtualized system can host several dozen network applications and services, and each of these services requires bandwidth to function properly. Furthermore, because of different network traffic patterns relevant to different service types, these traffic flows might interfere with each other. This interference can lead to serious network problems, including the inability of the service to run its functions. This type of interference becomes particularly important when I/O disk storage data traffic uses the same physical infrastructure (for example, iSCSI).

267

The IBM Virtual Fabric Virtual Network Interface Card (vNIC) solution addresses the issues described previously. The solution is based on a 10 Gb Converged Enhanced Ethernet infrastructure. It takes a 10 Gb port on a 10 Gb virtual fabric adapter and splits that physical port into four vNICs. Each vNIC, or virtual channel, can be allocated between 100 Mb and 10 Gb of bandwidth in increments of 100 Mb, and the total of all four vNICs cannot exceed 10 Gb.

The vNIC solution is a way to divide a physical NIC into smaller logical NICs (or to partition it), which gives the OS more possible ways to logically connect to the infrastructure. The vNIC feature is supported only on the 10 Gb ports of the EN4093 10Gb Scalable Switch that face the compute nodes within the chassis. It requires a node adapter, the CN4054 10Gb Virtual Fabric Adapter or the Embedded Virtual Fabric Adapter, that also supports this function.

Two primary forms of vNIC are available: Virtual Fabric mode (or switch dependent mode) and switch independent mode. The Virtual Fabric mode is further subdivided into two submodes: dedicated uplink vNIC mode and shared uplink vNIC mode.

These are some of the common elements of all vNIC modes:
- They are supported only on 10 Gb connections.
- Each mode allows a NIC to be divided into up to four vNICs per physical NIC (there can be fewer than four, but not more).
- All modes require an adapter that has support for one or more of the vNIC modes.
- When creating vNICs, the default bandwidth is 2.5 Gb for each vNIC. However, the bandwidth can be configured anywhere from 100 Mb up to the full bandwidth of the NIC.
- The bandwidth of all configured vNICs on a physical NIC cannot exceed 10 Gb.

Table 6-2 compares these modes, with details in the following sections.
Table 6-2 Attributes of vNIC modes

                                                    IBM Virtual Fabric mode    Switch
Capability                                          Dedicated      Shared      independent
                                                    uplink         uplink      mode
Requires support in the I/O module                  Yes            Yes         No
Requires support in the NIC                         Yes            Yes         Yes
Supports adapter transmit rate control              Yes            Yes         Yes
Supports I/O module transmit rate control           Yes            Yes         No
Supports changing rate dynamically                  Yes            Yes         No
Requires a dedicated uplink per vNIC group          Yes            No          No
Support for node OS-based tagging                   Yes            No          Yes
Support for failover per vNIC group                 Yes            Yes         No
Support for more than one uplink per vNIC group     No             Yes         Yes
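The bandwidth rules common to all vNIC modes can be captured in a small validation routine. The following Python sketch is an illustration of the stated constraints, not IBM management software; the function name and plans are hypothetical.

```python
# Validation sketch for the vNIC bandwidth rules stated above: up to
# four vNICs per 10 Gb physical port, each from 100 Mb to 10 Gb in
# 100 Mb increments, with the total not exceeding 10 Gb. The default
# allocation is 2.5 Gb per vNIC.

PORT_CAPACITY_MB = 10_000   # 10 Gb physical port, expressed in Mb
INCREMENT_MB = 100
MAX_VNICS = 4

def validate_vnic_plan(vnic_speeds_mb):
    if len(vnic_speeds_mb) > MAX_VNICS:
        raise ValueError("at most four vNICs per physical NIC")
    for speed in vnic_speeds_mb:
        if not (INCREMENT_MB <= speed <= PORT_CAPACITY_MB) or speed % INCREMENT_MB:
            raise ValueError(f"{speed} Mb: must be 100 Mb..10 Gb in 100 Mb steps")
    if sum(vnic_speeds_mb) > PORT_CAPACITY_MB:
        raise ValueError("vNIC bandwidth total exceeds the 10 Gb physical port")
    return True

print(validate_vnic_plan([2500, 2500, 2500, 2500]))  # the default split: True
print(validate_vnic_plan([6000, 2000, 1000, 1000]))  # uneven but valid: True
```

Note that in switch independent mode these limits shape only node transmit behavior, as Table 6-2 indicates; only the switch-dependent modes enforce them in both directions.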


6.6.1 Virtual Fabric mode vNIC


Virtual Fabric mode, or switch dependent mode, depends on the switch in the I/O module participating in the vNIC process. Specifically, the I/O module that supports this mode of operation in the Enterprise Chassis today is the IBM Flex System Fabric EN4093 10Gb Scalable Switch. The mode also requires an adapter on the node that supports the vNIC switch-dependent mode feature.

In switch dependent vNIC mode, the switch itself is configured. This configuration information is communicated between the switch and the adapter so that both sides agree on and enforce bandwidth controls. The bandwidth can be changed to different speeds at any time, without reloading either the OS or the I/O module.

As noted, there are two types of switch-dependent vNIC mode: dedicated uplink mode and shared uplink mode. Both modes have the concept of a vNIC group on the switch, which is used to associate vNICs and physical ports into virtual switches within the chassis. How these vNIC groups are used is the primary difference between dedicated uplink mode and shared uplink mode.

These are common attributes of the switch-dependent vNIC modes:
- They have the concept of a vNIC group that needs to be created on the I/O module.
- Like vNICs are bundled together into common vNIC groups.
- Each vNIC group is treated as a virtual switch within the I/O module. Packets in one vNIC group can reach a different vNIC group only by going through an external switch or router.
- For the purposes of Spanning Tree and packet flow, each vNIC group is treated as a unique switch by upstream switches and routers.
- Both modes support adding physical NICs (those from nodes not using vNIC) to vNIC groups. Adding NICs allows for internal communication to other physical NICs and vNICs in that vNIC group, and sharing of any uplink associated with that vNIC group.

Dedicated uplink mode


Dedicated uplink mode is the default mode when vNIC is enabled on the I/O module. In dedicated uplink mode, each vNIC group must have its own dedicated physical or logical (aggregation) uplink. It does not allow you to assign more than a single physical or logical uplink to a vNIC group. In addition, it assumes that high availability will be achieved by some combination of aggregation on the uplink and NIC teaming on the server. In this mode, vNIC groups are VLAN agnostic to the nodes and the rest of the network. This configuration means that you do not need to create VLANs for each VLAN used by the nodes. The vNIC group simply takes each packet, tagged or untagged, and moves it through the switch. This process is accomplished by the use of a form of Q-in-Q tagging. Each vNIC group is assigned a VLAN that is unique to that group. Any packet, tagged or untagged, that comes in on a port in that vNIC group gets a tag placed on it equal to the vNIC group VLAN. As that packet leaves the vNIC, the tag is stripped off, revealing the original tag (or no tag, depending on the original packet).
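The Q-in-Q behavior just described can be sketched abstractly: an outer tag (the vNIC group VLAN) is pushed at ingress without disturbing any customer 802.1Q tag inside, and popped at egress to reveal the original frame. The following Python example is a conceptual illustration, not switch firmware, and uses dictionaries rather than real frame bytes.

```python
# Conceptual sketch of the Q-in-Q handling in dedicated uplink mode:
# the vNIC group VLAN is pushed as an outer tag on every ingress
# packet (tagged or untagged) and stripped on egress, restoring the
# original packet exactly.

def push_outer_tag(packet, group_vlan):
    """Ingress to the vNIC group: wrap the packet in the group VLAN."""
    return {"outer_vlan": group_vlan, "inner": packet}

def pop_outer_tag(qinq_packet):
    """Egress from the vNIC group: strip the outer tag."""
    return qinq_packet["inner"]

tagged_from_node = {"vlan": 200, "data": "payload"}     # node tagged VLAN 200
untagged_from_node = {"vlan": None, "data": "payload"}  # untagged node traffic

for pkt in (tagged_from_node, untagged_from_node):
    in_group = push_outer_tag(pkt, group_vlan=100)      # vNIC group VLAN 100
    assert pop_outer_tag(in_group) == pkt               # original restored
    print(in_group["outer_vlan"], pop_outer_tag(in_group)["vlan"])
```

This is why dedicated uplink mode is VLAN agnostic: the inner tag is opaque to the switch, so no per-VLAN configuration is needed on the vNIC group.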

Shared uplink mode


Shared uplink mode is a global option that can be enabled on an I/O module that has vNIC enabled. Changing the I/O module to shared uplink mode allows an uplink to be shared among vNIC groups, which reduces the number of uplinks required. It also changes the way the vNIC groups process packets for tagging. In shared uplink mode, the servers no longer use tags. Instead, the vNIC group VLAN acts as the tag that is placed on the packet. When a server sends a packet and it gets to the vNIC group, it gets a tag placed on it equal to the vNIC group VLAN. The packet is then sent out the uplink tagged with that VLAN. This approach is illustrated in Figure 6-7.

Figure 6-7 IBM Virtual Fabric vNIC shared uplink mode. (The figure shows a compute node running VMware ESX with four vSwitches, each mapped through a vNIC on the 10 Gb NIC into vNIC groups 1 - 4 on the EN4093 10Gb Scalable Switch. The groups are tagged with VLANs 100 - 400 and sent out the external ports.)

6.6.2 Switch independent mode vNIC


Switch independent mode vNIC is accomplished strictly on the node itself. The I/O module is unaware of this virtualization, and acts as a normal switch in all ways. This mode is enabled at the node directly, and has rules similar to dedicated vNIC mode regarding how you can divide the vNIC. However, any bandwidth settings are limited to how the node sends traffic, not how the I/O module sends traffic back to the node. These settings cannot be changed in real time because changing them requires a reload.

Ultimately, which mode is best for a user depends on their requirements. Virtual Fabric dedicated uplink mode offers the most control, and switch independent mode offers the most flexibility with uplink connectivity.
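A node-side bandwidth split of a 10 Gb port might be validated as in the sketch below. The rules encoded here (up to four vNICs per physical port, shares that cannot exceed the port's capacity) are assumptions made for illustration, not taken from IBM documentation:

```python
# Hypothetical sketch: validates a switch-independent vNIC split on the node
# side only. Assumes up to four vNICs per 10 Gb port and that their
# minimum-bandwidth shares cannot exceed 100% of the port.

def validate_split(shares_pct, max_vnics=4, port_gbps=10.0):
    """Return the per-vNIC bandwidth in Gbps, or raise if the split is invalid."""
    if len(shares_pct) > max_vnics:
        raise ValueError("too many vNICs for one physical port")
    if sum(shares_pct) > 100:
        raise ValueError("shares exceed the physical port's capacity")
    return [pct / 100 * port_gbps for pct in shares_pct]

assert validate_split([40, 30, 20, 10]) == [4.0, 3.0, 2.0, 1.0]
```

Note that, as the text says, such settings shape only what the node sends; they say nothing about traffic the I/O module sends back.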

6.7 VMready
VMready is a unique solution that enables the network to be virtual machine aware. The network can be configured and managed for virtual ports (v-ports) rather than just for physical ports. VMready allows for a define-once-use-many configuration: the network attributes are bundled with a v-port. The v-port belongs to a VM and is movable. Wherever the VM migrates, even to a different physical host, the network attributes of the v-port remain the same.

The hypervisor manages the various virtual entities (VEs) on the host server: virtual machines (VMs), virtual switches, and so on. Currently, the VMready function supports up to 2048 VEs in a virtualized data center environment. The switch automatically discovers the VEs attached to switch ports, and distinguishes between regular VMs, Service Console Interfaces, and Kernel/Management Interfaces in a VMware environment.

VEs can be placed into VM groups on the switch to define communication boundaries. VEs in the same VM group can communicate with each other, whereas VEs in different groups cannot. VM groups also allow for configuring group-level settings such as virtualization policies and access control lists (ACLs). The administrator can also pre-provision VEs by adding their MAC addresses (or their IPv4 addresses or VM names in a VMware environment) to a VM group. When a VE with a pre-provisioned MAC address becomes connected to the switch, the switch automatically applies the appropriate group membership configuration.

In addition, VMready together with IBM NMotion allows seamless migration or failover of VMs to different hypervisor hosts, preserving network connectivity configurations. VMready works with all major virtualization products, including VMware, Hyper-V, Xen, KVM, and Oracle VM, without modification of virtualization hypervisors or guest operating systems.

A VMready switch can also connect to a virtualization management server to collect configuration information about associated VEs. It can automatically push VM group configuration profiles to the virtualization management server. This process in turn configures the hypervisors and VEs, providing enhanced VE mobility. VMready is supported on both the IBM Flex System Fabric EN4093 10Gb Scalable Switch and the IBM Flex System EN2092 1Gb Ethernet Scalable Switch.
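The VM group behavior described above (communication boundaries plus pre-provisioned MAC addresses) can be modeled as follows. This is an illustrative sketch, not VMready code; the group names and MAC addresses are invented:

```python
# Hypothetical model of VMready VM groups: VEs in the same group can talk,
# VEs in different groups cannot, and a pre-provisioned MAC address joins
# its group automatically when the VE appears on a switch port.

class VMGroup:
    def __init__(self, name):
        self.name = name
        self.macs = set()            # VEs currently connected
        self.preprovisioned = set()  # MACs configured before the VE attaches

switch_groups = [VMGroup("web"), VMGroup("db")]
switch_groups[1].preprovisioned.add("00:50:56:aa:bb:cc")

def ve_attached(groups, mac):
    """Apply group membership automatically if the MAC was pre-provisioned."""
    for g in groups:
        if mac in g.preprovisioned:
            g.macs.add(mac)
            return g.name
    return None

def may_communicate(groups, mac_a, mac_b):
    return any(mac_a in g.macs and mac_b in g.macs for g in groups)

switch_groups[0].macs.update({"00:50:56:00:00:01", "00:50:56:00:00:02"})
assert ve_attached(switch_groups, "00:50:56:aa:bb:cc") == "db"
assert may_communicate(switch_groups, "00:50:56:00:00:01", "00:50:56:00:00:02")
assert not may_communicate(switch_groups, "00:50:56:00:00:01", "00:50:56:aa:bb:cc")
```

Because membership follows the MAC address rather than the physical port, the boundary travels with the VM when it migrates, which is the define-once-use-many idea.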


Chapter 7. Storage integration

IBM Flex System Enterprise Chassis offers several possibilities for integration into storage infrastructure, such as Fibre Channel, iSCSI, and Converged Enhanced Ethernet. This chapter addresses major considerations to take into account during IBM Flex System Enterprise Chassis storage infrastructure planning. These considerations include storage system interoperability, I/O module selection and interoperability rules, performance, high availability and redundancy, backup, and boot from SAN.

This chapter includes the following sections:
- 7.1, External storage on page 274
- 7.2, Fibre Channel on page 281
- 7.3, iSCSI on page 286
- 7.4, High availability and redundancy on page 287
- 7.5, Performance on page 288
- 7.6, Backup solutions on page 289
- 7.7, Boot from SAN on page 291
- 7.8, Converged networks on page 292

Copyright IBM Corp. 2012. All rights reserved.


7.1 External storage


There are several options for attaching external storage systems to the Enterprise Chassis:
- Storage area networks (SANs) based on Fibre Channel (FC) technologies
- SANs based on iSCSI
- Converged networks based on 10 Gb Converged Enhanced Ethernet (CEE)

Traditionally, Fibre Channel-based SANs are the most common and advanced design of external storage infrastructure. They provide high levels of performance, availability, redundancy, and scalability. However, the cost of implementing FC SANs is higher in comparison with CEE or iSCSI. Almost every FC SAN includes these major components:
- Host bus adapters (HBAs)
- FC switches
- FC storage servers
- FC tape devices
- Optical cables for connecting these devices to each other

iSCSI-based SANs provide all the benefits of centralized shared storage in terms of storage consolidation and adequate levels of performance. However, they use traditional IP-based Ethernet networks instead of expensive optical cabling. iSCSI SANs consist of these components:
- Server hardware
- iSCSI adapters or software iSCSI initiators
- Traditional network components such as switches and routers
- Storage servers with an iSCSI interface, such as IBM System Storage DS3500 or IBM N series

Converged networks can carry both SAN and LAN types of traffic over the same physical infrastructure. Consolidation allows you to decrease costs and increase efficiency in building, maintaining, operating, and managing the networking infrastructure. iSCSI, FC-based SANs, and converged networks can be used for diskless solutions to provide greater levels of utilization, availability, and cost effectiveness.
These IBM storage products that are supported with the Enterprise Chassis are addressed:
- IBM Storwize V7000
- IBM XIV Storage System series
- IBM System Storage DS8000 series
- IBM System Storage DS5000 series
- IBM System Storage DS3000 series
- IBM System Storage N series
- IBM System Storage TS3500 Tape Library
- IBM System Storage TS3310 Tape Library
- IBM System Storage TS3100 Tape Library

For the latest support matrixes for storage products, see the storage vendor interoperability guides. IBM storage products can be referenced in the System Storage Interoperability Center (SSIC):
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss


7.1.1 IBM Storwize V7000


IBM Storwize V7000 is an innovative storage offering that delivers essential storage efficiency technologies and exceptional ease of use and performance. It is integrated into a compact, modular design. Scalable solutions require highly flexible systems. In a truly virtualized environment, you need virtualized storage. All Storwize V7000 storage is virtualized. The Storwize V7000 offers the following features:
- Enables rapid, flexible provisioning and simple configuration changes
- Enables nondisruptive movement of data among tiers of storage, including IBM Easy Tier
- Enables data placement optimization to improve performance

The most important aspect of the Storwize V7000 and its use with the IBM Flex System Enterprise Chassis is that Storwize V7000 can virtualize external storage. In addition, Storwize V7000 has these features:
- Capacity from existing storage systems becomes part of the IBM storage system
- Single user interface to manage all storage, regardless of vendor
- Designed to significantly improve productivity
- Virtualized storage inherits all the rich base system functions, including IBM FlashCopy, Easy Tier, and thin provisioning
- Moves data transparently between external storage and the IBM storage system
- Extends life and enhances value of existing storage assets

Storwize V7000 offers thin provisioning, FlashCopy, Easy Tier, performance management, and optimization. External virtualization allows for rapid data center integration into existing IT infrastructures. The Metro/Global Mirroring option provides support for multi-site recovery. Figure 7-1 shows the IBM Storwize V7000.

Figure 7-1 IBM Storwize V7000

The levels of integration of Storwize V7000 with IBM Flex System provide these additional features:
- Starting level, IBM Flex System: Single point of management
- Higher level, data center management: IBM Flex System Manager Storage Control
- Detailed level, data management: Storwize V7000 Storage User GUI
- Upgrade level, data center productivity: Tivoli Storage Productivity Center (TPC)

IBM Storwize V7000 provides a number of configuration options that simplify the implementation process. It also provides automated wizards, called directed maintenance procedures (DMP), to assist in resolving any events. IBM Storwize V7000 is a clustered, scalable, and midrange storage system, as well as an external virtualization device.

IBM Storwize V7000 Unified is the latest release of the product family. This virtualized storage system is designed to consolidate block and file workloads into a single storage system. This consolidation provides simplicity of management, reduced cost, highly scalable capacity, performance, and high availability. IBM Storwize V7000 Unified Storage also offers improved efficiency and flexibility through built-in solid-state drive (SSD) optimization, thin provisioning, and nondisruptive migration of data from existing storage. The system can virtualize and reuse existing disk systems, providing a greater potential return on investment. For more information about IBM Storwize V7000, see:
http://www.ibm.com/systems/storage/disk/storwize_v7000/overview.html

Statements of direction
IBM intends to further enhance the integration of server, storage, and networking with the introduction of an IBM Flex System storage node. This new storage system will share the software functional richness of IBM Storwize V7000, including IBM System Storage Easy Tier for automated SSD optimization. It will also be physically and logically integrated into IBM PureFlex System. The Flex System storage node is being designed to build on the industry-leading storage virtualization and efficiency capabilities of IBM Storwize V7000. It is intended to have these advantages:
- Simplify and speed up deployment
- Provide greater integration of server and storage management
- Automate and streamline provisioning
- Greater responsiveness to business needs
- Lower overall cost

IBM statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at the sole discretion of IBM. Information regarding potential future products is intended to outline the general product direction. Do not rely on it in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. Information about potential future products cannot be incorporated into any contract. The development, release, and timing of any future features or functionality described for IBM products remains at the sole discretion of IBM.

7.1.2 IBM XIV Storage System series


The IBM XIV Storage System is a proven, high-end disk storage series designed to address storage challenges across the application spectrum. It addresses challenges in virtualization, email, database, analytics, and data protection solutions. The XIV series delivers consistent high performance and high reliability at tier 2 costs for even the most demanding workloads. It uses massive parallelism to allocate system resources evenly at all times, and can scale seamlessly without manual tuning. Its virtualized design and customer-acclaimed ease of management dramatically reduce administrative costs and bring optimization to virtualized server and cloud environments. The XIV Storage System series has these key features:
- A revolutionary high-end disk system for UNIX and Intel processor-based environments designed to reduce the complexity of storage management
- Provides even and consistent performance for a broad array of applications; no tuning is required
- XIV Gen3 is suitable for demanding workloads
- Scales up to 360 TB of physical capacity, 161 TB of usable capacity
- Thousands of instantaneous and highly space-efficient snapshots enable point-in-time copies of data
- Built-in thin provisioning can help reduce direct and indirect costs
- Synchronous and asynchronous remote mirroring provides protection against primary site outages, disasters, and site failures
- Offers FC and iSCSI attach for flexibility in server connectivity

For more information about the XIV, see:
http://www.ibm.com/systems/storage/disk/xiv/index.html

7.1.3 IBM System Storage DS8000 series


Through its extraordinary flexibility, reliability, and performance, the IBM System Storage DS8000 series is designed to manage a broad scope of storage workloads effectively and efficiently. This flagship IBM disk system can simplify your storage environment. It supports a mix of random and sequential I/O workloads for a mix of interactive and batch applications, whether they are running on distributed server platforms or on the mainframe. Here are the key features of the DS8800:
- Performance: The DS8800 model offers superior performance with new IBM POWER6+ controllers, faster 8 gigabits per second (Gbps) host and device adapters, and 6 Gbps SAS (serial-attached SCSI) drives
- Availability and resiliency: Greater than 99.999% availability and a more than 10-year lineage of incremental hardware and microcode improvements built on the IBM POWER server architecture
- Optimized storage tiering: The IBM System Storage Easy Tier feature automatically helps optimize application performance by automating placement of data across the appropriate drive tiers
- Flexibility: Support for an extensive variety of server platforms, drive tiers, and application workloads that helps enable cost-effective storage consolidation
- Scalability: Models can scale up from the smallest configuration to the largest configuration (over three petabytes) nondisruptively by upgrading drive capacity, host adapters, drive adapters, and memory

For more information about the DS8000 series, see:
http://www.ibm.com/systems/storage/disk/ds8000/index.html


7.1.4 IBM System Storage DS5000 series


DS5000 series storage systems are designed to meet demanding open-systems requirements, and establish a new standard for lifecycle longevity with field-replaceable host interface cards. The seventh-generation architecture delivers relentless performance, real reliability, multidimensional scalability, and unprecedented investment protection. The DS5000 series has these key features:
- Provides a SAN-ready, flexible, efficient, and scalable disk storage system for UNIX and Intel processor-based environments
- Field-replaceable host interface cards (HICs), two per controller; the current release supports four 8 Gbps Fibre Channel HICs or one dual-ported 10 Gbps iSCSI HIC (16 total host ports)
- Scalable up to 448 drives with the EXP5000 enclosure, and up to 960 TB of high-density storage with the EXP5060 enclosure
- Support for intermixing drive types (FC, FC-SAS, SED, SATA, and SSD) and host interfaces (Fibre Channel and iSCSI) for investment protection and cost-effective tiered storage
- Supports business continuance with its optional high-availability software and advanced Enhanced Remote Mirroring function
- Helps protect customer data with its multi-RAID capability, including RAID 6, and hot-swappable redundant components

For more information about the DS5000 series, see:
http://www.ibm.com/systems/storage/disk/ds5000/index.html

7.1.5 IBM System Storage DS3000 series


IBM combines best-of-type development with leading host interface and drive technology in the IBM System Storage DS3500 Express. With next-generation 6 Gbps SAS back-end and host technology, you have a seamless path to consolidated and efficient storage. This configuration improves performance, flexibility, scalability, data security, and ultra-low power consumption without sacrificing simplicity, affordability, or availability. Here are the key features of the DS3000:
- 6 Gbps SAS systems deliver midrange performance and scalability at entry-level prices
- Mixed host interface support enables direct-attached storage (DAS) and SAN tiering, reducing overall operation and acquisition costs
- Full disk encryption with local key management provides relentless data security
- Supports Network Equipment Building System (NEBS) and European Telecommunications Standards Institute (ETSI) requirements

For more information about the DS3000, see:
http://www.ibm.com/systems/storage/disk/ds3500/index.html

7.1.6 IBM System Storage N series


The IBM System Storage N series products provide an integrated storage solution where a single storage system can support mission-critical applications by using Fibre Channel, iSCSI, and NAS protocols. Using one N series storage system instead of three separate boxes can help simplify IT device management. The unique multiprotocol storage architecture of the N series is intended to help organizations reduce investment, operational, and management costs by reducing complexity. Here are the key features of the N series:
- Integrated storage architecture: Provides a single storage platform to support heterogeneous, multiprotocol storage requirements. This architecture can simultaneously handle both block I/O (with the FCP or iSCSI protocol) and file I/O (with CIFS, NFS, HTTP, FTP, or FCoE) application needs.
- Application-aware software: SnapManager software provides host-based data management of N series storage for databases and business applications. Simplifies application-consistent, policy-based automation for data protection and disaster recovery. Creates snapshot copies to automate error-free data restores, and enables application-aware disaster recovery.
- Thin provisioning: Allows applications and users to get more space dynamically and nondisruptively without IT staff intervention.
- Ease of installation: Offers installation tools designed to simplify installation and setup.
- Increased access: Allows heterogeneous access to IP-attached storage and Fibre Channel-attached storage subsystems.
- Operating system: Optimized and finely tuned for storing and sharing data assets. The OS is designed to enable greater efficiency within your organization, and help lower total cost of ownership (TCO) through improved efficiency and productivity.
- Flexibility: Enables cross-platform data access for Microsoft Windows, UNIX, and Linux environments. This access can help reduce network complexity and expense, and allow data to be shared across the organization.
- Network-attached storage (NAS): Supports the Network File System (NFS) and Common Internet File System (CIFS) protocols for attachment to Microsoft Windows, UNIX, and Linux systems.
- IP SAN: Supports the Internet Small Computer System Interface (iSCSI) protocol for IP SANs that can be attached to host servers that include Microsoft Windows, Linux, and UNIX systems.
- FC SAN: Supports the Fibre Channel Protocol (FCP) for accommodating attachment and participation in Fibre Channel SAN environments.
- FCoE: Supports Fibre Channel traffic flowing over Ethernet networks.
- Expandability: Supports nondisruptive capacity increases and thin provisioning, which allows you to dynamically increase and decrease user capacity assignments. Allows you to increase your storage infrastructure to keep pace with company growth. Designed to maintain availability and productivity during upgrades.
- Manageability: Includes integrated system diagnostics and management tools, which are designed to help minimize downtime.
- Redundancy: Several redundancy and hot-swappable features provide the highest system availability characteristics.
- Copy services: Provides extensive outboard services that help recover data in disaster recovery environments. SnapMirror provides one-to-one, one-to-many, and many-to-one mirroring over Fibre Channel or IP infrastructures.
- NearStore (near-line) feature: SATA drive technology enables online and quick access to archived and nonintensive transactional data.
- Deduplication: Provides block-level deduplication of data stored in NearStore volumes.
- Compliance and data retention: Software and hardware features offer nonerasable and nonrewritable data protection to meet the industry's highest regulatory requirements for retaining company data assets.

For more information about the N series, see:
http://www.ibm.com/systems/storage/network/hardware/index.html

7.1.7 IBM System Storage TS3500 Tape Library


The IBM System Storage TS3500 Tape Library is designed to provide a highly scalable, automated tape library for mainframe and open systems backup and archive. This system can scale from midrange to enterprise environments. The TS3500 Tape Library continues to lead the industry in tape drive integration with these features:
- Persistent worldwide name (WWN)
- Multipath architecture
- Drive/media exception reporting
- Remote drive/media management
- Host-based path failover

Here are the key features of the TS3500:
- Supports highly scalable, automated data retention on tape using the LTO Ultrium and IBM 3592 and TS1100 families of tape drives
- Extreme scalability and capacity that can grow from 1 to 16 frames per library, and from 1 to 15 libraries per library complex by using the TS3500 shuttle connector
- Up to 900 PB of automated, low-cost storage under a single library image, which dramatically improves floor space utilization and reduces storage cost per terabyte
- Optional second robotic accessor enhances data availability and reliability
- Provides data security and regulatory compliance by using support for tape drive encryption and WORM cartridges

For more information about the TS3500, see:
http://www.ibm.com/systems/storage/tape/ts3500/index.html

7.1.8 IBM System Storage TS3310 series


If you have rapidly growing data backup needs and limited physical space for a tape library, the IBM System Storage TS3310 offers simple, rapid expansion as your processing needs grow. This tape library allows you to start with a single five EIA rack unit (5U) tall library. As your need for tape backup expands, you can add additional 9U expansion modules, each of which contains space for additional cartridges, tape drives, and a redundant power supply. The entire system grows vertically. Currently available configurations include the 5U base library module and a 5U base with up to four 9U expansion modules. Here are the key features of the TS3310:
- Modular, scalable tape library designed to grow as your needs grow
- Available in desktop, desk-side, and rack-mounted configurations
- Designed for optimal data storage efficiency with high cartridge density using standard or Write Once Read Many (WORM) Linear Tape-Open (LTO) data cartridges
- Hot-swap tape drives and power supplies
- Redundant power and host path connectivity failover options
- Remote web-based management and Storage Management Initiative Specification (SMI-S) interface capable

For more information about the TS3310, see:
http://www.ibm.com/systems/storage/tape/ts3310/index.html

7.1.9 IBM System Storage TS3100 Tape Library


The IBM TS3100 Tape Library Express Model is well-suited for handling backup, save and restore, and archival data-storage needs for small to medium-size environments. The TS3100 has one full-height tape drive or up to two half-height tape drives and a 24-cartridge capacity. It is designed to take advantage of LTO technology to help cost-effectively handle storage requirements. Here are the key features of the TS3100:
- Designed to support the newest generation of LTO with one IBM Ultrium 5 full-height tape drive or up to two IBM Ultrium 5 half-height tape drives. Also supports LTO generation 3 and 4 tape drives, in a 2U form factor.
- Fibre Channel attachment support for half-height LTO-5 and LTO-4 tape drives
- Designed to offer outstanding capacity, performance, and reliability for cost-effective backup, restore, and archive in midrange storage environments
- Remote library management through a standard web interface supports flexibility and improved administrative control over storage operations

For more information about the TS3100, see:
http://www.ibm.com/systems/storage/tape/ts3100/index.html

7.2 Fibre Channel


Fibre Channel is a proven and reliable network for storage interconnect. The IBM Flex System Enterprise Chassis FC portfolio offers various choices to meet your needs and interoperate with existing SAN infrastructure.

7.2.1 Fibre Channel requirements


In general, if the Enterprise Chassis is integrated into an FC storage fabric, ensure that the following requirements are met. Check the compatibility guides from your storage system vendor for confirmation.
- The Enterprise Chassis server hardware and HBA are supported by the storage system.
- The FC fabric used or proposed for use is supported by the storage system.
- The operating systems deployed are supported both by IBM server technologies and the storage system.
- Multipath drivers exist and are supported by the operating system and storage system (in case you plan for redundancy).
- Clustering software is supported by the storage system (in case you plan to implement clustering technologies).

If any of these requirements are not met, consider another solution that is supported. Almost every vendor of storage systems or storage fabrics has extensive compatibility matrixes that include supported HBAs, SAN switches, and operating systems. For more information about IBM System Storage compatibility, see the IBM System Storage Interoperability Center at:
http://www.ibm.com/systems/support/storage/config/ssic
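The checklist above amounts to walking a planned configuration through a vendor compatibility matrix. The sketch below illustrates that idea with invented sample data; the matrix contents, field names, and the FC3172 adapter/fabric/OS combinations shown are examples, not an actual IBM support statement:

```python
# Hypothetical pre-integration checklist: walks the requirements above against
# a storage vendor's compatibility matrix. SAMPLE_MATRIX is made-up sample
# data, not a real interoperability matrix.

SAMPLE_MATRIX = {                # what the storage system claims to support
    "hbas": {"FC3172 2-port 8Gb FC Adapter"},
    "fabrics": {"Brocade", "QLogic"},
    "operating_systems": {"RHEL 6", "Windows Server 2008 R2"},
}

def check_fc_integration(matrix, hba, fabric, os_name):
    """Return the list of unmet requirements (an empty list means supported)."""
    problems = []
    if hba not in matrix["hbas"]:
        problems.append(f"HBA not supported: {hba}")
    if fabric not in matrix["fabrics"]:
        problems.append(f"fabric not supported: {fabric}")
    if os_name not in matrix["operating_systems"]:
        problems.append(f"OS not supported: {os_name}")
    return problems

assert check_fc_integration(SAMPLE_MATRIX,
                            "FC3172 2-port 8Gb FC Adapter", "Brocade", "RHEL 6") == []
assert check_fc_integration(SAMPLE_MATRIX,
                            "Unknown HBA", "Cisco", "RHEL 6") != []
```

In practice the authoritative answer always comes from the vendor's published matrix (for IBM storage, the SSIC referenced above), not from a local check like this.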

7.2.2 FC switch selection and fabric interoperability rules


IBM Flex System Enterprise Chassis provides integrated FC switching functions by using several switch options:
- IBM Flex System FC3171 8 Gb SAN Switch
- IBM Flex System FC3171 8 Gb SAN Pass-thru
- IBM Flex System FC5022 16Gb SAN Scalable Switch

Considerations for the FC5022 16Gb SAN Scalable Switch


The module can function either in Fabric OS Native mode or in Brocade Access Gateway mode. The switch ships with Fabric OS mode as the default. The mode can be changed by using OS commands or web tools. Access Gateway simplifies SAN deployment by using N_Port ID Virtualization (NPIV). NPIV provides FC switch functions that improve switch scalability, manageability, and interoperability. The default configuration for Access Gateway is that all N_Ports have failover and failback enabled. In Access Gateway mode, the external ports can be N_Ports, and the internal ports (1 - 28) can be F_Ports, as shown in Table 7-1.
Table 7-1 Default configuration

F_Port(s)    N_Port      F_Port    N_Port
1, 21        0           11        38
2, 22        29          12        39
3, 23        30          13        40
4, 24        31          14        41
5, 25        32          15        42
6, 26        33          16        43
7, 27        34          17        44
8, 28        35          18        45
9            36          19        46
10           37          20        47
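The default mapping in Table 7-1 can be expressed programmatically for quick lookups. The dictionary below transcribes the table; the helper itself is an illustrative sketch, not Brocade tooling:

```python
# Default Access Gateway mapping from Table 7-1: which internal F_Port is
# carried on which external N_Port. Transcribed from the table above.

DEFAULT_MAP = {1: 0, 21: 0}           # F_Ports 1 and 21 share N_Port 0
for i in range(2, 21):                # F_Ports 2-20: N_Port = F_Port + 27
    DEFAULT_MAP[i] = i + 27           # 2 -> 29, 9 -> 36, ... 20 -> 47
for i in range(22, 29):               # F_Ports 22-28 share with F_Ports 2-8
    DEFAULT_MAP[i] = DEFAULT_MAP[i - 20]

def n_port_for(f_port):
    """Return the external N_Port that carries the given internal F_Port."""
    return DEFAULT_MAP[f_port]

assert n_port_for(1) == 0 and n_port_for(21) == 0
assert n_port_for(8) == 35 and n_port_for(28) == 35
assert n_port_for(9) == 36 and n_port_for(20) == 47
```

With failover and failback enabled by default, the live mapping can differ from this table after a link failure; this sketch captures only the factory default.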


For more information, see the Brocade Access Gateway Administrator's Guide.

Considerations for the FC3171 8 Gb SAN Pass-thru and FC3171 8 Gb SAN Switch
Both of these I/O modules provide seamless integration of the IBM Flex System Enterprise Chassis into an existing Fibre Channel fabric. They avoid any multivendor interoperability issues by using NPIV technology. All ports are licensed on both of these switches (there are no port licensing requirements). Each I/O module has 14 internal ports and 6 external ports presented to the rear of the chassis.

Attention: If you will need Full Fabric capabilities at any time in the future, purchase the Full Fabric Switch Module (FC3171 8 Gb SAN Switch) instead of the Pass-thru module (FC3171 8 Gb SAN Pass-thru). The Pass-thru module can never be upgraded.

You can reconfigure the FC3171 8 Gb SAN Switch to become a Pass-thru module by using the switch GUI or CLI. The module can be converted back to a full-function SAN switch at any time. The switch requires a reset when turning transparent mode on or off. Operating in pass-through mode adds ports to the fabric, not Domain IDs as switches do. This process is not apparent to the switches in the fabric.

This section describes how the NPIV concept works for the Pass-thru module (and the Brocade Access Gateway). Several basic types of ports are used in Fibre Channel fabrics:
- N_Ports (node ports) represent an end-point FC device (such as a host, storage system, or tape drive) connected to the FC fabric.
- F_Ports (fabric ports) are used to connect N_Ports to the FC switch (that is, the host HBA's N_Port is connected to the F_Port on the switch).
- E_Ports (expansion ports) provide interswitch connections. If you need to connect one switch to another, E_Ports are used. The E_Port on one switch is connected to the E_Port on another switch.

When one switch is connected to another switch in the existing FC fabric, it uses a Domain ID to uniquely identify itself in the SAN (like a switch address). Because every switch in the fabric has a Domain ID and this ID is unique in the SAN, the number of switches and number of ports is limited. This in turn limits SAN scalability. For example, QLogic theoretically supports up to 239 switches, and McDATA supports up to 31 switches.

Another concern with E_Ports is interoperability issues between switches from different vendors. In many cases, only the so-called interoperability mode can be used in these fabrics, thus disabling most of the vendors' advanced features. Each switch also requires some management tasks to be performed on it. Therefore, an increased number of switches increases the complexity of the management solution, especially in heterogeneous SANs consisting of multivendor fabrics. NPIV technology helps to address these issues.

Initially, NPIV technology was used in virtualization environments to share one HBA with multiple virtual machines, and assign unique port IDs to each of them. This configuration allows you to separate traffic between virtual machines (VMs). You can deal with VMs in the same way as physical hosts, by zoning the fabric or partitioning storage.


For example, if NPIV is not used, every virtual machine shares one HBA with one WWN. This restriction means that you are not able to separate traffic between these systems and isolate LUNs because all of them use the same ID. In contrast, when NPIV is used, every VM has its own port ID, and these port IDs are treated as N_Ports by the FC fabric. You can perform storage partitioning or zoning based on the port ID of the VM. The switch that the virtualized HBAs are connected to must support NPIV as well. Check the documentation that comes with the FC switch.

The IBM Flex System FC3171 8 Gb SAN Switch in pass-through mode, the IBM Flex System FC3171 8 Gb SAN Pass-thru, and the Brocade Access Gateway use the NPIV technique. The technique presents the nodes' port IDs as N_Ports to the external fabric switches. This process eliminates the need for E_Port connections between the Enterprise Chassis and external switches. In this way, all 14 internal node FC ports are multiplexed and distributed across the external FC links and presented to the external fabric as N_Ports. This configuration means that external switches connected to a chassis that is configured for Fibre Channel pass-through do not see the pass-through module. They see only N_Ports connected to their F_Ports. This configuration can help to achieve a higher port count for better scalability without using Domain IDs, and avoids multivendor interoperability issues.

However, modules that operate in pass-through mode cannot be directly attached to the storage system. They must be attached to an external NPIV-capable FC switch. See the switch documentation about NPIV support.

Select a SAN module that can provide the required functionality together with seamless integration into the existing storage infrastructure (Table 7-2). There are no strict rules to follow during integration planning. However, several considerations must be taken into account.
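The multiplexing just described can be sketched as follows. This is an illustrative model, not switch firmware: the round-robin distribution policy and the port names are assumptions for the example (the real distribution across uplinks is implementation specific):

```python
# Hypothetical sketch of the NPIV idea above: the 14 internal node ports log
# in through the pass-through module and are spread across the external
# N_Port links, so the upstream switch sees only N_Ports on its F_Ports.
# Round-robin is assumed here purely for illustration.

def distribute_logins(node_ports, external_links):
    """Spread node FC port logins across external uplinks, round-robin."""
    assignment = {}
    for idx, port in enumerate(node_ports):
        assignment[port] = external_links[idx % len(external_links)]
    return assignment

nodes = [f"node{n}-fc0" for n in range(1, 15)]        # 14 internal ports
uplinks = [f"EXT-{n}" for n in range(1, 7)]           # 6 external ports
table = distribute_logins(nodes, uplinks)

assert table["node1-fc0"] == "EXT-1"
assert table["node7-fc0"] == "EXT-1"                  # index 6 wraps around
assert len(set(table.values())) == 6                  # all uplinks carry logins
```

The key property is that no Domain ID is consumed: each node port appears to the external fabric as an ordinary N_Port login on an F_Port.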
Table 7-2 SAN module feature comparison and interoperability

Columns: (1) FC5022 16Gb SAN Scalable Switch; (2) FC3171 8 Gb SAN Switch; (3) FC5022 16Gb SAN Scalable Switch in Brocade Access Gateway mode; (4) FC3171 8 Gb SAN Pass-thru (and FC3171 8 Gb SAN Switch in pass-through mode)

Basic FC connectivity:
- FC-SW-2 interoperability: (1) Yes (a); (2) Yes; (3) Not applicable; (4) Not applicable
- Zoning: (1) Yes; (2) Yes; (3) Not applicable; (4) Not applicable
- Maximum number of Domain IDs: (1) 239; (2) 239; (3) Not applicable; (4) Not applicable

Advanced FC connectivity:
- Port aggregation: (1) Yes; (2) No; (3) Not applicable; (4) Not applicable
- Advanced fabric security: (1) Yes; (2) Yes; (3) Not applicable; (4) Not applicable

Interoperability (existing fabric):
- Brocade fabric interoperability: (1) Yes; (2) No (b); (3) Yes; (4) Yes
- QLogic fabric interoperability: (1) No; (2) Yes; (3) Yes; (4) Yes
- Cisco fabric interoperability: (1) No; (2) No (b); (3) Yes; (4) Yes

a. Indicates that the feature is supported without any restrictions for the existing fabric, but with restrictions for the added fabric, and vice versa.
b. Does not necessarily mean that the feature is not supported. Instead, it means that severe restrictions apply to the existing fabric. Some functions of the existing fabric potentially must be disabled (if used).


IBM PureFlex System and IBM Flex System Products and Technology

Almost all switches support interoperability standards, which means that almost any switch can be integrated into an existing fabric by using interoperability mode. Interoperability mode is a special mode used to integrate FC fabrics from different vendors into one fabric. However, only standards-based functionality is available in interoperability mode; the advanced features of a storage fabric vendor might not be available. Brocade, McDATA, and Cisco have interoperability modes on their fabric switches. Check the compatibility matrixes for a list of supported and unsupported features in interoperability mode.

Table 7-2 on page 284 provides a high-level overview of the standard and advanced functions available for particular Enterprise Chassis SAN switches. It lists how these switches might be used for designing new storage networks or integrating with existing storage networks.

Remember: Advanced (proprietary) FC connectivity features from different vendors might be incompatible with each other, even those that provide almost the same function. For example, both Brocade and Cisco support port aggregation. However, Brocade uses ISL Trunking and Cisco uses PortChannels, and they are incompatible with each other.

For example, if you integrate the FC5022 16Gb SAN Scalable Switch (Brocade) into a QLogic fabric, you cannot use Brocade proprietary features such as ISL Trunking. However, the QLogic fabric does not lose functionality. Conversely, if you integrate a QLogic fabric into an existing Brocade fabric, placing all Brocade switches in interoperability mode loses the Advanced Fabric Services functions. If you plan to integrate the Enterprise Chassis into a Fibre Channel fabric that is not listed here, QLogic might be a good choice. However, this configuration is possible with interoperability mode only, so extended functions are not supported. A better way would be to use the FC3171 8 Gb SAN Pass-thru or the Brocade Access Gateway.
Switch selection and interoperability follow these rules:
- The FC3171 8 Gb SAN Switch is used when the Enterprise Chassis is integrated into an existing QLogic fabric, or when basic FC functionality is required. That is, with one Enterprise Chassis with a direct-connected storage server.
- The FC5022 16Gb SAN Scalable Switch is used when the Enterprise Chassis is integrated into an existing Brocade fabric, or when advanced FC connectivity is required. You might use this switch when several Enterprise Chassis are connected to high-performance storage systems. If you plan to use advanced features such as ISL Trunking, you might need to acquire specific licenses for these features.

Tip: Using FC storage fabric from the same vendor often avoids possible operational, management, and troubleshooting issues.

If the Enterprise Chassis is attached to a non-IBM storage system, support is provided by the storage system's vendor. Even if non-IBM storage is listed on IBM ServerProven, it means only that the configuration has been tested. It does not mean that IBM provides support for it. See the vendor compatibility information for supported configurations. For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at: http://www.ibm.com/systems/support/storage/ssic/interoperability.wss


7.3 iSCSI
iSCSI uses a traditional Ethernet network for block I/O between the storage system and servers. Servers and storage systems are connected to the LAN and use iSCSI to communicate with each other. Because iSCSI uses a standard TCP/IP stack, you can use iSCSI connections across LAN or wide area network (WAN) connections. A typical deployment includes iSCSI targets (such as IBM System Storage DS3500 iSCSI models), an optional DHCP server, and a management station with iSCSI Configuration Manager.

The software iSCSI initiator is specialized software that uses a server's processor for iSCSI protocol processing. A hardware iSCSI initiator exists as microcode that is built in to the LAN on Motherboard (LOM) on the node, or on the I/O adapter, provided it is supported. Both software and hardware initiator implementations provide iSCSI capabilities for Ethernet NICs. However, an operating system driver can be used only after the locally installed operating system is started and running. In contrast, the NIC built-in microcode is used for boot-from-SAN implementations, but cannot be used for storage access after the operating system is already running.

Currently, iSCSI on Enterprise Chassis nodes can be implemented on the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter and the embedded 10 Gb Virtual Fabric adapter LOM.

Remember: Both of these NIC solutions require a Feature on Demand (FoD) upgrade, which enables and provides the iSCSI initiator.

Software initiators can be obtained from the operating system vendor (for example, Microsoft offers a software iSCSI initiator for download), or as part of a NIC firmware upgrade (if supported by the NIC). For more information about the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter, see 5.5.1, Overview on page 216 and 5.6.12, IBM Flex System IB6132 2-port FDR InfiniBand Adapter on page 251.
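As an illustration of the software initiator path, the open-iscsi tools on Linux discover and log in to an iSCSI target in two steps. The portal IP address and target IQN below are placeholders for whatever the storage system (for example, a DS3500 iSCSI model) actually presents; this is a generic sketch, not a vendor-specific procedure.

```shell
# Discover the targets offered by the storage system's iSCSI portal
# (192.168.10.50 is a placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260

# Log in to one discovered target (the IQN is a placeholder)
iscsiadm -m node -T iqn.1992-01.com.example:ds3500.ctrl-a \
         -p 192.168.10.50:3260 --login

# List active sessions; the target's LUNs now appear as ordinary SCSI disks
iscsiadm -m session
```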
For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at: http://www.ibm.com/systems/support/storage/config/ssic

Tip: Consider using a separate network segment for iSCSI traffic. That is, isolate the NICs, switches (or virtual local area networks (VLANs)), and storage system ports that participate in iSCSI communications from other traffic.

If you plan for redundancy, you must use multipath drivers. Generally, they are provided by the operating system vendor for iSCSI implementations, even if you plan to use hardware initiators. It is possible to implement high availability (HA) clustering solutions by using iSCSI, but certain restrictions might apply. For more information, see the storage system vendor compatibility guides.

When planning your iSCSI solution, consider the following items:
- The IBM Flex System Enterprise Chassis nodes, the initiators, and the operating system are supported by the iSCSI storage system. For more information, see the compatibility guides from the storage vendor.

- Multipath drivers exist, and are supported by the operating system and the storage system (when redundancy is planned). For more information, see the compatibility guides from the operating system vendor and the storage vendor.

For more information, see the following publications:
- IBM SSIC: http://www.ibm.com/systems/support/storage/config/ssic
- IBM System Storage N series Interoperability Matrix: http://ibm.com/support/docview.wss?uid=ssg1S7003897
- Microsoft Support for iSCSI (from Microsoft): http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/msfiscsi.mspx

7.4 High availability and redundancy


The Enterprise Chassis has built-in network redundancy. The I/O adapters used in the nodes have at least two ports, and I/O modules can be installed as a pair in the Enterprise Chassis to avoid possible single points of failure in the storage infrastructure. All major vendors, including IBM, use dual-controller storage systems to provide redundancy. A typical topology for integrating the Enterprise Chassis into a Fibre Channel infrastructure is shown in Figure 7-2.

Figure 7-2 IBM Enterprise Chassis SAN infrastructure topology (a node's dual-port FC adapter connects through two FC I/O modules and the storage network to a dual-controller storage system)

This topology includes a dual-port FC I/O adapter installed in the node, and a pair of FC I/O modules installed in bays 3 and 4 of the Enterprise Chassis. In case of a failure, the specific operating system driver provided by the storage system manufacturer is responsible for the automatic failover process. This capability is also known as multipathing.


If you plan to use redundancy and high availability for the storage fabric, ensure that the failover drivers satisfy the following requirements:
- They are available from the vendor of the storage system.
- They come with the system or can be ordered separately (remember to order them in such cases).
- They support the node operating system.
- They support the redundant multipath fabric that you plan to implement (that is, they support the required number of redundant paths).

For more information, see the storage system documentation from the vendor.
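On Linux, for example, the device-mapper multipath driver provides this failover function. The fragment below is a generic sketch only: real deployments take the device-specific section and path policy from the storage vendor's interoperability documentation, and the service name assumes a systemd-based distribution.

```shell
# Minimal /etc/multipath.conf (illustrative defaults only; the storage
# vendor's guide supplies the device-specific settings)
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    path_grouping_policy failover
}
EOF

systemctl restart multipathd   # assumes a systemd-based distribution

# Each LUN is now one multipath device with active and standby paths
multipath -ll
```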

7.5 Performance
Performance is an important consideration during storage infrastructure planning. Providing the required end-to-end performance for your SAN can be accomplished in several ways.

First, the storage system's failover driver can provide load balancing across redundant paths in addition to high availability. The IBM System Storage Multi-path Subsystem Device Driver (SDD) used with the DS8000 provides this function. If you plan to use such drivers, ensure that they satisfy the following requirements:
- They are available from the storage system vendor.
- They come with the system, or can be ordered separately.
- They support the node operating system.
- They support the multipath fabric that you plan to implement. That is, they support the required number of paths implemented.

Also, you can use static LUN distribution between the two storage controllers in the storage system. Some LUNs are served by controller 1, and others are served by controller 2. A zoning technique can also be used together with static LUN distribution if you have redundant connections between the FC switches and the storage system controllers.

Trunking or PortChannels between FC or Ethernet switches can be used to increase network bandwidth, increasing performance. Trunks in an FC network use the same concept as in standard Ethernet networks: several physical links between switches are grouped into one logical link with increased bandwidth. This configuration is typically used when an Enterprise Chassis is integrated into existing advanced FC infrastructures. However, keep in mind that only the FC5022 16Gb SAN Scalable Switch supports trunking. Also be aware that this is an optional feature that requires the purchase of an additional license.

For more information, see the storage system vendor documentation and the switch vendor documentation.



7.6 Backup solutions


Backup is an important consideration when deploying infrastructure systems. First, you need to decide which tape backup solution to implement. There are a number of ways to back up data:
- Centralized local area network (LAN) backup with a dedicated backup server (a compute node in the chassis) with an FC-attached tape autoloader or tape library
- Centralized LAN backup with a dedicated backup server (a server external to the chassis) with an FC-attached tape autoloader or tape library
- LAN-free backup with an FC-attached tape autoloader or library (see 7.6.2, LAN-free backup for nodes on page 290)

If you plan to use a node as a dedicated backup server, or LAN-free backup for nodes, use only certified tape autoloaders and tape libraries. If you plan to use a dedicated backup server on a non-Enterprise Chassis system, use tape devices that are certified for that server. Also, verify that the tape device and the type of backup you select are supported by the backup software you plan to use.

For more information about supported tape devices and interconnectivity, see the IBM System Storage Interoperability Center at: http://www.ibm.com/systems/support/storage/config/ssic

7.6.1 Dedicated server for centralized LAN backup


The simplest way to provide backup for the Enterprise Chassis is to use a compute node or external server with a SAS-attached or FC-attached tape unit. In this case, all nodes that require backup have backup agents, and backup traffic from these agents to the backup server uses standard LAN paths. If you use an FC-attached tape drive, connect it to an FC fabric (or at least to an HBA) that is dedicated to backup. Do not connect it to the FC fabric that carries the disk traffic. If you cannot use dedicated switches, use zoning techniques on the FC switches to separate these two fabrics.

Consideration: Avoid mixing disk storage and tape storage on the same FC HBA. If you experience issues with your SAN because tape and disk devices share the same HBA, IBM Support will request that you separate these devices.

If you plan to use a node as a dedicated backup server with FC-attached tape, use one port of the I/O adapter for tape and the other for disk. There is no redundancy in this case.


Figure 7-3 shows possible topologies and traffic flows for LAN backups and FC-attached storage devices.

Figure 7-3 LAN backup topology and traffic flow (the backup agent moves backup data from disk storage to the backup server's disk storage through the LAN; the backup server then moves the data from disk backup storage to tape backup storage)

The topology shown in Figure 7-3 has the following characteristics:
- Each node participating in backup, except the backup server itself, has dual connections to the disk storage system. The backup server has only one disk storage connection; the other port of its FC HBA is dedicated to tape storage.
- A backup agent is installed on each node requiring backup.
- The backup traffic flows as follows: the backup agent transfers backup data from the disk storage to the backup server through the LAN. The backup server stores this data on its disk storage, for example, on the same storage system. Then the backup server transfers the data from its storage directly to the tape device.
- Zoning is implemented on the FC Switch Module to separate the disk and tape data flows. In this respect, zoning serves much the same purpose as VLANs in Ethernet networks.
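As an illustration, this disk/tape separation could be expressed on the Brocade-based FC5022 with standard Fabric OS zoning commands. This is a sketch only: the alias names and WWPNs are placeholders, and the exact procedure should follow the switch documentation.

```shell
# Aliases for the backup server's two HBA ports, the disk controller,
# and the tape drive (all WWPNs are placeholders)
alicreate "bkpsrv_p1", "10:00:00:00:c9:11:11:11"
alicreate "bkpsrv_p2", "10:00:00:00:c9:22:22:22"
alicreate "disk_ctrl", "50:05:07:68:00:00:00:01"
alicreate "tape_drv",  "50:01:10:a0:00:00:00:01"

# One zone per traffic type, so disk and tape flows never share a zone
zonecreate "z_disk", "bkpsrv_p1; disk_ctrl"
zonecreate "z_tape", "bkpsrv_p2; tape_drv"

# Put both zones in a configuration, save it, and activate it
cfgcreate "backup_cfg", "z_disk; z_tape"
cfgsave
cfgenable "backup_cfg"
```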

7.6.2 LAN-free backup for nodes


LAN-free backup means that the SAN fabric is used for the backup data flow instead of LAN. LAN is used only for passing control information between the backup server and agents. LAN-free backup can save network bandwidth for network applications, providing better network performance. The backup agent transfers backup data from the disk storage directly to the tape storage during LAN-free backup.



Figure 7-4 illustrates this process.

Figure 7-4 LAN-free backup without disk storage redundancy (the backup agent moves data from disk storage directly to the tape autoloader through the second FC switch module)

Figure 7-4 shows the simplest topology for LAN-free backup. With this topology, the backup server controls the backup process, and the backup agent moves the backup data from the disk storage directly to the tape storage. In this case, there is no redundancy provided for the disk storage and tape storage. Zones are not required because the second Fibre Channel Switching Module (FCSM) is exclusively used for the backup fabric. Backup software vendors can use other (or additional) topologies and protocols for backup operations. Consult the backup software vendor documentation for a list of supported topologies and features, and additional information.

7.7 Boot from SAN


Boot from SAN (or SAN Boot) is a technique used when the node in the chassis has no local disk drives. It uses an external storage system LUN to boot the operating system, so both the operating system and the data are on the SAN. This technique is commonly used to provide higher availability and better utilization of the system's storage (where the operating system is installed). Hot-spare nodes or rip-and-replace techniques can also be easily implemented by using boot from SAN.

7.7.1 Implementing Boot from SAN


To successfully implement SAN Boot, the following conditions must be met. Check the respective storage system compatibility guides for the information you need:
- The storage system supports SAN Boot.
- The operating system supports SAN Boot.
- The FC HBAs or iSCSI initiators support SAN Boot.


You can also check the documentation for the operating system for boot-from-SAN support and requirements, as well as the storage vendors' documentation. See the following sources for additional SAN Boot information:
- Windows Boot from Fibre Channel SAN: Overview and Detailed Technical Instructions for the System Administrator: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=2815
- SAN Configuration Guide (from VMware): http://www.vmware.com/pdf/vi3_esx_san_cfg.pdf
- For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at: http://www.ibm.com/systems/support/storage/config/ssic

7.7.2 iSCSI SAN Boot specific considerations


iSCSI SAN Boot enables a diskless node to be started from an external iSCSI storage system. You can use either the onboard 10 Gb Virtual Fabric LOM on the node itself or an I/O adapter. Specifically, the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter supports iSCSI with the IBM Flex System CN4054 Virtual Fabric Adapter Upgrade, part number 90Y3558. For the latest compatibility information, see the storage vendor compatibility guides. For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at: http://www.ibm.com/systems/support/storage/config/ssic

7.8 Converged networks


One common way to reduce administration costs is to converge technologies that have traditionally been implemented on separate infrastructures. Just as office phone systems have moved from a separate cabling plant and components to a common IP infrastructure, Fibre Channel networks are also converging onto Ethernet.

Fibre Channel over Ethernet (FCoE) removes the need for separate HBAs in the servers and separate Fibre Channel cables coming out of the server or chassis. Instead, a Converged Network Adapter (CNA) is installed in the server. This adapter presents what appears to be both a NIC and an HBA to the operating system, but the output from the server is 10 Gb Ethernet.

The CN4054 10Gb Virtual Fabric Adapter, or the Embedded Virtual Fabric Adapter on the x240 with the optional Virtual Fabric Upgrade, offers this function. Either must be used together with the EN4091 10 Gb Ethernet Pass-thru connected to an external FCoE-capable top-of-rack switch. The EN4091 10 Gb Ethernet Pass-thru connects a node that runs a CNA to an upstream switch that acts as an FCoE Forwarder (FCF). The Fibre Channel packet is broken back out of the Ethernet packet and sent into the Fibre Channel SAN. FCoE support on the EN4093 10Gb Scalable Switch is planned for later in 2012.
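On a Linux host that uses the open-fcoe tool set, the two personalities of a converged adapter can be observed side by side. This is an illustrative sketch: the commands assume a driver model that exposes FCoE through the open-fcoe utilities, and the interface name is a placeholder.

```shell
# FCoE instances bound to Ethernet interfaces
fcoeadm -i

# Fabric targets discovered over the FCoE path
fcoeadm -t

# The same interface still carries ordinary Ethernet traffic
# (eth2 is a placeholder interface name)
ip -s link show dev eth2
```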



Abbreviations and acronyms


AC  alternating current
ACL  access control list
AES-NI  Advanced Encryption Standard New Instructions
AMM  advanced management module
AMP  Apache, MySQL, and PHP/Perl
ANS  Advanced Network Services
API  application programming interface
AS  Australian Standards
ASIC  application-specific integrated circuit
ASU  Advanced Settings Utility
AVX  Advanced Vector Extensions
BACS  Broadcom Advanced Control Suite
BASP  Broadcom Advanced Server Program
BE  Broadband Engine
BGP  Border Gateway Protocol
BIOS  basic input/output system
BOFM  BladeCenter Open Fabric Manager
CEE  Converged Enhanced Ethernet
CFM  cubic feet per minute
CLI  command-line interface
CMM  Chassis Management Module
CPM  Copper Pass-thru Module
CPU  central processing unit
CRTM  Core Root of Trusted Measurements
DC  domain controller
DHCP  Dynamic Host Configuration Protocol
DIMM  dual inline memory module
DMI  Desktop Management Interface
DRAM  dynamic random-access memory
DRTM  Dynamic Root of Trust Measurement
DSA  Dynamic System Analysis
ECC  error checking and correcting
EIA  Electronic Industries Alliance
ESB  Enterprise Switch Bundle
ETE  everything-to-everything
FC  Fibre Channel
FC-AL  Fibre Channel Arbitrated Loop
FDR  fourteen data rate
FSM  Flex System Manager
FSP  flexible service processor
FTP  File Transfer Protocol
FTSS  Field Technical Sales Support
GAV  generally available variant
GB  gigabyte
GT  gigatransfers
HA  high availability
HBA  host bus adapter
HDD  hard disk drive
HPC  high-performance computing
HS  hot swap
HT  Hyper-Threading
HW  hardware
I/O  input/output
IB  InfiniBand
IBM  International Business Machines
ID  identifier
IEEE  Institute of Electrical and Electronics Engineers
IGMP  Internet Group Management Protocol
IMM  integrated management module
IP  Internet Protocol
IS  information store
ISP  Internet service provider
IT  information technology
ITE  IT Element
ITSO  International Technical Support Organization
KB  kilobyte
KVM  keyboard video mouse
LACP  Link Aggregation Control Protocol
LAN  local area network
LDAP  Lightweight Directory Access Protocol
LED  light emitting diode
LOM  LAN on Motherboard
LP  low profile
LPC  Local Procedure Call
LR  long range
LR-DIMM  load-reduced DIMM
MAC  media access control
MB  megabyte
MSTP  Multiple Spanning Tree Protocol
NIC  network interface card
NL  nearline
NS  not supported
NTP  Network Time Protocol
OPM  Optical Pass-Thru Module
OSPF  Open Shortest Path First
PCI  Peripheral Component Interconnect
PCIe  PCI Express
PDU  power distribution unit
PF  power factor
PSU  power supply unit
QDR  quad data rate
QPI  QuickPath Interconnect
RAID  redundant array of independent disks
RAM  random access memory
RAS  remote access services; row address strobe
RDIMM  registered DIMM
RFC  request for comments
RHEL  Red Hat Enterprise Linux
RIP  Routing Information Protocol
ROC  RAID-on-Chip
ROM  read-only memory
RPM  revolutions per minute
RSS  Receive-Side Scaling
SAN  storage area network
SAS  Serial Attached SCSI
SATA  Serial ATA
SDMC  Systems Director Management Console
SerDes  Serializer-Deserializer
SFF  small form factor
SLC  Single-Level Cell
SLES  SUSE Linux Enterprise Server
SLP  Service Location Protocol
SNMP  Simple Network Management Protocol
SSD  solid-state drive
SSH  Secure Shell
SSL  Secure Sockets Layer
STP  Spanning Tree Protocol
TCG  Trusted Computing Group
TCP  Transmission Control Protocol
TDP  thermal design power
TFTP  Trivial File Transfer Protocol
TPM  Trusted Platform Module
TXT  text
UDIMM  unbuffered DIMM
UDLD  Unidirectional link detection
UEFI  Unified Extensible Firmware Interface
UI  user interface
UL  Underwriters Laboratories
UPS  uninterruptible power supply
URL  Uniform Resource Locator
USB  universal serial bus
VE  Virtualization Engine
VIOS  Virtual I/O Server
VLAG  Virtual Link Aggregation Groups
VLAN  virtual LAN
VM  virtual machine
VPD  vital product data
VRRP  Virtual Router Redundancy Protocol
VT  Virtualization Technology
WW  worldwide
WWN  Worldwide Name

Copyright IBM Corp. 2012. All rights reserved.



Related publications and education


The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.

IBM Redbooks
The following publications from IBM Redbooks provide additional information about IBM Flex System. They are available from:
http://www.redbooks.ibm.com/portals/puresystems

- IBM Flex System p260 and p460 Planning and Implementation Guide, SG24-7989
- IBM Flex System Networking in an Enterprise Data Center, REDP-4834

Chassis and Compute Nodes:
- IBM Flex System Enterprise Chassis, TIPS0863
- IBM Flex System p260 and p460 Compute Node, TIPS0880
- IBM Flex System x240 Compute Node, TIPS0860
- IBM Flex System Manager, TIPS0862

Switches:
- IBM Flex System EN2092 1Gb Ethernet Scalable Switch, TIPS0861
- IBM Flex System Fabric EN4093 10Gb Scalable Switch, TIPS0864
- IBM Flex System EN4091 10Gb Ethernet Pass-thru Module, TIPS0865
- IBM Flex System FC5022 16Gb SAN Scalable Switch and FC5022 24-port 16Gb ESB SAN Scalable Switch, TIPS0870
- IBM Flex System IB6131 InfiniBand Switch, TIPS0871
- IBM Flex System FC3171 8Gb SAN Switch and Pass-thru, TIPS0866

Adapters:
- IBM Flex System EN2024 4-port 1Gb Ethernet Adapter, TIPS0845
- IBM Flex System FC5022 2-port 16Gb FC Adapter, TIPS0891
- IBM Flex System CN4054 10Gb Virtual Fabric Adapter and EN4054 4-port 10Gb Ethernet Adapter, TIPS0868
- IBM Flex System FC3052 2-port 8Gb FC Adapter, TIPS0869
- ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884
- IBM Flex System IB6132 2-port FDR InfiniBand Adapter, TIPS0872
- IBM Flex System EN4132 2-port 10Gb Ethernet Adapter, TIPS0873
- IBM Flex System IB6132 2-port QDR InfiniBand Adapter, TIPS0890
- IBM Flex System FC3172 2-port 8Gb FC Adapter, TIPS0867

Copyright IBM Corp. 2012. All rights reserved.


Other relevant documents:
- IMM and IMM2 Support on IBM System x and BladeCenter Servers, TIPS0849: http://www.redbooks.ibm.com/abstracts/tips0849.html

You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks

IBM education
The following are IBM educational offerings for IBM Flex System. Note that some course numbers and titles might have changed slightly after publication.

Note: IBM courses prefixed with NGTxx are traditional, face-to-face classroom offerings. Courses prefixed with NGVxx are Instructor Led Online (ILO) offerings. Courses prefixed with NGPxx are Self-paced Virtual Class (SPVC) offerings.

- NGT10/NGV10/NGP10, IBM Flex System - Introduction
- NGT20/NGV20/NGP20, IBM Flex System x240 Compute Node
- NGT30/NGV30/NGP30, IBM Flex System p260 and p460 Compute Nodes
- NGT40/NGV40/NGP40, IBM Flex System Manager Node
- NGT50/NGV50/NGP50, IBM Flex System Scalable Networking

For more information about these, and many other IBM System x educational offerings, visit the global IBM Training website at:
http://www.ibm.com/training

Online resources
These websites are also relevant as further information sources:
- IBM Flex System Enterprise Chassis Power Requirements Guide: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401
- Integrated Management Module II User's Guide: http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5086346
- IBM Flex System Information Center: http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp
- ServerProven for IBM Flex System: http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html
- ServerProven compatibility page for operating system support: http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml



- IBM Flex System Interoperability Guide: http://ibm.com/common/ssi/cgi-bin/ssialias?infotype=sa&subtype=wh&htmlfid=WZL12345USEN
- Configuration and Option Guide: http://www.ibm.com/systems/xbc/cog/
- xREF - IBM x86 Server Reference: http://www.redbooks.ibm.com/xref
- IBM System Storage Interoperation Center: http://www.ibm.com/systems/support/storage/ssic

Help from IBM


IBM Support and downloads: ibm.com/support
IBM Global Services: ibm.com/services




Index
Numerics
00D4692 47 00D4693 47 00D4968 155 00D7192 119 00D7193 119 00D7194 119 00D7195 119 00D7196 119 00D7197 119 00D7550 47 00D7551 47 00D7554 47 10GBASE-KR 88 39Y7916 119 41Y8298 169, 194 41Y8300 169, 194 42D0637 164, 191 42D0677 164, 191 42D0707 164, 192 42U 1100mm Enterprise V2 Dynamic Rack 128 43W7718 164, 192 43W7726 168, 191 43W7746 168, 191 43W9049 67 43W9055 63 43W9057 63 43W9078 70 49Y1397 155, 186 49Y1400 155, 186 49Y1403 186 49Y1404 155, 186 49Y1405 154 49Y1406 155, 186 49Y1407 155, 186 49Y1559 155 49Y1563 155 49Y1567 155 49Y2003 164, 191 49Y4270 95 49Y4294 103 49Y4298 103 49Y4798 95 49Y7900 235236 49Y8116 150 49Y8119 169 49Y8125 149 49Y8144 149 68Y7030 83 69Y1930 113 69Y1934 116 69Y1938 235, 246 7895-22X 198 7895-42X 216 7906 177 81Y5179 149 81Y5180 149 81Y5182 149 81Y5183 149 81Y5184 149 81Y5185 149 81Y5186 149 81Y5187 149 81Y5188 150 81Y5189 150 81Y5190 149 81Y5206 149 81Y5286 163 81Y9418 150 81Y9650 164, 191 81Y9670 164, 191 81Y9690 164, 192 81Y9722 164, 191 81Y9726 164, 191 81Y9730 164, 191 8721 57 8721-A1x 59 8731 46 8737 140 88Y6037 95 88Y6043 101 88Y6370 235 88Y6374 107 90Y3105 155 90Y3109 155, 186 90Y3178 155 90Y3450 118 90Y3454 235, 251 90Y3462 118 90Y3466 235, 238 90Y3554 235, 242 90Y3558 242 90Y3562 103 90Y4217 47 90Y4222 47 90Y4249 47 90Y4341 165 90Y4342 165 90Y4390 164165, 187 90Y4410 168, 190 90Y4412 168, 190 90Y4424 188 90Y4425 189 90Y4426 189 90Y4447 168, 190 90Y4793 184 90Y4795 184 90Y4796 184 90Y4797 184



90Y4799 184
90Y4800 184
90Y4801 184
90Y4804 184
90Y4805 184
90Y8643 164, 192
90Y8648 164, 192
90Y9310 171
90Y9338 84
90Y9356 107
94Y9219 47
94Y9220 47
95Y1174 47
95Y1179 47
95Y2375 235, 247
95Y4670 149
95Y4675 149

A
Access Gateway 282 Active Memory Expansion 207 adapter cards 234255 agents 54 air filter 63 air flow 73 AIX p260 Compute Node 215 p460 Compute Node 233 anchor card Enterprise Chassis 59 p260 Compute Node 214 p460 Compute Node 232 architecture 85 ASHRAE class A3 126

chassis See Enterprise Chassis Chassis Management Module 8284 connections 83 default IP address 40 factory defaults 84 functions 40, 83 IPv6 40 LEDs 84 overview 39 ports 39, 83 reset 84 web interface 40 Chassis Map 7 Check error log LED 72 cloud 1 CN4054 10 Gb Virtual Fabric Adapter 242 FCoE support 292 Common Agent 54 compatibility 235 compute nodes 139255 See also Flex System Manager See also p24L Compute Node See also p260 Compute Node See also p460 Compute Node See also x220 Compute Node See also x240 Compute Node management 43 overview 5, 8 console breakout cable 162 console planning 125 cooling planning 126 Rear Door Heat eXchanger 134

B
backup solutions 289 blades See compute nodes block diagram I/O architecture 86 p260 Compute Node 203 p460 Compute Node 222 x220 Compute Node 183 x240 Compute Node 146 boot from SAN 291 Broadcom BCM5718 controller in x220 192 EN2024 4-port 1 Gb Ethernet Adapter 236 Brocade Brocade Access Gateway 282 FC5022 16 Gb SAN Scalable Switch 107 FC5022 2-port 16 Gb FC Adapter 249

D
DACs EN2092 1 Gb Ethernet Switch 104 EN4091 10 Gb Ethernet Pass-thru 101 EN4093 10 Gb Scalable Switch 96 damper 72 default IP addresses 43 DPOD 108 DS3500 278 DS5000 278 DS8000 277 dust filter 64 dynamic rack 128

E
E_Ports 283 Emulex BladeEngine 3 controller in the x240 170 CN4054 10 Gb Virtual Fabric Adapter 242 EN4054 4-port 10 Gb Ethernet Adapter 240 FC3052 2-port 8 Gb FC Adapter 247 EN2024 4-port 1 Gb Ethernet Adapter 236 EN2092 1 Gb Ethernet Switch 102 comparison with 10 Gb switch 258

C
C105 186 cable raceways 131 CacheCade Pro 2.0 168



EN2092 1 Gb Ethernet Switch (continued)
    DACs 104
    features 104
    LEDs 104
    ports 103
    specifications 104
    transceivers 104
    upgrades 103
EN4054 4-port 10 Gb Ethernet Adapter 240
EN4091 10 Gb Ethernet Pass-thru 100
    DACs 101
    FCoE support 292
    LEDs 101
    ports 101
    specifications 101
    transceivers 101
    use with four-port adapters 101
EN4093 10 Gb Scalable Switch 94
    comparison with 1 Gb switch 258
    DAC cables 96
    FCoE support 292
    features 97
    ports 96
    scalable 259
    specifications 97
    transceivers 96
    upgrades 95-96, 259
    uplink ports 95
EN4132 2-port 10 Gb Ethernet Adapter 238
Enterprise Chassis 57-138
    See also Chassis Management Module
    air filter 63
    air flow 73
    air vents 72
    airflow 60
    anchor card 59
    architecture 85
    capping 79
    components 58
    console planning 125
    cooling apertures 73
    damper 72
    depth 63
    dimensions 63
    dust filter 64
    fan logic modules 70
    fan module requirements 77
    fan modules 68
    filter 63
    form factor 62
    four-port adapters 91
    front view 58, 60
    height 63
    hot-swap components 65
    I/O architecture 85
    I/O modules 92
    information panel 60, 71
    input power 63
    interconnects 87
    introduction 58
    KR lanes 90
    lanes 88
    LEDs 60
    line cords 119
    midplane 61
    modules 92
    N+N power redundancy 66
    networking 87
    noise level 63
    overview 8, 58
    panel 60
    personality card 59
    planning 120
    policies 79
    power consumption 63
    power cords 119
    power planning 120
    power supply requirements 78
    racks 127
    rear view 62
    redundancy 66
    security 41
    security policy 42
    shelf 64
    shuttle 59
    sizing 77
    sound level 63
    specifications 62
    switches 92
    temperature 63
    two-port adapters 91
    vents 72
    VPD 59
    weight 63
    width 63
Enterprise Switch Bundle 107
ESB 107
EtherChannel 266
Ethernet
    See also EN2092 1 Gb Ethernet Switch
    See also EN4091 10 Gb Ethernet Pass-thru
    See also EN4093 10 Gb Scalable Switch
    CN4054 10 Gb Virtual Fabric Adapter 242
    EN2024 4-port 1 Gb Ethernet Adapter 236
    EN4054 4-port 10 Gb Ethernet Adapter 240
    EN4132 2-port 10 Gb Ethernet Adapter 238
    internal management network 38
    Virtual Fabric 267
    VLANs 260
    x220 Compute Node 192
    x240 Compute Node 170
Ethernet switch modules
    comparison 258
    selection of 258
eXFlash
    x220 Compute Node 188
    x240 Compute Node 164
expansion cards 234-255
expert integrated systems 1
Express, PureFlex System 2, 13
external storage 274


F
F_Ports 283
Fabric Manager
    features 48
    part numbers 47
fabrics 283
fan logic modules 70
fan modules 68
    sizing 77
FastPath 168
Fault LED 72
FC3052 2-port 8 Gb FC Adapter 247
FC3171 8 Gb SAN Pass-thru 116
    features 116
    ports 116
    standards 116
    tools 116
    transceivers 116
FC3171 8 Gb SAN Switch 113
    comparison 284
    pass-thru mode 283
    specifications 114
    standards 114
    tools 114
FC3172 2-port 8 Gb FC Adapter 246
FC5022 16 Gb SAN Scalable Switch 107
    benefits 109
    comparison 284
    DPOD 108
    ESB 107, 111
    NPIV 107
    ports 107-108
    standards 112
    transceivers 109
FC5022 2-port 16 Gb FC Adapter 249
FCoE
    CN4054 10 Gb Virtual Fabric Adapter 242
FCoE upgrade
    x240 Compute Node 171
Fibre Channel
    See also FC3171 8 Gb SAN Pass-thru
    See also FC3171 8 Gb SAN Switch
    See also FC5022 16 Gb SAN Scalable Switch
    comparison of switch modules 284
    fabrics 283
    FC3052 2-port 8 Gb FC Adapter 247
    FC3172 2-port 8 Gb FC Adapter 246
    FC5022 2-port 16 Gb FC Adapter 249
    interoperability 281
    redundancy 287
    switch selection 282
Flex System Enterprise Chassis
    See Enterprise Chassis
Flex System Manager
    agents 54
    Common Agent 54
    components 49
    controls 50
    features 47
    front panel 50
    hardware 48
    licenses 47
    memory 49
    networking 51
    out-of-band management 54
    overview 5, 7
    part numbers 47
    partitions 51
    planar 50
    Platform Agent 54
    preload 51
    processor 49
    software 51
    specifications 49
    storage 50
    system board 50
foundations 2
FSM
    See Flex System Manager
FSP 44
    p260 Compute Node 214

H
H1135 187
hot-swap components 65
HTTP access 42

I
I/O adapter cards 234-255
    compatibility 235
I/O architecture 85
I/O modules 92
    compatibility 236
    EN2092 1 Gb Ethernet Switch 102
    EN4091 10 Gb Ethernet Pass-thru 100
    EN4093 10 Gb Scalable Switch 94
    FC3171 8 Gb SAN Pass-thru 116
    FC3171 8 Gb SAN Switch 113
    FC5022 16 Gb SAN Scalable Switch 107
    IB6131 InfiniBand Switch 118
    LEDs 93
    overview 9
    serial cable 93
    USB cable 93
IB6131 InfiniBand Switch 118
    cables 118
    ports 118
    specifications 118
IB6132 2-port FDR InfiniBand Adapter 251
IB6132 2-port QDR InfiniBand Adapter 253
IBM Flex System Manager
    See Flex System Manager
IBM i
    p260 Compute Node 215
    p460 Compute Node 233
IMMv2
    See Integrated Management Module II


InfiniBand
    IB6132 2-port FDR InfiniBand Adapter 251
    IB6132 2-port QDR InfiniBand Adapter 253
    See IB6131 InfiniBand Switch
Integrated Management Module II
    features 43
    overview 43
    x220 Compute Node 197
    x240 Compute Node 175
integrated systems 1
Intel C600 146
Intel processors
    x220 Compute Node 184
    x240 Compute Node 147
internal management network 38
IP addresses 43
    Chassis Management Module 40
iSCSI 286
    boot from SAN 292
    software initiator 286

J
jumbo frames 266

K
KR lanes 90

L
LEDs
    chassis 60, 71
    Chassis Management Module 84
    EN2092 1 Gb Ethernet Switch 104
    fan logic module 71
    fan modules 70
    I/O modules 93
    p260 Compute Node 202
    power modules 67
    switches 93
    x220 Compute Node 194
    x240 Compute Node 141, 173
light path diagnostics
    p260 Compute Node 202
    p460 Compute Node 219
    x220 Compute Node 196
    x240 Compute Node 174
line cords 119
load balancing 267
Locate LED 71
LOM 86

M
management 37-56
    See also Chassis Management Module
    compute nodes 43
    FSP 44
    I/O modules 45
    Integrated Management Module II 43
    internal network 38
    IP addresses 43
    network 38
    security 41
MegaRAID 168
Mellanox
    EN4132 2-port 10 Gb Ethernet Adapter 238
    IB6131 InfiniBand Switch 118
    IB6132 2-port FDR InfiniBand Adapter 251
    IB6132 2-port QDR InfiniBand Adapter 253
memory
    memory channels 150
    p260 Compute Node 205
    p460 Compute Node 223
    x220 Compute Node 184
    x240 Compute Node 150
midplane 61

N
N series 278
N_Ports 283
N+N power redundancy 66
naming
    I/O adapter cards 235
    I/O modules 94
networking 257-271
    Ethernet switch module selection 258
    load balancing 267
    performance 266
    Virtual Fabric 267
    VLANs 260
NPIV 283

O
out-of-band management 54
outriggers 128

P
p24L Compute Node 200
    See also p260 Compute Node
    features 200
p260 Compute Node 198-216
    Active Memory Expansion 207
    anchor card 214
    architecture 203-204
    block diagram 203
    cover limitations 206
    dimensions 199
    DIMM installation sequence 207
    DIMM options 205
    front panel 201
    FSP 44, 214
    I/O expansion 212
    LEDs 202
    light path diagnostics 202
    local storage 209
    LP DIMMs, use of 209
    memory 205
    operating systems 215


p260 Compute Node (continued)
    p24L, compared with 200
    processors 203
    RAID 211
    Serial over LAN 214
    slots 212
    specifications 198
    storage 209
    system board 200
    systems management 214
    USB port 201
    VPD card 214
    weight 199
p460 Compute Node 216-234
    Active Memory Expansion 226
    adapters 231
    anchor card 232
    architecture 221
    block diagram 222
    cross-bar processors 222
    DIMM installation sequence 225
    front panel 218
    FSP 44
    I/O expansion 230
    light path diagnostics 219
    LP DIMMs, use of 223
    memory 223
    operating systems 233
    overview 216
    power button 218
    processors 222
    RAID 229
    slots 230
    SOL 232
    specifications 216
    storage 227
    system board 218
    systems management 232
    USB 2.0 port 218
    VPD card 232
    warranty 217
    weight 217
PDUs 120
performance
    networking 266
    storage 288
Performance Accelerator 168, 190
personality card 59
planning
    console 125
    cooling 126
    power 120
    rack 133
    Rear Door Heat eXchanger 134
    UPS units 124
Platform Agent 54
pNIC mode 245
policies
    power 79
    security 42
power
    cabling 121
    capping 79
    cords 119
power supplies
    line cords 119
    PDUs 120
    policies 79
    power cords 119
    sizing 78
Power Systems compute nodes
    See p260 Compute Node
    See p460 Compute Node
POWER7 processor 203, 222
PowerLinux
    See also p24L Compute Node
processors
    p260 Compute Node 203
    p460 Compute Node 222
    x220 Compute Node 182, 184
    x240 Compute Node 147
PureApplication System 3
PureFlex System 2, 11, 20, 27-33
    Enterprise 27
    Express 13
    Standard 20
PureSystems 1

Q
QLogic
    FC3171 8 Gb SAN Pass-thru 116
    FC3171 8 Gb SAN Switch 113
    FC3172 2-port 8 Gb FC Adapter 246

R
raceways 131
racks 127
RAID 6 Upgrade 168, 190
rank sparing 157
RDHX 134
RDIMMs
    x220 Compute Node 184
Rear Door Heat eXchanger 134
Red Hat Enterprise Linux
    p260 Compute Node 216
    p460 Compute Node 233
Redbooks website 296
    Contact us xiv
redundancy
    power policies 79
    power supplies 66
    SAN fabric 287
remote presence 7
reset the CMM 84

S
SAN boot 291
scalable switches 258
security 41


Serial-over-LAN 44
ServeRAID C105 186
ServeRAID H1135 187
ServeRAID M5100 Performance Accelerator 168, 190
ServeRAID M5100 RAID 6 Upgrade 168, 190
ServeRAID M5100 Series Enablement Kit
    x220 Compute Node 188
    x240 Compute Node 165
ServeRAID M5100 Series IBM eXFlash Kit
    x220 Compute Node 189
    x240 Compute Node 165
ServeRAID M5100 Series SSD Expansion Kit
    x220 Compute Node 189
    x240 Compute Node 165
ServeRAID M5100 SSD Caching Enabler 168, 190
ServeRAID M5115 164, 187
servers
    See compute nodes
shelf 64
ship-loadable designs 130
shuttle 59
solid-state drives
    x220 Compute Node 191
    x240 Compute Node 164
Spanning Tree Protocol 262
SSD Caching Enabler 168, 190
stabilizers 128
storage 273-292
    backup solutions 289
    boot from SAN 291
    E_Ports 283
    external 274
    F_Ports 283
    fabrics 283
    Fibre Channel interoperability 281
    interoperability mode 285
    iSCSI 286
    N_Ports 283
    NPIV 283
    overview 6
    performance 288
    SAN boot 291
    SAN modules 284
    switch comparison 284
    tape 289
    virtualization environments 283
Storwize V7000 275
SUSE Linux Enterprise Server
    p260 Compute Node 215
    p460 Compute Node 233
switches 92
    See I/O modules 92
    compatibility 236
    selection criteria 258
System Storage DS3500 278
System Storage DS5000 278
System Storage DS8000 277
System Storage N series 278
System Storage TS3310 280
System Storage TS3500 Tape Library 280
systems management 37-56, 82
    See also Chassis Management Module
    compute nodes 43
    FSP 44
    I/O modules 45
    Integrated Management Module II 43
    internal network 38
    IP addresses 43
    network 38
    p260 Compute Node 214
    p460 Compute Node 232
    security 41
    x240 Compute Node 172

T
tape storage 289
thermals 136
time-to-value 7
TPM 163
transceivers
    EN2092 1 Gb Ethernet Switch 104
    EN4091 10 Gb Ethernet Pass-thru 101
    EN4093 10 Gb Scalable Switch 96
    FC3171 8 Gb SAN Pass-thru 116
    FC5022 16 Gb SAN Scalable Switch 109
trunking 266
TS3310 280
TS3500 Tape Library 280
Turbo Boost Technology 2.0 147

U
UDIMMs
    x220 Compute Node 184
UEFI 175, 197
UPS units
    planning 124
    supported models 120
USB ports
    p260 Compute Node 201
    p460 Compute Node 218
    USB Enablement Kit in the x240 169
    x220 193-194
    x240 Compute Node 162, 173

V
V7000 275
VIOS
    p260 Compute Node 216
    p460 Compute Node 234
Virtual Fabric 267
Virtual Fabric Mode 244
Virtual Link Aggregation Groups 264
Virtual Router Redundancy Protocol 265
virtualization of storage 283
VLAGs 264
VLANs 260
VMControl 48
VMready 270


VMware ESXi 169
vNIC mode 245, 269
VPD
    chassis 59
    p260 Compute Node 214
    p460 Compute Node 232
VRRP 265

W
wizards
    Flex System Manager 7

X
x220 Compute Node 177-197
    architecture 183
    block diagram 183
    Broadcom BCM5718 192
    dimensions 180
    disk drives 191
    drives 191
    Embedded 1 Gb Ethernet 192
    eXFlash 188
    exploded view 178
    features 178, 183
    front panel 194
    I/O expansion 192
    Integrated Management Module II 43, 197
    Intel processors 182
    internal storage 186
    introduction 177
    IPMI compliance 197
    LEDs 194
    light path diagnostics 196
    LOM 192
    memory 184
    memory features 185
    models 180
    motherboard 180
    operating systems 197
    processors 182, 184
    ServeRAID C105 186
    ServeRAID H1135 187
    slots 192
    specifications 178
    storage 186
    system board 180
    UEFI 197
    virtualization 193
    weight 180
x240 Compute Node 140-176
    10 Gb Virtual Fabric Adapter 170
    block diagram 146
    comparison 148
    components 141
    Compute Node Fabric Connector 170
    console breakout cable 162
    DIMM installation 158
    Embedded 10 Gb Virtual Fabric Adapter 170
    Emulex BladeEngine 3 170
    Ethernet 170
    eXFlash 164
    exploded view 141
    FCoE 292
    FCoE upgrade 171
    features 142, 146
    front panel 141, 173
    I/O expansion 171
    independent channel mode 157
    Integrated Management Module II 43, 175
    Intel C600 hub 146
    Intel processors 145
    internal USB 169
    introduction 140
    IPMI compliance 175
    LEDs 141
    light path diagnostics 174
    LSI controller 163
    memory 150
    memory channels 152
    memory mirroring 157
    models 144
    motherboard 143
    operating systems 176
    planar 143
    processor SKUs 148
    processors 145, 147-150
    QPI 145, 147
    rank-sparing mode 157
    recommendations 158
    SAS drives 164
    ServeRAID M5115 164
    shelf 145
    slots 171
    solid-state drives 164
    specifications 142
    storage 163
    system board 143
    TPM 163
    UEFI 175
    USB Enablement Kit 169
    USB internal slot 169
    USB ports 162
    virtualization 169
    warranty 140
XIV Storage System 276


Back cover

IBM PureFlex System and IBM Flex System Products and Technology
Describes the IBM Flex System Enterprise Chassis and compute node technology

Provides details of available I/O modules and expansion options

Explains networking and storage configurations
To meet today's complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

The IBM PureFlex System combines no-compromise system designs along with built-in expertise and integrates them into complete, optimized solutions. At the heart of PureFlex System is the IBM Flex System Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications. The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy now, and to scale to meet your needs in the future.

This IBM Redbooks publication describes IBM PureFlex System and IBM Flex System. It highlights the technology and features of the chassis, compute nodes, management features, and connectivity options. Guidance is provided about every major component, and about networking and storage connectivity.

This book is intended for customers, Business Partners, and IBM employees who want to know the details about the new family of products. It assumes that you have a basic understanding of blade server concepts and general IT knowledge.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE


IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-7984-00 ISBN 0738436992
