
NGT11

IBM PureFlex System Fundamentals

IBM Power Systems compute nodes

Copyright IBM Corporation 2012, 2013


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.

9.0

Unit objectives
After completing this unit, you should be able to:
Recognize the features of the Power Systems family of servers
List the Power compute nodes and features
Plan adapter and I/O module placement to enable external traffic flow
Explain PowerVM-based virtualization on a Power node
Plan for the management of a Power virtualized environment


The Power node essentials


(Unit roadmap: Proven technology, New compute nodes, New node details, A new platform)

What is Power?

Power Systems platform overview

(Unit roadmap: What is Power?, A new platform, New node details, Power Virtualization?)

Power Systems family (with a new addition)

Major features:
Modular systems with linear scalability
PowerVM virtualization
Physical and virtual management (FSM, SDMC, and HMC)
Roadmap to continuous availability
Binary compatibility
Energy/thermal management

(Family graphic: Power 710/730, 720/740, 750, 755, 770, 775, 780, and 795; PS Blades; and the new IBM Flex System Power node)

Power Systems features

Industry-leading hardware performance and RAS
CPU options from 2.4 GHz to 4.4 GHz, and from one to 256 cores
Memory options from 8 GB to 16 TB

Virtualization through PowerVM:
Always-on POWER Hypervisor
Virtual servers
Memory virtualization (Active Memory Sharing) and processor virtualization (micro-partitioning and processor pools)
Virtualized networking and storage
Dynamic reconfiguration of virtual servers
Relocation of virtual servers (Live Partition Mobility)

Centralized management through IVM, HMC, SDMC, and IBM Systems Director
Extended and cloud-enabled using VMControl
High availability on AIX and IBM i through PowerHA SystemMirror
Energy management through Active Energy Manager
Security through PowerSC

Power: operating system choices

AIX (POWER7 support): 7.1, 6.1, 5.3; technology levels depend on model
IBM i (POWER7 support): 7.1, 6.1; technology levels depend on model
Linux (POWER7 support): SLES 11, 10; RHEL 6.1, 5.7
VIOS: 2.2.1.0 on POWER7; 2.2.2.0 on POWER7+

IBM Flex System Power compute node

IBM Flex System Power compute node overview and architecture
IBM Flex System Power node subsystems
IBM Flex System Power node systems management

(Unit roadmap: What is Power?, A new platform, New node details, Power Virtualization?)

Summary of features by form factor

Half-wide (p260, p24L, p270): 8 / 12 / 16 cores
Architecture: two sockets; four, six, or eight cores per socket
Processor (p260): POWER7+; four cores at 4.0 GHz; eight cores at 3.6 / 4.1 GHz
Processor (p24L): POWER7; six cores at 3.7 GHz; eight cores at 3.2 / 3.55 GHz
DDR3 memory: up to 512 GB
DASD / bays: 0 - 2 SAS HDD (300 / 600 / 900 GB) or 0 - 2 SATA SSD (177 GB)
Adapter card I/O options: two

Full-wide (p460): 16 / 32 cores
Architecture: four sockets; four or eight cores per socket
Processor: POWER7; four cores at 3.3 GHz; eight cores at 3.2 / 3.5 GHz
DDR3 memory: up to 512 GB
DASD / bays: 0 - 2 SAS HDD (300 / 600 / 900 GB) or 0 - 2 SATA SSD (177 GB)
Adapter card I/O options: four

IBM PureFlex POWER7+ compute nodes

p460 (7895-43X): 16 / 32 cores; max memory 1 TB
p270 (7954-24X): 24 cores; max memory 512 GB
p260 (7895-23X): 8 / 16 cores; max memory 512 GB
p260 (7895-23A): 4 cores; max memory 512 GB

Power compute nodes comparison (all POWER7+)

                       p260 Entry          p260                p270               p460
Sockets                2                   2                   2                  4
Cores                  4                   8 or 16             24                 16 or 32
Frequency (GHz)        4.0                 3.6 / 4.1 / 4.0     3.1 / 3.4          3.6 / 4.1 / 4.0
Max memory / # DIMMs   512 GB / 16         512 GB / 16         512 GB / 16        1 TB / 32
DIMM sizes             2, 4, 8, 16, 32 GB  2, 4, 8, 16, 32 GB  4, 8, 16, 32 GB    2, 4, 8, 16, 32 GB
Mezzanine slots        2                   2                   2                  4
Dual VIOS adapter      No                  No                  Yes                No
Processor group        P05                 P10                 P10                P10
HDD (GB)               300 / 600 / 900     300 / 600 / 900     300 / 600 / 900    300 / 600 / 900
SSD                    Yes                 Yes                 Yes                Yes
RAID                   0, 1, 10            0, 1, 10            0, 1, 10           0, 1, 10

IBM Flex System Power compute node

IBM Flex System Power compute node overview and architecture
IBM Flex System Power node subsystems
IBM Flex System Power node systems management

(Unit roadmap: What is Power?, A new platform, New node details, Power Virtualization?)

Memory options and form factors

Low profile (LP) DIMMs
DIMM size   DIMM height   DIMM width   Data rate
2 GB        30 mm         133.4 mm     1066 MHz
16 GB       30 mm         133.4 mm     1066 MHz
32 GB       30 mm         133.4 mm     1066 MHz

Very low profile (VLP) DIMMs
DIMM size   DIMM height   DIMM width   Data rate
4 GB        18 mm         133.4 mm     1066 MHz
8 GB        18 mm         133.4 mm     1066 MHz

DIMMs are installed in pairs of the same size, speed, type, and technology.
Pairs of different-sized DIMMs can be mixed in a node.
SAS HDDs are supported only with the VLP memory type.

Local storage overview

The local disk drives are mounted under the cover.
The drives are not hot-pluggable.
Ordering no drives is an option.
A maximum of two drives (in all models) can be installed.
HDD: 300, 600, or 900 GB SAS drive
SSD: 177 GB SATA drive

Important: Drive type depends on DIMM type.
HDD: VLP DIMMs only
SSD: VLP or LP DIMMs

When both drives are assigned to the same virtual server, RAID-0 or RAID-1 can be implemented.

Power nodes: I/O adapter options

(#1761) IBM Flex System IB6132 2-port QDR InfiniBand Adapter
(#1762) IBM Flex System EN4054 4-port 10Gb Ethernet Adapter
(#1763) IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
(#1764) IBM Flex System FC3172 2-port 8Gb Fibre Channel Adapter
(#EC23) IBM Flex System FC5052 2-port 16Gb Fibre Channel Adapter
(#EC24) IBM Flex System CN4058 8-port 10Gb Converged Adapter
(#EC26) IBM Flex System EN4132 2-port 10Gb RoCE Adapter
(#EC2E) IBM Flex System FC5054 4-port 16Gb Fibre Channel Adapter

I/O adapter location code information

To assign an adapter to a virtual server, you must know its physical location code.

Adapter slot location codes:
Un-P1-C18
Un-P1-C19
Un-P1-C34
Un-P1-C35
Un-P1-C36
Un-P1-C37
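Where the node is managed from an HMC-style command line, the slot location codes can be listed before assigning adapters. A minimal sketch, assuming an HMC-managed system; the managed system name is a placeholder:

    # List the node's physical I/O slots with their location (DRC) names,
    # descriptions, and current owning virtual server
    # (the managed system name Server-7895-42X-SN1234567 is illustrative)
    lshwres -r io --rsubtype slot -m Server-7895-42X-SN1234567 \
        -F drc_name,description,lpar_name

The FSM offers equivalent function through its own interfaces, though the exact command path may differ.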

Four-port 10 Gb Ethernet adapter connectivity

The four ports are split between two ASICs that are on different PCI buses.
Two ports can be assigned to virtual servers independently of the other two ports.

(Diagram: a 4-port 10 Gb Ethernet adapter in slot 1 connects through its PCIe connector and the midplane to the switch bays; one two-port ASIC is identified as C18-L1 and the other as C18-L2. Switch bays 1 through 4 are shown.)

C18-L1 and C18-L2 represent the end of the location codes for the 10 Gb Ethernet adapter in a p260.
The full location code takes the form U78AF.001-ssssss-P1-C18-L1.

Two-port 8 Gb Fibre Channel adapter connectivity

The two-port Fibre Channel adapter contains one ASIC. Both ports must be assigned to the same virtual server.

(Diagram: the 4-port 10 Gb Ethernet adapter in slot 1 (C18, ASICs C18-L1 and C18-L2) connects to switch bays 1 and 2; the 2-port 8 Gb Fibre Channel adapter in slot 2 (C19) connects through its single ASIC to switch bays 3 and 4.)

IBM Flex System Power compute node

IBM Flex System Power compute node overview and architecture
IBM Flex System Power node subsystems
IBM Flex System Power node systems management

(Unit roadmap: What is Power?, A new platform, New node details, Power Virtualization?)

Managing Power servers: An evolution

Over time, the management appliances used to manage Power servers have evolved.

IVM (Integrated Virtualization Manager):
Entry rack systems and all Power blades
Internal, VIOS-based
Limited redundancy
All-virtual virtual servers

HMC (Hardware Management Console):
All rack systems; no Power blades
External, PC/Linux-based
Full redundancy
Virtual and physical virtual servers

SDMC (Systems Director Management Console):
All rack systems and Power blades
Hardware or software appliance
Functional integration
Full redundancy
Virtual and physical virtual servers
Unified and simplified

FSM (Flex System Manager):
IBM Flex System components
Integrated, independent appliance
Full redundancy
Virtual and physical virtual servers
Specialized to the IBM Flex System
Power Systems Management, VMControl, plus many more applications

IBM Flex System Manager: Integrated management appliance

IBM Flex System Manager management appliance:
All basic and advanced functions preloaded as an appliance
Adds easy-to-use multi-chassis management
Quick start wizards with automated discovery
Advanced remote presence console across multiple chassis
Centralized FoD license key management

Integrated X-Architecture and Power servers, storage, and network management:
Includes full Power node functionality (for example, Live Partition Mobility, redundant VIOS, concurrent firmware updates)
Network fabric management (port profiles, VM priority, rate limiting)
Virtualization management including resource pools
Robust security (centralized user management, security policy, certificates)
Integrated LDAP and NTP servers for the private management network
Upward integration into Systems Director, Tivoli, and other third-party enterprise managers

(Graphic: the multi-chassis management appliance with its base and extensions: Platform Manager, Chassis Manager, Storage Manager, Network Manager, Active Energy Manager, and POWER software)
Flex System Manager: Home and Plug-ins


Upon logging in, you will reach the Home tab.
Use the Plug-ins tab to get to Power Systems Management.


Power Systems virtualization topics

Power Systems virtual servers
Creating Power Systems virtual servers

(Unit roadmap: What is Power?, A new platform, New node details, Power Virtualization?)

Virtualizing workloads with PowerVM

Creating a virtualized workload with PowerVM is simple:
Create a new PowerVM virtual server.
Install the operating system (AIX, IBM i, or Linux) in the virtual server.
Install the workload applications in the virtual server.
Configure the operating system and applications as required.

Virtualization is enabled through the POWER Hypervisor.
The completed virtualized workload can be stored, copied, archived, or modified just like any other file.

The benefits of virtualizing workloads with PowerVM in this way:
Rapid provisioning: Deploying the ready-to-run workload is a quick and easy process.
Scalability: Deploying multiple copies of the same workload type is simplified.
Recoverability: Bringing a workload back online after an outage is fast and reliable.
Consolidation: Many diverse workloads can be hosted on the same server.

All of these benefits save system administrators time and resources.
In addition, workload consolidation offers significant IT infrastructure cost reductions.

What is a POWER virtual server?

A POWER virtual server is the allocation of system resources to create logically separate systems within the same physical footprint.
Virtual servers are also known as logical partitions.
A virtual server exists when the isolation is implemented with firmware.
Not based on physical system building blocks
Provides configuration flexibility

(Diagram: four virtual servers on one system, each with its own operating system, clock, and locale: SYS1 running AIX at 1:00 in Japan, SYS2 running Linux at 10:00 in the USA, SYS3 running AIX at 11:00 in Brazil, and SYS4 running IBM i at 12:00 in the UK.)

Virtual server resources

Resources are allocated to virtual servers:
Memory, allocated in units as small as the LMB size
Dedicated whole processors or shared processing units
Individual I/O slots, including virtual devices

All resources can be managed dynamically (a sketch follows below).
Some resources can be shared: virtual devices.
Some core system components are inherently shared.

(Diagram: three virtual servers, AIX, Linux, and AIX, each with its own allocation of processors (P), memory (M), and I/O slots (S).)
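Because processor and memory allocations can be changed dynamically (DLPAR), a running virtual server can be resized without a restart. A minimal sketch, assuming an HMC-managed system; sys1 and lpar1 are placeholder names:

    # Dynamically add 1024 MB of memory to virtual server lpar1
    chhwres -r mem -m sys1 -o a -p lpar1 -q 1024

    # Dynamically add 0.5 shared processing units
    chhwres -r proc -m sys1 -o a -p lpar1 --procunits 0.5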

Virtual I/O adapters

Each virtual server has virtual I/O slots.
Configurable for each virtual server
Virtual slots can hold a virtual adapter instance: Ethernet, SCSI, or Fibre Channel.
Virtual I/O slots can be dynamically added or removed, just like physical I/O slots (a sketch follows below).
A virtual slot cannot be dynamically moved to another virtual server.
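A minimal sketch of adding and removing a virtual I/O slot dynamically, assuming an HMC-managed system; sys1, lpar1, the slot number, and the VLAN ID are all placeholders:

    # Add a virtual Ethernet adapter in virtual slot 5 with port VLAN ID 10
    chhwres -r virtualio -m sys1 -o a -p lpar1 --rsubtype eth -s 5 \
        -a "ieee_virtual_eth=0,port_vlan_id=10"

    # Remove the virtual adapter in virtual slot 5
    chhwres -r virtualio -m sys1 -o r -p lpar1 -s 5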

What is a Virtual I/O Server?

A special virtual server hosting physical resources (adapters) and virtual adapters
Installed and used as an appliance
Physical devices are virtualized for virtual I/O client virtual servers.
A client virtual server can use both virtual and physical resources.

Enables sharing of physical Ethernet adapters:
This allows external access for the virtual Ethernet network.
The Shared Ethernet Adapter (SEA) provides a bridge to the client virtual servers' network.

Enables sharing of physical storage adapters and devices:
Physical disks, logical volumes, or files (backing devices) can be shared.
Each backing device is mapped to a VSCSI server adapter and appears as a VSCSI disk in the client.
Fibre Channel adapters can be shared using N_Port ID Virtualization (NPIV).

Also enables shared storage pools and Active Memory Sharing.
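To make the sharing mechanisms concrete, here is a minimal sketch of the VIOS commands commonly used for them; the device names (ent0, ent2, vhost0, hdisk2) are placeholders for a particular configuration:

    # Bridge the physical Ethernet adapter ent0 to the virtual Ethernet
    # adapter ent2 by creating a Shared Ethernet Adapter (SEA)
    mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1

    # Map physical disk hdisk2 to a client through VSCSI server adapter vhost0
    mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0

    # Verify the virtual-to-physical mappings
    lsmap -all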

Virtual I/O Server summary

The Virtual I/O Server hosts physical adapters and devices.

(Diagram: on the POWER Hypervisor, a Virtual I/O Server with 4 CPUs provides Ethernet sharing and virtual disks/optical over virtual I/O paths to client virtual servers running AIX 5.3, AIX 6.1, AIX 7.1, and Linux. The physical storage is SAN, SAS, or SCSI disks, CD/DVD drives, or SAN (NPIV) or SAS tape drives; the clients' network traffic and the management appliance connect through the LAN/WAN.)

Power Systems virtualization topics

Power Systems virtual servers
Creating Power Systems virtual servers

(Unit roadmap: What is Power?, A new platform, New node details, Power Virtualization?)

Creating a partitioned environment

Access the platform's management appliance interface.
Select the virtual server environment (AIX/Linux, VIOS, or IBM i).
Select both the processor mode and the memory mode (dedicated or shared).
A shared-memory virtual server is valid only when the processor mode is shared.
Select I/O, both physical and virtual.

(Diagram: from the management appliance, the wizard collects the virtual server name and ID, the operating environment (IBM i, VIOS, or AIX/Linux), the processor mode (dedicated or shared), the memory mode (dedicated or shared), and the physical and virtual I/O for the Power node.)

Creating virtual servers and profiles

Virtual server name and ID:
Virtual servers and profiles have names, which can be changed easily.
Virtual servers have an ID, which cannot be changed after the virtual server has been created.

Virtual server profiles:
Profiles are used when the virtual server is activated (started).
A virtual server can have more than one profile, but only one is in use at a time.

A wizard steps through the configuration tasks (a CLI sketch follows below):
Virtual server name, type, and ID number
Processor and memory characteristics and quantities
Physical adapters
Virtual adapters
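Where an HMC manages the system, the same configuration can be scripted rather than driven through the wizard. A minimal sketch; every name and value shown is illustrative:

    # Create a VIOS virtual server and its default profile in one step
    # (sys1, vios1, and all quantities are placeholders)
    mksyscfg -r lpar -m sys1 -i \
        "name=vios1,profile_name=default,lpar_env=vioserver,min_mem=1024,desired_mem=2048,max_mem=4096,proc_mode=shared,min_proc_units=0.5,desired_proc_units=1.0,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap"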

Accessing the virtual server creation wizard

The Power node is powered on and discovered by the FSM.
The process is very similar to that used on the SDMC for Power rack servers.

Create Virtual Server wizard

Upon completion of all the steps, a summary of the virtual server to be created is displayed. Click Finish.

Name: vios1
Virtual server ID: One
Environment: VIOS
Memory: 2 GB (dedicated)
Processors: Ten (shared)
Virtual Ethernet: Two adapters
Virtual disk: None (yet)
Physical volume: All internal disks
Physical adapters: Two adapters

Installing an OS in a virtual server

Completing the wizard saves the configuration in NVRAM on the node.
The next step is to activate the virtual server, open a console, and install the operating system.

VIOS virtual media repository (a sketch follows this list):
Verify that the required ISO file is in the VIOS media repository.
Configure the client virtual server to use the VIOS media repository.
Verify that the correct virtual adapters and virtual optical drives are configured in the VIOS (running) and the client virtual server (profile).
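A minimal sketch of the VIOS media repository setup, assuming the repository lives in rootvg; the size, image name, ISO path, and vhost adapter are illustrative:

    # Create the media repository (done once per VIOS)
    mkrep -sp rootvg -size 20G

    # Import the installation ISO into the repository as a read-only image
    mkvopt -name aix71_base -file /home/padmin/aix71_dvd1.iso -ro

    # Create a file-backed virtual optical device on the client's VSCSI
    # server adapter, then load the image into it
    mkvdev -fbo -vadapter vhost0
    loadopt -disk aix71_base -vtd vtopt0

    # Confirm the repository contents and the loaded image
    lsrep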

NIM (a sketch follows this list):
Ensure a NIM server is installed and reachable from the Power node's adapter.
Set up host name resolution for the virtual server's address.
Define the virtual server as a machine on the NIM server.
Prepare an lpp_source and SPOT for the installation at a supported level.
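A minimal sketch of the corresponding NIM server commands; the machine name lpar1 and the lpp_source and SPOT object names are illustrative:

    # Define the virtual server as a standalone NIM machine
    nim -o define -t standalone -a platform=chrp -a netboot_kernel=64 \
        -a if1="find_net lpar1 0" lpar1

    # Prepare a base operating system install from an existing lpp_source
    # and SPOT; the client then network-boots to start the install
    nim -o bos_inst -a source=rte -a lpp_source=lpp_aix71 \
        -a spot=spot_aix71 -a accept_licenses=yes lpar1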

Start the virtual server, booting to SMS:
VIOS virtual media repository: boot from CD/DVD, selecting the virtual optical drive.
NIM: boot from a network adapter to generate a bootp exchange with the NIM server.

IBM Power Systems management

Advanced virtualization and cloud capabilities will continue to be added.

(Diagram: IBM Systems Director provides end-to-end management, with VMControl, Active Energy Manager, Storage Control, and Network Control, and upward integration with service management software. Systems Director can manage multiple HMCs, which manage POWER servers; IVM manages stand-alone servers and BladeCenter Power blades; the FSM manages Power Flex nodes.)

Keywords
PowerVM
Power Hypervisor
IBM Flex System Power compute nodes
Virtual I/O Server (VIOS)
Shared Ethernet Adapter
HMC, SDMC, IVM
VMControl


Checkpoint (1 of 2)

1. Which of the following managers can be used to manage a p260 or p460 Power node?
a. HMC
b. FSM
c. SDMC
d. IVM

2. The maximum memory on a p460 is (blank), and the maximum number of cores on a p460 is (blank).

Checkpoint solutions (1 of 2)

1. Which of the following managers can be used to manage a p260 or p460 Power node?
a. HMC
b. FSM
c. SDMC
d. IVM
The answers are HMC, FSM, and IVM.

2. The maximum memory on a p460 is 1024 GB, and the maximum number of cores on a p460 is 32.
The answers are 1024 GB and 32.

Checkpoint (2 of 2)

3. True or False: All virtual servers on a p460 must run the same operating system from a common data store.

4. Name the three resource types that are assigned to virtual servers.

5. What is the name of the appliance that enables virtual servers to share physical resources?

Checkpoint solutions (2 of 2)
3. True or False: All virtual servers on a p460 must run the
same operating system from a common data store.
The answer is false.
4. Name the three resource types that are assigned to virtual
servers:
The answers are processor, memory, and I/O slots.
5. What is the name of the appliance that enables virtual
servers to share physical resources?
The answer is Virtual I/O Server (VIOS).


Unit summary
Having completed this unit, you should be able to:
Recognize the features of the Power Systems family of servers
List the Power compute nodes and features
Plan adapter and I/O module placement to enable external traffic flow
Explain PowerVM-based virtualization on a Power node
Plan for the management of a Power virtualized environment
