
IBM Power Systems Technical University

October 18-22, 2010, Las Vegas, NV

Session Title:

Building a Robust Datacenter with Live Partition Mobility


Session ID: VM01
Speaker Name: Adekunle Bello

© 2010 IBM Corporation


Trademarks
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.
Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market. Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.

For a complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml:


*, AS/400, e-business (logo), DBE, ESCON, eServer, FICON, IBM, IBM (logo), iSeries, MVS, OS/390, pSeries, RS/6000, S/390, VM/ESA, VSE/ESA, WebSphere, xSeries, z/OS, zSeries, z/VM, System i, System i5, System p, System p5, System x, System z, System z9, BladeCenter

The following are trademarks or registered trademarks of other companies.


Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.
* All other products may be trademarks or registered trademarks of their respective companies. Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices subject to change without notice. 
Contact your IBM representative or Business Partner for the most current pricing in your geography.


Agenda

- A walk through Live Partition Mobility
- Application Migration using Live Partition Mobility
- Migrating Logical Partitions between POWER7 and POWER6 Systems
- Live Partition Mobility with N_Port ID Virtualization (NPIV)


Live Partition Mobility


Live Partition Mobility (LPM), a component of the PowerVM Enterprise Edition hardware feature, provides the ability to move AIX and Linux logical partitions from one system to another. The mobility process transfers the system environment, including the processor state, memory, attached virtual devices, and connected users.

Active Partition Mobility allows you to move AIX and Linux logical partitions that are running, including the operating system and applications, from one system to another using Mover Service Partitions (MSP). Neither the logical partition nor the applications running on it need to be shut down.

Inactive Partition Mobility allows you to move a powered-off AIX or Linux logical partition from one system to another.


Live Partition Mobility Benefits


- Resource balancing: a system does not have enough resources for its workload while another system does
- New system deployment: a workload running on an existing system must be migrated to a new, more powerful one
- Availability requirements: when a system requires maintenance, its hosted applications must not be stopped; they can be migrated to another system

It is not a replacement for PowerHA:
- It is not automatic
- Partitions cannot be migrated from failed systems
- Failed operating systems cannot be dynamically migrated
- Thus, it is not a disaster recovery solution

Applications can interlock with partition migration


- Partition migration is presented as a new DLPAR event
  - Existing semantics with check, pre, and post phase processing
  - Check and pre-phase callouts on the source server; post callout on the destination server
- DLPAR scripts supported
  - The register subcommand returns DR_RESOURCE=pmig
  - Other subcommands: checkmigrate, premigrate, undopremigrate, postmigrate
  - Scripts invoke normal commands in the post phase to determine changes
- DR API supported
  - The dr_reconfig() system call has new object and action bits for migration
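The DLPAR script interface for migration events can be sketched as a minimal shell script. The subcommand names and the DR_RESOURCE=pmig string come from the interface described above; the function wrapper and the placeholder bodies are illustrative assumptions, not a real production script.

```shell
# Minimal sketch of a DLPAR script handling partition-migration events.
# The subcommands and DR_RESOURCE=pmig come from the DLPAR interface above;
# the bodies are placeholders (a real script would quiesce/resume an app).
dr_script() {
  case "$1" in
    register)        echo "DR_RESOURCE=pmig" ;;  # declare interest in migration events
    checkmigrate)    return 0 ;;                 # return non-zero to veto the migration
    premigrate)      : ;;                        # source side: quiesce the application
    undopremigrate)  : ;;                        # source side: premigrate failed, resume
    postmigrate)     : ;;                        # destination side: re-discover topology
    *)               return 1 ;;
  esac
}

dr_script register   # prints DR_RESOURCE=pmig
```

A real script would be installed with the drmgr command so the DR manager invokes it at each phase of the migration.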


Basic Requirements
- The migrating partition must have fully virtualized resources (i.e., no dedicated physical adapters)
- All systems in a migration domain must be connected over a Storage Area Network (SAN) to shared physical disks
  - No internal or VIOS-based disks
  - SAN disks must be configured to be visible from both systems
- The HMC(s) require network connectivity to:
  - VIO Servers providing virtual SCSI for the migrating partition
  - VIO Servers providing virtual networking for the migrating partition
  - VIO Servers providing the Mover Service Partition (MSP) function
  - The migrating partitions
- Both MSPs must have network connectivity to each other
- The mobile partition must be connected to the same subnet after migration


Partition migration high-level workflow


Inactive and active partition migration follow the same four-step sequence:
1. Preparation: ready the infrastructure to support Live Partition Mobility.
2. Validation: check the configuration and readiness of the source and destination systems.
3. Migration: transfer of partition state from the source to the destination takes place. One command launches both inactive and active migrations; the HMC determines the appropriate type of migration to use based on the state of the mobile partition. If the partition is in the Not Activated state, the migration is inactive; if the partition is in the Running state, the migration is active.
4. Completion: free unused resources on the source system and the HMC.

Managed systems preparation


Both source and destination systems must be at firmware level 01Ex320 or later:

lslic -m "Ops-Rack92-MMA-SN100F6B0" -t sys -F "ecnumber activated_level"
01EM320 76

Both source and destination systems must have the PowerVM Enterprise Edition license code installed.
HMC GUI navigation: Systems Management > Servers > [select system] > Capacity on Demand (CoD) > Enterprise Enablement > View History Log

HMC CLI:
lssyscfg -r sys -m "Ops-Rack92-MMA-SN100F6B0" -F "name active_lpar_mobility_capable"
Ops-Rack92-MMA-SN100F6B0 1

Note: both source and destination should be using the same Logical Memory Block (LMB) size.

VIO Server preparation


For Active Partition Mobility, both source and destination VIO Servers should have the Mover Service Partition and Time Reference settings enabled.
HMC GUI navigation: Systems Management > Servers > [select system] > [select vios] > Properties [General tab and Settings tab]

HMC CLI:
lssyscfg -r lpar -m "Ops-Rack92-MMA-SN100F6B0" --filter lpar_names=zlab040-VIOS -F "name,msp,time_ref"
zlab040-VIOS,0,0

chsyscfg -r lpar -m "Ops-Rack92-MMA-SN100F6B0" -i "name=zlab040-VIOS,msp=1,time_ref=1"



No resource should be set as required for Active Migration


Example of a partition with an I/O adapter set as required:

lssyscfg -r prof -m "Ops-Rack92-MMA-SN100F6B0" --filter "lpar_names=zlab046" -F "lpar_name io_slots"
zlab046 21010222/none/1

io_slots format: slot-DRC-index/[slot-IO-pool-ID]/is-required, where is-required is 0=no, 1=yes

Locating the actual adapter on the I/O slot:

lshwres -r io --rsubtype slot -m "Ops-Rack92-MMA-SN100F6B0" --filter "lpar_names=zlab046" -F "lpar_name drc_index drc_name description"
zlab046 21010222 U789D.001.DQDXYKM-P1-C5 "PCI 10/100/1000Mbps Ethernet UTP 2-port"

A migration-ready partition has no dedicated I/O.

NOTE: Though most partitions will pass the following checks, all the same, for active migration go through the partition's properties and ensure:
- That the mobile partition is not the redundant error path reporting logical partition
- That the mobile partition is not configured with barrier synchronization registers (BSR)
- That the mobile partition is not configured with huge pages
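As a small illustration of the io_slots field format, the is-required flag is the last slash-separated component of each entry. The helper below is hypothetical; only the field format itself comes from the slide.

```shell
# Hypothetical helper: report whether an io_slots entry
# ("slot-DRC-index/[slot-IO-pool-ID]/is-required") marks the slot as required.
slot_required() {
  case "${1##*/}" in   # keep only the text after the last '/'
    1) echo "required" ;;
    0) echo "not required" ;;
    *) echo "unknown" ;;
  esac
}

slot_required "21010222/none/1"   # prints: required
```

In practice you would feed it the io_slots values reported by lssyscfg and clear any "required" slots before attempting an active migration.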

Remote Partition Mobility Requirement


Remote Partition Mobility is migrating a partition when the source and destination MSPs are controlled by two different HMCs. In this case, both HMCs must be configured to allow remote command execution through ssh. First update the HMCs to allow remote command execution:
HMC Management > Remote Command Execution

Now use the mkauthkeys command to generate the security keys that authenticate one HMC with the other:
mkauthkeys --ip 9.3.131.144 -u hscroot -g
Enter the password for user hscroot on the remote host 9.3.131.144:
Note: use the -g option so that authentication keys are set up in both directions (from source to destination and vice-versa)

Partition Migration using Live Partition Mobility


Inactive migration workflow

[Diagram: inactive migration workflow. Over time: validation; virtual storage adapter setup on the source and destination Virtual I/O Servers; new LPAR creation on the destination system; LPAR removal from the source system; notification of completion.]


Inactive migration state flow

[Diagram: inactive migration state flow between the HMC and the POWER Hypervisors on the source and destination systems.]


Active migration validation workflow

[Diagram: active migration validation workflow. Over time, across the source system, destination system, both Virtual I/O Servers, and the mobile partition: active partition migration capability and compatibility check; system resource availability check; RMC connection check; virtual adapter mapping; partition readiness check; OS and application readiness check.]


Migration phase of an active migration


[Diagram: migration phase of an active migration. Over time: validation; MSP setup; virtual SCSI and FC setup on the destination; new LPAR creation while the LPAR runs on the source system; memory copy; the LPAR resumes on the destination system; virtual SCSI and FC removal and LPAR removal on the source; notification of completion.]


Active migration partition state transfer path

[Diagram: active migration partition state transfer path. The departing mobile partition's state flows from the source POWER Hypervisor through the source Mover Service Partition's VASI device, over the network to the destination Mover Service Partition's VASI device, and through the destination POWER Hypervisor into the arriving mobile partition shell.]


Live Partition Mobility Enhancements


- User-defined Virtual Target Device names preserved across LPM
- Support for shared persistent (SCSI-3) reserves on LUNs of a migrating partition (VSCSI)
- Support for migration of a client across non-symmetric VIOS configurations
- CLI interface to configure IPSEC tunneling for the data connection between MSPs
- Support to allow the user to select the MSP IP addresses to use during a migration
- Network bandwidth testing during migration validation


Preservation of Custom VTD Names


- If there is a name conflict (the name already exists on the target VIOS), the validation phase gives a detailed error message that includes the conflicting name
- The conflicting name can be changed with: chdev -dev <dev_name> -attr mig_name=<new_name>
- The validation name-conflict error can be overridden by proceeding with the migration
  - A default name of the form vtscsiX replaces the custom name on the destination VIOS
  - An entry is written to the cfglog linking the conflicting name with the new name


Shared Persistent (SCSI-3) Reserve Support


- The reserve type is shown as PR_shared by lsattr
- Source and destination must use different keys
- The VIOS makes no attempt to validate, change, or otherwise configure the reserves; if the disks cannot be accessed, the cause is outside LPM
- The single_path reserve type is now explicitly rejected during validation


Migration Across Non-Symmetric Configurations


- Migrations that involve a loss of redundancy
- Requires HMC level V7R3.5.0+ and the GUI "override errors" option or the command-line --force flag
- Allows moving a client partition to a system whose VIOS configuration does not provide the same level of redundancy found on the source

Examples:
- The client uses MPIO where each path goes through its own VIOS; the destination has only a single VIOS
- The client uses NPIV and MPIO to send data through 2 physical Fibre Channel ports (hosted by 1 or 2 VIOS); the destination has only 1

- Validation returns a detailed error explaining the loss of redundancy
- Continuing with the migration results in an MPIO failed path
- Please note that redundancy is not restored when migrating back!


CLI Support for IPSEC Tunneling Between MSPs


- The user supplies the source MSP IP address, the destination MSP IP address, and a key
- Controlled by new options on the existing cfgsvc, startsvc, and stopsvc commands
- The status of tunnels can be viewed with the lssvc ipsec_tunnel command
- Requires the clic.rte package, available on the Virtual I/O Server Expansion Pack


User Selectable MSP IP Addresses


- Requires HMC level V7R3.5.0+ (V7R7.2.0 recommended)
- Only available through the HMC command line
- The user can select the IP address for both the source and destination MSP
- Removes many problems due to source/destination MSP mismatch and firewall restrictions; some customers have experienced migration failures because the IP addresses chosen by the HMC were blocked by their firewall configuration


Bandwidth Testing
- During migration validation the network bandwidth is tested
- Results are logged in the configuration log: alog -t cfg -o > cfglog
- MSP_MSP_BANDWIDTH=xxx, where xxx is in Mb/s; if bandwidth is less than 50 Mb/s, xxx is set to LOW
- Results are not 100% accurate; they simply give an idea of the bandwidth available during the validation phase of the migration
- 64 KB ping packets are used to test bandwidth
- Useful information if the migration takes a long time or fails


Migration Bandwidth Values


Average ms from ping    MSP_MSP_BANDWIDTH (Mb/s)    Equivalent MB/s value
0                       1000                        125.0
1                       500                         62.6
2                       250                         32.2
3                       167                         20.8
4                       125                         16.1
5                       100                         12.5
6                       83                          10.4
7                       71                          8.9
8                       62                          7.8
9                       55                          6.9
10                      50                          6.2
Greater than 10         LOW                         Less than 6.2
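The ping-to-bandwidth values above look roughly consistent with estimating bandwidth as about 500/RTT Mb/s from the 64 KB ping. That formula is an inference from the table's numbers, not documented behavior, and the rounding differs slightly from the table for some entries; a sketch:

```shell
# Sketch of the apparent ping-to-bandwidth mapping (an inference from the
# table above, not a documented formula): ~500/RTT Mb/s, with 0 ms reported
# as 1000 and anything over 10 ms reported as LOW. Integer division here
# may differ slightly from the table's rounding for some values.
est_bandwidth() {  # arg: average ping RTT in whole milliseconds
  ms=$1
  if [ "$ms" -le 0 ]; then
    echo 1000
  elif [ "$ms" -gt 10 ]; then
    echo LOW
  else
    echo $((500 / ms))
  fi
}

est_bandwidth 2    # prints 250
est_bandwidth 12   # prints LOW
```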

Trial-By-Fire Session
Live Partition Mobility Between POWER7 and POWER6


Environment
1. POWER7: 8233-E8B, 12 CPUs and 64 GB memory (firmware AL710_065)
2. POWER6: 9117-MMA, 2 CPUs and 8 GB memory (firmware EM340_116)
3. HMC V7 Release 7.1.0, 7310-CR4
4. Dual VIO Servers, each running 2.1.2.13-FP-22.1 SP02 with IZ70375, using NPIV (VIOS 2.1.3+ is highly recommended)
5. AIX 6.1 LPAR 6100-04-03-1009
6. AIX 5.3 LPAR 5300-11-03-1013
7. Storage: DS4K (1742-900) disks for both virtual SCSI and virtual FC
8. Logical memory block (LMB) size on both systems: 32 MB
9. AIX native MPIO is used for multi-pathing
10. Ethernet adapters: HEA on POWER7 and 10/100/1000 Base-TX PCI-X adapters on POWER6
11. NPIV: 8 Gb PCI Express Dual Port Adapters (df1000f114108a03)


Test scenarios
1. Test inactive partition migration
2. Test active partition migration
3. Test assigning more CPU capacity to the mobile partition than is available on the destination system
4. Test CPU compatibility modes POWER6/POWER6+/POWER7 on the moving LPAR
5. Test parallel active migrations between POWER7 and POWER6 systems
6. Test the effect of the reserve_policy when it is set to single_path
7. Test interaction of partition migration and Active Memory Sharing
8. Simulate a crash of the mobile partition while migration is active
9. While LPM is running, crash the VIO server acting as the source MSP
10. Crash the HMC during a partition migration
11. NPIV virtual FC adapter load test
12. Create a mirrored VG and run syncvg during partition migration
13. NFS-mount files between two LPARs and do a partition migration of the NFS server

RESULT: LPM passed all the tests, failing a migration only when it is expected (for example, test 3, with not enough memory on the destination system)

Lessons Learned
- When doing multiple migrations in parallel, it is recommended to use HMC level V7R7.2.0+ so you can specify the IP addresses for source and destination MSP communication.
- When migrating between POWER6 and POWER7 systems, it is recommended to set the processor compatibility mode to "default" so the HMC can set a suitable mode on migration.
- Check the reserve_policy on all MPIO disks and make sure it is set to no_reserve or SCSI-3 PR_shared (in which case use different keys for source and destination MSPs).
- For NPIV to work, make sure you are using the recommended VIOS levels and that the SAN switch firmware is at the recommended level.
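The reserve_policy check from the lessons above can be sketched as a small filter. The acceptable values (no_reserve, PR_shared) and the rejected single_path come from the slides; the helper function itself is illustrative.

```shell
# Illustrative check: is a disk's reserve_policy compatible with LPM?
# Per the lessons above, no_reserve and PR_shared are acceptable;
# single_path (and anything else) should be flagged before migrating.
lpm_reserve_ok() {  # arg: value of the disk's reserve_policy attribute
  case "$1" in
    no_reserve|PR_shared) echo "ok" ;;
    *)                    echo "fix: set no_reserve or PR_shared" ;;
  esac
}

lpm_reserve_ok single_path   # prints the fix message
```

In practice you would run it over the output of lsattr for each MPIO disk and correct any flagged disks with chdev before starting the migration.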


Case Study: Partition Mobility & WebSphere App Server

Minimal change in response time, except close to the end of LPM when the LPAR is switched to the new system. Notice the workload is very CPU-intensive (95%), yet it could still be transferred.
Source: http://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/whitepaper/power6_lpar/aix/v6r1

Summary of Case Study


During the project, many migrations were run and all worked with the desired quality and reliability. Migration duration was almost the same for all the different workloads tested: there is very little difference between migrations with a heavy workload and those with a light one, only four seconds between a medium workload (20 clients) and a high workload (50 clients). During LPM, when active pages are migrated from source to destination, a temporary lock is imposed. Because of this lock, the WebSphere Application Server instance is not allowed to perform any transactions, so its performance can be observed to dip at that moment. However, immediately after the migration completes, the WebSphere Application Server instance regains its previous stable state. During the brief lock, the DayTrader application and the WebSphere Application Server see higher response times, but no downtime.
Source: http://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/whitepaper/power6_lpar/aix/v6r1

LPM with N_Port ID Virtualization (NPIV)


N_Port ID Virtualization (NPIV) is a technology that allows multiple logical partitions to access independent physical storage through the same physical Fibre Channel adapter. The adapter is attached to a Virtual I/O Server partition, which acts only as a pass-through, managing the data transfer through the POWER Hypervisor. Each partition using NPIV is identified by a pair of unique worldwide port names (WWPNs), enabling you to connect each partition to independent physical storage on a SAN. Unlike virtual SCSI, only the client partitions see the disks.

An MPIO device can be formed from an NPIV virtual FC adapter and a physical FC HBA. This creates the possibility of migrating a running partition that is using dedicated I/O adapters.


Dynamic Physical/Virtual Multipathing


[Diagram: dynamic physical/virtual multipathing. An AIX client forms an MPIO device with one path through a dedicated physical FC HBA and another through an NPIV adapter served by the VIOS via the Power Hypervisor; both paths reach the storage controller through SAN switches.]


Steps to migrate a VIO client with a dedicated FC adapter using NPIV:
(1) Add a virtual FC adapter (fcsZ) using NPIV/MPIO
(2) Unconfigure the physical FC adapter (fcsY) using rmdev and remove the slot using DLPAR
(3) Migrate the partition using LPM
(4) A virtual FC adapter is automatically configured at the destination

[Diagram: the VIO client's virtual FC adapter maps to a vfchost on the VIOS, backed by a physical NPIV-capable port (fcsX) attached to an NPIV-capable SAN switch; after LPM the client runs on the destination system with fcsY reconfigured as a virtual adapter.]


Live Partition Mobility with NPIV Demo

Source system: Ops-Right-9117-MMA-SN10B31D0
Source MSP: zlab008-VIOS-Migration1 (hostname zlab008)
Migrating partition: zlab019-Migration2-Client (hostname zlab019)
Destination system: Ops-Left-9117-MMA-SN10B32C0
Destination MSP: zlab010-VIOS-Migration1 (hostname zlab011)



NPIV Server Adapter setup


NPIV Client Adapter setup


At this point we shut down both the VIO client and the VIO Server, then reactivated them using the updated profiles (the partitions are brought up in sequence: the VIO Server first, then the client).

NPIV server adapter mapping using the vfcmap command

Status BEFORE mapping

Map adapter to vfchost

Status AFTER mapping vfc client adapter


Confirm virtual FC adapter on the mobile partition

WWPN

By using standard SAN configuration techniques, assign the mobile partition's storage to the virtual Fibre Channel adapters that use the generated WWPN pair, and properly zone the virtual Fibre Channel WWPNs with the storage subsystem's WWPNs.

Pre-Migration

Source MSP

Destination MSP


Hit the Validate button


Hit the Migrate button


Source VIO Server during partition migration

vfcs not moved yet

vfcs have moved and vfchosts removed!


Destination VIO Server during partition migration

vfchosts not yet created here


vfchosts created!


Thank you!

contact info: kunle@us.ibm.com

