
HP 3PAR Solaris Implementation Guide

Abstract
This implementation guide provides information for establishing communications between an HP 3PAR Storage System and a Solaris 8, 9 or 10 host running on the SPARC, x64, and x86 platforms. General information is also provided on the basic steps required to allocate storage on the HP 3PAR Storage System that can then be accessed by the Solaris host.

HP Part Number: QL226-96253
Published: December 2011

Copyright 2011 Hewlett-Packard Development Company, L.P.

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Acknowledgments

Microsoft, Windows, Windows XP, and Windows NT are U.S. registered trademarks of Microsoft Corporation. Java and Oracle are registered trademarks of Oracle and/or its affiliates.

Contents

1 Introduction...............................................................................6
    Supported Configurations.................................................................6
    InForm OS Upgrade Considerations.........................................................6
    Audience.................................................................................6
    Related Documentation....................................................................7
    Typographical Conventions................................................................7
    Advisories...............................................................................7
2 Configuring the HP 3PAR Storage System for Fibre Channel...................................9
    Configuring the HP 3PAR Storage System Running InForm OS 3.1.x or 2.3.x..................9
        Configuring Ports on the HP 3PAR Storage System in a Direct Topology.................9
        Configuring Ports on the HP 3PAR Storage System in a Fabric Topology................10
        Creating the Host Definition (Host Persona).........................................10
    Configuring the HP 3PAR Storage System Running InForm OS 2.2.x..........................11
        Configuring Ports for an Emulex lpfc Driver.........................................11
        Configuring Ports for a QLogic qla Driver...........................................12
        Configuring Ports for a Solaris qlc or emlxs Driver.................................12
        Configuring Ports for a JNI Tachyon Driver..........................................13
        Configuring Ports for a JNI Emerald Driver..........................................13
        Creating the Host Definition........................................................14
    Setting Up and Zoning the Fabric........................................................14
        Configuration Guidelines For Fabric Vendors.........................................15
        Target Port Limits and Specifications...............................................15
        Single Initiator to Single Target Zoning No Fan-In No Fan-Out.......................16
        Single Initiator to Single Target Zoning with Fan-Out from One HP 3PAR Storage System Port to Multiple Host Ports...16
        Single Initiator to Single Target Zoning with Fan-In from Multiple HP 3PAR Storage System Ports to One Host Port...17
        Single Initiator to Single Target Zoning with Mixed Fan-In/Out Configurations.......17
        Non-Compliant Zoning Examples.......................................................18
3 Configuring the HP 3PAR Storage System for iSCSI..........................................19
    Configuring the HP 3PAR Storage System iSCSI Ports......................................19
    Creating an iSCSI Host Definition on an HP 3PAR Storage System Running InForm OS 3.1.x or 2.3.x...20
    Creating an iSCSI Host Definition on an HP 3PAR Storage System Running InForm OS 2.2.x..22
    Configuring CHAP Authentication (Optional)..............................................23
        Enabling Unidirectional (Host) CHAP.................................................23
        Disabling Unidirectional (Host) CHAP................................................25
        Enabling Bidirectional (Mutual) CHAP................................................26
        Disabling Bidirectional CHAP........................................................27
4 Configuring the Host for a Fibre Channel Connection.......................................29
    Installing the HBA......................................................................29
    Installing the SUN SAN Driver Packages..................................................29
    Installing the HBA Drivers..............................................................29
        Installation Notes for Emulex lpfc Drivers..........................................30
            Configuration File Settings for Emulex lpfc Drivers.............................31
        Installation Notes for QLogic qla Drivers...........................................31
            Configuration File Settings for QLogic qla Drivers..............................31
        Installation Notes for Solaris qlc and emlxs Drivers................................32
            Configuration File Settings for Solaris qlc and emlxs Drivers...................32
        Installation Notes for JNI Tachyon fcaw and fca-pci Drivers.........................32
            Configuration File Settings for JNI Tachyon Drivers.............................34
        Installation Notes for JNI Emerald JNIC146x Drivers.................................34
            Configuration File Settings for JNI Emerald Drivers.............................35
    Verifying the Driver Package Installation...............................................36
    Setting Up Dynamic Multipathing for the Solaris Host....................................36
        Using Veritas Volume Manager VxDMP Multipathing.....................................36
            Verifying the VxDMP ASL Installation............................................37
        Using Sun StorageTek Traffic Manager (SSTM) Multipathing............................37
            Edits to the /kernel/drv/scsi_vhci.conf File for SSTM Multipathing..............38
            Additional Edit to the /kernel/drv/scsi_vhci.conf File for Solaris 8/9..........38
    Persistent Target Binding Considerations................................................38
        Persistent Target Binding for Emulex lpfc Drivers...................................39
        Persistent Target Binding for QLogic qla Drivers....................................40
        Persistent Target Binding for Solaris qlc and emlxs Drivers.........................41
        Persistent Target Binding for JNI Tachyon Drivers...................................41
        Persistent Target Binding for JNI Emerald Drivers...................................41
    System Settings for Minimizing I/O Stall Times on VLUN Paths............................42
5 Configuring the Host for an iSCSI Connection..............................................44
    Solaris Host Server Requirements........................................................44
    Setting Up the Ethernet Switch..........................................................45
    Configuring the Solaris Host Ports......................................................45
    Setting Up the iSCSI Initiator for Target Discovery.....................................46
        Using the Static Device Discovery Method............................................46
        Using the SendTargets Discovery Method..............................................47
        Using the iSNS Discovery Method.....................................................47
        Initiating and Verifying Target Discovery...........................................48
    Setting Up Multipathing MPXIO...........................................................50
6 Allocating Storage for Access by the Solaris Host.........................................52
    Creating Storage on the HP 3PAR Storage System..........................................52
        Creating Virtual Volumes for InForm OS 2.2.4 to 3.1.x...............................52
        Creating Virtual Volumes for InForm OS 2.2.3 and Earlier............................53
    Exporting LUNs to a Host with a Fibre Channel Connection................................53
        Creating a Virtual Logical Unit Number for Export...................................53
        VLUN Exportation Limits Based on Host HBA Drivers...................................54
    Exporting LUNs to a Solaris Host with an iSCSI Connection...............................54
    Discovering LUNs on Fibre Channel Connections...........................................56
        Discovering LUNs for QLogic qla and Emulex lpfc Drivers.............................56
        Discovering LUNs for Solaris qlc and emlxs Drivers..................................56
        Discovering LUNs for the JNI Tachyon Driver.........................................58
        Discovering LUNs for the JNI Emerald Driver.........................................58
        Discovering LUNs for Sun StorEdge Traffic Manager (SSTM)............................58
        Discovering LUNs for Veritas Volume Manager's DMP (VxDMP)...........................60
    Discovering LUNs on iSCSI Connections...................................................60
    Removing Volumes for Fibre Channel Connections..........................................61
    Removing Volumes for iSCSI Connections..................................................61
7 Configuring the Host for an FCoE Connection...............................................63
    Solaris Host Server Requirements........................................................63
    Configuring the FCoE Switch and FC Switch...............................................63
    Configuring the Solaris Host Ports......................................................63
8 Using the SunCluster Cluster Server.......................................................65
9 Using the Veritas Cluster Server..........................................................66
10 Booting from the HP 3PAR Storage System..................................................67
    Preparing a Bootable Solaris Image for Fibre Channel....................................67
        Dump and Restore Method.............................................................67
        Net Install Method..................................................................67
    Installing the Solaris OS Image onto a VLUN.............................................67
    Configuring Additional Paths and Sun I/O Multipathing...................................69
    Configuration for Multiple Path Booting.................................................71
    Additional Devices on the Booting Paths.................................................72
    SAN Boot Example........................................................................72
A Configuration Examples....................................................................74
    Example of Discovering a VLUN Using qlc/emlx Drivers with SSTM..........................74
    Example of Discovering a VLUN Using an Emulex Driver and VxVM...........................74
    Example of Discovering a VLUN Using a QLogic Driver with VxVM...........................75
    Example of UFS/ZFS File System Creation.................................................75
    Examples of Growing a Volume............................................................76
        Growing an SSTM Volume..............................................................76
        Growing a VxVM Volume...............................................................78
    VxDMP Command Examples..................................................................80
        Displaying I/O Statistics for Paths.................................................80
        Managing Enclosures.................................................................80
        Changing Policies...................................................................81
        Accessing VxDMP Path Information....................................................81
            Listing Controllers.............................................................81
            Displaying Paths................................................................81
B Patch/Package Information.................................................................83
    Minimum Patch Requirements for Solaris Versions.........................................83
    Patch Listings for Each SAN Version Bundle..............................................85
    HBA Driver/DMP Combinations.............................................................87
        Minimum Requirements for a Valid QLogic qlc + VxDMP Stack...........................87
        Minimum Requirements for a Valid Emulex emlxs + VxDMP Stack.........................87
        Default MU Level Leadville Driver Table.............................................88
C FCoE-to-FC Connectivity...................................................................90

1 Introduction
This implementation guide provides information for establishing communications between an HP 3PAR Storage System and a Solaris 8, 9 or 10 host running on the SPARC, x64, and x86 platforms. General information is also provided on the basic steps required to allocate storage on the HP 3PAR Storage System that can then be accessed by the Solaris host. The information contained in this implementation guide is the outcome of careful testing of the HP 3PAR Storage System with as many representative hardware and software configurations as possible.

REQUIRED
For predictable performance and results with your HP 3PAR Storage System, the information in this guide must be used in concert with the documentation set provided by HP for the HP 3PAR Storage System and the documentation provided by the vendor for their respective products.

REQUIRED
All installation steps should be performed in the order described in this implementation guide.

Supported Configurations
For complete details on supported host configurations, consult the HP 3PAR InForm OS 3.1.1 Configuration Matrix, which is available on HP's Business Support Center (BSC). To obtain a copy of this documentation, go to http://www.hp.com/go/3par/, navigate to your product page, click HP Support & Drivers, and then click Manuals.

InForm OS Upgrade Considerations


Refer to the HP 3PAR InForm OS 3.1.1 Upgrade Pre-Planning Guide (PN QL226-96033) for information about planning an online HP 3PAR InForm Operating System upgrade. To obtain a copy of this documentation, go to http://www.hp.com/go/3par/, navigate to your product page, click HP Support & Drivers, and then click Manuals.

Audience
This implementation guide is intended for system and storage administrators who monitor and direct system configurations and resource allocation for HP 3PAR Storage Systems. The tasks described in this manual assume that the administrator is familiar with Sun Solaris and the HP 3PAR InForm OS.

Although this guide attempts to provide the basic information that is required to establish communications between the HP 3PAR Storage System and the Solaris host, and to allocate the required storage for a given configuration, the appropriate HP 3PAR documentation must be consulted in conjunction with the Solaris host and HBA vendor documentation for specific details and procedures.

This implementation guide is not intended to reproduce any third-party product documentation. For details about devices such as host servers, HBAs, fabric and Ethernet switches, and non-HP 3PAR software management tools, consult the appropriate third-party documentation.


Related Documentation
The following documents also provide information related to HP 3PAR Storage Systems and the InForm Operating System:
For information about each of the topics listed below, read the document shown with that topic:

- Specific platforms supported: HP 3PAR InForm OS 3.1.1 Configuration Matrix
- InForm command line interface commands and their usage: HP 3PAR InForm OS CLI Reference
- Using the InForm Management Console to configure and administer HP 3PAR Storage Systems: HP 3PAR InForm OS Management Console Online Help
- HP 3PAR Storage System concepts and terminology: HP 3PAR InForm OS Concepts Guide
- Determining HP 3PAR Storage System hardware specifications, installation considerations, power requirements, networking options, and cabling: HP 3PAR Storage System S-Class/T-Class Storage Server Physical Planning Manual or the HP 3PAR Storage System E-Class/F-Class Storage Server and Third-Party Rack Physical Planning Manual
- Identifying storage server components and detailed alert information: HP 3PAR InForm OS Messages and Operators Guide
- Using HP 3PAR Remote Copy: HP 3PAR Remote Copy Users Guide
- Using HP 3PAR CIM: HP 3PAR CIM API Programming Reference

Typographical Conventions
This guide uses the following typographical conventions (the distinct typefaces of the printed original are represented here by the placeholder ABCDabcd):

- ABCDabcd: Used for dialog elements such as titles, button labels, and other screen elements. Example: When prompted, click Finish to complete the installation.
- ABCDabcd: Used for paths, filenames, and screen output. Example: Open the file \os\windows\setup.exe
- ABCDabcd: Used to differentiate user input from screen output. Example: # cd \opt\3par\console
- <ABCDabcd>: Used for variables in filenames, paths, and screen output. Example: # controlport offline <node:slot:port>
- [ABCDabcd]: Used for options in user input. Example: Modify the content string by adding the -P[x] option after -jar inform.jar: # .\java -jar inform.jar -P[x]

Advisories
To avoid injury to people or damage to data and equipment, be sure to observe the cautions and warnings in this guide. Always be careful when handling any electrical equipment.

CAUTION: Cautions alert you to actions that can cause damage to equipment, software, or data.

NOTE: Notes are reminders, tips, or suggestions that supplement the procedures included in this guide.


REQUIRED: Requirements signify procedures that must be followed as directed in order to achieve a functional and supported implementation based on testing at HP.

WARNING! Warnings alert you to actions that can cause injury to people or irreversible damage to data or the operating system.


2 Configuring the HP 3PAR Storage System for Fibre Channel


This chapter explains how to establish a Fibre Channel connection between the HP 3PAR Storage System and a Solaris host and covers InForm OS 2.2.x, 2.3.x, and 3.1.x versions. For information on setting up the physical connection for a particular HP 3PAR Storage System, see the appropriate HP 3PAR installation manual.

REQUIRED
If you are setting up a fabric along with your installation of the HP 3PAR Storage System, consult Setting Up and Zoning the Fabric (page 14) before configuring or connecting your HP 3PAR Storage System.

Configuring the HP 3PAR Storage System Running InForm OS 3.1.x or 2.3.x


This section describes how to connect the HP 3PAR Storage System to a Solaris host over a Fibre Channel network when running InForm OS 2.3.x or 3.1.x. For information on setting up a connection for InForm OS 2.2.x, see Configuring the HP 3PAR Storage System Running InForm OS 2.2.x (page 11).

REQUIRED
The following setup must be completed before connecting the HP 3PAR Storage System port to a device.

Configuring Ports on the HP 3PAR Storage System in a Direct Topology


To set up the HP 3PAR Storage System ports for a direct connection, issue the following set of commands with the appropriate parameters for each direct connect port.
# controlport offline <node:slot:port> # controlport config host -ct loop <node:slot:port> # controlport rst <node:slot:port>

The -ct loop parameter specifies a direct connection.

NOTE: While the server is running, HP 3PAR Storage System ports that leave (for example, due to an unplugged cable) and return will be tracked by their WWN. The WWN of each port is unique and constant, which ensures correct tracking of a port and its LUNs by the host HBA driver. If a fabric zoning relationship exists such that a host HBA port has access to multiple targets (for example, multiple ports on the HP 3PAR Storage System), the driver will assign target IDs (cxtxdx) to each discovered target in the order that they are discovered. The target ID for a given target can change in this case as targets leave the fabric and return, or when the host is rebooted while some targets are not present.
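For example, to configure a direct-connect host port at a hypothetical location 1:5:1 (substitute your own <node:slot:port> values):

# controlport offline 1:5:1
# controlport config host -ct loop 1:5:1
# controlport rst 1:5:1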


Configuring Ports on the HP 3PAR Storage System in a Fabric Topology


To set up the HP 3PAR Storage System ports for a fabric connection, complete the following steps for each fabric-connected port.

CAUTION: Before taking a port offline in preparation for a fabric connection, verify that the port has not been previously defined and that it is not already connected to a host, as this would interrupt the existing host connection. If an HP 3PAR Storage System port is already configured for a fabric connection, you do not have to take the port offline and can skip step 2.

1. To determine whether a port has already been configured for a host port in fabric mode, issue the InForm CLI showport -par command.
# showport -par
N:S:P Connmode ConnType CfgRate MaxRate Class2   UniqNodeWwn VCN      IntCoal
0:0:1 disk     loop     auto    2Gbps   disabled disabled    disabled enabled
0:0:2 disk     loop     auto    2Gbps   disabled disabled    disabled enabled
0:0:3 disk     loop     auto    2Gbps   disabled disabled    disabled enabled
0:0:4 disk     loop     auto    2Gbps   disabled disabled    disabled enabled
0:4:1 host     point    auto    4Gbps   disabled disabled    disabled enabled
0:4:2 host     point    auto    4Gbps   disabled disabled    disabled enabled
0:5:1 host     point    auto    2Gbps   disabled disabled    disabled enabled
0:5:2 host     loop     auto    2Gbps   disabled disabled    disabled enabled
0:5:3 host     point    auto    2Gbps   disabled disabled    disabled enabled
0:5:4 host     loop     auto    2Gbps   disabled disabled    disabled enabled
1:0:1 disk     loop     auto    2Gbps   disabled disabled    disabled enabled
1:0:2 disk     loop     auto    2Gbps   disabled disabled    disabled enabled
1:0:3 disk     loop     auto    2Gbps   disabled disabled    disabled enabled
1:0:4 disk     loop     auto    2Gbps   disabled disabled    disabled enabled
1:2:1 host     point    auto    2Gbps   disabled disabled    disabled enabled
1:2:2 host     loop     auto    2Gbps   disabled disabled    disabled enabled
1:4:1 host     point    auto    2Gbps   disabled disabled    disabled enabled
1:4:2 host     point    auto    2Gbps   disabled disabled    disabled enabled

2. If the port has not been configured, take the port offline before configuring it for the Solaris host by issuing the InForm CLI controlport offline <node:slot:port> command. For example:

# controlport offline 1:5:1

3. To configure the port for the host server, issue the controlport config host -ct point <node:slot:port> command. The -ct point parameter specifies a fabric connection. For example:

# controlport config host -ct point 1:5:1
# controlport rst 1:5:1

Creating the Host Definition (Host Persona)


Before connecting the Solaris host to the HP 3PAR Storage System, you need to create a host definition that specifies a valid host persona for each HP 3PAR Storage System that is to be connected to a host HBA port through a fabric or direct connection. Solaris uses the default generic host personality of 1. The following steps show how to create the host definition.


1. To create host definitions, issue the createhost command with the -persona option to specify the persona and the host name. For example:

# createhost -persona 1 solarishost 1122334455667788 1122334455667799

2. To verify that the host has been created, issue the showhost command.

# showhost
Id Name        Persona -WWN/iSCSI_Name- Port
 6 solarishost Generic 1122334455667788 ---
                       1122334455667799 ---

NOTE: HP recommends using host persona 1 for Solaris 10 (and above) hosts, as it is required to enable Host Explorer functionality. However, host persona 6 is automatically assigned following a rolling upgrade from InForm OS 2.2.x. If appropriate, you can change host persona 6 to host persona 1 after an upgrade. Host persona 1 enables two functional features: Host Explorer, which requires the SESLun element of host persona 1, and UARepLun, which notifies the host of newly exported VLUNs and triggers a LUN discovery request on the host, making the VLUN automatically available in 'format'.

NOTE: See the HP 3PAR InForm OS CLI Reference or the IMC help for complete details on using the controlport, createhost, and showhost commands.
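As an illustration, a host definition left at persona 6 after a rolling upgrade could be changed to persona 1 with the InForm CLI sethost command; the host name here is the one created above, and the exact option syntax should be confirmed in the HP 3PAR InForm OS CLI Reference for your InForm OS version:

# sethost -persona 1 solarishost
# showhost solarishost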

Configuring the HP 3PAR Storage System Running InForm OS 2.2.x


This section describes the steps that are required to connect the HP 3PAR Storage System to a Solaris host over a Fibre Channel network and to create the host definitions when running InForm OS 2.2.x. When setting up the HP 3PAR Storage System ports (Port Personas), consult the instructions for the type of HBA driver being used where X:X:X is the port location, expressed as <node:slot:port>.

REQUIRED
The following setup must be completed before connecting the HP 3PAR Storage System port to a device.

Configuring Ports for an Emulex lpfc driver


Direct Connect: Ports configured to personality 4

# controlport persona 4 X:X:X

Verify port personality 4, connection type loop, using the InForm CLI showport -par command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2  VCN      -----------Persona------------ IntCoal
1:3:2 loop     auto    4Gbps   disable disabled (4) emx, g_hba, g_os, 0, DC    enabled

Fabric Connect: Ports configured to personality 7

# controlport persona 7 X:X:X


Verify port personality 7, connection type point, using the InForm CLI showport -par command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2  VCN     -----------Persona------------ IntCoal
0:5:1 point    auto    4Gbps   disable enabled (7) g_ven, g_hba, g_os, 0, FA  enabled

Configuring Ports for a QLogic qla Driver


Direct Connect: Ports configured to personality 1

# controlport persona 1 X:X:X

Verify port personality 1, connection type loop, using the InForm CLI showport -par command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2  VCN     -----------Persona------------ IntCoal
1:4:2 loop     auto    2Gbps   disable enabled (1) g_ven, g_hba, g_os, 0, DC  enabled

Fabric Connect: Ports configured to personality 7

# controlport persona 7 X:X:X

Verify port personality 7, connection type point, using the InForm CLI showport -par command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2  VCN     -----------Persona------------ IntCoal
0:5:1 point    auto    4Gbps   disable enabled (7) g_ven, g_hba, g_os, 0, FA  enabled

Configuring Ports for a Solaris qlc or emlxs Driver


Direct Connect: Ports configured to personality 1

# controlport persona 1 X:X:X

Verify port personality 1, connection type loop, using the InForm CLI showport -par command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2  VCN     -----------Persona------------ IntCoal
1:4:2 loop     auto    2Gbps   disable enabled (1) g_ven, g_hba, g_os, 0, DC  enabled

Fabric Connect: Ports configured to personality 9

# controlport persona 9 X:X:X

Verify port personality 9, connection type point, using the InForm CLI showport -par command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2  VCN     -----------Persona------------ IntCoal
0:5:1 point    auto    4Gbps   disable enabled (9) g_ven, g_hba, g_os, 0, FA  enabled


Configuring Ports for a JNI Tachyon driver


Direct Connect: Ports configured to personality 3

# controlport persona 3 X:X:X

Verify port personality 3, connection type loop, using the InForm CLI showport -par command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2  VCN      ----------Persona------------ IntCoal
1:4:2 loop     auto    2Gbps   disable disabled (3) jni, g_hba, g_os, 0, DC   enabled

Fabric Connect: Ports configured to personality 7

# controlport persona 7 X:X:X
# controlport vcn disable X:X:X

Verify port personality 7, connection type point, using the InForm CLI showport -par command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2  VCN      ----------Persona-----------  IntCoal
0:5:1 point    auto    4Gbps   disable disabled *(7) g_ven, g_hba, g_os, 0, FA enabled

WARNING! The controlport offline command for the HP 3PAR Storage System LSI 929 HBA requires firmware versions greater than 02.00.21 when connected to a JNI Tachyon host HBA.

Configuring Ports for a JNI Emerald driver


Direct Connect: Ports configured to personality 3

# controlport persona 3 X:X:X

Verify port personality 3, connection type loop, using the InForm CLI showport -par command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2  VCN      ----------Persona------------ IntCoal
1:4:2 loop     auto    2Gbps   disable disabled (3) jni, g_hba, g_os, 0, DC   enabled

Fabric Connect: Ports configured to personality 7

# controlport persona 7 X:X:X

Verify port personality 7, connection type point, using the InForm CLI showport -par command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2  VCN     -----------Persona------------ IntCoal
0:4:1 point    auto    4Gbps   disable enabled (7) g_ven, g_hba, g_os, 0, FA  enabled


Creating the Host Definition


Before connecting the Solaris host to the HP 3PAR Storage System, you need to create a host definition for each HP 3PAR Storage System that is to be connected to a host HBA port through a fabric or direct connection.

1. To create host definitions, issue the createhost command with the host name. For example:
# createhost solarishost 1122334455667788 1122334455667799

2. To verify that the host has been created, issue the showhost command.

# showhost
Id Name        -WWN/iSCSI_Name- Port
 0 sqa-solaris 1122334455667788 ---
               1122334455667799 ---

NOTE: See the HP 3PAR InForm OS CLI Reference or the IMC help for complete details on using the controlport, showport, createhost and showhost commands.

Setting Up and Zoning the Fabric


Fabric zoning controls which devices have access to each other on the fabric. The required use of single initiator to single target zoning isolates the host server and HP 3PAR Storage System ports from registered state change notifications (RSCNs) that are irrelevant to these ports.

When Fibre Channel devices (initiators and targets) log in to the fabric, they register to receive RSCNs so that they can be notified when a device they are zoned to changes state (leaves or rejoins the fabric). If many devices are part of the same zone and one of them leaves and rejoins the fabric for any reason (a rebooted device, an unstable link between the device and the fabric, a faulty device port or fabric port, and so on), all of the devices in that zone, whether targets or initiators, are notified of the state change. This can disrupt I/O, because each device that receives the RSCN must take the necessary steps to handle it.

Fabric zoning can be set up by associating the device World Wide Names (WWNs) or switch ports with specified zones in the fabric. Although you can use either the WWN or the switch port zoning method with the HP 3PAR Storage System, the WWN zoning method is recommended because the zone survives port changes when cables are reconnected on a fabric.

Use the methods provided by the switch vendor to create one-initiator-to-one-target relationships between host server HBA ports and storage server ports before you connect the host server HBA ports or HP 3PAR Storage System ports to the fabric(s).

REQUIRED
When establishing zoning with the HP 3PAR Storage System, there must only be a single initiator zoned with a single target. If a customer experiences an issue using another zoning approach, HP may require the customer to implement this zoning approach as part of troubleshooting and/or corrective action. After connecting each host server HBA port and HP 3PAR Storage System port to the fabric(s), verify the switch and zone configurations using the InForm CLI showhost command, to ensure that each initiator is zoned with the correct target. In the following explanations, an initiator port (initiator for short) refers to a host server HBA port and a target port (target for short) refers to an HP 3PAR Storage System HBA port.
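As an illustration only, a WWN-based single-initiator/single-target zone on a Brocade switch might be created as follows; the zone name, configuration name, and WWNs are hypothetical, and your fabric vendor's documentation remains the authoritative procedure:

brocade2_1:admin> zonecreate "sol_hba0_3par_041", "10:00:00:00:c9:aa:bb:cc; 20:41:00:02:ac:00:00:3e"
brocade2_1:admin> cfgadd "fabric_cfg", "sol_hba0_3par_041"
brocade2_1:admin> cfgsave
brocade2_1:admin> cfgenable "fabric_cfg"

Each such zone contains exactly one host HBA WWN and one HP 3PAR Storage System port WWN; this example assumes the configuration fabric_cfg already exists.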

Configuration Guidelines For Fabric Vendors


Use the following fabric vendor guidelines before configuring ports on the fabric to which the HP 3PAR Storage System connects.

- Brocade switch ports that connect to a host server HBA port or to an HP 3PAR Storage System port should be set in their default mode. On Brocade 3xxx switches running Brocade firmware 3.0.2 or later, verify that each switch port is in the correct mode using the Brocade telnet interface and the portcfgshow command as follows:

brocade2_1:admin> portcfgshow
Ports            0  1  2  3  4  5  6  7
-----------------+--+--+--+--+--+--+--+--
Speed            AN AN AN AN AN AN AN AN
Trunk Port       ON ON ON ON ON ON ON ON
Locked L_Port    .. .. .. .. .. .. .. ..
Locked G_Port    .. .. .. .. .. .. .. ..
Disabled E_Port  .. .. .. .. .. .. .. ..
where AN:AutoNegotiate, ..:OFF, ??:INVALID.

- McData switch or director ports should be in their default modes as type GX-Port, with a speed setting of Negotiate.
- Cisco switch ports that connect to HP 3PAR Storage System ports or host HBA ports should be set to AdminMode = FX and AdminSpeed = auto, with the speed set to auto negotiate.
- QLogic switch ports should be set to port type GL-port and port speed auto-detect. QLogic switch ports that connect to the HP 3PAR Storage System should be set to I/O Stream Guard disable or auto, but never enable.
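For instance, on a Cisco MDS switch the Cisco port settings above might be applied as follows (a sketch; the interface name fc1/1 is illustrative):

switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport mode FX
switch(config-if)# switchport speed auto
switch(config-if)# no shutdown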

Target Port Limits and Specifications


To avoid overwhelming a target port and to ensure continuous I/O operations, observe the following limits on a target port:

- Maximum of 64 host server ports per HP 3PAR Storage System port, with a maximum total of 1,024 host server ports per HP 3PAR Storage System.
- I/O queue depth on each HP 3PAR Storage System HBA model, as follows:
    QLogic 2G: 497
    LSI 2G: 510
    Emulex 4G: 959
    HP 3PAR HBA 4G: 1638
    HP 3PAR HBA 8G: 3276

The I/O queues are shared among the connected host server HBA ports on a first-come, first-served basis. When all queues are in use and a host HBA port tries to initiate I/O, it receives a target queue full response from the HP 3PAR Storage System port. This can result in erratic I/O performance on each host server. If this condition occurs, each host server should be throttled so that it cannot overrun the HP 3PAR Storage System port's queues when all host servers are delivering their maximum number of I/O requests.
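One common way to throttle a Solaris host is to cap the per-LUN queue depth with the sd/ssd driver's max_throttle setting in /etc/system; the value shown is purely illustrative and must be sized for your configuration (for example, against the HP 3PAR port queue depth divided across the attached host ports and LUNs). A reboot is required for the change to take effect.

* /etc/system entries (illustrative value)
* SPARC Fibre Channel devices typically use the ssd driver:
set ssd:ssd_max_throttle=8
* x86/x64 hosts typically use the sd driver:
set sd:sd_max_throttle=8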


NOTE: When host server ports can access multiple targets on fabric zones, the assigned target number (which is assigned by the host driver) for each discovered target can change when the host server is booted and some targets are not present in the zone. This may change the device node access point for devices during a host server reboot. This issue can occur with any fabric-connected storage, and is not specific to the HP 3PAR Storage System.

Single Initiator to Single Target Zoning No Fan-In No Fan-Out


In a single initiator to single target zoning, no fan-in, no fan-out configuration, each HBA port is connected to only one HP 3PAR Storage System port (Figure 1 (page 16)).

Figure 1 Single Initiator to Single Target Zoning No Fan-In/No Fan-Out

Single Initiator to Single Target Zoning with Fan-Out from One HP 3PAR Storage System Port to Multiple Host Ports
Fan-out refers to an HP 3PAR Storage System port that is connected to more than one host port, as shown in Figure 2 (page 16).

Figure 2 Single Initiator to Single Target Zoning with Fan-Out


NOTE: A maximum of 64 host server ports can fan out from a single HP 3PAR Storage System port.

Single Initiator to Single Target Zoning with Fan-In from Multiple HP 3PAR Storage System Ports to One Host Port
Fan-in refers to a host server port connected to many HP 3PAR Storage System ports, as shown in Figure 3 (page 17).

Figure 3 Single Initiator to Single Host Target Zoning with Fan-In

NOTE: A maximum of four HP 3PAR Storage System ports can fan in to a single host server port.

Single Initiator to Single Target Zoning with Mixed Fan-In/Out Configurations


Figure 4 (page 17) shows single initiator to single target zoning with fan-in and fan-out from one HP 3PAR Storage System to multiple host servers.

Figure 4 Single Initiator to Single Target Zoning with Fan-In and Fan-Out


Non-Compliant Zoning Examples


In the following examples, the zoning rule of one initiator zoned to one target is not respected. Non-compliant zoning is shown in Figure 5 (page 18).

Figure 5 Non-Compliant Zoning


3 Configuring the HP 3PAR Storage System for iSCSI


This chapter explains how to establish an iSCSI connection between the HP 3PAR Storage System and the Solaris host. For information on setting up the physical connection, see the appropriate HP 3PAR installation manual.

Configuring the HP 3PAR Storage System iSCSI Ports


This section applies to configurations based on installed NICs, up to and including 1-Gb port speed. The following example shows the default HP 3PAR Storage System iSCSI port settings, before configuration:
# showport -iscsi
N:S:P State     IPAddr  Netmask Gateway TPGT MTU  Rate DHCP iSNS_Prim iSNS_Sec iSNS_Port
0:3:1 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 131  1500 n/a  0    0.0.0.0   0.0.0.0  3205
0:3:2 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 132  1500 n/a  0    0.0.0.0   0.0.0.0  3205
1:3:1 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 131  1500 n/a  0    0.0.0.0   0.0.0.0  3205
1:3:2 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 132  1500 n/a  0    0.0.0.0   0.0.0.0  3205

Each HP 3PAR Storage System iSCSI target port that will be connected to an iSCSI initiator must be set up appropriately for your configuration, as described in the following steps.

1. Set up the IP and netmask address on the iSCSI target port using the InForm CLI controliscsiport command. Here is an example:
# controliscsiport addr 10.1.0.110 255.0.0.0 -f 0:3:1 # controliscsiport addr 11.1.0.110 255.0.0.0 -f 1:3:1

2. To verify the iSCSI target port configuration, issue the InForm CLI showport -iscsi command.

# showport -iscsi
N:S:P State     IPAddr     Netmask   Gateway TPGT MTU  Rate  DHCP iSNS_Prim iSNS_Sec iSNS_Port
0:3:1 ready     10.1.0.110 255.0.0.0 0.0.0.0 31   1500 1Gbps 0    0.0.0.0   0.0.0.0  3205
0:3:2 loss_sync 0.0.0.0    0.0.0.0   0.0.0.0 32   1500 n/a   0    0.0.0.0   0.0.0.0  3205
1:3:1 ready     11.1.0.110 255.0.0.0 0.0.0.0 131  1500 1Gbps 0    0.0.0.0   0.0.0.0  3205
1:3:2 loss_sync 0.0.0.0    0.0.0.0   0.0.0.0 132  1500 n/a   0    0.0.0.0   0.0.0.0  3205

NOTE: Make sure the IP switch ports (where the HP 3PAR Storage System iSCSI target ports and the host iSCSI initiators are connected) are able to communicate with each other. You can use the ping command for this purpose on the Solaris host.

3. If the Solaris host uses the Internet Storage Name Service (iSNS) to discover the target port, configure the iSNS server IP address on the target port by issuing the InForm CLI controliscsiport command with the isns parameter.
# controliscsiport isns 11.0.0.200 -f 1:3:1
# showport -iscsi
N:S:P State IPAddr     Netmask   Gateway TPGT MTU  Rate  DHCP iSNS_Prim  iSNS_Sec iSNS_Port
1:3:1 ready 11.1.0.110 255.0.0.0 0.0.0.0 131  1500 1Gbps 0    11.0.0.200 0.0.0.0  3205

NOTE: The Solaris OS does not have its own iSNS server, so a Windows server that has been installed with the iSNS package must be used to provide the iSNS server functions instead.

4. Each HP 3PAR Storage System iSCSI port has a unique name, port location, and serial number as part of its iqn iSCSI name. Use the InForm CLI showport command with the -iscsiname parameter to get the iSCSI name.
# showport -iscsiname
N:S:P IPAddr     ---------------iSCSI_Name----------------
0:3:1 10.1.0.110 iqn.2000-05.com.3pardata:20310002ac00003e
0:3:2 0.0.0.0    iqn.2000-05.com.3pardata:20320002ac00003e
1:3:1 11.1.0.110 iqn.2000-05.com.3pardata:21310002ac00003e
1:3:2 0.0.0.0    iqn.2000-05.com.3pardata:21320002ac00003e

5. Use the ping command on the Solaris host to verify that the HP 3PAR Storage System target is pingable, and use the route get <IP> command to check that the configured network interface is used for the destination route.

Example: After configuring the host and HP 3PAR Storage System ports, 11.1.0.110 is the HP 3PAR Storage System target IP address, 11.1.0.40 is the host IP address, and the host uses the ce2 network interface to route the traffic to the destination.
# ping 11.1.0.110
11.1.0.110 is alive
# route get 11.1.0.110
   route to: 11.1.0.110
destination: 11.0.0.0
       mask: 255.0.0.0
  interface: ce2
      flags: <UP,DONE>

As an alternative, you can use controliscsiport to ping the host from the HP 3PAR Storage System ports.
# controliscsiport ping [<count>] <ipaddr> <node:slot:port>
# controliscsiport ping 1 11.1.0.40 1:3:1
Ping succeeded

For information on setting up target discovery on the Solaris host, see Setting Up the iSCSI Initiator for Target Discovery (page 46).

Creating an iSCSI Host Definition on an HP 3PAR Storage System Running InForm OS 3.1.x or 2.3.x
You will need the Host iqn name/names to create the iSCSI host definition on the HP 3PAR Storage System.
# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940
Initiator node alias: -
Login Parameters (Default/Configured):
    Header Digest: NONE/-
    Data Digest: NONE/-
Authentication Type: NONE
RADIUS Server: NONE
RADIUS access: unknown
Configured Sessions: 1

The following steps show how to create the host definition for an iSCSI connection.

1. You can verify that the iSCSI initiator is connected to the iSCSI target port by using the InForm CLI showhost command.
# showhost
Id Name Persona ---------------WWN/iSCSI_Name--------------- Port
-- --   Generic iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 0:3:1
                iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 1:3:1

2. Create an iSCSI host definition entry by issuing the InForm CLI createhost -iscsi <hostname> <host iSCSI name> command.

# createhost -iscsi solaris-host-01 iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940
Setting default host persona 1 (Generic)

# showport -iscsi
N:S:P State IPAddr       Netmask   Gateway TPGT MTU  Rate  DHCP iSNS_Prim iSNS_Sec iSNS_Port
0:3:1 ready 10.100.0.101 255.0.0.0 0.0.0.0 31   1500 1Gbps 0    0.0.0.0   0.0.0.0  3205
1:3:1 ready 10.101.0.201 255.0.0.0 0.0.0.0 131  1500 1Gbps 0    0.0.0.0   0.0.0.0  3205

NOTE: HP 3PAR recommends host persona 1 for Solaris 10 (and above) hosts, as it is required to enable Host Explorer functionality. However, host persona 6 is automatically assigned following a rolling upgrade from 2.2.x. If appropriate, you can change host persona 6 to host persona 1 after an upgrade. Host persona 1 enables Host Explorer, which requires the SESLun element of host persona 1. Newly exported VLUNs can be seen in 'format' by issuing devfsadm -i iscsi. To register the data VLUN 254 in Solaris 'format', a host reboot is required.

NOTE: You must configure the HP 3PAR Storage System iSCSI target port(s) and establish an iSCSI initiator connection/session with the iSCSI target port from the host to be able to create a host definition entry. For details, see Configuring the Host for an iSCSI Connection (page 44).

3. Verify that the host entry has been created.
# showhost
Id Name            Persona ---------------WWN/iSCSI_Name--------------- Port
 1 solaris-host-01 Generic iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 0:3:1
                           iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 1:3:1

The showhost -d command provides more details on the connection.


# showhost -d
Id Name            Persona ---------------WWN/iSCSI_Name--------------- Port  IP_addr
 1 solaris-host-01 Generic iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 0:3:1 10.1.0.40
 1 solaris-host-01 Generic iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 1:3:1 11.1.0.40

# showiscsisession
N:S:P --IPAddr--- TPGT TSIH  Conns -----------------iSCSI_Name----------------- ------StartTime------
0:3:1 10.105.3.10 31   11351 1     iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 2010-02-25 07:47:38 PST
1:3:1 10.105.4.10 131  11351 1     iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 2010-02-25 07:47:37 PST

Creating an iSCSI Host Definition on an HP 3PAR Storage System Running InForm OS 2.2.x
You will need the host iqn name(s) to create the iSCSI host definition on the HP 3PAR Storage System. The following steps show how to create the host definition for an iSCSI connection.

1. You can verify that the iSCSI initiator is connected to the iSCSI target port by using the InForm CLI showhost command.
# showhost
Id Name -----------WWN/iSCSI_Name------------ Port
-- --   iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 1:3:1
        iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 0:3:1

2. Create an iSCSI host definition entry by issuing the InForm CLI createhost -iscsi command.

# createhost -iscsi solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d

3. Verify that the host entry has been created.

# showhost
Id Name         -----------WWN/iSCSI_Name------------ Port
 1 solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 0:3:1
                iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 1:3:1

The showhost -d command provides more details on the connection.


# showhost -d
Id Name         -----------WWN/iSCSI_Name------------ Port  IP_addr
 1 solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 0:3:1 10.1.0.40
 2 solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 1:3:1 11.1.0.40

# showiscsisession
N:S:P --IPAddr-- TPGT TSIH  Conns ---iSCSI_Name-- ------StartTime------
0:3:1 10.1.0.40  31   24435 1     iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d Fri Dec 08 11:57:50 PST 2006
1:3:1 11.1.0.40  131  17955 1     iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d Fri Dec 08 12:06:58 PST 2006


Configuring CHAP Authentication (Optional)


Solaris supports the Challenge-Handshake Authentication Protocol (CHAP) for higher-security connectivity. CHAP uses the notion of challenge and response, and the InForm OS supports two authentication types:

- Unidirectional (host) CHAP authentication is used when the HP 3PAR Storage System iSCSI target port authenticates the iSCSI host initiator when it tries to connect.
- Bidirectional (mutual) CHAP authentication adds a second level of security, where both the iSCSI target and the host authenticate each other when the host tries to connect to the target.

Enabling Unidirectional (Host) CHAP


To set the host CHAP authentication after an iSCSI host definition has been created on the HP 3PAR Storage System, use the InForm CLI sethost initchap command to set the host CHAP secret. Example:

a. Verify that a host definition has been created.
# showhost
Id Name         -----------WWN/iSCSI_Name------------ Port
-- solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 0:3:1
                iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 1:3:1

NOTE: The CHAP secret length must be between 12 and 16 characters.

The following example sets host_secret0 as the host secret key.

# sethost initchap -f host_secret0 solarisiscsi

b. Verify the host CHAP secret.

# showhost -chap
Id Name         -Initiator_CHAP_Name- -Target_CHAP_Name-
 1 solarisiscsi solarisiscsi          --

c. Set the secret key host_secret0 on the host.

# iscsiadm modify initiator-node --CHAP-secret
<prompts for secret key>

Enable CHAP as the authentication method after the secret key is set.

# iscsiadm modify initiator-node --authentication CHAP

d. Enable CHAP as the authentication method.

# iscsiadm modify target-param --authentication CHAP iqn.2000-05.com.3pardata:21310002ac00003e
# iscsiadm modify target-param --authentication CHAP iqn.2000-05.com.3pardata:20310002ac00003e


e. Verify that the authentication is enabled.

# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d
Initiator node alias: -
Login Parameters (Default/Configured):
    Header Digest: NONE/-
    Data Digest: NONE/-
Authentication Type: CHAP
CHAP Name: iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d
- - -

# iscsiadm list target-param -v
Target: iqn.2000-05.com.3pardata:21310002ac00003e
    Alias: -
    Bi-directional Authentication: disabled
    Authentication Type: CHAP
    CHAP Name: iqn.2000-05.com.3pardata:21310002ac00003e
- - -

NOTE: In the example above, the default target CHAP name is the target port iSCSI name (iqn.2000-05.com.3pardata:21310002ac00003e) and the host CHAP name is the initiator port iSCSI name (iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d).

f. Create a new iSCSI connection session. Example: If you are using SendTargets as a discovery method, remove and add back the discovery address to create a new connection session.
# iscsiadm remove discovery-address 11.1.0.110:3260
# iscsiadm add discovery-address 11.1.0.110:3260

Or, to apply the change to all connected targets:


# iscsiadm modify discovery --sendtargets disable
# iscsiadm modify discovery --sendtargets enable

g. Invoke devfsadm to discover the devices after the host is verified by the target.

# devfsadm -i iscsi

Use a similar procedure if other discovery methods are being used.


# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:00144fb0534c.4a4e0673
Initiator node alias: -
Login Parameters (Default/Configured):
    Header Digest: NONE/NONE
    Data Digest: NONE/NONE
Authentication Type: NONE
RADIUS Server: NONE
RADIUS access: unknown
Configured Sessions: 1


Disabling Unidirectional (Host) CHAP


To disable unidirectional CHAP, issue the iscsiadm command with the appropriate parameter as shown in the following example.
bash-3.00# iscsiadm modify initiator-node -a none
bash-3.00# iscsiadm list target
bash-3.00# iscsiadm modify target-param --authentication none <iSCSI name>

For example:

bash-3.00# iscsiadm modify target-param --authentication none iqn.2000-05.com.3pardata:20320002ac0000af
bash-3.00# iscsiadm modify target-param --authentication none iqn.2000-05.com.3pardata:21310002ac0000af
bash-3.00# iscsiadm list target-param -v
Target: iqn.1986-03.com.sun:01:00144fb0534c.4a4e0673
    Alias: -
    Bi-directional Authentication: disabled
    Authentication Type: NONE
    Login Parameters (Default/Configured):
        Data Sequence In Order: yes/-
        Data PDU In Order: yes/-
        Default Time To Retain: 20/-
        Default Time To Wait: 2/-
        Error Recovery Level: 0/-
        First Burst Length: 65536/-
        Immediate Data: yes/-
        Initial Ready To Transfer (R2T): yes/-
        Max Burst Length: 262144/-
        Max Outstanding R2T: 1/-
        Max Receive Data Segment Length: 8192/-
        Max Connections: 1/-
        Header Digest: NONE/NONE
        Data Digest: NONE/NONE
    Configured Sessions: 1
Target: iqn.2000-05.com.3pardata:20320002ac0000af
    Alias: -
    Bi-directional Authentication: enabled
    Authentication Type: NONE
    Login Parameters (Default/Configured):
        Data Sequence In Order: yes/-
        Data PDU In Order: yes/-
        Default Time To Retain: 20/-
        Default Time To Wait: 2/-
        Error Recovery Level: 0/-
        First Burst Length: 65536/-
        Immediate Data: yes/-
        Initial Ready To Transfer (R2T): yes/-
        Max Burst Length: 262144/-
        Max Outstanding R2T: 1/-
        Max Receive Data Segment Length: 8192/65536
        Max Connections: 1/-
        Header Digest: NONE/-
        Data Digest: NONE/-
    Configured Sessions: 1

bash-3.00# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:00144fb0534c.4a4e0673
Initiator node alias: -
Login Parameters (Default/Configured):
    Header Digest: NONE/NONE
    Data Digest: NONE/NONE
Authentication Type: NONE
RADIUS Server: NONE
RADIUS access: unknown
Configured Sessions: 1

On the HP 3PAR Storage System, remove CHAP for the host:


# sethost removechap solarisiscsi

Enabling Bidirectional (Mutual) CHAP


To set bidirectional CHAP, a host definition must already be defined on the HP 3PAR Storage System. The InForm CLI sethost initchap and sethost targetchap commands are used to set bidirectional CHAP on the HP 3PAR Storage System, as described in the following steps.

1. On the HP 3PAR Storage System, create and verify the host and target CHAP secrets.
# sethost initchap -f host_secret0 solarisiscsi
# sethost targetchap -f target_secret0 solarisiscsi
# showhost -chap
Id Name         -Initiator_CHAP_Name- -Target_CHAP_Name-
 1 solarisiscsi solarisiscsi          S062

NOTE: The target CHAP name is set by default to the HP 3PAR Storage System name. Use the InForm CLI showsys command to determine the HP 3PAR Storage System name.

2. Enter the host CHAP secret key host_secret0 on the host.
# iscsiadm modify initiator-node --CHAP-secret <prompts for secret key>

3. Enable the host CHAP authentication after the secret key is set.
# iscsiadm modify initiator-node --authentication CHAP

4. Enable target or bidirectional authentication for each connected target port.


# iscsiadm list target
Target: iqn.2000-05.com.3pardata:21310002ac00003e
...
Target: iqn.2000-05.com.3pardata:20310002ac00003e
# iscsiadm modify target-param -B enable iqn.2000-05.com.3pardata:21310002ac00003e
# iscsiadm modify target-param -B enable iqn.2000-05.com.3pardata:20310002ac00003e

5. Enter the target CHAP secret key target_secret0 for each connected target.
# iscsiadm modify target-param --CHAP-secret iqn.2000-05.com.3pardata:21310002ac00003e
<prompts for secret key>
# iscsiadm modify target-param --CHAP-secret iqn.2000-05.com.3pardata:20310002ac00003e
<prompts for secret key>

6. Enable CHAP as the authentication method.


# iscsiadm modify target-param --authentication CHAP iqn.2000-05.com.3pardata:21310002ac00003e
# iscsiadm modify target-param --authentication CHAP iqn.2000-05.com.3pardata:20310002ac00003e

7. Set the CHAP name for the HP 3PAR Storage System for the iSCSI targets. (Use the InForm CLI showsys command to determine the HP 3PAR Storage System name.)
# iscsiadm modify target-param --CHAP-name s062 iqn.2000-05.com.3pardata:21310002ac00003e
# iscsiadm modify target-param --CHAP-name s062 iqn.2000-05.com.3pardata:20310002ac00003e

8. Verify that bidirectional authentication is enabled.


# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d
        Login Parameters (Default/Configured):
        Authentication Type: CHAP
        CHAP Name: iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d
...
# iscsiadm list target-param -v
Target: iqn.2000-05.com.3pardata:20310002ac00003e
        Alias: -
        Bi-directional Authentication: enabled
        Authentication Type: CHAP
        CHAP Name: S062
        Login Parameters (Default/Configured):

9. Remove and create a new iSCSI session, and invoke devfsadm -i iscsi to discover the targets and all the LUNs.

NOTE: CHAP authentication will not be in effect for the most recently added devices until the current connection is removed and a new connection session is enabled. To enable authentication for all the devices, stop all associated I/O activity and unmount any file systems before creating the new connection session. This procedure is required each time a change is made to the CHAP configuration.

Disabling Bidirectional CHAP


To disable the CHAP authentication, follow these steps:
1. On the HP 3PAR Storage System, issue the sethost removechap <hostname> command.
# sethost removechap solarisiscsi
# showhost -chap
Id Name         -Initiator_CHAP_Name- -Target_CHAP_Name-
 1 solarisiscsi --                    --

2. On the host, disable and remove the target CHAP authentication on each target.
# iscsiadm list target
# iscsiadm modify target-param -B disable iqn.2000-05.com.3pardata:21310002ac00003e
# iscsiadm modify target-param -B disable iqn.2000-05.com.3pardata:20310002ac00003e

# iscsiadm modify target-param --authentication NONE iqn.2000-05.com.3pardata:21310002ac00003e
# iscsiadm modify target-param --authentication NONE iqn.2000-05.com.3pardata:20310002ac00003e
# iscsiadm modify initiator-node --authentication NONE

3. Verify that authentication is disabled.


# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d
        Login Parameters (Default/Configured):
        Authentication Type: NONE
# iscsiadm list target-param -v
Target: iqn.2000-05.com.3pardata:20310002ac00003e
        Alias: -
        Bi-directional Authentication: disabled
        Authentication Type: NONE


4 Configuring the Host for a Fibre Channel Connection


This chapter describes the procedures that are required to set up a Solaris host to communicate with an HP 3PAR Storage System over a Fibre Channel connection using a supported HBA.

Installing the HBA


Before setting up the Solaris host, make sure the host adapters are installed and operating properly. If necessary, consult the documentation provided by the HBA vendor. When the server boots after the HBA installation, the /var/adm/messages file will contain messages for each HBA port. These messages will vary depending on the HBA type and drivers that are being used.

Installing the SUN SAN Driver Packages


Solaris 10

The required Sun SAN software is installed as part of the OS distribution. Consult the Solaris OS minimum patch listings in Chapter 6 (page 52).

NOTE: For Solaris 10, a Sun MPXIO patch is required that contains MPXIO fixes applicable for SCSI 3 reservations if Sun Cluster is to be configured. For SPARC-based servers, use patch 127127-11, and for x86-based servers, use patch 127128-11. For availability of later versions, check the following Web site: http://support.oracle.com/CSP/ui/flash.html

Solaris 8/9

Install the appropriate Sun SAN software package for Solaris 8 or 9 hosts, available at the following location: http://www.oracle.com/us/products/servers-storage/storage/storage-networking/index.htm

Consult the Solaris OS minimum patch listings in Chapter 6 (page 52).

Installing the HBA Drivers


If necessary, install the appropriate drivers for the type of HBA that is being used.

For QLogic and Emulex HBAs, you have the option of using the qlc or emlxs drivers supplied with the Solaris SAN package, or you can use the drivers supplied by the HBA vendor.

Emulex LPFC driver packages and driver installation instructions are available at: http://www.emulex.com/

QLogic QLA (qla2300) driver packages and driver installation instructions are available at: http://www.qlogic.com/

NOTE: The SAN package may have an updated release of the emlxs/qlc drivers (also known as the Leadville drivers).

For JNI HBAs, install the JNIfcaPCI (FCI-1063) or JNIfcaw (FC64-1063) driver package for the Solaris OS. The driver install package files fca-pci.pkg and fcaw.pkg contain the JNIfcaPCI, JNIfcaw, and JNIsnia drivers.

NOTE: The JNI HBA drivers are currently only supported for Solaris OS versions 8 and 9 in InForm OS 2.3.x. Refer to the InForm OS support matrices for updated information and support for Solaris 10.

For more details, consult the appropriate driver installation notes in this section for the type of HBA being installed. You can also consult the HP Single Point of Connectivity Knowledge Web site (HP SPOCK) to determine which drivers are appropriate for a given HBA or version of the Solaris OS: www.hp.com/storage/

Installation Notes for Emulex lpfc Drivers


The following notes apply when connecting to a Solaris host that utilizes an Emulex HBA with an lpfc driver:

The default (as installed) parameter settings will allow the host to connect in either direct or fabric modes.

Direct Connect: Configured by editing /kernel/drv/lpfc.conf and then running the update_drv utility. On versions of Solaris prior to version 9, you have to manually reboot the host server to update the host with the modified driver configuration settings.

Fabric Connect: Configured by editing /kernel/drv/lpfc.conf and then running the update_drv utility. On versions of Solaris prior to version 9, you have to manually reboot the Solaris host to update with the modified driver configuration settings.

The sd.conf file is read by the sd driver at boot time, so supporting entries for new LUNs must exist prior to the last server reboot. Add entries to the /kernel/drv/sd.conf file between the boundary comments generated by the Emulex driver package during installation.
# Start lpfc auto-generated configuration -- do NOT alter or delete this line
name="sd" parent="lpfc" target=0 lun=0;
name="sd" parent="lpfc" target=0 lun=1;
...
name="sd" parent="lpfc" target=0 lun=255;
# End lpfc auto-generated configuration -- do NOT alter or delete this line
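Rather than typing 256 entries by hand, you can generate them with a one-liner and paste the result between the boundary comments. This is a hedged convenience sketch (the output file name /tmp/sd-lpfc-entries.txt is arbitrary, not part of the Emulex package):

# perl -e 'for $i (0..255) { print "name=\"sd\" parent=\"lpfc\" target=0 lun=$i;\n" }' > /tmp/sd-lpfc-entries.txt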

A line is required for each LUN number (a pre-6.20 driver requirement). For fabric configurations, entries must be made for all target LUNs that will be exported from the HP 3PAR Storage System to the Solaris host. These entries can be restricted to the Emulex lpfc driver only, so a useful strategy is to add entries for all possible LUNs (0 to 255) on target 0. Testing at HP did not reveal any noticeable increase in server boot time due to the probing of non-existent LUNs.

WARNING! The installation procedure for version 6.21g of the lpfc driver for Solaris may be significantly different than in previous releases. Follow the driver instructions precisely as instructed for initial installation. Failure to follow the proper installation steps could render your system inoperable.


NOTE: Emulex lpfc drivers 6.20 and above do not require LUN and target entries in the /kernel/drv/sd.conf file. The lpfc driver can support up to 256 targets, with a maximum of 256 LUNs per target; additional LUNs will not be visible on the host. Solaris 8/9 LUN discovery for driver 6.21g requires the following command:

/opt/HBAnyware/hbacmd RescanLuns <hba WWPN> <target WWPN>

HBAnyware software is available from the Emulex lpfc driver download site: http://www.emulex.com/

NOTE: When adding specific entries in the sd.conf file for each LUN number that is expected to be exported from the HP 3PAR Storage System ports, new entries have to be added each time additional VLUNs are exported with new LUNs. Unless the host port will be communicating with more than one HP 3PAR Storage System port, target=0 entries are sufficient. If a host port is communicating with more than a single HP 3PAR Storage System port, then specific entries are required for the other targets (a pre-6.20 driver requirement).

Configuration File Settings for Emulex lpfc Drivers


In the following example, all default values in the /kernel/drv/lpfc.conf file were used except for the linkdown-tmo variable, which is changed to reduce I/O stall timings.
#
# Determine how long the driver will wait [0 - 255] to begin linkdown
# processing when the hba link has become inaccessible. Linkdown processing
# includes failing back commands that have been waiting for the link to
# come back up. Units are in seconds. linkdown-tmo works in conjunction
# with nodev-tmo. I/O will fail when either of the two expires.
linkdown-tmo=1;    # default is linkdown-tmo=30

WARNING! Any changes to the driver configuration file must be tested before going into a production environment.

Installation Notes for QLogic qla Drivers


The following notes apply when connecting to a Solaris host that utilizes a QLogic HBA with a qla2300 driver. The default (as installed) parameter settings in the /kernel/drv/qla2300.conf file will allow the host to connect in either direct or fabric modes.

NOTE: The currently supported QLogic driver versions, as listed in the current InForm OS Configuration Matrix, do not require target and LUN entries in the /kernel/drv/sd.conf file.

Configuration File Settings for QLogic qla Drivers


In the following example, all default values in the /kernel/drv/qla2300.conf file were used except for the hba0-link-down-timeout option, which is used to reduce I/O stall timings.
# Amount of time to wait for loop to come up after it has gone down
# before reporting I/O errors.
# Range: 0 - 240 seconds
hba0-link-down-timeout=1;    # default is hba0-link-down-timeout=60; DO NOT LOWER below 30 for Solaris 9

WARNING! Any changes to the driver configuration file must be tested before going into a production environment.

WARNING! DO NOT LOWER the qla2300.conf variable hba0-link-down-timeout below 30 seconds for Solaris 9 hosts.

Installation Notes for Solaris qlc and emlxs Drivers


The following notes apply when connecting to a Solaris host that utilizes a QLogic or Emulex HBA and relies on the qlc or emlxs drivers supplied as part of the Sun SAN installation. The default (as installed) parameter settings in the /kernel/drv/qlc.conf or /kernel/drv/emlxs.conf files allow the host to connect in either direct or fabric modes. Early versions of Sun's qlc and emlxs drivers had a very limited set of parameters available for adjustment. Testing was performed with all of the parameters listed in these configuration files set to their originally installed or default settings.

NOTE: 4 Gb/s Sun StorageTek SG-xxxxxxx-QF4 and QLogic QLA24xx HBAs will be limited to 256 LUNs per target unless patch 119130 or 119131 is at revision -21 or higher.

Configuration File Settings for Solaris qlc and emlxs Drivers


No configuration settings are required for Solaris qlc and emlxs drivers; the default /kernel/drv/qlc.conf and /kernel/drv/emlxs.conf configuration settings are supported.

WARNING! MPXIO on fp is enabled by default, so running the stmsboot -e command erases the original fp.conf and replaces it with a 2-line file. As a workaround, run stmsboot -d -D fp to disable the fp MPXIO first; then you should be able to run stmsboot -e successfully without loss of the fp HBA.

Installation Notes for JNI Tachyon fcaw and fca-pci Drivers


The following notes apply when connecting to a Solaris host that utilizes a JNI Tachyon HBA and relies on the fcaw and fca-pci drivers. NOTE: These technical notes only apply to InForm OS 2.2.x.

Direct Connect: Configured by editing the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf files:


fca_nport = 0;      # initialize on a loop
public_loop = 0;    # initialize according to what fca_nport is set to

Fabric Connect: Configured by editing the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf files:


fca_nport = 1;      # initialize as N_Port
public_loop = 0;    # initialize according to what fca_nport is set to

The JNIsnia package is included with the driver installation but is optional and is not required to access the HP 3PAR Storage System from the Solaris host. The driver packages and driver installation instructions are available at: http://www.amcc.com

The fca-pci.conf and fcaw.conf files will be installed in the /kernel/drv directory when the driver package is installed.

In both direct connect and fabric configurations (where each host HBA port logically connects to only one HP 3PAR Storage System port), each initiator (host server HBA port) can only discover one target (HP 3PAR Storage System port). For these configurations, persistent target binding in the HBA driver, although possible, is not required since there will only be one target found by each host HBA driver instance. When the binding parameters in the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf files


are left at their default settings, each instance of the JNI driver will automatically discover one HP 3PAR Storage System port and assign it a target value of 0 each time the Solaris host is booted. The following example shows the default fca-pci.conf settings:
def_hba_binding = "fca-pci*";
def_wwpn_binding = "$xxxxxxxxxxxxxxxx";
def_wwnn_binding = "$xxxxxxxxxxxxxxxx";
def_port_binding = "xxxxxx";

Default fcaw.conf settings:

def_hba_binding = "fcaw*";
def_wwpn_binding = "$xxxxxxxxxxxxxxxx";
def_wwnn_binding = "$xxxxxxxxxxxxxxxx";
def_port_binding = "xxxxxx";

If changes in the mapping of a device to its device node (/dev/rdsk/cxtxdx) cannot be tolerated for your configuration, you can assign and lock target IDs based on the HP 3PAR Storage System port's World Wide Port Name by adding specific target binding statements in the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf file. Refer to the fca-pci or fcaw driver documentation and the /opt/JNIfcaPCI/technotes or /opt/JNIfcaw/technotes files for more information about mapping discovered targets to specific target IDs on the host.

The Solaris sd SCSI driver will only probe for targets and LUNs that are configured in the /kernel/drv/sd.conf file. For fabric configurations, entries must exist for all target/LUNs that are exported from the HP 3PAR Storage System to the Solaris host. The sd.conf file is read by the sd driver at boot time, so supporting entries for new LUNs must exist prior to the last server reboot. These entries can be restricted to the JNI fca-pci or fcaw driver only; thus, a useful strategy is to add entries for all possible LUNs (0 to 255) on target 0. For instance, add the following entries to the sd.conf file:

JNI fcaw driver:
name="sd" parent="fcaw" target=0 lun=0; ... name="sd" parent="fcaw" target=0 lun=255;

JNI fca-pci driver:


name="sd" parent="fca-pci" target=0 lun=0; ... name="sd" parent="fca-pci" target=0 lun=255;

Testing at HP did not reveal any noticeable increase in server boot time due to the probing of non-existent LUNs. For some installations, you may want to place specific entries for the actual LUN numbers exported from the HP 3PAR Storage System ports in the sd.conf file. However, this approach requires additional entries and a reboot of the Solaris host each time new VLUNs are later exported with new LUN numbers.
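As with the lpfc entries earlier in this chapter, the 256 sd.conf entries can be generated rather than typed. A hedged sketch (the output path is arbitrary; substitute fcaw for fca-pci as needed):

# perl -e 'for $i (0..255) { print "name=\"sd\" parent=\"fca-pci\" target=0 lun=$i;\n" }' > /tmp/sd-jni-entries.txt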


NOTE: Each target/LUN entry in sd.conf for non-existent LUNs (a LUN that has not yet been exported from the HP 3PAR Storage System) will result in probe fail messages from the fca-pci or fcaw driver in the /var/adm/messages file and on the server console each time the driver scans for devices in response to the Solaris devfsadm command. These messages can be minimized or eliminated by populating /kernel/drv/sd.conf with fewer entries. HP recommends that the sd.conf file be populated with all possible target/LUN combinations that may be exported from the HP 3PAR Storage System, despite the probe fail messages, to avoid having to reboot the host server to register newly exported HP 3PAR Storage System LUNs.

NOTE: Target=0 entries are sufficient unless a host port will detect more than one HP 3PAR Storage System port, or the one that is detected has been persistently bound to a different target number. In this case, entries will be required for other targets.

The optional EZFibre GUI utility is available at http://www.amcc.com/. This utility provides a view of each JNI HBA port on the server and the targets and LUNs each has acquired. This utility can also be used to statically target bind discovered targets and LUNs if that is a requirement of your specific configuration.

Configuration File Settings for JNI Tachyon Drivers


The following /kernel/drv/fcaw.conf configuration settings were used for testing purposes by HP on a fabric connection:
fca_nport = 1;
public_loop = 0;
def_wwpn_binding = "$xxxxxxxxxxxxxxxx";
def_wwnn_binding = "$xxxxxxxxxxxxxxxx";
def_port_binding = "xxxxxx";

Installation Notes for JNI Emerald JNIC146x Drivers


The following notes apply when connecting to a Solaris host that utilizes a JNI Emerald HBA and relies on the JNIC146x drivers.

NOTE: These technical notes only apply to InForm OS 2.2.x.

Direct Connect: Configured by editing the /kernel/drv/jnic146x.conf file:


FcLoopEnabled = 1;
FcFabricEnabled = 0;
automap = 2;

Fabric Connect: Configured by editing the /kernel/drv/jnic146x.conf file:


FcLoopEnabled = 0;
FcFabricEnabled = 1;
automap = 1;

Install the JNI driver package version 5.3.1.3. The driver install package file JNIC146x.pkg contains the JNIC146x and JNIsnia packages. The JNIsnia package is optional and is not required to access the HP 3PAR Storage System from the Solaris host. The driver packages and driver installation instructions are available at http://www.amcc.com. The jnic146x.conf file


will be installed in the /kernel/drv directory as the driver package is installed. Edit the /kernel/drv/jnic146x.conf file by adding the following entries.

Direct Connect:
FcLoopEnabled=1;     # enable loop mode
FcFabricEnabled=0;   # disable fabric mode
automap=2;           # automap target/LUNs

For Fabric Connect:

FcLoopEnabled=0;     # disable loop mode
FcFabricEnabled=1;   # enable fabric mode
automap=1;           # automap target/LUNs

Unload and reload the jnic146x driver so that the edits to /kernel/drv/jnic146x.conf take effect.
# /opt/JNIC146x/jnic146x_unload
# /opt/JNIC146x/jnic146x_load

Verify that each JNI HBA is loaded with FCode firmware version 3.9.1. There will be messages for each HBA port in the /var/adm/messages file.

NOTE: If the HBAs are not using FCode firmware version 3.9.1 or later, upgrade the FCode firmware. FCode firmware and installation instructions are available as install packages (specific to each HBA model) from: http://www.amcc.com/

JNIC146x driver versions 5.3 and greater do not require LUN and target entries in the /kernel/drv/sd.conf file.

The optional EZFibre GUI utility is available at http://www.amcc.com/. This utility gives a view of each JNI HBA port in the server and the targets and LUNs each has acquired. This utility can also be used to statically target bind discovered targets and LUNs if that is a requirement of your specific configuration.

Perform a reconfigure reboot of the host server (reboot -- -r) or create the file /reconfigure so that the next server boot will be a reconfiguration boot.
# touch /reconfigure

Configuration File Settings for JNI Emerald Drivers


The following /kernel/drv/jnic146x.conf configuration settings were used for testing purposes by HP. All default values are used except for the following variations:

Direct Connect:
FcLoopEnabled = 1;     # use for direct connections
FcFabricEnabled = 0;   # use for direct connections
automap = 2;           # use for direct connections

Fabric Connect:
FcLoopEnabled = 0;     # use for fabric connections
FcFabricEnabled = 1;   # use for fabric connections
automap = 1;           # use for fabric connections

Verifying the Driver Package Installation


To verify that the driver has loaded properly, use the appropriate modinfo command for the type of driver you are installing.
# modinfo | egrep "lpfc|qla2300|qlc|emlxs|jni"

Relevant messages are recorded in the /var/adm/messages file for each port that has an associated driver and can be useful for verification and troubleshooting.

NOTE: The Solaris-supplied emlxs driver may bind to the Emulex HBA ports and prevent the Emulex lpfc driver from attaching to the HBA ports. Emulex provides an emlxdrv utility as part of the "FCA Utilities" available for download from www.emulex.com. You can use the emlxdrv utility to adjust the driver bindings on a per-HBA basis on the server between the Emulex lpfc driver and the Sun emlxs driver. You may need to use this utility if the lpfc driver does not bind to the Emulex-based HBAs upon reconfigure-reboot. Solaris 8 requires that the emlxdrv package be removed before installing the lpfc driver.
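As a quick check (a hedged example; the exact message text varies by HBA model and driver version), you can scan the messages file for driver attach entries:

# egrep -i "lpfc|emlxs|qla2300|qlc|jni" /var/adm/messages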

Setting Up Dynamic Multipathing for the Solaris Host


Two options for multipathing, Veritas Volume Manager (VxDMP) and Sun StorEdge Traffic Manager (SSTM), are supported by the Solaris OS.

Using Veritas Volume Manager VxDMP Multipathing


As an option, you can use Veritas Volume Manager for multipath load balancing and failover. Install a supported version of Veritas Volume Manager using the Veritas Volume Manager user guide. Veritas Volume Manager and its user guide are available from: http://www.veritas.com/

Refer to the HP 3PAR InForm OS Configuration Matrix for a list of supported Veritas Volume Manager versions.

NOTE: Refer to Allocating Storage for Access by the Solaris Host (page 52) for supported driver/DMP combinations.

To enable the Veritas DMP driver to manage multipathed server volumes, install the Array Support Library (ASL) for HP 3PAR Storage Systems (VRTS3par package) on the Solaris host. This ASL is installed automatically with the installation of 5.0MP3 and above. For older versions of VxDMP, the ASL will need to be installed separately:

Install the VRTS3par package from the VRTS3par_SunOS_50 distribution package for Veritas Volume Manager versions 5.0 and 5.0MP1.

Install the VRTS3par package from the VRTS3par_v1.2_SunOS_40 distribution package for Veritas Volume Manager versions 4.0 and 4.1.

These VRTS3par packages are available from http://support.veritas.com/.

NOTE: Some distributions of the Veritas software include a VRTS3par package that is copied to the host server as the Veritas software is installed. This package is likely to be an older VRTS3par package (version 1.0 or 1.1), which should not be used. Instead, install the current VRTS3par package from the Veritas support site.

The following setting on the enclosure is required if long failback times are causing concern. This enclosure setting can be used with 5.0GA, 5.0MP1, 5.0MP3, and 5.1GA VxDMP:
# vxdmpadm setattr enclosure <name> recoveryoption=timebound iotimeout=60


If this attribute is not set, I/O will still eventually fail back to the recovered paths. The default value for the enclosure is "fixed retry=5". To return the setting to the default:
# vxdmpadm setattr enclosure <name> recoveryoption=default
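To check the current recovery option setting on the enclosure before or after making changes (a hedged example; vxdmpadm getattr is standard VxVM, but the output format varies by release):

# vxdmpadm getattr enclosure <name> recoveryoption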

Verifying the VxDMP ASL Installation


To confirm that the Veritas VxDMP driver has been registered to claim the HP 3PAR Storage System, issue the Veritas vxddladm listsupport libname=libvx3par.so command.
# vxddladm listsupport libname=libvx3par.so
ATTR_NAME    ATTR_VALUE
=======================================================================
LIBNAME      libvx3par.so
VID          3PARdata
PID          VV
ARRAY_TYPE   A/A
ARRAY_NAME   3PARDATA

You can also issue the following command:


# /opt/VRTS/bin/vxddladm listversion

WARNING! Failure to claim the HP 3PAR Storage System as an HP 3PAR array will affect the way devices are discovered by the multipathing layer.

WARNING! The minimum supported software installation version for VxDMP_5.0MP3 is VxDMP_5.0MP3_RP1_HF3 with vxdmpadm settune dmp_fast_recovery=off. This tunable can be left at its default value with the later versions VxDMP_5.0MP3_RP2_HF1 and VxDMP_5.0MP3_RP3.

CAUTION: You may need to reboot the host if you wish to reuse VLUN numbers with the following VxDMP versions: VxDMP_5.0MP3_RP3 or VxDMP_5.1. Veritas has enhanced data protection code which may be triggered if a VLUN number is reused ("Data Corruption Protection Activated").

Using Sun StorageTek Traffic Manager (SSTM) Multipathing


The Solaris 10 OS contains the Solaris FC and Storage Multipathing software (Sun StorageTek Traffic Manager - SSTM). The following notes apply for various OS versions.

Solaris 8/9/10

For all releases of Solaris, edit the /kernel/drv/scsi_vhci.conf file to allow StorEdge Traffic Manager (SSTM) to recognize HP 3PAR VLUNs on the host system (see Edits to the /kernel/drv/scsi_vhci.conf file for SSTM Multipathing (page 38)). An additional variable change is required for Solaris 8 and 9 (see Additional edit to the /kernel/drv/scsi_vhci.conf file for Solaris 8/9 (page 38)).

Solaris 10


Additionally, to enable Sun StorageTek Traffic Manager (SSTM) for all HBAs on Solaris 10 systems, issue the stmsboot -e command to enable multipathing (stmsboot -d will disable multipathing).

CAUTION: When running Solaris 10 MU7, enabling SSTM on a fresh install using stmsboot -e can corrupt the fp.conf configuration. To avoid this, issue stmsboot -d -D fp to disable the fp MPXIO. You should then be able to run stmsboot -e successfully without loss of the fp HBA. For more information on this workaround, consult: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6811044

Edits to the /kernel/drv/scsi_vhci.conf file for SSTM Multipathing


Edit the /kernel/drv/scsi_vhci.conf file and add the appropriate entries for the InForm OS version that is being used by the HP 3PAR Storage System.

InForm 2.3.x and 3.1.x
device-type-scsi-options-list = "3PARdataVV", "symmetric-option", "3PARdataSES", "symmetric-option"; symmetric-option = 0x1000000;

InForm 2.2.x
device-type-scsi-options-list = "3PARdataVV", "symmetric-option"; symmetric-option = 0x1000000;

Additional edit to the /kernel/drv/scsi_vhci.conf file for Solaris 8/9


Edit the /kernel/drv/scsi_vhci.conf file to enable StorEdge Traffic Manager (SSTM) globally for all HBAs in the system by changing the parameter mpxio-disable to a value of no. Add the following entry at the end of the file:
mpxio-disable="no"; (default is "yes")

NOTE: After editing the configuration file, perform a reconfiguration reboot of the Solaris host.

SPARC: issue reboot -- -r

x64/x86: create the file /reconfigure so that the next server boot will be a reconfiguration boot:

# touch /reconfigure

NOTE: For detailed installation instructions, consult the Solaris Fibre Channel and Storage Multipathing Administration Guide, located at the following address: http://www.sun.com/storage/san/ This document includes instructions for enabling Solaris I/O Multipathing on specific Sun HBA ports but does not apply for other HBAs.

Persistent Target Binding Considerations


Persistent target binding ensures that the mapping of a given target to a physical storage device remains the same from one reboot to the next. In most cases, where each HBA port logically
connects to only one HP 3PAR Storage System port, it is not necessary to specifically implement persistent target binding through configuration of the HBA driver, since each initiator (Solaris host HBA port) can only discover one target (HP 3PAR Storage System port), as shown in Figure 6 (page 39).

Figure 6 Persistent Target Binding

While the HP 3PAR Storage System is running, departing and returning HP 3PAR Storage System ports (e.g., an unplugged cable) are tracked by their World Wide Port Name (WWPN). The WWPN of each HP 3PAR Storage System port is unique and constant, which ensures correct tracking of a port and its LUNs by the host HBA driver. However, in configurations where multiple HP 3PAR Storage System ports are available for discovery, some specific target binding may be necessary. The following sections describe considerations for implementing persistent binding for each type of HBA that is supported by the Solaris OS.

Persistent Target Binding for Emulex lpfc Drivers


By having automap set to a value of 1 and the fcp-bind-method set to a value of 2 in the /kernel/drv/lpfc.conf file, each HP 3PAR Storage System port will automatically be discovered and assigned a target value of 0 each time the host server is booted. For configurations where a host HBA port logically connects to more than one HP 3PAR Storage System port, it can be useful to persistently bind each storage server port to a specified target ID. This process is discussed in Section (page 14). For more information on setting the persistent target binding capabilities of the Emulex HBA lpfc driver, consult the Emulex documentation that is available from the following location: http://www.emulex.com/


Persistent Target Binding for QLogic qla Drivers


By leaving the binding parameters at their default settings in /kernel/drv/qla2300.conf, each instance of the qla driver will automatically discover one HP 3PAR Storage System port and assign it a target value of 0 each time the Solaris host is booted. The target component of the device node for each HP 3PAR Storage System volume will be assigned a target "t" component equal to 0. The following example shows the default settings:
hba0-persistent-binding-configuration=0;   # 0 = Reports to OS discovery of binded and non-binded devices
hba0-persistent-binding-by-port-ID=0;      # Persistent binding by FC port ID disabled

If a fabric zoning relationship exists such that a host HBA port has access to multiple targets (for example, multiple ports on the HP 3PAR Storage System), the driver will assign target IDs (cxtxdx) to each discovered target in the order that they are discovered. In this case, the target ID for a given target can change as targets leave the fabric and return, or when the host is rebooted while some targets are not present. If changes in the mapping of a device to its device node (/dev/rdsk/cxtxdx) cannot be tolerated for your configuration, you can assign and lock the target IDs based on the HP 3PAR Storage System port's World Wide Port Name by adding specific target binding statements in the /kernel/drv/qla2300.conf file. These statements associate a specified target ID assignment to a specified WWPN for a given instance of the qla driver (a host HBA port). For example, to bind HP 3PAR Storage System WWPN 20310002ac000040 to target ID 6 for qla2300 instance 0, you would add the following statement to /kernel/drv/qla2300.conf:
hba0-SCSI-target-id-6-fibre-channel-port-name="20310002ac000040";

With this binding statement active, a target with a WWPN of 20310002ac000040 that is discovered on the host HBA port for driver instance 0 will always receive a target ID assignment of 6, thus yielding a device node like the one shown in the following example.
/dev/rdsk/c4t6d20s2

The current HBA driver instance matching to discovered target WWPN associations (for connected devices) can be obtained from entries in the /var/adm/messages file generated from the last server boot.
# grep fibre-channel-port /var/adm/messages
sunb1k-01 qla2300: [ID 558211 kern.info] hba0-SCSI-target-id-0-fibre-channel-port-name="20310002ac000040";
sunb1k-01 qla2300: [ID 558211 kern.info] hba1-SCSI-target-id-0-fibre-channel-port-name="21510002ac000040";

New or edited binding statement entries can be made active without rebooting the Solaris host by issuing the following command:
# /opt/QLogic_Corporation/drvutil/qla2300/qlreconfig -d qla2300


This command enables the persistent binding option in /kernel/drv/qla2300.conf:


hba0-persistent-binding-configuration=1;

CAUTION: This procedure should not be attempted while I/O is running through the qla driver instances, as it will briefly interrupt that I/O and may also change a discovered device's device nodes if changes have been made to the persistent binding statements.

While running with the persistent binding option enabled, only persistently bound targets and their LUNs will be reported to the operating system. If the persistent binding option is disabled in /kernel/drv/qla2300.conf, changes to persistent target binding will only take effect during the next host server reboot.
hba0-persistent-binding-configuration=0;

While running with the persistent binding option disabled, both persistently bound targets and their LUNs and non-bound targets and their LUNs are reported to the operating system.

For information about mapping discovered targets to specific target IDs on the host, consult the /opt/QLogic_Corporation/drvutil/qla2300/readme.txt file that is loaded with the qla driver. For more information on setting the persistent target binding capabilities of the QLogic HBA qla driver, consult the QLogic documentation that is available from the following location: http://www.qlogic.com/

Persistent Target Binding for Solaris qlc and emlxs Drivers


When using the QLogic qlc and Emulex emlxs drivers supplied as part of the Solaris SAN Foundation suite, the target IDs are either the hard address of the device (in a private loop) or the WWN. Therefore, no persistent target mapping is required.

Persistent Target Binding for JNI Tachyon Drivers


Target IDs can be assigned and locked based on the HP 3PAR Storage System port's World Wide Port Name by adding specific target binding statements in the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf file. Refer to the fca-pci or fcaw driver documentation and the /opt/JNIfcaPCI/technotes or /opt/JNIfcaw/technotes files for more information about mapping discovered targets to specific target IDs on the host.

Persistent Target Binding for JNI Emerald Drivers


Each instance of the JNI driver will automatically discover each HP 3PAR Storage System port and assign it a target value of 0 each time the Solaris host is booted. For direct connections, the default setting of automap=2 in the /kernel/drv/jnic146x.conf file is used to statically bind each discovered target using its world-wide name (WWN). The WWN of each port is unique and constant which ensures correct tracking of a port and its LUNs by the host HBA driver.
def_hba_binding = "jnic146x*"; def_wwnn_binding = "$xxxxxxxxxxxxxxxx"; def_wwpn_binding = "$xxxxxxxxxxxxxxxx"; def_port_binding = "xxxxxx";

In fabric configurations, a setting of automap=1 is required to achieve the binding.



If a fabric zoning relationship exists such that a host HBA port has access to multiple targets (for example, multiple ports on the HP 3PAR Storage System), the driver will assign target IDs (cxtxdx) to each discovered target in the order that they are discovered. The target ID for a given target can change in this case as targets leave the fabric and return, or when the host is rebooted while some targets are not present. Target IDs can be assigned and locked based on the HP 3PAR Storage System port's World Wide Port Name using the "configuration parameters for target to FC device mapping" section of the /kernel/drv/sd.conf file. If the automap parameter is set to 1, devices that do not have a specified mapping will be automapped. If the automap parameter is set to 0, only specifically mapped targets and their LUNs will be seen by the operating system. Refer to the jnic146x driver documentation for more information about mapping discovered targets to specific target IDs on the host.

System Settings for Minimizing I/O Stall Times on VLUN Paths


This section provides system settings that can help minimize I/O stall times on VLUN paths for FC direct or fabric-connected Solaris hosts.

There is a delay of fp_offline_ticker before fp tells fcp about the link outage (default 90 seconds). There is a further delay of fcp_offline_delay before fcp offlines LUNs (default 20 seconds). You can change these settings by making the necessary edits in the /kernel/drv/fp.conf and /kernel/drv/fcp.conf files, respectively. For example, you could edit the fcp.conf file with the following fcp_offline_delay setting to change the timer to 10 seconds:
fcp_offline_delay=10;

Setting this value outside the range of 10 to 60 seconds will log a warning message to the /var/adm/messages file. As another example, you could edit the fp.conf file with the following fp_offline_ticker setting to change the timer to 50 seconds:
fp_offline_ticker=50;

Setting this value outside the range of 10 to 90 seconds will log a warning message to the /var/adm/messages file.

Starting with Sun StorageTek[TM] SAN 4.4.11 and Solaris 10 U3, these parameters are tunable. They can be tuned by modifying the respective driver.conf file. The range of allowed values has been chosen considering the FC standards limits. Both can be tuned down, but not below 10 seconds (the driver code will either enforce a minimum value of 10 seconds, issue a warning at boot time, or both).

WARNING! Tuning these parameters may have an adverse effect on the system. If you are optimizing your storage configuration for stability, HP recommends staying with the default values for these tunables. Any changes to these tunables are made at your own risk and could have unexpected consequences (e.g., fatal I/O errors when attempting to perform online firmware upgrades to attached devices, or during ISL or other SAN reconfigurations). Changes could also affect system performance due to excessive path failover events in the presence of minor intermittent faults. You may need to test any changes for your standard configuration/environment and specific tests, and determine the best tradeoff between a quicker failover and resilience to transient failures.
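To confirm the values the running kernel is actually using, one hedged approach is to query the loaded fp and fcp modules with mdb (this sketch assumes the modules are loaded and expose these tunables as symbols; output formats vary by release):

# echo "fp_offline_ticker/D" | mdb -k
# echo "fcp_offline_delay/D" | mdb -k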


Refer to http://www.sun.com/bigadmin/features/hub_articles/tuning_sfs.pdf for the implications of changes to your host server.

CAUTION: It is not presently possible on Solaris to lower I/O stalls on iSCSI-attached array paths due to a Solaris-related bug (Bug ID: 6497777). Until a fix is available in Solaris 10 update 9, the connection timeout is fixed at 180 seconds and cannot be modified.


5 Configuring the Host for an iSCSI Connection


This chapter describes the procedures that are required to set up a Solaris host to communicate with an HP 3PAR Storage System over an iSCSI connection.

Solaris Host Server Requirements


To utilize an iSCSI connection, the Solaris host must meet the following software requirements:

Solaris 10 (MU5 and later) for up to 1-Gb iSCSI

Sun iSCSI Device Driver and Utilities

Patch 119090-26 (SPARC) or 119091-26 (x86)

Patches are downloadable from the following Web site: https://support.oracle.com/CSP/ui/flash.html The following example shows how to generate the output for checking the current version levels for various components:
# more /etc/release
                       Solaris 10 5/08 s10s_u5wos_10 SPARC
           Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
                        Use is subject to license terms.
                            Assembled 24 March 2008
# showrev -p | grep 119090
Patch: 119090-25 Obsoletes: 121980-03, 123500-02 Requires: 118833-36 Incompatibles: Packages: SUNWiscsir, SUNWiscsiu
# pkginfo -l SUNWiscsir
   PKGINST:  SUNWiscsir
      NAME:  Sun iSCSI Device Driver (root)
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  11.10.0,REV=2005.01.04.14.31
   BASEDIR:  /
    VENDOR:  Sun Microsystems, Inc.
      DESC:  Sun iSCSI Device Driver
    PSTAMP:  bogglidite20061023141016
  INSTDATE:  Jul 03 2009 06:03
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed
     FILES:       19 installed pathnames
                  14 shared pathnames
                  13 directories
                   2 executables
                1266 blocks used (approx)
# pkginfo -l SUNWiscsiu
   PKGINST:  SUNWiscsiu
      NAME:  Sun iSCSI Management Utilities (usr)
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  11.10.0,REV=2005.01.04.14.31
   BASEDIR:  /
    VENDOR:  Sun Microsystems, Inc.
      DESC:  Sun iSCSI Management Utilities
    PSTAMP:  bogglidite20071207145617
  INSTDATE:  Jul 03 2009 06:04
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed
     FILES:       15 installed pathnames
                   5 shared pathnames
                   5 directories
                   5 executables
                1005 blocks used (approx)
# modinfo | grep iscsi
104 7bee0000  2b7e8  96   1  iscsi (Sun iSCSI Initiator v20071207-0)

Setting Up the Ethernet Switch


1. Connect the Solaris (iSCSI Initiator) host Ethernet/Fibre cables and the HP 3PAR Storage System iSCSI target ports' Ethernet/Fibre cables to the Ethernet switches.
2. If you are using VLANs, make sure that the switch ports (where the HP 3PAR Storage System iSCSI target ports and iSCSI Initiators are connected) are in the same VLANs and/or that you can route the iSCSI traffic between the iSCSI Initiators and the HP 3PAR Storage System iSCSI target ports.

NOTE: Ethernet switch VLAN and routing setup and configuration is beyond the scope of this document. Consult your switch manufacturer's documentation for instructions on how to set up VLANs and routing.

Once the iSCSI Initiator and HP 3PAR Storage System iSCSI target ports are configured and connected to the switch, you can use the ping command on the iSCSI Initiator host to make sure that it sees the HP 3PAR Storage System iSCSI target ports, as shown in the example below.
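For example (a hedged sketch that reuses the target port IP address shown elsewhere in this guide; substitute your own target port address):

# ping 11.1.0.110
11.1.0.110 is alive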

Configuring the Solaris Host Ports


Configure the host network interface card (NIC) or CNA card IPs appropriately for the iSCSI Initiator software that is used to connect to the HP 3PAR Storage System iSCSI target ports. Ensure that the iSCSI Initiator software is properly configured as described in Setting Up the iSCSI Initiator for Target Discovery (page 46). The following example shows the steps that are required to configure a host with two iSCSI ports.
1. Create the two NICs required for iSCSI on the host.
bash-3.00# ifconfig bge1 plumb && ifconfig bge1 10.105.1.10 netmask 255.255.255.0 up
bash-3.00# ifconfig bge2 plumb && ifconfig bge2 10.105.2.10 netmask 255.255.255.0 up

2. Check that the iSCSI NICs are created and configured correctly.
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.10.205 netmask ffffff00 broadcast 192.168.10.255
        ether 0:14:4f:b0:53:4c
bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 10.105.1.10 netmask ffffff00 broadcast 10.105.1.255
        ether 0:14:4f:b0:53:4d
bge2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 10.105.2.10 netmask ffffff00 broadcast 10.105.2.255
        ether 0:14:4f:b0:53:4e

3. Add the IP addresses and a symbolic name for the iSCSI NICs to the /etc/hosts file.
::1             localhost
127.0.0.1       localhost
192.168.10.206  sqa-sunv245
10.105.1.10     bge1
10.105.2.10     bge2

4. Create the following files for both iSCSI NICs on the host.
/etc/hostname.bge1 with contents:
10.105.1.10 netmask 255.255.255.0

/etc/hostname.bge2 with contents:
10.105.2.10 netmask 255.255.255.0
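One hedged way to create these files from the shell (assuming the bge1/bge2 interface names and addresses used in this example):

bash-3.00# echo "10.105.1.10 netmask 255.255.255.0" > /etc/hostname.bge1
bash-3.00# echo "10.105.2.10 netmask 255.255.255.0" > /etc/hostname.bge2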

5. Identify the IP address and netmask for both iSCSI host server NICs in the /etc/netmasks file.
bash-3.00# more /etc/netmasks
#
# The netmasks file associates Internet Protocol (IP) address
# masks with IP network numbers.
#
# network-number    netmask
#
# The term network-number refers to a number obtained from the Internet Network
# Information Center.
#
# Both the network-number and the netmasks are specified in
# "decimal dot" notation, e.g.:
#
#       128.32.0.0      255.255.255.0
10.105.1.10     255.255.255.0
10.105.2.10     255.255.255.0

Setting Up the iSCSI Initiator for Target Discovery


Solaris uses an open iSCSI Initiator which supports the following target discovery methods:

Static Device Discovery
SendTargets Dynamic Device Discovery
iSNS Dynamic Device Discovery

NOTE: The Solaris OS does not currently support advertisement of the iSNS server address through DHCP, although support may be added in the future. The Solaris OS does not support Service Location Protocol (SLP) discovery of the iSNS server address.

The HP 3PAR Storage System supports all of the above discovery methods. For details on iSCSI Initiator configuration, see the System Administration Guide: Devices and File Systems and refer to the chapter Solaris iSCSI Initiators (Tasks), available at: http://docs.sun.com

CAUTION: Configuring both static and dynamic device discovery for a given target is not recommended, since it can cause problems communicating with the iSCSI target device.

Using the Static Device Discovery Method


The following example shows how to configure the Solaris host for the HP 3PAR Storage System target using the Static Device Discovery method.
1. Verify that the target is pingable.
# ping 11.1.0.110

2. Define the static target address. Use showport -iscsiname to get the HP 3PAR Storage System target iSCSI name.
# iscsiadm add static-config iqn.2000-05.com.3pardata:21310002ac00003e,11.1.0.110:3260

(Repeat for the other port.)
3. Enable the static device discovery method.
# iscsiadm modify discovery --static enable

4. Verify that the static discovery is enabled.


# iscsiadm list discovery
Discovery:
        Static: enabled
        Send Targets: disabled
        iSNS: disabled

Using the SendTargets Discovery Method


The following example shows how to configure the Solaris host for an HP 3PAR Storage System target port using the SendTargets discovery method.
1. Verify that the target is pingable.
# ping 11.1.0.110

2. Add the target discovery address.


# iscsiadm add discovery-address 11.1.0.110:3260

(Repeat for the other address port.)
3. Enable the SendTargets discovery method.
# iscsiadm modify discovery --sendtargets enable

4. Verify that the SendTargets discovery is enabled.


# iscsiadm list discovery
Discovery:
        Static: disabled
        Send Targets: enabled
        iSNS: disabled

Using the iSNS Discovery Method


The following example shows how to configure the Solaris host for an HP 3PAR Storage System target port using the iSNS discovery method.


1. Verify that an iSNS server IP address has been configured on the target port using the InForm CLI controliscsiport command.
2. Verify that the target is pingable.
# ping 11.1.0.110

3. Add the iSNS server IP address.


# iscsiadm add iSNS-server 11.0.0.200:3205

4. Enable the iSNS discovery method.


# iscsiadm modify discovery --iSNS enable

5. Verify that the iSNS discovery is enabled.


# iscsiadm list discovery
Discovery:
        Static: disabled
        Send Targets: disabled
        iSNS: enabled

Initiating and Verifying Target Discovery


1. After configuring the discovery method, issue devfsadm the first time to cause the host to log in to the target (HP 3PAR Storage System) and discover it.
# devfsadm -i iscsi

Once the target is discovered and configured, any event (e.g., a host reboot, an HP 3PAR Storage System node down, or an HP 3PAR Storage System target reboot) causes the host to automatically discover the target without the need to issue devfsadm. However, if any change is made in the target discovery address or method, a devfsadm command must be issued to reconfigure the altered discovery address.
2. Verify the discovered targets.
# iscsiadm list target
Target: iqn.2000-05.com.3pardata:21310002ac00003e
        Alias: -
        TPGT: 131
        ISID: 4000002a0000
        Connections: 1

3. The Solaris iSCSI Initiator sets the Max Receive Data Segment Length target parameter to a value of 8192 bytes; this variable determines the amount of data the HP 3PAR Storage System can receive from or send to the Solaris host in a single iSCSI PDU. This parameter value should be changed to 65536 bytes for better I/O throughput and the capability to handle large I/O blocks. The following command should be used to change the parameter, and it should be set on each individual target port.
# iscsiadm modify target-param -p maxrecvdataseglen=65536 <target iqn name>

Example:
a. List the default target settings used by the iSCSI Initiator.
# iscsiadm list target-param -v
Target: iqn.2000-05.com.3pardata:21310002ac00003e
...
        Login Parameters (Default/Configured):
                Max Receive Data Segment Length: 8192/-
...

b. List the target settings negotiated by the iSCSI Initiator.


# iscsiadm list target -v
Target: iqn.2000-05.com.3pardata:21310002ac00003e
        Login Parameters (Negotiated):
                Max Receive Data Segment Length: 8192

c. Change the value from 8192 to 65536 for all target ports.
# iscsiadm modify target-param -p maxrecvdataseglen=65536 iqn.2000-05.com.3pardata:21310002ac00003e

d. Verify that the changed value is set.


# iscsiadm list target-param -v
Target: iqn.2000-05.com.3pardata:21310002ac00003e
...
                Max Receive Data Segment Length: 8192/65536
...
# iscsiadm list target -v
Target: iqn.2000-05.com.3pardata:21310002ac00003e
...
        Login Parameters (Negotiated):
...
                Max Receive Data Segment Length: 65536
...

4. Issue the iscsiadm list target -v command to list all the negotiated login parameters:
# iscsiadm list target -v
Target: iqn.2000-05.com.3pardata:21310002ac00003e
        Alias: -
        TPGT: 1
        ISID: 4000002a0000
        Connections: 1
                CID: 0
                  IP address (Local): 11.1.0.40:33672
                  IP address (Peer): 11.1.0.110:3260
                  Discovery Method: SendTargets
                  Login Parameters (Negotiated):
                        Data Sequence In Order: yes
                        Data PDU In Order: yes
                        Default Time To Retain: 20
                        Default Time To Wait: 2
                        Error Recovery Level: 0
                        First Burst Length: 65536
                        Immediate Data: no
                        Initial Ready To Transfer (R2T): yes
                        Max Burst Length: 262144
                        Max Outstanding R2T: 1
                        Max Receive Data Segment Length: 65536
                        Max Connections: 1
                        Header Digest: NONE
                        Data Digest: NONE

5. (Optional) You can enable CRC32 verification on the datadigest (SCSI data) and headerdigest (SCSI packet header) of an iSCSI PDU in addition to the default TCP/IP checksum. However, enabling this verification will cause a small degradation in I/O throughput. The following example modifies the datadigest and headerdigest for the initiator:
# iscsiadm modify initiator-node -d CRC32
# iscsiadm modify initiator-node -h CRC32
# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d
Initiator node alias: -
        Login Parameters (Default/Configured):
                Header Digest: NONE/CRC32
                Data Digest: NONE/CRC32
# iscsiadm list target -v
Target: iqn.2000-05.com.3pardata:20310002ac00003e
        Login Parameters (Negotiated):
                Header Digest: CRC32
                Data Digest: CRC32

Setting Up Multipathing MPXIO


Sun StorEdge Traffic Manager (MPXIO) is the multipathing software on Solaris 10 and comes bundled with the installed OS.
1. Edit the /kernel/drv/scsi_vhci.conf file and add the following entry to enable Solaris I/O multipathing globally on all the HP 3PAR Storage System target ports:
device-type-scsi-options-list =
    "3PARdataVV", "symmetric-option",
    "3PARdataSES", "symmetric-option";
symmetric-option = 0x1000000;

2. Make sure that multipathing is enabled in the iSCSI configuration file /kernel/drv/iscsi.conf; it is enabled by default and should match the following example:
name="iscsi" parent="/" instance=0; ddi-forceattach=1; mpxio-disable="no";


3. Reboot the system after enabling multipathing.


# reboot -- -r

Or, from the OpenBoot prompt:

ok> boot -r
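After the reboot, a hedged way to confirm that MPXIO has claimed the HP 3PAR devices is to list the multipathed logical units (mpathadm ships with Solaris 10; the output depends on your configuration):

# mpathadm list lu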

WARNING! If you are using Sun Multipath I/O (Sun StorEdge Traffic Manager), HP advises that you not reuse a LUN number to export a different HP 3PAR Storage System volume, as the Solaris format output preserves the disk serial number of the first device ever seen on that LUN number since the last reboot. Any I/O performed against the older disk serial number is driven to the new volume and can cause user configuration and data integrity issues. This is a general Solaris issue with Sun Multipath I/O and is not specific to the HP 3PAR Storage System target.


6 Allocating Storage for Access by the Solaris Host


This chapter describes the basic procedures that are required to create and export virtual volumes so they can be utilized by the Solaris host and provides specific details for various connection configurations. For complete details on creating and managing storage on the HP 3PAR Storage System, consult the appropriate HP 3PAR documentation.

Creating Storage on the HP 3PAR Storage System


This section describes the general procedures that are required to create the virtual volumes that can then be exported for discovery by the Solaris host. For complete details on creating virtual volumes, see the appropriate HP 3PAR documentation.

NOTE: With InForm OS 2.3.x, the largest virtual volume that Solaris 10 supports is 16 TB. With InForm OS 2.2.x, the largest virtual volume that Solaris 9 and 10 support is 2047 gigabytes (2096128 megabytes). In either case, Veritas may not support the maximum possible virtual volume size. Consult Veritas support at: http://www.veritas.com/support

NOTE: To create thinly provisioned virtual volumes, an HP 3PAR Thin Provisioning license is required.

Creating Virtual Volumes for InForm OS 2.2.4 to 3.1.x


After devising a plan for allocating space for the Solaris host, you need to create the required virtual volumes on the HP 3PAR Storage System. Volumes can be fully provisioned from a CPG or can be thinly provisioned.

Using the InForm Management Console:


1. From the menu bar, select: Actions > Provisioning > VV > Create Virtual Volume
2. Use the Create Virtual Volume wizard to create a base volume.
3. Select one of the following options from the Provisioning list:
   Fully Provisioned from PDs
   Fully Provisioned from CPG
   Thinly Provisioned

Using the InForm CLI:


To create a fully provisioned or thinly provisioned virtual volume, issue the following InForm CLI command:
# createvv [options] <usr_CPG> <VV_name> [.<index>] <size>[g|G|t|T]

Here is an example:
# createvv -cnt 5 TESTLUNs 5G
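To confirm that the volumes were created, you can list them with the InForm CLI showvv command (a hedged example; output columns vary by InForm OS version):

# showvv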


Creating Virtual Volumes for InForm OS 2.2.3 and Earlier


When running InForm OS 2.2.3 and earlier, issue the createaldvv command to create virtual volumes on the HP 3PAR Storage System that can then be exported and discovered by the Solaris host. Here is the general form of the command:
# createaldvv [options] <vvname>[.<index>] <size>[g|G|t|T]

Here is an example:
# createaldvv -cnt 5 TESTLUNs 5G

Consult the HP 3PAR InForm OS Management Console Online Help and the HP 3PAR InForm OS CLI Reference for complete details on creating volumes for InForm OS 2.2.3 and earlier.

Exporting LUNs to a Host with a Fibre Channel Connection


This section explains how to export virtual volumes created on the HP 3PAR Storage System as VLUNs for the Solaris host with caveats for the various drivers.

Creating a Virtual Logical Unit Number for Export


Creation of a virtual logical unit number (VLUN) template enables export of a virtual volume (VV) as a VLUN to one or more Solaris hosts. There are four types of VLUN templates:
- port presents - created when only the node:slot:port is specified. The VLUN is visible to any initiator on the specified port.
- host set - created when a host set is specified. The VLUN is visible to the initiators of any host that is a member of the set.
- host sees - created when the host name is specified. The VLUN is visible to the initiators with any of the host's World Wide Names (WWNs).
- matched set - created when both the host name and node:slot:port are specified. The VLUN is visible to initiators with the host's WWNs only on the specified port.

You have the option of exporting the LUNs through the InForm Management Console or the InForm CLI.

Using the InForm Management Console


1. From the menu bar, select Actions > Provisioning > VLUN > Create VLUN.
2. Use the Export Virtual Volume dialog box to create a VLUN template.

Using the InForm CLI


To create a port presents VLUN template, issue the following command:
# createvlun [options] <VV_name | VV_set> <LUN> <node:slot:port>

To create a host sees or host set VLUN template, issue the following command:
# createvlun [options] <VV_name | VV_set> <LUN> <host_name/set>

To create a matched set VLUN template, issue the following command:


# createvlun [options] <VV_name | VV_set> <LUN> <node:slot:port>/<host_name>


Here is an example:
# createvlun -cnt 5 TESTLUNs.0 0 hostname/hostdefinition
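A matched set export follows the same pattern; in this sketch, the port and host name are hypothetical:

# createvlun demo.0 0 1:2:1/solarishost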

Consult the HP 3PAR InForm OS Management Console Online Help and the HP 3PAR InForm OS CLI Reference for complete details on exporting volumes and available options for the InForm OS version that is being used on the HP 3PAR Storage System. Note that the commands and options available for creating a virtual volume may vary for earlier versions of the InForm OS.

VLUN Exportation Limits Based on Host HBA Drivers


Even though the HP 3PAR Storage System supports the exportation of VLUNs with LUNs in the range from 0 to 16383, the host driver may have a lower limit, as noted here:

Solaris qlc/emlxs drivers:
- Support the creation of VLUNs with LUNs in the range from 0 to 16383.
- Support a theoretical quantity of 64K VLUNs (64-bit mode) or 4000 VLUNs (32-bit mode) per Sun HBA.

QLogic qla/Emulex lpfc/JNI Tachyon and Emerald drivers:
- Support VLUNs with LUNs in the range from 0 to 255.
- Support sparse LUNs (the skipping of LUNs); LUNs may be exported in non-ascending order (e.g., 5, 7, 3, 200).
- Only 256 VLUNs can be exported on each interface. If you export more than 256 VLUNs, VLUNs with LUNs above 255 will not appear on the host server.

NOTE: HP 3PAR Storage System VLUNs with LUNs other than 0 will be discovered even when no VLUN is exported with LUN 0. Without a LUN 0, error messages for LUN 0 may appear in /var/adm/messages as the host server probes for LUN 0. It is recommended that a real LUN 0 be exported to avoid these errors.

NOTE: Virtual volumes of 1 terabyte and larger are only supported using the Sun EFI disk label and will appear in the output of the Sun format command without cylinder/head geometry. EFI-labeled disks are not currently supported with Veritas Volume Manager 4.0 - 4.1. More information on EFI disk labels can be found in Sun document 817-0798.

For configurations that use Veritas Volume Manager for multipathing, virtual volumes should be exported down multiple paths to the host server simultaneously. To do this, create a host definition on the HP 3PAR Storage System that includes the WWNs of multiple HBA ports on the host server.

NOTE: All I/O to an HP 3PAR Storage System port should be stopped before running any InForm CLI controlport commands on that port. The InForm CLI controlport command executes a reset on the storage server port while it runs and causes the port to log out of and back onto the fabric. This event will be seen on the host as a "transient device missing" event for each HP 3PAR Storage System LUN that has been exported on that port. In addition, if any of the exported volumes are critical to the host server OS (e.g., the host server is booted from that volume), the host server should be shut down before issuing the InForm CLI controlport command.
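Following the recommendation in the first note above, a real LUN 0 can be exported with an ordinary createvlun invocation; the volume and host names in this sketch are hypothetical:

# createvlun lun0vol 0 solarishost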

Exporting LUNs to a Solaris Host with an iSCSI Connection


This section explains how to export virtual volumes as Virtual LUNs (VLUNs) to the Solaris host when using an iSCSI connection.


The following set of commands is typically used to export a given HP 3PAR Storage System virtual volume to all the connected host paths.
# createaldvv -cnt 3 demo 2g
# createvlun demo.0 0 solarisiscsi
# showvlun -host solarisiscsi
Active vLUNs
Lun VVname Host         ---------Host_WWN/iSCSI_Name---------        Port  Type
  0 demo.0 solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 0:3:1 host
  0 demo.0 solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 1:3:1 host
----------------------------------------------------------------
2

VLUN Templates
Lun VVname Host         -Host_WWN/iSCSI_Name- Port Type
  0 demo.0 solarisiscsi ----------------      ---  host
----------------------------------------------
1

Even though the HP 3PAR Storage System supports the exportation of VLUNs with LUN numbers in the range from 0 to 16383, only VLUN creation with a LUN in the range from 0 to 255 is supported. This configuration does support sparse LUNs (the skipping of LUN numbers), and LUNs may be exported in non-ascending order (e.g., 5, 7, 3, 200). Only 256 VLUNs can be exported on each interface; if you export more than 256 VLUNs, VLUNs with LUNs above 255 will not appear on the Solaris host. If you are using Sun Multipath I/O (Sun StorEdge Traffic Manager), you should avoid reusing a LUN number to export a different HP 3PAR Storage System volume, as the Solaris format output preserves the disk serial number of the first device ever seen on that LUN number since the last reboot.

CAUTION: If any I/O is performed on the old disk serial number, the I/O will be driven to the new volume and can cause user configuration and data integrity issues. This is a general Solaris issue with Sun multipath I/O and is not specific to the HP 3PAR Storage System target.

The following is an example. The HP 3PAR Storage System volume demo.50, which has device serial number 50002AC01188003E, is exported as LUN 50. Because LUN number 50 is used for the first time to present a device, the format command output shows the correct HP 3PAR Storage System volume serial number (VV_WWN).
# showvv -d
Id Name     Rd Mstr  Prnt Roch Rwch PPrnt PBlkRemain -----VV_WWN----- -----CreationTime-----------
10 demo.50  RW 1/2/3 ---  ---  ---  ---   -          50002AC01188003E Fri Aug 18 10:22:57 PDT 2006
20 checkvol RW 1/2/3 ---  ---  ---  ---   -          50002AC011A8003E Fri Aug 18 10:22:57 PDT 2006
# showvlun -t
Lun VVname  Host         ------------Host_WWN/iSCSI_Name------------- Port Type
 50 demo.50 solarisiscsi ----------------                             ---  host
# format
AVAILABLE DISK SELECTIONS:
10. c5t50002AC01188003Ed0 <3PARdata-VV-0000 cyl 213 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac01188003e


On removing the demo.50 volume and exporting checkvol at the same LUN number 50, the host shows the new volume with the serial number of the earlier volume, demo.50 (50002AC01188003E), and not the new volume serial number (50002AC011A8003E).
# showvv -d
# showvlun -t
Lun VVname   Host         ------------Host_WWN/iSCSI_Name------------- Port Type
 50 checkvol solarisiscsi ------------                                 ---  host
# format
AVAILABLE DISK SELECTIONS:
10. c5t50002AC01188003Ed0 <3PARdata-VV-0000 cyl 213 alt 2 hd 8 sec 304>   <-- incorrect device serial number displayed
    /scsi_vhci/ssd@g50002ac01188003e

CAUTION: Issue devfsadm -C to clear any dangling /dev links, and reboot the host to correct the device serial number or to reuse the LUN number.

The largest VV that can be created on an HP 3PAR Storage System and supported by Solaris 10 is 16 terabytes. VVs of 1 terabyte and larger are only supported using the Sun EFI disk label and appear in the output of the Sun format command without cylinder/head geometry.

All I/O to an HP 3PAR Storage System port should be halted before running the InForm CLI controlport command on that port. The InForm CLI controlport command executes a reset on the storage server port while it runs. The reset is done on a per-card basis, so a reset of any port (e.g., 0:3:1) will also reset the partner port (0:3:2) and causes the ports to log out and back to a ready state. This event will be seen on the host as a transient device missing event for each HP 3PAR Storage System LUN that has been exported on that port. In addition, if any of the exported volumes are critical to the host server OS (e.g., the host server is booted from that volume), the host server should be shut down before issuing the InForm CLI controlport command.

Discovering LUNs on Fibre Channel Connections


This section provides tips for discovering LUNs depending on the type of HBA driver and connection configuration that is being utilized by the Solaris host. For examples of discovering LUNs for various configurations, see Chapter 6 (page 52).

Discovering LUNs for QLogic qla and Emulex lpfc Drivers


NOTE: The HP 3PAR Storage System targets appear with their World Wide Port Names associated with the C number of the host HBA port they are logically connected to, but are initially in an unconfigured state.

New VLUNs that are exported while the Solaris host is running will not be registered on the host until the following command is issued on a Solaris 8 or 9 host:
# devfsadm -i sd

Before they can be used, newly-discovered VLUNs need to be labeled using the Solaris format or format -e command.

Discovering LUNs for Solaris qlc and emlxs Drivers


In Direct Connect mode, new VLUNs that are exported while the Solaris host is running will be registered automatically. In Fabric mode, the Sun driver stack will not make the HP 3PAR Storage System target port and its exported devices accessible until they are configured using the Solaris cfgadm command. For instance, when the HP 3PAR Storage System and the Solaris host are first connected and the host is booted, no devices from the HP 3PAR Storage System will appear in the Solaris format command's output. The host server port WWNs will also not show up when using the InForm CLI showhost command. To make the ports accessible, issue the Solaris cfgadm -al command to verify the logical connections of the configuration. This will also scan for new targets. However, new LUNs will not appear in the Solaris format command's output until the connections are configured using the Solaris cfgadm -c configure command. All HBA ports that connect to the HP 3PAR Storage System in fabric mode should be configured using the cfgadm -c command, as follows:
# cfgadm -c configure c8
# cfgadm -c configure c9
# cfgadm -c configure c10
# cfgadm -c configure c11

NOTE: The cfgadm command can also be run on a per-target basis:

# cfgadm -c configure c8::22010002ac000040

Once configured, the HP 3PAR Storage System VLUNs show up in the output from the Solaris format command as devices and are thus available for use on the host. The device node designation is comprised of three components:
- c -- represents the host HBA port
- t -- represents the target's WWPN
- d -- represents the LUN number

Therefore, /dev/rdsk/c8t22010002AC000040d2s2 in the following example is a device node for a VV that is exported from port 2:0:1 of an HP 3PAR Storage System with serial number 0x0040 to host port c8, as LUN 2.
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
2. c8t22010002AC000040d2 <3PARdata-VV-0000 cyl 8621 alt 2 hd 8 sec 304>
   /pci@1f,2000/SUNW,qlc@1/fp@0,0/ssd@w22010002ac000040,2

NOTE: The HP 3PAR Storage System targets appear with their World Wide Port Names associated with the C number of the host HBA port they are logically connected to. The host server port WWNs will now appear on the HP 3PAR Storage System in the output of the showhost command.

NOTE: The configuration will fail for visible targets that do not present any LUNs. At least one VLUN must be exported from each HP 3PAR Storage System port before its associated host port is configured. Running cfgadm with the configure option on an HP 3PAR Storage System port that has no LUNs exported does not harm the system and will display the following error:
# cfgadm -c configure c9
cfgadm: Library error: failed to create device node: 23320002ac000040: Invalid argument
failed to create device node: 23520002ac000040: Invalid argument
failed to configure ANY device on FCA port


Discovering LUNs for the JNI Tachyon Driver


New VLUNs that are exported while the host server is running will not be registered on the host until issuing the following Solaris command:
# devfsadm -i sd

NOTE: If VCN on LUN export is not disabled on each HP 3PAR Storage System port that connects to a host server port, newly exported HP 3PAR Storage System LUNs will result in target offline and online messages being generated on the host server console and in the /var/adm/messages file. HP 3PAR recommends disabling VCN on LUN export (as indicated in the HP 3PAR Storage System setup section of this document) to prevent these messages and the possible disruption of I/O to already exported and registered LUNs.

Newly discovered VLUNs need to be labeled using the Solaris format command before they can be used.

Discovering LUNs for the JNI Emerald Driver


New VLUNs that are exported while the host is running will not be registered on the host until issuing the following Solaris command:
# /opt/JNIC146x/jnic146x_update_drv -ra

NOTE: The -a option scans all instances of the JNIC146x driver (all host HBA ports). The command can be limited to specific instances with other options.

Newly discovered VLUNs need to be labeled using the Solaris format command before they can be used.

Discovering LUNs for Sun StorEdge Traffic Manager (SSTM)


To discover LUNs, issue the Solaris format command. The following example shows the output generated by the format command when Sun StorEdge Traffic Manager is in use:
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
2. c14t50002AC000010038d0 <3PARdata-50002ac000010038-0000 cyl 43113 alt 2 hd 8 sec 304>
   /scsi_vhci/ssd@g50002ac000010038

Additional options can be used with the cfgadm command to display more information about the HP 3PAR Storage System devices. For instance, issuing cfgadm with the -al option shows configuration information for each device (or LUN):
# cfgadm -o show_FCP_dev -al
Ap_Id                   Type  Receptacle  Occupant    Condition
c9::23320002ac000040,2  disk  connected   configured  unknown

Issuing cfgadm with the -alv option shows configuration information and the physical device path for each device (or LUN):
# cfgadm -o show_FCP_dev -alv
Ap_Id                  Receptacle   Occupant     Condition  Information
When         Type         Busy     Phys_Id
c9                     connected    configured   unknown
Dec 31 1969  fc-fabric    n        /devices/ssm@0,0/pci@1c,600000/pci@1/SUNW,qlc@4/fp@0,0:fc
c9::23320002ac000040,2 connected    configured   unknown
Dec 31 1969  disk                  /devices/ssm@0,0/pci@1c,600000/pci@1/SUNW,qlc@4/

NOTE: If Sun StorEdge Traffic Manager is enabled, the device nodes for the HP 3PAR Storage System devices contain a "t" component which matches the HP 3PAR Storage System virtual volume WWN (as generated by the InForm CLI showvv -d command).

The HP 3PAR Storage System port is designed to respond to a SCSI REPORT LUNS command with one LUN (LUN 0) when there is no real VV exported as LUN 0 and no other VVs exported on any other LUN, in order to comply with the SCSI specification. A partial indication of LUN 0 will appear in the output of cfgadm for an HP 3PAR Storage System port that has no VVs exported from it. A real VV exported as LUN 0 can be distinguished from a non-real LUN 0 as follows:
# cfgadm -o show_FCP_dev -al
Ap_Id                   Type         Receptacle  Occupant      Condition
c2                      fc-fabric    connected   unconfigured  unknown
c3                      fc-fabric    connected   configured    unknown
c3::20010002ac00003c,0  disk         connected   configured    unknown
c3::21010002ac00003c,0  unavailable  connected   configured    unusable

HP 3PAR Storage System port 0:0:1 has a real VV exported as LUN 0. HP 3PAR Storage System port 1:0:1 has no VVs exported, which is indicated by an "unavailable" type and an "unusable" condition.

In fabric mode, new VLUNs that are exported while the host is running will not be registered on the host (they do not appear in the output of the Solaris format command) until the cfgadm -c configure command is run again:
# cfgadm -c configure c<host port designator>
# cfgadm -c configure c<host port designator>

NOTE: When HP 3PAR Storage System VVs are exported on multiple paths to the Solaris host (and Sun StorEdge Traffic Manager is in use for multipath failover and load balancing), each path (cx) should be configured individually. The cfgadm command will accept multiple "cx" entries in one invocation, but doing so may cause I/O errors to previously existing LUNs under I/O load. For a configuration where the HP 3PAR Storage System connects at c4 and c5 on the host, and a new VV has been exported on those paths, the following commands should be run serially:

# cfgadm -c configure c4
# cfgadm -c configure c5

NOTE: If Sun StorEdge Traffic Manager is enabled for multipathing and a device (HP 3PAR Storage System VV) is only exported on one path, I/O to that device will be interrupted with an error if cfgadm -c configure is run on its associated host port. This error will not occur if Sun StorEdge Traffic Manager is disabled. This situation can be avoided by always presenting multiple paths to a VV when Sun StorEdge Traffic Manager is enabled. Alternatively, the I/O can be halted before cfgadm -c configure is run.

Newly discovered VLUNs need to be labeled using the Solaris format command before they can be used.

If the Solaris host is rebooted while the HP 3PAR Storage System is powered off or disconnected, all device nodes for the host's VLUNs will be removed. If the host is subsequently brought up, the device nodes will not be restored (VLUNs will not appear in the output from the format command) until the cfgadm -c configure command is run for each host port. This phenomenon would occur for any fabric-attached storage device. To reestablish the connection to the HP 3PAR Storage System devices, perform the following steps once the host has booted:
1. Run cfgadm -al on the Solaris host. This allows the HP 3PAR Storage System to see the host HBA ports (showhost) and export the VLUNs.
2. Configure all host HBA ports as follows:

# cfgadm -c configure c<host port designator>
# cfgadm -c configure c<host port designator>

Discovering LUNs for Veritas Volume Manager's DMP (VxDMP)


If you are using the Veritas Volume Manager's DMP driver, make the newly registered and labeled VLUNs visible to the DMP layer by issuing the following command:
# vxdctl enable

After issuing this command, the volume can be admitted to and used by Veritas Volume Manager.
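To confirm that the new device is visible to VxVM, the disk list can be checked; a minimal sketch (the enclosure-based device names depend on your naming scheme):

# vxdisk list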

Discovering LUNs on iSCSI Connections


To discover new LUNs, issue the Solaris devfsadm -i iscsi command on the host. You can export new LUNs while the host is serving I/O on existing iSCSI LUNs.

If a LUN is exported to multiple paths on the host, and Solaris multipath I/O is enabled, only one device will be presented in the format output. The output will be in the form cXt<VV_WWN>dX, where VV_WWN is the HP 3PAR Storage System volume ID.

Do not use both static and dynamic device discovery for a given target, as doing so causes problems communicating with the iSCSI target device.

Use devfsadm -vC to clear the /dev links of non-existing devices.

You can reduce the amount of time the format command takes to display a device or to label a disk by enabling the no-check variable NOINUSE_CHECK=1 (see the sketch at the end of this section). Enabling the no-check option is useful if you have a large number of devices being exported.

All iSCSI error messages are logged in /var/adm/messages.

The iscsiadm list target command lists all the connected target ports, target devices, and LUN numbers that are exported.
# iscsiadm list target -vS
Target: iqn.2000-05.com.3pardata:21310002ac00003e
        Alias:
        TPGT: 131
        ISID: 4000002a0000
        Connections: 1
          CID: 0
            IP address (Local): 11.2.0.101:33376
            IP address (Peer): 11.2.0.111:3260
            Discovery Method: SendTargets
            Login Parameters (Negotiated):
              Data Sequence In Order: yes
              Data PDU In Order: yes
              Default Time To Retain: 20
              Default Time To Wait: 2
              Error Recovery Level: 0
              First Burst Length: 65536
              Immediate Data: no
              Initial Ready To Transfer (R2T): yes
              Max Burst Length: 262144
              Max Outstanding R2T: 1
              Max Receive Data Segment Length: 65536
              Max Connections: 1
              Header Digest: NONE
              Data Digest: NONE
        LUN: 1
          Vendor: 3PARdata
          Product: VV
          OS Device Name: /dev/rdsk/c5t50002AC010A8003Ed0s2
        LUN: 2
          Vendor: 3PARdata
          Product: VV
          OS Device Name: /dev/rdsk/c5t50002AC010A7003Ed0s2

The iscsiadm command can be used to remove and modify targets and their parameters, as in the following examples:
# iscsiadm remove discovery-address 10.106.2.12:3260

# iscsiadm modify initiator-node -d CRC32
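Finally, as noted earlier in this section, the format in-use check can be disabled when a large number of devices are exported; a minimal Bourne-shell sketch:

# NOINUSE_CHECK=1
# export NOINUSE_CHECK
# format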

Removing Volumes for Fibre Channel Connections


After the VLUN exported from the HP 3PAR Storage System has been removed, the VLUN removal from the Solaris host is performed in different ways depending on the HBA driver and the OS version. Appendix A shows examples of a number of these different host configurations and the methods used to cleanly remove host references to removed HP 3PAR Storage System VLUNs.

Removing Volumes for iSCSI Connections


The following is an example of removing a virtual volume from the HP 3PAR Storage System when using an iSCSI connection.
1. Use the format command to see all HP 3PAR Storage System VLUNs that are discovered on the host:
bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
 0. c5t5000C5000AF8554Bd0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /scsi_vhci/disk@g5000c5000af8554b
 1. c5t5000C5000AF8642Fd0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /scsi_vhci/disk@g5000c5000af8642f
 2. c5t5000C500077B2307d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /scsi_vhci/disk@g5000c500077b2307
 3. c5t50002AC007F100AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f100af
 4. c5t50002AC007F200AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f200af
 5. c5t50002AC007F300AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f300af
 6. c5t50002AC007F400AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f400af
 7. c5t50002AC007F500AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f500af
 8. c5t50002AC007F600AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f600af
 9. c5t50002AC007F700AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f700af
10. c5t50002AC007F800AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f800af
11. c5t50002AC007F900AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f900af
12. c5t50002AC007FA00AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007fa00af
Specify disk (enter its number)


2. Use the devfsadm command to remove a VLUN:

# devfsadm -i iscsi

CAUTION: Notice how the removed VLUN is referenced in format on the host. This listing is not consistent across the x86/SPARC platforms or the MU levels.

Here is an example:
bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
 0. c5t5000C5000AF8554Bd0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /scsi_vhci/disk@g5000c5000af8554b
 1. c5t5000C5000AF8642Fd0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /scsi_vhci/disk@g5000c5000af8642f
 2. c5t5000C500077B2307d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /scsi_vhci/disk@g5000c500077b2307
 3. c5t50002AC007F100AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f100af
 4. c5t50002AC007F200AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f200af
 5. c5t50002AC007F300AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f300af
 6. c5t50002AC007F400AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f400af
 7. c5t50002AC007F500AFd0 <drive not available>
    /scsi_vhci/ssd@g50002ac007f500af
 8. c5t50002AC007F600AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f600af
 9. c5t50002AC007F700AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f700af
10. c5t50002AC007F800AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f800af
11. c5t50002AC007F900AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007f900af
12. c5t50002AC007FA00AFd0 <3PARdata-VV-0000 cyl 4309 alt 2 hd 8 sec 304>
    /scsi_vhci/ssd@g50002ac007fa00af
Specify disk (enter its number)

iSCSI does not support removal of the last available path to a device if any iSCSI LUNs are in use (such as in a mounted file system or where associated I/O is being performed); attempting it generates a logical unit in use error. In the following example, there are two paths to a device that has a mounted file system.
# iscsiadm list discovery-address
Discovery Address: 11.1.0.110:3260
Discovery Address: 10.1.0.110:3260
# iscsiadm remove discovery-address 11.1.0.110:3260
# iscsiadm remove discovery-address 10.1.0.110:3260
iscsiadm: logical unit in use
iscsiadm: Unable to complete operation

CAUTION: A reboot -r should be performed on the host to properly clean the system after a VLUN has been removed.


7 Configuring the Host for an FCoE Connection


This chapter describes the procedures that are required to set up a Solaris host to communicate with an HP 3PAR Storage System over an FCoE connection, using an FCoE initiator on the Solaris host and an FC target on the HP 3PAR Storage System server. Refer to Appendix C for a diagrammatic overview.

Solaris Host Server Requirements


To use an FCoE initiator, the Solaris host must meet the following software requirement:
- Solaris 10 (MU9 and later)

Patches are downloadable from the following Web site: https://support.oracle.com/CSP/ui/flash.html

The following example shows how to generate the output for checking the current version levels for various components:
bash-3.00# more /etc/release
                   Oracle Solaris 10 9/10 s10s_u9wos_14a SPARC
  Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
                        Assembled 11 August 2010

Configuring the FCoE switch and FC switch


Connect the Solaris (FCoE initiator) host ports to the FCoE-enabled switch, and connect the HP Storage System server (FC target) ports to an FC switch.

NOTE: The FCoE switch must be able to convert FCoE traffic to FC and must also be able to trunk this traffic to the fabric that the HP Storage System target ports are connected to. FCoE switch VLAN and routing setup and configuration is beyond the scope of this document. Consult your switch manufacturer's documentation for instructions on how to set up VLANs and routing.

Configuring the Solaris Host Ports


Configure the host Network Interface Card (NIC) IP appropriately for the FCoE initiator that is used to connect to the HP 3PAR Storage System server FC target ports. The following example shows the steps that are required to configure a host with two FCoE ports.
1. On the host, plumb and configure the two NICs required for FCoE:

bash-3.00# ifconfig qlge0 plumb && ifconfig qlge0 10.105.1.10 netmask 255.255.255.0 up
bash-3.00# ifconfig qlge1 plumb && ifconfig qlge1 10.105.2.10 netmask 255.255.255.0 up

2. Check that the FCoE NICs are created and configured correctly:

bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.10.205 netmask ffffff00 broadcast 192.168.10.255
        ether 0:14:4f:b0:53:4c
qlge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 10.105.1.10 netmask ffffff00 broadcast 10.105.1.255
        ether 0:14:4f:b0:53:4d
qlge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 10.105.2.10 netmask ffffff00 broadcast 10.105.2.255
        ether 0:14:4f:b0:53:4e

3. Add the IP addresses and a symbolic name for the FCoE NICs to the /etc/hosts file:

::1             localhost
127.0.0.1       localhost
192.168.10.206  sqa-sunv245
10.105.1.10     qlge0
10.105.2.10     qlge1

4. Create the following files for both FCoE NICs on the host:

/etc/hostname.qlge0 (containing the interface name qlge0)
/etc/hostname.qlge1 (containing the interface name qlge1)

5. Add the IP address and netmask for both FCoE host server NICs in the /etc/netmasks file:

# cat /etc/netmasks
#
# The netmasks file associates Internet Protocol (IP) address
# masks with IP network numbers.
#
#       network-number  netmask
#
# The term network-number refers to a number obtained from the Internet
# Network Information Center.
#
# Both the network-number and the netmasks are specified in
# "decimal dot" notation, e.g:
#
#               128.32.0.0 255.255.255.0
#
10.105.1.0      255.255.255.0
10.105.2.0      255.255.255.0


8 Using the SunCluster Cluster Server


For Solaris 10, a Sun MPXIO patch is required that contains MPXIO fixes applicable to SCSI-3 reservations in a SunCluster configuration. For SPARC-based servers, use patch 127127-11, and for x86-based servers, use patch 127128-11. For availability of later versions, check the following Web site: https://support.oracle.com/CSP/ui/flash.html (account required).

See the Oracle (Sun) Web site for the latest advisory on SunCluster installation and configuration: http://www.oracle.com/us/products/servers-storage/solaris/cluster-067314.html

NOTE: It is recommended that I/O Fencing be enabled.


9 Using the Veritas Cluster Server


There are no specific settings required on the HP 3PAR array to work with Veritas Cluster Server. For further information, refer to the Veritas documentation, which can be found at: http://seer.entsupport.symantec.com/docs/307506.htm

NOTE: It is recommended that I/O Fencing be enabled.


10 Booting from the HP 3PAR Storage System


This chapter describes the procedures that are required to boot the Solaris OS from the SAN.

Preparing a Bootable Solaris Image for Fibre Channel


There are two methods for installing the Solaris boot image on a Fibre Channel storage device attached externally via Sun HBAs and drivers as described in the following sections.

Dump and Restore Method


With the Dump and Restore Method, a temporary install image is created that includes activation/installation of SSTM on an internal host server disk. A suitable virtual volume is then created on the HP 3PAR Storage System and is exported for discovery by the Solaris host. After appropriately labeling the virtual volume, the temporary install image is copied from the internal host server disk to the HP 3PAR Storage System and, after some required edits, the internal disk can be removed and the Solaris OS can be booted from the SAN.

You should perform the discovery and registry of an HP 3PAR Storage System virtual volume on a host that has been booted from an internal disk drive, and then follow the instructions provided by Sun to move the boot image to the HP 3PAR Storage System volume for subsequent booting. Detailed instructions for performing the Dump and Restore Method can be found at either of the following sites:

http://docs.sun.com
http://docs.sun.com/app/docs/doc/820-1931?l=en&q=Sun+StorageTek+Traffic+Manager+software

Net Install Method


With the Net Install Method, a diskless host server connected to a Fibre Channel attached external storage device is used. The OS is installed directly onto the external storage device using a Solaris OS install. Sun recommends that each HBA used for booting from an external Fibre Channel storage device should be loaded with the most current FCODE/BCODE available. The FCODE is used early in the boot sequence to access the device. The HBAs are flashed by installing the FCODE/BCODE while the cards are in a running Solaris host, utilizing the procedures, software, and FCODE obtained from the HBA vendor. All HBAs should be flashed with the latest FCODE levels before attempting the procedures outlined in this document.

Installing the Solaris OS Image onto a VLUN


HBAs need to be configured for LUN booting. Consult the HBA vendor's Web site for documentation on how to configure HBAs. Examples of what needs to be configured:
- Install the latest boot code and firmware onto the HBAs using the vendor's installation utilities.
- For a SPARC platform, configure the PROM device paths.
- For an x86 platform, configure the HBAs in the BIOS utility tool and set the boot device order.

The following steps explain how to install the Solaris OS image onto a VLUN for subsequent booting:


1. Connect the host server to the HP 3PAR Storage System either in a direct connect or fabric configuration.
2. Create an appropriately sized virtual LUN on the HP 3PAR Storage System for the host server's OS installation (see Configuring the HP 3PAR Storage System Running InForm OS 3.1.x or 2.3.x (page 9)).
3. Create the host definition on the HP 3PAR Storage System, which represents the host server's HBA port WWN.
4. Export the VLUN to the host server using any LUN number.
5. Prepare a Solaris OS install server on the same network as the host server, or use the Solaris OS CD install media.

NOTE: For a Solaris 8 and 9 install image, the required Sun StorEdge SAN software must also be added to the install server boot image.

6. For a SPARC host server, use the OpenBoot ok prompt to boot the host from the network or CD:

ok boot net      # if using install server
ok boot cdrom    # if using CD

For an x86 host server, use the BIOS network boot option (i.e., the F12 key) to boot the host from the network or CD. The host server should boot from the install server or CD and enter the Solaris interactive installation program. Enter appropriate responses for your installation until you come to the Select Disks menu. The LUN will be listed as more than one device if multiple paths are used. The LUN will show as zero size, or you may receive the following warning:

No disks found.
> Check to make sure disks are cabled and powered up.

Enter F2 to exit to a command prompt.

The LUN needs to be labeled. Exit the installation process to a shell prompt.

NOTE: The No disks found message appears if the HP 3PAR Storage System volume is the only disk attached to the host, or if there are multiple disks attached to the host but none are labeled. If there are labeled disks that will not be used to install Solaris on, a list of disks will be presented, but the unlabeled HP 3PAR Storage System VLUN will not be selectable as an install target. In this case, exit and proceed to the next step.

7. On the host server, issue the format command to label the HP 3PAR Storage System VLUN:
# format
Searching for disks...
WARNING: /pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/ssd@w20520002ac000040,a (ssd0):
         corrupt label - wrong magic number
done

c3t20520002AC000040d10: configured with capacity of 20.00GB

AVAILABLE DISK SELECTIONS:
0. c3t20520002AC000040d10 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
   /pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/ssd@w20520002ac000040,a
Specify disk (enter its number): 0
selecting c3t20520002AC000040d10
[disk formatted]


Disk not labeled. Label it now? y

NOTE: If multiple paths to the LUN have been used, the LUN appears as multiple instances in the install program.

8. Restart the Solaris interactive installation program.

NOTE: Continue the Solaris installation with appropriate responses, including selecting the HP 3PAR Storage System LUN as an install target. A LUN will appear as multiple instances if multiple paths have been used; select one instance for the Solaris OS installation. Configure the disk layout and confirm the system warning of CHANGING DEFAULT BOOT DEVICE should it appear. When the installation program completes, the server may not boot from the boot VLUN. If it does not, check the SPARC PROM or the x86 BIOS settings for the HBA paths.

Configuring Additional Paths and Sun I/O Multipathing


Optionally, a second path from the HP 3PAR Storage System to the Solaris host can be added for path failure redundancy and load balancing using Sun I/O Multipathing.
1. Connect an additional cable between the host server and the HP 3PAR Storage System server.
2. Reboot the host server.
3. Add the new path to the host server's host definition using the InForm OS createhost command:
# createhost -add solaris-server 210100E08B275AB5
# showhost
Id Name            ------WWN-------  Port
 1 solaris-server  210000E08B049BA2  0:5:2
                   210100E08B275AB5  1:5:2

If the HP 3PAR Storage System virtual volume was exported to the host definition, it will now be exported on both paths to the host server:
# showvlun -a
Lun VVname   Host            ----Host_WWN----  Port   Type
 10 san-boot solaris-server  210000E08B049BA2  0:5:2  host
 10 san-boot solaris-server  210100E08B275AB5  1:5:2  host
2

4. Verify that two representations of the boot volume now appear in the Solaris format command:

# devfsadm
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c3t20520002AC000040d10 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
   /pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/ssd@w20520002ac000040,a
1. c5t21520002AC000040d10 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
   /pci@8,700000/SUNW,qlc@3,1/fp@0,0/ssd@w21520002ac000040,a
Specify disk (enter its number):


5. Edit the /kernel/drv/scsi_vhci.conf file to register the HP 3PAR devices:

# mpxio-disable="no";    # for Solaris 8 & 9
device-type-scsi-options-list =
        "3PARdataVV", "symmetric-option",
        "3PARdataSES", "symmetric-option";
symmetric-option = 0x1000000;

6. Use the Solaris stmsboot command to enable multipathing for the boot device. The host server will be rebooted when stmsboot -e is run:

# stmsboot -e
WARNING: This operation will require a reboot.
Do you want to continue ? [y/n] (default: y) y
The changes will come into effect after rebooting the system.
Reboot the system now ? [y/n] (default: y) y
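After the reboot, the stmsboot utility can also list the mapping between the original per-path device names and the new multipathed device names; a usage sketch (the output depends on your configuration):

# stmsboot -L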

7. The stmsboot command makes the edits to the /etc/dumpadm.conf and /etc/vfstab files needed to boot successfully using the new Sun I/O Multipathing single device node for the multipathed boot device. The new single device node incorporates the HP 3PAR Storage System VLUN WWN.

On the Solaris host:

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c7t50002AC000300040d0 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
   /scsi_vhci/ssd@g50002ac000300040
Specify disk (enter its number):

On the HP 3PAR Storage System:

# showvv -d san-boot
Id Name     Rd Mstr  Prnt Roch Rwch PPrnt PBlkRemain -----VV_WWN----- -----CreationTime-----------
48 san-boot RW 0/1/3 ---  ---  ---  ---   -          50002AC000300040 Mon Mar 14 17:40:32 PST 2005

8. For SPARC, the Solaris install process enters a value for boot-device in OpenBoot NVRAM that represents the hardware path for the first path:

# eeprom
.
boot-device=/pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/disk@w20520002ac000040,a:a
.


The hardware path for the second path must be derived and passed to OpenBoot when the host server needs to boot from the second path. The second path can be deduced and constructed using the information from the Solaris luxadm display command:
# luxadm display /dev/rdsk/c7t50002AC000300040d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c7t50002AC000300040d0s2
.
.
  State                     ONLINE
  Controller                /devices/pci@8,700000/SUNW,qlc@3,1/fp@0,0
  Device Address            21520002ac000040,a
  Host controller port WWN  210100e08b275ab5
  Class                     primary
  State                     ONLINE
.

9. For SPARC, create aliases for the alternative hardware paths to the boot disk. The host server console must be taken down to the OpenBoot ok prompt:

# init 0
# INIT: New run level: 0
The system is coming down. Please wait.
System services are now being stopped.
Print services stopped.
May 23 16:51:46 sunb1k-01 syslogd: going down on signal 15
The system is down.
syncing file systems... done
Program terminated
{1} ok
ok nvalias path1 /pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/disk@w20520002ac000040,a:a
ok nvalias path2 /pci@8,700000/SUNW,qlc@3,1/fp@0,0/disk@w21520002ac000040,a:a

Configuration for Multiple Path Booting


A Solaris host can boot from multiple paths to the boot LUN, and this should be done to give redundancy in the event of a failure of the primary boot path. Follow the examples below for SPARC and x86 platforms on how to configure multipath booting.

SPARC

Set both paths as aliases in the PROM and set the boot-device parameter to both of these aliases. For example:

ok nvalias sanboot1 /pci@1e,600000/pci@0/pci@2/emlx@0/fp@0,0/disk@w20340002ac000120,a
ok nvalias sanboot2 /pci@1e,600000/pci@0/pci@2/emlx@0,1/fp@0,0/disk@w21540002ac000120,a
ok setenv boot-device sanboot1 sanboot2

With these settings and the host server set to auto-boot on power up, the server should boot from the second path automatically in the event of a failure on the first path.

x86

The ability to boot from either path is configured in the BIOS by adding the paths to the boot priority.


NOTE: The host server in use should be updated to the newest version of OpenBoot available from Sun and tested for booting under failed path scenarios.

Additional Devices on the Booting Paths


Additional HP 3PAR Storage System virtual volumes can be created and exported on the booting paths and used for additional storage and they will also be managed by Sun StorEdge Traffic Manager or VxDMP.

SAN Boot Example


The following example shows how to set up a jumpstart boot net installation on an HP 3PAR Storage System running Solaris 10 MU9 and using the Solaris emlxs (Emulex) driver with SSTM.
1. Boot the Solaris host from an internal disk that is using the same HBA driver that will be used with the VLUN boot disk. See Preparing a Bootable Solaris Image for Fibre Channel (page 67) for details on preparing a boot image.
2. Create a virtual volume of the appropriate size on the HP 3PAR Storage System and export the VLUN to the Solaris host from one HP 3PAR Storage System port (either direct or fabric) to one host port.
3. Discover the VLUN and label it using the format command (this step is performed from the booted internal disk OS).
4. Download and install the Emulex driver utilities (containing HBAnyware, EmlxApps, and EMLXemlxu). The emlxdrv driver may also be required to attach the required driver to the HBA.
5. Download the latest bootcode/firmware for the host HBA (e.g., LP10000 Bcode Version 1.50a4) from http://www.emulex.com.
6. Extract the downloaded files to a location that is accessible to the host (e.g., /opt/EMLXemlxu/downloads) and install them using the emlxadm utility:

/opt/EMLXemlxu/bin/emlxadm

Select one of the HBAs and upgrade the boot code and firmware. For example:

emlxadm> download_boot /opt/EMLXemlxu/lib/TO310A3.PRG
emlxadm> download_fw /opt/EMLXemlxu/lib/td192a1.all

Make sure the boot code is enabled:

emlxadm> boot_code

Repeat for the other HBA(s).
7. Return to the ok prompt and configure the PROM for emlxs drivers:

ok show-devs

If there are paths that show lpfc, for example:

/pci@1c,600000/lpfc@1
/pci@1c,600000/lpfc@1,1

they will need to be changed to emlx:

ok setenv auto-boot? false
ok reset-all
ok " /pci@1c,600000/lpfc@1" select-dev
(Note the space after the first double quote.)
ok set-sfs-boot
ok reset-all
ok show-devs

The lpfc@1 path should now be emlx@1. Repeat for the other path:

ok " /pci@1c,600000/lpfc@1,1" select-dev
ok set-sfs-boot
ok reset-all
ok show-devs
ok setenv auto-boot? true

8. Create the boot aliases for the boot VLUN. The correct boot paths can be determined using show-devs and probe-scsi-all. For example:

ok show-devs
/pci@1c,600000/emlx@1/fp@0,0/disk
/pci@1e,600000/emlx@0,1/fp@0,0/disk

From probe-scsi-all there are the devices:

20340002ac000120
21540002ac000120

So the boot paths are:

/pci@1c,600000/emlx@1/fp@0,0/disk@w20340002ac000120,a
/pci@1e,600000/emlx@0,1/fp@0,0/disk@w21540002ac000120,a

9. You can now install the Solaris OS on the LUN using, for example, Jumpstart. The host should see the LUN as multiple instances; select one for the OS install.


A Configuration Examples
This appendix provides sample configurations used successfully for HP testing purposes.

Example of Discovering a VLUN Using qlc/emlx Drivers with SSTM


The following example shows how to discover a VLUN on a Solaris 9 host that is using the qlc and emlxs drivers and SSTM over a direct Fibre Channel connection. 1. Make sure the host is in a clean state before you start.
# cfgadm -o show_FCP_dev -al
# luxadm probe
# devfsadm -Cv
# format

2. Export a VLUN to the host.

# cfgadm -o show_FCP_dev -al
# cfgadm -c connect c3::21530002ac0000ae,0
  (the c3 link number is known, so add a link with a comma and the LUN number)
# cfgadm -o show_FCP_dev -al
  (the attachment point now appears connected but not configured)
# cfgadm -c configure c3::21530002ac0000ae,0
  (the reply is that the attachment point does not exist)
# luxadm -e forcelip /dev/cfg/c3
  (this command had to be issued for the line above to appear as configured in show_FCP_dev)

3. Stop the traffic on the host.
4. Issue the InForm CLI removevlun command on the HP 3PAR Storage System to remove the VLUN.
5. Use format on the host to see that the VLUN is removed. The VLUN is listed, but "drive type unknown" is displayed.
6. Clean up the remaining entries as in the following example:

# cfgadm -o show_FCP_dev -al
  (the LUN 0 line has been removed)
# luxadm probe
  (no FC devices found)
# devfsadm -Cv
  (removes all the /dev/rdsk, /dev/dsk, and /devices/scsi_vhci/ entries for the removed LUN)
# format
  (all clean)

Example of Discovering a VLUN Using an Emulex Driver and VxVM


The following example shows how to discover a VLUN on a Solaris 10 MU6 host that is using the Emulex lpfc driver and VxVM over a direct or fabric Fibre Channel connection. Using the local HBA WWPN and the HP 3PAR Storage System WWPN, issue an HBAnyware hbacmd RescanLuns command for each direct connection or each fabric zone.

CAUTION: Always refer to the driver notes on the effect of issuing a RescanLuns on the driver and already discovered VLUNs.
# /opt/HBAnyware/hbacmd RescanLuns xx.xx.xx.... xx.xx.xx.xx...

bash-3.00# ./hbacmd RescanLuns 10:00:00:00:C9:7B:C5:D6 21:42:00:02:AC:00:00:AF
HBACMD_RescanLuns: Success

74

Configuration Examples

# ./hbacmd RescanLuns 10:00:00:00:C9:7B:C5:D6 21:53:00:02:AC:00:00:AF
HBACMD_RescanLuns: Success
# ./hbacmd RescanLuns 10:00:00:00:C9:7B:C5:D6 20:52:00:02:AC:00:00:AF
HBACMD_RescanLuns: Success
# ./hbacmd RescanLuns 10:00:00:00:C9:7B:C5:D6 21:21:00:02:AC:00:00:AF
HBACMD_RescanLuns: Success

Example of Discovering a VLUN Using a QLogic Driver with VxVM


The following example shows how to discover a VLUN on a Solaris 10 MU6 host that is using the QLogic qla driver and VxVM over a direct Fibre Channel connection. After exporting a VLUN to the host, run the following command for discovery.
# /opt/QLogic_Corporation/drvutil/qla2300/qlreconfig -d qla2300

All VLUNs are now seen in format.

CAUTION: Always refer to the driver notes on the effect of issuing qlreconfig on the driver and already discovered VLUNs.

Remove the VLUN from the host (e.g., using removevlun), then issue the format command on the host. You will see the same list as before, but the removed LUNs are noted as offline. To correct this listing in format, run the following command:
# scli -do all
Driver rescan completed on HBA instance 0.
Driver rescan completed on HBA instance 1.
Driver rescan completed on HBA instance 2.
Driver rescan completed on HBA instance 3.

The format output then shows everything back as expected, with only local disks listed.

CAUTION: Always refer to the driver notes on the effect of issuing a rescan on the driver and already discovered VLUNs.

NOTE: If a new list of LUNs is exported to the host, only the LUNs that were discovered on the first run are seen. All others not already read by the qlreconfig on the first run are not listed in format. This is because the /dev/dsk and /dev/rdsk links are not removed.

By default, VxVM saves a backup of all disk groups to /etc/vx/cbr/bk. This can fill up quickly and take up disk space. The directories inside /etc/vx/cbr/bk can be removed.

Example of UFS/ZFS File System Creation


The following example shows how to create a file system. To create a ZFS volume, issue the following commands:
# zpool create -f <name> <device>
# zfs create <name>/<name_2>
# cd /<name>/<name_2>
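For instance, a concrete version of the same sequence; the pool and dataset names here are hypothetical, and the device name follows the SSTM examples elsewhere in this guide:

# zpool create -f tank c2t50002AC000010032d0
# zfs create tank/data
# cd /tank/data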


You create file systems with the newfs command. The newfs command accepts only logical raw device names. The syntax is as follows:
# newfs [ -v ] [ mkfs-options ] raw-special-device

For example, to create a file system on the disk slice c0t3d0s4, you would use the following command:
# newfs -v /dev/rdsk/c0t3d0s4

The -v option prints the actions in verbose mode. The newfs command calls the mkfs command to create a file system. You can invoke the mkfs command directly by specifying a -F option followed by the type of file system. For example:
# mkfs -F ufs /dev/rdsk/c0t3d0s4 <size>
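Once created, the file system can be mounted in the usual way; a short sketch using the same slice and a hypothetical mount point:

# mkdir /test_mount
# mount /dev/dsk/c0t3d0s4 /test_mount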

Examples of Growing a Volume


The following examples show how to grow a volume using SSTM and VxVM.

Growing an SSTM Volume


Host: Solaris 10 MU9
Stack: SSTM with emlxs
File System: UFS

1. Create a LUN on the HP 3PAR Storage System and export it to the host:

# createvv <cpg_name> <lun_name> <size>
# createvlun <lun_name> <LUN_ID> <host_name>

2. Scan the device tree on the host (other commands are required for different HBA drivers):

# cfgadm -o show_FCP_dev -al
# luxadm probe
# devfsadm

3. Run the format command on the host, set the LUN type, and label it:

# format

Select the LUN and then 'type':


format> type

Select '0' (Auto configure):


Specify disk type (enter its number)[2]: 0


Label the LUN:


format> label

4. Create and mount the file system. For example:

# newfs /dev/rdsk/c2t50002AC000010032d0s2
# mkdir /mnt/test
# mount /dev/dsk/c2t50002AC000010032d0s2 /mnt/test

5. Grow the LUN. On the HP 3PAR Storage System, use the growvv command to grow the LUN. Increase the LUN by 10 GB (for example):

# growvv <lun_name> 10G

6. Rescan the device tree on the host as shown above.
7. Use the luxadm command to verify the new LUN size. For example:

# luxadm display /dev/rdsk/c2t50002AC000010032d0s2

8. Unmount the file system and re-read the resized LUN:

# umount /mnt/test
# format

Select the LUN and then 'type':


format> type

Select '0' (Auto configure):


Specify disk type (enter its number)[2]: 0

Label the LUN:


format> label

NOTE: For Solaris x86, Auto configure under the type option in format does not resize the LUN. Resizing can be achieved by selecting other under the type option and manually entering the new LUN parameters, such as the number of cylinders, heads, sectors, and so on.

9. Re-mount and grow the file system:

# mount /dev/dsk/c2t50002AC000010032d0s2 /mnt/test
# growfs -M /mnt/test /dev/rdsk/c2t50002AC000010032d0s2


Check the new size:


# df -k /mnt/test

Summary:
- Create and export the initial LUN.
- Scan the device tree on the host.
- Run format to configure the LUN (set type and label).
- Create and mount the file system on the host.
- Grow the LUN on the HP 3PAR Storage System.
- Rescan the device tree on the host.
- Unmount the file system on the host.
- Run format to re-configure the LUN (set type and label).
- Mount and grow the file system.

Growing a VxVM Volume


The vxdisk resize command can update the VTOC of the disk automatically, so it is not necessary to run the format command to change the length of partition 2 of the disk in advance. A disk group must have at least two disks to perform the dynamic LUN expansion (DLE) operation, because the disk is temporarily removed from the disk group and it is not possible to remove the last disk from a disk group. If there is only one disk in the disk group, vxdisk resize fails with the following error message:
VxVM vxdisk ERROR V-5-1-8643 Device Disk_10: resize failed: Cannot remove last disk in disk group

WARNING! Always refer to the Veritas release notes before attempting to grow a volume.

Host: Solaris 10 MU9
Stack: SSTM with emlxs
File System: VxFS

1. Create two LUNs (minimum) on the HP 3PAR Storage System and export them to the host:

# createvv <cpg_name> <lun_name1> <size>
# createvv <cpg_name> <lun_name2> <size>
# createvlun <lun_name1> <LUN_ID1> <host_name>
# createvlun <lun_name2> <LUN_ID2> <host_name>

2. Scan the device tree on the host (other commands are required for different HBA drivers):

# cfgadm -o show_FCP_dev -al
# luxadm probe
# devfsadm
# vxdctl enable

3. Create a Veritas disk group with the two LUNs:

# vxdisk list
# vxdg init <disk_group> <vx_diskname1>=<device1>
# vxdg -g <disk_group> adddisk <vx_diskname2>=<device2>


('vxdiskadm' can also be used.) If you cannot initialize the LUNs, check the paths are enabled:
# vxdisk path

Create a VxVM volume and mount it:


# vxassist -g <disk_group> make <vx_volume> <size>
# mkfs -F vxfs /dev/vx/rdsk/<disk_group>/<vx_volume>
# mkdir /mnt/test
# mount -F vxfs /dev/vx/dsk/<disk_group>/<vx_volume> /mnt/test

4. Grow the LUN. On the HP 3PAR Storage System, use the growvv command to grow one of the LUNs. Increase the LUN by 10 GB (for example):

# growvv <lun_name> 10G

5. Rescan the device tree on the host as shown above. Additionally, resize the logical VxVM object to match the larger LUN size:

# vxdisk -g <disk_group> resize <vx_diskname>

6. On the host, check that the additional space is available in the disk group, and grow the volume:

# vxassist -g <disk_group> maxsize

Grow the volume:


# vxresize -g <disk_group> <vx_volume> <new_size>

Check the new size:


# df -k /mnt/test

The updated LUN size will now be available to VxVM.

Summary:
- Create and export the initial LUNs.
- Scan the device tree on the host.
- Create a Veritas disk group; make and mount the volume.
- Grow the LUN on the HP 3PAR Storage System.
- Rescan the device tree on the host.
- Grow the file system.


VxDMP Command Examples


This section provides information on some common commands used to configure VxDMP. For detailed information on Veritas SF and configuration, refer to: http://www.symantec.com/business/storage-foundation

CAUTION: Commands may vary with each version of Veritas Storage Foundation. Always refer to the version release notes.

Below are some examples of common commands.

Enable VxVM and discover new disks:
# vxdctl enable

Display disks:
# vxdisk list

Display disk groups:


# vxdg list

Displaying I/O Statistics for Paths


Enable the gathering of statistics:
# vxdmpadm iostat start [memory=size]

Reset the I/O counters to zero:


# vxdmpadm iostat reset

Display the accumulated statistics for all paths:


# vxdmpadm iostat show all
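If your Storage Foundation version supports the interval and count attributes (check the vxdmpadm(1m) man page), the statistics can be sampled periodically; a sketch:

# vxdmpadm iostat show all interval=5 count=3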

Managing Enclosures
Display attributes of all enclosures:
# vxdmpadm listenclosure all

Change the name of an enclosure:


# vxdmpadm setattr enclosure orig_name name=new_name
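Here is a hypothetical example that renames the 3pardata0 enclosure (both names are illustrative):

# vxdmpadm setattr enclosure 3pardata0 name=3par01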

Check current I/O policy attributes:


# vxdmpadm getattr enclosure <enclosure_name> iopolicy
ENCLR_NAME   DEFAULT   CURRENT
============================================
3PARDATA0    MinimumQ  MinimumQ

Setting I/O Policies and Path Attributes

Changing Policies
To change the I/O policy for balancing the I/O load across multiple paths to a disk array or enclosure:
# vxdmpadm setattr enclosure <enclosure name> iopolicy=policy

Here are some policies that can be set:
- adaptive - automatically determines the paths that have the least delay.
- balanced (default) - takes the track cache into consideration when balancing I/O across paths.
- minimumq - sends I/O on paths that have the minimum number of I/O requests in the queue.
- priority - assigns the path with the highest load-carrying capacity as the priority path.
- round-robin - sets a simple round-robin policy for I/O.
- singleactive - channels I/O through the single active path.
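For example, to switch the enclosure shown earlier to the round-robin policy (the enclosure name must match your vxdmpadm listenclosure output):

# vxdmpadm setattr enclosure 3PARDATA0 iopolicy=round-robin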

Accessing VxDMP Path Information


The vxdmpadm(1m) utility provides VxDMP path information.

Listing Controllers
To list the controllers on a host server, use the vxdmpadm(1m) utility with the listctlr option:
# vxdmpadm listctlr all
CTLR-NAME   ENCLR-TYPE   STATE     ENCLR-NAME
=====================================================
c3          3PARdata     ENABLED   3pardata0
c2          3PARdata     ENABLED   3pardata0
c0          Disk         ENABLED   Disk

The vxdmpadm(1m) utility also has a getctlr option to display the physical device path associated with a controller:
# vxdmpadm getctlr c2
LNAME   PNAME
===============
c2      /pci@80,2000/lpfc@1

Displaying Paths
To list the paths on a host server, use the vxdmpadm(1m) utility with the getsubpaths option:
# vxdmpadm getsubpaths ctlr=CTLR-NAME

To display paths connected to a LUN, use the vxdmpadm(1m) utility with the getsubpaths option:
# vxdmpadm getsubpaths dmpnodename=node_name


Here is an example:
# vxdmpadm getsubpaths dmpnodename=c2t21d36
NAME         STATE      PATH-TYPE   CTLR-NAME   ENCLR-TYPE   ENCLR-NAME
=============================================================================
c2t21d36s2   ENABLED                c2          3PARdata     3pardata0
c2t23d36s2   ENABLED                c2          3PARdata     3pardata0
c3t20d36s2   DISABLED               c3          3PARdata     3pardata0
c3t22d36s2   DISABLED               c3          3PARdata     3pardata0

To display DMP Nodes, use the vxdmpadm(1m) utility with the getdmpnode option:
# vxdmpadm getdmpnode nodename=c3t2d1

Here is an example:
# vxdmpadm getdmpnode nodename=c2t21d36s2
NAME         STATE     ENCLR-TYPE   PATHS   ENBL   DSBL   ENCLR-NAME
=========================================================================
c2t21d36s2   ENABLED   3PARdata     4       2      2      3pardata0


B Patch/Package Information
This appendix provides minimum patch requirements for various versions of Solaris and other associated drivers.

Minimum Patch Requirements for Solaris Versions


The following tables list the minimum patch requirements based on Solaris version.

Table 1 Solaris 10 MU Minimum Patch Requirements
SPARC

Solaris 10 SPARC no MU (3/2005): 118822-20, 119374-01, 119130-04, 120222-01
MU1 SPARC (1/2006): 118822-25, 119130-04, 118833-17, 120222-05
MU2 SPARC (6/2006): 118833-17, 119130-04, 120222-09
MU3 SPARC (11/2006): 118833-33, 119130-04, 120222-13
MU4 SPARC (8/2007): 119130-33 also 125166-07 (qlc), 118833-36, 120222-21, 125081-16
MU5 SPARC (5/2008): 119130-33 also 125166-07 (qlc), 127127-11, 118833-36, 120222-26
MU6 SPARC (10/2008): 127127-11, 120222-31 (-29 has an issue), 118833-36, 119130-33 also 125166-07 (qlc)

x86

Solaris 10 x86 no MU (3/2005): 118844-19, 119131-09, 119375-05, 120223-01
MU1 x86 (1/2006): 118844-26, 119131-09, 119375-13, 120223-05
MU2 x86 (6/2006): 118855-14, 119131-09, 120223-09
MU3 x86 (11/2006): 118855-33, 119131-09, 120223-13
MU4 x86 (8/2007): 119131-33 also 125165-07 (qlc), 118855-36, 120223-21, 125082-16
MU5 x86 (5/2008): 119131-33 also 125165-07 (qlc), 127128-11, 118855-36, 120223-26
MU6 x86 (10/2008): 127128-11, 120223-31 (-29 has an issue), 118855-36, 119131-33 also 125165-07 (qlc)


Table 1 Solaris 10 MU Minimum Patch Requirements (continued)


SPARC

MU7 SPARC (5/2009): 127127-11, 139608-02 (emlxs), 118833-36, 139606-01 (qlc)
MU8 SPARC (10/2009): 127127-11, 141876-05 (emlxs), 118833-36, 142084-02 (qlc)
MU9 SPARC: 127127-11, 118833-36, 144188-02 (emlxs), 145098-01 (emlxs), 145096-01 (emlxs), 120224-08 (emlxs), 119130-33 (qlc), 143957-03 (qlc), 144486-03 (qlc), 119088-11 (qlc)

x86

MU7 x86 (5/2009): 127128-11, 139609-02 (emlxs), 118855-36, 139607-01 (qlc)
MU8 x86 (10/2009): 127128-11, 141877-05 (emlxs), 118855-36, 142085-02 (qlc)
MU9 x86: 127128-11, 118855-36, 144189-02 (emlxs), 145097-01 (emlxs), 145099-01 (emlxs), 120225-08 (emlxs), 119131-33 (qlc), 143958-03 (qlc), 144487-03 (qlc), 119089-11 (qlc)

For the Emulex OCe10102 CNA card, the following patch revisions are required:

145098-04 (emlxs)
145099-04 (emlxs)

For the QLogic QLE8142 CNA card, the following patch revisions are required:

143957-05 (qlc)
144486-05 (qlc)
143958-05 (qlc)
144487-05 (qlc)
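To verify that a given patch (or a later revision) is installed on a host, the standard Solaris patch tools can be used; a quick sketch using one of the emlxs patch IDs above (the grep target is illustrative):

# showrev -p | grep 145098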

Table 2 Solaris 9 Minimum Patch Requirements


Patch       Comment
118558-06
113277-01   Only required for JNI.
114878-02   Only required for JNI.
113040-06


Table 3 Solaris 8 Minimum Patch Requirements


Patch       Comment
108974-02   Only required for JNI.
114877-02   Only required for JNI.
111095-14

NOTE: Always install a SAN package with additions.

Patch Listings for Each SAN Version Bundle


The following tables list the patches and additions for each SAN version.

Table 4 Sun Solaris 9 OS Patches for 4.4.13

Patch ID     Features Addressed
111847-08    SAN Foundation software kit
113039-20    Sun StorEdge Traffic Manager software
113040-24    fp/fcp/fctl driver
113041-14    fcip driver
113042-18    qlc driver
113043-15    luxadm, liba5k, and libg_fc
113044-07    cfgadm fp plug-in library
114476-09    fcsm driver
114477-04    Common Fibre Channel HBA API library
114478-08    SNIA Sun Fibre Channel HBA library
114878-10    JNI driver
119914-12    emlxs driver

Table 5 Sun Solaris 9 Patches for 4.4.14


Patch ID     Features Addressed
111847-08    SAN Foundation software kit
113039-20    Sun StorEdge Traffic Manager software
113040-25    fp/fcp/fctl driver
113041-14    fcip driver
113042-19    qlc driver
113043-15    luxadm, liba5k, and libg_fc
113044-07    cfgadm fp plug-in library
114476-09    fcsm driver
114477-04    Common Fibre Channel HBA API library
114478-08    SNIA Sun Fibre Channel HBA library
114878-10    JNI driver
119914-13    emlxs driver


Table 6 Sun Solaris 9 OS Patches for 4.4.15


Patch ID     Features Addressed
111847-08    SAN Foundation software kit
113039-21    Sun StorEdge Traffic Manager software
113040-26    fp/fcp/fctl driver
113041-14    fcip driver
113042-19    qlc driver
113043-15    luxadm, liba5k, and libg_fc
113044-07    cfgadm fp plug-in library
114476-09    fcsm driver
114477-04    Common Fibre Channel HBA API library
114478-08    SNIA Sun Fibre Channel HBA library
114878-10    JNI driver
119914-14    emlxs driver

Table 7 Sun Solaris 9 OS Patch Additions for SAN 4.4.15


Patch ID     Feature Addressed
113039-24    Sun StorEdge Traffic Manager software
113040-27    fctl/fp/fcp driver patch
113042-20    qlc driver patch

Table 8 Sun Solaris 8 OS Patches for 4.4.13


Patch ID     Feature Addressed
111847-08    SAN Foundation software kit
111412-23    Sun StorEdge Traffic Manager software
111095-32    fp/fcp/fctl driver
111096-17    fcip driver
111097-26    qlc driver
111413-20    luxadm, liba5k, and libg_fc
111846-10    cfgadm fp plug-in library
114475-08    fcsm driver
113766-05    Common Fibre Channel HBA API library
113767-09    SNIA Sun Fibre Channel HBA library
114877-10    JNI driver
119913-12    emlxs driver

Table 9 Sun Solaris 8 OS Patch Additions for SAN 4.4.13


Patch ID     Feature Addressed
111846-11    cfgadm fp plug-in library patch
111095-33    fp/fcp/fctl driver

Table 9 Sun Solaris 8 OS Patch Additions for SAN 4.4.13 (continued)


Patch ID     Feature Addressed
119913-14    emlxs driver
111412-24    SAN 4.4.x: Sun StorEdge Traffic Manager patch
111097-27    qlc driver

WARNING! The SAN version "additions" above are required as a minimum.

HBA Driver/DMP Combinations


Table 10 (page 87) lists the supported HBA driver and DMP combinations.

Table 10 Supported HBA Drivers and DMP Combinations

qla/lpfc + VxVM
qlc/emlxs + VxVM and SSTM
JNI + VxVM
qlc/emlxs + SSTM
qla/lpfc + VxVM + VCS
qlc/emlxs + VxVM + VCS (SSTM not enabled for HP 3PAR)
qlc/emlxs + SSTM + SC

NOTE: SAN packages are installed in all combinations, but they are only enabled for the SSTM combinations.

Minimum Requirements for a Valid QLogic qlc + VxDMP Stack


A qlc driver with VxVM 4.1 and above is supported with the following requirements.

SPARC Platform:

Solaris 10: QLC driver patch 143957-03 or later
Solaris 9 SAN 4.4.x: QLC driver patch 113042-19 or later (SAN 4.4.14)
Veritas VxVM 4.1MP2_RP3 patch 124358-05 or later (for Solaris 8, 9, and 10)
Veritas VxVM 5.0MP1_RP4 patch 124361-05 or later (for Solaris 8, 9, and 10)

x86 Platform:

Solaris 10: QLC driver patch 143958-03 or later
Veritas VM_5.0_RP1_Solx86 (patches 127345-01 and 128060-02) for Solaris 10 x86
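One way to confirm that a host meets these minimums is to check the installed patch revisions, the VxVM package version, and the loaded qlc module; a minimal sketch for a Solaris 10 SPARC host (the grep targets come from the list above):

# showrev -p | grep 143957
# pkginfo -l VRTSvxvm | grep VERSION
# modinfo | grep qlc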

Minimum Requirements for a Valid Emulex emlxs + VxDMP Stack


An emlxs driver (SAN 4.4.1x version and above) with VxVM 4.1 and above is supported with the following requirements.

SPARC Platform:

120222-31 emlxs on Sol10 (120222-27 was the minimum)
119914-13 emlxs on Sol9 (SAN 4.4.14)
119913-13 emlxs on Sol8 (SAN 4.4.13)


x86 Platform:

144189-02 emlxs on Sol10 (120223-27 was the minimum)

Default MU level Leadville Driver Table


Table 11 (page 88) shows the version and package number for the applicable Leadville driver.

Table 11 Leadville Driver Version and Package

Solaris 10 SPARC MU9 (9/10):
  qlc: 20100301-3.00 (patches: 143957-03)
  emlxs: 2.50o 2010.01.08.09.45 (patches: 144188-02)
Solaris 10 x86 MU9 (9/10):
  qlc: 20100301-3.00 (patches: 143958-03)
  emlxs: 2.50o 2010.01.08.09.45 (patches: 144189-02)
Solaris 10 SPARC MU8 (10/09):
  qlc: 2.31 2009.05.19 (patches: 142084-02)
  emlxs: 2.40s 2009.07.17.10.15 (patches: 141876-05)
Solaris 10 x86 MU8 (10/09):
  qlc: 2.31 2009.05.19 (patches: 142085-02)
  emlxs: 2.40s 2009.07.17.10.15 (patches: 141877-05)
Solaris 10 SPARC MU7 (05/09):
  qlc: 2.29 v20081115-2.29 (patches: 139606-01)
  emlxs: 2.31p v2008.12.11.10.30 (patches: 139608-02)
Solaris 10 x86 MU7 (05/09):
  qlc: 2.29 v20081115-2.29 (patches: 139607-01)
  emlxs: 2.31p v2008.12.11.10.30 (patches: 139609-02)
Solaris 10 SPARC MU6 (10/08):
  qlc: 2.29 v20080617-2.29 (patches: 125166-12)
  emlxs: do not use the default 2.31h v20080616-2.31h (patches: 120222-29); issues were found. Replace 2.31h with patch 120222-31, which provides 2.31p (not the default driver).
Solaris 10 x86 MU6 (10/08):
  qlc: 2.29 v20080617-2.29 (patches: 125165-12)
  emlxs: 2.31h v20080616-2.31h (patches: 120223-29)
Solaris 10 SPARC MU5 (05/08):
  qlc: 2.26 v20071220-2.26 (patches: 125166-10)
  emlxs: 2.30h v20080116-2.30h (patches: 120222-26)


Table 11 Leadville Driver Version and Package (continued)

Solaris 10 x86 MU5 (05/08):
  qlc: 2.26 v20071220-2.26 (patches: 125165-10)
  emlxs: 2.30h v20080116-2.30h (patches: 120223-26)

NOTE: No configuration should break any vendor release note advice.

NOTE: Testing is conducted on the MU-level default Leadville driver.

NOTE: Solaris packages rule: only drivers released for a certain MU level should be installed at that MU level. (Dependency issues may arise if this is not followed.)

NOTE: If firmware is not embedded and loaded by driver attachment, follow the vendor advice on which firmware/driver combinations are valid.


C FCoE-to-FC Connectivity
This appendix provides a basic diagram of FCoE-to-FC connectivity.

Figure 7 FCoE-to-FC Connectivity
