
https://kb.netapp.com/support/s/article/ka31a0000000zcpqaa/how-to-rescan-and-recover-lun-paths-in-a-host-after-modifying-slm-reporting-nodes?language=en_US

How to rescan and recover LUN paths in a host after modifying SLM reporting
nodes
Sep 29, 2016
How To
ARTICLE NUMBER
000028339
DESCRIPTION
Applicable to clustered Data ONTAP 8.3 GA and later.
This article describes the procedure for rescanning and recovering LUN paths in different operating systems (OS) when moving a LUN, or a volume containing LUNs, to another HA pair within the same cluster. Modify the Selective LUN Map (SLM) reporting-nodes list before initiating the move (add-reporting-nodes) and again after the move is completed (remove-reporting-nodes).

The procedure below ensures that active/optimized LUN paths are maintained in the host OS multipathing layer.

Note: For more information on SLM, see Clustered Data ONTAP SAN Administration Guide.

PROCEDURE
A host rescan is required to recover the active/optimized LUN paths after add-reporting-nodes and to clean up stale LUN paths after remove-reporting-nodes in SLM.
VMware ESX/ESXi hosts:
A manual rescan should be performed after add-reporting-nodes and remove-reporting-nodes using the ESXi CLI or the vSphere/VI/Web Client.
For more information, see VMware KB 1003988: Performing a rescan of the storage on an ESX/ESXi host.
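For reference, a rescan from the ESXi command line might look like the following (a hedged sketch assuming ESXi 5.x or later; the adapter name vmhba1 is only an example):
# esxcli storage core adapter rescan --all
# esxcli storage core adapter rescan --adapter vmhba1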
Microsoft Windows hosts:
Rescan after add-reporting-nodes and remove-reporting-nodes using Windows GUI.
1. Open Computer Management (Local)
2. In the console tree, click Computer Management (Local) >> Storage >> Disk
Management
3. In the disk management page click Action >> Rescan Disks. This will rescan all
the disks and update any path changes.
Rescan after add-reporting-nodes and remove-reporting-nodes using the command line.
1. Open a Command Prompt and enter the following:
diskpart
2. At the DISKPART> prompt, enter the following:
DISKPART> rescan
This rescans all the disks and updates any path changes. For more information,
see the Microsoft TechNet documentation on updating disk information.
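On Windows Server 2012 and later, a scripted alternative (a hedged suggestion, not part of the original KB) is the Storage module cmdlet, which refreshes the host's view of disks and paths without opening Disk Management:
PS C:\> Update-HostStorageCache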
Linux hosts:
Rescan after add-reporting-nodes.
1. For RHEL 6.5, RHEL 7.0, and later, run the following command to update the active/optimized paths after add-reporting-nodes:
# /usr/bin/rescan-scsi-bus.sh -a
2. For RHEL 5 and RHEL 6.4 (including previous updates), run the following command to update the active/optimized paths after add-reporting-nodes:
# /usr/bin/rescan-scsi-bus.sh
Note: Nothing additional has to be done in the multipath layer.
Rescan after remove-reporting-nodes
1. Separate rescan steps are required for the SCSI layer and the multipathing layer in the Linux storage stack to clean up stale disk paths after remove-reporting-nodes in SLM.
2. Run the following command to remove stale LUN paths in the SCSI layer:
# /usr/bin/rescan-scsi-bus.sh -r
3. Next, run the following command to remove stale LUN paths in the multipath layer:
# multipath -r
Note: The /usr/bin/rescan-scsi-bus.sh script is available as part of the native sg3_utils package.
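To confirm that only the expected paths remain (a hedged suggestion, not part of the original KB), list the multipath topology, and on hosts with the NetApp Host Utilities installed, the per-LUN path view:
# multipath -ll
# sanlun lun show -p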
AIX hosts:
Rescan after add-reporting-nodes
1. Run the following command to identify the adapters used for NetApp storage:
# lsdev -Cc adapter | grep -i fcs
fcs0 Available 03-00 8Gb PCIe FC Blade Expansion Card (7710322577107601)
fcs1 Available 03-01 8Gb PCIe FC Blade Expansion Card (7710322577107601)
2. Now use the HBA names from the above output to rescan each adapter:
# cfgmgr -l <HBA handle>
Example:
# cfgmgr -l fcs0
# cfgmgr -l fcs1
Rescan after remove-reporting-nodes
1. Identify the <path id> of the moved disk for which the stale paths have to be removed:
# lspath -l <device handle> -F 'path_id name parent connection status'
For example:
# lspath -l hdisk1 -F 'path_id name parent connection status'
0 hdisk1 fscsi0 23f000a09830ca3a,0 Enabled
1 hdisk1 fscsi0 23f100a09830ca3a,0 Enabled
2 hdisk1 fscsi0 202800a09830ca3a,0 Enabled
3 hdisk1 fscsi0 202900a09830ca3a,0 Enabled
2. Run the following command to remove the stale paths:
# rmpath -i <path id> -d -l <device handle>
For example, if 0 and 1 are the paths to be removed from the above example, the commands would be:
# rmpath -i 0 -d -l hdisk1
# rmpath -i 1 -d -l hdisk1
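As a quick check (a hedged suggestion, not part of the original KB), list the remaining paths for the device; only paths through the new reporting nodes should be left:
# lspath -l hdisk1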
Solaris hosts:
Rescan after add-reporting-nodes
1. For iSCSI LUNs, run the following command:
# devfsadm -i iscsi
2. For FC/FCoE LUNs, perform the following steps:
1. Run the following command to identify the OS device names of the HBA ports that are accessing NetApp LUNs:
# cfgadm -al -o show_FCP_dev | grep fc-fabric
c3 fc-fabric connected configured unknown
c4 fc-fabric connected configured unknown
2. Now run the following command for each <controller> to be rescanned:
# cfgadm -c configure <controller>
For example, from Step 1, c3 and c4 are the controller names, so the commands would be:
# cfgadm -c configure c3
# cfgadm -c configure c4
Rescan after remove-reporting-nodes
1. For iSCSI LUNs, run the following commands:
# devfsadm -i iscsi
# devfsadm -Cv
2. For FC/FCoE LUNs, perform the following steps:
1. If the host is accessing NetApp LUNs through a single FC port, it is advisable to reboot the host. Run the following commands to reconfigure and reboot the host:
# touch /reconfigure
# init 6
2. If the host is accessing NetApp LUNs through two or more FC ports, run the following command to identify the OS device names of the HBA ports:
# cfgadm -al -o show_FCP_dev | grep fc-fabric
c3 fc-fabric connected configured unknown
c4 fc-fabric connected configured unknown
3. Run the following commands to reconfigure each port, one after the other:
# cfgadm -c unconfigure <controller>
# cfgadm -c configure <controller>
For example, from the above output, c3 and c4 are the controller names, so the commands would be similar to the following:
# cfgadm -c unconfigure c3
# cfgadm -c configure c3
# cfgadm -c unconfigure c4
# cfgadm -c configure c4
Note: The above step should be performed for only one port at a time.
4. Run the following command to clean up the devices:
# devfsadm -Cv
5. To clear MPxIO entries, an OS reboot is needed; this can be performed during a planned downtime. Run the following commands to reconfigure and reboot the host:
# touch /reconfigure
# init 6
6. Once the host is back up after the reboot, run the following command:
# devfsadm -Cv
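As a hedged check (not part of the original KB), the Solaris I/O multipathing view can be used to confirm the path count for each NetApp LUN after the rescan:
# mpathadm list lu
# mpathadm show lu <logical-unit name from the list output>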
HP-UX hosts:
Rescan after add-reporting-nodes
1. Run the following commands to scan the I/O system:
# ioscan -fNC disk
# ioscan -fNC lunpath
# ioinit -i
# insf -e
Rescan after remove-reporting-nodes
1. Scan the I/O system to identify disks that have stale entries (the S/W State is shown as NO_HW):
# ioscan -fNkC disk
Class  I   H/W Path             Driver   S/W State   H/W Type   Description
=======================================================================
disk   24  64000/0xfa00/0x16    esdisk   CLAIMED     DEVICE     NETAPP LUN C-Mode
disk   32  64000/0xfa00/0x22    esdisk   CLAIMED     DEVICE     NETAPP LUN C-Mode
disk   35  64000/0xfa00/0x23    esdisk   CLAIMED     DEVICE     NETAPP LUN C-Mode
disk   42  64000/0xfa00/0x24    esdisk   NO_HW       DEVICE     NETAPP LUN C-Mode
2. Run the following command to remove the stale special device file entries:
# rmsf -H <hw_path>
For example, from the above output the command would look like:
# rmsf -H 64000/0xfa00/0x24
3. Scan the I/O system to identify LUN paths that have stale entries (the S/W State is shown as NO_HW):
# ioscan -fNkC lunpath

Class    I   H/W Path                                                   Driver   S/W State   H/W Type
======================================================================================================
lunpath  61  0/6/0/0/0/0/4/0/0/0.0x200000a0981be096.0x4000000000000000  eslpt    CLAIMED     LUN_PATH
lunpath  77  0/6/0/0/0/0/4/0/0/0.0x200000a0981be096.0x4001000000000000  eslpt    NO_HW       LUN_PATH
lunpath  76  0/6/0/0/0/0/4/0/0/1.0x200100a0981be096.0x4000000000000000  eslpt    CLAIMED     LUN_PATH
lunpath  78  0/6/0/0/0/0/4/0/0/1.0x200100a0981be096.0x4001000000000000  eslpt    NO_HW       LUN_PATH
4. Delete each stale LUN path by removing its hardware path with the following command:
# rmsf -H <lunpath H/W path>
For example, from the above output the commands would be similar to the following:
# rmsf -H 0/6/0/0/0/0/4/0/0/1.0x200100a0981be096.0x4001000000000000
# rmsf -H 0/6/0/0/0/0/4/0/0/0.0x200000a0981be096.0x4001000000000000
5. Run the following command again to identify any remaining stale LUN path entries (S/W State shown as NO_HW):
# ioscan -fNC lunpath
Class    I   H/W Path                                                   Driver   S/W State   H/W Type   Description
====================================================================================================================
lunpath  62  0/6/0/0/0/0/4/0/0/0.0x200200a0981be096.0x4000000000000000  eslpt    CLAIMED     LUN_PATH   LUN path for disk24
lunpath  81  0/6/0/0/0/0/4/0/0/0.0x200200a0981be096.0x4001000000000000  eslpt    NO_HW       LUN_PATH   LUN path for disk32
lunpath  75  0/6/0/0/0/0/4/0/0/1.0x200300a0981be096.0x4000000000000000  eslpt    CLAIMED     LUN_PATH   LUN path for disk24
lunpath  83  0/6/0/0/0/0/4/0/0/1.0x200300a0981be096.0x4001000000000000  eslpt    NO_HW       LUN_PATH   LUN path for disk32
6. Now run the following command to replace and validate the change of the LUN associated with each stale LUN path:
# scsimgr -f replace_wwid -H <lunpath>
For example, from the above output the commands would be similar to the following:
# scsimgr -f replace_wwid -H 0/6/0/0/0/0/4/0/0/0.0x200200a0981be096.0x4001000000000000
# scsimgr -f replace_wwid -H 0/6/0/0/0/0/4/0/0/1.0x200300a0981be096.0x4001000000000000
7. Finally, run the following command, which is part of the HP-UX Host Utilities (HU) kit:
# /opt/NetApp/santools/bin/ntap_config_paths
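As a final hedged check (not part of the original article), verify that no stale lunpath entries remain and review the agile-view mapping of LUN paths to persistent device files:
# ioscan -fNC lunpath | grep NO_HW
# ioscan -m dsf
The first command should return no output once all stale paths have been cleaned up.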


lun
Posted by yanyabo (Apprentice) on 2015-8-20 09:28:14; last reply 2015-08-24 18:17:51; 1527 views, 7 replies

#1 yanyabo (Apprentice), 2015-8-20 09:28:14
Seven LUNs have been mapped to a Windows 2008 host. How can I tell which Windows disk corresponds to which LUN?

#2 Engineer, 2015-8-20 09:40:31
On Windows 2008, compare the LUN UID seen on the host with the LUN UID on the storage side.

#3 yanyabo (Apprentice), 2015-8-20 09:43:37
Reply to #2: how do I view the UID?

#4 Engineer, 2015-8-20 09:48:39
A host addresses a SCSI disk by CTD:
C = Controller
T = Target
D = Disk
The host identifies a disk by its CTD address plus the disk signature (the label in unix/linux); CTD plus signature together identify the LUN.
For FC:
C (Controller) = the FC HBA
T (Target) = the storage front-end (FE) port WWN
D (Disk) = the LUN (host LUN ID)
iSCSI encapsulates SCSI in the same way as FC, so the concepts map to each other:
HBA = iSCSI initiator (hardware or software)
storage target = target
WWN = iSCSI name (eui or iqn)
FE port = portal (IP + TCP port; the default TCP port is 3260)
Name server = iSNS
So on Windows, both FC and iSCSI LUNs can be matched by LUN ID and WWN (or iqn for iSCSI).

#5 yanyabo (Apprentice), 2015-8-20 09:53:21
Reply to #4: and how about FC on Windows?

#7 Engineer, 2015-8-20 10:06:54
Reply to #3: if Huawei UltraPath is installed, run upadm show vlun.
http://support.huawei.com/enterprise/docinforeader.action?contentId=DOC1000018821&idPath=7919749|7941815|9519490|9858859|8576127

#8 zhengweichao88 (Assistant), 2015-8-24 18:17:51
That shows the LUN IDs on the Windows host.


Legacy HDS Forums, 2012-10-1 (last reply by Timo Tiihonen, 2013-7-15)
Originally posted by: drake

All, hello, new to Hitachi and this forum. I have 2xAMS2300 and 1xHUS110.

My question revolves around mapping LUN IDs from the host side to the HDS storage. Via HSNM2
I have an H-LUN and LUN ID which works for me from the storage view; however, how can I
have the Windows admins confirm which host LUN maps to which storage LUN ID?

I'm comparing it to Symmetrix with Solutions Enabler commands, which show host side mapping
back to storage LUN IDs.

We have several Windows backup hosts utilizing HDS storage for backups, all are of 4TB size, and
some hosts have upwards of 8 LUNs. I simply need to understand the tools or commands needed
to view the LUN to host ID mapping from the host side.

I did search the forums here and saw iscfg for AIX, and also mention of the tools "raid manager" and
then CCI. Are the latter two what I need for Windows? Would I already have these, or do I need to
download and install them on the end host?

Thanks for any and all assistance.



Legacy HDS Forums2013-6-7 8:35


Originally posted by: cris

Hi,

You can download CCI from the support portal (http://portal.hds.com). It has a binary called
raidscan that you can use to get hard disk to LDEV/LUN mapping.

raidscan -x drivescan harddisk0,n

If you are coming from a Symmetrix background you might find the HDS tools more obscure and
somewhat less functional. Solutions Enabler was the primary configuration interface for a long
time and (IMO) is more mature. To get mapping information I recommend you use HCS, as the
agents will give you this information in a nice GUI interface.

Hope this helps.

Cris
Legacy HDS Forums, 2013-6-7 8:35


Originally posted by: drake

Cris, thank you for your reply. So CCI is yet different from the SNM2 CLI, I suspect/have
learned? I have attempted to use this CLI but with little success for my needs. Would you
compare CCI to a Solutions Enabler type of software that you install on the end host you need details
on LUNs for?

I also see that Solutions Enabler has syminq with -hds and -hids switches. I will test these;
however, has anyone had any success using these to provide host side listings/mappings to Hitachi
devices?

Thanks for the help.


Legacy HDS Forums, 2013-6-7 8:35


Originally posted by: cris

Yes, CCI is a separate tool and is HDS array agnostic (it works on all HDS arrays). It's also the oldest
tool. The SNM2 CLI is fairly recent and specific to modular arrays (HUS/AMS).

You don't need to install CCI on each host to get mapping. You can install the HCS agent which
will give you host to LUN mapping. You should have received a license for HCS with your HUS.
HCS is more like SMC.

As for the SE hds flags, these should work, since SE mainly uses SCSI inquiry commands and the
HDS device types (DF600F and OPEN-V) are known.
Timo Tiihonen, 2013-7-15 1:11


Hi Drake,

The CCI installation includes one binary which is pretty similar to syminq and it's called inqraid. After
installation you can find it at c:\HORCM\usr\bin\inqraid. It's possible to copy just that inqraid file
to any other Windows server and run it.

Output will be something like this.

C:\inqraid.exe $LETALL -CLI


DEVICE_FILE PORT SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
C:\Vol1\Dsk0 - - - - - - - LOGICAL VOLUME
D:\Vol3\Dsk2 CL1-B 8701XXXX 4 - s/s/ss 0000 A:00000 DF600F
E:\Vol5\Dsk3 CL1-B 8701XXXX 6 - s/s/ss 0000 A:00001 DF600F

Here you nicely get the drive letter, port, storage serial number and AMS volume number.

By giving inqraid.exe -h you will get plenty of other options which you can try also like $Phys to
see all physical devices, sorting, etc

What I would do is copy that file to some tools share and use it everywhere in your
environment. Then you don't need to start installing CCI or any other time-consuming software if
you only need drive-to-volume mapping. Inqraid is also available for other platforms in the CCI
installation package.

-Timo-

How to check the LUN-ID from host side
in General, Software

If you ever wondered how you can check the LUN id from the host side and you don't have the option of
installing CCI, then you are in the right place.
The easiest way to get the LUN id is to use the inqraid command that is part of the Hitachi CCI (Command
Control Interface) package. You don't even need to install CCI on the host; you can just go ahead and copy
the executable to a folder and run it.
Sometimes, however, systems administrators are a bit scared to copy/install software on their machines,
and going through change control might take a while.
The alternative is to "decode" the device instance path from Windows MPIO.
Here are the steps to read the device id and how to decode it.
From Computer Management, navigate to Storage -> Disk Management, identify your disk and get its
properties. Go to the last tab (Details) and scroll down in the list until the property is "Device Instance Path".
You will need the last 8 digits to decode the LUN id.

In this particular case the last eight digits are 39373942.
Now, break the 8 digits into 4 groups. Each group will give you 1 digit of the LUN id. If a group is lower
than 40, you subtract 30; if the group is higher than 40, you subtract 31. All results are then read as hex digits.

The LUN id will be in the format 00:ab:cd, where a, b, c and d are the four decoded digits.

We break the last 8 digits into the groups 39 37 39 42 and perform the calculation:
a is lower than 40, so we subtract 30: 39-30 = 9
b is lower than 40, so we subtract 30: 37-30 = 7
c is lower than 40, so we subtract 30: 39-30 = 9
d is higher than 40, so we subtract 31: 42-31 = 11 = B in hex
So our LUN id is 00:97:9B
Here is the conversion table for those who want it.
30=0 34=4 38=8 43=C
31=1 35=5 39=9 44=D
32=2 36=6 41=A 45=E
33=3 37=7 42=B 46=F

Of course, there is the way easier way to use inqraid and obtain that info as well but this method above
does not require anything to be installed on the O/S side.
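For readers who prefer to script the decoding, the arithmetic above is simply ASCII: each pair of hex digits is the ASCII code of one LUN-id character. A minimal PowerShell sketch (the value 39373942 is the example from the screenshot above; variable names are illustrative):
$tail  = '39373942'                                   # last 8 digits of the Device Instance Path
$pairs = $tail -split '(..)' | Where-Object { $_ }    # '39','37','39','42'
-join ($pairs | ForEach-Object { [char][Convert]::ToInt32($_, 16) })   # prints 979B, i.e. LUN id 00:97:9B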

HP StorageWorks Disk Array XP - Operating


System Configuration Guide IBM AIX
Features and requirements
Installation procedures
This Document describes the requirements and procedures for connecting the XP family of
disk arrays to an IBM AIX system and configuring the new disk array for operation with AIX.
Features and requirements
The disk array and host have the following features and requirements.
HP StorageWorks disk arrays:
XP48: Up to 48 drives from 72 GB to 8.7 TB, 24 FC ports
XP128: From 8 to 128 drives for up to 18 TB, 48 FC ports
XP512: Up to 512 drives from 72 GB to 93 TB, 48 FC ports
XP1024: From 8 to 1024 drives for up to 149 TB, 64 FC ports
XP12000: Up to 1152 drives for up to 165 TB, 128 FC ports
IBM RS/6000 series, POWERstation, POWERserver, or SP series
IBM AIX operating system with current OS patches
superuser (root) login access to the system
Host Bus Adapters (HBAs): Install adapters and all utilities and drivers. Refer to the
adapter documentation for installation details.
(Recommended) HP StorageWorks Command View XP with LUN management feature
or Remote Control with the LUN Configuration Manager XP option for configuring disk array
ports and paths.
(Recommended) HP StorageWorks Secure Manager XP:
Allows the host to access only array devices for which it is authorized.
Other available XP Software (some may not apply to your system):
HP StorageWorks Business Copy XP
HP StorageWorks Continuous Access XP
HP StorageWorks Continuous Access Extension XP
HP StorageWorks Auto LUN XP
HP StorageWorks Data Exchange XP
HP StorageWorks Resource Manager XP
HP StorageWorks RAID Manager XP
HP StorageWorks Cache LUN XP
HP StorageWorks Auto Path XP
HP StorageWorks Cluster Extension XP
HP StorageWorks Performance Advisor XP software
Fibre Channel interface
The XP48, XP128, XP512, XP1024, and XP12000 disk arrays support these 1 Gbps and 2
Gbps Fibre Channel interfaces:
Short-wave non-OFC (open fiber control) optical interface
Multimode optical cables with SC or LC connectors
Public or private arbitrated loop (FC-AL) or fabric direct attach
Fibre Channel switches
Even though the interface is Fibre Channel, this guide uses the term "SCSI disk" because disk
array devices are defined to the host as SCSI disks.
Device types
The disk arrays support the following device types:
OPEN-x devices: OPEN-x logical units represent disk devices. Except for OPEN-V,
these devices are based on fixed sizes. OPEN-V is a user-defined size. Supported emulations
include OPEN-3, OPEN-8, OPEN-9, OPEN-E, OPEN-L, and OPEN-V devices.
LUSE devices (OPEN-x*n): Logical Unit Size Expansion (LUSE) allows you to combine
2 to 36 OPEN-x devices to create expanded LDEVs larger than standard OPEN-x disk
devices. For example, an OPEN-x LUSE volume created from ten OPEN-x CVS volumes is
designated as OPEN-x*10.
CVS devices (OPEN-x CVS): Volume Size Configuration (VSC) defines custom
volumes (CVS) that are smaller than normal fixed-sized logical disk devices (volumes).
(OPEN-V is a CVS-based custom disk size that you determine. OPEN-L does not support
CVS.)
LUSE (expanded) CVS devices (OPEN-x*n CVS): LUSE CVS combines CVS devices
to create an expanded device. This is done by first creating CVS custom-sized devices and
then using LUSE to combine from 2 to 36 CVS devices. For example, if three OPEN-9 CVS
volumes are combined to create an expanded device, this device is designated as OPEN-9*3-
CVS.
Failover
The disk arrays support many standard software products that provide host, application, or I/O
path failover and logical volume (storage) management.
SNMP configuration
The disk arrays support standard Simple Network Management Protocol (SNMP) for remotely
managing the disk array from the host. The SNMP agent on the remote console PC or
Command View can provide status and Remote Service Information Message (R-SIM)
reporting to the SNMP manager on the host for up to eight disk arrays. To configure the SNMP
manager on the host, refer to the operating system documentation.
1 - Disk array
2 - SIM
3 - SNMP
4 - Remote Console
5 - Error info
6 - Public
7 - SNMP manager
8 - Open system host
RAID Manager command devices
RAID Manager manages Business Copy (BC) and/or Continuous Access (CA) operations from
a server host. To use RAID Manager with BC or CA, you must use Command View or LUN
Configuration Manager to designate at least one LDEV as a command device. Refer to
the Command View or LUN Configuration Manager user guide for information about how to
designate a command device.
top
Installation procedures
1. Install and configure the disk array
The HP service representative performs the following tasks:
Assembling hardware and installing software
Loading the microcode updates
Installing the channel adapters (CHAs) and cabling
Installing and formatting devices
You perform the additional tasks below. If you do not have Command View or LUN
Configuration Manager, your HP service representative can perform these tasks for you.
Setting the System Option Modes
The HP representative sets the System Option Mode(s) based on the operating system and
software configuration of the host.
Configuring the Fibre Channel ports
Configure the disk array Fibre Channel ports by using Command View or the Fibre Parameter
window in LUN Configuration Manager. Select the settings for each port based on your storage
area network topology. Use switch zoning if you connect different types of hosts to the array
through the same switch.
Fibre Address
In fabric environments, the port addresses are assigned automatically. In arbitrated loop
environments, you set the port addresses by selecting a unique arbitrated loop physical
address (AL-PA) or loop ID for each port.
Fabric and Connection parameter settings
You can set each array port to FABRIC ON or OFF with connections of POINT-TO-POINT or
FC-AL as shown in the following table and figures. For detailed topology information, refer to
the HP StorageWorks SAN Design Reference Guide on the http://www.hp.com Web site.
Figure 1: Simple Point-to-Point Fabric Topology Example

1 - Server
2 - N Port
3 - F Port
4 - Fabric Switch
5 - F Port
6 - N Port
7 - Disk Array
Figure 2: Arbitrated Loop Fabric Topology Example

1 - Server
2 - NL Port
3 - FL Port
4 - Fabric Switch
5 - FL Port
6 - NL Port
7 - Disk Array
Fabric Parameter   Connection Parameter   Provides
ON                 FC-AL                  NL-port (SAN public arbitrated loop)
ON                 Point-to-Point         N-port (SAN fabric port)
OFF                FC-AL                  NL-port (private arbitrated loop; direct connect without a SAN)
OFF                Point-to-Point         Not supported
Setting the Host Mode for the disk array ports
The disk array ports have Host Modes that you must set depending on the host you use. After
the disk array is installed, use Command View (shown) or LUN Configuration Manager to set
the Host Mode for each port.
Figure 3: The host mode for AIX is 0F

2. Install and configure the host


Install and configure the host and host bus adapters (HBAs) that connect the host to the disk
array.
Loading the OS and software
Follow the manufacturer's instructions to load the operating system and software onto the host.
Load all OS patches and configuration utilities supported by HP and the HBA manufacturer.
Installing and configuring the HBAs
Install and configure the host bus adapters using the HBA manufacturer's instructions.
Supported HBAs:
Supported HBAs include the IBM FC6227, IBM FC6228, and IBM FC6239.
To check whether the drivers are installed:
Example (FC 6227)
Use the lslpp command to display the drivers currently installed on the system.
# lslpp -l | grep df1000f7
Check the list for the required two drivers:
For the IBM FC 6227 HBA, the following drivers are required:
devices.pci.df1000f7
devices.fcp.disk
If the drivers are displayed, you do not need to install the drivers.
If the drivers are not displayed, install the drivers by using the installp command or SMIT.
Example (FC 6228)
Use the lslpp command to display the drivers currently installed on the system.
# lslpp -l | grep df1000f9
Check the list for the required two drivers:
For the IBM FC 6228 HBA, the following drivers are required:
devices.pci.df1000f9
devices.fcp.disk
If the drivers are displayed, you do not need to install the drivers.
If the drivers are not displayed, install the drivers by using the installp command or SMIT.
To check the firmware level:
The HBA should have the proper version of firmware installed.
Use the lsdev command to display the device object.
# lsdev -Cc adapter
Use the lscfg command to display the firmware level.
# lscfg -vl fcsX
fcsX is the Fibre Channel adapter device object (typically fcs0). The Device Specific.(Z9) field shows the
installed firmware revision of the HBA.
To install the drivers using the AIX command line:
Insert the IBM drivers CD.
Use the installp command to install the drivers.
Example
# installp -a -d lpfc.installp all
Use the lslpp command to verify that the drivers are installed on the
system.
Configure the new devices by rebooting the system with the shutdown -r command, or by using the cfgmgr command to run Configuration Manager.
To install the drivers using SMIT:
Insert the IBM drivers CD.
Start SMIT.
Example
#smit
The System Management screen appears.
Select Install Additional Software.
Select Input Device / Directory For Software and press F4 to select
device.
Select Software To Install and press F4 to display a list of software.
Select the drivers needed using F7. You can use the slash (/) to search for
the components in the list.
HBA FC 6227 requires these drivers:
devices.pci.df1000f7
devices.fcp.disk
Press Enter. Wait for status to change from RUNNING to OK.
Check installation summary result (SUCCESS).
Press F10.
Use the smit devinst command to configure the devices.
ii. Clustering and Fabric zoning
If you plan to use clustering, install and configure the clustering software on the servers.
Clustering is the organization of multiple servers into groups. Within a cluster, each server is a
node. Multiple clusters compose a multi-cluster environment. The following example shows a
multi-cluster environment with three clusters, each containing two nodes. The nodes share
access to the disk array.
Figure 4: Multi-cluster environment with three clusters
Within the Storage Area Network (SAN), the clusters may be homogeneous (all the same
operating system) or they may be heterogeneous (mixed operating systems). How you
configure LUN Security and fabric zoning depends on the operating system mix and the SAN
configuration.
iii. Fabric zoning and LUN security for multiple operating systems
By using appropriate zoning and LUN security, you can connect multiple clusters of various
operating systems to the same switch and fabric:
Host zones must contain only homogeneous operating systems.
Storage port zones may overlap if more than one operating system needs to
share an array port.
Heterogeneous operating systems may share an XP array port if you use
Secure Manager and set the appropriate host group and mode; all others must connect to a
dedicated XP array port.
Use Secure Manager for LUN isolation when multiple hosts connect through
a shared array port. Secure Manager provides LUN security by allowing you to restrict which
LUNs each host can access.
Fabric zoning and LUN security requirements by environment and OS mix:
Standalone SAN (non-clustered), homogeneous (a single OS type present in the SAN): fabric zoning not required; LUN security must be used when multiple hosts connect through a shared port.
Standalone SAN (non-clustered), heterogeneous (more than one OS type present in the SAN): fabric zoning required; LUN security must be used when multiple hosts connect through a shared port.
Clustered SAN, homogeneous (a single OS type present in the SAN): fabric zoning not required; LUN security must be used when multiple cluster nodes connect through a shared port.
Clustered SAN, heterogeneous (more than one OS type present in the SAN): fabric zoning required; LUN security must be used when multiple cluster nodes connect through a shared port.
Multi-cluster SAN, homogeneous (a single OS type present in the SAN): fabric zoning not required; LUN security must be used when multiple cluster nodes connect through a shared port.
Multi-cluster SAN, heterogeneous (more than one OS type present in the SAN): fabric zoning required; LUN security must be used when multiple cluster nodes connect through a shared port.
b. Connect the disk array
Connect the disk array to the host as follows:
The HP service representative verifies operational status of the disk array
channel adapters, LDEVs, and paths.
The HP representative connects the Fibre Channel cables between the disk
array and the host.
Verify the ready status of the disk array and peripherals.
Defining the paths
Use Command View (shown) or LUN Configuration Manager to create paths (LUNs) between
hosts and volumes in the disk array, also called LUN mapping. LUN mapping includes these
tasks:
Configuring ports
Setting LUN security
Creating host groups by operating system and setting their host modes
Assigning host bus adapter WWNs to host groups.
Mapping volumes to host groups (by assigning LUNs).
For details, see the Command View or LUN Configuration Manager guide. HP recommends
that you note LUNS and their ports, WWNs, nicknames, and LDEVs for later use in verifying
host and device configuration.
Verifying disk array device recognition
Log into the host as an administrator (root).
If the disk array LUNs are defined after the IBM system is powered on, issue
a cfgmgr command to recognize the new devices.
Use the lsdev command to display system device data and verify that the
system recognizes the newly installed devices.
# lsdev -Cc disk
The devices are listed by device file name. All new devices should be listed as Available. If
they are listed as Defined, you must do more configuration before they can be used.
Figure 5: Example (Fibre Channel)

The example shows that Device hdisk0 is installed on bus 60 and has TID=5 and LUN=0.
Record the device file names for the new devices. You will use this
information in changing the device parameters.
Use the lscfg command to find the array LDEV designation corresponding to an AIX disk device.
Example
# lscfg -vl hdisk3
In this example, the emulation type, LDEV number, CU number and array port designation
should all be displayed for disk device hdisk3.
b. Configure disk array devices
Configure the disk array devices in much the same way you would configure any new disk on
the host. Creating scripts to configure all devices at once may save you considerable time.
Changing the device parameters
When the device files are created, the system sets the device parameters to the system
default values. You may need to change a few of those values for each new OPEN-x device:
read/write (R/W) timeout value
queue depth
queue type
The recommended queue depth settings may not provide the best I/O performance for your
system. You can adjust the queue depth setting to optimize the I/O performance of the disk
array.
Parameter name        Type            Default value   Required value for disk array
Read/write time-out   SCSI            30              60
Queue depth           SCSI            1               2 (For LUSE devices use 2 for each LUN. For example, if one LUSE device contains 8 LUNs, use 2 x 8 = 16 for the queue depth.)
Queue type            SCSI            None            Simple
Read/write time-out   Fibre Channel   30              60
Queue depth           Fibre Channel                   Depends on the firmware level:
  Before 52-38-xx: use 2 if exclusively OPEN-x volumes are mapped to the SCSI/FC port; use 8 if exclusively LUSE volumes are mapped to the SCSI/FC port; use 2 if an intermix of LUSE and OPEN-x volumes is mapped to the SCSI/FC port; use 8 if an intermix of LUSE and OPEN-x volumes is mapped for dummy LU (I-7135-Emu).
  52-40-xx to 52-44-xx: number of volumes x queue depth <= 256 AND queue depth <= 8.
  52-45-xx or later: number of volumes x queue depth <= 256 AND queue depth <= 32.
Queue type            Fibre Channel   None            Simple
To show the device parameters using the AIX command line:
At the command line prompt, enter lsattr -E -l hdiskx, where hdiskx is the device file name.
Example
# lsattr -E -l hdisk2
To change the device parameters using the AIX command line:
Change the parameters as follows:
To change the R/W timeout parameter, enter: chdev -l hdiskx -a rw_timeout='60'
To change the queue depth parameter, enter: chdev -l hdiskx -a queue_depth='x'
where x is a value from the above table.
To change the queue type parameter, enter: chdev -l hdiskx -a q_type='simple'
Example: This example changes the queue depth for device hdisk3:
# chdev -l hdisk3 -a queue_depth='2'
Verify that the parameters for all devices were successfully changed.
Example
# lsattr -E -l hdisk3
Repeat these steps for each OPEN-x device on the disk array.
NOTE: The lsattr command also shows other useful information, such as the LUN ID of the
mapped LDEV, the worldwide name of the disk array FC port, and the N-Port ID.
Another useful command for determining the slot position and port worldwide name of the HBA
is the lscfg -vl hdiskx command.
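For illustration (a hedged sketch; attribute names such as lun_id and ww_name apply to FC/MPIO disks and can vary with the driver level), the LUN ID and array port worldwide name can be read directly from the device attributes:
# lsattr -El hdisk3 | grep -E 'lun_id|ww_name'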
To change the device parameters using SMIT:
Start SMIT. (Optional) For an ASCII session, use the smit -C command.
Example: # smit
The System Management screen appears.
Select Devices.
Example
Figure 6: The System Management screen

The Devices screen appears.


Select Fixed Disk. The Fixed Disk screen appears.
Select Change/Show Characteristics of a Disk. The Disk screen appears.
Select the desired device from the Disk menu. The Change/Show
Characteristics of a Disk screen for that device is displayed.
Example
Figure 7: Change/Show Characteristics of a Disk screen

Enter the correct values for the read/write timeout value, queue depth, and
queue type parameters. Press Enter to complete the parameter changes.
Repeat these steps for each OPEN-x device on the disk array.
Assigning the new devices to volume groups
Assign the new devices to volume groups, using the AIX system's Logical Volume Manager
(accessed from within SMIT). This operation is not required when the volumes are used as raw
devices.
To assign a device to a volume group:
Start SMIT. (Optional) For an ASCII session, use the smit -C command.
Example: # smit
The System Management screen appears.
Select System Storage Management (Physical & Logical Storage).
Example
Figure 8: System Management screen

Select Logical Volume Manager.


Select Volume Groups.
Select Add a Volume Group.
Enter or select values for the following fields:
Volume Group name (the volume group can contain multiple hdisk devices)
Physical partition size in megabytes, see Physical partition size table (page
65).
Physical Volume names
To enter values, place the cursor in the field and type the value.
To select values, place the cursor in the field and press F4.
Enter yes or no in the Activate volume group AUTOMATICALLY at
system restart? field.
If you are not using HACMP (High Availability Cluster Multi-Processing) or HAGEO (High
Availability Geographic), enter yes.
If you are using HACMP and/or HAGEO, enter no.
Press Enter when you have entered the values. The confirmation screen
appears.
Press Enter again. The Command Status screen will appear. To ensure the
devices have been assigned to a volume group, wait for OK to appear on the Command Status
line.
Figure 9: Command Status screen

Repeat these steps for each volume group needed.


Creating the Journaled File Systems
Create the Journaled File Systems using the System Manager Information Tool (SMIT). This
operation is not required when the volumes are used as raw devices. The largest file system
permitted in AIX is 64 GB.
To create the Journaled File Systems:
Start SMIT.
# smit -C
Select System Storage Management (Physical & Logical Storage).
Select File Systems.
Select Add/Change/Show/Delete File Systems.
Select Journaled File Systems
Select Add a Journaled File System.
Select Add a Standard Journaled File System.
Select a volume group, and press Enter.
Enter values for the following four fields:
SIZE of file system (in 512-byte blocks): Enter the lsvg command to
display the number of free physical partitions and physical partition size. Calculate the
maximum size of the file system as follows: (FREE PPs - 1) x (PP SIZE) x 2048
Mount Point: Enter mount point name. (Make a list of the mount point
names for reference.)
Mount AUTOMATICALLY at system restart?: Enter yes.
CAUTION: In high availability systems (HACMP and/or HAGEO), enter no.
Number of bytes per inode: Enter the number of bytes appropriate for the
application, or use the default value.
Press Enter to create the Journaled File System.
The Command Status screen appears.
To ensure that the Journaled File System has been created, wait for OK to
appear on the Command Status line.
To continue creating Journaled File Systems, press the F3 key until you
return to the Add a Journaled File System screen.
Repeat steps b through k for each Journaled File System to be created.
To exit SMIT, press the F10 key.
Mounting and verifying the file systems.
Mount the file systems and verify that the file systems were created correctly and are
functioning properly.
b. To mount and verify the file systems:
Mount the file system. Enter: mount /mount_point_name
Example: # mount /vg01
Repeat step a for each new file system.
Use the df command to verify the size of the file systems. The capacity is
listed in 512-byte blocks. To list capacity in 1024-byte blocks, use the df -k command.
Figure 10: df command screen output

Verify that the file system is usable by performing some basic operations (for
example, file creation, copying, and deletion) on each logical device.
Use the df command to verify that the file systems have successfully
automounted after a reboot. Any file systems that were not automounted can be set to
automount using the SMIT Change a Journaled File System screen. If you are using HACMP
or HAGEO, do not set the file systems to automount.
NOTE: HACMP and HAGEO do not provide a complete disaster recovery or backup solution,
and are not a replacement for standard disaster recovery planning and backup/recovery
methodology.
top
How do I correlate /dev/sd devices to the hardware they represent?
(asked Jun 8 '12 at 21:02 by Wesley, edited Jun 9 '12 at 22:49 by Gilles; 31 votes)

A drive is beginning to fail and I only know the device by its /dev/sdb device file designation. What are the ways that I can use to correlate that device file to an actual hardware device to know which drive to physically replace?
Bonus: What if I don't have /dev/disk/ and its sub directories on this installation? (Which, sadly, I don't)
Tags: linux, hard-disk, block-device

Comments:
No RAID controller? (ewwhite, Jun 8 '12 at 21:44)
@ewwhite Not for the purposes of this question, no. =) (Wesley, Jun 8 '12 at 21:51)
btw. if you don't have the /dev/disk/ directory you probably don't have udev. (Ulrich Dangel, Jun 9 '12 at 3:32)
@UlrichDangel Very observant because... I don't have udev. =( (Wesley, Jun 9 '12 at 3:43)
7 Answers

Accepted answer (20 votes), by Handyman5, Jun 8 '12 at 21:13:
You can look in /sys/block:
-bash-3.2$ ls -ld /sys/block/sd*/device
lrwxrwxrwx 1 root root 0 Jun 8 21:09 /sys/block/sda/device ->
../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdb/device ->
../../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdc/device ->
../../devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdd/device ->
../../devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0
Or if you don't have /sys, you can look at /proc/scsi/scsi:
-bash-3.2$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: PepperC Model: Virtual Disc 1 Rev: 0.01
Type: CD-ROM ANSI SCSI revision: 03
Comments:
Awesome, I have /sys/ (as well as /proc/). Lots of great info there. Still consuming it. (Wesley, Jun 8 '12 at 21:24)
Also, take a look at /dev/disk/by-path (assuming your udev supports it). Of course, the OP's doesn't, so... (derobert, Jun 8 '12 at 21:39)
Answer (14 votes), by Martin Barry, Jun 8 '12 at 21:13:
hdparm -i /dev/sdb
That should give you the model and serial number of the drive.

Comments:
This would work for most situations, I believe. However, for some reason the controller in this server is sketchy. Performing that command earns me this: HDIO_GET_IDENTITY failed: Invalid argument (Wesley, Jun 8 '12 at 21:19)
smartctl -i is worth trying, too. Works on SCSI drives, whereas hdparm often won't. (derobert, Jun 8 '12 at 21:41)
Does not work with USB drives (Mads Skjern, Jul 30 '15 at 15:17)
Answer (10 votes), by Wesley, Jun 8 '12 at 21:28:
As the inimitable Gilles mentioned in this answer of his, if your kernel uses udev you can use the udevadm command to interrogate a device:
udevadm info -n /dev/sda -a
(Sadly, in some cases [doubly sad is that it's true in this case for me] udev is not used and/or udevadm is not available.)
Answer (9 votes), by jippie, Jun 8 '12 at 21:12 (edited Jun 9 '12 by Gilles):
If you can see the LED on the drive, or listen to the disk noise, you can run
sudo cat /dev/sdb >/dev/null
and see which drive suddenly becomes continuously active. Or, if you're going by noise,
sudo find /mount/point >/dev/null
which will make the heads move more (it may be better not to do it on the failing disk, and instead use a process of elimination with the other disks).

Comments:
I had considered how to get the lights to go blinky-blinky, so this is an answer to that curiosity of mine. =) (Wesley, Jun 8 '12 at 21:20)
Some drives have an extra LED for this, but they're usually only found in enterprise grade drives (read: bizarrely expensive at relatively low capacity). Don't know how to work those LEDs, but the dd trick usually works well enough. (jippie, Jun 8 '12 at 21:24)
@WesleyDavid Even if there are no LEDs, listening to the noise can be a last recourse. There's no need to use dd here (nor in most circumstances); cat or any other program that reads from a file will do. (Gilles, Jun 9 '12 at 22:55)
Answer (4 votes), by Julian Yon, Jun 8 '12 at 21:11:
Assuming this is Linux, the most obvious thing is to check dmesg for where the kernel first initializes the device. It logs the drive model.

Comments:
Check unix.stackexchange.com/questions/39886/ for more details. (jippie, Jun 8 '12 at 21:16)
Yes, it's Linux. Specifically Debian 4. I checked dmesg, but didn't see any mention of a drive model, oddly. Perhaps I'm misreading it. (Wesley, Jun 8 '12 at 21:24)
Answer (3 votes), by Samuel Duclos, Jun 30 '12 at 21:59:
I have 4 methods. The first one is the easiest:
dmesg | egrep "sd[a-z]"
For the others, I'm not sure if they need /dev/disk except for this one:
ls -lF /dev/disk/by-uuid
The others:
blkid -o list -c /dev/null
And the obvious:
fdisk -l
Answer (1 vote), by SamK, Feb 27 at 10:05:
Here are some ways I know to find the SCSI device name:
dmesg | egrep "sd[a-z]"
lsblk --scsi (from package util-linux >= v2.22)
lshw -C disk
ls -ld /sys/block/sd*/device

How to translate Windows disk IDs to storage array LUNs


Posted by Rob Koper on March 12, 2014 (7 comments)
Converting disk information in a VM into the actual LUN information

We've all been there: you have a certain Windows virtual machine with several disks of the same size and you
don't know which Windows disk is in fact which storage LUN.

The VMware settings for this VM might look like this:


In Windows a disk might show this information:
It turns out the Windows information can be converted into VMware settings quite easily. How?

VMware SCSI controller conversion table (into Windows locations)

First of all disks coming from SCSI controller 0 show as coming from location 160 in Windows disk
management.

Disks connected to SCSI controller 1 appear as coming from location 161. And so on. After that each controller
increments by 32.

controller 0 = location 160


controller 1 = location 161
controller 2 = location 193
controller 3 = location 225
controller 4 = location 257
Furthermore, when a 2nd SCSI controller is added and vdisks are added from a data store, the bus number will
increment, as seen in Windows.

When RDMs are used, in Windows the bus will always be 0. When you add a SCSI id to an RDM in Virtual Center,
you add an x:y number as identifier to that disk. x represents the SCSI controller and y is the LUN id. In
Windows this translates to a location and target. The LUN id as seen by Windows disk manager is not used.
So in my screenshot from disk manager as shown above, disk 9 has the information Location 193, bus 0,
target 3, LUN 0. Translating this into the vCenter settings for this VM:

SCSI id 2:3 (2 is the controller and 3 is the LUN on that controller).

So now you look up disk 2:3 in this VM's settings and find:

Note that the disk numbering in the VM settings is different from the Windows disk numbers. Windows starts at
0 and in vCenter a disk number starts at 1, and I've seen Windows environments where the disk numbers
changed after reboots, so don't use disk numbers as a reference!
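As a convenience (a hedged sketch, not from the original post), the location/target/LUN values that Windows reports can also be pulled in one go with WMI, which makes comparing them with the vCenter SCSI x:y IDs easier:
Get-WmiObject Win32_DiskDrive | Select-Object Index, SCSIPort, SCSIBus, SCSITargetId, SCSILogicalUnit | Sort-Object Index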

In the top right red encircled number you'll find the naa number. This usually starts with 6006; at least for EMC
storage it does.

By using the command line tool naviseccli you can now check the naa number against the LUNs presented to the
ESX host where the VM resides. It's a lot of work, but you might want to consider adding this information to your
documentation and updating it every time disk changes are performed.
As you can see, the naa number I found using the CLI command matches the RDM I found in the VM settings.
Note that in VMware some lead-in numbers appear (in this case vml.02001d0000) as well as some lead-out
numbers (anything after e311).

This is how I'd track a Windows disk all the way back to its storage array LUN.

Conclusion

Documentation! Make sure you add the naa numbers of RDMs to your sheet (or whatever you use to document
all the settings and configurations). Even better: for every disk you add in the VM's settings, write down which
LUN it represents.

External sources:

VMware knowledge base website kb2051606.


7 Comments.

1. Hermann Weiss July 7, 2014 at 16:44


Unfortunately, the controller-to-location calculation does not always work. We have VMs where Windows
shows location 160 and 225 but not 193. The VM has 2 controllers, 0:0 and 1:0.
Even a complete swap of the numbers is possible.
It seems there will be a mismatch when controllers were added, removed, added, etc.
Up to now I have found no reliable way to match VM SCSI IDs to Windows IDs (but I would be interested in one).
Reply

o Rob Koper July 7, 2014 at 17:26


That's really too bad. Nevertheless, I hope my way of doing things still helps with relatively
unaltered VMs then.

Reply

2. Fredrik January 30, 2015 at 11:33


I've gone through the biggest file server we have.
In my case I have these numbers as the SCSI controller and location:
Location 160 SCSI 0 (meaning SCSI(0:X) in VMware)
Location 256 SCSI 1 (meaning SCSI(1:X) in VMware)
Location 161 SCSI 2 (meaning SCSI(2:X) in VMware)
Location 224 SCSI 3 (meaning SCSI(3:X) in VMware)
The location numbers have even switched place on my server.
Reply

o Rob Koper January 31, 2015 at 13:26


It's a guideline, unfortunately not a guarantee.
If somebody knows a way to make a 100% certain translation: let me know!
Reply

Rob Koper February 1, 2015 at 13:35


I would blame it on Windows. Do a reboot and everything changes again

Reply

3. Bjorn Houben May 4, 2015 at 21:08


Can't this be used instead? http://www.van-lieshout.com/2009/12/match-vm-and-windows-harddisks-using-powercli/
Reply
4. How to match Windows Disks to VMware Hard Disks | The Tired Admin - pingback on November 28, 2016 at 16:24

Bringing Windows hosts online after transition


After you transition your LUNs using the 7-Mode Transition Tool (7MTT) for Windows hosts, you must complete several steps to
bring your host online and begin servicing data again.

Before you begin


If you are doing a copy-free transition (CFT), procedures for vol rehost must be complete. See the 7-Mode Transition Tool
Copy-Free Transition Guide for details.
About this task
For copy-based transitions (CBTs), perform these steps after completing the Storage Cutover operation in the 7-Mode
Transition Tool (7MTT).
For CFTs, perform these steps after completing the Import & Data Configuration operation in the 7MTT.
Steps
1. Generate the 7-Mode to clustered Data ONTAP LUN mapping file:

For copy-based transitions, run the following command from the host where the 7MTT is installed:
transition export lunmap -s sub-project-name/session-name -o file_path
For example:
transition export lunmap -s SanWorkLoad -o c:/Libraries/Documents/7-to-C-LUN-MAPPING.csv
For copy-free transitions, run the following command from the system where the 7MTT is installed:
transition cft export lunmap -p project-name -s svm-name -o output-file
Note: You must run this command for each of your Storage Virtual Machines (SVMs).
For example:
transition cft export lunmap -p SANWorkLoad -s svml -o c:/Libraries/Documents/7-to-C-LUN-MAPPING-svml.csv

2. If the Windows host is SAN-booted and the boot LUN was transitioned, power on the host.

3. Update the FC BIOS to enable the system to boot from the LUN on the clustered Data ONTAP controller.

See the HBA documentation for more information.

4. On the Windows host, rescan the disks from the Disk Manager.

5. Obtain the LUN serial numbers, LUN IDs, and corresponding Windows physical disk numbers of the LUNs mapped to
the host.

For systems running Data ONTAP DSM: Use the Data ONTAP DSM Management Extension Snap-In or
the get-sandisk Windows PowerShell cmdlet.
For systems running MSDSM: Use the Inventory Collect Tool (ICT).

The LUN ID, LUN serial number, and corresponding serial number are captured under the SAN Host LUNs tab.

6. Use the LUN serial numbers, LUN IDs, and corresponding Windows physical disk numbers of the LUNs along with the
LUN map output and the data collected in the pretransition state, to determine whether the LUNs have transitioned
successfully.

7. Note whether the physical disk numbers of the transitioned LUNs have changed.

8. Bring your disks online.

Use Windows Disk Manager to bring online disks that are not part of Cluster Failover.
Use Failover Cluster Manager to bring online disks that are part of Cluster Failover.

9. If the host you are transitioning is running Windows Server 2003 and you have migrated the quorum device, start the
cluster services on all of the cluster nodes.

10. If Hyper-V is enabled on the host and pass-through devices are configured to the VMs, modify the settings from Hyper-V Manager.

The physical disk number of the LUN corresponding to the pass-through device might have changed as a result of the
transition.


Identifying additional storage drives on a Windows Virtual server


How do I identify the additional storage drives on my Windows Virtual Server?
In certain circumstances, such as when removing an additional storage drive from your Virtual Server, you need to be able to
identify which drives on your server relate to which drive listed in your Fasthosts control panel.

This article shows how to identify storage drives on a Windows virtual server, the process is slightly different for Linux servers.

Your additional storage drives are listed on the overview page for your Virtual Server in your Fasthosts control panel. Each
additional drive is numbered. Typically your first storage drive will be 0 and each additional drive added will increment by one.

The number shown in the control panel corresponds to the Logical Unit Number (LUN) of the disk in your server. If you need to
identify which disk in your server relates to one of your storage drives, you need to connect to your server, open the Disk
Management utility, and find the disk with the corresponding LUN ID.

Identifying the LUN for your additional drives

You can find the LUN ID for each of your additional drives within the Computer Management console on your server.

Step 1

Log in to your server using Remote Desktop.

Step 2

In the Start menu, right click on Computer, and then select Manage from the pull out menu.
Step 3

In the menu on the left-hand side of the Window, expand the Storage icon, and then select Disk Management.

Step 4

The disks attached to your server are listed. The first disk, Disk 0, is the primary disk in your server. Any additional devices that
you have added in your Fasthosts control panel will be listed below the primary disk.
While the disks are numbered in your server, and in your Fasthosts control panel, the numbers may not correspond. To be sure
that you have identified the correct disk within the Disk Management console you must confirm that the LUN ID matches the
drive number in your control panel.
Step 5

Right click on the drive you would like to check, and select Properties from the popup menu.

Step 6

The disk properties window displays the LUN ID in the Location information on the General tab. The number given here directly
corresponds to the storage drive number given in your control panel, but may not be the same as the disk number listed in the
Disk Management screen.
In this example, the LUN number is 0. This disk corresponds to the disk listed in the control panel as Storage Drive 0.
The primary disk in your Virtual Server will usually have a LUN number of 0. However, the first additional storage drive added will
usually also have a LUN number of 0. Take care not to incorrectly identify the primary disk as an additional drive.

Identify LUN IDs in Windows Server 2012 R2
December 26, 2013
During a Windows Server 2012 R2 Hyper-V implementation I needed to identify all the iSCSI
disks (LUNs) presented by an EMC VNX SAN to the Hyper-V failover cluster. Each presented
iSCSI disk has a unique LUN ID. To view the LUN ID of a disk, you can use
the diskpart command. Here are the steps to view the LUN ID of a disk:
View the disks
list disk
Select a disk
select disk <number>

View the LUN ID of the disk


detail disk

I didn't find a PowerShell command to view the LUN IDs. You can create a PowerShell script
that uses diskpart to view all the disks and the corresponding LUN IDs, as sketched below.
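A minimal PowerShell sketch of that idea (the temporary script file name lun-detail.txt is illustrative):
# enumerate the local disks via WMI, then ask diskpart for the detail of each one
$disks = Get-WmiObject Win32_DiskDrive | Sort-Object Index
foreach ($d in $disks) {
    Set-Content -Path lun-detail.txt -Value @("select disk $($d.Index)", "detail disk")
    diskpart /s lun-detail.txt    # "detail disk" prints the Location and LUN information
}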

2 thoughts on "Identify LUN IDs in Windows Server 2012 R2"

1.
Lee
December 27, 2013 at 21:08
I'm not at work now so I don't have access to my script library. I think the WMI class
Win32_DiskDrive has this attribute as well. So something like

Get-WmiObject Win32_DiskDrive | Select Name, SCSIBus, SCSILun

will work.

2.
Nick M
January 13, 2014 at 02:31
So I dug into WMI and wanted to post what it actually was, since that Win32_DiskDrive
class is pretty big and has a lot of stuff. Also, since the above poster wasn't exactly right on the
select statement, I wanted to help.

If you want a little more detailed information, to ensure you're looking at the right
storage provider as well, use this select afterwards:

get-wmiobject Win32_DiskDrive | select name, caption, scsibus, scsilogicalunit | sort-object name | ft -autosize -wrap

CentOS / RHEL : How to identify/match LUN presented from SAN with underlying OS disk

This post mentions a few ways to exactly identify/match the LUN presented from the SAN with the underlying
OS disk.
Method 1
Execute the command below to obtain the Vendor, Model, Port, Channel, SCSI ID and LUN:
# cat /proc/scsi/scsi
Host: scsi2 Channel: 00 Id: 00 Lun: 29
Vendor: EMC Model: SYMMETRIX Rev: 5874
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 29
Vendor: EMC Model: SYMMETRIX Rev: 5874
Type: Direct-Access ANSI SCSI revision: 05
Then execute the command below:
# ls -ld /sys/block/sd*/device
lrwxrwxrwx 1 root root 0 Oct 4 12:12 /sys/block/sdaz/device ->
../../devices/pci0000:20/0000:20:02.0/0000:27:00.0/host2/rport-2:0-0/target2:0:0/2:0:0:29
lrwxrwxrwx 1 root root 0 Oct 4 12:12 /sys/block/sdbi/device ->
../../devices/pci0000:20/0000:20:02.2/0000:24:00.0/host3/rport-3:0-0/target3:0:0/3:0:0:29
Now compare the hostX and target information with the output of the previous command (/proc/scsi/scsi) to determine which disk is
mapped to which LUN ID. The numbers at the end of each path represent host, channel, target and LUN
respectively, so the first device in the ls -ld /sys/block/sd*/device output corresponds to the first device
seen in the cat /proc/scsi/scsi output above, i.e. Host: scsi2 Channel: 00 Id: 00 Lun:
29 corresponds to 2:0:0:29. Compare these fields in both outputs to correlate.
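
If you would rather have the correlation printed in one go instead of comparing the two listings by eye, a small loop over sysfs
(a sketch, based on the sysfs layout shown above) prints each sd device next to its host:channel:target:LUN address:

# For every SCSI disk, resolve the sysfs "device" symlink; its last path component is H:C:T:L.
for dev in /sys/block/sd*; do
    echo "$(basename "$dev") -> $(basename "$(readlink -f "$dev/device")")"
done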

To get the WWID of a LUN, you can use the /dev/disk/by-id/ directory:


# ls -la /dev/disk/by-id/
scsi-3600508b400105e210000900000490000 -> ../../dm-1
Now it is clear that dm-1 has WWID 3600508b400105e210000900000490000.
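
To read the WWID of one specific path device directly (for example /dev/sdaz from the earlier listing), the udev scsi_id helper
can also be used. This is a sketch; note that the helper lives under /lib/udev on RHEL 6 and /usr/lib/udev on RHEL 7:

# /lib/udev/scsi_id --whitelisted --device=/dev/sdaz

The value it prints should match the WWID portion of the scsi-3... entry shown under /dev/disk/by-id/.
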
Method 2
Another way is to use the sg_map command. Make sure the sg3_utils package is installed before running this
command.
# yum install sg3_utils
# sg_scan -i

/dev/sg2: scsi1 channel=0 id=0 lun=1 [em] type=0

SanDisk ImageMate CF-SM 0100 [wide=0 sync=0 cmdq=0 sftre=0 pq=0x0]


The command above gives the mapping for the devices. After this, execute:
# sg_map -x

/dev/sg2 0 0 2 0 0 /dev/sdc
From the outputs of the two commands above we can determine that sg2 (the SanDisk device) is actually the /dev/sdc
device.
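
Depending on the sg3_utils version, the two steps can usually be combined so that the SCSI address and the block device
appear on one line together with the INQUIRY (vendor/model) strings; a sketch:

# sg_map -i -x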

Method 3
If multipath (device-mapper) is being used, the command below can be used:
# multipath -v4 -ll

mpathc (360000970000195900437533030382310) dm-1 EMC,SYMMETRIX


size=253G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 3:0:0:1 sde 8:64 active ready running
`- 5:0:0:1 sdc 8:32 active ready running
How to understand the output:
mpathc - user-defined name
360000970000195900437533030382310 - WWID
dm-1 - device-mapper (sysfs) name
EMC - vendor
3:0:0:1 - host, channel, SCSI ID, LUN (one line per path)
This output can be compared with the one we get from the cat /proc/scsi/scsi command:
# cat /proc/scsi/scsi
Host: scsi2 Channel: 00 Id: 00 Lun: 29
Vendor: EMC Model: SYMMETRIX Rev: 5874
Type: Direct-Access ANSI SCSI revision: 05
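
Conversely, to quickly see which multipath map a single path device (say /dev/sdc) belongs to, filtering the same listing is
usually enough (a sketch; adjust the device name to suit):

# multipath -ll | grep -E 'dm-|sdc'

The map line (containing the dm-N name and the WWID) printed just above the matching path line is the one that owns /dev/sdc.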

Obtaining LUN pathing information for ESX or ESXi hosts (1003973)
Purpose
This article explains how to use tools to determine LUN pathing information for ESX hosts.
Resolution
There are two methods used to obtain the multipath information from the ESX host:

ESX command line: use the command line to obtain the multipath information when performing troubleshooting procedures.

VMware Infrastructure/vSphere Client: use this option when you are performing system maintenance.

ESXi 5.x / ESXi 6.x


Command line

To obtain LUN multipathing information from the ESXi host command line:

1. Log in to the ESXi host console.

2. Type esxcli storage core path list to get detailed information regarding the paths.

For example:

fc.5001438005685fb7:5001438005685fb6-fc.5006048c536915af:5006048c536915af-
naa.60060480000290301014533030303130
UID: fc.5001438005685fb7:5001438005685fb6-fc.5006048c536915af:5006048c536915af-
naa.60060480000290301014533030303130
Runtime Name: vmhba1:C0:T0:L0
Device: naa.60060480000290301014533030303130
Device Display Name: EMC Fibre Channel Disk (naa.60060480000290301014533030303130)
Adapter: vmhba1
Channel: 0
Target: 0
LUN: 0
Plugin: NMP
State: active
Transport: fc
Adapter Identifier: fc.5001438005685fb7:5001438005685fb6
Target Identifier: fc.5006048c536915af:5006048c536915af
Adapter Transport Details: WWNN: xx:xx:xx:xx:xx:xx:xx:xx WWPN: xx:xx:xx:xx:xx:xx:xx:xx
Target Transport Details: WWNN: 50:06:04:8c:53:69:15:af WWPN: 50:06:04:8c:53:69:15:af

3. Type esxcli storage core path list -d naaID to list the detailed information of the corresponding paths for a
specific device.

4. The command esxcli storage nmp device list lists the LUN multipathing information:

naa.60060480000290301014533030303130
Device Display Name: EMC Fibre Channel Disk (naa.60060480000290301014533030303130)
Storage Array Type: VMW_SATP_SYMM
Storage Array Type Device Config: SATP VMW_SATP_SYMM does not support device
configuration.
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config:
{preferred=vmhba0:C0:T1:L0;current=vmhba0:C0:T1:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba0:C0:T1:L0
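
As with the path listing, the output can be limited to a single device by passing its NAA ID, for example (substitute your
own device ID):

esxcli storage nmp device list -d naa.60060480000290301014533030303130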

Notes:

For information on multipathing and path selection options, see Multipathing policies in ESX/ESXi 4.x and ESXi 5.x (1011340).

If a "Connect to local host failed: Connection failure" message is received, the hostd management agent
process may not be running, which is required to use esxcli. In this situation, you can use localcli instead of esxcli.

For more information, see the 5.5 Command Line Reference Guide.

vSphere Client
To obtain multipath settings for your storage in vSphere Client:

1. Select an ESX/ESXi host, and click the Configuration tab.


2. Click Storage.

3. Select a datastore or mapped LUN.

4. Click Properties.

5. In the Properties dialog, select the desired extent, if necessary.

6. Click Extent Device > Manage Paths and obtain the paths in the Manage Path dialog.

For information on multipathing options, see Multipathing policies in ESXi 5.x and ESXi/ESX 4.x (1011340).
ESX 4.x
Command line

To obtain LUN multipathing information from the ESX/ESXi host command line:

1. Log in to the ESX host console.

2. Type esxcfg-mpath -b to list all devices with their corresponding paths:

naa.6006016095101200d2ca9f57c8c2de11: DGC Fibre Channel Disk


(naa.6006016095101200d2ca9f57c8c2de11)
vmhba3:C0:T0:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:1b:32:86:5b:73 WWPN:
21:00:00:1b:32:86:5b:73 Target: WWNN: 50:06:01:60:b0:20:f2:d9 WWPN:
50:06:01:60:b0:20:f2:d9
vmhba3:C0:T1:L0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:1b:32:86:5b:73 WWPN:
21:00:00:1b:32:86:5b:73 Target: WWNN: 50:06:01:60:b0:20:f2:d9 WWPN:
50:06:01:60:b0:20:f2:d9

The device naa.6006016095101200d2ca9f57c8c2de11 has 2 paths: vmhba3:C0:T0:L0 and vmhba3:C0:T1:L0.

3. Type esxcfg-mpath -l to get more detailed information regarding the paths.

For example:

fc.2000001b32865b73:2100001b32865b73-fc.50060160c6e018eb:5006016646e018eb-
naa.6006016095101200d2ca9f57c8c2de11
Runtime Name: vmhba3:C0:T1:L0
Device: naa.6006016095101200d2ca9f57c8c2de11
Device Display Name: DGC Fibre Channel Disk (naa.6006016095101200d2ca9f57c8c2de11)
Adapter: vmhba3 Channel: 0 Target: 1 LUN: 0
Adapter Identifier: fc.20000000c98f3436:10000000c98f3436
Target Identifier: fc.50060160c6e018eb:5006016646e018eb
Plugin: NMP
State: active
Transport: fc
Adapter Transport Details: WWNN: 20:00:00:1b:32:86:5b:73 WWPN: 21:00:00:1b:32:86:5b:73
Target Transport Details: WWNN: 50:06:01:60:b0:20:f2:d9 WWPN: 50:06:01:60:b0:20:f2:d9

4. The command esxcli nmp device list lists the LUN multipathing information:

naa.6006016010202a0080b3b8a4cc56e011
Device Display Name: DGC Fibre Channel Disk (naa.6006016010202a0080b3b8a4cc56e011)
Storage Array Type: VMW_SATP_ALUA_CX
Storage Array Type Device Config: {navireg=on, ipfilter=on}
{implicit_support=on;explicit_support=on; explicit_allow=on;alua_followover=on;
{TPG_id=2,TPG_state=ANO}{TPG_id=1,TPG_state=AO}}
Path Selection Policy: VMW_PSP_FIXED_AP
Path Selection Policy Device Config:
{preferred=vmhba3:C0:T1:L0;current=vmhba3:C0:T1:L0}
Working Paths: vmhba3:C0:T1:L0

The Path Selection Plug-in (PSP) policy is what the ESX host uses when it determines which path to use in the event of a failover;
an example of checking and changing the policy for a device follows the note below. Supported PSP options are:

o VMW_PSP_FIXED - Fixed Path Selection

o VMW_PSP_MRU - Most Recently Used Path Selection

o VMW_PSP_RR - Round Robin Path Selection

o VMW_PSP_FIXED_AP - Fixed Path Selection with Array Preference (introduced in ESX 4.1)

Note: For information on multipathing and path selection options, see Multipathing policies in ESXi 5.x and ESXi/ESX 4.x
(1011340).
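
On ESX/ESXi 4.x, the active PSP for a device can be read from the esxcli nmp device list output shown above. If a change is
required, a command along the following lines is typically used (a sketch; substitute your own device ID and confirm the target
PSP is supported for your array before changing it):

esxcli nmp device setpolicy --device naa.6006016010202a0080b3b8a4cc56e011 --psp VMW_PSP_RR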

vSphere Client
To obtain multipath settings for your storage in vSphere Client:

1. Select an ESX/ESXi host, and click the Configuration tab.

2. Click Storage.

3. Select a datastore or mapped LUN.

4. Click Properties.

5. In the Properties dialog, select the desired extent, if necessary.

6. Click Extent Device > Manage Paths and obtain the paths in the Manage Paths dialog.

For information on multipathing options, see Multipathing policies in ESXi 5.x and ESXi/ESX 4.x (1011340).

ESX 3.x
Command line

To obtain LUN multipathing information from the ESX host command line:

1. Log in to the ESX host console.

2. Type esxcfg-mpath -l and press Enter.

You see output similar to:

Disk vmhba2:1:4 /dev/sdh (30720MB) has 2 paths and policy of Most Recently Used

FC 10:3.0 210000e08b89a99b<-> 5006016130221fdd vmhba2:1:4 On active preferred

FC 10:3.0 210000e08b89a99b<-> 5006016930221fdd vmhba2:3:4 Standby

Disk vmhba2:1:1 /dev/sde (61440MB) has 2 paths and policy of Most Recently Used

FC 10:3.0 210000e08b89a99b<->5006016130221fdd vmhba2:1:1 On active preferred


FC 10:3.0 210000e08b89a99b<->5006016930221fdd vmhba2:3:1 Standby

In this example, two LUNs are presented.

As there are no descriptions given, here is an analysis of the information provided for the first LUN:

o vmhba2:1:4- This is the canonical device name the ESX host uses to refer to the LUN.

Note: When there are multiple paths to a LUN, the canonical name is the first path that was detected for this LUN.

o /dev/sdh- This is the associated Linux device handle for the LUN. You must use this reference when using utilities like fdisk.

o 30720MB- The disk capacity of the LUN, e.g. 30720 MB (30 GB).

o Most Recently Used -This is the policy the ESX host uses when it determines which path to use in the event of a failover.
The choices are:
Most Recently Used-The path used by a LUN is not altered unless an event (user, ESX host, or array initiated)
instructs the path to change. If the path changed because of a service interruption along the original path, the
path does not fail back when service is restored. This policy is used for Active/Passive arrays and many pseudo
active/active arrays.

Fixed-The path used by a LUN is always the one marked as preferred, unless that path is unavailable. As soon as
the path becomes available again, the preferred path becomes the active path again. This policy is used for
Active/Active arrays. An Active/Passive array should never be set to Fixed unless specifically instructed to do so,
as this can lead to path thrashing, performance degradation and virtual machine instability.

Round Robin-This is experimentally supported in ESX 3.x. It is fully supported in ESX 4.x.

Note: See the Additional Information section for references to the arrays and the policy they are using

o FC-The LUN disk type. There are three possible values for LUN disk type:

FC: This LUN is presented through a fibre channel device.

iSCSI: This LUN is presented through an iSCSI device.

Local: This LUN is a local disk.

o 10:3.0-This is the PCI slot identifier, which indicates the physical bus location this HBA is plugged into.

o 210000e08b89a99b-The HBA World Wide Port Names (WWPN) are the hardware addresses (much like the MAC address
on a network adapter) of the HBAs.

o 5006016930221fdd-The Storage processor port World Wide Port Names (WWPN) are the hardware addresses of the ports
on the storage processors of the array.

o vmhba2:1:4- This is the true name for this path. In this example, there are two possible paths to the LUN (vmhba2:1:4
and vmhba2:3:4).

o On active preferred- The Path status contains the status of the path. There are six attributes that comprise the status:

On: This path is active and able to process I/O. When queried, it returns a status of READY.

Off: The path has been disabled by the administrator.

Dead: This path is no longer available for processing I/O. This can be caused by physical medium error, switch, or
array misconfiguration.

Standby: This path is inactive and cannot process I/O. When queried, it returns a status of NOT_READY.

Active: This path is processing I/O for the ESX Server host.

Preferred: This is the path that is preferred to be active. This attribute is ignored when the policy is set to Most
Recently Used (mru).

VI Client

To obtain multipathing information from VI Client:

o Select an ESX host.


o Click the Configuration tab.

o Click Storage.

o Click the VMFS-3 datastore you are interested in.

o Click Properties.

The following dialog appears:

From this example, you can see that the canonical name is vmhba2:1:0 and the true paths are vmhba2:1:0 and
vmhba2:3:0.
The active path is vmhba2:1:0 and the policy is Most Recently Used.

o Click Manage Paths. The Manage Paths dialog appears:

Additional Information
For more information, see the documentation for your version of ESX and consult the Storage/SAN Compatibility Guide.

For more information on ESXi 5.5, see the VMware vSphere 5.5 Documentation Center.


How to find the corresponding vmdk of the /dev/sd* disk added in linux system?

ssuvasanth 2014-4-15 12:06


Hi,

I have a situation where there are three disks, /dev/sdj, /dev/sdk and /dev/sdl, all of the same size, in a
RHEL system.

I want to remove one disk (/dev/sdj) from the system via vSphere.

How do I find the corresponding .vmdk name?

I have been trying, but I need your help.



1. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?

will373794 2014-4-15 6:39 ssuvasanth


You should be able to tell from the VM's properties (in the VMware client, right-click the VM -> Edit
Settings -> the hard disk you want to remove). You can tell by size or by SCSI node, which should be
in the same order as in Linux.
2. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?

kashifkarar01 2014-4-15 10:59 will373794


Hi,

In our VM, which is running RHEL 6, I notice that /dev/sdb, /dev/sdc and /dev/sdd are mapped
to Hard disk 1, Hard disk 2 and Hard disk 3. You can correlate this in the VM's Edit Settings.

NOTE: While removing the hard disk from the VM, make sure that you just remove the .vmdk from the
VM and do not select the option to delete it from disk. This way you are on the safer side, and if
everything works out as expected you can manually delete the .vmdk file from the datastore (if you
wish).
3. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?

macvirtual 2014-4-16 12:55 kashifkarar01


Hi ssuvasanth,

It's easy to match the device file (/dev/sdX) with the VMware virtual hard disk.

Right-click your virtual machine in vCenter (or the vSphere Client), and click "Edit Settings".
You will see your virtual hard disks listed.

In the right pane, the SCSI address shows up as something like "SCSI(0:0)". The numbers in SCSI(X:Y) map
to Linux device files such as:
SCSI(0:0) -> /dev/sda
SCSI(0:1) -> /dev/sdb

So I guess /dev/sdj, which you wish to delete, is SCSI(0:10). Note that the virtual hard disk name
"Hard disk X" does not necessarily correspond to /dev/sdX.
This mapping is based on the Linux device naming mechanism, so if you have customized this
configuration, say with udev rules under /etc/udev/rules.d/, you should check the Red Hat documentation
before you delete your vdisk.

Best,
MAC
4. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?

sajal1 2014-4-16 3:26 ssuvasanth


Hello,
The best way to do it is a mix of the above and the following:

Run the command "lsscsi" to list each disk with its SCSI [host:channel:target:LUN] address.

Next, follow the suggestion above from macvirtual to find out the mapping of the disks to the
VMDKs. Now you can be sure and safe in knowing which is which.

Hope this helps


5. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?

UofS 2014-10-20 2:00 sajal1


We have found that when using multiple pvscsi controllers, the only reliable thing one can count on
seems to be the LUN number. The controller host number (first column) is completely random. We believe
that the only way to reliably map these is to embed the controller and LUN ID into the label or
volume group name (assuming one vmdk per volume group); a sketch of this is shown at the end of this reply.

If there is any Linux method to sequentially map the host controllers, that would make it more
reliable.

Ideas anyone?

eg:

[0:0:0:0] disk VMware Virtual disk 1.0 /dev/sda


[0:0:1:0] disk VMware Virtual disk 1.0 /dev/sdb
[0:0:2:0] disk VMware Virtual disk 1.0 /dev/sdc
[1:0:0:0] disk VMware Virtual disk 1.0 /dev/sdd
[1:0:1:0] disk VMware Virtual disk 1.0 /dev/sde
[2:0:0:0] disk VMware Virtual disk 1.0 /dev/sdf
[2:0:1:0] disk VMware Virtual disk 1.0 /dev/sdg
[3:0:0:0] disk VMware Virtual disk 1.0 /dev/sdh
[3:0:1:0] disk VMware Virtual disk 1.0 /dev/sdi

Update: The following link describes the problem, and we believe the problem is, as
stated, that the pvSCSI driver just does not pass through information such as the WWN
for the controller.

vmware esx - How does Linux determine the SCSI address of a disk? - Server Fault
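
As a minimal sketch of the naming idea above (hypothetical names; assumes /dev/sdd is the disk sitting on controller 1 at
LUN 0 and that it is dedicated to a single volume group):

# pvcreate /dev/sdd
# vgcreate vg_c1_l0 /dev/sdd

The controller and LUN position then survive in the volume group name even if the /dev/sdX ordering changes after a reboot
or rescan.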

6. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?

UnixArena 2015-8-5 12:48 ssuvasanth


1. Use the dmesg command to find the existing disk SCSI ID and try to map it to the VMware SCSI ID.
2. Perform the sg name validation on Linux to get the exact hard disk name at the virtual machine
level.

For a step-by-step guide, please go through the link below.


How to Map the VMware virtual Disks for Linux VM ? - UnixArena
7. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?

suyashjain 2016-11-14 2:22 ssuvasanth


dmesg is the easiest way to map the disks to the vmdk files.

Execute the following command on your Linux system:

# dmesg | grep -i 'Attached SCSI disk'

Sample output:

sd 2:0:1:0: [sdb] Attached SCSI disk


sd 2:0:3:0: [sdd] Attached SCSI disk
sd 2:0:6:0: [sdg] Attached SCSI disk
sd 2:0:4:0: [sde] Attached SCSI disk
sd 2:0:0:0: [sda] Attached SCSI disk
sd 2:0:5:0: [sdf] Attached SCSI disk
sd 2:0:2:0: [sdc] Attached SCSI disk
Now match the second and third numbers from the first column (sd 2:0:1:0) with the VMware SCSI ID
shown in the VM properties, as described in the earlier replies.
