How to rescan and recover LUN paths in a host after modifying SLM reporting nodes
Sep 29, 2016 | How To
ARTICLE NUMBER
000028339
DESCRIPTION
Applicable to clustered Data ONTAP 8.3 GA and above.
This article describes the procedure to rescan and recover LUN paths in different operating systems (OS) when moving a LUN, or a volume containing LUNs, to another HA pair within the same cluster. Modify the Selective LUN Map (SLM) reporting-nodes list before initiating the move (add-reporting-nodes) and after the move is completed (remove-reporting-nodes).
The procedure below ensures that active/optimized LUN paths are maintained by the host OS multipathing layer.
Note: For more information on SLM, see the Clustered Data ONTAP SAN Administration Guide.
PROCEDURE
A host rescan is required to recover the active/optimized LUN paths after add-reporting-nodes and to clean up stale LUN paths after remove-reporting-nodes in SLM.
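On the storage side, the reporting-nodes list is modified with the lun mapping add-reporting-nodes and lun mapping remove-reporting-nodes commands. A minimal sketch with hypothetical names (SVM vs1, LUN /vol/vol1/lun1, igroup ig1, destination volume vol1_dest); verify the exact options against your ONTAP release:
cluster1::> lun mapping add-reporting-nodes -vserver vs1 -path /vol/vol1/lun1 -igroup ig1 -destination-volume vol1_dest
(move the volume or LUN, then rescan the host as described below)
cluster1::> lun mapping remove-reporting-nodes -vserver vs1 -path /vol/vol1/lun1 -igroup ig1 -remote-nodes true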
VMware ESX/ESXi hosts:
A manual rescan should be performed after add-reporting-nodes and after remove-reporting-nodes, using the ESXi CLI or the vSphere/VI/Web Client.
For more information, see VMware KB 1003988: Performing a rescan of the storage on an ESX/ESXi host.
Microsoft Windows hosts:
Rescan after add-reporting-nodes and remove-reporting-nodes using the Windows GUI.
1. Open Computer Management (Local).
2. In the console tree, click Computer Management (Local) >> Storage >> Disk Management.
3. On the Disk Management page, click Action >> Rescan Disks. This rescans all the disks and updates any path changes.
Rescan after add-reporting-nodes and remove-reporting-nodes using the command line.
1. Open a Command Prompt and enter the following:
# diskpart
2. At the DISKPART> prompt, enter the following:
DISKPART> rescan
This rescans all the disks and updates any path changes. For more information, see Microsoft TechNet: Update disk information.
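The rescan can also be scripted; a minimal sketch (diskpart reads commands from a file via its documented /s switch):
echo rescan > rescan.txt
diskpart /s rescan.txt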
Linux hosts:
Rescan after add-reporting-nodes.
1. For RHEL 6.5, RHEL 7.0, and later, run the following command to update active/optimized paths after add-reporting-nodes:
# /usr/bin/rescan-scsi-bus.sh -a
2. For RHEL 5 and RHEL 6.4 (including previous updates), run the following command to update active/optimized paths after add-reporting-nodes:
# /usr/bin/rescan-scsi-bus.sh
Note: Nothing additional has to be done in the multipath layer.
Rescan after remove-reporting-nodes
1. Separate rescan steps are required for the SCSI layer and the multipathing layer in the Linux storage stack to clean up stale disk paths after remove-reporting-nodes in SLM.
2. Run the following command to remove stale LUN paths in the SCSI layer:
# /usr/bin/rescan-scsi-bus.sh -r
3. Next, run the following command to remove stale LUN paths in the multipath layer:
# multipath -r
Note: The /usr/bin/rescan-scsi-bus.sh script is available as part of the native sg3_utils package.
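As a combined sketch of the cleanup plus a quick verification (same commands as above):
# /usr/bin/rescan-scsi-bus.sh -r
# multipath -r
# multipath -ll
After the last command, the removed paths should no longer be listed (no paths left in a failed or faulty state).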
AIX hosts:
Rescan after add-reporting-nodes
1. Run the following command to identify the adapters used for NetApp storage:
# lsdev -Cc adapter | grep -i fcs
fcs0 Available 03-00 8Gb PCIe FC Blade Expansion Card (7710322577107601)
fcs1 Available 03-01 8Gb PCIe FC Blade Expansion Card (7710322577107601)
2. Now use the HBA names from the above output to rescan each adapter:
# cfgmgr -l <HBA handle>
Example:
# cfgmgr -l fcs0
# cfgmgr -l fcs1
Rescan after remove-reporting-nodes
1. Identify the <path id> of the moved disk for which the stale paths have to be removed:
# lspath -l <device handle> -F 'path_id name parent connection status'
For example:
# lspath -l hdisk1 -F 'path_id name parent connection status'
0 hdisk1 fscsi0 23f000a09830ca3a,0 Enabled
1 hdisk1 fscsi0 23f100a09830ca3a,0 Enabled
2 hdisk1 fscsi0 202800a09830ca3a,0 Enabled
3 hdisk1 fscsi0 202900a09830ca3a,0 Enabled
2. Run the following command to remove the stale paths:
# rmpath -i <path id> -d -l <device handle>
For example, if 0 and 1 are the paths to be removed from the above example, the commands would be:
# rmpath -i 0 -d -l hdisk1
# rmpath -i 1 -d -l hdisk1
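If a disk has many stale paths, the removal can be scripted. A hedged sketch (the awk pattern matches connections through the former reporting nodes' WWPNs from the example above, 23f0... and 23f1...; substitute your own WWPNs and review the matched list before deleting):
# for p in $(lspath -l hdisk1 -F 'path_id connection' | awk '$2 ~ /^23f[01]/ {print $1}'); do rmpath -i $p -d -l hdisk1; done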
Solaris hosts:
Rescan after add-reporting-nodes
1. For iSCSI LUNs, run the following command:
# devfsadm -i iscsi
2. For FC/FCoE LUNs, perform the following steps:
1. Run the following command to identify the OS device names of the HBA ports that are accessing NetApp LUNs:
# cfgadm -al -o show_FCP_dev | grep fc-fabric
c3 fc-fabric connected configured unknown
c4 fc-fabric connected configured unknown
2. Now run the following command for each <controller> to be rescanned:
# cfgadm -c configure <controller>
For example, from Step 1, c3 and c4 are the controller names, so the commands would be:
# cfgadm -c configure c3
# cfgadm -c configure c4
Rescan after remove-reporting-nodes
1. For iSCSI LUNs, run the following command:
# devfsadm -i iscsi
# devfsadm -Cv
2. For FC/FCoE LUNs, perform the following steps:
1. If the host is accessing NetApp LUNs through a single FC port, it is advisable to reboot the host. Run the following commands to reconfigure and reboot the host:
# touch /reconfigure
# init 6
2. If the host is accessing NetApp LUNs through 2 or more FC ports, run the following command to identify the OS device names of the HBA ports:
# cfgadm -al -o show_FCP_dev | grep fc-fabric
c3 fc-fabric connected configured unknown
c4 fc-fabric connected configured unknown
3. Run the following commands to reconfigure each port, one after the other:
# cfgadm -c unconfigure <controller>
# cfgadm -c configure <controller>
For example, from the above output, c3 and c4 are the controller names, so the commands would be similar to the following:
# cfgadm -c unconfigure c3
# cfgadm -c configure c3
# cfgadm -c unconfigure c4
# cfgadm -c configure c4
Note: The above step should be performed on only one port at a time.
4. Run the following command to clean up the devices:
# devfsadm -Cv
5. To clear MPxIO entries, an OS reboot is needed; this can be performed during a planned downtime. Run the following commands to reconfigure and reboot the host:
# touch /reconfigure
# init 6
6. Once the host is back up after the reboot, run the following command:
# devfsadm -Cv
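The per-port reconfigure can also be scripted; a sketch (assumes every fc-fabric attachment point listed by cfgadm accesses NetApp LUNs; each port is fully cycled before the next is touched, per the note above):
# for c in $(cfgadm -al -o show_FCP_dev | awk '/fc-fabric/ {print $1}'); do cfgadm -c unconfigure $c; cfgadm -c configure $c; done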
HP-UX hosts:
Rescan after add-reporting-nodes
1. Run the following commands to scan the I/O system:
# ioscan -fNC disk
# ioscan -fNC lunpath
# ioinit -i
# insf -e
Rescan after remove-reporting-nodes
1. Scan the I/O system to identify disks that have stale entries (the S/W State is shown as NO_HW):
# ioscan -fNkC disk
Class I H/W Path Driver S/W State H/W Type Description
=======================================================================
disk 24 64000/0xfa00/0x16 esdisk CLAIMED DEVICE NETAPP LUN C-Mode
disk 32 64000/0xfa00/0x22 esdisk CLAIMED DEVICE NETAPP LUN C-Mode
disk 35 64000/0xfa00/0x23 esdisk CLAIMED DEVICE NETAPP LUN C-Mode
disk 42 64000/0xfa00/0x24 esdisk NO_HW DEVICE NETAPP LUN C-Mode
2. Run the following command to remove the stale special device file entries:
# rmsf -H <hw_path>
For example, from the above output, the command would be:
# rmsf -H 64000/0xfa00/0x24
3. Scan the I/O system to identify LUN paths that have stale entries (the S/W State is shown as NO_HW):
# ioscan -fNkC lunpath
Class I H/W Path Driver S/W State H/W Type
==============================================================================
lunpath 61 0/6/0/0/0/0/4/0/0/0.0x200000a0981be096.0x4000000000000000 eslpt CLAIMED LUN_PATH
lunpath 77 0/6/0/0/0/0/4/0/0/0.0x200000a0981be096.0x4001000000000000 eslpt NO_HW LUN_PATH
lunpath 76 0/6/0/0/0/0/4/0/0/1.0x200100a0981be096.0x4000000000000000 eslpt CLAIMED LUN_PATH
lunpath 78 0/6/0/0/0/0/4/0/0/1.0x200100a0981be096.0x4001000000000000 eslpt NO_HW LUN_PATH
4. Delete each stale LUN path entry by running the following command:
# rmsf -H <lunpath>
For example, from the above output, the commands would be similar to the following:
# rmsf -H 0/6/0/0/0/0/4/0/0/1.0x200100a0981be096.0x4001000000000000
# rmsf -H 0/6/0/0/0/0/4/0/0/0.0x200000a0981be096.0x4001000000000000
5. Run the following command to again identify any remaining stale LUN path entries (the S/W State is shown as NO_HW):
# ioscan -fNC lunpath
Class I H/W Path Driver S/W State H/W Type Description
==============================================================================
lunpath 62 0/6/0/0/0/0/4/0/0/0.0x200200a0981be096.0x4000000000000000 eslpt CLAIMED LUN_PATH LUN path for disk24
lunpath 81 0/6/0/0/0/0/4/0/0/0.0x200200a0981be096.0x4001000000000000 eslpt NO_HW LUN_PATH LUN path for disk32
lunpath 75 0/6/0/0/0/0/4/0/0/1.0x200300a0981be096.0x4000000000000000 eslpt CLAIMED LUN_PATH LUN path for disk24
lunpath 83 0/6/0/0/0/0/4/0/0/1.0x200300a0981be096.0x4001000000000000 eslpt NO_HW LUN_PATH LUN path for disk32
6. Now run the following command to replace and validate the change of the LUN associated with a LUN path:
# scsimgr -f replace_wwid -H <lunpath>
For example, from the above output, the commands would be similar to the following:
# scsimgr -f replace_wwid -H 0/6/0/0/0/0/4/0/0/0.0x200200a0981be096.0x4001000000000000
# scsimgr -f replace_wwid -H 0/6/0/0/0/0/4/0/0/1.0x200300a0981be096.0x4001000000000000
7. Finally, run the following command, which is part of the HP-UX Host Utilities (HU) kit:
# /opt/NetApp/santools/bin/ntap_config_paths
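Where many LUN paths are stale, steps 3 and 4 can be combined; a sketch (field 3 of the ioscan output is the lunpath hardware path; confirm the column against your own output before deleting anything):
# ioscan -fNkC lunpath | grep NO_HW | awk '{print $3}' | while read hw; do rmsf -H $hw; done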
Notes on host-side LUN identification (Windows 2008)
A LUN is addressed on the host by a C-T-D tuple, plus a disk signature (a label, in unix/linux terms):
C (Controller) - the host FC HBA
T (Target) - the storage front-end (FE) port WWN
D (Disk) - the host LUN ID
Together, the C-T-D tuple and the signature identify a LUN from the host side.
The same model applies to iSCSI; the FC-to-iSCSI correspondences are:
HBA <-> iSCSI initiator (hardware or software)
Target WWN <-> iSCSI name (eui or iqn)
FE port <-> portal (IP address + TCP port; the default TCP port is 3260)
Name server <-> iSNS
With Huawei UltraPath, the host-to-LUN mapping can be listed with: upadm show vlun
See: http://support.huawei.com/enterprise/docinforeader.action?contentId=DOC1000018821&idPath=7919749|7941815|9519490|9858859|8576127
Hello all, I'm new to Hitachi and this forum. I have 2x AMS2300 and 1x HUS110.
My question revolves around mapping LUN IDs from the host side to the HDS storage via HSNM2.
I have an H-LUN and LUN ID that works for me from the storage view; however, how can I have the Windows admins confirm which host LUN maps to which storage LUN ID?
I'm comparing it to Symmetrix with Solutions Enabler commands, which show host-side mapping back to storage LUN IDs.
We have several Windows backup hosts utilizing HDS storage for backups; all LUNs are 4TB in size and some hosts have upwards of 8 LUNs. I simply need to understand the tools or commands needed to view the LUN-to-host-ID mapping from the host side.
I did search the forums here and saw lscfg for AIX, and also mention of the tools "Raid Manager" and then CCI. Are the latter two what I need for Windows? Would I have this already, or need to download and install it on the end host?
Hi,
You can download CCI from the support portal (http://portal.hds.com). It has a binary called raidscan that you can use to get hard disk to LDEV/LUN mapping.
If you are coming from a Sym background you might find the HDS tools more obscure and somewhat less functional. Solutions Enabler was the primary configuration interface for a long time and (IMO) is more mature. To get mapping information I recommend you use HCS, as the agents will give you this information in a nice GUI interface.
Cris
Cris, thank you for your reply. So CCI is different from the SNM2 CLI, I suspect/have learned? I have attempted to use that CLI but with little success for my needs. Would you compare CCI to a Solutions Enabler type of software that you install on the end host you need details on LUNs for?
I also see that Solutions Enabler has syminq with -hds and -hids switches. I will test these; however, has anyone had any success using these to provide host-side listings/mappings to Hitachi devices?
Yes, CCI is a separate tool and is HDS array agnostic (it works on all HDS arrays). It's also the oldest tool. The SNM2 CLI is fairly recent and specific to the modular arrays (HUS/AMS).
You don't need to install CCI on each host to get the mapping. You can install the HCS agent, which will give you host-to-LUN mapping. You should have received a license for HCS with your HUS. HCS is more like SMC.
As for the SE hds flags, these should work, since SE mainly uses SCSI inquiry commands and the HDS device types (DF600F and Open-V) are known.
CCI installation includes one binary which is pretty similar to syminq; it's called inqraid. After installation you can find it at c:\HORCM\usr\bin\inqraid. It's possible to copy just that inqraid file to any other Windows server and run it.
This gives you, nicely laid out, the drive letter, port, storage serial number and AMS volume number.
By running inqraid.exe -h you will get plenty of other options to try, like $Phys to see all physical devices, sorting, etc.
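For example, a hedged invocation (the -CLI output-formatting flag comes from the CCI documentation; columns may vary by CCI version):
C:\HORCM\usr\bin>inqraid $Phys -CLI
This lists every physical drive with its port, serial number and LDEV number.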
What I would do is copy that file to some tools share and use it everywhere in your environment. Then you don't need to install CCI or any other time-consuming software if you only need drive-to-volume mapping. Inqraid is also available for other platforms in the CCI installation package.
-Timo-
How to check the LUN-ID from host side
in General, Software
If you ever wondered how you can check the LUN ID from the host side and you don't have the option of installing CCI, then you are in the right place.
The easiest way to get the LUN ID is to use the inqraid command that is part of the Hitachi CCI (Command Control Interface) package. You don't even need to install CCI on the host; you can just go ahead and copy the executable to a folder and run it.
Sometimes, however, systems administrators are a bit scared to copy/install software on their machines, and going through change control might take a while.
The alternative is to "decode" the device instance path from Windows MPIO.
Here are the steps to read the device ID and how to decode it.
From Computer Management, navigate to Storage -> Disk Management, identify your disk and open its properties. Go to the last tab (Details) and scroll down the list until the property is "Device Instance Path". You will need the last 8 digits to decode the LUN ID.
In this particular case the last eight digits are 39373942.
Now, break the 8 digits into 4 groups of two. Each group is the ASCII code, in hex, of one character of the LUN ID: groups from 30 to 39 decode to the digits 0-9 (subtract 30), and groups from 41 to 46 decode to the letters A-F. Here 39 37 39 42 decodes to 9, 7, 9, B, so the LUN ID is 979B.
Of course, there is the far easier way of using inqraid to obtain that info as well, but the method above does not require anything to be installed on the O/S side.
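As a sketch, the decoding can be automated in a shell; bash's printf understands \xHH escapes, and 39373942 is the example value from above:
$ echo 39373942 | fold -w2 | while read b; do printf "\x$b"; done; echo
979B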
[Figure legend (figure not included): 1 - Disk array, 2 - SIM, 3 - SNMP, 4 - Remote Console, 5 - Error info, 6 - Public, 7 - SNMP manager, 8 - Open system host]
RAID Manager command devices
RAID Manager manages Business Copy (BC) and/or Continuous Access (CA) operations from
a server host. To use RAID Manager with BC or CA, you must use Command View or LUN
Configuration Manager to designate at least one LDEV as a command device. Refer to
the Command View or LUN Configuration Manager user guide for information about how to
designate a command device.
Installation procedures
1. Install and configure the disk array
The HP service representative performs the following tasks:
Assembling hardware and installing software
Loading the microcode updates
Installing the channel adapters (CHAs) and cabling
Installing and formatting devices
You perform the additional tasks below. If you do not have Command View or LUN
Configuration Manager, your HP service representative can perform these tasks for you.
Setting the System Option Modes
The HP representative sets the System Option Mode(s) based on the operating system and
software configuration of the host.
Configuring the Fibre Channel ports
Configure the disk array Fibre Channel ports by using Command View or the Fibre Parameter
window in LUN Configuration Manager. Select the settings for each port based on your storage
area network topology. Use switch zoning if you connect different types of hosts to the array
through the same switch.
Fibre Address
In fabric environments, the port addresses are assigned automatically. In arbitrated loop
environments, you set the port addresses by selecting a unique arbitrated loop physical
address (AL-PA) or loop ID for each port.
Fabric and Connection parameter settings
You can set each array port to FABRIC ON or OFF with connections of POINT-TO-POINT or
FC-AL as shown in the following table and figures. For detailed topology information, refer to
the HP StorageWorks SAN Design Reference Guide on the http://www.hp.com Web site.
Figure 1: Simple Point-to-Point Fabric Topology Example
1 - Server
2 - N Port
3 - F Port
4 - Fabric Switch
5 - F Port
6 - N Port
7 - Disk Array
Figure 2: Arbitrated Loop Fabric Topology Example
1 - Server
2 - NL Port
3 - FL Port
4 - Fabric Switch
5 - FL Port
6 - NL Port
7 - Disk Array
Fabric Parameter   Connection Parameter   Provides
ON                 FC-AL                  NL-port (SAN public arbitrated loop)
ON                 Point-to-Point         N-port (SAN fabric port)
OFF                FC-AL                  NL-port (private arbitrated loop; direct connect without a SAN)
OFF                Point-to-Point         Not supported
Setting the Host Mode for the disk array ports
The disk array ports have Host Modes that you must set depending on the host you use. After
the disk array is installed, use Command View (shown) or LUN Configuration Manager to set
the Host Mode for each port.
Figure 3: The host mode for AIX is 0F
The example shows that device hdisk0 is installed on bus 60 and has TID=5 and LUN=0.
Record the device file names for the new devices. You will use this information when changing the device parameters.
Use the lscfg command to find the array LDEV designation corresponding to an AIX disk device.
Example
# lscfg -vl hdisk3
In this example, the emulation type, LDEV number, CU number and array port designation should all be displayed for disk device hdisk3.
b. Configure disk array devices
Configure the disk array devices in much the same way you would configure any new disk on
the host. Creating scripts to configure all devices at once may save you considerable time.
Changing the device parameters
When the device files are created, the system sets the device parameters to the system
default values. You may need to change a few of those values for each new OPEN-x device:
read/write (R/W) timeout value
queue depth
queue type
The recommended queue depth settings may not provide the best I/O performance for your
system. You can adjust the queue depth setting to optimize the I/O performance of the disk
array.
Parameter Name       Type            Default Value   Required Value for Disk Array
Read/write time-out  SCSI            30              60
Queue depth          SCSI            1               2 (For LUSE devices use 2 for each LUN. For example, if one LUSE device contains 8 LUNs, use 2 x 8 = 16 for the queue depth.)
Queue type           SCSI            None            Simple
Read/write time-out  Fibre Channel   30              60
Queue depth          Fibre Channel   -               Depends on the microcode version:
  Before 52-38-xx: use 2 if exclusively OPEN-x volumes are mapped to the SCSI/FC port; use 8 if exclusively LUSE volumes are mapped to the SCSI/FC port; use 2 if an intermix of LUSE and OPEN-x volumes is mapped to the SCSI/FC port; use 8 if an intermix of LUSE and OPEN-x volumes is mapped for dummy LU (I-7135-Emu).
  52-40-xx to 52-44-xx: number of volumes x queue depth <= 256 AND queue depth <= 8.
  52-45-xx or later: number of volumes x queue depth <= 256 AND queue depth <= 32.
Queue type           Fibre Channel   None            Simple
To show the device parameters using the AIX command line:
At the command line prompt, enter lsattr -E -l hdiskx, where hdiskx is the device file name.
Example
# lsattr -E -l hdisk2
To change the device parameters using the AIX command line:
Change the parameters as follows:
To change the R/W timeout parameter, enter: chdev -l hdiskx -a rw_timeout='60'
To change the queue depth parameter, enter: chdev -l hdiskx -a queue_depth='x'
where x is a value from the above table.
To change the queue type parameter, enter: chdev -l hdiskx -a q_type='simple'
Example: This example changes the queue depth for device hdisk3:
# chdev -l hdisk3 -a queue_depth='2'
Verify that the parameters for all devices were successfully changed.
Example
# lsattr -E -l hdisk3
Repeat these steps for each OPEN-x device on the disk array.
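To apply the same parameters to several devices at once, a sketch loop (hdisk2 through hdisk4 are placeholders; substitute only the disk array's own devices):
# for d in hdisk2 hdisk3 hdisk4; do chdev -l $d -a rw_timeout='60' -a queue_depth='2' -a q_type='simple'; done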
NOTE: The lsattr command also shows other useful information, such as the LUN ID of the mapped LDEV, the worldwide name of the disk array FC port, and the N-Port ID. Another useful command, for determining the slot position and port worldwide name of the HBA, is lscfg -vl hdiskx.
To change the device parameters using SMIT:
Start SMIT. (Optional) For an ASCII session, use the smit -C command.
Example: # smit
The System Management screen appears.
Select Devices.
Example
Figure 6: The System Management screen
Enter the correct values for the read/write timeout value, queue depth, and
queue type parameters. Press Enter to complete the parameter changes.
Repeat these steps for each OPEN-x device on the disk array.
Assigning the new devices to volume groups
Assign the new devices to volume groups using the AIX system's Logical Volume Manager (accessed from within SMIT). This operation is not required when the volumes are used as raw devices.
To assign a device to a volume group:
Start SMIT. (Optional) For an ASCII session, use the smit -C command.
Example: # smit
The System Management screen appears.
Select System Storage Management (Physical & Logical Storage).
Example
Figure 8: The System Management screen
Verify that the file system is usable by performing some basic operations (for
example, file creation, copying, and deletion) on each logical device.
Use the df command to verify that the file systems have successfully
automounted after a reboot. Any file systems that were not automounted can be set to
automount using the SMIT Change a Journaled File System screen. If you are using HACMP
or HAGEO, do not set the file systems to automount.
NOTE: HACMP and HAGEO do not provide a complete disaster recovery or backup solution,
and are not a replacement for standard disaster recovery planning and backup/recovery
methodology.
How do I correlate /dev/sd devices to the hardware they represent?
A drive is beginning to fail and I only know the device by its /dev/sdb device file designation. What are the ways that I can use to correlate that device file to an actual hardware device, to know which drive to physically replace?
Bonus: What if I don't have /dev/disk/ and its sub-directories on this installation? (Which, sadly, I don't)
linux hard-disk block-device
asked Jun 8 '12 at 21:02 by Wesley, edited Jun 9 '12 at 22:49 by Gilles
No RAID controller? - ewwhite Jun 8 '12 at 21:44
@ewwhite Not for the purposes of this question, no. =) - Wesley Jun 8 '12 at 21:51
btw. if you don't have the /dev/disk/ directory you probably don't have udev. - Ulrich Dangel Jun 9 '12 at 3:32
@UlrichDangel Very observant because... I don't have udev. =( - Wesley Jun 9 '12 at 3:43
7 Answers
You can look in /sys/block:
-bash-3.2$ ls -ld /sys/block/sd*/device
lrwxrwxrwx 1 root root 0 Jun 8 21:09 /sys/block/sda/device ->
../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdb/device ->
../../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdc/device ->
../../devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdd/device ->
../../devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0
Or if you don't have /sys, you can look at /proc/scsi/scsi:
-bash-3.2$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: PepperC Model: Virtual Disc 1 Rev: 0.01
Type: CD-ROM ANSI SCSI revision: 03
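A compact sketch of the same lookup for a single device, using readlink to resolve the symlink shown above:
-bash-3.2$ readlink -f /sys/block/sdb/device
/sys/devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0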
answered Jun 8 '12 at 21:13 by Handyman5
Awesome, I have /sys/ (as well as /proc/). Lots of great info there. Still consuming it. - Wesley Jun 8 '12 at 21:24
Also, take a look at /dev/disk/by-path (assuming your udev supports it). Of course, the OP's doesn't, so... - derobert Jun 8 '12 at 21:39
hdparm -i /dev/sdb
That should give you the model and serial number of the drive.
answered Jun 8 '12 at 21:13 by Martin Barry
This would work for most situations, I believe. However, for some reason the controller in this server is sketchy. Performing that command earns me this: HDIO_GET_IDENTITY failed: Invalid argument - Wesley Jun 8 '12 at 21:19
smartctl -i is worth trying, too. Works on SCSI drives, whereas hdparm often won't. - derobert Jun 8 '12 at 21:41
Does not work with USB drives - Mads Skjern Jul 30 '15 at 15:17
As the inimitable Gilles mentioned in this answer of his, if your kernel uses udev you can use the udevadm command to interrogate a device:
udevadm info -n /dev/sda -a
(Sadly, in some cases [doubly sad is that it's true in this case for me] udev is not used and/or udevadm is not available.)
answered Jun 8 '12 at 21:28 by Wesley
If you can see the LED on the drive, or listen to the disk noise, you can run
sudo cat /dev/sdb >/dev/null
and see which drive suddenly becomes continuously active. Or, if you're going by noise,
sudo find /mount/point >/dev/null
which will make the heads move more (it may be better not to do it on the failing disk, and instead use a process of elimination with the other disks).
answered Jun 8 '12 at 21:12 by jippie, edited Jun 9 '12 at 22:53 by Gilles
I had considered how to get the lights to go blinky-blinky, so this is an answer to that curiosity of mine. =) - Wesley Jun 8 '12 at 21:20
Some drives have an extra LED for this, but they're usually only found in enterprise grade drives (read: bizarrely expensive at relatively low capacity). Don't know how to work those LEDs, but the dd trick usually works well enough. - jippie Jun 8 '12 at 21:24
@WesleyDavid Even if there are no LEDs, listening to the noise can be a last recourse. There's no need to use dd here (nor in most circumstances); cat or any other program that reads from a file will do. - Gilles Jun 9 '12 at 22:55
Assuming this is Linux, the most obvious thing is to check dmesg for where the kernel first initializes the device. It logs the drive model.
answered Jun 8 '12 at 21:11 by Julian Yon
Check unix.stackexchange.com/questions/39886/ for more details. - jippie Jun 8 '12 at 21:16
Yes, it's Linux. Specifically Debian 4. I checked dmesg, but didn't see any mention of a drive model, oddly. Perhaps I'm misreading it. - Wesley Jun 8 '12 at 21:24
I have 4 methods. The first one is the easiest:
dmesg | egrep "sd[a-z]"
For the others, I'm not sure if they need /dev/disk except for this one:
ls -lF /dev/disk/by-uuid
The others:
blkid -o list -c /dev/null
And the obvious:
fdisk -l
answered Jun 30 '12 at 21:59 by Samuel Duclos
Here are some ways I know to find the SCSI device name:
dmesg | egrep "sd[a-z]"
lsblk --scsi (from package util-linux >= v2.22)
lshw -C disk
ls -ld /sys/block/sd*/device
answered Feb 27 at 10:05 by SamK
We've all been there: you have a Windows virtual machine with several disks of the same size and you don't know which Windows disk is in fact which storage LUN.
First of all, disks coming from SCSI controller 0 show as coming from location 160 in Windows disk management.
Disks connected to SCSI controller 1 appear as coming from location 161, and so on. After that, each controller increments by 32.
When RDMs are used, the bus in Windows will always be 0. When you add a SCSI ID to an RDM in Virtual Center, you add an x:y number as the identifier for that disk. x represents the SCSI controller and y is the LUN ID. In Windows this translates to a location and target. The LUN ID as seen by Windows disk manager is not used.
So in my screenshot from disk manager as shown above, disk9 has the information Location 193, bus 0, target 3, LUN 0. Translating this into the vCenter settings for this VM, you look up disk 2:3 in this VM's settings and find:
Note that the disk numbering in the VM settings is different from the Windows disk numbers. Windows starts at 0, while in vCenter disk numbers start at 1, and I've seen Windows environments where the disk numbers changed after reboots, so don't use disk numbers as a reference!
In the top right red-encircled number you'll find the naa number. This usually starts with 6006, at least for EMC storage.
By using the command line tool naviseccli you can now check the naa number against the LUNs presented to the ESX host where the VM resides. It's a lot of work, but you might want to consider adding this information to your documentation and updating it every time disk changes are performed.
As you can see, the naa number I found using the CLI command matches the RDM I found in the VM settings. Note that in VMware some lead-in numbers appear (in this case vml.02001d0000) as well as some lead-out numbers (anything after e311).
This is how I'd track a Windows disk all the way back to its storage array LUN.
Conclusion
Documentation! Make sure you add the naa numbers of RDMs to your sheet (or whatever you use to document all the settings and configurations). Even better: for every disk you add in the VM's settings, write down which LUN it represents.
For copy-based transitions, run the following command from the host where the 7MTT is installed:
transition export lunmap -s sub-project-name/session-name -o file_path
For example:
transition export lunmap -s SanWorkLoad -o c:/Libraries/Documents/7-to-C-LUN-MAPPING.csv
For copy-free transitions, run the following command from the system where the 7MTT is installed:
transition cft export lunmap -p project-name -s svm-name -o output-file
Note: You must run this command for each of your Storage Virtual Machines (SVMs).
For example:
transition cft export lunmap -p SANWorkLoad -s svml -o c:/Libraries/Documents/7-to-C-LUN-MAPPING-svml.csv
2. If the Windows host is SAN-booted and the boot LUN was transitioned, power on the host.
3. Update the FC BIOS to enable the system to boot from the LUN on the clustered Data ONTAP controller.
4. On the Windows host, rescan the disks from the Disk Manager.
5. Obtain the LUN serial numbers, LUN IDs, and corresponding Windows physical disk numbers of the LUNs mapped to
the host.
For systems running Data ONTAP DSM: Use the Data ONTAP DSM Management Extension Snap-In or the get-sandisk Windows PowerShell cmdlet.
For systems running MSDSM: Use the Inventory Collect Tool (ICT).
The LUN ID, LUN serial number, and corresponding physical disk number are captured under the SAN Host LUNs tab.
6. Use the LUN serial numbers, LUN IDs, and corresponding Windows physical disk numbers of the LUNs along with the
LUN map output and the data collected in the pretransition state, to determine whether the LUNs have transitioned
successfully.
7. Note whether the physical disk numbers of the transitioned LUNs have changed.
8. Bring the transitioned disks online:
Use Windows Disk Manager to bring online disks that are not part of Cluster Failover.
Use Failover Cluster Manager to bring online disks that are part of Cluster Failover.
9. If the host you are transitioning is running Windows Server 2003 and you have migrated the quorum device, start the
cluster services on all of the cluster nodes.
10. If Hyper-V is enabled on the host and pass-through devices are configured to the VMs, modify the settings from Hyper-V Manager.
The physical disk number of the LUN corresponding to the pass-through device might have changed as a result of the
transition.
This article shows how to identify storage drives on a Windows virtual server; the process is slightly different for Linux servers.
Your additional storage drives are listed on the overview page for your Virtual Server in your Fasthosts control panel. Each
additional drive is numbered. Typically your first storage drive will be 0 and each additional drive added will increment by one.
The number shown in the control panel corresponds to the Logical Unit Number (LUN) of the disk in your server. If you need to
identify which disk in your server relates to one of your storage drives, you need to connect to your server, open the Disk
Management utility, and find the disk with the corresponding LUN ID.
You can find the LUN ID for each of your additional drives within the Computer Management console on your server.
Step 1
Connect to your server.
Step 2
In the Start menu, right click on Computer, and then select Manage from the pull out menu.
Step 3
In the menu on the left-hand side of the Window, expand the Storage icon, and then select Disk Management.
Step 4
The disks attached to your server are listed. The first disk, Disk 0, is the primary disk in your server. Any additional devices that
you have added in your Fasthosts control panel will be listed below the primary disk.
While the disks are numbered in your server, and in your Fasthosts control panel, the numbers may not correspond. To be sure
that you have identified the correct disk within the Disk Management console you must confirm that the LUN ID matches the
drive number in your control panel.
Step 5
Right click on the drive you would like to check, and select Properties from the popup menu.
Step 6
The disk properties window displays the LUN ID in the Location information on the General tab. The number given here directly
corresponds to the storage drive number given in your control panel, but may not be the same as the disk number listed in the
Disk Management screen.
In this example, the LUN number is 0. This disk corresponds to the disk listed in the control panel as Storage Drive 0.
The primary disk in your Virtual Server will usually have a LUN number of 0. However, the first additional storage drive added will usually also have a LUN number of 0. Take care not to incorrectly identify the primary disk as an additional drive.
I didn't find a PowerShell command to view the LUN IDs. You can create a PowerShell script that uses Diskpart to view all the disks and the corresponding LUN IDs.
1.
Lee
December 27, 2013 at 21:08
I'm not at work now so I don't have access to my script library. I think the WMI class Win32_DiskDrive has this attribute as well. So something like
Get-WmiObject Win32_DiskDrive | Select Name, SCSIBus, SCSILun
will work.
2.
Nick M
January 13, 2014 at 02:31
So I dug into the WMI and wanted to post what it actually was, since that Win32_DiskDrive class is pretty big and has a lot of stuff. Also, since the above poster wasn't exactly right on the select statement, I wanted to help.
If you want to get a little more detailed information, to ensure you're looking at the right storage provider as well, use this select afterwards.
/dev/sg2 0 0 2 0 0 /dev/sdc
From the outputs of the 2 commands above we can determine that sg2 (the SAN disk) is actually the /dev/sdc device.
Method 3
If multipath (device-mapper) is being used, the command below can be used:
# multipath -v4 -ll
ESX command line: use the command line to obtain multipath information when performing troubleshooting procedures.
VMware Infrastructure/vSphere Client: use this option when you are performing system maintenance.
To obtain LUN multipathing information from the ESXi host command line:
1. Open a console to the ESXi host.
2. Type esxcli storage core path list to get detailed information regarding the paths.
For example:
fc.5001438005685fb7:5001438005685fb6-fc.5006048c536915af:5006048c536915af-
naa.60060480000290301014533030303130
UID: fc.5001438005685fb7:5001438005685fb6-fc.5006048c536915af:5006048c536915af-
naa.60060480000290301014533030303130
Runtime Name: vmhba1:C0:T0:L0
Device: naa.60060480000290301014533030303130
Device Display Name: EMC Fibre Channel Disk (naa.60060480000290301014533030303130)
Adapter: vmhba1
Channel: 0
Target: 0
LUN: 0
Plugin: NMP
State: active
Transport: fc
Adapter Identifier: fc.5001438005685fb7:5001438005685fb6
Target Identifier: fc.5006048c536915af:5006048c536915af
Adapter Transport Details: WWNN: xx:xx:xx:xx:xx:xx:xx:xx WWPN: xx:xx:xx:xx:xx:xx:xx:xx
Target Transport Details: WWNN: 50:06:04:8c:53:69:15:af WWPN: 50:06:04:8c:53:69:15:af
3. Type esxcli storage core path list -d naaID to list the detailed information of the corresponding paths for a
specific device.
4. The command esxcli storage nmp device list lists the LUN multipathing information:
naa.60060480000290301014533030303130
Device Display Name: EMC Fibre Channel Disk (naa.60060480000290301014533030303130)
Storage Array Type: VMW_SATP_SYMM
Storage Array Type Device Config: SATP VMW_SATP_SYMM does not support device
configuration.
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config:
{preferred=vmhba0:C0:T1:L0;current=vmhba0:C0:T1:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba0:C0:T1:L0
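To cut the output down to a quick path-to-device summary, a sketch using the same commands (the grep pattern is illustrative):
# esxcli storage core path list | grep -E "Runtime Name|Device:"
# esxcli storage nmp device list -d naa.60060480000290301014533030303130
The -d option limits the listing to a single device.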
Notes:
For information on multipathing and path selection options, see Multipathing policies in ESX/ESXi 4.x and ESXi 5.x (1011340).
If a Connect to local host failed: Connection failure message is received, the hostd management agent
process may not be running, which is required to use esxcli. In this situation, you can use localcli instead of esxcli.
For more information, see the 5.5 Command Line Reference Guide.
vSphere Client
To obtain multipath settings for your storage in vSphere Client:
4. Click Properties.
6. Click Extent Device > Manage Paths and obtain the paths in the Manage Path dialog.
For information on multipathing options, see Multipathing policies in ESXi 5.x and ESXi/ESX 4.x (1011340).
ESX 4.x
Command line
To obtain LUN multipathing information from the ESX/ESXi host command line:
For example:
fc.2000001b32865b73:2100001b32865b73-fc.50060160c6e018eb:5006016646e018eb-
naa.6006016095101200d2ca9f57c8c2de11
Runtime Name: vmhba3:C0:T1:L0
Device: naa.6006016095101200d2ca9f57c8c2de11
Device Display Name: DGC Fibre Channel Disk (naa.6006016095101200d2ca9f57c8c2de11)
Adapter: vmhba3 Channel: 0 Target: 1 LUN: 0
Adapter Identifier: fc.20000000c98f3436:10000000c98f3436
Target Identifier: fc.50060160c6e018eb:5006016646e018eb
Plugin: NMP
State: active
Transport: fc
Adapter Transport Details: WWNN: 20:00:00:1b:32:86:5b:73 WWPN: 21:00:00:1b:32:86:5b:73
Target Transport Details: WWNN: 50:06:01:60:b0:20:f2:d9 WWPN: 50:06:01:60:b0:20:f2:d9
4. The command esxcli nmp device list lists the LUN multipathing information:
naa.6006016010202a0080b3b8a4cc56e011
Device Display Name: DGC Fibre Channel Disk (naa.6006016010202a0080b3b8a4cc56e011)
Storage Array Type: VMW_SATP_ALUA_CX
Storage Array Type Device Config: {navireg=on, ipfilter=on}
{implicit_support=on;explicit_support=on; explicit_allow=on;alua_followover=on;
{TPG_id=2,TPG_state=ANO}{TPG_id=1,TPG_state=AO}}
Path Selection Policy: VMW_PSP_FIXED_AP
Path Selection Policy Device Config:
{preferred=vmhba3:C0:T1:L0;current=vmhba3:C0:T1:L0}
Working Paths: vmhba3:C0:T1:L0
The Path Selection Plug-in (PSP) policy is what the ESX host uses when it determines which path to use in the event of a failover. Supported PSP options are:
Note: For information on multipathing and path selection options, see Multipathing policies in ESXi 5.x and ESXi/ESX 4.x
(1011340).
vSphere Client
To obtain multipath settings for your storage in vSphere Client:
2. Click Storage.
4. Click Properties.
6. Click Extent Device > Manage Paths and obtain the paths in the Manage Path dialog.
For information on multipathing options, see Multipathing policies in ESXi 5.x and ESXi/ESX 4.x (1011340).
ESX 3.x
Command line
To obtain LUN multipathing information from the ESX host command line:
Disk vmhba2:1:4 /dev/sdh (30720MB) has 2 paths and policy of Most Recently Used
Disk vmhba2:1:1 /dev/sde (61440MB) has 2 paths and policy of Most Recently Used
As there are no descriptions given, here is an analysis of the information provided for the first LUN:
o vmhba2:1:4 - This is the canonical device name the ESX host uses to refer to the LUN.
Note: When there are multiple paths to a LUN, the canonical name is the first path that was detected for this LUN.
o /dev/sdh - This is the associated Linux device handle for the LUN. You must use this reference when using utilities like fdisk.
o Most Recently Used - This is the policy the ESX host uses when it determines which path to use in the event of a failover. The choices are:
Most Recently Used - The path used by a LUN is not altered unless an event (user, ESX host, or array initiated) instructs the path to change. If the path changed because of a service interruption along the original path, the path does not fail back when service is restored. This policy is used for Active/Passive arrays and many pseudo active/active arrays.
Fixed - The path used by a LUN is always the one marked as preferred, unless that path is unavailable. As soon as the path becomes available again, the preferred path becomes the active path again. This policy is used for Active/Active arrays. An Active/Passive array should never be set to Fixed unless specifically instructed to do so. This can lead to path thrashing, performance degradation and virtual machine instability.
Round Robin - This is experimentally supported in ESX 3.x. It is fully supported in ESX 4.x.
Note: See the Additional Information section for references to the arrays and the policies they use.
o FC - The LUN disk type. There are three possible values for LUN disk type.
o 10:3.0 - This is the PCI slot identifier, which indicates the physical bus location this HBA is plugged into.
o 210000e08b89a99b - The HBA World Wide Port Names (WWPN) are the hardware addresses (much like the MAC address on a network adapter) of the HBAs.
o 5006016930221fdd - The storage processor port World Wide Port Names (WWPN) are the hardware addresses of the ports on the storage processors of the array.
o vmhba2:1:4 - This is the true name for this path. In this example, there are two possible paths to the LUN (vmhba2:1:4 and vmhba2:3:4).
o On active preferred - The path status contains the status of the path. There are six attributes that comprise the status:
On: This path is active and able to process I/O. When queried, it returns a status of READY.
Dead: This path is no longer available for processing I/O. This can be caused by a physical medium error, or switch or array misconfiguration.
Standby: This path is inactive and cannot process I/O. When queried, it returns a status of NOT_READY.
Active: This path is processing I/O for the ESX Server host.
Preferred: This is the path that is preferred to be active. This attribute is ignored when the policy is set to Most Recently Used (mru).
3. VI Client
o Click Storage.
o Click Properties.
From this example, you can see that the canonical name is vmhba2:1:0 and the true paths are vmhba2:1:0 and vmhba2:3:0.
The active path is vmhba2:1:0 and the policy is Most Recently Used.
Additional Information
For more information, see the documentation for your version of ESX and consult the Storage/SAN Compatibility Guide.
For more information on ESXi 5.5, see the VMware vSphere 5.5 Documentation Center.
To be alerted when this article is updated, click Subscribe to Document in the Actions box.
I have a situation where there are 3 disks -> /dev/sdj, /dev/sdk, /dev/sdl -> all 3 of the same size, in a RHEL system.
1. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?
Our VM is running RHEL 6. I notice that /dev/sdb, /dev/sdc and /dev/sdd are mapped to Hard disk 1, Hard disk 2 and Hard disk 3. You can correlate this in the VM's Edit Settings.
NOTE: While removing the hard disk from the VM, make sure that you just remove the .vmdk from the VM and do not select the option to delete it from disk. This way you stay on the safer side, and if everything works out as expected you can manually delete the .vmdk file from the datastore (if you wish).
3. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?
It's easy to identify the mapping between the device file (/dev/sdX) and the VMware virtual hard disk.
Right-click your virtual machine in vCenter (or the vSphere Client), and click "Edit Settings".
You will see your virtual hard disks as above.
On the right pane, the SCSI address shows up like "SCSI(0:0)". The numbers in SCSI(X:X) map to the Linux device file, such as:
SCSI(0:0) -> /dev/sda
SCSI(0:1) -> /dev/sdb
So I guess /dev/sdj, which you wish to delete, is SCSI(0:10). Note that the virtual hard disk name "Hard disk X" does not necessarily correspond to /dev/sdX.
This mapping is based on the Linux device naming mechanism, so if you have customized this configuration, say using /etc/udev rules, you should check the Red Hat documentation before you delete your vdisk.
Best,
MAC
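A hedged way to double-check this mapping from inside the guest (assumes the lsscsi package is installed; the target:lun part of the [H:C:T:L] tuple lines up with the SCSI(X:Y) shown in the client; output abridged):
# lsscsi
[0:0:0:0] disk VMware Virtual disk /dev/sda
[0:0:1:0] disk VMware Virtual disk /dev/sdb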
4. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?
Next, follow the above suggestion from macvirtual to find out the mapping of the disks to the VMDKs. Now you are sure and safe to know which is which.
If there were a Linux method to sequentially map the host controllers, this would make it more reliable.
Ideas anyone?
eg:
Update: The following link describes the problem, and we believe the problem is as stated: the pvSCSI driver just does not pass through info such as the WWN for the controller.
vmware esx - How does Linux determine the SCSI address of a disk? - Server Fault
6. Re: How to find the corresponding vmdk of the /dev/sd* disk added in
linux system?
Sample output: